ID: 2309.05352

Neural Discovery of Permutation Subgroups

September 11, 2023

Similar papers (page 3)

Why does Deep Learning work? - A perspective from Group Theory

December 20, 2014

87% Match
Arnab Paul, Suresh Venkatasubramanian
Machine Learning
Neural and Evolutionary Computing

Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pre-training: first search for a good generative model for the input samples, then repeat the process one layer at a time. We show deeper implications ...

LieGG: Studying Learned Lie Group Generators

October 9, 2022

87% Match
Artem Moskalev, Anna Sepliarskaia, ..., Arnold Smeulders
Machine Learning

Symmetries built into a neural network have proved beneficial for a wide range of tasks, since they spare the data otherwise needed to learn them. We depart from the position that when symmetries are not built into a model a priori, it is advantageous for robust networks to learn symmetries directly from the data to fit a task function. In this paper, we present a method to extract symmetries learned by a neural network and to evaluate the degree to which a network is invariant to them...
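
The core check is easy to sketch: for a one-parameter group x ↦ exp(tA)x with generator A, a network f is infinitesimally invariant when ∇f(x)·(Ax) vanishes. Below is a minimal, illustrative version of that test (not the paper's LieGG implementation; the function names and the toy target are ours):

```python
# Minimal sketch: measure how invariant a network f is to a candidate
# Lie group generator A.  Invariance to x -> exp(tA) x means the
# directional derivative grad_f(x) . (A x) is zero everywhere.
import torch

def generator_residual(f, A, xs):
    """Mean |grad f(x) . (A x)| over a batch xs; ~0 means f is invariant."""
    xs = xs.clone().requires_grad_(True)
    y = f(xs).sum()
    (grad,) = torch.autograd.grad(y, xs)
    return (grad * (xs @ A.T)).sum(dim=1).abs().mean()

# Toy check: f(x) = |x|^2 is rotation-invariant but not scale-invariant.
f = lambda x: (x ** 2).sum(dim=1)
A_rot = torch.tensor([[0.0, -1.0], [1.0, 0.0]])   # so(2) generator
A_scale = torch.eye(2)                            # scaling generator
xs = torch.randn(1024, 2)
print(generator_residual(f, A_rot, xs))    # ~0: rotation symmetry holds
print(generator_residual(f, A_scale, xs))  # > 0: no scale symmetry
```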

On Learning Sets of Symmetric Elements

February 20, 2020

86% Match
Haggai Maron, Or Litany, ..., Ethan Fetaya
Machine Learning

Learning from unordered sets is a fundamental learning setup, recently attracting increasing attention. Research in this area has focused on the case where elements of the set are represented by feature vectors, and far less emphasis has been given to the common case where set elements themselves adhere to their own symmetries. That case is relevant to numerous applications, from deblurring image bursts to multi-view 3D shape recognition and reconstruction. In this paper, we ...
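
The architectural idea can be sketched in a few lines: apply a shared symmetry-preserving layer to each set element (a Siamese branch), and add a second such layer applied to an aggregation of the set, so the output respects both the per-element symmetry and permutations of the set. A minimal PyTorch sketch with illustrative layer choices, not the authors' exact architecture:

```python
# Minimal sketch of a set-equivariant layer for sets of images:
# a per-element conv (shared across elements) plus a conv on the
# set-sum, so elements interact while permutation equivariance holds.
import torch
import torch.nn as nn

class DSSConvLayer(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.elem = nn.Conv2d(c_in, c_out, 3, padding=1)  # Siamese part
        self.agg = nn.Conv2d(c_in, c_out, 3, padding=1)   # interaction part

    def forward(self, x):                  # x: (batch, set, C, H, W)
        b, n, c, h, w = x.shape
        per_elem = self.elem(x.flatten(0, 1)).view(b, n, -1, h, w)
        pooled = self.agg(x.sum(dim=1))    # sum is permutation-invariant
        return torch.relu(per_elem + pooled.unsqueeze(1))

burst = torch.randn(2, 5, 3, 32, 32)       # e.g. a burst of 5 RGB frames
print(DSSConvLayer(3, 8)(burst).shape)     # torch.Size([2, 5, 8, 32, 32])
```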

Learning Stable Group Invariant Representations with Convolutional Networks

January 16, 2013

86% Match
Joan Bruna, Arthur Szlam, Yann LeCun
Artificial Intelligence
Numerical Analysis

Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring stability to additive and geometric perturbations of the input. Whereas physical transformation groups are ubiquitous in image and audio applications, they do ...
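
The recipe the abstract alludes to can be illustrated with a finite group: compute filter responses over the orbit of the input under the group and pool across the group, which yields an invariant feature. A toy sketch with the four-fold rotation group C4 (our own example, not the paper's model):

```python
# Minimal sketch: correlate a filter over the C4 orbit of an image and
# pool over the group.  Rotating the input only permutes the four
# responses, so the pooled feature is invariant to 90-degree turns.
import torch

def c4_invariant_features(image, filt):
    responses = []
    for k in range(4):                     # C4 = {0, 90, 180, 270 degrees}
        rot = torch.rot90(image, k, dims=(-2, -1))
        responses.append(
            torch.nn.functional.conv2d(rot, filt).amax(dim=(-2, -1)))
    return torch.stack(responses).amax(dim=0)   # pool over the group

img = torch.randn(1, 1, 16, 16)
filt = torch.randn(1, 1, 3, 3)
f1 = c4_invariant_features(img, filt)
f2 = c4_invariant_features(torch.rot90(img, 1, dims=(-2, -1)), filt)
print(torch.allclose(f1, f2))              # True: invariant representation
```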

Identifying the Group-Theoretic Structure of Machine-Learned Symmetries

September 14, 2023

86% Match
Roy T. Forestano, Konstantin T. Matchev, Katia Matcheva, Alexander Roman, ..., Sarunas Verner
Machine Learning
Group Theory
Mathematical Physics

Deep learning was recently successfully used in deriving symmetry transformations that preserve important physics quantities. Being completely agnostic, these techniques postpone the identification of the discovered symmetries to a later stage. In this letter we propose methods for examining and identifying the group-theoretic structure of such machine-learned symmetries. We design loss functions which probe the subalgebra structure either during the deep learning stage of sy...
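
One concrete version of such a probe is a closure test: a set of learned generators spans a Lie subalgebra only if every commutator [A_i, A_j] stays in their span. A minimal numerical sketch of that check (illustrative, not the authors' loss functions):

```python
# Minimal sketch: measure how far the commutators of a set of candidate
# generators fall outside their linear span; zero means the set closes
# into a Lie subalgebra.
import numpy as np

def closure_residual(gens):
    basis = np.stack([g.ravel() for g in gens])        # (k, n*n)
    proj = basis.T @ np.linalg.pinv(basis.T)           # projector onto span
    worst = 0.0
    for i in range(len(gens)):
        for j in range(i + 1, len(gens)):
            c = (gens[i] @ gens[j] - gens[j] @ gens[i]).ravel()
            worst = max(worst, np.linalg.norm(c - proj @ c))
    return worst

# so(3) closes: [L_i, L_j] = eps_ijk L_k
Lx = np.array([[0., 0, 0], [0, 0, -1], [0, 1, 0]])
Ly = np.array([[0., 0, 1], [0, 0, 0], [-1, 0, 0]])
Lz = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 0]])
print(closure_residual([Lx, Ly, Lz]))   # ~0: a genuine subalgebra
print(closure_residual([Lx, Ly]))       # > 0: [Lx, Ly] = Lz is missing
```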

Unsupervised Learning of Group Invariant and Equivariant Representations

February 15, 2022

86% Match
Robin Winter, Marco Bertolini, Tuan Le, ..., Djork-Arné Clevert
Machine Learning

Equivariant neural networks, whose hidden features transform according to representations of a group G acting on the data, exhibit improved training efficiency and generalisation performance. In this work, we extend group invariant and equivariant representation learning to the field of unsupervised deep learning. We propose a general learning strategy based on an encoder-decoder framework in which the latent representation is separated into an invariant term and an equivari...
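
The framework can be sketched as an autoencoder whose latent code is split into an invariant part and a group element that transforms the decoder output before reconstruction. A minimal, untrained sketch with G = SO(2) acting on 2D point sets (the sizes and the choice of group are our illustrative assumptions, not the paper's):

```python
# Minimal sketch: the encoder outputs an invariant code plus a rotation
# angle; the decoder produces a canonical shape that is then rotated by
# the predicted group element.
import torch
import torch.nn as nn

class SplitAutoencoder(nn.Module):
    def __init__(self, n_pts=16, d_inv=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(2 * n_pts, 64),
                                 nn.ReLU(), nn.Linear(64, d_inv + 1))
        self.dec = nn.Sequential(nn.Linear(d_inv, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * n_pts))
        self.n_pts = n_pts

    def forward(self, x):                      # x: (batch, n_pts, 2)
        h = self.enc(x)
        z_inv, theta = h[:, :-1], h[:, -1]     # invariant code + group part
        canon = self.dec(z_inv).view(-1, self.n_pts, 2)
        c, s = torch.cos(theta), torch.sin(theta)
        R = torch.stack([torch.stack([c, -s], -1),
                         torch.stack([s, c], -1)], -2)   # (batch, 2, 2)
        return canon @ R.transpose(-1, -2)     # rotate the decoded shape

x = torch.randn(4, 16, 2)
print(SplitAutoencoder()(x).shape)             # torch.Size([4, 16, 2])
```

Training such a model would minimise a reconstruction loss between the output and the input; the split then encourages z_inv to absorb only the pose-independent content.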

A Group Theoretic Perspective on Unsupervised Deep Learning

April 8, 2015

86% Match
Arnab Paul, Suresh Venkatasubramanian
Machine Learning
Neural and Evolutionary Computing

Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pretraining: first search for a good generative model for the input samples, then repeat the process one layer at a time. We show deeper implicat...

A Characterization Theorem for Equivariant Networks with Point-wise Activations

January 17, 2024

86% Match
Marco Pacini, Xiaowen Dong, ..., Gabriele Santin
Machine Learning
Artificial Intelligence

Equivariant neural networks have shown improved performance, expressiveness and sample complexity on symmetrical domains. But for some specific symmetries, representations, and choice of coordinates, the most common point-wise activations, such as ReLU, are not equivariant, hence they cannot be employed in the design of equivariant neural networks. The theorem we present in this paper describes all possible combinations of finite-dimensional representations, choice of coordin...
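
A quick numeric example of the obstruction: point-wise ReLU commutes with permutation matrices (so it is equivariant for permutation representations) but not with a generic rotation, as the following check shows:

```python
# Point-wise ReLU commutes with permutations but not with rotations,
# so it is equivariant for one representation and not the other.
import numpy as np

relu = lambda v: np.maximum(v, 0.0)
x = np.array([1.5, -0.7, 0.3])

P = np.eye(3)[[2, 0, 1]]                      # permutation representation
print(np.allclose(relu(P @ x), P @ relu(x)))  # True: ReLU is equivariant

th = 0.8                                      # rotation in the xy-plane
R = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0,           0,          1]])
print(np.allclose(relu(R @ x), R @ relu(x)))  # False: equivariance fails
```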

Equivariant Architectures for Learning in Deep Weight Spaces

January 30, 2023

86% Match
Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, ..., Haggai Maron
Machine Learning

Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction. Unfortunately, the unique symmetry structure of deep weight spaces makes this design very challenging. If successful, such architectures would be capable of performing a wide range of intriguing tasks, from adapting a pre-trained network to a new domain to editing objects represented as functions (INRs or NeRFs). As a first step tow...
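
The symmetry in question is easy to exhibit: permuting the hidden neurons of an MLP changes its weight matrices without changing the function it computes, so any architecture that processes raw weights must respect this relabelling. A minimal demonstration:

```python
# Permuting the hidden neurons of an MLP (rows of W1, entries of b1,
# columns of W2) leaves its function unchanged: two very different
# weight matrices can encode the same network.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)
mlp = lambda x, W1, b1, W2, b2: W2 @ np.maximum(W1 @ x + b1, 0) + b2

perm = rng.permutation(5)                    # relabel the hidden neurons
x = rng.normal(size=3)
y1 = mlp(x, W1, b1, W2, b2)
y2 = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
print(np.allclose(y1, y2))                   # True: same function, new weights
```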

Learning functions on symmetric matrices and point clouds via lightweight invariant features

May 13, 2024

86% Match
Ben Blum-Smith, Ningyuan Huang, ..., Soledad Villar
Machine Learning
Commutative Algebra

In this work, we present a mathematical formulation for machine learning of (1) functions on symmetric matrices that are invariant with respect to the action of permutations by conjugation, and (2) functions on point clouds that are invariant with respect to rotations, reflections, and permutations of the points. To achieve this, we construct $O(n^2)$ invariant features derived from generators for the field of rational functions on $n\times n$ symmetric matrices that are inva...
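
Simple examples of such invariants are traces of matrix powers and the sorted diagonal, which do not change under conjugation A ↦ PAPᵀ by a permutation matrix P. A small sketch (illustrative features only, not the paper's generating set):

```python
# Features of a symmetric matrix A that are invariant under conjugation
# by a permutation matrix: traces of powers and the sorted diagonal.
import numpy as np

def perm_conj_invariants(A, max_power=3):
    feats = [np.trace(np.linalg.matrix_power(A, k))
             for k in range(1, max_power + 1)]
    feats += list(np.sort(np.diag(A)))   # diagonal gets permuted, so sort it
    return np.array(feats)

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
A = M + M.T                              # a random symmetric matrix
P = np.eye(4)[rng.permutation(4)]
print(np.allclose(perm_conj_invariants(A),
                  perm_conj_invariants(P @ A @ P.T)))   # True
```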
