ID: 2312.08550

Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks

December 13, 2023


Similar papers (page 2)

Learning Stable Group Invariant Representations with Convolutional Networks

January 16, 2013

89% Match
Joan Bruna, Arthur Szlam, Yann LeCun
Artificial Intelligence
Numerical Analysis

Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring stability to additive and geometric perturbations of the input. Whereas physical transformation groups are ubiquitous in image and audio applications, they do ...
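As a hedged illustration of the mechanism this abstract describes (convolution followed by pooling yielding invariance), here is a minimal NumPy sketch, not the paper's scattering-style construction: averaging a circular convolution's response over all translations gives a representation that is exactly invariant to cyclic shifts of the input.

```python
import numpy as np

def circular_conv(x, w):
    """Group convolution of signal x with filter w over the cyclic translation group Z_n."""
    n = len(x)
    return np.array([sum(x[(t - k) % n] * w[k] for k in range(len(w))) for t in range(n)])

def invariant_rep(x, w):
    """Convolve, take a pointwise nonlinearity, then pool (average) over all translations.
    The pooling step makes the output invariant to cyclic shifts of x."""
    return np.mean(np.abs(circular_conv(x, w)))

rng = np.random.default_rng(0)
x, w = rng.normal(size=16), rng.normal(size=5)
assert np.isclose(invariant_rep(x, w), invariant_rep(np.roll(x, 3), w))  # shift-invariant
```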


Equivariant neural networks and piecewise linear representation theory

August 2, 2024

89% Match
Joel Gibson, Daniel Tubbenhauer, Geordie Williamson
Machine Learning
Group Theory
Representation Theory
Machine Learning

Equivariant neural networks are neural networks with symmetry. Motivated by the theory of group representations, we decompose the layers of an equivariant neural network into simple representations. The nonlinear activation functions lead to interesting nonlinear equivariant maps between simple representations. For example, the rectified linear unit (ReLU) gives rise to piecewise linear maps. We show that these considerations lead to a filtration of equivariant neural network...
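A small numerical check of the phenomenon this abstract points to, sketched here under the assumption of a permutation representation: a pointwise ReLU commutes with any permutation of coordinates (so it is a nonlinear equivariant map), while it generally fails to commute with other orthogonal representations such as planar rotations.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)
rng = np.random.default_rng(1)

# Permutation representation: pointwise ReLU is equivariant.
P = np.eye(4)[[2, 0, 3, 1]]                  # a permutation matrix acting on R^4
v = rng.normal(size=4)
assert np.allclose(relu(P @ v), P @ relu(v))

# Rotation representation on R^2: pointwise ReLU is generally *not* equivariant.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
u = np.array([1.0, -1.0])
print(relu(R @ u), R @ relu(u))              # the two sides differ
```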


Learning Linear Groups in Neural Networks

May 29, 2023

89% Match
Emmanouil Theodosis, Karim Helwani, Demba Ba
Machine Learning
Neural and Evolutionary Computing

Employing equivariance in neural networks leads to greater parameter efficiency and improved generalization performance through the encoding of domain knowledge in the architecture; however, the majority of existing approaches require an a priori specification of the desired symmetries. We present a neural network architecture, Linear Group Networks (LGNs), for learning linear groups acting on the weight space of neural networks. Linear groups are desirable due to their inher...


Feature emergence via margin maximization: case studies in algebraic tasks

November 13, 2023

89% Match
Depen Morwani, Benjamin L. Edelman, Costin-Andrei Oncescu, ... , Sham Kakade
Machine Learning

Understanding the internal representations learned by neural networks is a cornerstone challenge in the science of machine learning. While there have been significant recent strides in some cases towards understanding how neural networks implement specific target functions, this paper explores a complementary question -- why do networks arrive at particular computational strategies? Our inquiry focuses on the algebraic learning tasks of modular addition, sparse parities, and ...
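Since both this paper and the main paper study modular addition, a worked illustration may help (this is the textbook trigonometric identity, not the authors' margin-maximization argument): if a residue $a$ is embedded as $(\cos(2\pi k a/p), \sin(2\pi k a/p))$ for some frequency $k$, then addition mod $p$ corresponds to composing rotations, so the embedding of $(a+b) \bmod p$ is a bilinear function of the embeddings of $a$ and $b$.

```python
import numpy as np

p, k = 7, 2                                   # modulus and one frequency (illustrative values)

def fourier_feature(a):
    angle = 2 * np.pi * k * a / p
    return np.array([np.cos(angle), np.sin(angle)])

a, b = 3, 6
fa, fb = fourier_feature(a), fourier_feature(b)

# Angle-addition identities recover the feature of the modular sum.
cos_sum = fa[0] * fb[0] - fa[1] * fb[1]
sin_sum = fa[0] * fb[1] + fa[1] * fb[0]
assert np.allclose([cos_sum, sin_sum], fourier_feature((a + b) % p))
```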


A PAC-Bayesian Generalization Bound for Equivariant Networks

October 24, 2022

89% Match
Arash Behboodi, Gabriele Cesa, Taco Cohen
Machine Learning
Machine Learning

Equivariant networks capture the inductive bias about the symmetry of the learning task by building those symmetries into the model. In this paper, we study how equivariance relates to generalization error using PAC-Bayesian analysis for equivariant networks, where the transformation laws of feature spaces are determined by group representations. Through a layer-wise perturbation analysis of equivariant networks in the Fourier domain, we derive norm-based PAC-Bayesian gene...
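For reference, one standard form of the (non-equivariant) McAllester/Maurer PAC-Bayesian bound that such analyses refine: for a prior $P$ and posterior $Q$ over hypotheses, a loss bounded in $[0,1]$, and $n$ i.i.d. samples, with probability at least $1-\delta$,

```latex
\mathbb{E}_{h \sim Q}\,[L(h)]
  \;\le\;
  \mathbb{E}_{h \sim Q}\,[\hat{L}_n(h)]
  \;+\;
  \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}} .
```

Per the abstract, the paper's contribution is a norm-based bound of this flavor specialized to equivariant layers analyzed in the Fourier domain.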


Universal approximations of permutation invariant/equivariant functions by deep neural networks

March 5, 2019

89% Match
Akiyoshi Sannai, Yuuki Takai, Matthieu Cordonnier
Machine Learning
Machine Learning

In this paper, we develop a theory of the relationship between $G$-invariant/equivariant functions and deep neural networks for a finite group $G$. In particular, for a given $G$-invariant/equivariant function, we construct a universal approximator by a deep neural network whose layers carry $G$-actions and whose affine transformations are $G$-equivariant/invariant. Using representation theory, we show that this approximator has exponentially fewer free parameters than usua...
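As a hedged sketch of a permutation-invariant architecture in this spirit (the sum-decomposition form $f(X)=\rho\big(\sum_i \phi(x_i)\big)$, which is the standard construction for $S_n$, not this paper's general-$G$ approximator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical small random weights for the per-element map phi and the readout rho.
W_phi = rng.normal(size=(3, 5))
W_rho = rng.normal(size=(5, 1))

def invariant_net(X):
    """X has shape (num_elements, 3); the output is invariant to permuting rows of X."""
    phi = np.tanh(X @ W_phi)     # the same map applied to every element (equivariant step)
    pooled = phi.sum(axis=0)     # summing over elements removes the ordering (invariant step)
    return np.tanh(pooled @ W_rho)

X = rng.normal(size=(6, 3))
assert np.allclose(invariant_net(X), invariant_net(X[rng.permutation(6)]))
```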


MatrixNet: Learning over symmetry groups using learned group representations

January 16, 2025

89% Match
Lucas Laird, Circe Hsu, ... , Robin Walters
Machine Learning
Artificial Intelligence
Representation Theory

Group theory has been used in machine learning to provide a theoretically grounded approach for incorporating known symmetry transformations in tasks from robotics to protein modeling. In these applications, equivariant neural networks use known symmetry groups with predefined representations to learn over geometric input data. We propose MatrixNet, a neural network architecture that learns matrix representations of group element inputs instead of using predefined representat...
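A toy sketch of the general idea as stated in this abstract (mapping group elements, written as words in generators, to matrices that compose by multiplication); the actual MatrixNet architecture, training objective, and handling of group relations are not shown, and the names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4                                           # representation dimension (illustrative)

# Hypothetical learned parameters: one matrix per generator of the group.
learned = {"s1": rng.normal(size=(d, d)), "s2": rng.normal(size=(d, d))}

def represent(word):
    """Map a group element, given as a word in the generators, to the product
    of the corresponding learned matrices."""
    M = np.eye(d)
    for g in word:
        M = M @ learned[g]
    return M

# By construction the map respects composition of words.
assert np.allclose(represent(["s1", "s2", "s1"]),
                   represent(["s1"]) @ represent(["s2", "s1"]))
```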


Universal approximations of invariant maps by neural networks

April 27, 2018

89% Match
Dmitry Yarotsky
Neural and Evolutionary Computing

We describe generalizations of the universal approximation theorem for neural networks to maps invariant or equivariant with respect to linear representations of groups. Our goal is to establish network-like computational models that are both invariant/equivariant and provably complete in the sense of their ability to approximate any continuous invariant/equivariant map. Our contribution is three-fold. First, in the general case of compact groups we propose a construction of ...
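One classical ingredient behind such completeness results, sketched here for a finite group and not as Yarotsky's construction: averaging an arbitrary function over the group (the Reynolds operator) produces an exactly invariant function, so a universal approximator composed with group averaging can approximate continuous invariant maps.

```python
import numpy as np
from itertools import permutations

def symmetrize(f, n):
    """Return the average of f over S_n acting by permuting coordinates;
    the result is exactly S_n-invariant."""
    perms = list(permutations(range(n)))
    return lambda x: np.mean([f(x[list(p)]) for p in perms])

f = lambda x: 2.0 * x[0] + np.sin(x[1]) - x[2] ** 2   # an arbitrary non-invariant function
f_inv = symmetrize(f, 3)

x = np.array([0.3, -1.2, 2.0])
assert np.isclose(f_inv(x), f_inv(x[[2, 0, 1]]))      # invariant under a permutation
```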


Neural Discovery of Permutation Subgroups

September 11, 2023

89% Match
Pavan Karjol, Rohan Kashyap, Prathosh A P
Machine Learning

We consider the problem of discovering a subgroup $H$ of the permutation group $S_{n}$. Unlike traditional $H$-invariant networks, wherein $H$ is assumed to be known, we present a method to discover the underlying subgroup, given that it satisfies certain conditions. Our results show that one can discover any subgroup of type $S_{k}$ ($k \leq n$) by learning an $S_{n}$-invariant function and a linear transformation. We also prove similar results for cyclic and dihedral subgroups...
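A toy numerical check of the composition the abstract describes, learning aside (the linear map below is hand-picked rather than learned, purely for illustration): an $S_n$-invariant function applied after a linear transformation that zeroes all but the first $k$ coordinates is invariant under $S_k$ acting on those coordinates, but not under permutations that mix them with the rest.

```python
import numpy as np

n, k = 5, 3

f = lambda x: np.sum(np.tanh(x))               # a fixed S_n-invariant function
W = np.diag([1.0] * k + [0.0] * (n - k))       # keeps only the first k coordinates (illustrative)
g = lambda x: f(W @ x)                         # the composite function

x = np.array([0.5, -1.0, 2.0, 3.0, -0.7])
perm_in_Sk = [2, 0, 1, 3, 4]                   # permutes only the first k coordinates
perm_not_in_Sk = [3, 1, 2, 0, 4]               # mixes coordinate 0 with coordinate 3

assert np.isclose(g(x), g(x[perm_in_Sk]))      # invariant to the S_k action
print(g(x), g(x[perm_not_in_Sk]))              # these generally differ
```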


Why does Deep Learning work? - A perspective from Group Theory

December 20, 2014

89% Match
Arnab Paul, Suresh Venkatasubramanian
Machine Learning
Neural and Evolutionary Computing
Machine Learning

Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of Deep Learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pre-training: first search for a good generative model for the input samples, then repeat the process one layer at a time. We show deeper implications ...
