ID: 2312.08550

Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks

December 13, 2023


Similar papers

Neural Group Actions

October 8, 2020

88% Match
Span Spanbauer, Luke Sciarappa
Machine Learning
Neural and Evolutionary Computing

We introduce an algorithm for designing Neural Group Actions, collections of deep neural network architectures which model symmetric transformations satisfying the laws of a given finite group. This generalizes involutive neural networks $\mathcal{N}$, which satisfy the group law of $\mathbb{Z}_2$: $\mathcal{N}(\mathcal{N}(x))=x$ for any data $x$. We show how to optionally enforce an additional constraint that the group action be volume-preserving. We conjecture, by analogy t...
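
To make the involutive case concrete, here is a minimal numpy sketch (my illustration, not the paper's construction): conjugating a fixed involution, here a coordinate swap $P$ with $P^2=I$, by an invertible nonlinear map yields a nonlinear network that satisfies the $\mathbb{Z}_2$ group law by construction. The map $f(x)=\sinh(Ax)$ below is a hypothetical stand-in for an invertible network layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invertible nonlinear "feature map" f(x) = sinh(A x), with inverse
# f^{-1}(y) = A^{-1} arcsinh(y); A stands in for an invertible layer.
A = 0.3 * rng.standard_normal((4, 4)) + np.eye(4)
A_inv = np.linalg.inv(A)

# P swaps the first two coordinates, so P @ P = I (an involution).
P = np.eye(4)
P[[0, 1]] = P[[1, 0]]

def N(x):
    # Conjugating the involution P by f gives N = f^{-1} o P o f,
    # which inherits the Z_2 group law: N(N(x)) = x.
    return A_inv @ np.arcsinh(P @ np.sinh(A @ x))

x = rng.standard_normal(4)
assert np.allclose(N(N(x)), x)  # the Z_2 group law holds
```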


A Group Theoretic Perspective on Unsupervised Deep Learning

April 8, 2015

88% Match
Arnab Paul, Suresh Venkatasubramanian
Machine Learning
Neural and Evolutionary Computing

Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pretraining: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implicat...
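
As a rough illustration of the greedy layer-wise loop the abstract alludes to, here is a sketch under strong simplifying assumptions: each layer is modeled by a closed-form linear autoencoder (PCA) rather than any particular generative model, which is not the paper's setting but shows the one-layer-at-a-time structure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 32))  # toy input samples

def pretrain_layer(H, k):
    # A linear autoencoder has a closed-form optimum: the top-k principal
    # directions of the layer's input minimize reconstruction error.
    H0 = H - H.mean(axis=0)
    _, _, Vt = np.linalg.svd(H0, full_matrices=False)
    W = Vt[:k].T            # this layer's learned encoder weights
    return np.tanh(H0 @ W)  # nonlinearity produces the next layer's input

# Greedy stack: model the current representation, then repeat one layer up.
H = X
for k in (16, 8, 4):
    H = pretrain_layer(H, k)
print(H.shape)  # (500, 4)
```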


On the Symmetries of Deep Learning Models and their Internal Representations

May 27, 2022

88% Match
Charles Godfrey, Davis Brown, ..., Henry Kvinge
Machine Learning
Artificial Intelligence

Symmetry is a fundamental tool in the exploration of a broad range of complex systems. In machine learning, symmetry has been explored in both models and data. In this paper we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family's internal representation of data. We do this by calculating a set of fundamental symmetry groups, which we call the intertwiner groups of the model. We connect intertwiner groups to a m...
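
The truncation cuts off before the definition, but a standard example of an architectural symmetry of this kind is hidden-unit permutation in a ReLU network. The sketch below (my illustration, not code from the paper) checks that permuting hidden units, with the matching permutation applied to both adjacent weight matrices, leaves the computed function unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 5))   # hidden x input
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((3, 8))   # output x hidden

def f(x, W1, b1, W2):
    # One-hidden-layer ReLU network.
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

# Permuting hidden units (rows of W1 and b1, columns of W2) commutes with
# the elementwise ReLU, so the network computes the same function: such
# permutations sit inside the model family's symmetry group.
P = np.eye(8)[rng.permutation(8)]
x = rng.standard_normal(5)
assert np.allclose(f(x, W1, b1, W2), f(x, P @ W1, P @ b1, W2 @ P.T))
```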


Mathematics of Neural Networks (Lecture Notes Graduate Course)

March 6, 2024

88% Match
Bart M. N. Smets
Machine Learning
Artificial Intelligence

These are the lecture notes that accompanied the course of the same name that I taught at the Eindhoven University of Technology from 2021 to 2023. The course is intended as an introduction to neural networks for mathematics students at the graduate level and aims to make mathematics students interested in further researching neural networks. It consists of two parts: first a general introduction to deep learning that focuses on introducing the field in a formal mathematical ...


Encoding Involutory Invariances in Neural Networks

June 7, 2021

88% Match
Anwesh Bhattacharya, Marios Mattheakis, Pavlos Protopapas
Machine Learning
Artificial Intelligence
Neural and Evolutionary Computing

In certain situations, neural networks are trained upon data that obey underlying symmetries. However, the predictions do not respect the symmetries exactly unless they are embedded in the network structure. In this work, we introduce architectures that embed a special kind of symmetry, namely invariance with respect to involutory linear/affine transformations up to parity $p=\pm 1$. We provide rigorous theorems to show that the proposed network ensures such an invariance and present ...
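
One standard way to hard-wire such a constraint (a sketch, not necessarily the paper's exact architecture) is to symmetrize an unconstrained backbone $g$ as $f(x) = g(x) + p\,g(Sx)$. Then $f(Sx) = g(Sx) + p\,g(x) = p\,f(x)$, using $S^2 = I$ and $p^2 = 1$, so the parity constraint holds by construction:

```python
import numpy as np

rng = np.random.default_rng(2)
S = -np.eye(4)          # an involutory linear map: S @ S = I
p = -1.0                # target parity

def g(x):
    # Arbitrary unconstrained "backbone" (a fixed toy network here).
    W = np.arange(12.0).reshape(3, 4)
    return np.tanh(W @ x).sum()

def f(x):
    # Symmetrized output: invariance up to parity holds by construction.
    return g(x) + p * g(S @ x)

x = rng.standard_normal(4)
assert np.isclose(f(S @ x), p * f(x))
```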


On the Universality of Invariant Networks

January 27, 2019

88% Match
Haggai Maron, Ethan Fetaya, ..., Yaron Lipman
Machine Learning

Constraining linear layers in neural networks to respect symmetry transformations from a group $G$ is a common design principle for invariant networks that has found many applications in machine learning. In this paper, we consider a fundamental question that has received little attention to date: Can these networks approximate any (continuous) invariant function? We tackle the rather general case where $G\leq S_n$ (an arbitrary subgroup of the symmetric group) that acts ...
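
For context, the most elementary route to a $G$-invariant function for $G \leq S_n$ is to average an arbitrary function over the group (the Reynolds operator); this is exponentially costly for large groups, which is why constrained linear layers are used instead. The sketch below (my illustration, not the paper's architecture) shows the basic mechanism for a small cyclic group.

```python
import numpy as np

rng = np.random.default_rng(3)

def h(x):
    # An arbitrary, non-invariant function of 4 inputs.
    return np.tanh(np.dot(np.array([1.0, 2.0, 3.0, 4.0]), x))

# G = cyclic subgroup of S_4, generated by a one-step rotation.
G = [np.roll(np.arange(4), k) for k in range(4)]

def f(x):
    # Averaging h over G yields an invariant: f(g.x) = f(x) for all g in G,
    # because composing with a fixed g only relabels the terms of the mean.
    return np.mean([h(x[g]) for g in G])

x = rng.standard_normal(4)
for g in G:
    assert np.isclose(f(x[g]), f(x))
```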


Nonlinearities in Steerable SO(2)-Equivariant CNNs

September 14, 2021

88% Match
Daniel Franzen, Michael Wand
Machine Learning

Invariance under symmetry is an important problem in machine learning. Our paper looks specifically at equivariant neural networks where transformations of inputs yield homomorphic transformations of outputs. Here, steerable CNNs have emerged as the standard solution. An inherent problem of steerable representations is that general nonlinear layers break equivariance, thus restricting architectural choices. Our paper applies harmonic distortion analysis to illuminate the effe...
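
To see the problem concretely: a steerable feature of rotation order $n$ transforms as $z \mapsto e^{in\alpha}z$ under an input rotation by $\alpha$. A norm nonlinearity acts only on the magnitude and so commutes with this phase rotation, while a generic pointwise nonlinearity mixes phases and generates higher harmonics, breaking exact equivariance. A small numpy check of this standard fact (my illustration, not the paper's analysis):

```python
import numpy as np

n, a = 2, 0.7                 # rotation order and rotation angle
z = 0.8 + 0.3j                # a complex (circular-harmonic) feature
rot = np.exp(1j * n * a)      # how the feature transforms under rotation

def norm_nonlin(z):
    # Acts only on |z|, so it commutes with phase rotation:
    # equivariance is preserved exactly.
    return np.tanh(abs(z)) * z / abs(z)

assert np.isclose(norm_nonlin(rot * z), rot * norm_nonlin(z))

def pointwise_relu(z):
    # A generic pointwise nonlinearity on the real part mixes phases
    # ("harmonic distortion") and breaks exact equivariance.
    return max(z.real, 0.0) + 1j * z.imag

print(pointwise_relu(rot * z), rot * pointwise_relu(z))  # differ in general
```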


A Structural Approach to the Design of Domain Specific Neural Network Architectures

January 23, 2023

88% Match
Gerrit Nolte
Machine Learning
Neural and Evolutionary Computing

This is a master's thesis concerning the theoretical ideas of geometric deep learning. Geometric deep learning aims to provide a structured characterization of neural network architectures, specifically focused on the ideas of invariance and equivariance of data with respect to given transformations. This thesis aims to provide a theoretical evaluation of geometric deep learning, compiling theoretical results that characterize the properties of invariant neural networks wit...


Neural network interpretation using descrambler groups

December 2, 2019

88% Match
Jake L. Amey, Jake Keeley, ..., Ilya Kuprov
Signal Processing

The lack of interpretability and trust is a much-criticised feature of deep neural networks. In fully connected nets, the signalling between inner layers is scrambled because backpropagation training does not require perceptrons to be arranged in any particular order. The result is a black box; this problem is particularly severe in scientific computing and digital signal processing (DSP), where neural nets perform abstract mathematical transformations that do not reduce to ...


Data-driven emergence of convolutional structure in neural networks

February 1, 2022

88% Match
Alessandro Ingrosso, Sebastian Goldt
Disordered Systems and Neural Networks
Neurons and Cognition
Machine Learning

Exploiting data invariances is crucial for efficient learning in both artificial and biological neural circuits. Understanding how neural networks can discover appropriate representations capable of harnessing the underlying symmetries of their inputs is thus crucial in machine learning and neuroscience. Convolutional neural networks, for example, were designed to exploit translation symmetry and their capabilities triggered the first wave of deep learning successes. However,...
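
The translation symmetry in question is easy to state in code: a convolutional layer is a weight-shared, local (circulant) linear map, and it commutes with input shifts. A minimal numpy check of that equivariance (my illustration of the symmetry the paper studies, not its training experiments):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(16)   # 1-D input signal
w = rng.standard_normal(5)    # local filter, shared across positions

def circular_conv(x, w):
    # The same filter w is applied at every shift, i.e. the layer is a
    # circulant weight matrix rather than a dense one.
    n, k = len(x), len(w)
    return np.array([np.dot(w, np.roll(x, -i)[:k]) for i in range(n)])

# Convolution commutes with translation: shifting the input shifts the
# output. This is the symmetry convolutional layers are built to exploit.
shift = 3
assert np.allclose(circular_conv(np.roll(x, shift), w),
                   np.roll(circular_conv(x, w), shift))
```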
