ID: 1504.02462

A Group Theoretic Perspective on Unsupervised Deep Learning

April 8, 2015


Similar papers (page 2)

Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks

December 13, 2023

88% Match
Giovanni Luca Marchetti, Christopher Hillar, ... , Sophia Sanborn
Machine Learning
Artificial Intelligence
Signal Processing

In this work, we formally prove that, under certain conditions, if a neural network is invariant to a finite group then its weights recover the Fourier transform on that group. This provides a mathematical explanation for the emergence of Fourier features -- a ubiquitous phenomenon in both biological and artificial learning systems. The results hold even for non-commutative groups, in which case the Fourier transform encodes all the irreducible unitary group representations. ...
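The abstract's central claim, that group-invariant weights recover the Fourier transform on the group, has a familiar special case: for the cyclic group Z/n, shift-invariant (circulant) linear maps are exactly those diagonalized by the DFT matrix, whose rows are the group's irreducible characters. A minimal NumPy sketch of this special case (illustrative only, not code from the paper):

```python
import numpy as np

n = 8
# DFT matrix: its rows are the irreducible characters of the cyclic group Z/n
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)

# A shift-invariant linear operator on Z/n: circular convolution with a filter h,
# i.e. a circulant matrix C with C[i, j] = h[(i - j) mod n]
h = np.random.randn(n)
C = np.stack([np.roll(h, k) for k in range(n)], axis=1)

# Conjugating by the (unitary) DFT diagonalizes every circulant matrix
D = F @ C @ F.conj().T
off_diag = D - np.diag(np.diag(D))
print(np.max(np.abs(off_diag)))  # prints a value near machine epsilon
```

The non-commutative statement in the abstract generalizes this picture: the one-dimensional characters of Z/n are replaced by higher-dimensional irreducible unitary representations.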


Why & When Deep Learning Works: Looking Inside Deep Learnings

May 10, 2017

88% Match
Ronny Ronen
Machine Learning

The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research from its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and po...


A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations

February 6, 2023

88% Match
Bilal Chughtai, Lawrence Chan, Neel Nanda
Machine Learning
Artificial Intelligence
Representation Theory

Universality is a key hypothesis in mechanistic interpretability -- that different models learn similar features and circuits when trained on similar tasks. In this work, we study the universality hypothesis by examining how small neural networks learn to implement group composition. We present a novel algorithm by which neural networks may implement composition for any finite group via mathematical representation theory. We then show that networks consistently learn this alg...
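The mechanism sketched in this abstract, implementing composition for a finite group through its representation matrices, can be illustrated for S3 with the permutation representation: composing two group elements corresponds to multiplying their matrices and decoding the product. A toy sketch under that reading (not the paper's code):

```python
import numpy as np
from itertools import permutations

# All six elements of S3, each a permutation of (0, 1, 2)
elems = list(permutations(range(3)))

def rep(p):
    """Permutation-matrix representation: rep(p) maps basis vector e_i to e_{p[i]}."""
    M = np.zeros((3, 3))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

def compose(p, q):
    """Compose via the representation: rep(p) @ rep(q) equals rep(p o q),
    so decode the product matrix back to a group element."""
    M = rep(p) @ rep(q)
    for e in elems:
        if np.allclose(rep(e), M):
            return e

p, q = (1, 0, 2), (0, 2, 1)
print(compose(p, q))  # prints (1, 2, 0), i.e. i -> p[q[i]]
```

The networks studied in the paper learn an analogous scheme with (possibly lower-dimensional) irreducible representations rather than permutation matrices.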


Deep learning systems as complex networks

September 28, 2018

88% Match
Alberto Testolin, Michele Piccolini, Samir Suweis
Disordered Systems and Neural Networks
Machine Learning
Machine Learning

Thanks to the availability of large scale digital datasets and massive amounts of computational power, deep learning algorithms can learn representations of data by exploiting multiple levels of abstraction. These machine learning methods have greatly improved the state-of-the-art in many challenging cognitive tasks, such as visual object recognition, speech processing, natural language understanding and automatic translation. In particular, one class of deep learning models,...


The Modern Mathematics of Deep Learning

May 9, 2021

88% Match
Julius Berner, Philipp Grohs, ... , Philipp Petersen
Machine Learning
Machine Learning

We describe the new field of mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern: the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surprisingly successful optimization performance despite the non-convexity of the pro...


Representation Learning: A Review and New Perspectives

June 24, 2012

88% Match
Yoshua Bengio, Aaron Courville, Pascal Vincent
Machine Learning

The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implem...


Deep representation learning: Fundamentals, Perspectives, Applications, and Open Challenges

November 27, 2022

88% Match
Kourosh T. Baghaei, Amirreza Payandeh, Pooya Fayyazsanavi, Shahram Rahimi, ... , Somayeh Bakhtiari Ramezani
Machine Learning
Artificial Intelligence
Computer Vision and Pattern Recognition

Machine Learning algorithms have had a profound impact on the field of computer science over the past few decades. These algorithms' performance is greatly influenced by the representations that are derived from the data in the learning process. The representations learned in a successful learning process should be concise, discrete, meaningful, and able to be applied across a variety of tasks. A recent effort has been directed toward developing Deep Learning models, which hav...


What Really is Deep Learning Doing?

November 6, 2017

87% Match
Chuyu Xiong
Machine Learning
Neural and Evolutionary Computing

Deep learning has achieved a great success in many areas, from computer vision to natural language processing, to game playing, and much more. Yet, what deep learning is really doing is still an open question. There are a lot of works in this direction. For example, [5] tried to explain deep learning by group renormalization, and [6] tried to explain deep learning from the view of functional approximation. In order to address this very crucial question, here we see deep learn...


On the Symmetries of Deep Learning Models and their Internal Representations

May 27, 2022

87% Match
Charles Godfrey, Davis Brown, ... , Henry Kvinge
Machine Learning
Artificial Intelligence

Symmetry is a fundamental tool in the exploration of a broad range of complex systems. In machine learning symmetry has been explored in both models and data. In this paper we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family's internal representation of data. We do this by calculating a set of fundamental symmetry groups, which we call the intertwiner groups of the model. We connect intertwiner groups to a m...


Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges

April 27, 2021

87% Match
Michael M. Bronstein, Joan Bruna, ... , Petar Veličković
Machine Learning
Artificial Intelligence
Computational Geometry
Computer Vision and Pattern Recognition
Machine Learning

The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learnin...
