ID: 1504.02462

A Group Theoretic Perspective on Unsupervised Deep Learning

April 8, 2015


Similar papers (page 3)

Mechanisms of dimensionality reduction and decorrelation in deep neural networks

October 4, 2017

87% Match
Haiping Huang
Machine Learning
Statistical Mechanics

Deep neural networks are widely used in various domains. However, the nature of the computations performed at each layer of these networks is far from well understood. Increasing the interpretability of deep neural networks is thus important. Here, we construct a mean-field framework to understand how compact representations are developed across layers, not only in deterministic deep networks with random weights but also in generative deep networks where an unsupervised learning i...
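
As a rough, self-contained illustration of the phenomenon this abstract describes (a sketch, not the paper's mean-field calculation), the following numpy snippet propagates data through a deterministic network with i.i.d. random weights and tracks the effective dimensionality of the representation at each layer via the participation ratio of the covariance spectrum. The width, depth, and tanh nonlinearity are arbitrary choices for the demo.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of representations X (samples x features):
    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues)."""
    C = np.cov(X, rowvar=False)
    ev = np.linalg.eigvalsh(C)
    return ev.sum() ** 2 / (ev ** 2).sum()

rng = np.random.default_rng(0)
n_samples, width, depth = 500, 200, 8

X = rng.standard_normal((n_samples, width))  # random input data
for layer in range(depth):
    W = rng.standard_normal((width, width)) / np.sqrt(width)  # i.i.d. Gaussian weights
    X = np.tanh(X @ W)                                        # one deterministic random layer
    print(f"layer {layer + 1}: effective dimension = {participation_ratio(X):.1f}")
```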


When Representations Align: Universality in Representation Learning Dynamics

February 14, 2024

87% Match
Loek van Rossem, Andrew M. Saxe
Machine Learning
Neurons and Cognition

Deep neural networks come in many sizes and architectures. The choice of architecture, in conjunction with the dataset and learning algorithm, is commonly understood to affect the learned neural representations. Yet, recent results have shown that different architectures learn representations with striking qualitative similarities. Here we derive an effective theory of representation learning under the assumption that the encoding map from input to hidden representation and t...
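
A minimal sketch of the kind of comparison this abstract motivates, not the paper's effective theory: two one-hidden-layer tanh networks of different widths are trained on the same toy regression task, and the similarity of their hidden representations is tracked over training with linear CKA. All sizes, the learning rate, and the choice of CKA as the similarity measure are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def init(n_in, n_hidden, n_out):
    return {"W1": 0.1 * rng.standard_normal((n_in, n_hidden)),
            "W2": 0.1 * rng.standard_normal((n_hidden, n_out))}

def forward(p, X):
    H = np.tanh(X @ p["W1"])                # hidden representation
    return H, H @ p["W2"]

def sgd_step(p, X, Y, lr=0.05):
    H, Yhat = forward(p, X)
    err = (Yhat - Y) / len(X)               # gradient of mean squared error w.r.t. output
    gW2 = H.T @ err
    dH = (err @ p["W2"].T) * (1 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH
    p["W2"] -= lr * gW2
    p["W1"] -= lr * gW1

def linear_cka(A, B):
    """Similarity of two representations (samples x features), in [0, 1]."""
    A = A - A.mean(0); B = B - B.mean(0)
    num = np.linalg.norm(A.T @ B, "fro") ** 2
    return num / (np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro"))

X = rng.standard_normal((256, 10))
Y = np.sin(X @ rng.standard_normal((10, 1)))      # a shared toy task
net_a, net_b = init(10, 32, 1), init(10, 128, 1)  # same task, different widths

for step in range(2001):
    sgd_step(net_a, X, Y); sgd_step(net_b, X, Y)
    if step % 500 == 0:
        Ha, _ = forward(net_a, X); Hb, _ = forward(net_b, X)
        print(f"step {step:4d}: CKA(hidden_a, hidden_b) = {linear_cka(Ha, Hb):.3f}")
```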


Automatic Discoveries of Physical and Semantic Concepts via Association Priors of Neuron Groups

December 30, 2016

87% Match
Shuai Li, Kui Jia, Xiaogang Wang
Machine Learning

Recent successful deep neural networks are largely trained in a supervised manner: training associates complex patterns in the input samples with neurons in the last layer, which form representations of concepts. In spite of these successes, the properties of the complex patterns associated with a learned concept remain elusive. In this work, by analyzing how neurons are associated with concepts in supervised networks, we hypothesize that with proper priors to regulate learning,...


Recent advances in deep learning theory

December 20, 2020

87% Match
Fengxiang He, Dacheng Tao
Machine Learning

Deep learning is usually described as an experiment-driven field under continual criticism for lacking theoretical foundations. This problem has been partially addressed by a large volume of literature that has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic dif...


Deep Feature Space: A Geometrical Perspective

June 30, 2020

87% Match
Ioannis Kansizoglou, Loukas Bampis, Antonios Gasteratos
Computer Vision and Pattern Recognition
Computational Geometry
Machine Learning

One of the most prominent attributes of Neural Networks (NNs) is their capability of learning to extract robust and descriptive features from high-dimensional data, like images. This ability makes their use as feature extractors particularly common in an abundance of modern reasoning systems. Their application scope mainly includes complex cascade tasks, like multi-modal recognition and deep Reinforcement Learning (RL). However, NNs induce impli...
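
As a toy illustration of examining feature-space geometry (a sketch under strong simplifications: a fixed random tanh network stands in for a pretrained extractor, and two Gaussian clusters stand in for classes), the snippet below compares within-class and between-class cosine similarities of extracted features.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_extractor(n_in, depth=4, width=64):
    """Stand-in feature extractor: a fixed random tanh network.
    A real study would use a pretrained model here."""
    Ws, d = [], n_in
    for _ in range(depth):
        Ws.append(rng.standard_normal((d, width)) / np.sqrt(d))
        d = width
    def features(X):
        H = X
        for W in Ws:
            H = np.tanh(H @ W)
        return H
    return features

def mean_cosine(A, B):
    """Average cosine similarity over all pairs of rows of A and B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return (A @ B.T).mean()

# two synthetic "classes": well-separated Gaussian clusters
class_a = rng.standard_normal((100, 20)) + 2.0
class_b = rng.standard_normal((100, 20)) - 2.0

extract = make_extractor(n_in=20)
Fa, Fb = extract(class_a), extract(class_b)

print(f"within-class cosine similarity:  {mean_cosine(Fa, Fa):.3f}")  # includes the trivial diagonal
print(f"between-class cosine similarity: {mean_cosine(Fa, Fb):.3f}")
```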


A Classification of $G$-invariant Shallow Neural Networks

May 18, 2022

87% Match
Devanshu Agrawal, James Ostrowski
Machine Learning

When trying to fit a deep neural network (DNN) to a $G$-invariant target function with $G$ a group, it only makes sense to constrain the DNN to be $G$-invariant as well. However, there can be many different ways to do this, thus raising the problem of "$G$-invariant neural architecture design": What is the optimal $G$-invariant architecture for a given problem? Before we can consider the optimization problem itself, we must understand the search space, the architectures in ...
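
A minimal sketch of one way to make a shallow network $G$-invariant, here for the cyclic group acting on inputs by circular shifts: average the network over the group orbit (symmetrization). This is only one point in the design space the abstract refers to, not the paper's classification.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                                      # input dimension; G = cyclic group C_8 acting by shifts
W = rng.standard_normal((n, 16))
v = rng.standard_normal(16)

def f(x):
    """A generic (non-invariant) shallow network."""
    return np.tanh(x @ W) @ v

def f_inv(x):
    """G-invariant version via symmetrization: average f over the orbit of x."""
    return np.mean([f(np.roll(x, k)) for k in range(n)])

x = rng.standard_normal(n)
print(f"f(x)     = {f(x):+.4f},  f(shift(x))     = {f(np.roll(x, 3)):+.4f}")       # differ
print(f"f_inv(x) = {f_inv(x):+.4f},  f_inv(shift(x)) = {f_inv(np.roll(x, 3)):+.4f}")  # agree
```

Symmetrization is the brute-force corner of this design space; weight-tying architectures achieve the same invariance at lower cost, and it is trade-offs of this sort that a classification of $G$-invariant architectures has to organize.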


Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation

October 28, 2018

87% Match
Liwei Wang, Lunjia Hu, Jiayuan Gu, Yue Wu, Zhiqiang Hu, ... , John Hopcroft
Machine Learning

It is widely believed that learning good representations is one of the main reasons for the success of deep neural networks. Although highly intuitive, there is a lack of theory and systematic approach quantitatively characterizing what representations do deep neural networks learn. In this work, we move a tiny step towards a theory and better understanding of the representations. Specifically, we study a simpler problem: How similar are the representations learned by two net...
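
A minimal sketch in the spirit of the question this abstract poses: given the hidden activations of two networks on the same inputs, match each neuron of one network to its most-correlated neuron in the other and summarize the matches. Random untrained networks stand in for trained ones here, so low correlations are expected; the point is the measurement, not the result.

```python
import numpy as np

rng = np.random.default_rng(4)

def hidden(X, seed, width=50):
    """One-hidden-layer random network, standing in for a trained one."""
    W = np.random.default_rng(seed).standard_normal((X.shape[1], width)) / np.sqrt(X.shape[1])
    return np.tanh(X @ W)

X = rng.standard_normal((1000, 30))          # the same inputs go to both networks
Ha, Hb = hidden(X, seed=10), hidden(X, seed=20)

# correlation of every neuron in net A with every neuron in net B
Za = (Ha - Ha.mean(0)) / Ha.std(0)
Zb = (Hb - Hb.mean(0)) / Hb.std(0)
corr = Za.T @ Zb / len(X)                    # width_a x width_b correlation matrix

best = np.abs(corr).max(axis=1)              # each A-neuron's best match in B
print(f"mean best-match |corr|:  {best.mean():.3f}")
print(f"A-neurons with |corr| > 0.9:  {(best > 0.9).sum()} / {len(best)}")
```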


Using brain inspired principles to unsupervisedly learn good representations for visual pattern recognition

April 30, 2021

87% Match
Luis Sa-Couto, Andreas Wichert
Computer Vision and Pattern Recognition
Neural and Evolutionary Computing

Although deep learning has solved difficult problems in visual pattern recognition, it is mostly successful in tasks where large amounts of labeled training data are available. Furthermore, the global back-propagation-based training rule and the number of layers employed represent a departure from biological inspiration. The brain is able to perform most of these tasks in a very general way from limited or no labeled data. For these reasons, it is still a key research question t...
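
As a concrete example of a brain-inspired, local, label-free learning rule of the general flavor this abstract discusses (a sketch, not the paper's model), the snippet below runs winner-take-all competitive Hebbian learning, which recovers cluster prototypes without labels or back-propagation.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy data: three Gaussian clusters in 2-D
centers = np.array([[3.0, 0.0], [-3.0, 0.0], [0.0, 4.0]])
X = np.concatenate([c + rng.standard_normal((200, 2)) for c in centers])
rng.shuffle(X)

# winner-take-all competitive Hebbian learning: local, label-free updates;
# prototypes are initialized from random data points for stability
W = X[rng.choice(len(X), size=3, replace=False)].copy()
lr = 0.05
for epoch in range(20):
    for x in X:
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        W[winner] += lr * (x - W[winner])   # only the winning neuron moves toward the input

print("learned prototypes:\n", np.round(W, 2))
print("true centers:\n", centers)
```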


How deep learning works -- The geometry of deep learning

October 30, 2017

87% Match
Xiao Dong, Jiasong Wu, Ling Zhou
Machine Learning

Why and how deep learning works well on different tasks remains a mystery from a theoretical perspective. In this paper we draw a geometric picture of the deep learning system by finding its analogies with two existing geometric structures: the geometry of quantum computation and the geometry of diffeomorphic template matching. In this framework, we give the geometric structures of different deep learning systems, including convolutional neural networks, residual net...
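
One piece of this geometric picture is easy to make concrete: a residual network can be read as the Euler discretization of a smooth flow, which is what connects it to diffeomorphic template matching. The sketch below (illustrative only, with arbitrary widths and step size) shows that small residual steps keep the composed map close to a diffeomorphism.

```python
import numpy as np

rng = np.random.default_rng(6)

# read a residual network as the Euler discretization of a flow:
# x_{t+1} = x_t + h * f_t(x_t), so depth plays the role of integration time
dim, depth, h = 2, 50, 0.1
Ws = [0.5 * rng.standard_normal((dim, dim)) for _ in range(depth)]

def resnet(x):
    for W in Ws:
        x = x + h * np.tanh(x @ W)   # one residual block = one Euler step
    return x

x0 = np.array([1.0, -0.5])
x1 = x0 + np.array([1e-4, 0.0])      # a nearby starting point

# small steps keep each block near the identity, so the composed map stays
# invertible (a diffeomorphism) and nearby points stay nearby
print("endpoint:", np.round(resnet(x0), 4))
print("separation in :", np.linalg.norm(x1 - x0))
print("separation out:", np.linalg.norm(resnet(x1) - resnet(x0)))
```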


Identifying the Group-Theoretic Structure of Machine-Learned Symmetries

September 14, 2023

87% Match
Roy T. Forestano, Konstantin T. Matchev, Katia Matcheva, Alexander Roman, ... , Sarunas Verner
Machine Learning
Group Theory
Mathematical Physics

Deep learning has recently been used successfully to derive symmetry transformations that preserve important physics quantities. Being completely agnostic, these techniques postpone the identification of the discovered symmetries to a later stage. In this letter we propose methods for examining and identifying the group-theoretic structure of such machine-learned symmetries. We design loss functions which probe the subalgebra structure, either during the deep learning stage of sy...
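
A minimal sketch of the group-theoretic check this abstract describes: given a set of symmetry generators (here hand-coded so(3) matrices, standing in for machine-learned ones), test whether their commutators close into the span of the generators, i.e., whether the set forms a Lie algebra.

```python
import numpy as np

def commutator(A, B):
    return A @ B - B @ A

# so(3) generators as a stand-in for "machine-learned" symmetry generators
Lx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Ly = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
Lz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
gens = [Lx, Ly, Lz]

# closure test: project each commutator back onto the span of the generators;
# zero residual means the set closes into a Lie algebra
basis = np.stack([g.ravel() for g in gens], axis=1)   # 9 x 3
for i in range(3):
    for j in range(i + 1, 3):
        c = commutator(gens[i], gens[j]).ravel()
        coef, *_ = np.linalg.lstsq(basis, c, rcond=None)
        resid = np.linalg.norm(basis @ coef - c)
        print(f"[g{i}, g{j}] = {np.round(coef, 3)} . (g0, g1, g2),  residual = {resid:.1e}")
```

The recovered coefficients are the structure constants of the algebra; for so(3) the expected output is the Levi-Civita pattern ([g0, g1] = g2 and cyclic permutations) with residuals at machine precision.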
