ID: 1504.02462

A Group Theoretic Perspective on Unsupervised Deep Learning

April 8, 2015


Similar papers

Towards Building A Group-based Unsupervised Representation Disentanglement Framework

February 20, 2021

87% Match
Tao Yang, Xuanchi Ren, Yuwang Wang, ... , Nanning Zheng
Machine Learning
Computer Vision and Pattern Recognition

Disentangled representation learning is one of the major goals of deep learning, and is a key step toward explainable and generalizable models. A well-defined theoretical guarantee is still lacking for VAE-based unsupervised methods, a popular family of approaches to unsupervised disentanglement. The group-theory-based definition of representation disentanglement mathematically connects the data transformations to the representations using the formalism of g...
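The group-theoretic definition referenced here hinges on equivariance: an encoder $f$ should satisfy $f(g \cdot x) = g \cdot f(x)$ for the group acting on both the data and the representation space. Below is a minimal numpy sketch with a toy SO(2) action and a hand-built equivariant map; all names are hypothetical, and this is not the paper's code.

```python
# Toy check of the equivariance condition f(g . x) = g . f(x) under SO(2).
import numpy as np

def rotation(theta):
    """Group element of SO(2) acting on the plane."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A hand-built equivariant "encoder": any scaled rotation commutes with
# every element of SO(2), so it respects the group action exactly.
F = 2.0 * rotation(0.3)
f = lambda x: F @ x

x = np.array([1.0, 0.0])
g = rotation(np.pi / 4)
print(np.allclose(f(g @ x), g @ f(x)))  # True: f is equivariant
```

A learned encoder would instead be trained so that this identity holds approximately across the dataset.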


Generalizing in the Real World with Representation Learning

October 18, 2022

87% Match
Tegan Maharaj
Machine Learning
Machine Learning

Machine learning (ML) formalizes the problem of getting computers to learn from experience as optimization of performance according to some metric(s) on a set of data examples. This is in contrast to requiring behaviour specified in advance (e.g. by hard-coded rules). Formalization of this problem has enabled great progress in many applications with large real-world impact, including translation, speech recognition, self-driving cars, and drug discovery. But practical instant...
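As a concrete instance of "learning as optimization of a metric on data examples", here is a minimal empirical-risk-minimization sketch for least squares; the data, learning rate, and iteration count are arbitrary choices for illustration.

```python
# Empirical risk minimization: fit parameters by descending the average
# loss over a set of data examples, rather than hard-coding behaviour.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # data examples
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)    # noisy targets

w = np.zeros(3)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)          # gradient of the empirical risk
    w -= 0.1 * grad                            # optimize the metric by descent

print(np.round(w, 2))  # close to w_true
```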


Representation Learning: A Statistical Perspective

November 26, 2019

87% Match
Jianwen Xie, Ruiqi Gao, Erik Nijkamp, ... , Ying Nian Wu
Machine Learning
Machine Learning

Learning representations of data is an important problem in statistics and machine learning. While the origin of learning representations can be traced back to factor analysis and multidimensional scaling in statistics, it has become a central theme in deep learning with important applications in computer vision and computational neuroscience. In this article, we review recent advances in learning representations from a statistical perspective. In particular, we review the fo...
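Factor analysis, cited here as one statistical origin of representation learning, posits observations $x = Wz + \epsilon$ with a low-dimensional latent $z$ serving as the representation. A minimal sketch on synthetic data, with dimensions chosen arbitrarily:

```python
# Factor analysis: recover a low-dimensional latent representation z
# from observations generated as x = z W + noise.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 2))                  # latent factors
W = rng.normal(size=(2, 10))                   # loading matrix
x = z @ W + 0.1 * rng.normal(size=(500, 10))   # observations

fa = FactorAnalysis(n_components=2).fit(x)
z_hat = fa.transform(x)                        # recovered representation
print(z_hat.shape)  # (500, 2)
```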


Learning to be Simple

December 8, 2023

87% Match
Yang-Hui He, Vishnu Jejjala, ... , Max Sharnoff
Machine Learning
Group Theory
Mathematical Physics

In this work we employ machine learning to understand structured mathematical data involving finite groups and derive a theorem about necessary properties of generators of finite simple groups. We create a database of all 2-generated subgroups of the symmetric group on $n$ objects and conduct a classification of finite simple groups among them using shallow feed-forward neural networks. We show that this neural network classifier can decipher the property of simplicity with var...
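To make the described database concrete, the sketch below enumerates the subgroups of $S_4$ generated by pairs of permutations using sympy. It is a toy reconstruction of the setup, not the authors' code, and it records only crude features (order, abelianness) in place of whatever the classifier actually consumed.

```python
# Enumerate 2-generated subgroups of S_n (here n = 4) with sympy.
from itertools import permutations
from sympy.combinatorics import Permutation, PermutationGroup

n = 4
elements = [Permutation(list(p)) for p in permutations(range(n))]

seen = {}
for a in elements:
    for b in elements:
        G = PermutationGroup([a, b])           # a 2-generated subgroup
        key = frozenset(tuple(g.array_form) for g in G.elements)
        if key not in seen:
            seen[key] = (G.order(), G.is_abelian)  # crude classifier features

print(len(seen), "distinct 2-generated subgroups of S_%d" % n)
```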

Learning a Lie Algebra from Unlabeled Data Pairs

September 20, 2020

87% Match
Christopher Ick, Vincent Lostanlen
Machine Learning
Artificial Intelligence
Computer Vision and Pattern Recognition
Sound
Machine Learning

Deep convolutional networks (convnets) show a remarkable ability to learn disentangled representations. In recent years, the generalization of deep learning to Lie groups beyond rigid motion in $\mathbb{R}^n$ has made it possible to build convnets over datasets with non-trivial symmetries, such as patterns over the surface of a sphere. However, one limitation of this approach is the need to explicitly define the Lie group underlying the desired invariance property before training the ...
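One way to avoid fixing the Lie group in advance is to learn a generator of a one-parameter group directly from pairs. The sketch below is a hypothetical setup, not the paper's method: it fits a matrix $A$ so that pairs $(x, y)$ satisfy $y \approx \exp(A)\,x$, using PyTorch's differentiable matrix exponential.

```python
# Learn a Lie algebra generator A from unlabeled pairs (x, y ~ expm(A) x).
import torch

torch.manual_seed(0)
A_true = torch.tensor([[0.0, -1.0], [1.0, 0.0]])   # generator of 2-D rotations
X = torch.randn(256, 2)
Y = X @ torch.linalg.matrix_exp(A_true).T          # unlabeled pairs (X, Y)

A = torch.zeros(2, 2, requires_grad=True)          # learned generator
opt = torch.optim.Adam([A], lr=0.05)
for _ in range(300):
    loss = ((X @ torch.linalg.matrix_exp(A).T - Y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(A.detach())  # approaches A_true (up to the exp map's non-uniqueness)
```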


Hyper-Representations: Learning from Populations of Neural Networks

October 7, 2024

87% Match
Konstantin Schürholt
Machine Learning

This thesis addresses the challenge of understanding Neural Networks through the lens of their most fundamental component: the weights, which encapsulate the learned information and determine the model behavior. At the core of this thesis is a fundamental question: Can we learn general, task-agnostic representations from populations of Neural Network models? The key contribution of this thesis toward answering that question is hyper-representations, a self-supervised method to lear...
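As a stand-in for the core idea (not the thesis' actual self-supervised method), the sketch below treats flattened weight vectors of a model population as an ordinary dataset and compresses them with a linear encoder; the population here is random noise, purely to fix shapes.

```python
# Learn a low-dimensional representation of a population of weight vectors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for a "model zoo": flattened weights of 200 small networks.
population = rng.normal(size=(200, 1000))

encoder = PCA(n_components=16).fit(population)   # linear stand-in encoder
hyper_reps = encoder.transform(population)       # one vector per model
print(hyper_reps.shape)  # (200, 16)
```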


An Overview on Data Representation Learning: From Traditional Feature Learning to Recent Deep Learning

November 25, 2016

87% Match
Guoqiang Zhong, Li-Na Wang, Junyu Dong
Machine Learning
Machine Learning

For roughly a century, many representation learning approaches have been proposed to learn the intrinsic structure of data, including both linear and nonlinear, supervised and unsupervised ones. In particular, deep architectures have been widely applied to representation learning in recent years and have delivered top results in many tasks, such as image classification, object detection and speech recognition. In this paper, we review the development of data re...


On the Generalization Mystery in Deep Learning

March 18, 2022

87% Match
Satrajit Chatterjee, Piotr Zielinski
Machine Learning

The generalization mystery in deep learning is the following: Why do over-parameterized neural networks trained with gradient descent (GD) generalize well on real datasets even though they are capable of fitting random datasets of comparable size? Furthermore, from among all solutions that fit the training data, how does GD find one that generalizes well (when such a well-generalizing solution exists)? We argue that the answer to both questions lies in the interaction of the ...
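The premise of the question, that the same over-parameterized network can drive training loss down on real and on randomized labels alike, can be reproduced in a toy setting; the network size, data, and optimizer below are arbitrary assumptions.

```python
# Fit the same over-parameterized MLP to real labels and to shuffled labels.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)
y_real = (X[:, 0] > 0).long()                  # labels with learnable structure
y_rand = y_real[torch.randperm(len(y_real))]   # labels with no structure

for name, y in [("real", y_real), ("random", y_rand)]:
    net = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        loss = nn.functional.cross_entropy(net(X), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(name, "final training loss:", round(loss.item(), 4))  # both go low
```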


How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model

July 5, 2023

87% Match
Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, ... , Matthieu Wyart
Machine Learning
Computer Vision and Pattern Recognition
Machine Learning

Deep learning algorithms demonstrate a surprising ability to learn high-dimensional tasks from limited examples. This is commonly attributed to the depth of neural networks, enabling them to build a hierarchy of abstract, low-dimensional data representations. However, how many training examples are required to learn such representations remains unknown. To quantitatively study this question, we introduce the Random Hierarchy Model: a family of synthetic tasks inspired by the ...
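A hierarchy of this kind can be sketched as a random grammar in which each symbol expands into a fixed random tuple of lower-level symbols; the vocabulary size, branching factor, and depth below are illustrative assumptions, not the paper's settings.

```python
# A toy random hierarchy: class label -> recursive expansion -> leaf string.
import random

random.seed(0)
vocab, branching, depth = 8, 2, 3

# One fixed random production rule per (level, symbol).
rules = {(l, s): tuple(random.randrange(vocab) for _ in range(branching))
         for l in range(depth) for s in range(vocab)}

def expand(symbol, level=0):
    """Recursively expand a symbol into its leaf-level string."""
    if level == depth:
        return [symbol]
    return [leaf for child in rules[(level, symbol)]
            for leaf in expand(child, level + 1)]

label = 3
print(label, "->", expand(label))  # a sample whose class is the root symbol
```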


Dynamic neurons: A statistical physics approach for analyzing deep neural networks

October 1, 2024

87% Match
Donghee Lee, Hye-Sung Lee, Jaeok Yi
Statistical Mechanics
Disordered Systems and Neural Networks
Machine Learning

Deep neural network architectures often consist of repetitive structural elements. We introduce a new approach that reveals these patterns and can be broadly applied to the study of deep learning. Similar to how a power strip helps untangle and organize complex cable connections, this approach treats neurons as additional degrees of freedom in interactions, simplifying the structure and enhancing the intuitive understanding of interactions within deep neural networks. Further...
