October 23, 2012
In this paper we determine the irreducible projective representations of the sporadic simple groups, over an arbitrary algebraically closed field F, whose image contains an almost cyclic matrix of prime-power order. An n x n matrix M is called cyclic if its characteristic and minimum polynomials coincide, and we call M almost cyclic if, for a suitable a in F, M is similar to diag(a Id_h, M_1), where M_1 is cyclic and 0 <= h <= n. The paper also contains results on the generation of spora...
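As a concrete illustration of these definitions (my own example, not one from the paper): take

$$ M = \operatorname{diag}(a\,\mathrm{Id}_2,\; M_1), \qquad M_1 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. $$

Here $M_1$ is the companion matrix of $x^2 + 1$, so its characteristic and minimum polynomials coincide and $M_1$ is cyclic; hence $M$ is almost cyclic with $h = 2$ and $n = 4$. When $a^2 \neq -1$, $M$ is not itself cyclic, since its minimum polynomial $(x - a)(x^2 + 1)$ is a proper divisor of its characteristic polynomial $(x - a)^2 (x^2 + 1)$.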
December 27, 2020
Group symmetry is inherent in a wide variety of data distributions. Data processing that preserves symmetry is described by an equivariant map and is often effective in achieving high performance. Convolutional neural networks (CNNs) are known to be equivariant models and have been shown to approximate equivariant maps for some specific groups. However, universal approximation theorems for CNNs have been derived separately, with individual techniques for each group and se...
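As a toy illustration of the equivariance property at stake (a minimal numpy sketch of my own, not the paper's construction), a cyclic convolution commutes with cyclic shifts, i.e. it is equivariant with respect to the translation group $\mathbb{Z}/n\mathbb{Z}$:

    import numpy as np

    def circ_conv(x, k):
        # cyclic (circular) convolution of a length-n signal x with a kernel k
        n = len(x)
        return np.array([sum(k[j] * x[(i - j) % n] for j in range(len(k)))
                         for i in range(n)])

    rng = np.random.default_rng(0)
    x, k = rng.normal(size=8), rng.normal(size=3)
    shift = lambda v, s: np.roll(v, s)

    # equivariance: shifting the input and then convolving gives the same
    # result as convolving first and then shifting the output
    assert np.allclose(circ_conv(shift(x, 2), k), shift(circ_conv(x, k), 2))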
June 13, 2020
Several works have proposed Simplicity Bias (SB)---the tendency of standard training procedures such as Stochastic Gradient Descent (SGD) to find simple models---to justify why neural networks generalize well [Arpit et al. 2017, Nakkiran et al. 2019, Soudry et al. 2018]. However, the precise notion of simplicity remains vague. Furthermore, previous settings that use SB to theoretically justify why neural networks generalize well do not simultaneously capture the non-robustnes...
August 9, 2020
Let $G$ be a finite simple group. In this paper we consider the existence of small subsets $A$ of $G$ with the property that, if $y \in G$ is chosen uniformly at random, then with high probability $y$ invariably generates $G$ together with some element of $A$. We prove various results in this direction, both positive and negative. As a corollary, we prove that two randomly chosen elements of a finite simple group of Lie type of bounded rank invariably generate with probabilit...
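For reference, the standard definition behind the abstract: $x_1, \dots, x_k$ invariably generate $G$ if $\langle x_1^{g_1}, \dots, x_k^{g_k} \rangle = G$ for every choice of $g_1, \dots, g_k \in G$. A brute-force sympy check of this definition on the smallest non-abelian simple group $A_5$ (my own illustration; it is unrelated to the paper's probabilistic arguments):

    from sympy.combinatorics import Permutation
    from sympy.combinatorics.named_groups import AlternatingGroup
    from sympy.combinatorics.perm_groups import PermutationGroup

    G = AlternatingGroup(5)            # |G| = 60
    x = Permutation([1, 2, 0, 3, 4])   # the 3-cycle (0 1 2)
    y = Permutation([1, 2, 3, 4, 0])   # the 5-cycle (0 1 2 3 4)

    def invariably_generates(G, x, y):
        # x, y invariably generate G if every pair of conjugates
        # (x^g, y^h) already generates all of G; brute force is
        # acceptable for a group of order 60
        els = list(G.elements)
        return all(
            PermutationGroup([g**-1 * x * g, h**-1 * y * h]).order() == G.order()
            for g in els for h in els)

    print(invariably_generates(G, x, y))   # True

The check returns True because no proper subgroup of $A_5$ has order divisible by $15$, so any 3-cycle together with any 5-cycle generates the whole group.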
February 10, 2023
Recent work has used deep learning to derive symmetry transformations, which preserve conserved quantities, and to obtain the corresponding algebras of generators. In this letter, we extend this technique to derive sparse representations of arbitrary Lie algebras. We show that our method reproduces the canonical (sparse) representations of the generators of the Lorentz group, as well as the $U(n)$ and $SU(n)$ families of Lie groups. This approach is completely general and can...
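As a reminder of what the canonical generators look like in the simplest case (a numpy sanity check of my own, not the authors' learning pipeline), the $SU(2)$ generators $J_k = \sigma_k / 2$ built from the Pauli matrices satisfy $[J_i, J_j] = i \epsilon_{ijk} J_k$:

    import numpy as np

    # Pauli matrices; J_k = sigma_k / 2 are the canonical su(2) generators
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    J = [s / 2 for s in (sx, sy, sz)]

    comm = lambda a, b: a @ b - b @ a

    # structure constants of su(2): [J_i, J_j] = i * eps_ijk * J_k
    assert np.allclose(comm(J[0], J[1]), 1j * J[2])
    assert np.allclose(comm(J[1], J[2]), 1j * J[0])
    assert np.allclose(comm(J[2], J[0]), 1j * J[1])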
July 4, 2023
The problem of detecting and quantifying the presence of symmetries in datasets is useful for model selection, generative modeling, and data analysis, among other applications. While existing methods for hard-coding transformations in neural networks require prior knowledge of the symmetries of the task at hand, this work focuses on discovering and characterizing unknown symmetries present in the dataset, namely, Lie group symmetry transformations beyond the traditional ones usually co...
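To fix ideas on what a Lie group symmetry transformation of a dataset looks like (an illustrative numpy/scipy sketch of my own, not the detection method of the paper), a one-parameter group is obtained by exponentiating a generator of the Lie algebra:

    import numpy as np
    from scipy.linalg import expm

    # generator of 2-D rotations, an element of the Lie algebra so(2)
    L = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    R = expm((np.pi / 2) * L)     # group element exp(tL): rotation by pi/2

    # a dataset invariant under this symmetry: points on the unit circle
    theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

    # applying the group element maps the dataset into itself (norms preserved)
    assert np.allclose(np.linalg.norm(X @ R.T, axis=1), 1.0)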
March 8, 2023
We introduce and investigate, for finite groups $G$, $G$-invariant deep neural network ($G$-DNN) architectures with ReLU activation that are densely connected, i.e., include all possible skip connections. In contrast to other $G$-invariant architectures in the literature, the preactivations of the $G$-DNNs presented here are able to transform by \emph{signed} permutation representations (signed perm-reps) of $G$. Moreover, the individual layers of the $G$-DNNs are not require...
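To unpack the term "signed perm-rep" (a tiny numpy illustration of my own, far simpler than the architectures in the paper): a signed permutation matrix has exactly one nonzero entry, equal to $\pm 1$, in each row and column, and a signed perm-rep assigns such a matrix to each group element so that group multiplication is respected:

    import numpy as np

    # a signed perm-rep of the two-element group C_2 = {e, g}:
    # rho(g) swaps the two coordinates and flips both signs
    rho_e = np.eye(2)
    rho_g = np.array([[0.0, -1.0],
                      [-1.0, 0.0]])

    # homomorphism property: g * g = e, so rho(g) @ rho(g) = rho(e)
    assert np.allclose(rho_g @ rho_g, rho_e)

    # exactly one nonzero entry per row, and every nonzero entry is +/-1
    assert all(np.count_nonzero(row) == 1 for row in rho_g)
    assert np.all(np.abs(rho_g[rho_g != 0]) == 1.0)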
October 13, 2015
In this paper, we classify the finite simple groups with an abelian Sylow subgroup.
March 7, 2024
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning. What could be missing? Can we learn something useful about that gap from how the brains of mathematicians go about their craft? This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities -- which correspond to our intuition and habitual behaviors -- but still lacks something i...
February 11, 2022
Recent progress in Machine Learning has opened the door to practical applications of learning algorithms, as well as to new research directions, both within Machine Learning itself and at its interfaces with other disciplines. The case that interests us here is the interface with physics, and more specifically Statistical Physics. In this short lecture, I will first present a brief introduction to Machine Learning from the angle of neural networks. After explaining quickl...