ID: 2306.04734

Machine-Learning Kronecker Coefficients

June 7, 2023


Similar papers (page 2)

Fast Kronecker product kernel methods via generalized vec trick

January 7, 2016

84% Match
Antti Airola, Tapio Pahikkala
Machine Learning

The Kronecker product kernel provides the standard approach in the kernel methods literature for learning from graph data, where edges are labeled and both start and end vertices have their own feature representations. These methods generalize to new edges whose start and end vertices do not appear in the training data, a setting known as zero-shot or zero-data learning. Such a setting occurs in numerous applications, including drug-target interaction prediction, co...
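
For background, here is a minimal NumPy sketch (assumed setup, not the paper's code) of the classical vec-trick identity that this line of work builds on, $(B \otimes A)\,\mathrm{vec}(X) = \mathrm{vec}(A X B^\top)$; the paper's generalized vec trick extends it to arbitrary subsets of vertex pairs rather than the full Cartesian product.

```python
# Minimal sketch (illustrative matrices, not the paper's data): the vec-trick
# identity (B ⊗ A) vec(X) = vec(A X B^T) lets Kronecker product kernel methods
# multiply by the pairwise kernel matrix without ever forming the
# (m*n) x (m*n) Kronecker product explicitly.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 40
A = rng.standard_normal((m, m))        # e.g. kernel matrix over start vertices
B = rng.standard_normal((n, n))        # e.g. kernel matrix over end vertices
X = rng.standard_normal((m, n))        # coefficients arranged as an m x n matrix

vec = lambda M: M.reshape(-1, order="F")   # column-major vectorisation

slow = np.kron(B, A) @ vec(X)              # O(m^2 n^2) memory and work
fast = vec(A @ X @ B.T)                    # O(mn(m + n)) work, no Kronecker matrix

print(np.allclose(slow, fast))             # True
```

Multiplying by the full pairwise kernel matrix this way is what makes training over all vertex pairs tractable; the generalized version in the paper handles training sets that label only some of the pairs.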


Machine Learning in Physics and Geometry

March 22, 2023

83% Match
Yang-Hui He, Elli Heyes, Edward Hirst
Algebraic Geometry
Mathematical Physics

We survey some recent applications of machine learning to problems in geometry and theoretical physics. Pure mathematical data has been compiled over the last few decades by the community and experiments in supervised, semi-supervised and unsupervised machine learning have found surprising success. We thus advocate the programme of machine learning mathematical structures, and formulating conjectures via pattern recognition, in other words using artificial intelligence to hel...


A note on certain Kronecker coefficients

September 22, 2008

83% Match
Laurent Manivel (IF)
Representation Theory

We prove an explicit formula for the tensor product with itself of an irreducible complex representation of the symmetric group defined by a rectangle of height two. We also describe part of the decomposition for the tensor product of representations defined by rectangles of heights two and four. Our results are deduced, through Schur-Weyl duality, from the observation that certain actions on triple tensor products of vector spaces are multiplicity free.
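
For reference, and not specific to this paper: the Kronecker coefficients appearing throughout this list are the tensor-product multiplicities for irreducible representations of the symmetric group, with the standard character-sum expression.

```latex
% Standard definition (background, not taken from the paper): for partitions
% \lambda, \mu, \nu of n, with V^{\lambda} the irreducible S_n-module and
% \chi^{\lambda} its character,
\[
  V^{\lambda} \otimes V^{\mu} \;\cong\; \bigoplus_{\nu \,\vdash\, n} g_{\lambda\mu\nu}\, V^{\nu},
  \qquad
  g_{\lambda\mu\nu} \;=\; \frac{1}{n!} \sum_{\sigma \in S_n}
      \chi^{\lambda}(\sigma)\,\chi^{\mu}(\sigma)\,\chi^{\nu}(\sigma).
\]
% Manivel's note concerns g_{\lambda\lambda\nu} for \lambda a rectangle of
% height two, obtained via Schur-Weyl duality from a multiplicity-free action.
```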


Algebraic Machine Learning

March 14, 2018

83% Match
Fernando Martin-Maroto, Gonzalo G. de Polavieja
Machine Learning
Discrete Mathematics
Commutative Algebra
Rings and Algebras

Machine learning algorithms use error function minimization to fit a large set of parameters in a preexisting model. However, error minimization eventually leads to a memorization of the training dataset, losing the ability to generalize to other datasets. To achieve generalization something else is needed, for example a regularization method or stopping the training when error in a validation dataset is minimal. Here we propose a different approach to learning and generaliza...


Representations of the symmetric group are decomposable in polynomial time

November 22, 2022

83% Match
Sheehan Olver
Group Theory
Numerical Analysis

We introduce an algorithm to decompose orthogonal matrix representations of the symmetric group over the reals into irreducible representations, which as a by-product also computes the multiplicities of the irreducible representations. The algorithm applied to a $d$-dimensional representation of $S_n$ is shown to have a complexity of $O(n^2 d^3)$ operations for determining multiplicities of irreducible representations and a further $O(n d^4)$ operations to fully decompose rep...
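
Not the paper's $O(n^2 d^3)$ algorithm, which works directly with the orthogonal representation matrices; as a hedged illustration of what "computing multiplicities" means, here is the textbook character inner product applied to the 3-dimensional permutation representation of $S_3$, with a hand-coded character table.

```python
# Toy sketch (not the paper's algorithm): multiplicities of the S_3 irreducibles
# in the 3-dimensional permutation representation, from the character inner product
#   mult(irrep) = (1/|G|) * sum_g chi_rep(g) * chi_irrep(g).
import numpy as np
from itertools import permutations

perms = list(permutations(range(3)))                 # the six elements of S_3

def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, j in enumerate(p):
        M[j, i] = 1                                  # column i has a 1 in row p[i]
    return M

chi_rep = [np.trace(perm_matrix(p)) for p in perms]  # trace = number of fixed points

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(3):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j)
                j, c = p[j], c + 1
            lengths.append(c)
    return tuple(sorted(lengths, reverse=True))

# Hand-coded character table of S_3, indexed by cycle type.
char_table = {
    "trivial":  {(1, 1, 1): 1, (2, 1): 1,  (3,): 1},
    "sign":     {(1, 1, 1): 1, (2, 1): -1, (3,): 1},
    "standard": {(1, 1, 1): 2, (2, 1): 0,  (3,): -1},
}

for name, chi in char_table.items():
    mult = sum(x * chi[cycle_type(p)] for x, p in zip(chi_rep, perms)) / len(perms)
    print(name, int(round(mult)))                    # permutation rep = trivial + standard
```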


On vanishing of Kronecker coefficients

July 10, 2015

83% Match
Christian Ikenmeyer, Ketan D. Mulmuley, Michael Walter
Computational Complexity
Representation Theory

We show that the problem of deciding positivity of Kronecker coefficients is NP-hard. Previously, this problem was conjectured to be in P, just as for the Littlewood-Richardson coefficients. Our result establishes in a formal way that Kronecker coefficients are more difficult than Littlewood-Richardson coefficients, unless P=NP. We also show that there exists a #P-formula for a particular subclass of Kronecker coefficients whose positivity is NP-hard to decide. This is an e...
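
To make the object concrete, here is a short, assumption-laden Python sketch that computes Kronecker coefficients by brute force from the character-sum formula recalled above, using the Murnaghan-Nakayama rule in the beta-set (abacus) encoding. This is exponential in $n$ and is emphatically not a way around the hardness results; it is only practical for small partitions, e.g. when generating labelled examples.

```python
# Brute-force Kronecker coefficients (illustrative sketch, feasible only for small n):
#   g(lam, mu, nu) = (1/n!) * sum over cycle types rho of
#                    |class(rho)| * chi^lam(rho) * chi^mu(rho) * chi^nu(rho),
# with symmetric-group characters from the Murnaghan-Nakayama rule.
from functools import lru_cache
from collections import Counter
from math import factorial

def beta_set(lam):
    """First-column hook lengths (beta-numbers) of a partition."""
    l = len(lam)
    return frozenset(lam[i] + l - 1 - i for i in range(l))

def from_beta(beta):
    """Recover the partition encoded by a beta-set, dropping zero parts."""
    beta = sorted(beta, reverse=True)
    l = len(beta)
    return tuple(p for p in (beta[i] - (l - 1 - i) for i in range(l)) if p > 0)

@lru_cache(maxsize=None)
def character(lam, rho):
    """chi^lam evaluated on cycle type rho, via Murnaghan-Nakayama."""
    if not rho:
        return 1 if not lam else 0
    k, rest = rho[0], rho[1:]
    beta = beta_set(lam)
    total = 0
    for b in beta:                       # remove a rim hook of size k, if possible
        if b >= k and (b - k) not in beta:
            height = sum(1 for c in beta if b - k < c < b)   # leg length of the hook
            total += (-1) ** height * character(from_beta((beta - {b}) | {b - k}), rest)
    return total

def partitions(n, max_part=None):
    """All partitions of n, parts weakly decreasing."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def class_size(rho, n):
    """Number of permutations in S_n with cycle type rho."""
    size = factorial(n)
    for part, mult in Counter(rho).items():
        size //= part ** mult * factorial(mult)
    return size

def kronecker(lam, mu, nu):
    n = sum(lam)
    total = sum(class_size(rho, n)
                * character(lam, rho) * character(mu, rho) * character(nu, rho)
                for rho in partitions(n))
    return total // factorial(n)

print(kronecker((2, 1), (2, 1), (2, 1)))   # 1
print(kronecker((2, 2), (2, 2), (2, 2)))   # 1  (square of a height-two rectangle)
print(kronecker((2, 2), (2, 2), (3, 1)))   # 0
```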


Reduced Kronecker coefficients and counter-examples to Mulmuley's strong saturation conjecture SH

October 17, 2008

83% Match
Emmanuel Briand (Universidad de Sevilla), Rosa Orellana (Dartmouth College), Mercedes Rosas (Universidad de Sevilla)
Combinatorics
Computational Complexity
Representation Theory

We provide counter-examples to Mulmuley's strong saturation conjecture (strong SH) for the Kronecker coefficients. This conjecture was proposed in the setting of Geometric Complexity Theory to show that deciding whether or not a Kronecker coefficient is zero can be done in polynomial time. We also provide a short proof of the #P-hardness of computing the Kronecker coefficients. Both results rely on the connections between the Kronecker coefficients and another family of struc...


Machine Learning the Dimension of a Polytope

July 15, 2022

83% Match
Tom Coates, Johannes Hofscheier, Alexander Kasprzyk
Combinatorics

We use machine learning to predict the dimension of a lattice polytope directly from its Ehrhart series. This is highly effective, achieving almost 100% accuracy. We also use machine learning to recover the volume of a lattice polytope from its Ehrhart series, and to recover the dimension, volume, and quasi-period of a rational polytope from its Ehrhart series. In each case we achieve very high accuracy, and we propose mathematical explanations for why this should be so.
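
A toy, hypothetical sketch in the same spirit (not the paper's dataset or model): for the axis-aligned box $[0,a_1]\times\cdots\times[0,a_d]$ the Ehrhart polynomial is $\prod_i (a_i t + 1)$, so labelled (series, dimension) pairs are cheap to generate and a small off-the-shelf classifier recovers the dimension.

```python
# Hypothetical sketch: predict the dimension of a lattice box from the first
# few terms of its Ehrhart series. The paper uses general lattice polytopes and
# its own models; this only illustrates the learning pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
NUM_TERMS, MAX_DIM = 12, 6

def ehrhart_counts_of_box(sides, num_terms=NUM_TERMS):
    """Lattice-point counts |tP ∩ Z^d| for the box with the given side lengths."""
    t = np.arange(1, num_terms + 1)
    counts = np.ones(num_terms)
    for a in sides:
        counts *= a * t + 1
    return np.log(counts)          # log scale keeps features comparable across dimensions

X, y = [], []
for _ in range(4000):
    d = int(rng.integers(1, MAX_DIM + 1))
    sides = rng.integers(1, 6, size=d)
    X.append(ehrhart_counts_of_box(sides))
    y.append(d)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out dimension accuracy:", clf.score(X_te, y_te))
```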

Jiakang Bao, Yang-Hui He, Edward Hirst, Johannes Hofscheier, ..., Suvajit Majumder
Algebraic Geometry

We describe how simple machine learning methods successfully predict geometric properties from Hilbert series (HS). Regressors predict embedding weights in projective space to ${\sim}1$ mean absolute error, whilst classifiers predict dimension and Gorenstein index to $>90\%$ accuracy with ${\sim}0.5\%$ standard error. Binary random forest classifiers managed to distinguish whether the underlying HS describes a complete intersection with high accuracies exceeding $95\%$. Neura...

Tensor machines for learning target-specific polynomial features

April 7, 2015

83% Match
Jiyan Yang, Alex Gittens
Machine Learning

Recent years have demonstrated that using random feature maps can significantly decrease the training and testing times of kernel-based algorithms without significantly lowering their accuracy. Regrettably, because random features are target-agnostic, typically thousands of such features are necessary to achieve acceptable accuracies. In this work, we consider the problem of learning a small number of explicit polynomial features. Our approach, named Tensor Machines, finds a ...
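
Background only, not the Tensor Machines method itself (which learns a small number of target-specific rank-one polynomial features): the contrast the abstract alludes to is between an explicit polynomial feature map, whose size grows combinatorially with degree, and the corresponding kernel method. A minimal scikit-learn sketch of that baseline contrast, on synthetic data:

```python
# Hedged illustration (not the paper's method): explicit degree-3 polynomial
# features plus ridge regression versus kernel ridge with a degree-3 polynomial
# kernel. Tensor Machines aim to sit between these extremes by learning only a
# few polynomial features adapted to the target.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = X[:, 0] * X[:, 1] - X[:, 2] ** 2 + 0.1 * rng.standard_normal(500)   # polynomial target

explicit = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1.0)).fit(X, y)
n_features = PolynomialFeatures(degree=3).fit(X).n_output_features_
kernelized = KernelRidge(kernel="polynomial", degree=3, alpha=1.0).fit(X, y)

print("explicit feature count:", n_features)          # grows combinatorially with degree
print("explicit R^2:", round(explicit.score(X, y), 3),
      "| kernel R^2:", round(kernelized.score(X, y), 3))
```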
