Similar papers
March 6, 2023
Neural networks adapt very well to distributed and continuous representations, but struggle to generalize from small amounts of data. Symbolic systems commonly achieve data-efficient generalization by exploiting modularity to benefit from local and discrete features of a representation. These features allow symbolic programs to be improved one module at a time and to experience combinatorial growth in the values they can successfully process. However, it is difficult to desig...
February 8, 2021
This is an introductory machine-learning course specifically developed with STEM students in mind. Our goal is to provide the interested reader with the basics to employ machine learning in their own projects and to familiarize themselves with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods withou...
August 15, 2024
In the field of machine learning, comprehending the intricate training dynamics of neural networks poses a significant challenge. This paper explores the training dynamics of neural networks, particularly whether these dynamics can be expressed in a general closed-form solution. We demonstrate that the dynamics of gradient flow in narrow two-layer networks are not an integrable system. Integrable systems are characterized by trajectories confined to submanifolds defined by...
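For readers outside dynamical systems, a minimal statement of the objects involved, written in generic notation of our own rather than the paper's:

```latex
% Gradient flow: the continuous-time limit of gradient descent on a
% loss L, with \theta(t) collecting all network parameters.
\[
  \dot{\theta}(t) \;=\; -\nabla_{\theta}\, L\bigl(\theta(t)\bigr).
\]
% A conserved quantity Q pins trajectories to its level sets, which are
% the kind of submanifolds the abstract refers to:
\[
  \frac{d}{dt}\, Q\bigl(\theta(t)\bigr)
  \;=\; \nabla Q\bigl(\theta(t)\bigr)\cdot\dot{\theta}(t) \;=\; 0.
\]
```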
May 1, 2023
Humans interact with the environment using a combination of perception (transforming sensory inputs from their environment into symbols) and cognition (mapping symbols to knowledge about the environment to support abstraction, reasoning by analogy, and long-term planning). Human perception-inspired machine perception, in the context of AI, refers to large-scale pattern recognition from raw data using neural networks trained using self-supervised learning objectives such ...
January 3, 2024
We study the problem of learning equivariant neural networks via gradient descent. The incorporation of known symmetries ("equivariance") into neural nets has empirically improved the performance of learning pipelines in domains ranging from biology to computer vision. However, a rich yet separate line of learning-theoretic research has demonstrated that actually learning shallow, fully-connected (i.e., non-symmetric) networks has exponential complexity in the correlational s...
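For reference, the constraint these networks are built to satisfy is the standard notion of equivariance (generic notation, not taken from this paper):

```latex
% f : X -> Y is G-equivariant when it commutes with the group action,
% where \rho and \rho' are the representations of G on X and Y.
\[
  f\bigl(\rho(g)\,x\bigr) \;=\; \rho'(g)\,f(x)
  \qquad \text{for all } g \in G,\; x \in X.
\]
% Invariance is the special case in which \rho'(g) is the identity.
```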
September 29, 2022
Inspired by constraints from physical law, equivariant machine learning restricts learning to a hypothesis class in which every function is equivariant with respect to some group action. Irreducible representations or invariant theory are typically used to parameterize the space of such functions. In this article, we introduce the topic and explain a couple of methods for explicitly parameterizing the equivariant functions used in machine learning applications. I...
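One well-known instance of such an explicit parameterization, chosen here purely as an illustration (the article's own examples are not shown in this excerpt), is the permutation-equivariant linear layer of Deep Sets, where the whole space of equivariant linear maps is spanned by two scalars:

```python
import numpy as np

def perm_equivariant_linear(x, lam, gam):
    """Permutation-equivariant linear map: f(x) = lam * x + gam * mean(x) * 1.

    Every linear map R^n -> R^n that commutes with all permutations of
    the coordinates has this two-parameter form (Zaheer et al., Deep
    Sets); lam and gam would be the learnable parameters.
    """
    return lam * x + gam * x.mean() * np.ones_like(x)

# Sanity check: permuting the input permutes the output in the same way.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
perm = rng.permutation(5)
out = perm_equivariant_linear(x, lam=2.0, gam=-0.5)
out_of_permuted = perm_equivariant_linear(x[perm], lam=2.0, gam=-0.5)
assert np.allclose(out[perm], out_of_permuted)
```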
November 7, 2024
Neuro-symbolic AI is an effective method for improving the overall performance of AI models by combining the advantages of neural networks and symbolic learning. However, the two differ in how they process data, primarily because they often use different data representation methods, and this mismatch often limits the overall performance of the combined system. From this perspective, we analyzed 191 studies from 2013 by constructing a four-leve...
December 10, 2020
Current advances in Artificial Intelligence (AI) and Machine Learning (ML) have achieved unprecedented impact across research communities and industry. Nevertheless, influential thinkers have raised concerns about the trust, safety, interpretability, and accountability of AI. Many have identified the need for well-founded knowledge representation and reasoning to be integrated with deep learning, and for sound explainability. Neural-symbolic computing has been an active area of ...
January 17, 2019
These are lecture notes for a course on machine learning with neural networks for scientists and engineers that I have given at Gothenburg University and Chalmers Technical University in Gothenburg, Sweden. The material is organised into three parts: Hopfield networks, supervised learning of labeled data, and learning algorithms for unlabeled data sets. Part I introduces stochastic recurrent networks: Hopfield networks and Boltzmann machines. The analysis of their learning ru...
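As a taste of what Part I covers, a Hopfield network with Hebbian weights and asynchronous sign updates can be sketched in a few lines (a didactic toy of our own, not code from the lecture notes):

```python
import numpy as np

def hebb_weights(patterns):
    """Hebb rule: W = (1/N) sum_mu xi_mu xi_mu^T, with zero self-coupling."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=200, seed=0):
    """Asynchronous dynamics: repeatedly set s_i = sign(sum_j w_ij * s_j)."""
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Store one +/-1 pattern, corrupt two bits, and let the dynamics repair it.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = hebb_weights(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1
print(np.array_equal(recall(w, noisy), pattern))  # True with high probability
```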
April 13, 2021
Machine learning applications to symbolic mathematics are becoming increasingly popular, yet the field lacks a centralized source of real-world symbolic expressions to use as training data. In contrast, the field of natural language processing leverages resources like Wikipedia that provide enormous amounts of real-world textual data. Adopting the philosophy of "mathematics as language," we bridge this gap by introducing a pipeline for distilling mathematical expressions embe...
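To make the distillation idea concrete, a toy version of such a pipeline (our own illustration; the paper's actual sources and cleaning steps are not shown in this excerpt) could pull LaTeX bodies out of wiki-style <math> markup:

```python
import re

# Matches <math ...>BODY</math> tags in wiki-style markup; DOTALL lets
# an expression span multiple lines.
MATH_TAG = re.compile(r"<math[^>]*>(.*?)</math>", re.DOTALL)

def extract_expressions(wikitext):
    """Return the non-empty LaTeX expressions embedded in a page's markup."""
    return [m.strip() for m in MATH_TAG.findall(wikitext) if m.strip()]

page = (
    "The quadratic formula <math>x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}</math> "
    "solves <math>ax^2 + bx + c = 0</math>."
)
print(extract_expressions(page))
# ['x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}', 'ax^2 + bx + c = 0']
```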