December 7, 2021
Machine learning plays a crucial role in enhancing and accelerating the search for new fundamental physics. We review the state of machine learning methods and applications for new physics searches in the context of terrestrial high energy physics experiments, including the Large Hadron Collider, rare event searches, and neutrino experiments. While machine learning has a long history in these fields, the deep learning revolution (early 2010s) has yielded a qualitative shift i...
February 2, 2021
Modern machine learning techniques, including deep learning, are rapidly being applied, adapted, and developed for high energy physics. Given the fast pace of this research, we have created a living review with the goal of providing a nearly comprehensive list of citations for those developing and applying these approaches to experimental, phenomenological, or theoretical analyses. As a living document, it will be updated as often as possible to incorporate the latest develop...
April 3, 2024
This article attempts to summarize the effort by the particle physics community in addressing the tedious work of determining the parameter spaces of beyond-the-standard-model (BSM) scenarios allowed by data. These spaces, typically associated with a large number of dimensions, especially in the presence of nuisance parameters, suffer from the curse of dimensionality and thus render naive sampling of any kind -- even computationally inexpensive kinds -- ineffective. Over ...
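As a loose illustration of the ML-assisted sampling strategies such a review covers, the sketch below trains a random forest classifier on already-evaluated points and uses it to steer new proposals toward the predicted allowed region. Everything here is a stand-in: `is_allowed` replaces an expensive likelihood or constraint evaluation, and the four-dimensional toy space with a shell-shaped allowed region is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
dim = 4  # toy parameter space kept small so the placeholder allowed region is populated

def is_allowed(theta):
    # Placeholder constraint: a spherical shell stands in for the experimentally allowed region.
    r = np.linalg.norm(theta)
    return 0.5 < r < 1.0

# Seed with random points, then iterate: fit a classifier on evaluated points and
# keep only candidates it predicts to be allowed, refining the map of the region.
X = rng.uniform(-1.0, 1.0, size=(200, dim))
y = np.array([is_allowed(x) for x in X])
for _ in range(5):
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    candidates = rng.uniform(-1.0, 1.0, size=(2000, dim))
    keep = candidates[clf.predict_proba(candidates)[:, 1] > 0.3][:100]
    if len(keep) == 0:
        keep = candidates[:20]  # fall back to random exploration
    X = np.vstack([X, keep])
    y = np.concatenate([y, [is_allowed(x) for x in keep]])
print("allowed points found:", int(y.sum()), "of", len(y))
```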
March 27, 2019
We propose deep reinforcement learning as a model-free method for exploring the landscape of string vacua. As a concrete application, we utilize an artificial intelligence agent known as an asynchronous advantage actor-critic to explore type IIA compactifications with intersecting D6-branes. As different string background configurations are explored by changing D6-brane configurations, the agent receives rewards and punishments related to string consistency conditions and pro...
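As a rough illustration of the actor-critic idea invoked above, here is a minimal single-worker advantage actor-critic loop in PyTorch; the paper's asynchronous multi-worker A3C setup and its actual intersecting D6-brane environment are not reproduced. `ToyBraneEnv`, its reward rule, and the network sizes are hypothetical placeholders standing in for string consistency conditions.

```python
import numpy as np
import torch
import torch.nn as nn

class ToyBraneEnv:
    """Toy environment: the 'state' is a small integer vector of brane data;
    the reward crudely mimics consistency conditions (e.g. a tadpole-like budget)."""
    def __init__(self, dim=8, budget=16):
        self.dim, self.budget = dim, budget
    def reset(self):
        self.state = np.zeros(self.dim, dtype=np.float32)
        return self.state.copy()
    def step(self, action):
        # Action: increment entry (0..dim-1) or decrement entry (dim..2*dim-1).
        idx, sign = action % self.dim, 1 if action < self.dim else -1
        self.state[idx] += sign
        total = np.abs(self.state).sum()
        done = total >= self.budget
        reward = 1.0 if 0 < total < self.budget else -1.0  # reward "consistent" moves
        return self.state.copy(), reward, done

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, n_actions)  # policy logits
        self.v = nn.Linear(hidden, 1)           # state-value estimate
    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.v(h).squeeze(-1)

env = ToyBraneEnv()
net = ActorCritic(obs_dim=env.dim, n_actions=2 * env.dim)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(200):
    obs, transitions = env.reset(), []
    for _ in range(64):  # cap episode length
        logits, value = net(torch.from_numpy(obs))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        obs, reward, done = env.step(action.item())
        transitions.append((dist.log_prob(action), value, reward))
        if done:
            break
    # Discounted returns; policy-gradient (actor) and value-regression (critic) losses.
    R, loss = 0.0, 0.0
    for log_prob, value, reward in reversed(transitions):
        R = reward + gamma * R
        advantage = R - value
        loss = loss - log_prob * advantage.detach() + advantage.pow(2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```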
March 7, 2019
Supervised machine learning can be used to predict properties of string geometries with previously unknown features. Using the complete intersection Calabi-Yau (CICY) threefold dataset as a theoretical laboratory for this investigation, we use low $h^{1,1}$ geometries for training and validate on geometries with large $h^{1,1}$. Neural networks and Support Vector Machines successfully predict trends in the number of K\"ahler parameters of CICY threefolds. The numerical accura...
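A sketch of the extrapolation protocol described above (train on low $h^{1,1}$, validate on large $h^{1,1}$), assuming scikit-learn; `load_cicy_matrices` is a hypothetical placeholder generating random stand-in data rather than the real CICY threefold list, so the printed accuracies mean nothing until a real loader is substituted.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

def load_cicy_matrices():
    # Placeholder: random "configuration matrices" X and integer targets h11,
    # only to make the sketch runnable end to end.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 5, size=(500, 12 * 15)).astype(float)
    h11 = rng.integers(1, 20, size=500)
    return X, h11

X, h11 = load_cicy_matrices()
cut = 10  # train on h11 < cut, test extrapolation to h11 >= cut
train, test = h11 < cut, h11 >= cut

for model in (SVR(kernel="rbf", C=10.0),
              MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)):
    model.fit(X[train], h11[train])
    pred = np.rint(model.predict(X[test]))
    acc = np.mean(pred == h11[test])
    print(type(model).__name__, "exact-match accuracy on large-h11 set:", acc)
```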
We describe how simple machine learning methods successfully predict geometric properties from Hilbert series (HS). Regressors predict embedding weights in projective space to ${\sim}1$ mean absolute error, whilst classifiers predict dimension and Gorenstein index to $>90\%$ accuracy with ${\sim}0.5\%$ standard error. Binary random forest classifiers distinguish whether the underlying HS describes a complete intersection with accuracies exceeding $95\%$. Neura...
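A minimal sketch of the binary random forest classification step mentioned above, assuming the features are leading coefficients of the Hilbert series expansion; `load_hs_dataset` is a hypothetical placeholder for a real labelled dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def load_hs_dataset(n=1000, n_coeffs=20):
    # Placeholder: random HS expansion coefficients and random labels,
    # only to make the sketch runnable end to end.
    rng = np.random.default_rng(1)
    coeffs = rng.integers(0, 50, size=(n, n_coeffs)).astype(float)
    is_ci = rng.integers(0, 2, size=n)  # 1 = complete intersection
    return coeffs, is_ci

X, y = load_hs_dataset()
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```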
May 2, 2019
Artificial Intelligence (AI), defined in its simplest form, is a technological tool that makes machines intelligent. Since learning is at the core of intelligence, machine learning poses itself as a core sub-field of AI. Deep learning, in turn, is a subclass of machine learning that addresses the limitations of its predecessors. AI has gained prominence over the past few years owing to its considerable progress in various fields. AI has vastly in...
June 24, 2012
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implem...
January 5, 2022
On the long-established classification problems in general relativity we take a novel perspective by adopting fruitful techniques from machine learning and modern data science. In particular, we model Petrov's classification of spacetimes, and show that a feed-forward neural network can achieve a high degree of success. We also show how data visualization techniques with dimensionality reduction can help analyze the underlying patterns in the structure of the different types of...
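A minimal sketch of the two ingredients named above, a feed-forward classifier for Petrov types and a dimensionality-reduced view of the same features, assuming scikit-learn; `load_petrov_samples` is a hypothetical placeholder (e.g. real and imaginary parts of the Weyl scalars $\Psi_0,\dots,\Psi_4$) rather than actual spacetime data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA

def load_petrov_samples(n=2000):
    # Placeholder: random features and labels, only to make the sketch runnable.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(n, 10))    # 5 complex Weyl scalars -> 10 real features
    y = rng.integers(0, 6, size=n)  # 6 Petrov types: I, II, III, D, N, O
    return X, y

X, y = load_petrov_samples()
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))

# 2-D projection for visualising how the types cluster (t-SNE or UMAP could be
# substituted for PCA here).
X2 = PCA(n_components=2).fit_transform(X)
print("projected shape:", X2.shape)
```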
February 12, 2024
The fact that accurately predicted information can serve as an energy source paves the way for new approaches to autonomous learning. The energy derived from a sequence of successful predictions can be recycled as an immediate incentive and resource, driving the enhancement of predictive capabilities in AI agents. We propose that, through a series of straightforward meta-architectural adjustments, any unsupervised learning apparatus could achieve complete independence from ex...