July 9, 2024
We present a methodology for performing scans of BSM parameter spaces with reinforcement learning (RL). We identify a novel procedure, based on graph neural networks, that can explore spaces of models without the user specifying a fixed particle content, allowing broad classes of BSM models to be explored. In principle, the technique is applicable to nearly any model space with a pre-specified gauge group. We provide a generic procedure by which a suitable graph grammar can be developed for any BSM model that features user-specified symmetry groups and a finite number of possible particle species. As a proof of concept, we construct the graph grammar for theories with vector-like leptons that may or may not be charged under a dark U(1) group, inspired by portal matter extensions of sub-GeV vector portal/kinetic mixing simplified dark matter models. We then use this graph grammar to build an RL environment tasked with constructing models containing these vector-like leptons that are consistent with a variety of precision observables. The RL agent succeeds in developing models that can address the observed muon anomalous magnetic moment discrepancy while remaining consistent with flavor violation and electroweak precision observables, including both constructions that have previously been studied and new models which, to our knowledge, have not previously been identified. By inspecting the ensembles of models that the agent produces and experimenting with different configurations of our RL environment and graph grammar, we also draw lessons about the development of such environments that should transfer to RL scans of more complicated model spaces, and we comment on future directions for developing this technique into a more mature tool.
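To make the setup concrete, below is a minimal, self-contained Python sketch of the kind of environment the abstract describes: the state is a graph whose nodes are new vector-like lepton fields labelled by their SU(2)_L, hypercharge, and dark U(1) quantum numbers, each action appends a field, and a terminal reward scores the finished model. All class names, the edge-building rule, and the toy reward below are illustrative assumptions, not the paper's actual graph grammar or observable calculation.

```python
"""Minimal sketch (not the authors' code) of an RL environment whose state is a
graph of new particles; the quantum-number choices and the toy reward are
illustrative assumptions only."""
import random
from dataclasses import dataclass, field


@dataclass
class Particle:
    su2_dim: int        # SU(2)_L representation (1 = singlet, 2 = doublet)
    hypercharge: float  # U(1)_Y hypercharge
    dark_charge: int    # charge under the dark U(1)


@dataclass
class ModelGraph:
    particles: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # allowed Yukawa-type couplings


class VLLeptonEnv:
    """One episode: repeatedly add vector-like lepton fields, then score the model."""
    MAX_PARTICLES = 4

    def __init__(self):
        self.reset()

    def reset(self):
        self.graph = ModelGraph()
        return self.graph

    def step(self, action):
        # action = (su2_dim, hypercharge, dark_charge) for one new vector-like lepton
        p = Particle(*action)
        self.graph.particles.append(p)
        # Toy grammar rule: connect the new field to existing fields whose dark
        # charges differ by at most one unit, mimicking a constraint that the
        # coupling can be made gauge invariant under the dark U(1).
        for i, q in enumerate(self.graph.particles[:-1]):
            if abs(q.dark_charge - p.dark_charge) <= 1:
                self.graph.edges.append((i, len(self.graph.particles) - 1))
        done = len(self.graph.particles) >= self.MAX_PARTICLES
        return self.graph, self._reward() if done else 0.0, done

    def _reward(self):
        # Placeholder score: reward models containing both a dark-charged and a
        # dark-neutral vector-like lepton with at least one coupling between
        # fields (a crude proxy for a new (g-2)_mu contribution).  A real
        # environment would evaluate precision observables here instead.
        charged = any(p.dark_charge != 0 for p in self.graph.particles)
        neutral = any(p.dark_charge == 0 for p in self.graph.particles)
        return 1.0 if (charged and neutral and self.graph.edges) else -1.0


if __name__ == "__main__":
    env = VLLeptonEnv()
    actions = [(2, -0.5, 1), (1, -1.0, 0), (2, -0.5, 0), (1, -1.0, 1)]
    state, done = env.reset(), False
    while not done:
        state, r, done = env.step(random.choice(actions))
    print(f"episode reward: {r}")
```

In the paper's actual setup, a graph-neural-network policy trained with policy-gradient methods would choose the actions, rather than the random sampling used in this toy episode loop.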
Similar papers
July 9, 2024
We provide a framework for exploring physics beyond the Standard Model with reinforcement learning using graph representations of new physics theories. The graph structure allows for model-building without a priori specifying definite numbers of new particles. As a case study, we apply our method to a simple class of theories involving vectorlike leptons and a dark U(1) inspired by the portal matter paradigm. Using modern policy gradient methods, the agent successfully explor...
March 8, 2021
In this paper, we apply reinforcement learning to particle physics model building. As an example environment, we use the space of Froggatt-Nielsen type models for quark masses. Using a basic policy-based algorithm we show that neural networks can be successfully trained to construct Froggatt-Nielsen models which are consistent with the observed quark masses and mixing. The trained policy networks lead from random to phenomenologically acceptable models for over 90% of episode...
July 27, 2020
Particle physics is a branch of science aiming at discovering the fundamental laws of matter and forces. Graph neural networks are trainable functions which operate on graphs---sets of elements and their pairwise relations---and are a central method within the broader field of geometric deep learning. They are very expressive and have demonstrated superior performance to other classical deep learning approaches in a variety of domains. The data in particle physics are often r...
March 29, 2021
We present an approach to cosmology in which the Universe learns its own physical laws. It does so by exploring a landscape of possible laws, which we express as a certain class of matrix models. We discover maps that put each of these matrix models in correspondence with both a gauge/gravity theory and a mathematical model of a learning machine, such as a deep recurrent, cyclic neural network. This establishes a correspondence between each solution of the physical theory and...
March 23, 2022
Many physical systems can be best understood as sets of discrete data with associated relationships. Where previously these sets of data have been formulated as series or image data to match the available machine learning architectures, with the advent of graph neural networks (GNNs), these systems can be learned natively as graphs. This allows a wide variety of high- and low-level physical features to be attached to measurements and, by the same token, a wide variety of HEP ...
April 27, 2023
We propose a method to explore the flavor structure of quarks and leptons with reinforcement learning. As a concrete model, we utilize a basic value-based algorithm for models with $U(1)$ flavor symmetry. By training neural networks on the $U(1)$ charges of quarks and leptons, the agent finds 21 models to be consistent with experimentally measured masses and mixing angles of quarks and leptons. In particular, an intrinsic value of normal ordering tends to be larger than that ...
May 15, 2019
Deep learning, a branch of machine learning, has recently been applied to high energy experimental and phenomenological studies. In this note we give a brief review of those applications using supervised deep learning. We first describe various learning models and then recapitulate their applications to high energy phenomenological studies. Some applications are delineated in detail, including the machine learning scan in the analysis of new physics parameter space...
February 19, 2024
This study introduces a novel Graph Neural Network (GNN) architecture that leverages infrared and collinear (IRC) safety and equivariance to enhance the analysis of collider data for Beyond the Standard Model (BSM) discoveries. By integrating equivariance in the rapidity-azimuth plane with IRC-safe principles, our model significantly reduces computational overhead while ensuring theoretical consistency in identifying BSM scenarios amidst Quantum Chromodynamics backgrounds. Th...
March 27, 2019
We propose deep reinforcement learning as a model-free method for exploring the landscape of string vacua. As a concrete application, we utilize an artificial intelligence agent known as an asynchronous advantage actor-critic to explore type IIA compactifications with intersecting D6-branes. As different string background configurations are explored by changing D6-brane configurations, the agent receives rewards and punishments related to string consistency conditions and pro...
May 2, 2019
Artificial Intelligence (AI), defined in its simplest form, is a technological tool that makes machines intelligent. Since learning is at the core of intelligence, machine learning is a core sub-field of AI. Deep learning, in turn, is a subclass of machine learning that addresses the limitations of its predecessors. AI has gained prominence over the past few years due to its considerable progress in various fields. AI has vastly in...