ID: 2103.04759

Particle Physics Model Building with Reinforcement Learning

March 8, 2021

T. R. Harvey, A. Lukas
High Energy Physics - Theory
High Energy Physics - Phenomenology

In this paper, we apply reinforcement learning to particle physics model building. As an example environment, we use the space of Froggatt-Nielsen type models for quark masses. Using a basic policy-based algorithm, we show that neural networks can be successfully trained to construct Froggatt-Nielsen models consistent with the observed quark masses and mixing. The trained policy networks lead from random starting points to phenomenologically acceptable models in over 90% of episodes, after an average episode length of about 20 steps. We also show that the networks are capable of finding models proposed in the literature when starting at nearby configurations.
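The training setup described above — a policy network proposing modifications that walk a random initial model toward a viable one — can be illustrated with a minimal REINFORCE-style sketch. The environment, target charge assignment, and linear softmax policy below are invented stand-ins, not the paper's actual Froggatt-Nielsen environment or network architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHARGES = 4                     # toy state: integer charges for a few fields
TARGET = np.array([2, 1, 0, 1])   # hypothetical "phenomenologically viable" point
N_ACTIONS = 2 * N_CHARGES         # shift any one charge by +1 or -1

def step(state, action):
    """Apply one charge shift; reward 1.0 only at the target configuration."""
    s = state.copy()
    idx, sign = divmod(action, 2)
    s[idx] += 1 if sign == 0 else -1
    done = np.array_equal(s, TARGET)
    return s, (1.0 if done else 0.0), done

W = np.zeros((N_ACTIONS, N_CHARGES))   # linear softmax policy: logits = W @ state

def policy(state):
    logits = W @ state
    p = np.exp(logits - logits.max())
    return p / p.sum()

def run_episode(max_steps=30):
    """Roll out the current policy from a random charge assignment."""
    state = rng.integers(0, 3, N_CHARGES)
    trajectory, reward = [], 0.0
    for _ in range(max_steps):
        action = rng.choice(N_ACTIONS, p=policy(state))
        trajectory.append((state.copy(), action))
        state, reward, done = step(state, action)
        if done:
            break
    return trajectory, reward

ALPHA = 0.01
for _ in range(500):               # REINFORCE: ascend R * grad log pi(a|s)
    traj, R = run_episode()
    for s, a in traj:
        p = policy(s)
        grad = -np.outer(p, s)     # d(log softmax)/dW for all actions...
        grad[a] += s               # ...plus the chosen action's extra term
        W += ALPHA * R * grad
```

With a sparse terminal reward like this, only episodes that reach the target contribute a gradient, which is why the paper's episode-length and success-rate statistics are the natural diagnostics.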

Similar papers

Towards Beyond Standard Model Model-Building with Reinforcement Learning on Graphs

July 9, 2024

89% Match
George N. Wojcik, Shu Tian Eu, Lisa L. Everett
High Energy Physics - Phenomenology
High Energy Physics - Experiment
High Energy Physics - Theory

We provide a framework for exploring physics beyond the Standard Model with reinforcement learning using graph representations of new physics theories. The graph structure allows for model-building without a priori specifying definite numbers of new particles. As a case study, we apply our method to a simple class of theories involving vectorlike leptons and a dark U(1) inspired by the portal matter paradigm. Using modern policy gradient methods, the agent successfully explor...


Exploring the flavor structure of quarks and leptons with reinforcement learning

April 27, 2023

88% Match
Satsuki Nishimura, Coh Miyao, Hajime Otsuka
Machine Learning

We propose a method to explore the flavor structure of quarks and leptons with reinforcement learning. As a concrete model, we utilize a basic value-based algorithm for models with $U(1)$ flavor symmetry. By training neural networks on the $U(1)$ charges of quarks and leptons, the agent finds 21 models to be consistent with experimentally measured masses and mixing angles of quarks and leptons. In particular, an intrinsic value of normal ordering tends to be larger than that ...
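For contrast with the policy-based approach of the main paper, the value-based strategy this abstract describes can be sketched as tabular Q-learning over a toy space of U(1) charge assignments. The charge range, target assignment, and binary reward below are invented for illustration; the paper's actual model space, networks, and scoring against measured masses and mixing angles differ:

```python
import itertools
import random

random.seed(1)

CHARGES = range(-2, 3)                               # allowed U(1) charge values
STATES = list(itertools.product(CHARGES, repeat=3))  # 3 toy fields
TARGET = (1, -1, 0)                                  # hypothetical viable assignment

def actions(state):
    """Shift one charge up or down, staying inside the allowed range."""
    acts = []
    for i, q in enumerate(state):
        if q + 1 in CHARGES: acts.append((i, +1))
        if q - 1 in CHARGES: acts.append((i, -1))
    return acts

def step(state, act):
    i, d = act
    s = list(state)
    s[i] += d
    s = tuple(s)
    return s, (1.0 if s == TARGET else 0.0)

Q = {}                                   # Q-table: (state, action) -> value
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

for _ in range(2000):                    # episodes from random starting models
    s = random.choice(STATES)
    for _ in range(20):
        acts = actions(s)
        if random.random() < EPS:        # epsilon-greedy exploration
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda x: Q.get((s, x), 0.0))
        s2, r = step(s, a)
        best = max(Q.get((s2, x), 0.0) for x in actions(s2))
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + ALPHA * (r + GAMMA * best - old)  # Q-learning update
        if r > 0:
            break                        # viable model found; end episode
        s = s2
```

After training, the learned values rank charge shifts by how quickly they lead toward a viable assignment, mirroring how the paper's agent ranks candidate flavor models.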


Graph Reinforcement Learning for Exploring BSM Model Spaces

July 9, 2024

87% Match
George N. Wojcik, Shu Tian Eu, Lisa L. Everett
High Energy Physics - Phenomenology
High Energy Physics - Experiment
High Energy Physics - Theory

We present a methodology for performing scans of BSM parameter spaces with reinforcement learning (RL). We identify a novel procedure using graph neural networks that is capable of exploring spaces of models without the user specifying a fixed particle content, allowing broad classes of BSM models to be explored. In theory, the technique is applicable to nearly any model space with a pre-specified gauge group. We provide a generic procedure by which a suitable graph grammar c...


Physicist's Journeys Through the AI World - A Topical Review. There is no royal road to unsupervised learning

May 2, 2019

86% Match
Imad Alhousseini, Wissam Chemissany, ... , Aly Nasrallah
Machine Learning
Disordered Systems and Neural Networks
Computational Physics
Machine Learning

Artificial Intelligence (AI), defined in its simplest form, is a technological tool that makes machines intelligent. Since learning is at the core of intelligence, machine learning poses itself as a core sub-field of AI. A subclass of machine learning, known as deep learning, addresses the limitations of its predecessors. AI has generally acquired its prominence over the past few years due to its considerable progress in various fields. AI has vastly in...


Hierarchical clustering in particle physics through reinforcement learning

November 16, 2020

85% Match
Johann Brehmer, Sebastian Macaluso, ... , Kyle Cranmer
Artificial Intelligence
Machine Learning

Particle physics experiments often require the reconstruction of decay patterns through a hierarchical clustering of the observed final-state particles. We show that this task can be phrased as a Markov Decision Process and adapt reinforcement learning algorithms to solve it. In particular, we show that Monte-Carlo Tree Search guided by a neural policy can construct high-quality hierarchical clusterings and outperform established greedy and beam search baselines.
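As a concrete picture of clustering-as-sequential-decisions, here is the greedy baseline the abstract compares against: at each step, merge the pair of nodes that minimizes a splitting score, until a single tree remains. The toy (E, px) momenta and the quadratic cost are invented; the paper works with full decay kinematics and replaces this greedy rule with MCTS guided by a neural policy:

```python
# Leaves are labelled 0..3; each node carries (leaf indices, summed momentum).
def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])   # add toy (E, px) momenta

def cost(a, b):
    return (a[1] - b[1]) ** 2           # invented splitting score

particles = [(10.0, 3.0), (8.0, 2.5), (5.0, -4.0), (7.0, -3.5)]
nodes = [((i,), p) for i, p in enumerate(particles)]

while len(nodes) > 1:
    # Greedy decision: pick the cheapest pair to merge at this step.
    i, j = min(((a, b) for a in range(len(nodes)) for b in range(a + 1, len(nodes))),
               key=lambda ab: cost(nodes[ab[0]][1], nodes[ab[1]][1]))
    merged = (nodes[i][0] + nodes[j][0], merge(nodes[i][1], nodes[j][1]))
    nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]

root_leaves, root_momentum = nodes[0]
```

Each merge is one action in the Markov Decision Process; a search-based agent differs from this baseline only in how it picks the pair, looking ahead instead of committing to the locally cheapest merge.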


Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL

December 17, 2020

85% Match
Simon Hirlaender, Niky Bruchon
Machine Learning
Artificial Intelligence
Systems and Control
Systems and Control
Accelerator Physics

Reinforcement learning holds tremendous promise for accelerator controls. The primary goal of this paper is to show how this approach can be utilised at an operational level on accelerator physics problems. Despite the success of model-free reinforcement learning in several domains, sample efficiency is still a bottleneck, which might be overcome by model-based methods. We compare well-suited purely model-based to model-free reinforcement learning applied to the intensity ...


Autonomous Control of a Particle Accelerator using Deep Reinforcement Learning

October 16, 2020

85% Match
Xiaoying Pang, Sunil Thulasidasan, Larry Rybarcyk
Artificial Intelligence
Machine Learning
Accelerator Physics

We describe an approach to learning optimal control policies for a large, linear particle accelerator using deep reinforcement learning coupled with a high-fidelity physics engine. The framework consists of an AI controller that uses deep neural nets for state and action-space representation and learns optimal policies using reward signals that are provided by the physics simulator. For this work, we only focus on controlling a small section of the entire accelerator. Neverth...


Parameterized Machine Learning for High-Energy Physics

January 28, 2016

85% Match
Pierre Baldi, Kyle Cranmer, Taylor Faucett, ... , Daniel Whiteson
Machine Learning

We investigate a new structure for machine learning classifiers applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at i...
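The idea of feeding the physics parameter in alongside the measured features can be shown with a deliberately small stand-in: a logistic classifier trained jointly at several toy "mass" points, then evaluated at an interpolated one. The data, the engineered (x - m)^2 input, and plain gradient descent are all invented for illustration and are far simpler than the deep networks the paper studies:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_data(mass, n=200):
    """Toy events: the signal feature peaks at `mass`, background is flat."""
    sig = rng.normal(mass, 0.5, n)
    bkg = rng.uniform(0.0, 10.0, n)
    x = np.concatenate([sig, bkg])
    m = np.full(2 * n, mass)
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x, m, y

def features(x, m):
    # Parameterized input: the physics parameter m enters alongside the
    # measured feature x (here via one engineered combination, plus a bias).
    return np.stack([(x - m) ** 2 / 10.0, np.ones_like(x)], axis=-1)

# Train one logistic classifier jointly on several parameter points,
# instead of one classifier per mass hypothesis.
xs, ms, ys = zip(*(make_data(m) for m in (3.0, 5.0, 7.0)))
X = features(np.concatenate(xs), np.concatenate(ms))
y = np.concatenate(ys)

w = np.zeros(X.shape[1])
for _ in range(1000):                      # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def score(x, mass):
    """Signal probability at any parameter value, trained on or not."""
    f = np.array([(x - mass) ** 2 / 10.0, 1.0])
    return 1.0 / (1.0 + np.exp(-f @ w))
```

The single trained model can then be queried at a mass value it never saw during training (e.g. 4.0), which is the interpolation property the abstract highlights.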


An Introduction to Deep Reinforcement Learning

November 30, 2018

84% Match
Vincent Francois-Lavet, Peter Henderson, Riashat Islam, ... , Joelle Pineau
Machine Learning
Artificial Intelligence
Machine Learning

Deep reinforcement learning is the combination of reinforcement learning (RL) and deep learning. This field of research has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine. Thus, deep RL opens up many new applications in domains such as healthcare, robotics, smart grids, finance, and many more. This manuscript provides an introduction to deep reinforcement learning models, algorithms and techniques. Particular f...


QMDP-Net: Deep Learning for Planning under Partial Observability

March 20, 2017

84% Match
Peter Karkus, David Hsu, Wee Sun Lee
Artificial Intelligence
Machine Learning
Neural and Evolutionary Computation
Machine Learning

This paper introduces the QMDP-net, a neural network architecture for planning under partial observability. The QMDP-net combines the strengths of model-free learning and model-based planning. It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a planning algorithm that solves the model, thus embedding the solution structure of planning in a network learning architecture. The QMDP-net is fully differentiable...
