ID: 2211.01369

Gravitational Dimensionality Reduction Using Newtonian Gravity and Einstein's General Relativity

October 30, 2022

Benyamin Ghojogh, Smriti Sharma
Computer Science
General Relativity and Quantum Cosmology
Physics
Machine Learning
Computer Vision and Pattern Recognition
Classical Physics

Due to the effectiveness of machine learning in physics, its use has received increasing attention in the literature. However, the reverse direction, applying physics in machine learning, has received much less attention. This work is a hybrid of physics and machine learning in which concepts of physics are used in machine learning. We propose the supervised Gravitational Dimensionality Reduction (GDR) algorithm, in which the data points of every class are moved toward each other to reduce intra-class variance and better separate the classes. For every data point, the other points are treated as gravitational particles, such as stars, and the point is attracted by gravity to the points of its own class. The data points are first projected onto a spacetime manifold using principal component analysis. We propose two variants of GDR -- one based on Newtonian gravity and one based on Einstein's general relativity. The former applies Newtonian gravity along the straight line between points, whereas the latter moves data points along the geodesics of the spacetime manifold. For GDR with relativistic gravitation, we use both the Schwarzschild and Minkowski metric tensors to cover general and special relativity, respectively. Our simulations show the effectiveness of GDR in discriminating classes.
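
To make the idea concrete, here is a minimal sketch of the Newtonian variant as described in the abstract: PCA projects the data, and each point is then pulled toward its class mates along straight lines with a force that decays as the inverse square of distance. The step size, iteration count, and force normalization below are illustrative assumptions, not values from the paper.

import numpy as np
from sklearn.decomposition import PCA

def gdr_newtonian(X, y, n_components=3, n_iters=10, eta=0.01, eps=1e-8):
    """Hedged sketch: attract every point to the other points of its class
    along straight lines, with an inverse-square (Newtonian-style) force."""
    Z = PCA(n_components=n_components).fit_transform(X)  # low-dimensional subspace
    for _ in range(n_iters):
        Z_new = Z.copy()
        for i, zi in enumerate(Z):
            mask = (y == y[i])
            mask[i] = False                               # exclude the point itself
            diffs = Z[mask] - zi                          # vectors toward class mates
            dists = np.linalg.norm(diffs, axis=1, keepdims=True) + eps
            Z_new[i] = zi + eta * (diffs / dists**3).sum(axis=0)  # ~1/r^2 pull
        Z = Z_new
    return Z

# toy usage: two Gaussian classes in 10 dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
print(gdr_newtonian(X, y).shape)   # (100, 3), with tighter classes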

Similar papers

Yang-Hui He, Juan Manuel Pérez Ipiña
Machine Learning

We take a novel perspective on the long-established classification problems in general relativity by adopting fruitful techniques from machine learning and modern data science. In particular, we model Petrov's classification of spacetimes and show that a feed-forward neural network can achieve a high degree of success. We also show how data visualization techniques with dimensionality reduction can help analyze the underlying patterns in the structure of the different types of...
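
As a rough, hedged illustration of that pipeline (not the authors' actual setup), one can train a feed-forward classifier on per-spacetime feature vectors and then inspect the feature space with a dimensionality-reduction projection; the synthetic descriptors and six-way labels below are placeholder assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))       # placeholder curvature-derived descriptors
y = rng.integers(0, 6, size=600)     # stand-in labels for the six Petrov types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))  # near chance on random data; the point is the pipeline

Z = PCA(n_components=2).fit_transform(X)         # 2-D projection for pattern inspection
print(Z.shape)                                   # (600, 2)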

Discriminative Dimensionality Reduction using Deep Neural Networks for Clustering of LIGO Data

May 27, 2022

87% Match
Sara Bahaadini, Yunan Wu, Scott Coughlin, ... , Aggelos K. Katsaggelos
Instrumentation and Methods for Astrophysics
Astrophysics of Galaxies

In this paper, leveraging the capabilities of neural networks for modeling the non-linearities that exist in the data, we propose several models that can project data into a low-dimensional, discriminative, and smooth manifold. The proposed models can transfer knowledge from the domain of known classes to a new domain where the classes are unknown. A clustering algorithm is further applied in the new domain to find potentially new classes from the pool of unlabeled data. The ...
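
A minimal sketch of the transfer-then-cluster idea, assuming the learned embedding is simply the last hidden layer of a small classifier trained on the known classes (the actual models in the paper are deeper and task-specific):

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X_known, y_known = rng.normal(size=(300, 20)), rng.integers(0, 5, 300)   # labeled, known classes
X_unlabeled = rng.normal(size=(100, 20))                                  # new domain, unknown classes

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(X_known, y_known)

def embed(clf, X):
    # forward pass up to the last hidden layer: the learned low-dimensional manifold
    H = X
    for W, b in zip(clf.coefs_[:-1], clf.intercepts_[:-1]):
        H = np.maximum(H @ W + b, 0.0)        # ReLU hidden activations
    return H

labels = KMeans(n_clusters=4, n_init=10).fit_predict(embed(clf, X_unlabeled))
print(labels[:10])   # candidate new-class assignments in the transferred embedding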


Higher order multi-dimension reduction methods via Einstein-product

March 27, 2024

86% Match
Alaeddine Zahir, Khalide Jbilou, Ahmed Ratnani
Numerical Analysis

This paper explores the extension of dimension reduction (DR) techniques to the multi-dimension case by using the Einstein product. Our focus lies on graph-based methods, encompassing both linear and nonlinear approaches, within both supervised and unsupervised learning paradigms. Additionally, we investigate variants such as repulsion graphs and kernel methods for linear approaches. Furthermore, we present two generalizations for each method, based on single or multiple weig...
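
For intuition, the Einstein product that underlies this lifting is the tensor analogue of a matrix product: the trailing N modes of one tensor are contracted against the leading N modes of the other. A minimal sketch with purely illustrative shapes:

import numpy as np

def einstein_product(A, B, N):
    """Einstein N-mode product: sum over the last N axes of A and the
    first N axes of B (the tensor generalization of a matrix product)."""
    return np.tensordot(A, B, axes=N)

A = np.random.rand(3, 4, 5, 6)     # A in R^{3x4 x 5x6}
B = np.random.rand(5, 6, 2, 7)     # B in R^{5x6 x 2x7}
C = einstein_product(A, B, N=2)    # contract the shared 5x6 modes
print(C.shape)                     # (3, 4, 2, 7)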


Gravitational Clustering

September 5, 2015

86% Match
Armen Aghajanyan
Machine Learning

The downfall of many supervised learning algorithms, such as neural networks, is the inherent need for a large amount of training data. Although there is a lot of buzz about big data, there is still the problem of doing classification from a small dataset. Other methods such as support vector machines, although capable of dealing with few samples, are inherently binary classifiers, and require learning strategies such as One-vs-All for multi-class classification. ...


Interpretable Discriminative Dimensionality Reduction and Feature Selection on the Manifold

September 19, 2019

86% Match
Babak Hosseini, Barbara Hammer
Machine Learning

Dimensionality reduction (DR) on the manifold includes effective methods which project the data from an implicit relational space onto a vectorial space. Despite the achievements in this area, these algorithms suffer from a lack of interpretability of the projection dimensions. Therefore, it is often difficult to explain the physical meaning behind the embedding dimensions. In this research, we propose the interpretable kernel DR algorithm (I-KDR) as a new algorithm whi...


On Manifold Learning in Plato's Cave: Remarks on Manifold Learning and Physical Phenomena

April 27, 2023

85% Match
Roy R. Lederman, Bogdan Toader
Machine Learning

Many techniques in machine learning attempt explicitly or implicitly to infer a low-dimensional manifold structure of an underlying physical phenomenon from measurements without an explicit model of the phenomenon or the measurement apparatus. This paper presents a cautionary tale regarding the discrepancy between the geometry of measurements and the geometry of the underlying phenomenon in a benign setting. The deformation in the metric illustrated in this paper is mathemati...


Hierarchical Subspace Learning for Dimensionality Reduction to Improve Classification Accuracy in Large Data Sets

May 25, 2021

85% Match
Parisa Abdolrahim Poorheravi, Vincent Gaudet
Machine Learning

Manifold learning is used for dimensionality reduction, with the goal of finding a projection subspace to increase and decrease the inter- and intraclass variances, respectively. However, a bottleneck for subspace learning methods often arises from the high dimensionality of datasets. In this paper, a hierarchical approach is proposed to scale subspace learning methods, with the goal of improving classification in large datasets by a range of 3% to 10%. Different combinations...
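
One plausible reading of such a hierarchical scheme, sketched below purely as an assumption (not necessarily the authors' exact procedure): fit a supervised subspace method on partitions of the data, stack the partition-wise projections, and fit a final subspace on the already-reduced data.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 1, (500, 50)) for m in range(4)])   # 2000 samples, 4 classes
y = np.repeat(np.arange(4), 500)

chunks = np.array_split(rng.permutation(len(X)), 8)              # level-1 partitions
Z = np.empty((len(X), 3))
for c in chunks:                                                 # local supervised subspaces
    Z[c] = LDA(n_components=3).fit_transform(X[c], y[c])
Z_final = LDA(n_components=2).fit_transform(Z, y)                # level-2 subspace on reduced data
print(Z_final.shape)   # (2000, 2)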


Visual Cluster Separation Using High-Dimensional Sharpened Dimensionality Reduction

October 1, 2021

85% Match
Youngjoo Kim, Alexandru C. Telea, ... , Jos B. T. M. Roerdink
Computer Vision and Pattern Recognition

Applying dimensionality reduction (DR) to large, high-dimensional data sets can be challenging when distinguishing the underlying high-dimensional data clusters in a 2D projection for exploratory analysis. We address this problem by first sharpening the clusters in the original high-dimensional data prior to the DR step using Local Gradient Clustering (LGC). We then project the sharpened data from the high-dimensional space to 2D by a user-selected DR method. The sharpening s...
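
A hedged sketch of the two-step pattern, with a mean-shift-like neighborhood step standing in for LGC and t-SNE as the user-selected DR method; the neighborhood size, step size, and iteration count are illustrative assumptions.

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.manifold import TSNE

def sharpen(X, k=15, step=0.5, n_iters=5):
    """Move each point toward the mean of its k nearest neighbors, i.e.
    toward denser regions, so clusters contract before projection."""
    for _ in range(n_iters):
        _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
        local_mean = X[idx].mean(axis=1)          # mean of each point's k-NN
        X = X + step * (local_mean - X)
    return X

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 1, (100, 10)) for m in (0, 4, 8)])
Z = TSNE(n_components=2, init="pca").fit_transform(sharpen(X))    # user-chosen DR step
print(Z.shape)   # (300, 2)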


Curvature Augmented Manifold Embedding and Learning

March 21, 2024

85% Match
Yongming Liu
Machine Learning
Human-Computer Interaction

A new dimensionality reduction (DR) and data visualization method, Curvature-Augmented Manifold Embedding and Learning (CAMEL), is proposed. The key novel contribution is to formulate the DR problem as a mechanistic/physics model, where the force field among nodes (data points) is used to find an n-dimensional manifold representation of the data sets. Compared with many existing attractive-repulsive force-based methods, one unique contribution of the proposed method is to includ...
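
As a rough, hedged sketch of the family of attractive-repulsive force-field embeddings that CAMEL extends (the curvature augmentation that is CAMEL's actual contribution is not reproduced here): k-nearest-neighbor pairs attract, random pairs repel, and the low-dimensional positions are integrated under the net force.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def force_embed(X, dim=2, k=10, n_iters=200, eta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Y = rng.normal(scale=1e-2, size=(n, dim))            # initial embedding
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    idx = idx[:, 1:]                                      # drop the self-neighbor
    for _ in range(n_iters):
        F = np.zeros_like(Y)
        for i in range(n):
            # attraction along k-NN edges of the high-dimensional graph
            F[i] += (Y[idx[i]] - Y[i]).sum(axis=0)
            # repulsion from a few random points
            d = Y[i] - Y[rng.integers(0, n, size=k)]
            F[i] += (d / (np.linalg.norm(d, axis=1, keepdims=True)**2 + 1e-3)).sum(axis=0)
        Y += eta * F / k                                  # integrate the net force
    return Y

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 1, (60, 8)) for m in (0, 5)])
print(force_embed(X).shape)    # (120, 2)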


Nonlinear Supervised Dimensionality Reduction via Smooth Regular Embeddings

October 19, 2017

85% Match
Cem Ornek, Elif Vural
Computer Vision and Pattern Recognition

The recovery of the intrinsic geometric structures of data collections is an important problem in data analysis. Supervised extensions of several manifold learning approaches have been proposed in recent years. However, existing methods primarily focus on embedding the training data, and the generalization of the embedding to initially unseen test data is largely ignored. In this work, we build on recent theoretical results on the generalization performance of sup...
