ID: 2208.06438

Topological Data Analysis of Neural Network Layer Representations

July 1, 2022


Similar papers 3

Topological exploration of artificial neuronal network dynamics

October 3, 2018

89% Match
Jean-Baptiste Bardin, Gard Spreemann, Kathryn Hess
Neurons and Cognition
Algebraic Topology

One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method ba...


Understanding Deep Learning using Topological Dynamical Systems, Index Theory, and Homology

July 25, 2022

89% Match
Bill Basener
Machine Learning
Dynamical Systems
Geometric Topology

In this paper we investigate Deep Learning Models using topological dynamical systems, index theory, and computational homology. This mathematical machinery was invented initially by Henri Poincaré around 1900 and developed over time to understand shapes and dynamical systems whose structure and behavior are too complicated to solve for analytically but can be understood via global relationships. In particular, we show how individual neurons in a neural network can correspond...


Be Persistent: Towards a Unified Solution for Mitigating Shortcuts in Deep Learning

February 17, 2024

89% Match
Hadi M. Dolatabadi, Sarah M. Erfani, Christopher Leckie
Machine Learning
Computer Vision and Pattern Recognition

Deep neural networks (DNNs) are vulnerable to shortcut learning: rather than learning the intended task, they tend to draw inconclusive relationships between their inputs and outputs. Shortcut learning is ubiquitous among many failure cases of neural networks, and traces of this phenomenon can be seen in their generalizability issues, domain shift, adversarial vulnerability, and even bias towards majority groups. In this paper, we argue that this commonality in the cause of v...


Deep neural networks architectures from the perspective of manifold learning

June 6, 2023

89% Match
German Magai
Machine Learning
Artificial Intelligence
Computer Vision and Pattern Recognition
Algebraic Topology

Despite significant advances in the field of deep learning in applications to various areas, an explanation of the learning process of neural network models remains an important open question. The purpose of this paper is a comprehensive comparison and description of neural network architectures in terms of geometry and topology. We focus on the internal representation of neural networks and on the dynamics of changes in the topology and geometry of a data manifold on dif...
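
As a rough illustration of the kind of analysis this abstract points at, the sketch below pushes a toy data manifold (a noisy circle) through a few random ReLU layers and computes the persistent homology of each layer's output. It assumes the `gudhi` library; the random layers, point counts, and fixed Rips scale are illustrative choices, not anything taken from the paper.

```python
# Sketch only: persistent homology of a toy data manifold as it passes through
# random ReLU layers (stand-ins for a trained network's internal representations).
# Assumes the `gudhi` library; all sizes and scales are illustrative.
import numpy as np
import gudhi

rng = np.random.default_rng(0)

# Toy "data manifold": a noisy circle in the plane (one 1-dimensional hole).
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

def relu_layer(points, width=8):
    """One random affine map followed by ReLU, standing in for a network layer."""
    W = rng.normal(size=(points.shape[1], width)) / np.sqrt(points.shape[1])
    b = 0.1 * rng.normal(size=width)
    return np.maximum(points @ W + b, 0.0)

def rips_betti(points, max_scale=1.2):
    """Betti numbers of the Vietoris-Rips complex built up to a fixed scale."""
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_scale)
    st = rips.create_simplex_tree(max_dimension=2)
    st.persistence()          # must be computed before betti_numbers()
    return st.betti_numbers()

layer_output = X
for depth in range(4):
    print(f"layer {depth}: Betti numbers at scale 1.2 -> {rips_betti(layer_output)}")
    layer_output = relu_layer(layer_output)
```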


Rethinking Persistent Homology for Visual Recognition

July 9, 2022

89% Match
Ekaterina Khramtsova, Guido Zuccon, ..., Mahsa Baktashmotlagh
Computer Vision and Pattern Recognition

Persistent topological properties of an image serve as an additional descriptor providing an insight that might not be discovered by traditional neural networks. The existing research in this area focuses primarily on efficiently integrating topological properties of the data in the learning process in order to enhance the performance. However, there is no existing study to demonstrate all possible scenarios where introducing topological properties can boost or harm the perfo...
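
As a hedged sketch of using image persistence as an extra descriptor, the snippet below computes the cubical (sublevel-set) persistence diagram of a grayscale image with `gudhi` and flattens the top-k lifetimes into a fixed-length feature vector that could be concatenated with learned features. The feature construction, helper name, and parameters are illustrative assumptions, not the scenarios studied in the paper.

```python
# Sketch only: cubical (sublevel-set) persistence of a grayscale image used as a
# fixed-length descriptor. Assumes `gudhi`; the top-k lifetime features and the
# helper name are illustrative, not the paper's integration scenarios.
import numpy as np
import gudhi

def image_persistence_features(img, k=5):
    """Top-k finite H0 and H1 lifetimes of the sublevel-set filtration of `img`."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=img.astype(float))
    cc.persistence()
    feats = []
    for dim in (0, 1):
        bars = cc.persistence_intervals_in_dimension(dim)
        lifetimes = np.sort([d - b for b, d in bars if np.isfinite(d)])[::-1]
        lifetimes = np.pad(lifetimes[:k], (0, max(0, k - len(lifetimes))))
        feats.append(lifetimes)
    return np.concatenate(feats)  # could be concatenated with CNN features

# Toy usage: two Gaussian blobs; negate so bright blobs become sublevel minima.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / 50.0) \
    + np.exp(-((xx - 45) ** 2 + (yy - 45) ** 2) / 50.0)
print(image_persistence_features(-img))
```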


Topological Deep Learning: A Review of an Emerging Paradigm

February 8, 2023

88% Match
Ali Zia, Abdelwahed Khamis, James Nichols, Zeeshan Hayder, ..., Lars Petersson
Machine Learning
Artificial Intelligence

Topological data analysis (TDA) provides insight into data shape. The summaries obtained by these methods are principled global descriptions of multi-dimensional data whilst exhibiting stable properties such as robustness to deformation and noise. Such properties are desirable in deep learning pipelines but they are typically obtained using non-TDA strategies. This is partly caused by the difficulty of combining TDA constructs (e.g. barcode and persistence diagrams) with curr...


Topological Understanding of Neural Networks, a survey

January 23, 2023

88% Match
Tushar Pandey
Machine Learning
Algebraic Topology

We look at the internal structure of neural networks which is usually treated as a black box. The easiest and the most comprehensible thing to do is to look at a binary classification and try to understand the approach a neural network takes. We review the significance of different activation functions, types of network architectures associated to them, and some empirical data. We find some interesting observations and a possibility to build upon the ideas to verify the proce...


Persistence Bag-of-Words for Topological Data Analysis

December 21, 2018

88% Match
Bartosz Zieliński, Michał Lipiński, Mateusz Juda, ..., Paweł Dłotko
Machine Learning
Machine Learning
Algebraic Topology

Persistent homology (PH) is a rigorous mathematical theory that provides a robust descriptor of data in the form of persistence diagrams (PDs). PDs exhibit, however, complex structure and are difficult to integrate in today's machine learning workflows. This paper introduces persistence bag-of-words: a novel and stable vectorized representation of PDs that enables the seamless integration with machine learning. Comprehensive experiments show that the new representation achiev...
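
The snippet below is a minimal sketch of the general bag-of-words idea applied to persistence diagrams: pool birth-persistence points from a corpus of diagrams, learn a codebook with k-means, and represent each diagram as a histogram of codeword assignments. It assumes scikit-learn, and it omits the weighting and stability refinements that the paper's persistence bag-of-words introduces.

```python
# Minimal sketch of a bag-of-words representation for persistence diagrams
# (generic illustration; the paper's stable persistence bag-of-words adds
# refinements not reproduced here). Assumes scikit-learn; helper names are ours.
import numpy as np
from sklearn.cluster import KMeans

def to_birth_persistence(diagram):
    """Map (birth, death) pairs to (birth, persistence) coordinates."""
    d = np.asarray(diagram, dtype=float)
    return np.c_[d[:, 0], d[:, 1] - d[:, 0]]

def fit_codebook(diagrams, n_words=16, seed=0):
    """Cluster the pooled points of a corpus of diagrams into a codebook."""
    points = np.vstack([to_birth_persistence(d) for d in diagrams])
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(points)

def bag_of_words(diagram, codebook):
    """Normalized histogram of codeword assignments: one fixed-length vector per diagram."""
    words = codebook.predict(to_birth_persistence(diagram))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random diagrams (each row is a (birth, death) pair, birth <= death).
rng = np.random.default_rng(0)
diagrams = [np.sort(rng.uniform(0.0, 1.0, (30, 2)), axis=1) for _ in range(10)]
codebook = fit_codebook(diagrams)
print(bag_of_words(diagrams[0], codebook))
```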


Predicting the generalization gap in neural networks using topological data analysis

March 23, 2022

88% Match
Rubén Ballester, Xavier Arnal Clemente, Carles Casacuberta, Meysam Madadi, ..., Sergio Escalera
Machine Learning
Algebraic Topology

Understanding how neural networks generalize on unseen data is crucial for designing more robust and reliable models. In this paper, we study the generalization gap of neural networks using methods from topological data analysis. For this purpose, we compute homological persistence diagrams of weighted graphs constructed from neuron activation correlations after a training phase, aiming to capture patterns that are linked to the generalization capacity of the network. We comp...
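
A minimal sketch of the general recipe described here, assuming the `gudhi` library: build a dissimilarity matrix from neuron activation correlations, compute Vietoris-Rips persistence, and extract simple summary statistics. The particular dissimilarity, scale, and summaries below are illustrative and not the generalization-gap predictors evaluated in the paper.

```python
# Sketch only (assumes `gudhi`): neuron-correlation graph -> Vietoris-Rips
# persistence -> simple summary statistics. The dissimilarity, scale, and
# summaries are illustrative, not the generalization-gap predictors of the paper.
import numpy as np
import gudhi

def activation_persistence_summary(activations, max_dim=1):
    """`activations` is an (n_samples, n_neurons) array collected after training."""
    # Dissimilarity between neurons: 1 - |Pearson correlation| of their activations.
    corr = np.corrcoef(activations.T)
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)

    rips = gudhi.RipsComplex(distance_matrix=dist, max_edge_length=1.0)
    st = rips.create_simplex_tree(max_dimension=max_dim + 1)
    st.persistence()

    summary = {}
    for dim in range(max_dim + 1):
        bars = st.persistence_intervals_in_dimension(dim)
        lifetimes = [d - b for b, d in bars if np.isfinite(d)]
        summary[f"H{dim}_mean_lifetime"] = float(np.mean(lifetimes)) if lifetimes else 0.0
    return summary

# Toy usage: random "activations" of a 64-neuron layer over 500 inputs.
rng = np.random.default_rng(0)
print(activation_persistence_summary(rng.normal(size=(500, 64))))
```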


Persistent Homology with Improved Locality Information for more Effective Delineation

October 12, 2021

88% Match
Doruk Oner, Adélie Garin, Mateusz Koziński, ..., Pascal Fua
Computer Vision and Pattern Recognition

Persistent Homology (PH) has been successfully used to train networks to detect curvilinear structures and to improve the topological quality of their results. However, existing methods are very global and ignore the location of topological features. In this paper, we remedy this by introducing a new filtration function that fuses two earlier approaches: thresholding-based filtration, previously used to train deep networks to segment medical images, and filtration with height...
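
Purely as a schematic illustration of fusing two filtration functions, the sketch below mixes a prediction-based (thresholding) filtration with a spatial height filtration via a convex combination before computing cubical persistence with `gudhi`. The mixing rule, weight `alpha`, and helper names are assumptions for illustration; the paper's filtration is constructed differently.

```python
# Schematic sketch only: mixing a prediction-based (thresholding) filtration with
# a spatial height filtration before computing cubical persistence. The convex
# combination and weight `alpha` are illustrative assumptions, not the paper's
# filtration function. Assumes `gudhi`.
import numpy as np
import gudhi

def fused_filtration(prob_map, alpha=0.5):
    """Per-pixel filtration mixing (1 - probability) with the normalized row height."""
    h, w = prob_map.shape
    height = np.repeat(np.linspace(0.0, 1.0, h)[:, None], w, axis=1)
    return alpha * (1.0 - prob_map) + (1.0 - alpha) * height

def cubical_persistence(filtration_values):
    """Sublevel-set persistence of the fused filtration values."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=filtration_values)
    return cc.persistence()  # list of (dimension, (birth, death)) pairs

# Toy usage: a fake probability map for a curvilinear structure (a bright diagonal).
prob = np.zeros((64, 64))
np.fill_diagonal(prob, 1.0)
print(cubical_persistence(fused_filtration(prob))[:5])
```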
