ID: 1906.01478

What do AI algorithms actually learn? - On false structures in deep learning

June 4, 2019


Similar papers 2

Towards an Understanding of Neural Networks in Natural-Image Spaces

January 27, 2018

89% Match
Yifei Fan, Anthony Yezzi
Computer Vision and Pattern Recognition

Two major uncertainties, dataset bias and adversarial examples, prevail in state-of-the-art AI algorithms with deep neural networks. In this paper, we present an intuitive explanation for these issues as well as an interpretation of the performance of deep networks in a natural-image space. The explanation consists of two parts: the philosophy of neural networks and a hypothetical model of natural-image spaces. Following the explanation, we 1) demonstrate that the values of t...


Mathematical Challenges in Deep Learning

March 24, 2023

89% Match
Vahid Partovi Nia, Guojun Zhang, Ivan Kobyzev, Michael R. Metel, Xinlin Li, Ke Sun, Sobhan Hemati, Masoud Asgharian, Linglong Kong, ... , Boxing Chen
Machine Learning
Artificial Intelligence
Statistics Theory
Machine Learning
Statistics Theory

Deep models have dominated the artificial intelligence (AI) industry since the ImageNet challenge in 2012. The size of deep models has been increasing ever since, which brings new challenges to this field, with applications in cell phones, personal computers, autonomous cars, and wireless base stations. Here we list a set of problems, spanning training, inference, generalization bounds, and optimization, with some formalism to communicate these challenges to mathematicians, stat...


Meet You Halfway: Explaining Deep Learning Mysteries

June 9, 2022

89% Match
Oriel BenShmuel
Machine Learning
Cryptography and Security

Deep neural networks perform exceptionally well on various learning tasks with state-of-the-art results. While these models are highly expressive and achieve impressively accurate solutions with excellent generalization abilities, they are susceptible to minor perturbations. Samples that suffer such perturbations are known as "adversarial examples". Even though deep learning is an extensively researched field, many questions about the nature of deep learning models remain una...
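
To make the notion of an adversarial example concrete, the sketch below (not from this paper) applies a fast-gradient-sign-style perturbation to a toy logistic-regression classifier; the weights, the input, and the step size are all invented for illustration.

```python
# Minimal sketch (not from the paper): an FGSM-style adversarial perturbation
# against a toy logistic-regression classifier. The weights, the input, and the
# step size eps are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
dim = 1000

# Toy "network": y_hat = sigmoid(w . x + b), with weights assumed already trained.
w = rng.normal(size=dim)
b = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
predict = lambda x: sigmoid(w @ x + b)

# An input the model confidently assigns to class 1 (logit pinned to 4 for the demo).
x = 0.1 * rng.normal(size=dim)
x += (4.0 - w @ x) / (w @ w) * w
y = 1.0
print("clean prediction:      ", predict(x))      # ~0.98

# Gradient of the cross-entropy loss w.r.t. the input is (y_hat - y) * w.
grad_x = (predict(x) - y) * w

# FGSM step: move each coordinate by eps in the direction that increases the loss.
eps = 0.01                                        # ~10x smaller than a typical |x_i|
x_adv = x + eps * np.sign(grad_x)

# A small, structured perturbation flips the confident prediction to class 0.
print("adversarial prediction:", predict(x_adv))  # ~0.02
```

Swapping the toy classifier for a trained deep network leaves the recipe unchanged; only the input gradient has to be obtained by backpropagation.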


Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness

October 19, 2020

89% Match
Guillermo Ortiz-Jimenez, Apostolos Modas, ... , Pascal Frossard
Machine Learning
Artificial Intelligence
Computer Vision and Pattern Recognition

Driven by massive amounts of data and important advances in computational resources, new deep learning systems have achieved outstanding results in a large spectrum of applications. Nevertheless, our current theoretical understanding of the mathematical foundations of deep learning lags far behind its empirical success. Towards addressing the vulnerability of neural networks, however, the field of adversarial robustness has recently become one of the main sources of explanations...


Intriguing properties of neural networks

December 21, 2013

88% Match
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, ... , Rob Fergus
Computer Vision and Pattern Recognition
Machine Learning
Neural and Evolutionary Computing

Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units...


Topology and geometry of data manifold in deep learning

April 19, 2022

88% Match
German Magai, Anton Ayzenberg
Machine Learning
Computer Vision and Pattern Recognition
Algebraic Topology

Despite significant advances in applying deep learning across various fields, explaining the inner processes of deep learning models remains an important and open question. The purpose of this article is to describe and substantiate the geometric and topological view of the learning process of neural networks. Our attention is focused on the internal representation of neural networks and on the dynamics of changes in the topology and geometry of the data manif...
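
As one concrete way to read off a geometric quantity of the kind studied here, the sketch below (not taken from the paper) estimates the intrinsic dimension of a point cloud with the TwoNN estimator of Facco et al. (2017); the synthetic cloud stands in for the hidden-layer activations one would analyse in practice.

```python
# Minimal sketch (not from the paper): intrinsic dimension of a point cloud via
# the TwoNN estimator (Facco et al., 2017). The synthetic cloud below stands in
# for the hidden-layer activations one would analyse in practice.
import numpy as np

rng = np.random.default_rng(0)

def two_nn_dimension(points):
    """MLE of intrinsic dimension from ratios of 2nd- to 1st-nearest-neighbour distances."""
    sq = np.sum(points ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T   # squared distances
    d2 = np.maximum(d2, 0.0)                                   # guard against round-off
    np.fill_diagonal(d2, np.inf)                               # ignore self-distances
    nearest = np.sort(d2, axis=1)[:, :2]
    r1, r2 = np.sqrt(nearest[:, 0]), np.sqrt(nearest[:, 1])
    mu = r2 / r1
    return len(points) / np.sum(np.log(mu))                    # d_hat = N / sum(log mu_i)

# Synthetic "data manifold": a 2-D latent sheet mapped linearly into 50 dimensions.
n, latent_dim, ambient_dim = 2000, 2, 50
latent = rng.uniform(size=(n, latent_dim))
cloud = latent @ rng.normal(size=(latent_dim, ambient_dim))
cloud += 1e-3 * rng.normal(size=cloud.shape)                   # slight ambient noise

print("ambient dimension:            ", ambient_dim)
print("estimated intrinsic dimension:", round(two_nn_dimension(cloud), 2))  # close to 2
```

On a real network one would collect activations from a chosen layer at several training epochs and track how such estimates change, which is the kind of dynamics the abstract refers to.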


A Study of the Mathematics of Deep Learning

April 28, 2021

88% Match
Anirbit Mukherjee
Machine Learning
Optimization and Control
Applications
Machine Learning

"Deep Learning"/"Deep Neural Nets" is a technological marvel that is now increasingly deployed at the cutting-edge of artificial intelligence tasks. This dramatic success of deep learning in the last few years has been hinged on an enormous amount of heuristics and it has turned out to be a serious mathematical challenge to be able to rigorously explain them. In this thesis, submitted to the Department of Applied Mathematics and Statistics, Johns Hopkins University we take se...


Generalizing in the Real World with Representation Learning

October 18, 2022

88% Match
Tegan Maharaj
Machine Learning
Machine Learning

Machine learning (ML) formalizes the problem of getting computers to learn from experience as optimization of performance according to some metric(s) on a set of data examples. This is in contrast to requiring behaviour specified in advance (e.g. by hard-coded rules). Formalization of this problem has enabled great progress in many applications with large real-world impact, including translation, speech recognition, self-driving cars, and drug discovery. But practical instant...
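
To make that formalization concrete, here is a minimal sketch (not from the thesis) in which "learning from experience" is literally the optimization of a performance metric over a set of examples; the underlying rule and every constant are invented for illustration.

```python
# Minimal sketch (not from the thesis): "learning from experience" as
# optimization of a performance metric over a set of examples. The underlying
# rule (y = 3x - 1) and all constants here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Experience: examples (x, y) generated by a rule unknown to the learner, plus noise.
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * x - 1.0 + 0.1 * rng.normal(size=200)

# Performance metric: mean squared error of a parametric predictor f(x) = w*x + b.
def mse(w, b):
    return float(np.mean((w * x + b - y) ** 2))

# Optimization: plain gradient descent on the metric evaluated on the examples.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y
    w -= lr * np.mean(2.0 * err * x)   # d(mse)/dw
    b -= lr * np.mean(2.0 * err)       # d(mse)/db

# The rule was never hard-coded; it was recovered by optimizing the metric on data.
print(f"learned predictor: y = {w:.2f} * x + {b:.2f}   (mse = {mse(w, b):.4f})")
```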


Deep Learning: An Introduction for Applied Mathematicians

January 17, 2018

88% Match
Catherine F. Higham, Desmond J. Higham
History and Overview
Machine Learning
Numerical Analysis
Machine Learning

Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final ...
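
As a taste of those familiar ingredients, the sketch below (not the article's own code) fits a two-layer network to a one-dimensional function using nothing beyond matrix products, the chain rule, and a plain gradient-descent step; the target function, layer width, and step size are arbitrary choices.

```python
# Minimal sketch (not the article's own code): a two-layer network fitted to a
# 1-D function with basic linear algebra, the chain rule, and gradient descent.
# The target function, layer width, and step size are arbitrary choices.
import numpy as np

rng = np.random.default_rng(2)

# Data: 100 samples of a smooth target function on [0, 1].
x = np.linspace(0.0, 1.0, 100).reshape(1, -1)     # shape (1, N)
y = np.sin(2.0 * np.pi * x)                       # shape (1, N)

# Parameters of y_hat = W2 @ sigma(W1 @ x + b1) + b2 (linear algebra).
hidden = 20
W1 = rng.normal(size=(hidden, 1)); b1 = np.zeros((hidden, 1))
W2 = 0.1 * rng.normal(size=(1, hidden)); b2 = np.zeros((1, 1))

sigma = lambda z: 1.0 / (1.0 + np.exp(-z))        # smooth activation (calculus)

lr, n = 0.5, x.shape[1]
for step in range(20000):
    # Forward pass.
    a1 = sigma(W1 @ x + b1)
    resid = (W2 @ a1 + b2) - y                    # residual of the L2 fit

    # Backward pass: the chain rule, averaged over the N samples.
    dW2 = (2.0 / n) * resid @ a1.T
    db2 = (2.0 / n) * resid.sum(axis=1, keepdims=True)
    dz1 = (W2.T @ resid) * a1 * (1.0 - a1)        # sigma'(z) = sigma(z)(1 - sigma(z))
    dW1 = (2.0 / n) * dz1 @ x.T
    db1 = (2.0 / n) * dz1.sum(axis=1, keepdims=True)

    # Optimization: one plain gradient-descent step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The mean squared error should end up far below the ~0.5 variance of the target.
print("final mean squared error:", float(np.mean(resid ** 2)))
```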


Generalization in Deep Learning

October 16, 2017

88% Match
Kenji Kawaguchi, Leslie Pack Kaelbling, Yoshua Bengio
Machine Learning
Artificial Intelligence
Machine Learning
Neural and Evolutionary Computing

This paper provides theoretical insights into why and how deep learning can generalize well, despite its large capacity, complexity, possible algorithmic instability, nonrobustness, and sharp minima, responding to an open question in the literature. We also discuss approaches to provide non-vacuous generalization guarantees for deep learning. Based on theoretical observations, we propose new open problems and discuss the limitations of our results.
