ID: 1903.01032

A Fundamental Performance Limitation for Adversarial Classification

March 4, 2019


Similar papers (page 5)

On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems

April 9, 2020

90% Match
Ivan Y. Tyukin, Desmond J. Higham, Alexander N. Gorban
Machine Learning
Artificial Intelligence

In this work we present a formal theoretical framework for assessing and analyzing two classes of malevolent action towards generic Artificial Intelligence (AI) systems. Our results apply to general multi-class classifiers that map from an input space into a decision space, including artificial neural networks used in deep learning applications. Two classes of attacks are considered. The first class involves adversarial examples and concerns the introduction of small perturba...
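
As a deliberately simple illustration of the first attack class, the sketch below generates a small adversarial perturbation against a linear classifier using an FGSM-style step; this is a generic construction for illustration only, not the framework or attack model analyzed in the paper.

```python
# Illustrative only: an FGSM-style perturbation against a hypothetical linear
# classifier f(x) = sign(w.x + b), showing how a per-coordinate change that is
# small relative to the feature scale can still flip a high-dimensional decision.
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w, b = rng.normal(size=d), 0.1            # hypothetical model parameters
x = rng.normal(size=d)                    # a clean input
y = np.sign(w @ x + b)                    # the clean prediction

eps = 0.1                                 # per-coordinate (L_inf) budget
x_adv = x - eps * y * np.sign(w)          # step against the decision margin

print("clean prediction:       ", np.sign(w @ x + b))
print("perturbed prediction:   ", np.sign(w @ x_adv + b))   # typically flipped
print("L_inf perturbation size:", np.max(np.abs(x_adv - x)))
```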


Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers

June 19, 2013

90% Match
Giuseppe Ateniese, Giovanni Felici, Luigi V. Mancini, Angelo Spognardi, ... , Domenico Vitali
Cryptography and Security
Machine Learning
Machine Learning

Machine Learning (ML) algorithms are used to train computers to perform a variety of complex tasks and improve with experience. Computers learn how to recognize patterns, make unintended decisions, or react to a dynamic environment. Certain trained machines may be more effective than others because they are based on more suitable ML algorithms or because they were trained through superior training sets. Although ML algorithms are known and publicly released, training sets may...


Analysis of classifiers' robustness to adversarial perturbations

February 9, 2015

90% Match
Alhussein Fawzi, Omar Fawzi, Pascal Frossard
Machine Learning
Computer Vision and Pattern Recognition
Machine Learning

The goal of this paper is to analyze an intriguing phenomenon recently discovered in deep networks, namely their instability to adversarial perturbations (Szegedy et al., 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then ill...
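
For reference, the pointwise and expected robustness quantities that such upper bounds concern can be written as follows (standard notation; the symbols are illustrative rather than the paper's exact definitions).

```latex
% Robustness of classifier f at input x: the smallest perturbation that
% changes the decision; overall robustness is its expectation over the data.
\rho_{\mathrm{adv}}(x; f) \;=\; \min_{r} \, \|r\|_2 \quad \text{s.t.}\quad f(x+r) \neq f(x),
\qquad
\rho_{\mathrm{adv}}(f) \;=\; \mathbb{E}_{x \sim \mu}\!\left[\rho_{\mathrm{adv}}(x; f)\right].
```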


Reliable learning in challenging environments

April 6, 2023

90% Match
Maria-Florina Balcan, Steve Hanneke, ... , Dravyansh Sharma
Machine Learning
Cryptography and Security

The problem of designing learners that provide guarantees that their predictions are provably correct is of increasing importance in machine learning. However, learning theoretic guarantees have only been considered in very specific settings. In this work, we consider the design and analysis of reliable learners in challenging test-time environments as encountered in modern machine learning problems: namely `adversarial' test-time attacks (in several variations) and `natural'...


A General Retraining Framework for Scalable Adversarial Classification

April 9, 2016

90% Match
Bo Li, Yevgeniy Vorobeychik, Xinyun Chen
Computer Science and Game Theory
Machine Learning
Machine Learning

Traditional classification algorithms assume that training and test data come from similar distributions. This assumption is violated in adversarial settings, where malicious actors modify instances to evade detection. A number of custom methods have been developed for both adversarial evasion attacks and robust learning. We propose the first systematic and general-purpose retraining framework which can: a) boost robustness of an arbitrary learning algorithm, in the fa...
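
The general shape of such a retraining loop is sketched below with placeholder components: a base learner is repeatedly attacked and the evading instances are folded back into the training set. The `attack` function and the use of logistic regression are assumptions for illustration, not the paper's specific algorithm or attack model.

```python
# A generic adversarial-retraining loop (illustrative sketch): attack the
# current model on the training data, add the evading instances with their
# true labels, and refit. The attack below is a crude placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression

def attack(model, X, eps=0.3):
    """Placeholder evasion attack: shift each point toward the opposite class."""
    w = model.coef_[0]
    toward_other = np.where(model.predict(X) == 1, -1.0, 1.0)
    return X + eps * toward_other[:, None] * np.sign(w)[None, :]

def retrain(X, y, rounds=5):
    X_aug, y_aug = X.copy(), y.copy()
    model = LogisticRegression().fit(X_aug, y_aug)
    for _ in range(rounds):
        X_adv = attack(model, X)                   # adversarial versions of the data
        X_aug = np.vstack([X_aug, X_adv])          # augment the training set ...
        y_aug = np.concatenate([y_aug, y])         # ... keeping the true labels
        model = LogisticRegression().fit(X_aug, y_aug)
    return model

# Toy usage on synthetic two-class data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
robust_model = retrain(X, y)
```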


Why adversarial training can hurt robust accuracy

March 3, 2022

90% Match
Jacob Clarysse, Julia Hörrmann, Fanny Yang
Machine Learning
Cryptography and Security
Computer Vision and Pattern Recognition
Machine Learning

Machine learning classifiers with high test accuracy often perform poorly under adversarial attacks. It is commonly believed that adversarial training alleviates this issue. In this paper, we demonstrate that, surprisingly, the opposite may be true: even though adversarial training helps when enough data is available, it may hurt robust generalization in the small sample size regime. We first prove this phenomenon for a high-dimensional linear classification setting with no...
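
For context, adversarial training replaces the standard empirical risk with a min-max ("robust") objective; in standard notation (not necessarily the paper's), the two objectives compared in this line of work are:

```latex
% Standard empirical risk vs. the robust (min-max) risk optimized by
% adversarial training, with perturbation budget epsilon.
\min_{\theta}\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f_{\theta}(x_i), y_i\bigr)
\qquad\text{vs.}\qquad
\min_{\theta}\; \frac{1}{n}\sum_{i=1}^{n} \max_{\|\delta_i\|\le \epsilon} \ell\bigl(f_{\theta}(x_i+\delta_i), y_i\bigr).
```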


A Le Cam Type Bound for Adversarial Learning and Applications

July 1, 2020

90% Match
Qiuling Xu, Kevin Bello, Jean Honorio
Machine Learning
Machine Learning

Robustness of machine learning methods is essential for modern practical applications. Given the arms race between attack and defense methods, one may be curious regarding the fundamental limits of any defense mechanism. In this work, we focus on the problem of learning from noise-injected data, where the existing literature falls short by either assuming a specific attack method or by over-specifying the learning problem. We shed light on the information-theoretic limits of ...
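
For reference, the classical two-point Le Cam bound that this type of result adapts to the noise-injection/adversarial setting reads, in its standard form (not the paper's exact statement):

```latex
% Le Cam's two-point method: for parameters theta_1, theta_2 with separation
% Delta = d(theta_1, theta_2) and induced distributions P_1, P_2,
\inf_{\hat\theta}\; \max_{j \in \{1,2\}} \; \mathbb{E}_{P_j}\!\bigl[d(\hat\theta, \theta_j)\bigr]
\;\ge\; \frac{\Delta}{2}\Bigl(1 - \mathrm{TV}(P_1, P_2)\Bigr).
```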


Intelligent Systems Design for Malware Classification Under Adversarial Conditions

July 6, 2019

90% Match
Sean M. Devine, Nathaniel D. Bastian
Machine Learning
Cryptography and Security
Machine Learning

The use of machine learning and intelligent systems has become an established practice in the realm of malware detection and cyber threat prevention. In an environment characterized by widespread accessibility and big data, the feasibility of malware classification without the use of artificial intelligence-based techniques has been diminished exponentially. Also characteristic of the contemporary realm of automated, intelligent malware detection is the threat of adversarial ...


Analyzing the Robustness of Nearest Neighbors to Adversarial Examples

June 13, 2017

90% Match
Yizhen Wang, Somesh Jha, Kamalika Chaudhuri
Machine Learning
Cryptography and Security
Machine Learning

Motivated by safety-critical applications, test-time attacks on classifiers via adversarial examples have recently received a great deal of attention. However, there is a general lack of understanding of why adversarial examples arise; whether they originate due to inherent properties of the data or due to a lack of training samples remains ill-understood. In this work, we introduce a theoretical framework analogous to bias-variance theory for understanding these effects. We use o...
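
As a geometric illustration of nearest-neighbor robustness (not the paper's analysis), the sketch below computes a simple lower bound on the 1-NN robustness radius at a test point: if the nearest same-label training point is at distance a and the nearest differently-labeled one at distance b, no perturbation of norm less than (b - a)/2 can change the prediction.

```python
# Illustrative lower bound on the 1-NN robustness radius at a point x,
# via the triangle inequality: the prediction is unchanged for any
# perturbation of norm r < (b - a) / 2, with a, b defined below.
import numpy as np

def one_nn_robustness_lower_bound(x, X_train, y_train):
    dists = np.linalg.norm(X_train - x, axis=1)
    pred = y_train[np.argmin(dists)]              # 1-NN prediction at x
    a = dists[y_train == pred].min()              # nearest point with the same label
    b = dists[y_train != pred].min()              # nearest point with a different label
    return pred, max(0.0, (b - a) / 2.0)

# Toy usage on two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2)) + np.array([[2.0, 0.0]] * 100 + [[-2.0, 0.0]] * 100)
y_train = np.array([1] * 100 + [0] * 100)
print(one_nn_robustness_lower_bound(np.array([1.5, 0.2]), X_train, y_train))
```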


A Bayes-Optimal View on Adversarial Examples

February 20, 2020

90% Match
Eitan Richardson, Yair Weiss
Machine Learning
Cryptography and Security
Computer Vision and Pattern Recognition
Machine Learning

Since the discovery of adversarial examples - the ability to fool modern CNN classifiers with tiny perturbations of the input - there has been much discussion about whether they are a "bug" that is specific to current neural architectures and training methods or an inevitable "feature" of high-dimensional geometry. In this paper, we argue for examining adversarial examples from the perspective of Bayes-optimal classification. We construct realistic image datasets for which the Bayes...
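
For reference, the Bayes-optimal classifier against which adversarial vulnerability is examined here is, in its standard definition:

```latex
% Bayes-optimal classifier: predict the class with maximal posterior probability.
h^{\ast}(x) \;=\; \arg\max_{y \in \mathcal{Y}} \, p(y \mid x)
\;=\; \arg\max_{y \in \mathcal{Y}} \, p(x \mid y)\, p(y).
```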
