ID: 1903.01032

A Fundamental Performance Limitation for Adversarial Classification

March 4, 2019


Similar papers

How adversarial attacks can disrupt seemingly stable accurate classifiers

September 7, 2023

90% Match
Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, ... , Desmond J. Higham
Machine Learning
Artificial Intelligence

Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data. Paradoxically, empirical evidence indicates that even systems which are robust to large random perturbations of the input data remain susceptible to small, easily constructed, adversarial perturbations of their inputs. Here, we show that this may be seen as a fundamental feature of classifiers working with high di...
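
As a concrete illustration of the robust-to-random yet fragile-to-adversarial behaviour described above, here is a minimal sketch (not from the paper; it assumes a plain linear classifier and uses only numpy) comparing a random perturbation with a worst-case perturbation of the same norm in high dimension:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                        # input dimension (illustrative)
w = rng.standard_normal(d)        # weights of a linear classifier f(x) = w . x
w /= np.linalg.norm(w)
x = rng.standard_normal(d)        # some input
eps = 1.0                         # l2 perturbation budget

# Random perturbation: uniform direction, scaled to norm eps.
delta_rand = rng.standard_normal(d)
delta_rand *= eps / np.linalg.norm(delta_rand)

# Worst-case perturbation: spend the whole budget along the weight vector.
delta_adv = -np.sign(w @ x) * eps * w

print("score change, random     :", abs(w @ delta_rand))  # ~ eps / sqrt(d)
print("score change, adversarial:", abs(w @ delta_adv))   # exactly eps
```

In high dimension a random direction is nearly orthogonal to the weight vector, so its effect on the score shrinks like eps/sqrt(d), while the aligned perturbation always uses the full budget.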


Robustifying Binary Classification to Adversarial Perturbation

October 29, 2020

90% Match
Fariborz Salehi, Babak Hassibi
Machine Learning
Optimization and Control

Despite the enormous success of machine learning models in various applications, most of these models lack resilience to (even small) perturbations in their input data. Hence, new methods to robustify machine learning models are essential. To this end, in this paper we consider the problem of binary classification with adversarial perturbations. Investigating the solution to a min-max optimization (which considers the worst-case loss in the presence of adversarial pertu...
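
To make the min-max formulation concrete, here is a hedged sketch for the special case of a linear classifier facing an l-infinity-bounded adversary, where the inner maximization has a closed form (the paper analyzes a more general setting; the data below is hypothetical):

```python
import numpy as np

def robust_hinge_loss(w, X, y, eps):
    """Worst-case hinge loss under an l-infinity perturbation of radius eps.

    For a linear score y * (w . x), the adversary's optimal move lowers the
    margin by exactly eps * ||w||_1, so the inner max is available in closed form.
    """
    margins = y * (X @ w) - eps * np.linalg.norm(w, 1)
    return np.maximum(0.0, 1.0 - margins).mean()

# Toy usage with synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))
w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
print(robust_hinge_loss(w, X, y, eps=0.0))  # standard hinge loss
print(robust_hinge_loss(w, X, y, eps=0.3))  # never smaller than the standard loss
```

The outer minimization over w then proceeds with any standard optimizer, which supplies the min part of the min-max.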


Mitigation of Adversarial Attacks through Embedded Feature Selection

August 17, 2018

90% Match
Ziyi Bao, Luis Muñoz-González, Emil C. Lupu
Cryptography and Security
Machine Learning

Machine learning has become one of the main components for task automation in many application domains. Despite the advancements and impressive achievements of machine learning, it has been shown that learning algorithms can be compromised by attackers both at training and test time. Machine learning systems are especially vulnerable to adversarial examples where small perturbations added to the original data points can produce incorrect or unexpected outputs in the learning ...
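
As a rough sketch of the general idea in the title, assuming a simple post-hoc variant (the paper embeds the selection into training itself), one can keep only the highest-weight features so that weakly informative dimensions are no longer available for hiding a perturbation; all names and data here are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=50,
                           n_informative=5, random_state=0)

full = LogisticRegression(max_iter=1000).fit(X, y)

# Keep the k features with the largest absolute weight; discard the rest.
k = 5
keep = np.argsort(np.abs(full.coef_[0]))[-k:]
reduced = LogisticRegression(max_iter=1000).fit(X[:, keep], y)

print("full model accuracy   :", full.score(X, y))
print("reduced model accuracy:", reduced.score(X[:, keep], y))
```

The intuition is that with fewer usable input dimensions, an attacker has less room to spread a small perturbation across features the model barely relies on.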


A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks

December 4, 2019

90% Match
Prithviraj Dasgupta, Joseph B. Collins
Cryptography and Security
Artificial Intelligence
Machine Learning

Machine learning techniques are currently used extensively for automating various cybersecurity tasks. Most of these techniques utilize supervised learning algorithms that rely on training the algorithm to classify incoming data into different categories, using data encountered in the relevant domain. A critical vulnerability of these algorithms is that they are susceptible to adversarial attacks where a malicious entity called an adversary deliberately alters the training da...


Adversarial Examples - A Complete Characterisation of the Phenomenon

October 2, 2018

90% Match
Alexandru Constantin Serban, Erik Poll, Joost Visser
Computer Vision and Pattern Recognition
Cryptography and Security
Machine Learning
Neural and Evolutionary Computing

We provide a complete characterisation of the phenomenon of adversarial examples - inputs intentionally crafted to fool machine learning models. We aim to cover all the important concerns in this field of study: (1) the conjectures on the existence of adversarial examples, (2) the security, safety and robustness implications, (3) the methods used to generate and (4) protect against adversarial examples and (5) the ability of adversarial examples to transfer between different ...


Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains

March 23, 2017

90% Match
Tegjyot Singh Sethi, Mehmed Kantardzic
Machine Learning
Cryptography and Security

While modern-day web applications aim to create impact at the civilization level, they have become vulnerable to adversarial activity, where the next cyber-attack can take any shape and can originate from anywhere. The increasing scale and sophistication of attacks has prompted the need for a data-driven solution, with machine learning forming the core of many cybersecurity systems. Machine learning was not designed with security in mind, and the essential assumption of stat...
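
The exploratory setting can be sketched as follows; `predict` here is a hypothetical stand-in for any black-box classifier reachable only through queries, not an interface from the paper:

```python
import numpy as np

def exploratory_attack(predict, x, step=0.1, max_queries=1000, seed=0):
    """Query-only probing: try random directions until the label flips.

    No gradients or model internals are used, only the label returned
    by the black-box `predict` function.
    """
    rng = np.random.default_rng(seed)
    original = predict(x)
    for _ in range(max_queries):
        direction = rng.standard_normal(x.shape)
        direction /= np.linalg.norm(direction)
        candidate = x + step * direction
        if predict(candidate) != original:
            return candidate          # evasive sample found
        step *= 1.05                  # widen the search radius gradually
    return None

# Usage against a toy black-box linear classifier.
w = np.array([1.0, -2.0, 0.5])
predict = lambda v: int(v @ w > 0)
print(exploratory_attack(predict, np.array([0.2, 0.0, 0.1])))
```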


Gradient-based Data Subversion Attack Against Binary Classifiers

May 31, 2021

90% Match
Rosni K Vasu, Sanjay Seetharaman, Shubham Malaviya, ... , Sachin Lodha
Machine Learning
Artificial Intelligence
Cryptography and Security

Machine learning based data-driven technologies have shown impressive performance in a variety of application domains. Most enterprises use data from multiple sources to provide quality applications. The reliability of the external data sources raises concerns for the security of the machine learning techniques adopted. An attacker can tamper with the training or test datasets to subvert the predictions of models generated by these techniques. Data poisoning is one such attack wh...
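
As a toy stand-in for the gradient-based attack the truncated abstract begins to describe, this hedged sketch uses a simpler margin-based heuristic: flip the labels the clean model is most confident about, so each poisoned point carries a large loss and drags the retrained boundary; the data and the 10% budget are hypothetical:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Flip the labels of the points the clean model classifies most confidently:
# after the flip they become large-loss points that pull the boundary hardest.
budget = int(0.1 * len(y_tr))
margins = np.abs(clean.decision_function(X_tr))
flip = np.argsort(margins)[-budget:]
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print("clean test accuracy   :", clean.score(X_te, y_te))
print("poisoned test accuracy:", poisoned.score(X_te, y_te))
```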


Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

February 7, 2020

90% Match
Elan Rosenfeld, Ezra Winston, ... , J. Zico Kolter
Machine Learning
Artificial Intelligence
Cryptography and Security

Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade performance of the resulting classifier. In this work, we present a unifying view of randomized smoothing over arbitrary functions, and we leverage this novel characterization to propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks. As a specific instantiation, we utili...
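
A minimal sketch of the smoothing idea, assuming (from the title) that the randomness is applied to training labels: train an ensemble on independently label-flipped copies of the data and predict by majority vote. The certificate itself, which this sketch omits, comes from bounding how far a fixed number of adversarial flips can move that vote:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

def smoothed_predict(X_tr, y_tr, X_query, n_models=50, q=0.1, seed=0):
    """Majority vote over models trained on randomly label-flipped data.

    Each base model sees every training label flipped independently with
    probability q, so a bounded number of real flips can only shift the
    vote by a bounded amount (the basis of the robustness certificate).
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_query))
    for _ in range(n_models):
        noisy = np.where(rng.random(len(y_tr)) < q, 1 - y_tr, y_tr)
        votes += LogisticRegression(max_iter=1000).fit(X_tr, noisy).predict(X_query)
    return (votes / n_models > 0.5).astype(int)

print(smoothed_predict(X, y, X[:5]))
```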


Adversarial Robustness May Be at Odds With Simplicity

January 2, 2019

90% Match
Preetum Nakkiran
Machine Learning
Computational Complexity

Current techniques in machine learning are so far unable to learn classifiers that are robust to adversarial perturbations. However, they are able to learn non-robust classifiers with very high accuracy, even in the presence of random perturbations. Towards explaining this gap, we highlight the hypothesis that $\textit{robust classification may require more complex classifiers (i.e. more capacity) than standard classification.}$ In this note, we show that this hypothesi...


Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning

February 20, 2018

90% Match
Christopher Frederickson, Michael Moore, ... , Robi Polikar
Machine Learning

As the prevalence and everyday use of machine learning algorithms, along with our reliance on these algorithms, grow dramatically, so do the efforts to attack and undermine these algorithms with malicious intent, resulting in a growing interest in adversarial machine learning. A number of approaches have been developed that can render a machine learning algorithm ineffective through poisoning or other types of attacks. Most attack algorithms typically use sophisticated optimiz...
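
The dilemma in the title shows up already in a hypothetical one-dimensional example: the further a poison point sits from the clean data, the more it shifts a mean-based statistic, but the larger its outlier score, so a simple z-score defense flags it:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=500)     # clean feature values

for strength in [2.0, 5.0, 10.0]:                    # poison point placement
    data = np.append(clean, strength)
    shift = abs(data.mean() - clean.mean())          # damage: shift of the mean
    z = abs(strength - clean.mean()) / clean.std()   # detectability: z-score
    print(f"strength={strength:5.1f}  mean shift={shift:.4f}  z-score={z:.1f}")
```

A stronger poison buys a larger shift but also a larger outlier score: attack strength trades off directly against detectability.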
