ID: 1903.01032

A Fundamental Performance Limitation for Adversarial Classification

March 4, 2019

Abed AlRahman Al Makdah, Vaibhav Katewa, Fabio Pasqualetti
Computer Science: Machine Learning; Systems and Control
Statistics: Machine Learning

Despite the widespread use of machine learning algorithms to solve problems of technological, economic, and social relevance, provable guarantees on the performance of these data-driven algorithms are critically lacking, especially when the data originates from unreliable sources and is transmitted over unprotected and easily accessible channels. In this paper we take an important step to bridge this gap and formally show that, in a quest to optimize their accuracy, binary classification algorithms -- including those based on machine-learning techniques -- inevitably become more sensitive to adversarial manipulation of the data. Further, for a given class of algorithms with the same complexity (i.e., number of classification boundaries), the fundamental tradeoff curve between accuracy and sensitivity depends solely on the statistics of the data, and cannot be improved by tuning the algorithm.
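To make the stated accuracy-sensitivity tradeoff concrete, the sketch below works through a minimal numerical example. It is not the paper's formulation: it assumes a 1-D data model with two unit-variance Gaussian classes, a single-threshold classifier, an attacker who can shift any input by at most eps toward the decision boundary, and "sensitivity" measured as the drop from nominal to worst-case accuracy. All of these modeling choices are illustrative assumptions, not definitions taken from the paper.

```python
# Illustrative sketch (not the paper's formulation): accuracy vs. adversarial
# sensitivity for a 1-D threshold classifier on two Gaussian classes.
# Assumptions (not from the paper): class 0 ~ N(-1, 1), class 1 ~ N(+1, 1),
# equal priors, the attacker may shift any input by at most eps across the
# boundary, and sensitivity = nominal accuracy - worst-case accuracy.
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def accuracy(t):
    """Nominal accuracy of the rule 'predict class 1 iff x > t'."""
    return 0.5 * Phi(t + 1.0) + 0.5 * Phi(1.0 - t)

def adversarial_accuracy(t, eps):
    """Accuracy when every input may be pushed by up to eps toward the boundary."""
    return 0.5 * Phi(t - eps + 1.0) + 0.5 * Phi(1.0 - t - eps)

eps = 0.5
for t in [0.0, 0.5, 1.0, 2.0]:
    acc = accuracy(t)
    sens = acc - adversarial_accuracy(t, eps)
    print(f"threshold t={t:4.1f}  accuracy={acc:.3f}  sensitivity={sens:.3f}")
```

Under these assumptions the threshold that maximizes nominal accuracy (t = 0) is also the most sensitive, and both quantities decrease together as the threshold moves away from the optimum; the resulting tradeoff curve is fixed by the two class distributions and cannot be escaped by choosing a different threshold, which mirrors the qualitative claim in the abstract.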
