Blind attacks on machine learners

Alex Beatson, Zhaoran Wang, Han Liu

Research output: Contribution to journal › Conference article

1 Citation (Scopus)

Abstract

The importance of studying the robustness of learners to malicious data is well established. While much work has been done establishing both robust estimators and effective data injection attacks when the attacker is omniscient, the ability of an attacker to provably harm learning while having access to little information is largely unstudied. We study the potential of a "blind attacker" to provably limit a learner's performance by data injection attack without observing the learner's training set or any parameter of the distribution from which it is drawn. We provide examples of simple yet effective attacks in two settings: firstly, where an "informed learner" knows the strategy chosen by the attacker, and secondly, where a "blind learner" knows only the proportion of malicious data and some family to which the malicious distribution chosen by the attacker belongs. For each attack, we analyze minimax rates of convergence and establish lower bounds on the learner's minimax risk, exhibiting limits on a learner's ability to learn under data injection attack even when the attacker is "blind".
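The setting described above can be illustrated with a minimal simulation sketch. The assumptions here are illustrative and not taken from the paper: a mean-estimation task where the clean data are Gaussian with unknown mean, the blind attacker injects a fixed proportion of samples from a wide uniform distribution chosen without seeing the training set or the true mean, and the learner averages the mixed sample.

```python
import numpy as np

# Minimal sketch of a blind data-injection attack on mean estimation.
# Illustrative assumptions (not from the paper): clean data ~ N(theta, 1),
# the attacker never observes theta or the training set and injects points
# from a fixed wide uniform distribution, and the learner cannot tell
# clean points from injected ones.

rng = np.random.default_rng(0)

n = 1000        # total number of points the learner observes
alpha = 0.3     # proportion of malicious (injected) points
theta = 2.0     # true mean the learner is trying to estimate

# Clean samples from the true distribution.
n_clean = int((1 - alpha) * n)
clean = rng.normal(loc=theta, scale=1.0, size=n_clean)

# Blind attack: malicious samples drawn from a distribution chosen
# with no knowledge of theta or of the clean sample.
n_malicious = n - n_clean
malicious = rng.uniform(low=-10.0, high=10.0, size=n_malicious)

# The learner sees only the shuffled mixture.
mixture = rng.permutation(np.concatenate([clean, malicious]))

naive_estimate = mixture.mean()        # non-robust estimator
robust_estimate = np.median(mixture)   # a simple robust alternative

print(f"true mean:     {theta:.3f}")
print(f"sample mean:   {naive_estimate:.3f}")
print(f"sample median: {robust_estimate:.3f}")
```

Even this toy sketch shows the flavor of the paper's question: how much an attacker who knows nothing about the clean distribution can still degrade estimation, and how the learner's achievable risk depends on what it knows about the attack.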

Original language: English (US)
Pages (from-to): 2405-2413
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
State: Published - Jan 1 2016
Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
Duration: Dec 5 2016 - Dec 10 2016

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
