| Topic | Replies | Views | Date |
| --- | --- | --- | --- |
| The "universal adversarial perturbation" undetectably alters images so AI can't recognize them | 28 | 3477 | April 3, 2017 |
| Adversarial patches: colorful circles that convince machine-learning vision systems to ignore everything else | 22 | 3199 | January 13, 2018 |
| Techniques for reliably fooling AI machine-vision classifiers | 20 | 2806 | July 23, 2017 |
| Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed | 31 | 3476 | March 14, 2018 |
| Tiny alterations in training data can introduce "backdoors" into machine learning models | 11 | 995 | November 30, 2019 |