Researchers think that adversarial examples could help us maintain privacy from machine learning systems

Originally published at: https://boingboing.net/2019/10/02/mockingbird-and-attriguard.html


Or we could, oh I don’t know, stop pretending that ML companies a) will self-govern with ethics, b) keep models or data secure, or c) have a “right” to invade critical processes and systems with opaque, untested prototypes.


Why not both, though? The two ideas - being distrustful of ML companies, and developing ways to temper ML systems’ intrusive effects - don’t seem to be mutually exclusive.


They’re not exclusive, but one should have higher priority than the other. Instead of expecting individuals and systems to deploy adversarial examples to fool invasive ML systems, at a cost to their own time and productivity, we should hit pause and actually develop best practices.

I haven’t read the whole whitepaper, but my understanding is that developing this kind of countermeasure requires access to the surveillance ML system’s output (or the ability to run a copy of it). Otherwise, how can the adversarial system learn what is effective or ineffective at tricking it? Is that right, or can this be done “blindly”?
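To make my question concrete, here’s a rough sketch of the “non-blind” case as I picture it: even without the model’s internals, you still need to query its outputs to check whether a perturbation actually worked. Everything below is hypothetical - `query_model()` is just a stand-in, not anything from the paper:

```python
import numpy as np

# Hypothetical stand-in for the surveillance system's prediction API -- in a
# black-box setting, a predicted label per query is all you get back.
def query_model(x):
    raise NotImplementedError("wire this up to the actual system under test")

def blackbox_perturb(x, true_label, eps=0.05, max_queries=1000, seed=0):
    """Crude random-search attack: propose small random perturbations of x
    and keep querying until the predicted label changes (no gradients used)."""
    rng = np.random.default_rng(seed)
    candidate = x.copy()
    for _ in range(max_queries):
        if query_model(candidate) != true_label:
            return candidate                      # model fooled
        noise = rng.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + noise, 0.0, 1.0)  # stay close to the original input
    return None                                   # query budget exhausted
```

Without at least that query access, it seems like you’d just be adding noise and hoping, which is what I mean by “blindly”.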

It seems like this could lead to a practice of keeping neural nets or other trained ML systems extremely secret, since capturing a copy could allow someone to effectively run “offline attacks” against it - analogous to what hackers can do with password hashes.
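To illustrate that “offline attack” analogy: with a captured copy you can compute gradients directly and crank out adversarial inputs locally, with no rate limits and nobody watching the query logs - much like brute-forcing a stolen password hash on your own hardware. A minimal FGSM-style sketch in PyTorch (the model here is a toy placeholder, not anything from the article):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy placeholder classifier; a real "offline attack" would load the
# captured model's weights here instead.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm(x, label, eps=0.1):
    """Fast Gradient Sign Method: one gradient step computed against a local
    copy of the model -- only possible because we hold the weights ourselves."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()    # nudge the input in the loss-increasing direction
    return x_adv.clamp(0.0, 1.0).detach()

# Unlimited local attempts, analogous to offline password-hash cracking.
x = torch.rand(1, 1, 28, 28)           # a stand-in "image"
x_adv = fgsm(x, torch.tensor([3]))
```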

