A generative adversarial network produces a "universal fingerprint" that will unlock many smartphones
There's a literal elephant in machine learning's room
Machine learning models keep getting spoofed by adversarial attacks, and it's not clear whether this can ever be fixed
Adversarial examples: an attack can imperceptibly alter any sound (or silence), embedding speech that only voice assistants will hear
Adversarial patches: colorful circles that convince machine-learning vision systems to ignore everything else