Researchers can now send secret audio instructions, undetectable to the human ear, to Apple’s Siri, Amazon’s Alexa and Google’s Assistant.
https://www.nytimes.com/2018/05/10/technology/alexa-siri-hidden-command-audio-attacks.html
One of the first papers on audio attacks was titled “Cocaine Noodles” because devices interpreted the phrase “cocaine noodles” as “O.K., Google.”
Last year, researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated by using frequencies inaudible to the human ear. The attack first muted the phone so the owner would not hear the system’s responses.
DolphinAttack (https://arxiv.org/pdf/1708.09537.pdf) can instruct smart devices to visit malicious websites, initiate phone calls, take a picture, or send text messages. While DolphinAttack has its limitations — the transmitter must be close to the receiving device — experts warned that more powerful ultrasonic systems were possible.
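The core trick behind DolphinAttack is to amplitude-modulate a recorded voice command onto an ultrasonic carrier; the nonlinearity of microphone hardware demodulates it back into the audible band, while humans hear nothing. A minimal sketch of the modulation step, using NumPy and a sine tone as a stand-in for recorded speech (the carrier frequency and modulation depth here are illustrative, not the paper’s exact parameters):

```python
import numpy as np

def modulate_ultrasonic(command, fs, carrier_hz=25_000.0, depth=0.8):
    """Amplitude-modulate a baseband command onto an ultrasonic carrier:
    (1 + depth * x(t)) * cos(2*pi*fc*t). The mic's nonlinear response
    recovers x(t) in the audible band; humans hear nothing at 25 kHz."""
    t = np.arange(len(command)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Normalize the command to [-1, 1] before applying standard AM.
    x = command / (np.max(np.abs(command)) + 1e-12)
    return (1 + depth * x) * carrier

# Toy "command": a 400 Hz tone standing in for recorded speech.
fs = 96_000  # playback hardware must support ultrasonic sample rates
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 400 * t)
signal = modulate_ultrasonic(tone, fs)
```

Playing `signal` through a transducer capable of ultrasonic output is what requires the attacker to be physically close to the target device.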
Chinese and American researchers from China’s Academy of Sciences and other institutions demonstrated that they could control voice-activated devices with commands embedded in songs that can be broadcast over the radio or played on services like YouTube.
Computers can be fooled into identifying an airplane as a cat just by changing a few pixels of a digital image, while researchers can make a self-driving car swerve or speed up simply by pasting small stickers on road signs and confusing the vehicle’s computer vision system.
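The few-pixel attacks mentioned above are typically built with gradient-based methods such as the Fast Gradient Sign Method: each input value is nudged a tiny amount in the direction that most increases the model’s loss. A self-contained sketch on a toy linear classifier (the weights and “image” are hypothetical; real attacks push the same gradient through a deep network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "image classifier": label = sign(w . x).
w = rng.normal(size=784)          # weights for a flattened 28x28 image
x = rng.uniform(0, 1, size=784)   # the "image" to be attacked

def classify(img):
    return 1 if w @ img > 0 else -1

orig = classify(x)

# FGSM step: move every pixel by eps in the direction that lowers the
# current label's score. For a linear model the input gradient is just
# w, so the per-pixel change is eps * sign(w). Choose eps just large
# enough to cross the decision boundary.
eps = (abs(w @ x) + 1.0) / np.abs(w).sum()
x_adv = x - orig * eps * np.sign(w)

print(orig, classify(x_adv))  # the label flips
```

Because the perturbation is spread across all pixels, `eps` ends up tiny per pixel, which is why the altered image looks unchanged to a human.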
There is already a history of smart devices being exploited for commercial gain through spoken commands.
There is no American law against broadcasting subliminal messages to humans, let alone machines.
The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets – Nicholas Carlini
https://arxiv.org/abs/1802.08232v1
Boffins baffled as AI training leaks secrets to canny thieves
Oh great. Neural nets memorize data and nobody knows why
Secrets worth stealing are the easiest to nab
https://www.theregister.co.uk/2018/03/02/secrets_fed_into_ai_models_as_training_data_can_be_stolen/
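The Secret Sharer paper quantifies this memorization with an “exposure” metric: how much more likely the trained model finds an inserted canary secret than a random guess, computed from the canary’s perplexity rank among all candidate sequences. A minimal sketch of that metric (the candidate space size and rank below are hypothetical numbers for illustration):

```python
import math

def exposure(canary_rank, candidate_space_size):
    """Exposure metric from 'The Secret Sharer' (Carlini et al.):
    log2(|R|) - log2(rank), where rank orders all |R| candidate
    secrets by the model's perplexity. Rank 1 means the model prefers
    the true canary over every alternative, i.e. it memorized it."""
    return math.log2(candidate_space_size) - math.log2(canary_rank)

# Hypothetical example: a 9-digit secret has 10**9 possible values.
# If the trained model ranks the true canary 3rd by perplexity, its
# exposure is close to the ~29.9-bit maximum, so it is extractable.
print(exposure(3, 10 ** 9))
```

High exposure is exactly the condition under which the attacks described in the Register article can pull training secrets back out of a model.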
Technical article
https://arxiv.org/pdf/1801.01944.pdf
Audio Adversarial Examples: Targeted Attacks on #SpeechtoText – Nicholas Carlini, David Wagner
https://arxiv.org/abs/1801.01944
#VoiceFirst #Security #CyberSecurity #Voice #Alexa #GoogleHome #Cortana