How much do you trust your in-home artificial intelligence? You casually ask it to play music, turn on your lights, or get you directions to a new restaurant, building a false rapport with it that you may mistake for a budding friendship. To a degree, you forget that it's an easily manipulated machine and not some infallible eButler. A study performed by a group of students from the University of California, Berkeley, and Georgetown University may unfortunately start to deteriorate that artificial relationship.
Using the white noise that plays over your speakers, the students found a way to embed hidden commands that artificial assistants would respond to. So far, they've been mostly harmless commands like turning on airplane mode or opening a website, but if the ability is there, it could just as easily be used maliciously in the wrong hands.
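To get a rough sense of why a machine can pick up on audio that a person tunes out, here is a minimal, purely conceptual sketch in Python. This is NOT the researchers' actual technique (their work crafted adversarial perturbations by optimizing against the speech-recognition model itself); it simply mixes a quiet stand-in "command" signal under much louder white noise and shows that the command remains statistically detectable in the data even though the noise dominates perceptually. All names and values here are illustrative.

```python
# Conceptual illustration only: hide a quiet signal under louder noise.
# The real attack (Carlini et al.) optimized perturbations against a
# speech-recognition model; this toy just shows that a low-level signal
# buried in noise is still recoverable by a machine that knows what to
# look for, even when a listener would hear only static.
import numpy as np

rng = np.random.default_rng(0)
sample_rate = 16_000                     # 16 kHz, a common rate for speech
duration_s = 1.0
n = int(sample_rate * duration_s)

# Stand-in for a recorded voice command (a simple 440 Hz tone here).
t = np.linspace(0.0, duration_s, n, endpoint=False)
command = 0.05 * np.sin(2 * np.pi * 440.0 * t)   # quiet: peak amplitude 0.05

# Much louder white noise that masks the command perceptually.
noise = 0.5 * rng.standard_normal(n)

mixed = noise + command

# The command carries only a tiny fraction of the total signal energy...
command_power = np.mean(command ** 2)
mixed_power = np.mean(mixed ** 2)
print(f"command/mixed power ratio: {command_power / mixed_power:.4f}")

# ...yet correlating the mix against the known command gives a value far
# above what chance alignment (on the order of 1/sqrt(n)) would produce.
correlation = np.dot(mixed, command) / (
    np.linalg.norm(mixed) * np.linalg.norm(command)
)
print(f"correlation with hidden command: {correlation:.3f}")
```

The point of the sketch is the asymmetry: to a human ear the mix is dominated by noise, but a system that processes the raw samples can still find structure a listener never notices.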
The 2016 study concluded in a research paper revealing that commands could be inserted directly into music or spoken text without the listener knowing. The fear is that Alexa, Siri, and Google Assistant could be manipulated into ordering items online, unlocking doors, wiring money, and even turning off alarms.
The Internet of Things is intended to make life easier, but of course it can't do so without potential risk. While some are responding to the study by disconnecting their devices, others are using the report to further solidify their "anti-AI" platform. Pod Save America co-host Tommy Vietor responded to the news with three simple words: "This is terrifying."
Nicholas Carlini, a fifth-year Ph.D. student in computer security at UC Berkeley, said the researchers were curious just how hidden the messages could get. "We wanted to see if we could make it even more stealthy," Carlini said.
While home-based artificial intelligence users are reacting unfavorably to the revelation that their devices may be responding to hidden messages, others have picked up on an equally unsettling detail mentioned in the WRAL TechWire article. "There is no U.S. law against broadcasting subliminal messages to humans, let alone machines," the article notes. It's true that the United States government apparently has no issue with the use of subliminal messaging, but there are broadcasting standards that, for now, protect users from They Live levels of brainwashing.
The report isn't just a black mark against home-based artificial intelligence. It shows the potential vulnerabilities that could arise as we rely more and more on machine intelligence. For now, there has been no evidence that embedded messaging has been used in a real-world environment to manipulate these personal assistants, but that won't stop many from disconnecting and disabling the devices they once relied on.
H/T: Twitter, The New York Times