A bug in the Google Home smart speaker allowed installing a backdoor account that could be used to control the device remotely and turn it into a snooping device by accessing its microphone feed.
Researcher Matt Kunze discovered the issue and received $107,500 for responsibly reporting it to Google last year. Earlier this week, the researcher published technical details about the finding and an attack scenario to show how the flaw could be leveraged.
While experimenting with his own Google Home mini speaker, the researcher discovered that new accounts added using the Google Home app could send commands to it remotely via the cloud API.
Using an Nmap scan, the researcher found the port for the local HTTP API of Google Home, so he set up a proxy to capture the encrypted HTTPS traffic, hoping to snatch the user authorization token.
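The discovery step amounts to probing the device for an open TCP port. A minimal sketch of such a probe is below; the helper name `probe` and the example port and address are illustrative assumptions, since the article does not name the exact port the local API listens on.

```python
import socket

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage (address and port are placeholders, not from the article):
# if probe("192.168.1.42", 8443):
#     print("local API port appears open")
```

In practice Nmap automates exactly this kind of sweep across all ports; the sketch above only shows the underlying check for a single port.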
Captured HTTPS (encrypted) traffic (downrightnifty.me)
The researcher discovered that adding a new user to the target device is a two-step process that requires the device name, certificate, and “cloud ID” from its local API. With this info, he could send a link request to the Google server.
To add a rogue user to a target Google Home device, the analyst implemented the link process in a Python script that automated the exfiltration of the local device data and reproduced the linking request.
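The two-step flow described above can be sketched as: pull the three values from the device's local API response, then reproduce the linking request against Google's server. Everything specific in this sketch is an assumption, since the article does not give them: the JSON key names, the function names, and the placeholder URL (no live request is built against any real endpoint here).

```python
import json
from urllib.request import Request  # stdlib only; nothing is sent over the network

def extract_link_fields(local_info: dict) -> dict:
    """Step 1: pull the device name, certificate, and cloud ID out of the
    JSON returned by the device's local API. Key names are hypothetical."""
    return {
        "name": local_info["name"],
        "certificate": local_info["certificate"],
        "cloud_id": local_info["cloud_id"],
    }

def build_link_request(fields: dict, attacker_token: str) -> Request:
    """Step 2: assemble a linking request carrying the device data.
    The URL and payload layout are illustrative placeholders."""
    body = json.dumps({"device": fields}).encode()
    return Request(
        "https://example.invalid/link",  # placeholder, not the real endpoint
        data=body,
        headers={
            "Authorization": f"Bearer {attacker_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The researcher's actual Python script automated both steps end to end; this sketch only shows the shape of the data flowing between them.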
The linking request that carries the device ID data (downrightnifty.me)
The researcher’s blog post summarizes the attack step by step.
The researcher published three PoCs on GitHub for the actions above. However, these should not work on Google Home devices running the latest firmware version.
The PoCs go beyond merely planting a rogue user: they enable spying over the microphone, making arbitrary HTTP requests on the victim’s network, and reading/writing arbitrary files on the device.
Having a rogue account linked to the target device makes it possible to perform actions via the Google Home speaker, such as controlling smart switches, making online purchases, remotely unlocking doors and vehicles, or stealthily brute-forcing the user’s PIN for smart locks.
More worryingly, the researcher found a way to abuse the “call [phone number]” command by adding it to a malicious routine that would activate the microphone at a specified time, call the attacker’s number, and send the live microphone feed.
The malicious routine that captures mic audio (downrightnifty.me)
During the call, the device’s LED turns blue, which is the only indication that any activity is taking place. A victim who notices it may assume the device is updating its firmware, since the standard indicator for microphone activation is a pulsating LED, which does not appear during calls.
Finally, it’s also possible to play media on the compromised smart speaker, rename it, force a reboot, force it to forget stored Wi-Fi networks, force new Bluetooth or Wi-Fi pairings, and more.
Kunze discovered the issues in January 2021 and sent additional details and PoCs in March 2021. Google fixed all problems in April 2021.
The patch includes a new invite-based system for handling account links, which blocks any linking attempts not added through Home.
Deauthenticating a Google Home device is still possible, but this can no longer be used to link a new account, and the local API that leaked the basic device data is also inaccessible.
As for the “call [phone number]” command, Google has added a protection to prevent its remote initiation through routines.
It’s worth noting that Google Home was released in 2016, scheduled routines were added in 2018, and the Local Home SDK was introduced in 2020, so an attacker finding the issue before April 2021 would have had plenty of time to take advantage.