Google voice Assistant gets new privacy ‘undo’ commands
Credit to Author: John E Dunn | Date: Thu, 09 Jan 2020 12:02:58 +0000
Google’s controversial voice Assistant is getting a series of new commands designed to work like privacy-centric ‘undo’ buttons.
Assistant, of course, is inside an estimated one billion devices, including Android smartphones, countless brands of home smart speaker, and TV sets based on the Android OS.
But these are only the pioneers for an expanding AI empire. This year Assistant should start popping up in headphones, soundbars, ‘smart’ computer displays and, via Android Auto, more motor cars.
If this sounds oppressive, you could be in for a tough few years, because Assistant (and rivals Alexa, Siri, Cortana, and Samsung's Bixby) could soon be in anything and everything a human being might reasonably expect to perform a task.
And yet 2019 was the year Google finally got the message that the system’s hidden risks might quickly become the sort of privacy itch that is hard to scratch if it’s not careful.
Those risks included controversies over who might be listening to recordings without users having given consent. Others have likened Assistant to a poorly regulated, privacy-killing genie that Google won't voluntarily put back in the bottle.
Google, stop that
Google hopes its new commands will counter that impression by offering some control over what Assistant pays attention to.
Right now, Assistant activates for English speakers when it hears the commands "Hey Google" or "OK Google."
The problem is that Assistant can activate when it mishears a similar string of words, which leaves users unsure of what it might be recording, and when, before sending that speech to Google's AI cloud.
Soon (presumably after any necessary updates) users will be able to calm their paranoia with the new command “Hey Google, that wasn’t for you.”
Users can already manually delete interactions with Assistant through Voice & Audio Activity, but this will now be possible using the command, “Hey Google, delete everything I said to you this week.”
Other commands include "Hey Google, are you saving my audio data?" (which brings up a privacy FAQ on devices with a screen) and "How do you keep my information private?"
These look like a logical extension of Google's revamped Assistant privacy policies, announced last September. Indeed, it's not hard to imagine the range of privacy questions users can ask Assistant expanding further in time.
Plain sailing?
As helpful as this development might seem, it's not exactly a great advert for privacy that users must tell a system not to do something many probably didn't realise it could do in the first place. And that's before factoring in all the rival voice AI systems people might also be using.
The bigger problem for Google is that privacy isn't just about voice; it extends to the wider environment of IoT devices enabled by the platform, of which Assistant is only one part.
Take the company's Home Hub system, which earlier this week had to disconnect Xiaomi IP cameras after a user discovered his hub was being fed images from other people's cameras.
This is how privacy, and the crises it occasionally throws up, tends to work. Using IoT, and interfaces such as voice control, involves trade-offs, and some now suspect those compromises might be inviting trouble in the long run.