Google Tightens Its Voice Assistant Rules Amid Privacy Backlash

Credit to Author: Lily Hay Newman | Date: Mon, 23 Sep 2019 07:01:00 +0000

Following Apple, Amazon, and others, Google will put in new safeguards against accidental voice assistant collection and transcription.

After months of revelations that smart speakers get a very human intelligence boost from contractors who transcribe and review customer audio snippets, the mea culpas are flowing in. At the end of August, Apple issued a rare apology about how it had handled human review of audio for Siri. Amazon has made it easier for users to understand how their data might be used and control whether or not it is eligible for review at all. And now Google is joining the fray with a set of privacy announcements about Google Assistant.

Google paused human audio review worldwide in July after reports that a contractor had leaked Dutch-language audio snippets. Early Monday, the company said in a blog post that human review will now resume with expanded options for user data control. The company emphasizes that sending audio for review has never been the default on its devices. But you'll now be prompted to review your settings if your devices are currently opted in to the "Voice & Audio Activity" program that potentially sends your recordings out for vetting.

"We believe you should be able to easily understand how your data is used and why, so you can make choices that are right for you," reads the post, which WIRED reviewed in advance of its publication. "Recently we’ve heard concerns about our process in which language experts can listen to and transcribe audio data from the Google Assistant to help improve speech technology for different languages. It's clear that we fell short of our high standards in making it easy for you to understand how your data is used, and we apologize."

In addition to adding more prompts and information about the Voice & Audio Activity settings, Google says that it has taken steps to improve filters meant to catch and immediately delete recordings made in error—those that are created when a smart speaker mistakenly thinks it has detected its so-called wake word. Those errant recordings have the potential to capture even more sensitive audio than those made when a user is intentionally speaking to a smart assistant. Google also says that it will soon launch a feature that lets users choose how sensitive their Google Assistant devices are to "hearing" their wake word. For example, you might set a device to be more sensitive in a room that often has a lot of background noise, and less sensitive in your bedroom. This will ideally help cut down on false positives.
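As a rough illustration of the trade-off such a sensitivity setting involves, a wake-word detector typically compares a confidence score against a threshold: raising the threshold in a quiet bedroom cuts down on accidental activations, while lowering it in a noisy kitchen keeps the assistant responsive. The sketch below is purely conceptual; the device names, threshold values, and default are invented for illustration and do not reflect Google's actual implementation or any public API.

```python
# Conceptual sketch of per-device wake-word sensitivity.
# Hypothetical values; not Google's actual implementation or API.

# A higher threshold means the device is less sensitive, so fewer
# accidental ("false positive") recordings are made.
DEVICE_THRESHOLDS = {
    "kitchen-speaker": 0.60,   # noisy room: stay responsive
    "bedroom-speaker": 0.85,   # quiet, private room: demand high confidence
}

def should_wake(device: str, confidence: float) -> bool:
    """Wake only if the detector's confidence clears the device's threshold."""
    return confidence >= DEVICE_THRESHOLDS.get(device, 0.75)  # 0.75 = assumed default

# The same marginal detection wakes one device but not the other.
print(should_wake("kitchen-speaker", 0.7))   # True
print(should_wake("bedroom-speaker", 0.7))   # False
```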

Perhaps most importantly, Google says that by the end of the year it will update the Google Assistant policies to "vastly reduce" the amount of audio data the company stores. For accounts that have opted into Voice & Audio Activity, Google will now delete the "majority" of audio data older than a few months. This means that even if you've chosen to share audio snippets with Google, the company still won't keep them forever.

"One of the principles we strive toward is minimizing the amount of data we store, and we’re applying this to the Google Assistant as well," the Google post reads.

For those who understand the current status and capabilities of machine learning, "smart" assistants' reliance on human quality checks may not have come as much of a surprise. But for the many people who thought they were just talking to a machine, it was jarring to learn that real people might have listened in, and just how long those recordings and transcripts could be stored.

About three weeks after Google paused audio snippet review, Apple did the same on August 2. With the release of iOS 13, the company is resuming the practice, but it now employs all reviewers directly rather than contracting with third parties. And in the new mobile operating system, retention of Siri audio recordings is off by default. Amazon, which has perhaps received the most scrutiny for both its Alexa audio review process and its snippet retention policies, offers users the option to opt out of sharing audio clips for transcription. The company launched a dedicated "Alexa Privacy Hub" in May and added the voice command "Alexa, delete everything I said today" soon after.

Microsoft said at the end of August that it no longer uses human review for audio snippets from Skype's translation feature or Cortana on Xbox. Similarly, Facebook said that it would stop human review of audio from Messenger, although it was revealed only last week to have used human transcription for its Portal hardware as well. All the major smart assistant developers maintain that audio snippets are fully anonymized before they go out to reviewers, and most say specifically that less than one percent of total interactions with smart assistants are ever reviewed by a person.

These changes all represent real progress in improving privacy protections for smart assistants. But while more companies have at least acknowledged their mistakes, the improvements have come piecemeal across the industry, leaving open the possibility that some exposures have been overlooked. At least now, consumers are getting more information about what really makes smart assistants so smart, and can use that information to decide how comfortable they really are inviting a sometimes-live mic into their homes.
