The EU Wants Big Tech to Scan Your Private Chats for Child Abuse

By Matt Burgess | Wed, 11 May 2022

All of your WhatsApp photos, iMessage texts, and Snapchat videos could be scanned to check for child sexual abuse images and videos under newly proposed European rules. The plans, experts warn, may undermine the end-to-end encryption that protects billions of messages sent every day and hamper people’s online privacy.

The European Commission today revealed long-awaited proposals aimed at tackling the huge volumes of child sexual abuse material, also known as CSAM, uploaded to the web each year. The proposed law creates a new EU Centre to deal with child abuse content and introduces obligations for tech companies to “detect, report, block and remove” CSAM from their platforms. Announcing the law, Europe’s commissioner for home affairs, Ylva Johansson, said tech companies have failed to voluntarily remove abuse content; the proposal has been welcomed by child protection and safety groups.

Under the plans, tech companies—ranging from web hosting services to messaging platforms—can be ordered to “detect” both new and previously discovered CSAM, as well as potential instances of “grooming.” The detection could take place in chat messages, files uploaded to online services, or on websites that host abusive material. The plans echo an effort by Apple last year to scan photos on people’s iPhones for abusive content before it was uploaded to iCloud. Apple paused its efforts after a widespread backlash.

If passed, the European legislation would require tech companies to conduct risk assessments for their services to assess the levels of CSAM on their platforms and their existing prevention measures. If necessary, regulators or courts may then issue “detection orders” that say tech companies must start “installing and operating technologies” to detect CSAM. These detection orders would be issued for specific periods of time. The draft legislation doesn’t specify what technologies must be installed or how they will operate—these will be vetted by the new EU Centre—but says they should be used even when end-to-end encryption is in place.

The European proposal to scan people’s messages has been met with frustration from civil rights groups and security experts, who say it’s likely to undermine the end-to-end encryption that’s become the default on messaging apps such as iMessage, WhatsApp, and Signal. “Incredibly disappointing to see a proposed EU regulation on the internet fail to protect end-to-end encryption,” WhatsApp head Will Cathcart tweeted. “This proposal would force companies to scan every person's messages and put EU citizens' privacy and security at serious risk.” Any system that weakens end-to-end encryption could be abused or expanded to look for other types of content, researchers say.

“You either have E2EE or you don’t,” says Alan Woodward, a cybersecurity professor at the University of Surrey. End-to-end encryption protects people’s privacy and security by ensuring that only the sender and receiver of a message can see its content. For example, Meta, the owner of WhatsApp, has no way to read your messages or mine their contents for data. The EU’s draft regulation says solutions shouldn’t weaken encryption and claims to include safeguards to ensure this doesn’t happen, but it doesn’t specify how that would work.
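
As a rough illustration of that property, the sketch below uses the open-source PyNaCl library (an assumption for this example; WhatsApp and iMessage use their own protocols, such as the Signal protocol) to show why a server that only relays ciphertext cannot read a message: the private keys needed to decrypt it never leave the two endpoints.

```python
# Minimal end-to-end encryption sketch with PyNaCl (pip install pynacl).
# Illustrative only: real messengers layer far more on top (key exchange,
# forward secrecy, group chats), but the core property is the same.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"See you at 8?")

# A relay server sees only `ciphertext` -- opaque bytes it cannot decrypt,
# because it holds neither private key.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"See you at 8?"
```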

“That being so, there is only one logical solution: client-side scanning where the content is examined when it is decrypted on the user's device for them to view/read,” Woodward says. Last year, Apple announced it would introduce client-side scanning—scanning done on people’s iPhones rather than Apple’s servers—to check photos for known CSAM being uploaded to iCloud. The move sparked protests from civil rights groups and even Edward Snowden about the potential for surveillance, leading Apple to pause its plans a month after initially announcing them. (Apple declined to comment for this story.)
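
The sketch below is a schematic of the client-side scanning idea Woodward describes, not Apple’s or any other company’s actual system; the function and the digest list are hypothetical, and a plain SHA-256 match is used only for simplicity (deployed systems rely on perceptual fingerprints, discussed below).

```python
# Schematic of client-side scanning: the check runs on the user's device,
# against plaintext, before the content is encrypted and uploaded.
# Hypothetical example; SHA-256 is used purely for simplicity.
import hashlib

# A blocklist of digests of known abuse images, pushed to the device (placeholder value).
KNOWN_BAD_DIGESTS = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def allowed_to_upload(photo_bytes: bytes) -> bool:
    """Return True if the photo may be uploaded, False if it matches the blocklist."""
    digest = hashlib.sha256(photo_bytes).hexdigest()
    return digest not in KNOWN_BAD_DIGESTS

photo = b"raw bytes of a holiday photo"
if allowed_to_upload(photo):
    pass  # encrypt and upload as normal
else:
    pass  # block the upload and/or file a report, depending on platform policy
```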

For tech companies, detecting CSAM on their platforms and scanning some communications is not new. Companies operating in the United States are required to report any CSAM they find or that is reported to them by users to the National Center for Missing and Exploited Children (NCMEC), a US-based nonprofit. More than 29 million reports, containing 39 million images and 44 million videos, were made to NCMEC last year alone. Under the new EU rules, the EU Centre will receive CSAM reports from tech companies.

“A lot of companies are not doing the detection today,” Johansson said at a press conference introducing the legislation. “This is not a proposal on encryption, this is a proposal on child sexual abuse material,” she said, adding that the law is “not about reading communication” but about detecting illegal abuse content.

At the moment, tech companies find CSAM online in different ways. And the amount of CSAM found is increasing as tech companies get better at detecting and reporting abuse—although some are much better than others. In some cases, AI is being used to hunt down previously unseen CSAM. Duplicates of existing abuse photos and videos can be detected using “hashing systems,” where abuse content is assigned a fingerprint that can be spotted when it’s uploaded to the web again. More than 200 companies, from Google to Apple, use Microsoft's PhotoDNA hashing system to scan millions of files shared online. However, to do this, systems need to have access to the messages and files people are sending, which is not possible when end-to-end encryption is in place.
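
PhotoDNA itself is proprietary, so the sketch below uses the open-source imagehash library purely as an illustrative stand-in (the file names are placeholders): perceptually similar images produce similar fingerprints, so a re-encoded or lightly edited copy can still be matched against a database of hashes of known abuse imagery.

```python
# Fingerprint-style duplicate detection with perceptual hashing
# (pip install imagehash pillow). Illustrative stand-in for systems
# like PhotoDNA; the file names below are placeholders.
from PIL import Image
import imagehash

# Fingerprints of previously identified images, as a platform might store them.
known_fingerprints = [imagehash.phash(Image.open("known_image.png"))]

def matches_known_content(path: str, threshold: int = 8) -> bool:
    """Return True if the image at `path` is a likely copy of a known image."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two hashes gives the Hamming distance between fingerprints;
    # a small distance means the images are near-duplicates.
    return any(candidate - known <= threshold for known in known_fingerprints)

print(matches_known_content("uploaded_image.png"))
```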

“In addition to detecting CSAM, obligations will exist to detect the solicitation of children (‘grooming’), which can only mean that conversations will need to be read 24/7,” says Diego Naranjo, head of policy at the civil liberties group European Digital Rights. “This is a disaster for confidentiality of communications. Companies will be asked (via detection orders) or incentivized (via risk mitigation measures) to offer less secure services for everyone if they want to comply with these obligations.”

Discussions about protecting children online, and how that can be done while preserving end-to-end encryption, are hugely complex and technical, and are entangled with the horror of the crimes committed against vulnerable young people. Research from Unicef, the UN’s children’s fund, published in 2020 says encryption is needed to protect people’s privacy—including children’s—but adds that it “impedes” efforts to remove content and identify the people sharing it. For years, law enforcement agencies around the world have pushed for ways to bypass or weaken encryption. “I’m not saying privacy at any cost, and I think we can all agree child abuse is abhorrent,” Woodward says, “but there needs to be a proper, public, dispassionate debate about whether the risks of what might emerge are worth the true effectiveness in fighting child abuse.”

Increasingly, researchers and tech companies have been focusing on safety tools that can exist alongside end-to-end encryption. Proposals include using metadata from encrypted messages—the who, how, what, and why of messages, not their content—to analyze people’s behavior and potentially spot criminality. One recent report by the nonprofit Business for Social Responsibility (BSR), which was commissioned by Meta, found that end-to-end encryption is an overwhelmingly positive force for upholding people’s human rights. It made 45 recommendations for how encryption and safety can go together without requiring access to people’s communications. When the report was published in April, Lindsey Andersen, BSR’s associate director for human rights, told WIRED: “Contrary to popular belief, there actually is a lot that can be done even without access to messages.”
