Big Tech Needs to Use Hazardous Materials Warnings

By Stephen Nowicki | Sat, 10 Aug 2019

The technology sector has a hazardous materials problem, beyond the mountains of electronic waste it generates. More immediately, Big Tech fails to warn users when its products and services are hazardous. Users are long overdue for a clear, concise rating system of privacy and security risks. Fortunately, tech can learn from another industry that knows how to alert consumers about the dangers of improperly storing and leaking toxic products: the chemical industry.

Nearly sixty years ago, the chemical industry and its regulators realized that simple communication of hazards is critical to safety. Material Safety Data Sheets, the chemical equivalent of technology user terms and conditions, have offered descriptions of those hazards since the early 1900s. But as the industry evolved, it became clear, sometimes tragically, that end users rarely read these lengthy technical volumes. A quick reference was required.

Stephen Nowicki is IMS Manager at Kemper System America, Inc., and a member of the Erie County hazmat response team.

Enter the fire diamond, the now ubiquitous, universally understood symbol of chemical safety. You’ve seen it on propane tanks and chemical containers and in laboratories: a cartoon rhombus divided into colored quadrants, each filled with a number between 0 and 4 indicating a substance’s toxicity (blue), flammability (red), and reactivity (yellow), plus a white quadrant for special hazards. Introduced in 1960 by the National Fire Protection Association, the diamond, officially called NFPA 704, is the standard for communicating the most basic and essential safety information about hazardous materials in the United States. Even if users never read the safety data sheet, they are greeted by this bright, unavoidable summary of material hazards every time they look at the container.

Whereas the chemical industry and its regulators have worked to ensure clearer warnings, the tech industry has worked to make it increasingly difficult for consumers to know what hazards their products pose (hello, FaceApp). As technology companies use and misuse the personal data they collect in increasingly sophisticated ways, user agreements have only become longer and more byzantine. Facebook, for example, has terms of service and related policies that stretch for over 35,000 words, about as long as The Lion, the Witch and the Wardrobe, and as bewildering as Narnia. Buried within are clauses with significant privacy implications, such as granting Facebook a “non-exclusive, transferable, sub-licensable, royalty-free, and worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content.”

License agreements, like toxicology studies, provide valuable information, but they’re of little use when users need to quickly know what they’re getting themselves into. When emergency personnel are considering using a chemical product, they immediately need to know: Will it explode? Will it poison me? Will it burn me? Right away, the fire diamond answers. When considering a new app or service, tech users have similar questions: How much of a security risk is this? What data is collected and stored? Do I have any control? Will it poison me? Will it burn me? To find those answers, a user often first has to jump into the fire.

Beyond the self-interest of entrenched tech industry players, there is no excuse for requiring users to read dozens of pages of dense text to learn the dangers of a product when that information can be condensed into a few numbers and color-coded blocks. If users are to adopt new services and technologies rapidly, and to bear responsibility for understanding the license agreements that bind them, then a transparent, standardized method of hazard communication is required.

Who should administer this? It could be a mandatory regulatory framework (from the FTC or the Consumer Product Safety Commission) or a voluntary independent rating system created by accreditation bodies or industry watchdogs like the Electronic Frontier Foundation.

What should it look like? There are myriad design options, but one would be to create a tech safety diamond. Instead of flagging physical harm, this warning system would summarize the key aspects of data collection, user control, data use, and data handling, letting users know whether a service is worth the risk. (A rough, machine-readable sketch of such a label follows the four ratings below.)

Blue: For data collection, the technology equivalent of toxicity, a low rating would indicate that the service gathers only names, IP addresses, or other basic information, while a high rating would mark the hoarding of deeply personal and potentially dangerous information like voice recordings or detailed location data.

Yellow: User control, the parallel to reactivity, is perhaps the simplest to rate: once a service has my data, can it be fully deleted, and if not, to what extent will it persist?

Red: Data use, or flammability, is extremely difficult to summarize in a single number, but low ratings would correspond to in-house use for the service’s essential functions, while high ratings would indicate aggressive third-party sharing, strong intellectual property claims on user content, or the use of data to sculpt user behavior.

White: Data handling would range from secure storage and encryption (0) to handing data off to unaccountable third parties (4).
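
To make the idea concrete, here is a minimal sketch of what such a label might look like as a machine-readable record. Everything in it, from the class name to the field semantics, is hypothetical; it simply mirrors the four quadrants and 0-to-4 scale described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TechSafetyDiamond:
    """Hypothetical four-quadrant privacy label; not any real standard."""
    data_collection: int  # blue: 0 = basic identifiers only, 4 = deeply personal data
    data_use: int         # red: 0 = in-house, essential functions; 4 = aggressive sharing
    user_control: int     # yellow: 0 = fully deletable, 4 = data persists indefinitely
    data_handling: int    # white: 0 = encrypted, secure storage; 4 = unaccountable third parties

    def __post_init__(self):
        # Enforce the 0-to-4 scale the quadrants share with NFPA 704.
        for name, value in vars(self).items():
            if not 0 <= value <= 4:
                raise ValueError(f"{name} must be between 0 and 4, got {value}")

    def summary(self) -> str:
        # The one-line label a user could scan before tapping "install."
        return (f"collection {self.data_collection} | use {self.data_use} | "
                f"control {self.user_control} | handling {self.data_handling}")

# A data-hoarding, data-sharing service earns the digital skull and crossbones:
label = TechSafetyDiamond(data_collection=4, data_use=4, user_control=3, data_handling=4)
print(label.summary())
```

A regulator or app store could require a record like this alongside the license agreement and render it as a colored badge, the way the fire diamond renders a safety data sheet.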

Clear warnings would empower users to make better-informed decisions. With them, we wouldn’t reconsider only after learning that the next phone company and app had sold our location data to the highest bidder, or that an insecure IoT device had let bad actors peer into our bedrooms. And perhaps companies would think twice before offering another service that would be labeled with the equivalent of a skull and crossbones.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here. Submit an op-ed at opinion@wired.com.
