OpenAI launches new alignment division to tackle risks of superintelligent AI

OpenAI is opening a new alignment research division focused on developing training techniques to keep superintelligent AI (artificial intelligence that could outthink humans and drift out of alignment with human ethics) from causing serious harm.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” Jan Leike and Ilya Sutskever wrote in a blog post for OpenAI, the company behind the most well-known generative AI large language model, ChatGPT. They added that although superintelligence might seem far off, some experts believe it could arrive this decade.

Lawyers and Incident Response can be a dangerous combo

Credit to Author: eschuman@thecontentfirm.com | Date: Fri, 07 Jul 2023 03:30:00 -0700

Lawyers and C-suite leaders have the same basic mission: protect the enterprise from bad actors who want to do harm. But they often approach the job in such polar opposite ways that they wind up fighting each other instead of working together.

A new academic report from researchers at the University of Edinburgh, the University of Innsbruck, Tufts University, and the University of Minnesota set out to document how stark those differences have become.

“Cyber insurance sends work to a small number of [incident response] firms, drives down the fees paid and appoints lawyers to direct technical investigators,” the report noted. “Lawyers, when directing incident response, often introduce legalistic contractual and communication steps that slow down incident response, advise IR practitioners not to write down remediation steps or to produce formal reports and restrict access to any documents produced.”
