OpenAI launches new alignment division to tackle risks of superintelligent AI

OpenAI is opening a new alignment research division focused on developing training techniques to stop superintelligent AI (artificial intelligence that could outthink humans and become misaligned with human ethics) from causing serious harm.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” Jan Leike and Ilya Sutskever wrote in a blog post for OpenAI, the company behind the best-known generative AI large language model, ChatGPT. They added that although superintelligence might seem far off, some experts believe it could arrive this decade.


Cisco brings generative AI to Webex and Cisco Security Cloud

Cisco is adding new generative AI capabilities to its Webex collaboration platform, aimed at increasing productivity through automated meeting and conversation summaries.

The new offerings, announced at the Cisco Live! customer event in Las Vegas on Wednesday, include summarization capabilities that allow users to catch up on missed meetings or focus on the most important action items from a call. The capabilities also extend to Cisco’s asynchronous Vidcast tool and the Webex Contact Center.


Governments worldwide grapple with regulation to rein in AI dangers

Ever since generative AI exploded into public consciousness with the launch of ChatGPT at the end of last year, calls to regulate the technology to stop it from causing undue harm have risen to fever pitch around the world. The stakes are high — just last week, technology leaders signed an open public letter saying that if government officials get it wrong, the consequence could be the extinction of the human race.


ChatGPT creators and others plead to reduce risk of global extinction from their tech

Hundreds of tech industry leaders, academics, and other public figures signed an open letter warning that the evolution of artificial intelligence (AI) could lead to an extinction event and saying that controlling the technology should be a top global priority.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the San Francisco-based Center for AI Safety.

The brief statement in the letter reads almost like a mea culpa for the technology about which its creators are now joining together to warn the world.


Antivirus Security and the Role of Artificial Intelligence (AI)

Credit to Author: Quickheal | Date: Mon, 29 May 2023 05:55:42 +0000

With groundbreaking innovations and intelligent machines revolutionizing industries, the advent of AI is sparking endless possibilities. If you’re…

The post Antivirus Security and the Role of Artificial Intelligence (AI) appeared first on Quick Heal Blog.


OpenAI’s ChatGPT app for iPad, iPhone hits 500K downloads

OpenAI shipped its ChatGPT app for iPads and iPhones just a week ago, but it has already become one of the most popular applications of the past two years, with more than half a million downloads in its first six days. That’s a real achievement, but also a challenge: half a million potential data vulnerabilities.

Not to rest on its laurels, this year’s favorite smart assistant (so far) is now also available in 41 additional nations. There’s little doubt that this has been one of the most successful software/service introductions of all time, but that doesn’t change the inherent risk of these technologies.


G7 leaders warn of AI dangers, say the time to act is now

Leaders of the Group of Seven (G7) nations on Saturday called for the creation of technical standards to keep artificial intelligence (AI) in check, saying AI has outpaced oversight for safety and security.

Meeting in Hiroshima, Japan, the leaders said nations must come together on a common vision and goal of trustworthy AI, even if their solutions vary. But any solution for digital technologies such as AI should be “in line with our shared democratic values,” they said in a statement.


Apple bans employees from using ChatGPT. Should you?


Senate hearings see a clear and present danger from AI — and opportunities

There are vital national interests in advancing artificial intelligence (AI) to streamline public services and automate mundane tasks performed by government employees. But the government lacks both the IT talent and the systems to support those efforts.

“The federal government as a whole continues to face barriers in hiring, managing, and retaining staff with advanced technical skills — the very skills needed to design, develop, deploy, and monitor AI systems,” said Taka Ariga, chief data scientist at the US Government Accountability Office.

Daniel Ho, associate director of the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, agreed, saying that by one estimate the federal government would need to hire about 40,000 IT workers to address the cybersecurity issues posed by AI.


Steve Wozniak: ChatGPT-type tech may threaten us all

Apple co-founder Steve Wozniak has been touring the media to discuss the perils of generative artificial intelligence (AI), warning people to be wary of its negative impacts. Speaking to both the BBC and Fox News, he stressed that AI can misuse personal data, and he raised concerns that it could help scammers generate even more effective scams, from identity fraud to phishing to cracking passwords and beyond.

AI puts a spammer in the works

“We’re getting hit with so much spam, things trying to take over our accounts and our passwords, trying to trick us into them,” he said.
