Steve Wozniak: ChatGPT-type tech may threaten us all

Apple co-founder Steve Wozniak has been touring the media to discuss the perils of generative artificial intelligence (AI), warning people to be wary of its negative impacts. Speaking to both the BBC and Fox News, he stressed that AI can misuse personal data, and raised concerns it could help scammers generate even more effective scams, from identity fraud to phishing to cracking passwords and beyond.

AI puts a spammer in the works

“We’re getting hit with so much spam, things trying to take over our accounts and our passwords, trying to trick us into them,” he said.

Read more

Q&A: At MIT event, Tom Siebel sees ‘terrifying’ consequences from using AI

Speakers ranging from artificial intelligence (AI) developers to law firms grappled this week with questions about the efficacy and ethics of AI during MIT Technology Review’s EmTech Digital conference. Among those who had a somewhat alarmist view of the technology (and regulatory efforts to rein it in) was Tom Siebel, CEO of C3 AI and founder of CRM vendor Siebel Systems.

Siebel was on hand to talk about how businesses can prepare for an incoming wave of AI regulations, but in his comments Tuesday he touched on various facets of the debate over generative AI, including the ethics of using it, how it could evolve, and why it could be dangerous.

Read more

Generative AI is about to destroy your company. Will you stop it?

Credit to author: eschuman@thecontentfirm.com | Date: Mon, 01 May 2023 10:21:00 -0700

As the debate rages about how much IT admins and CISOs should use generative AI — especially for coding — SailPoint CISO Rex Booth sees far more danger than benefit, especially given the industry’s less-than-stellar history of making the right security decisions.

Google has already decided to publicly leverage generative AI in its searches, a move that is freaking out a wide range of AI specialists, including a senior manager of AI at Google itself.

Read more

As Europeans strike first to rein in AI, the US follows

A proposed set of rules by the European Union would, among other things, require makers of generative AI tools such as ChatGPT to disclose any copyrighted material used by the technology platforms to create content of any kind.

A new draft of the European Parliament’s legislation, a copy of which was obtained by The Wall Street Journal, would allow the original creators of content used by generative AI applications to share in any profits that result.

Read more

ChatGPT learns to forget: OpenAI implements data privacy controls

OpenAI, the Microsoft-backed firm behind the groundbreaking ChatGPT generative AI system, announced this week that it would allow users to turn off the chat history feature for its flagship chatbot, in what’s being seen as a partial answer to critics concerned about the security of data provided to ChatGPT.

The “history disabled” feature means that conversations marked as such won’t be used to train OpenAI’s underlying models, and won’t be displayed in the history sidebar. They will still be stored on the company’s servers, but will only be reviewed on an as-needed basis for abuse, and will be deleted after 30 days.

Read more

Three issues with generative AI still need to be solved

Disclosure: Qualcomm and Microsoft are clients of the author.

Generative AI is spreading like a virus across the tech landscape. It’s gone from being virtually unheard of a year ago to being one of, if not the, top trending technologies today. As with any technology, there are issues that tend to surface with rapid growth, and generative AI is no exception.

I expect three main problems to emerge before the end of the year that few people are talking about today.

The critical need for a hybrid solution

Generative AI uses massive language models, it’s processor-intensive, and it’s rapidly becoming as ubiquitous as browsers. This is a problem because existing, centralized datacenters aren’t structured to handle this kind of load. They are I/O-constrained, processor-constrained, database-constrained, cost-constrained, and size-constrained, making a massive increase in centralized capacity unlikely in the near term, even though the need for this capacity is going vertical. 

Read more

EU privacy regulators to create task force to investigate ChatGPT

The European Data Protection Board (EDPB) plans to launch a dedicated task force to investigate ChatGPT after a number of European privacy watchdogs raised concerns about whether the technology is compliant with the EU’s General Data Protection Regulation (GDPR).

Europe’s national privacy regulators said on Thursday that the decision came following discussions about recent enforcement action undertaken by the Italian data protection authority against OpenAI regarding its ChatGPT service.

Read more

Tech bigwigs: Hit the brakes on AI rollouts

More than 1,100 technology luminaries, leaders, and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.

In an open letter published by Future of Life Institute, a nonprofit organization with the mission to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk joined other signatories in agreeing AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

Read more