Google Bard launches in EU, overcoming data privacy concerns in the region

Google has announced it is making its Bard chatbot available in the EU and Brazil, five months after the company opened it up for early access. Until now, residents of EU countries had been unable to access the company’s ChatGPT rival due to data privacy concerns.

In addition to making Bard more widely available, Google has also introduced a host of new features, including text-to-speech capabilities, shareable Bard conversation links, Google Lens compatibility, and the ability to customize Bard responses — for example, adjusting for tone and style.

Read more

EU-US Data Privacy Framework to face serious legal challenges, experts say

Nine months after US President Joe Biden signed an executive order that updated rules for the transfer of data between the US and the EU, the European Commission this week adopted the EU-US Data Privacy Framework. Industry experts, however, say it will be challenged before the Court of Justice of the European Union (CJEU) and stands a good chance of being struck down.

The move comes three years after the CJEU struck down the previous EU-US data sharing agreement, known as Privacy Shield, on the grounds that the US does not provide adequate protection for personal data, particularly in relation to state surveillance. An earlier data sharing pact, dubbed Safe Harbor, was likewise struck down by the CJEU in 2015.

Read more

Governments worldwide grapple with regulation to rein in AI dangers

Ever since generative AI exploded into public consciousness with the launch of ChatGPT at the end of last year, calls to regulate the technology to stop it from causing undue harm have risen to a fever pitch around the world. The stakes are high — just last week, technology leaders signed an open letter warning that if government officials get it wrong, the consequence could be the extinction of the human race.

Read more

Antivirus Security and the Role of Artificial Intelligence (AI)

Credit to Author: Quickheal | Date: Mon, 29 May 2023 05:55:42 +0000

With groundbreaking innovations and intelligent machines revolutionizing industries, the advent of AI is sparking endless possibilities. If you’re…

Read more

Google killer, killed: Neeva and the limits of privacy as a philosophy

Well, that was fast.

Just under two years after splashing into the world with all sorts of provocative promises, a search startup that was set on convincing people to pay for a privacy-centric Google alternative is shutting its doors.

Neeva, founded by a pair of former Google executives and the subject of intense fascination within the tech universe, quietly announced over the weekend that its service would be winding down next week.

Read more

Senate hearings see a clear and present danger from AI — and opportunities

There are vital national interests in advancing artificial intelligence (AI) to streamline public services and automate mundane tasks performed by government employees. But the government lacks both the IT talent and the systems needed to support those efforts.

“The federal government as a whole continues to face barriers in hiring, managing, and retaining staff with advanced technical skills — the very skills needed to design, develop, deploy, and monitor AI systems,” said Taka Ariga, chief data scientist at the US Government Accountability Office.

Daniel Ho, associate director of the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, agreed, saying that by one estimate the federal government would need to hire about 40,000 IT workers to address the cybersecurity issues posed by AI.

Read more

Q&A: At MIT event, Tom Siebel sees ‘terrifying’ consequences from using AI

Speakers ranging from artificial intelligence (AI) developers to law firms grappled this week with questions about the efficacy and ethics of AI during MIT Technology Review’s EmTech Digital conference. Among those who had a somewhat alarmist view of the technology (and of regulatory efforts to rein it in) was Tom Siebel, CEO of C3 AI and founder of CRM vendor Siebel Systems.

Siebel was on hand to talk about how businesses can prepare for an incoming wave of AI regulations, but in his comments Tuesday he touched on various facets of the debate over generative AI, including the ethics of using it, how it could evolve, and why it could be dangerous.

Read more

ChatGPT learns to forget: OpenAI implements data privacy controls

OpenAI, the Microsoft-backed firm behind the groundbreaking ChatGPT generative AI system, announced this week that it would allow users to turn off the chat history feature for its flagship chatbot, in what’s being seen as a partial answer to critics concerned about the security of data provided to ChatGPT.

The “history disabled” feature means that conversations marked as such won’t be used to train OpenAI’s underlying models and won’t be displayed in the history sidebar. They will still be stored on the company’s servers, but will be reviewed only as needed to monitor for abuse, and will be deleted after 30 days.

Read more