GenAI is highly inaccurate for business use — and getting more opaque

Large language models (LLMs), the algorithmic platforms on which generative AI (genAI) tools like ChatGPT are built, are highly inaccurate when connected to corporate databases and are becoming less transparent, according to two studies.

One study by Stanford University showed that as LLMs continue to ingest massive amounts of information and grow in size, the provenance of the data they use is becoming harder to trace. That, in turn, makes it difficult for businesses to know whether they can safely build applications on commercial genAI foundation models and for academics to rely on them for research.

The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI

Credit to Author: gallagherseanm | Date: Mon, 27 Nov 2023 11:30:18 +0000

Generative artificial intelligence technologies such as OpenAI’s ChatGPT and DALL-E have created a great deal of disruption across much of our digital lives. Creating credible text, images and even audio, these AI tools can be used for both good and ill. That includes their application in the cybersecurity space. While Sophos AI has been working […]

Q&A: Cisco CIO sees AI embedded in every product and process

Less than a year after OpenAI’s ChatGPT was released to the public, Cisco Systems is already well into the process of embedding generative artificial intelligence (genAI) into its entire product portfolio and internal backend systems.

The plan is to use it in virtually every corner of the business, from automating network functions and monitoring security to creating new software products.

But Cisco’s CIO, Fletcher Previn, is also dealing with a scarcity of IT talent to create and tweak large language model (LLM) platforms for domain-specific AI applications. As a result, IT workers are learning as they go, while discovering new places and ways the ever-evolving technology can create value.

What exactly will the UK government's global AI Safety Summit achieve?

From tomorrow, the UK government is hosting the first global AI Safety Summit, bringing together about 100 people from industry and government to develop a shared understanding of the emerging risks of leading-edge AI while unlocking its benefits. 

The event will be held at Bletchley Park, the site in Milton Keynes that became home to Britain's code breakers during World War II and saw the development of Colossus, the world's first programmable digital electronic computer, used to break the German High Command's Lorenz cipher; the codebreaking work at Bletchley is credited with shortening the war by at least two years.

Biden lays down the law on AI

In a sweeping executive order issued Monday, US President Joseph R. Biden Jr. established a comprehensive set of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).

Spanning more than two dozen initiatives, Biden's "Safe, Secure, and Trustworthy Artificial Intelligence" order was a long time coming, according to many observers who have been watching the AI space, especially given the rise of generative AI (genAI) in the past year.

‘Data poisoning’ anti-AI theft tools emerge — but are they ethical?

Technologists are helping artists fight back against what they see as intellectual property (IP) theft by generative artificial intelligence (genAI) tools whose training algorithms automatically scrape the internet and other sources for content.

The fight over what constitutes fair use of content found online is at the heart of an ongoing court battle, one that goes beyond artwork to whether genAI companies like Microsoft and its partner, OpenAI, can incorporate software code and other published content into their models.

White House to issue AI rules for federal employees

After earlier efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden Administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.

The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.

On Tuesday night, the White House sent invitations to a "Safe, Secure, and Trustworthy Artificial Intelligence" event to be hosted Monday by President Joseph R. Biden Jr., according to The Washington Post.

Q&A: How one CSO secured his environment from generative AI risks

In February, travel and expense management company Navan (formerly TripActions) chose to go all-in on generative AI technology for a wide range of business and customer-assistance uses.

The Palo Alto, CA-based company turned to OpenAI's ChatGPT and the GitHub Copilot coding assistant to write, test, and fix code; the decision has boosted Navan's operational efficiency and reduced overhead costs.

GenAI tools have also been used to build a conversational experience for Ava, the company's client-facing virtual assistant. The travel and expense chatbot answers customer questions, offers a conversational booking experience, and can surface data for business travelers, such as company travel spend, volume, and granular carbon emissions details.

ServiceNow embeds AI-powered customer-assist features throughout products
