Biden lays down the law on AI

In a sweeping executive order, US President Joseph R. Biden Jr. on Monday established a comprehensive set of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).

Encompassing more than two dozen initiatives, Biden’s “Safe, Secure, and Trustworthy Artificial Intelligence” order was a long time coming, according to many observers who’ve been watching the AI space, especially given the rise of generative AI (genAI) in the past year.

‘Data poisoning’ anti-AI theft tools emerge — but are they ethical?

Technologists are helping artists fight back against what they see as intellectual property (IP) theft by generative artificial intelligence (genAI) tools whose training algorithms automatically scrape the internet and other sources for content.

The question of what constitutes fair use of content found online is at the heart of an ongoing court battle. The fight goes beyond artwork to whether genAI companies like Microsoft and its partner, OpenAI, can incorporate software code and other published content into their models.
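The article doesn’t detail how these protection tools work; broadly, they add subtle perturbations to artwork before it is posted, so that scraped copies mislead a model’s training. Below is a toy Python sketch of the perturbation idea only; real tools compute carefully targeted adversarial changes rather than the random noise used here, and the file names are hypothetical.

```python
# Toy illustration of image perturbation; real data-poisoning tools
# compute targeted adversarial perturbations, not random noise.
import numpy as np
from PIL import Image

def perturb(path_in: str, path_out: str, epsilon: int = 4) -> None:
    """Add a small, visually subtle perturbation to an image."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

perturb("artwork.png", "artwork_protected.png")  # hypothetical file names
```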

White House to issue AI rules for federal employees

After earlier efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.

The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US AI development efforts.

On Tuesday night, the White House sent invitations for a “Safe, Secure, and Trustworthy Artificial Intelligence” event on Monday hosted by President Joseph R. Biden Jr., according to The Washington Post.

Google to block Bard conversations from being indexed on Search

Alphabet-owned Google is working on blocking user conversations with its new Bard generative AI assistant from being indexed by its Search platform or showing up in search results.

“Bard allows people to share chats, if they choose. We also don’t intend for these shared chats to be indexed by Google Search. We’re working on blocking them from being indexed now,” Google’s Search Liaison account posted on Twitter, now X.

The internet search giant was responding to an SEO consultant who pointed out on Twitter that user conversations with Bard were being indexed on Google Search.
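Google hasn’t said how the block will be implemented. In general, a site can keep pages out of search indexes with a robots.txt rule or a noindex directive; a minimal sketch in Python/Flask, with a hypothetical shared-chat route, might look like this:

```python
# Minimal sketch of serving shared pages with a "noindex" directive.
# The /share/<chat_id> route is hypothetical; Google has not published
# how it will block Bard chats from being indexed.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/share/<chat_id>")
def shared_chat(chat_id: str):
    resp = make_response(f"<html><body>Shared chat {chat_id}</body></html>")
    # Standard header telling crawlers not to index or follow this page.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp

if __name__ == "__main__":
    app.run()
```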

Q&A: How one CSO secured his environment from generative AI risks

In February, travel and expense management company Navan (formerly TripActions) chose to go all in on generative AI technology for myriad business and customer-assistance uses.

The Palo Alto, Calif.-based company turned to OpenAI’s ChatGPT and GitHub’s Copilot coding assistant to write, test, and fix code; the decision has boosted Navan’s operational efficiency and reduced overhead costs.

GenAI tools have also been used to build a conversational experience for the company’s client-facing virtual assistant, Ava. The travel and expense chatbot answers customer questions and offers a conversational booking experience. It can also surface data for business travelers, such as company travel spend, volume, and granular carbon emissions details.
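Navan hasn’t published Ava’s implementation. As a rough illustration only, a minimal assistant loop built on OpenAI’s chat completions API could look like the sketch below; the system prompt and model choice are assumptions, not Navan’s.

```python
# Minimal sketch of a genAI travel-assistant loop using the OpenAI
# Python SDK (v1+); Navan's actual Ava implementation is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system",
            "content": "You are a travel and expense assistant. "
                       "Answer booking and spend questions concisely."}]

while True:
    user_msg = input("You: ")
    if user_msg.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_msg})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Ava:", answer)
```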

Jamf: Generative AI is coming to an Apple IT admin near you

Imagine running fleets of iPhones that alert you when unexpected security-related incidents take place, or when otherwise legitimate service requests arrive from devices at an unexpected time or location. Imagine management and security software that not only identifies these kinds of anomalies but also gives you useful advice to help remediate the problem.

This, and more, is the kind of protection Jamf hopes to deliver using generative AI tools.

Generative IT for Apple admins

Jamf believes generative AI can be a big benefit to tech support and IT administration, and it talked about its efforts at the end of an extensive Jamf Nation User Conference (JNUC) keynote. Akash Kamath, the company’s senior vice president of engineering, explained that just as the Mac made computing personal, genAI makes AI personal.
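Jamf hasn’t detailed how its anomaly checks would work. Conceptually, flagging a service request that arrives outside a device’s usual pattern can start with a simple per-device baseline, as in this toy sketch (all names and thresholds are hypothetical):

```python
# Toy sketch of out-of-pattern request flagging; Jamf has not published
# its approach, and this baseline model is purely illustrative.
from dataclasses import dataclass

@dataclass
class DeviceBaseline:
    usual_hours: range          # hours of day requests normally arrive
    usual_countries: set[str]   # locations requests normally come from

def is_anomalous(baseline: DeviceBaseline, hour: int, country: str) -> bool:
    """Flag a service request arriving at an unexpected time or place."""
    return (hour not in baseline.usual_hours
            or country not in baseline.usual_countries)

baseline = DeviceBaseline(usual_hours=range(8, 19), usual_countries={"US"})
print(is_anomalous(baseline, hour=3, country="RO"))   # True: unusual time and place
print(is_anomalous(baseline, hour=10, country="US"))  # False: matches the baseline
```

Production tools would learn such baselines from device telemetry rather than hard-coding them.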

GenAI in productivity apps: What could possibly go wrong?

We’re in the “iPhone moment” for generative AI, with every company rushing to figure out its strategy for dealing with this disruptive technology.

According to a KPMG survey conducted this June, 97% of US executives at large companies expect their organizations to be highly impacted by generative AI in the next 12 to 18 months, and 93% believe it will provide value to their business. Some 35% of companies have already started to deploy AI tools and solutions, while 83% say they will increase their generative AI investments by at least 50% in the next six to twelve months.

Why and how to create corporate genAI policies

As large numbers of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and regulatory violations, not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on at least three occasions, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another shared code with ChatGPT and “requested code optimization.”
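Policy documents are often paired with technical guardrails. As a purely illustrative example (the patterns below are not from any real DLP product), a company could screen prompts for likely secrets before they ever reach an external genAI service:

```python
# Toy pre-submission screen for genAI prompts; the patterns are
# illustrative only, and real DLP tooling is far more sophisticated.
import re

SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),   # key material
    re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]"), # credentials
    re.compile(r"(?i)\bconfidential\b|\binternal use only\b"), # doc markings
]

def prompt_is_safe(prompt: str) -> bool:
    """Return True if the prompt shows no sign of sensitive data."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)

if not prompt_is_safe("password = hunter2; please optimize this code"):
    print("Blocked: prompt appears to contain sensitive data.")
```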

Zoom goes for a blatant genAI data grab; enterprises, beware (updated)

When Zoom amended its terms of service earlier this month — a bid to make executives comfortable that it wouldn’t use Zoom data to train generative AI models — it quickly stirred up a hornet’s nest. So the company “revised” the terms of service, and left in place ways it can still get full access to user data.

Computerworld repeatedly reached out to Zoom, without success, to clarify what the changes really mean.

Editor’s note: Shortly after this column was published, Zoom again changed its terms and conditions. We’ve added an update to the end of the story covering the latest changes.

Before I delve into the legalese — and Zoom’s weasel words to falsely suggest it was not doing what it obviously was doing — let me raise a more critical question: Is there anyone in the video-call business not doing this? Microsoft? Google? Those are two firms that never met a dataset that they didn’t love.
