Q&A: How one CSO secured his environment from generative AI risks

In February, travel and expense management company Navan (formerly TripActions) chose to go all-in on generative AI for a variety of business and customer-assistance uses.

The Palo Alto, Calif., company turned to OpenAI’s ChatGPT and GitHub Copilot to write, test, and fix code; the decision has boosted Navan’s operational efficiency and reduced overhead costs.

GenAI tools have also been used to build a conversational experience for Ava, the company’s virtual assistant. Ava, a travel and expense chatbot, answers customers’ questions and offers a conversational booking experience. It can also surface data for business travelers, such as company travel spend, volume, and granular carbon emissions details.


Jamf: Generative AI is coming to an Apple IT admin near you

Imagine running fleets of iPhones that alert you when unexpected security-related incidents take place, or when otherwise legitimate service requests arrive from devices at an unexpected time or location. Imagine management and security software that not only identifies these kinds of anomalies but also gives you useful advice to help remediate them.

This, and more, is the kind of protection Jamf hopes to deliver using generative AI tools.

Generative IT for Apple admins

Jamf believes generative AI can be a big benefit to tech support and IT admin, and talked about its efforts at the end of an extensive Jamf Nation User Conference (JNUC) keynote. Akash Kamath, the company’s senior vice president, engineering, explained that just as the Mac made computing personal, genAI makes AI personal.


GenAI in productivity apps: What could possibly go wrong?

We’re in the “iPhone moment” for generative AI, with every company rushing to figure out its strategy for dealing with this disruptive technology.

According to a KPMG survey conducted in June, 97% of US executives at large companies expect generative AI to have a high impact on their organizations in the next 12 to 18 months, and 93% believe it will provide value to their business. Some 35% of companies have already started to deploy AI tools and solutions, and 83% say they will increase their generative AI investments by at least 50% in the next six to 12 months.


Why and how to create corporate genAI policies

As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators — not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on at least three occasions, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another worker shared code with ChatGPT and “requested code optimization.”


Zoom goes for a blatant genAI data grab; enterprises, beware (updated)

Credit to Author: eschuman@thecontentfirm.com | Date: Thu, 17 Aug 2023 07:06:00 -0700

When Zoom amended its terms of service earlier this month — a bid to make executives comfortable that it wouldn’t use Zoom data to train generative AI models — it quickly stirred up a hornet’s nest. So the company “revised” the terms of service, and left in place ways it can still get full access to user data.

Computerworld repeatedly reached out to Zoom without success to clarify what the changes really mean.

Editor’s note: Shortly after this column was published, Zoom again changed its terms and conditions. We’ve added an update to the end of the story covering the latest changes.

Before I delve into the legalese — and Zoom’s weasel words to falsely suggest it was not doing what it obviously was doing — let me raise a more critical question: Is there anyone in the video-call business not doing this? Microsoft? Google? Those are two firms that never met a dataset that they didn’t love.



Q&A: TIAA's CIO touts top AI projects, details worker skills needed now

Artificial intelligence (AI) is already having a significant effect on businesses and organizations across a variety of industries, even as many businesses are still just kicking the tires on the technology.

Those that have fully adopted AI report a 35% increase in innovation and a 33% increase in sustainability over the past three years, according to research firm IDC; they also report a 32% improvement in customer and employee retention after investing in AI.


EEOC Commissioner: AI system audits might not comply with federal anti-bias laws

Keith Sonderling, commissioner of the US Equal Employment Opportunity Commission (EEOC), has for years been sounding the alarm about the potential for artificial intelligence (AI) to run afoul of federal anti-discrimination laws such as the Civil Rights Act of 1964.

It was not until the advent of ChatGPT, Bard, and other popular generative AI tools, however, that local, state, and national lawmakers began taking notice — and companies became aware of the pitfalls posed by a technology that can automate efficiencies across business processes.

Instead of the speeches he typically gave to groups of chief human resource officers or labor and employment lawyers, Sonderling has found himself in recent months talking more and more about AI. His focus has been on how companies can stay compliant as they hand over more of the responsibility for hiring and other aspects of corporate HR to algorithms that are vastly faster and can parse thousands of resumes in seconds.
