ServiceNow embeds AI-powered customer-assist features throughout products

GenAI in productivity apps: What could possibly go wrong?

We’re in the “iPhone moment” for generative AI, with every company rushing to figure out its strategy for dealing with this disruptive technology.

According to a KPMG survey conducted this June, 97% of US executives at large companies expect their organizations to be highly impacted by generative AI in the next 12 to 18 months, and 93% believe it will provide value to their business. Some 35% of companies have already started to deploy AI tools and solutions, while 83% say they will increase their generative AI investments by at least 50% in the next six to twelve months.

Why and how to create corporate genAI policies

As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators — not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers leaked trade secrets via the platform on at least three occasions, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another shared code with ChatGPT and “requested code optimization.”

Q&A: TIAA's CIO touts top AI projects, details worker skills needed now

Artificial intelligence (AI) is already having a significant effect on businesses and organizations across a variety of industries, even as many of them are still just kicking the tires on the technology.

Those that have fully adopted AI report a 35% increase in innovation and a 33% increase in sustainability over the past three years, according to research firm IDC. Companies also report a 32% improvement in customer and employee retention after investing in AI.

Researchers build a scary Mac attack using AI and sound

A UK research team based at Durham University has identified an exploit that could allow attackers to figure out what you type on your MacBook Pro — based on the sound each keyboard tap makes.

These kinds of attacks aren’t particularly new. The researchers cite work dating back to the 1950s on using acoustics to identify what people type. They also note that the first paper detailing use of such an attack surface was written for the US National Security Agency (NSA) in 1972, prompting speculation that such attacks may already be in place.

“(The) governmental origin of ASCAs creates speculation that such an attack may already be possible on modern devices, but remains classified,” the researchers wrote.

UK intelligence agencies seek to weaken data protection safeguards

UK intelligence agencies are campaigning for the government to weaken surveillance laws, arguing that the current safeguards limit their ability to train AI models due to the large amount of personal data required.

GCHQ, MI5, and MI6 have been increasingly using AI technologies to analyze data sets, including bulk personal data sets (BPDs), which can often contain sensitive information about people not of interest to the security services.

Currently, a judge has to approve the examination and retention of BPDs, a process that intelligence agencies have described as “disproportionately burdensome” when applied to “publicly available datasets, specifically those containing data in respect of which the subject has little or no reasonable expectation of privacy.”

EEOC Commissioner: AI system audits might not comply with federal anti-bias laws

Keith Sonderling, commissioner of the US Equal Employment Opportunity Commission (EEOC), has for years been sounding the alarm about the potential for artificial intelligence (AI) to run afoul of federal anti-discrimination laws such as the Civil Rights Act of 1964.

It was not until the advent of ChatGPT, Bard, and other popular generative AI tools, however, that local, state, and national lawmakers began taking notice — and companies became aware of the pitfalls posed by a technology that can automate business processes.

Instead of the speeches he’d typically make to groups of chief human resource officers or labor and employment lawyers, Sonderling has found himself in recent months talking more and more about AI. His focus has been on how companies can stay compliant as they hand over more of the responsibility for hiring and other aspects of corporate HR to algorithms that are vastly faster and capable of parsing thousands of resumes in seconds.

Was Steve Jobs right about this?

Perhaps Steve Jobs was right to limit the amount of time he let his children use iPhones and iPads — a tradition Apple maintains with its Screen Time tool, which lets parents set limits on device use. Now, an extensive UNESCO report suggests that letting kids spend too much time on these devices can be bad for them.

Baked-in inequality and lack of social skills

That’s the headline claim, but there’s a lot more to the report in terms of exploring data privacy, misuse of tech, and failed digital transformation experiments.
