Samsung shows we need an Apple approach to generative AI

It feels as if practically everyone has been using OpenAI’s ChatGPT since generative AI hit prime time. But many enterprise professionals may be embracing the technology without considering the risks of these large language models (LLMs).

That’s why we need an Apple approach to generative AI.

What happens at Samsung should stay at Samsung

ChatGPT seems to be a do-everything tool, capable of answering questions, finessing prose, generating suggestions, creating reports, and more. Developers have used the tool to help them write or improve their code, and some companies (such as Microsoft) are weaving this machine intelligence into existing products, web browsers, and applications.

Legislation to rein in AI’s use in hiring grows

Organizations are rapidly adopting artificial intelligence (AI) for discovering, screening, interviewing, and hiring job candidates. It can reduce the time and work needed to find candidates, and it can more accurately match applicant skills to a job opening.

But legislators and regulators are concerned that using AI-based tools to discover and vet talent could intrude on job seekers’ privacy and may introduce racial- and gender-based biases already baked into the software.

“We have seen a substantial groundswell over the past two to three years with regard to legislation and regulatory rule-making as it relates to the use of AI in various facets of the workplace,” said Samantha Grant, a partner with the law firm of Reed Smith. 

Tech bigwigs: Hit the brakes on AI rollouts

More than 1,100 technology luminaries, leaders, and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.

In an open letter published by the Future of Life Institute, a nonprofit organization that aims to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak, SpaceX and Tesla CEO Elon Musk, and MIT professor and Future of Life Institute President Max Tegmark joined other signatories in saying AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

“Just awful” experiment points suicidal teens at chatbot

Categories: News, Privacy

Tags: Koko, Robert Morris, Motherboard, AI ethics, AI, artificial intelligence

Startup Koko has been criticized for experimenting with young adults at risk of harming themselves. Worse, the young adults were unaware they were test subjects.

The post “Just awful” experiment points suicidal teens at chatbot appeared first on Malwarebytes Labs.

A week in security (February 13 – 19)

Categories: News

Tags: Josh Saxe, Lock and Code S04E04, AI, artificial intelligence, endpoint security leader, CISA, DPRK, ChatGPT, informed consent, valentine’s day, password sharing, Android, data leaks, ESXiArgs, TrickBot, WordPress, fake Hogwarts Legacy, Arris router, ransomware, Mortal Kombat, Section 230, iPhone calendar spam

The most interesting security-related news from the week of February 13 to 19.

YouTube AI wrongfully flags horror short “Show for Children” as suitable for children

Credit to Author: Jovi Umawing | Date: Fri, 08 Jul 2022 15:57:26 +0000

“Show for Children” is most definitely not to be shown to children.

Microsoft backs off facial recognition analysis, but big questions remain

Credit to Author: Evan Schuman | Date: Thu, 07 Jul 2022 03:00:00 -0700

Microsoft is backing away from its public support for some AI-driven features, including facial recognition, and acknowledging the discrimination and accuracy issues these offerings create. But the company had years to fix the problems and didn’t. That’s akin to a car manufacturer recalling a vehicle rather than fixing it.

Despite concerns that facial recognition technology can be discriminatory, the real issue is that results are inaccurate. (The discriminatory argument plays a role, though, due to the assumptions Microsoft developers made when crafting these apps.)

Let’s start with what Microsoft did and said. Sarah Bird, the principal group product manager for Microsoft’s Azure AI, summed up the pullback last month in a Microsoft blog post.

Pegasus spyware found on UK government office phone

Credit to Author: Jovi Umawing | Date: Thu, 21 Apr 2022 19:32:28 +0000

The NSO Group’s flagship spyware was found on a device in 10 Downing Street’s network.
