UK data regulator issues warning over generative AI data protection concerns
The UK's Information Commissioner's Office reminds organizations that data protection laws still apply to unfiltered data used to train large language models.
Organizations are rapidly adopting artificial intelligence (AI) for discovering, screening, interviewing, and hiring candidates. AI can reduce the time and work needed to find job candidates, and it can match applicant skills to a job opening more accurately.
But legislators and regulators are concerned that using AI-based tools to discover and vet talent could intrude on job seekers’ privacy and may introduce racial and gender biases already baked into the software.
“We have seen a substantial groundswell over the past two to three years with regard to legislation and regulatory rule-making as it relates to the use of AI in various facets of the workplace,” said Samantha Grant, a partner with the law firm of Reed Smith.
More than 1,100 technology luminaries, leaders and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.
In an open letter published by the Future of Life Institute, a nonprofit organization whose aim is to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak, SpaceX and Tesla CEO Elon Musk, and MIT professor and Future of Life Institute President Max Tegmark joined other signatories in saying AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”
Startup Koko has been criticized for experimenting with young adults at risk of harming themselves. Worse, the young adults were unaware they were test subjects.
The post “Just awful” experiment points suicidal teens at chatbot appeared first on Malwarebytes Labs.
The most interesting security-related news from the week of February 13 to 19.
The post A week in security (February 13 – 19) appeared first on Malwarebytes Labs.
Credit to Author: Paul Ducklin | Date: Sun, 01 Jan 2023 21:36:46 +0000
The bad news: the crooks have your SSH private keys. The good news: only users of the “nightly” build were affected.
Credit to Author: Jovi Umawing | Date: Fri, 08 Jul 2022 15:57:26 +0000
“Show for Children” is most definitely not to be shown to children.
The post YouTube AI wrongfully flags horror short “Show for Children” as suitable for children appeared first on Malwarebytes Labs.
Credit to Author: Evan Schuman | Date: Thu, 07 Jul 2022 03:00:00 -0700
Microsoft is backing away from its public support for some AI-driven features, including facial recognition, and acknowledging the discrimination and accuracy issues these offerings create. But the company had years to fix the problems and didn’t. That’s akin to a car manufacturer recalling a vehicle rather than fixing it.
Despite concerns that facial recognition technology can be discriminatory, the real issue is that its results are inaccurate. (The discriminatory argument still plays a role, though, because of the assumptions Microsoft developers made when crafting these apps.)
Let’s start with what Microsoft did and said. Sarah Bird, the principal group product manager for Microsoft’s Azure AI, summed up the pullback last month in a Microsoft blog.
Credit to Author: Jovi Umawing | Date: Thu, 21 Apr 2022 19:32:28 +0000
The NSO Group’s flagship spyware was found on a device in 10 Downing Street’s network.
The post Pegasus spyware found on UK government office phone appeared first on Malwarebytes Labs.
Credit to Author: Lisa Vaas | Date: Tue, 03 Mar 2020 12:48:02 +0000
It has to do with optics: faces appear to flatten out as we get further away. Our brains compensate, but AI-run facial recognition doesn’t.