Microsoft AI “Recall” feature records everything, secures far less
Microsoft unveiled an AI search tool on new laptops that will require regular screenshots of all device activity to be recorded and stored.
This week on Lock and Code, we talk about what people lose when they let AI services make choices for dinners, reservations, and even dating.
More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems, the lack of which has led to concerns about basic protections.
The letter, drafted by researchers from MIT, Princeton, and Stanford University, called for legal and technical protections for good-faith research on genAI models; the absence of such protections, they said, hampers safety measures that could help protect the public.
Of all the potential nightmares about the dangerous effects of generative AI (genAI) tools like OpenAI’s ChatGPT and Microsoft’s Copilot, one is near the top of the list: their use by hackers to craft hard-to-detect malicious code. Even worse is the fear that genAI could help rogue states like Russia, Iran, and North Korea unleash unstoppable cyberattacks against the US and its allies.
The bad news: nation states have already begun using genAI to attack the US and its friends. The good news: so far, the attacks haven’t been particularly dangerous or especially effective. Even better news: Microsoft and OpenAI are taking the threat seriously. They’re being transparent about it, openly describing the attacks and sharing what can be done about them.
The last few weeks have been a PR bonanza for Taylor Swift in both good ways and bad. On the good side, her boyfriend Travis Kelce was on the winning team at the Super Bowl, and her reactions during the game got plenty of air time. On the much, much worse side, generative AI-created fake nude images of her have recently flooded the internet.
As you would expect, condemnation of the creation and distribution of those images followed swiftly, including from generative AI (genAI) companies and, notably, Microsoft CEO Satya Nadella. In addition to denouncing what happened, Nadella shared his thoughts on a solution: “I go back to what I think’s our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced.”
Credit to Author: eschuman@thecontentfirm.com | Date: Mon, 12 Feb 2024 03:00:00 -0800
The IT community of late has been freaking out about AI data poisoning. For some, it’s a sneaky mechanism that could act as a backdoor into enterprise systems by surreptitiously infecting the data large language models (LLMs) train on and then getting pulled into enterprise systems. For others, it’s a way to combat LLMs that try to do an end run around trademark and copyright protections.
This week on the Lock and Code podcast, we speak with Bruce Schneier about a future of AI-powered mass spying.
OpenAI is hoping to alleviate concerns about its technology’s influence on elections, as more than a third of the world’s population is gearing up for voting this year. Among the countries where elections are scheduled are the United States, Pakistan, India, South Africa, and the European Parliament.
“We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges,” OpenAI wrote Monday in a blog post. “They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”
Credit to Author: eschuman@thecontentfirm.com | Date: Tue, 19 Dec 2023 10:03:00 -0800
Enterprise executives, still enthralled by the possibilities of generative artificial intelligence (genAI), more often than not are insisting that their IT departments figure out how to make the technology work.
Let’s set aside the usual concerns about genAI, such as the hallucinations and other errors that make it essential to check every single line it generates (obliterating any hoped-for efficiency boosts), or the fact that data leakage is inevitable and will be next to impossible to detect until it is too late. (OWASP has put together an impressive list of the biggest IT threats from genAI and LLMs in general.)
Generative artificial intelligence (genAI) is likely to play a critical role in addressing skills shortages in today’s marketplace, according to a new study by London-based Kaspersky Research. It showed that 40% of 2,000 C-level executives surveyed plan to use genAI tools such as ChatGPT to cover critical skills shortages through the automation of tasks.
The European-based study found genAI to be firmly on the business agenda, with 95% of respondents regularly discussing ways to maximize value from the technology at the most senior level, even as 91% admitted they don’t really know how it works.