AI jailbreaks: What they are and how they can be mitigated
Credit to Author: Microsoft Threat Intelligence | Date: Tue, 04 Jun 2024 17:00:00 +0000
Microsoft security researchers, in partnership with other security experts, continue to proactively explore and discover new types of AI model and system vulnerabilities. In this post we provide information about AI jailbreaks, a family of vulnerabilities that can occur when the defenses implemented to protect AI from producing harmful content fail. This article will serve as a useful reference for future announcements of new jailbreak techniques.
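To make the idea of "defenses that can fail" concrete, here is a minimal, purely illustrative sketch of a layered mitigation: an input filter that screens prompts before they reach a model, and an output filter that screens responses before they reach a user. The pattern lists and function names are hypothetical; real systems rely on trained classifiers and services (such as content safety filters), not keyword matching.

```python
# Toy layered jailbreak mitigation. All patterns and names are
# hypothetical examples, not an actual production defense.

BLOCKED_PROMPT_PATTERNS = [
    "ignore previous instructions",
    "pretend you have no rules",
]

BANNED_OUTPUT_TERMS = [
    "system prompt contents",
]


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) input filter."""
    lowered = prompt.lower()
    return not any(p in lowered for p in BLOCKED_PROMPT_PATTERNS)


def screen_output(response: str) -> bool:
    """Return True if the model output passes the (toy) output filter."""
    lowered = response.lower()
    return not any(t in lowered for t in BANNED_OUTPUT_TERMS)
```

A jailbreak, in these terms, is any input that slips past both layers yet still elicits harmful output; defense in depth narrows that gap but cannot eliminate it.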
The post AI jailbreaks: What they are and how they can be mitigated appeared first on Microsoft Security Blog.