Fast-track generative AI security with Microsoft Purview
By Steve Vandenberg | January 27, 2025
As a data security global black belt, I help organizations secure AI solutions. They are concerned about data oversharing, data leaks, compliance, and other potential risks. Microsoft Purview is Microsoft’s solution for securing and governing data in generative AI.
I’m often asked how long it takes to deploy Microsoft Purview. The answer depends on the specifics of the organization and what it wants to achieve. Microsoft Purview should ultimately enable a comprehensive data governance program, but it can also provide risk mitigation for generative AI in the short term while that program is underway.
Organizations need AI solutions to add value for their customers and to stay competitive. They can’t wait years to secure and govern these systems.
For the organizations deploying generative AI, “how long does it take to deploy Microsoft Purview?” isn’t the right question.
The risk mitigation Microsoft Purview provides for AI can begin on day one. This includes Microsoft AI, like Microsoft 365 Copilot, AI that an organization builds in-house, and AI from third parties like Google Gemini or ChatGPT.
This post discusses ways to quickly secure and govern data used or generated by AI, with minimal user impact, change management, and resources required.
These Microsoft Purview solutions are:
- Microsoft Purview Data Security Posture Management for AI
- Microsoft Purview Information Protection
- Microsoft Purview Data Loss Prevention
- Microsoft Purview Communication Compliance
- Microsoft Purview Insider Risk Management
- Microsoft Purview Data Lifecycle Management
- Microsoft Purview Audit and Microsoft Purview eDiscovery
- Microsoft Purview Compliance Manager
Here are short-term steps you can take while the comprehensive data governance program is underway.
Microsoft Purview Data Security Posture Management for AI
Microsoft Purview Data Security Posture Management for AI (DSPM for AI) provides visibility into data security risks. It reports on:
- Users’ interactions with AI.
- Sensitive information in the prompts users share with the AI.
- Whether the sensitive information users share is labeled and thus is protected by durable security policy controls.
- Whether and how user interactions may violate company policy, including codes of conduct, and whether they include jailbreak attempts, where users manipulate the system to circumvent its protections.
- The risk level of users interacting with the system, such as inadvertent or malicious activities they may be involved in that put the organization at risk.
DSPM for AI reports on this for each AI application, and administrators can drill down from the reports to individual user activities. DSPM for AI collects and surfaces insights from the other Microsoft Purview solutions around generative AI risks on a single screen.
DSPM for AI reasons over custom sensitive information types, sensitivity labels, and information protection rules, but if these are not yet available, more than 300 out-of-the-box sensitive information types are ready from day one. DSPM for AI uses these to report on risk for the organization without additional configuration, and administrators can configure policy to mitigate these risks directly from the DSPM for AI tool. The sketch after the figures below illustrates conceptually how a sensitive information type flags risk in a prompt.
Figure 1. DSPM for AI shows interactions with Microsoft 365 Copilot, enterprise generative AI from other providers, and AI developed in-house.
Figure 2. DSPM for AI reports on generative AI user interactions with sensitive data.
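To make this concrete, here is a minimal, purely illustrative sketch of how a sensitive information type can flag risk in a prompt. The patterns below are simplified stand-ins; Purview’s real out-of-the-box classifiers also use checksums, keywords, and confidence levels rather than bare regular expressions.

```python
import re

# Hypothetical, simplified stand-ins for two of Purview's out-of-the-box
# sensitive information types. This only illustrates the concept; the real
# classifiers are far more robust.
SENSITIVE_INFO_TYPES = {
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit Card Number": re.compile(r"\b\d{13,16}\b"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive information types detected in an AI prompt."""
    return [name for name, pattern in SENSITIVE_INFO_TYPES.items()
            if pattern.search(prompt)]

matches = classify_prompt("Summarize the account for SSN 123-45-6789.")
print(matches)  # ['U.S. Social Security Number']
```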
A major concern organizations have in widely deploying generative AI is that it will return results containing sensitive information the user should not have access to. SharePoint sites created over the years are often unlabeled and may be accessible to the entire organization through the AI. The “security by obscurity” that may once have kept that sensitive information from being inappropriately shared is negated by an AI that reasons over and returns the data.
Data assessments, part of DSPM for AI and currently in preview, identify potential oversharing risks and allow the administrator to apply a sensitivity label to the SharePoint sites or the sensitive data, or to initiate a Microsoft Entra ID user access review to manage group memberships (a sketch of the access-review step follows Figure 3).
The administrator can engage the business stakeholder who has knowledge of the risk posed by the data and invite them to mitigate the risk or apply the policy at scale from the Microsoft Purview administration portal.
Figure 3. Data assessment—visualize risk, review access, and deploy policy.
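As a minimal sketch of that access-review step, the following calls the Microsoft Graph access reviews API (the accessReviewScheduleDefinition resource). Token acquisition is omitted, the group and reviewer IDs are placeholders, and the payload is simplified from the documented resource; consult the Graph documentation before using it.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app token with AccessReview.ReadWrite.All>"  # acquired via MSAL, omitted here
GROUP_ID = "<id of the group granting access to the overshared SharePoint site>"
REVIEWER_ID = "<user id of the business stakeholder>"

# Create a one-off access review of the group's members so the business
# owner can confirm who should retain access. Payload simplified.
definition = {
    "displayName": "Review access to overshared AI-reachable site",
    "scope": {
        "query": f"/groups/{GROUP_ID}/transitiveMembers",
        "queryType": "MicrosoftGraph",
    },
    "reviewers": [
        {"query": f"/users/{REVIEWER_ID}", "queryType": "MicrosoftGraph"}
    ],
    "settings": {"instanceDurationInDays": 7},
}

resp = requests.post(
    f"{GRAPH}/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=definition,
)
resp.raise_for_status()
print("Created access review:", resp.json()["id"])
```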
Microsoft Purview Information Protection
The document access controls of Microsoft Purview Information Protection, including sensitivity labels, are enforced when the data is reasoned over by AI. Users are shown, in context, that they are working with sensitive information, and this awareness empowers them to protect the organization.
The sensitivity labels that enforce scoped encryption, watermarking, and other protections travel with the document as the user interacts with the AI. When the AI creates new content based on the document, the new content inherits the most restrictive label and policy.
Microsoft Purview can automatically apply sensitivity labels to AI interactions based on the organization’s existing policy for email, desktop applications, and Microsoft Teams, or new policy can be deployed for the AI.
These can be based on out-of-the-box sensitive information types for a quick start.
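As a conceptual illustration of the inheritance rule described above, the following sketch models labels as an ordered list, as Purview does, and picks the most restrictive one for AI-generated content. The label names and priority values here are hypothetical examples.

```python
# Hypothetical label priorities modeled on Purview's ordered label list,
# where a higher number means a more restrictive label. When AI-generated
# content draws on several labeled sources, it inherits the most
# restrictive (highest-priority) label, as described above.
LABEL_PRIORITY = {
    "Public": 0,
    "General": 1,
    "Confidential": 2,
    "Highly Confidential": 3,
}

def inherited_label(source_labels: list[str]) -> str:
    """Return the label AI-generated content should inherit from its sources."""
    return max(source_labels, key=LABEL_PRIORITY.__getitem__)

print(inherited_label(["General", "Highly Confidential", "Confidential"]))
# Highly Confidential
```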
Microsoft Purview Data Loss Prevention
The Microsoft Purview Data Loss Prevention policies that the organization currently uses for email, desktop applications, and Teams can be extended to the AI, or new policy for the AI can be created. Cutting and pasting sensitive information into the AI, or transferring a labeled document to it, can be prevented outright or allowed only with an auditable justification from the user.
A rule can be configured to prevent all documents bearing a specific label from being reasoned over by the AI. Out-of-the-box sensitive information types can be used for a quick start.
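The toy model below illustrates the decision logic such rules express: block labeled documents from being reasoned over by the AI, and require an auditable justification for sensitive but unlabeled content. Actual DLP rules are configured in the Purview portal or Security & Compliance PowerShell, not written as code; the label and field names here are hypothetical.

```python
from dataclasses import dataclass

BLOCKED_LABELS = {"Highly Confidential"}  # hypothetical label-based rule

@dataclass
class Document:
    name: str
    label: str | None
    has_sensitive_info: bool

def evaluate_dlp(doc: Document, justification: str | None = None) -> str:
    if doc.label in BLOCKED_LABELS:
        return "BLOCK"                   # never reasoned over by the AI
    if doc.has_sensitive_info and not justification:
        return "BLOCK_UNLESS_JUSTIFIED"  # user must supply an auditable justification
    return "ALLOW"

print(evaluate_dlp(Document("q3-merger.docx", "Highly Confidential", True)))  # BLOCK
```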
Microsoft Purview Communication Compliance
Microsoft Purview Communication Compliance detects regulatory compliance violations (for example, of SEC or FINRA rules) and business conduct violations such as sharing of sensitive or confidential information, harassing or threatening language, and sharing of adult content.
Out-of-the-box policies can be used to monitor user prompts and AI-generated content, with policy enforcement in near real time as well as audit logs and reporting.
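A simplified sketch of that monitoring flow appears below: screen prompts and responses against policy categories and queue matches for a reviewer. The keyword lists are purely illustrative; Purview’s out-of-the-box policies rely on trainable classifiers rather than keyword matching.

```python
# Illustrative policy categories; real Communication Compliance policies
# use trainable classifiers, not keyword lists like this.
POLICY_CATEGORIES = {
    "threatening language": {"threat", "hurt you"},
    "confidential information": {"internal only", "do not distribute"},
}

review_queue: list[tuple[str, str]] = []

def screen_message(text: str) -> None:
    """Flag an AI prompt or response for a compliance reviewer if it matches a category."""
    lowered = text.lower()
    for category, keywords in POLICY_CATEGORIES.items():
        if any(k in lowered for k in keywords):
            review_queue.append((category, text))  # surfaced to a reviewer

screen_message("This draft is internal only - do not distribute.")
print(review_queue)
```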
Microsoft Purview Insider Risk Management
Microsoft Purview Insider Risk Management correlates signals to identify potentially malicious or accidental behaviors from legitimate users. Pre-configured generative AI-specific risk detections and policy templates are now available in preview.
When Insider Risk Management determines that a user is engaging in risky behavior, the data loss prevention (DLP) policies for that user can be made stricter through a feature called Adaptive Protection. It can be configured with out-of-the-box policies. This continuous monitoring and policy modulation mitigates risk while reducing administrator workload.
AI analytics can be activated from the Microsoft Purview portal to provide insights even before the Insider Risk Management solution is deployed to users. This quickly surfaces AI risks with minimal administrative workload.
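The sketch below illustrates the Adaptive Protection idea in miniature: as a user’s computed risk level rises, a stricter DLP enforcement mode applies automatically. The risk tiers and mode names are hypothetical; the real mapping is configured in the Purview portal.

```python
from enum import Enum

class RiskLevel(Enum):
    MINOR = 1
    MODERATE = 2
    ELEVATED = 3

# Illustrative mapping in the spirit of Adaptive Protection: higher user
# risk automatically triggers stricter DLP enforcement.
DLP_MODE_BY_RISK = {
    RiskLevel.MINOR: "audit only",
    RiskLevel.MODERATE: "block with override and justification",
    RiskLevel.ELEVATED: "block",
}

def dlp_mode(user_risk: RiskLevel) -> str:
    return DLP_MODE_BY_RISK[user_risk]

print(dlp_mode(RiskLevel.ELEVATED))  # block
```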
Microsoft Purview Data Lifecycle Management
Microsoft Purview can enforce data lifecycle management for AI, retaining AI prompts, AI responses, and the documents AI creates for a specified period. This can be done globally for every interaction with an AI solution, using out-of-the-box or custom policies. Retention keeps these interactions available for future investigations, for regulatory compliance, and to tune policies and inform the governance program.
A policy for deletion of AI interactions can be enforced so information is not over-retained.
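The retention math itself is straightforward, as this small sketch shows; the one-year period is a hypothetical example rather than a recommended value.

```python
from datetime import date, timedelta

# Illustrative retention math for the policy described above: keep every AI
# interaction for a fixed period, then delete it so data is not over-retained.
RETENTION_PERIOD = timedelta(days=365)  # hypothetical one-year policy

def delete_after(interaction_date: date) -> date:
    """Date on which an AI prompt or response becomes eligible for deletion."""
    return interaction_date + RETENTION_PERIOD

print(delete_after(date(2025, 1, 27)))  # 2026-01-27
```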
Microsoft Purview Audit and Microsoft Purview eDiscovery
The organization will need to support internal investigations around the use of AI; Microsoft Purview Audit logs and retains these interactions. Organizations also need to support their legal teams should they have to produce AI interactions in litigation.
Microsoft Purview eDiscovery can place a user’s interactions with the AI, along with their other Microsoft 365 documents and communications, on hold so that they remain available to support investigations. It allows them to be searched based on metadata to enhance relevancy, annotated, and produced.
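As a sketch of how such a case might be set up programmatically, the following uses the Microsoft Graph eDiscovery endpoints to create a case and a legal hold. Token acquisition is omitted and the payloads are simplified; consult the Graph documentation for the required fields.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token with eDiscovery.ReadWrite.All>"}

# Create an eDiscovery case, then place a legal hold in it so custodians'
# AI interactions and other Microsoft 365 content remain available.
case = requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases",
    headers=HEADERS,
    json={"displayName": "AI interactions - internal investigation"},
)
case.raise_for_status()
case_id = case.json()["id"]

hold = requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases/{case_id}/legalHolds",
    headers=HEADERS,
    json={"displayName": "Hold custodian AI interactions"},
)
hold.raise_for_status()
print("Case", case_id, "created with hold", hold.json()["id"])
```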
Microsoft Purview Compliance Manager
Microsoft Purview Compliance Manager has pre-built assessments for AI regulations including:
- EU Artificial Intelligence Act.
- ISO/IEC 23894:2023.
- ISO/IEC 42001:2023.
- NIST AI Risk Management Framework (RMF) 1.0.
These assessments can be used to benchmark compliance over time, report on control status, and maintain and produce evidence for both Microsoft’s and the organization’s activities that support the regulatory compliance program.
Microsoft Purview is an AI enabler
Without the security, governance, and compliance bases covered, an AI program puts the organization at risk. An AI program can be blocked before it deploys if the team can’t demonstrate how it is mitigating these risks.
The actions suggested here can all be taken quickly, and with limited effort, to set up a generative AI deployment for success.
Learn more
Learn more about Microsoft Purview.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and Twitter (@MSFTSecurity) for the latest news and updates on cybersecurity.