Answer Brief
Microsoft has released a significant security update for Microsoft 365 Copilot, introducing real-time Data Loss Prevention (DLP) for prompt inputs. The feature uses Microsoft Purview to detect and block sensitive information—such as credit card numbers or internal project codes—from being processed by the AI, preventing accidental data leakage while maintaining operational productivity.

Why It Matters
The release of real-time prompt protection marks a critical evolution in how enterprises manage the risk of generative AI. While earlier iterations of Microsoft's Copilot security controls focused on protecting the data the assistant could access, this update addresses 'input risk': the human error of pasting sensitive data directly into a chat interface. By shifting security to the point of entry, Microsoft is providing a necessary guardrail for organizations that have been hesitant to adopt AI due to fears of intellectual property leakage.
Technical Signal
Under the hood, the update reflects a deeper integration between Microsoft Graph, Microsoft Purview, and the LLM orchestration layer. When a user submits a prompt, the Purview engine scans the text before it reaches the model. If a violation occurs, the request is intercepted, and the AI is prevented from querying the web or internal documents based on that specific sensitive input. This creates a hard risk boundary at the user-interface level.
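The evaluate-before-forward flow described here can be sketched in a few lines. To be clear, this is not the Purview API: the `handle_prompt` function, the pattern names, and the regexes are all hypothetical, illustrating only the idea of scanning a prompt and intercepting it before the model ever sees the text.

```python
import re

# Illustrative built-in detectors; real Purview classifiers are far richer.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str, custom_keywords=()) -> list:
    """Return the sensitive-info types found in the prompt text."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    hits += [f"keyword:{kw}" for kw in custom_keywords if kw.lower() in prompt.lower()]
    return hits

def handle_prompt(prompt: str, custom_keywords=()) -> dict:
    """Evaluate the prompt BEFORE it reaches the model; block on any hit."""
    violations = scan_prompt(prompt, custom_keywords)
    if violations:
        # Request intercepted: the model never receives the text.
        return {"status": "blocked", "violations": violations}
    return {"status": "forwarded"}  # safe to hand to the LLM orchestrator

print(handle_prompt("Summarize card 4111 1111 1111 1111"))
# → {'status': 'blocked', 'violations': ['credit_card']}
print(handle_prompt("Summarize Q3 revenue trends"))
# → {'status': 'forwarded'}
```

The key design point matches the article's description: the check sits in front of the orchestration layer, so a blocked request produces no model call at all rather than a filtered response.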
Operational Impact
For global operations teams, this functionality is vital as it bridges the gap between AI productivity and regional compliance requirements like GDPR or Japan's Act on the Protection of Personal Information. In highly regulated markets, the ability to demonstrate real-time blocking of PII (Personally Identifiable Information) is often a prerequisite for moving AI projects from pilot to production. This release effectively lowers the compliance barrier for global enterprises.
Security and IT infrastructure teams are the primary stakeholders for this update. They now have the tools to audit not just what the AI produces, but what their employees are asking of it. The availability of a simulation mode is particularly useful for these teams, as it allows them to gauge the potential for 'false positives' that might disrupt legitimate employee workflows before the blocking policy is fully activated.
From a risk perspective, this update mitigates 'prompt-based leakage,' where users inadvertently include confidential credentials or trade secrets in a query. However, it also highlights the growing complexity of the 'AI security stack.' Organizations must now manage DLP policies specifically tuned for natural language, which requires a more nuanced approach than traditional file-based DLP.
Readers should watch for how competitors like Google and Anthropic respond with similar native input protections. As the industry moves toward 'AI Agents' that can perform actions, the necessity for real-time input and output filtering will only intensify. The next logical step for Microsoft will likely be expanding these detections to multi-modal inputs, such as images or voice commands, as those capabilities become more prevalent in the Copilot ecosystem.
Event Type: product
Importance: high
Affected Companies
- Anthropic
- Microsoft
Affected Sectors
- Artificial Intelligence
- Cloud Computing
- Cybersecurity
Key Numbers
- Security Updates Released: 160
- Critical Vulnerabilities Patched: 8
- Historical Development Period: 2 years
Timeline
- Microsoft introduces mechanism to exclude sensitive files and emails from Copilot processing.
- Microsoft announces the general availability of real-time DLP for Copilot prompts.
- General availability starts for Microsoft 365 Copilot and Copilot Chat users.
- Current reporting date; feature fully integrated into Microsoft Purview DSPM.
Frequently Asked Questions
How does the new Copilot DLP feature prevent data leaks?
The system evaluates text prompts in real time, before they are processed by the AI. Using Microsoft Purview's detection engine, it identifies sensitive data such as Social Security numbers or custom keywords. If a violation is detected, processing is blocked and the user receives a policy notification, keeping the data within the organization's control.
What types of information can the system detect?
It detects standard sensitive information types, including credit card numbers and government IDs. Additionally, organizations can configure custom keywords, internal project names, and specific code values unique to their business. This allows for tailored protection across different sectors like healthcare, finance, and research and development.
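As an illustration of mixing standard and organization-specific detections, consider the following sketch. Everything here is hypothetical: the `SensitiveInfoType` registry, the `add_custom_type` helper, and the example project-code scheme are inventions for explanation, not Purview's actual configuration model.

```python
import re
from dataclasses import dataclass

@dataclass
class SensitiveInfoType:
    name: str
    pattern: "re.Pattern"

# Standard types plus org-specific additions; names and regexes are illustrative.
REGISTRY = [
    SensitiveInfoType("credit_card", re.compile(r"\b(?:\d[ -]?){13,16}\b")),
    SensitiveInfoType("uk_nino", re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b")),
]

def add_custom_type(name, keywords=None, regex=None):
    """Register an org-specific type from plain keywords or a regex."""
    if keywords:
        regex = "|".join(re.escape(k) for k in keywords)
    REGISTRY.append(SensitiveInfoType(name, re.compile(regex, re.IGNORECASE)))

# Example: an internal project-code scheme and confidential codenames.
add_custom_type("project_code", regex=r"\bPRJ-\d{4}\b")
add_custom_type("codename", keywords=["Project Titan", "BlueHarvest"])

def classify(text):
    """Return the names of every registered type matching the text."""
    return [t.name for t in REGISTRY if t.pattern.search(text)]

print(classify("Status of PRJ-1234 and Project Titan?"))
# → ['project_code', 'codename']
```

The registry pattern is why the same engine can serve healthcare, finance, and R&D: the built-in detectors stay fixed while each organization layers its own vocabulary on top.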
How do administrators manage these new security policies?
Management is handled through the Microsoft Purview Data Security Posture Management (DSPM) portal. Administrators can apply policies with a single click and use a 'simulation mode' to test impact before full enforcement. Detailed logs of policy violations and detected sensitive data types are accessible via the Microsoft Defender portal.
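The simulate-then-enforce workflow can be modeled with a toy policy object. This is an assumption-laden sketch, not the DSPM portal's behavior: the `DlpPolicy` class, `Mode` enum, and in-memory audit log are stand-ins for what administrators actually drive through the Purview UI and review in the Defender portal.

```python
from enum import Enum

class Mode(Enum):
    SIMULATE = "simulate"   # log would-be violations, never block
    ENFORCE = "enforce"     # block the request and notify the user

class DlpPolicy:
    """Toy policy object; real policies are managed in the DSPM portal."""
    def __init__(self, detector, mode=Mode.SIMULATE):
        self.detector = detector   # callable: prompt -> list of violation names
        self.mode = mode
        self.audit_log = []        # stands in for Defender-portal logging

    def evaluate(self, prompt):
        violations = self.detector(prompt)
        if violations:
            self.audit_log.append({"mode": self.mode.value, "violations": violations})
        if violations and self.mode is Mode.ENFORCE:
            return "blocked"
        return "allowed"           # simulation never disrupts the workflow

def detector(prompt):
    return ["us_ssn"] if "123-45-6789" in prompt else []

policy = DlpPolicy(detector, mode=Mode.SIMULATE)
print(policy.evaluate("My SSN is 123-45-6789"))  # → allowed (but logged)
policy.mode = Mode.ENFORCE
print(policy.evaluate("My SSN is 123-45-6789"))  # → blocked
```

The point of the two modes is the one the article makes: simulation accumulates the same audit trail as enforcement, so teams can measure false-positive rates on real traffic before a single workflow is interrupted.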