Using Microsoft’s Data Security Stack to Prevent the Next Breach
AI is transforming industries worldwide, but it’s simultaneously creating one of the largest data risks we've ever seen. According to the Varonis 2025 State of Data Security Report, 95% of healthcare organizations have exposed sensitive cloud data, and 64% have employees using unverified AI tools. The same study found that 90% of organizations have sensitive data accessible to AI systems, making the attack surface nearly universal.
In early 2025, the AI platform DeepSeek exposed over a million lines of chat history, secret keys, and backend details through an unsecured database. Employees across many organizations had adopted the tool without approval, and that single event crystallized the danger of Shadow AI.
Shadow AI in Healthcare...
Healthcare data is an irresistible target because it's a) permanent, b) identifiable, and c) regulated. Yet most organizations still have ghost users (90% have stale but enabled accounts), stale OAuth apps, and missing MFA enforcement. These conditions give attackers a ready path to sensitive patient data, or even a way to poison AI models with manipulated training data.
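Those ghost users are something you can surface yourself today. Here's a minimal sketch, assuming an Entra app registration with the User.Read.All and AuditLog.Read.All Graph permissions and a token already in hand, that lists enabled accounts with no recent sign-in:

```python
import datetime
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token from your Entra app registration>"  # placeholder: acquire via MSAL or similar
STALE_AFTER_DAYS = 90

# signInActivity requires the AuditLog.Read.All permission.
url = f"{GRAPH}/users?$select=displayName,userPrincipalName,accountEnabled,signInActivity"
headers = {"Authorization": f"Bearer {TOKEN}"}
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=STALE_AFTER_DAYS)

while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    data = resp.json()
    for user in data.get("value", []):
        if not user.get("accountEnabled"):
            continue  # disabled accounts aren't the problem here
        last = (user.get("signInActivity") or {}).get("lastSignInDateTime")
        # No recorded sign-in, or a last sign-in older than the cutoff -> ghost user candidate
        if last is None or datetime.datetime.fromisoformat(last.replace("Z", "+00:00")) < cutoff:
            print(f"Stale but enabled: {user['userPrincipalName']} (last sign-in: {last})")
    url = data.get("@odata.nextLink")  # follow Graph paging
```

Run it on a schedule and feed the output into your access review process; a stale-but-enabled account is an easy disable.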
A clinician pasting anonymized notes into ChatGPT may think they're safe; however, if those notes contain traceable identifiers, they could still violate HIPAA and expose patient data to public AI models.
Even sanctioned AI like Copilot introduces new governance requirements. Only 1 in 5 healthcare organizations label their files, meaning most lack the automated classification needed to control what Copilot can surface. That's 48,000 folders per organization potentially visible to every employee!
The root of the problem...
Healthcare's problem, in essence, is fragmentation. Cloud apps, AI, and unverified connectors create data flows that go unmonitored. Attackers don't need to exfiltrate data directly; they can simply query it through compromised AI agents or user credentials.
We need to start assuming that AI is both a productivity layer and an attack vector.
The Purview Blueprint for Shadow AI Defense
Microsoft’s "Prevent Data Leak to Shadow AI blueprint" shows us a unified, step-by-step defense framework that combines Defender for Cloud Apps, Entra, Intune, and Purview.

Each product contributes a distinct phase of control:
Discover
- Identify which AI tools employees use (Defender for Cloud Apps).
- Detect user interactions with AI tools in Microsoft Edge (Purview).
Outcome: Unknown AI app usage is surfaced.
Block Access
- Block unverified AI apps at the organization or user level.
- Restrict access via Conditional Access or Intune app policies.
Outcome: Unsanctioned apps are blocked before data leaves the environment.
Secure Data
- Use DLP to prevent sensitive data from being pasted, uploaded, or submitted in prompts.
- Apply sensitivity labels to enforce encryption and conditional access across sanctioned AI tools.
Outcome: Sensitive data never reaches unsanctioned AI tools in the first place.
Govern Data
- Audit, retain, and investigate AI prompt activity.
- Detect and respond to inappropriate data sharing or insider misuse.
Outcome: AI prompts and data flows are governed and auditable.
This blueprint closes the loop from discovery through governance, ensuring that organizations can embrace AI innovation without exposing themselves to AI-assisted breaches.
Ok, but how do we implement it?
A healthcare organization, for example, can operationalize this model like so:
→ Discover
Deploy Defender for Cloud Apps to inventory all AI app usage across your environment.
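If you want to pull that inventory programmatically rather than through the portal, one option is Microsoft Graph's advanced hunting API. A sketch, with two assumptions worth flagging: the token needs ThreatHunting.Read.All, and the KQL assumes the AI apps you care about show up in the CloudAppEvents table (the Cloud Discovery dashboard in the Defender for Cloud Apps portal remains the authoritative view for shadow app discovery):

```python
import requests

TOKEN = "<token with ThreatHunting.Read.All>"  # placeholder: app registration assumed set up
URL = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"

# KQL sketch: summarize cloud app activity by application over the last 30 days.
# Assumption: the AI apps of interest appear in CloudAppEvents; proxy/firewall-based
# Cloud Discovery in the MDA portal covers apps this table won't see.
kql = """
CloudAppEvents
| where Timestamp > ago(30d)
| summarize Events = count(), Users = dcount(AccountId) by Application
| sort by Events desc
"""

resp = requests.post(URL, headers={"Authorization": f"Bearer {TOKEN}"}, json={"Query": kql})
resp.raise_for_status()
for row in resp.json()["results"]:
    print(f"{row['Application']}: {row['Events']} events, {row['Users']} users")
```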
→ Block
Apply Entra Conditional Access to block or restrict high-risk AI apps, and enforce Intune policies on managed devices.
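As an illustration, a Conditional Access policy that blocks a specific app can be created through Microsoft Graph. This sketch assumes the Policy.ReadWrite.ConditionalAccess permission and uses a placeholder app ID; starting in report-only mode before enforcing is a sensible default:

```python
import requests

TOKEN = "<token with Policy.ReadWrite.ConditionalAccess>"  # placeholder
URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

policy = {
    "displayName": "Block unapproved AI app",
    # Report-only first; switch to "enabled" once you've validated the impact
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "applications": {
            # Placeholder: the Entra app ID of the unsanctioned AI tool
            "includeApplications": ["00000000-0000-0000-0000-000000000000"]
        },
        "users": {"includeUsers": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(URL, headers={"Authorization": f"Bearer {TOKEN}"}, json=policy)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Pair this with Intune app protection policies on managed devices so the block holds on mobile as well as desktop.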
→ Secure
Extend DLP to detect PHI and other sensitive data in AI prompts.
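Purview DLP ships with built-in sensitive information types for exactly this, but as a back-of-the-napkin illustration of what the detection layer is doing, here's a toy PHI check. The MRN pattern is hypothetical; real DLP classifiers pair patterns like these with corroborating keywords, checksums, and confidence levels:

```python
import re

# Toy approximations of sensitive info types. Real Purview DLP combines
# patterns with supporting evidence and confidence scoring.
PHI_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Medical record number (hypothetical format)": re.compile(r"\bMRN[-:\s]?\d{6,10}\b", re.I),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of PHI patterns found in an outbound AI prompt."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize the chart for MRN: 84512973, contact jdoe@example.com"
hits = scan_prompt(prompt)
if hits:
    # A real DLP policy would block the paste/upload or warn the user here
    print("Blocked: prompt contains", ", ".join(hits))
```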
→ Govern
Use Purview's audit capabilities to monitor and retain AI prompt history, including interactions in Edge, for compliance review.
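That prompt activity lands in the unified audit log, which you can also search programmatically. A sketch using the Graph Audit Log Query API, with two assumptions to verify against current docs: the endpoint was in beta at the time of writing, and both the copilotInteraction record type name and the AuditLogsQuery.Read.All permission should be confirmed:

```python
import datetime
import requests

TOKEN = "<token with AuditLogsQuery.Read.All>"  # assumption: verify the required permission
BASE = "https://graph.microsoft.com/beta/security/auditLog/queries"

end = datetime.datetime.now(datetime.timezone.utc)
start = end - datetime.timedelta(days=7)

query = {
    "displayName": "AI prompt activity - last 7 days",
    "filterStartDateTime": start.isoformat(),
    "filterEndDateTime": end.isoformat(),
    # Assumption: record type covering Copilot/AI interactions; check the current enum
    "recordTypeFilters": ["copilotInteraction"],
}

resp = requests.post(BASE, headers={"Authorization": f"Bearer {TOKEN}"}, json=query)
resp.raise_for_status()
query_id = resp.json()["id"]
# The query runs asynchronously: poll GET {BASE}/{query_id} until it succeeds,
# then page through GET {BASE}/{query_id}/records for the matching audit records.
print("Submitted audit query:", query_id)
```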
Data Security is AI Security...
The Varonis report makes the conclusion plain: AI is a catalyst for data risk. In healthcare, where data protection equals patient protection, that means every new AI integration must begin with security design.
Microsoft’s integrated stack offers visibility and governance at the speed of AI. The question is no longer whether AI will reshape healthcare, but whether healthcare organizations can secure their data fast enough to keep up.
Oh, and don't forget about the NIST AI Risk Management Framework (AI RMF)!