
Microsoft admits Copilot bug bypassed privacy labels to summarize confidential emails

Microsoft confirms a bug in Copilot allowed the AI to summarize confidential emails and drafts, bypassing enterprise DLP policies.

Rayan Arlo · 2026-02-18

Key Highlights

  • Microsoft 365 Copilot successfully summarized emails explicitly marked with confidential sensitivity labels despite active DLP protections.
  • The bug impacted a subset of the 15 million paid Copilot seats across Word, Excel, and Outlook.
  • Microsoft stock has tumbled 16% YTD even as the company pushes Microsoft 365 price increases of up to 16.7% in July.

Microsoft’s aggressive AI rollout just hit a $3 trillion reality check. Even as the company prepares to hike prices by up to 16.7% for Microsoft 365, it has admitted to a critical security flaw that allowed its Copilot assistant to ingest and summarize emails explicitly marked as confidential. This isn’t just a technical glitch; it’s a structural crack in the trust Redmond has been attempting to build with its 15 million paid Copilot users who rely on the platform to handle sensitive corporate data.


The vulnerability, tracked under advisory CW1226324, was first detected on January 21, 2026. For weeks, the Copilot "work tab" chat feature was incorrectly processing messages stored in users’ Sent Items and Drafts folders. Most alarmingly, the AI was able to bypass Data Loss Prevention (DLP) policies and sensitivity labels, the very digital padlocks that enterprises use to keep their "Highly Confidential" secrets out of the mouths of LLMs.
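In principle, that padlock is a simple gate: every item should have its sensitivity label checked before any content is handed to the model, and labeled items should be dropped from the AI's context. The sketch below illustrates the concept only; the MailItem fields, label names, and eligible_for_copilot helper are hypothetical and are not Microsoft's actual Copilot or Purview code.

```python
# Hypothetical pre-ingestion filter: exclude items whose sensitivity label
# forbids AI processing before anything reaches the summarizer.
# Illustrative sketch only; not Microsoft's implementation.

from dataclasses import dataclass

BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class MailItem:
    folder: str                     # e.g. "Inbox", "Sent Items", "Drafts"
    subject: str
    body: str
    sensitivity_label: str | None   # label applied by policy, if any

def eligible_for_copilot(item: MailItem) -> bool:
    """Return True only if the item may be passed to the AI assistant."""
    # The "digital padlock": labeled items never reach the LLM.
    return item.sensitivity_label not in BLOCKED_LABELS

def build_context(items: list[MailItem]) -> list[MailItem]:
    # The reported bug amounts to a check like this being skipped for
    # Sent Items and Drafts, so labeled content leaked into summaries.
    return [item for item in items if eligible_for_copilot(item)]
```

The point of the sketch is how thin the protection is: a single skipped or mis-scoped check at this layer, and labeled content flows straight into the model's context.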


Microsoft confirmed the incident in a service alert, stating that a "code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place." While the company began rolling out a fix in early February, the damage to the "privacy-first" narrative is substantial.


This admission comes at a delicate moment for Microsoft. Despite delivering strong Q2 FY2026 earnings, the stock is down over 16% year-to-date as investors worry that the massive capital expenditure on AI data centers isn't yielding the bulletproof reliability expected of enterprise software. When your AI assistant can read the draft of a secret merger or a sensitive legal strategy despite being told not to, the "productivity gain" starts to look like a liability.


The business implications are layered. Microsoft has been positioning Copilot as the "secure" alternative to public chatbots, promising that "your data is your data." However, this bug proves that generative AI's value within a productivity suite is entirely dependent on invisible policy checks that can fail with a single bad line of code. If a sensitivity label, the bedrock of Microsoft Purview, can be ignored by the very tool it's meant to gate, IT departments may rethink how quickly they want to integrate "agentic" AI into their most sensitive workflows.


Industry observers are already sounding the alarm, and Microsoft's own language to administrators is blunt: "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," the company acknowledged after reports surfaced of the AI returning summaries of protected emails. For a company that recently announced it would force AI integration while simultaneously raising subscription prices, this "limited scope" advisory feels like a major oversight.


As of mid-February, Microsoft says it is "continuing to monitor the deployment" of the fix and is reaching out to affected organizations. But for subscribers among the 15 million who just saw confidential drafts summarized by an "agent," the question isn't whether the bug is fixed; it's what else the AI is seeing that it shouldn't.
