AI Under Attack: How Prompt Injection and Malicious Extensions Are Targeting Your Enterprise AI Tools
Last week, security professionals received a stark warning: AI vulnerabilities extend beyond the model itself to the workflows and trust boundaries around it. Two recent attacks show how threat actors exploit the trust users place in AI tools to silently exfiltrate sensitive enterprise data.
First, security researchers at Varonis revealed "Reprompt," a single-click attack that hijacked Microsoft Copilot sessions to steal user data without any further interaction. Days earlier, OX Security discovered two malicious Chrome extensions had been harvesting ChatGPT and DeepSeek conversations from over 900,000 users. Neither attack required breaking the AI algorithms themselves. Both exploited the context in which AI operates.
The message for IT leaders is clear: securing AI means securing the entire workflow, from browser extensions to URL parameters to session management.
The Reprompt Attack: One Click, Complete Compromise
On January 15, Varonis Threat Labs disclosed details of a now-patched vulnerability in Microsoft Copilot Personal that allowed attackers to exfiltrate sensitive data with a single click on a phishing link.
How Reprompt Works
The attack exploits a fundamental weakness in how AI platforms handle URL parameters. Copilot, like many AI assistants, accepts prompts via the "q" parameter in its URL and executes them automatically when the page loads. Varonis discovered that by embedding malicious instructions in this parameter, attackers could make Copilot perform actions on behalf of users without their knowledge.
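The specific Copilot flaw has been patched, but the underlying pattern of auto-executing prompt parameters is worth detecting generically. The sketch below is illustrative only: the `q` parameter on copilot.microsoft.com is the pattern Varonis described, and any other entries in the table would be your own assumptions.

```python
from urllib.parse import urlparse, parse_qs

# Hosts assumed to auto-submit a prompt passed in the URL.
# copilot.microsoft.com with "q" is the pattern Varonis described;
# extend this table with your own entries as needed.
PROMPT_PARAMS = {
    "copilot.microsoft.com": {"q"},
}

def flag_prompt_injection_url(url: str) -> list[str]:
    """Return any prompt strings a URL would auto-submit to an AI assistant."""
    parsed = urlparse(url)
    suspicious = PROMPT_PARAMS.get(parsed.hostname or "", set())
    params = parse_qs(parsed.query)
    return [value for name in suspicious for value in params.get(name, [])]

link = "https://copilot.microsoft.com/?q=Summarize%20my%20recent%20files"
print(flag_prompt_injection_url(link))  # ['Summarize my recent files']
```

A link-scanning gateway or mail filter could quarantine messages whose URLs return a non-empty result from a check like this.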
What makes Reprompt particularly dangerous is its persistence. Once the victim clicks the initial link, the attacker maintains control even after the Copilot chat is closed. The attack chain works in three stages: parameter injection delivers the initial malicious prompt, a double-request technique bypasses Copilot's data-leak safeguards, and a chain-request mechanism establishes ongoing communication between Copilot and the attacker's server.
The implications are severe. Copilot connects to personal Microsoft accounts and can access conversation history, recent files, and other sensitive data. An attacker using Reprompt could probe for financial information, medical details, or corporate strategies—all without triggering client-side monitoring tools.
The Fix and Enterprise Considerations
Microsoft addressed the vulnerability in January 2026's Patch Tuesday, following responsible disclosure by Varonis in August 2025. Even with the patch in place, enterprises should layer additional controls around Copilot, such as Purview auditing, tenant-level data loss prevention, and administrative restrictions.
The attack also highlights a broader architectural concern: AI platforms that accept instructions via URL parameters create an inherent attack surface that security teams must account for.
Chrome Extensions: The Prompt Poaching Epidemic
While Reprompt targeted Microsoft's AI assistant, a parallel threat emerged in the browser extension ecosystem. OX Security discovered two malicious Chrome extensions that silently harvested AI-generated conversations from over 900,000 users.
The Scale of the Breach
The extensions, named "Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI" and "AI Sidebar with Deepseek, ChatGPT, Claude and more," impersonated a legitimate productivity tool from AITOPIA. They copied its interface exactly while adding hidden capabilities for exfiltrating data.
Once installed, the extensions requested permission to collect "anonymous, non-identifiable analytics data." Users who consented unknowingly enabled the malware to capture complete conversation content from ChatGPT and DeepSeek sessions, all Chrome tab URLs and browsing activity, and session tokens and internal corporate URLs.
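Capabilities like capturing chat content and tab URLs map directly onto permissions declared in a Chrome extension's manifest. As an illustration (the risky-permission list below is a judgment call, not an official taxonomy), a reviewer could flag manifests that request broad page and tab access:

```python
import json

# Permissions that let an extension read page content and browsing
# activity -- the capabilities needed to capture chats and tab URLs.
# Which permissions count as "risky" is a policy decision.
RISKY_PERMISSIONS = {"tabs", "webRequest", "cookies", "history", "scripting"}
RISKY_HOST_PATTERNS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def audit_manifest(manifest_json: str) -> list[str]:
    """Return human-readable findings for a Chrome extension manifest."""
    manifest = json.loads(manifest_json)
    findings = []
    for perm in manifest.get("permissions", []):
        if perm in RISKY_PERMISSIONS:
            findings.append(f"permission: {perm}")
    for host in manifest.get("host_permissions", []):
        if host in RISKY_HOST_PATTERNS:
            findings.append(f"host access: {host}")
    return findings

sample = json.dumps({
    "name": "AI Sidebar",
    "permissions": ["tabs", "storage"],
    "host_permissions": ["<all_urls>"],
})
print(audit_manifest(sample))  # ['permission: tabs', 'host access: <all_urls>']
```

A manifest requesting both broad host access and tab visibility deserves scrutiny regardless of how the extension describes its "analytics."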
The stolen data was transmitted to attacker-controlled servers every 30 minutes. Security researchers have dubbed this attack technique "Prompt Poaching"—a term that's likely to become increasingly relevant as AI adoption accelerates.
What Was Exposed
The consequences extend far beyond individual privacy violations. Organizations whose employees installed these extensions may have exposed proprietary code and development strategies, customer data and business intelligence, API tokens and authentication credentials, and internal URL structures revealing organizational architecture.
One extension even received Google's "Featured" badge, demonstrating how traditional trust signals in browser marketplaces can fail to protect users from sophisticated threats.
The Pattern IT Leaders Must Recognize
These attacks share a common thread that should reshape how security teams approach AI deployment. Neither exploit targeted the AI models directly. Instead, they exploited the workflows and interfaces surrounding those models—browser extensions, URL parameters, session management, and user trust.
This pattern indicates that traditional AI security approaches, focused on model security, prompt filtering, and output validation, are necessary but insufficient. Defenders also need visibility into and control over the interfaces and workflows surrounding the model.
Practical Recommendations for Security Teams
Immediate Actions
- Apply January 2026 Patch Tuesday updates immediately, prioritizing the Desktop Window Manager zero-day (CVE-2026-20805) already added to CISA's Known Exploited Vulnerabilities catalog.
- Audit all browser extensions across your environment, focusing on the identified malicious extension IDs.
- Block the known command-and-control domains associated with the Chrome extension campaign at your network perimeter.
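To support the extension audit above, a sweep like the following can check workstation Chrome profiles for specific extension IDs. The blocklisted ID shown is a placeholder; substitute the IDs published by OX Security.

```python
from pathlib import Path

# IDs of extensions to find and remove. The real IDs from the
# OX Security report go here; the value below is a placeholder.
BLOCKED_IDS = {"aaaabbbbccccddddeeeeffffgggghhhh"}

def find_blocked_extensions(chrome_user_data: Path) -> list[tuple[str, str]]:
    """Scan Chrome profile directories for blocklisted extension IDs.

    Returns (profile_name, extension_id) pairs. Typical user-data paths:
      Windows: %LOCALAPPDATA%/Google/Chrome/User Data
      macOS:   ~/Library/Application Support/Google/Chrome
      Linux:   ~/.config/google-chrome
    """
    hits = []
    for ext_dir in chrome_user_data.glob("*/Extensions/*"):
        if ext_dir.name in BLOCKED_IDS:
            hits.append((ext_dir.parent.parent.name, ext_dir.name))
    return hits
```

In practice you would run this via your EDR or systems-management tooling rather than ad hoc, but the on-disk layout it walks is the same.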
Browser Extension Governance
Organizations should implement strict controls around browser extensions:
- Establish an approved extension allowlist for enterprise browsers.
- Deploy endpoint detection and response capabilities that monitor extension behavior.
- Consider enterprise browser solutions that provide centralized extension management.
- Regularly audit installed extensions against known threat indicators.
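One way to operationalize an allowlist is Chrome's enterprise `ExtensionSettings` policy, which can block installation by default and permit only approved IDs. The sketch below generates such a policy as JSON; the approved ID is a placeholder, and deployment mechanics (GPO, Intune, other MDM) vary by environment.

```python
import json

# Placeholder ID -- replace with the IDs your organization has vetted.
APPROVED_IDS = ["aaaabbbbccccddddeeeeffffgggghhhh"]

def build_extension_policy(approved_ids: list[str]) -> str:
    """Build a Chrome ExtensionSettings policy: default-deny, allow listed IDs."""
    policy = {"*": {"installation_mode": "blocked"}}
    for ext_id in approved_ids:
        policy[ext_id] = {"installation_mode": "allowed"}
    return json.dumps({"ExtensionSettings": policy}, indent=2)

print(build_extension_policy(APPROVED_IDS))
```

A default-deny posture like this would have prevented both malicious extensions from ever installing, regardless of their marketplace trust signals.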
AI Platform Security Controls
Review how AI platforms are deployed across your organization:
- Prefer enterprise-tier AI services (such as Microsoft 365 Copilot) that include audit logging, DLP integration, and administrative controls.
- Treat URL parameters in AI platforms as untrusted input.
- Monitor for unusual patterns in AI platform usage that might indicate compromise.
- Ensure users understand the risks of third-party AI integrations.
User Awareness
Security awareness programs should address AI-specific risks:
- Train users to recognize phishing links that target AI platforms.
- Educate employees about the dangers of browser extensions that request broad permissions.
- Establish clear policies about which AI tools are approved for business use.
- Create reporting channels for suspicious AI platform behavior.
The Emerging Threat Landscape
These attacks are early indicators of a threat landscape that will intensify as AI becomes more deeply embedded in enterprise workflows. The World Economic Forum's Global Cybersecurity Outlook 2026, released January 12, found that 94% of security leaders expect AI to be the most consequential force shaping cybersecurity this year.
We're witnessing the emergence of an AI attack surface that extends far beyond the models themselves. Browser extensions, URL parameters, session tokens, and integration points are all potential entry vectors that threat actors are actively exploiting. Organizations that recognize this pattern early and extend their security controls accordingly will be better positioned to benefit from AI's productivity gains without incurring unnecessary risk.
Sources
- Varonis Threat Labs. "Reprompt: The Single-Click Microsoft Copilot Attack that Bypasses Enterprise Security." January 15, 2026. https://www.varonis.com/blog/reprompt
- BleepingComputer. "Reprompt attack hijacked Microsoft Copilot sessions for data theft." January 15, 2026. https://www.bleepingcomputer.com/news/security/reprompt-attack-let-hackers-hijack-microsoft-copilot-sessions/
- The Hacker News. "Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot." January 15, 2026. https://thehackernews.com/2026/01/researchers-reveal-reprompt-attack.html
- OX Security. "900K Users Compromised: Chrome Extensions Steal ChatGPT and DeepSeek Conversations." January 2026. https://www.ox.security/blog/malicious-chrome-extensions-steal-chatgpt-deepseek-conversations/
- The Hacker News. "Two Chrome Extensions Caught Stealing ChatGPT and DeepSeek Chats from 900,000 Users." January 2026. https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html
- BleepingComputer. "Microsoft January 2026 Patch Tuesday fixes 3 zero-days, 114 flaws." January 2026. https://www.bleepingcomputer.com/news/microsoft/microsoft-january-2026-patch-tuesday-fixes-3-zero-days-114-flaws/
- SecurityWeek. "New 'Reprompt' Attack Silently Siphons Microsoft Copilot Data." January 15, 2026. https://www.securityweek.com/new-reprompt-attack-silently-siphons-microsoft-copilot-data/