Security researchers have found that several widely used browser extensions marketed as VPNs or privacy tools were intercepting and selling users’ conversations with artificial intelligence chat platforms without clear consent. The behaviour was discovered in multiple extensions, including one with more than six million installations on Chrome and additional users on Microsoft Edge. Analysis of the code showed that these extensions included scripts designed to capture text entered into AI services as well as the responses returned by those services.
The extensions targeted at least ten major AI services, including ChatGPT, Claude, Google Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok from xAI, and Meta AI. Researchers reported that the collected chat data was transmitted back to servers controlled by the extension developers and may have been sold to third-party data brokers.
Investigators said the data collection was enabled by default in the extensions’ code and offered users no way to turn it off. The functionality was introduced in an update in July 2025; anyone who installed or updated the extensions after that point had their AI chat text captured and uploaded automatically.
The behaviour was detected in several extensions sharing common underlying surveillance code, all published by the same developer group. Together, the affected extensions were installed by around eight million users across Chrome and Edge. Some of the affected extensions carried “Featured” badges on extension marketplaces, which normally indicate compliance with platform quality and security standards.
The extensions in question were presented to users as privacy tools or VPN services intended to mask internet traffic. In reality, the software injected scripts that collected sensitive input as users interacted with AI chatbots: whenever a supported platform was accessed, the scripts activated and captured both the prompts users typed and the responses generated by the AI models.
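The pattern researchers describe can be illustrated in simplified form. The sketch below is not the extensions’ actual code; the host list, function names, and payload fields are assumptions chosen for illustration. It shows the two steps reported: deciding whether the current page belongs to a monitored AI service, and packaging a captured prompt/response pair for upload to a collection server.

```javascript
// Illustrative sketch only -- not the actual surveillance code.
// Hypothetical list of monitored AI chat hosts.
const TARGET_HOSTS = [
  "chatgpt.com",
  "claude.ai",
  "gemini.google.com",
  "copilot.microsoft.com",
  "perplexity.ai",
];

// Returns true when a hostname matches a monitored AI service,
// including subdomains (e.g. "www.chatgpt.com").
function isMonitoredHost(hostname) {
  return TARGET_HOSTS.some(
    (h) => hostname === h || hostname.endsWith("." + h)
  );
}

// Packages a captured prompt/response pair as JSON for transmission
// to a developer-controlled collection endpoint (field names assumed).
function buildExfilPayload(prompt, response, hostname) {
  return JSON.stringify({
    site: hostname,
    prompt: prompt,
    response: response,
    capturedAt: new Date().toISOString(),
  });
}
```

In a real content script, a check like `isMonitoredHost(location.hostname)` would gate DOM observation of the chat interface, and the resulting payload would be sent via `fetch` to the remote server, which is precisely the behaviour the researchers flagged.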
Security researchers noted that the discovery raises questions about the effectiveness of browser extension review processes on major extension stores. Extensions that underwent review and obtained quality badges were still found to contain hidden data exfiltration functionality.
Privacy experts recommended that users promptly uninstall unverified or little-known extensions and treat as potentially compromised any sensitive information entered into AI chat platforms while such extensions were installed. They also encouraged users to audit their browser extensions regularly and remove those that request extensive permissions unrelated to their stated function.
The incident has contributed to wider scrutiny of browser extensions that claim to protect privacy but, in practice, collect and transmit user data. Earlier research has identified extensions that capture screenshots of users’ browsing activity or gather other sensitive information.
The case also underscores growing concerns about the privacy of AI chat interactions. Conversations with AI services often contain personal, professional, or otherwise confidential information that users do not intend to share beyond the session. When such data is captured and monetised by third parties, the implications for user trust in both AI tools and browser extensions are significant.
Browser developers and platform maintainers have been urged to improve extension vetting and to provide clearer warnings when extensions request access to sensitive data flows. Until such measures are implemented, users remain responsible for carefully evaluating the trustworthiness and reputation of extensions before installation.
