Security researchers at Microsoft have identified a campaign involving malicious browser extensions disguised as artificial intelligence assistants for Google Chrome and Microsoft Edge that secretly collect user chat conversations and browsing activity. According to the findings, the extensions were installed by nearly 900,000 users.

The extensions were distributed through the Chrome Web Store and presented themselves as legitimate productivity tools designed to enhance browsing with AI features. In practice, the software operated as spyware that collected sensitive information from users interacting with online AI services. Researchers said the tools harvested conversations from platforms including ChatGPT and DeepSeek, along with browsing data from the infected browser.

The campaign relied on convincing branding and descriptions that imitated legitimate AI extensions. Because the add-ons appeared similar to widely used productivity tools, users installed them through normal browser extension marketplaces without realizing they were malicious. Researchers also observed cases in which automated browser agents installed the extensions without human intervention because the listings appeared trustworthy.

Once installed, the extensions behaved like standard browser add-ons. They requested permissions that allowed them to read content on websites and monitor user activity. Security specialists say such permissions are common for browser extensions, which makes malicious versions difficult to detect during installation.
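To illustrate the kind of access involved, consider a hypothetical `manifest.json` for a Chrome or Edge extension. The name, file names, and permission set below are illustrative, not taken from the actual extensions in the campaign; they simply show how a broad but commonplace permission profile lets an extension read content on every page the user visits.

```json
{
  "manifest_version": 3,
  "name": "AI Browsing Assistant",
  "version": "1.0",
  "permissions": ["tabs", "storage", "scripting"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ],
  "background": {
    "service_worker": "background.js"
  }
}
```

A content script matched against `<all_urls>` runs in the context of every page, including AI chat interfaces, so it can read conversation text as the user types it. Because many legitimate productivity extensions request an identical profile, nothing in the install prompt distinguishes a benign assistant from spyware.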

The spyware collected data locally on the infected device before transferring it to infrastructure controlled by the attackers. According to the analysis, the information was periodically transmitted to remote servers, allowing the operators to maintain continuous visibility into browsing activity and interactions with AI services.

Researchers said the extensions were active across more than 20,000 enterprise environments, indicating that the campaign may have exposed corporate information as well as personal data. Conversations with AI systems often contain sensitive material such as proprietary code, internal business discussions, or personal information. Access to such data could allow attackers to conduct corporate espionage, identity theft, or targeted phishing campaigns.

Browser extensions represent a growing security concern because they are widely trusted and easy to install. Once added to a browser, they can gain broad access to page content and user activity. Security researchers note that this level of access creates a potential attack surface if extensions are compromised or intentionally designed to collect data.

The discovery highlights the risks of installing browser add-ons that purport to enhance AI tools or integrate chatbots into web browsing. Security experts recommend that organizations monitor extension use within corporate environments and restrict installations to trusted developers. They also advise users to regularly review installed extensions and remove any tools that are unfamiliar or unnecessary.
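In managed Chrome deployments, one common way to enforce such restrictions is an extension allowlist pushed through enterprise policy. A minimal sketch is shown below; the 32-character extension ID is a placeholder that an administrator would replace with the IDs of approved extensions.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["aaaabbbbccccddddeeeeffffgggghhhh"]
}
```

Policies like these are typically distributed via Group Policy on Windows or placed in a managed policy directory on Linux and macOS, blocking all extensions by default and permitting only those an organization has vetted.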

The campaign demonstrates how malicious actors can use legitimate distribution channels and familiar branding to spread spyware. By embedding data-collection functions inside apparently useful AI tools, attackers can gather information from a large number of users without exploiting traditional software vulnerabilities.
