OpenAI is facing a proposed class action lawsuit accusing the company of secretly sharing ChatGPT user conversations and personal information with Meta and Google through online tracking technologies embedded on its website.
The lawsuit, filed in California federal court, alleges that OpenAI used tracking tools such as Meta Pixel and Google Analytics on ChatGPT pages in ways that transmitted user queries, email addresses, user IDs, and browsing information to third-party advertising platforms without proper consent.
According to the complaint, users believed their conversations with ChatGPT were private, particularly when discussing sensitive topics such as finances, health, legal issues, or personal relationships. Plaintiffs argue that sending this data to advertising and analytics companies violated privacy laws and OpenAI’s own promises about confidentiality.
The lawsuit claims OpenAI embedded Meta Pixel tracking code and Google Analytics scripts across the ChatGPT website, allowing data about user interactions to be transmitted automatically to external systems. Plaintiffs allege the shared data included portions of chatbot prompts, account identifiers, session information, and metadata tied to user behavior.
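To illustrate the mechanism at issue: tracking pixels typically fire a small web request ("beacon") from the visitor's browser to the tracker's servers, and that request carries the page URL along with site and event identifiers. The sketch below builds such a beacon URL using the parameter names from Meta Pixel's public noscript image-tag format (`id`, `ev`, `dl`); the pixel ID and chat URL are hypothetical, and this is a simplified model of the beacon, not the actual script either company ships.

```javascript
// Simplified sketch of a tracking-pixel beacon. Parameter names mirror
// the public Meta Pixel noscript format; all values are hypothetical.
function buildPixelBeacon(pixelId, eventName, pageUrl) {
  const params = new URLSearchParams({
    id: pixelId,   // site owner's pixel ID
    ev: eventName, // event name, e.g. "PageView"
    dl: pageUrl,   // full page URL of the visit
  });
  return `https://www.facebook.com/tr?${params.toString()}`;
}

// If the page URL encodes user-specific context, that context rides
// along in the beacon to the third-party server:
const beacon = buildPixelBeacon(
  "123456789",
  "PageView",
  "https://chat.example.com/c/session-abc?topic=medical"
);
console.log(beacon);
```

Because the browser sends this request directly to the tracker's domain, anything embedded in the page URL or event payload (here, the hypothetical `topic=medical` query string) leaves the site operator's control, which is the kind of transmission the complaint describes.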
The complaint argues that many users would not have shared highly personal information with ChatGPT if they knew conversations could potentially be exposed to third-party advertising networks. The plaintiffs are seeking damages as well as court orders preventing OpenAI from continuing the alleged practices.
The case adds to mounting legal pressure on AI companies over privacy, copyright, and data collection practices. OpenAI already faces multiple lawsuits tied to how ChatGPT handles copyrighted material, personal information, and potentially harmful outputs.
Meta is also dealing with growing legal scrutiny surrounding data collection and AI training practices. Earlier this month, several major publishers sued Meta over claims the company used pirated books and copyrighted materials to train its Llama AI models without authorization.
Privacy researchers have increasingly warned that AI chatbots may become major data collection platforms because users often disclose intimate details during conversations. Unlike traditional search engines, generative AI systems encourage long-form interactions where users may reveal personal health information, financial details, legal concerns, passwords, or workplace data.
