Meta is introducing a new “Incognito Chat” mode for Meta AI on WhatsApp, promising users completely private AI conversations that, according to the company, even Meta itself cannot access.
The feature is designed to address growing privacy concerns around AI chatbots, as users increasingly share sensitive personal, financial, health, and work-related information with generative AI systems.
According to Meta, Incognito Chat uses the company’s “Private Processing” infrastructure to create temporary AI conversations that are encrypted and processed in a secure environment isolated from Meta’s own systems. The company says conversations are not stored by default and disappear automatically after sessions end.
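To make the idea of a non-persisted session concrete, here is a minimal toy sketch of an ephemeral chat whose transcript lives only in memory and is discarded when the session ends. This is purely illustrative: it is not Meta's Private Processing system, and the class and method names are hypothetical.

```python
class EphemeralSession:
    """Toy model of an ephemeral AI chat session.

    Illustrative only; not Meta's actual implementation. The key
    property being modeled: the transcript is held solely in memory
    and is discarded when the session ends.
    """

    def __init__(self):
        self._messages = []          # transcript kept in memory only

    def send(self, text):
        self._messages.append(text)  # never written to disk or logs
        return f"reply to: {text}"   # stand-in for the model's response

    def end(self):
        # Drop the transcript so nothing persists after the session.
        self._messages.clear()


session = EphemeralSession()
session.send("How do I budget for a medical bill?")
session.end()
print(session._messages)  # the transcript is gone once the session ends
```

Real systems would also need to guarantee that no copies land in server logs, caches, or backups, which is exactly the part of Meta's claim that independent researchers cannot verify from the outside.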
Meta CEO Mark Zuckerberg described the feature as the first major AI chat system where conversation logs are not stored on servers. Meta claims the architecture prevents both employees and external attackers from viewing message contents.
The new mode will roll out gradually on WhatsApp and the standalone Meta AI app over the coming months. Initially, Incognito Chat will support text conversations only, with image uploads and other multimedia features disabled at launch.
Meta says the move reflects how users are beginning to treat AI systems more like personal advisors or therapists, asking highly sensitive questions they may not want permanently stored. WhatsApp head Will Cathcart said many users feel uncomfortable sharing deeply personal information with AI companies just to receive answers or assistance.
The company also announced an upcoming feature called “Side Chat,” which will allow users to privately ask Meta AI questions about ongoing WhatsApp conversations without alerting other participants in the chat.
Despite Meta’s privacy claims, the announcement is already generating skepticism among privacy advocates and security researchers. Critics point out that Meta has faced repeated scrutiny over data collection practices across Facebook, Instagram, and WhatsApp for years.
Some experts also warn that “private processing” systems still rely on cloud infrastructure that could potentially become targets for hackers or future legal demands. Similar concerns have emerged around other AI chatbots, including ChatGPT, Gemini, and Claude, where stored conversations have become relevant in lawsuits and investigations.
Meta argues its system differs from existing “temporary chat” or “incognito” modes offered by competitors because conversations are not only hidden from users’ histories but are also inaccessible to the company itself. By comparison, OpenAI, Google, and Anthropic retain temporary conversations for periods ranging from days to weeks.
The launch comes as AI companies face mounting pressure to improve transparency around how chatbot conversations are stored, processed, and potentially used for training future models. Meta currently states that ordinary WhatsApp messages remain protected by end-to-end encryption and are not used to train AI systems. Interactions with Meta AI, however, may be handled differently depending on the settings and features used.
