Meta Platforms is introducing an “incognito” mode for WhatsApp’s Meta AI, a private chat option that the company says is designed to keep sensitive conversations off its servers. The feature is aimed at WhatsApp users, and Meta’s head of WhatsApp, Will Cathcart, is fronting the announcement as the company tries to answer growing worries about how AI systems handle personal data.
Meta’s new incognito chat mode lets users hold temporary, private conversations with Meta AI without leaving a permanent record. Messages are handled in a secure environment that Meta says it cannot access, and sessions are not saved by default. When a user exits, the chat disappears, a design meant to give people control over what stays and what goes.
The move comes amid long-running privacy questions about generative AI, which relies on large language models trained on massive datasets. Those models can be exposed to private details when users type personal or sensitive information into chatbots. Meta is pitching incognito mode as a way to let people ask sensitive questions without worrying that the company is hoarding the underlying data.
Meta is not inventing the idea of private AI conversations; other platforms have already added similar controls. Google’s Gemini and OpenAI’s ChatGPT offer options to disable history and opt out of including chat data in training. Meta says it wants to match that expectation while keeping the convenience of having an assistant inside WhatsApp.
Will Cathcart told reporters that people are asking “a lot of meaningful questions about our lives with AI systems, and it doesn’t always feel like you should have to share the information behind those questions with the companies that run those AI systems.” That quote frames the problem Meta is trying to solve: users want useful answers without surrendering their private details.
Incognito chats will accept only typed questions and will return text responses; users won’t be able to upload files or generate images inside these private sessions. Meta also requires age confirmation before people can use the feature, since its platforms are off-limits to anyone under 13. Those limits are meant to reduce abuse and avoid exposing minors to unfiltered AI interactions.
To keep incognito conversations safe, Meta built guardrails so the bot won’t help with harmful or dangerous requests. As Cathcart explained, the system will “steer the user towards helpful information if it can and then refuse (to answer) and eventually even just stop interacting with the user completely.” Those are blunt measures, but Meta argues bluntness is necessary when AI might be asked to do or explain risky things.
The company says chats in this private mode won’t be used to train Meta’s models, putting another layer of separation between a user’s private session and the company’s broader AI development. That distinction is crucial: training data used to improve AI is where the biggest privacy concerns live, and Meta wants users to feel confident that what they say in incognito won’t feed into future models.
From a user perspective, the trade-offs are straightforward: you get more privacy and temporary interactions, but you give up multimedia features and a record that can be referenced later. For people who value confidentiality over convenience, that’s a fair exchange. For others who want richer, image-based back-and-forth or a persistent thread they can return to, standard Meta AI chats will remain an option.
Privacy-conscious users should still read the fine print, because “secure environment” can mean different things in practice. Meta’s description promises restricted access and no default saving, but technical details about encryption, retention windows, and how emergency requests are handled are exactly the specifics privacy advocates will demand. The new mode is a step, not a final answer, and it will invite scrutiny from users, regulators, and security researchers alike.