All requests are proxied through DuckDuckGo, and all personally identifying metadata (e.g. IP addresses, any sort of user/session ID, etc.) is removed.
They have direct agreements with the model providers not to train on or store user data (the training part is specifically relevant to OpenAI & Anthropic), with a requirement that all information be deleted within 30 days once it's no longer necessary for providing responses.
The Llama & Mixtral models are hosted on together.ai (an LLM-focused cloud platform), which is bound by the same data privacy requirements as OpenAI and Anthropic.
Recent chats that you save for later are stored locally (instead of on their servers), and once you pass 30 conversations, the oldest chat is automatically purged from your device.
Obviously there are fewer technical privacy guarantees than with a local model, but for when running one isn’t practical or possible, I’ve found it’s a good option.
It’s what drives most billionaire mentalities: elite projection.
They think that what they want must be what everyone wants, and that what’s best for them is best for everyone, so long as they can believe pursuing their own interests is morally okay.
For a billionaire who regularly isolates himself not just from society but from his own company’s employees, fake profiles where computers do the communicating instead of humans, go along with whatever you say, remain eternally inoffensive, and exist solely to increase engagement don’t seem like a bad idea, because that’s almost exactly what he’d want for himself.