- Grok conversations shared by users have been found indexed by Google
- The interactions, no matter how private, could be searchable online
- The problem arose because Grok's share button did not add noindex tags to keep the pages out of search engines
If you have spent any time talking to Grok, your conversations may be visible with a simple Google search, as first reported by Forbes. More than 370,000 Grok chats were indexed and made searchable on Google without users' knowledge or permission after they used Grok's share button.
The unique URL the button creates did not mark the page as something for Google to ignore, leaving it publicly discoverable with little effort.
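The fix is a single directive: a page that carries a robots noindex tag (or an equivalent `X-Robots-Tag` HTTP header) tells search engines not to list it. As a minimal sketch, here is how one could check whether a share page's HTML includes that tag, using only Python's standard library; the sample HTML strings are illustrative, not Grok's actual markup.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots":
                # content is a comma-separated list, e.g. "noindex, nofollow"
                self.directives += [d.strip().lower()
                                    for d in a.get("content", "").split(",")]

def has_noindex(html: str) -> bool:
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

# A share page that search engines are told to skip:
protected = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
# A page with no directive at all, like the share pages described in the report:
exposed = '<html><head><title>Shared chat</title></head></html>'

print(has_noindex(protected))  # True
print(has_noindex(exposed))    # False
```

Without that one line of markup, a crawler that finds the URL treats the page like any other public content, which is exactly what happened here.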
Passwords, private health problems and relationship drama fill the conversations now publicly available. More worrying still, questions to Grok about making drugs and planning murders also appear. Grok transcripts are technically anonymized, but if identifying details appear in the text, people could work out who was airing petty grievances or criminal schemes. These are not exactly the kinds of topics you want linked to your name.
Unlike a screenshot or a private message, these links have no built-in expiration or access control. Once they are live, they are live. It is more than a technical failure; it makes AI harder to trust. People who use chatbots for romance or role-play do not want those conversations leaked. Finding your deepest thoughts alongside recipe blogs in search results can put you off the technology for good.
No privacy with AI chats
So how do you protect yourself? First, stop using the "Share" feature unless you are completely comfortable with the conversation being public. If you have already shared a chat and regret it, you can try to find the link again and request its removal from Google using its content removal tool. But that is a cumbersome process, and there is no guarantee it disappears immediately.
If you talk to Grok through the X platform, you should also adjust your privacy settings. Turning off the option that allows your posts to be used to train the model may give you a bit more protection. That is cold comfort, but the rush to ship AI products has left many privacy protections blurrier than you might think.
If this problem sounds familiar, it is because it is only the latest example of AI chatbot platforms fumbling user privacy while promoting one-click sharing of conversations. OpenAI recently had to roll back an "experiment" in which shared ChatGPT conversations began appearing in Google results. Meta faced a backlash this summer when people discovered that their discussions with the Meta AI chatbot could appear in the app's Discover feed.
Conversations with chatbots can read more like diary entries than social media posts. And if an app's default behavior turns them into searchable content, users will pull back, at least until the next time. As with Gmail ads that scanned your inbox or Facebook apps that scraped your friends list, the impulse is always to apologize after the privacy violation.
The best case is that Grok and the rest patch this quickly. But AI chatbot users should probably assume that anything shared could be read by anyone else. As with many other supposedly private digital spaces, there are more holes than anyone can see. And maybe don't treat Grok as a trusted therapist.