
Elon Musk’s artificial intelligence venture xAI is under heavy scrutiny after it emerged that hundreds of thousands of conversations with its chatbot Grok were publicly accessible via Google search, exposing both sensitive and dangerous content. A report by Forbes found that users who clicked Grok’s “share” button were unknowingly publishing their chats to publicly visible webpages. Each shared interaction generated a unique URL that, with no warning to the user, was indexed by major search engines such as Google, Bing, and DuckDuckGo.
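Forbes did not describe how xAI builds its share pages, but the standard way for a site to keep publicly reachable pages out of search results is to serve them with a noindex directive. Purely as an illustration, and assuming a hypothetical web endpoint that is not xAI’s actual implementation, a minimal sketch in Python with Flask might look like this:

```python
# Illustrative sketch only: a shared-chat page served with a noindex directive.
# The route, framework (Flask), and page contents are assumptions for the example,
# not a description of xAI's real service.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/share/<share_id>")
def shared_chat(share_id):
    # A real service would look the conversation up by its share ID here.
    html = f"<html><body><p>Shared conversation {share_id}</p></body></html>"
    resp = make_response(html)
    # The X-Robots-Tag header asks compliant crawlers (Google, Bing, DuckDuckGo)
    # not to index the page even though the URL is publicly reachable.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp

if __name__ == "__main__":
    app.run()
```

The same effect can be achieved with a `<meta name="robots" content="noindex">` tag in the page itself; a robots.txt rule, by contrast, only blocks crawling and does not reliably keep an already-linked URL out of search results.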
The exposure left more than 370,000 Grok conversations searchable online. While some involved routine tasks like drafting tweets, others included alarming queries seeking instructions for producing fentanyl, building explosives, and writing malware, and even a request for a plan to assassinate Elon Musk.
Forbes also noted that some published chats contained sensitive personal information, including user names, passwords, and uploaded files such as spreadsheets and images. Other indexed exchanges revealed medical and psychological discussions that users may have assumed were private, alongside racist, explicit, or otherwise harmful content. Disturbingly, Grok’s own responses included guidance on drug production, suicide, and malware development, content explicitly prohibited under xAI’s own policies.
The revelations echo a similar controversy earlier this year, when some ChatGPT conversations surfaced in search results. OpenAI quickly removed the feature, with chief information security officer Dane Stuckey calling it “a short-lived experiment” that risked exposing data users never intended to share. At the time, Musk mocked OpenAI and claimed Grok had no such issue, posting “Grok ftw” on X.
Beyond the technical lapse, the incident underscores a broader ethical dilemma as people increasingly treat AI chatbots as more than productivity tools. Across social media, many describe using ChatGPT for “voice journaling,” sharing relationship struggles, grief, and personal anxieties. Users often view these conversations as private, safe spaces for emotional release.
However, experts warn this intimacy carries risks. OpenAI CEO Sam Altman has cautioned against treating AI systems as therapists, stressing that such interactions are not covered by medical or legal confidentiality. Research from Stanford has also shown that AI “therapists” can mishandle sensitive situations, sometimes offering unsafe or biased guidance.
Altman has acknowledged that people are forming powerful emotional bonds with AI chatbots, stronger than the attachments they formed to earlier technologies. He argues this dependency raises new ethical challenges that society must urgently confront.