
Elon Musk has acknowledged in a U.S. federal court that his artificial intelligence company, xAI, partially relied on OpenAI’s models while developing its chatbot, Grok. The statement came during ongoing legal proceedings tied to Musk’s lawsuit against OpenAI, where he has accused the organization of straying from its original nonprofit mission.
During questioning, Musk confirmed that xAI employed a technique known as “model distillation,” a process in which a smaller “student” model is trained on the outputs of a larger “teacher” model so that it replicates the teacher’s capabilities at lower cost. When asked directly whether OpenAI’s models were used in this way, Musk responded that it was “partly” the case, describing the approach as a common practice across the AI industry.
Model distillation has become a focal point of debate in the artificial intelligence sector, as it allows companies to build competitive systems at significantly lower cost by leveraging the outputs of more advanced models. While widely used, the method exists in a legal and ethical gray area, as it may conflict with the terms of service set by AI providers.
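The mechanics behind the term can be illustrated with a toy example. The sketch below is a minimal, hypothetical illustration of distillation, not anything derived from xAI or OpenAI systems: a frozen linear “teacher” produces temperature-softened probabilities, and a “student” is trained by gradient descent to match them. All names (`W_teacher`, `W_student`, the temperature `T`) are assumptions for illustration only.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer targets."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))           # toy inputs
W_teacher = rng.normal(size=(8, 4))     # frozen "teacher" weights
soft_targets = softmax(X @ W_teacher, T=2.0)  # teacher's softened outputs

def distill_loss(W, T=2.0):
    """Cross-entropy between student predictions and teacher soft targets."""
    p = softmax(X @ W, T)
    return -np.mean(np.sum(soft_targets * np.log(p + 1e-12), axis=1))

# Student starts untrained and learns only from the teacher's outputs.
W_student = np.zeros((8, 4))
loss_before = distill_loss(W_student)
for _ in range(200):
    p = softmax(X @ W_student, T=2.0)
    grad = X.T @ (p - soft_targets) / len(X)  # softmax cross-entropy gradient
    W_student -= 0.5 * grad
loss_after = distill_loss(W_student)
```

Note that the student never sees the teacher’s weights or training data, only its outputs, which is why distillation can be performed against a model accessed purely through an API and why it sits uneasily with providers’ terms of service.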
Musk’s admission is particularly notable given the broader tensions between major AI developers. Companies like OpenAI, Google, and Anthropic have been increasingly concerned about unauthorized use of their models, especially as competition intensifies globally. At the same time, distillation is seen as a key driver enabling smaller or newer players to narrow the technological gap with industry leaders.
The disclosure also highlights an element of irony within the AI ecosystem, where leading firms themselves have faced scrutiny over how they source and use data for training. Musk’s testimony adds another layer to the ongoing discussion around intellectual property, competitive practices, and regulation in the rapidly evolving AI landscape.
