The Disturbing Change in Elon Musk’s Behavior Regarding AI Transparency
Elon Musk’s position on the openness and transparency of artificial intelligence (AI) systems has undergone a profound change in the last couple of years.
Ten years ago, when Musk, Sam Altman, and others founded OpenAI, they did so with the mission of advancing AI in a way that would be safe, transparent, and beneficial for all humanity. They intentionally structured OpenAI as a nonprofit with an explicit emphasis on openness and the belief that AI technologies – especially Artificial General Intelligence – should not be controlled by a single private company or government. Musk was a strong supporter of open-source AI, advocating that breakthroughs and models should be shared openly to prevent the concentration of power and to enable public scrutiny for AI safety. Hence the name: “Open” AI.
Fast forward to today, and Musk’s behavior stands in stark contrast to those original, nobler intentions. Among other things, Musk has had largely unchecked access to numerous government agencies, where he has reportedly rolled out his own proprietary AI platform with little transparency and significant privacy concerns; he appears to tweak his own Grok chatbot model to reflect his personal viewpoints, often at the expense of truth; and he’s quietly building a massive data center in Tennessee that was found to be operating 33 methane gas turbines – without proper permits – that emit large amounts of toxic and carcinogenic pollution.
How did he get from point A to point B, you might ask? Consider the following.
In 2018, Musk left OpenAI, citing concerns over the direction of the company and his conflicting business interests – both honorable reasons for stepping away. After his departure, he became increasingly critical of OpenAI’s pivot toward a more secretive, for-profit model, accusing the company and its leadership of betraying their original commitment to openness and public benefit.
In March 2023, Musk was among the more than 1,000 influential tech leaders, academics, and AI researchers who signed the Future of Life Institute’s open letter, “Pause Giant AI Experiments.” The letter called for all AI labs to suspend the development of AI systems more powerful than OpenAI’s newly released GPT-4 for at least six months. It argued that the rapid and competitive development of increasingly powerful AI systems posed “profound risks to society and humanity” and stressed that existing regulatory and safety planning was inadequate. (Other notable signatories included Apple co-founder Steve Wozniak, Yuval Noah Harari, Yoshua Bengio, and senior figures from major AI labs like Google/DeepMind and Stability AI.)
Just one month later, Musk announced that he was developing his own chatbot – called TruthGPT – a “maximum truth-seeking AI that tries to understand the nature of the universe.” While his announced intentions were once again noble, this put Musk in direct competition with other chatbot providers – including, obviously, OpenAI – and seemed to mark the start of his drift away from his AI transparency and openness roots.
In February 2024, Elon Musk filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company had strayed from its original open, nonprofit mission and had become a “closed-source de facto subsidiary” of Microsoft. Musk sought to block OpenAI’s full conversion into a for-profit business, alleging breach of contract, violation of fiduciary duty, and unfair business practices. He called for an injunction to halt OpenAI’s commercial deployment of AGI and demanded that it return to its nonprofit mission.
Many view Musk’s lawsuit primarily as an attempt to slow down OpenAI, now a direct competitor to his company, xAI. While Musk continued to call for a return to open-source/safety-focused AI, his actions appeared more selfish as he accelerated the development of his own chatbot, now called Grok, and advanced his own for-profit AI company.
When he submitted an unsolicited $97.4 billion bid to acquire OpenAI’s nonprofit arm in February 2025, his shift in motivations became even more evident. It seemed Musk had become far more concerned with the profitability of his company and its models than with the wellbeing of humanity.
In January 2025, Musk was appointed to lead the Department of Government Efficiency (DOGE) under President Trump’s administration, and his shift from openness to secrecy seemed complete. He quickly fell silent regarding DOGE’s plans to use AI and his Grok chatbot across federal government agencies. Searches of the public domain yield sparse results. When I asked Musk’s own chatbot to “please list as many public statements as possible that Elon Musk has made about DOGE’s rollout of AI and Grok across government agencies,” Grok’s answer included the following:
There are no direct public statements from Elon Musk specifically addressing the Department of Government Efficiency’s (DOGE) rollout of AI or Grok across government agencies in the provided sources. While Musk has made general comments about Grok and AI on X, these do not explicitly reference DOGE’s activities or government implementation. Below, I’ve compiled relevant public statements from Musk found in the provided X posts that discuss Grok and AI in a broader context, as these are the closest available to the topic. I’ve also noted the absence of specific DOGE-related statements to clarify the scope.
This is utterly contrary to the goals of openness and transparency. It’s also not what we’d expect – even demand – from a public servant.
When I asked Grok to explain Musk’s shift in focus, its answer included the following:
There’s no definitive evidence in the provided sources that Elon Musk has explicitly shifted from a mission of advancing AI in a safe, transparent, and beneficial way for humanity to a secretive, self-serving agenda involving Grok’s rollout in government agencies or tweaking it to align with his personal views. However, the sources do highlight controversies and actions that could be interpreted as inconsistent with his stated goals, particularly around the use of methane gas turbines at the Colossus data center.
Musk’s actions can be seen as pragmatic responses to competitive and logistical pressures rather than a deliberate abandonment of his mission for safe, transparent, and beneficial AI. However, the lack of public oversight, environmental impact, and Grok’s reliance on Musk’s views undermine the transparency and universal benefit he claims to prioritize. Without direct statements from Musk addressing these issues, the perception of a “change” stems from inconsistencies between his rhetoric and xAI’s actions.
Other chatbot responses were not as “forgiving.” For example, Perplexity.ai posited that mounting industry competition and pressure to monetize AI – particularly xAI’s attempt to secure major government and enterprise contracts – led Musk to set aside ethical concerns and regulatory caution in order to rapidly gain market share and maximize perceived “innovation.” Among other things, Perplexity cited reports of DOGE bypassing customary vetting and approval processes, restricting transparency about both AI system design and operations, and using encrypted communications apps (like Signal) to skirt public records laws and congressional oversight. This is not a good look.
To me, this is a prime example of a human trait that has been demonstrated again and again throughout history: the more money, power, and fame you offer people, the more they will behave in ways that protect their personal interests, even if that means compromising their core values.
What are your thoughts?