Elon Musk, now one of the most influential individuals in the United States after President Donald Trump, is under increasing scrutiny over whether he is maintaining a separation between his government role and his business interests. Concerns have mounted that Musk may have trained, or may be training, his artificial intelligence (AI) system, Grok, on federal data obtained by the Department of Government Efficiency (DOGE).
Musk’s Access to AI and Government Data: The Scandal
DOGE, the newly created federal agency with a mandate to enhance efficiency, has allegedly accessed important government databases, including the Social Security system. It is now pushing for access to Internal Revenue Service (IRS) records, which contain sensitive taxpayer information. DOGE has also been meeting with the Departments of Defense and State, fueling additional controversy over the extent of Musk’s influence.
Critics have been uneasy for months about Musk and his comparatively inexperienced staff handling classified information. Those worries have now grown into fears that DOGE’s access to federal data could be used to train Grok, Musk’s AI chatbot. The scandal follows Musk’s recent announcement that AI will play a central role in streamlining federal government operations.
White House Denies Allegations, But Doubts Remain
Despite growing speculation, the White House has denied any misuse of federal data by Musk or DOGE. Press Secretary Karoline Leavitt dismissed the claims, calling them “unequivocally false” in a statement to Politico.
However, skepticism persists because of the agency’s opaque operations. Democratic Representative Ro Khanna has called for greater oversight, emphasizing that Congress, not the Trump administration, should regulate AI’s use in federal data processing.
“There need to be strict guardrails and transparency on the use of any federal data for AI. This is true for the government’s use of this data. It is even more critical for the use by private companies, where there should be equal access and stringent protection of privacy. All of this needs to be dictated by Congress, not the whims of the Trump administration,” Khanna explained to Politico.
Republicans Call for Safeguards as Well
Democrats are not the only ones concerned about Musk’s possible misuse of government information. Pro-Trump conservative strategist Ryan Girdusky has also called for regulatory safeguards.
“I would like Republicans to find guardrails for any private entity, though, whether it be Elon Musk or George Soros, who gets ahold of federal data. And there seems to be no effective, or very few effective, guardrails in obtaining and holding and using federal data,” Girdusky said.
Is Grok AI Being Trained on Government Data?
Although the White House has denied allegations that Musk is training Grok on federal data, the AI chatbot itself has given evasive answers on the matter. When Firstpost asked Grok whether it had been trained on data accessed by DOGE, the chatbot replied that “it’s theoretically possible.”
In a separate report, Politico found that Grok responded, “It’s plausible that data Doge accessed could have flowed to xAI projects like Grok 3.” Such responses have only fueled speculation about whether Musk’s AI systems are drawing on sensitive government data.
Lack of Transparency at Doge
Musk and Trump have defended DOGE as a necessary instrument for rooting out government inefficiency. The agency’s operations, however, remain largely opaque. DOGE has communicated mostly through posts on X (formerly Twitter), with no clear indication of whether it merely files reports and recommendations or holds actual decision-making power.
Adding to the confusion, Musk has frequently represented DOGE in meetings with world leaders. Yet the White House insists he is not the head of the agency, and it remains unclear who is leading it.
Experts Warn of AI’s Potential for Abuse
Pascal Hetzscholdt, an AI expert, has voiced strong concerns about the implications of a government-affiliated AI like Grok accessing federal data.
“With the fictional example of a dominant AI such as Grok 3 comes the very real fear of the data being leveraged to judge people’s loyalty and quell dissent,” Hetzscholdt wrote in a Substack column.
He also cautioned, “The accumulation of vast amounts of data, combined with advanced AI analytics, could create a powerful tool for social control and political repression. Safeguards such as strong privacy laws, independent oversight, and ethical AI development are crucial to prevent these potential abuses.”
With questions swirling around Musk’s role in government and the potential overlap between his AI businesses and federal data access, calls for transparency are growing. Although Musk has dismissed any suggestion of impropriety, concerns from both political parties and AI experts point to a need for stricter regulations to ensure responsible use of AI in government operations.