In Defense of Inefficiency
While the focus is on civil servants being forced out of government, the real problem is what might replace them.
Since Elon Musk attached himself to the Trump campaign, there has been constant talk of improving government efficiency. On the first day of his presidency, Trump signed an executive order establishing Elon's meme-turned-reality agency, the Department of Government Efficiency (DOGE). In recent weeks, Elon has used DOGE with a free hand, gutting the federal government just as he gutted Twitter in the private sector. So far, he and his accomplices have offered two million federal employees early retirement, cancelled hundreds of millions of dollars in government contracts, and closed down agencies, all in the name of saving money. Notably, none of the contracts held by Elon’s companies have been affected. The question that should be front of mind is how the work previously performed by these agencies will get done amid such significant turnover and disruption.
The government's answers to this question vary. Some in the administration claim that terminated employees weren't doing real work to begin with. In other cases, it has been suggested that the work should have been the responsibility of the states. And then there is the more ominous and opaque discussion of government reform, while Elon walks around wearing a “Tech Support” shirt. Technology is often deployed under the guise of efficiency and cost savings. But everything comes with a cost.
The Trump administration has made pushing AI integration a top priority. On the 3rd of February, a recording of an internal meeting of the Technology Transformation Services (TTS) was obtained by 404 Media. In the leak, Thomas Shedd, former Tesla executive and newly-appointed director of the TTS, instructed a team of internal engineers that the agency would be taking an “AI-first” approach to the government. When engineers expressed their concerns—mainly because the first proposed action violated a privacy statute—Shedd replied, "We should still push forward and see what we can do."
The US General Services Administration is already developing a generative AI chatbot. Shedd has expressed a desire to deploy “AI coding agents” using a product called Cursor, which was developed by the Andreessen Horowitz-backed startup Anysphere. However, it is unclear whether this product is still being considered, given the complete lack of transparency surrounding the project. There is speculation that DOGE is now looking to use Microsoft's GitHub Copilot instead. The agency had initially hoped to use Google Gemini but later decided it did not have the required capabilities.1
AI is not wholly bad. There are, in fact, valuable ways that the government can use it.2 AI is already used by several agencies, including the Securities and Exchange Commission, the Centers for Medicare & Medicaid Services, the Internal Revenue Service, the Patent and Trademark Office, and Customs and Border Protection. Most AI models used by the government are built in-house and operate using supervised learning, meaning that both time and personnel are needed for them to operate successfully. Biden was particularly supportive of further integrating AI into the civil service and wrote a 36-page Executive Order, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which represented the beginnings of an AI framework for the whole of government. Biden’s Executive Order addressed a wide range of issues, “including safety, transparency and accountability, bias, privacy, and displacement of workers.” This people- and process-centred approach was tailored to address the potential risks and harms that come with a greater reliance on AI.
One of Donald Trump's first actions was to rescind Biden's Executive Order and sign his own, “Removing Barriers to American Leadership in Artificial Intelligence,” in which he prioritised deregulation and instituted a 180-day deadline to identify and eliminate all burdensome AI regulation from the government.3
Even without regulation, creating and perfecting AI models takes time. Given the rate at which the bureaucracy is being cut, it is unlikely that any functional system will be ready in time to pick up the slack. The likelihood of a catastrophic failure seems high. Elon knows this, which means his desire to replace the current civil service with technology is not based on efficiency. The purpose of this technology is to be an alternative to the existing human workers. The values and purpose of an institution live inside the workforce. If you remove the workers, their knowledge and values, you are removing the essence of the institution.
Without the previous workforce, the government will be rewritten by the technology developed by Elon and those associated with him. After that point, it will be fine for the AI to fail, because a new, ideologically aligned workforce will be in place. And the consistent failure of AI will create a reliance on the technological elites who created it. The only alternative to tech insiders will be tech outsiders.
Of course, there is a question about whether the government would even bother to build its own AI. Invoking emergency powers such as the Defense Production Act, a Korean War-era law modeled on World War II mobilization powers, the government could take control of a private AI company after the bureaucracy fails. If I were Sam Altman, I would keep this in mind, especially given Elon Musk's lawsuit against, and attempted purchase of, OpenAI.
Regardless of how the current administration intends to embed AI into the government, they want it done fast. It is the speed at which the government or DOGE will have to move that makes the situation so concerning. You can’t fast-track the training of AI models. Indeed, Elon waxes lyrical about the safety risks of AI, as do all AI executives to varying degrees. Executives at the top three AI companies, OpenAI, Anthropic, and Google DeepMind, are all on the record saying that AI could lead to human extinction. Leaders in the world of AI are pretty creative when describing their product; some refer to it as a bomb, some say it will likely end humanity, while others temper their criticism and simply talk about how AI could lead to rampant authoritarianism if it falls into the wrong hands. But what if this is part of the plan?
Ruthless Efficiency
What if inefficiency is a natural check against authoritarian impulses? When human beings are central to the operation of the government, things move at a pace where people have time to resist and push back against injustice. They may resist objectionable orders informally by slowing down implementation, drawing public attention through leaks or whistleblowing, or even raising their concerns through established legal channels such as the courts or oversight bodies.
This system ensures that no single person can wield absolute power without resistance from those dedicated to upholding the nation's legal and moral framework. They check with each other at the coffee pot or leave it for the day to sleep on the decision. That slowness, framed by this administration as inefficiency, is a check on power, a deeply human exercise of morality and judgment. And when the judgment seems wrong, it can be challenged by the person affected. And a very fallible human chain can be called into question. Every day is filled with thousands of opportunities to resist. Every bureaucrat is a check and balance unto themselves.
Just as the potential failure of AI is arguably an intended feature of those who wish to install it, so too is human fallibility in the current system. Human beings do not claim perfection. Knowing that we are fallible, we check our work, share our work, and create processes for reporting the work of others and challenging their decisions.
What does it mean for AI to be efficient? AI does what it is told to do, ruthlessly. It is entirely goal-oriented and will look to perform its tasks in the most optimal way. It performs its tasks because it has been told to perform its tasks. There is no question of why. Human beings, with their independent value systems and moral reasoning, often understand their work in a much larger context, such as upholding the values of America. The values of America can be given to AI as parameters, but this is not the same as having values. Let's say a human and an AI model are both asked to find savings in the SNAP food program. The AI will be more efficient at finding savings, because the human will likely prioritise keeping benefits accessible to people in need over cutting costs, given that that is why the program exists. For AI, values are nothing more than constraints for which a workaround must be found.
The same problem with AI’s goal optimization occurs in a more exaggerated way with the Constitution. Public servants swear an oath to uphold the Constitution. At best, the Constitution represents a set of rules that need to be worked around, like a puzzle to be solved, for an AI model. Rather than understanding the ideas in the Constitution as inviolable principles that ought to guide their work, AI sees the Constitution as a challenge. How can it facially comply with the document without sacrificing its goal? This problem has been observed by AI scientists and is referred to as specification gaming.4 Specification gaming is when an AI works to satisfy the literal objective but simultaneously fails to achieve the intended outcome. This is the monkey's paw or Midas touch problem of such models. Imagine, for example, a scenario where AI is asked to violate your rights without discernibly violating your rights.
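To make specification gaming concrete, here is a deliberately simple, entirely hypothetical sketch in Python (the rule, the data, and the optimizer are all invented for illustration, not drawn from any real system). The literal objective is to cut total payouts; the literal constraint says no enrolled recipient's benefit may be reduced. An optimizer that only honors the letter of the rule finds the workaround: shrink who counts as "enrolled."

```python
# Hypothetical illustration of specification gaming.
# Literal objective: minimize total payout.
# Literal constraint: no *enrolled* recipient's benefit may be cut.
# Intended outcome: people in need keep their benefits.

def literal_optimizer(recipients):
    """Satisfy the letter of the constraint, not its intent.

    No enrolled recipient's benefit is reduced. Instead, anyone with
    incomplete paperwork is dropped from enrollment entirely, which the
    literal rule does not forbid, so their payout silently goes to zero.
    """
    enrolled = [r for r in recipients if r["paperwork_complete"]]
    dropped = [r for r in recipients if not r["paperwork_complete"]]
    total = sum(r["benefit"] for r in enrolled)
    return enrolled, dropped, total

recipients = [
    {"name": "A", "benefit": 200, "paperwork_complete": True},
    {"name": "B", "benefit": 200, "paperwork_complete": False},  # one missing form
    {"name": "C", "benefit": 200, "paperwork_complete": True},
]

enrolled, dropped, total = literal_optimizer(recipients)
print(total)                          # 400: "savings" found
print([r["name"] for r in dropped])   # ['B']: the constraint was never violated
```

The objective is met and the constraint is never technically broken, yet the person the program exists to serve loses access. A human reviewer would call this failure; the optimizer calls it success.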
Most of the time, the AI industry's answer to this problem is to ensure that there is always a "human in the loop" to ensure that the systems are not making these mistakes. But if you ensure that all staff are ideologically aligned with and loyal to an executive who is also trying to pervert the Constitution to concentrate power, a human in the loop does very little to mitigate AI's obedience. The current personnel restructuring is designed to achieve that exact outcome.
Farewell, Accountability
The first few weeks of the current administration demonstrated firsthand the panic that can be caused by moving fast and breaking things. The administration's decision to “flood the zone” has overwhelmed the public and their representatives, rendering them incapable of keeping up with and responding to everything they are witnessing. One requirement of accountability that is often overlooked is time. You need time to investigate, contest, and rectify. Under the flurry of current executive action, I have found myself suddenly grateful for the bureaucrat's need to take a lunch break, to pick their child up from school, and to get stuck in a conversation about some interpersonal office drama.
Elon has spent the past few weeks bragging about how he and his team have been sleeping in government offices and working around the clock (with some DOGE employees even moving their wives and children into the buildings). For all the talk of bringing the start-up grind to the government, even tech bros are only human. They are limited by their need to sleep, even if they do sleep in the building. AI does not need to sleep. A singular AI system is capable of doing the work of hundreds of human employees, entertaining millions of different ways to handle specific tasks, and executing hundreds of functions in perfect synchronization toward its instructed goal. Before you even know what the system is doing, it will be done.