Could an AI Run Government Someday?


Artificial intelligence (AI) has recently shown its prowess by defeating humans at Go, a 3,000-year-old strategy board game and one of the most complex ever devised, demanding not just intellect but intuition. DeepMind's AlphaGo program, which prevailed over Go world champion Lee Sedol, used a two-pronged strategy: calculating the next move while simultaneously predicting the eventual winner. The AI constantly recalculated its mathematical odds of winning, which led to some interesting outcomes. Experts thought some of the program's moves looked silly at first, and only later did they make sense. In effect, the AI lulled its opponent into believing it was making foolish mistakes while it was actually improving its odds of winning. AlphaGo appeared to think like a human, even though it ran on mathematical models; at one stage, commentators even suggested it had gone beyond human capability. At its most basic level, AlphaGo is a decision-making machine, weighing a finite set of outcomes while keeping the end goal, winning the game, in sight.
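To make that two-pronged loop concrete, here is a toy sketch: a policy that proposes candidate moves, paired with a value estimate that scores the odds of winning, with the move chosen to maximize those odds. The `policy` and `value` functions below are invented stand-ins; AlphaGo's real versions are deep neural networks combined with Monte Carlo tree search, not the stubs shown here.

```python
import random

# Toy sketch of AlphaGo's core idea: pair a "policy" that proposes
# candidate moves with a "value" estimate of the win probability, and
# pick the move that maximizes the estimated odds of winning.
# Both functions below are illustrative stand-ins, not DeepMind's code.

def policy(state):
    """Propose a handful of plausible next moves for this state (stub)."""
    return [f"move-{i}" for i in range(3)]

def value(state, move):
    """Estimate the probability of winning after playing `move` (stub).
    A real value network is learned from millions of games."""
    return random.random()

def choose_move(state):
    # Keep the candidate with the best winning odds, even if it looks
    # "silly" to a human observer in the short term.
    candidates = policy(state)
    return max(candidates, key=lambda m: value(state, m))

print(choose_move("current-board-position"))
```

This is also why the program's moves could look like blunders: a move that maximizes long-run win probability need not look locally strong to a human.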

So, does it make sense for an AI to start making decisions for countries, governments, and humanity as a whole? Keep in mind that political decisions involve complex, interlinked factors with secondary and tertiary outcomes, some of which may be beyond what the human brain can comprehend or analyze. Government decision making is a complex task, and it is getting harder all the time with rapidly shifting geopolitical dynamics, terrorism, climate change, rising inequality, the pace of automation and its impact on jobs and society, and the possibility that we are entering an era of post-capitalism. With so much uncertainty and risk of destabilization, and almost no historical parallels for the current set of global issues, governments are likely to be reactive and short-sighted, making rash decisions without thinking through the countless intertwined future outcomes, or what it all means for the planet as a whole.

AI is already making strides in commercial markets such as finance, automotive, retail, and media, as highlighted in Tractica's report on Artificial Intelligence for Enterprise Applications. As AI becomes embedded in the wider private sector economy, I see no reason why AI, paired with massive-scale simulation systems, could not also help governments make decisions. These could be decisions within public sector departments such as healthcare and education agencies, aimed at making internal operations more efficient, or decisions within the government itself, where officials would have an "AI advisor" in addition to their retinue of human advisors. The first opportunity, using AI to bring greater efficiency to government operations, is already happening; the second, aiding high-level government decision making, is where the future becomes more interesting.

Seen through the eyes of a game theorist, the world can be considered a strategy game in which each player (or country) is trying to maximize its chances of winning and increase its odds of survival. At some level, this could be run like a SimCity simulation on a massive scale, working through a range of scenarios and analyzing the individual outcomes. A London-based startup called Improbable is building technology that enables such large-scale simulations using distributed computing. Improbable's technology is used today in games that adapt to gameplay, so that each player can experience a different unfolding of the same game. The company is also building systems that could simulate an entire city, taking into account residents, transport, water, infrastructure, public safety, and education, and run that simulation at scale. Improbable's founder, Herman Narula, has said the company wants to make The Matrix possible.
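To show the shape of such a simulation at its smallest scale, here is a minimal agent-based sketch. Everything in it, the `Resident` entity, the congestion rule, the metrics, is a made-up assumption for illustration; Improbable's actual platform distributes millions of such entities across many machines rather than running them in one process.

```python
from dataclasses import dataclass
import random

# Minimal agent-based sketch of a city simulation: entities with state,
# updated tick by tick, with city-level metrics aggregated from them.
# All dynamics here are invented for illustration.

@dataclass
class Resident:
    employed: bool
    health: float  # 0.0 (poor) to 1.0 (excellent)

def step(residents, transit_capacity):
    """Advance the simulation one tick."""
    congestion = len(residents) / transit_capacity
    for r in residents:
        # Hypothetical rule: congestion slowly erodes health.
        r.health = max(0.0, r.health - 0.001 * congestion)
        if random.random() < 0.01:
            r.employed = not r.employed  # random job churn

def city_metrics(residents):
    n = len(residents)
    return {
        "employment": sum(r.employed for r in residents) / n,
        "avg_health": sum(r.health for r in residents) / n,
    }

residents = [Resident(employed=random.random() < 0.9, health=1.0)
             for _ in range(10_000)]
for _ in range(100):  # run 100 ticks
    step(residents, transit_capacity=8_000)
print(city_metrics(residents))
```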

So what if Improbable started using AI-based decision-making techniques like AlphaGo's to simulate not just a city but a country, or the world, continuously calculating the odds of that country or the world surviving while ensuring that certain thresholds were met for social equality, poverty, hunger, disease, climate, and so forth? It sounds like the stuff of science fiction, but it is highly likely that within the next decade, AI and simulation systems will become powerful enough to make decisions for cities, or at least for some aspects of municipal operations. Improbable would be a great acquisition for a company like Google, IBM, or Microsoft looking to develop simulation-based AI solutions targeting everything from finance to healthcare to government. I personally see a lot of potential for massive-scale simulations and AI systems working together.
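One rough sketch of how that scenario scoring could work: run many simulated futures per candidate policy, estimate the odds of a good outcome, and disqualify any policy whose futures breach hard thresholds too often. The `simulate` stub, the thresholds, and the tolerance figure below are all hypothetical numbers chosen for illustration, not a real model.

```python
import random

# Hedged sketch of scenario scoring under hard constraints: estimate a
# policy's odds of a good outcome across many simulated futures, but
# reject any policy that breaches a threshold in too many of them.

THRESHOLDS = {"poverty_rate": 0.20, "gini": 0.50}  # hypothetical limits

def simulate(policy, seed):
    """Stand-in for a massive-scale simulation run; returns outcome metrics."""
    rng = random.Random(seed)
    return {
        "survived": rng.random() < policy["stability"],
        "poverty_rate": rng.uniform(0.05, 0.25),
        "gini": rng.uniform(0.30, 0.55),
    }

def score(policy, runs=1_000):
    outcomes = [simulate(policy, seed) for seed in range(runs)]
    # Disqualify the policy if any threshold is breached too often.
    for metric, limit in THRESHOLDS.items():
        breach_rate = sum(o[metric] > limit for o in outcomes) / runs
        if breach_rate > 0.30:  # tolerate breaches in at most 30% of runs
            return 0.0
    return sum(o["survived"] for o in outcomes) / runs  # estimated odds

policies = [{"name": "A", "stability": 0.7}, {"name": "B", "stability": 0.8}]
best = max(policies, key=score)
print(best["name"], score(best))
```

The constraint check comes before the odds estimate by design: a policy that maximizes survival while breaching the poverty or inequality thresholds should never win, no matter how good its headline number looks.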

In my view, AI has a role to play as a decision-making partner or advisor, not as the decision maker itself. The risks involved in letting AI take over governments are far too great, and nobody wants a Skynet future in which the AI ends up posing a threat to humans. However, using AI in a controlled environment to help governments make decisions isn't too far-fetched.

The question comes down to trust: how much can humans trust the AI to make decisions, keeping in mind that the AI could be fooling them, with the end outcome of AI taking over the government? How do you define the controlled environment? What safeguards can you put in place to ensure that the AI doesn't take over? How do you account for future rogue governments that could be fully AI driven? In my view, instead of dismissing the likelihood of superintelligent AI, these are the kinds of questions and debates we need to be having, so that we start thinking through the safeguards and measures that would let AI run in a controlled way that helps humanity rather than threatens it. Work is already being done by the Future of Life Institute, supported by Elon Musk, and by Nick Bostrom's Future of Humanity Institute at Oxford to lead efforts on ethical AI and think through the implications of superintelligent AI and possible safeguards.

In the wrong hands, AI can certainly do a lot of harm, but a lot of good can also come from the technology. If AI can help governments make smart decisions that benefit humanity in an increasingly complex and fragile world, I don't see why it shouldn't, provided safeguards are in place to prevent malicious intent.
