Technology may be outstripping ethical or policy thinking
Thirty-one Chinese teenagers reported on October 31 to the Beijing Institute of Technology, one of the country’s premier military research establishments. Selected from more than 5,000 applicants, they are the recruits Chinese authorities hope will design a new generation of artificial intelligence weapons systems, ranging from microscopic robots and computer worms to submarines, drones, and tanks.
The program is a potent reminder of what could be the defining arms race of the century, as greater computing power and self-learning programs create new avenues for war and statecraft.
It is an area in which technology may now be outstripping strategic, ethical, and policy thinking -- but also where the battle for raw human talent may be just as important as getting the computer hardware, software, and programming right.
Consultancy PwC estimates that by 2030, AI products and systems will contribute up to $15.7 trillion to the global economy, with China and the US likely the two leading nations.
But it is the potential military consequences that have governments most worried, fearful of falling behind -- but also nervous that untested technology could bring new dangers.
In the US, Pentagon chiefs have asked the Defense Innovation Board -- a collection of senior Silicon Valley figures who provide the US military with tech advice -- to come up with a set of ethical principles for the use of artificial intelligence in war. Last month, France and Canada announced they were setting up an international panel to discuss broadly similar questions.
So far, Western states have stuck to the belief that decisions of life and death in conflict should always be made by humans, with computers and algorithms simply supporting those decisions. Other nations -- particularly Russia and China -- are flirting with a different path.
Russia -- which last year announced it was doubling AI investment -- said this month it would publish a new AI national strategy roadmap by mid-2019.
Russian officials say they see AI as a key to dominating cyberspace and information operations, with suspected Russian online “troll farms” thought to already be using automated social media feeds to push disinformation.
Beijing is seen as even further ahead in developing AI, to the extent some experts believe it may already be beating the US.
Experts say achieving mastery in AI comes down to having sufficient computing power, enough data to learn from, and the human talent to make those systems work. As the world’s most powerful autocratic states, Russia and China have that capability and the intent to use AI both to maintain government dominance at home and to defeat enemies beyond their borders.
Already, Beijing is using mass automated surveillance -- including facial recognition software -- to crack down on dissent, particularly in its northwest, home to the ethnic Uighur Muslim minority. Along with Russia, China has far fewer scruples and legal constraints than Western states when it comes to monitoring its citizens’ communications. Such systems will likely become more powerful as technology improves.
Traditionally, Western democracies -- particularly America -- have proved more adept than dictatorships at tapping new technology and innovation. On AI, however, Washington’s efforts to build links between Silicon Valley and the military have been far from trouble-free.
US Air Force leaders say its highly classified future long-range strike aircraft, designed to replace the B-2 stealth bomber, will be able to operate both with and without crew. Western militaries are also plowing growing resources into unmanned trucks and other supply vehicles, hoping to perform many more “dirty, dull, and dangerous” battlefield tasks without risking human personnel.
These dynamics will become much more complex with the growing use of drone swarms, in which multiple unmanned vehicles coordinate their own actions. But when it comes to killing, Defense Department policy requires that a human must remain “in the loop.”
That may become ever harder to manage, however -- particularly if an enemy’s automated systems are making such judgments at faster-than-human speed.
In recruiting the 31 teenagers for the Beijing Institute of Technology program, those managing selection reportedly looked for “willingness to fight.” With technology this untested -- and so potentially destructive -- that may prove a very dangerous trait to prioritize.
Peter Apps is Reuters global affairs columnist, writing on international affairs, globalization, conflict and other issues. This article was previously published by Reuters.