According to foreign media reports on December 26, the US military is scrambling to stake out a place in artificial intelligence (AI). Defense Department officials believe AI will help the United States win future wars.
But Defense Department internal documents and interviews with senior officials make clear that the Pentagon's AI efforts have been thrown into difficulty by a technology giant's withdrawal. To escape this predicament, the Department of Defense is developing a new strategy to seize the initiative in a new battle: the competition for Silicon Valley's AI talent.
The battle began with an unexpected setback. In June of this year, Google announced it would withdraw from the Department of Defense's Project Maven, which used the technology giant's AI software. The project aims to build algorithms that help intelligence analysts identify military targets in video footage.
Two months earlier, thousands of Google employees had signed a petition calling on the company to stop working with the Department of Defense on the project.
According to five sources familiar with the internal discussions of Project Maven, Google's withdrawal brought disappointment and frustration, even anger, inside the Department of Defense. Project Maven is the US military's first major effort to use AI in war.
On June 28, the Department of Defense circulated an internal memo to about 50 officials that said: “We were caught unprepared in a debate over the strategic landscape.” The memo shows that Google's withdrawal left the Pentagon flat-footed, and that the department now risks alienating the very experts who are vital to the military's AI development plans.
The memo warned: “If we do not win the hearts and minds of key supporters, we cannot compete effectively with our adversaries.”
In fact, Project Maven is far from complete; its 2017 spending was only about $70 million, a small fraction of the Pentagon's massive $600 billion budget. But Google's announcement shows that the Department of Defense is still struggling with larger public-relations and scientific challenges.
So far, the Pentagon's response has been to try to establish a new public image for its AI work, and to ask an advisory board of technology-industry executives to review the department's AI policy.
The reason for the Pentagon's anxiety is obvious: it wants a smooth path to integrating AI into future weapons and equipment, a desire backed by billions of dollars in commitments to ensure those systems are trusted and accepted by military commanders, on top of billions more in research and development spending on the technology itself.
The exact role of AI in war remains unclear. Many weapons that incorporate AI technology do not yet delegate decisions to machine algorithms, but they have the potential to do so. As the Department of Defense stated in an August strategy document: “Technology supporting unmanned systems will enable the development and deployment of autonomous systems that can independently select targets and launch attacks with deadly weapons.”
US defense officials say developing AI technology is different from creating other military technologies. While the military can readily buy the most advanced fighters and bombs from large defense contractors, the core of innovation in AI and machine learning lies with Silicon Valley's non-defense technology giants.
Officials worry that without their help, the US military could lose an escalating global arms race in which AI will play an increasingly important role.
Chris Lynch, a former technology entrepreneur who heads the Pentagon's Defense Digital Service (DDS), said in an interview: “If you decide not to participate in Project Maven, you have taken yourself out of the discussion of whether AI or machine learning should be used in military operations.” Lynch said AI is already being used in war, so the question now is which American technologists will design it.
Lynch has hired a number of technical experts and spent years studying the problems facing the Department of Defense. He said AI technology is too important to abandon: even if the department has to rely on less expert staff, it will keep pursuing research and development in the field.
However, Lynch added: “Without help from the industry's best, we will hire people who are far less capable to build less capable products that may put young men and women in danger, and there will be mistakes.”
Google is unlikely to change course soon. In June of this year, the company announced it would not seek to renew the Maven contract, and less than a week later it published AI guiding principles stating it would not use AI to “help develop weapons or other products that directly harm humans.”
Since then, many Defense Department officials have complained that Google is unpatriotic and accused the company of continuing to cooperate with the Chinese government, the United States' biggest rival in AI technology.
According to people familiar with the matter, Project Maven aims to simplify the work of intelligence analysts by tagging object types in video footage from drones and other platforms, helping analysts gather information and narrow their focus to potential targets. But the algorithms do not choose targets or order strikes, the long-standing fear of those worried about combining advanced computing with new forms of lethal violence.
Despite this, many of Google’s people look at the plan in a worrying way.
A former Google employee familiar with the internal discussions said: “They immediately thought of drones, and then of machine learning and automatic target recognition. I think the AI developed by this project will soon be extended, allowing them to carry out targeted killings and wage targeted wars.”
Google is just one of the tech giants the Department of Defense is courting to integrate AI technology into modern warfare; Microsoft and Amazon are also involved in related efforts. According to current and former Defense Department officials, after Google announced its withdrawal in June, more than a dozen large defense companies approached defense officials and offered to take over the work.
But Silicon Valley activists say the industry cannot easily dismiss the moral doubts of its technologists. The former Google employee said: “There is a clear divide between those who answer to shareholders and want to land a multimillion-dollar Defense Department contract, and the ordinary employees who must actually build these things and are unwilling to be morally complicit.”
To bridge this gap and ease the strong opposition of AI engineers, the Department of Defense has so far launched two initiatives.
The first, officially launched at the end of June, creates a Joint AI Center to oversee and manage all military AI efforts, with an initial focus on public-relations-friendly humanitarian missions. It will be headed by Jack Shanahan, whose previous main task was managing Project Maven.
This is a politically savvy decision, and its first major step is finding ways to use AI to help the military organize search and rescue after natural disasters.
Brendan McCord, chief architect of the Department of Defense's AI strategy, said at a technical conference in October: “Our goal is to save lives. The fundamental mission of our military is to keep the peace, to deter war and protect our country, to improve global stability, and ultimately to safeguard a set of Enlightenment values.”
The second initiative asks an advisory group of technical experts, the Defense Innovation Board, to conduct a new review of AI ethics. Its members include former Google CEO Eric Schmidt and LinkedIn co-founder Reid Hoffman.
The review aims to develop guidelines for the military's use of AI. The Defense Innovation Board is currently managed by Joshua Marcuse, a former innovation adviser to the Secretary of Defense who now serves as the board's executive director.
The advisory group will spend roughly nine months holding public meetings with AI experts, while an internal Pentagon team considers the issue. The board will then advise Defense Secretary James Mattis on whether AI should or should not be integrated into weapons programs.
Marcuse said in an interview: “This has to be a real review, one willing to impose some restrictions on what we will do, what we will not do, and where the boundaries lie.”
To ensure a fair debate, Marcuse said, the Defense Innovation Board is seeking out people who are critical of the military's role in the AI field.
He said: “There are all kinds of concerns about how the Department of Defense will apply these technologies. I think those concerns are reasonable, because in certain circumstances we have the legal authority to violate citizens' privacy, the legal authority to use violence, and the legal authority to wage war.”
Defense Department officials say resolving these concerns is crucial because of how the United States manages AI talent: the department must attract outside experts who are free to decline.
Marcuse said: “These experts must choose to work with us. We need to offer them a meaningful, verifiable commitment that gives them a genuine opportunity to work with us and convinces them that we are the good guys.”
Marcuse said that while he is willing to discuss potential future restrictions on the use of AI, he does not believe the Defense Innovation Board will attempt to change the Pentagon's existing policy on AI-enabled autonomous weapons, which the Obama administration introduced in 2012.
In May 2017, the Trump administration made minor technical revisions to the policy but did not bar the military from using AI in any of its weapon systems. Several Defense Department officials said the policy requires commanders to exercise a “degree of human judgment” over any weapon system that incorporates AI.
However, the policy also requires that any weapon system programmed by computer to initiate lethal action undergo a special review by three senior Defense Department officials. So far, no such review has been conducted.
According to a former defense official familiar with the details, at the end of 2016, in the final days of the Obama administration, the Department of Defense conducted a new review of the 2012 policy and concluded, in a confidential report, that major changes were needed.
The Trump administration, however, has discussed the matter internally and has worked to make clear to military weapons engineers that the policy does not prohibit using AI in weapon systems. The administration is concerned that engineers have been reluctant to integrate AI into their designs.
The Silicon Valley backlash over Project Maven has, at least temporarily, halted that discussion, prompting senior defense officials to first seek support from the Defense Innovation Board.
But either way, the Pentagon intends to integrate more AI into its weapons. Marcuse said: “When a new technology is revolutionizing the character of war, we will not stand by. That would be unfair to the American people, to the soldiers we send into dangerous battlefields, and to our allies.”