
Abandoned by Google on its AI project, the US Department of Defense frets over recruiting Silicon Valley AI talent

via: 网易科技 (NetEase Tech)     time: 2019/1/14 19:33:02

In June 2018, Google announced its withdrawal from the controversial Project Maven, the U.S. Department of Defense's first large-scale effort to explore the use of AI in warfare. The project used Google's AI software to improve the efficiency of video analysis, helping intelligence analysts identify military targets in video footage. In October, thousands of Google employees signed a petition calling on the company to end its cooperation with the U.S. military on the project.

According to five sources familiar with the internal discussions about Project Maven, Google's withdrawal left the U.S. Department of Defense disappointed, frustrated, and even angry.

On June 28, an internal memorandum circulated to about 50 U.S. Department of Defense officials stated: "We have inadvertently become involved in a debate over strategic narratives." The memo noted that the Department was caught off guard by Google's withdrawal and now faces the risk of increasingly alienating AI experts, who are critical to the military's AI development plans. It warned: "If we do not win the support of key players, we will not be able to compete effectively with our adversaries."

Project Maven cost only about $70 million in 2017, a tiny fraction of the Department of Defense's massive budget of roughly $600 billion that year. But Google's withdrawal shows that the Department still faces larger public-relations and scientific challenges. So far, its response has been to try to craft a new public image for its AI research and development work, and to ask an advisory panel of senior technology-company executives to review the Department's AI policy.

The reason for the Department of Defense's anxiety is obvious: its ambition to integrate AI into future weapons is backed by pledges worth billions of dollars and billions more in technology research and development funding.

It is not yet clear what role AI will ultimately play in war. Many AI-enabled weapons will not involve algorithmic decision-making, though the military is working in that direction. A U.S. Department of Defense strategy paper from last August said: "Advances in the technology supporting unmanned systems will make it possible to develop and deploy autonomous systems that can independently select targets and attack them with lethal force."

Officials at the U.S. Department of Defense say that developing AI is different from developing other military technologies. While the military can readily obtain cutting-edge fighter jets and bombs from large defense contractors, the core of AI and machine-learning innovation resides with Silicon Valley's non-defense technology giants. Officials fear that without their help, the U.S. military could lose an escalating global arms race in which AI will play an increasingly important role.

Lynch said AI technology is so important that the agency will continue to pursue it even if it has to rely on less seasoned experts. But Lynch added: "Without the help of the best people in the industry, we will end up hiring people who are not the most capable and initially building products that may fall far short of what we expect."

Google is unlikely to change its stance soon. Less than a week after announcing that it would not seek to renew the Project Maven contract, Google issued new AI guidelines explicitly stating that it would not use AI "to help develop weapons or other products that directly harm humans." Since then, many U.S. defense officials have questioned Google's patriotism, accusing the company of continuing to seek cooperation with the Chinese government even as China remains America's biggest competitor in AI technology.

According to people familiar with the matter, Project Maven aims to streamline the work of intelligence analysts by annotating the types of targets that appear in video footage from drones and other platforms, helping analysts gather information and narrow their focus to potential targets. The algorithms do not select targets, nor do they issue attack orders. Even so, such algorithms have long unsettled those who worry about the combination of advanced computing technology and new forms of lethal violence.

Many people at Google still view the project with alarm. A former Google employee familiar with the internal discussions said: "They immediately thought of drones, and then of machine learning and automatic target recognition. I think the AI developed for this project will quickly be upgraded to enable targeted killings and targeted wars."

Google is just one of the technology giants the U.S. Department of Defense has sought to recruit to bring AI into modern warfare; other targets include Microsoft and Amazon. According to current and former Defense Department officials, after Google announced its withdrawal in June 2018, more than a dozen large defense companies approached Department officials and offered to take over the work. But Silicon Valley activists say the military cannot easily dismiss the ethical concerns of scientists and engineers.

The former Google employee said: "There is a rift between the executives who answer to shareholders and want multimillion-dollar contracts from the Department of Defense, and the ordinary employees who have to build these things but find them morally objectionable." To bridge this gap and ease the fierce opposition of AI engineers, the U.S. Department of Defense has so far taken two measures.


To ensure a fair debate, Marcuse said, the Defense Innovation Board is seeking out critics of the military's role in AI. "They have all kinds of concerns about validity and legitimacy and about how the Department of Defense will apply these technologies, because in some cases we have the legal right to violate people's privacy, the legal authority to carry out violence, and the legitimate power to wage war," he said.

Officials at the U.S. Department of Defense say addressing these concerns is crucial because, unlike some other countries, the United States cannot compel its AI talent to work for the military. "These people have to choose to work with us, so we need to give them meaningful, verifiable assurances that they have a real opportunity in working with us, that they can believe they are good people, and that the AI technology will be used for good," Marcuse said.

Although Marcuse is willing to discuss potential future restrictions on AI, he believes the Defense Innovation Board will not attempt to change the U.S. Department of Defense's policy on autonomous AI weapons. The Obama administration enacted that policy in 2012; the Trump administration made minor technical changes to it in May 2017, but did not bar the military from using AI in any weapon system.

Several U.S. Department of Defense officials said the policy requires commanders to exercise "an appropriate degree of human judgment" over any weapon system that incorporates AI, though the policy does not elaborate on what that means. It does, however, require that any computerized weapon system undergo special review by three senior Department of Defense officials before it is programmed to carry out lethal operations. So far, no such special review has been conducted.

According to a former Defense Department official familiar with the details, at the end of 2016, just before the Obama administration left office, the U.S. Department of Defense reexamined its 2012 policy and concluded in a confidential report that no major reforms were needed. "Nothing was blocked, and nobody wanted to update the directive," the former official said.

Nevertheless, the Trump administration has discussed internally revising the policy to make clearer to military weapons engineers that it does not prohibit building autonomy into weapon systems, out of concern that those engineers are reluctant to apply AI in their designs. The controversy over Project Maven in Silicon Valley has at least temporarily stalled such discussions, prompting U.S. Department of Defense leaders to first try to win support through the Defense Innovation Board.

Even so, the U.S. Department of Defense intends to build more AI into its weapons. "We will not turn a blind eye to a new technology that could radically change the nature of war; that would be unfair to the American people, to our soldiers on the battlefield, and to the allies who depend on us," Marcuse said.
