The New York Times obtained an email that Fei-Fei Li sent to her colleagues. In it she wrote: "Either way, we must avoid mentioning or implying artificial intelligence. Weaponized AI may not be the most sensitive topic in AI, but it is certainly one of them. For media doing everything they can to attack Google, this would hand them a great opportunity."
Li worried that the military contract could damage Google, and her concern proved prescient. According to documents and emails reviewed by the New York Times, and interviews with a dozen current and former Google employees, the relationship between Google and the Department of Defense has triggered an existential crisis at the company, set off by Google winning part of the Project Maven contract, a program that uses artificial intelligence to interpret video imagery and is intended to improve the accuracy of drone strikes.
The contract has divided Google's employees, sparked heated staff meetings and internal exchanges, and led some employees to resign. The dispute has deeply troubled some Google executives, including Li, as they try to bridge the gap between scientists who are ethically opposed to the contract and the salespeople who covet it.
The advertising model behind Google's phenomenal growth has drawn criticism that it violates the privacy of internet users and props up suspicious websites, including those peddling fake news. Now Google's path to future growth through cloud computing services has split the company into camps over its stance on weapons work. Continuing to take on large defense contracts may drive away sought-after AI specialists, while rejecting such work would cost it potentially enormous business.
Both supporters and opponents believe the internal debate over Project Maven is really about the door it opens to much larger defense contracts. The controversy prompted about 4,000 employees to sign a petition asking the company to "release a clear policy stating that Google and its contractors will never develop warfare technology."
DeepMind, an artificial-intelligence pioneer based in London, was acquired by Google in 2014. DeepMind executives have said they are completely opposed to military and surveillance work, and employees at the laboratory protested the contract. The acquisition agreement between the two companies stipulated that DeepMind technology would never be used for military or surveillance purposes.
About a dozen Google employees have resigned over the contract, which was first reported by Gizmodo. One departing engineer asked that a conference room be renamed after Clara Immerwahr, the German chemist who killed herself in 1915 in protest against the use of science in warfare. And according to a company email obtained by the New York Times, "Do the right thing" stickers have appeared in the company's New York office.
Employees opposed to the Pentagon contract shared emails and other internal documents showing that at least some Google executives had anticipated the objections and the negative coverage. But other employees pointed out that Google's competitors, such as Microsoft and Amazon, are eagerly pursuing lucrative Pentagon business, and concluded that such projects are critical to the company's growth and nothing to be ashamed of.
Many other technology companies also pursue military business without consulting their employees. But Google's founding philosophy and self-image set it apart.
"We have a creed, 'Don't be evil,' which means doing the best we know how for our users, our customers and everyone," Larry Page said in 2004 to Peter Jennings, ABC's famous anchorman, when Page and Google co-founder Sergey Brin were named ABC News's persons of the year.
What charges Google's internal conflict is that Maven's work could be used in a lethal drone-targeting system. The debate has become especially urgent because experts predict that artificial intelligence will play an increasingly central role in warfare, and artificial intelligence is one of Google's great strengths.
Last August, U.S. Defense Secretary Jim Mattis stopped by Amazon before visiting Google to call for closer cooperation with technology companies. His visit to Google was widely covered in the media.
"I have seen many of the most significant advances made by private companies on the West Coast," he said.
Li's comments were part of an internal email exchange initiated by Scott Frohman, Google's head of defense and intelligence sales. Under the subject line "Communications/PR request - URGENT," Frohman noted that the Maven contract was about to be awarded and asked for guidance on how to present this urgent issue to the public.
Many colleagues joined the discussion, but they generally deferred to Li. Born in China, Li was 16 and spoke no English when she immigrated to New Jersey with her parents; she has since climbed to the top of the technology world.
The final decision would be made by her boss, Diane Greene, chief executive of Google Cloud, Li said in an email. But Li argued that the company should promote its share of the Maven contract as "a major win for GCP" (Google Cloud Platform).
She also advised being "extremely cautious" in presenting the project, and suggested invoking the "humanistic artificial intelligence" she had been discussing publicly. She was planning to take up that theme in a New York Times op-ed in March.
In the email, she wrote: "I don't know what would happen if the media began reporting that Google is secretly building artificial-intelligence weapons, or artificial-intelligence technology that helps the defense industry build weapons."
Asked about the September email, Li said in a statement: "I believe in humanistic artificial intelligence that benefits people in positive and benevolent ways. Taking part in any project that I believe would weaponize artificial intelligence would run deeply counter to my personal principles."
In the end, Google never announced Project Maven to the public. The public learned of Google's work as a defense contractor only after dissenting employees began protesting on the company's robust internal communications platforms.
Google has promised employees that it will develop a set of principles to guide its decisions in the ethical minefield of defense and intelligence contracting. Google told the New York Times on Tuesday that the artificial-intelligence principles being drafted would rule out the use of artificial intelligence in weaponry. How that would be enforced, however, remains unclear.
Sundar Pichai, Google's chief executive, said at a company meeting on Thursday that the company wanted to develop guidelines that would "stand the test of time." Employees said they expect the principles to be released within Google in the coming weeks.
In the polarized dispute over Google and the military, some nuances may have been lost. Better analysis of drone footage could sharpen operators' ability to identify terrorists, which in turn could reduce civilian casualties. And even if Google withdraws, the U.S. Department of Defense is hardly going to abandon research on artificial intelligence; military experts say that China and other advanced countries are already investing heavily in military AI.
However, some highly skilled technologists chose Google precisely because of its embrace of benevolent, selfless goals. They were shocked that their employer might end up helping to make killing more efficient.
Google's corporate message boards and internal social platforms reflect the company's distinctive culture. These platforms encourage employees to critique everything boldly, from the food in Google's cafeterias to the company's diversity initiatives. But longtime employees said that even in this freewheeling workplace, the turmoil Project Maven brought to Google was more serious than anything in recent memory.
After news of the deal leaked internally, Greene addressed it at the weekly company-wide meeting on a Friday. According to two people familiar with the meeting, Greene explained that the system was not designed to take human lives and that the deal was relatively small, worth only $9 million.
That did not quell the anger. According to an invitation, Google decided to hold a discussion on April 11 presenting a "diversity of viewpoints," featuring Greene; Meredith Whittaker, a Google AI researcher; and Vint Cerf, a Google vice president. Whittaker was a leader of the anti-Maven campaign, while Cerf, for his pioneering technical work at the Defense Department, is considered one of the fathers of the internet.
Because so many people wanted to take part, the debate was held three times in a single day, and Google employees around the world watched on video.
According to employees who watched the debates, Greene insisted that Project Maven was not using artificial intelligence for offensive purposes, while Whittaker argued that it was difficult to draw clear lines on how the technology would be used.
On Thursday, Google co-founder Brin answered a question about the Maven work at the company-wide meeting. According to two Google employees, Brin said he understood the controversy and had discussed it extensively with Page and Pichai. But he said he believed it would be better for world peace if the world's militaries worked closely with international organizations like Google rather than only with nationalistic defense contractors.
Google and its parent company, Alphabet, employ many of the world's top artificial-intelligence researchers. Some work at the Google Brain artificial-intelligence laboratory in Mountain View, California. Others work in different groups, including the cloud computing business overseen by Greene, who sits on Alphabet's board.
Many of these researchers arrived recently from academia, and some retain professorships. One is the British-born Geoffrey Hinton, who helps run Google's Brain lab in Toronto and has said publicly that he will not do work for the U.S. military. In the late 1980s, he left the United States for Canada, partly because he was reluctant to accept funding from the Defense Department.
Jeff Dean, one of Google's longest-serving and most respected employees, now oversees all artificial-intelligence work at the company. At a developer conference in May, he said he had signed a letter opposing the use of so-called machine learning for autonomous weapons, which identify targets and fire without a human pulling the trigger.
DeepMind, the London-based artificial-intelligence laboratory, is widely considered the world's most important concentration of AI talent. Though it is now an independent Alphabet subsidiary, the line between it and Google is blurry.
DeepMind's founders have long warned of the dangers of artificial-intelligence systems. According to a person familiar with the matter, at least one of the lab's founders, Mustafa Suleyman, has held policy discussions about Project Maven with Google leaders, including Pichai.
Clearly, any chance that Google could quietly take on defense work without attracting public attention has been shattered. Li's hope of keeping artificial intelligence out of the debate also proved unrealistic.
Aileen Black, a senior executive in Google's Washington office, warned Li in an exchange last September: "We can steer the conversation toward Google Cloud, but winning the DOD program is a badge of honor in the AI field. I think we need to think this through before it gets out."
Translation: Panda Translation Agency Liu Pan Qian Gongyi