"Line"websiteRecently, the magazine details the journey of Li Feifei, a great bull in the field of artificial intelligence, to make artificial intelligence better benefit mankind. As we all know, AI has a very difficult problem: its creator's prejudice is being rigidly coded into the future of AI, which brings many hidden dangers. Li Feifei intends to repair this problem.
"Action must be taken to make AI benefit mankind."
Last June, Li Feifei attended a congressional hearing on the theme "Artificial Intelligence: With Great Power Comes Great Responsibility." She was the only woman on the panel, and the only speaker who had made a foundational breakthrough in the field. As the creator of ImageNet, a database that helps computers learn to recognize images, she is one of the few scientists behind the remarkable progress artificial intelligence has made in recent years.
At the time, Li Feifei was the chief AI scientist at Google Cloud, on leave from her post as director of Stanford University's AI Laboratory. But she appeared before the committee in another capacity: as co-founder of a nonprofit focused on recruiting women and people of color to build artificial intelligence.
It was not surprising that congressmen sought her professional advice that day. What was surprising was what she had to say: that the field she loves so deeply poses serious threats.
Once invented, a technology can have an enormous impact in a short time. With tools like ImageNet, computers can learn to handle specific tasks far more efficiently than humans. As the technology grows more sophisticated, it is being used to filter, sort, and analyze data, and to make decisions on weighty global and social issues. Although these tools have existed in some form for more than 60 years, only in the past decade have we begun using them for tasks that can change the trajectory of a human life: today, AI helps determine which treatment a patient receives, who qualifies for life insurance, how long a sentence an offender serves, and which job applicants get interviews.
Of course, such power also brings danger. Amazon had to abandon AI recruiting software that had learned to filter out women's resumes. Google's 2015 scandal is still fresh: its photo-recognition software mistakenly labeled Black people as gorillas. Microsoft's AI-driven chatbot posted racist tweets. But these problems could be explained and corrected. Li Feifei believes that in the near future we will reach a point where corrective measures are no longer possible, because AI is being adopted so quickly and its impact is so far-reaching.
Li Feifei testified at last June's hearing because she firmly believes her field needs recalibration. Prominent figures in the technology industry, most of them men, have warned that an AI-driven future poses an existential threat to humanity. But Li Feifei thinks those concerns are seriously overblown. She focuses on a less dramatic but more important question: how AI will change the way people live and work. It will transform the human experience, though not necessarily for the better. "We have time," Li Feifei said, "but we must act now." She believes that if we fundamentally change both how AI is designed and who designs it, the technology will become a revolutionary force for human good. Otherwise, it will leave humanity far less humane.
At the hearing, Li Feifei was the last to speak. "There is nothing artificial about AI," she said. "It is inspired by people, created by people, and, most importantly, it impacts people. It is a powerful tool that we are only just beginning to understand, and we have a great responsibility to understand it."
JackRabbot 1, a mobile robot built on a Segway platform at Stanford University's Artificial Intelligence Laboratory
Li Feifei grew up in Chengdu. She was a lonely, bright child who loved reading. Her family always did things a little differently: they were not a pet-keeping household, yet her father bought her a puppy; her mother, born into an intellectual family, encouraged her to read Jane Eyre. When Li Feifei was 12, her father moved to Parsippany, New Jersey, and she and her mother did not see him for years. The family reunited when she was 16. Within two years, Li Feifei had learned enough English to serve as translator for her parents, who spoke only the most basic English.
She also did well in school. Bob Sabella, her high school math teacher, helped her learn and adjust to her new life in the United States. Parsippany High School had no advanced calculus course, so Sabella improvised one and taught Li Feifei during his lunch breaks. Sabella and his wife treated her like family, taking her to Disneyland and lending her parents $20,000 to open a dry-cleaning shop. In 1995 she won a scholarship to Princeton University, from which she went home almost every weekend to help out at the dry cleaner.
In college, Li Feifei's interests ranged widely. She majored in physics but also studied computer science and engineering. In 2000 she began her PhD at the California Institute of Technology in Pasadena, working at the intersection of neuroscience and computer science.
That ability to connect seemingly unrelated fields led Li Feifei to conceive of ImageNet. Her peers in computer vision were building models to help computers perceive and decode images, but those models were narrow in scope: a researcher might write one algorithm to identify dogs and a separate one to identify cats. Li Feifei began to suspect that the problem was not the models but the data. If a child learns to recognize objects by seeing countless objects and scenes in early life, she reasoned, a computer might learn in a similar way: by analyzing a huge variety of images and the relationships between them. The insight was a major one. "It was a way to organize the whole visual concept of the world," she said.
She struggled, however, to convince colleagues that it was rational to attempt the gigantic task of labeling a picture of every possible object in one huge database. What's more, Li Feifei believed that for the idea to work, the labels needed to range from the general, such as "mammal," to the highly specific, such as "star-nosed mole." When Li Feifei returned to Princeton University as an assistant professor in 2007, she talked up her idea for ImageNet but failed to recruit colleagues to help. Finally, a professor who specialized in computer architecture agreed to collaborate with her.
Her next challenge was getting the giant project done. That meant many people spending many hours labeling pictures in a consistent way. Li Feifei tried paying Princeton students $10 an hour to help, but progress was slow. Then a student asked whether she had heard of Amazon Mechanical Turk, Amazon's crowdsourcing platform, where she could quickly recruit a large workforce at a fraction of the cost. But expanding from a handful of Princeton students to thousands of invisible workers brought its own challenges. Li Feifei had to account for the workers' likely biases. "As online workers, is their goal to make money in the easiest way possible?" she said. "If you ask them to pick out pandas from 100 pictures, what stops them from just clicking everything?" So she embedded and tracked control images, such as pictures of golden retrievers that had already been correctly labeled as dogs. If workers on Amazon Mechanical Turk labeled those images correctly, it meant they were working honestly.
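The honesty check described here, embedding control images with known labels among the real work, can be sketched in a few lines of Python. Everything below (function names, the 90% threshold, and the sample data) is hypothetical; this illustrates the idea, not ImageNet's actual pipeline.

```python
# Gold-standard quality control for crowdsourced labels: a minimal sketch.
# All names, thresholds, and data below are hypothetical.

def worker_accuracy(labels, gold):
    """Fraction of embedded control images a worker labeled correctly.

    labels: dict mapping image_id -> label the worker assigned
    gold:   dict mapping control image_id -> known-correct label
    """
    checked = [img for img in gold if img in labels]
    if not checked:
        return 0.0
    correct = sum(labels[img] == gold[img] for img in checked)
    return correct / len(checked)

def is_honest(labels, gold, threshold=0.9):
    """Accept a worker's batch only if they pass the hidden controls."""
    return worker_accuracy(labels, gold) >= threshold

# Hypothetical controls: golden-retriever photos already known to be dogs.
gold = {"img_001": "dog", "img_007": "dog", "img_042": "panda"}
good_worker = {"img_001": "dog", "img_007": "dog",
               "img_042": "panda", "img_100": "cat"}
lazy_worker = {"img_001": "panda", "img_007": "panda",
               "img_042": "panda", "img_100": "panda"}

print(is_honest(good_worker, gold))  # True
print(is_honest(lazy_worker, gold))  # False
```

The control images act like test questions hidden in the assignment: a worker who fails them reveals, at almost no cost to the project, that the rest of their labels cannot be trusted.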
By 2009, Li Feifei's team felt that their enormous data set of 3.2 million pictures was comprehensive enough to use, so they published the database along with a paper describing it. (The database later grew to 15 million pictures.) At first, the project attracted little attention. Then the team had an idea: they approached the organizers of a computer vision competition being held in Europe the following year and asked that contestants be allowed to use the ImageNet database to train their algorithms. The contest became the ImageNet Large Scale Visual Recognition Challenge.
Meanwhile, Li Feifei joined Stanford as an assistant professor. She then married Silvio Savarese, a robotics expert. But he worked at the University of Michigan, and the distance was hard; Savarese finally joined Stanford's faculty in 2013.
In 2012, University of Toronto researcher Geoffrey Hinton entered the ImageNet competition, using the database to train a type of AI known as a deep neural network. It proved far more accurate than anything that had come before, and he won the contest. Hinton's ImageNet-powered neural network changed everything. By 2017, the last year of the competition, the error rate for computers recognizing objects in images had fallen from 15 percent in 2012 to less than 3 percent. In some respects, at least, computers had become better than humans at recognizing images.
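The error rates quoted for the ImageNet challenge are conventionally "top-5" errors: a prediction counts as correct if the true label appears anywhere among the model's five best guesses. A minimal sketch of the metric, with made-up labels:

```python
def top5_error(predictions, truths):
    """Fraction of images whose true label is missing from the top 5 guesses.

    predictions: list of guess lists, best guess first
    truths:      list of true labels, one per image
    """
    misses = sum(t not in p[:5] for p, t in zip(predictions, truths))
    return misses / len(truths)

# Hypothetical model output: each inner list is one image's five best guesses.
preds = [
    ["dog", "wolf", "fox", "cat", "bear"],   # true label "cat" is in the top 5
    ["car", "truck", "bus", "van", "bike"],  # true label "train" is missed
]
truths = ["cat", "train"]
print(top5_error(preds, truths))  # 0.5
```

The top-5 convention reflects that many ImageNet categories are near-synonyms, so a model is not penalized for ranking "wolf" just above "husky."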
ImageNet powered the great leap forward in deep learning that underpins recent advances in self-driving cars, facial recognition, and object recognition in phone cameras.
Shortly after Hinton's win, while Li Feifei was on maternity leave, she began to wonder why so few of her peers were women. She felt the problem acutely; she knew the field's severe gender imbalance would gradually cause trouble. Most of the scientists building AI algorithms were men, and those men tended to come from similar backgrounds. Their particular worldview seeped into the projects they pursued, and even into the dangers they imagined. Many AI creators were boys raised on science fiction, their minds filled with scenes from The Terminator and Blade Runner. There is nothing wrong with worrying about such scenarios, Li Feifei believed, but fixating on them blinds the field to the full range of AI's potential dangers.
For deep learning systems, Li Feifei said, "if the data going in is biased, the output will be biased." The algorithms that drive AI may themselves be neutral, she acknowledges, but the data and the applications built on those algorithms are not. What really matters is who creates AI and why they create it. In her congressional testimony, Li Feifei pointed out that without engineers of all kinds, we may produce biased algorithms that make unfair decisions about people's loan applications, or train neural networks only on white faces, yielding models that perform poorly on Black faces. "I think it would be the end of the world if, in 20 years, our technology industry, its leaders, and its practitioners still lacked diversity," she said.
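The "bias in, bias out" point can be made concrete with a toy experiment: fit the simplest possible classifier on data drawn only from one group, then evaluate it on a second group whose feature distribution is shifted. All numbers below are invented purely for illustration.

```python
# Toy illustration of "bias in, bias out": a classifier fit on one group
# degrades on a group it never saw. All data here is hypothetical.

def fit_threshold(samples):
    """Pick the decision threshold that best separates the training labels."""
    best_t, best_acc = 0.0, 0.0
    for t in [x / 100 for x in range(101)]:
        acc = sum((f > t) == y for f, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(samples, t):
    """Fraction of samples the threshold classifier gets right."""
    return sum((f > t) == y for f, y in samples) / len(samples)

# Group A: the label is True exactly when the feature exceeds 0.5.
group_a = [(f / 20, f / 20 > 0.5) for f in range(21)]
# Group B: the same underlying concept, but every feature is shifted by +0.3.
group_b = [(f / 20 + 0.3, f / 20 > 0.5) for f in range(21)]

t = fit_threshold(group_a)          # trained only on group A
print(accuracy(group_a, t))         # high on the group it was trained on
print(accuracy(group_b, t))         # noticeably lower on the unseen group
```

Nothing in the classifier is malicious; the skew comes entirely from what it was never shown, which is exactly the failure mode of a face model trained on only one population.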
Li Feifei came to feel it was crucial to point AI development toward improving the human experience. One of her projects at Stanford is a collaboration with the medical school to bring AI into the intensive care unit to reduce problems such as hospital-acquired infections. It involves developing a camera system that monitors hand hygiene and reminds hospital staff when they forget to wash their hands correctly. This kind of interdisciplinary collaboration is unusual. "No one else from the field of computer science has ever approached me with something like this," said Arnold Milstein, director of the Stanford University center for clinical research and a professor of medicine.
That project gave Li Feifei hope about how AI can evolve: as a complement to human skills rather than a direct replacement for them. If engineers collaborate with people in other disciplines, and with ordinary people, they can build tools that expand human capability, automating tedious and time-consuming tasks so that, say, ICU nurses can spend more time with patients, rather than using AI to automate the shopping experience and eliminate cashiers' jobs.
Given how fast AI is developing, Li Feifei believes the field needs to change the makeup of its teams as quickly as possible.
Li Feifei in Stanford University's Artificial Intelligence Laboratory
Helping more women in artificial intelligence
Li Feifei has always been fascinated by mathematics, and she knows that women and people of color face an uphill climb in computer science. According to the U.S. National Science Foundation, women earned 28% of bachelor's degrees in computer science in 2000; by 2015 the figure had dropped to 18%. Even in her own lab, Li Feifei struggles to recruit people of color and women. Though more diverse than the typical AI lab, she says, it is still mostly male. "We still don't have enough women, and especially not enough underrepresented minorities, even in the pipeline," she said. "When students go to an AI conference, they see that 90% of the attendees are male, and the number of African Americans is far smaller than the number of white men."
When Li Feifei became Olga Russakovsky's adviser, the younger scientist was not at all optimistic about the field. Russakovsky was by then an accomplished computer scientist, with a bachelor's degree in mathematics and a master's in computer science, both from Stanford, but her dissertation had stalled. As the only woman in her lab, she felt isolated. After Li Feifei arrived at Stanford, things changed quickly. Russakovsky says Li Feifei helped her learn some of the skills needed to succeed in research and helped build her confidence. She is now an assistant professor of computer science at Princeton University.
Four years ago, as Russakovsky was completing her doctorate, she asked Li Feifei to help her create a summer camp to spark girls' interest in artificial intelligence. Li Feifei agreed at once. They rounded up volunteers and posted a call for second-year high school students to apply. Within a month they had received some 200 applications for just 24 places. Two years later they expanded the program, founding the nonprofit AI4All to bring underrepresented young people, including girls, people of color, and the economically disadvantaged, to the campuses of Stanford and the University of California, Berkeley.
AI4All has outgrown its small shared office at the Kapor Center in downtown Oakland, California. It now runs summer camps at six universities. Last year, the newly launched Carnegie Mellon camp had 20 places and drew 900 applicants. One AI4All student has used computer vision to detect eye disease. Another, whose grandmother died because an ambulance failed to arrive in time, used AI to write a program that sorts 911 calls by urgency. The projects seem to confirm that personal perspective shapes the AI tools of the future.
Toyota's Human Support Robot in Stanford University's artificial intelligence robotics laboratory
Running into the Maven military project at Google
After running Stanford's AI lab for three years, Li Feifei left in 2016 to join Google as chief scientist for Google Cloud AI, the company's enterprise computing business. She wanted to see firsthand how industry works, and whether contact with customers eager to deploy new tools would change the scope of her interdisciplinary research. Tech giants such as Facebook, Google, and Microsoft were pouring money into AI to put the technology to work in their businesses, and companies typically have far more data than universities. For AI researchers, data is fuel.
At first, Li Feifei's experience at Google was exhilarating. She watched the company apply her science to the real world. She led the launch of public-facing AI tools that let anyone build machine learning algorithms without writing a line of code. She opened a new lab in China and helped shape AI tools to improve health care. She spoke at the World Economic Forum in Davos, rubbing shoulders with heads of state and celebrities.
But working for a private company also brought new and uncomfortable pressures. Last spring, Li Feifei was caught up in the controversy over Project Maven, Google's contract with the U.S. Department of Defense. The project uses AI to interpret video images, which could be used to help drones strike targets; according to Google, it was "low-resolution object recognition using artificial intelligence" and "saving lives was the primary goal." Many employees, however, vehemently opposed the use of their work in military drones; about 4,000 signed a petition demanding "a clear policy stating that neither Google nor its contractors will ever build warfare technology." Several employees resigned in protest.
Li Feifei was not directly involved in the project, but her department was responsible for carrying it out. And after an email she wrote, apparently to help the company avoid embarrassment, was leaked to The New York Times, she became the subject of public controversy. The situation was confusing, because the industry regards her as an emblem of AI ethics. In fact, before the public outcry she had considered the technology "fairly innocuous," and she had not expected it to provoke such strong dissatisfaction among employees.
Still, Li Feifei came to understand why the incident blew up: "It wasn't entirely about the project itself. It was about the moment: the collective sense of urgency about our responsibility, the emergence of AI as a new force, and Silicon Valley's need to start having this conversation. Maven just happened to touch all of those issues." "Don't be evil" was no longer a persuasive enough slogan, she said.
The controversy subsided after Google announced it would not renew the Maven contract. A group of Google scientists and executives, including Li Feifei, also wrote public guidelines pledging that Google's AI research would focus on building socially beneficial technology, avoid bias in its tools, and avoid situations in which the technology might end up harming people. Li Feifei had been preparing to return to Stanford, but she felt it was vital to see the guidelines finished. "I think it's important for every organization to have a set of principles and a responsible review process. You know, Benjamin Franklin said of the Constitution that it may not be perfect, but it's the best we can do right now," she said. "People will still have different opinions, and people with different views can keep talking to each other." The day the guidelines came out, she said, was one of her happiest days of the year: "It meant so much to me to be personally involved, to contribute."
Back to Stanford to start a new project
In June this year, I visited Li Feifei at her home on the Stanford campus. As we talked, her phone buzzed constantly with messages from her parents, who wanted her to translate a doctor's instructions for her mother into Chinese. Her parents can text her at any moment and count on her help as soon as possible. Whether she is at Google headquarters, speaking at the World Economic Forum, or attending a congressional hearing, she always finds time to reply.
For much of her life, Li Feifei has been of two minds, focused on two seemingly incompatible things at once. She is a scientist with a deep feeling for art. She is fascinated by robots and by humans alike.
In late July, Li Feifei called me and asked, "Have you seen Shannon Vallor's statement?" Vallor, a philosopher at Santa Clara University specializing in the philosophy and ethics of new technologies, had just signed on as an ethics consultant for Google Cloud. Li Feifei strongly supported the move; she had even quoted Vallor in her testimony at the Washington hearing: "There are no independent machine values. Machine values are human values." The appointment is not without precedent. Other companies have begun setting rules about how their AI software may be used, and by whom. Microsoft established an internal ethics board in 2016. The company says it has turned away business from potential customers over ethical concerns the board raised, and it has begun placing restrictions on its AI technology, such as banning certain facial recognition applications.
When we talked in July, Li Feifei knew she was leaving Google: her two-year academic leave from Stanford was coming to an end. Many speculated that she was leaving because of Project Maven, but she said she was returning to Stanford because she did not want to give up her academic position. She sounded weary. After a turbulent summer at Google, she said, the ethical guidelines she helped write were "the first light before dawn."
She is eager to start her new project at Stanford. This fall, she and John Etchemendy, Stanford's former provost, announced the creation of an academic center dedicated to fusing the study of AI and humanity, blending hard science, design research, and interdisciplinary study. "As a new science, AI has never had a field-wide effort to involve humanists and social scientists," she pointed out. Those skill sets have long been viewed as having little to do with artificial intelligence, but Li Feifei insists they are key to its future.
Li Feifei sounds optimistic. At last June's hearing, she told lawmakers, "I have thought deeply about the jobs that are currently dangerous and harmful for humans, from fighting fires to search and rescue to natural disaster recovery." In her view, we should not only avoid harm where we can, but also recognize that these are precisely the jobs where technology can be of enormous help.
How much, of course, can a single academic institution change an entire field? Still, Li Feifei insists she must do what she can to train researchers to think like ethicists, guided by principle rather than profit, and informed by a diversity of backgrounds.
On the phone, I asked Li Feifei whether she could imagine AI having developed some other way, one that might have avoided the problems we have seen. "I think it's hard to imagine," she said. "Scientific advances and innovation come through generations of tedious work, trying one thing after another. It took us a while to recognize such biases. I only woke up six years ago and realized, 'Oh my God, we're entering a crisis.'"
On Capitol Hill, Li Feifei said, "As a scientist, I'm humbled by how nascent the science of AI is. It is a field with a history of only 60 years. Compared with the classic sciences that make daily human life better, such as physics, chemistry, and biology, AI has a long way to go before it realizes its potential to help people." She added, "With proper guidance, AI will make life better. Without it, the technology stands to widen the wealth gap even further, make technology more exclusive, and reinforce biases we have spent generations trying to overcome." Li Feifei leaves us feeling that we are still in the interval between AI's invention and its full impact.