Amazon may not enjoy the AI fame of Google, Microsoft, or Facebook, but its accumulated expertise in the field runs deep, built through years of quiet investment. What exactly is Amazon doing in AI, and how does that work tie into its existing businesses such as cloud computing and e-commerce? SiliconANGLE conducted an exclusive interview with Swami Sivasubramanian, head of AI at Amazon Web Services; Lei Feng network's compilation follows.
Amazon does not usually appear alongside Google, Microsoft, Facebook and IBM on lists of artificial intelligence leaders. But that situation is about to change.
Jeff Bezos, Amazon's chief executive, wrote in his recent annual open letter to shareholders that he is deeply interested in machine learning, the branch of AI that teaches computers to learn without being explicitly programmed, and that it is key to the company's future success.
Like today's other AI leaders, Amazon is focused on deep learning, which uses neural networks designed to loosely mimic the way the brain learns. Over the past few years, deep learning has made dramatic progress in speech and image recognition, making possible both Amazon's Alexa voice assistant and Google's self-driving cars.
Bezos pointed out that some of Amazon's machine learning work is obvious: Alexa, the Prime Air delivery drones, and the machine-learning-powered Amazon Go store. But most of it operates behind the scenes, in demand forecasting, product recommendations and the like, and that is where Bezos expects it to have the greatest impact.
Areas of focus
The next phase is using Amazon Web Services to bring machine learning to the developer community, lowering its cost and improving its ease of use. Starting last year, Amazon introduced new cloud services that put machine learning in developers' hands: Lex, the conversational technology behind Alexa, which can be used to build conversational interfaces such as bots; Polly, for text-to-speech; and Rekognition, for image analysis and related tasks.
"Customers have built powerful systems for applications ranging from early disease detection to increasing crop yields," Bezos said. "Watch this space. Much more to come."
Machine learning services are important to Amazon in beating back rivals in the intensifying cloud computing war, in which Google and Microsoft hope to gain ground on the Seattle online retail giant. Indeed, it is clear that Amazon wants to be the principal technology supplier for the coming wave of intelligent applications.
"Amazon's next pillar may be AI, and it could be just as important as its Prime free-shipping service and AWS itself," CB Insights said in a new report. "Amazon is betting on becoming more of a platform company than ever before."
By some measures, such as Gartner's comparisons, Amazon's cloud machine learning products have catching up to do with Microsoft and Google. At the AWS Summit in San Francisco over the past week, the company announced new updates and features aimed at starting to remedy that.
Amazon AI Vice President Swami Sivasubramanian
For a deeper look at Amazon's machine learning plans, SiliconANGLE interviewed Swami Sivasubramanian, vice president of Amazon AI at Amazon Web Services, at the developer conference. What follows is an edited transcript of the conversation:
Q: Please describe the scope of Amazon's machine learning work.
A: There are three layers. The top layer is applications such as Lex, Polly and Rekognition: pre-trained deep learning models exposed through application programming interfaces, aimed at application developers who do not want to learn anything about deep learning but want to build intelligent applications that can hear, speak or see.
The next layer is platform services such as the Amazon Machine Learning API, along with pieces like EMR [Elastic MapReduce, for analyzing large amounts of data], for those who want to build their own machine learning models on top of data in Redshift [the AWS data warehouse] or a relational database. The level below that, where my team works, is deep learning frameworks and machine learning algorithms.
Many of the scientists on my team work on core deep learning frameworks. At AWS, we are very open about supporting all of them, from Apache MXNet to TensorFlow to Caffe to Theano and so on.
Q: Broadly speaking, what are you trying to achieve here?
A: Our goal is to fundamentally democratize AI so that every developer can use it. To a large extent, building artificial intelligence today still requires Ph.D. researchers who are well versed in machine learning.
We want developers to be able to build new intelligent applications that can actually do what people can do: see, listen and understand. And we want to enable businesses to make informed decisions based on their own data stored in AWS.
Q: Where can we see this in action?
A: Netflix has built a recommendation engine that uses deep learning to show customers what they ought to watch. Pinterest has done image recognition. We use machine learning to automate Amazon's logistics, so when you click to order something, robots use computer vision and deep learning to pick and ship the goods. We also use it to enhance existing products such as X-Ray, a neat Amazon Instant Video feature built on computer vision and deep learning: when you pause a video, it tells you who every actor on screen is.
We are also using it to create new product lines. Everybody knows Alexa now. I have used Alexa for two years; it is like having another person in the house. And there is Amazon Go, where this technology supports a checkout-free shopping experience: we can actually see who is walking up to pick up an item or put it back.
Q: Amazon has only recently begun talking about AI, while Google, Microsoft, Facebook and others get more of the attention. Does Amazon want to change that?
A: At Amazon, we tend to focus more on what matters to customers. Take Amazon Go: we say, "This is a checkout-free retail experience that helps customers shop faster." We do not say, "Hey, look at this great deep learning application, and by the way, it's also useful." The same goes for Alexa. I consider myself a scientist, but I prefer it this way, because my family just likes talking to Alexa.
That said, Amazon has invested heavily in machine learning and artificial intelligence, and we have been very open about contributing to the scientific community. We have submitted a number of research papers this year. In MXNet, we contributed 35 percent of the code commits.
Q: The deep learning algorithms in use today are roughly 20 years old. What has changed to make them work so much better now?
A: First, we now have the ability to store all of this data affordably, without paying storage vendors a fortune.
Second, specialized computing hardware. GPU [graphics processing unit] and FPGA [field-programmable gate array] chips have been unlocked for these applications.
The last piece is that once you build and train these models, we can easily scale the distributed training infrastructure to hundreds of GPUs through preconfigured templates. Thanks to cloud computing, setting this up is now very simple.
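The scaling he describes typically relies on synchronous data parallelism: each GPU computes gradients on its own shard of a batch, and the gradients are averaged before the shared model is updated. A minimal stdlib-only sketch of that idea, with plain Python functions standing in for GPUs (all names here are illustrative, not an AWS or MXNet API):

```python
# Data-parallel training sketch. Model: y = w * x, squared-error loss,
# so the gradient for one example is dL/dw = 2 * (w*x - y) * x.

def gradient(w, shard):
    """Average gradient of the squared error over one data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def split(batch, n_workers):
    """Split a batch into equal shards, one per worker."""
    k = len(batch) // n_workers
    return [batch[i * k:(i + 1) * k] for i in range(n_workers)]

def train_step(w, batch, n_workers, lr=0.01):
    """One synchronous step: shard, compute per-worker grads, average, update."""
    shards = split(batch, n_workers)
    grads = [gradient(w, s) for s in shards]  # runs in parallel on real GPUs
    avg = sum(grads) / len(grads)             # the "all-reduce" averaging step
    return w - lr * avg

# Data generated from the true relation y = 3x; training should recover w ≈ 3.
batch = [(x, 3 * x) for x in range(1, 9)]
w = 0.0
for _ in range(100):
    w = train_step(w, batch, n_workers=4)
print(round(w, 2))  # converges to 3.0
```

Because the shards are equal-sized and the update is synchronous, averaging the per-worker gradients gives exactly the single-machine gradient; that equivalence is what lets preconfigured templates scale the same training loop across many GPUs.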
What is the next step?
Q: How much of Amazon's focus is on applying existing techniques, versus inventing new algorithms or methods?
A: We conduct fundamental research in many areas: speech recognition, natural language understanding and visual understanding. Going back ten years, we had to push deep learning technology forward just to make it usable in customers' hands. Even with something as popular as Alexa, we still have to invent new algorithms to get the customer experience we want. For Amazon Go, we had to dramatically advance the state of the art in deep learning and computer vision.
We also do fundamental research on the core engines, such as deep learning frameworks. We have a team working on a deep learning engine that is continually scaling the system. Our customers have petabytes of data they want to process: images, video and so on. With the ever-growing volume of data to be processed, scalability will be one of the main differentiators over the next few years.
Q: Can your machine learning models run at the edge of the network rather than in the cloud, for example in a self-driving car that cannot wait for a reply from a central cloud?
A: We believe models built for the cloud can also run at the edge. The deep learning models we build can run in a traditional computing environment, in EC2 [AWS Elastic Compute Cloud] or in Lambda [the AWS serverless computing service]. Greengrass [which lets devices process operations and data locally, even offline, without the cloud] is a good environment for running them on edge devices. My team ported an MXNet deep learning model that identifies objects on a table so that it runs on a Raspberry Pi camera [built on the small, inexpensive computer].
The goal is a mix: some deep learning models run at the edge for fast responses, and some run in the cloud for more complex use cases. That is how Alexa works, and it is why we think this new hybrid deployment model will become more interesting in the future.
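The hybrid pattern he describes can be sketched as a simple confidence-based router: a small on-device model answers when it is confident, and everything else escalates to a larger cloud model. This is an illustrative Python sketch under that assumption, not the actual Alexa or Greengrass design; all model functions here are stand-ins:

```python
# Hybrid edge/cloud inference sketch. The edge model is small and fast but
# only confident on a few known commands; the stubbed cloud model is slower
# but can (we assume) interpret anything.

EDGE_VOCAB = {"lights on", "lights off", "play music"}  # on-device commands

def edge_model(utterance):
    """Fast local model: high confidence only on the commands it knows."""
    if utterance in EDGE_VOCAB:
        return utterance, 0.95
    return "unknown", 0.30

def cloud_model(utterance):
    """Larger remote model, stubbed: pretend it handles the complex cases."""
    return f"cloud-interpreted: {utterance}", 0.90

def infer(utterance, threshold=0.8):
    """Try the edge first; fall back to the cloud below the threshold."""
    intent, confidence = edge_model(utterance)
    if confidence >= threshold:
        return intent, "edge"
    return cloud_model(utterance)[0], "cloud"

print(infer("lights on"))                   # ('lights on', 'edge')
print(infer("what is the weather in Oslo")) # escalated to the cloud
```

The design choice is latency versus capability: common utterances never leave the device, while the long tail pays one network round-trip for a more capable model, matching the "quick usage at the edge, complex cases in the cloud" split described above.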
Q: What is the next step for machine learning?
A: My daughter is two years old. After seeing two tomatoes, she can recognize any kind of tomato; she does not need to observe a thousand of them. That is why I think deep learning is still in its infancy. Getting today's technology to improve the accuracy of deep learning models from very limited data is where things are headed.
We have been experimenting with these things. Sometimes people do not need absolute accuracy. Even in something like visual search, people are willing to trade lower accuracy for better coverage.
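The coverage-versus-accuracy tradeoff he mentions can be made concrete with a confidence threshold: lowering it answers more queries (higher coverage) but admits more wrong matches (lower precision). A small sketch with made-up scores, purely to illustrate the tradeoff:

```python
# Each (confidence, correct) pair is one visual-search result with an
# invented model confidence and whether the match was actually right.
results = [
    (0.95, True), (0.90, True), (0.85, True), (0.80, False),
    (0.70, True), (0.60, False), (0.55, True), (0.40, False),
]

def coverage_and_precision(results, threshold):
    """Keep results at or above the threshold, then measure both metrics."""
    kept = [correct for conf, correct in results if conf >= threshold]
    coverage = len(kept) / len(results)             # fraction of queries answered
    precision = sum(kept) / len(kept) if kept else 0.0  # fraction of answers right
    return coverage, precision

# Sweeping the threshold down trades precision for coverage.
for t in (0.9, 0.7, 0.5):
    cov, prec = coverage_and_precision(results, t)
    print(f"threshold={t}: coverage={cov:.2f}, precision={prec:.2f}")
```

On this toy data, dropping the threshold from 0.9 to 0.5 raises coverage from 0.25 to 0.88 while precision falls from 1.00 to about 0.71, which is exactly the compromise users of a visual search feature might happily accept.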
So there is much more to come. If Amazon's work in machine learning were a single day, we would just be waking up; we have not even had our first cup of coffee.