
Stanford PhDs lie low for two years, then charge out to take on Google and Tesla

via: 博客园    time: 2017/3/16 17:00:32    reads: 1212


It is a blue Lincoln. On the roof sits an LED screen, ringed by several black cylinders that spin constantly, and there is a black bulge at the front of the car. A screen next to the driver's seat shows dense blue dots and blue boxes; as the car drives, the points and lines refresh continuously. A small fire extinguisher is mounted beside the seat.

I rode the car for a loop around Mountain View. Over this 7.5 km, roughly 20-minute route, we passed 16 traffic lights, went through a four-way STOP-sign intersection, made a left turn governed by an arrow signal, and more. Most of the time the whole experience was very comfortable, with only the occasional slightly abrupt deceleration or acceleration, and the science-fiction look of the screen beside the driver, to remind me that this was a driverless car.

(Image: annotation tool)

Yes, the driver beside me had taken his hands off the steering wheel, and his feet were off the throttle and brake. The cameras, lidar, and radar on the roof are the car's eyes, and the computer in the back of the car is its brain: a set of deep-learning algorithms picked me up at the roadside and delivered me back to the parking lot.

This is the self-driving car under development at the startup we have reported on before, Drive.ai. This secretive autonomous-driving company was founded less than two years ago, yet this is already its fourth-generation vehicle, and I had the privilege of being its first outside passenger. The robot driver felt very natural, more "human" than the self-driving cars I had ridden before. For example, when we reached an intersection just as a red light turned green, the car did not stop and then restart; it switched from deceleration to acceleration very naturally and flowed through the intersection.

"Although it looks simple, there were some hard problems," said the engineer who rode along with me. For example, at one intersection a left turn is allowed only on a green left-turn arrow, and getting the car to understand the difference between an ordinary green light and the left-turn arrow at that specific intersection was not easy. "At first the car didn't understand. We collected data from intersections like this to train it, and it learned."


Of course, the car still has plenty of room for improvement, such as handling acceleration and deceleration more smoothly, and making sensible right turns on red and unprotected left turns. On the whole, though, it already delivers a very good driving experience.

Co-founder and CEO Sameep Tandon told us that their team's system has already reached L4 autonomy (fully automated driving under certain conditions), and that they now want to work with more partners to bring their combined hardware-and-software solution to commercial fleets.


Classic robotics vs. deep learning: which approach wins at autonomous driving?

With so many autonomous-driving companies out there, what makes Drive.ai different?

In our interviews, several co-founders repeatedly stressed that Drive.ai is a "deep-learning-first" company. That sets its technology apart from Waymo (formerly Google's self-driving car unit), Tesla, and the rest: Drive.ai builds its entire autonomous-driving system on deep learning.

What does this mean?

The field of autonomous driving can be divided roughly into two schools. One follows the classic robotics approach, which is rule-based: engineers write fixed code for every scenario that tells the machine what to do. The consequence is that when a new scenario appears and no corresponding code exists, the machine may simply not know how to respond, which severely limits scalability.
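To make that limitation concrete, here is a loose Python sketch of a rule-based planner (purely illustrative, not any company's actual code). Every scene type needs its own hand-written handler, and an unanticipated scene can only fall through to a fallback:

```python
# Hypothetical sketch of a rule-based planner: one hand-written handler
# per known scene type. Scenes without a handler cannot be processed.

def handle_vertical_traffic_light(scene):
    return "stop" if scene["light_color"] == "red" else "proceed"

def handle_four_way_stop(scene):
    return "stop_then_yield_to_first_arrival"

SCENE_HANDLERS = {
    "vertical_traffic_light": handle_vertical_traffic_light,
    "four_way_stop": handle_four_way_stop,
}

def plan(scene):
    handler = SCENE_HANDLERS.get(scene["type"])
    if handler is None:
        # A scene no engineer anticipated (say, a horizontally mounted
        # light) has no rule: all the planner can do is stop safely
        # and wait for a human to write new code.
        return "safe_stop"
    return handler(scene)

print(plan({"type": "horizontal_traffic_light"}))  # -> safe_stop
```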

For example, when Waymo's self-driving cars expanded from the Mountain View headquarters to Austin, they could not reliably recognize Austin's traffic lights simply because Mountain View's lights are mounted vertically while Austin's are horizontal, and programmers had to rewrite the code to "teach" the cars.

The other, now more popular direction, which Drive.ai has also chosen, is based on deep learning. Deep learning simulates the brain's recognition mechanisms and is better at recognizing, judging, and classifying unstructured data (such as images and speech), so the algorithm can learn from data and training. Like a human brain, it only needs engineers to keep training it on similar scenarios, and it learns to make its own judgments. Even in a new scenario the car knows how to respond, which makes the approach easier to adapt and scale.

Take traffic-light recognition as an example. A rule-based self-driving car needs every traffic light marked on a high-precision map so that the machine knows exactly where to look. A deep-learning algorithm, by contrast, can identify the color of a traffic light directly from the camera feed, so the vehicle can read the lights, and the traffic conditions of the whole intersection, on its own before deciding whether to proceed.
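As a rough illustration of the learned alternative, here is a minimal PyTorch sketch of a camera-based traffic-light color classifier. Drive.ai has not published its models, so the layer sizes, the 64×64 input, and the red/yellow/green class set below are all assumptions:

```python
# Minimal sketch of a traffic-light color classifier that works
# directly on camera crops (architecture is illustrative only).
import torch
import torch.nn as nn

class TrafficLightNet(nn.Module):
    def __init__(self, num_classes=3):  # red / yellow / green
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):  # x: (batch, 3, 64, 64) camera crops
        return self.classifier(self.features(x).flatten(1))

model = TrafficLightNet()
crop = torch.randn(1, 3, 64, 64)  # stand-in for a real camera crop
print(model(crop).argmax(dim=1))  # 0=red, 1=yellow, 2=green
```

Training such a network on labeled intersection footage, rather than hand-placing every light on a map, is what lets the learned approach generalize to lights it has never been explicitly told about.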

Sameep Tandon said that as the advantages of deep learning have become clear, more and more companies claim their technology is based on it, but few truly are. "All of our technology, such as mapping, motion planning, and decision making, is built on deep learning. We used deep learning to design our entire system. That is very different from other companies, which take the classic robotics route and use deep learning only as a supplementary piece," he said.


From data processing to algorithm training to computing resources: an autonomous-driving company built on deep learning

Tao Wang, another co-founder of Drive.ai, says data is one of the hardest problems in autonomous driving, and the key question after collecting driving data is how to use it. The first step is labeling it so the algorithm engine can be trained. One hour of autonomous-driving data can take even a big Internet company some 800 hours of human labor to label.

Drive.ai has therefore built a customized data-labeling tool that keeps optimizing the whole data workflow and produces high-quality classifications. Using deep learning, the same task can be classified in several ways at once, and combining the outputs yields high-quality labels. Their labeling is now roughly 20 times faster than the big companies', which means more data can be fed to the algorithm engine to learn from. The cars can thus quickly handle new roads and learn new scenarios, and performance keeps improving as the training data grows.
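The article does not detail how the tool works internally, but the idea of producing one high-quality label from several simultaneous classifications can be sketched as a simple majority vote (all names and data below are hypothetical):

```python
# Sketch of merging several labels for the same frames by majority
# vote, a stand-in for the consensus step described above.
from collections import Counter

def merge_labels(labels_per_annotator):
    """labels_per_annotator: one list of labels per annotator/model."""
    merged = []
    for frame_labels in zip(*labels_per_annotator):
        label, count = Counter(frame_labels).most_common(1)[0]
        # Keep the majority label; send ambiguous frames back for
        # human review instead of guessing.
        merged.append(label if count > len(frame_labels) // 2 else "review")
    return merged

print(merge_labels([
    ["red", "red", "green"],    # annotator (or model) 1
    ["red", "green", "green"],  # annotator 2
    ["red", "red", "green"],    # annotator 3
]))  # -> ['red', 'red', 'green']
```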

Tao Wang said Drive.ai's deep-learning system is sometimes even more accurate than a dedicated human labeler. Once the algorithm classified a light as red while the labeler remembered it as green; when they went back to check the data, it really was a red light. "This shows the algorithm can be trained to be smarter than humans, and it can do very well whether at decision making, route planning, or localization."

Once, their car spotted a dog riding a skateboard by the road. The labeler was stunned and asked the engineers, "How am I supposed to classify this?" Yet the car kept driving normally. The key to deep learning is that the car does not need to identify every single thing; it needs to know how to drive safely, and then make its own decisions.

Another important piece is the simulator Drive.ai has built. It can reproduce all kinds of scenarios, such as a passing cyclist, and check how the learning engine handles them. The simulator runs 24/7, so it is as if their cars were continuously road-testing in a virtual world. In the real world, Drive.ai was one of the earliest companies to receive a California autonomous-vehicle testing license, and its cars have been road-testing in Mountain View for nine months without a single accident.
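As a rough picture of what an always-on simulation harness does, here is a hypothetical Python loop that replays scenarios against the decision engine and flags unsafe outputs; every name in it is an illustrative stand-in, not Drive.ai's actual simulator API:

```python
# Hypothetical scenario-replay harness: recorded or synthetic scenes
# are fed to the planner continuously and any unsafe decision is
# logged for engineers to inspect.
import itertools

SCENARIOS = [
    {"type": "cyclist_crossing", "safe_actions": {"slow_down", "stop"}},
    {"type": "dog_on_skateboard", "safe_actions": {"slow_down", "stop"}},
]

def planner(scene):
    # Stand-in for the real decision engine under test.
    return "slow_down"

def run(num_steps):
    # In production this loop would run 24/7; it is bounded here so
    # the sketch terminates.
    for scene in itertools.islice(itertools.cycle(SCENARIOS), num_steps):
        action = planner(scene)
        if action not in scene["safe_actions"]:
            print(f"UNSAFE: {scene['type']} -> {action}")

run(1000)
```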

There is one more key point: an autonomous-driving system built on deep learning can shake off its dependence on expensive hardware. Instead of custom sensors, Drive.ai uses low-cost commercial hardware, including lidar, radar, and cameras. The deep-learning system synchronizes all of the sensor data and makes the soundest decision it can from that combined information, avoiding misjudgments caused by noise in any single sensor. Even if one sensor fails, the others keep working.
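A toy sketch of that redundancy idea, assuming hypothetical distance readings: fuse whichever sensors are still reporting, so a single failed sensor does not blind the system:

```python
# Toy sketch of redundant sensor fusion: average the distance estimates
# from whichever sensors are still reporting, so one failure (e.g. a
# rain-blinded camera) does not blind the whole system.
def fuse_distance(readings):
    """readings: sensor name -> distance in meters (None if failed)."""
    valid = {name: d for name, d in readings.items() if d is not None}
    if not valid:
        raise RuntimeError("all sensors failed; trigger a safe stop")
    return sum(valid.values()) / len(valid)

# Camera blocked by rain; lidar and radar still cover the scene.
print(fuse_distance({"camera": None, "lidar": 12.4, "radar": 12.9}))
```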


It was this sensor redundancy that let them pull off driving on a rainy night: even with the camera blinded by rain, the other sensors kept the car driving safely on its own.

And because the sensors feed their information into the software system's artificial neural networks, which can run on ordinary computing hardware, the cost of the solution drops sharply.

"Our conversion program is not expensive, and can adapt to all types of cars, whether cars, trucks, trucks, or golf cars are OK. Can be in a week, to land a new car platform. And because of our expertise in building large-scale neural networks for machine applications, we can only drive part of the other autopilot's computing resources to drive the car automatically. Each processor requires almost 30% of a desktop computer. "Said Sameep Tandon.

Most self-driving cars depend on high-precision maps accurate to the centimeter in order to perceive their surroundings and work out where to drive, which means the maps must be updated constantly: expensive and risky. Deep learning lets Drive.ai's cars compare objects on the map, such as lanes and sidewalks, against the real environment, so even when the environment changes, the car can adapt.
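One way to picture that map-versus-reality comparison (a speculative sketch, not Drive.ai's published method) is a simple consistency check that falls back to live perception when the map looks stale:

```python
# Toy sketch of map-versus-perception reconciliation: the car checks a
# detected lane position against the map and trusts live perception
# when the environment has changed (all values hypothetical).
def reconcile(map_lane_offset_m, detected_lane_offset_m, tol_m=0.5):
    """Return the lateral lane offset the planner should use."""
    if abs(map_lane_offset_m - detected_lane_offset_m) <= tol_m:
        return map_lane_offset_m  # map still matches reality
    # Environment changed (repainted lanes, construction): prefer
    # what the sensors actually see over the stale map.
    return detected_lane_offset_m

print(reconcile(1.8, 2.6))  # map is stale -> use the detected 2.6 m
```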

An entire Stanford artificial intelligence lab goes into business

Why can Drive.ai do all this? It starts with the team's background. Although Drive.ai was founded less than two years ago, its founders had already begun large-scale research on deep-learning systems at Stanford's Artificial Intelligence Laboratory four or five years earlier.

Yes, the core members of the team come from the lab of deep-learning luminary Andrew Ng. Two years ago, six people in the lab all suspended their Ph.D. studies and founded the company, having spotted the opportunity deep learning opened up for driverless cars. At a time when even one or two deep-learning experts are a rarity at most companies, this team practically emptied out Stanford's artificial intelligence lab…


"Autopilot is very difficult, if only a person's words, are very difficult to do, so we decided to do it together." "Said Sameep Tandon, co-founder and CEO. "We were talking to a lot of car companies and found that we had such a great skill, and we could not wait to graduate. & Rdquo;

Tandon said the team began very fundamental research, such as data annotation, from early on, and is one of the best deep-learning teams around. While still at Stanford, they built what was then the world's largest neural network. Google had run an experiment on 1,000 machines in its Google Brain project; they reproduced the result with a single 16-GPU setup at a tenth of the cost. So, Tandon said a little sheepishly, he doesn't know what other companies mean when they call themselves deep-learning companies, but his team is definitely one of the best deep-learning teams in the world.


After two years of low-profile research, Drive.ai now believes its L4-level driverless technology has reached a certain stage, and it hopes to find partners to bring the technology to more cars. Co-founder Carol Reiley told us that Drive.ai works closely with OEMs and wants carmakers' support. "We don't build cars or sensors; we just offer a solution. For now we hope to start by cooperating with commercial fleets, including parcel delivery, food delivery, retail, and so on. We want to work with partners at L4, improve positioning accuracy, collect data together, then keep expanding outward and ultimately reach consumer-grade L5."

"Deep learning, as the leading approach to artificial intelligence, can teach machines to think the way humans do, and that is the key to driverless cars. Building a scalable, widely applicable, safe platform on that foundation is what we do. We believe autonomous driving will upend the transportation system as a whole, in both safety and efficiency," said Sameep Tandon.
