
After watching the full Google I/O keynote, I can no longer tell the present from the future.

via: 博客园 (Cnblogs) | time: 2018/5/9 18:09:36

The same familiar open-air stage, the same familiar faces: this is the Google we know.

This year's Google I/O conference officially opened with a "Good morning" from Google CEO Sundar Pichai.


AI is still this year's protagonist

At the start of the keynote, Pichai went straight to the point, recalling what has changed since last year's Google I/O, for example that "the cheese in the hamburger emoji is now in the right place" and "the foam in the beer emoji no longer floats above the glass."

After another year of practice, Google's AI has made great progress in many fields. In medicine, for example, Google has used AI to diagnose disease from patients' retinal scans, and researchers running trials in India have achieved remarkable results.


We often run into foreign-language videos online. If a video happens to have subtitles in our own language, we are lucky; if it has no subtitles at all, and the speaker has a heavy accent, things get awkward.

Based on AI, YouTube can now generate translated subtitles from a video's images and sound. Even for heavily accented speech, the AI can work out what is being said from the video's content and render it as subtitles.


To put it plainly, this feature simply makes videos easier to watch.

But AI can do more than just "translate".

With Gboard's new Morse code input method, a dedicated input device, and AI-powered recognition, even people with severe mobility impairments can chat with friends.


In the demo video shown on stage, Tania, who has limited mobility, taps out Morse code with her head on input boards mounted on either side of it, and a smartphone converts the taps into text and voice output so she can talk with her partner.
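For readers curious about the mechanics, here is a minimal Python sketch of turning such taps into text. It is purely illustrative, not Gboard's actual implementation, and it assumes the taps have already been segmented into dots and dashes:

```python
# Minimal Morse decoder: each group of '.' and '-' is one letter,
# spaces separate letters, '/' separates words. Illustrative only.
MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode(taps: str) -> str:
    """Decode space-separated Morse letters; '/' separates words."""
    return " ".join(
        "".join(MORSE_TO_CHAR.get(letter, "?") for letter in word.split())
        for word in taps.split("/")
    )

print(decode(".... .. / - .... . .-. ."))  # -> "HI THERE"
```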


In the future, more people with disabilities will be able to communicate with the outside world through the new input method and AI.

Gmail has also incorporated AI. Besides a redesigned UI, it now predicts what the user is about to type while composing an email.


For example, when I type "I am going to" in the compose box, the AI predicts the place I am most likely to mention and offers to auto-complete it, making writing faster.
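As a toy illustration of this kind of prediction (Gmail's feature, announced as Smart Compose, uses a neural language model; the bigram counter below only sketches the idea):

```python
from collections import Counter, defaultdict

# Toy next-word predictor built from bigram counts over past text.
history = [
    "i am going to the gym",
    "i am going to the office",
    "i am going to lunch",
]

next_words = defaultdict(Counter)
for sentence in history:
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        next_words[prev][nxt] += 1

def suggest(prefix: str) -> str:
    """Suggest the most frequent word seen after the last typed word."""
    last = prefix.lower().split()[-1]
    candidates = next_words.get(last)
    return candidates.most_common(1)[0][0] if candidates else ""

print(suggest("I am going to"))  # -> "the"
```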

Not much stage time is left for Android

As early as two years ago at Google I/O, Google announced that its future would shift from "mobile first" to "AI first." Since then Android, once the undisputed star of this developer event, has slowly become a supporting act for AI, or rather one of its carriers.

Dave Burke, Android's vice president of engineering, took the stage and looked back on Android's 10-year history, including the T-Mobile G1, the first phone powered by Android.


Ten years, gone in a flash …

People have long worried that Android, which names each release after a successive letter of the alphabet, would one day exhaust all 26 letters. With this new generation of the system, only 10 letters remain.

Android P has arrived to open the next 10 years.


You could call this the biggest upgrade the system has had since Android 5.0, or you could say the system has matured to the point where it is hard to change much at all.

It counts as the biggest upgrade because, starting with this generation, Android gets a brand-new gesture-based interaction model, and the three tired virtual keys at the bottom of the screen become a thing of the past.

Beyond the new interaction model and the small UI changes made routinely every year, Android has become inseparable from AI.


If building Google Assistant into Android gives people flashy features to play with, using AI to optimize the system itself is a benefit users can really feel.

For example, in the new system Google uses AI to save power: it monitors battery consumption and shuts down apps that are still running in the background but have not been used for a while. AI also adapts screen brightness to the user's habits instead of relying on the light sensor alone.
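Google did not detail the models on stage; as a rough mental picture only, the background-app side can be imagined as a usage-based policy like this sketch (the 6-hour threshold is an invented assumption, not a documented Android value):

```python
import time

# Illustrative policy: flag apps idle in the background longer than a
# threshold as candidates for restriction. Android P's Adaptive Battery
# presumably learns this per user; here the rule is hard-coded.
IDLE_LIMIT_SECONDS = 6 * 3600  # assumption, not a real Android constant

def apps_to_restrict(last_used: dict[str, float], now: float) -> list[str]:
    """Return apps whose last use is older than the idle limit."""
    return [app for app, t in last_used.items()
            if now - t > IDLE_LIMIT_SECONDS]

now = time.time()
last_used = {"mail": now - 600, "game": now - 8 * 3600}
print(apps_to_restrict(last_used, now))  # -> ['game']
```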


Of course, using AI inevitably involves a large amount of user data. Burke said, however, that this data lives only on the user's phone and is never uploaded to the cloud; the models run entirely on the user's device.

In today's keynote, Android got even less time than last year. Perhaps that, too, proves Android has long since matured.

Playing second fiddle to AI does not make it unimportant. After all, 10 letters remain unused, and we look forward to the next 10 years.

A Google Assistant that is more and more human

In a recent test in which a third-party agency measured the "IQ" of today's mainstream AI assistants, Google Assistant beat Amazon's Alexa and Apple's Siri.

Today, Google Assistant gained new capabilities that may be hard to believe.


First of all, Google has added six new voices to Google Assistant. Until now, the Assistant offered only two voices, one male and one female, and both still sounded mechanical rather than natural.

The six new voices sound much closer to a real human.

In addition, Google Assistant has further improved semantic understanding and multi-round dialogue capabilities.

Previously, after waking Google Assistant with "Hey Google," you had to pause for about half a second before giving the actual command. Now you can speak the command immediately after "Hey Google," and the Assistant also responds faster than before.


This brings the experience closer to a natural human conversation.

Google Assistant can now also recognize multiple instructions within a single sentence and answer all of them, which places high demands on the assistant's parsing and semantic understanding.
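A crude way to picture handling multiple instructions in one sentence is to split the utterance into clauses and match each to an intent. The sketch below is a stand-in for illustration; real assistants use learned models, and these intent patterns are invented:

```python
import re

# Toy multi-intent parser: split a compound command on conjunctions,
# then match each clause against known intent patterns.
INTENTS = {
    "weather": re.compile(r"\bweather\b"),
    "reminder": re.compile(r"\bremind\b"),
    "music": re.compile(r"\bplay\b"),
}

def parse(utterance: str) -> list[str]:
    clauses = re.split(r"\band\b|,", utterance.lower())
    return [name
            for clause in clauses
            for name, pattern in INTENTS.items()
            if pattern.search(clause)]

print(parse("What's the weather today and remind me to buy milk"))
# -> ['weather', 'reminder']
```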

If you think all of the above are just minor, painless updates, the next one may leave you bewildered, even a little scared.

Namely: Google Assistant can make phone calls by itself.


On stage, Google played a demo of this new Assistant capability.

When it is inconvenient for you to be on the phone, the Google Assistant in your phone becomes your "operator"; more precisely, it temporarily "becomes" you and handles the call for you.

For example, when you book a restaurant, the staff will ask you for details such as a time and a name. If you are tied up and cannot take the call yourself, Google Assistant will converse with the restaurant staff, understand each question they ask, and answer them one by one by voice.

More incredibly, the Assistant's voice in the live demo carried the same tone as a human's, and the staff on the other end were completely unaware that they were talking to an AI.


Google CEO Sundar Pichai said that Google Assistant's conversational ability was accumulated through machine learning and is still being refined, so we may not get to use this futuristic feature any time soon.

On the other hand, some day in the future when we pick up a gentle-sounding call, the other party may well be an AI.

In addition to the "speaking" upgrade, there are also "visual" updates.

Carrying the new features is a Google Home-style smart speaker with a screen, which already made its debut at this year's CES and will go on sale in July. Its rival will be Amazon's Echo Show.


In addition, users can finally complete tasks such as shopping and booking by voice through Google Assistant.

Of course, the wearable platform Wear OS was missing from this I/O, and AR and VR got far less attention; we do not know what their fate will be. After Google went all in on AI, they have apparently slipped into second place.

A camera that not only sees problems but also solves them

At last year's Google I/O conference, Google Lens stunned the audience with its capabilities.

A year later, Google Lens can not only recognize text and objects with AI, it can also answer the questions users "see" through it.


Copying and pasting recognized text is a basic Google Lens feature. With AI layered on top, however, the user can scan a word in a specific context and directly see images of what it refers to.

Say I am in a restaurant. Before ordering, I want to know what this restaurant's "Caesar Salad" actually looks like. I just point Google Lens at the words "Caesar Salad" on the menu, and the system feeds back pictures of the dish.


"Text recognition" may be considered a "pediatrics" for today's smart phones. What's more, it is just a text recognition system. However, this year Google Lens added support for object style matching and real-time scene matching, which is a lot more advanced than previous Google Lens.


Also powered by AI, users can point Google Lens at an item that catches their eye and get products in a similar style; meanwhile, with cloud TPUs crunching data in real time, Lens can overlay relevant information on whatever the camera is looking at, as it looks at it.


As another example, when Google Lens recognizes a poster of a singer, the system plays that singer's music video picture-in-picture.

The theory sounds dry, but it condenses into one sentence:

After a year of technical groundwork, Google Lens has upgraded from helping users "see the question" to actively "answering the question" for them.


It is worth mentioning that this Google Lens update adds support for devices from four more manufacturers: Xiaomi, Asus, OnePlus, and TCL.

AI is not just in phones; it can even drive cars

Self-driving cars have been one of the hottest topics on the Internet in recent years. Before this year's keynote ended, Google reported on the state of its driverless technology with Waymo: after a cumulative 6 million miles of road tests, Phoenix will soon become the first pilot city for the Google/Waymo driverless car.


AI is strong, but we still have a slight regret

After watching this nearly two-hour developer keynote, my colleagues and I, up through the night for it, felt that time had passed too quickly. We did not guess the beginning, did not guess the middle, and did not guess the end.

No one guessed that this would be Google's most unusual opening. When Google CEO Sundar Pichai uttered the word "AI," people expected Google to unveil new work that would shock the audience. What came out of Pichai's mouth instead was "responsibility."

This company, which in many people's minds stands for a passion for technology, opened not with black tech or new hardware but with a technology company's responsibility, a responsibility toward society: "a deep responsibility to get this right."

We did not guess that Google Assistant could talk to humans like a human. Just yesterday I joked with friends that the AI industry ran on valuations; today AI slapped me in the face. Even if artificial intelligence still has a long way to go, Google has once again shown us that AI is not a bubble.

We did not guess the ending because it felt a bit abrupt, and it was not enough. Before we have had time to absorb so much information, we are already waiting for next year's I/O.

This time, facing a Google I/O as exciting as ever, I hope we can simply witness it, without any sense of urgency.

Image source: 9to5google | This article was edited by Wen Jun and Liu Han.
