
AI Screens Cancer Cells in Real Time: Google's New Breakthrough in Nature Medicine

via: 博客园     time: 2019/8/15 14:08:39

Annie, from Aofeisi

Produced by QbitAI (public account: QbitAI)

Detection of cancer, the number one killer of human health, has now taken a qualitative leap.

A rural clinic in a remote town now has the chance to use AI to screen, with extremely high accuracy, for lymph node metastases of breast cancer and for prostate cancer.

Google's latest research has been published in Nature Medicine.


Unlike humans, who must laboriously slice, fix, and stain tissue to find cancer cells, this intelligent microscope automatically picks cancer cells out of masses of cells and completes detection in real time.

The microscope also incorporates AR technology, superimposing the diagnostic results directly onto the original microscope image.

Previously, spotting cancer cells was tedious and exhausting work:


Google's microscope, by contrast, is on another level:


It locks onto target after target; not one escapes!

Compared with the old-fashioned carpet search under the microscope, AI captures cancer cells in an instant, like a security camera for the biological world.

And all of this heavy computation is completed locally: no network connection, no cloud access, everything done on the terminal device.

Does it require fancy, high-end equipment?

No, no, no. A slightly modified ordinary light microscope, with a camera and a computer added, can identify breast cancer and prostate cancer. The setup is simple and can be replicated quickly.


It works offline and in real time, the algorithm model is stable, the hardware is cost-effective, and the whole workflow is highly automated. At a time when medical AI is generally still in the brick-phone era, Google AI has taken a step comparable to the Android smartphone.

Some netizens even bluntly said:

I trust this machine more than humans.


This is a microscope that can save lives.

Accuracy rate is over 90%

Under this microscope, experiments on lymph node metastases and prostate cancer specimens were run at 10× and 20× magnification respectively, and in each case the cancer cells were successfully labeled.

As the ROC curves show, the AUC for lymph node metastasis detection reached 0.92 at 10× magnification and 0.97 at 20×; for prostate cancer detection, the AUC reached 0.93 at 10× and 0.92 at 20×.


Finally, whether measured by AUC, precision, or recall, all numerical results in the four experiments exceeded 0.9.
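For readers unfamiliar with these metrics, here is a minimal pure-Python sketch of how AUC, precision, and recall are computed. The labels and scores are made-up toy values, not the paper's data:

```python
# Toy per-patch labels (1 = tumour) and model scores -- illustrative only.
y_true  = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7]

# AUC: probability that a random positive is scored above a random negative.
pos = [s for s, y in zip(y_score, y_true) if y == 1]
neg = [s for s, y in zip(y_score, y_true) if y == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

# Precision and recall at a 0.5 decision threshold.
y_pred = [int(s >= 0.5) for s in y_score]
tp = sum(p and y for p, y in zip(y_pred, y_true))
precision = tp / sum(y_pred)   # flagged patches that really are tumour
recall = tp / sum(y_true)      # tumour patches that got flagged
print(auc, precision, recall)  # 0.9375 1.0 0.75
```

Note that AUC is threshold-free, while precision and recall depend on where the decision threshold is set.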


In addition to cancer, various other analyses can be displayed: progesterone-receptor staining counts, mitosis counts, cell counts, microorganism detection, and the like.


This also means that in addition to diagnosing diseases, this result can be used in many fields such as biological research and forensic identification.

Microscope construction

The entire AR microscope consists of three parts: the microscope body, the deep learning algorithm, and the computer that runs the algorithm.

First look at the microscope body. The body is an ordinary brightfield optical microscope, and the Nikon Eclipse Ni-U is used here.


Nikon Eclipse Ni-U microscope

Doesn't it look like the one used in middle-school biology class? Now, add two modules to this microscope.


One module is a camera that captures high-resolution images of the current microscope field of view.

The other module is a display that takes digital information and overlays it onto the original image seen through the microscope.

After the image captured by the camera module is processed by the deep learning algorithm, lesions are automatically screened out, located, and delivered to the display.
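The overlay step can be pictured as an alpha blend of the model's probability map onto the camera frame. Below is a pure-Python sketch on a tiny grayscale "image"; the shapes, values, and blending rule are illustrative assumptions, not Google's implementation, which injects the result into the optical path:

```python
# Alpha-blend a tumour-probability heatmap onto a camera frame.
# Pure-Python sketch on tiny toy arrays; shapes and values are assumed.
ALPHA = 0.4  # overlay opacity

def overlay(frame, heatmap, alpha=ALPHA):
    """Blend heatmap (0..1 probabilities, mapped to 0..255) onto frame."""
    return [
        [round((1 - alpha) * f + alpha * (h * 255)) for f, h in zip(frow, hrow)]
        for frow, hrow in zip(frame, heatmap)
    ]

frame   = [[100, 100], [100, 100]]   # 2x2 grayscale field of view
heatmap = [[0.0, 1.0], [0.5, 0.0]]   # model output per region
print(overlay(frame, heatmap))       # [[60, 162], [111, 60]]
```

Regions the model flags as tumour (high heatmap values) come out visibly brighter, which is the effect the pathologist sees through the eyepiece.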

On the computer side, the AR microscope uses a BitFlow CYT high-speed image capture card and an NVIDIA Titan Xp GPU.


For each microscope field of view, several steps are required: capture the current field, convert the image to RGB pixel values, run the deep learning algorithm on the image, and finally highlight any lesions found.
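The steps above can be sketched as a small pipeline of stages. The stage functions and toy data below are stand-ins of my own, not Google's code:

```python
# Per-field-of-view loop sketched as a pipeline of stub stages.
def capture_field():
    return [[100, 100], [100, 100]]              # stand-in grayscale frame

def to_rgb(frame):
    return [[(v, v, v) for v in row] for row in frame]

def run_model(rgb):
    # Stand-in "model": flag any pixel brighter than a threshold.
    return [[int(r > 90) for (r, g, b) in row] for row in rgb]

def pipeline():
    frame = capture_field()   # 1. grab the current field of view
    rgb = to_rgb(frame)       # 2. convert to RGB pixel values
    mask = run_model(rgb)     # 3. run the (stub) detection model
    return mask               # 4. the mask is what the display overlays

print(pipeline())  # [[1, 1], [1, 1]]
```

In the real system the loop repeats every time the pathologist moves the slide, which is why the latency figures discussed later matter so much.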

Real time, no lag

Because off-the-shelf hardware is used for the different tasks, the cost is low, one to two orders of magnitude below that of a traditional whole-slide scanner.

Moreover, the entire system can be easily transferred to a variety of ordinary microscopes.

The neural network takes 1000×1000 inputs, but the microscope field of view is larger, at 5120×5120 pixels, so a sliding window is needed to step across the whole image.
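Stepping a 1000×1000 window over a 5120×5120 field of view works out as follows. A small sketch; the clamp-to-edge behaviour for the 120-pixel remainder is an assumption of mine, since the exact stride scheme is not given here:

```python
def window_origins(image_size, window, stride):
    """Top-left origins of sliding windows covering a square image.

    The final window is clamped to the image edge, so the full field
    of view is covered even when the stride does not divide evenly.
    """
    origins = list(range(0, image_size - window + 1, stride))
    if origins[-1] != image_size - window:
        origins.append(image_size - window)  # clamp last window to the edge
    return origins

# Field of view 5120 px, network input 1000 px, non-overlapping stride.
xs = window_origins(5120, 1000, 1000)
print(xs)             # [0, 1000, 2000, 3000, 4000, 4120]
print(len(xs) ** 2)   # 36 patches per field of view
```

Running the network 36 times per field of view is exactly the redundancy the fully convolutional redesign below attacks.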

To increase speed, the research team applied a fully convolutional network (FCN) design to the InceptionV3 deep learning architecture, forming InceptionV3-FCN, which cut the amount of computation by 75%.
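The reason a fully convolutional design saves computation is that overlapping sliding windows share most of their convolution work: evaluated per window, that work is recomputed, while a single full pass computes each output exactly once. A toy 1-D illustration of the principle, not the actual InceptionV3-FCN:

```python
# Toy 1-D demo: full-pass convolution vs per-window re-evaluation.
signal = [4, 8, 6, 2, 10, 12, 0, 4]
K = 3  # filter width (a simple length-3 average)

def conv_full(x):
    """One fully convolutional pass: each output is computed once."""
    return [sum(x[i:i + K]) / K for i in range(len(x) - K + 1)]

def conv_per_window(x, window=5, stride=1):
    """Patch-based evaluation: rerun the filter inside every window."""
    outputs, ops = [], 0
    for start in range(0, len(x) - window + 1, stride):
        outputs.append(conv_full(x[start:start + window]))
        ops += (window - K + 1) * K   # multiply-adds spent in this window
    return outputs, ops

full = conv_full(signal)
windows, ops = conv_per_window(signal)
assert windows[0] == full[0:3]   # same answers, just more work
print(len(full) * K, ops)        # 18 vs 36: per-window does double the work
```

With 2-D images and deep networks, the overlap (and thus the saving) is far larger, which is where the reported 75% reduction comes from.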




Modified InceptionV3

After the transformation, latency dropped from 2126 ms to 296 ms, and the frame rate rose from 0.94 to 6.84 FPS. Further software optimizations then brought latency down to 37 ms and the frame rate up to 27 FPS, which is essentially lag-free.


Another big move from Google AI

This is another major study from the Google Brain team. Jeff Dean, the head of Google AI and its deep learning efforts, is also among the authors.

There are three co-first authors, namely:

Po-Hsuan Cameron Chen, who holds a Ph.D. from the Princeton Neuroscience Institute and previously earned a bachelor's degree in electrical engineering from National Taiwan University, focusing on the workings of the brain.


Krishna Gadepalli, who graduated from the University of North Rhine-Westphalia and has been at Google for 12 years.

In addition, there is Robert MacDonald, a researcher in the Google Health department.

The paper was submitted to Nature Medicine on June 19 this year and confirmed accepted on July 2. The mere 18 days from submission to acceptance left netizens sighing in admiration.

Google is not the first to work on smart microscopes, though.

Motic, a Chinese microscope manufacturer invested in by the Global Good Foundation, has launched a smart microscope, EasyScan Go, for diagnosing malaria. A sample test, however, took 20 minutes, since shortened to 10.

At last year's Tencent Global Partner Conference, Tencent AI Lab also released a smart microscope, but doctors cannot see the AI's feedback in the microscope's field of view in real time; instead, after connecting to a computer, they read the microscope images and the AI's auxiliary advice on the computer display.

However, even Tencent, at the forefront in China, is still only piloting with top-tier hospitals; its system is not end-to-end, replicable, and ready to use the way Google AI's is.

So, with real-time diagnosis, AI + AR results superimposed directly into the microscope's field of view, and the ability to detect metastatic lesions, Google's microscope currently seems ready in every respect; all it awaits is the east wind of FDA approval.

Before you go: take the code!

You can even DIY Google's smart microscope yourself: the team has officially released the model architecture and tool code.

The deep learning architecture code is available here:


The BitFlow camera frame-grabber driver:


In addition, you will need basic tools such as TensorFlow, OpenCV, and SciPy, which are not provided here.

Finally, the address of the Nature Medicine paper:

