NVIDIA today released its fiscal 2018 third-quarter earnings report, followed by a conference call in which CEO Jen-Hsun Huang and CFO and Executive Vice President Colette Kress reviewed the company's third-quarter operating and financial results and answered questions from analysts.
The following is a summary of the Q&A portion of the call:
Goldman Sachs analyst Toshiya Hari: Three months ago, you said the July quarter was a transitional one for the data center business, and you have obviously entered the October quarter in good shape. Could you talk about where the data center business goes over the next few quarters, especially in inference? Please share your customer feedback and your outlook for the year ahead.
Jen-Hsun Huang: As you know, we began shipping Volta last quarter and started ramping production the quarter before. Since then, support for Volta has been announced by Amazon, Microsoft, Google, Baidu, Alibaba and Tencent, and most recently Oracle. We will provide Volta for their in-house deep learning and their external public cloud services. We have also announced that Volta is supported by all the major server manufacturers in the world, and we are bringing the product to market: HP, Dell, IBM, Cisco, and China's Huawei, Inspur and Lenovo have all announced that they will build servers around the Volta GPU family.
So I think this ramp is just the first step in building support for GPU-accelerated services among data centers and cloud service providers around the world. These GPU servers address many markets. I've talked about the main target markets for Tesla GPUs before, and I often mention five segments.
The first is high-performance computing, a market worth about $11 billion. It is one of the fastest-growing segments of the IT industry, as more and more people use HPC to develop products, find insights, forecast markets, and more. Today we power 15% of the world's top 500 supercomputers. As I've said repeatedly — and I fully believe this, and it is becoming more and more real — every supercomputer in the future will be accelerated in some way. This is a very important growth opportunity for us.
The second is deep learning training, which is very similar to HPC: you need to compute at very large scale. You are performing trillions of operations, the models keep getting bigger, and the amount of data we train on rises every year. The difference between computing platforms can be the difference between building a $20 million training data center and a high-performance computing installation that costs as much as $200 million. So the money we save, and the performance and value we provide, are incredible.
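As a rough sketch of that economics, here is a back-of-the-envelope comparison. The $20 million and $200 million endpoints come from the discussion above; the per-node cost and the 10:1 node-replacement ratio are purely illustrative assumptions, not figures from the call.

```python
# Back-of-the-envelope training-cluster cost comparison.
# Hypothetical inputs: same per-node cost for both clusters, and one
# GPU node assumed to deliver the training throughput of ~10 CPU nodes.

def cluster_cost(node_cost_usd, nodes_needed):
    """Total hardware cost for a training cluster."""
    return node_cost_usd * nodes_needed

cpu_nodes = 10_000            # assumed CPU-only cluster size
gpu_nodes = cpu_nodes // 10   # assumed 10:1 replacement ratio

cpu_cluster = cluster_cost(20_000, cpu_nodes)  # -> $200M, CPU-only
gpu_cluster = cluster_cost(20_000, gpu_nodes)  # -> $20M, GPU-accelerated

print(f"CPU-only cluster:        ${cpu_cluster:,}")
print(f"GPU-accelerated cluster: ${gpu_cluster:,}")
print(f"Savings:                 ${cpu_cluster - gpu_cluster:,}")
```

Under these assumptions the GPU cluster comes in at one tenth the cost for the same training throughput, which is the order-of-magnitude gap the quote describes.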
The third segment, which you just mentioned, is inference. Once you've finished training a network, you have to deploy it in very large data centers to support the billions of queries consumers make on the Internet every day. For us, this is a completely new market: today essentially 100% of the world's inference runs on CPUs. We recently announced that with the TensorRT 3 inference platform, together with the Tensor Cores in our GPU architecture, we can accelerate these networks by up to a factor of 100.
Now look at it this way: imagine the amount of work you need to accomplish, and how much you could save if our platform let you do it 100 times faster.
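To make the 100x figure concrete, here is a hypothetical fleet-sizing exercise. The daily query volume and per-server CPU throughput are illustrative assumptions; only the 100x speedup factor comes from the discussion above.

```python
# Hypothetical inference fleet sizing. All inputs are illustrative
# assumptions except the ~100x speedup factor discussed above.

QUERIES_PER_DAY = 1_000_000_000   # assumed daily query volume
CPU_QPS_PER_SERVER = 100          # assumed CPU inference throughput
SPEEDUP = 100                     # the ~100x factor from the call

def servers_needed(queries_per_day, qps_per_server):
    """Servers required to sustain the average query rate."""
    qps = queries_per_day / 86_400          # seconds per day
    return -(-qps // qps_per_server)        # ceiling division

cpu_servers = servers_needed(QUERIES_PER_DAY, CPU_QPS_PER_SERVER)
gpu_servers = servers_needed(QUERIES_PER_DAY, CPU_QPS_PER_SERVER * SPEEDUP)

print(f"CPU-only servers: {int(cpu_servers)}")
print(f"GPU servers:      {int(gpu_servers)}")
```

Under these assumptions, a fleet of over a hundred CPU servers collapses to a handful of GPU servers, which is the "scale up traffic or cut costs, or both" trade-off described below.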
Another way of looking at it: as networks become larger and more complex, we know that every network on earth can run on one architecture, because they are already trained on our architecture today. Whether CNNs or RNNs, GANs or autoencoders, or variants of all of these, we can support whatever numerical precision you need and whatever size your network is. As a result, you can scale a very large data center to support more traffic, or significantly reduce costs, or both.
The fourth part of our data center business is making all of these capabilities — HPC, training, and inference, as I mentioned earlier — available on the public cloud. Thousands of start-ups are now being founded on artificial intelligence. Everyone recognizes the importance of this new computing model: thanks to this new tool, problems that could not be solved before can now be solved. So you see a great many start-ups springing up everywhere.
These companies are reluctant to spend their scarce capital building high-performance computing centers, or they are simply not able to build such platforms the way the big Internet companies can. For them, the cloud is a fantastic resource, rentable by the hour. So we developed a product for this, and as I mentioned, all the service providers have brought it to market. We created a registry in the cloud that containerizes these very complex software stacks. Each framework has different versions, different GPU acceleration layers and different optimization techniques supporting it, and we've packaged all of that, in every version, for every market.
We put it in a cloud registry called the NVIDIA GPU Cloud. All you have to do is download it to your cloud service provider, where we have certified it, and with one click you are up and running. Finally, consider the cloud service providers themselves. AI start-ups have already raised tens of billions of dollars of investment, and whether they build their own infrastructure or rent it in the cloud, a large percentage of that funding will eventually go to high-performance computing. So I think this is a multi-billion-dollar opportunity for us.
Finally, the vertical industries may be the biggest opportunity of all: car companies building supercomputers to prepare for autonomous cars, health-care companies using artificial intelligence for more accurate diagnostics, manufacturers doing line inspection, robotics companies, large logistics companies — all of them. And think about all the companies planning the delivery of products across massive delivery networks — Uber, Didi, Lyft, Amazon, DHL, UPS, FedEx. All of them face HPC problems that are now also deep learning problems.
These are very exciting opportunities for us, and the vertical industries are the last of the five. My point is that all of these segments are now addressable, because we've put the GPU in the cloud and all of our OEM partners are bringing these platforms to market. We can now address high-performance computing, deep learning training, and inference with one common platform. So we're excited about accelerated computing in the data center, and I think this is just the beginning.
Bernstein Research analyst Stacy Rasgon: I have a question about the seasonality of your gaming business in the fourth quarter. That business usually grows a bit in the fourth quarter — could you talk about which seasonal factors drive its fourth-quarter revenue and sequential growth? And a possibly related question: will Volta shipments in the fourth quarter exceed the third quarter's?
Jen-Hsun Huang: I'll take your second question first. I think we are very satisfied with the guidance we have provided. On Volta specifically, I can only say that shipments have just begun to grow, and as the market opportunity expands, they will continue to grow. So what I hope is that we keep growing. There is now evidence that the markets we serve — the markets where Volta is available — are very large. So we have reason to expect Volta to grow. Cloud service providers are either announcing that they have started offering Volta or announcing that they are about to. They are busy bringing Volta to the cloud because customers are clamoring for it. We have launched an OEM partner program; many OEM partners are sampling, while others are busy ramping Volta to market. I think the foundation of demand has been established.
The urgent need for accelerated computing is real, because Moore's Law is no longer scaling. There is demand in the market, and all the preparations for bringing Volta to market are in place. As for the gaming business — what ultimately drives its growth? Remember, every product in our gaming business sells to millions of people, and many factors drive the growth. As you know, eSports is incredibly powerful. What makes eSports unique is that people want to win, so they want better equipment. They expect very low latency — higher performance drives latency down — and they want to react as quickly as possible. People want to win, and they want to be sure their equipment doesn't keep them from winning.
The second growth factor is the quality of the content. Look at Call of Duty, Destiny 2, or PUBG — they look amazing. AAA content looks amazing. The nature of video games is that you buy the best equipment you can to get the highest quality out of the content. That is very different from streaming video, very different from watching a streaming movie — that's the essence of it. When AAA content comes out later this year, it helps drive the platform's popularity. Finally, the social aspect is increasingly an important part of gaming's growth momentum: people are beginning to realize how beautiful these games are.
I am optimistic about the fourth quarter; it looks like it will be a very good quarter.
Evercore analyst CJ Muse: In the near term you've talked about healthy demand for Volta — do you see any supply constraints, such as chips or high-bandwidth memory? Longer term, you've said CUDA is a sustainable competitive advantage. Now that you've moved beyond high-performance computing and large-scale training, deeper into inference and GPU-as-a-service, and with GTC coming up, I'm curious: can you talk about how you view this advantage, how it has evolved over the past year, and how you see CUDA becoming a standard for artificial intelligence?
Jen-Hsun Huang: Everything we build is complicated. Volta is the largest processor mankind has ever built — 21 billion transistors, 3D packaging, the fastest memory on the planet — all within a few hundred watts, which makes it essentially the most energy-efficient computing we know. One Volta replaces hundreds of CPUs, so it is energy-efficient, it saves a great deal of cost, and it is very fast — which is one reason GPU acceleration is so popular. As you know, we are a single-architecture company, and everyone knows the future of our architecture. This is important because there is so much software built on it.
For training, we have a complete stack of software, optimizing compilers and numerical libraries, all thoroughly optimized for one architecture: CUDA. For inference, the optimizing compiler has to take large computational graphs from all of these frameworks — the graphs keep getting bigger, the numerical precision differs by network type, and it differs by application. Autonomous vehicles demand high precision: whether you detect a person crossing the road, whether you count something correctly or lose track of it, is a matter of human life. Handling a variety of different weather conditions is a very different problem again.
As a result, the network types keep changing, the networks keep getting bigger, and the numerical precision required differs by application. We have different kinds of computation at different levels of performance and energy efficiency, and these inference compilers may be among the most complex software in the world. But we have a single architecture to optimize for — whether it's HPC numerics, molecular dynamics, computational chemistry, biology or astrophysics, along with all of the training and inference methods — and that has a huge effect on us. It is how NVIDIA, an 11,000-person company, can operate at ten times that scale. The reason is that we have one architecture, and the benefits of that compound over time; with more architectures, your software organization gets divided into smaller and smaller pieces. So this is a huge advantage for us, and a huge advantage for the entire industry.
So those who build on the CUDA architecture know they benefit from each next-generation architecture and can carry their work forward; technological advances keep rewarding them. Frankly, I think this benefit compounds over time. I am very excited about this.
Bank of America analyst Vivek Arya: Over the past few months we've seen announcements from Intel, Xilinx and others describing other ways to address the AI market. My question is: in the end, is it GPU, FPGA, or ASIC? How do customers decide? What competitive advantage can be sustained for the long term? And when they think about the inference part of the problem, will your position in the training market affect their decisions?
Jen-Hsun Huang: First, we have a single architecture. People know our commitment to the GPU, to CUDA, to all of the software that runs on our GPUs — every one of our 500 applications, every numerical solver, every CUDA compiler, every toolchain, on every operating system and every computing platform. We are fully committed to all of it, and we will support this software for as long as we live. The return on a customer's CUDA investment keeps increasing. You wouldn't believe how many people have told me they upgraded their GPU and, with no effort at all, their computation ran twice as fast — incredible value for the customer. The fact that we are steadfastly focused on this one architecture gives everyone confidence in us: they know that as long as we are around, we will support it. That is the benefit of a single-architecture strategy.
If you have four or five different architectures to support and you ask your customers to pick their favorite, you are really telling them that you are not sure which one is best. Everyone knows that no one can support five architectures forever, so some must eventually be discarded — and if a customer picks the wrong one, that is very unfortunate. With five architectures, over time, four will turn out to be the wrong choice. So I think our strength lies in our singular focus. As for FPGAs, they have their place — we at NVIDIA use FPGAs for prototyping ourselves. The FPGA is a way of designing chips: a flexible substrate that can become any kind of chip. That is its strength.
Our strength is that we offer a programming environment, and writing software is much easier than designing a chip. Within our areas of focus — we are not focused on network packet processing; we are intensely focused on deep learning and on high-performance parallel numerical computing — our platform is very hard to beat. That is how you should think about the question, and I hope that helps.
Citi analyst Atif Malik: Kress, on the last earnings call you said revenue from crypto-specific products in the OEM line was $150 million. What was that revenue in the October quarter, and what do you expect next quarter? And longer term, why won't crypto products affect gaming demand?
Kress: In our earnings report, within the OEM line, crypto-specific products contributed $70 million of revenue, versus $150 million the previous quarter.
Jen-Hsun Huang: Longer term, the crypto business is small for us, but not zero. I believe the crypto business will be around for some time, as it is now. New cryptocurrencies will keep emerging, existing ones will appreciate, and the need to mine new currencies will keep coming up. So for a while, I think, the crypto business will remain a small business — but not zero. And when you think about cryptocurrency in the context of our company, remember that we are the largest GPU computing company in the world.
Our GPU business is very large, and it has many parts. There is the data center — I've talked about its five segments. There are many other parts — rendering, computational design, broadcast, workstations, laptops, data centers — each a completely different application. And of course, you know we have high-performance computing, autonomous machines, autonomous vehicles and robots.
And of course we have the gaming business. Each of these parts is large in its own right, and each is growing. So my feeling is that although the crypto business will persist for some time, it will stay small — but not zero.
Morgan Stanley analyst Joe Moore: You just mentioned that crypto mining has shifted onto traditional gaming cards. What is driving that? Is it the lack of dedicated crypto products, or simply a market preference for gaming-oriented solutions?
Jen-Hsun Huang: When the market for a cryptocurrency becomes very large, it tempts someone to build a custom ASIC for it. Bitcoin is a good example — Bitcoin is very easy to put on a dedicated chip. But what happens next is that a few players start to monopolize the market. That crowds everyone else out of mining and encourages the development of new currencies. And the only way a new currency can attract miners is to stay hard to monopolize — you want lots of people to be able to mine it.
That makes the CUDA GPU the ideal platform for a new digital currency, because there are hundreds of millions of NVIDIA GPUs in the market today. If you create a new cryptocurrency algorithm, optimizing it for the GPU is ideal: it is hard to build an ASIC for, it requires a lot of computation, and there are already plenty of GPUs out there. It is an open platform, so the barrier to entry for people who want to start mining is very low.
This is the cryptocurrency cycle, and it is why I say crypto mining will use GPUs, and why the GPU crypto business will stay small, but not zero, for some time. It stays small because once a currency gets big enough, someone builds a custom ASIC for it. But once custom ASICs exist, new cryptocurrencies emerge. It is a repeating cycle.
B. Riley analyst Craig Ellis: Data center revenue has reached a $2 billion annual run rate — a huge milestone. Looking back over the past five years, I've seen no precedent for what you now have with server partners, OEM partners, and the hyperscale partners deploying it. So my question is: relative to the roughly annual doubling of the past two years, what does this expansion of partners mean for data center growth? And on the gaming side, what about the newly announced 1070 Ti and Titan Xp?
Jen-Hsun Huang: We have never before created a product with such broad industry support and nine consecutive quarters of growth, doubling year over year, with cooperation on this scale. I think there are several reasons. The first is the fact that CPU scaling has ended. That is the laws of physics — the end of Moore's Law is simply physics. Yet the world of software development, and of computers helping to solve problems, is growing faster than ever.
No one has ever seen planning problems at the scale of Amazon's. No one has ever seen a planning problem at the scale of Didi's — millions of taxi rides a week, and a staggering number of orders. No one has seen problems at this scale before, so GPU-accelerated high-performance computing has been identified as the direction of the future. I think that is the most important factor.
Second is the emergence of artificial intelligence, and the applications developed to solve problems we couldn't even consider before. Now we can solve previously unsolvable problems. My point is that this is happening in every industry we know — internet services, healthcare, manufacturing, shipping, logistics, financial services. AI is a great tool, and deep learning is a great tool, for solving problems the world has never been able to solve before.
I think our commitment to this architecture for high-performance computing, our seven-year head start on the industry in deep learning, and our early recognition of the importance of this new computing method — which is inherently well suited to our skills — created the perfect conditions for our architecture. The effect has been incredible.
So I think this is definitely the most successful product line in our company's history.
Raymond James analyst Chris Caso: I have a question about the automotive market and its prospects. With rapid growth elsewhere, automotive's share of revenue is now declining, though the design-win picture seems very positive. When automotive revenue grows, will its share of total revenue return to previous levels, or grow even faster? And if these design wins materialize, will that happen next year, or do we have to wait a few years?
Jen-Hsun Huang: As you know, we have deliberately deemphasized infotainment, though it is still the main source of our automotive revenue. We might assign a few hundred engineers to infotainment, including the processor we are developing, and another 2,000 to 3,000 engineers to autonomous machines and the AI platform. I believe everything that moves will someday be autonomous — a bus, a truck, a shuttle, a car, or a small robot that hauls goods around a warehouse or delivers your pizza. We think this is a huge challenge and a huge computational problem, and we decided to focus on it.
Look at our DRIVE PX platform today: more than 200 companies are developing on it, including 125 start-ups — map companies, OEMs, shuttle companies, car companies, truck companies, taxi companies and more. Last quarter we announced the expansion of the DRIVE PX platform with DRIVE PX Pegasus, the world's first automotive-grade platform for fully autonomous driving. So I think our position is very good, and the investment we have made has proven to be one of our best. In terms of revenue, my expectation is that in the coming year we will see revenue from the supercomputers customers must buy to train their networks, simulate the driving of all these autonomous cars, and develop their autonomous vehicles.
We will see a fair number of development systems sold next year. Toward the end of next year, I think you'll start to see robotaxis appear, and we'll make thousands of dollars on every robotaxi. Then, by late 2020 or 2021, you will begin to see the first fully autonomous cars — what people call Level 4. That is my view: next year, simulation environments, development systems and supercomputers; then robotaxis; and a year or two after that, autonomous vehicles.
Canaccord Genuity analyst Matt Ramsey: I remember, about three or three and a half years ago, you said on analyst day that gross margin was about 50%, including the payments from Intel. You now have roughly 60% gross margin, excluding the Intel payments. Could you talk about how the data center business and other parts of the portfolio drive gross margin growth? How will you reduce product costs, and what impact will that have on gross margin next year as you increase investment in the gaming business?
Kress: Yes, we have been raising gross margin for years, but that is really the evolution of the whole model. It reflects the value-added platforms we sell, the ecosystem work we do, and the software we build across many platforms. The data center is one of those; our professional visualization business is another. And if you think about all the work we do in gaming and the expansion of that whole ecosystem, you understand why. As a result, our gross margin has been on the rise. Each quarter the mix shifts — different products combine differently, and some have a bit of seasonality.
Depending on when certain platforms come to market, the product mix within these subsets shifts. This will remain a focus for us, and we will do everything we can to keep raising gross margin. As you can see from our guidance for the fourth quarter — guidance we are very satisfied with — we expect gross margin to rise again.
Jen-Hsun Huang: There are several ways to think about it. First and foremost, I am very proud of our VLSI technology team; they have made us ready for these new process nodes, which is very important to us. Our technology team is world-class, and nothing matters more. Then, once we ramp production, we get the benefits of volume: as production increases, costs come down.
But that's not the point. In the final analysis, what we are really focused on is the software stack that keeps improving on top of our processors — because each of our processors is connected to a large amount of memory, to systems, to networks, to the entire data center. For most of our data center products, if we can increase a data center's throughput by 50%, or boost performance by a factor of 2 to 4, think about what that means applied to billions of dollars of data centers: more than double the productivity.
All of the work we do on CUDA, and the incredible work we do on optimizing compilers and graph analytics, translates into customer value measured not in millions of dollars but in billions. That is the leverage of accelerated computing.
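The leverage argument above reduces to simple arithmetic. In this sketch, the data-center cost is a hypothetical input; the 50% throughput gain is the software-driven improvement mentioned in the preceding answer.

```python
# Rough arithmetic behind the "billions of dollars of leverage" point.
# The data-center cost is a hypothetical illustration; the 50% gain is
# the software-driven throughput improvement discussed above.

def value_of_throughput_gain(datacenter_cost_usd, throughput_gain):
    """Extra effective capacity, priced at the data center's cost.

    A 50% throughput gain from software means the same hardware does
    the work of 1.5 data centers; the difference is 'free' capacity.
    """
    return datacenter_cost_usd * throughput_gain

# Assumption: a $1B data center and the 50% gain from the call.
extra_value = value_of_throughput_gain(1_000_000_000, 0.50)
print(f"Equivalent capacity unlocked: ${extra_value:,.0f}")
```

Even at this simplistic level, a software improvement applied to a billion-dollar installation is worth hundreds of millions of dollars, which is the sense in which the value is "measured in billions" across the industry's fleet.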
Rosenblatt analyst Hans Mosesmann: Please share your views on this week's news about Intel — their plans to enter the graphics market, and their relationship with AMD.
Jen-Hsun Huang: There was a lot of news this week, and there are a few things I can say for sure. First, Raj's departure is certainly a big loss for AMD. And Intel may now realize that the GPU is very, very important. Modern GPUs are not simply graphics accelerators; they are domain-specific parallel accelerators, and they are extremely complex — the most complex processors built in the world today. That is why IBM uses our processors in the world's largest supercomputers, and why every major cloud and every major server manufacturer uses NVIDIA GPUs. This is hard to do, and the amount of software engineering involved matters just as much.
Look at the way we work: we plan five years out. It takes three years to build a new generation, and we have multiple GPUs in development at once. Most importantly, we have more than 5,000 engineers working on system software, numerical libraries, compilers, graph analytics, cloud platforms and virtualization stacks to make this computing architecture useful to everyone we serve. When you look at it from that perspective, you see what a huge undertaking it is — arguably the most important processor in the world today. That is why we can speed applications up by a factor of 100. It would be unthinkable without the kind of innovation we have done.
And finally, about the chip they developed together. Needless to say, the energy efficiency of Pascal GeForce, our Max-Q design technology, and all the software we developed have set a new design point for the industry. We can now build a state-of-the-art gaming laptop with a state-of-the-art GeForce processor and deliver a better-than-console 4K gaming experience, all in an 18mm-thick laptop. The combination of Pascal and Max-Q really raised the bar — that, I think, is the essence of it.