NVIDIA (NASDAQ:NVDA) recently reported powerful fiscal first-quarter 2019 results. The graphics processing unit (GPU) specialist’s revenue jumped 66%, GAAP earnings per share soared 151%, and adjusted EPS surged 141%.
The earnings call offered a wealth of information about the company’s results and future prospects. Our focus here is on NVIDIA’s data center business, which is growing like gangbusters: its revenue grew 71% year over year to $701 million in the quarter, accounting for 22% of the company’s total revenue.
Here’s what you should know from NVIDIA’s Q1 call.
Data center’s total addressable market estimated at $50 billion by 2023
From CFO Colette Kress’ remarks:
We see the data center opportunity as very large, fueled by growing demand for accelerated computing and applications ranging from AI [artificial intelligence] to high-performance computing across multiple market segments and vertical industries. We estimate the TAM at $50 billion by 2023, which extends our previous forecast of $30 billion by 2020.
NVIDIA views its TAM as composed of three main segments: deep learning training, deep learning inferencing, and high-performance computing (HPC). Deep learning is a category of artificial intelligence (AI) that aims to mimic in machines how humans draw inferences from data. It consists of two steps: training and inferencing, with the latter involving a machine applying what it’s learned to new data.
Data center revenue was $701 million in the first quarter, which means the platform’s annual run rate is about $2.8 billion. So NVIDIA’s estimated TAM of $50 billion by 2023 means the company views the platform as having the potential to grow up to about 18 times — or by nearly 1700% — in the next five years. This translates to a compound annual growth rate (CAGR) of more than 70%.
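The article’s growth math is easy to verify. Here’s a minimal back-of-the-envelope sketch, assuming a straight annualization of one quarter and a five-year horizon to 2023; the input figures come from the article, not from any model of NVIDIA’s actual trajectory:

```python
# Back-of-the-envelope check of the data center growth math.
# Inputs are the figures quoted in the article (in $ billions).
quarterly_revenue = 0.701   # fiscal Q1 data center revenue
tam_2023 = 50.0             # NVIDIA's estimated TAM by 2023
years = 5                   # assumed horizon to 2023

annual_run_rate = quarterly_revenue * 4           # ~$2.8 billion
growth_multiple = tam_2023 / annual_run_rate      # ~18x
percent_growth = (growth_multiple - 1) * 100      # ~1,680%
cagr = growth_multiple ** (1 / years) - 1         # implied compound annual growth rate

print(f"Annual run rate: ${annual_run_rate:.1f}B")
print(f"Growth multiple: {growth_multiple:.1f}x ({percent_growth:.0f}%)")
print(f"Implied CAGR: {cagr:.0%}")
```

Run as written, this puts the growth multiple near 17.8x and the implied CAGR in the high 70s of percent per year, consistent with the article’s “more than 70%” figure.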
Of course, it’s not likely that NVIDIA will capture 100% of its TAM, but even capturing a good chunk of it would represent very robust growth. And NVIDIA is well positioned to capture a large portion of the TAM, as it’s experiencing strong adoption of its GPU-based accelerated computing platform. This is evidenced not just by its current financial results, but also by the fact that its total number of developers is well over 850,000, up 72% from last year.
Traction is increasing in the data center AI inference market
From Kress’ remarks:
Inference GPU shipments to cloud service providers more than doubled from last quarter, and our pipeline is growing into next quarter. We dramatically increased our inference capabilities with the announcement of the TensorRT 4 AI inference accelerator software at our recent GPU Technology Conference in San Jose.
NVIDIA’s GPUs have already achieved the leadership position in accelerating deep learning training, but the company only recently began making inroads into data center inferencing, which is dominated by central processing units (CPUs). As recently as the second quarter of fiscal 2018, NVIDIA didn’t generate any revenue from inferencing.
NVIDIA touts that its recently launched TensorRT 4 AI inference software “dramatically” expands the use cases compared with the prior version, and accelerates inferencing up to 190 times faster than CPUs for common applications such as computer vision, neural machine translation, automatic speech recognition, speech synthesis, and image recognition.
NVIDIA is “far ahead of the competition”
From CEO Jensen Huang’s response to a question regarding data center chip competition, particularly from Alphabet’s (NASDAQ:GOOG)(NASDAQ:GOOGL) Google unit:
CPU scaling has slowed. The world needs another approach going forward. [B]ecause of our focus on it, we find ourselves in a great position. Google announced TPU [tensor processing unit] 3 and it’s still behind our Tensor Core GPU. Our Volta is our first generation of a newly reinvented approach of doing GPUs. It’s called Tensor Core GPUs. We’re far ahead of the competition. … Not only is it faster, it’s also more flexible.
As background, Google announced earlier this month at its annual I/O developer conference that it will release a third generation of its tensor processing unit AI chip, which it touts will be eight times faster than its TPU 2 released last year. In addition to using these chips for internal purposes, it plans to make them available for others to use via Google Cloud. The company’s TPU 2 was a huge leap forward because it can handle both deep learning training and inferencing, whereas the first generation of the chip could only perform inferencing.
In short, while investors do need to watch the competition, particularly Google, NVIDIA’s data center business appears poised to continue its torrid growth for some time. Put this together with a booming gaming business and an auto platform that’s laying the groundwork to profit big once driverless vehicles become road-legal, and NVIDIA’s future looks very bright.