Oracle Cloud Is The First Major Cloud Provider To Make NVIDIA A100 Available

Oracle announced that it will be the first major cloud provider to make the NVIDIA A100 Tensor Core GPU generally available. Oracle's latest GPU instances allow customers across the automotive and aerospace industries to run complex, data-intensive, high-performance applications such as modeling and simulation more efficiently and at a lower cost than ever before.

During the Oracle Live launch event, Nvidia founder and Chief Executive Officer Jensen Huang explained that the new A100 GPU is the result of more than ten years of work. It is designed to use deep learning to accelerate the training of AI and ML models. Huang described deep learning as "The Big Bang of AI today," crediting it with propelling Nvidia's growth: while the data-center business has doubled every year, that growth has quadrupled where software trained with AI can be accelerated by GPUs.

“Our customers demand the best of on-premises with all the benefits of the cloud, which is what we’re delivering with our latest GPU instance running on NVIDIA’s latest A100 GPU,” said Karan Batta, Vice President of Oracle Cloud Infrastructure. “We have the largest, most performant, and most cost effective A100 offering in the cloud because we offer double the memory and more local storage than competitors. This is the GPU instance customers have been waiting for to move to the cloud and deliver important breakthroughs.”

In addition to the bare metal instance, organizations will be able to deploy one, two, or four GPUs per virtual machine in the coming months. Customers will also have access to existing toolsets, such as pre-configured, GPU-optimized Data Science VMs, and will be able to run HPC or deep learning containers from NVIDIA NGC, a hub of cloud-native, GPU-optimized containers, models, and industry-specific SDKs.
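For teams trying one of these instances, the short sketch below shows what a first sanity check of that workflow might look like. It assumes an NGC container (or any CUDA-enabled PyTorch build) is already running on the instance; it lists the GPUs PyTorch can see and times a large matrix multiply on the first one. This is an illustrative example, not an Oracle- or NVIDIA-documented procedure.

    import time
    import torch  # assumes a CUDA-enabled PyTorch build, e.g. from an NGC container

    # List the GPUs exposed to the instance. On the bare metal shape this
    # would report eight A100s; on the smaller VM shapes one, two, or four.
    assert torch.cuda.is_available(), "No CUDA-capable GPU visible to PyTorch"
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))

    # Quick sanity check: time a large matrix multiply on the first GPU.
    x = torch.randn(8192, 8192, device="cuda")
    torch.cuda.synchronize()
    start = time.time()
    y = x @ x
    torch.cuda.synchronize()
    print(f"8192x8192 matmul took {time.time() - start:.3f} s")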

Many business customers have told Huang that they have massive demand for increasingly larger models. To meet those escalating needs, Oracle will optimize for the new Nvidia A100, with OCI offering upgraded RAM, storage, and cluster networking.

This comes as Nvidia and Cineca announced plans for the Leonardo supercomputer, which will be the "world's fastest AI supercomputer" when it goes online in 2021. At the heart of Leonardo are "nearly 14,000" Nvidia A100 GPUs, with four GPUs per node. Each node will pair those GPUs with a single Intel Sapphire Rapids CPU, which isn't slated to start shipping until next year.