
Nvidia announced on Thursday that it has provided Grace Blackwell GPUs to cloud infrastructure company CoreWeave to accelerate the development of next-generation artificial intelligence (AI).
An Nvidia spokesperson said CoreWeave is the first cloud infrastructure provider to offer the Nvidia GB200 NVL72 system at large scale. Leading AI companies such as Cohere, IBM, and Mistral AI are using the system to train and deploy next-generation AI models.
CoreWeave plans to make thousands of Blackwell GPUs available to its customers. The company, which pivoted from cryptocurrency mining to GPU cloud computing, went public on the Nasdaq in March with a market capitalization of about $25 billion.
The GB200 NVL72 system links 72 Blackwell GPUs and 36 Grace CPUs in a single rack via rack-scale NVLink, while Quantum-2 InfiniBand networking allows deployments to scale to as many as 110,000 GPUs.
Nvidia reports that Cohere has seen up to a threefold performance improvement when training a 100-billion-parameter model on the GB200 NVL72 compared with previous-generation Hopper GPUs.
IBM is using the GB200 NVL72 to train its next-generation Granite family of open-source enterprise AI models. Granite underpins solutions such as IBM watsonx Orchestrate, which lets businesses build and deploy high-performance AI agents.