Microsoft Corp. (MSFT.O) and U.S. chip designer and computing company Nvidia Corp. (NVDA.O) announced on Wednesday that they are collaborating to build a ‘massive’ computer to handle intensive artificial intelligence computing work in the cloud.
The AI computer will run on Microsoft’s Azure cloud and use tens of thousands of graphics processing units (GPUs), including Nvidia’s most potent H100 and A100 chips. Nvidia declined to disclose the deal’s value, but industry sources said each A100 chip costs between $10,000 and $12,000, and the H100 is significantly more expensive.
‘We’re at that inflection point where AI is coming to the enterprise, and getting those services out there that customers can use to deploy AI for business use cases is becoming real,’ said Ian Buck, general manager of Nvidia’s Hyperscale and HPC division. Companies are adopting AI widely, he added, and the need to apply it to business use cases is growing.
Nvidia said that, in addition to supplying the processors, it will collaborate with Microsoft to develop AI models. Buck said Nvidia would also be a customer of Microsoft’s AI cloud computer, building AI applications on it to offer services to its own customers.
The rapid growth of AI models, such as those used for natural language processing, has sharply increased demand for faster, more powerful computing infrastructure.
Nvidia said Azure would be the first public cloud to deploy its Quantum-2 InfiniBand networking technology, which links servers at 400 gigabits per second. That speed matters because intensive AI computing tasks require hundreds of chips to work together across many servers.