The latest CPUs and GPUs will consume more power than ever to support artificial intelligence (AI) and other advanced applications. How is this compatible with data centers’ energy efficiency imperatives? By Graeme Burton.
There’s a paradox at the heart of the current rush to roll out resources to support AI: on the one hand, the compute-intensive AI applications coming down the line will require radically more powerful GPUs, drawing far more kilowatts, in order to run efficiently; on the other, data centers face ever-tighter energy efficiency imperatives. Organizations public and private, along with cloud providers and data center operators, are therefore turning to high performance computing (HPC).
The article then considers:
- Power efficiency
- In or out? The case for HPC in the cloud
- The power of two
AWS and Nvidia have collaborated on software and toolkits that help HPC users build skills across the span of their workflows. “Nvidia has developed SDKs that help users tackle the challenges of deploying HPC workloads across GPU technologies, optimizing applications to run on the Nvidia platform,” notes Hyperion Research. Interesting read!
[Read More]