Data centers form the backbone of our digital society worldwide, and with it of the emerging AI economy. Their dramatically increasing energy requirements pose new challenges in terms of energy efficiency. The introduction of an 800 V power supply architecture and the use of wide bandgap power semiconductors are crucial in this regard.
Around 11,000 data centers worldwide currently process, store and network huge amounts of data. According to the International Energy Agency (IEA), 2 per cent of the world’s total electricity demand, or 460 TWh, is currently used to power data centers. By way of comparison, this corresponds to the consumption of around 153 million households. And that’s not even taking AI into account. Factor it in, and the global data volume could reach 291 zettabytes by 2027.
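The IEA comparison can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below assumes a typical household consumption of roughly 3,000 kWh per year, which is what the quoted figures imply; that assumption is not stated in the article itself.

```python
# Sanity check of the IEA comparison: 460 TWh spread over 153 million
# households implies roughly 3,000 kWh per household per year, which is
# in line with a typical (European) household's annual consumption.
data_center_demand_twh = 460      # global data center demand (IEA)
households_millions = 153         # households quoted for comparison

# 1 TWh = 1e9 kWh
kwh_per_household = data_center_demand_twh * 1e9 / (households_millions * 1e6)
print(f"{kwh_per_household:.0f} kWh per household per year")  # ≈ 3007
```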
AI is a new player on the field, and its energy appetite is enormous. The demand for AI is growing ‘faster than the existing infrastructure can handle,’ said Sam Altman, CEO of OpenAI, at this year’s AMD Advancing AI conference. Specifically, this means that the global energy demand for data centers could reach an estimated 1,000 TWh as early as next year. And the demand spiral continues to turn: some estimates predict that by 2030, the energy demand for data centers in the US alone will reach 1,000 TWh.
Will that be enough? Looking at the Jevons paradox, one might conclude that it probably won’t. The paradox states that an increase in the efficiency of systems leads to a general increase in usage. In other words, the more successful AI is, the greater the additional demand and the higher the energy requirements. This makes forecasts that around 8 per cent of the world’s total electricity demand will flow into data centers by 2030 seem entirely realistic.
Why do data centers actually need such vast amounts of energy? According to experts, up to 40 per cent is used for cooling alone. In addition, around 17 per cent of the energy in an average data center is lost in the various energy conversion steps. This represents a significant opportunity for innovative energy solutions that reduce waste heat and minimise cooling requirements. This is where power semiconductors come into play, especially components from the wide bandgap world such as SiC and GaN. These components significantly increase energy efficiency and optimise every step of energy conversion, from the power grid to the processors. This approach not only reduces operating costs but also promotes the sustainability of such solutions. Infineon Technologies has run through this scenario and concluded that the use of highly efficient power semiconductors could help data center operators almost halve the losses in their power supply network, from the current 17 per cent to around 9 per cent.
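To put the quoted loss reduction into perspective, the following sketch works through the numbers for a hypothetical 100 MW facility. The facility size is an assumption for illustration; only the 17 per cent and 9 per cent loss figures come from the article.

```python
# Hypothetical worked example: cutting conversion losses from 17% to 9%
# of input power at an assumed 100 MW facility frees up 8 MW of capacity,
# i.e. roughly 70 GWh per year of electricity no longer lost as heat.
facility_mw = 100      # assumed facility input power (illustrative)
loss_today = 0.17      # share lost in conversion today (per the article)
loss_future = 0.09     # share lost with highly efficient semiconductors

saved_mw = facility_mw * (loss_today - loss_future)
saved_gwh_per_year = saved_mw * 8760 / 1000   # 8760 hours per year
print(f"{saved_mw:.1f} MW saved, about {saved_gwh_per_year:.0f} GWh/year")
```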
The magic word in this context is the 800 V power supply architecture for AI data centers. This type of high-voltage direct current (HVDC) distribution, which some may already be familiar with from e-mobility, ensures a more reliable and significantly more efficient power supply for AI server racks than is currently possible.
At the blade level, this means a very rapid evolution from power supplies that once provided 2.7 kW to solutions that provide 12 kW, and this is likely to be only an intermediate step towards even higher-performance solutions. High-voltage DC distribution enables power conversion directly at the AI chip, the graphics processing unit (GPU) within the server.
AI data centers already contain over 100,000 GPUs, further evidence of the need for a more efficient energy supply. Experts estimate that AI data centers will require 1 MW or more of power per IT rack by 2030. In conjunction with high-power-density multi-path solutions, the HVDC architecture will set the new standard in the industry.
Against this backdrop, Infineon Technologies and Nvidia announced at the end of May this year that they would be collaborating on the development of the industry’s first 800 V power supply architecture for AI data centers. ‘The new 800 V HVDC system architecture provides highly reliable and energy-efficient power distribution throughout the data center,’ said Gabriele Gorla, Vice President of System Engineering at Nvidia. ‘This innovative approach allows us to optimise the energy consumption of our advanced AI infrastructure, supporting our commitment to sustainability while delivering the performance and scalability required for the next generation of AI workloads.’
Currently, power supply in AI data centers is still decentralised. AI chips are therefore powered by a large number of power supplies. The 800 V HVDC system architecture, on the other hand, will be centralised in order to make optimum use of the limited space in the server rack. This approach will further increase the importance of state-of-the-art power semiconductor solutions such as SiC and GaN in order to achieve even higher distribution voltages with as few power conversion stages as possible. Infineon currently assumes that the proportion of power semiconductors in these centralised HVDC architectures will be similar to or even higher than in the AC distribution architecture used today.
Infineon Technologies and Delta Electronics are jointly developing silicon MOSFET-based power modules with a vertical power supply for AI processors in hyperscale data centers (Image: Infineon Technologies).
In addition to Infineon Technologies, Nvidia is now also working with Texas Instruments and STMicroelectronics in the semiconductor sector to implement this project. In the field of power supply components, the partners are Delta Electronics and Flex Power. Eaton, Schneider Electric and Vertiv are the partners for the power supply systems in the data centers. Together with these selected partners, Nvidia plans to put the Kyber Rack Scale System into production in 2027, which is expected to deliver up to 15 exaflops of FP4 inference performance.
The latest collaboration between Infineon Technologies and Delta Electronics shows how existing silicon-based power semiconductors can be used to improve the efficiency of power supplies in AI data centers. Delta uses Infineon’s integrated 90 A OptiMOS-based power stages to implement vertical power delivery (VPD) modules. These VPD modules enable a more direct and compact power supply path. The use of these VPDs could save up to 150 tonnes of CO2 per rack over an expected lifetime of three years. Assuming that future hyperscale data centers will consist of up to 100 server racks, the CO2 emissions saved would be equivalent to those of around 4,000 households per year.
Finally, the question arises as to what significance the new power electronics solutions could have for power semiconductor specialists in the future. Could they compensate for an e-mobility boom that has so far fallen far short of expectations? In the short term, this is likely to be wishful thinking. Dr Stefan Hain, Senior Director, Power Electronic Systems at onsemi, assumes that ‘the demand from the new AI server business for SiC and GaN in 2030 is likely to be around a tenth of what the industry is supplying to e-mobility at that point in time.’
‘Our application and system expertise in AI power supply from the grid to the core, combined with Nvidia’s world-leading expertise in accelerated computing, paves the way for a new standard in power supply architecture for AI data centers.’