AI Infrastructure · Grid-to-Core Power · Vendor Landscape

The AI Power Delivery Stack: How Modern Accelerators Are Really Powered, and What Each Vendor Actually Does

The hardest scaling problem in AI is no longer just compute or memory. It is power delivery. As racks move toward accelerator-heavy designs, the system has to deliver far more power at higher efficiency, with tighter transient control, lower copper loss, and much better thermal behavior. This post walks from the grid to the chip and explains how Vicor, MPS, TI, ADI, Renesas, Infineon, Delta, Flex, Murata, Bel, Navitas, and Wolfspeed fit into that stack.

Long-form technical essay with original diagrams and a layer-by-layer vendor map.

1. Why AI power delivery changed so dramatically

Traditional cloud servers were demanding, but relatively tame compared with modern accelerator racks. AI changed the shape of the problem.

A modern accelerator board may need to feed multiple high-power GPUs or ASICs, very large memory systems, fast fabrics, and aggressive cooling. The absolute power is higher, but more importantly, the current density, transient behavior, thermal concentration, and voltage-stability demands are all much harsher than in older server designs.
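To make "transient behavior" concrete: when the load current steps faster than the regulator loop can respond, local capacitance has to hold the rail up. A back-of-envelope sizing sketch, using the idealized relation C = I·t/ΔV; all numbers are illustrative assumptions, not from any vendor datasheet:

```python
# Rough sizing of the local decoupling needed to ride through a load step.
# Idealized: ignores ESR/ESL and assumes the regulator supplies nothing
# until its loop catches up.

def bulk_capacitance_F(delta_i_A: float, hold_time_s: float, droop_V: float) -> float:
    """Capacitance needed to supply a current step delta_i_A for hold_time_s
    while the rail droops no more than droop_V (C = I * t / dV)."""
    return delta_i_A * hold_time_s / droop_V

# Hypothetical accelerator rail: 500 A load step, 5 us until the regulator
# responds, 30 mV allowed droop on a sub-1 V core rail.
c_needed = bulk_capacitance_F(500, 5e-6, 30e-3)
print(f"required capacitance ≈ {c_needed * 1e3:.1f} mF")
```

Even with generous assumptions, the answer lands in the tens of millifarads, which is why the physical distance between regulation and silicon matters so much: capacitance of that scale cannot sit far from the package.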

In older systems, power delivery was a support function. In AI systems, it is becoming an architectural limit.
Old assumption: if the rack has enough watts, the compute can be powered.
New reality: where voltage is converted, where current flows, and how close regulation sits to the silicon now all materially affect feasibility.

2. The real stack: from grid to chip

Power delivery is not one market. It is a chain. The cleanest mental model is:

Diagram A · Grid-to-core power chain
Grid (utility feed) → Facility (UPS / switchgear / conditioning) → Rack / shelf (PSU / shelf power, 48V or HVDC) → Board (intermediate bus conversion) → Near-load (POL / modules close to the package) → Package / die (substrate / PDN rails into the silicon)
The important point is that different vendors dominate different layers. Some live at facility and rack scale. Some focus on board conversion and control. Some specialize in the brutal last mile near the chip.
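One consequence of the chain view is that end-to-end efficiency is the product of every stage's efficiency, so losses compound. A quick sketch; the per-stage numbers below are illustrative assumptions, not measured values:

```python
# End-to-end efficiency of the grid-to-core chain is the product of the
# per-stage efficiencies. Stage values here are plausible round numbers.
from math import prod

stages = {
    "facility (UPS / conditioning)": 0.98,
    "rack PSU (AC -> 48V)":          0.97,
    "board IBC (48V -> bus)":        0.97,
    "near-load POL (-> core V)":     0.93,
}

eta = prod(stages.values())
print(f"grid-to-core efficiency ≈ {eta:.1%}")

# For 1 MW of silicon load, everything above eta is heat somewhere in the chain:
waste_kW = 1_000 * (1 / eta - 1)
print(f"extra power drawn per MW of silicon load ≈ {waste_kW:.0f} kW")
```

Even with every stage in the high 90s, the chain lands in the mid-80s overall, which is why vendors fight over single points of efficiency at each layer.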

Once you see the stack this way, the vendor landscape becomes much easier to understand. Delta and Flex matter heavily at infrastructure scale. Murata and Bel matter in PSU and shelf ecosystems. TI, ADI, Renesas, and Infineon provide core control, conversion, protection, and telemetry silicon. Vicor and MPS are much closer to the near-load power-delivery problem.

3. The central architectural split: centralized VRM versus distributed near-load power

The deep architectural divide in this market is not just “who sells the best regulator.” It is about where conversion happens.

Diagram B · Two competing philosophies
Centralized board VRM: 48V → big board VRM → long low-voltage path → GPU. More copper loss, more thermal pain, longer low-voltage delivery path.
Distributed near-load conversion: 48V → near-load module → GPU. Keep distribution voltage high longer, transform closer to the silicon.
This is the heart of the market. The industry increasingly prefers keeping distribution voltage higher for longer and performing the hardest conversion closer to the load.

That shift is one reason 48V has become such an important theme. Higher distribution voltage cuts current for a given power level, which cuts copper loss and makes the board and rack more manageable. At the far end of the trend, some vendors are now talking openly about 800V DC architectures for future AI data centers.
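The copper-loss argument is plain I²R arithmetic: for a fixed power, current scales inversely with voltage, so conduction loss falls with the square of the distribution voltage. A sketch with assumed round numbers for the branch power and path resistance:

```python
# Copper (I^2 * R) loss for the same delivered power at different
# distribution voltages. Power and resistance are illustrative assumptions.

def i2r_loss_W(power_W: float, volts: float, path_ohms: float) -> float:
    current = power_W / volts          # I = P / V
    return current ** 2 * path_ohms    # P_loss = I^2 * R

P, R = 30_000, 0.001  # hypothetical 30 kW branch, 1 mOhm distribution path
for v in (12, 48, 800):
    print(f"{v:>4} V: I = {P / v:7.1f} A, loss = {i2r_loss_W(P, v, R):8.1f} W")
```

Going from 12V to 48V cuts current 4x and conduction loss 16x; 800V pushes the same arithmetic much further, which is the core appeal of HVDC distribution.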

4. What Vicor actually does

Vicor is best understood as a near-load power-architecture company. It does not win by being the broadest supplier of PMICs or the broadest facility integrator; it wins by giving system designers a way to keep distribution efficient and bring dense conversion closer to the processor.

Vicor’s computing materials emphasize AI, HPC, data centers, and a 48V ecosystem of modular components. Its technical content also stresses that very high-current processors suffer badly from distribution losses if low-voltage power has to travel too far across the PCB.

Core role

High-density modular conversion, especially in 48V-oriented architectures.

What it tries to optimize

Shorter low-voltage path, better efficiency, lower board loss, higher current density near the load.

Why it stands out

More of a system-level power-architecture story than a generic controller catalog.

The cleanest summary is this: Vicor is trying to own the part of the stack where efficient distribution meets the brutal last-mile delivery problem of modern accelerators.

5. What MPS does

Monolithic Power Systems is one of the closest practical alternatives to Vicor in AI datacenter power. It also leans hard into 48V distribution and high-density solutions, but its public positioning often feels more like integrated power modules plus digital control plus reference solutions rather than a pure architecture manifesto.

If Vicor often feels like “distributed near-load architecture,” MPS feels like “integrated 48V module ecosystem for modern datacenters.”

6. What Infineon does

Infineon is broader than Vicor or MPS. It is not just solving the last conversion stage. It is one of the major suppliers of the semiconductor plumbing that makes the whole AI power tree work.

Its AI datacenter materials span grid-to-core messaging, 48V server-rack power, hot-swap and eFuse protection, intermediate bus conversion, and silicon-carbide-related infrastructure themes.

Strength: broad device portfolio across protection, switching, control, and power-path management.
Best way to think about it: Infineon helps make the entire power tree safe, efficient, and manufacturable.

In very high-power 48V systems, inrush control, fault isolation, hot-plugging, and protection are not side details. They are uptime-critical. That is exactly where Infineon’s hot-swap and power-path portfolio matters.
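The inrush problem is easy to quantify: bulk input capacitance demands I = C·dV/dt during insertion, so much of a hot-swap controller's job is simply slowing the voltage ramp. A sketch with assumed capacitance and ramp times, not tied to any specific Infineon part:

```python
# Why uncontrolled hot-plug is dangerous on a 48 V bus: the bulk input
# capacitance draws I = C * dV/dt while charging. All numbers are assumptions.

def inrush_A(cap_F: float, dv_dt_V_per_s: float) -> float:
    """Charging current into a capacitor whose voltage slews at dv_dt."""
    return cap_F * dv_dt_V_per_s

C_in = 10e-3  # 10 mF of input capacitance on a hypothetical 48 V shelf

# Uncontrolled insertion: the bus slams the caps up in ~1 ms (48 kV/s slew).
print(f"uncontrolled: {inrush_A(C_in, 48 / 1e-3):,.0f} A")

# Hot-swap controller ramping the rail over 20 ms (2.4 kV/s slew).
print(f"controlled:   {inrush_A(C_in, 48 / 20e-3):,.0f} A")
```

The same device usually also provides the fault-isolation half of the story: if a board shorts, it disconnects that branch before the shared bus sags and takes down its neighbors.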

7. What Texas Instruments does

TI brings something different to the table: an enormous catalog of controllers, monitors, power stages, sensors, hot-swap devices, and support silicon. It is one of the deepest vendors if you care about the full control-and-telemetry environment around AI power conversion.

TI has publicly said it is working with NVIDIA on 800V DC power-distribution systems for next-generation AI data centers, and its technical materials describe both the move toward 800V DC and the design complexity around that transition.

TI is less about owning one single dramatic module and more about owning the support silicon and design foundation that makes large AI power systems controllable and observable.

8. What ADI does

ADI sits in a similar broad category to TI, but with a slightly different character. Its advantage is the intersection of power, precision, sensing, and control.

ADI’s recent AI/data-center materials highlight the transition toward 800V architectures and the need for better protection and telemetry as server power rises sharply. That emphasis matters: these systems do not just need conversion; they need accurate insight into what the power system is doing under harsh transient conditions.

ADI’s real leverage is not only that it can regulate power, but that it can help the system measure, protect, and control power accurately as architectures become much more complex.

That makes ADI especially relevant when the architecture moves beyond “efficient regulator” into “safe, monitorable, fault-tolerant high-voltage AI system.”
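To give "telemetry" a concrete shape: much board-level power telemetry in these systems is reported over PMBus, whose Linear11 format packs a reading into 16 bits as a 5-bit signed exponent and an 11-bit signed mantissa (value = Y·2^N). A minimal decoder; the sample word below is made up for illustration:

```python
# Decode a PMBus Linear11 word: bits 15..11 are a signed exponent N,
# bits 10..0 a signed mantissa Y, and the reading is Y * 2^N.

def decode_linear11(word: int) -> float:
    exponent = (word >> 11) & 0x1F
    mantissa = word & 0x7FF
    if exponent > 0x0F:            # sign-extend the 5-bit exponent
        exponent -= 0x20
    if mantissa > 0x3FF:           # sign-extend the 11-bit mantissa
        mantissa -= 0x800
    return mantissa * 2.0 ** exponent

# Example: exponent = -2 (0b11110), mantissa = 200 -> 200 * 2^-2 = 50.0
word = (0b11110 << 11) | 200
print(decode_linear11(word))  # 50.0
```

The format trades precision for range, which suits telemetry spanning millivolts of droop to hundreds of amps on a single bus.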

9. What Renesas does

Renesas has become more visible because it is leaning aggressively into the next power transition: 800V DC, GaN, and reliability for AI infrastructure.

Its recent public materials discuss 800V DC AI data center architecture, GaN for higher-density conversion, and battery-backup solutions for both 48V and 800V datacenter environments.

In other words, Renesas is one of the vendors trying to claim the transition zone between today’s 48V reality and tomorrow’s 800V AI power architecture.

10. Delta and Flex: infrastructure-scale players

Delta and Flex matter because the AI power problem is not just a board problem. It is also a rack, room, and deployment problem.

Delta

Delta operates heavily at facility, rack, cooling, and infrastructure scale. Its data-center pages emphasize comprehensive power management, cooling, remote infrastructure management, HVDC direction, and NVIDIA-oriented AI data center solutions.

Flex

Flex spans two layers: AI infrastructure integration and board-level power modules. Its materials talk about modular AI infrastructure that integrates power, cooling, and compute, while Flex Power Modules addresses advanced DC/DC conversion closer to the board.

Delta is more obviously a full infrastructure backbone story. Flex is interesting because it straddles infrastructure deployment and practical power-module supply.

11. Murata and Bel: PSUs, shelves, open-rack hardware, and practical ecosystem power

Murata and Bel are easy to underrate if you focus only on the glamour end of AI accelerators. But they matter because datacenter power has to exist as deployable hardware, not just as whiteboard architecture.

Murata emphasizes OCP-compatible power systems, centralized PSU architectures, and practical power-delivery optimization for AI servers. Bel similarly plays in data-center power conversion, OCP-style platforms, and modular power shelf infrastructure.

Their role: they help turn the power architecture into actual shelves, modules, and standards-aligned hardware that operators can deploy at scale.

12. Vendor map by layer

Layer | Main vendor examples | What they mostly contribute
Facility / rack / AI infrastructure | Delta, Flex | Power, cooling, integrated deployment, modular infrastructure, HVDC direction
PSUs / shelves / open-rack hardware | Murata, Bel, Delta, Flex | Practical, deployable datacenter power hardware
Board conversion and control | TI, ADI, Infineon, Renesas | Controllers, sensing, telemetry, protection, conversion support silicon
Near-load modules / 48V architecture | Vicor, MPS | Bring dense conversion close to the processor, reduce board loss, improve current delivery
Wide-bandgap device foundation | Navitas, Wolfspeed, Renesas, Infineon | GaN and SiC devices for higher efficiency and higher-voltage conversion
Diagram C · Vendor positioning by layer
Infrastructure: Delta, Flex
PSU / shelf: Murata, Bel, Delta / Flex
Control / protection: TI, ADI, Infineon, Renesas
Near-load: Vicor, MPS
Wide-bandgap: Navitas, Wolfspeed, Renesas, Infineon
No single vendor owns the whole chain. The winners live at different layers, which is why AI power delivery is becoming a true systems problem rather than a single-component problem.

13. Where the industry is going

The future direction is fairly clear even if the exact winners are not.

The longer-term pattern is unmistakable: distribute at higher voltage, convert later, regulate closer, observe everything.

That is why the industry now feels split into power-architecture vendors, control-and-protection vendors, infrastructure integrators, and wide-bandgap device suppliers. They are all solving the same problem from different altitudes in the stack.

14. References

  1. Texas Instruments: AI/data-center power architecture, 800V DC collaboration with NVIDIA, and server PSU evolution materials.
  2. Vicor computing pages and technical articles on 48V power distribution and powering clustered AI processors.
  3. Monolithic Power Systems: 48V datacenter solutions, 48V modules, and datacenter application pages.
  4. Infineon: AI/data-center power, 48V power path protection, hot-swap, and grid-to-core materials.
  5. Analog Devices: 800V hyperscale data center articles, AI accelerator power pages, and high-voltage protection/telemetry content.
  6. Renesas: 800V DC AI data center architecture, GaN for AI data centers, and battery backup solutions.
  7. Delta: AI data center power, cooling, HVDC, and infrastructure pages.
  8. Flex: AI infrastructure platform and Flex Power Modules resources.
  9. Murata: Open Compute/datacenter power systems and AI server power-delivery optimization materials.
  10. Bel Fuse: OCP and data-center power ecosystem resources.
  11. Navitas: GaN/SiC AI data center PSU and 800V DC/DC platform announcements.
  12. Wolfspeed: SiC server-power and AI-data-center materials.