AI Infrastructure · Photonics · Systems Architecture

CPO, LPO, DSP, and VCSEL:
What Actually Matters
for AI Infrastructure

The optics conversation is getting noisy. Co-packaged optics, linear pluggables, DSP-heavy modules, VCSEL scale-up links, silicon photonics, InP — it is easy to turn all of it into buzzwords. The useful question is simpler: which technology solves which bottleneck, for which part of the AI fabric, under which operational constraints?

By Manish KL · April 2026 · ~14 min read

The industry is clearly shifting from generic "faster optics" toward AI-specific interconnect choices. Coherent is publicly showcasing multiple co-packaged approaches spanning silicon photonics, InP, and VCSEL. Lumentum is explicitly framing products around scale-out, scale-up, and scale-across AI infrastructure. Marvell is arguing that connectivity has become a primary bottleneck in AI data centers. That means the right framing is no longer optics as a component category, but optics as a topology, power, and failure-domain decision.

The thesis

Hyperscalers do not buy "photonics" in the abstract. They buy a tradeoff surface: watts per bit, thermal integration, reach, serviceability, signal recovery burden, manufacturing maturity, and what happens when the link misbehaves at cluster scale. Understanding that surface — not the spec sheet — is what actually drives interconnect decisions.

  • CPO: best when power density and front-panel limits dominate
  • LPO: best when DSP power overhead in short-reach links is the target
  • DSP Pluggables: best when reach, robustness, and ecosystem maturity matter
  • VCSEL: best when short-reach, dense, low-energy scale-up is the target

Why the discussion gets confused

People routinely compare CPO, LPO, DSP pluggables, silicon photonics, InP, and VCSEL as though they are peers on one clean axis. They are not. Some describe packaging choices. Some describe signal-processing philosophy. Some describe device technology. Some describe materials. Some describe where in the network stack a given interconnect belongs.

That is why optics conversations often feel slippery. "CPO versus LPO" sounds crisp, but one is fundamentally about moving optics closer to the switch or accelerator package, while the other is largely about reducing the power overhead of retiming DSPs inside pluggable modules. "VCSEL versus silicon photonics" is a completely different axis: now you are in the domain of emitters, modulation style, distance, packaging compatibility, and manufacturing tradeoffs.

The clean mental model: CPO answers a packaging-and-density problem. LPO answers a module-power problem. DSP-heavy pluggables answer a robustness-and-reach problem. VCSEL often answers a short-reach, high-density, practical-integration problem. Materials like InP and silicon photonics are enabling layers underneath those system choices — not competitors to them.

Technology taxonomy: a visual map

Before diving into each approach, it helps to see how these axes intersect. The diagram below maps the four primary approaches across the dimensions that matter for AI fabric decisions.

Fig 1 — AI Interconnect Technology Map
[Diagram: quadrant map of packaging integration (more pluggable/serviceable ↔ more co-packaged/fixed) against reach (scale-up ↔ scale-out). DSP Pluggables: 100m–10km reach, forgiving and mature. LPO: 2m–500m reach, low DSP power. CPO: package-scale integration, best pJ/bit at extreme bandwidth. VCSEL: 2m–100m reach, dense scale-up fabrics. SiPh and InP appear beneath all four as enabling material layers.]
The four primary interconnect approaches occupy different positions on the packaging-integration vs. reach axes. CPO and VCSEL cluster toward co-packaged scale-up; DSP pluggables serve longer-reach scale-out; LPO sits between. Silicon photonics and InP are enabling material layers that support all four approaches rather than competing with them.
CPO: the packaging problem

Pull optics into or adjacent to the switch/accelerator package to escape front-panel bandwidth, electrical reach, and power-density limits.

LPO: the signal-path problem

Keep the pluggable form factor but reduce heavy DSP retiming so module power drops and the host carries more analog burden.

DSP Pluggables: the robustness problem

Use stronger electrical and optical compensation to make links more forgiving, especially as speeds rise and deployments get messy.

VCSEL Paths: the short-reach problem

Dense, efficient optical emission for short-reach scale-up fabrics where cost, packaging compatibility, and integration matter most.

CPO: power and density with harder failure domains

CPO is compelling because it attacks the real system problem, not the cosmetic one. At very high aggregate bandwidth, front-panel pluggables and long electrical traces become a structural tax on power, reach, and design flexibility. Coherent's 2026 OFC announcements make that explicit: multiple CPO approaches are being demonstrated, including a 6.4T socketed CPO based on silicon photonics paired with an External Laser Source using InP CW lasers, plus multimode and InP-on-silicon variants.

This matters because CPO is not just "more optics." It is a system response to the fact that electrical paths out of very fast switch ASICs and accelerators are increasingly painful at any reasonable power budget. If the economics of AI fabrics are constrained by power-per-bit and by how much useful bandwidth you can physically expose from the package, moving optics closer is not optional forever.

Fig 2 — CPO vs Front-Panel Pluggable: Electrical Escape Distance
[Diagram: a conventional front-panel design routes the switch/NIC ASIC's signal over a ~10–15cm PCB trace to a DSP-plus-optics module, paying full DSP power in the module plus SerDes loss over the long trace. CPO shortens the electrical path to a ~2mm trace into a SiPh optical engine on the package or tray, with the InP CW laser pulled out into an External Laser Source (ELS) that stays thermally separated.]
CPO eliminates the long electrical trace between ASIC and optical engine, dramatically reducing SerDes power. The External Laser Source (ELS) compromise — an InP CW laser kept in a separate, more serviceable module — solves the thermal and replacement problem for the most failure-sensitive component. Coherent's socketed CPO + ELS framing at OFC 2026 is a concrete implementation of this architecture.
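As a rough sanity check on why the short escape matters, here is a back-of-envelope trace-loss comparison. The per-centimeter loss figure is an illustrative assumption, not a vendor measurement; real PCB loss depends on material, layer stack, vias, and symbol rate.

```python
# Illustrative assumption: ~1.5 dB/cm of PCB trace loss at a 100G-class
# Nyquist frequency. This is a sketch value, not a measured spec.
LOSS_DB_PER_CM = 1.5

def escape_loss_db(trace_cm: float) -> float:
    """Insertion loss of the electrical escape path (trace loss only)."""
    return LOSS_DB_PER_CM * trace_cm

front_panel = escape_loss_db(12.0)  # ~10-15 cm to a front-panel cage
cpo = escape_loss_db(0.2)           # ~2 mm to a co-packaged engine

print(f"front-panel escape: {front_panel:.1f} dB")  # 18.0 dB
print(f"CPO escape:         {cpo:.1f} dB")          # 0.3 dB
```

An order-of-magnitude swing in channel loss is what lets CPO shed the heavy equalization and retiming power that a front-panel SerDes must carry.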

CPO is not a free lunch, however. It changes serviceability. It changes thermals. It changes laser strategy. It changes how you think about fault domains. A failed pluggable can be replaced like a field part. A deeply integrated optical engine is entangled with the host package and its cooling design. That is exactly why the External Laser Source compromise is so important in current CPO debates: keep the optical engine close to the ASIC, but move the most failure-sensitive, heat-sensitive laser function outward into a more serviceable module.

CPO tradeoff summary: wins when the front panel becomes the bottleneck and every additional picojoule per bit hurts. Less attractive when fast replacement, modularity, and operational simplicity still dominate the buying decision.

LPO: less DSP, more analog discipline

LPO is attractive for a different reason. It tries to keep the operational friendliness of pluggables while trimming the DSP overhead that burns power and adds cost. The idea is elegant: if you can reduce the retiming and digital signal recovery burden inside the module, you can save power at scale. In an AI data center full of very short-reach, relatively controlled links, that is a serious lever.

The catch is that you have not abolished physics — you have shifted where the compensation burden sits. A more linear pluggable path means the channel, the host, the connector quality, and the surrounding analog environment all matter more. In other words, LPO saves watts partly by demanding more discipline from the system around it.

Fig 3 — LPO vs DSP Pluggable: Where Compensation Lives
[Diagram: a DSP pluggable pairs a basic host driver with a full retimer DSP in the module (CDR + EQ + FEC, ~10–15W of DSP power) before the fiber. An LPO design moves equalization (FFE/DFE), CDR assist, and FEC into the host ASIC, leaving a ~3–5W linear module in the signal path.]
LPO shifts signal processing work from the module to the host ASIC. This cuts per-module power from ~10–15W (full DSP) to ~3–5W, but requires the host to implement stronger equalization, CDR assist, and FEC. The channel must also be cleaner — LPO has much less margin for connector variance or manufacturing spread.
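To make "the host carries the equalization burden" concrete, here is a minimal feed-forward equalizer (FFE) sketch, the kind of host-side compensation an LPO design must strengthen. The channel model and tap weights are illustrative assumptions, not real silicon behavior.

```python
# Minimal FIR feed-forward equalizer: y[n] = sum_k taps[k] * x[n-k].
# Tap weights are hand-picked for illustration; real taps are adapted
# per channel by the host SerDes.

def ffe(samples, taps):
    """Apply a feed-forward equalizer to a sequence of samples."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, w in enumerate(taps):
            if n - k >= 0:
                acc += w * samples[n - k]
        out.append(acc)
    return out

# Toy channel with post-cursor ISI: each symbol leaks 30% into the next.
tx = [1.0, -1.0, 1.0, 1.0]
rx = [tx[0]] + [tx[n] + 0.3 * tx[n - 1] for n in range(1, len(tx))]

# A 2-tap FFE with a -0.3 post-tap cancels the first-order leakage,
# leaving only a small (0.09x) second-order residual.
eq = ffe(rx, [1.0, -0.3])
```

In a DSP pluggable this adaptation happens invisibly inside the module; in LPO the host must do it across whatever channel, connector, and module it is handed, which is exactly why the electrical contract tightens.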

That shift has a second consequence that hyperscalers care about deeply: interoperability. Once more of the equalization burden lives in the host and less inside a self-healing module DSP, the ecosystem has less room for sloppy combinations. "Vendor A's" switch and "Vendor B's" linear module have to agree on a much tighter electrical and optical contract. That is one reason the LPO MSA has spent considerable effort on interoperability requirements and multi-vendor test discipline.

LPO bottom line: this is a narrower optimization. It makes the most sense in controlled environments where reach is modest, channels are clean, and operators are willing to tune the platform rather than depend on a heavier module DSP stack to paper over imperfections. The saved watts compound across thousands of short-reach AI fabric links.

DSP-heavy pluggables: expensive, but forgiving

The fashionable thing is to dismiss DSP-heavy pluggables as old thinking. That is too glib. They remain important because robust signal recovery, compensation, and ecosystem maturity still matter enormously when links get longer, deployments get messier, and buyers want confidence rather than heroics.

Coherent's OFC 2026 disclosures are instructive: its 1.6T demonstrations include multiple transceivers with several electrical interfaces and DSP solutions from three industry leaders — exactly what you would expect from a market that still values interoperability, margin, and recoverability.

DSP-heavy modules survive because the real world is ugly. Manufacturing spread is ugly. Connector variance is ugly. Thermal drift is ugly. Installations are ugly. If you want the link to keep working under less-than-ideal conditions, DSP buys forgiveness.

Unfashionable but true: the most elegant architecture does not always win. The one that keeps working across messy deployments often does. For scale-out, where links are longer, connectors are more varied, and service teams are not photonics experts, DSP pluggables remain the conservative, rational choice.

VCSEL: more important than systems people realize

VCSEL is easy for systems people to underestimate because it does not sound as glamorous as silicon photonics or InP-on-silicon. But current public signals from Coherent and Lumentum suggest it deserves more attention in AI fabrics than most assume.

Coherent is explicitly showing multimode socketed CPO built with high-speed VCSELs. Lumentum is showcasing a scale-up optical interconnect using a high-density 1060nm VCSEL array co-packaged with a host ASIC for "slow and wide" scale-up protocols such as UCIe and PCIe. That is a direct answer to the AI fabric's most immediate physical problem: dense, local, energy-efficient links at package and rack scale.

Fig 4 — VCSEL Array Scale-Up Architecture (Lumentum-style "slow and wide")
[Diagram: a host GPU/ASIC drives "slow and wide" UCIe/PCIe lanes into a co-packaged 1060nm VCSEL array (20×4 array = 80 lanes × 25Gbps = 2Tbps per direction), over 2m–100m of OM5 multimode ribbon to a co-packaged photodetector array on the peer GPU/ASIC. Roughly 1–3 pJ/bit, 80+ lanes per module, and no single-mode alignment precision needed.]
Lumentum's 1060nm VCSEL array architecture targets the "slow and wide" scale-up problem: many parallel lanes at per-lane speeds that match UCIe/PCIe electrical protocols (e.g. 25Gbps/lane × 80 lanes = 2Tbps bidirectional). Multimode fiber relaxes manufacturing tolerances vs. single-mode, making this practical for tray-to-tray and rack-local distances.
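The lane arithmetic behind "slow and wide" is worth writing out, because it is the whole point of the approach: aggregate bandwidth comes from lane count, not lane speed.

```python
# "Slow and wide" aggregate bandwidth: many moderate-speed lanes
# instead of a few very fast ones. Figures from the 20x4-array example.
lanes = 80           # 20x4 VCSEL array
gbps_per_lane = 25   # per-lane rate matched to UCIe/PCIe-class electrical lanes

aggregate_tbps = lanes * gbps_per_lane / 1000
print(aggregate_tbps)  # 2.0 Tbps per direction
```

Keeping each lane at electrical-protocol speeds is what lets the optical link sit directly on UCIe/PCIe-style interfaces without a rate-converting DSP in between.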

The broader lesson is that AI infrastructure is splitting the optical problem into multiple terrains. The answer for rack-to-rack is not automatically the answer for package-to-package, tray-to-tray, or chassis-local fabrics. The right question is not "Will VCSEL win?" — it is "In which distance-and-density regime does VCSEL become the right engineering answer?"

Power budget comparison

The clearest way to understand these tradeoffs is numerically. The figures below are representative of 400G/800G-class deployments and will evolve as speeds increase, but the relative ordering is expected to be durable.

Fig 5 — Approximate Power and Energy Efficiency Comparison (800G-class)
Technology       Module power (800G)                    Energy eff.    Reach
CPO (SiPh)       ~5–8W (electrical escape eliminated)   ~1–3 pJ/bit    Package–2m
VCSEL (MMF)      ~6–10W total module                    ~2–4 pJ/bit    2m–100m
LPO (SMF)        ~8–12W module (DSP removed)            ~3–6 pJ/bit    2m–500m
DSP Pluggable    ~20–30W module (full DSP)              ~8–15 pJ/bit   500m–10km
Power figures are representative ranges for 800G-class links; actual values depend on vendor implementation, operating speed, and temperature. The key insight is the order-of-magnitude difference in energy efficiency between CPO/VCSEL (optimized for short reach) and DSP pluggables (optimized for reach and robustness). At 50,000-port AI cluster scale, a 10W/port difference is 500kW.
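Both the pJ/bit-to-watts conversion behind Fig 5 and the cluster-scale claim are one-line calculations. A quick sketch, using the article's representative ranges rather than vendor specs:

```python
def pj_per_bit_to_watts(pj_per_bit: float, gbps: float) -> float:
    """Energy per bit (pJ) times bit rate (Gb/s) gives link power.
    1 pJ/bit * 1 Gb/s = 1e-12 J/bit * 1e9 bit/s = 1 mW."""
    return pj_per_bit * gbps * 1e-3

# An 800G link at 10 pJ/bit burns ~8 W in optics and signal processing:
print(pj_per_bit_to_watts(10, 800))  # 8.0 (watts)

# At cluster scale, per-port differences dominate the power bill:
ports = 50_000
delta_w_per_port = 10
print(ports * delta_w_per_port / 1e3)  # 500.0 (kW)
```

This is why the ordering in Fig 5 matters more than any individual number: a few pJ/bit per link compounds into substation-scale power across an AI fabric.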

What wins where: the decision matrix

The right question is never "which is best" — it is always "which bottleneck does this solve, in which segment of the AI fabric, at what operational cost?" This matrix summarizes the current state of play.

Fig 6 — Technology Decision Matrix for AI Fabric Segments
Scenario                        CPO       LPO       DSP Plug.   VCSEL
Package-to-package (<2m)        Best      OK        Overkill    Strong
Rack-local scale-up (2–50m)     Strong    Good      Possible    Best
Rack-to-rack (50–500m)          Possible  Best      Good        Poor
Scale-out spine (>500m)         Poor      Limited   Best        Poor
Harsh/varied deployments        Depends   Risky     Best        Depends
Extreme power constraint        Best      Good      Poor        Best
Hot-swap serviceability         Hard      Good      Best        Mixed
No single technology dominates all scenarios. CPO and VCSEL lead for power-constrained short-reach scale-up. DSP pluggables lead for robustness and longer-reach scale-out. LPO occupies a sweet spot for controlled short-to-medium reach links where power savings are valuable and operational discipline is available.
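One way to internalize the matrix is to encode its leading choices as a toy selection function. This is a sketch of the table's rankings only; a real decision weighs power, serviceability, channel quality, and topology together, not reach alone.

```python
def preferred_tech(reach_m: float, messy_deployment: bool = False) -> str:
    """Return the matrix's leading choice for a fabric segment (toy model)."""
    if messy_deployment:
        return "DSP Pluggable"      # forgiveness beats elegance
    if reach_m < 2:
        return "CPO"                # package-to-package
    if reach_m <= 50:
        return "VCSEL"              # rack-local scale-up
    if reach_m <= 500:
        return "LPO"                # rack-to-rack, controlled channels
    return "DSP Pluggable"          # scale-out spine

print(preferred_tech(0.5))    # CPO
print(preferred_tech(20))     # VCSEL
print(preferred_tech(300))    # LPO
print(preferred_tech(2000))   # DSP Pluggable
```

Note that the single boolean for deployment messiness overrides everything else, which mirrors the table's "Harsh/varied deployments" row: robustness is a veto, not just another weight.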

CPO

  • Best pJ/bit at extreme bandwidth density
  • Eliminates SerDes escape power
  • ELS architecture solves serviceability
  • Complex thermals and fault domains
  • Not field-replaceable (typical)

LPO

  • Preserves pluggable serviceability
  • 3–5× power saving over DSP per module
  • Good fit for controlled short-reach AI fabrics
  • Less forgiving — cleaner channels required
  • Interoperability discipline is non-trivial

DSP Pluggables

  • Handles messy real-world conditions
  • Ecosystem maturity and interoperability
  • Field-replaceable in minutes
  • Higher power per module
  • Increasingly constrained at extreme speeds

VCSEL

  • Excellent for dense short-reach scale-up
  • Practical multimode manufacturing
  • "Slow and wide" aligns with UCIe/PCIe protocols
  • Reach limited to ~100m typical
  • Not the answer for every AI fabric segment

Silicon photonics vs InP: enabling layers, not competitors

Silicon Photonics

When density, photonic integration, and long-term lane-speed scaling matter, silicon photonics keeps appearing. Coherent's CPO and pluggable announcements both lean on SiPh as a primary path toward higher bandwidth architectures. SiPh benefits from CMOS-compatible manufacturing at scale, enabling cost-effective integration of modulators, detectors, and routing elements on a single platform. The limitation is that silicon is a poor laser material — which is why it typically pairs with an external or hybrid laser source.

Indium Phosphide (InP)

When you need powerful CW lasers, high-speed modulators, or a vertically integrated optical engine, InP remains foundational. This becomes especially clear in the ELS model for CPO, where the system keeps the high-power laser in a more serviceable InP module while the SiPh engine handles routing and modulation inside the package. Coherent is explicit that InP is central to its AI-oriented portfolio — its OFC 2026 CPO examples pair silicon photonics with an External Laser Source powered by InP CW lasers.

Silicon photonics handles the routing, modulation, and integration. InP handles the light generation. The winning CPO architecture in 2026 is not choosing between them — it is combining them strategically.

What hyperscalers actually care about

They care about power — because power is rent, cooling, and carbon. They care about yield — because a technology that works in a lab but fails at manufacturing scale is not a technology you can ship. They care about how ugly the deployment can get before the link falls over. They care about how fast a field team can restore service. They care about whether the optical choice forces a more expensive switch or cooling design. And increasingly, they care about whether the interconnect strategy matches the topology of AI workloads rather than the habits of traditional networking.

Lumentum's current public framing is useful because it explicitly separates scale-out, scale-up, and scale-across infrastructure — exactly the right decomposition for thinking about the market. There is no universal winner because there is no single AI fabric problem. Coherent's multi-technology strategy says the same thing in another language: multiple optical architectures will coexist because the workloads and integration points are diverging.

Marvell's OFC messaging sharpens the point further by arguing that connectivity has become a primary bottleneck in modern AI data centers. Once that is true, optics stops being a line item and becomes a first-order systems decision.

The real selection rule: pick the optical architecture that removes the dominant bottleneck in that layer of the AI fabric. Do not ask which technology is "best." Ask which one buys the most useful relief — in watts, density, reach, robustness, or operability — for the exact segment of the system you are trying to scale. The answer will be different for the package boundary, the rack boundary, and the campus boundary.

The deeper point

This is why the optics stack is getting more interesting, not less. The industry is no longer just pushing generic bandwidth curves. It is decomposing the AI interconnect problem into multiple surfaces: package escape, rack-scale aggregation, local scale-up, cross-rack scale-out, thermal budgets, and serviceability. That is why CPO, LPO, DSP pluggables, VCSEL, silicon photonics, and InP are all alive at once — they are each solving a different piece of a problem that cannot be solved by a single approach.

The mistake is to expect a single winner. The better model is specialization. Different photonic approaches will dominate different radii of the AI fabric. The systems person's job is to understand which one belongs where — and crucially, to recognize that the right answer is changing as AI cluster design itself evolves, as lane speeds increase, and as co-packaging moves from experimental to production.

If the companion essay on photonics as "AI operating system" argued that the scheduler must understand light — this essay supplies the hardware menu the scheduler will be choosing from: four distinct physical approaches, each with its own power economics, serviceability model, reach characteristics, and failure behavior. Understanding that menu is prerequisite to using it wisely.