Why the discussion gets confused
People routinely compare CPO, LPO, DSP pluggables, silicon photonics, InP, and VCSEL as though they are peers on one clean axis. They are not. Some describe packaging choices. Some describe signal-processing philosophy. Some describe device technology. Some describe materials. Some describe where in the network stack a given interconnect belongs.
That is why optics conversations often feel slippery. "CPO versus LPO" sounds crisp, but one is fundamentally about moving optics closer to the switch or accelerator package, while the other is largely about reducing the power overhead of retiming DSPs inside pluggable modules. "VCSEL versus silicon photonics" is a completely different axis: now you are in the domain of emitters, modulation style, distance, packaging compatibility, and manufacturing tradeoffs.
Technology taxonomy: a visual map
Before diving into each approach, it helps to see how these axes intersect. The summary below maps the four primary approaches to the system problem each one attacks.

- CPO: Pull optics into or adjacent to the switch/accelerator package to escape front-panel bandwidth, electrical reach, and power-density limits.
- LPO: Keep the pluggable form factor but reduce heavy DSP retiming so module power drops and the host carries more of the analog burden.
- DSP Pluggables: Use stronger electrical and optical compensation to make links more forgiving, especially as speeds rise and deployments get messy.
- VCSEL Paths: Dense, efficient optical emission for short-reach scale-up fabrics where cost, packaging compatibility, and integration matter most.
CPO: power and density with harder failure domains
CPO is compelling because it attacks the real system problem, not the cosmetic one. At very high aggregate bandwidth, front-panel pluggables and long electrical traces become a structural tax on power, reach, and design flexibility. Coherent's OFC 2026 announcements make that explicit: multiple CPO approaches are being demonstrated, including a 6.4T socketed CPO based on silicon photonics paired with an External Laser Source using InP CW lasers, plus multimode and InP-on-silicon variants.
This matters because CPO is not just "more optics." It is a system response to the fact that electrical paths out of very fast switch ASICs and accelerators are increasingly painful at any reasonable power budget. If the economics of AI fabrics are constrained by power-per-bit and by how much useful bandwidth you can physically expose from the package, moving optics closer is not optional forever.
CPO is not a free lunch, however. It changes serviceability. It changes thermals. It changes laser strategy. It changes how you think about fault domains. A failed pluggable can be replaced like a field part. A deeply integrated optical engine is entangled with the host package and its cooling design. That is exactly why the External Laser Source compromise is so important in current CPO debates: keep the optical engine close to the ASIC, but move the most failure-sensitive, heat-sensitive laser function outward into a more serviceable module.
LPO: less DSP, more analog discipline
LPO is attractive for a different reason. It tries to keep the operational friendliness of pluggables while trimming the DSP overhead that burns power and adds cost. The idea is elegant: if you can reduce the retiming and digital signal recovery burden inside the module, you can save power at scale. In an AI data center full of very short-reach, relatively controlled links, that is a serious lever.
The catch is that you have not abolished physics — you have shifted where the compensation burden sits. A more linear pluggable path means the channel, the host, the connector quality, and the surrounding analog environment all matter more. In other words, LPO saves watts partly by demanding more discipline from the system around it.
That shift has a second consequence that hyperscalers care about deeply: interoperability. Once more of the equalization burden lives in the host and less inside a self-healing module DSP, the ecosystem has less room for sloppy combinations. Vendor A's switch and Vendor B's linear module have to agree on a much tighter electrical and optical contract. That is one reason the LPO MSA has spent considerable effort on interoperability requirements and multi-vendor test discipline.
DSP-heavy pluggables: expensive, but forgiving
The fashionable thing is to dismiss DSP-heavy pluggables as old thinking. That is too glib. They remain important because robust signal recovery, compensation, and ecosystem maturity still matter enormously when links get longer, deployments get messier, and buyers want confidence rather than heroics.
Coherent's OFC 2026 disclosures are instructive: its 1.6T demonstrations include multiple transceivers with several electrical interfaces and DSP solutions from three industry leaders — exactly what you would expect from a market that still values interoperability, margin, and recoverability.
DSP-heavy modules survive because the real world is ugly. Manufacturing spread is ugly. Connector variance is ugly. Thermal drift is ugly. Installations are ugly. If you want the link to keep working under less-than-ideal conditions, DSP buys forgiveness.
VCSEL: more important than systems people realize
VCSEL is easy for systems people to underestimate because it does not sound as glamorous as silicon photonics or InP-on-silicon. But current public signals from Coherent and Lumentum suggest it deserves more attention in AI fabrics than most assume.
Coherent is explicitly showing multimode socketed CPO built with high-speed VCSELs. Lumentum is showcasing a scale-up optical interconnect using a high-density 1060nm VCSEL array co-packaged with a host ASIC for "slow and wide" scale-up protocols such as UCIe and PCIe. That is a direct answer to the AI fabric's most immediate physical problem: dense, local, energy-efficient links at package and rack scale.
The broader lesson is that AI infrastructure is splitting the optical problem into multiple terrains. The answer for rack-to-rack is not automatically the answer for package-to-package, tray-to-tray, or chassis-local fabrics. The right question is not "Will VCSEL win?" — it is "In which distance-and-density regime does VCSEL become the right engineering answer?"
Power budget comparison
The clearest way to understand these tradeoffs is numerically. Representative 400G/800G-class figures will shift as speeds increase, but the relative ordering is expected to be durable: DSP-heavy pluggables sit at the top of the per-module power range, LPO trims much of that DSP overhead (roughly 3–5× per module), and CPO targets the lowest pJ/bit at extreme bandwidth density.
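To make the ordering concrete, the sketch below converts energy-per-bit figures into steady-state power for an 800G-class port. The absolute pJ/bit values are illustrative assumptions for this exercise, not vendor-published numbers; only the relative ordering tracks the matrix below.

```python
# Illustrative energy-per-bit comparison for an 800G-class link.
# The pJ/bit values are rough assumptions for illustration only,
# not vendor-published figures; the relative ordering is the point.
ASSUMED_PJ_PER_BIT = {
    "DSP pluggable": 15.0,   # fully retimed DSP module (assumed)
    "LPO": 5.0,              # linear drive, retiming DSP removed (assumed)
    "CPO": 3.0,              # co-packaged engine + external laser (assumed)
}

LINK_GBPS = 800  # 800G-class port

def module_watts(pj_per_bit: float, gbps: float) -> float:
    """Convert an energy-per-bit figure to steady-state port power."""
    return pj_per_bit * 1e-12 * gbps * 1e9

for name, pj in ASSUMED_PJ_PER_BIT.items():
    watts = module_watts(pj, LINK_GBPS)
    print(f"{name:>14}: {pj:>5.1f} pJ/bit -> {watts:.1f} W per 800G port")
```

At fleet scale the arithmetic is what matters: a few watts saved per port, multiplied across hundreds of thousands of ports, is why pJ/bit has become a first-order procurement metric.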
What wins where: the decision matrix
The right question is never "which is best" — it is always "which bottleneck does this solve, in which segment of the AI fabric, at what operational cost?" This matrix summarizes the current state of play.
CPO
- Best pJ/bit at extreme bandwidth density
- Eliminates SerDes escape power
- ELS architecture solves serviceability
- Complex thermals and fault domains
- Not field-replaceable (typical)
LPO
- Preserves pluggable serviceability
- 3–5× power saving over DSP per module
- Good fit for controlled short-reach AI fabrics
- Less forgiving — cleaner channels required
- Interoperability discipline is non-trivial
DSP Pluggables
- Handles messy real-world conditions
- Ecosystem maturity and interoperability
- Field-replaceable in minutes
- Higher power per module
- Increasingly constrained at extreme speeds
VCSEL
- Excellent for dense short-reach scale-up
- Practical multimode manufacturing
- "Slow and wide" aligns with UCIe/PCIe protocols
- Reach limited to ~100m typical
- Not the answer for every AI fabric segment
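The matrix above can be read as a rough decision procedure. The sketch below encodes that reading; the thresholds and priority ordering are illustrative assumptions drawn from the bullets above, not standardized limits.

```python
# A hedged decision sketch of the matrix above: given a link's reach and
# operational priorities, suggest which approach the tradeoffs point toward.
# Thresholds and priority ordering are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Link:
    reach_m: float            # physical link distance
    controlled_channel: bool  # clean, well-characterized electrical path?
    field_replaceable: bool   # must a field team swap it in minutes?
    extreme_density: bool     # package-escape / bandwidth-density bound?

def suggest(link: Link) -> str:
    if link.extreme_density and not link.field_replaceable:
        return "CPO"  # best pJ/bit, but entangled fault domain
    if link.reach_m <= 100 and link.extreme_density:
        return "VCSEL"  # dense short-reach scale-up (~100 m typical)
    if link.controlled_channel and link.field_replaceable:
        return "LPO"  # keeps serviceability, demands channel discipline
    return "DSP pluggable"  # forgiving default for messy deployments

# A package-adjacent scale-up link versus a messy cross-rack run:
print(suggest(Link(reach_m=2, controlled_channel=True,
                   field_replaceable=False, extreme_density=True)))   # CPO
print(suggest(Link(reach_m=500, controlled_channel=False,
                   field_replaceable=True, extreme_density=False)))   # DSP pluggable
```

The point of the sketch is not the specific branch order but that the answer is a function of the link's regime, which is exactly why no single technology wins the whole fabric.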
Silicon photonics vs InP: enabling layers, not competitors
Silicon Photonics
When density, photonic integration, and long-term lane-speed scaling matter, silicon photonics keeps appearing. Coherent's CPO and pluggable announcements both lean on SiPh as a primary path toward higher bandwidth architectures. SiPh benefits from CMOS-compatible manufacturing at scale, enabling cost-effective integration of modulators, detectors, and routing elements on a single platform. The limitation is that silicon is a poor laser material — which is why it typically pairs with an external or hybrid laser source.
Indium Phosphide (InP)
When you need powerful CW lasers, high-speed modulators, or a vertically integrated optical engine, InP remains foundational. This becomes especially clear in the ELS model for CPO, where the system keeps the high-power laser in a more serviceable InP module while the SiPh engine handles routing and modulation inside the package. Coherent is explicit that InP is central to its AI-oriented portfolio — its OFC 2026 CPO examples pair silicon photonics with an External Laser Source powered by InP CW lasers.
What hyperscalers actually care about
They care about power — because power is rent, cooling, and carbon. They care about yield — because a technology that works in a lab but fails at manufacturing scale is not a technology you can ship. They care about how ugly the deployment can get before the link falls over. They care about how fast a field team can restore service. They care about whether the optical choice forces a more expensive switch or cooling design. And increasingly, they care about whether the interconnect strategy matches the topology of AI workloads rather than the habits of traditional networking.
Lumentum's current public framing is useful because it explicitly separates scale-out, scale-up, and scale-across infrastructure — exactly the right decomposition for thinking about the market. There is no universal winner because there is no single AI fabric problem. Coherent's multi-technology strategy says the same thing in another language: multiple optical architectures will coexist because the workloads and integration points are diverging.
Marvell's OFC messaging sharpens the point further by arguing that connectivity has become a primary bottleneck in modern AI data centers. Once that is true, optics stops being a line item and becomes a first-order systems decision.
The deeper point
This is why the optics stack is getting more interesting, not less. The industry is no longer just pushing generic bandwidth curves. It is decomposing the AI interconnect problem into multiple surfaces: package escape, rack-scale aggregation, local scale-up, cross-rack scale-out, thermal budgets, and serviceability. That is why CPO, LPO, DSP pluggables, VCSEL, silicon photonics, and InP are all alive at once — they are each solving a different piece of a problem that cannot be solved by a single approach.
The mistake is to expect a single winner. The better model is specialization. Different photonic approaches will dominate different radii of the AI fabric. The systems person's job is to understand which one belongs where — and crucially, to recognize that the right answer is changing as AI cluster design itself evolves, as lane speeds increase, and as co-packaging moves from experimental to production.