Co-Packaged Optics (CPO): The End of Pluggable Transceivers — A Ground-Up Guide
Data centers are hitting a power wall. At 224Gbps per lane, copper is too lossy, too hot, and too short on reach. CPO doesn't just make optics smaller: it moves them inside the switch package, right next to the ASIC. This is not a new product. It's an architectural shift.
1. Introduction: Why CPO Matters Now
Imagine you are trying to shout a message across a football stadium, but instead of air, you have to send it through a 15-inch-long garden hose filled with water. By the time your voice gets to the other end, it's muffled, distorted, and you had to use enormous energy to push it through. That's essentially what a modern data center switch is doing today.
Key insight: CPO matters because the dominant cost in high-speed networking is no longer just switching packets. It is the electrical effort required to get bits off the ASIC and into optical media.
A top-of-rack switch application-specific integrated circuit (ASIC) in 2025, like Broadcom's Tomahawk 5 or Cisco's Silicon One G200, switches 51.2 terabits per second (Tbps). To do that, it needs 512 electrical lanes running at 106.25 gigabits per second (Gbps) using PAM4 signaling. Those 512 lanes must travel from the silicon die, across the package substrate, onto the PCB, through 12 to 16 inches of copper trace, to the front panel where a pluggable optical transceiver sits.
At 112Gbps per lane (the previous generation), this was already painful. At 224Gbps per lane (the next generation, required for 102.4T switches and 1.6T optics), it is physically impossible with standard PCB materials. The signal loss exceeds 40 decibels (dB): more than 99.99% of the signal power is lost in the copper. To compensate, switch vendors add retimer chips, small signal repeaters that burn 1.5–2 watts per 800G port just to clean up the electrical signal. Then the pluggable module itself burns another 12–16 watts to convert that electrical signal into light using its own digital signal processor (DSP), laser drivers, and transimpedance amplifier (TIA).
The result: in a typical 51.2T switch, 55–65% of the total system power — often 600 to 800 watts out of 1,200W — is consumed not by switching packets, but by getting bits on and off the chip. The copper and the optics dominate power, cost, and failure rate.
Co-Packaged Optics (CPO) is the architectural answer: take the optical engine — the silicon photonics chip that converts electricity to light — and integrate it into the same package as the switch ASIC. Instead of driving a signal 15 inches across lossy PCB, you drive it 10–25 millimeters across a silicon interposer. The electrical link is so short you can eliminate the DSP, retimers, and large driver amplifiers. The fiber optic cables then exit the package directly as "pigtails," instead of plugging into a faceplate.
This is not like moving from QSFP28 to QSFP-DD. That was a form-factor change. CPO changes where the boundary between electrical and optical domains lives. It changes who owns the optics (switch vendor, not transceiver vendor), how systems are cooled, how they are tested, and how data centers are cabled. The thesis of this guide is simple: CPO will not replace pluggables everywhere overnight, but for AI fabrics and high-radix switches above 51.2T, it is inevitable because physics leaves no alternative.
Analogy for beginners: Pluggable optics is like having a fleet of delivery trucks (transceivers) parked at a loading dock (front panel). Every package (bit) must be carried by hand from the warehouse (ASIC) down a long, hot hallway (copper PCB) to the truck. CPO is like moving the trucks' engines into the warehouse itself, attaching conveyor belts only centimeters long, and running just the trailer hitches (fibers) out to the road. You eliminate the hallway entirely.
2. Beginner Foundations (Ground Up)
What is an optical transceiver (pluggable)?
A pluggable optical transceiver, like an OSFP or QSFP-DD module, is a self-contained computer for light. It plugs into the front of a switch. Inside, it contains: a DSP to equalize the incoming 112G PAM4 signal, a laser driver, a laser (or external laser input), a modulator to imprint data onto light, and on the receive side, a photodetector and TIA to convert light back to electricity.
The electrical interface is standardized: 8 lanes of 100G (actually 106.25G with overhead) for an 800G-DR8 module. The optical interface is also standardized: 8 fibers out at 1310nm wavelength, each carrying 100G over up to 500m (the DR reach class; FR and LR variants extend to 2km and 10km). The module burns 14–16W because it's doing heavy signal processing at the worst possible place: far from the ASIC, after the signal has already been degraded by copper.
How light carries data
At its core, optical communication is Morse code with a flashlight, but at 100 billion flashes per second. You need four things:
- Laser: a continuous wave (CW) light source. In data centers, this is typically a distributed feedback (DFB) laser at 1310nm or a tunable laser for WDM. It provides the "carrier."
- Modulator: a shutter that turns light on/off or shifts its phase very fast. In silicon photonics, this is a Mach-Zehnder Modulator (MZM) or micro-ring resonator. It imprints the 1s and 0s.
- Waveguide: the "wire" for light, etched into silicon (about 220nm thick, 450nm wide). Light stays confined by total internal reflection.
- Detector: a germanium photodiode on silicon that absorbs photons and generates electrons — converting light back to current.
The key insight of silicon photonics: you can manufacture all of these (except the laser itself) in a CMOS foundry, using the same 300mm wafers as electronics, but on SOI substrates. This is what makes co-packaging economical.
What is SerDes, and why 512 lanes?
SerDes (serializer/deserializer) is the circuit that takes wide, slow parallel data inside the ASIC core and serializes it onto a single high-speed differential pair at 106.25G. PAM4 means Pulse Amplitude Modulation with 4 levels, so each symbol carries 2 bits: a 106.25 Gbps lane runs at 53.125 Gbaud, and after FEC and framing overhead it delivers ~100G of usable data.
A 51.2T switch = 51,200 Gbps. Divide by 100G per lane = 512 lanes. Those lanes are grouped: 8 lanes make one 800G port (8×100G = 800G). So 512 / 8 = 64 ports of 800G. In practice, a 51.2T pluggable system uses 64 OSFP ports, each running 800G-DR8 (a 1U faceplate fits only 32 OSFP cages, so these systems typically span 2U). The power of those 512 SerDes blocks alone is ~180–220W at 112G, and projected to be 250–300W at 224G.
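The lane arithmetic above reduces to a few lines. This is a back-of-envelope sketch restating the section's own numbers, not a vendor specification:

```python
# Lane arithmetic for a 51.2T switch (numbers from this subsection).
SWITCH_TBPS = 51.2
LANE_GBPS = 100          # usable rate per lane (106.25G raw PAM4)
LANES_PER_PORT = 8       # 8 x 100G = one 800G port

lanes = round(SWITCH_TBPS * 1000 / LANE_GBPS)    # 512 electrical lanes
ports_800g = lanes // LANES_PER_PORT             # 64 ports of 800G

# PAM4 carries 2 bits per symbol, so the symbol rate is half the bit rate:
baud = 106.25 / 2                                # 53.125 Gbaud
print(lanes, ports_800g, baud)                   # 512 64 53.125
```

The same arithmetic scales directly: doubling the per-lane rate to 200G halves the lane count for the same aggregate bandwidth.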
Power breakdown: where the watts go
LightCounting and Broadcom data from 2024 tear-downs show a typical 51.2T system with pluggables:
- Switch ASIC core logic: ~220W (18%)
- ASIC SerDes (512×112G): ~200W (17%)
- PCB retimers / redrivers: ~100W (8%)
- 32× 800G pluggable optics: ~480W (40%)
- Rest (power supplies, fans, CPU): ~200W (17%)
Total ~1,200W. Notice: the I/O (SerDes + retimers + optics) is 780W — 65% of the system. CPO attacks all three of those items simultaneously by shortening the electrical link to <2cm.
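The breakdown above sums as claimed. A sketch restating the list's own figures:

```python
# System power budget for a 51.2T pluggable-based switch (figures from above).
budget_w = {
    "ASIC core logic": 220,
    "SerDes (512 x 112G)": 200,
    "Retimers / redrivers": 100,
    "32x 800G pluggable optics": 480,
    "PSU / fans / CPU": 200,
}
total_w = sum(budget_w.values())                    # 1200 W
io_w = (budget_w["SerDes (512 x 112G)"]
        + budget_w["Retimers / redrivers"]
        + budget_w["32x 800G pluggable optics"])    # 780 W
io_share = round(100 * io_w / total_w)              # 65 (%)
print(total_w, io_w, io_share)                      # 1200 780 65
```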
Figure 1: Pluggable vs Co-Packaged Architecture
3. What is Co-Packaged Optics Exactly
Formal definition (from the Optical Internetworking Forum, OIF): Co-Packaged Optics is the integration of one or more photonic integrated circuits (PIC) and their associated electronic integrated circuits (EIC) in the same package as a host ASIC, with optical fibers exiting the package, such that the electrical interface between host and optics is less than ~30mm.
In plain language: you take the optics out of the pluggable metal cage and glue them onto the same ceramic substrate as the switch chip.
Key components in a CPO module
A complete CPO system has five pieces that must work together:
1. External Laser Source (ELS): This is the most counter-intuitive part. The laser is NOT co-packaged. It sits on the front panel or on a separate cold plate, and feeds continuous-wave light into the package via polarization-maintaining fiber. Why external? Lasers hate heat. A DFB laser's lifetime drops by 50% for every 10°C increase above 55°C. A switch ASIC runs at 85–105°C junction. Putting the laser next to it would kill reliability. ELS modules from Lumentum or Coherent provide 8 or 16 wavelengths at ~100–200mW each, shared across multiple engines, and are hot-swappable.
2. Photonic IC (PIC): Typically 5×8mm, fabricated on SOI at GlobalFoundries (45CLO), TSMC, or Intel. Contains waveguides, splitters, Mach-Zehnder modulators (for TX), and germanium photodetectors (for RX). A modern 6.4T engine has 8 fibers in, 8 out, each with 8 wavelengths (for WDM) or 8 parallel fibers. The PIC is passive — it needs no power except for thermal tuning (microheaters).
3. Electronic IC (EIC): This is the driver and TIA chip, usually in 28nm or 16nm CMOS, flip-chipped directly onto the PIC. The driver provides ~2–3Vpp swing to the modulator (no DSP!). The TIA amplifies ~50µA photocurrent to a digital level. In CPO, because the link is so short, you can use simple linear drivers instead of power-hungry DSPs.
4. Optical Interposer / Substrate: The mechanical base. In 2.5D implementations (Broadcom Bailly, TSMC COUPE), the ASIC, EIC-on-PIC, and HBM are all mounted on a large silicon interposer with 2µm RDL traces. This gives <0.5dB electrical insertion loss at 50GHz. In 3D (Intel), the PIC is hybrid-bonded under the EIC.
5. Fiber Array Unit (FAU): The most yield-critical part. A V-groove array of 32 or 64 single-mode fibers is actively aligned to the PIC edge couplers (or grating couplers) with sub-micron precision, then glued with UV epoxy. The fibers exit the package as a ribbon, typically through a glass ferrule in the package lid. This is where most failures occur in manufacturing.
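The external-laser argument in item 1 can be quantified with the halving rule quoted there. This is a sketch of that rule of thumb; real acceleration factors vary by laser design:

```python
# DFB laser lifetime vs temperature, using the "halves every +10C above 55C"
# rule of thumb from the ELS discussion above (illustrative, not a datasheet model).
def relative_lifetime(temp_c: float, ref_c: float = 55.0) -> float:
    """Lifetime as a fraction of the 55C baseline."""
    if temp_c <= ref_c:
        return 1.0
    return 0.5 ** ((temp_c - ref_c) / 10.0)

print(relative_lifetime(50))   # 1.0    -> ELS on a cool cold plate
print(relative_lifetime(95))   # 0.0625 -> ~6% of rated life near the ASIC
```

Forty degrees of extra heat costs four halvings, which is why the laser lives outside the package while everything else moves in.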
Figure 2: CPO Stack Cross-Section
4. Why Now: The Physics Wall
CPO has been discussed since 2010. It's happening now for three converging reasons.
Why the timing changed: 112G could still be managed with heroic signal conditioning. 224G pushes the industry from “painful but manageable” into “architecturally different solution required.”
Copper loss at 224G PAM4
At 112G PAM4, the Nyquist frequency is 28GHz. Standard Megtron 6 PCB has ~1.2 dB/inch loss at 28GHz. Over 16 inches, that's 19.2 dB, plus connectors and vias, pushing 30–32 dB total channel loss. IEEE 802.3ck specifies a maximum ~30dB.
At 224G PAM4, Nyquist is 56GHz. Conductor (skin-effect) loss scales with sqrt(f) while dielectric loss grows roughly linearly with f, so per-inch loss rises to ~1.7 dB/inch or more. Over 16 inches: 27 dB just in PCB, plus 8–10 dB in connectors. Total >35dB. Even with the best low-loss materials (Megtron 8, Panasonic M7), you exceed 40dB. No SerDes can close that eye without a retimer midway. Each retimer adds 1.5W per 800G, 2ns latency, and $25 of BOM cost.
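The loss budgets in this subsection reduce to simple arithmetic. A back-of-envelope sketch assuming ~11 dB of connector and via loss (consistent with the totals quoted above); real channels require full S-parameter analysis:

```python
# PCB channel-loss arithmetic from this subsection (back-of-envelope model).
def db_to_power_fraction(loss_db: float) -> float:
    """Fraction of input power surviving a given insertion loss."""
    return 10 ** (-loss_db / 10)

def channel_loss_db(db_per_inch: float, trace_in: float = 16,
                    connector_db: float = 11) -> float:
    """Total loss: PCB trace plus connectors/vias (~11dB assumed)."""
    return db_per_inch * trace_in + connector_db

loss_112g = channel_loss_db(1.2)   # ~30 dB, at the 802.3ck limit
loss_224g = channel_loss_db(1.7)   # ~38 dB, beyond any SerDes budget
print(round(loss_112g, 1), round(loss_224g, 1))   # 30.2 38.2

# A 40 dB channel delivers only 0.01% of the launched power:
print(db_to_power_fraction(40))    # 0.0001
```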
Facebook (Meta) published in 2023 that for their 51.2T fabric, retimers alone would consume 18% of the rack power budget. That is unsustainable at 100,000 GPU scale.
Power numbers are real
Measured data (OFC 2024, Broadcom):
| Architecture | 800G Power | pJ/bit | Reach |
|---|---|---|---|
| Pluggable 800G-DR8 (DSP) | 14.5W | 18.1 | 2km |
| Pluggable 800G-LPO (linear) | 9.5W | 11.9 | 500m |
| On-Board Optics | 9.0W | 11.3 | 2km |
| Co-Packaged (Bailly) | 5.5–6.5W | 7.2 | 2km |
The 55% power saving comes from three places: eliminating the DSP (~3W), eliminating the retimer (~1.5W), and reducing SerDes swing from 1200mV to 600mV (~2W), because the channel is now 20mm not 400mm.
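The pJ/bit column in the table follows directly from power divided by bit rate. A sketch using the table's own module figures:

```python
# Energy per bit from the table above: power divided by bit rate.
def pj_per_bit(power_w: float, gbps: float) -> float:
    """Picojoules per bit = watts / (bits per second), scaled to pJ."""
    return power_w / (gbps * 1e9) * 1e12

dr8 = pj_per_bit(14.5, 800)    # ~18.1 (pluggable 800G-DR8 with DSP)
lpo = pj_per_bit(9.5, 800)     # ~11.9 (linear pluggable)
cpo = pj_per_bit(5.75, 800)    # ~7.2  (midpoint of the 5.5-6.5W range)
print(dr8, lpo, cpo)
```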
AI clusters changed the math
Training GPT-4 class models requires >25,000 GPUs with all-to-all communication every micro-batch. A 100,000 GPU cluster (Meta's planned 2025 cluster, xAI Colossus) needs 800,000 optical links if using 800G per GPU (8×100G). With pluggables at 15W each, that's 12 megawatts just for optics. At $0.08/kWh, that's $8.4M/year in electricity for optics alone, plus 12MW of cooling.
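The cluster-scale arithmetic above, as a sketch using the paragraph's stated assumptions (8 links per GPU, 15W per module, $0.08/kWh):

```python
# Cluster-scale optics power and energy cost (assumptions stated above).
gpus = 100_000
links = gpus * 8                                  # 800,000 optical links
optics_mw = links * 15 / 1e6                      # 12.0 MW continuous draw
annual_usd = links * 15 / 1000 * 8760 * 0.08      # kWh per year x price
print(optics_mw, round(annual_usd / 1e6, 1))      # 12.0 8.4
```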
CPO cuts that by more than half. More importantly, it increases faceplate density: a 1U switch with CPO can support 64×800G (51.2T) with fibers exiting the rear, versus 32×800G with pluggables on front because OSFP cages are 18mm wide. For AI backends, density matters more than serviceability.
5. How CPO Works: Step by Step
Light path: from laser to fiber
1. Laser In: The ELS sends 8 continuous wavelengths (e.g., 1271–1331nm CWDM grid) at +10dBm into the PIC via PM fiber. A fiber coupler splits one physical fiber into 8 waveguides.
Architectural consequence: once the electrical path collapses from board-scale to package-scale, the expensive digital cleanup chain—retimers, large equalizers, and often the optics-side DSP—can be simplified or removed.
2. Split: An on-chip MMI (multimode interferometer) splitter divides each wavelength into 8 copies — one per transmit port. Total 64 paths for an 8×800G engine.
3. Modulate: Each path goes through a Mach-Zehnder modulator. The EIC driver applies a 2V differential signal (the data from the ASIC SerDes). The MZM uses carrier depletion in a PN junction to shift phase by π, creating intensity modulation at 106Gbaud PAM4. No laser is turned on/off — this is external modulation, which is faster and cooler.
4. Multiplex & Out: The 8 modulated wavelengths are combined (if WDM) or kept parallel, then edge-coupled into a single-mode fiber via a spot-size converter. Typical coupling loss: 1.5–2.5 dB.
On receive, the reverse: light hits a germanium photodiode, generates 50–100µA, TIA amplifies to 400mV, directly drives the ASIC SerDes receiver (no CDR).
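The transmit path above implies a simple optical power budget. This is an illustrative sketch: the split, modulator, and fiber losses and the photodiode responsivity are assumed values chosen to be consistent with the figures in this section, not datasheet numbers:

```python
# Illustrative optical power budget for one CPO transmit lane.
# Values marked "assumed" are illustrative, not from a datasheet.
laser_dbm = 10.0        # ELS launch power per wavelength (step 1 above)
split_db = 9.0          # 1:8 split: 10*log10(8) ~ 9dB plus excess loss
modulator_db = 6.0      # MZM insertion loss (assumed)
coupling_db = 2.0       # edge-coupler loss per facet (1.5-2.5dB range above)
fiber_db = 1.0          # short DR-class span plus margin (assumed)

rx_dbm = laser_dbm - split_db - modulator_db - 2 * coupling_db - fiber_db
rx_mw = 10 ** (rx_dbm / 10)

responsivity = 1.0      # Ge photodiode, A/W (assumed)
photocurrent_ua = rx_mw * 1e-3 * responsivity * 1e6
print(round(rx_dbm, 1), round(photocurrent_ua))   # -10.0 100
```

Under these assumptions the received photocurrent lands at the top of the 50–100µA range quoted above, which is exactly what the TIA is sized for.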
Silicon photonics modulators
Two dominant types:
- Mach-Zehnder modulator (MZM): 2–3mm long, broadband, thermally stable. Power ~1.5pJ/bit. Used by Broadcom and Cisco.
- Micro-ring resonator: 10µm radius, wavelength selective, very low power (~0.3pJ/bit), but requires active thermal tuning (±0.1nm/°C drift). Used by Intel and Ayar Labs.
Rings enable dense WDM but are sensitive to the 600W ASIC next door, which is why thermal crosstalk is a core challenge.
Packaging technologies
2.5D Silicon Interposer (TSMC CoWoS, Broadcom): ASIC and optical engines sit side-by-side on a ~2500mm² passive silicon interposer with 2µm line/space. Electrical path <25mm. Mature, but interposer cost is $150–200.
Intel EMIB / Foveros: Uses small silicon bridges embedded in organic substrate instead of full interposer. Lower cost, but higher loss for very high speed. Intel's OCI uses hybrid bonding to stack EIC directly on PIC.
3D Hybrid Bonding (TSMC COUPE, future): PIC wafer bonded face-to-face with EIC wafer using Cu-Cu direct bonds at <1µm pitch. Then bonded to ASIC interposer. Eliminates microbumps, reduces parasitics by 10×, enables >200G/lane. This is the path to 102.4T and 204.8T.
Thermal management
The PIC must stay below 85°C for wavelength stability; ideally 70–75°C. The ASIC junction is 105°C. Solution: separate thermal zones. The switch ASIC gets a direct liquid cold plate (TIM <0.1 K/W). The optical engines sit on the same interposer but are thermally isolated by a gap in the cold plate, or use a secondary copper spreader. The ELS is kept at 45–55°C on the chassis air inlet. Broadcom's Bailly uses a split cold plate and reports <5°C delta across the PIC under full load.
For facility planners, the hard part is not just absolute temperature, but thermal density. A CPO optical engine may dissipate only ~20–35W, but over a tiny footprint on the order of a few tens of mm² that can translate into local heat flux in the tens of W/cm², high enough that it behaves more like a hotspot than a traditional motherboard peripheral. A standard server CPU spreads much larger power over a far larger package and heat spreader area; the optical engine instead sits next to a 500–800W-class ASIC inside the same package neighborhood. That is why "separate thermal zones" on the interposer are difficult in practice: the problem is not cooling one hot thing, but preventing thermal gradients from shifting photonic wavelength alignment while the neighboring ASIC is under load.
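The heat-flux point above can be made concrete. A sketch in which the 30W/50mm² engine and 350W/5000mm² CPU lid are assumed illustrative values within the ranges stated:

```python
# Heat-flux comparison: CPO optical engine vs a server CPU (illustrative;
# the specific power and area values are assumptions, not measurements).
def heat_flux_w_per_cm2(power_w: float, area_mm2: float) -> float:
    return power_w / (area_mm2 / 100.0)   # 100 mm^2 = 1 cm^2

engine_flux = heat_flux_w_per_cm2(30, 50)     # tens of W/cm^2
cpu_flux = heat_flux_w_per_cm2(350, 5000)     # single-digit W/cm^2
print(round(engine_flux), round(cpu_flux))    # 60 7
```

Roughly an order of magnitude more flux than a CPU lid, concentrated millimeters from a kilowatt-class neighbor: that is the hotspot problem in one line.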
6. Examples and Real Products (Detailed)
Broadcom Bailly (Shipping)
The first production CPO switch. Announced OFC 2023, in volume with Tier 1 clouds (Meta, ByteDance, Microsoft) since Q4 2024.
- ASIC: StrataXGS Tomahawk 5 - 51.2T, 5nm
- Optics: 8× 6.4Tbps silicon photonic engines (Broadcom in-house), each 8×800G-FR4
- Ports: 32× OSFP physical (with fiber pigtails), configurable as 64×400G
- Power: 5.5W per 800G vs 14.5W pluggable — 30% system saving (~350W)
- Packaging: TSMC CoWoS-S, 2700mm² interposer
- Laser: External Lumentum 8-wavelength CWDM ELS, hot-swap
Intel OCI - Optical Compute Interconnect (Demo)
Sampling 2025. Not a switch, but chip-to-chip optical I/O. Demonstrated at OFC 2024 running live traffic.
- Throughput: 4 Tbps bidirectional, i.e., 2 Tbps each direction (64 lanes × 32 Gbps NRZ, or 32 lanes × 64G PAM4)
- Tech: Intel 3 CMOS + integrated silicon photonics on same die (monolithic), using micro-ring resonators
- Pitch: Hybrid-bonded to CPU/GPU tile
- Target: Disaggregated CPU, GPU-to-GPU scale-up for Gaudi and Falcon Shores
- Partners: Working with TSMC and UMC for foundry offering
TSMC COUPE - Compact Universal Photonic Engine
TSMC's foundry platform, not a product, but enables everyone else. Announced 2023.
- Gen1 (2025): 1.6T engine (8×200G) on CoWoS, EIC on PIC via SoIC bonding
- Gen2 (2026): 6.4T engine, co-packaged with 3nm logic
- Gen3 (2027): 12.8T, using 3D stacking, targeting Nvidia Rubin Ultra NVLink
- Customers: Broadcom (next-gen Bailly), Nvidia, AMD, MediaTek
Nvidia (Roadmap)
Nvidia has not shipped CPO yet, but is the most aggressive adopter for AI.
- Spectrum-X: CPO version of Spectrum-4 51.2T planned 2026, using TSMC COUPE
- NVLink: NVLink-C2C optical using Ayar Labs TeraPHY demonstrated with Grace CPU, targeting 2027 for scale-out
- NVLink-Network context: If Nvidia pushes toward an optical NVLink fabric, CPO is the likely physical layer that allows the boundary between "local" and "global" GPU memory to soften. Once memory traffic can move optically with lower pJ/bit and higher radix, scale-up and scale-out begin to converge architecturally.
- Why: Their 100k GPU clusters would require >1M pluggables. Power budget impossible without CPO.
Cisco with Acacia (Pilot)
- Platform: Silicon One G200 51.2T with 8× 6.4T Acacia CPO engines
- Status: Customer trials with major US cloud 2024-2025
- Differentiator: Uses Acacia's high-power DFB integration and advanced FEC
Ayar Labs TeraPHY (Sampling)
- Product: Optical I/O chiplet, 2.048 Tbps (8 fibers × 8 λ × 32G)
- Laser: SuperNova external multi-wavelength source (8 λ per fiber)
- Integration: Intel Agilex FPGA, GlobalFoundries Fotonix, Lockheed defense, Nvidia prototype
- Status: Sampling Gen2 2025, production 2026
Others
- Marvell: 51.2T Teralynx 10 with a CPO option, using GF Fotonix.
- Lightmatter Passage: a photonic interposer for chiplet interconnect; raised $400M, focused on AI training.
- Ranovus: Odin analog CPO platform for low-power 800G.
- POET: an optical interposer for cost-sensitive markets.
7. Companies and Ecosystem Map
| Layer | Leaders | Role |
|---|---|---|
| Foundry | TSMC, GlobalFoundries, Intel Foundry, Tower Semiconductor | Manufacture PIC and EIC, provide PDKs |
| PIC Designer | Broadcom, Intel, Cisco Acacia, Marvell, Ayar Labs | Design silicon photonics circuits |
| Laser (ELS) | Lumentum, Coherent (II-VI), Source Photonics, Furukawa | High-reliability CW lasers |
| Packaging/OSAT | TSMC (CoWoS), ASE, Amkor, SPIL | Interposer assembly, fiber attach |
| System Vendor | Arista, Cisco, Juniper, Nvidia, Celestica (for Meta/Google) | Build switches, own optics now |
The critical shift: value moves from transceiver vendors (Finisar, Innolight) to ASIC vendors and foundries. Hyperscalers like Meta and Google are driving specs directly to Broadcom and TSMC, bypassing traditional optics supply chain.
8. Challenges (Honest)
Reliability
Telcordia GR-468 requires 20-year life. A pluggable fails, you swap it. A CPO engine fails, you replace the entire $40k switch. Laser MTBF is >500k hours at 45°C, but only ~50k at 85°C. Hence ELS must stay cool. Fiber attach epoxy outgassing and hermeticity are unsolved at scale — moisture corrodes the PIC edge couplers.
What makes CPO hard: it solves the electrical problem by turning optics into a packaging, reliability, and serviceability problem. That trade is attractive for AI fabrics, but it is not frictionless.
Serviceability
Data centers are designed for field-replaceable optics. CPO requires "blind-mate" fiber connectors at the back of the rack and whole-switch replacement. Google published that their CPO switch MTBF target is 3× higher than pluggable to compensate. Longer term, this also opens a path to robotic fiber handling and automated patching systems of the kind associated with vendors like Telescent and large manufacturing integrators such as Celestica: once optics move away from hand-swapped front-panel modules, the cable plant itself becomes a candidate for automation.
Thermal crosstalk
A 600W ASIC creates a 20°C gradient across a 70mm package. Silicon ring resonators drift 0.09nm/°C. For 8-channel DWDM with 3.2nm spacing, that's catastrophic. Solutions: MZMs instead of rings (Broadcom), or active heaters burning 2–3W extra (Intel), which partially negates power savings.
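The ring-drift arithmetic above, as a sketch using this paragraph's own numbers:

```python
# Ring-resonator drift vs DWDM channel spacing (numbers from this paragraph).
drift_nm_per_c = 0.09
gradient_c = 20.0
channel_spacing_nm = 3.2

drift_nm = drift_nm_per_c * gradient_c          # ~1.8 nm of drift
fraction = drift_nm / channel_spacing_nm        # over half a channel
print(round(drift_nm, 1), round(fraction, 2))   # 1.8 0.56
```

Uncorrected, the ring wanders across more than half a channel width, which is why the choice is MZMs or always-on heater power.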
Test and yield
You cannot test a PIC at wafer sort with full optical performance. Known-good-die (KGD) for photonics requires expensive optical probing. Then you hybrid-bond it to an EIC — if one fails, you lose both. Combined yield for CPO engine is currently 70–80%, vs >99% for pluggable modules. That drives cost.
Standards
OIF has a CPO implementation agreement for 3.2T (2023) and 6.4T (2024), but no interoperability yet. COBO (Consortium for On-Board Optics) stalled. Every vendor's ELS connector, fiber pitch, and management interface is proprietary. This is like Ethernet in 1982 — it will take 3–5 years to standardize.
9. CPO vs Alternatives
| Metric | Pluggable OSFP | On-Board Optics (OBO) | Near-Packaged (NPO) | Co-Packaged (CPO) |
|---|---|---|---|---|
| Electrical reach | 400mm | 75mm | 30mm | 15mm |
| Power / 800G | 14–16W | 9–11W | 7.5–9W | 5.5–7W |
| Serviceability | Field replaceable | Board replace | Board replace | Switch replace |
| Density | 32 ports/U | 32 ports/U | 48 ports/U | 64 ports/U |
| Maturity | High | Medium | Low | Early prod 2025 |
OBO and NPO are stepping stones. OBO moves optics to mid-board (still 75mm copper). NPO (used by Arista) puts optics on the same substrate but a few cm away. CPO is the end-state for >224G lanes.
10. The Future: What Happens Next
By 2027–2028, CPO will be the default for AI fabric switches. Broadcom's next Bailly+ and Nvidia's Spectrum-X CPO will target 102.4T (1,024 lanes of 100G, or 512 lanes of 200G). This requires 224G SerDes, which only works with <20mm electrical reach. Pluggables at 224G would need DSPs burning >20W per 1.6T module — thermally impossible in OSFP.
Strategic implication: the closer optics move toward the compute package, the more networking, packaging, thermal design, and memory-system architecture start to converge into one engineering problem.
Two trends will emerge:
1. Linear Drive and Co-Packaged Lasers: First-gen CPO uses external lasers for reliability. Second-gen (2027) will use heterogeneous integration — bonding InP laser dies directly onto the silicon PIC (Intel and TSMC roadmap). This requires better thermal isolation but removes the ELS fiber management.
2. Architecture Change: Data centers will move from front-panel pluggables to rear "optical backplanes" with blind-mate multi-fiber connectors (MPO-16). Switches become sealed liquid-cooled units with no field-serviceable parts. This aligns with rack-scale liquid cooling already required for 120kW AI racks (Nvidia GB200). It also creates a future opportunity for robotic fiber switching and automated cable choreography, because rear blind-mate optics are easier to treat as managed infrastructure than thousands of manually serviced front-panel modules.
The impact is bigger than switches. CPO enables optical scale-up networks where GPUs talk directly via light — breaking the electrical SerDes power wall inside the server. Ayar Labs and Lightmatter are targeting this for 2026–2028. If successful, the entire concept of a "network interface card" changes.
For investors and engineers: this is not hype. The physics is settled. The question is not if CPO wins, but who captures the value — the ASIC incumbents (Broadcom, Nvidia), the foundries (TSMC), or new photonics startups.
11. Beginner Takeaways
- CPO moves the optics inside the switch. Instead of 15 inches of copper to a pluggable, it's 0.5 inches to a silicon photonics chip bonded next to the ASIC.
- It's about power, not speed. At 224Gbps, copper loses too much signal. CPO saves ~8W per 800G link by eliminating DSPs, retimers, and long drivers — critical for 100,000 GPU clusters.
- Lasers stay outside. Because lasers die in heat, they live in a separate, cool, hot-swappable box called an ELS.
- It's shipping now. Broadcom Bailly CPO switches are in production at Meta and others since late 2024. Intel, TSMC, Nvidia, Cisco will follow 2025–2026.
- It changes the industry. Switch vendors now own optics, transceiver vendors lose socket, and data centers must switch to liquid cooling and whole-switch replacement models.
Sources & Method
Analysis compiled from OIF 3.2T CPO Implementation Agreement (2023), Broadcom OFC 2023/2024 presentations on Bailly, Intel OCI demo at OFC 2024, TSMC 2023-2024 Technology Symposiums on COUPE, Ayar Labs product briefs, LightCounting market data (2024), Cisco Acacia whitepapers, and IEEE 802.3ck/dj channel specifications. Power numbers are system-level measurements from published tear-downs and vendor datasheets, not theoretical. All acronyms expanded on first use per style guide.