Part 1 of 2

The Photonics Stack: Who Builds What for AI Networking

AI clusters need extraordinary amounts of light. Not metaphorically — literally. Every GPU-to-GPU communication in a training cluster travels as photons over fiber. The companies that generate, shape, modulate, route, amplify, and test those photons are the unseen infrastructure of the AI era. This is a technical map of eight of them: what each one actually builds, where it sits in the stack, and why its piece of the problem is hard.

Manish KL · April 2026 · ~22 min read · Industry Technical Primer
In this essay
  1. Why photonics is now a systems problem
  2. A map of the full stack
  3. Corning (GLW) — the glass that carries everything
  4. Applied Optoelectronics (AAOI) — vertically integrated transceiver manufacturing
  5. Coherent (COHR) — deep vertical stack, from crystal to coherent module
  6. Lightwave Logic (LWLG) — the polymer bet against silicon modulators
  7. Marvell (MRVL) — the DSP and silicon photonics engine layer
  8. Ciena (CIEN) — coherent networking systems and the WaveLogic DSP lineage
  9. Arista (ANET) — the switching fabric and AI network operating system
  10. Viavi Solutions (VIAV) — the test and measurement layer that validates everything
  11. How the pieces fit: a stack view
  12. What Part 2 covers

Why photonics is now a systems problem

For most of computing history, the optical link was a well-defined component: a transceiver module that plugged into a switch port, converted electrical signals to light, sent them down a fiber, and converted them back. The system engineer's job was to choose the right form factor, hit the power budget, and not think too hard about what was inside the module.

AI infrastructure has dissolved that clean abstraction. A dense GPU training cluster needs thousands of high-speed links operating simultaneously, at bandwidths that have doubled every two years and are now approaching 1.6 terabits per second per port. At that scale, the physical properties of light — its power budget, its polarization, its modulation format, its interaction with the semiconductor substrate — become first-order design constraints that the systems engineer can no longer ignore.
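That cadence compounds quickly. As a back-of-the-envelope sketch (assuming, purely illustratively, 100G ports in 2016 and a strict two-year doubling):

```python
# Back-of-the-envelope port-speed cadence: assume (illustratively) 100G
# ports in 2016 and a strict doubling every two years.
def port_speed_gbps(year, base_year=2016, base_gbps=100):
    doublings = (year - base_year) // 2
    return base_gbps * 2 ** doublings

for y in range(2016, 2026, 2):
    print(y, port_speed_gbps(y), "Gb/s")
# ends at 2024 -> 1600 Gb/s, i.e. 1.6 Tb/s per port
```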

The question of how to generate light, shape it, modulate data onto it, package it efficiently, switch it, and validate it is no longer a component-selection question. It is a systems architecture question. And it has spawned a rich industrial ecosystem of companies that each own a distinct piece of the answer.

The GPU is the headline of the AI cluster. The photonic interconnect is the nervous system. You cannot train a model across 100,000 GPUs if any part of the light path fails.

This essay maps that ecosystem across eight companies — Corning, Applied Optoelectronics, Coherent, Lightwave Logic, Marvell, Ciena, Arista, and Viavi Solutions — and explains what each one actually builds, what the hard technical problem is at their layer of the stack, and how their products connect to the others.

This is Part 1. It covers all eight companies at the product and technology level. Part 2 will go deeper on the architectural transitions — co-packaged optics, linear pluggable optics, silicon photonics versus InP, and where the stack is heading as it approaches 3.2T and beyond.

A map of the full stack

Before diving into individual companies, it helps to have a mental model of the stack. An AI cluster's optical infrastructure can be thought of in six layers, each of which has distinct technical requirements and different companies competing for it.

[Figure 1 diagram] The six-layer AI photonics stack, from physical fiber at the bottom to the network control plane at the top:
  Layer 6 — Network OS & control plane: EOS, CloudVision, AVA telemetry, traffic scheduling
  Layer 5 — Switching fabric: spine/leaf ASICs, 800G/1.6T switching, VOQ, RDMA congestion control
  Layer 4 — DSP & coherent signal processing: PAM4 DSP, coherent modem IC, TIA, laser drivers, silicon photonics PIC
  Layer 3 — Transceiver modules: OSFP/QSFP-DD packaging, pluggables, LPO, CPO/NPO integration
  Layer 2 — Laser & light source generation: InP DFB, VCSEL, EML, CW lasers for silicon photonics, EO polymer modulators
  Layer 1 — Physical fiber & cable infrastructure: ultra-low-loss SMF, micro cable, multicore fiber, connectivity hardware
  Test & validation spans all layers: OTDR, BERT, 1.6T traffic generation, transceiver qualification, SiPh wafer test, fiber connector inspection
  Company mapping: GLW (L1); AAOI, COHR, LWLG (L2–L3); MRVL, CIEN (L3–L4); ANET (L5–L6); VIAV (all layers)
Figure 1. The six-layer AI photonics stack. Each company in this essay occupies a distinct slice. The test layer (VIAV) spans all of them. No single company owns the full stack — the interdependencies are tight, and the failure mode of any layer affects all layers above it.

With that map in place, let's go through each company in order from the physical substrate up.

Corning GLW Layer 1 — Physical fiber & cable

Corning is the company that made 1.3 billion miles of optical fiber — enough to wrap the Earth roughly 52,000 times. It holds the dominant position in manufacturing the glass that physically carries light in every major telecommunications network and, increasingly, in every major AI data center.

The reason Corning is difficult to displace is not simply scale. It is that manufacturing ultra-low-loss optical fiber is a materials science problem of extraordinary subtlety. The silica glass in a single-mode optical fiber must be pure to within a few parts per billion of contaminant — any impurity introduces absorption or scattering that attenuates the optical signal over distance. Corning's Outside Vapor Deposition process for laying down this glass with the required purity and refractive index profile is the product of decades of proprietary process development.

What Corning actually makes

The core product is optical fiber — specifically, single-mode fiber (SMF) for long-distance data center interconnect and multimode fiber for shorter intra-rack and inter-rack links. But for AI data centers, the design requirements are different from what telecommunications networks need.

AI clusters are extremely fiber-dense. A single hyperscale AI data center may have millions of individual fiber terminations, connecting GPU racks to spine switches in a full or near-full mesh. The physical challenge is running that many fibers through limited floor space and rack-unit budgets without sacrificing signal quality or serviceability. Corning has responded to this with two new product directions aimed squarely at AI density requirements.

Contour™ Flow Micro Cable: Half the diameter of conventional optical cables at equivalent fiber count. Enables higher fiber density in AI rack cable trays and reduces congestion in data center pathways — critical when a single AI fabric requires thousands of individual fiber runs.

Multicore Fiber (MCF): Multiple cores within a single fiber cladding, delivering 4x capacity per fiber strand with 75% fewer connectors and 70% less cable mass. Demonstrated at OFC 2026 as part of the GlassWorks AI Solutions portfolio for ultra-dense AI fabrics.

Connectivity hardware: Corning's preterminated MPO/MTP trunk cables, splice closures, patch panels, and structured cabling systems — the physical plumbing that terminates and manages fiber runs at scale inside an AI data center.

In January 2026, Corning and Meta announced a multiyear supply agreement of up to $6 billion covering optical fiber, cable, and connectivity solutions for Meta's AI data centers. As part of the agreement, Corning broke ground on a major expansion of its optical cable manufacturing capacity in Hickory, North Carolina. The deal is significant not only for its scale but for what it signals architecturally: Corning's CEO noted that AI data centers require up to ten times more optical fiber than traditional cloud computing environments, and that as GPU counts per rack scale into the hundreds, the transition from copper to fiber for intra-rack connectivity is inevitable, given fiber's superior economics and power efficiency at that density.

The hard problem at Layer 1: Running millions of fiber terminations in a dense GPU rack environment requires not just fiber that works, but fiber that can be handled, bent, and managed in extremely tight spaces without introducing insertion loss, modal noise, or signal degradation. The physics of light guiding in thin glass and the mechanical engineering of dense cable management are both pushed to their limits in a modern AI fabric.
Applied Optoelectronics AAOI Layers 2–3 — Laser manufacturing + transceiver modules

Applied Optoelectronics is one of the few companies in the industry that manufactures its own Indium Phosphide (InP) laser chips and uses them in its own transceiver modules — a vertical integration approach that most of its competitors do not match. Where the majority of transceiver manufacturers source their laser chips from external suppliers, AOI grows, fabricates, and packages its own III-V semiconductor lasers in-house at its Sugar Land, Texas facility.

That vertical integration creates a specific competitive advantage in supply-constrained cycles. When InP laser supply tightens — as it did sharply in 2024 and 2025, when hyperscaler demand for 800G transceivers outstripped the industry's chip manufacturing capacity — AOI can keep shipping because it controls its own chip supply chain. That resilience let the company sidestep the component shortages that plagued the rest of the industry and gain market share from larger, more fragmented competitors.

What AAOI actually makes

The core product is optical transceivers for hyperscale data center use — specifically the units that plug into switch ports and GPU server host adapters, converting electrical signals from the host electronics into light that travels down the fiber to the destination. AOI produces these at 400G, 800G, and now 1.6T data rates.

800G single-mode transceivers: The primary production product as of 2025–2026. OSFP form factor, single-mode fiber, designed for the GPU-to-switch links inside AI training clusters. AOI received its first volume orders for these in late 2025 from major hyperscale customers.

1.6T transceivers: The next-generation product entering volume production in 2026. Delivers twice the bandwidth per port, reducing the number of fibers and ports needed per GPU in the fabric — a significant cost and density advantage at cluster scale.

400 mW InP pump laser (ELSFP): A high-power, narrow-linewidth continuous-wave laser designed to feed silicon photonics-based optical systems that require an external light source. This positions AOI in the co-packaged optics ecosystem, where the laser is physically separated from the modulator on the chip.

The 400 mW laser deserves particular attention because it represents AOI's positioning for the next architectural transition. The 400 mW narrow-linewidth pump laser enables shared and external laser architectures by reliably feeding many silicon photonics lanes or wavelengths from a single centralized source, and stabilizes silicon photonic devices by minimizing wavelength drift and noise in ring modulators and on-chip nonlinear elements. In co-packaged optics architectures, where the laser cannot sit physically on the same hot package as the compute die, an external high-power stable laser like this one becomes the enabling component for the entire design.
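The arithmetic of a shared external light source is worth sketching. The numbers below are illustrative assumptions, not AOI specifications: an ideal 1:N optical splitter costs 10·log10(N) dB, and a single excess-loss term stands in for real component losses.

```python
import math

def per_lane_dbm(laser_mw, n_lanes, excess_loss_db=1.5):
    """Optical power per lane after splitting one CW laser across n_lanes.

    Illustrative model: an ideal 1:N splitter costs 10*log10(N) dB, and
    excess_loss_db stands in for real splitter, coupling, and connector
    losses. None of these numbers are vendor specifications.
    """
    laser_dbm = 10 * math.log10(laser_mw)      # mW -> dBm
    split_loss_db = 10 * math.log10(n_lanes)   # ideal 1:N split
    return laser_dbm - split_loss_db - excess_loss_db

# One 400 mW source feeding 16 silicon photonics lanes:
print(round(per_lane_dbm(400, 16), 1), "dBm per lane")  # ~12.5 dBm (~18 mW)
```

Even after a 16-way split, each lane still sees well over 10 dBm of launch power under these assumptions, which is the point of a high-power centralized source.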

AOI's manufacturing scale target for 2026 is ambitious: the company plans to build a 210,000 square foot manufacturing expansion near its Sugar Land, Texas headquarters, which it has described as the largest planned production capacity for AI-focused datacenter transceivers in the United States. Management has stated an intent to reach combined capacity of 500,000 units per month for 800G and 1.6T products by end of 2026, up from approximately 90,000 units a year prior.

Coherent COHR Layers 2–4 — Deep vertical stack, materials to coherent modules

Coherent is the broadest-stack photonics company in this group. It manufactures its own compound semiconductor wafers (InP, silicon carbide, and gallium arsenide), grows its own laser chips on those wafers, packages those chips into optical engines, and builds finished transceiver modules — all in-house. It also designs its own silicon photonics platforms and has shipped multiple generations of coherent DSP chips for long-distance transmission.

The depth of Coherent's stack is the result of the 2022 merger between II-VI Incorporated (a materials and compound semiconductor company founded in 1971) and Coherent, Inc. (a laser and photonics company with a long history in industrial and telecom applications). The combined entity inherited III-V semiconductor fabrication facilities, silicon photonics foundry capacity, coherent DSP design teams, and system-level integration expertise across datacenter, telecom, and industrial markets.

What Coherent actually makes

The most commercially visible current products are the 800G and 1.6T pluggable transceivers for AI data center use. These come in multiple optical architectures — silicon photonics-based variants using Mach-Zehnder modulators, EML-based variants using Coherent's 200G differential electro-absorption modulated lasers, and VCSEL-based variants for short-reach applications — all in the OSFP form factor that the industry has standardized on for high-port-count switch connections.

At OFC 2026, Coherent demonstrated three distinct technology paths for 1.6T transceivers simultaneously: a silicon photonics PIC implementation using Coherent's 400G pure silicon PN junction Mach-Zehnder Modulator, multiple 1.6T transceivers with different DSP chips from three industry leaders in OSFP form factor, and the new XPO pluggable MSA form factor targeting 12.8T and beyond. Running three architectures in parallel is an indicator of scale — it requires independent design teams, multiple wafer processes, and deep systems integration expertise that most transceiver companies do not have.

800G / 1.6T pluggables (OSFP): Short-reach datacenter links for GPU-to-switch and switch-to-switch connections within an AI training cluster. Multiple optical engine variants (EML, SiPh, VCSEL) covering different reach and power budgets.

ZR/ZR+ coherent modules: Long-distance data center interconnect (DCI) — linking AI clusters across campuses or metro areas. Uses Coherent's InP-based coherent photonic integrated circuits and in-house DSP technology for high spectral efficiency over hundreds of kilometers.

D-EML (200G differential EML): Coherent's own electro-absorption modulated laser chip for 1.6T transceivers, recognized with a 2025 Lightwave Innovation Award. A key enabling component for the 200G-per-lane optical interfaces that the 1.6T generation requires.

Optical Circuit Switch (OCS): A non-mechanical optical switch using liquid-crystal technology that reconfigures fiber paths without converting to electricity. Enables dynamic topology changes in an AI fabric without burning power on electrical re-switching. Coherent shipped OCS systems to seven customers in Q1 FY2026 in configurations up to 320×320.

Silicon Carbide (SiC) power devices: Outside the optical stack — Coherent's 300mm SiC substrate platform for power electronics in electric vehicles and next-generation power supplies. Not directly part of AI networking, but it represents the industrial diversification of Coherent's materials platform.

The coherent long-haul segment is where Coherent's InP photonic integration expertise is most visible. Coherent's InP expansion to six-inch wafers improves cost, yield, and supply resiliency for AI optics: a six-inch wafer has 2.25 times the area of a four-inch wafer, so each wafer run yields substantially more devices, which directly reduces unit cost at the volume levels that hyperscaler demand now requires.

The hard problem at Coherent's layer: Building a vertical stack from III-V semiconductor wafers through to finished system-level modules requires maintaining process expertise across multiple semiconductor technologies simultaneously — InP for long-reach coherent, silicon photonics for dense short-reach, SiC for power — while integrating them into products that meet the increasingly tight power, space, and signal-integrity budgets of dense AI clusters. No other company in this space has attempted this breadth of vertical integration.
Lightwave Logic LWLG Layer 2 — Electro-optic polymer modulator platform

Lightwave Logic occupies one of the most technically interesting and strategically distinct positions in the stack. While the rest of the industry fights over who can build the best InP lasers, silicon photonics platforms, or DSP chips, Lightwave Logic is working on a fundamentally different approach to the modulator — the device that encodes data onto the light carrier.

Most optical modulators today rely on semiconductor materials — either the plasma dispersion effect in silicon, or the Franz-Keldysh effect in InP-based electroabsorption modulators. These approaches have well-understood limitations: silicon's electro-optic coefficient is relatively weak, requiring long modulator waveguides or high drive voltages to achieve the required phase shift; EMLs require III-V semiconductor processes with all their associated cost and complexity.

Lightwave Logic's approach is to use engineered organic polymers — specifically, its proprietary Perkinamine® material family — as the electro-optic layer. Organic electro-optic polymers can in principle achieve electro-optic coefficients 10x to 30x larger than silicon, which means much shorter modulators, much lower drive voltages, and much lower power consumption for equivalent bandwidth. The modulator bandwidth ceiling also moves dramatically higher: Perkinamine-based devices have been demonstrated at 110 GHz bandwidth, which enables 400G-per-lane operation that would otherwise require exotic and expensive semiconductor processes.
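The length-versus-coefficient tradeoff follows from the textbook Pockels relation for a Mach-Zehnder modulator, Vπ·L = λ·g/(n³·r33·Γ). Silicon's plasma-dispersion effect is not a Pockels effect, so treat this as scaling intuition only; every parameter value below is an illustrative assumption, not device data.

```python
def mzm_length_mm(r33_pm_per_v, n=1.7, wavelength_nm=1310,
                  gap_um=5.0, gamma=0.7, v_pi_volts=1.0):
    """Mach-Zehnder arm length needed to reach a target half-wave voltage.

    Textbook relation: V_pi * L = wavelength * gap / (n^3 * r33 * Gamma).
    All parameter values here are illustrative assumptions, not device data.
    """
    v_pi_times_L = (wavelength_nm * 1e-9) * (gap_um * 1e-6) / (
        n**3 * (r33_pm_per_v * 1e-12) * gamma)      # units: V*m
    return v_pi_times_L / v_pi_volts * 1e3          # mm at the target V_pi

# A 10x larger electro-optic coefficient shrinks the device 10x at the
# same drive voltage (everything else held equal):
weak = mzm_length_mm(r33_pm_per_v=30)     # LiNbO3-class coefficient
strong = mzm_length_mm(r33_pm_per_v=300)  # polymer-class claim (assumed)
print(round(weak / strong, 1))            # -> 10.0
```

The inverse-linear scaling is the whole argument: a larger r33 buys some combination of shorter waveguides, lower drive voltage, and lower power, which is exactly the pitch for the polymer approach.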

The technical approach and current status

The key manufacturing advantage of the polymer approach is compatibility with standard silicon photonics foundry processes. Lightwave Logic intends to co-develop custom Perkinamine® polymer material optimized for AI scale-up and scale-out, co-develop technical solutions for 400Gb/s CPO applications, and produce a Process Design Kit (PDK) for standard silicon photonics foundry processes covering modulator design, testing, packaging and assembly processes. If Perkinamine can be spun onto standard silicon wafers using existing foundry tooling rather than requiring dedicated III-V epitaxy, it potentially offers a path to lower-cost, scalable modulator production.

The current state of Lightwave Logic is pre-revenue commercialization: the company is executing a multi-stage design win cycle with Fortune Global 500 customers. Key 2026 milestones include building, processing, and testing Silicon Photonics PICs augmented with Perkinamine® polymers to achieve a final product targeted for deployment within a hyperscale data center or AI factory, with later phases validating high manufacturing process yields and establishing volume production capacity.

As of early 2026, four Fortune Global 500 customers had advanced to Stage 3 of Lightwave Logic's design win cycle, the prototype-to-product phase of development. A further commercial avenue opened in March 2026, when Lightwave Logic announced integration of its polymer modulator designs into Tower Semiconductor's PH18 silicon photonics PDK, followed by an integration into GlobalFoundries' silicon photonics platform — meaning the Perkinamine modulator is becoming available as a standard building block in two of the industry's major foundry ecosystems.

What to watch: The central question for Lightwave Logic is whether Perkinamine's electro-optic performance advantage survives the transition from research-grade devices to manufacturing-grade conditions — particularly temperature stability over time, photo-oxidation resistance, and consistent device yield across wafer runs. These have historically been the failure modes for prior generations of polymer electro-optic devices. The company's characterization of 2025 as an "execution year" spent expanding reliability datasets on exactly these concerns reflects the right technical focus.
Marvell Technology MRVL Layers 3–4 — DSP, silicon photonics light engines, custom AI silicon

Marvell's role in the photonics stack is at the intersection of semiconductor design and optical systems. It does not manufacture its own optical fiber or grow its own III-V laser chips — instead, it designs the digital signal processing chips and silicon photonics integrated circuits that sit at the heart of optical modules and process the high-speed electrical signals that drive them.

The key product line is the PAM4 DSP family. A PAM4 (four-level Pulse Amplitude Modulation) DSP is the chip inside an optical transceiver module that serializes incoming digital data into high-speed electrical waveforms, compensates for signal degradation, drives the laser or modulator, and — on the receive side — recovers and error-corrects the incoming optical signal. The PAM4 DSP is the silicon brain of a pluggable transceiver, and Marvell is one of the two or three companies worldwide that design these chips at leading-edge performance and process nodes.
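A minimal sketch of the encoding side, ignoring FEC overhead (real 200G lanes run slightly above 100 GBd to carry it):

```python
# Gray-coded PAM4: each symbol is one of four amplitude levels carrying
# two bits, so the symbol (baud) rate is half the bit rate. Adjacent
# levels differ by a single bit, limiting damage from level confusion.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 1, 1, 1, 0]))  # -> [-3, 1, 3]

line_rate_gbps = 200                # one electrical lane of a 1.6T module
baud_gbd = line_rate_gbps / 2       # 2 bits per PAM4 symbol
print(baud_gbd, "GBd")              # -> 100.0 GBd (before FEC overhead)
```

Everything the DSP does beyond this mapping — equalization, clock recovery, forward error correction — exists to make those four closely spaced levels distinguishable after a lossy channel.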

What Marvell actually makes

Ara (3nm PAM4 DSP): Industry's first 3nm process node PAM4 DSP for 1.6T optical transceivers. Combines eight 200 Gbps channels inside a single optical module, enabling rapid deployment of AI scale-out fabrics across rows and data halls. The 3nm process node delivers significant power and latency advantages over competing designs.

Nova / Nova 2 (PAM4): 800G PAM4 DSP family for AI and ML data center networks. Supports both Ethernet and InfiniBand architectures — the same silicon can power modules going into different fabric types.

1.6T SiPh light engine: A silicon photonics light engine integrated into a linear-drive pluggable optics (LPO) module supporting 1.6T DR8. Consumes less than 5 picojoules per bit including laser power, and consolidates hundreds of components — modulators, photodetectors, drivers, and microcontrollers — in a single package.

6.4T CPO light engine: Co-packaged optics light engine for the next generation of switch ASICs. Integrates optical I/O directly with the switching silicon, eliminating the pluggable transceiver form factor entirely for the highest-bandwidth applications.

Aquila (coherent-lite DSP): 1.6T O-band optimized coherent-lite platform for mid-range reach applications — filling the gap between direct-detect PAM4 (intra-datacenter) and full coherent (metro/long-haul).

Custom AI accelerators: Marvell designs custom AI accelerator silicon for cloud customers, most notably under a multi-generational agreement with AWS covering custom AI compute, optical DSPs, DCI modules, and Ethernet switching silicon. This is the fastest-growing part of Marvell's AI revenue.

The silicon photonics light engine work is particularly noteworthy because it positions Marvell not just as a DSP supplier but as a photonic integration platform. As the foundation for co-packaged optics systems, Marvell introduced the 3D Silicon Photonics Engine, which integrates hundreds of optical-communication components into a single device, delivering twice the bandwidth while significantly reducing power consumption compared to similar devices.
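Energy-per-bit figures translate directly into module power, which is why the sub-5 pJ/bit number quoted for the 1.6T light engine matters: at full line rate the optical engine's budget is only a handful of watts.

```python
def engine_power_watts(line_rate_tbps, pj_per_bit):
    """Tb/s times pJ/bit gives watts directly, since 1e12 * 1e-12 = 1."""
    return line_rate_tbps * pj_per_bit

# The sub-5 pJ/bit figure quoted for the 1.6T light engine implies:
print(engine_power_watts(1.6, 5.0), "W")  # -> 8.0 W including laser power
```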

Ciena CIEN Layers 4–5 — Coherent DSP, optical networking systems

Ciena occupies a unique position in this stack: it is primarily a systems company, not a component company. Where Coherent and Marvell sell chips and modules to transceiver manufacturers and integrators, Ciena sells complete optical networking systems — platforms that combine its proprietary coherent DSP technology with line systems, ROADMs, and switching hardware into deployable network infrastructure.

The cornerstone of Ciena's technology is the WaveLogic coherent DSP family, which the company has been developing and iterating for nearly two decades. WaveLogic is Ciena's proprietary modem chip — it performs the sophisticated digital signal processing required to push coherent optical signals over long fiber spans at the highest possible spectral efficiency. Unlike PAM4, coherent modulation uses the full complex optical field (both amplitude and phase) to encode information, allowing it to carry far more bits per hertz of bandwidth, correct for accumulated fiber impairments, and achieve transcontinental reach without signal regeneration.
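The spectral-efficiency advantage can be sketched with idealized numbers (real coherent modems add FEC overhead and use probabilistic constellation shaping, so actual symbol rates and formats differ):

```python
import math

def dp_qam_bits_per_symbol(qam_order, polarizations=2):
    """Bits per symbol for dual-polarization M-QAM, ignoring FEC/framing."""
    return polarizations * int(math.log2(qam_order))

def required_baud_gbd(line_rate_gbps, qam_order):
    """Symbol rate needed to hit a target line rate with DP M-QAM."""
    return line_rate_gbps / dp_qam_bits_per_symbol(qam_order)

# DP-16QAM packs 8 bits into every symbol, versus 2 for a PAM4 symbol:
print(dp_qam_bits_per_symbol(16))    # -> 8
print(required_baud_gbd(800, 16))    # -> 100.0 GBd for an 800G carrier
```

Four times the bits per symbol of PAM4 is what lets coherent links carry far more capacity per hertz of fiber bandwidth, at the cost of a much more complex modem.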

What Ciena actually makes

The flagship product as of 2025–2026 is WaveLogic 6 (WL6), which comes in two variants. WL6 Extreme is the platform's coherent chassis-based form, capable of 1.6 Tb/s on a single wavelength — the industry's highest-capacity single-carrier coherent solution, built on a 3nm CMOS process. WL6 Nano is its pluggable counterpart, delivering 800G coherent performance in a compact form factor for deployment directly on routers and switches as metro DCI transceivers.

WaveLogic 6 Extreme (WL6e): 1.6 Tb/s single-wavelength coherent transponder for long-haul and inter-datacenter links. Powers Ciena's WaveRouter, 6500, and Waveserver platforms. Achieved the industry's first 1.6T IP/optical trial transmission in 2025.

WaveLogic 6 Nano (WL6n): 800G coherent pluggable transceiver (QSFP-DD/OSFP) for metro DCI and AI campus interconnect. Demonstrated 800 Gb/s transmission over 1,050 km using a third-party switch, validating its performance outside Ciena's own chassis.

1.6T Coherent-Lite (WL6n CLite): First coherent pluggable transceiver with 8×224G SerDes — 1.6T throughput in a pluggable for campus DCI and intra-datacenter applications. Bridges the gap between direct-detect 1.6T and full coherent, offering longer reach than PAM4 at lower power than full coherent.

WaveRouter / 6500 platforms: Packet-optical convergence platforms that integrate Ciena's coherent line-rate DSP with IP routing capabilities. The WaveRouter became the world's first available 1.6T coherent router in 2025.

Blue Planet software suite: Ciena's network management and automation platform. The 2025 Agentic AI Framework enables AI-driven autonomous network operations — automated fault detection, capacity rebalancing, and path optimization without manual intervention.

In September 2025, Ciena acquired Nubis Communications for $270 million. Nubis specializes in high-performance, ultra-compact, low-power optical and electrical interconnects tailored to support AI workloads, and the acquisition gave Ciena access to Co-Packaged Optics (CPO) and Near-Packaged Optics (NPO) technology that extends its reach from metro and long-haul interconnect into the shorter-reach intra-datacenter applications that AI training fabrics require.

Ciena also demonstrated its path toward 3.2T at ECOC 2025 with a world-first 448G PAM4 driverless optical transmission over 500m of fiber — a collaboration with HyperLight, McGill University, and Keysight that demonstrated 3nm CMOS-based 224G SerDes operating with a sub-volt direct-drive thin-film lithium niobate modulator. This represents the ecosystem's first proof-of-concept for the next generation of data center networking that would enable 3.2T interfaces using 448G-per-lane technology.

Arista Networks ANET Layers 5–6 — Switching fabric and network operating system

Arista is not a photonics company in the materials or components sense. It does not make lasers, fiber, or transceiver chips. What Arista builds is the switch — the device at the center of the AI cluster fabric that every GPU's optical link terminates into, and the software that manages how traffic flows through those switches.

The reason Arista belongs in a photonics essay is that the switch defines the optical interface requirements for every other layer of the stack. When Arista moves its switching platforms to 800G-per-port and then 1.6T-per-port, every transceiver manufacturer, DSP designer, laser supplier, and cable installer must follow. The switching roadmap is the primary demand signal for the rest of the stack.

What Arista actually makes

The core product is the Etherlink™ AI platform, a family of switches specifically engineered for AI training and inference cluster fabrics. These differ from conventional data center switches in several important ways driven by the traffic characteristics of distributed AI training.

AI training workloads generate a small number of very high-bandwidth flows — the all-reduce operations that synchronize gradient updates across GPU nodes during backpropagation. These flows are latency-sensitive in a specific way: if any single link in the communication pattern stalls, the entire training step stalls, because all GPUs must complete the all-reduce before proceeding to the next forward pass. This means conventional best-effort packet scheduling is insufficient — the switch must guarantee that high-priority AI traffic flows are never dropped or significantly delayed by competing traffic.
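A rough model makes that sensitivity concrete. In a standard ring all-reduce, each GPU moves 2·(N−1)/N times the gradient payload per step; the numbers below (10 GB of gradients, 1,024 GPUs, 800 Gb/s links) are illustrative, not drawn from any specific cluster:

```python
def ring_allreduce_bytes_per_gpu(grad_bytes, n_gpus):
    """Bytes each GPU transmits in a ring all-reduce: 2*(N-1)/N * payload."""
    return 2 * (n_gpus - 1) / n_gpus * grad_bytes

def allreduce_floor_seconds(grad_bytes, n_gpus, link_gbps):
    """Bandwidth-only lower bound; one stalled link delays every GPU."""
    bits = ring_allreduce_bytes_per_gpu(grad_bytes, n_gpus) * 8
    return bits / (link_gbps * 1e9)

# Illustrative: 10 GB of gradients, 1,024 GPUs, 800 Gb/s per link.
t = allreduce_floor_seconds(10e9, 1024, 800)
print(round(t * 1e3, 1), "ms best-case per all-reduce")  # ~200 ms
```

The key property is that this is a floor set by the slowest participant: one degraded optical link stretches the all-reduce for all 1,024 GPUs, which is why the switch must treat these flows as never-drop traffic.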

7800R4 AI Spine switch: Flagship modular chassis for large AI training cluster spines. Supports up to 576 ports of 800GbE. Key innovation is Virtual Output Queuing (VOQ), which prevents head-of-line blocking — the failure mode where a single slow destination causes all traffic behind it to stall.

7700R4 AI Distributed Etherlink Switch: Supports the largest AI clusters with massively parallel distributed scheduling and congestion-free traffic spraying based on the Jericho3-AI architecture. Designed for clusters requiring more than 10,000 GPU nodes in a single logical fabric.

Arista EOS® (Extensible Operating System): A single operating system image that runs identically across every Arista switch platform. Single-image consistency means configuration errors — the leading cause of data center outages — cannot result from version mismatches across different hardware families.

CloudVision® with NetDI and CV UNO: Network management platform with AI-powered job-centric observability. CV UNO tracks AI training job completion health at 100-microsecond resolution across the entire fabric, detecting flow-level stalls before they cause training run failures.

Cluster Load Balancing (CLB): An Ethernet-based RDMA load balancing algorithm that operates at the RDMA queue-pair level rather than the flow level. Enables high bandwidth utilization in the sparse-flow, large-burst traffic pattern characteristic of GPU all-reduce operations.

Arista's role in the broader ecosystem extends beyond its own products through its participation in the Ultra Ethernet Consortium (UEC), which it co-founded. Etherlink platforms are forward-compatible with Ultra Ethernet Consortium standards, supporting both current and emerging UEC capabilities that are expected to provide additional performance benefits when UEC-compliant network interface cards become available. The UEC is the industry effort to make standard Ethernet competitive with NVIDIA's proprietary InfiniBand fabric for AI training workloads — and Arista is structurally positioned as the primary beneficiary if that transition succeeds.

Viavi Solutions VIAV All layers — Test, measurement, and validation

Viavi occupies a layer that does not appear in the AI cluster's runtime architecture but is essential to every other layer functioning correctly: test and measurement. Every optical transceiver that goes into a hyperscale data center has been qualified using test equipment. Every fiber run has been measured for insertion loss and return loss. Every switch port has been validated for link integrity. The tools that perform those measurements are Viavi's product portfolio.

The reason test and measurement matters deeply in AI photonics is that the tolerance budgets at 800G and 1.6T are extremely tight. A 1.6T link running at 200G per lane over 8 lanes has no margin for connector contamination, excessive insertion loss, or signal integrity degradation anywhere in the path — a single dirty connector end-face that would have been acceptable at 100G can cause a link failure at 1.6T. At the transceiver manufacturing scale that AI demand requires, automated test at high throughput is not optional; it is the quality gate that determines whether products reach customers in working condition.
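The tightness of that budget is easy to see with back-of-envelope arithmetic. The sketch below assumes round illustrative numbers (launch power, receiver sensitivity, per-connector loss — none taken from a vendor datasheet) for one 200G lane of a short-reach 1.6T link:

```python
# Toy optical power budget for one 200G lane of a 1.6T short-reach link.
# All numbers are illustrative assumptions, not any vendor's spec sheet.
TX_POWER_DBM = 1.0          # assumed launch power per lane
RX_SENSITIVITY_DBM = -5.0   # assumed receiver sensitivity at target BER
FIBER_LOSS_DB_PER_KM = 0.4  # typical SMF attenuation near 1310 nm
REACH_KM = 0.5              # 500 m intra-datacenter run
CONNECTOR_LOSS_DB = 0.5     # per mated connector pair, when clean

def margin_db(n_connectors: int, contamination_db: float = 0.0) -> float:
    """Remaining link margin after fiber, connector, and contamination loss."""
    loss = (FIBER_LOSS_DB_PER_KM * REACH_KM
            + CONNECTOR_LOSS_DB * n_connectors
            + contamination_db)
    return (TX_POWER_DBM - RX_SENSITIVITY_DBM) - loss

print(margin_db(4))        # clean plant: ~3.8 dB of margin
print(margin_db(4, 3.0))   # one contaminated end-face: ~0.8 dB left
```

A single dirty end-face adding a few dB of loss consumes nearly the entire margin — which is why end-face inspection is a manufacturing quality gate rather than a field-troubleshooting afterthought.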

What Viavi actually makes

Product
What it validates
ONE LabPro 1600ER module
1.6T traffic generation and analysis for transceiver characterization and qualification. Recognized with a Lightwave Innovation Award for nine consecutive years. Used by transceiver manufacturers to measure BER, latency, and signal integrity at full 1.6T line rate before products ship.
MAP-300 multi-function test system
Multi-user, multi-function production test system for R&D qualification and volume manufacturing of optical network components and connectivity. Used on the factory floor at transceiver manufacturers to screen parts at scale.
mFVU-3000 connector microscope
Dual 400X high-resolution fiber connector end-face inspection. Magnetic quick-connect adapters with auto-ID simplify setup, and fully automatic inspection reduces test times. Minimizing contamination during manufacturing reduces test failures and improves yield.
NITRO Fiber Sensing / FiberComplete PRO
Distributed fiber sensing and dense datacenter fiber test solutions for monitoring installed fiber infrastructure. OTDR-based systems that locate faults, measure insertion loss, and track fiber plant health over time — essential for maintaining a large AI data center fabric.
TestCenter 1.6T Platform
Network-level traffic generation platform for validating AI fabric switch performance at 1.6T port speeds — testing congestion response, latency under load, and packet drop behavior under the bursty all-reduce traffic patterns that AI training generates.
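Why full-line-rate BER measurement matters is partly a throughput problem: verifying a very low bit error ratio requires a deterministic (and large) number of error-free bits. This is the standard zero-error binomial confidence bound, not a description of any specific Viavi methodology:

```python
import math

def bits_for_zero_error_ber(target_ber: float, confidence: float = 0.95) -> float:
    """Bits that must pass error-free to claim BER < target_ber at the given
    confidence (standard zero-error bound; approximately 3/BER at 95%)."""
    return math.log(1.0 - confidence) / math.log(1.0 - target_ber)

LINE_RATE_BPS = 1.6e12  # a full 1.6T port

for ber in (1e-12, 1e-13, 1e-15):
    seconds = bits_for_zero_error_ber(ber) / LINE_RATE_BPS
    print(f"BER < {ber:g}: {seconds:10.2f} s of error-free traffic")
```

At 1.6 Tb/s a pre-FEC BER floor of 1e-12 can be confirmed in about two seconds, but each additional order of magnitude multiplies test time by ten — one reason testers run at full line rate rather than sampling a fraction of lanes.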

At OFC 2026, Viavi showcased advanced technologies for the validation of next-generation AI fabrics at scale, with demonstrations covering 1.6T Ethernet, transceiver, connectivity and silicon photonic manufacturing solutions, PCIe over optics, automated network test, and fiber sensing. The silicon photonic manufacturing test capability is particularly new and important — as silicon photonics PICs replace discrete optical components inside transceivers, the wafer-level test methodology that works for standard ICs must be adapted for photonic circuits, where the test stimulus is light rather than voltage.

How the pieces fit: a stack view

Mapping each company back to the stack diagram from the beginning of this essay makes the interdependencies visible.

[Diagram: company positions mapped across the six-layer AI photonics stack. GLW occupies Layer 1 (fiber); AAOI and COHR span Layers 2–3 (lasers to modules); LWLG targets Layer 2 (EO polymer modulator); MRVL spans Layers 3–4 (light engine to DSP); CIEN spans Layers 4–5 (coherent DSP to networking systems); ANET owns Layers 5–6 (switching to network OS); VIAV spans all layers as test and validation infrastructure. No company owns the complete stack.]
Figure 2. Company positions across the six-layer AI photonics stack. The bars represent the primary layers each company occupies — width is illustrative of relative scope, not market share. Note that COHR is the deepest vertical stack among the pure photonics companies, and ANET is the only company whose primary product is above the optical component layer.

The most important structural observation from this map is that no single company owns the full stack. The fiber comes from Corning. The laser chips come from AAOI, Coherent, or a handful of other III-V manufacturers. The DSP inside the transceiver comes from Marvell, Coherent's in-house teams, or Broadcom. The transceiver module is assembled by AAOI, Coherent, or one of the major Asian module manufacturers. The switch that the transceiver plugs into is from Arista or Cisco. The long-haul coherent link between datacenters uses Ciena's WaveLogic platform. And everything is validated at every stage by Viavi's test instruments.

That lack of vertical integration across the full stack creates both fragility and resilience. When any one layer hits a supply constraint — as InP laser supply did in 2024–2025 — it bottlenecks every layer above it. But it also means the ecosystem can evolve each layer somewhat independently, and architectural transitions like the shift to co-packaged optics or the adoption of polymer modulators can happen at the component level without requiring every company to rebuild its entire stack.
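The fragility half of that trade-off reduces to simple arithmetic: deliverable link volume is capped by the most constrained layer, regardless of capacity anywhere else. The unit counts below are invented purely to make the min-over-layers structure concrete:

```python
# Toy model of the supply-chain point above: shippable links are capped by
# the most constrained layer. All unit counts are hypothetical.
stack_supply = {                     # buildable units per quarter (invented)
    "L1 fiber":        1_000_000,
    "L2 InP lasers":     400_000,    # a 2024-2025-style laser constraint
    "L3 modules":        900_000,
    "L4 DSPs":           800_000,
    "L5 switch ports":   700_000,
}

bottleneck = min(stack_supply, key=stack_supply.get)
shippable = stack_supply[bottleneck]
print(f"bottleneck: {bottleneck}, shippable links: {shippable}")
# Expanding any other layer changes nothing until L2 supply rises.
```

The resilience half is the converse: because each layer has independent suppliers and interfaces, relieving the binding constraint — or swapping the technology inside one layer — does not force a redesign of the layers above it.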

What Part 2 covers

This essay has mapped what each company builds and where it sits in the stack. Part 2 will go deeper on the architectural transitions that are now reshaping the stack itself.

The three transitions that matter most are co-packaged optics (CPO), which collapses Layers 2 through 4 into the switching silicon package and disrupts the pluggable transceiver model; linear pluggable optics (LPO), which removes the DSP from the transceiver to reduce power at the cost of reduced reach; and the shift from 800G to 1.6T and then 3.2T, which requires fundamental changes to the laser modulation approach, the DSP architecture, and the fiber plant design. Each of these transitions redraws the competitive landscape in ways that will reward different companies than the current pluggable transceiver cycle does.


Manish KL writes about AI infrastructure, memory systems, accelerator architecture, and photonics. Related essays: Photonics Is No Longer a Component Story · Scale-Out Was Yesterday. Scale-Up Optics Is the Next Battle · InP vs Silicon Photonics vs VCSEL: The Materials Stack Behind AI Networking

© 2026 Manish KL