AI clusters need extraordinary amounts of light. Not metaphorically — literally. Every GPU-to-GPU communication in a training cluster travels as photons over fiber. The companies that generate, shape, modulate, route, amplify, and test those photons are the unseen infrastructure of the AI era. This is a technical map of eight of them: what each one actually builds, where it sits in the stack, and why its piece of the problem is hard.
For most of computing history, the optical link was a well-defined component: a transceiver module that plugged into a switch port, converted electrical signals to light, sent them down a fiber, and converted them back. The system engineer's job was to choose the right form factor, hit the power budget, and not think too hard about what was inside the module.
AI infrastructure has dissolved that clean abstraction. A dense GPU training cluster needs thousands of high-speed links operating simultaneously, at bandwidths that have doubled every two years and are now approaching 1.6 terabits per second per port. At that scale, the physical properties of light — its power budget, its polarization, its modulation format, its interaction with the semiconductor substrate — become first-order design constraints that the systems engineer can no longer ignore.
The question of how to generate light, shape it, modulate data onto it, package it efficiently, switch it, and validate it is no longer a component-selection question. It is a systems architecture question. And it has spawned a rich industrial ecosystem of companies that each own a distinct piece of the answer.
The GPU is the headline of the AI cluster. The photonic interconnect is the nervous system. You cannot train a model across 100,000 GPUs if any part of the light path fails.
This essay maps that ecosystem across eight companies — Corning, Applied Optoelectronics, Coherent, Lightwave Logic, Marvell, Ciena, Arista, and Viavi Solutions — and explains what each one actually builds, what the hard technical problem is at their layer of the stack, and how their products connect to the others.
This is Part 1. It covers all eight companies at the product and technology level. Part 2 will go deeper on the architectural transitions — co-packaged optics, linear pluggable optics, silicon photonics versus InP, and where the stack is heading as it approaches 3.2T and beyond.
Before diving into individual companies, it helps to have a mental model of the stack. An AI cluster's optical infrastructure can be thought of in six layers, each of which has distinct technical requirements and different companies competing for it.
With that map in place, let's go through each company in order from the physical substrate up.
Corning has manufactured 1.3 billion miles of optical fiber — enough to wrap the Earth roughly 52,000 times. It holds the dominant position in producing the glass that physically carries light in every major telecommunications network and, increasingly, in every major AI data center.
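The wrap-the-Earth figure checks out with simple arithmetic. A quick sketch, using Earth's equatorial circumference of roughly 24,901 miles (an assumption here, not from the source):

```python
# Sanity check on the fiber-mileage claim. Earth's equatorial
# circumference (~24,901 miles) is an assumed round figure.
EARTH_CIRCUMFERENCE_MILES = 24_901
total_fiber_miles = 1.3e9

wraps = total_fiber_miles / EARTH_CIRCUMFERENCE_MILES
print(f"{wraps:,.0f} Earth circumferences")  # ~52,207
```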
The reason Corning is difficult to displace is not simply scale. It is that manufacturing ultra-low-loss optical fiber is a materials science problem of extraordinary subtlety. The silica glass in a single-mode optical fiber must be pure to within a few parts per billion of contaminant — any impurity introduces absorption or scattering that attenuates the optical signal over distance. Corning's Outside Vapor Deposition process for laying down this glass with the required purity and refractive index profile is the product of decades of proprietary process development.
The core product is optical fiber — specifically, single-mode fiber (SMF) for long-distance data center interconnect and multimode fiber for shorter intra-rack and inter-rack links. But for AI data centers, the design requirements are different from what telecommunications networks need.
AI clusters are extremely fiber-dense. A single hyperscale AI data center may have millions of individual fiber terminations, connecting GPU racks to spine switches in a full or near-full mesh. The physical challenge is running that many fibers through limited floor space and rack-unit budgets without sacrificing signal quality or serviceability. Corning has responded with new product directions aimed squarely at these density requirements.
In January 2026, Corning and Meta announced a multiyear supply agreement of up to $6 billion covering optical fiber, cable, and connectivity solutions for Meta's AI data centers. As part of the collaboration, Corning broke ground on a significant expansion of its optical cable manufacturing capacity in Hickory, North Carolina. The deal is significant not only for its scale but for what it signals architecturally: Corning's CEO noted that AI data centers require up to ten times more optical fiber than traditional cloud computing environments, and that as GPU counts per rack scale into the hundreds, the transition from copper to fiber for intra-rack connectivity is inevitable, given fiber's superior economics and power efficiency at that density.
Applied Optoelectronics is one of the few companies in the industry that manufactures its own Indium Phosphide (InP) laser chips and uses them in its own transceiver modules — a vertical integration approach that most of its competitors do not match. Where the majority of transceiver manufacturers source their laser chips from external suppliers, AOI grows, fabricates, and packages its own III-V semiconductor lasers in-house at its Sugar Land, Texas facility.
That vertical integration creates a specific competitive advantage in supply-constrained cycles. When InP laser supply tightens — as it did sharply in 2024 and 2025, when hyperscaler demand for 800G transceivers outstripped the industry's chip manufacturing capacity — AOI can keep shipping because it controls its own chip supply chain. That insulation from the shortages that plagued the rest of the industry allowed it to gain market share from larger, more fragmented competitors.
The core product is optical transceivers for hyperscale data center use — specifically the units that plug into switch ports and GPU server host adapters, converting electrical signals from the host electronics into light that travels down the fiber to the destination. AOI produces these at 400G, 800G, and now 1.6T data rates.
One product deserves particular attention because it represents AOI's positioning for the next architectural transition: a 400 mW narrow-linewidth pump laser. It enables shared and external laser architectures by reliably feeding many silicon photonics lanes or wavelengths from a single centralized source, and it stabilizes silicon photonic devices by minimizing wavelength drift and noise in ring modulators and on-chip nonlinear elements. In co-packaged optics architectures, where the laser cannot sit physically on the same hot package as the compute die, an external high-power stable laser like this one becomes the enabling component for the entire design.
AOI's manufacturing scale target for 2026 is ambitious: the company plans to build a 210,000 square foot manufacturing expansion near its Sugar Land, Texas headquarters, which it has described as the largest planned production capacity for AI-focused datacenter transceivers in the United States. Management has stated an intent to reach combined capacity of 500,000 units per month for 800G and 1.6T products by end of 2026, up from approximately 90,000 units a year prior.
Coherent is the broadest-stack photonics company in this group. It manufactures its own compound semiconductor wafers (InP, silicon carbide, and gallium arsenide), grows its own laser chips on those wafers, packages those chips into optical engines, and builds finished transceiver modules — all in-house. It also designs its own silicon photonics platforms and operates multiple generations of coherent DSP chips for long-distance transmission.
The depth of Coherent's stack is the result of the 2022 merger between II-VI Incorporated (a materials and compound semiconductor company founded in 1971) and Coherent, Inc. (a laser and photonics company with a long history in industrial and telecom applications). The combined entity inherited III-V semiconductor fabrication facilities, silicon photonics foundry capacity, coherent DSP design teams, and system-level integration expertise across datacenter, telecom, and industrial markets.
The most commercially visible current products are the 800G and 1.6T pluggable transceivers for AI data center use. These come in multiple optical architectures — silicon photonics-based variants using Mach-Zehnder modulators, EML-based variants using Coherent's 200G differential electro-absorption modulated lasers, and VCSEL-based variants for short-reach applications — all in the OSFP form factor that the industry has standardized on for high-port-count switch connections.
At OFC 2026, Coherent demonstrated three distinct technology paths for 1.6T transceivers simultaneously: a silicon photonics PIC implementation using Coherent's 400G pure silicon PN junction Mach-Zehnder Modulator, multiple 1.6T transceivers with different DSP chips from three industry leaders in OSFP form factor, and the new XPO pluggable MSA form factor targeting 12.8T and beyond. Running three architectures in parallel is an indicator of scale — it requires independent design teams, multiple wafer processes, and deep systems integration expertise that most transceiver companies do not have.
The coherent long-haul segment is where Coherent's InP photonic integration expertise is most visible. Coherent's InP expansion to six-inch wafers improves cost, yield, and supply resiliency for AI optics — the transition from four-inch to six-inch wafer processing increases the number of devices per wafer run substantially, which directly reduces unit cost at the volume levels that hyperscaler demand now requires.
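The economics of the wafer transition are mostly geometry: usable device count scales roughly with wafer area, so moving from 100 mm (four-inch) to 150 mm (six-inch) wafers yields about 2.25x the devices per run. A rough sketch that ignores edge exclusion and yield effects:

```python
import math

# Device count per run scales roughly with wafer area
# (edge exclusion and yield ignored in this rough sketch).
def wafer_area_mm2(diameter_mm):
    return math.pi * (diameter_mm / 2) ** 2

ratio = wafer_area_mm2(150) / wafer_area_mm2(100)  # six-inch vs four-inch
print(f"{ratio:.2f}x devices per wafer run")  # 2.25x
```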
Lightwave Logic occupies one of the most technically interesting and strategically distinct positions in the stack. While the rest of the industry fights over who can build the best InP lasers, silicon photonics platforms, or DSP chips, Lightwave Logic is working on a fundamentally different approach to the modulator — the device that encodes data onto the light carrier.
Most optical modulators today rely on semiconductor materials — either the plasma dispersion effect in silicon, or the Franz-Keldysh effect in InP-based electroabsorption modulators. These approaches have well-understood limitations: silicon's electro-optic coefficient is relatively weak, requiring long modulator waveguides or high drive voltages to achieve the required phase shift; EMLs require III-V semiconductor processes with all their associated cost and complexity.
Lightwave Logic's approach is to use engineered organic polymers — specifically, its proprietary Perkinamine® material family — as the electro-optic layer. Organic electro-optic polymers can in principle achieve electro-optic coefficients 10x to 30x larger than silicon, which means much shorter modulators, much lower drive voltages, and much lower power consumption for equivalent bandwidth. The modulator bandwidth ceiling also moves dramatically higher: Perkinamine-based devices have been demonstrated at 110 GHz bandwidth, which enables 400G-per-lane operation that would otherwise require exotic and expensive semiconductor processes.
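The voltage advantage follows from a standard textbook relation: for a Pockels-effect modulator, the half-wave voltage–length product Vπ·L scales inversely with the material's electro-optic coefficient. Silicon's plasma-dispersion modulators have no Pockels coefficient, so the natural reference is lithium niobate; the coefficients below are round literature-style values for illustration, not device specs:

```python
# Pockels-effect modulator scaling (simplified textbook relation):
#   Vpi * L  proportional to  lambda * d / (n^3 * r33 * Gamma)
# At fixed geometry and wavelength, a 10x larger r33 means a 10x
# smaller Vpi*L: the same phase shift from a shorter device or a
# lower drive voltage. Coefficients are illustrative assumptions.
r33_linbo3 = 31    # pm/V, lithium niobate reference value
r33_polymer = 300  # pm/V, an engineered EO polymer (assumed)

# Relative Vpi*L, normalized to lithium niobate = 1.0
relative_vpi_l = r33_linbo3 / r33_polymer
print(f"polymer Vpi*L is {relative_vpi_l:.2f}x the LiNbO3 value")
```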
The key manufacturing advantage of the polymer approach is compatibility with standard silicon photonics foundry processes. Lightwave Logic intends to co-develop custom Perkinamine® polymer material optimized for AI scale-up and scale-out, co-develop technical solutions for 400Gb/s CPO applications, and produce a Process Design Kit (PDK) for standard silicon photonics foundry processes covering modulator design, testing, packaging and assembly processes. If Perkinamine can be spun onto standard silicon wafers using existing foundry tooling rather than requiring dedicated III-V epitaxy, it potentially offers a path to lower-cost, scalable modulator production.
The current state of Lightwave Logic is pre-revenue commercialization: the company is executing a multi-stage design win cycle with Fortune Global 500 customers. Key 2026 milestones include building, processing, and testing Silicon Photonics PICs augmented with Perkinamine® polymers to achieve a final product targeted for deployment within a hyperscale data center or AI factory, with later phases validating high manufacturing process yields and establishing volume production capacity.
As of early 2026, four Fortune Global 500 customers had advanced to Stage 3 of Lightwave Logic's design win cycle, representing the prototype-to-product phase of development. A further commercial dimension opened in March 2026, when Lightwave Logic announced integration of its polymer modulator designs into Tower Semiconductor's PH18 silicon photonics PDK, followed by an integration into GlobalFoundries' silicon photonics platform — meaning the Perkinamine modulator is becoming available as a standard building block in two of the industry's major foundry ecosystems.
Marvell's role in the photonics stack is at the intersection of semiconductor design and optical systems. It does not manufacture its own optical fiber or grow its own III-V laser chips — instead, it designs the digital signal processing chips and silicon photonics integrated circuits that sit at the heart of optical modules and process the high-speed electrical signals that drive them.
The key product line is the PAM4 DSP family. A PAM4 DSP (Pulse Amplitude Modulation with 4 levels) is the chip inside an optical transceiver module that serializes incoming digital data into high-speed electrical waveforms, compensates for signal degradation, drives the laser or modulator, and — on the receive side — recovers and error-corrects the incoming optical signal. The PAM4 DSP is the silicon brain of a pluggable transceiver, and Marvell is one of the two or three companies worldwide that designs these chips at leading-edge performance and process nodes.
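The modulation format itself is easy to illustrate. PAM4 encodes two bits per symbol onto four amplitude levels, conventionally Gray-coded so adjacent levels differ by one bit, which is why a 200 Gb/s lane needs only a 100 GBd symbol rate. A minimal sketch:

```python
# Gray-coded PAM4: two bits -> one of four amplitude levels.
# Adjacent levels differ in one bit, so the most likely symbol
# error (one level off) corrupts only a single bit.
PAM4_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit sequence as PAM4 levels."""
    assert len(bits) % 2 == 0
    return [PAM4_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 1, 1, 1, 0, 0, 1]))  # [-3, 1, 3, -1]

# 2 bits/symbol: a 200 Gb/s lane runs at 100 GBd
print(200e9 / 2)  # 1e+11 symbols/s
```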
The silicon photonics light engine work is particularly noteworthy because it positions Marvell not just as a DSP supplier but as a photonic integration platform. As the foundation for co-packaged optics systems, Marvell introduced the 3D Silicon Photonics Engine, which integrates hundreds of optical-communication components into a single device, delivering twice the bandwidth while significantly reducing power consumption compared to similar devices.
Ciena occupies a unique position in this stack: it is primarily a systems company, not a component company. Where Coherent and Marvell sell chips and modules to transceiver manufacturers and integrators, Ciena sells complete optical networking systems — platforms that combine its proprietary coherent DSP technology with line systems, ROADMs, and switching hardware into deployable network infrastructure.
The cornerstone of Ciena's technology is the WaveLogic coherent DSP family, which the company has been developing and iterating for nearly two decades. WaveLogic is Ciena's proprietary modem chip — it performs the sophisticated digital signal processing required to push coherent optical signals over long fiber spans at the highest possible spectral efficiency. Unlike PAM4, coherent modulation uses the full complex optical field (both amplitude and phase) to encode information, allowing it to carry far more bits per hertz of bandwidth, correct for accumulated fiber impairments, and achieve transcontinental reach without signal regeneration.
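The difference in information density is concrete: PAM4 carries 2 bits per symbol on intensity alone, while a coherent format such as 16-QAM carries 4 bits per symbol by using both amplitude and phase of the field (and doubles again across two polarizations). A minimal sketch of the I/Q mapping — illustrative only, not Ciena's actual constellation:

```python
# 16-QAM: four bits -> one complex symbol (in-phase + quadrature),
# i.e. both amplitude and phase of the carrier encode data.
# Gray-coded per axis; illustrative mapping, not any vendor's spec.
AXIS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def qam16_encode(bits):
    """Encode a bit sequence (length a multiple of 4) as 16-QAM symbols."""
    assert len(bits) % 4 == 0
    return [complex(AXIS[(bits[i], bits[i + 1])], AXIS[(bits[i + 2], bits[i + 3])])
            for i in range(0, len(bits), 4)]

print(qam16_encode([0, 0, 1, 1, 1, 0, 0, 1]))  # [(-3+1j), (3-1j)]
```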
The flagship product as of 2025–2026 is WaveLogic 6 (WL6), which comes in two variants. WL6 Extreme is the platform's coherent chassis-based form, capable of 1.6 Tb/s on a single wavelength — the industry's highest-capacity single-carrier coherent solution, built on a 3nm CMOS process. WL6 Nano is its pluggable counterpart, delivering 800G coherent performance in a compact form factor for deployment directly on routers and switches as metro DCI transceivers.
In September 2025, Ciena acquired Nubis Communications for $270 million. Nubis specializes in high-performance, ultra-compact, low-power optical and electrical interconnects tailored to support AI workloads, and the acquisition gave Ciena access to Co-Packaged Optics (CPO) and Near-Packaged Optics (NPO) technology that extends its reach from metro and long-haul interconnect into the shorter-reach intra-datacenter applications that AI training fabrics require.
Ciena also demonstrated its path toward 3.2T at ECOC 2025 with a world-first 448G PAM4 driverless optical transmission over 500m of fiber — a collaboration with HyperLight, McGill University, and Keysight that demonstrated 3nm CMOS-based 224G SerDes operating with a sub-volt direct-drive thin-film lithium niobate modulator. This represents the ecosystem's first proof-of-concept for the next generation of data center networking that would enable 3.2T interfaces using 448G-per-lane technology.
Arista is not a photonics company in the materials or components sense. It does not make lasers, fiber, or transceiver chips. What Arista builds is the switch — the device at the center of the AI cluster fabric that every GPU's optical link terminates into, and the software that manages how traffic flows through those switches.
The reason Arista belongs in a photonics essay is that the switch defines the optical interface requirements for every other layer of the stack. When Arista moves its switching platforms to 800G-per-port and then 1.6T-per-port, every transceiver manufacturer, DSP designer, laser supplier, and cable installer must follow. The switching roadmap is the primary demand signal for the rest of the stack.
The core product is the Etherlink™ AI platform, a family of switches specifically engineered for AI training and inference cluster fabrics. These differ from conventional data center switches in several important ways driven by the traffic characteristics of distributed AI training.
AI training workloads generate a small number of very high-bandwidth flows — the all-reduce operations that synchronize gradient updates across GPU nodes during backpropagation. These flows are latency-sensitive in a specific way: if any single link in the communication pattern stalls, the entire training step stalls, because all GPUs must complete the all-reduce before proceeding to the next forward pass. This means conventional best-effort packet scheduling is insufficient — the switch must guarantee that high-priority AI traffic flows are never dropped or significantly delayed by competing traffic.
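The synchronization argument can be made quantitative with a toy model: a synchronous all-reduce finishes only when the slowest link does, so a single degraded link taxes the whole cluster. A minimal sketch with made-up illustrative numbers:

```python
# Toy straggler model for a synchronous all-reduce: the step
# completes only when the slowest link finishes, so one slow link
# sets the pace for every GPU in the collective.
def step_time(link_times_ms):
    """A synchronous collective is gated by its slowest link."""
    return max(link_times_ms)

n_links = 1000
nominal_ms = 10.0
links = [nominal_ms] * n_links
print(step_time(links))  # 10.0 -- every link healthy

links[42] = 25.0  # one degraded link (e.g. a marginal transceiver)
slowed = step_time(links)
print(slowed)  # 25.0 -- every step across the cluster now takes 2.5x longer
print(f"effective utilization: {nominal_ms / slowed:.0%}")  # effective utilization: 40%
```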
Arista's role in the broader ecosystem extends beyond its own products through its participation in the Ultra Ethernet Consortium (UEC), which it co-founded. Etherlink platforms are forward-compatible with Ultra Ethernet Consortium standards, supporting both current and emerging UEC capabilities that are expected to provide additional performance benefits when UEC-compliant network interface cards become available. The UEC is the industry effort to make standard Ethernet competitive with NVIDIA's proprietary InfiniBand fabric for AI training workloads — and Arista is structurally positioned as the primary beneficiary if that transition succeeds.
Viavi occupies a layer that does not appear in the AI cluster's runtime architecture but is essential to every other layer functioning correctly: test and measurement. Every optical transceiver that goes into a hyperscale data center has been qualified using test equipment. Every fiber run has been measured for insertion loss and return loss. Every switch port has been validated for link integrity. The tools that perform those measurements make up Viavi's product portfolio.
The reason test and measurement matters deeply in AI photonics is that the tolerance budgets at 800G and 1.6T are extremely tight. A 1.6T link running at 200G per lane over 8 lanes has no margin for connector contamination, excessive insertion loss, or signal integrity degradation anywhere in the path — a single dirty connector end-face that would have been acceptable at 100G can cause a link failure at 1.6T. At the transceiver manufacturing scale that AI demand requires, automated test at high throughput is not optional; it is the quality gate that determines whether products reach customers in working condition.
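A back-of-the-envelope loss budget shows why a single dirty connector matters. With the illustrative numbers below (assumed round figures, not from any datasheet), one contaminated end-face can consume most of the link's remaining margin on its own:

```python
# Toy insertion-loss budget for a short-reach link. All dB values
# are illustrative assumptions, not datasheet figures.
tx_power_dbm = 1.0          # launch power per lane
rx_sensitivity_dbm = -5.0   # receiver sensitivity at target BER
fiber_loss_db = 0.5         # short intra-datacenter run
clean_connector_db = 0.3    # loss per clean mated pair
n_connectors = 4

budget_db = tx_power_dbm - rx_sensitivity_dbm                 # 6.0 dB total
loss_db = fiber_loss_db + n_connectors * clean_connector_db   # 1.7 dB spent
margin_db = budget_db - loss_db
print(f"margin: {margin_db:.1f} dB")  # margin: 4.3 dB

dirty_connector_penalty_db = 3.0  # one contaminated end-face
print(f"after one dirty connector: {margin_db - dirty_connector_penalty_db:.1f} dB")
```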
At OFC 2026, Viavi showcased advanced technologies for the validation of next-generation AI fabrics at scale, with demonstrations covering 1.6T Ethernet, transceiver, connectivity and silicon photonic manufacturing solutions, PCIe over optics, automated network test, and fiber sensing. The silicon photonic manufacturing test capability is particularly new and important — as silicon photonics PICs replace discrete optical components inside transceivers, the wafer-level test methodology that works for standard ICs must be adapted for photonic circuits, where the test stimulus is light rather than voltage.
Mapping each company back to the stack diagram from the beginning of this essay makes the interdependencies visible.
The most important structural observation from this map is that no single company owns the full stack. The fiber comes from Corning. The laser chips come from AOI, Coherent, or a handful of other III-V manufacturers. The DSP inside the transceiver comes from Marvell, Coherent's in-house teams, or Broadcom. The transceiver module is assembled by AOI, Coherent, or one of the major Asian module manufacturers. The switch that the transceiver plugs into is from Arista or Cisco. The long-haul coherent link between datacenters uses Ciena's WaveLogic platform. And everything is validated at every stage by Viavi's test instruments.
That lack of vertical integration across the full stack creates both fragility and resilience. When any one layer hits a supply constraint — as InP laser supply did in 2024–2025 — it bottlenecks every layer above it. But it also means the ecosystem can evolve each layer somewhat independently, and architectural transitions like the shift to co-packaged optics or the adoption of polymer modulators can happen at the component level without requiring every company to rebuild its entire stack.
This essay has mapped what each company builds and where it sits in the stack. Part 2 will go deeper on the architectural transitions that are now reshaping the stack itself.
The three transitions that matter most are co-packaged optics (CPO), which collapses Layers 2 through 4 into the switching silicon package and disrupts the pluggable transceiver model; linear pluggable optics (LPO), which removes the DSP from the transceiver to reduce power at the cost of reduced reach; and the shift from 800G to 1.6T and then 3.2T, which requires fundamental changes to the laser modulation approach, the DSP architecture, and the fiber plant design. Each of these transitions redraws the competitive landscape in ways that will reward different companies than the current pluggable transceiver cycle does.
Manish KL writes about AI infrastructure, memory systems, accelerator architecture, and photonics. Related essays: Photonics Is No Longer a Component Story · Scale-Out Was Yesterday. Scale-Up Optics Is the Next Battle · InP vs Silicon Photonics vs VCSEL: The Materials Stack Behind AI Networking
© 2026 Manish KL