What you’ll learn:
- Why the shift to 800G presents unique challenges compared to previous Ethernet evolutions.
- The biggest hurdles vendors must clear to bring 800G solutions to customers.
- How different stakeholders are testing and validating 800G components to meet market demand.
When building a house, any of a thousand decisions could lead to budget overruns, blown timelines, and big problems down the road. Fortunately, it’s easy to get answers to most questions that arise: check the blueprints. But what if the blueprints are still being drafted even as construction gets underway?
Thankfully, no one builds houses that way. But such a scenario isn’t far off from what vendors must contend with right now in the race to deliver 800-Gigabit Ethernet (800G).
The last big shift in data-center optics, moving from 100G to 400G, progressed over many years, leaving plenty of time for standards bodies to formalize the technology and for vendors to implement it.
This time around, we don’t have that luxury. With exploding demand for cloud services, operators of the world’s biggest data centers need higher-speed transmission technologies now, not in two or three years.
Vendors in this space—chipset makers, network equipment manufacturers (NEMs), and cable and transceiver vendors—are racing to meet the need as quickly as possible. But with standards still incomplete and open to interpretation, delivering production-ready 800G solutions is far from simple.
Let’s review the biggest outstanding questions facing vendors bringing new 800G components to market and the steps they’re taking to find answers.
The Growing Data-Center Challenge
If it seems like you’re having déjà vu, don’t worry. Yes, the industry did just work through similar questions in the development of 400G optics, and many large-scale networks and data centers only recently adopted them. In the world’s biggest hyperscale data centers, though, skyrocketing bandwidth and performance demands have already exceeded what 400G can deliver.
Having millions of enterprise workers suddenly switch to full-time, work-from-home status was among the biggest drivers of exploding cloud demand, but not the only one. The growing dominance of cloud applications in the enterprise, millions of Internet of Things (IoT) devices coming online, and sharp upticks in artificial-intelligence and machine-learning (AI/ML) workloads have also played a role.
These trends present a double-edged sword for vendors of data-center network and transmission technologies. On one hand, they find a market positively clamoring for 800G components for lower layers of the protocol stack, even in their earliest incarnations. On the other, big questions remain unanswered about the technology. These gray areas have the potential to create significant problems for customers in interoperability, performance, and even the ability to reliably establish links.
The good news is that 800G is based on well-understood 400G technology. Vendors can apply familiar techniques like analyzing forward-error-correction (FEC) statistics to accurately assess physical-layer health. At the same time, jumping from 400G to 800G isn’t as simple as just tweaking some configurations.
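To make the FEC-statistics technique concrete, here is a minimal sketch of judging physical-layer health from corrected and uncorrectable codeword counters, assuming the RS(544,514) "KP4" FEC used on 400G/800G PAM4 lanes. The counter names and the BER threshold are illustrative assumptions, not taken from any particular device's API.

```python
# Sketch: assessing link health from FEC statistics (assumes RS(544,514)
# "KP4" FEC; counter names and threshold are illustrative, not a real API).

CODEWORD_BITS = 544 * 10      # RS(544,514) operates on 10-bit symbols
PRE_FEC_BER_LIMIT = 2.4e-4    # rough KP4 correctability limit (assumed)

def assess_fec_health(corrected_bits, uncorrectable_cws, total_cws):
    """Return (pre_fec_ber, verdict) for one monitoring window."""
    total_bits = total_cws * CODEWORD_BITS
    ber = corrected_bits / total_bits if total_bits else 0.0
    if uncorrectable_cws > 0:
        # FEC gave up on at least one codeword: errors reached the wire
        return ber, "FAIL"
    if ber > PRE_FEC_BER_LIMIT:
        # Still correctable, but little margin left before FEC fails
        return ber, "MARGINAL"
    return ber, "OK"

# Example window: 1e9 codewords, 500k corrected bit errors, none uncorrectable
ber, verdict = assess_fec_health(500_000, 0, 1_000_000_000)
```

The key point is that pre-FEC BER degrades measurably long before the link drops, which is why vendors lean on these counters to spot marginal channels early.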
Shifting to 112G electrical lanes alone—double the spectral content per lane of 400G technology—represents a huge challenge for the entire industry. Manufacturers in every part of the ecosystem, from circuit boards to connectors to cables to testing equipment, need electrical channel technology that operates well out to twice the symbol rate.
The physics involved in achieving that goal are enormously complex and interdependent, requiring players from across the industry to move forward on this journey together. It also means that vendors need earlier and more thorough testing and validation than any previous Ethernet evolution.
Answering Outstanding Questions
As vendors push forward with 800G solutions, they’re working through such open questions as:
Where are 800G standards going, and what will they ultimately look like?
The most immediate issue facing vendors today is the lack of a mature standard—and, in fact, they’re staring at a scenario with two competing standards in different stages of development. In past technology evolutions, IEEE has served as a kind of lodestar for the industry.
However, while IEEE specifies the 112G electrical lanes referenced above in 802.3ck, it hasn't yet completed its 800G standard. In the meantime, vendors seeking to deliver early solutions are using the only 800G standard that exists today, the Ethernet Technology Consortium's (ETC) 800GBASE-R.
How will these competing standards resolve? Should vendors expect a repeat of the Betamax/VHS wars of the 1980s, where one standard ultimately dominated the market? Should they invest in supporting both? What kind of market momentum will they lose if they wait for clear answers? Whichever way vendors go represents a significant bet, with major long-term implications.
How will different vendor devices interact?
Working with multiple standards in different stages of development also means that early components may not necessarily support the same electrical transmission capabilities, even for core functions like establishing a link. For example, some 800GBASE-R ASICs that support both Auto-Negotiation (AN) and Link Training (LT) are currently incompatible with ASICs that support only LT.
Therefore, even when all components comply with the ETC standard, customers can’t assume that links will automatically be established when using different devices and cables. They may need to manually tune transmit settings. This increases the potential for link flaps, a condition in which links alternate between up and down states, which can dramatically affect throughput.
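Operators tuning links by hand typically watch for exactly this flapping pattern. The sketch below shows one way to flag it from a timestamped series of link-state events; the event format and thresholds are assumptions for illustration, not from any vendor's tooling.

```python
# Sketch: flagging link flaps from link-state events while tuning transmit
# settings. Event format and thresholds are illustrative assumptions.
from collections import deque

def is_flapping(events, window_s=60.0, max_transitions=5):
    """events: iterable of (timestamp_seconds, state) with state 'up'/'down'.
    Returns True if the link changes state more than max_transitions times
    within any sliding window of window_s seconds."""
    recent = deque()          # timestamps of recent state transitions
    last_state = None
    for ts, state in events:
        if state == last_state:
            continue          # duplicate report, not a transition
        last_state = state
        recent.append(ts)
        # Drop transitions that have aged out of the window
        while recent and ts - recent[0] > window_s:
            recent.popleft()
        if len(recent) > max_transitions:
            return True
    return False
```

A link that bounces every few seconds trips the detector quickly, while a link that drops once and recovers does not, which is the distinction that matters when deciding whether transmit settings need further tuning.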
Which issues that had negligible impact in 400G will now become big problems?
Jumping from 400G to 800G optics means more than a massive speed increase. Doubling the sample speed and symbol rate also doubles the signal bandwidth, pushing spectral content into higher frequencies. Suddenly, issues and inefficiencies we didn't have to worry about in 400G can seriously diminish electrical performance.
One big problem we already know we need to solve: the huge amount of heat generated by 800G optics. Navigating this issue affects everything from the materials used for ASICs to pin spacing, and vendors are still working through all of the implications.
Unleashing Tomorrow’s High-Performance Ethernet
Those are just some of the questions vendors have to wrestle with as they race to get 800G components into the hands of customers. The only way to get answers is to perform exhaustive testing at all layers of the protocol stack.
Right now, vendors and their cloud-provider customers are hard at work testing and validating emerging 800G technologies. Given the unsettled standards space and the many aspects of the technology still in flux, these efforts run the gamut—even when focusing exclusively on products using the ETC standard.
NEMs are working to assure link and application performance under demanding live network conditions. Chipset makers are employing the latest techniques—Layer-1 silicon emulation, software-based traffic emulation, and automated testing workflows—for pre-silicon validation and post-silicon testing. Cable and transceiver vendors are performing exhaustive interoperability testing to validate link establishment and line-rate transmission in multivendor environments. And, as quickly as they can get 800G components, hyperscalers are launching their own extensive testing efforts to baseline network and application performance.
It’s a huge effort—and a testament to the industry’s ability to push new technologies forward, even in the absence of settled standards. We may not have the comprehensive blueprints we’d prefer, but with state-of-the-art testing and validation, we can build an 800G future we’ll all be happy to live in.