Key Challenges and Innovations for 800G and 1.6T Networking

This article is Part 2 of a two-part series on 1.6T Networks and Hyperscale Data Centers

In Part One, we explored the vital role of data centers and edge computing in enabling emerging technologies. The demand for higher data rates that these technologies place on networks has created a need for 800G and 1.6T transceivers.


I also explained the basics of data center infrastructure and how the Institute of Electrical and Electronics Engineers (IEEE) and Optical Internetworking Forum (OIF) create the standards that govern the physical layer transceivers and interfaces that connect data centers and edge computing networks.

In Part Two, we will look at the innovations required to increase Ethernet speeds from 400 Gb/s (400G) to 800G and 1.6 Tb/s (1.6T) to meet the high demands of emerging technologies. The main challenges to faster Ethernet speeds are:
• Increasing speeds and data capacity
• Decreasing bit errors
• Increasing power efficiency

How can developers increase networking speeds and data capacity?

Data center integrators could use more parallel lanes to increase a network’s aggregate data rate. The first generation of 800G will likely consist of eight 100 Gb/s lanes with a total aggregate data rate of 800 Gb/s. However, increasing the data rate per lane is more efficient; developers can choose to increase either the baud rate or the bits per symbol.

Increasing the baud rate pushes symbols through the channel faster, potentially increasing signal degradation. Moving to a higher-order pulse amplitude modulation (PAM) scheme sends more bits per symbol, but the margin for error shrinks and the decision thresholds tighten.
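To make the arithmetic concrete, here is a minimal sketch (using the nominal 50 GBd and 100 GBd symbol rates discussed in this article, and ignoring FEC and encoding overhead) of how lane count, baud rate, and bits per symbol combine into an aggregate rate:

```python
from math import log2

def aggregate_rate_gbps(lanes: int, baud_gbd: float, pam_levels: int) -> float:
    """Aggregate data rate = lanes x symbol rate x bits per symbol."""
    bits_per_symbol = log2(pam_levels)      # PAM4 carries 2 bits per symbol
    return lanes * baud_gbd * bits_per_symbol

# Nominal rates, ignoring FEC and line-coding overhead:
print(aggregate_rate_gbps(lanes=8, baud_gbd=50, pam_levels=4))   # 8 x 100G lanes -> 800.0
print(aggregate_rate_gbps(lanes=4, baud_gbd=100, pam_levels=4))  # 4 x 200G lanes -> 800.0
print(aggregate_rate_gbps(lanes=8, baud_gbd=100, pam_levels=4))  # 8 x 200G lanes -> 1600.0 (1.6T)
```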

The IEEE and OIF will consider the tradeoffs of each method of implementation when defining the 800G and 1.6T standards. Both groups have set out to define 800G and 1.6T over 224 Gb/s lanes.

Below are several challenges and potential solutions for achieving 224 Gb/s lane rates:

Switch Silicon SerDes

Faster networking switch chips are essential to increasing lane speeds. High-speed application-specific integrated circuits (ASICs) enable low-latency switching between elements in a server rack and the data center. From 2010 to 2022, switch silicon bandwidth rose from 640 Gb/s to 51.2 Tb/s through successive improvements in complementary metal-oxide semiconductor (CMOS) process technology.

The SerDes (serializer/deserializer) speed and the number of SerDes (I/O pins) define a chip’s bandwidth. For example, a chip with 51.2 Tb/s of bandwidth has 512 SerDes running at 100 Gb/s each, enough to support 128 ports of 400G Ethernet, with each port made up of four lanes of 100 Gb/s. The next generation of switch silicon will double the bandwidth once again: 102.4T switches will have 512 lanes of 200 Gb/s SerDes. This switch silicon will support 800G and 1.6T over 224 Gb/s lanes.
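The same arithmetic applies to the switch chip: total bandwidth is the SerDes count times the per-SerDes rate, and the port count follows from the per-port rate. A quick sketch of the 51.2T and 102.4T examples above:

```python
def switch_ports(num_serdes: int, serdes_gbps: float, port_gbps: float) -> tuple[float, int]:
    """Return (total bandwidth in Tb/s, number of ports at the given port rate)."""
    bandwidth_gbps = num_serdes * serdes_gbps
    return bandwidth_gbps / 1000, int(bandwidth_gbps // port_gbps)

print(switch_ports(512, 100, 400))   # (51.2, 128)  -> 51.2T chip, 128 x 400G ports
print(switch_ports(512, 200, 800))   # (102.4, 128) -> 102.4T chip, 128 x 800G ports
```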

Pulse Amplitude Modulation

Increasing the symbol rate (baud rate) can cause signal degradation as the data moves faster through the channel. Maintaining the signal integrity of high-speed digital communications has become more complex, so the standards organizations have moved to higher modulation schemes to increase the bits per symbol. For example, 400G Ethernet uses four-level pulse amplitude modulation (PAM4) SerDes to increase the data rate from 50 Gb/s to 100 Gb/s at the same symbol rate of 50 GBd. With this change, 400G networks were able to start using four lanes of 100 Gb/s instead of eight lanes of 50 Gb/s.

There is a tradeoff to pulse amplitude modulation. Sending more bits per symbol lowers the noise margin of each symbol. With non-return-to-zero (NRZ) signaling, the voltage range that distinguishes a zero bit from a one bit is relatively wide. As the number of bits per symbol increases, each decision threshold gets tighter, reducing noise immunity. Noise levels that would not close an eye diagram at 50 GBd NRZ, meaning the receiver can still clearly distinguish the bit levels, can cause trouble for a receiver trying to interpret a 50 GBd PAM4 symbol.

NRZ vs PAM4 eye diagram comparison

Figure 1: PAM4 signals have smaller eye heights and therefore tighter design margins regarding noise and jitter.
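To quantify that tradeoff, here is an idealized sketch (assuming a fixed signal amplitude and ignoring equalization) of how the vertical eye opening shrinks as the number of PAM levels grows; the roughly 9.5 dB PAM4-versus-NRZ figure it produces is the commonly cited theoretical penalty:

```python
from math import log10

def pam_eye_penalty(levels: int) -> tuple[float, float]:
    """For an ideal PAM-N signal of fixed amplitude: per-eye height as a
    fraction of the NRZ eye, and the resulting amplitude penalty in dB."""
    eye_fraction = 1 / (levels - 1)          # PAM-N stacks N-1 eyes in the same swing
    penalty_db = 20 * log10(levels - 1)      # amplitude penalty relative to NRZ
    return eye_fraction, penalty_db

for n in (2, 4, 6, 8):                       # NRZ, PAM4, PAM6, PAM8
    frac, db = pam_eye_penalty(n)
    print(f"PAM{n}: eye height x{frac:.2f}, penalty {db:.1f} dB")
```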

For now, the industry is likely to retain PAM4 and instead look to other methods of maintaining data integrity at higher speeds, but future generations of the standards may adopt higher-order modulation schemes such as PAM6 or PAM8.

How does forward error correction decrease bit error rate?

In most high-speed data standards, finely tuned equalizers in the transmitter and receiver ensure that signals transmitted through a channel can be interpreted on the other end, compensating for signal degradation in the channel. However, as faster speeds push physical limits further, more complex approaches become necessary. One such solution is forward error correction.

Forward error correction refers to transmitting redundant data to help a receiver piece together a signal that may have corrupted bits. FEC algorithms are usually good at recovering data frames when random errors occur, but they are less effective against burst errors, in which entire frames can be lost.

Losing whole data frames makes it much harder for the receiver to reconstruct the signal, so 224 Gb/s transceivers will require more robust FEC algorithms to transmit and receive data successfully. Each FEC architecture has its own tradeoffs among coding gain, overhead, latency, and power efficiency.

Forward error correction (FEC) algorithm comparison

Figure 2: Types of FEC architectures and their tradeoffs. Credit: Cathy Liu, Broadcom
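As one reference point for the overhead tradeoff, the calculation below uses the RS(544,514) "KP4" Reed-Solomon code already used in PAM4 Ethernet as an illustration; the FEC ultimately chosen for 224 Gb/s lanes may differ:

```python
def fec_overhead(n_symbols: int, k_symbols: int) -> float:
    """Overhead of an (n, k) block code: parity symbols relative to payload."""
    return (n_symbols - k_symbols) / k_symbols

# RS(544, 514): 514 payload symbols plus 30 parity symbols per codeword,
# able to correct up to (544 - 514) // 2 = 15 symbol errors per codeword.
print(f"{fec_overhead(544, 514):.1%}")   # ~5.8% overhead
```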

While FEC will help alleviate the effects of random errors between transmitter and receiver, burst errors can still cause problems. In 224 Gb/s systems, more complex FEC algorithms will be necessary to minimize burst errors. Test and measurement developers are working on FEC-aware receiver test solutions to identify when frame losses occur and help debug them.

How do optical modules affect power efficiency?

Perhaps the most difficult challenge facing data centers is power consumption. Data centers consume around 1% of the world’s total generated power. Data center operators need to scale processing capacity without proportionally increasing the power consumption. A key component of power efficiency is the optical module.

Optical module power consumption has increased with each successive generation. For example, 100G quad small form factor pluggable (QSFP28) modules used less than 5W of power, but 400G QSFP-DD (QSFP double density) modules used up to 14W.

As optical module designs mature, they become more efficient. 800G QSFP-DD modules, for example, debuted with a power consumption of 17W; that figure should drop to around 10W as the technology matures. In general, power consumption per bit is decreasing. However, with an average of 50,000 optical modules in each data center, the rising per-module power consumption remains a concern.

Optical module power consumption between generations

Figure 3: Power consumption of optical module generations. Credit: Supriyo Dey, Eoptolink
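Plugging the module figures quoted above into a quick energy-per-bit calculation shows why efficiency per bit improves even as per-module power climbs (the 50,000-module fleet size is the average cited in this article):

```python
def picojoules_per_bit(power_w: float, rate_gbps: float) -> float:
    """Energy per bit in pJ: watts divided by bits per second."""
    return power_w / rate_gbps * 1000        # W / (Gb/s) = nJ/bit -> pJ/bit

print(picojoules_per_bit(14, 400))   # 400G QSFP-DD at 14 W  -> 35.0 pJ/bit
print(picojoules_per_bit(17, 800))   # 800G QSFP-DD at 17 W  -> 21.25 pJ/bit
print(picojoules_per_bit(10, 800))   # mature 800G at 10 W   -> 12.5 pJ/bit

# Yet the absolute draw adds up across a large fleet:
print(50_000 * 17 / 1000, "kW")      # 50,000 modules x 17 W -> 850.0 kW
```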

To increase power efficiency, developers are working on alternative optical modules. Co-packaged optics have the potential for the lowest power consumption. Co-packaged optics move the optical module to the ASIC, eliminating the optical retimer and performing optoelectronic conversion inside the package.

The tradeoff is that power dissipation concentrates inside the ASIC package, which may require novel cooling solutions. Cooling is another significant power draw in data centers. Co-packaged optics are not yet proven, so the industry will likely continue to use pluggable optics in 800G systems. Later versions of the 800G or 1.6T standard may use co-packaged optics.

Pluggable vs Co-packaged optics

Figure 4: Pluggable and co-packaged optics. Credit: Tony Chan Carusone, Alphawave IP

What is the timeline for 800G and 1.6T networking?

While there is no way of predicting the future exactly, we can make some observations based on the current state of networking R&D. 2022 saw the final release of the OIF’s 112 Gb/s standard and the IEEE’s 802.3ck (400G) standard. These standards will provide the groundwork for defining 800G over 112 Gb/s lanes.

The first 51.2T switch silicon was released in 2022, enabling 64 ports of 800 Gb/s, and validation began on the first 800G transceivers. This year, the standards organizations will release the first versions of the IEEE 802.3df and OIF 224 Gb/s standards, which will give developers a better indication of how 800G and 1.6T systems might be constructed using 112 Gb/s and 224 Gb/s lanes.

In the next two years, expect the IEEE and OIF to finalize the physical layer standards, and look for more news about co-packaged optics, 1.6T transceivers, and 224 Gb/s SerDes switch silicon, which will set the stage for the final validation push for 800G and 1.6T using 224 Gb/s lanes.

800G 1.6T standard development timeline

Figure 5: Projected timeline for 800G and 1.6T developments

For now, 400G is undergoing mass deployment. Operators will upgrade hyperscale data centers to support the current wave of demand, but ultimately, they can only buy time until the next inevitable speed grade. What comes next? By 2025, we could see 448 Gb/s SerDes chips (using 102.4T ASICs) on the market. We will probably be talking about the need for 3.2T networks by then.

Data centers will always need more efficient and scalable data technologies. Today’s developers have their sights on that near future and are already at work crafting the invisible backbone of tomorrow’s connected society.

But don't just take my word for it…

Watch “1.6T Ethernet in the Data Center” on Keysight University

Get exclusive insights from a variety of industry experts from across the networking ecosystem, or click one of the links below for more resources on 1.6T networking design and test solutions for hyperscale data centers.

Learn more about How to Analyze PAM4 Receiver Signals
