Ensure Reliable Connectivity for AI Interconnects

Validate optical transceivers against the real-world demands and conditions of AI data centers. Scale R&D and production testing for AI-ready interconnects with advanced, low-noise design and test solutions. Ensure 1.6T connectivity by optimizing and validating optical and electrical performance at every layer with AI-optimized test and measurement tools.

Optimize performance and reliability at 800G and 1.6T speeds

Stay ahead of increasing performance demands for AI interconnects with high-bandwidth, low-noise test solutions for optical transceivers.

Accelerate development of next-gen, AI-ready optics

Fast-track R&D with high-performance, future-proof instruments built to handle multiple generations of data center networking standards.

Maximize test efficiency without compromising accuracy

Streamline compliance and production testing with automation solutions that increase throughput and lower costs without sacrificing quality.

1.6T is the Future of Ethernet. Are You Ready?

The AI-driven explosion in data center traffic is well underway. Before long, even 800G won't be enough. 1.6T is the future of Ethernet, and the future is now.

As standards and compliance tests continue evolving, technology must stay ahead of the market. Make sure you’re ready with expert predictions, advice, and test solutions. Listen in as industry experts discuss the latest Ethernet developments, what to expect from 1.6T, challenges the market must overcome, and comprehensive test solutions for the technology.

1.6T Interconnect Webinar

Frequently Asked Questions: AI Interconnects

An interconnect (transceiver) is a device that links servers to switches within a network, enabling data transfer between components. For short distances, the interconnect may be electrical (copper) or optical. For longer distances, optical interconnects are typically used due to their higher performance and lower signal loss over extended lengths.

While an AI interconnect (used in a machine learning training data center) is not fundamentally different from a similar interconnect used for inference or in a traditional data center, the load and utilization on it are much greater and sustained over extended periods of time. Interconnects for a machine learning AI deployment should therefore be selected carefully to ensure performance and longevity in the network. Pay attention to measurements such as bit error rate (BER) to confirm there is sufficient headroom for sample-to-sample variation among production interconnects.

An interconnect in a network connects servers to switches or switches to switches. In a high-performance AI network, workloads must run in an optimized environment: high-speed, quality interconnects help ensure that workloads are not left waiting on the network for data.

There are two primary families of interconnect technologies used in AI systems: off-chip and on-chip interconnects.

Off-chip interconnects handle communication between separate components — such as servers, switches, and accelerators — often across boards or racks. Leading technologies in this category include Ethernet with RDMA / RoCEv2, InfiniBand, PCI Express (PCIe), Compute Express Link (CXL), and NVLink.

On-chip interconnects operate within a single chip or package, enabling ultra-fast communication between internal components like cores and memory. Key technologies in this family include High Bandwidth Memory (HBM), Network-on-Chip (NoC), and Co-Packaged Optics (CPO).

On-chip interconnects are limited to communications within components such as GPUs, CPUs, and memory on a single chip. These are extremely short, fast, and power-efficient communication paths. Off-chip interconnects can span a data center and beyond. They are fast, but not as fast as short, on-chip interconnects. However, they are more fault-tolerant and are optimized for system-level communications.

Innovations in AI interconnects include Coherent Optics, Linear Pluggable Optics (LPO), Co-Packaged Optics (CPO), Compute Express Link (CXL), and advanced interconnect network topologies.

Want help or have questions?