CXL 3.0 Opens New Use Cases
The newly unveiled CXL 3.0 introduces memory sharing, direct device peer-to-peer memory access without host involvement, and multilevel switching. A new global fabric-attached memory can be shared among up to 4,096 endpoints.
The latest Compute Express Link (CXL) update adds memory sharing and lets accelerators access each other directly while doubling performance. Version 3.0, outlined publicly in August, enables the primary use cases in the original CXL vision. At the same time, the standard’s last remaining open competitor folded, donating its technology to the CXL Consortium.
CXL 3.0 rides atop PCIe Gen6, doubling bandwidth relative to CXL 2.0. It also allows coherent direct memory access between peer devices, whereas the prior version required host involvement. In addition, it extends memory pooling, whereby hosts gain exclusive access to arbitrary quantities of remote memory, with memory sharing, in which that access is no longer exclusive. And switch configurations, originally limited to a single level, can now cascade into multilevel fabrics.
Announced in 2019, CXL arose from the desire to allow hosts and accelerators to more easily access each other’s memory as well as to decouple processors from the limitations of physical DRAM connections. Applications could then access more memory more flexibly without software needing information about the memory’s type and location (although the OS kernel needs that information).
Several other attempts at such a standard—CCIX, GenZ, Infinity, and OpenCAPI—failed to get the same traction. All have effectively folded, the lattermost roughly coincident with CXL 3.0’s unveiling.
The CXL Consortium has developed and rolled out the standard in stages. Initial implementations, such as Intel’s Ponte Vecchio and Sapphire Rapids, AMD’s Genoa, and Nvidia’s Grace, cover CXL 1.1, which largely affords memory abstraction; version 2.0, which adds pooling and switching, first appeared in a new switch from Xconn. Commercial 3.0 implementations are expected in a few years.