The Compute Express Link (CXL) interconnect standard, created by Intel, is absorbing another competitor: the Open Coherent Accelerator Processor Interface (OpenCAPI), originally developed by IBM, will become part of the CXL Consortium along with all of its developments. These include the Open Memory Interface (OMI), which connects processors to SDRAM in the form of DDIMMs.
According to the OpenCAPI Consortium, the parties involved still have to approve the transfer, but this is only a formality. Once that happens, OpenCAPI is effectively history: CXL may inherit aspects of OpenCAPI, but there will be no separate further development.
For cross-manufacturer interconnects, it makes sense to agree on a common standard. These protocols allow storage media as well as GPU and FPGA accelerators from different companies to be attached to processors in a cache-coherent manner – a big topic in data centers in 2022.
CXL at the forefront
CXL has asserted itself in the interconnect scramble of recent years. The first generation builds on PCI Express 5.0 and uses its own protocol to enable low-latency communication between numerous plug-in cards. Intel founded the CXL Consortium in 2019; it now includes all the industry giants, among them AMD and Nvidia.
The first processor generations with CXL support are Intel’s fourth Xeon-SP generation, aka Sapphire Rapids, and AMD’s Epyc 9004, aka Genoa. Other manufacturers are already preparing products, such as Samsung’s CXL Memory Expander.
Although OpenCAPI is three years older than CXL, it never caught on beyond IBM’s Power processors. Micron and Samsung supplied SDRAM modules in the DDIMM format to equip IBM servers with additional memory.
In 2021, CXL absorbed Gen-Z in a similar deal. That leaves the CCIX interconnect, created by AMD, Arm, IBM, Qualcomm, Xilinx, Huawei and Mellanox, which is hardly relevant anymore.