SDC2021: Compute Express Link 2.0: A High-Performance Interconnect for Memory Pooling

Data center architectures continue to evolve rapidly to support the ever-growing demands of emerging workloads such as artificial intelligence, machine learning, and deep learning. Compute Express Link™ (CXL™) is an open industry-standard interconnect offering coherency and memory semantics over high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices.

CXL technology is designed to address the growing needs of high-performance computational workloads by supporting heterogeneous processing and memory systems for applications in artificial intelligence, machine learning, communication systems, and high-performance computing (HPC). These applications deploy a diverse mix of scalar, vector, matrix, and spatial architectures across CPUs, GPUs, FPGAs, smart NICs, and other accelerators.

During this session, attendees will learn about the next generation of CXL technology. The CXL 2.0 specification, announced in 2020, adds support for switching for fan-out to connect to more devices; memory pooling for increased memory utilization efficiency and providing memory capacity on demand; and support for persistent memory. This presentation will explore the memory pooling features of CXL 2.0 and how CXL technology will meet the performance and latency demands of emerging workloads for data-hungry applications like AI and ML.
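The memory-pooling model described above can be illustrated with a small conceptual sketch. This is plain Python, not any real CXL driver or API; the class and method names are hypothetical. It models the key idea: a pooled memory device holds a fixed capacity, a manager assigns slices of it to hosts on demand, and released capacity returns to the pool for reuse, raising overall utilization.

```python
class PooledMemoryDevice:
    """Conceptual model of a CXL 2.0-style pooled memory device.

    Hypothetical illustration only -- not a real CXL interface.
    Capacity is tracked in GiB and handed out to hosts on demand.
    """

    def __init__(self, total_gib: int):
        self.total_gib = total_gib
        self.allocations: dict[str, int] = {}  # host id -> GiB assigned

    def free_gib(self) -> int:
        """Capacity not currently assigned to any host."""
        return self.total_gib - sum(self.allocations.values())

    def assign(self, host: str, gib: int) -> None:
        """Assign a slice of pooled capacity to a host on demand."""
        if gib > self.free_gib():
            raise MemoryError(f"pool exhausted: only {self.free_gib()} GiB free")
        self.allocations[host] = self.allocations.get(host, 0) + gib

    def release(self, host: str) -> int:
        """Return a host's capacity to the pool; yields the GiB freed."""
        return self.allocations.pop(host, 0)


# A 512 GiB pool shared by several hosts:
pool = PooledMemoryDevice(total_gib=512)
pool.assign("host0", 128)
pool.assign("host1", 256)   # 128 GiB remains free
pool.release("host0")       # host0's 128 GiB returns to the pool
pool.assign("host2", 256)   # fits because released capacity was reclaimed
```

The point of the sketch is the utilization argument: without pooling, each host would need its own worst-case provisioning, whereas here 512 GiB serves demand that would otherwise require 640 GiB of stranded, per-host memory.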

Learning Objectives
Learn about CXL 2.0, the next generation of Compute Express Link technology
Understand the memory pooling features of CXL 2.0
See how CXL will meet the performance and latency demands of emerging workloads for data-hungry applications like AI and ML

Presented by
Andy Rudoff
Intel