Compute Express Link™ 2.0 Specification: Memory Pooling

Compute Express Link™ (CXL™) is an open industry-standard interconnect offering coherency and memory semantics using high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. In November 2020, the CXL Consortium announced the CXL 2.0 specification, which introduces support for switching, memory pooling, and persistent memory, all while preserving industry investments by supporting full backward compatibility.
In this webinar, Mahesh Wagh (AMD) and Rick Sodke (Microchip) explore how CXL 2.0 supports memory pooling with multiple logical devices (MLDs) as well as with a single logical device, using a CXL switch. The presentation also introduces the standardized fabric manager for inventory and resource allocation, which is intended to ease adoption and management of CXL-based switch and fabric solutions.
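To make the pooling idea concrete, here is a minimal, illustrative Python sketch of the role the fabric manager plays: it keeps an inventory of the logical devices (LDs) exposed by an MLD behind a CXL switch and binds individual LDs to requesting hosts. All class and method names are hypothetical and do not correspond to the actual CXL 2.0 Fabric Manager API command set; this is a conceptual model only.

```python
# Illustrative sketch only: names are hypothetical, not the CXL 2.0 FM API.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class LogicalDevice:
    ld_id: int
    capacity_gb: int
    bound_host: Optional[str] = None  # host that currently owns this LD


@dataclass
class MultiLogicalDevice:
    """A pooled memory device exposing several logical devices (LDs)."""
    device_name: str
    lds: List[LogicalDevice] = field(default_factory=list)


class FabricManager:
    """Toy model of a fabric manager: inventories MLDs behind a CXL switch
    and binds/unbinds individual LDs to hosts."""

    def __init__(self, mlds: List[MultiLogicalDevice]):
        self.mlds = mlds

    def inventory(self):
        # Flat view of every LD, its capacity, and which host (if any) owns it.
        return [(mld.device_name, ld.ld_id, ld.capacity_gb, ld.bound_host)
                for mld in self.mlds for ld in mld.lds]

    def allocate(self, host: str, capacity_gb: int) -> Optional[LogicalDevice]:
        """Bind the first free LD with enough capacity to `host`."""
        for mld in self.mlds:
            for ld in mld.lds:
                if ld.bound_host is None and ld.capacity_gb >= capacity_gb:
                    ld.bound_host = host
                    return ld
        return None  # no free LD satisfies the request

    def release(self, host: str):
        """Return every LD owned by `host` to the pool."""
        for mld in self.mlds:
            for ld in mld.lds:
                if ld.bound_host == host:
                    ld.bound_host = None


# Example: one MLD carved into four 64 GB logical devices, shared by two hosts.
mld0 = MultiLogicalDevice("mld0", [LogicalDevice(i, 64) for i in range(4)])
fm = FabricManager([mld0])
fm.allocate("hostA", 64)
fm.allocate("hostB", 64)
print(fm.inventory())
```

The point of the sketch is the division of responsibility the webinar describes: hosts see only the LDs bound to them, while the fabric manager, sitting outside the hosts, decides how the MLD's capacity is partitioned and rebalanced.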
Comments

Thanks a lot for the presentation. It clarifies many of my questions.

IvanHunglin

It seems the fabric manager manages system memory for all of those hosts. In that case, the fabric manager has a higher exception level than the hypervisors. What exactly is this piece of software?

IvanHunglin

Thanks for the great video.
I have a few questions about it.
In the case of a Multiple Logical Device (MLD), one CXL memory node is likely to consist of 1 to 4 channels. Is a host assigned to each channel?
Or, if the memory node is in the E3.S form factor with 20 or 40 DRAM packages, are hosts allocated in units of packages?

sjlee