Liqid Teams with Inspur at GTC for Composable Infrastructure
In this video, Dolly Wu from Inspur and Marius Tudor from Liqid describe how the two companies are collaborating on composable infrastructure for AI and deep learning workloads.
"Today Liqid and Inspur announced that the two companies will offer a joint solution designed specifically for advanced, GPU-intensive applications and workflows. The Matrix Rack Composable Platform powered by NVIDIA GPUs will enable the scale out, sharing, accelerated performance, and dynamic composability of GPUs via fabric-based composable infrastructure. The rack-level solution delivers unparalleled infrastructure adaptability to manage emerging applications driven by artificial intelligence (AI) that require performance not possible with traditional, static data center infrastructure.
“Our goal is to work with the industry’s most innovative companies to build an adaptive data center infrastructure for the advancement of AI, scientific discovery, and next-generation GPU-centric workloads,” said Sumit Puri, CEO of Liqid. “Liqid is honored to be partnering with data center leaders Inspur Systems and NVIDIA to deliver the most advanced composable GPU platform on the market with Liqid’s fabric technology.”
Bringing together the best hardware solutions available for the data center, including the Inspur i24 servers & GX4 expansion chassis, NVIDIA Tesla V100 and P100 GPUs, and Liqid Grid PCIe fabric technology, the Matrix Rack delivers a truly composable, fully scalable GPU platform.
The Matrix Rack enables disaggregated pools of GPUs to be scaled, accelerated, and shared natively over a PCIe fabric, allowing dozens of NVIDIA GPUs to be clustered and orchestrated as needed in tandem with disaggregated pools of NVMe storage, compute, and networking resources. Advanced fabric technologies such as GPU peer-to-peer can deliver significantly higher GPU performance than legacy static platforms, with the ability to right-size bare-metal physical servers on demand to accommodate any workload.
“AI and deep learning applications will determine the direction of next-generation infrastructure design, and we believe dynamically composing GPUs will be central to these emerging platforms,” said Dolly Wu, GM and VP of Inspur Systems. “We are excited to partner with NVIDIA and Liqid to deliver the market’s first mature, rack-scale solution for composable GPUs, leveraging Inspur’s leading server and storage solutions.”
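
For readers new to the idea, the sketch below illustrates the disaggregate-and-compose pattern described above: devices live in shared fabric-attached pools and are bound to a bare-metal host only for as long as a workload needs them, then returned for reuse. This is a minimal conceptual sketch in Python with invented names (FabricPool, ComposedNode); it is not the Liqid Grid or Matrix Rack API.

```python
# Hypothetical model of composable infrastructure: GPUs and NVMe drives sit in
# shared fabric-attached pools, get composed into a bare-metal host per job,
# then go back to the pools. Not the Liqid API; all names are invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ComposedNode:
    """A bare-metal host plus whatever fabric devices were composed into it."""
    host: str
    gpus: List[str] = field(default_factory=list)
    nvme: List[str] = field(default_factory=list)


class FabricPool:
    """Tracks free devices on a PCIe fabric and sizes nodes per workload."""

    def __init__(self, gpus: List[str], nvme: List[str]) -> None:
        self.free_gpus = list(gpus)
        self.free_nvme = list(nvme)

    def compose(self, host: str, gpu_count: int, nvme_count: int) -> ComposedNode:
        # Right-size the node: attach only what this workload asks for.
        if gpu_count > len(self.free_gpus) or nvme_count > len(self.free_nvme):
            raise RuntimeError("not enough free devices in the pool")
        return ComposedNode(
            host=host,
            gpus=[self.free_gpus.pop() for _ in range(gpu_count)],
            nvme=[self.free_nvme.pop() for _ in range(nvme_count)],
        )

    def release(self, node: ComposedNode) -> None:
        # Hand the devices back so another workload can compose them.
        self.free_gpus.extend(node.gpus)
        self.free_nvme.extend(node.nvme)
        node.gpus.clear()
        node.nvme.clear()


if __name__ == "__main__":
    pool = FabricPool(gpus=[f"gpu{i}" for i in range(16)],
                      nvme=[f"nvme{i}" for i in range(8)])
    node = pool.compose("server-01", gpu_count=8, nvme_count=2)
    print(node)         # a node sized for a deep learning training job
    pool.release(node)  # devices return to the shared pool for the next job
```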