Edge Talk Episode 14: Edge AI Inference and NGC-Ready Server: A Hardware Perspective
The explosion of AI-based products and services in competitive markets has pushed hardware requirements out to the very edge of the network. For edge AI workloads, efficient, high-throughput inference depends on a well-curated compute platform. Advanced AI applications face fundamental deep learning inference challenges in latency, reliability, multi-precision neural network support, and solution delivery.
NGC software runs on a wide variety of edge-to-cloud GPU servers. Lanner's edge AI appliance, the LEC-2290E, optimized for the NVIDIA® T4, has passed an extensive suite of tests validating its ability to deliver high-volume, low-latency inference using NVIDIA GPUs and NGC software components such as TensorRT, TensorRT Inference Server, DeepStream, the CUDA Toolkit, and various NGC-supported deep learning frameworks.
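For a concrete sense of how this stack is used, below is a minimal Python sketch of a client request to TensorRT Inference Server (since renamed Triton Inference Server) using the tritonclient library. The server address, model name, and tensor names ("resnet50", "input", "output") are illustrative assumptions, not details from this episode; they depend on the model repository deployed on the appliance.

import numpy as np
import tritonclient.http as httpclient

# Connect to the inference server's HTTP endpoint (address is an
# assumption; 8000 is the server's default HTTP port).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical ResNet-50 image-classification model; the model name
# and tensor names below are placeholders for whatever model is
# actually served.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("input", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)
outputs = [httpclient.InferRequestedOutput("output")]

result = client.infer(model_name="resnet50", inputs=inputs, outputs=outputs)
print(result.as_numpy("output").shape)  # e.g. (1, 1000) class scores

In a deployment like the one described, requests of this form would be issued by edge applications against the LEC-2290E, with the T4 GPU handling the batched inference work behind the server.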