O'Reilly AI NYC 2017 : Learn how a GPU database helps you deploy an easy-to-use scalable AI solution

Artificial intelligence promises to change how we work and live. With cognitive applications in healthcare, retail, financial services, manufacturing, and transportation, AI is already transforming industries, saving lives, and delivering efficiencies. But deploying AI solutions isn't easy. Do you optimize for compute, throughput, power, or cost? How do you manage the data? Would frameworks like TensorFlow, Caffe, and Torch benefit from more and faster model training? What if you could run AI and BI workloads on one platform and deliver faster, better analytics?

Karthik Lalithraj explains how a GPU-accelerated database helps you deploy an easy-to-use, scalable, cost-effective, and future-proof AI solution that enables data science teams to develop, test, and train simulations and algorithms while making them directly available on the same systems used by end users.

Topics include the characteristics of AI workloads and the requirements for productionizing AI models:

Compute, throughput, data management, interoperability, security, elasticity, and usability

Considerations for architecting AI pipelines: Data generation (data prep and feature extraction), model training, and model serving
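The three pipeline stages named above can be sketched as plain functions. This is a minimal, dependency-free illustration using a toy least-squares model; the function names (`prepare`, `extract_features`, `train`, `serve`) are illustrative assumptions, not part of the talk.

```python
def prepare(raw):
    # Data prep: drop records with missing values.
    return [r for r in raw if None not in r]

def extract_features(records):
    # Feature extraction: split each record into (feature, label).
    return [(x, y) for x, y in records]

def train(samples):
    # Model training: fit y = w * x by ordinary least squares.
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    return sxy / sxx

def serve(w, x):
    # Model serving: score a new input with the trained weight.
    return w * x

raw = [(1.0, 2.0), (2.0, 4.0), (None, 1.0), (3.0, 6.0)]
w = train(extract_features(prepare(raw)))
print(serve(w, 5.0))  # -> 10.0
```

In production each stage would be a separate service or job, but the data-flow shape stays the same: prepared data feeds feature extraction, features feed training, and the trained model is handed to serving.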

How a modern GPU-accelerated database with in-database analytics delivers the ease of use, scale, and speed to deploy deep learning models and libraries such as TensorFlow, Caffe, and Torch pervasively across the enterprise, allowing you to converge AI with BI and deliver results more quickly
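The "in-database analytics" idea above can be sketched with a user-defined function, so model scoring runs inside the query engine next to the data instead of exporting rows to a separate system. Here sqlite3 stands in for the GPU database, and the model weight, table, and column names are invented for illustration.

```python
import sqlite3

WEIGHT = 2.0  # a hypothetical trained model parameter

def score(x):
    # The "model" runs inside the query engine via a UDF.
    return WEIGHT * x

con = sqlite3.connect(":memory:")
con.create_function("score", 1, score)
con.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
con.executemany("INSERT INTO readings VALUES (?, ?)",
                [("a", 1.5), ("b", 3.0)])

# BI-style SQL and AI-style scoring in the same statement.
rows = con.execute("SELECT sensor, score(value) FROM readings").fetchall()
print(rows)  # -> [('a', 3.0), ('b', 6.0)]
```

A GPU database takes the same pattern further by running the scoring function in parallel on the GPU, but the converged AI-plus-BI query shape is the same.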