XLA: TensorFlow, Compiled! (TensorFlow Dev Summit 2017)

Speed is everything for effective machine learning, and XLA was developed to reduce training and inference time. In this talk, Chris Leary and Todd Wang describe how TensorFlow can make use of XLA, JIT, AOT, and other compilation techniques to minimize execution time and maximize computing resources.
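The per-session JIT switch described in the talk (the ON_1 level also mentioned in the comments below) is set through the session's graph optimizer options in TensorFlow 1.x. A minimal sketch, assuming a TF 1.x installation; the placeholder graph is illustrative:

```python
import tensorflow as tf

# Assumption: TensorFlow 1.x, where the XLA JIT is enabled per session
# via the graph optimizer options rather than a command-line flag.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)

x = tf.placeholder(tf.float32, shape=[2, 2])
y = tf.matmul(x, x) + x  # ops XLA can cluster and compile together

with tf.Session(config=config) as sess:
    result = sess.run(y, feed_dict={x: [[1., 2.], [3., 4.]]})
```

With the JIT level set, TensorFlow clusters compilable ops and hands each cluster to XLA, instead of dispatching ops one at a time.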

Comments

23:27: Why is the JIT operating on the TF graph? Shouldn't it operate on the XLA graph?

malharjajoo

29:49: What is the difference between AOT compilation and ordinary compilation (e.g., GCC for C/C++)?

malharjajoo

Regarding the XLA JIT results, why can there be slowdowns? Is it due to the JIT compilation time (which would count toward the TF runtime)?

YeHenryTian

Why does the ON_1 flag work only for GPU devices, but not for CPU? How can the JIT be used with an existing .pb file on CPU?

apivovarov

Do feeds in tfcompile necessarily need input shapes?

rajatarora
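On the tfcompile question above: yes, each feed must carry a fixed shape, since AOT compilation specializes the generated code to static shapes known at compile time. A sketch of a tf2xla config in the style tfcompile consumes (node names are hypothetical):

```proto
# Assumed tf2xla.Config fragment; "x_hold" and "y_out" are placeholder names.
feed {
  id { node_name: "x_hold" }
  shape {
    dim { size: 1 }
    dim { size: 2 }
  }
}
fetch {
  id { node_name: "y_out" }
}
```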

Can we build a chatbot with the help of TensorFlow?

eybustt