Deep Learning Concepts: Training vs Inference

In Deep Learning there are two core concepts: Training and Inference. These stages define the environment and state a model is in as data runs through the Deep Learning Neural Network. In this episode of Big Data Big Questions I explain the differences between Training and Inference and detail the hardware and system requirements of each environment. Watch this video to learn what Machine Learning Engineers should understand about both stages of Deep Learning.

► DATA ENGINEER RESOURCE - Site devoted to "BUILDING STRONGER DATA ENGINEERS" ◄

► ASK BIG DATA BIG QUESTION - Submit questions to be answered on Big Data Big Questions ◄

► CONNECT ON TWITTER ◄
Comments

Holy Cow, 6 minutes wasted. Lots of words, no real explanation. Training is the process of passing the training dataset forward & backward through the network structure adjusting the weights to reduce the losses X number of times. When the losses have been reduced to an acceptable level, the model's structure and the weights & biases are saved.

Inference is the process of loading the previously saved model's structure and the weights & biases into memory and then running feature sets (X values) of new data through the model producing predictions (y values)...

Training/learning is creating "intelligence". Inference can be thought of as "thinking" with the artificially intelligent "brain" that was created through training.
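The train-then-save, load-then-predict flow this comment describes can be sketched in a few lines of Python. This is a toy one-weight model with a squared-error loss; the function names and the `model.json` file are illustrative assumptions, not anything from the video:

```python
import json

def train(xs, ys, epochs=200, lr=0.01):
    """Forward & backward passes that adjust the weight to reduce the loss."""
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        preds = [w * x for x in xs]                       # forward pass
        grad = 2.0 * sum((p - y) * x for p, y, x in zip(preds, ys, xs)) / n
        w -= lr * grad                                    # adjust the weight
    return w

def save_model(w, path="model.json"):
    with open(path, "w") as f:
        json.dump({"w": w}, f)                            # persist the trained weight

def infer(x_new, path="model.json"):
    """Load the saved weight and run new data through the model."""
    with open(path) as f:
        w = json.load(f)["w"]
    return w * x_new                                      # prediction (y value)

xs, ys = [1.0, 2.0, 3.0], [3.0, 6.0, 9.0]                 # true rule: y = 3x
save_model(train(xs, ys))
print(infer(4.0))                                         # close to 12.0
```

Note the split: `train` needs the full dataset and many passes over it (the expensive part), while `infer` only needs the saved weight and one forward pass per prediction.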

lakeguy

DUDE...inference is the process of using an AI model to analyze new data and make predictions. You just go round and round; get to the point.

kevinl

What was the difference anyway???? Now I'm more confused!

mrkabbazi

Training finds the parameters of the function using large-scale data and computing resources. Inference applies the function we have already worked out, in a simpler environment on a smaller platform.

LinkingL-xv

Hi Thomas, I have a few questions, as I am a newbie in AI, machine learning and deep learning. Do we still need special devices to run a trained model (inference)? I saw there are a few devices out there for that, like the Jetson Nano, Google Coral USB Accelerator and so on. What is the advantage of getting help from these devices rather than running directly on a PC or Raspberry Pi? My question may be silly, but I need an answer. Thanks

johnmoore

Wipe this video, please. Inference is something completely different! Please check your sources, as this is 100% wrong information!!!

stephanverbeeck

How is inference affected by location of GPU servers?

EDvoxel

But what is inference? Just using the AI?

Vix

How do we calculate the inference speed metric for an ML model?
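One common approach (a generic timing sketch, not something from the video; `dummy_predict` is a hypothetical stand-in for your framework's real predict call): time many forward passes after a warm-up, then report average latency per batch and throughput in samples per second.

```python
import time

def measure_inference_speed(predict_fn, batch, warmup=10, runs=100):
    """Return (avg latency in s/batch, throughput in samples/s)."""
    for _ in range(warmup):              # warm-up excludes one-time setup cost
        predict_fn(batch)
    start = time.perf_counter()
    for _ in range(runs):
        predict_fn(batch)
    latency = (time.perf_counter() - start) / runs
    return latency, len(batch) / latency

dummy_predict = lambda xs: [x * 2 for x in xs]   # stand-in for a real model
latency, throughput = measure_inference_speed(dummy_predict, list(range(32)))
print(f"{latency:.6f} s/batch, {throughput:.0f} samples/s")
```

The warm-up matters because the first few calls often pay one-time costs (JIT compilation, memory allocation, GPU kernel loading) that would skew the average.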

yuktiadya

Good explanation, but you talk too much bro

axelvulsteke

Do weights and biases change when we change the Image?

archinb

Cats unfortunately don't have an inference engine to distinguish cats from mirror reflections.


And the distant goal is to get a lightweight biped robot to keep balanced while walking.

mcasualjacques

I mean, get into the topic first, or keep the introduction very, very short, because you have only 6 minutes. Out of those 6 minutes, the first 1.25 transmit no meaningful information.

petraiondan

Would have been better if you included some math, code and/or diagrams to explain what's really going on. Just general information, not very useful.

sunitgautam

I don’t think you answered the question, thumbs down

jidengcheng

The antics at the intro are purely unnecessary; it's cringy, don't do that! Just get straight to the point and we'll get it. Most of your audience are obviously serious people; you know that, given the material you are presenting!

kipropcollins