Informer attention Architecture - FROM SCRATCH!

Here is the architecture of ProbSparse attention for time-series transformers.
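As a rough sketch of the idea behind ProbSparse attention: each query's "importance" is estimated cheaply from a random sample of keys (max minus mean of the sampled scores), and full attention is computed only for the top-scoring queries, while the remaining "lazy" queries receive the mean of the values. The function below is a hypothetical standalone NumPy illustration, not the video's or the official Informer implementation; the `factor` parameter and helper names are assumptions.

```python
import numpy as np

def probsparse_attention(Q, K, V, factor=5):
    """Illustrative sketch of ProbSparse self-attention.

    Q, K, V: (L, d) arrays for a single head.
    factor: sampling constant c, so roughly c * ln(L) keys/queries are used.
    """
    L, d = Q.shape
    scale = 1.0 / np.sqrt(d)

    # 1. Sample U ~ c * ln(L) keys to cheaply estimate each query's sparsity.
    U = min(L, max(1, int(np.ceil(factor * np.log(L)))))
    idx = np.random.choice(L, U, replace=False)
    scores_sample = Q @ K[idx].T * scale              # (L, U)

    # 2. Sparsity measurement per query: max minus mean of sampled scores.
    M = scores_sample.max(axis=1) - scores_sample.mean(axis=1)

    # 3. Keep only the top-u "active" queries, u ~ c * ln(L).
    u = min(L, max(1, int(np.ceil(factor * np.log(L)))))
    top = np.argsort(M)[-u:]

    # 4. Lazy queries get the mean of V; active queries get full attention.
    out = np.tile(V.mean(axis=0), (L, 1))             # (L, d)
    scores = Q[top] @ K.T * scale                     # (u, L)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out[top] = w @ V
    return out
```

Because only about c·ln(L) queries attend to all keys, the dominant cost drops from O(L²) toward O(L·ln L), which is the point of the Informer design.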

Comments

Amazing work, can't wait for next episode !

LeoLan-vvnq

Thank you for your amazing content. Where can I access the drawio file?

mohamadalikhani

Why aren't the padding tokens appended during data preprocessing, before the inputs are turned by the feedforward layer into the key, query, and value vectors?

neetpride

I would love if the quizzes had answers in the comments eventually. I know this is a fresh video, but I want to check my work, not just have a discussion 😅

-beee-

Can I find the flow chart graphic of the Informer model on GitHub? And is draw.io free?

samson

Great video, thanks.
Question: in the third quiz question, how do you sample the subset of keys and queries "depending on importance"?

adelAKAdude

With regard to the quiz, I think it is B, D, B. Not sure how this is going to launch a discussion, though. You present things very well.

sudlow

Can you suggest an interactive AI neural network model for a school project? Your videos are nice and easy to understand. Please tell.

AmirthaAmirtha-mb

Not sure if it's just me, but starting at about 4:50 your graphics are so dark...
Maybe go to a white background or light gray, like your original PNG...

rpraver

Is this the same as what TimesFM uses?

dumbol

Also, as always, great video. Hoping in the future you cover encoder-only and decoder-only transformers...

rpraver