BERT Paper Explained

❤️ Support the channel ❤️

Useful resources to learn about transformers:

Useful resources to learn about BERT:

Paid Courses I recommend for learning (affiliate links, no extra cost for you):

✨ Free Resources that are great:

💻 My Deep Learning Setup and Recording Setup:

GitHub Repository:

✅ One-Time Donations:

▶️ You Can Connect with me on:

#PaperExplained #PaperReview
Comments

What papers are you guys reading? <3

AladdinPersson

Your paper reviews are soooo goood, thank you <3

essamgouda

This is cool, waiting for the implementation

AkshatSurolia

This one was great, man, I loved it. Two suggestions for follow-up videos: 1) fine-tune BERT for a specific task, showing how to use it in practice; 2) go deeper into the Transformer architecture.

mohamadrezabidgoli
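
A hedged sketch of what suggestion 1) could look like in practice, assuming the HuggingFace `transformers` library and PyTorch (the checkpoint name, example texts, and label count are illustrative, not from the video):

```python
# Minimal fine-tuning sketch: BERT with a classification head on [CLS].
# Hypothetical two-label sentiment setup; one gradient step for brevity.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g. positive/negative
)

batch = tokenizer(
    ["a great movie", "a dull movie"], padding=True, return_tensors="pt"
)
labels = torch.tensor([1, 0])

# The paper fine-tunes with small learning rates (around 2e-5 to 5e-5).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
optimizer.zero_grad()
loss = model(**batch, labels=labels).loss  # cross-entropy over 2 labels
loss.backward()
optimizer.step()
```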

Thank you for uploading such a comprehensible video; it added a lot of value to my understanding. I was reading this paper for my research but had trouble understanding it. Your explanations are clear and helpful. Please review and explain other related papers too. More power to you and what you are doing!

ashok_learn

Bro, can you please cover BERT with an implementation, like you did for YOLO and the encoder-decoder models?

sayedathar

Can you make a video on BERT, GPT, StyleGANs, and Hierarchical Autoencoders from scratch?

navinbondade

This man teaches every concept very intuitively ❤️. Waiting for your BERT lessons. ❤️ From India. Please make an in-depth video on the BERT implementation.

dv

Can you please create a video series on BERT?

SanataniAryavrat

Amazing content. Could you please make architecture comparison videos, such as GPT v1/v2/v3 and YOLO v1-v5?

teetanrobotics

Isn't the pretraining step self-supervised rather than semi-supervised?

lomo
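
For context on the question above: the masked-LM objective derives its labels from the raw text itself, with no human annotation, which is why BERT's pretraining is usually described as self-supervised. A minimal sketch of the paper's masking scheme (15% of positions, then the 80/10/10 rule); the token ids assume the standard `bert-base-uncased` vocabulary:

```python
# Sketch of BERT's masked-LM data creation: the labels come from the
# text itself (self-supervised), following the paper's 80/10/10 rule.
import random

MASK_ID = 103       # [MASK] id in the bert-base-uncased vocabulary
VOCAB_SIZE = 30522  # bert-base-uncased vocabulary size

def mask_tokens(token_ids, mask_prob=0.15):
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels.append(tok)  # the model must recover this token
            r = random.random()
            if r < 0.8:
                inputs[i] = MASK_ID                       # 80%: [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(VOCAB_SIZE)  # 10%: random token
            # remaining 10%: leave the token unchanged
        else:
            labels.append(-100)  # common ignore-index: no loss here
    return inputs, labels
```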

I wonder how BERT handles out-of-vocabulary words, especially in the NER problem. For example, if the word "Abcde", which is not in the vocabulary, is broken into the sub-tokens "A", "##b", "##c", "##d", "##e", how does BERT predict the label for "Abcde" in NER?

nicholekalman
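
A small sketch of what happens to an out-of-vocabulary word under WordPiece, assuming the HuggingFace `transformers` tokenizer (the exact split depends on the vocabulary):

```python
# WordPiece splits unknown words into known pieces; continuation pieces
# are prefixed with "##", so there are no true out-of-vocab words.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
print(tokenizer.tokenize("Abcde"))  # something like ['A', '##b', '##c', '##de']
```

Per the paper, for NER the hidden state of the first sub-token of each word is fed to the token-level classifier, so "Abcde" gets its label from its first piece and the "##" continuations receive no prediction.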

Waiting for your BERT-from-scratch video <3
(If you do this, please include training your BERT from scratch on a large custom corpus in any language :D, it would really be great!)

rog

Just found the channel, great stuff 👍

aidanobrien

Thanks for the wonderful explanation! I think you meant "Encoder Blocks" at 8:08.

AshishYadav-murd

Your videos are awesome! I do learn a lot. Thank you so so much!

jintaotan

Could you make a video on the Semi-Supervised GAN?

francescolee

Hi! Could you please apply T5 to a translation task and train the tokenizer on a new language that was not used in the paper?

arijaa.
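
On training a tokenizer for a new language: T5 uses a 32k SentencePiece vocabulary, so a first step could be training a new SentencePiece model on the target-language corpus. A hedged sketch with the `sentencepiece` library (file names are hypothetical, and the T5 embedding matrix would still need to be resized to match the new vocabulary):

```python
# Train a SentencePiece model on a new-language corpus (hypothetical file).
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="my_language_corpus.txt",  # hypothetical plain-text corpus
    model_prefix="my_language_sp",   # writes my_language_sp.model / .vocab
    vocab_size=32000,                # matches T5's vocabulary size
)

sp = spm.SentencePieceProcessor(model_file="my_language_sp.model")
print(sp.encode("an example sentence", out_type=str))
```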

Commenting to feed the algorithm. I'm reading 'Line Drawings from 3D Models', which is supposedly the method they used in the latest Spider-Man movie to add 2D lines onto the 3D animation. I hope to pair this with the attention maps in DINO to learn a representation of a character in animation, then recast that representation into animation as well as create original animation following a consistent style. Will it work? Maybe. But the main goal is to understand attention better.

BlissfulBasilisk

I have been enjoying the paper reviews and paper implementations on your channel! I am looking for resources for paper implementations in TensorFlow; can you recommend any?

radiradev