What is Positional Encoding in a Transformer?

#chatgpt #deeplearning #machinelearning

Comments

Are the word vectors modified by the position? Or are the word vector and the position vector combined so that both are present, and the neural network can learn to combine them in the most effective way?

terjeoseberg
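For context, a minimal NumPy sketch, assuming the sinusoidal scheme from "Attention Is All You Need": the position vector is added element-wise to the word vector, so each word vector is shifted in embedding space by its position, both signals occupy the same dimensions, and downstream layers can learn to use or disentangle them. All names and shapes here (`d_model`, `max_len`, the random embeddings) are illustrative, not taken from the video.

```python
import numpy as np

def sinusoidal_positions(max_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings (sketch of the scheme in
    "Attention Is All You Need"); d_model is assumed even."""
    pos = np.arange(max_len)[:, None]                  # (max_len, 1)
    dim = np.arange(0, d_model, 2)[None, :]            # (1, d_model // 2)
    angles = pos / np.power(10000.0, dim / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                       # even dimensions
    pe[:, 1::2] = np.cos(angles)                       # odd dimensions
    return pe

# Hypothetical token embeddings for a 5-token sequence.
rng = np.random.default_rng(0)
token_emb = rng.normal(size=(5, 16))
pos_emb = sinusoidal_positions(max_len=5, d_model=16)

# The two vectors are summed, not kept side by side: the word vector
# is modified (shifted) by the position vector.
x = token_emb + pos_emb                                # still (5, 16)
```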

Do you know why positional encoding is done as time_emb + token_emb rather than as a concatenation?

Youkouleleh
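One commonly given reason, offered here as a sketch rather than the video's own answer: addition keeps the model width at d_model, whereas concatenation widens every downstream weight matrix; and with learned projections, a sum in a high-dimensional space can behave much like a concatenation into subspaces. The shapes below are made up for illustration, reusing the identifiers from the question.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
token_emb = rng.normal(size=(seq_len, d_model))
time_emb = rng.normal(size=(seq_len, d_model))    # stand-in positional embeddings

# Addition: the model width stays d_model, so the attention and
# feed-forward weight matrices keep their size.
x_add = time_emb + token_emb                      # (5, 16)

# Concatenation: the width doubles, and every downstream projection
# (queries, keys, values, FFN) would have to grow to match.
x_cat = np.concatenate([time_emb, token_emb], axis=-1)  # (5, 32)

print(x_add.shape, x_cat.shape)                   # (5, 16) (5, 32)
```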