Implementing BERT on CommonLit Readability Kaggle Dataset to Predict Reading Complexity

In this video, we will see how to implement BERT on a Kaggle dataset to predict the reading complexity of a text. Each code block is explained so the underlying concept is clear. This is a simple implementation using TensorFlow and the Transformers module (by Hugging Face). The model is trained in a Google Colab notebook on a GPU.
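
For reference, here is a minimal sketch of the general approach (not the exact notebook from the video): tokenise the text excerpts with the Hugging Face tokenizer and fine-tune a BERT checkpoint with a single regression head in TensorFlow. The checkpoint name, sequence length, and learning rate below are illustrative assumptions, not values confirmed by the video.

# Minimal sketch: BERT fine-tuned for regression with TensorFlow + Transformers.
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

MAX_LEN = 256  # assumed sequence length; the video may use a different value

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def encode(texts):
    # Tokenise to fixed-length input_ids and attention_mask tensors.
    enc = tokenizer(
        list(texts),
        max_length=MAX_LEN,
        padding="max_length",
        truncation=True,
        return_tensors="tf",
    )
    return enc["input_ids"], enc["attention_mask"]

def build_model():
    input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
    attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")
    bert = TFBertModel.from_pretrained("bert-base-uncased")
    outputs = bert(input_ids, attention_mask=attention_mask)
    # Use the [CLS] token representation for a single regression output.
    cls = outputs.last_hidden_state[:, 0, :]
    target = tf.keras.layers.Dense(1, name="target")(cls)
    model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=target)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
        loss=tf.keras.losses.MeanSquaredError(),
        metrics=[tf.keras.metrics.RootMeanSquaredError()],
    )
    return model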

Notebook used in the video:

To understand the concept in depth:

Timestamps:
00:00 - Intro
00:25 - Competition & Dataset
02:22 - GPU & Installation
03:34 - Importing Libraries & Data
04:30 - EDA & Text Preprocessing
05:40 - Tokenisation
06:13 - Selecting the BERT Model
07:55 - Constructing the Tokeniser
09:50 - Model Input
20:45 - Model Configuration
21:49 - Downloading the Pretrained Model
24:40 - Defining the Model
34:03 - Training & Analysing the Model
35:08 - How to Increase Accuracy

Music Credits:
"Keys of Moon - Flowing Energy" is under a Creative Commons (CC-BY 3.0) license
Comments

You have explained it so beautifully. Keep it up.

mathimagery

How can I concatenate the last 4 hidden layers of the BERT output using the code from the video?

МитяБережной-лт
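
One common way to answer the question above (a hedged sketch, not the video's exact code): ask the model to return all hidden states, then concatenate the [CLS] vectors of the last four encoder layers before the regression head. The checkpoint and variable names here are illustrative.

# Sketch: concatenating BERT's last four hidden layers for the regression head.
import tensorflow as tf
from transformers import TFBertModel

MAX_LEN = 256  # assumed to match the tokenised inputs

input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

# output_hidden_states=True makes BERT return all 13 layer outputs
# (embeddings + 12 encoder layers) instead of only the last one.
bert = TFBertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
outputs = bert(input_ids, attention_mask=attention_mask)

# hidden_states is a tuple of layer outputs; take the last four layers'
# [CLS] vectors and concatenate them into one (batch, 4 * 768) feature vector.
last_four = outputs.hidden_states[-4:]
cls_vectors = [layer[:, 0, :] for layer in last_four]
features = tf.keras.layers.Concatenate(axis=-1)(cls_vectors)

target = tf.keras.layers.Dense(1, name="target")(features)
model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=target)

Concatenating the last four layers yields a 3072-dimensional feature vector per example; the original BERT paper reported this as one of the strongest feature-based combinations in its ablations, which is why the trick comes up often in Kaggle solutions.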