Large Language Models: How Large is Large Enough?

When it comes to large language models, one of the first things you may read about is the enormous size of the model data or the number of parameters. But is "bigger" always better? In this video, Kip Yego, Program Marketing Manager, explains that the honest answer is "it depends." So what are the factors that drive this decision? Kip breaks them down and explains, one by one, how to decide.

#llm #ai #generativeai #ml
Comments

These videos are fantastic!
Thank you so much for making them available :D

thatdudewiththething

Well-done video. Get Kip to do more of these, please.

dominiquecoladon

Thank you, this is the information I was searching for. I was explaining the concept in theory to someone. The idea was to use smaller models that are trained for specific domains. By eliminating or reducing all the other domains, the model should perform better and produce fewer messy results.

ttjordan
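
To make the idea in the comment above concrete, here is a minimal sketch, assuming the Hugging Face transformers library (not something referenced in the video): the general-purpose checkpoint below is a real public model, while the finance-tuned model name is purely a hypothetical placeholder for whatever domain-specific checkpoint you actually have.

```python
# Minimal sketch: same prompt through a large general-purpose model and a
# smaller domain-tuned one. Requires the "transformers" library plus a
# backend such as PyTorch. "your-org/finance-small-llm" is a hypothetical
# placeholder, not a real checkpoint.
from transformers import pipeline

general = pipeline("text-generation", model="gpt2-large")
domain = pipeline("text-generation", model="your-org/finance-small-llm")  # placeholder

prompt = "List the key credit-risk factors a lender should review:"
print(general(prompt, max_new_tokens=60)[0]["generated_text"])
print(domain(prompt, max_new_tokens=60)[0]["generated_text"])
```

Running both on one prompt only illustrates the comparison the commenter describes; it is not a benchmark.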

Great video, Kip!

At the moment it seems that bigger equals better. Time to change that perception.

YvesNewman

Thank you, that was informative. One question I have is how you determine domain specificity, and how you account for potential lost opportunity?
For example, take the financial services tasks in your example. If you ask someone working in finance what insights they'd be looking for, tax or transfer pricing may not be what they consider part of their domain. However, transfer pricing and tax could have a huge impact on what finance should consider when making decisions. How do you ensure the domain specificity is not too narrow?

rich

The MBA in me says, beyond some point, the trade-off isn't worth it. Then again, that's probably what they said about the Apollo mission.

gjjakobsen

5:38 THANK YOU BRO. Definitely feel more confident after hearing that.

Alice

How can we find domain-specific models, or how do we train them?

nirmal
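
On the question above about finding or training domain-specific models, one common route (an assumption here, not something stated in the video) is to search a model hub for an existing domain-tuned checkpoint and, failing that, adapt a general base model with parameter-efficient fine-tuning such as LoRA. A minimal sketch, assuming the huggingface_hub, transformers, and peft libraries; the search term and model names are only examples.

```python
# Minimal sketch, two halves:
# 1) search the Hugging Face Hub for existing domain-specific models;
# 2) attach LoRA adapters to a small general base model so it can be
#    fine-tuned on your own domain data.
# Requires: pip install huggingface_hub transformers peft (plus PyTorch).
from huggingface_hub import HfApi
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# 1) "finance" is just an example domain keyword.
api = HfApi()
for m in api.list_models(search="finance", task="text-generation", limit=5):
    print(m.id)

# 2) LoRA trains a small set of adapter weights instead of every parameter,
#    which keeps domain adaptation affordable on modest hardware.
base = AutoModelForCausalLM.from_pretrained("gpt2")  # small base as an example
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                  task_type=TaskType.CAUSAL_LM)
model = get_peft_model(base, lora)
model.print_trainable_parameters()
# ...then train `model` with a standard Trainer loop on domain-specific text.
```

LoRA appears here only because it keeps the example short; full fine-tuning or other adapter methods are equally valid ways to get a domain-specific model.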

My LLM's so large, it reaches almost every 1 and 0 it can write on; you can literally call it a "wipe."

IsaacFoster..

I see no point whatsoever in comparing a domain-specific fine-tuned model to a non-fine-tuned model in order to draw conclusions or suggest any insights.

aberobwohl

Bro, you've got worse handwriting than me!!! Good info though, lol.

Alice