5 problems when using a Large Language Model

Five problems to consider when building applications using Large Language Models (LLMs).

📚 My 3 favorite Machine Learning books:

Disclaimer: Some of the links included in this description are affiliate links where I'll earn a small commission if you purchase something. There's no cost to you.
Comments

One of my latest projects requires me to make ChatGPT output JSON, which is a nightmare. Most of the time, it adds extra text outside of the JSON, which completely breaks my application. After a few days of prompt engineering I've gotten it to work most of the time, but getting these LLMs to do exactly what we want is still difficult. The majority of the time, it's pure luck.
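A workaround that helps in practice (a minimal sketch, not from the video, assuming a Python application; the helper name extract_json is made up for illustration) is to defensively pull the JSON object out of whatever text the model returns instead of trusting the reply to be clean JSON:

```python
import json

def extract_json(response_text: str) -> dict:
    """Pull the first JSON object out of a model reply that may be wrapped
    in extra prose or markdown code fences."""
    # Grab the outermost {...} span and parse only that part.
    start, end = response_text.find("{"), response_text.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in the model response")
    return json.loads(response_text[start:end + 1])

# The reply still parses even with chatter around the JSON.
raw = 'Sure! Here is your data:\n{"name": "Alice", "age": 30}\nHope that helps.'
print(extract_json(raw))  # {'name': 'Alice', 'age': 30}
```

Combined with a retry when parsing fails, this catches most of the "extra text around the JSON" cases without relying on the prompt alone.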

PritishMishra

1) Has a ‘learning in the wild’ type scenario been studied, whereby the LLM is Tarzan and navigates as a young boy learning to communicate with us chimps? So the vectors are connected instinctively.

2) Can ChatGPT go back to the original chat that responded offensively? I just wonder how the LLM's learning was affected by creating a false answer when the answer was true.

Becidgreat

3:16 Have you tried using one of those OSS models instead of the OpenAI API, where they obviously keep tuning the underlying model, which changes the kind of output you get? That's not necessarily an LLM problem here but an OpenAI API issue.
Using the OpenAI API you have no control over which version of the model is used. Today it can be GPT-3.5 build 12345 and tomorrow GPT-3.5 build 23456, and the output will be different.
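One partial mitigation (a sketch under the assumption that you're calling the official openai Python client; the dated snapshot name below is illustrative and may since have been retired) is to pin a specific model snapshot instead of the floating alias, so the model at least doesn't change silently between requests:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pin a dated snapshot (illustrative name) instead of the floating
# "gpt-3.5-turbo" alias, so the model can't change underneath you.
PINNED_MODEL = "gpt-3.5-turbo-0613"

response = client.chat.completions.create(
    model=PINNED_MODEL,
    messages=[{"role": "user", "content": "Reply with a single JSON object."}],
    temperature=0,  # also reduces run-to-run variation
)
print(response.choices[0].message.content)
```

Pinning a snapshot doesn't make the output deterministic, but it removes the "model changed under me" class of surprises; setting temperature to 0 narrows the remaining variation.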

jeanchindeko

Great video; the sound is a bit too low, though.

EvilCherry

You should tighten the screws in your desk; it gives me anxiety when it moves.

neilwng

Also huge sums of money and tuning with human feedback.

nedyalkokarabadzhakov

It's not just LLM demos; _ALL_ startup demos are smoke and mirrors.

AlexanderWhillas