Large Language Models Are Zero-Shot Reasoners

When you create a prompt for a large language model, are the answers sometimes wrong or just plain weird? It may be you! Or more accurately, the way you are formulating your question. In the video, Martin Keen explains why LLMs are led astray and offers suggestions on prompting techniques to reduce these mishaps.
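As a concrete illustration of the prompting technique the video's title refers to (zero-shot chain-of-thought), here is a minimal sketch. This is not code from the video; the function names and the example question are hypothetical. The core idea is simply appending a reasoning trigger such as "Let's think step by step." to an otherwise unchanged question before sending it to the model.

```python
# Zero-shot chain-of-thought (CoT) prompting, sketched as plain string
# construction. The only difference from an ordinary zero-shot prompt
# is the trailing reasoning trigger.

def zero_shot_prompt(question: str) -> str:
    """Plain zero-shot prompt: the question alone, no examples."""
    return f"Q: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: the same question plus a step-by-step trigger."""
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    question = ("A juggler has 16 balls. Half are golf balls, and half "
                "of the golf balls are blue. How many blue golf balls "
                "are there?")
    print(zero_shot_prompt(question))
    print(zero_shot_cot_prompt(question))
```

Either string would then be passed to an LLM as the prompt; the CoT variant tends to elicit intermediate reasoning before the final answer, which is what reduces the "just plain weird" responses the description mentions.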

#llm #ai #watsonx
Comments

Thanks for this video! Would love to see the next video on Tree of Thoughts method of prompting.

arijitgoswami

Excellent explanation. I suggest that next time you add a little history at the beginning of your video about where the term comes from (see the original publication where the term was first coined).

Zulu

Thanks. I have one question: when doing prompt tuning on a foundation model, how should we choose datasets that cover the general public domain (rather than a specific domain), and under which circumstances should we train with few-shot prompts versus zero-shot prompts? Thanks.

yuchentuan

Thank you for this video! Though, wasn't this video already published? I can even remember the beats of the first lines.

manomancan

There's such a strange, uncanny-valley feeling watching someone who's been inverted (flipped along the vertical axis, the way a mirror appears to do).

EvanBoyar

Good video. Is there an established way to provide step-by-step examples to the LLM? E.g., will I get better results if I explicitly number my steps and provide enumerated examples? Can I use arrows to indicate example -> step -> final?

michaeldausmann

I just tried the same direct prompt on GPT-4 and got the correct answer!

fredericc

Great video. Thank you
Can you make a video about the current state of LLMs in the marketplace? There are lots of claims out there about models as capable as GPT, but it’s really hard to separate fact from fiction. Thanks again.

enthanna

Can subsequent SFT and RLHF with different, additional, or reduced content change the character of, improve, or degrade a GPT model? Can you modify a GPT model?

amparoconsuelo

Has this been reuploaded, or do I just have a really bad case of déjà vu? I'm 100% sure I have watched this video before, and it wasn't anywhere within the past 18 hours.

sirkv