How to Stream Responses from the OpenAI API

Learning how to stream responses is essential to good UX in AI-powered applications, but it can be a bit intimidating for beginners. Let's walk through a sample implementation!

NOTE: As of this writing, GPT-4 is 'generally' available only to current paying customers of the OpenAI API. If that's not you and GPT-4 isn't working, use 'gpt-3.5-turbo' instead of 'gpt-4'.
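
For anyone who wants to follow along in code, here is a minimal sketch of the streaming pattern, assuming the OpenAI Python SDK v1.x and an OPENAI_API_KEY environment variable; the prompt is a placeholder, and the video may use the older openai.ChatCompletion.create call instead, but the idea is the same:

```python
# Minimal streaming sketch (assumes openai>=1.0 and OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4",  # per the note above, swap in "gpt-3.5-turbo" if GPT-4 isn't available to you
    messages=[{"role": "user", "content": "Explain streaming in one short paragraph."}],
    stream=True,    # ask the API to send the completion back as a series of chunks
)

# Each chunk carries a small delta of the response; print tokens as they arrive
# instead of waiting for the full completion.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks (first/last) carry no text
        print(delta, end="", flush=True)
print()
```

With stream=True the API returns the completion incrementally, so a UI can render tokens as they arrive rather than blocking until the whole response is ready.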

Comments

Thank you!!! I was wondering how it works and I am super new to UI/UX stuff. Gonna try that :)) thanks again!!

kkyang

Hey, thank you for the video, it helped me out so much. I was able to get it running, but some prompts keep stopping abruptly and I need to tell it to "continue" to get it going again. Is there any way around this? Sorry if this is a dumb question.

activoDS

What is the output token limit for the GPT-3.5 Turbo 16k-context model?

businessemail

@3:06 lol, after looking at the source code I know why its response said "remember, you owe me" and asked for a chocolate bar. 🍫

[spoiler]Because of your default System prompt![/spoiler]

DarrenJohnX