OpenAI: New 100% Reliable Structured Outputs

🔍 OpenAI's Game-Changing API Update: 100% Reliable JSON Outputs Explained!

In this video, we delve into OpenAI's latest API update introducing structured outputs. We explore how it differs from JSON mode by guaranteeing that outputs conform to a provided schema, and discuss what that means for developers. Key topics include function calling, the response_format parameter, and safety measures. We also cover limitations, such as additional latency on the first request with a new schema and the fact that the model can still hallucinate values even when the structure is valid. Discover how this update can simplify data extraction and enhance application reliability.
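
Here is a minimal sketch of the response_format route described above, using the OpenAI Python SDK. The schema, prompts, and field names are illustrative rather than taken from the video; gpt-4o-2024-08-06 is the model snapshot the feature launched with.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A strict schema: every property is listed in "required" and
# additionalProperties is disabled, as strict mode requires.
event_schema = {
    "name": "event_extraction",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "date": {"type": "string"},
            "attendees": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "date", "attendees"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the event details from the user's message."},
        {"role": "user", "content": "Team offsite with Ana and Raj on March 3rd."},
    ],
    response_format={"type": "json_schema", "json_schema": event_schema},
)

print(response.choices[0].message.content)  # a JSON string that matches the schema
```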

00:00 Introduction to OpenAI's Structured Outputs
00:11 Understanding JSON Mode vs. Structured Outputs
00:38 Frameworks and Evaluation
01:27 Accessing Structured Outputs: Function Calling
02:56 Accessing Structured Outputs: Response Format Parameter
03:39 SDK Support and Use Cases
03:48 Generating Dynamic UIs with JSON Schema
04:59 Reasoning Steps and Data Extraction
06:31 Technical Details and Limitations
08:37 Availability and Final Thoughts
Comments

The best way to support this channel? Comment, like, and subscribe!

DevelopersDigest

Give whoever pushed forward on that project a raise. Really high impact.

matterhart

Spent the last 2 days getting the assistant output into separate key/values; didn't know about this. The user experience I want is for the assistant message to just be multiple clickable options to drill down. The base prompt state updates and a new set of clickable options appears. A somewhat never-ending user experience. So it's hilarious that your stuff always shows up in a notification at the exact right moment; I was about to write a system prompt asking for comma-delimited output to parse ... :)

ToddDunning
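
A minimal sketch of the kind of schema that could drive the drill-down UX described in the comment above; the field names are illustrative and not from the video.

```python
# One possible strict schema for a "clickable options" reply: the model returns a
# short question plus a fixed-shape list of options the UI can render as buttons.
options_schema = {
    "name": "drill_down_options",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "options": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "label": {"type": "string"},
                        "next_prompt": {"type": "string"},
                    },
                    "required": ["label", "next_prompt"],
                    "additionalProperties": False,
                },
            },
        },
        "required": ["question", "options"],
        "additionalProperties": False,
    },
}
# Pass it as response_format={"type": "json_schema", "json_schema": options_schema};
# render each option as a button, and feed its next_prompt back in when clicked.
```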

I've been struggling with getting GPT to give me solid JSON; this is a WELCOME update!!! Thanks for the great video description too.

dibbers

I read this blog post but I should have just watched this. So much insightful context!

KyTechInc

Wow, this is insane. Been building with LLMs for 2 years.

The days of finding unique ways to redundantly say please are (mostly) over 😂

jalengonel

Really high-quality, no-nonsense coverage! Thank you! I'm surprised your channel hasn't been recommended to me earlier. Subscribed

nicksmeta

I found that fine-tuning a model specifically to handle output produces the most reliable results... this might change things. Thanks for the update!

gabrielkripalani

You're on fire. These last couple of videos have been lean and useful, and they've come just in time. Keep up the amazing work! I'm curious how the end result was an HTML-like screen (i.e. the dynamic UIs); I've been struggling with how to convert ChatGPT outputs into HTML, etc. Also, in the previous version of function calling, I believe there was some sort of dynamic function calling (i.e. you gave it multiple functions and it determined which one to use). Is that still available with this new 100% reliable JSON output approach?

brodendangio
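
On the question above about dynamic function calling: you can still pass several functions and let the model decide which one (if any) to call; strict mode only constrains the arguments of whichever function it picks. A minimal sketch with illustrative function names:

```python
import json

from openai import OpenAI

client = OpenAI()

# Two candidate functions; the model still chooses between them as before.
# With "strict": True, the arguments are guaranteed to match the chosen schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the latest price for a ticker symbol.",
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {"ticker": {"type": "string"}},
                "required": ["ticker"],
                "additionalProperties": False,
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
    tool_choice="auto",  # let the model pick the function
)

call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```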

Very clear! Thanks for explaining. I like the structured output for the UX. That is very promising for getting better user experiences

rluijk

I stumbled upon this update just now in the Playground UI. Why is this announcement halfway down the page? This is so important! 😅 Since September of last year, I've been abusing function calling mode to do this kind of thing for my own framework. It's so messy! Having Pydantic integration and pre-parsed return values will make my code so much cleaner!

...guess I'm spending time tomorrow after work refactoring half my framework's main class methods 😅

IceMetalPunk
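
For the Pydantic integration mentioned above, the Python SDK ships a parse helper that turns a Pydantic model into a strict schema and hands back an already-parsed instance. A rough sketch; the model classes and prompts are illustrative:

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

class Step(BaseModel):
    explanation: str
    output: str

class MathReply(BaseModel):
    steps: list[Step]
    final_answer: str

# The SDK converts MathReply into a strict JSON Schema and parses the
# response back into the model, so there is no manual json.loads step.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Solve the problem step by step."},
        {"role": "user", "content": "What is 8 * 7 + 3?"},
    ],
    response_format=MathReply,
)

reply = completion.choices[0].message.parsed  # a MathReply instance
print(reply.final_answer)
```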