GPT-4 Turbo with Vision Explained Simply

An easy-to-understand explanation of GPT-4 Turbo with Vision. This version supports a context window of 128,000 tokens, and the video includes a full explanation of what tokens are in AI large language models. I'll also discuss its knowledge updates, cheaper API pricing, image capabilities, text-to-speech offerings, and the impressive Copyright Shield, under which OpenAI will step in to defend developers facing copyright claims over GPT-4 Turbo output and cover the costs incurred.
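
To make the token idea concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer; cl100k_base is the encoding the GPT-4 family uses, and the sample sentence is arbitrary:

```python
# A minimal sketch of tokenization, using OpenAI's open-source tiktoken
# library (pip install tiktoken). cl100k_base is the encoding used by
# the GPT-4 family; a 128,000-token context window means roughly this
# many tokens can fit in a single request.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "GPT-4 Turbo supports a context window of 128,000 tokens."
token_ids = enc.encode(text)

print(len(token_ids))         # how many tokens this sentence costs
print(enc.decode(token_ids))  # decoding the ids recovers the text
```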

Connect with us here:

#openai #gpt4 #artificialintelligence
Comments

Thanks for this really easy-to-understand explanation of tokens and updates, and even of developers' protection from getting sued.

iameverywhere

A big token context is nice, but reality hits different. I doubt anybody will get access to that large a limit on a 20-dollar subscription. It's not a big deal for day-to-day use by private users, because they rarely stay in a single chat for that long and a large context isn't really necessary. But once you start doing coding, translation, long-document editing, and so on, you run up an astronomical cost. Giving access to a 100K+ token context on a subscription means someone could send an entire book to GPT-4 in a single message; the input alone would cost about 1 dollar, and an equally long response from GPT (which would also take ages) would cost another 3 dollars (see the cost sketch after this comment). How do you expect a 20-dollar subscription to grant access to a model that would let a developer perform a total of 5 such interactions for the equivalent price? I don't think it will be available at that price point, or with that large a context limit, anytime soon. Maybe once newer models become available, and new hardware is developed to support these bigger models, the older ones will become either much cheaper or free altogether. All in all, I suspect that in about 3 years we will easily have personal human-tier AI on our personal computers and phones, and in about 6-7 years it will all run locally on a phone.

Edit:
Also, a large context limit is a tool aimed at efficiency for large text dumps, i.e. when people use ChatGPT to bulk-translate long documents or generate big batches of code. A big context for conversation is usually unnecessary and/or pointless. We should be less focused on larger context limits and more focused on using context efficiently. In a long chat with the AI, every message you send effectively resends all the previous messages up to that point, so the longer you talk to a bot, the more each message costs (the second sketch below illustrates this). It's helpful to be able to refer to something you said 20 messages ago, but preemptively loading all of it with every interaction is extremely inefficient. The next breakthrough will be semantic memory, where the model recognizes keywords from your previous messages, saves them, and uses them to regain context from a database layer. That would make AI cheaper to use, because your input tokens would shrink significantly, and the range of memory would extend as well.
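
Here is a quick sketch of the cost arithmetic in code, assuming GPT-4 Turbo's launch prices of $0.01 per 1K input tokens and $0.03 per 1K output tokens (the rates the comment's figures imply; actual pricing may differ):

```python
# A quick sketch of the arithmetic above, assuming GPT-4 Turbo's launch
# API prices of $0.01 per 1K input tokens and $0.03 per 1K output tokens.
# These rates are an assumption for illustration; check current pricing.
INPUT_USD_PER_1K = 0.01   # assumed input rate
OUTPUT_USD_PER_1K = 0.03  # assumed output rate

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single API call at the assumed rates."""
    return (input_tokens / 1000) * INPUT_USD_PER_1K \
         + (output_tokens / 1000) * OUTPUT_USD_PER_1K

# A ~100K-token "book" as input, and an equally long response:
print(request_cost(100_000, 0))        # 1.0 -> the input alone
print(request_cost(100_000, 100_000))  # 4.0 -> the full round trip
```

At 4 dollars per round trip, a 20-dollar subscription covers the 5 interactions the comment mentions.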
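
And a toy sketch of the second point: the context that gets resent grows with every turn, while a naive keyword filter stands in for the "semantic memory" idea. The function names and the keyword approach are illustrative assumptions, not a real API or a real retrieval system:

```python
# A toy illustration of both points: with the standard chat pattern the
# whole history is resent each turn, so the context grows every message;
# the keyword filter below is a stand-in for the "semantic memory" idea
# (a real system would use embeddings). Nothing here is a real OpenAI API.
history: list[str] = []

def naive_context_words(new_message: str) -> int:
    """Words resent when the entire history rides along with each message."""
    history.append(new_message)
    return sum(len(m.split()) for m in history)

def retrieved_context(new_message: str) -> list[str]:
    """Toy retrieval: keep only past messages sharing a word with the new one."""
    words = set(new_message.lower().split())
    return [m for m in history[:-1] if words & set(m.lower().split())]

for msg in ["translate this document",
            "translate page two as well",
            "what colour is the sky"]:
    print(f"{naive_context_words(msg):2d} words resent | "
          f"retrieved instead: {retrieved_context(msg)}")
```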

Mike

The overbearing censorship is still a stumbling block; some of the blocks make no sense.

RealmsOfThePossible

Your hand gestures are distracting; tone them down a bit.

Galiano