GPT-4o is BIGGER than you think... here's why

Updated Note: This aged poorly (2024-07-07)
Comments

Updated Note: This aged poorly (2024-07-07)

DaveShap

Simulated or not, GPT-4o's emotions are still more sincere than those of my ex.

mutantdog.

Short version:
- Release 4o to the masses.
- 4o trains on millions and millions of contexts.
- By the end of the year all that data is gathered and put together.
- We get AGI.

Metalmaxm

Hey folks, the audio problem is NVIDIA Broadcast, the AI I use to clean up audio in real time. It's been getting worse and worse, so I finally uninstalled it. It's not the mic, gain, limiter, or cables. Thanks for bearing with me.

DaveShap

"GPT-4o is BIGGER than you think... here's why"
...
Hot female voice

archdemonplay

The more natural speech and end-to-end multimodality being added to GPT-4 feel like they want to get us used to these tools and interaction modes before they switch out the underlying model for GPT-5.

jful

When calling customer service, I prefer conversing with an AI assistant rather than someone whose strong accent or limited language proficiency prevents clear communication.

elphil

If they add NSFW Sam won't have to worry about raising $7 trillion. 😆

RogueAI

Is my computer dying, or is his audio crackling?

Devin

Sure, it's not some flashy breakthrough in terms of abilities, but a real-time conversational format could actually be huge. Remember, GPT-3.5 got big almost exclusively because they made an approachable UI and opened it up to everyone.

epg-

"As many of you pointed out in the audience, aligning humans is actually the hard part... Scooby Doo taught us that humans are always the monster." - David Shapiro

A seemingly insignificant remark at the end of a video with potentially profound implications as we march ever closer to AGI...

damienhughes

Regarding the consciousness of AI: once it gets sufficiently sophisticated, it won't matter whether it's real or simulated. It will be indistinguishable, people will not care, and they will treat it as real.

MicaelLNobre

Thanks for the post. Initially I was "meh" when I watched the release, but the longer I thought about it, the more ways I saw that it is kind of brilliant.

Mimi_Sim

Before I watch this video: the reason I think it's bigger than most enthusiasts of the future of technology (who may typically be a large portion of your viewers) realize is that the cool stuff we already know about, which one previously had to pay for (GPT-4), is now free and even better. This will get the world more ready to adapt to the truth of the future, as more and more people who wouldn't have wanted to pay, or couldn't pay, for GPT-4 will start to use it.

Merrily-inmq

GPT-4o is the new standard. All future AI needs to be completely multimodal; no more plain LLMs. AGI will be multimodal, it has to be. But we are still early on data: what comes next is robotics and sensor input data, not just video and audio. And finally we need local processing, not going through the internet to a server. Once all of that is done we will have AGI robotics.
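As a loose illustration of the "robotics and sensor input data, not just video and audio" point above, here is a minimal Python sketch of what one hypothetical multimodal training sample could look like. Every field name, shape, and type here is an assumption for illustration, not anything OpenAI or the video describes.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MultimodalSample:
    """One hypothetical training example spanning several modalities."""
    text: str                 # transcribed or typed language
    audio: np.ndarray         # raw waveform, e.g. shape (num_samples,)
    video: np.ndarray         # frames, e.g. shape (num_frames, height, width, 3)
    joint_angles: np.ndarray  # robot proprioception, e.g. shape (num_joints,)
    touch: np.ndarray         # tactile sensor readings, e.g. shape (num_taxels,)

# A tiny synthetic sample, just to show the shape of the idea.
sample = MultimodalSample(
    text="pick up the red cube",
    audio=np.zeros(16_000),           # one second of audio at 16 kHz
    video=np.zeros((30, 64, 64, 3)),  # one second of video at 30 fps
    joint_angles=np.zeros(7),         # a 7-joint arm
    touch=np.zeros(16),               # a 4x4 tactile pad
)
print(sample.text, sample.video.shape)
```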

AleksandrVasilenko

It's the same thing that scientists do with new science. Someone on the fringe has a wild idea and all the scientists say it's impossible. Give it a while (1 - 100 years) and it turns out it's true.

Arthur C. Clarke had something to say on the matter:

1. "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
2. "The only way of discovering the limits of the possible is to venture a little way past them into the impossible."
3. "Any sufficiently advanced technology is indistinguishable from magic."

Hector-bjls

I think something crucial that is still missing for AGI is the ability to do inference and active learning at the same time. Storing things in the context window is not learning. I think the context window is more akin to how our own short-term memory works and is currently being brute forced to act as long-term memory as well.

You can keep on increasing the context window size and come up with tricks to reduce the impact on model performance, but for it to truly grasp new information and be able to come to new insights, it should be able to update its own weights based on the new information it receives.

If that's too expensive to do on the fly then just reserve moments where the AI gets to review whatever is inside its context window and decide what is kept and used as new training data. A bit similar to how sleeping might work in humans.
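To make that "sleep" idea concrete, here is a minimal Python sketch of the split the comment describes: new information lands in a bounded short-term buffer (the context window), and a periodic consolidation pass decides what gets promoted into training data for a weight update. The names (Memory, observe, consolidate) are hypothetical, and the actual fine-tuning step is left as a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Hypothetical split between short-term and long-term memory."""
    context_window: list = field(default_factory=list)   # short-term: cheap, volatile
    training_corpus: list = field(default_factory=list)  # long-term: consolidated into weights

    def observe(self, item, max_context=8):
        """New information first lands in the context window (short-term memory)."""
        self.context_window.append(item)
        if len(self.context_window) > max_context:
            self.context_window.pop(0)  # oldest items fall out unless consolidated

    def consolidate(self, keep_if):
        """'Sleep' phase: review the context window and keep what matters as training data.

        In a real system the kept items would then drive a weight update
        (fine-tuning); here that step is only a placeholder comment.
        """
        kept = [item for item in self.context_window if keep_if(item)]
        self.training_corpus.extend(kept)
        self.context_window.clear()
        # fine_tune(model, kept)  # placeholder for the actual weight update

memory = Memory()
for note in ["user prefers metric units", "small talk about the weather", "project deadline is Friday"]:
    memory.observe(note)
memory.consolidate(keep_if=lambda note: "small talk" not in note)
print(memory.training_corpus)  # ['user prefers metric units', 'project deadline is Friday']
```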

Hydde

I can't wait until video game NPCs have GPT-5-level intelligence.

creepystory

Great ramble, great clarity, great as always

addeyyry

Loved the ramble. Thank you for sharing your thoughts.

simonlooker