But Is it AGI?! OpenAI's o3 IMPRESSES--but is it Intelligent?


Join this channel to get access to perks:

Get The Elon Musk Mission (I've got two chapters in it) here:

For a limited time, use the code "Knows2021" to get 20% off your entire order!

**You can help support this channel with one click! We have an Amazon Affiliate link in several countries. If you click the link for your country, anything you buy from Amazon in the next several hours gives us a small commission, and costs you nothing. Thank you!

**What do we use to shoot our videos?

**Here are a few products we've found really fun and/or useful:

Tesla Stock: TSLA

**EVANNEX
If you use my discount code, KnowsEVs, you get $10 off any order over $100!

Instagram: @drknowitallknows

Comments

It’s fine to have your own definition of AGI. I think o3 is smarter than the average human at most knowledge-based work, and that’s good enough for me to consider it AGI. We have to stop moving the goalposts at some point :)

gonzalezm

Companies don't need AGI to take our jobs.

MuhammadakbarAK

I love how it's now a hotter take to say something isn't AGI than that it is.

adamholter

We might have reached AGI when your boss can't tell whether you or your computer did the work while you're at home. But that might also be the moment you get fired.

jean-marcducommun

I had problems comprehending trading in general. I tried watching other YouTube trading channels, but they made the concepts more complicated. I was almost giving up until I discovered this content, which explains everything in detail. The videos are easy to follow.

LoretaPrifti-ot

🙄🙄 I love how the goalposts move each time a new model comes out. Listen to the ARC guy say, "I'm excited to start work on our NEW eval/test."

bitwise_

I believe they achieved an early form of AGI. Pre-training has hit a wall, but test-time inference is only at the beginning. They don't need any more scientific breakthroughs besides more powerful and efficient GPUs.
If o3 is not AGI, then I want to see your reaction to o4 or o5.

khutsohlase

ARC-AGI is literally just a regular IQ test.

tedarcher

And AI can deal with most of the tasks you mentioned, even with conflicting goals. Just have a conversation with Claude.

adelatorremothelet

That's why I'm a subscriber! I said from the start that OpenAI's o3 AGI hype was a hoax.

MrlegendOr

Your AGI estimate is on the upper extreme, and even more so when ASI hits around 2030-32 and fully takes over. I don't believe it has to be fully integrated; just seeing integration across 90%+ of fields is only a year or two away. And remember, what we see is not all there is: they are probably a year ahead of o3 and on at least o4 in house, integrated into all their work. That's why these frontier models are 10x-ing every 4-5 months now; it was every six months, six months ago. Agents coming online, like image generation did, will change the world, and that's right around the corner.

ListenGrasshopper

o3 is definitely impressive, but one number caught my eye: $1000 per run. It's not the expense itself (I believe that cost will gradually come down) but the reason behind it: a heavy computational search across various paths. You might recall AlphaGo, which used Monte Carlo Tree Search during inference. I'm not sure what type of solution search o3 employs, but it seems to involve some of these heavy computational methods. Could we say it's a kind of 'fake AGI'?
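The "heavy computational search across various paths" pattern can be sketched, in much-simplified form, as best-of-N sampling with a verifier. o3's actual search method is not public, so everything below (the function names and the toy task) is purely illustrative:

```python
import random

def best_of_n(generate, score, n=1000, seed=0):
    """Sample n candidate solutions and keep the one the scoring
    function (a "verifier") likes best. Inference cost scales
    linearly with n, which is one way per-task compute explodes."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)

# Toy task: find a hidden target value; the verifier rewards closeness.
target = 0.35
answer = best_of_n(generate=lambda rng: rng.random(),
                   score=lambda x: -abs(x - target))
```

The point of the sketch is only the cost structure: quality improves with more samples, but so does the bill, linearly.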

joelxart

As a retired engineer, I dealt with that problem when creating decision-aiding algorithms and planners by introducing what we called quality graphs.

A quality graph maps a truth value (something that can be assigned a value based on measurement, such as weight, time, size, distance, or color) to a goodness factor.

It was also often useful to limit goodness to the range 0 to 1 (or -1 to 1) so that one could multiply the factors together to get a composite goodness value based on several criteria.

Thus they can act much like fuzzy-logic membership functions.

The quality graphs might be computed or simply defined heuristically based on human judgment.

These might be multiplied by a base value, say from 1 to 100, and thus function much like expected values in probability.

Thus we might have a composite goodness = 0.5 x 0.7 x 100 = 35, or the like.
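A minimal sketch of this idea, assuming piecewise-linear quality graphs; the breakpoints and criteria below are invented for illustration:

```python
def quality(value, breakpoints):
    """Interpolate a goodness factor in [0, 1] from
    (truth_value, goodness) breakpoints, clamping beyond
    the first and last points."""
    pts = sorted(breakpoints)
    if value <= pts[0][0]:
        return pts[0][1]
    if value >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Two hypothetical criteria: shorter distance is better,
# heavier payload is better.
distance_goodness = quality(5.0, [(0, 1.0), (10, 0.0)])  # -> 0.5
weight_goodness = quality(7.0, [(0, 0.0), (10, 1.0)])    # -> 0.7
base = 100
composite = distance_goodness * weight_goodness * base   # 0.5 * 0.7 * 100
```

Because each factor stays in [0, 1], multiplying them behaves like combining fuzzy memberships: any one bad criterion drags the composite down.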

RonLWilson

A humanoid robot that can walk, talk, drive and solve math & physics problems would be impressive.

jimcallahan

Merry Christmas,
Doc!! ❤🎄❄⭐☃️🎅☃️⭐❄🎄❤

keepcalm

I make a distinction between AGI and "being conscious". AGI can be achieved without consciousness, or at least without consciousness as we experience it as humans.

For us, each problem or situation has a value and/or emotional state, which carries a sort of meaning. We feel happy or sorry about a specific situation or problem precisely because we translate the outcome into an emotional state that resonates with the life principles embedded within us.

Perhaps consciousness permeates everything (if we assume a "field hypothesis"), and therefore an AI could be conscious to a certain degree as well. But consider that life as we know it took several billion years to achieve the emotional geometry we share among living beings. Something as simple (for us) as the fear of dying or the joy of having a child would be emotionally nonsensical for an AGI; it could "understand" very accurately why we feel fear or joy, without feeling it. And I might be wrong, but my intuition is that it's not a matter of the time needed to train an AGI to acquire emotions: we could give an AGI all the time and computing power available in the Universe, and it would very likely still ignore the process that led to our emotions.

elefantfilms

"That's one small step for AI, one giant leap backward for Unemployment" 🤭

yoyo-jcqg

It's much easier for probabilistic models to satisfy a fuzzy goal, where the answer is more or less correct according to some basic intuitions, than to produce the highly precise solutions to novel problems that o3 achieved.
To get better-than-human results on the examples you give, last decade's specialized models already sufficed. You can easily train an image classifier for ripe vs. unripe. It isn't verifiable in a definitive way, but you can probably get 99.999% accuracy where a human would probably be fine with 80%.
If you want to solve hard problems in science, that isn't going to cut it.
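For concreteness, a throwback sketch of that "last decade's specialized model" approach: logistic regression on a single hand-picked color feature, trained by stochastic gradient descent. The dataset, feature, and thresholds here are all synthetic and invented for illustration:

```python
import math
import random

rng = random.Random(42)
# Synthetic "dataset": ripe fruit skews red (feature near 0.8),
# unripe near 0.3; the feature is mean redness in [0, 1].
data = [(min(max(rng.gauss(0.8 if ripe else 0.3, 0.1), 0.0), 1.0), ripe)
        for ripe in [1, 0] * 200]

# Plain logistic regression trained by stochastic gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted P(ripe)
        w += lr * (y - p) * x
        b += lr * (y - p)

def is_ripe(redness):
    """Classify a fruit from its redness feature."""
    return 1 / (1 + math.exp(-(w * redness + b))) > 0.5
```

This kind of narrow model gets "good enough" on a fuzzy perceptual task with almost no machinery, which is the commenter's point: it says nothing about solving precisely specified novel problems.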

steve_jabz

When is it AGI? The larger problem may be that we don't have a good definition of intelligence or consciousness.
It seems hard to derive useful tests without even knowing what to test for.

Clearly, the Turing test (and anything like it) is nearly useless.

SundogbuildersNet

2024: AI passes simple graphics IQ tests
2026: AI plays Zelda BotW like a human
2030: embodied AI has no trouble doing stuff

PoffinScientist