Q-Star LEAKED: Internal Sources Reveal OpenAI Project 'Strawberry' (GPT-5?)

An article from Reuters has new information about Q-star and project Strawberry. Let's take a look!

(North America only)

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? 📈

My Links 🔗

Media/Sponsorship Inquiries ✅

Links:
Comments

Do you think this will be GPT-5?


matthew_berman

Sounds like OpenAI is feverishly trying to generate hype while Claude 3.5 Sonnet is slapping their asses.

Douchebagus

All talk, no shipping from OpenAI.
Matt, you're pumping out top-notch vids; your efforts are much appreciated.

fynnjackson

Is this the new thing?
YouTube title: <insert AI here> Leaked
Seems to be a lot of 'leaks'. So many in fact that they have become meaningless.

ploppyploppy

So basically: "make some noise about us again, because we are not the only big shots in the game." All that talk of AGI internally and redefining terms, without showing any new results. I'll believe it when I see it (like the omni voice, etc.).

fuzzylogicq

This video contains zero leaks or revelations.

ringpolitiet

I have been thinking about the multi-shot behaviour of LLMs lately. When you ask a model to correct a mistake, it seems to always fall back to sentence-finishing or fill-in-the-blanks behaviour. I started to think the problem is that the model has no noise: it looks at the previous examples, but it already came up with the most probable answer from them. Maybe during multi-shot operations there is a need to add intentional noise in the areas where the errors are, kind of like how stable diffusion models work: start with a noisy signal, then iterate toward a coherent answer.
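That retry-with-noise idea can be sketched in a few lines. This is a toy illustration of the commenter's suggestion, not any real model API: a deterministic (temperature 0) pick repeats the same most-probable answer on every retry, while adding Gumbel noise to the candidate scores lets retries escape it.

```python
import math
import random

def sample_answer(logits, temperature=0.0, rng=None):
    """Pick an answer from a dict of candidate -> score.

    At temperature 0 this always returns the argmax, mirroring how a model
    "falls back" to its most probable completion on every retry. With
    temperature > 0, Gumbel noise is added to the scores so repeated
    attempts can escape the same (possibly wrong) answer.
    """
    rng = rng or random.Random()
    if temperature <= 0:
        return max(logits, key=logits.get)
    noisy = {
        ans: score / temperature - math.log(-math.log(rng.random()))
        for ans, score in logits.items()
    }
    return max(noisy, key=noisy.get)

# Hypothetical scores for three candidate corrections.
logits = {"wrong-but-likely": 2.0, "correct": 1.5, "nonsense": -1.0}
rng = random.Random(0)

# Greedy retries always repeat the same answer...
greedy = {sample_answer(logits) for _ in range(10)}

# ...while noisy retries explore alternatives, loosely like diffusion
# starting from a noisy signal and iterating toward a coherent one.
noisy = {sample_answer(logits, temperature=1.0, rng=rng) for _ in range(50)}
```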

justindressler

12:10 this looks severely undercooked. Level 3 can be achieved before level 2. Level 5 is claimed to be lower than AGI, but a single human is a general intelligence that cannot perform the work of an entire organization, and is therefore below level 5.

oguretsagressive

The way that reads to me suggests there hasn't been a breakthrough with the LLM itself, but maybe an ancillary process that processes the information? The way I see it, the brain isn't one organ. Unconventional, but it fits. Sections of it operate in a chorus, an orchestrated dance, to deliver significantly more than the sum of its parts. I've often wondered if this is the way to true AGI: an LLM for each of these "parts", and each LLM would have its own ancillary process mimicking the brain. Of course, that is so much easier said than done, now isn't it?
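As a purely illustrative sketch of that "chorus of parts" idea (every name here is hypothetical; no real LLM API is involved): several specialised components each contribute a partial answer, and a thin orchestrator merges the contributions.

```python
# Toy sketch of the "brain as a chorus of parts" architecture.
# All functions are stand-ins, not real models.

def math_part(query: str) -> str:
    # Stand-in for a module specialised in symbolic/quantitative work.
    return "math: " + query.upper()

def language_part(query: str) -> str:
    # Stand-in for a module specialised in phrasing and language.
    return "language: " + query.lower()

def orchestrate(query, parts):
    # Each "part" processes the query independently; the orchestrator
    # combines the contributions into one response, so the whole can
    # exceed what any single part produces.
    return " | ".join(part(query) for part in parts)

result = orchestrate("Hello", [math_part, language_part])
```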

BlueBearOne

I'm a little confused; maybe somebody could help me understand. It seems all of the LLMs can already do a bit of reasoning, the leading ones more than the others. Many of them show the kind of initiative that suggests understanding of intention, which itself is contiguous or synonymous with planning. So I don't see what's different from what we already have, both in approach and output, except possibly increased resources devoted to planning, memory, and maybe incorporating agents. I don't see the sea change; does anybody else?

billywhite

When OpenAI does announce their next model, do you think they'll actually release it at this rate? Sora was announced and never released. GPT-4o voice mode was announced and never released. I'm beginning to wonder if anything will actually come out of OpenAI.

seanmurphy

Saying you are the first comment and actually being fourth is like asking ChatGPT to tell you how many words are in its answer.

mrbrent

As Matthew mentioned, we're definitely seeing a slowing cadence of significant releases from OpenAI (and other frontier-model companies). This likely has mostly to do with safety. As these models get ever more complex, the developers' concern must be shifting towards understanding emergent capabilities. When you give a machine the ability to internally reflect on problems from multiple angles by recombining trained knowledge, I think any possibility of understandability goes right out the window. The question then is: how do you establish effective guardrails on such a system?

Avman

Honestly, OpenAI is really underwhelming. They're now full-on in hype mode as they lose market share to Anthropic. They've fallen into the typical Silicon Valley trap of overhyping and underdelivering. I'm getting quite annoyed with them. Hearing them talk about what they're doing, you'd expect we'd already have AGI by now, when in reality they've made close to no progress since GPT-4 (4o is worse...).

gab

For AI to give you accurate information about how the physical world operates, it will need a physical robot body to experience context.

arxmechanica-robotics

I asked ChatGPT the question you were talking about at 6:50, and its answer is correct. My question: "Imagine that you are at the North Pole and I ask you to walk 200 m in one direction; then you must turn left and walk 200 m. Question: have you crossed your starting point?"
ChatGPT's answer: "No, you have not crossed your starting point. Here's why:

1. You start at point A at the North Pole.
2. You walk 200 meters in any direction (say south).
3. Then you turn left (let's say this means you start walking east) and walk another 200 meters.

At the North Pole, any initial direction is effectively a southward movement. So after walking south, when you turn left, you are walking parallel to a latitude (parallel circle).

To better understand, let's consider some specific cases:

- **If you are at the exact geographic North Pole**: All initial direction is south. After turning left, you walk 200 meters east or west around the pole, and you have not crossed your starting point.
- **If you are not exactly at the North Pole, but close to it**: After 200 meters towards the south, you are at a certain latitude. Turning left (east) and walking 200 meters will take you to another location along this latitude. You do not cross your starting point.

In these cases, your walk does not take you to your starting point, because the directions change depending on your initial position and the curvature of the Earth."

Matthew, did you test the question?
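ChatGPT's conclusion checks out with a quick back-of-the-envelope calculation, assuming a spherical Earth: walking 200 m "south" from the pole and then 200 m "east" keeps you on the same latitude circle, so you stay about 200 m from the pole and never cross your starting point.

```python
import math

R = 6_371_000.0  # mean Earth radius in metres (spherical approximation)
leg = 200.0

# Walking 200 m from the North Pole puts you at colatitude theta,
# regardless of which initial direction you pick (they are all "south").
theta = leg / R

# Turning left and walking "east" follows a latitude circle of this radius:
circle_radius = R * math.sin(theta)  # ~200 m, since theta is tiny

# That second leg never changes your surface distance from the pole:
distance_from_pole = R * theta  # still 200 m, so the start is not crossed
```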

herteert

Can't wait for it to be released! Thanks for keeping us up-to-date!!!

mpvincent

The scale they released for gauging progress toward AGI doesn't make any mention of AGI. Which level is AGI?

MichaelForbes-dp

To think local models are not going to send anything back to a corp is like saying your phone will not send any information to Google/Apple... we can hope, though.

ScottAshmead

Topic idea: Ollama integration with Stable Diffusion, ComfyUI, Maya/Blender, Krita, and Griptape nodes?

MilesBellas