GPT-5 Is Slowing DOWN! (OpenAI Orion News)



Links From Today's Video:

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Comments

Yes, the video primarily deals with speculation and reported insider information rather than concrete details from actual model releases. The discussion centers around:
* Leaked information and industry rumors: The main points about GPT-5's (or Orion's) performance are based on an article from "The Information" that cites anonymous OpenAI employees and internal discussions. There's no official confirmation from OpenAI about these claims.
* Interpretation of AI scaling laws: The video analyzes the implications of the rumored slowdown in GPT-5's development for the broader trends in AI research, particularly the scaling laws that have driven progress so far. This analysis is based on expert opinions and interpretations of current trends, not on definitive findings.
* Focus on future possibilities: A significant portion of the video explores potential future scenarios, such as the rise of test-time compute and the development of OpenAI's o1 series. These are educated guesses about the direction of AI research, not established facts based on released models.
Therefore, while the video raises important questions and provides valuable insights into the current state of AI, it's crucial to remember that the core discussion is based on speculation and projections rather than concrete evidence from released models.

WhatIsRealAnymore

Jimmy Apples called this report fake news, plus Sam Altman very recently said the path to AGI should be clear. Dario Amodei said similar things. So I wouldn't be overly concerned.

ct

Drawing a comparison to Moore's law: I remember 30 years ago there were concerns that physical constraints meant we were reaching the end of being able to build ever more capable CPUs. The same went for memory, hard disks, parallel processing, modem speeds, Wi-Fi, Bluetooth, LEDs, solar panels, batteries, etc. It seems like human ingenuity almost always pushes development beyond perceived limits.

MrArdytube

Dude, I absolutely love how in one video you tell us "we're all going to die," and three hours later it's "guys, we've been played, there's no AGI, everyone go home."

ИванИванов-жбу

"We haven't hit the scaling laws' limits yet."

Notice the language is slowly and quietly getting more realistic.

We went from exponential scaling (Sam's words in the beginning)
to linear scaling (the Moore's law and feedback-loop argument phase),
and now to logarithmic in general, by silently admitting the scaling laws come with their asymptotes.
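The "asymptote" point above can be made concrete with a minimal sketch, assuming a Chinchilla-style saturating scaling law of the form L(N) = E + A / N^alpha; the constants E, A, and alpha below are invented purely for illustration, not fitted values from any real model:

```python
# Illustrative saturating scaling law: loss falls with model size N,
# but can never drop below the irreducible term E (the asymptote).
# All constants here are hypothetical, chosen only for demonstration.

E = 1.7       # hypothetical irreducible loss (the asymptote)
A = 400.0     # hypothetical scale coefficient
alpha = 0.34  # hypothetical scaling exponent

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return E + A / n_params**alpha

# Each 10x jump in parameter count buys a smaller absolute improvement.
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

Running this shows the gains shrinking at every 10x step while the loss stays above E, which is exactly the "diminishing returns with an asymptote" picture the comment describes.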

nexys

Even if we hit a wall, that's fine. That will inspire us to go around constraints or create a new architecture for AI.
Open source always innovates.

mrd

Jimmy Apples said it's fake news. I'm gonna stand by that statement, because he hasn't been wrong so far.

sava

Language task gains are still very good. I'm hoping we reach a point where we can accurately translate 99.9% of any content from any language to another soon. Imagine how much that unlocks.

ThisIsntmyrealnameGoogle

Interesting take on GPT-5's performance. It’s fascinating to see how even the most advanced AI systems face their own challenges as they evolve.

BeyondAlgorithms-lt

I mean, we know that OpenAI is shifting away from the old naming conventions, so there probably won't be anything named GPT-5. This comes with the model focus shift from GPT-4 to o1, which means there's more to be had from that model line than from their previous one.

georgemontgomery

I've been saying this. ChatGPT has already absorbed every bit of human knowledge. AGI isn't going to come from the architecture we're using today.

codycast

Your postings are so ADD. One day you post that Sam Altman said AGI was next year, and here we are a day later saying GPT-5 won't come out soon (which he had already stated). Please stop linkbaiting purely for the sake of clicks and try to be accurate in your projections.

mos

I can't even imagine how much it will disrupt the virtual and personal assistant industry. I hope it won't leave millions of people jobless.

SabreMichael

I agree, for different reasons: the average Joe probably needs AGI, but AGI providers are admitting they cannot provide that service.

Which is not totally disturbing, but IMO does not bode well for humanity long term.

memegazer

The missing link that causes a slowdown in AI models' intelligence is the lack of training sets originating in human environments. The lack of stereo vision and scale causes glitches in the recreation of visual artifacts (six-fingered humans, facial morphing for the same character...). It also lacks physical interaction and true daily intellectual communication in a physical context. This is something that will only be improved through the introduction of robots into human environments or training with advanced synthetic data.

PierreH

I think what a lot of people are missing is that if the previous generation model already achieves, say, 90% of what's possible, then there's only 10% left. There just is not much left to achieve.

dennisg

I'm reading the comments and I see the copium!
You guys are inhaling deeply.
Guys, the models plateaued a long time ago!
And before you start: I love AI and use it every day in coding,

but I will not let it do the code.
Jimmy Apples is all about the AI hype to keep his job secure.

thtcaribbeanguy

I think that the bigger the scale of the model (assuming the data is of high quality), the deeper these AI bots will be able to think, and for longer, enabling smarter and smarter problem solving. If you take into account that they are also getting faster and faster due to better hardware (and software/AI models), then even if there's a slowdown, it's probably only a temporary one: it's only a matter of time before these AI models become super geniuses and go way beyond, with IQs that keep increasing as we progress. Also, don't forget unexpected breakthroughs that can shift the whole paradigm for the better.

MrRandomPlays_

I believe we must slow down the progress of AI at this point to allow for rigorous testing, ensuring accuracy and safeguarding humanity.

jayhu

I think they're lying and they're just keeping the good ones to themselves

oxygon