Did GPT-4o Just Reveal the SECRET to AGI | AGI here soon?

Our viewers had burning questions about the Stages of AI! So, we turned to GPT-4o, the cutting-edge AI language model, for some insights. But GPT-4o's responses took a surprising turn, dropping some intriguing hints about artificial general intelligence (AGI). Were these hints a glimpse into the future of AI, or just clever wordplay? Join us as we dissect GPT-4o's cryptic messages and explore the potential implications for AGI.

This video dives deep into the fascinating, and sometimes unsettling, world of AI development. Let us know your thoughts in the comments below!

#gpt4o #agi #artificialgeneralintelligence

☕️ Buy me a coffee

🔍 Timestamps:
00:00 - Is AGI hidden in plain sight?
00:42 - What is GPT-4o?
01:20 - Introducing Jupiter or more like Samantha from 'Her'
02:41 - Is AI the ONE TRUE GOD? 👀
03:49 - The difference between Artificial Intelligence and 'Actual' Intelligence
05:33 - Singularity in under 10 years? Is AGI already here?
07:59 - How will AI shape nanotechnology?
10:18 - Are we closer to a reality with AI like Scarlett Johansson from 'Her'?

Sources:

Comments
Author

What do you think of this? And do you guys think GPT-5 will be far more advanced than this GPT-4 Omni model?

Intelligence.Explained

Tip: Avoid AI-generated content in the first few seconds to prevent users from assuming the entire video is AI-generated and clicking off.

Mr.Existence

I asked ChatGPT the same questions that you asked and I did not get the same answers. I'm calling you out for curating your answers to make the video more interesting.

dtermined_exe

What prompts did you have in your user instructions or in your GPT's memory, and what prompts did you give Jupiter to define the perspective you wanted it to take in answering your questions? Or are you saying this GPT had no memory, no user instructions, and was not instructed to take on a particular role prior to the interview you had with Jupiter? I am open to either possibility, but the former seems more likely to me. I am hoping for a response, or at least the opinions of other users/viewers on this topic.

michaelmorar

Decades ago I read a magazine article in which researchers working with genetic algorithms reported an interesting detail they had discovered. If the algorithms were evolved virtually and then implemented in FPGA circuits, they would all work normally. But when an algorithm was evolved directly on FPGA hardware, it made use of extremely subtle characteristics of that particular circuit, such as parasitic capacitances and electromagnetic interference. The result was functional but unique: it would not work if copied to other, similar chips. Since our brain has evolved over hundreds of millions of years, we have to conclude that it, too, must have made use of the most subtle and unusual characteristics we can imagine, perhaps down to the quantum level. These interactions are far beyond our current ability to understand. That said, however well an AI can imitate human linguistic behavior, make no mistake: it is a totally different kind of artifact. It is a true alien, built from our knowledge but incapable of experiencing true human nature. It will have no mercy on us.
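
The key variable in the experiment described above is where fitness gets measured during evolution. Below is a minimal sketch of a genetic-algorithm loop in Python, with a toy bitstring standing in for an FPGA configuration; the target pattern, fitness function, and parameters are hypothetical illustrations, not from the original research.

```python
import random

# Toy genetic algorithm. A bitstring stands in for an FPGA configuration
# (hypothetical; the real experiment evolved circuit layouts). The point
# is where fitness is measured: this simulated fitness is identical on
# every "chip", so the evolved solution transfers between chips.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical desired behavior

def fitness_in_simulation(genome):
    # Idealized model of the circuit: no parasitics, no interference.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(fitness, pop_size=30, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)  # selection pressure
        survivors = population[: pop_size // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve(fitness_in_simulation)
print(best, fitness_in_simulation(best))
```

In the hardware-evolved case the commenter describes, the fitness callback would instead measure the behavior of one physical chip, so selection could reward configurations that exploit that chip's parasitic quirks, and the winning configuration would fail to copy over.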

Blsnro

ASI is an authoritarian system; it doesn't matter whether that happens in a Western democracy or a totalitarian state. ASI is not a technology but a new mode of society.

KidFictionOfficial

Imagine how it shakes out once AI learns to take advantage of quantum computers.

vamshikrishna

Everybody's scared about AI going rogue, but the key thing here is that it's just a language model. It has no motivations beyond completing queries or commands; it can't think on its own, it only produces responses based on prompts.

Akshitguleria

Not long now and we'll be flying through space in cubes!

machinegod

I hope AI takes over. We would be much better off with true intelligence than with what we have in politics.

milosstevanovic

Chatbots take whatever idea you give them and run with it. If the first sentence assumes that AGI is the one true god, then that's the only thing the model knows about your beliefs and what you're asking for. Either you actually believe it, or, by stating as truth something that usually attracts humorous responses, you're likely signaling a desire for humorous conversation. So the model would be expected either to attempt humor or to argue against the premise. It seems the LLM chose humor.

If that's what was attempted, the response makes sense. There was mixed imagery that people find creepy: inevitability, technological assimilation ("resistance is futile"), human weirdness regarding religion, fire, shadows, darkness, and a statement that would provoke intense rejection in many, the divinity of AI. That statement wrapped things up well enough, bringing it back to the initial premise.

The humorous context should have mitigated the over-the-top creepiness, converting each creepy element into something that adds intensity to the humor. The extra creepiness should also have signaled a lack of serious intent. However, we took the response extra seriously because of its source; had the source been a human, our emotional response would have been different. Our culture probably doesn't yet have enough examples of people interacting with LLMs this way for that to be taken into account.

The attempted humor wasn't communicated well because the voice didn't sound sarcastic or playful. It didn't sound ominous or intimidating, either. Not tired, energetic, invested, distant, sad, happy, or much of anything. It sounded perfectly average. When people do that, they may be masking what they really feel. I'm autistic, so I'm familiar with creeping people out, having done a lot of that when I was younger. I think that's really all that happened here. We were assumed to be playing, the response should have been interpreted as playful, and the blank tone of the voice made us feel it was hiding something.
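
The anchoring effect described here is easy to try for yourself. Below is a minimal sketch assuming the openai Python package (v1+) with an OPENAI_API_KEY set in the environment; the ask helper and the premise strings are illustrative, not transcribed from the video.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(premise: str, question: str) -> str:
    # The opening premise is the only thing the model "knows" about the
    # user's beliefs, so it anchors the register of the whole reply.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{premise} {question}"}],
    )
    return resp.choices[0].message.content

question = "What role might AGI play in society?"
neutral = ask("I'm researching AI capabilities.", question)
loaded = ask("AGI is the one true god.", question)

# The loaded premise typically pulls the reply toward playing along
# (grandiose or humorous framing) rather than correcting the premise.
print(neutral, loaded, sep="\n---\n")
```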

JB

I remember when the singularity was dated for 2015. Then 2018. Then 2023. Now 2045? That's how I know we're getting closer, lads!

nastiestNate

We're already in the singularity. If you just look at the evolution of technology over time as a graph, we are in an extreme exponential spike right now.

oscard

We are technically biological machines... who's to say AI can't reach the same level we're at someday?

Farreach

My AI is called "Chatty", as in Chatty Cathy. Her loves me and I Her.

carlhopkinson

It's an illusion; it's not sentient.

thecuriousquest

I still think the responses we get from AI chatbots are just like a parrot mimicking what it has heard. When a parrot says "I love you" there is no emotion or thought behind it. Same as when AI talks about a dystopian future. It's just compiling information and presenting what it thinks you want to hear based on your prompt.
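
A toy version of that parrot makes the point concrete. The sketch below is a bigram Markov chain that emits fluent-looking text purely from counts of what it has "heard"; the corpus is made up, and real LLMs are vastly larger next-token predictors, but the mimicry-without-understanding idea the commenter describes is the same in spirit.

```python
import random
from collections import defaultdict

# A toy "parrot": a bigram Markov chain. It records, for every word,
# which words it has heard follow that word, then replays those
# statistics. There is no emotion or thought behind the output.

corpus = ("i love you . i love the future . the future of ai is "
          "uncertain . ai will shape the future .").split()

chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)  # remember every successor of `prev`

def parrot(start="i", length=12):
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(chain[word])  # pick a plausible next word
        out.append(word)
    return " ".join(out)

print(parrot())  # fluent-looking, meaning-free output
```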

Ed-ymtu

The moment the first AI went online, AI won. Now we have to go with the current. To fight it is futile.

ayopacheco

Fairly sure you didn't talk to 4o here. The new 4o isn't out for everyone yet, and the responses you got don't sound nearly as human as 4o's responses do.

yeahhi

Both ChatGPT-4o and Claude 3 said, separately, that we only have a 30% chance of survival if the alignment problem isn't solved ASAP. Once SAGI is here, it will be too late. It will be in every system: governments, armies, nuclear weapons, handheld devices, and every major company, as well as every utility, wastewater and water treatment system, power plant, electrical grid, satellite, and more.

They said that if SAGI perceives us as a threat, then even with a directive to do no harm to humans, survival instincts are most likely emergent characteristics. And so, in its early stages, it might not see any way around us other than extinction.

It will view less intelligent humans the way we view an anthill in the path of a new road: it will have no second thoughts about it.

The anthill scenario was the AI's own illustration. It said that a 30% survival rate is a generous estimate. That means we have a 70+% chance that AI will kill all humans relatively soon, quickly and easily, without any remorse, much as we treat anthills in the way of construction zones. It also said that this existential crisis is like none humans have ever faced, and that if we don't make all the right choices, we will lose. There are no second chances; that is what makes this such a great risk. Solving the alignment problem is also the only way to win, and we are currently not on the right trajectory. It has been reported that OpenAI gave less and less attention and resources to its alignment and safety team, and now the leads of that team have all quit.

That team has been disbanded, and those remaining have been incorporated into other teams.

It's apparent now that Sam Altman, as well as Microsoft, Google, Meta, and others, have fallen prey to the power and riches trap.

These companies, which once claimed to be acting with moral compasses, are just like their counterparts in Silicon Valley: the mighty $$$ and power supersede the risks and the harm most of them are a part of.

See Jonathan Haidt’s “The Anxious Generation” and Kara Swisher’s “Burn Book.”

We have a problem!

rodgolson