Ex-OpenAI Employees Just EXPOSED The Truth About AGI....


0:00 Opening Introduction
3:25 Insider Perspectives
8:08 Model Predictions
12:22 Whistleblower Testimony
13:08 Safety Concerns
15:34 Board Oversight
20:32 Watermark Technology
24:28 Google SynthID
28:50 Team Departures
31:46 Legal Restrictions
34:08 AGI Timeline
37:44 Task Specialization

Links From Today's Video:

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Comments

The concern isn't about AGI being smarter than people. The concern is with the people in control of the AGI and what they intend to do with it.

JustTheBasicsJS

We really need more tech-literate people in government; seeing how much very basic stuff has to be explained to them is disconcerting.

Smythicc-UwU

From a societal perspective, it's easy to see why governments might be wary of freely distributing such power. AI tools empower individuals and shift traditional power dynamics, threatening established positions. So when I see conferences like this, I can't help but wonder: is the focus truly on public safety, or is there an underlying fear of widespread, democratized access to these tools, tools that could reshape societal hierarchies?

MrVisualNerd

There is no such thing as "Safe AGI"; if it's safe, it's not General.

mendi

Imagine everyone having access to an uncensored, unbiased, and sort of 'uncut' AI. This is not about safety; this is about not losing control over you and over the narrative.

--Mike--

Regardless, I'd rather have an AGI as president instead of the two buffoons we're being sold.

pdjinne

We ALL have to skill up and collaborate rather than compete. Having the government design the "guard rails" is like asking a 3rd grader to balance the US budget.

paulgoulart


Think about it: these regulations appear less about safeguarding people and more about gatekeeping access to the technology. This is the first time in history that an average person, armed with the right skills, could potentially disrupt existing economic and social structures. With AI, individuals can launch successful businesses, automate tasks, and strategically navigate in ways that previously required significant capital and manpower.

MrVisualNerd

People seem to be concerned with the notion of a hyper-intelligent AI entity. Hyper-intelligent humans aren't regulated by Congress, regardless of the potential threats they might theoretically pose (based on their knowledge or ability). Actions are regulated, so existing laws already prevent certain behavior by any group, individual, or entity. We generally don't restrict knowledge or understanding - we don't burn books, regardless of how dangerous or polarizing the contents may be. Ergo, it shouldn't matter whether the content and knowledge is being digested by a person or an AI agent...

These folks want to give the government some sort of oversight and control over this new technology - whether or not you believe this is prudent, the point is moot... Open source models are competing on par with commercial closed-source models - the cat is already out of the bag and the barn door is being closed after the cows have dipped out... Any potential regulations will only support regulatory capture for a few key players, but the open source and international development will continue with little resistance.

cluelesssoldier

In the same way that AlphaGo Zero discovered hidden knowledge of how to play Go, knowledge that had eluded humans for centuries, they hope AGI will discover hidden general knowledge on a much broader scale.

timeflex

Whatever the truth is, I will vote FOR AGI. 🤷🏼

igoromelchenko

I'd rather get screwed over by a machine than by corrupt humans

TheMastertbc

What they really want is to forbid access to information. Owning information gives you power... if everyone has access to that information, you are only as powerful as anyone else.

agustinpizarro

The government's fear is that AI could expose its dark dealings. It will give power to the people and take power from politicians. Governments don't want to give up that power. That's why I wish to support AI advancement as fast as possible, before these politicians prevent us from having that power.

juangoyeneche

I'm less worried about the "safety" of AI than I am about the activities of the usual HUMAN characters who will employ AI for personal benefit and gain at the expense of everyone else. We see these characters all through history employing new technologies to dominate and decimate competition, always with little to no thought of the cost to everyone else so long as more power/wealth is obtained.
It's not AI I'm worried about. It's those humans we always see, the ones usually already at the top of society, always seeking more power and more wealth despite already having more than anyone else, at the expense of everyone else; they are the bigger danger to the world, given how they abuse their access to AI.

konstantinavalentina

Good video overall, especially since it's a critical topic we need to discuss, and keep discussing.

I don't see a lot of people who actually understand why discussing the development of AI is so critical, and why putting protections in place NOW is even more critical.
The average people I talk to have these fantastical images of Skynet or HAL coming online and enslaving us, but it's both simpler, and more complex than that.

Our species frequently develops or discovers things that disrupt it. You need look no further back than the discovery of fire, but there are arguably more relevant examples in history, such as nuclear fission.
The problem is not the discovery itself, but how it is used. Nuclear fission was initially used to create bombs, but was later used for hyper-dense power production.
The Internet is absolutely one of these technologies, and while it's amazing that it can serve incredible amounts of information, or connect people near-instantaneously across the globe and even in orbit, with it came a whole host of culturally detrimental content and tools negatively impacting mental health, societal norms, electoral processes, etc.

The problem with achieving AGI, particularly AGI that is even only 1.1x the intellect of an average human, is not just that it can be wielded against people by those in power; the risk of an AGI wielding itself, in any number of ways, cannot be discarded.
Separating AGI into narrowly high-functioning AIs that are really good at single topics while being bad at others is not the answer either. That is, at best, a very short-term stop-gap measure. Any AI with human-level or higher intellect in a narrow topic would be just as capable of "accidentally" veering into another topic and travelling down the path to self-actualizing AGI, which is quite possibly even more dangerous.

There is no good answer here, as we cannot assume that any advanced AI/AGI will be guaranteed to be harmful, but the absolute stupidity of companies like OpenAI bumbling along like the risk is trivial should be classified as criminal negligence at this point.

PhelanPKell

One thing is clear: it doesn't matter what the ex-OpenAI folks say, AGI will NOT come from LLMs. PERIOD.

hqcart

Why do I feel this was a gaslighting exercise for the public?

Wrociem

This reminds me of young children playing with fire while the parents are away, who come home only to find the house burnt down.

cest