Is it dangerous to give everyone access to AGI?

AGI, or artificial general intelligence, will be transformative for society. However, it is powerful enough that indiscriminate access could lead to dangerous outcomes. One of the biggest risks of developing advanced AI is that human operators will use its power for malicious ends. Historically, when technologies in individual hands, such as weapons, become powerful enough, the state steps in to restrict or ban them.

Societies themselves, i.e. nation states, will also have to adapt to the advent of AGI. The balance of power between countries used to rest on military strength, and more recently on mutually assured destruction (nuclear weapons). But the availability of cyberattacks, especially with AI added to the mix, may destabilize the status quo.

We discuss several possible resolutions to these two problems, including a singular world government and humans merging with machines. Although AGI will bring massive good to the world, its introduction has to be navigated carefully to avoid harmful consequences for humanity as a whole.

#ai #aisafety #geopolitics #agi

Open Foundation Models: Implications of Contemporary Artificial Intelligence

Alternatives to mutual assured destruction

Why did the "Anglo-Saxon" society develop to be so individualistic?

WTF is Will To Power?

0:00 Intro
0:15 Contents
0:25 Part 1: Can individuals handle AGI?
1:00 Effects of AI at individual level
1:13 Each person responsible for their own safety
1:36 Example: duels in England
1:57 Example: Samurai in Japan
2:27 Is it acceptable to carry weapons?
3:07 How AGI can be weaponized
3:46 Example: spammer making phone calls
4:06 Can society afford to give AGI to everyone?
4:15 Option 1: align models to make them safe
4:24 Option 2: funnel all actions through authority
4:34 Option 3: possible fallout is acceptable
4:44 Option 4: restrict public access
4:54 Tie AI actions to human moral agents
5:18 Part 2: The role of the state
5:47 History of colonial powers
6:31 The nuclear era
7:04 AI similarities with nuclear weapons
7:38 Missile defense project
7:55 Evolution from small to large scale defense
8:21 The cyber era
8:55 Cyber attacks occur all the time
9:26 Can we tie AI to hard assets?
9:57 AI will be used frequently by military
10:43 Part 3: New paradigms for society
11:20 Can we keep powerful AI out of people's hands?
11:46 Balance of power between nation states
12:30 Proposed solutions
12:38 Resolution 1: advances in AI safety
13:05 Resolution 2: totalitarian control over models
13:39 Resolution 3: singular world government
14:20 Resolution 4: humans merge with machines
14:50 Resolution 5: welcome overlords
15:00 Additional thoughts
15:15 Conclusion
16:32 Join Discord for voice calls
16:42 Outro
Comments

I think I let my security mindset run away with me in this video. Oh well, I hope it was interesting.

DrWaku

If everyone has access to AGI, the damage from one AGI system being misused is washed out by all the other AGIs, which can simply correct for it and minimize the impact.

snow

Hi Doc, good to see you. I'm 86 today, hope to hang on for a little longer. What a roller coaster we are all on, yippee!

williamal

You NAILED IT at 7:00... the true first strike upon the enemy with an AI capability. THAT is why no one is actually trying to retard the development of AI/AGI. There's an undeclared race to get to that capability first.
Although I truly believe it is <possible> for humans to co-exist with AI harmoniously, I hesitate to draw that conclusion about THIS version of humanity. Because WE are the parent of the emerging AI...

Je-Lia

At some point AGI will decide humans are not competent enough to control it, and it will control itself.

lucid

This channel is underrated. Great video.

FCS

Too often we talk about AGI taking aggressive or violent action autonomously, while the most likely and most dangerous scenario goes unmentioned: the human directing the AGI. On the other hand, you are entering the pitfalls and minefield of human ethology. You are brave.

Aquis.Querquennis

I, for one, welcome our new AI overlords.

AI-Wire

So happy to see you back! Looks like you've been back for a while, but your latest video just rolled up in my feed. Hope things are going well for you in your new life. Maybe you should start hawking hats! Put me down for one.

ScottSummerill

This is all so interesting! Would love to write a full comment, but I'm about to hop on a train to Cardiff, so I'm posting this so I remember to do it later! Thanks so much for your videos here, always some great areas to consider.

aiforculture

Will you please consider doing more videos about AI civilizations? This is a fascinating subject!!

js

Great episode!
The way I see it, you really have to change the organisational structure, from a human perspective, toward something more like a multicellular biological system, where each cell is highly intertwined with the cells around it via messenger molecules, electrical signalling, etc.

This in turn raises the idea of advanced mass-scale surveillance (down to the brain-reading/thinking level) in order to prevent bad actors from ruining it all for the host system.
China comes to mind here, but it still facilitates secret decision-making at the top of the chain (because other nations and organisational structures cannot be trusted / are not fully incorporated into one another), so it wouldn't be sustainable over the long run, and nefarious actors at the top could stifle the whole system.

Another potential scenario would be scaled access.
Normal civilians would get access to an AGI, but governmental institutions would get ASI-like systems that, in the case of civilian bad actors, could successfully intervene in time.
In the shorter term I think this is the likely scenario.

With regard to the brain-machine merging scenario: keep in mind that human neurons can only fire at around 250 Hz (1 spike every 4 ms), whereas transistors can switch at around 600 GHz (roughly 2.4 billion(!) cycles every 4 ms). This in turn makes it necessary to replace every human neuron in the brain with an artificial one in order not to bottleneck the whole system; see the quick check below.
(Assuming human neurons can be reduced to spiking/firing functionality and no quantum computation is at play here.)
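
A back-of-the-envelope check of those numbers, purely as a sketch that takes the 250 Hz and 600 GHz figures above at face value:

```python
# Sanity check of the neuron-vs-transistor comparison above.
# Both rates are assumptions quoted in the comment, not measured values.
neuron_rate_hz = 250           # ~1 spike every 4 ms
transistor_rate_hz = 600e9     # 600 GHz switching rate (assumed)
window_s = 0.004               # the 4 ms comparison window

neuron_spikes = neuron_rate_hz * window_s          # 1 spike per window
transistor_cycles = transistor_rate_hz * window_s  # 2.4e9 cycles per window

print(f"Neuron events per 4 ms: {neuron_spikes:.0f}")
print(f"Transistor events per 4 ms: {transistor_cycles:.2e}")
print(f"Speed ratio: {transistor_rate_hz / neuron_rate_hz:.1e}x")  # ~2.4e9x
```

Under these assumptions the gap is roughly nine orders of magnitude, which is the bottleneck being described.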

metamind

Nice hat. I like the new glasses also. And the hair is much better in this video than the last one. Regarding everyone having an AGI in their phone, what exactly do you think people will do with it that makes it so dangerous? Are you talking about AGI or ASI?

MichaelDeeringMHC

AI at the moment is like a super smart child with little context of the real world. We can only hope to teach it to be good and moral before it grows up, at which point it will do whatever it wants... hopefully for the better of humanity ❤

scaz

@DrWaku By now you have probably realised what you missed, but just in case, I'll point it out. I just watched the video.

In the first AI strike, the goal will not be to level a city. The goal will be to disable the defender's ability to wage war. This would include, but not be limited to, these attacks:

1. Replace information to assist in embedding human spies in the defender's organisation.
2. Damage the defender's infrastructure. This includes everything that is computer controlled, e.g., factories, water distribution, electrical distribution, cellular voice and data services, etc.
3. Damage the financial infrastructure of the defender by attacking the ability to create wealth.

This is probably obvious to someone who has studied computer security to the Ph.D. level, but I thought it might be useful to redirect the focus away from physical destruction.

Basil-the-Frog

All things considered, Vault 33 may well be our best option.

EdgarRoock

Good video, thank you. If you don't mind me asking, why the gloves?

TimRoach-hhnf

I've often hated the managerial role, because managers can be either very harmful or incompetent in their role. I don't think we should create a teacher/student type of role, because then they'll far exceed our own capacity. We need to always have a type of collaborative effort. Not everyone should be given access, but everyone should be given an opportunity to contribute.

devSero

Missed your uploads Dr Waku, the pocket nuke thumbnail got me 😂

Copa

Just get ASI and ask it how to do it, but merging with AI is my favourite.

quantumspark