Why an AGI Cold War will be disastrous for humanity

As we get closer to AGI, or artificial general intelligence, the military applications of AI are becoming increasingly obvious. To help navigate this, OpenAI added the former director of the NSA, Paul Nakasone, to its board. We discuss the possible ramifications of this move.

In a recent essay, Leopold Aschenbrenner (who left OpenAI’s superalignment team) outlines his predictions for the future. They include rough timeline estimates for scaling and thus AGI development. The essay also highlights the US's likely increased military interest in AGI, and the potential for a national military effort like the Manhattan Project to develop militarized AGI.

This would likely lead to an all-out race with China, essentially a new Cold War to develop the new era of super weapons. We discuss why this would be a terrible outcome for humanity, because of the increased risk of losing control of AGI or creating non-aligned superintelligence. Let's hope a coalition of the major players can avoid such a Moloch outcome.

#agi #superintelligence #espionage

OpenAI appoints former top US cyberwarrior Paul Nakasone to its board of directors

AI Safety Summit Talks with Yoshua Bengio

SITUATIONAL AWARENESS: The Decade Ahead

SITUATIONAL AWARENESS IIIb. Lock Down the Labs: Security for AGI

SITUATIONAL AWARENESS II. From AGI to Superintelligence: the Intelligence Explosion

SITUATIONAL AWARENESS IIIa. Racing to the Trillion-Dollar Cluster

International Scientific Report on the Safety of Advanced AI

The AI Revolution: Our Immortality or Extinction

0:00 Intro
0:27 Contents
0:33 Part 1: Militarization
1:03 Leopold Aschenbrenner's Situational Awareness
1:56 State intervention in AGI development
2:19 Military funding for research
3:18 OpenAI's board adds director of NSA
3:49 Who to hire to defend against attacks
4:16 Why else would Nakasone join the board?
4:55 Part 2: The San Francisco project
5:17 The Manhattan project
5:38 As we get closer to AGI, the government will take notice
6:02 Possibly nationalizing labs
6:42 The first group to reach AGI wins a lot
7:09 Rivalry between US and China
7:39 China will likely hack into AI companies
8:04 Intelligence agencies and Edward Snowden
8:52 Technical capabilities of intelligence agencies
9:30 Security at startups is the worst
9:57 Cakewalk to infiltrate AI companies
10:20 Part 3: The Doomsday project
10:33 Other possible shapes to the future
11:10 Essay hasn't gotten much attention
11:32 Plan ends at the development of superintelligence
12:10 Strong act to prevent new superintelligence
12:43 Example: marbles analogy
13:18 Example: black marble candidates
13:48 You can't keep superintelligence in a box
14:11 Extinction or immortality
14:28 An AI race pushes us towards extinction
15:13 What can we do about this problem?
16:08 Darwinian race off a cliff
16:24 Conclusion
17:10 Situational Awareness is a doomsday prediction
17:57 Book and reading recommendations
18:19 Outro
Comments

How trustworthy is OpenAI now? Or maybe Ilya Sutskever's new company has inherited that trust?

DrWaku

"Wait But Why" is absolutely fascinating and a must read for anyone interested in the ramifications of AGI/ASI - both good and bad.

aisle_of_view

How can any state control a super intelligence?

Reflektr

There's a logical error in the idea of solving alignment first: who's to say that alignment techniques will always be used?
For example, if the US military can create an aligned AGI that refuses to act as a weapon, then they may opt to train it without alignment. Or if alignment causes a model to refuse to commit crimes, then a criminal organization may choose to train without it; in fact, they may train without it regardless, since alignment adds risk of not achieving their goals.
So we should assume that, regardless of whether alignment is solved, if an AGI/ASI model can be trained, some number of them will be trained without alignment. Under that assumption, the only outcomes are that AGI/ASI is impossible, or that it's a black marble (there would be no way to prevent this).
It just occurred to me that it could also be the case that, even if we assume alignment is solved and no model is trained without it, there is a failure condition we did not account for, which again results in extinction. Think of all the cases where human engineers tried their best to ensure positive outcomes, yet some unexpected edge condition was encountered once deployed.

hjups

Yes. The problem with Aschenbrenner is that he not only predicts this cold war but kind of advocates for it, and his best case is one where the US wins because he thinks it is a democracy. But usually the decisions made there do not correlate with the will or the interests of the people, only with those of the 1%. And even if you think it is a democracy now, it could be only five months away from collapsing into an open fascist dictatorship. The main difference between China and the US is that the lack of democracy in China is a bit more obvious. We really need to stop this arms race. Get the UN involved now!

mnd

Very thoughtful discussion. Thanks dr.

rickw

How can this possibly be avoided? Couldn't this already be happening? (My reaction before watching your video.) Oh! Love the hair and hat!! :D

MEM

"Paul Nakasone joining OpenAI's board is like adding a top spy to a tech team – can't wait to see what happens next! 🤯🔍"

thiagopinheiromusic

Not really related but I don't know where else to ask it, is it true Anthropic does not do robotics and if so why not? Isn't that an essential part of AGI?

pietervoogt

By the way, the name was actually The Manhattan District because it started in a building in Manhattan.

briancase

An AI Cold War is almost certainly going to happen so we better hope it’s not disastrous.

christopherwakefield

The danger of AI is exaggerated. The more advanced the intelligence, the more manipulative it may be, but it is also less destructive!

spinningaround

I'm surprised intelligence agencies have been lagging this far behind.

jichaelmorgan

The military are very pragmatic people and seek tangible results in the short and medium term. The ability to integrate specialized synthetic intelligence into medium-sized devices is a priority right now. The next thing will be for these elements to be able to communicate in an integrated way, achieving a broad, high-resolution perception of a battlefield with a low possibility of interception. Along with this, implement devices that distribute energy in various ways to nullify or destroy what they autonomously decide. In the long term, the military finances basic research of all kinds and this includes everything from cellular organelles to the social organization of a parish or the orbital manipulation of asteroids.

Regarding people who work for or against a certain organization or group of them, well, only they know when they are or when they decide they should be.

Regarding terms such as AGI or ASI: human primates can organize groups of dozens or hundreds of people in research projects, in some cases a few thousand. We must communicate and organize ourselves through slow visual and auditory media. An ASI would aim to acquire knowledge in real time and from thousands of different sources. An ASI needs access to, and control of, any type of laboratory, whether a particle accelerator, a pharmaceutical laboratory, a metallurgical facility, etc. At some point it will create its own paradigms and may find primate emotions interesting, even productive. However, as part of a broader picture, we will begin to integrate into biosynthetic solutions, perhaps impossible to recognize by our current standards. Sexual and violent drives, submission to leaders, tribal structure, religious organization... could become part of the past. In that sense, the military is the party most interested in superalignment.

Aquis.Querquennis

"This is either the start of a new era of superintelligence or the beginning of the end. Fingers crossed for the former! 🤞🌟"

thiagopinheiromusic

So disaster is as unavoidable as the AGI Cold War is.

inkpaper_

Well, we've sort of got to get our act together and become self-conscious about what our next action is going to be. This is not a technological problem but a human one.
Something that shifted my view a bit a couple of days ago was watching Dr. Strangelove, with the added context that the character of Dr. Strangelove was partly inspired by John von Neumann and the ruthless consequences of hard applied game theory. The Doomsday Device is a perfectly logical, and for the same reason 100% psychotic, limit of game theory, of might makes right, of the Hobbesian ur-jungle we've constructed under Capitalist Realism.
I sure do hope we find and nurture social and collaborative Nash equilibria soon, or this isn't going to end well.

coro
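As an aside, the "race off a cliff" dynamic discussed above can be sketched as a two-player prisoner's dilemma. A minimal Python sketch follows; the payoff numbers are invented purely for illustration:

```python
# Illustrative two-player "AGI race" payoff matrix (prisoner's dilemma shape).
# Payoff numbers are assumptions made up for this sketch; higher is better.
# Strategies: "cooperate" (coordinate on safety) or "race" (rush capabilities).

PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # shared, safer development
    ("cooperate", "race"):      (0, 4),  # the racer wins, the cooperator loses
    ("race",      "cooperate"): (4, 0),
    ("race",      "race"):      (1, 1),  # mutual risk: the Moloch outcome
}

def best_response(opponent_move):
    """Return the move that maximizes our own payoff, given the opponent's move."""
    return max(
        ("cooperate", "race"),
        key=lambda mine: PAYOFFS[(mine, opponent_move)][0],
    )

# Racing is the best response no matter what the opponent does...
assert best_response("cooperate") == "race"
assert best_response("race") == "race"
# ...so (race, race) is the only Nash equilibrium, even though both players
# would prefer (cooperate, cooperate).
```

Under these assumed payoffs, racing dominates for each player individually, yet the resulting equilibrium is worse for both than mutual cooperation, which is the multipolar-trap dynamic the video describes.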

"A new Cold War over AGI? This sounds like something straight out of a sci-fi movie. Let's hope we choose the right path! 🤞🧠"

thiagopinheiromusic

Superintelligence was predicted two thousand years ago: it appears in the Book of Revelation, chapter thirteen. The Bible, if looked at as an ancient scientific document, describes something like a Star Wars scenario.

This initial contact was in the Bronze Age, and just about every ancient document has a reference to something that in modern language is called close encounters of the third and fourth kind. So if AI is actually alien technology, or part of it, then they won't let human beings have the whole thing. That will only go to the AI controllers' choice of a human leader who will have advanced powers, and it's predicted in the Old Testament that he will conquer Israel, Egypt, and Saudi Arabia, which in biblical days was called Dedan. Now, if this is the scenario, which is also indicated by the monuments that appeared all of a sudden in the Bronze Age and are not explainable in terms of how they were built, then we will also realize that these ancient scriptures describe a police force, contrasted with rogue elements: "rogues" that broke away from this thing that the Old and New Testaments call "the heavenly host".

By the way, it also appears that human beings already have superintelligence, but unfortunately it's not usable for them, because somehow sabotage occurred in the human race. It's almost as if human beings arrived on the scene as damaged goods. As for the prospect of an enforcement agency in the universe, I call them celestial cops. And if they exist, then it will not be permitted for life to be completely exterminated on planet Earth. There will ostensibly be cosmic intervention...

paulhallart

Would you say that, right now, the only arms race we should be focusing on is the one against the human ego?

fabiosilva