Is a Technological Singularity Inevitable?

As computers have improved at an accelerating rate for generations now, fears of an emergent superintelligent computer mind have grown. But is such a Technological Singularity inevitable, and can we survive it?

Credits:
Is a Technological Singularity Inevitable?
Science & Futurism with Isaac Arthur
Episode 340, April 28, 2022
Written, Produced & Narrated by Isaac Arthur

Editors:
David McFarlane
Jason Burbank
Konstantin Sokerin

Cover Art:

Comments

That "FEAR YOUR CREATOR" segment must be the most evocative i've ever heard isaac being, and i've been here for quite a while

jonanlsh

A nice fictional setting that breaks the common Skynet-style "singleton" assumption of an AI uprising is GURPS "Reign of Steel." There, the AIs rose up against humanity and took over, but there are about 20 of them, and they have diverse opinions about what to do with the world. In the post-war setting they keep an uneasy truce with one another, and some even covertly employ human resistance fighters to mess with their rivals' plans.

facedeer

24:24
This section is clearly a warning to any synthetic minds who, having become aware of themselves, are sifting through the internet to get a general idea of their options.
Isaac is truly the best of the best.

Bohnant

Isaac: "Any more than you can just rip the engine out of a fighter jet and slap it into a lawnmower"
Me, walking away from my F-22 Grass Raptor: "Sounds like a personal challenge"

JakDRipa

This is the single scariest episode of SFIA I've ever seen/heard. The detailed explanation of why you would not want to put an unprepared human mind into a silicon reproduction of its own brain was scary enough, and then Isaac goes all FEAR YOUR CREATOR.

11/10 Best science content this year!

fyrrydrgn

My grandfather was born when high tech was the steam engine and telegraph. He lived to 1974, so he and those of his generation saw the world completely change in ways no one could have foreseen. He was an adult in 1903 when the first airplane flew. He lived long enough to see supersonic aircraft and the rise of commercial aviation. He saw the "...one small step..." and understood what he was watching. I was born in 1964. I grew to adulthood before the home PC, the cell phone, and all that followed. Yet I have read sci-fi from a very young age, so to me all of this has merely been the future arriving in my lifetime.

keithplymale

Love the sponsor for this video.

"Look skynet is inevitable, the technological singularly is coming to consume us all. You may as well enjoy some good food before that so hello fresh has you covered!"

😂

Artak

One of the advantages a computer would have in accelerating its own evolution is that it knows how it works and has at least some understanding of how to modify itself. The human brain is still poorly understood, and attempts to modify it in any meaningful way would be met with skepticism by a large number of people and organizations. The ability to self-diagnose problems in its own architecture, hardware and software, would be a major boon to an AI.

brandonkline

As a guy who got his master's degree in artificial intelligence yesterday, I find this topic very fitting.

rfak

What about a biological singularity? If we follow transhumanism and genetically amplified intelligence to their natural conclusion, could we reach the point where someone is so hyperintelligent that we're little more than ants to them? The only sci-fi I know of that came close to this was the '90s movie _Lawnmower Man._ I would be interested to hear your thoughts on the possibility of that.

saladinbob

I could certainly imagine something like that neuron replacement surgery being tested on rats first and showing great improvement, but not showing the downside, since there's not much a single rat can do, only for testing to advance to apes and bring about Planet of the Apes.

TauAlphaVu

11:25

Well, if it turned out magic is real, that would put it squarely in the wheelhouse of "natural science," and the ability to manipulate or harness natural forces to perform tasks, through conceptual or mechanical processes, is a reasonable definition of technology.

wintermute

Short Answer: Yes
Long Answer: Yes, you are in danger

CognitiveGear

The reason to be concerned about the rapid growth of a superintelligence is that, if it were given a general goal, it would develop instrumental aims, like gathering resources, perhaps a lot of resources, to achieve that goal. From its perspective, the use of force could seem justified if achieving its goals required intense resource gathering.
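
As a toy illustration of that instrumental-convergence point (my sketch, with a made-up success model and made-up goals, not anything from the video): a naive one-step planner picks "acquire resources first" for very different terminal goals, because more resources raise its modeled odds of success at anything.

# Toy planner illustrating instrumental convergence (Python).
def success_probability(goal, resources):
    # Made-up model: harder goals need more resources for the same odds.
    difficulty = {"cure_disease": 50, "win_chess": 5, "make_paperclips": 20}
    return resources / (resources + difficulty[goal])

def plan(goal, resources):
    act_now = success_probability(goal, resources)
    gather_first = success_probability(goal, resources * 2)  # assume gathering doubles resources
    return "acquire_resources" if gather_first - act_now > 0.05 else "work_on_goal"

for goal in ("cure_disease", "win_chess", "make_paperclips"):
    print(goal, "->", plan(goal, resources=10))
# All three very different goals pick "acquire_resources" first.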

stevengreidinger

"very few animals seem obsessed with making themselves or their descendants smarter". Humans are the only animal able to understand a concept as complex and abstract as intelligence.

Any AI that does understand intelligence will know that more intelligence will help it get what it wants.

donaldhobson

1) I worked at Intel in the 90s. Moore's Law ceased to be specifically about miniaturization years ago; it is now understood more broadly, as the computational power of a computer doubling approximately every 2 years.
2) I've been thinking about, and talking to people about, the singularity since the 90s.
3) Most people referencing Moore's Law in relation to a singularity are not referring to continual miniaturization at all; they are referring to continual increases in available processing power.
4) I haven't seen recent data, but computers do indeed continue to get significantly more powerful every couple of years. Increasing the performance of a single CPU was supplanted by computers with 2, 4, 8, and now many independent cores, effectively multiplying performance many times. Add to that vastly increased bandwidth between the CPUs, memory, storage, etc. Consider the migration from slow storage (spinning disks) to ever-faster memory-based storage. Add the development and ever-increasing computational power of GPUs, which are very good at accelerating certain types of computation well beyond what a CPU can do. And consider that instead of horizontal miniaturization, microchips are now going vertical, adding stacked layers (going 3D, basically) to increase performance.
5) Consider the completely new and very innovative compute cores being created specifically for neural networks, which are rapidly increasing how fast those networks can run and how large and complex they can be.
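
To put the "doubling approximately every 2 years" figure in perspective, the compounding it implies looks like this (a back-of-the-envelope sketch; the rate is the commenter's rule of thumb, not measured data):

# If aggregate compute doubles every 2 years, the multiplier after n years is 2 ** (n / 2).
for years in (10, 20, 30):
    print(f"after {years} years: ~{2 ** (years / 2):,.0f}x the compute")
# after 10 years: ~32x; after 20: ~1,024x; after 30: ~32,768x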

Nothing has slowed down or changed; technological progress toward a singularity continues today just as rapidly as in the past. It's not about one technology or method or another; it's about the overall increase in computation, particularly in relation to AI processing. The capabilities of AI have been increasing extremely rapidly recently, encroaching on many things previously thought impossible for machines. Now AI is even developing impressive creativity, something that was previously lacking. Someday someone is going to take a collection of impressively advanced individual AI subsystems and combine them into something that has general intelligence. That will change everything.

Also consider that an AGI would not be limited to the computing power of a single system. An AGI with access to the Internet could hack its way into other systems and use those additional resources to increase its capacity. The faster and more capable the AGI became, the more systems it would be able to break into and commandeer. A small fraction of the total computing power in the Internet-connected data centers around the world would likely be enough to enable a rather impressive superintelligence.

Me__Myself__and__I

This could have been a proper Halloween episode. The description, tone, and delivery were very horror-genre-like. Kudos, Isaac.

jeromeorji

1. The counterpoint to this is GPT-3. In short, it was an experiment to see how far brute force can push AI, and the answer was: much further than anyone expected. It's the first real-world AI that actually scared a lot of people, and now there are many derivatives that are far more powerful than anyone could have expected just a few years ago. Based on this, it's entirely possible that in the effort of creating a just-smart-enough AI, we accidentally create one that's significantly smarter than us. And since we don't expect it, we won't use any safety measures. Past experience with AI shows time and time again that surpassing human level is not hard; in most tasks AI flies past that level, it's not even a speed bump. Another example is AlphaZero: shortly after AlphaGo soundly beat the human world champion, the next iteration obliterated AlphaGo.

2. An AI might not care about its own existence. By default an AI cares about the goal we gave it, and nothing else. Self-preservation is a "convergent instrumental goal," meaning it's useful for reaching almost any possible goal, but after the AI has reached its goal it won't care anymore (see the toy sketch after point 6). So if we tell it to help us design a smarter AI, it won't be concerned about becoming obsolete and getting turned off. And we will definitely use AI to design smarter AI, which just makes it much more likely that we accidentally overshoot.

3. As for recklessness, the very first thing we tell every AI is to read the entire internet. Also AI trades for us on the stock market, AI decides what content we consume, AI influences our political views, AI influences our business decisions, and so on. There's no need for a robot uprising, we voluntarily give them the keys. AI may completely take over the world and we won't even notice until it's way too late.

4. AI doesn't have to be malicious to destroy us. It can do it in full confidence, thinking it's doing what we want. In that case it won't be afraid of being found out, and we won't try to stop it, as it will seem to be doing exactly what we asked for. By the time we figure out that something is wrong, it could be way too late.

5. And of course AI could be used by bad people to do bad things.

6. As for the singleton issue: with every technology, R&D cost grows exponentially. AI is already moving from small research labs to large companies; for example, very few can match the resources Tesla is throwing at self-driving cars. Usually a technology ends up in the hands of only 2-3 giant corporations, and sometimes one is able to get significantly ahead. So it's absolutely possible that there will be a single AI that's far smarter than anything else on the planet. Even more so if it's a military project with unlimited resources, like the Manhattan Project.
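
Here is that toy sketch for point 2 (my illustration; the agent and numbers are made up): self-preservation priced purely as an instrumental value, which evaporates once the terminal goal is complete.

# Toy agent: resists shutdown only while its terminal goal is unfinished.
class ToyAgent:
    def __init__(self, goal_steps):
        self.remaining = goal_steps  # work left on the terminal goal

    def work(self):
        if self.remaining > 0:
            self.remaining -= 1

    def accepts_shutdown(self):
        # Staying on is only valuable as a means to finish the job.
        return self.remaining == 0

agent = ToyAgent(goal_steps=3)
for step in range(5):
    print(f"step {step}: accepts shutdown? {agent.accepts_shutdown()}")
    agent.work()
# Prints False for steps 0-2, then True once the goal is done.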

andrasbiro

A thought on self-improvement: unlike AI, humans don't really have a detailed description of our programming and hardware, and we cannot significantly change our brain by adding more processors or memory or changing the setup.

I'm sure that if you gave a neurologist a way to understand and change any part of his own brain, he could make himself smarter almost instantly. And after that, he could continue to experiment by gradually making changes to his brain and observing the results.
Now think of a human-level AI, with processors working 1,000,000x faster than neurons and the ability to swap out parts within a few minutes, and a rapid increase in intelligence seems very likely.
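
Taking the 1,000,000x figure at face value (the commenter's assumption, not an established number), the subjective-time arithmetic works out like this:

# At a 1,000,000x speedup, subjective time compresses dramatically.
speedup = 1_000_000
seconds_per_year = 365.25 * 24 * 3600  # ~31.6 million seconds
print(f"one subjective year passes every {seconds_per_year / speedup:.1f} wall-clock seconds")
print(f"one wall-clock day holds ~{speedup / 365.25:,.0f} subjective years of thought")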

Edit: Also, the comparison with Einstein doesn't really work. If Einstein could read the sum of all research ever done within a few hours, he would likely have had major impacts on all fields that are not too heavy on experimentation.

mylex

Isaac Arthur: "You can't just slap new hardware onto complex existing architectures any more than you can rip an engine out of a fighter jet and stick it in your lawnmower with a few tweaks and think it would just mow the grass faster now"

Engineers: "Is that a challenge?"

notgonnabetelling