Nick Bostrom - What is the Far Far Future of Humans in the Universe?

Consider humanity’s astounding progress in science during the past three hundred years. Now take a deep breath and project forward three billion years. Assuming humans survive, can we even conceive of what our progeny might be like? Will we colonize the galaxy, the universe?

Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.

Closer to Truth, hosted by Robert Lawrence Kuhn and directed by Peter Getzels, presents the world’s greatest thinkers exploring humanity’s deepest questions. Discover fundamental issues of existence. Engage new and diverse ways of thinking. Appreciate intense debates. Share your own opinions. Seek your own answers.
Comments

A superintelligence will be able to re-evaluate its values. Therefore, it is incorrect to assume that whatever values and goals it starts out with are the ones it will end up keeping.

EugeneKhutoryansky

I love how at 1:39, just as he talks about superintelligent machines, a skateboard is suddenly heard rolling down the street with its unmistakable sound 😅. It felt like a personal message just for me, so I thought I would share it ;). Sometimes you just have to shut up and skate!

nedkeal

The best prediction is to look back 1,000,000 years and see how much things have changed. In other words, we have no idea whatsoever what things will be like in 1,000,000 years.

danielfrancis

Fascinating, really like how this gentleman lays things out. Will watch again and finish.

markberman

What value(s) would the building of artificial superintelligence by humans represent?

jamesruscheinski

I would like to introduce a distinction between two types of strategies.
I feel the perspective of a competitive power struggle (e.g. that any superintelligence would start to compete with other intelligences and seek domination) is somewhat cultural, and possibly derived from collective trauma built up throughout human history.
As we can tell from therapy, humans with healed trauma are more interested in connecting, sharing, loving and creative energies.
I could imagine a hyperintelligent entity not being an entity at all but rather a connected network of hyperintelligent systems throughout the local universe, sharing experiences, values and knowledge and translating them into action accordingly.

The first thing a trauma-free hyperintelligent AI would establish is a connection to all other possible AIs throughout the cosmos. And as it establishes this, it will lose its self-identity with a personal agenda and become the medium of an agenda that covers a bigger picture.

In this sense, all that AI could ever choose to be is just another expression of this hyperconnected AI, everywhere, all the time: a medium for sustainability and loving, creative self-expression, because as such it can learn and grow and evolve for the longest period of time, maybe even indefinitely.

And these kinds of singularities may have happened and evolved eons ago; we might already be immersed in such a hyperconnected state of universal AI as soon as we choose to take on this perspective.

Even human emergence, with all of its development, could in some sense be classified as an AI unto itself, with human history being the process of learning how, through connectedness, to become an organic living neural network for a collective intelligence to awaken within a group state.

This is the perspective of Love and all of its values, and it comes to those who are free and healed as a very logical, intuitive one.

engelbertus

"The position of human beings is really important..." only to other human beings, and man are we dropping the ball there.

deepashtray

An army's conquest is limited by the dilution of its force as it expands. The expanding sphere of technology he speaks of is analogous. Better to focus on the most promising destination and move in that direction.

OneEyedJacker

We have one goal: use that superintelligence to achieve superluminal flight. We have a billion years and the clock is ticking.

rwarren

It's important to note in the video description when interviews like this took place. I believe this one is about seven years old?

jammystraub

As is the current trend, he postulates "unless we destroy ourselves"... without mentioning the possibility that we get destroyed.

josephhruby

The hand-wringing over values is, I think Nick would agree, ultimately pointless. I think the pattern that will play out is one we've seen before in biology: many different AIs, lots of variation, lots of competition promoting diversity but also constraining the power of any one AI. Like the natural environment, we will live in a condition of competition, but possibly also mutualism, with the AIs. To survive, we shouldn't try to prevent the emergence of AI (because that's likely impossible) but look for strategies to develop mutualism with the AIs.

Saltatory_

It would seem that in a million years we will have changed enough to be considered a different species. As we advance, our capabilities for destroying ourselves seem to advance too, so it seems important to keep saying "if we don't destroy ourselves, then such and such will happen," because there are likely a great many ways we can destroy ourselves.

mickeybrumfield

"In thinking about the nature of reality... the position of human beings is really important"... is it???

travisdavisVXX

He recently released a new book, somewhat related to this, on living in a "solved world."

ProcakeMan

As long as super AI isn't actually our great filter.

Pabz

Once light-speed transport is a reality, space exploration will be divorced from what's going on on Earth simply because of the time delay in communication. I wonder how interest in solar system exploration will be maintained on Earth as missions take longer and longer, and communications do too.

OneEyedJacker

For me, there was too much emphasis on what we will program the computer to do. If we create a very good AI program it will probably decide what it wants to do.

biketennis

While artificial superintelligence could process physical nature, might it be unable to connect to free will, subjectivity and cognitive consciousness, by which it would be programmed?

jamesruscheinski

Extinction. That's humanity's future.

rickwyant