AI Utopia: A Conversation with Nick Bostrom (Episode #385)

Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don’t perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes’s predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics.

Nick Bostrom is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has framed much of the current thinking around humanity’s future (such as the concept of existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, and the unilateralist’s curse). He has been on Foreign Policy’s Top 100 Global Thinkers list twice, and was the youngest person to rank among the top 15 in Prospect’s World Thinkers list. He has an academic background in theoretical physics, AI, computational neuroscience, and philosophy. His most recent book is Deep Utopia: Life and Meaning in a Solved World.

September 30, 2024

Comments

Nothing better than seeing a new Sam Harris video. Happy subscriber here saying keep it up!

daltonryan

The amount of thought AI devs will give to air-gapping will probably match what they invested in copyright...after they ignored every copyright issue imaginable.

florianschmoldt

You should bring guys like Nate Hagens or Daniel Schmachtenberger to your show

Mashbass

Rational Animations has the coolest explanation ever of how an AI can become a threat.
It's called 'That Alien Message'.
It's subtle; you won't even know what you're looking at until the video approaches the end and you go: "Holy F***ing S**t!!! 😲"

(Sorry for spoiling it 😆)

andreisopon

I can't help but hear Dr. Strangelove methodology here.

truecapitalist

The truth is, we’re already cyborgs in a way. If we want to maximize humanity’s biological meaning, then we need to build massive preserves and breed wild humans so they can feel all the emotions and challenges and live out their natural biological urges. However, we moved on from that the moment we started to civilize and domesticate ourselves.

This discussion is the latest chapter of a book that probably started being written in the Stone Age. I think the pertinent question at this point is simply how much humanity we are willing to erase in favor of what is here now and what is coming. The future might be so technological and resource-plentiful that we can build entire planets for different eras of human technological development and run concurrent civilizations at their peak, allowing us to compare which ones generate the greatest number of fulfilled humans with meaningful lives.

The question is no longer whether we are going to lose our humanity. The question is, what do we want? We’ve entered the buffet, we’re holding the plate, and we’re questioning whether or not we should be eating. Sorry, there’s no turning back. We have to fill this plate.

To extend the metaphor, we’ve tamed nature and now must create our environment to find purpose. This challenge mirrors the moment in a game of chess when you leave the opening and face endless possibilities in the middle game. It’s here that you must commit to a plan, adapt, and redefine your goals as circumstances change. This is the plight of modern humans in developed, free societies—where limitless freedom often leads to struggles with direction, meaning, and identity.

Collectively, humanity now faces this same challenge. We’re both sculptor and marble, no longer constrained to the instruments of our evolutionary past. Instead, we have the power to craft our own music. To suggest we should limit ourselves to past instruments or stop playing altogether seems truly absurd. The question isn’t about losing our humanity; it’s about choosing how to evolve it.

edit: grammar

orange_blossoms_sunset

Once AI starts building its own AI, the link to control its outcomes is gone, and humans become collateral.

danburlaqu

Here before the next SH vid drops with an Israeli state propagandist, and then they talk about how great things are going.

diego

The misalignment will likely be wider than the gulf between, say, humans and ants.

rodblues

Time to reconsider full episodes on YT. There’s little argument for creating scarcity, much less a community, when you’re interviewing authors on a book tour 🤷🏻‍♂️

sashetasev

This is turning out much more interesting than I expected... Genuinely thoughtful takes.

JD..........

Who is the organized "we" that these conversations keep referring to? The "we will do this correctly when the time comes" statements, etc. There is no "we" running the world.

timoex

As a subscriber, I listened to the entire podcast. It was, in my humble opinion, an exceptional one: Sam Harris at his best, and kudos as well to Nick Bostrom. These are a couple of world-class thinkers, and their synergistic interplay helped make this podcast one of the best. The subject matter was compelling, but the conversation itself was priceless.

davtil

Keynes predicted a 15 hour work week.

In all honesty, if you look at the number of made-up jobs we have right now that add nothing to productivity or efficiency, I really don’t see how you can say he got that wrong. There is so much ‘filler’ going on in the ‘work space’ right now that it’s actually harming us.

vanessa

I can say from my experience working with AI/ML that I don't see much danger in AI becoming sentient or hostile in the "hunting Sarah Connor" sense. The bigger issue is the loss of determinism compared with traditional software development. The models are not human-readable or easy to reverse-engineer, and output can vary from run to run. They are only as good as the training data fed in. If we connect these models to our critical infrastructure, we could see some very unexpected negative effects.
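
To make that run-to-run variance concrete, here is a minimal Python sketch (the logits, action labels, and seed are illustrative assumptions, not anything from the comment or the episode): a hand-written rule returns the same output on every run, while sampling from a model's output distribution does not unless the random seed is pinned.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: turn raw scores into probabilities.
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def deterministic_rule(x):
    # Traditional software: the same input always yields the same output.
    return "allow" if x < 0.5 else "deny"

def sampled_model_output(logits, rng):
    # Generative models typically *sample* from a distribution over
    # outputs, so the same input can yield different outputs per run.
    actions = ["allow", "deny", "escalate"]
    return rng.choice(actions, p=softmax(logits))

logits = np.array([2.0, 1.5, 0.5])  # hypothetical model scores for one input

print(deterministic_rule(0.3))  # "allow", on every single run

for _ in range(3):
    unseeded = np.random.default_rng()             # fresh, unseeded generator
    print(sampled_model_output(logits, unseeded))  # may differ each run

seeded = np.random.default_rng(42)                 # pinned seed
print(sampled_model_output(logits, seeded))        # reproducible for testing
```

Pinning the seed is the standard way to recover reproducibility in tests; production systems usually leave sampling unseeded, which is exactly the variance described above.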

RoyClendaniel

The casual synchronicity of watching a conversation between Bostrom and O'Connor, and a day later this video comes out. Exquisite.

Bentzi_Ganot

A completely free AGI would have no certain limits to its intelligence, but it's also practically useless to us. We don't want to turn the world into an automatic strip mine to study physics, and an AGI could decide to do that for completely dumb reasons, without any better understanding of fundamental physics than we have. Everything that needs to be air-gapped from hardware can be.

Beyond that, the way AI works now for speech is like improved intuition for people; how we generate our own thoughts isn't really more transparent, except in terms of remembering where information came from. We could just expand our toolkit and also our cognitive abilities, through genetic engineering and through machine augmentation, like extending memory or adding extra neurons that are digital, air-gapped or not; in principle there is no obstacle to that.

But that's very sci-fi from a position of okay image, video, text, and voice generation. We are still far away from beating humans at thinking. The number of sentences an AI has to encode into its weights to figure anything out is ridiculous compared to us. Without us, our current AI is only capable of producing random nothingness.

monkerud

I know this sounds weird but you should invite Nick Mullen. Very interesting guy.

davorrudovic

The people against immortality can die. The rest of us can travel the stars.

idealmasters

Surely they will need to learn more about our own consciousness before assessing the risk of spontaneous awareness arising from an artificial framework.

necromorph