Sam Harris & Steve Jurvetson - The Future of Artificial Intelligence

Sam Harris and Steve Jurvetson discuss the future of artificial intelligence at Tim Draper’s CEO Summit.

Sam Benjamin Harris is an author, philosopher, neuroscientist, blogger, and podcast host.

Stephen T. Jurvetson is an American businessman and venture capitalist. He is a former partner of Draper Fisher Jurvetson.

November 17th, 2017
Comments


“Many of you probably harbor a doubt that minds can be platform independent. There is an assumption working in the background that there may be something magical about computers made of meat.”

“Many people are common sense dualists. They think there is a ghost in the machine. There is something magical that’s giving us, if not intelligence per se, at the very least consciousness. I think those two break apart. I think it is conceivable that we could build superintelligent machines that are not conscious and that is the worst case scenario ethically.”

jurvetson

Wonderful discussion, probably one of my favorites on the topic.

captaingreen

It's great that Jurvetson can think at roughly the same level as Sam Harris and follow his train of thought. Not many interviewers can do that.

citiblocsMaster

At around the 52 minute mark, Sam Harris talks about being a multi-disciplinary omnivore. Fair enough, but his shallow understanding of (and frequent slagging of) the discipline of economics illustrates the problem facing those who would give AI machines constraining/guiding values. Brilliant though he is, I would not want Sam Harris to be the one deciding which orienting values should be built into AI machines when it comes to economic understanding... and given his views, I am equally sure that the only economists he would want to be allowed near the AI code would be those who share his left-leaning viewpoint... and therein lies the problem. The question, as always, comes down to “Who decides?”. See “The Vision of the Anointed” by Thomas Sowell.

petermathieson

Lol, Sam had it out with Moby? I like him even more now.

carbon

The purpose of human life is reasonably to create AGI.

(Purpose may be objective; see Wikipedia/Teleonomy. Not to be confused with theism/nonsense/the teleological argument!)

godbennett

Both the speaker and the host were marvelously eloquent, coherent, and insightful. It's a shame they didn't have another couple of hours to go even deeper.

judgeomega

Can you show me the "Blue Fairy"?

carlton

Gah. Why are most of Sam's recent talks/podcasts audio only? ; /

beastieb

This is some of the deepest thinking I have ever heard. Just beautiful.

bmusic

Excuse me if I sound stupid, but aren't devices like Google Home, Alexa, and smart assistants in general proof that intelligence does not require being conscious? Yes, intelligence is a scale and perhaps once a certain level is reached it requires consciousness, but aren't these home assistants intelligent?

ChoppedBananas

The only bases for value that you both seem to be recognizing here are pleasure/pain sensations, intelligence, and productivity. What about BEING? Does being not have value, even if just for the sake of being? What is it about a person that gives him/her energy or life? What makes Elon Musk different for walking across town to go to a birthday party when he was a child? Is a flower less valuable because it cannot use human tools? Should all the chickens be killed after we stop eating animals?

valeriekneen-teed

...wonders if AI will make stupid mistakes like publishing radio articles to a video channel if it replaces humans. It will be really great to have such a high intelligence behind video channel contributors.

BlissedOut

Very interesting, and I like his take on free will.

Mattstiless

Maybe we can set up a symbiotic relationship like our gut biomes: superintelligent machines need us somehow, and we have some effect on their emotional landscape.

lylen

The main concern I have, is how the seed of moral understanding is defined. Relative morality has no meaning as it is ever-changing. Morality requires some form of an absolute center to gravitate around.

valeriekneen-teed

Can someone please explain to me what credentials Sam Harris possesses to intelligently discuss issues surrounding closed AI models/systems?

HKashaf

11:25 - “Winner takes all” - in a free market?

dosomething

Sam Harris is an exceptionally bright guy, but in this interview he and his interviewer both allowed themselves to fall - yet again in Harris’ case - into the “winner takes all” trap. There are many organizations - some state-sponsored and some profit-oriented - that are racing down this path. Some will get there sooner. Others will get there a bit later. But no contender is going to stop, and the reason is that winning a battle is not the same as winning the war. More important still, all the competitors will be modelling intelligence as they understand it, and they will all be attempting to constrain and direct that intelligence in accordance with the values held dear by their developers. State-sponsored Russian AI will be guided by Russian state values. State-sponsored Chinese AI will be guided by and will try to maximize what is valued by the Chinese state. Ditto any state-sponsored AI, and in a democracy where parties change and the values governing those parties change, so will the values governing their AI machines... and spare a thought for the values that will be built into the AI machines built by jihadist regimes. There won’t be one machine. There will be many. The intelligence of those machines will escalate exponentially, but these super-human intelligences will not be perfect. They will each start off with the differing and flawed value systems of their creators, and the inherent flaws of those value systems will unleash upon the world immensely powerful and deeply flawed gods that will battle among themselves, putting all life on earth in peril.

petermathieson

I like Sam Harris, but some of his thinking seems incomplete.

1:00 We are on the way to a machine-intelligence future, and it doesn't matter whether the change is incremental or exponential. We have had intelligent machines for a long time; there's a degree of intelligence in an abacus or an astrolabe. So maybe the future is a continuation of the past: smart machines. Also, the pace does matter, because the world isn't sitting still. Human IQ is supposedly increasing at about 10% per generation, so if we reach some technological barrier that slows machine intelligence to less than human intelligence, or if we invent some technology to increase human intelligence faster than machine intelligence, then the future might look very different from what many suppose.

4:30 Mind is platform independent. It surprises me that Sam Harris accepts that so uncritically. There's nothing magic about what happens in a human head, and there is nothing magic about what happens in a CPU. But brains and computers are fundamentally different. Humans are bad at computation; our brains are not computational. Brains are a complicated mess of chemicals and structures that somehow give rise (in a non-magical way) to our intelligence. The idea that a thing made out of semiconductors and wires can have an emotion misunderstands what emotions are: they are physical states of a biological system. Take fear: heart racing, adrenaline pumping, muscles tensing, pupils dilating, and all the other things that fear is aren't going to happen in a computer unless you give the computer a heart and hormones. Take all the biological components out of fear and what are you left with? The realization that there is some sort of danger? I'm sure computers can do that; a car's lane-departure system can do that, but that's not the car being afraid.

Besides fear, there is caring in general and motivation in particular. I'm not at all convinced, and I see no evidence, that solely computational systems can care or be motivated. We could build the smartest computer ever, load it with all our knowledge, and even make it capable of changing itself, but once it's plugged in, all it's going to do is stare at us blankly until we give it some set of instructions, an algorithm to follow.

13:00 Consciousness does require something other than computation: it requires a good definition. Consciousness is this vague term humans made up, and now they are all chagrined trying to figure out what it is. Before we can decide whether computers can be conscious or not, we need an operationalizable definition of consciousness. Even explicitly excluding anything supernatural, consciousness may require more than an algorithm.

Computers are smart, and they will get smarter. I believe machine intelligence will change the world, and there is a lot of danger there and a lot of potential for good. But machine intelligence isn't human intelligence; they are different things. We shouldn't anthropomorphize computers. The real danger isn't the machines; it's what stupid, emotional, hormonal people will do with the machines.

myothersoul