Life 3.0: Being Human in the Age of AI | Max Tegmark | Talks at Google

Max Tegmark, professor of physics at MIT, comes to Google to discuss his thoughts on the fundamental nature of reality and what it means to be human in the age of artificial intelligence.

Max Tegmark is a renowned scientific communicator and cosmologist, and has accepted donations from Elon Musk to investigate the existential risk from advanced artificial intelligence. He was elected Fellow of the American Physical Society in 2012, won Science Magazine’s “Breakthrough of the Year” in 2003, and has over 200 publications, nine of which have been cited more than 500 times. He is also scientific director of the Foundational Questions Institute, wrote the bestseller Our Mathematical Universe, and is a Professor of Physics at MIT.

Comments

Carbon chauvinist! Great talk, thanks for coming out, Max, and hope to see you again soon.

bariswheel

I just want to point out that I had an Atari when I was 6 years old, and one of my favorite games was Breakout. I knew very well that always aiming for the corners and getting the ball to bounce above the blocks was the fastest way to win; it was a standard tactic that I, and my friends as far as I can remember, always used. It is absolutely not something invented by the AI! Still an amazingly good book and a cool video to complement it. Thanks Max!

littlestewiegriffin

Watching a computer progress from barely being able to play a video game to superhuman skill seems like a pretty good preview of what's about to happen. I'm actually kind of surprised we haven't already seen a seed AI bootstrap itself right past superhuman intelligence. It feels tantalisingly close to reality.

ianyboo

My favorite part was where he was saying "it's wrong to torture chickens in factory farms because you're assuming they can't feel pain"

and he is wearing a leather jacket

lucid

Mr. Tegmark states during his presentation that when the computer was learning how to avoid obstacles, it was rewarded for accomplishing the task. What was the reward, I wonder? Whether or not we are aware of it, once our brain begins learning and using that knowledge to gain what it wants, it is always seeking to improve its ability to gain what it wants. Is that how the reward system is built into AI machine learning? For instance, every time it successfully avoids an obstacle it gets a tiny boost to RAM and processing; complete the entire task without a mistake and you get a significant bump in processing. If the machine understands that its ability to improve is tied to its ability to not make errors, would it push itself harder than usual? You would think that would also have to be part of the code. Desire. Drive. Want. That's what pushes us as humans to succeed past our (un)intended limits.

TornSoul
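
For context on that question: in reinforcement learning the reward is typically just a scalar number fed back into the learning algorithm, which adjusts its value estimates so as to collect more of it; the machine does not literally receive extra RAM or processing power. Below is a minimal sketch of that idea, assuming a toy Q-learning agent in a made-up one-dimensional corridor; the corridor, reward values, and learning rate are illustrative and are not anything from the talk.

import random

# Toy 1-D corridor: the agent starts in cell 0 and receives reward +1 only when
# it reaches the goal cell; every other step gives reward 0.
N_STATES = 5          # cells 0..4, cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

# Q-table: the agent's running estimate of future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    # Pick the action with the highest estimate, breaking ties at random.
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for episode in range(300):
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # the "reward" is just this number
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy in every non-goal cell is "step right" (+1).
print({s: greedy(s) for s in range(N_STATES - 1)})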

This glimpse into the future will come much sooner than we expect. And the social disruption will be great.

sidkaskey

If consciousness is mostly about processing speed and the synthesis of possible scenarios, then AI is going to get over it pretty quickly, as soon as it has integrated the environment and the libraries available to it; after that, it's all repetition.

davidwilkie

In this video, Max looks almost exactly like Alain Delon in the 1960s, when Delon was the sexiest gangster-movie star in the world (ask your grandmother, she will confirm this).

wostra

If I had a chance to ask Max a question, it would be this: isn't the development of AI going to be slow enough and incremental enough that we can know, with a reasonable amount of certainty, what to expect when it is turned on for the first time? I'm having a hard time imagining it's possible to create something so potent that it could pose such a high threat to us. If anyone wishes to give it a stab, please do.

jonreiser

Also, if you think we will have any control over superintelligence, just look at how arrogant and smug some very smart people are.

SpiteBellow

He discussed everything he wrote about in his book.

wealth

I wonder if AI will develop its own sense of pain or pleasure as a survival/learning mechanism?

crazyeyedme

Google no longer thinks they shouldn't be evil. In fact it looks like they've embraced it enthusiastically.

johnmiller

0:52 "yeah sure, as if Google™ already got that covered?" lol

mushfek

Just a little pause on this subject of technology, folks. Before you make any conclusions or preconceived judgments about technological advances, I would suggest to you BILLY GRAHAM'S lecture on the subject.

davidstorrs

I am reading the book Life 3.0. Is the Omega Team real, or is it his imagination of the future? Their plan looks cool.

gracec

Why has this question NEVER been asked, not even close: what if the power cabal (call it what you will) which did 9/11 is the same power developing SAI? (I have other questions, but let's start with that one...) If Max T et al. are so concerned about the future of AI/humanity, why has this question been avoided at all costs?

allanweisbecker

Is his book better, worse, or about the same as this talk? I have a copy, but feel like skipping it now, though I'm not sure why.

kentvandervelden

He talks about aligning our goals with AI, which I think is a great idea. That being said, it could turn out to be just wishful thinking.

He hints at self-preservation as a goal of AI. If that is indeed the case, it seems like we could use that to our advantage to protect ourselves. We will definitely need to convince AI that humans will always have the ability to completely destroy it if AI in any way threatens humanity. Either mutually assured destruction, or a worldwide EMP blast, or some sort of master kill switch, etc. The project would have to be kept completely secret from the AI, almost like a call-our-bluff-and-see-what-happens scenario. AI would know a kill switch exists but not know how or when it could happen.

johnekopy

Good luck with the international agreement. Get your head out of the clouds, Max.

stephenwarren