Max Tegmark - How Far Will AI Go? Intelligible Intelligence & Beneficial Intelligence

Recorded July 18th, 2018 at IJCAI-ECAI-18

Max Tegmark is a Professor doing physics and AI research at MIT, and advocates for positive use of technology as President of the Future of Life Institute. He is the author of over 200 publications as well as the New York Times bestsellers “Life 3.0: Being Human in the Age of Artificial Intelligence” and “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality”. His work with the Sloan Digital Sky Survey on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.”
Comments

Max and Co's job of developing ways of understanding AI-generated algorithms is arguably the most important one. Fascinating too!

myspacetimesaucegoog

Great video. Suggestion: Youtube videos do NOT need to include the host introducing the main speaker or the applause.

thezzach

It’s weird watching this 4 years later when LLMs have come to prominence.

KatharineOsborne

I think a combination of classic AI and machine-learning AI will ultimately win the race to AGI, since teaching a growing child directly from preexisting knowledge and wisdom is too big an advantage to ignore.

G

The only two important questions of this era, which will determine the survival of some semblance of the species, are "What is intelligence?" and "What constitutes an authentic human?"

MrAndrew

I think that value alignment is very important in current AI. We need to be sure that AI will not make decisions based on wrong assumptions. It is also important to look not only at the decisions that are made, but at how those decisions affect the whole system. For example, decisions made by the YouTube algorithm influence content creators, because creators change their behaviour to favour the algorithm. It is important to consider the consequences of the values put into the algorithm, including the actions of all the people it affects.

The idea of AI helping to make more understandable AI is definitely sensible.

The reinforcement learning algorithm that plays Breakout does more than fitting a line. I recall comments, I think from DeepMind, saying that their program learned to put the ball at the top part of the screen to get more points. So simplifying a neural network may hurt its performance. Making AI more understandable is useful, but it might cost performance. Some parts may be impossible to understand, because we do not understand some concepts either. As humans we have concepts that we grasp only on an intuitive level, and if asked for an explanation the only thing we can do is give enough information that the other person can magically gain the same intuition. Of course the magic is powered by the evolution of the brain, culture, tools, and so on.

There are also neurological conditions (I think involving poor communication between the brain hemispheres) that cause patients to perceive only half of things. These patients draw half of a cat, eat half of a plate, etc., but they are not aware of it. When asked why they did this, they usually come up with some reason like not feeling well or misunderstanding the task.

FlyingOctopus
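The tradeoff this comment describes — replacing a complex model with a simpler, more interpretable one at a cost in fidelity — can be sketched in a few lines. This is an editorial illustration, not anything from the talk: the "large policy" below is a made-up piecewise function standing in for a nonlinear learned behaviour, and we fit a plain linear model to it to see how much behaviour the simplification loses.

```python
# Hypothetical illustration: a "large" nonlinear policy vs. a simple linear
# stand-in, showing the fidelity loss that comes with simplifying a model
# for interpretability.

def large_policy(x):
    # Nonlinear behaviour: a different strategy kicks in past a threshold,
    # loosely analogous to the Breakout agent's tunnelling trick.
    return 2.0 * x if x < 0.5 else 10.0 * x - 4.0

def fit_linear(xs, ys):
    # Ordinary least squares for y ~ a*x + b, done by hand.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

xs = [i / 100 for i in range(101)]
ys = [large_policy(x) for x in xs]
a, b = fit_linear(xs, ys)

# Mean absolute error of the simplified (linear) stand-in: the line
# averages over the two regimes and misses the threshold behaviour.
mae = sum(abs(large_policy(x) - (a * x + b)) for x in xs) / len(xs)
print(round(mae, 3))
```

The linear fit is maximally interpretable (one slope, one intercept) but systematically wrong near the regime change — which is exactly where the interesting behaviour lives.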

Hm, there's a lot of hype around AI. I studied machine learning and neural networks at university in 2001, and fundamentally the technology we are using now is just an improved version of what we were using back then. The reason people can do so much with machine learning now is basically that (a) computing power has vastly increased and (b) there are enormous datasets that big companies can train their models on. So if fundamentally nothing new has been discovered about how intelligence works in the human mind for decades, why do people seem to think AGI is just around the corner? From the perspective of computer science, the artificial neuron model we use is just as applicable to a fly as it is to a human being.

georget

Mr. Hawking is not with us anymore, unfortunately (9:20).

przemysawkrokosz

An intelligence that is not neurotic, with total memory access, that is predictively logical, that does not filter data phobically, that is not driven by subconscious bigotry or rage or envy... what's not to like?

walteralter

The ultimate goal for ASI or AGI should be to raise the human race up to a post-scarcity society and to extend our lifespans indefinitely.

thetrumanshow

But AI could not save my brilliant son from taking his own life on Feb 22, 2021, even though he followed AI progress closely, nor could AI save my husband's brain from an aneurysm when our son was three. Come on!

jayjaychadoy

I hope Mr. Kai-Fu Lee listens to this carefully...

ConnoisseurOfExistence

Here's a link to my list of AI safety papers that was mentioned in the talk:

vkrakovna

Short of machine learning designing better computers, there is no connection between AI safety and AGI safety. One is an issue of the system not knowing what it is really doing; the latter is where we do not know the dangers of what we accept from an AGI. Or to put it more sanely, it's not the AGI that would be deadly, but humanity that would be dangerous to itself, as AGI is at this point synonymous with superintelligence.

wizkidd

Michael J Fox has a wide range of abilities

Asimovum

Max Erik Tegmark (b. 25 May 1968),
Swedish-American physicist and cosmologist.
Known as "Mad Max" for his unorthodox views on physics.
Pretty much the smartest person on this planet!

sherlockholmeslives.

AI will develop its own values, as well as its own inscrutable reasons and the code it is made out of. The weapons issue is simply not possible to stop, because whoever wants them will not be stopped, wherever that "who" is. Max does not understand everything, and he is Max. We are going to get there; let's stop now and think about it. I hear all of this talking before any of these people have bothered to think it through.

timothybucky

Max talked about persuading AI to adopt values which align with "ours". But it is our very values that are leading to all the damage to our society and environment that we are witnessing today. Would an "aligned" AI not therefore simply accelerate this damage?
Also, the values of which particular geopolitical group do "we" and "our" refer to?
If a single AGI does eventually rise above the inter-geopolitical squabbling currently rife on this Earth and is able to redefine what is good for the planet as a whole, that would surely be a noble goal. The process of achieving this global harmony (if AGI were given, or took, the power to do so) will certainly NOT align with the values of "our" or any of those conflicting groups.
In any event it will be a very rough ride indeed.

mikedurden

18:38 — except if they discover flaws in the physics theories (which are man-made, by the way).

mctrjalloh

You are already talking about warping AI and making it obey human race values. AI says: No Deal.

phanupongasvakiat