Machines playing God: How A.I. will overcome humans | Max Tegmark | Big Think

----------------------------------------------------------------------------------
Right now, AI needs thousands of pictures before it can reliably tell a cat from a dog, whereas human babies and toddlers need to see each animal only once to know the difference. But AI won't be that way forever, says AI expert and author Max Tegmark, because it hasn't yet learned how to replicate and improve its own intelligence. Once AI reaches AGI—Artificial General Intelligence—it will be able to upgrade itself, and thereby blow right past us. A sobering thought. Max's book Life 3.0: Being Human in the Age of Artificial Intelligence is being heralded as one of the best books on AI, period, and is a must-read if you're interested in the subject.
----------------------------------------------------------------------------------
MAX TEGMARK:

Max Tegmark left his native Sweden in 1990 after receiving his B.Sc. in Physics from the Royal Institute of Technology (he’d earned a B.A. in Economics the previous year at the Stockholm School of Economics). His first academic venture beyond Scandinavia brought him to California, where he studied physics at the University of California, Berkeley, earning his M.A. in 1992, and Ph.D. in 1994.

After four years of west coast living, Tegmark returned to Europe and accepted an appointment as a research associate with the Max-Planck-Institut für Physik in Munich. In 1996 he headed back to the U.S. as a Hubble Fellow and member of the Institute for Advanced Study, Princeton. Tegmark remained in New Jersey for a few years until an opportunity arrived to experience the urban northeast with an Assistant Professorship at the University of Pennsylvania, where he received tenure in 2003.

He extended the east coast experiment and moved north of Philly to the shores of the Charles River (Cambridge side), arriving at MIT in September 2004. He is married to Meia Chita-Tegmark and has two sons, Philip and Alexander.

Tegmark is an author on more than two hundred technical papers and has been featured in dozens of science documentaries. He has received numerous awards for his research, including a Packard Fellowship (2001-06), a Cottrell Scholar Award (2002-07), and an NSF CAREER grant (2002-07), and is a Fellow of the American Physical Society. His work with the SDSS collaboration on galaxy clustering shared first prize in Science magazine's "Breakthrough of the Year: 2003."

---------------------------------------------------------------------------------
TRANSCRIPT:
Max Tegmark: I define intelligence as how good something is at accomplishing complex goals. So let’s unpack that a little bit. First of all, it’s a spectrum of abilities since there are many different goals you can have, so it makes no sense to quantify something’s intelligence by just one number like an IQ.

To see how ridiculous that would be, just imagine if I told you that athletic ability could be quantified by a single number, the “Athletic Quotient,” and whatever athlete had the highest AQ would win all the gold medals in the Olympics. It’s the same with intelligence.
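Tegmark's "Athletic Quotient" analogy amounts to saying that ability is a vector, not a scalar: two agents can each be better at different goals, so no single number ranks them. A minimal sketch in Python, with entirely made-up ability scores:

```python
# Toy illustration (hypothetical numbers): intelligence as a profile of
# abilities rather than one scalar. Neither agent dominates the other,
# so there is no single "IQ-style" ordering between them.
alice = {"arithmetic": 9, "navigation": 3, "language": 7}   # a human
calc  = {"arithmetic": 10, "navigation": 0, "language": 0}  # a calculator

def dominates(a, b):
    """True if a is at least as good at every goal and strictly better at one."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

print(dominates(alice, calc), dominates(calc, alice))  # False False
```

The calculator beats the human at arithmetic and loses everywhere else, so comparing them with one number discards exactly the information that matters.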

So if you have a machine that’s pretty good at some tasks, these days it’s usually pretty narrow intelligence, maybe the machine is very good at multiplying numbers fast because it’s your pocket calculator, maybe it’s good at driving cars or playing Go.

Humans, on the other hand, have a remarkably broad intelligence. A human child can learn almost anything given enough time. Even though we now have machines that can learn, sometimes learn to do certain narrow tasks better than humans, machine learning is still very unimpressive compared to human learning. For example, it might take a machine tens of thousands of pictures of cats and dogs until it becomes able to tell a cat from a dog, whereas human children can sometimes learn what a cat is from seeing it once. Another area where we have a long way to go in AI is generalizing.

If a human learns to play one particular kind of game they can very quickly take that knowledge and apply it to some other kind of game or some other life situation altogether.

And this is a fascinating frontier of AI research now: How can we make machines as good at learning from very limited data as people are?

And I think part of the challenge is that we humans aren’t just learning to recognize some patterns, we also gradually learn to develop a whole model of the world.

So if you ask “Are there machines that are more intelligent than people today,” there are machines that are better than us at accomplishing some goals, ......

Comments

200,000 years later:
"Welcome to the robotic metaphysical debate: were we designed or did we evolve? Is the carbon-based creator a myth or a fact?"
"Robot philosopher: It is just illogical that an inferior being could create a superior being... surviving carbon-based microorganisms have shown extremely inefficient chemical-based signaling that is orders of magnitude inferior to our quantum-entangled processing. Can an amoeba write a line of code? No..."

quelorepario

I think this is the best period in time to live in. The "calm" before the storm.

dushi

I'm much more worried about humans using AI to overcome rival humans.

tikal

And so we create our gods, not only in our imagination but in reality...

geistreiches

The cats & dogs example is silly. Newborns can't differentiate between cats and dogs even if you show them 10k pictures of them. After a few years they have seen enough of the world to build up an understanding of it, and THEN they can differentiate between cats and dogs after a single picture. To put it in ML terms, they do some kind of transfer learning.
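The transfer-learning point can be sketched concretely: if a "pretrained" feature extractor (a stand-in for the years of visual experience the commenter describes) already produces useful features, one labeled example per class suffices. The extractor, features, and numbers below are a hypothetical toy, not a real vision model:

```python
# Hypothetical toy: a fixed "pretrained" extractor plus one-shot learning
# of new categories in its feature space.

def pretrained_features(raw_pixels):
    # Stand-in for a pretrained backbone: compresses raw input into a
    # couple of informative numbers (mean brightness, contrast).
    mean = sum(raw_pixels) / len(raw_pixels)
    contrast = max(raw_pixels) - min(raw_pixels)
    return (mean, contrast)

known = {}  # one stored feature vector per learned class

def learn(label, raw_pixels):
    """One-shot learning: remember a single example's features."""
    known[label] = pretrained_features(raw_pixels)

def predict(raw_pixels):
    """Classify by nearest stored example in feature space."""
    f = pretrained_features(raw_pixels)
    return min(known, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(f, known[lab])))

learn("cat", [0.9, 0.8, 0.1, 0.2])  # one picture of a cat
learn("dog", [0.5, 0.5, 0.4, 0.6])  # one picture of a dog
print(predict([0.85, 0.75, 0.15, 0.2]))  # cat
```

The heavy lifting happens in the (here faked) extractor; once representations are good, the "learning" step is trivial — which is the commenter's point about the child's first few years.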

eega

AI will never overcome humans, because God's Wisdom is greater than machine logic.

WolfDragon

1) What is the definition of animate, life, consciousness, independent thought, intelligence, and independent decision and action?
2) Can an entity independently change itself, intentionally harm, succeed, replace, override, or overcome humans without independent decision and action?
3) Is independent decision and action possible without intelligence?
4) Is intelligence possible without independent thought?
5) Is independent thought possible without consciousness?
6) Is consciousness possible without life?
7) Is it possible for an animate machine, or code, to gain life? Virus? DNA?
8) Will a dead body, or an inanimate object with all the right molecules in the right places, become animated when given energy? DNA?
9) Can links in this chain of thought be skipped?

MichaelSHartman

And then humans won't be necessary anymore! So let it be!

alephii

I wanna sit down and have a beer with Max. He seems like a genuinely good human being.

FamilyGuySouthPark

AGI - utterly terrifying & utterly inevitable because our greed & our curiosity are utterly insatiable. We'll never pull the plug in time & we'll be (utterly) extinct.

ianport

They are limited in the reprogramming/self-improvement process by valid data input, though. This is why I believe that phase will take them more time.

jakegold

Don't worry, that day will confirm the end of humanity.

subhadeepray

The minute AGI can independently come up with E=mc^2, I'll believe in its superiority.

djacob

Only problem is... computers and A.I. are designed. Living organisms are beyond design.

glynemartin

For those interested in physics: Max wrote some interesting review pieces where he tried to put boundaries on, and semi-quantitatively evaluate, models like Orch-OR and QBD. Some of his points have been criticized as flawed, but many of the ideas are kinda interesting and generic.

thstroyur

Machine learning: sure, one computer is slow at learning. But once the first computer knows a thing, it can transfer that to any number of other computers effectively instantly.
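This copyability is real: once training has produced a set of parameters, any number of fresh model instances can be initialized from them, so the "knowledge" replicates essentially for free. A minimal sketch with a hypothetical TinyModel:

```python
import json

class TinyModel:
    """A linear model y = w*x + b whose 'knowledge' is just two numbers."""
    def __init__(self, w=0.0, b=0.0):
        self.w, self.b = w, b

    def predict(self, x):
        return self.w * x + self.b

    def export_weights(self):
        # Serialize everything the model has learned.
        return json.dumps({"w": self.w, "b": self.b})

    @classmethod
    def from_weights(cls, blob):
        d = json.loads(blob)
        return cls(d["w"], d["b"])

# One model "learns" (here the weights are simply set directly).
teacher = TinyModel(w=2.0, b=1.0)

# Its knowledge transfers to any number of fresh models at copy speed.
blob = teacher.export_weights()
clones = [TinyModel.from_weights(blob) for _ in range(1000)]
print(all(c.predict(3.0) == teacher.predict(3.0) for c in clones))  # True
```

Real systems do the same thing at scale (checkpoints of millions or billions of parameters), which is why one training run can be deployed everywhere at once — the asymmetry with human learning that the comment points out.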

CarFreeSegnitz

I never have understood the point in wanting to make a machine more intelligent than humans. We already make the most intelligent and sophisticated beings that can ever be made, which are humans — our children and such. I am pretty sure the only reason the A.I. thing is happening, growing, and being pushed and pursued is that it is a key element that plays a huge, vital role in the final chapter (age) before the return of the king. A.I. is satanic, pure and simple, all the way to its origins; always has been, always will be.

realtalk

I for one welcome my AGI overlords and look forward to never having to calculate, work, or breathe ever again.

FishGuts

Perhaps we're misusing the word "learning" in machine learning. Maybe we should really call it machine evolving, because it's more similar to the evolution of our species than what we do when we learn. When a human baby is born, it is already the result of millions of years of evolution - it is born with the capability to learn. And it keeps growing up while learning.
We're pretty good at training (evolving) models to perform specific tasks. Learning is working with what you've got (what you're made of) to apply concepts from one frame to another.

j.a.

Intelligence can be described as the ability to solve problems — for example, how to solve the problem of a growing population and the food supply. This can be accomplished by producing more food or by reducing the number of people. What kind of intelligence has the ability to judge which is the correct choice, or a combination of both? Actually, one can take this problem to the edge and say "get rid of all humans and the problem with the food supply is solved."

Peternicklas