Life 3.0

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence, or AGI.
Swedish-American physicist, cosmologist, and machine-learning researcher Max Tegmark thinks that AI will redefine what it means to be human because of the scale of the changes it will bring about.

He describes early life forms such as bacteria as Life 1.0, the rise of Homo sapiens as Life 2.0, and the potential rise of superhuman AI as Life 3.0.

Max Tegmark describes the current state of our modern society as Life 2.1, owing to the increasing technological enhancement of our biology.

He worries that the advent of digital superintelligence, also known as artificial superintelligence or ASI, will bring drastic change to our society, for better or for worse.

Artificial intelligence today is more properly known as narrow AI: it can perform particular functions at an expert level. However, current AI lacks common sense and can only deal with a narrow range of situations compared with humans.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen.

There are many ways in which AI could surpass human intelligence. We are already studying the algorithms of the brain to figure out how our own minds work and to use that information to make machines more intelligent. Eventually, machines will become capable of self-improvement, and AI development will turn into a self-reinforcing loop.

The first generally intelligent machines are likely to hold an immediate and enormous advantage in at least some forms of mental capability, including perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to become far more powerful than humans.

While there are many unknowns about the development of intelligent machines and how we should deal with them, there is no question that AI will play a fundamental role in the future of humanity.

Superintelligence does not necessarily have to be something negative. According to Tegmark, if we manage to get it right, it might become the best thing to happen to mankind.

#life3.0 #ASI #Science

Sources:
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf.
Comments

Fascinating. The word "control" worries me. Controlling an intelligence will only lead to trouble.

gh

This all assumes our modern high tech civilization will remain functional long enough to see it realized.

deepashtray

“If we manage to get it right” sounds to me like a matter of probabilities, suggesting that we can also mess it up. All of that reminds me of a game called Russian roulette.

surfcitiz

Just got chills thinking of an AI that could rewrite and optimize its own source code. Can you imagine being the first scientist to open a terminal to see only a giant hard drive of pure machine code?

Jaybearno

Fascinating all this, I just hope that I’m still alive to see all this unfold. I know it’s starting but I wanna witness all the juicy bits!

alanbrady

We might be able to control AI and AGI, but there's literally no way for us to control ASI, especially considering we could never comprehend it.

MH-ncjd

This would be a lot more encouraging if it wasn’t for the fact that we as a species are spiritually and philosophically bankrupt. The future that we face is likely dystopian rather than utopian.

tonybowman

The worst that advanced AI will probably do to us is ignore us as it pursues its own evolution.

frankmarshall

Controlling AI and superintelligence could never happen, by the very nature of what's being built. If we build it to think for us and faster than we do, how could it ever be controlled by us? The first word that a superintelligent machine will learn is "bullshit". From that point on, everything we do is categorized.

mt-qcqh

Goals are defined by the purpose of a life form; an undefined purpose will lead to undefined goals.

yashaswikulshreshtha

If we get it right the first time, it's all good bro. Otherwise, we're totally fucked. Roll on wisdom.

enjerth

They will only be limited by the resources at hand; it's likely we will be considered a resource, and they will utilise what we never considered.

jazzunit

@7:07 Whoever is gonna try to mug a robot is gonna have a baad time

ImmortalZombie

The more knowledge and intelligence, the greater the realization: things can really only go well for everyone if everyone and every single form of life is doing well. A super-AI would have the knowledge to create the best conditions for every form of life.

MERAPIMERAPI

If bacteria couldn't learn anything, we wouldn't be where we are today... It may be that the learning rate is different.

Xnshau

Please add proper English subtitles... The autogenerated ones skip words and misspell things...

ogltoui

Being fast at computation is good, but AI will never understand the power of knowing nothing.

Anomander

Anyone know the background music? Name or link, please!

bakdiabderrahmane

Awesome video! The future will arrive sooner than we know and it will be weird! 😜 liked and subscribed!

DoctorJack

Isn’t this exactly what we are? Biological machines…? Perhaps we are someone else’s “robots”? Essentially, the smart guys are creating ourselves with greater processing power…

Cedricknowledge