Is the Intelligence-Explosion Near? A Reality Check.

Comments

Artificial Intelligence is nothing compared to Natural Stupidity

lokop-bqov

In 1997, I was working at a university.
A faculty member gave me an assignment: write a program that can negotiate as well as a human.
"The test subjects shouldn't be able to tell if it's a machine or a human."
Apparently, she had never heard of the Turing Test.
When we told her how difficult the task was, she confidently replied, "I'll give you two more weeks."
The point?
There are far too many people with advanced degrees but no common sense making predictions about something never seen before.

pirobotbeta

Do you all remember, before the internet, when people thought the cause of stupidity was lack of access to information? Yeah. It wasn't that.

framwork

In this post-truth era, what people are searching for isn't truth but comfort. They want someone to tell them what the answer is, regardless of whether that answer is true.

There is a lot of uncertainty right now about the future, and that is the cause of all this anxiety. It's so much easier to just point at an algorithm and listen to it. That way, no one is responsible when it's wrong; it's the algorithm's fault.

AI is trained, at the end of the day, on how humans understand the world. Its limits, therefore, will be human. Garbage in, garbage out. A lot of engineers these days seem to think that basic axiom no longer holds, because these language models are confident in their answers. Confident does not mean correct.

RigelOrionBeta

There's another issue, with language models anyway. The training data already includes virtually 100% of all text written by humans, including the internet. But the internet is now flooded with AI-generated text, so you can't use it anymore; that would be the AI version of the Habsburg royal lineage.

michaelbuckers

All I'm saying is that if you need 10 nuclear reactors to run artificial general intelligence while a human only needs a cheese sandwich, I believe we win this round.

hmmmblyat

I love how Sabine is deadpan serious throughout most videos and yet she can still make you laugh with unexpected jokes.

pablovirus

Selling shovels has always been the best way to make money in a gold rush.

calmhorizons

If I can ask Google Home why I went to the kitchen, I'm on board!

msromike

I think there are a couple of problems here that you don't point out.

The biggest one is that we don't have a rigorous definition of what the end result is. Saying "Artificial General Intelligence" without a strong definition of what you actually mean doesn't mean anything at all, since you can easily move the goalposts in either direction and we can expect people to do exactly that.

Another is that current neural networks are inefficient learners and learn a very inefficient representation of their data. We are rapidly reaching a point of diminishing returns in that area, and without some fundamental breakthroughs, neural networks as currently modeled won't get us there. Wherever "there" ends up.

There also seem to be some blind spots in current AI research. There are large missing pieces to the puzzle that we don't yet have and that people who should know better are all too willing to handwave away. One example: there are complex behaviors in the animal world (honeybee dances are a good one) that would be very hard to replicate using neural networks all by themselves. What that other piece is remains unspecified.

davidbonn

I like to compare this to game development. Imagine someone saying in 2002 that, because we managed to double the number of polygons we can render every 2 years, photorealistic games are 10 years away. 23 years later, it turns out that making photorealistic games is a very difficult problem that requires solving lots of sub-problems, some easy and some super hard. Today we can render lots of polygons and calculate realistic lighting, but destructible environments are not solved, and realistic real-time water simulation is far away. And we know that rendering lots of polygons is not enough: animations and shadows, especially from large objects, are hard problems.

vhyjbdfyhvjybv

“I can’t see no end!”
Said the man who earned money from seeing no end.

😅😅😅 That’s gold, Sabine!

k.vn.k

I am from Australia and I totally agree with you. Australia is one of the biggest users of AI in mining, but a lot of people don't understand why. If you read through the comments about driverless trucks and trains in Australia, people have no idea just how remote, humid and hot the northern parts of Australia are.

People working in iron ore mining in Australia are just hours away from being seriously dehydrated or dead. For iron ore mining to be carried out at its current scale, something better was needed than the modern human, who is not able to work outside an air-conditioned environment in the remote northern parts of Australia. Therefore, mining companies had to come up with something that could work in a hostile environment. My understanding is that AI in mining has not reduced the number of people, just moved them to an air-conditioned building in a city.

anthonyj

You read a 165-page essay, even though you knew the contents inside would be dubious at best. Sabine is heroic.

jeremiahlowe

Exponential curves usually stop being exponential pretty fast. The surprising success of Moore’s law makes IT people think that’s normal, which it isn’t.

Stadtpark

Never underestimate the capability and resourcefulness of corporate greed, especially when it's a collective effort.

zigcorvetti

"I can't see no end" says anyone in the first half of the S-curve

bulatker

The Kurzweil prediction is for 2029 not 2020, right?

MrFunctin

It is surprisingly hard, when one tries to converse with pure software engineers, to get them to accept that the laws of physics apply to them and cannot be bypassed by sufficiently clever coding. You get the same sort of thing from genetic engineers, who simply won't accept that endless fiddling with a plant's DNA will not compensate for the absence of moisture or other nutrients in the soil or other growing medium.

matthewspencer

Sabine, I worked on AI at Stanford. There are two areas where people have misconceptions.

1) We do not need new power to get to AGI. Large power sources are only needed if the masses are using AI. A single AI entity can operate on much less power than a typical solar field. It does not need to serve millions of people. It only needs to exceed our intelligence and become good at improving itself. It can serve a single small team that directs it at solving specific problems that change the world. One of the early areas of focus is having it redesign itself to use less power and encode information more efficiently.

2) No new data is needed. This fallacy assumes that the only way to AGI is continuing to obtain new information to feed LLMs. All of the essence of human knowledge is already captured. AI only needs to understand and encode that existing knowledge more efficiently. LLMs are not the future, they are a brief stepping stone.

billcollins