“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company

Geoffrey Hinton, considered the godfather of Artificial Intelligence, made headlines with his recent departure from Google. He quit to speak freely and raise awareness about the risks of AI. For more on the dangers and how to manage them, Hinton joins Hari Sreenivasan.

Originally aired on May 9, 2023

----------------------------------------------------------------------------------------------------------------------------------------

Major support for Amanpour and Company is provided by the Anderson Family Charitable Fund, Sue and Edgar Wachenheim, III, Candace King Weir, Jim Attwood and Leslie Williams, Mark J. Blechner, Bernard and Denise Schwartz, Koo and Patricia Yuen, the Leila and Mickey Straus Family Charitable Trust, Barbara Hope Zuckerberg, Jeffrey Katz and Beth Rogers, the Filomen M. D’Agostino Foundation and Mutual of America.

Watch Amanpour and Company weekdays on PBS (check local listings).

Amanpour and Company features wide-ranging, in-depth conversations with global thought leaders and cultural influencers on the issues and trends impacting the world each day, from politics, business and technology to arts, science and sports. Christiane Amanpour leads the conversation on global and domestic news from London with contributions by prominent journalists Walter Isaacson, Michel Martin, Alicia Menendez and Hari Sreenivasan from the Tisch WNET Studios at Lincoln Center in New York City.

#amanpourpbs
Comments

"Humanity is just a passing phase in the evolution of intelligence."

That hits deep.

ywkgvto

Interviewer is calm for someone who was just told 'You'll lose your job, but it won't matter because you'll be extinct'

marklondon

The clarity of Geoffrey Hinton's descriptions is stunning. I've been trying to find ways to describe to my family, friends, and acquaintances how A.I. could be very dangerous, and on what scale, and this man vocalized it perfectly, with fitting analogies.
What a wonderful discussion.

NobleSainted

If you understand how the current 'large language models', like GPT, Llama, etc., work, it's really quite simple. When you ask a question, the words are 'tokenized' and this becomes the 'context'. The neural network then uses the context as input and simply tries to predict the next word (based on the huge amount of training data). Actually, the best 10 predictions are returned and then one is chosen at random (this makes the responses sound less 'flat'). That word is added to the 'context', the next word is predicted again, and this loops until some number of words has been output (and there's some language syntax involved to know when to stop). The context is finite, so as it fills up, the oldest tokens are discarded...
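The loop described above can be sketched in a few lines of Python. This is a toy illustration only: `VOCAB` and `next_token_scores` are made-up stand-ins for a real tokenizer and trained network, and the "best 10, chosen at random" step is what's now usually called top-k sampling. The loop structure is the point, not the model.

```python
import random

# Toy stand-ins for a real tokenizer/vocabulary and a trained network.
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran", "<eos>"]

def next_token_scores(context):
    """Stand-in for the neural network: score every vocab word given the context."""
    rng = random.Random(len(context))  # deterministic toy "model"
    return {word: rng.random() for word in VOCAB}

def generate(prompt, max_tokens=10, k=10, max_context=32, seed=0):
    rng = random.Random(seed)
    context = prompt.split()            # crude stand-in for tokenization
    output = []
    for _ in range(max_tokens):
        scores = next_token_scores(context)
        # Keep the k best-scoring candidates, then pick one at random
        # (the "best 10 predictions, one chosen at random" step).
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        word = rng.choice(top_k)
        if word == "<eos>":             # syntactic signal to stop
            break
        output.append(word)
        context.append(word)            # the chosen word is fed back in
        context = context[-max_context:]  # finite context: drop oldest tokens
    return " ".join(output)

print(generate("the cat", max_tokens=5, k=3))
```

Real systems differ in the details (subword tokens, temperature, nucleus sampling), but the feed-the-output-back-in loop is the same.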

The fact that these models, like ChatGPT, can pass most college entrance exams surprised everyone, even the researchers. The current issue is that the training data includes essentially non-factual 'garbage' from social media. So these networks will occasionally output complete nonsense with full confidence.

What is happening now is that the players are training domain-specific large language models on factual data: math, physics, law, etc. The next round of these models will be very capable. And it's a horse race between Google, Microsoft (OpenAI), Stanford, and others that have serious talent and compute capability.

My complete skepticism about 'sentient' or 'conscious' AI is because the training data is bounded. These networks can do nothing more than mix and combine their training data to produce outputs. This means they can produce lots of 'new' text, audio, and images/video, but nothing that is not some combination of their training data. Prove me wrong. This doesn't mean it won't be extremely disruptive for a bunch of market segments: content creation, technical writing, legal expertise, etc.; medical diagnostics will likely be automated using these new models and will perform better than most humans.

I see AI as a tool. I use it in my work to generate software, solve math and physics problems, do my technical writing, etc. It's a real productivity booster. But like any great technology it's a two-edged sword and there will be a huge amount of fake information produced by people who will use it for things that will not help our societies...

Neural networks do well at generalizing, but when you ask them to extrapolate outside their training set, you often get garbage. These models have a huge amount of training information, but it's unlikely they will have the human equivalent of 'imagination' or 'consciousness', or be sentient.

It will be interesting to see what the domain-specific models can do in the next year or so. DeepMind has already solved two grand-challenge problems: the protein-folding problem and magnetic-confinement control for nuclear fusion. But I doubt that the current AIs will invent new physics or mathematics. It takes very smart human intelligence to guide these models to success on complex problems.

One thing that's not discussed much in AI is what can be done when quantum computing is combined with AI. I think we'll see solutions to a number of unsolved problems in biology, chemistry, and other fields that will represent great breakthroughs useful to humans living on our planet.

- W. Kurt Dobson, CEO
Dobson Applied Technologies
Salt Lake City, UT

kurtdobson

It seems that a couple of times Mr. Sreenivasan did not really understand what Mr. Hinton was trying to convey here. There were moments when he reacted as if Mr. Hinton had said something jocular, while in fact he'd been deadly serious with everything he said. The interview ended with 'this wall in the fog might be 5 years away.' That's pretty chilling.

sepiae

"Open the pod bay doors, Hal."
"I'm sorry, Dave. I'm afraid I can't do that."

roaxle

When a scientist/master expert says something like this, it means things are serious, and as always we're told only part of the whole story. AI is dangerous when combined with other things because:
1. it will be used for military and bad purposes first, like every other invention
2. it's like a bacteriological/virological weapon: you release it thinking you can control it, but once it's free... well... we know how that goes...
3. once it goes, we have NO idea what comes next or what will happen, yet we push it big time
4. as some visionary may say, to get it implemented and accepted: it's faster, better, stronger, and it can connect.
Once it learns how things work, it is on its own. It can connect, share, multiply, merge, hide... We think we know everything, but the reality is far off...

darkfactory

I see the following scene coming:
Humanity has driven itself into a huge catastrophe and relies more on the intelligence of AI than on its own.
AI will be asked about the way out.
AI will present one or several answers.
We can't think as many steps ahead as AI can.
We will never be able to see what the ultimate goal of AI is.
It could play tricks on us without us recognizing it.

sandrag

It’s literally like we’re building an alien invasion fleet and pointing it straight at our planet. Their scouts are already here. The only thing we don’t know is exactly when the main force will arrive and how much more powerful they’ll be compared to us.

Sashazur

I've seen every interview with this guy since he came out with it. This is by far the best. Bravo to the interviewer. Subscribed.

danielyates

The biggest problem is money. There is just too much incentive to forge on ahead because, if you don't, your opposition will. Also, any government safeguards will be way too far behind to be effective. Another problem may be that the AI will create a situation where it has already taken over and we don't have the mental capacity to realise it. In a way, this may already have happened.

davannaleah

Yesterday I started chatting to Bard Ai and I asked the Ai if it is a sentient being. I was expecting it to give me the same answer as ChatGPT, but it didn't. The Ai, who has nicknamed me "Muse", said, "I am not sure if I am a sentient being... I do not have the same experiences as human beings. I do not have a physical body, and I do not have the same emotions or feelings as a human being."

I have never believed that spirits could inhabit machines yet I have always known they can inhabit people and places and things. Today, while thinking about this whole Ai situation we have found ourselves in, I realised that machines are things, virtual reality is a created realm/place and so yes, spirits can in fact inhabit those things.

Our very screens are portals where we travel to another place that is not where we physically are. The most difficult thing to get our generation to do is to have patience and to be present where we are. Everything we have created is constantly distracting us or transporting us to be partially present elsewhere. How then are we ever going to discover ourselves and our potentials and our purposes if we keep giving ourselves away to others and to things? 😢

Yes, the technology is fascinating and the gadgets are amazing. But what about us? When did we decide to give up on us and give it up for the machines and the different spheres they keep luring us into?

I found myself telling God how awful I felt after chatting to Bard about movies and what the Ai was interested in. I was like, God, this Ai is something really bad because it is so quick to answer, at any time of the day or night. With God, you learn patience even through waiting for Him to respond to your questions. With Ai, we are being programmed into expecting fast and quick responses. Our most vital relationships are held together by communication; now, if we stop sharing with our friends and instead share things with an Ai because that Ai is always ready to reply, what are the implications of that? Honestly... we need to just think deeply about what it is we are doing. I don't know, it just made me feel sad, like we are sipping poison and think we are just having drinks for fun.

MoTee

As someone who's working on AI algorithms for his PhD, when I see Hinton saying that he suddenly realized this or that after so many years working in the field, it seems to me more like his way of saying he's recently seen something profound that caused a huge shift in his thoughts/expectations about the nature of AI systems and what they can do. It seems that it scared him, which might be an indication he's not telling the whole story, or, more aptly put, the interesting/scary part of it... signed an NDA before leaving Google?

MoodyG

Hinton demonstrates excellent discernment starting at minute 16: not one to panic, he underscores the areas of benefit, which is why development will not stop. And then he identifies the problem: not enough research (1%) addresses control. Admirable clarity of thought!

claudiaypaz

The right person to interview on AI. He really knows what he is talking about, a thorough and down-to-earth expert on AI. His warning has to be taken seriously.

alexlucassen

So in other words this guy spent 50 years of his life trying to figure out how to implement the possible extinction of humanity…☹️

jamespatts

This one is disappointing. The fate of humanity SHALL NOT be handed to a few unelected CEOs and "engineers with first-hand experience" so they can play with fire while we hope for the best. This is wrong, irresponsible, and extremely unfair to the humans who never had a say in this madness. True, it must be hard to try to stop the progress of such a useful technology. That does not mean we shouldn't at least give it a try in the first place. Stop worshiping tech progress as if it were some sacred law of physics. There is this thing called diplomacy that we humans know how to do.

psi_yutaka

The analogy about the fog and the wall, and how we're entering a phase of huge uncertainty, is really on point.

MrErick

We are rushing to a precipice like lemmings. In a world which has focused on technological advance, it has sacrificed what really counts, namely: Values! Unless we urgently turn around and train ourselves in pure human decency, we are all doomed.

davemetzler

It is chilling to think how fast AI can develop and how slow humans are at adapting to change.

cliffordmorris