Geoffrey Hinton | Will digital intelligence replace biological intelligence?

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00 - 0:07:20 Opening remarks and introduction
0:07:21 - 0:08:43 Overview
0:08:44 - 0:20:08 Two different ways to do computation
0:20:09 - 0:30:11 Do large language models really understand what they are saying?
0:30:12 - 0:49:50 The first neural net language model and how it works
0:49:51 - 0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 - 1:03:18 Does digital intelligence have subjective experience?
1:03:19 - 1:55:36 Q&A
1:55:37 - 1:58:37 Closing remarks

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.
The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
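The knowledge-sharing mechanism the abstract describes can be sketched in a few lines. This is an illustrative toy, not code from the talk (the function and variable names are my own): identical digital copies of a model each compute a weight change on their own dataset, and every copy applies the averaged change, so all copies stay bit-for-bit identical while learning from all the data.

```python
import numpy as np

def average_weight_updates(updates):
    """Average the per-copy weight-change vectors into one shared update."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
shared_weights = np.zeros(4)  # one model, replicated exactly on every copy

# Each copy computes its own weight change from its own dataset
# (random vectors here stand in for real gradient updates).
per_copy_updates = [rng.normal(size=4) for _ in range(3)]

# Every copy applies the same averaged update, so the copies remain identical.
shared_weights += average_weight_updates(per_copy_updates)
```

Averaging works here only because the copies are exact digital clones; as the abstract notes, idiosyncratic analog hardware has no shared weight space to average over.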
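The slower transfer route the abstract mentions, getting a younger computer to mimic the outputs of an older one, is the distillation idea that also appears in the talk around 15:13. A minimal sketch, assuming a standard softened-cross-entropy formulation (the temperature value and all names are illustrative, not taken from the talk):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened outputs and the student's.
    Minimizing it trains the student to mimic the teacher's outputs."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)
    return float(-np.sum(p * np.log(q + 1e-12)))

teacher = [2.0, 1.0, 0.1]

# The loss is smallest when the student reproduces the teacher's outputs.
assert distillation_loss(teacher, [2.0, 1.0, 0.1]) < distillation_loss(teacher, [0.1, 1.0, 2.0])
```

The contrast with the previous mechanism is the point of the abstract: distillation moves knowledge one output at a time, while identical digital copies can merge whole weight updates at once.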

About Geoffrey Hinton

Geoffrey Hinton received his PhD in artificial intelligence from Edinburgh in 1978. After five years as a faculty member at Carnegie Mellon he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an emeritus professor. In 2013, Google acquired Hinton’s neural networks startup, DNNresearch, which developed out of his research at U of T. Subsequently, Hinton was a Vice President and Engineering Fellow at Google until 2023. He is a founder of the Vector Institute for Artificial Intelligence, where he continues to serve as Chief Scientific Adviser.

Hinton was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification. Hinton is among the most widely cited computer scientists in the world.

Hinton is a fellow of the UK Royal Society, the Royal Society of Canada, the Association for the Advancement of Artificial Intelligence, and a foreign member of the US National Academy of Engineering and the American Academy of Arts and Sciences. His awards include the David E. Rumelhart Prize, the IJCAI Award for Research Excellence, the Killam Prize for Engineering, the IEEE Frank Rosenblatt Medal, the NSERC Herzberg Gold Medal, the IEEE James Clerk Maxwell Gold Medal, the NEC C&C Award, the BBVA Award, the Honda Prize, and most notably the ACM A.M. Turing Award.

Comments

One of Professor Hinton's best speeches: his off-campus business speeches are too colloquial and his speeches to computer science students are too specialized, but his speeches to the average University of Toronto student are a perfect blend of both!

JoeSanchec

42:15 it totally clicked for me in this section: LLMs do appear to have understanding because they’re not just encoding a bunch of string predictions, they’re encoding concepts (features) and their relationships… which sounds basically like human learning/understanding.

jjjj

00:09 Dr. Geoff Hinton's unwavering conviction in artificial neural networks for machine learning.
03:01 Dr. Hinton's pioneering work in deep learning revolutionized visual recognition software.
08:26 Digital computation separates hardware from software for immortality.
10:38 Analog computation offers low-power, efficient parallelization.
15:13 Distillation allows transfer of knowledge between digital architectures.
17:33 Digital computation has an efficient way of sharing knowledge among different agents.
21:50 Language models use powerful statistical methods for autocomplete.
23:55 GPT-4 learns through interactions between feature activations of words.
27:56 Memories are reconstructed from stored weights, leading to inaccuracies.
29:58 Progress of chatbots and neural net language models.
34:16 Using relational data to train a neural net for capturing knowledge in family trees.
36:41 Backpropagation algorithm for neural networks.
40:44 Neural net models learn interactions like rules captured from the domain.
42:51 Evolution of language models and transformers in natural language processing.
47:00 Context and interactions determine word meanings.
48:53 Digital intelligence can efficiently accumulate and share knowledge.
53:17 Super-intelligences will seek more power and be adept at manipulating people.
55:15 The rise of super-intelligences poses significant threats and potential worst-case scenarios for humanity.
59:56 The understanding of mental states is crucial for perception.
1:02:03 Digital intelligence can exhibit subjective experiences similar to humans.
1:06:07 Consciousness involves subjective experience and self-awareness.
1:08:38 Artificial intelligences may compete in an evolutionary battle, but human cognition may secure our place as interesting conversational partners.
1:13:41 Digital intelligence could be trained to develop different forms of intelligence.
1:16:04 Digital intelligence poses risks if used irresponsibly.
1:20:22 Evidence suggests LLMs may not truly understand.
1:22:12 Digital intelligence uses compression to understand and encode vast amounts of text.
1:26:16 Researching human brain cells for low power computation.
1:28:27 Open sourcing powerful models may lead to security risks.
1:33:05 Concerns about the impact of superintelligent AI on human society.
1:34:53 Digital intelligence can potentially develop without interacting with the real world.
1:38:40 Playing Scrabble doesn't require speaking French.
1:40:50 Encourage students to get good at using digital intelligence.
1:45:27 Digital intelligence can understand more data and may be better at figuring out how things work.
1:47:29 Scaling up existing techniques can make digital intelligence smarter without the need for fundamental breakthroughs.
1:52:00 Digital intelligence will evolve software engineer roles with fewer individuals needed.
1:54:15 Research on using language to make distillation more efficient.

quickcinemarecap

I appreciate Professor McIlraith informing the audience that their questions would end up being posted online, as the presentation was being filmed.
That is admirably considerate and conscientious. Some people might not want to ask a question if that means they will be on the net.
It should be standard, but I don't remember ever hearing someone do that before - and I listen to a lot of lectures online with audience questions at the end.

penguinista

One of the best lectures by Prof. Geoff Hinton.

aakashnigam

This lecture is gold. He manages to explain complex topics in simple terms without getting overly technical.

gusbakker

This is an insanely good lecture. Congrats to Hinton.

laikaish

Really great talk, and an amazing Q&A session! It was a pleasure to attend.

shalevlifshitz

Brilliant, honest, and a treasure of thoughts 👏🏼

prashantprabhala

A real pleasure to listen to Prof Hinton's talks ... What a brilliant mind ... such an interesting and insightful way to explain the complex in simple terms ... I wish I had attended his lectures when I was in college.

unhandledexception

It's always amazing listening to Geoffrey Hinton.

prodrectifies

This is, as usual for Hinton, excellent. Thanks!

briancase

Before watching this wonderful lecture and Q&A, I gave the transcript to an LLM for a summary and highlights. I got a great, very useful, and interesting summary. I just learned how to do this today and I will use this technique a lot. I always watch a video if Geoffrey is in it, but for a lot of videos I might be satisfied with a summary, especially if that summary doesn't intrigue me.

ctoh

At 66, this is my first response to anything on the internet. You come closer than anyone to what I understand about this. However, I'm not educated traditionally ... you put it together beautifully, and if somebody else has already said this, sorry. Consciousness doesn't matter. What you're saying is that intelligence from neural networks is already smarter than we are in the analog and/or digital. Thank you. It is the nature of things.

albertleedom

29:00 this discussion of confabulations and that the human brain does this too is so helpful in understanding what “hallucinations” are and where they come from

jjjj

Great talk, and an unusually good Q&A!

solomonmatthews

No Daniele, of course you are not the only one who takes delight in Hinton's thoughts, and the beautiful and now-and-then tongue-in-cheek, humorous way he expresses himself.

scarlettkersten

Dr. Hinton sparked my aspirations for AI. I have much to learn, but I will study anything and everything about it.

TerribleDayForRain

Am I the only one to notice how Hinton is delightfully funny, makes no effort to be polite, and just says what he thinks directly? Most other people in the room are insufferably politically correct. That's so depressing.

daniele

That was a really great talk and very informative, and it also shows an evolution of his thinking over time. I remember studying his work back in the '90s when I was at university, and I use it every day at work now, and I'm glad he's taken us all through the AI winter into this new, somewhat scary, world of possibilities.

velvetsound