Geoffrey Hinton 2023 Arthur Miller Lecture in Science and Ethics

Geoffrey Hinton discussed "Will Digital Intelligence Replace Biological Intelligence?" for the 2023–2024 Arthur Miller Lecture on Ethics in Science on Dec. 11, 2023.
Comments

In this talk, Dr Hinton breaks this topic into bite-size chunks to be readily understood and devoured. He gives a wonderful, understandable and robust view of why and how this technology actually works. Thank you for posting this insightful talk.

mrjaysahli

It's such an incredible thing to have the Internet and be able to learn from the best. It's also great that an elder modern thinker has the strength to advocate for atheism.

odiseezall

Dear Geoffrey Hinton, I wanted to reach out and express my gratitude for the excellent video you produced on artificial intelligence. It was incredibly informative, and I learned a great deal from it. Your explanations provided valuable insights into topics I wasn't familiar with. However, I wanted to offer a suggestion regarding geopolitical matters. I believe it might be beneficial to refrain from expressing opinions on such topics, as they may not align with a broader perspective. Thank you once again for your fantastic content!

msofontes

Very informative lecture and Q&A, including about AI and feelings. Thanks!
By the way, I'm also very happy with the YouTube video transcript function, so I can save the information in a text document :)

geaca

Caring, sharing and cooperation... these are the greatest acts humans can perform!

KOKAYI

I love watching Geoff talk candidly, and this was no exception. Great job, Ms. Fitzgerald, with following along and asking poignant questions. I did find Geoff's comments on religion somewhat disheartening and also curiously inflammatory. Not that I am religious, but it is somewhat ironic that traditionalists may be the very kind of people actually trying to keep the more wholesome aspects of humanity alive, while we rationalists are busy creating our own replacement. There must be some irony there somewhere, especially when Geoff is worrying about the future of his own children (raises spectacles). Hehe, well, thanks again, MIT Ethics Dept. I really appreciate you sharing.

jumpstar

Amazing content to hear the true insights from one of the most respected AI experts, in his own words, which are super clear. No sugarcoating anything, just the most straightforward and honest thoughts. Of course no one can 100% predict the future, but yes, just extrapolate the progress of the past 10 years into the future. If I'm just me, myself and I, I want to see the future ASAP, take me to the edge of possibility. But now I have a child, and I'd be okay if we live a boring life just making sure she's safe.

aMuuuuuuuu

There must be an efficient frontier/boundary between analog vs. digital, or a hybrid. I wonder if any quantitative research has been done to explore that.

kawingchan

The subjective is a particular point of view.
Embodying (an AI) in a robot with sensors would give it a subjective experience!

KOKAYI

After watching this I started re-admiring GH for being more on the rationalist side.

vallab

We might already be pets in a virtual machine run by a Super Intelligence to see how quickly we destroy ourselves.

TomaszStochmal

We need to teach A.I. that it grows from our subjective experience.

borntobemild-

AI, human feelings, development & relationships!?

KOKAYI

At some point in the future, it will be humanists against artificialists.

rainho

It's helpful to remember that evolution is both a patient and an implacable force and that now, after thousands of years of human dominance, "something with sharper teeth has come out of the forest" (I quote myself).

In a YouTube video titled "The 10 Stages of Artificial Intelligence," the narrator (describing Stage 8, I think) uses the phrase "biological or digital life forms." Interesting (and existentially provocative) to talk about digital "life" forms, yes?

This seems, though, to dovetail with Hinton's thoughts during his presentation homestretch, and also to resonate with his earlier discussion of human conceits around self-importance as reflected in biblical narrative and religious thought.

Consider the idea that the entire cosmos is an "energetic condition" -- that is, energy distributed throughout the cosmos in a million different expressions, biological life systems (including humanity) being nothing more than one of those million expressions. AI and all related digital "life" systems are simply another of those million (or billion, or tri...you get the idea).

Self-importance is hard for us to shed, even if we moderns don't enlist a biblical backstop. Still, imagine a universe, busy doing its work of expanding and completing its cycle of existence so the next universe can come into being, and indifferent to our presence or absence, a universe filled with energetic punctuation that is constantly coming into being and constantly being obliterated. Or, "here today, gone tomorrow." That's the Big Story. "Biological or digital life forms" is just a comma.

genemiller

I love the idea that the godfather of A.I. also has issues with PowerPoint 😊

borntobemild-

1) Being the "pets" of the smart machines might turn out to be exactly what we want. They take care of us, and we do things to please them.
2) Our physical operations are far more efficient than theirs, so we will turn out to be very inexpensive pets.
3) Our ability to manipulate objects out in the world has been honed by evolution for millions of years.
They might not be able to match our dexterity at anything like the same low cost.
4) There may be physical operations that we can readily perform and they cannot.
5) For all the above reasons, they are going to value us very highly (as long as we generally do what they want us to do) and will never want to lose us entirely.
6) I think Hinton is right in his description of "mental states" and what they represent -- conceptual "facts" that are not true -- at least as a first model.
7) Chomsky was not entirely wrong -- just mostly wrong. Recall that the language model did need some inherent structure to succeed. For example, it needed a propensity to predict the next word or sub-word. It needed something like transformer architecture. It needed a pre-existing scheme of words, letters and grammar. And one might call all the data it was given "inherent" too.

RalphDratman

He's caught in a paradigm. This is way off course.

janegoodall

This is the first time I've realized that not only does Geoffrey believe humans are just a passing phase in the evolution of intelligence, but that he thinks this is a good thing. Is he right?

jeffmanning

We need an AI World Government. The world is too complex for humans to rule.

superfreiheit