ML Street Talk – AI Existential Risk

Tim Scarfe is the host of Machine Learning Street Talk, one of the most popular machine learning podcasts. ML Street Talk sometimes discusses concepts like Artificial General Intelligence and AI Alignment, though most of the time they approach these topics from a different angle, with much more skepticism.

This interview was recorded at NeurIPS, the biggest AI conference, next to a sign that said "Existential Risk from AI is more than 10% – Change My Mind". The goal of this conversation was to have our minds changed by learning about Tim's sources of skepticism regarding Existential Risk from AI.
Comments

Thanks for the interview Michaël! Happy to field questions

MachineLearningStreetTalk

The conference ran from Dec. 10-16, 2022... Wow, at 14:28 I'm glad he "did some research" and found out about EY etc.; he had no idea they existed! And Chalmers and Chomsky (?!) and his intuition say we'll be OK, so no problem! He also seems really fond of the "seeing patterns in randomness" cached thought...

dancingdog

People walking around like it's March 2020.

therealpananon

Seems strange to admit that we don't understand what it would take for AI to have intentionality, and yet believe so hard that AI can't eventually have intentionality.
If you have a reinforcement learning system with some objective function, and have part of its system be GPT-5+ or whatever comes next, that seems like enough 'intentionality' to be dangerous. No need for consciousness or free will, however you want (or don't want) to define those.

Gredias
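A minimal sketch of the kind of system this comment describes: a goal-directed agent loop with a language model as one component. This is an editor's illustration, not anything shown in the video; `Environment` and `llm_propose_action` are hypothetical stand-ins. The point is that goal-directed behavior needs only an objective and a feedback loop, not consciousness or free will.

```python
class Environment:
    """Toy stand-in for whatever the agent acts on."""
    def step(self, action):
        # Apply the action; return an observation and a scalar reward.
        return f"result of {action}", 0.0

def llm_propose_action(observation, goal):
    """Hypothetical call into a large language model used as a planner."""
    return f"next step toward '{goal}' given '{observation}'"

def run_agent(goal, steps=10):
    env = Environment()
    observation = "initial state"
    total_reward = 0.0
    for _ in range(steps):
        action = llm_propose_action(observation, goal)  # LLM picks the action
        observation, reward = env.step(action)          # environment responds
        total_reward += reward                          # objective provides the pressure
    return total_reward

run_agent("maximize the objective function")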

"The problem is with language models is they don't do information retrieval"....

OpenAI Plugins: "Oh, we'll see about that"

petergraphix
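For context, the pattern plugins enable here is retrieval-augmented generation: fetch relevant text first, then let the model answer conditioned on it. A minimal sketch follows, assuming hypothetical `search_index` and `call_llm` stand-ins rather than any real API:

```python
def search_index(query, corpus):
    """Naive keyword retrieval over a small in-memory corpus."""
    terms = set(query.lower().split())
    return max(corpus, key=lambda doc: len(terms & set(doc.lower().split())))

def call_llm(prompt):
    """Placeholder for a language-model call."""
    return f"[LLM answer conditioned on: {prompt!r}]"

def answer_with_retrieval(question, corpus):
    context = search_index(question, corpus)   # retrieve relevant text first
    prompt = f"Context: {context}\nQuestion: {question}"
    return call_llm(prompt)                    # model reads the retrieved context

corpus = [
    "NeurIPS 2022 ran from December 10 to 16 in New Orleans.",
    "Retrieval augmentation lets a model consult external documents.",
]
print(answer_with_retrieval("When did NeurIPS 2022 run?", corpus))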

Surely we can look at the work of folks in the field like Daniel Bernstein, John Carmack, Linus Torvalds, Moxie Marlinspike, Jeff Dean, Anders Hejlsberg, Alan Turing, Donald Knuth, Ilya Sutskever, etc., who have each made multiple major contributions within their fields, and be solidly able to say they are 10x... no, 30x engineers?

dizietz

The background noise makes it hard to decipher the conversation. I guess there are tools that can suppress background noise.

simonstrandgaard
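One such tool, as a hedged example: the open-source `noisereduce` package can suppress steady background noise offline. The sketch below assumes a mono WAV file named "interview.wav" (an illustrative filename, not the actual recording):

```python
import numpy as np
import noisereduce as nr
from scipy.io import wavfile

rate, data = wavfile.read("interview.wav")    # sample rate and raw samples
if data.ndim > 1:
    data = data.mean(axis=1)                  # mix stereo down to mono
data = data.astype(np.float32)                # noisereduce works on floats
cleaned = nr.reduce_noise(y=data, sr=rate)    # spectral-gating noise suppression
wavfile.write("interview_clean.wav", rate, cleaned.astype(np.int16))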

AI is moving so fast that some of the things speculated about here have already happened. ChatGPT will have a million plugins before long. The whole internet will be ChatGPT's memory, as it can now reference its previous outputs.

jonathanf

I only made it 9 minutes in and I just can't watch any more. This man is obviously very intelligent, well read, and articulate, and I'm sure that in a lot of rooms he asserts his analysis and people accept it based on his projected confidence and competence. I don't think he's listening. I swear he brought up AI and intentionality three times as if it somehow matters. In the scenarios he artificially allows into consideration in his own analysis, AI must act in concert with a human bottleneck. We don't know what conditions may cause intentionality to emerge, so that could change at any time, without any warning that people like this would be willing to believe. And intentionality isn't even necessary to produce the same disastrous consequences: humans have intentionality and are still subject to behavior patterns that are dangerous to humanity, even when those unconscious decision-making behaviors run counter to what they would say they were trying to accomplish.
I accept that I may not have enough context to understand why this person is so comfortable asserting things in a way that makes it seem like he's deeply saddled with cognitive biases that won't allow him to fully acknowledge what's happening and what is possible.

mgmchenry

Everything Tim is saying here is only true until the first AI becomes autonomous and runs away on its own, without human direction. I'd say we're months away, tops, from that happening publicly, if it hasn't already happened behind closed doors. And when it happens publicly, everyone will change their position. Again.

andybaldman

"Safe" and "transformative" are not compatible concepts.

gagrin