ROBERT MILES - 'There is a good chance this kills everyone'

Please check out Numerai - our sponsor @

Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being a part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.

Welcome to an exciting episode featuring an outstanding guest, Robert Miles! Renowned for his extraordinary contributions to understanding AI and its potential impacts on our lives, Robert is an artificial intelligence advocate, researcher, and YouTube sensation. He combines engaging discussions with entertaining content, captivating millions of viewers from around the world.

With a strong computer science background, Robert has been actively involved in AI safety projects, focusing on raising awareness of the potential risks and benefits of advanced AI systems. His YouTube channel is celebrated for making AI safety discussions accessible to a diverse audience by breaking down complex topics into easy-to-understand nuggets of knowledge, and you might also recognise him from his appearances on Computerphile.

In this episode, join us as we dive deep into Robert's journey in the world of AI, exploring his insights on AI alignment, superintelligence, and the role of AI in shaping our society and future. We'll discuss topics such as the limits of AI capabilities and physics, AI progress and timelines, human-machine hybrid intelligence, AI in conflict and cooperation with humans, and the convergence of AI communities.

Robert Miles:
@RobertMilesAI

Panel:
Dr. Tim Scarfe
Dr. Keith Duggar

Refs:
Are Emergent Abilities of Large Language Models a Mirage? (Rylan Schaeffer)

TOC:
Intro [00:00:00]
Numerai Sponsor Message [00:02:17]
AI Alignment [00:04:27]
Limits of AI Capabilities and Physics [00:18:00]
AI Progress and Timelines [00:23:52]
AI Arms Race and Innovation [00:31:11]
Human-Machine Hybrid Intelligence [00:38:30]
Understanding and Defining Intelligence [00:42:48]
AI in Conflict and Cooperation with Humans [00:50:13]
Interpretability and Mind Reading in AI [01:03:46]
Mechanistic Interpretability and Deconfusion Research [01:05:53]
Understanding the core concepts of AI [01:07:40]
Moon landing analogy and AI alignment [01:09:42]
Cognitive horizon and limits of human intelligence [01:11:42]
Funding and focus on AI alignment [01:16:18]
Regulating AI technology and potential risks [01:19:17]
Aligning AI with human values and its dynamic nature [01:27:04]
Cooperation and Allyship [01:29:33]
Orthogonality Thesis and Goal Preservation [01:33:15]
Anthropomorphic Language and Intelligent Agents [01:35:31]
Maintaining Variety and Open-ended Existence [01:36:27]
Emergent Abilities of Large Language Models [01:39:22]
Convergence vs Emergence [01:44:04]
Criticism of X-risk and Alignment Communities [01:49:40]
Fusion of AI communities and addressing biases [01:52:51]
AI systems integration into society and understanding them [01:53:29]
Changing opinions on AI topics and learning from past videos [01:54:23]
Utility functions and von Neumann-Morgenstern theorems [01:54:47]
AI Safety FAQ project [01:58:06]
Building a conversation agent using AI safety dataset [02:00:36]
Comments

Miles really undersells himself. I think he explains the risks of AI more clearly than any other popular speaker on this topic.

Thanks for inviting him on!

leeeeee

Keith's standpoint seems to be: don't worry, we'll just outsmart it.

Like we'll all somehow know intuitively that any further advance will be dangerous, and then all look at each other and say, "time to destroy these machines that spit out rivers of gold and do all the hard work, pass me a sledgehammer".

gasdive

The guy on the right is so painfully naive

rosameltrozo

Miles's humility is winning and his competence is clear for all to hear, especially in his caution and careful style of communicating.

GingerDrums

Keith just really doesn't get it. He's thinking it's all RoboCop. He does not seem to understand that this is not like writing a sci-fi plot.

quentinkumba

If you're going to push back with, "Yeah, but what about...", then you should probably be finishing that question by pointing out some deficiency in the statement you're responding to. A good example of this is how Miles consistently points out the logical flaws in those challenges. These interactions alone end up being fairly strong evidence for why we should be very concerned about AI safety. It suggests to me that many people would not even realize when they were being out-maneuvered by a sufficiently sophisticated AI.

dmwalker

Keith’s argument about asteroids is ridiculous

luomoalto

My top quotations:

"We're approximately the least intelligent thing that can build a technological civilisation."

"Alignment isn't 6 months behind capabilities research, it's decades."

"If we can get something that cares about us the way we care about our pets, I'll take it."

"I get all my minerals from asteroids, it's so convenient." (lol)

I struggle to understand how anyone can hear the words 'sovereign AI' or 'pets' and not feel a deep, chilling terror.

Can we just call this what it really is? It's an arms race to build God, a castrated zombie God you control, constrained only by the laws of physics. Whose God are we building? Do we all get one?

It feels a lot like the logic of the USA's Second Amendment, except with nukes. Advocates cry "it's a human right to arm ourselves to the teeth". Everyone is terrified, and one drunken misunderstanding ends us all.

luke.perkin.inventor

AI deciding to keep us around for its own reasons seems much worse than death. Much, much worse.

ikotsus

I really like the talk, but I think it's kind of a shame that it went in the whole "can we be sure we'll be 100% irrevocably and completely wiped out" direction. The "is there a real risk of considerable, very hard to reverse damage, and are we doing enough to address it?" angle seems so much more interesting.

slgnssp

It's tough to have a real, complex, and nuanced talk about all the issues around AI catastrophe when you have to constantly respond to the simplistic. Please match the seriousness and depth of your participants.

Thank you for your work Miles.

szboid

I wish these guys would actually engage with the points made by their guest and argue about those points. Instead they are clearly overmatched intellectually - and there is no shame in that; we each have our limits. It only becomes shameful when you deal with it simply by handwaving really hard and telling yourself that you're winning.

davidhoracek

How in the world is the host on the right looking at the progress we make in a year of AI research, looking at the average intelligence of humans, and feeling confident that this is all going to work out?

What's notable in this discussion is that the points Miles is making are still the absolute basic problems of AI safety research. Total entry-level stuff. We have no idea how to solve any of them well, and the problems are not hypothetical: they are observed properties of the systems we have studied.

lkyuvsad

It's discouraging that the hosts seem incredulous about the basics of the alignment problem. Incredulity won't help us solve these problems, and where there is disagreement it does nothing to advance understanding.

-Haiku

“Because we do not know what intelligence is we may create it without knowing and that is a problem.” Love it!

curiousmind

Robert is just the best. And just to flaunt my fan-boyhood, my favourite moment in this video is at 44:29, where he drives a nail into the coffin of lofty philosophical debate about intelligence during an AI safety conversation: you don't need to understand what fire "really is" in order to cause substantial harm with it, be it deliberately or accidentally. If anything, not knowing exactly what intelligence is only increases the risk inherent in anything that's either more or differently intelligent. And that's all there is to say about the "nature of" intelligence in a debate about AI safety.

erikfinnegan

The mental gymnastics of these guys is exhausting. Robert tries to stick to facts, and they make up non sequitur strawman scenarios and then pretend it's a good argument. Their hearts may be in the right place, but they are not being realistic. All an AI has to do to win is convince most humans to support it. That's it. No lasers required.

alexpavalok

If you are in a car being driven towards a cliff at 200 mph, at what distance should you start worrying? How long should you wait before you start taking action? Too many opponents of AI safety research seem to want to wait until the car has already gone over the cliff before they admit there's a problem. By that point, it's too late.

michaelspence

Extracting resources from the Earth's crust is not a waste: you still get more out than you put in. So it would be rational to extract from all sources, not just asteroids.

XOPOIIIO

57:00 This co-host is kinda disrespectful, isn't he? He ignores the crux of the arguments all the time and just laughs in the face of his guest.

flisboac