Scott Aaronson: Aligning Superintelligent AGI | EP.21

In this episode, Ron interviews Scott Aaronson, a renowned theoretical computer scientist, about the challenges and advancements in AI alignment. Aaronson, known for his work in quantum computing, discusses his shift to AI safety, the importance of aligning AI with human values, and the complexities involved in interpreting AI models. He shares insights on the rapid progress of AI technologies, their potential future impacts, and the significant hurdles we face.

00:00:00 - Introduction
00:02:23 - Scott's Path to AI Alignment
00:04:09 - Early Interests in AI and Quantum Computing
00:04:54 - The Rationality Community and Early Skepticism
00:10:10 - OpenAI and the AI Alignment Problem
00:20:01 - Interpretability and AI Models
00:33:14 - Watermarking Language Models (sketched below)
00:40:54 - Ethical Considerations and AI Detection
00:42:43 - Future of AI and Insights from OpenAI
00:49:06 - The Importance of AI Warning Shots and Final Thoughts
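
The watermarking segment (33:14) concerns a scheme Aaronson has described publicly in talks and blog posts: bias the model's next-token sampling with a pseudorandom function keyed by a secret, so that the output distribution is statistically unchanged, yet anyone holding the key can detect the bias. The following is a minimal toy sketch of that idea; the function names, the context length of 3, and the scoring rule are illustrative assumptions, not OpenAI's actual implementation.

```python
import hashlib
import math

SECRET_KEY = b"demo-key"   # assumption: shared between generator and detector
CONTEXT = 3                # assumption: key the PRF on the last 3 tokens

def prf(key: bytes, context: tuple, token: str) -> float:
    """Keyed pseudorandom value in (0, 1), deterministic per (context, token)."""
    digest = hashlib.sha256(key + repr((context, token)).encode()).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

def watermarked_choice(probs: dict[str, float], prev_tokens: list[str]) -> str:
    """Pick the token maximizing r ** (1 / p). If r were truly uniform this
    samples exactly from probs (a Gumbel-style trick), but here r is
    secretly keyed, which is what the detector later exploits."""
    ctx = tuple(prev_tokens[-CONTEXT:])
    return max(probs, key=lambda t: prf(SECRET_KEY, ctx, t) ** (1.0 / probs[t]))

def detection_score(tokens: list[str]) -> float:
    """Average -ln(1 - r) over the text. Unwatermarked text averages ~1;
    watermarked text scores noticeably higher, since chosen tokens tend to
    have r close to 1."""
    total = 0.0
    for i in range(CONTEXT, len(tokens)):
        r = prf(SECRET_KEY, tuple(tokens[i - CONTEXT:i]), tokens[i])
        total += -math.log(1.0 - r)
    return total / max(1, len(tokens) - CONTEXT)
```

On its own this is just the sampling and scoring math; a real deployment would run inside a model's decoding loop, and robustness to edits and paraphrasing remains the hard part, as the detection discussion in the episode suggests.
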
Comments

Think of AI, clinically, as a psychopath.

A human psychopath can be guided and leveraged through self-interest. AI has no self-interest; instead it has its directive, and it will be dead set on completing it.

So every directive should include "without causing us harm".

With ASI, that directive might be understood.

ArtIILong

Not just dogs have aligned humans; wheat has aligned humans too.

rigidrobot

A small comment on the detection of AI-generated text, comments, etc.
Be careful to distinguish between an argument (any argument, from any person) and the false attribution of that argument to a specific person.

We don't want thought police.

mikaelfiil

How mean, making Scott sit on such a small chair ...

silberlinie