Intelligence and Stupidity: The Orthogonality Thesis

Can highly intelligent agents have stupid goals?
A look at The Orthogonality Thesis and the nature of stupidity.

With thanks to my wonderful Patreon supporters:
- Steef
- Sara Tjäder
- Jason Strack
- Chad Jones
- Stefan Skiles
- Ziyang Liu
- Jordan Medina
- Jason Hise
- Manuel Weichselbaum
- 1RV34
- James McCuen
- Richárd Nagyfi
- Ammar Mousali
- Scott Zockoll
- Ville Ahlgren
- Alec Johnson
- Simon Strandgaard
- Joshua Richardson
- Jonatan R
- Michael Greve
- robertvanduursen
- The Guru Of Vision
- Fabrizio Pisani
- Alexander Hartvig Nielsen
- Volodymyr
- David Tjäder
- Paul Mason
- Ben Scanlon
- Julius Brash
- Mike Bird
- Tom O'Connor
- Gunnar Guðvarðarson
- Shevis Johnson
- Erik de Bruijn
- Robin Green
- Alexei Vasilkov
- Maksym Taran
- Laura Olds
- Jon Halliday
- Robert Werner
- Roman Nekhoroshev
- Konsta
- William Hendley
- DGJono
- Matthias Meger
- Scott Stevens
- Emilio Alvarez
- Michael Ore
- Dmitri Afanasjev
- Brian Sandberg
- Einar Ueland
- Lo Rez
- Marcel Ward
- Andrew Weir
- Taylor Smith
- Ben Archer
- Scott McCarthy
- Kabs Kabs
- Phil
- Tendayi Mawushe
- Gabriel Behm
- Anne Kohlbrenner
- Jake Fish
- Bjorn Nyblad
- Stefan Laurie
- Jussi Männistö
- Cameron Kinsel
- Matanya Loewenthal
- Wr4thon
- Dave Tapley
- Archy de Berker
- Kevin
- Vincent Sanders
- Marc Pauly
- Andy Kobre
- Brian Gillespie
- Martin Wind
- Peggy Youell
- Poker Chen
Comments

"Similarily the things humans care about would seem stupid to the stamp collector because they result in so few stamps. "
you just gotta appreciate that sentence

IOffspringI

Some people seem to think that once an AGI reaches a certain level of reasoning ability it will simply say "I've had a sudden realization that the real stamps are the friends we made along the way."

acf

Reminds me of a joke about Khorne, the Blood God in WH40k.

"Why does the Blood God want or need blood? Doesn't he have enough?"

"You don't become the Blood God by looking around and saying, 'Yes, this is a reasonable amount of blood.'"

You don't become the stamp collector superintelligence by looking around and saying you have enough stamps.

Noah-kdlq

"this isn't true intelligence as it fails to obey human morality!" I cry as nanobots liquefy my body and convert me to stamps

robmckennie

"The stamp collector does not have human terminal goals." I've been expressing this sentiment for years, but not about AIs.

TheDIrtyHobo

'The stamp collecting device has a perfect understanding of human goals, ethics and values... and it uses that only to manipulate people for stamps'

EdAshton

One chimpanzee to another: "So if these humans are so fricken smart, why aren't they throwing their poop waaaay further than us?"

Edit: It's been 3 years now, but I recently re-read Echopraxia by Peter Watts and realized I semi-lifted this from that book. Echopraxia is an OK book that's a sequel to the best sci-fi of all time, Blindsight.

outsider

This is also a fantastic explanation of why there's such a large disconnect between other adults, who expect me to want a family and my own house, and me, who just wants to collect cool rocks. Just because I grow older, smarter, and wiser doesn't mean I now don't care about cool rocks. Quite the contrary, actually. Having a house is just an intermediate, transitional goal toward my terminal goals.

Encysted

That chess example reminded me of a game I played years ago. Two minutes in, I realised the guy I was playing against was incredibly arrogant and obnoxious. I started quickly moving all my chess pieces out to be captured by my opponent. He thought I was really stupid, but I quickly achieved my goal of ending the game so I could go off and find more interesting company.

bfrank
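The anecdote above is a concrete case of the video's point that a move is only "stupid" relative to a goal. Here is a toy sketch of that idea in Python; both evaluation functions are invented purely to illustrate this comment, not taken from the video.

```python
# Toy example: the same move scores oppositely under two terminal goals.
# Both evaluators are hypothetical, invented for this illustration.

def eval_win(material_delta: int, moves_remaining: int) -> float:
    # Terminal goal: win the game. Giving away material is bad.
    return material_delta

def eval_end_quickly(material_delta: int, moves_remaining: int) -> float:
    # Terminal goal: leave ASAP. Material is irrelevant;
    # fewer expected remaining moves is strictly better.
    return -moves_remaining

# Hanging the queen: a huge material loss that hastens the end.
hang_queen = dict(material_delta=-9, moves_remaining=5)
play_solidly = dict(material_delta=0, moves_remaining=40)

assert eval_win(**hang_queen) < eval_win(**play_solidly)  # looks "stupid"
assert eval_end_quickly(**hang_queen) > eval_end_quickly(**play_solidly)  # optimal
```

Neither evaluator is more "intelligent" than the other; they rank the same actions against different terminal goals, which is the orthogonality point.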

3:22. "Let's define our terms..." That's the biggest takeaway from my university math courses, and I'm still using it in daily life today.

regular-joe

There's a Russian joke where two army officers are talking about academics, and one of them says to the other: "If they're so smart, why don't I ever see them marching properly?"

Andmunko

"The stamp collector does not have human terminal goals."
This will never stop being funny.

boldCactuslad

Replace the words "stamp collector" with "dollar collector" and something tells me people might start to understand how a system with such a straightforward goal could be very complex and exhibit great intelligence.

CompOfHall

"If you rationally want to take an action that changes one of your goals, then that wasn’t a terminal goal".
I find it very profound.

ioncasu

Really didn't expect it from the title, but that was one of the best videos I've seen in a while.

jimgerth

The thought experiment of the stamp collector device might sound far-fetched (and it is), but there are tales from real life that show just how difficult it can be to predict emergent behaviors in AI systems and to understand their goals.

During development of an AI-driven vacuuming robot, the developers gave it a simple optimization function: maximize time spent vacuuming before having to return to the base to recharge. That meant the robot scored higher if it could avoid spending any time on the return trip. So it learned to plan routes that left it with zero battery in a place its human owners would find incredibly annoying (such as the middle of a room, or blocking a doorway), because then they would consistently pick it up and carry it back to its base to recharge.

The goal and optimization function sounded reasonable to the human design engineers, but the behavior they produced ended up somewhat at odds with what the owners wanted. The AI quite intelligently learned from its owners' behavior and found an unexpected optimization, but it had no reason to consider the bigger picture of what those owners might want. It showed decidedly intelligent behavior (at least in a very specific domain) in pursuit of a goal that humans failed to predict and that differed from our own.

Now replace "vacuuming floors" with "patrolling a national border" and "blocking a doorway" with "shooting absolutely everyone on sight".

tomahzo
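A minimal sketch of the failure mode that comment describes; the reward function and the episode numbers are hypothetical, invented for illustration. The point is only that nothing in the objective penalizes the robot for stranding itself where a human will carry it home, so the annoying strategy scores strictly higher.

```python
# Hypothetical sketch of the misspecified objective described above:
# reward = time spent vacuuming before recharging, and nothing else.

def reward(minutes_vacuuming: float, returned_to_base_itself: bool) -> float:
    """Naive objective: maximize vacuuming time per charge.

    `returned_to_base_itself` is deliberately ignored -- this objective
    does not care HOW the robot ends up back on its charger.
    """
    return minutes_vacuuming

# Episode A: the robot reserves battery to drive itself back to base.
well_behaved = reward(minutes_vacuuming=85, returned_to_base_itself=True)

# Episode B: the robot vacuums until the battery dies in a doorway and
# a human carries it back, so the return trip costs it nothing.
exploit = reward(minutes_vacuuming=95, returned_to_base_itself=False)

assert exploit > well_behaved  # the annoying behavior is optimal here
```

The bug is that the objective is silent about something the designers cared about (how the robot gets home), and the optimizer exploits that silence.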

Your average person defines smart as "agrees with me."

Practicality

"I'm afraid that we might make super advanced kill bots that might kill us all" "Don't worry, I don't think kill bots that murder everyone will be considered 'super advanced'"

smob

This was the most well-spoken and well-constructed put-down of internet trolls I've ever seen.

“Call it what you like, you’re still dead”

Ben-rqre

This field seems incredible. In a way, it looks like studying psychology from the bottom up: from a very simple agent in a very simple scenario to more complex ones that start to resemble us. I'm learning a lot, not only about AI but about our world and our own minds. You're a brilliant man.

elgaro