Why does human intelligence beat AI? – with Gerd Gigerenzer



In this talk, Gerd discusses how trust in complex algorithms can lead to illusions of certainty that become a recipe for disaster.

00:00 Intro
06:23 The algorithms of finding love online
09:52 Why AI only works in stable world situations
12:03 The illusion of fully automated self-driving cars
14:13 Why Elon Musk’s prediction is wrong
17:09 Is more data always a good thing?
24:29 How human common sense differs from AI
26:50 Why humans and computers make different mistakes
31:28 The problem with deep neural networks
33:49 Are we sleep-walking into surveillance?
38:30 The lack of risk literacy amongst politicians and decision-makers
41:52 Face recognition doesn’t work for all problems
44:28 People want online privacy – but they don’t want to pay for it
51:07 China’s social credit system – could it happen in the west?
55:38 How to stay smart in a smart world

This livestream was recorded at the Ri on 28 April 2022.

Gerd Gigerenzer is Director of the Harding Centre for Risk Literacy at the University of Potsdam, Faculty of Health Sciences Brandenburg and partner of Simply Rational - The Institute for Decisions. He is the former Director of the Center for Adaptive Behavior and Cognition (ABC) at the Max Planck Institute for Human Development and at the Max Planck Institute for Psychological Research in Munich.

His award-winning popular books 'Calculated Risks', 'Gut Feelings: The Intelligence of the Unconscious', and 'Risk Savvy: How to Make Good Decisions' have been translated into 21 languages. His academic books include 'Simple Heuristics That Make Us Smart', 'Rationality for Mortals', 'Simply Rational', and 'Bounded Rationality'.

Comments

One problem with paying social media companies in exchange for privacy is that no one really expects them to honour that agreement - not when they can take your payment _and_ continue to sell the bulk of your information to anyone who will meet their price. They will have to show a track record of honesty and ethics first - and I won't be holding my breath waiting for that to happen.

aussiebloke

About 51:00-55:00, what stops "them" from both taking pay from customers and still selling data (or the government doing the same in some form)?

bcase

The planner code in Tesla FSD is not based on machine learning. They're combining different techniques to solve the autonomy problem.

audience

I have been working in the computer industry since the late '70s. I will put everything I know on the line to simply say: there is no such thing as AI. It was originally called "computer learning", and nothing has changed.

AI gives the incorrect impression that a computer can think.

As Steve Wozniak says, when a computer gets up and says "I wonder what I will do today?", then it will be intelligent.

All we really do is use a machine that operates at near light speed to do calculations that take humans much longer. Then the computer can APPEAR to be intelligent.

AI is a marketing scam: a set of protocols that have been around for over 60 years.

samjones

At about 40:00, they also didn't realize that the face recognition system was failing to identify 1 in 5 of the "terrorists" listed in the system.

bcase

I have some doubts about the survey at 47:00. I think many customers don't trust social media companies and assume their personal data will still be used even after they start paying for the service; so why pay money when their data is (ab)used anyway?

foo

I am listening to this man while doing mindless kitchen work; about 12 minutes in I stop dead in my tracks with an audible "OMG"... the dog barks and I see his beautiful, logical perfection. What an awesome talk. Going back in.

marianedmond

The slide says there are 28 members of Congress with criminal database face matches. Hmm, 28 names?

Adam Kinzinger, Alex Mooney, Bill Hagerty, Debbie Lesko, Elise Stefanik, Glenn Thompson, Guy Reschenthaler, Joe Wilson, John Cornyn, Josh Hawley, Ken Calvert, Kevin McCarthy, Lauren Boebert, Lindsey Graham, Marco Rubio, Marjorie Taylor Greene, Marsha Blackburn, Matt Gaetz, Mitch McConnell, Paul Gosar, Ralph Norman, Rand Paul, Rick Crawford, Rick Scott, Ron Johnson, Ted Cruz, Thomas Massie, Devin Nunes.

Sounds like Amazon's criminal face recognition system is working perfectly fine to me.

... Connie Conway, Scott Fitzgerald, Dan Newhouse, Brian Babin, John Carter, Beth van Duyne, ...

More names? I take that back. Sounds like Amazon's system wasn't working hard enough. There are more than 28 Big Liars.

... Marsha Blackburn, Cindy Hyde-Smith, Mike Braun, Tom Cotton, Kevin Cramer, ...

shexec

Sir, thanks for this beautiful lecture. I am a simple man with no knowledge of artificial intelligence except some YouTube videos. I think in the first part of your video, AI fails for lack of general intelligence; its general intelligence is far below that of a mature human being. That is why these problems are occurring.
In the last part of your video, the topic of privacy is a real concern for us, and the privacy threat is getting bigger and bigger day by day.
So I request you to draw governments' attention to these issues. As you hold a respected position, your voice will carry further than that of the general public.
It's very important for AI research to know what AI will not be able to do. Thanks for your valuable lecture.

jayabratabiswas

I think a lot of people would opt out of one of the following policies: (1) My AI may act without my approval; (2) My AI may act without my knowledge.

brothermine

Wonderful talk, and some very important points! The application of Bayes' theorem to the false positive rate of mass surveillance is especially important for people to be aware of. A coffee house where you get free coffee in exchange for surveillance and personalized ads actually doesn't sound all that bad, though :D
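The base-rate point above can be sketched with a quick Bayes calculation. The numbers below are illustrative assumptions, not figures from the talk: even a screening system with 99% sensitivity and a 0.1% false positive rate yields mostly false alarms when genuine targets are rare.

```python
# Worked Bayes example with hypothetical numbers: face recognition
# in mass surveillance, where actual targets are very rare.

def positive_predictive_value(sensitivity, false_positive_rate, base_rate):
    """P(target | alarm) via Bayes' theorem."""
    true_alarms = sensitivity * base_rate           # P(alarm & target)
    false_alarms = false_positive_rate * (1 - base_rate)  # P(alarm & no target)
    return true_alarms / (true_alarms + false_alarms)

# Say 100 genuine targets among 1,000,000 travellers (base rate 1e-4).
ppv = positive_predictive_value(0.99, 0.001, 1e-4)
print(f"P(target | alarm) = {ppv:.1%}")  # about 9%: over 90% of alarms are false
```

So with these assumed rates, fewer than one in ten alarms points at an actual target, which is the trap a raw "99% accurate" headline hides.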

DielectricVideos

Great speaker and fascinating subject!

hbishop

I've been a nuts and bolts programmer for 60 years, and since day 1 friends have asked whether computers will ever be able to do this or that.
My answer started at "maybe" but has moved to "yes, but maybe not in the immediate future".
So what I heard here seemed overly biased towards today's know-how.
Today's AI will help design tomorrow's, and the rate of improvement will be exponential. Very few people, and I include myself, can really cope with exponential growth.

billspence

No, I am not willing to pay for privacy. That is an unalienable right. The social media supplier has to provide and must guarantee privacy. If they do not, do not use their service. They are using your private data to make money. No matter how useful and ubiquitous the service is/has become, it cannot be trusted with your privacy.

brantregare

Most politicians from the US could not possibly beat AI.

meejinhuang

Great talk, thank you Gerd Gigerenzer and the Ri.

stanlibuda

I live in Canada; our cities are not planned for level 4, they are barely planned for level 0.

universeisundernoobligatio

Humans are so smart at comprehending a school bus but so dumb at understanding the significance of a 99% accuracy rate. The latter is a piece of cake for AI. It's so ironic.

Buckzoo

I think it's generally a human condition that we don't understand risk.
I mean, unless trained to.

LJCyrus

On the German face recognition pilot: if 12,000 people were falsely identified, couldn't you filter them out of the system by recording their information? I would imagine that if these people are commuters they would keep showing up daily, so you could filter them out over time. Yes, it would be a pain in the beginning, but as these people got filtered out, the false alarm rate would drop a lot. Or am I missing something? It seems like it would get better over time, not worse, right?
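The filtering idea above can be sketched in a few lines. Everything here is hypothetical (names, data, and the whitelist workflow are assumptions, not part of any real deployment): keep a set of people already reviewed and confirmed harmless, and suppress their repeat alarms.

```python
# Hypothetical sketch of a whitelist workflow for repeat false alarms.
def filter_alarms(alarms, whitelist):
    """Return only alarms for people not yet confirmed harmless."""
    return [person for person in alarms if person not in whitelist]

whitelist = set()
day1 = ["commuter_a", "commuter_b", "suspect_x"]
print(filter_alarms(day1, whitelist))  # day one: all three still flagged

# After manual review, the two commuters are confirmed as false alarms:
whitelist.update({"commuter_a", "commuter_b"})

day2 = ["commuter_a", "commuter_b", "suspect_x"]
print(filter_alarms(day2, whitelist))  # only suspect_x remains flagged
```

This only helps with repeat visitors, of course; each new passer-by can still trigger a fresh false alarm, so the underlying base-rate problem does not go away.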

shadowdagger