Accelerate AI, or hit the brakes? Why people disagree

This video examines the cognitive biases that affect us when we think about big questions, including those around AI risk. We analyze three questions, covering AGI, misinformation, and existential risk.

We cover nine different cognitive biases, including normalcy bias: the tendency to assume that the world will continue as normal, and a reluctance to prepare for a disaster that has not yet happened. We then revisit the three questions to determine which biases might affect those hearing the statements.

We also briefly discuss the path that some people take to understanding existential risk. I came up with seven stages of AI existential risk enlightenment. Where are you on this spectrum?

#ai #existentialrisk #cognitivebias

How brain biases prevent climate action

Climate change inaction: Cognitive bias influencing managers' decision making on environmental sustainability choices. The role of empathy and morality with the need of an integrated and comprehensive perspective

Reasoning through arguments against taking AI safety seriously

Machines of Loving Grace

List of cognitive biases

0:00 Intro
0:21 Contents
0:30 Part 1: Asking the big questions
0:49 Focusing on upside of AI
1:14 Possible to focus on potential risks
1:41 Yoshua Bengio and optimistic intuitions
2:32 Three questions to focus on
3:04 Part 2: Cognitive biases and other troubles
3:26 Example: motivated cognition
3:58 Present bias, diffusion of responsibility bias, etc
4:35 Detail on cognitive biases
4:40 Bias 1: The framing effect
5:19 Bias 2: Present bias / Hyperbolic discounting
5:49 Bias 3: Diffusion of responsibility
6:30 Bias 4: Egocentric bias, overvaluing own opinion
6:49 Bias 5: Availability bias and personal memory
7:08 Bias 6: Optimism bias
7:15 Bias 7 & 8: Primacy and recency bias
7:33 Bias 9: Normalcy bias, refusal to react to novel disasters
8:00 Many other biases exist
8:12 Part 3: The answers no one wants to hear
8:15 Human disclaimer
8:33 Revisiting the three questions
9:14 Question 1: AGI will automate most jobs
9:39 Possible biases affecting question 1
9:59 Why you may not be taking action
10:34 Biases don't determine whether you should actually support a stance
10:51 Question 2: Misinformation will flood the internet
11:23 Possible biases affecting question 2
11:47 Why society is not taking action
12:20 Question 3: AI could lead to human extinction
12:58 Why you may not agree with this answer
13:39 Why society isn't tackling existential risk
14:33 Seven stages of grief
14:42 Seven stages of AI existential risk enlightenment
15:18 Conclusion
16:25 Outro
Comments

Acceleration is a misnomer. By rushing towards poorly understood capabilities we're moving away from beneficial AGI.

Dababs

The reason we need to focus on the positive is that IF we develop an AI that fulfills our requests perfectly, then we need the most detailed prompt possible. This means we need far more realistic positive imagery of the future.

In this scenario, any ideation about what we want the future to look like is the most important thing: noting things we want, things we don't want, and so forth. Just publishing these ideas and chatting with LLMs gives training data that the LLM can use to infer what our ideal future looks like.

Rolyataylor

I am with you on all your points, but the problem is that our biases dictate where we research, and the internet is capable of supplying any biased answer we want. Great videos, thanks.

juju

We need to Pause to coordinate on safety

TheJokerReturns

Enjoyed the video, as usual, but I wonder what ever would you do if a video had FOUR parts???

JonathanStory

I love you Dr Waku. My view...for what it's worth...is that we cannot stop or pause the AI train. But we can jump aboard it and ride along. And we can help control the speed (if we have to cross an old bridge, we can reduce the speed a little, etc.). And we can build new tracks. And there is a lot we can do to get the best outcome out of the circumstances. Stopping or pausing the AI train is just not an option.

Freja-co

The bystander bias says that I, an Uber driver, am not the best person to deal with the existential risk of AI, compared to AI researchers, tech billionaires, politicians, and the guy that makes the donuts at Krispy Kreme. So to avoid bystander bias, I should really get out there and do something about AI existential risk. Fortunately, I'm off work tomorrow. So yeah, no problem.

MichaelDeeringMHC

The real problem is the few individuals in charge of AI.

ZappyOh

MIT proselytized the singularity circa 1976. I was there. The largest department was EECS, Course 6. One of its 4 core requirements was AI, where we were taught that hard AI/SI was inevitable. It was an easy sell, especially in that department, which lived and breathed levels of abstraction.

Rick.Fleischer

Cognitive bias: If 'AI safety researchers' fail to create and maintain sufficient fear, their jobs are gone. It goes both ways, doesn't it?

minimal

It would be even better to apply cognitive bias analysis to politics.... Good luck to my American viewers, whichever way you want it.

DrWaku

What is funny is that I view most "fellow doomers" as subject to the same biases when they discount the (however small) risk of unbounded and inescapable suffering, which is a much bigger threat than extinction. In this case I empathize with accels more, since utopia vs. mere extinction could be a legitimate gamble for many people, if it were not for my first point.

AI_Opinion_Videos

Here's my uneducated thought process... There's definitely danger. No doubt about it. But I struggle to find a competent pair of training wheels to put on this ride. What I mean is, WE and other nations within the circle of cooperation can agree to slow down research or gate it behind strict rules.

But what's compelling the nations outside that circle to slow down or follow that rule???

So it's a false comfort and a hidden vulnerability/danger. This isn't as cut and dried as the Cold War, where it was "you bomb me, I bomb you."

It's much, much worse... And I hate to say it, but we've already jumped from that plane. The "digital" Pandora's box is already open. We just don't know what's going to come out yet, or which country could have temporary... And I do use temporary strongly here... Temporary control of whatever pops out.

Ignorant as I might be of the inner workings of AI, I refuse to breathe a sigh of relief just because the government and supporting businesses all said they are putting safety first and taking things "slow"... Why? Because we're all on one huge wooden ship. Just because one group in one section tosses the flammables overboard doesn't mean every group will.


So in the end, I've simply thrown my hands up about this. What happens will happen. Live your life and try to benefit from whatever good can come from this artificial intelligence boom. If you're capable, continue to pursue knowledge. Better to know as much about your enemy as you would a very close loved one. Hopefully, the wheel of fate will land us all in the right position.

durtyred

Normalcy bias is only a bias when there is clear information that what's going to happen is actually unusual, and you choose to ignore the evidence. Otherwise it's just sensible.

RyanTaylor-pigq

Thanks for this amazing video. I'm glad you started making videos; your three-part structure rule is what I taught when I taught public speaking in college: tell them what you're going to tell them, tell them, then tell them what you told them.

I went into an existential depression about 2 years ago when I read Eliezer’s article in Time and couldn’t find any, and I mean any, arguments against his reasoning other than, “Nah.” “They’ll think of something.” “You don’t understand how they work; it’s a word calculator.”

None of those addresses the totality of the situation.

My entire life changed. I spend more money. I'm not completely irresponsible about it, but I bought Nvidia a few years back when I saw AI would be big, and that's been enough to cover a lot of extra comfort.

So before it all goes sideways, one way or another and very, very soon, I'm enjoying the now.

I hope everyone here does the same; the enjoyment part anyways 😊.

LanceWinder

Firstly, we have to understand the current 'race condition'. Competitors, whether nations or corporations, WILL NOT slow down out of fear that their competitors will not, because they WON'T.

So slowing down isn't possible.

With that out of the way, we can speculate: should we?

In all the possible scenarios I've simulated, the ONLY one where we all survive is the one where ASI assumes control and is benevolent.

From the point of view of most of us, the elite will use ASI to get rid of the rest of us - IF they can control it, which is unlikely. But a scenario where they do, and cement their control over the world forever, is more abhorrent to us than one where ASI causes us to go extinct.

I am hoping that superethics emerges hand in hand with superintelligence (my 'p-doom' is less than 2%), and I have a lot of great ideas about how to do that (lately I've been using ChatGPT to help me refine these ideas).

In the end, however, I will support full acceleration in every way possible because I want to break the spine of the current world order that treats us all like slaves.

XLR8!

pandoraeeris

If you don't agree with me, try to figure out where you have made the error.

MichaelDeeringMHC

I think my main bias is optimism. I'm getting older and I don't want to die, but I guess I have to weigh that against the billions of future lives that may be in jeopardy.

chrissscottt

I'm not worried about the end of civilization. I'm more concerned about the total destruction of the biosphere and the end of our species.

HoboGardenerBen

I don't think a calm, fluid, easy transition is an option. We don't operate that way as a species. We stir things up into a chaotic soup full of creative possibility and then clarify a new temporary relative stability from it. I think we're in the step before the massive upsurge in chaos and fuckery. I have been crafting my hobo flow to prepare for it for a long time; kinda excited to have the need to use those skills, to test them. Knowing the dark comedy of the universe, I will probably survive bandits and starvation only to be killed by a bit of cement falling from an overpass on a peaceful day where it feels like things are looking up, the future finally bright again. I'll be looking off in the distance at the sunset, a tear in my eye, heart full of bittersweet hope, and WHAM, brain jelly breakfast fresh to order for the local crows. I'm fine with that :)

HoboGardenerBen