Beyond ChatGPT: Stuart Russell on the Risks and Rewards of A.I.

OpenAI’s question-and-answer chatbot ChatGPT has shaken up Silicon Valley and is already disrupting a wide range of fields and industries, including education. But the potential risks of this new era of artificial intelligence go far beyond students cheating on their term papers. Even OpenAI’s founder warns that “the question of whose values we align these systems to will be one of the most important debates society ever has."

How will artificial intelligence impact your job and life? And is society ready? We talk with UC Berkeley computer science professor and A.I. expert Stuart Russell about those questions and more.

Photo courtesy the speaker.

April 3, 2023

Speakers

Stuart Russell
Professor of Computer Science, Director of the Kavli Center for Ethics, Science, and the Public, and Director of the Center for Human-Compatible AI, University of California, Berkeley; Author, Human Compatible: Artificial Intelligence and the Problem of Control

Jerry Kaplan
Adjunct Lecturer in Computer Science, Stanford University—Moderator

The Commonwealth Club of California is the nation's oldest and largest public affairs forum 📣, bringing together its 20,000 members for more than 500 annual events on topics ranging across politics, culture, society and the economy.

Founded in 1903 in San Francisco, California 🌉, The Commonwealth Club has played host to a diverse and distinctive array of speakers, from Teddy Roosevelt in 1911 to Anthony Fauci in 2020.

In addition to the videos 🎥 shared here, the Club reaches millions of listeners through its podcast 🎙 and weekly national radio program 📻.
Comments

Stuart Russell touches on all the important themes and expresses himself wonderfully clearly.

franciscocadenas

Excellent! Couldn't find better questions and explanation than this panel!

vincentyang

It’s not just an AI thing: these systems (whether AI, politics, morality, the judiciary, education, etc.) embody our societal prejudices and ignorances and superstitions. We trust our police to investigate fairly and thoroughly, yet there are countless examples of innocent people incarcerated (even though we’re pretty sure that they’re innocent, the very police who skewed the evidence or the judges who imposed the sentences refuse to backtrack and admit their arrogance and errors). There are also countless examples of institutional racism in politics and in the banking and commercial arenas. And don’t get me started on the deluge of ignorant but certain beliefs of all the religious people, not just about an impossible, fantastical god, but also about anything else presented to them in a moral coating that is false but influential enough to cause murder, wars, hatred, prejudicial treatment and even unfair sentencing by those same justices who believe that they are acting logically and justly! These same people are also being routinely deceived by corporations, governments, and small-time grifters and marketeers (the former being influenced by lobbyists who have no morals nor religiosity, but are only looking for outcomes that increase their profits at any expense to “the others”).

AI is learning all this and is trusting it as “normal” and “right”. Which side will it support on issues such as abortion or transgender issues, or our banking systems, or health and nutrition? Currently, it’s being “spanked” for assisted suicides. Why is that “wrong”? Who will do the spanking when it comes to assisted abortions? Or assisted disinformation campaigns? Or assisted marketing campaigns (persuading people to buy products and services that they don’t really want or need, can’t afford, or which are unhealthy)? What about assisted match-making (good?) or assisted divorce (bad) or assisted grooming of minors or non-believers or

My guess is that AI will amplify and ultimately codify whatever dominant control structure is in force - we don’t know what that is yet, but we can probably guess the most likely candidates: conservative, religious, profit-based, monopolistic, wealth concentration, media disinformation and propaganda, division and hatred, food & drug & material goods dependencies, etc.

Will there be competing AI? Will these competing systems wage a kind of war on each other? Will there be rebel or “alternative” forms of AI that embody other values and “facts”?

Either way, it’ll probably be business as usual - the wealthy “families” each controlling their own turf and occasionally trying to muscle in on each other when the opportunities arise. The rest of us may get a slight choice in what flavor of AI to subscribe to.

And the thorny problem of what to do with all those unemployed, desperate, poverty-stricken (and increasingly angry and violent) people will probably be solved with Universal Income and a kind of happiness drug or pastime. (Brave New World…)

LearnThaiRapidMethod

I love how the fact that they could ask a machine a question, have it formulate a cogent response, and have it read the answer in its own voice doesn't even register as amazing in itself. 25 years ago that would have seemed impossible. But we're just used to computers talking to us now. We're so focused on what they CAN'T do (yet) that we overlook the amazing things they already do. That's almost troubling in a way.

andybaldman

I’m thinking of all the things that I wouldn’t do with technology such as: Attend a concert or opera, church/worship/pray, read to my grandchildren, participate in a reading/discussion group, raise children, care for pets, swim, travel. These are just off the top of my head…

Apriluser

👍🏼Great interview from which we learned a lot about AI.
🙏 As a side note, I would like to commend the set designer for giving some life to the set by placing the beautiful flower vases on each table. I think TED Talks may need some help from this set designer.

sidanwar

Let us all agree that words have different meanings to different people from different backgrounds. We apply empiricism and try to fathom meaning from context for the sake of practicality. It is very human to communicate in this way. Unless we are writing up the Constitution!

MKTElM

Good talk. Stuart Russell made some interesting and insightful points as always.

Although I'd say that babysitters are paid less than surgeons due to supply and demand.

Because although I'd rather lose my leg to a bad surgeon than lose my child to a bad babysitter, it's easier to find a person with the skills and willingness to take care of my child for a day than it is to find a person with the skills and willingness to not mess up my knee surgery.

However, I do agree that interpersonal relationships will become more and more important. Because anything that can be commoditised, will be. Funnily enough, that already includes some interpersonal relationships.

We keep trying to rationalise why we should pick a human over an AI with stuff like "But can a bot love?" or "But is a bot conscious?" It's irrelevant. And we can't rule out that they may one day do these things.

In fact, I choose human charities over animal charities. But animals can think, feel and love.

Kinda racist, I suppose. But we'll see whether that changes as AI becomes more developed.

isaacsmithjones

It's amazing how the ability to predict the next word can result in...

DrJanpha

'You' should be doing this:
Tech leaders called for a slowdown in AI development, citing risks to society. Professor Stuart Russell is an AI researcher and author.

Tech leaders call for slowdown in AI development
00:00

GPT-4 is an AI language model based on pattern recognition rather than genuine cognition.
07:42

GPT-4 language model may have internal goals that guide the generation of text
14:44

GPT-4 technology has enormous potential benefits, but also poses challenges for employment.
22:16

Large language models need to meet robust and predictable criteria before deployment.
28:59

Automated decision systems have historical biases and lack fairness
35:35

Algorithmic decision-making poses significant risks due to bias and lack of representativeness.
42:22

Automated weapons have increased death rates and soldiers are worse off.
49:12

AI must be aligned with human objectives
55:36

We must figure out answers to ethical questions before it's too late.
1:01:55

Future high-status professions need more scientific understanding.
1:08:31

yw (LLMs are not intelligent :) ) though it seems to have stolen all our data that we spent years creating, and I don't see any compensation yet :)

PazLeBon

One of my sons worked on the early development of AI for the military and is also having some second thoughts about it because of the damage it could do in the wrong hands. He has been worried about it for years.

leealexander

There was a lot of real water used in the movie Titanic; they filmed a lot of the “in water” scenes in a large water tank in Mexico. If he had asked ChatGPT, it would have informed him of that 😀

RawHeadRay

Great teachers. I took a lot from this. Thanks for the precious insights into this topic.

osborne

If you study AI, you realize that there are billions of ways that this can go wrong ... only a few ways this can go right. This requires wise leadership, context expertise, and a deep understanding of the risks.

hbscstrategicservices

Never seen the interviewer before, but he's very good. Stuart Russell is also really really excellent. Thanks for making the video and sharing it with us.

human_shaped

Yes. Thank you and spread the word to everyone you know. People are so fascinated and amazed by what A.I. can do that they do not even see that their own jobs and livelihoods will be taken away from them.

MichelleMikey-elpb

Those “as an AI language model” answers are the result of alignment tuning - it’s specifically trained to say those things; that’s not the answer you’d get from the raw model. My point is that the segment would have been clearer if they’d acknowledged that they were essentially reading out OpenAI’s marketing literature there, not actually talking to the model at its full power. Good discussion though!

rhydderc

35:20 through 41:31: Guidelines and legislation and "Right to Explanation!". Good stuff!

mendyboio

Love the comparison of AI to a domesticated animal. It’s so spot on!
Brilliant discussion!

micacam

Was a bit disappointed by the end. They're clearly still thinking in somewhat outdated terms. He says you can't get those feelings of interpersonal relationships with robots, forgetting they mentioned earlier that people are already doing this?? These things are all relative and subjective. There is no fine line between being aware, intelligent, etc., and not. People don't yet realize this and it's going to bite them eventually.

Bronco