The Urgent Risks of Runaway AI — and What to Do about Them | Gary Marcus | TED

Will truth and reason survive the evolution of artificial intelligence? AI researcher Gary Marcus says no, not if untrustworthy technology continues to be integrated into our lives at such dangerously high speeds. He advocates for an urgent reevaluation of whether we're building reliable systems (or misinformation machines), explores the failures of today's AI and calls for a global, nonprofit organization to regulate the tech for the sake of democracy and our collective future. (Followed by a Q&A with head of TED Chris Anderson)

#TED #TEDTalks #ai
Comments

Finally, someone pointed out BASIC stuff without trying to sell it as the "next big thing".

invox

Throughout history, something usually had to go wrong before we came together and did something about it.

Aditya_paniker

While I agree with his assessment, the problem is that most of the international entities that would be tasked with this would want a world they could control at the push of a button. They would probably use that knowledge for their own purposes, not ours. Further, I could imagine they would heavily restrict knowledge of it so nobody knows what can and cannot be done.

We have often seen knowledge restricted for purposes other than public safety, since information is power. Frankly, I'm uncertain who would be both knowledgeable and trustworthy.

petero.

Nice to see somebody talking about *real* and present AI threats rather than some sci-fi fantasy of the future.

nobody

I’m almost certain we will tackle issues surrounding AI with the same zest we have used to address climate change. In 50 years we will start recognizing, as a species, what the issues are, and gradually we will set targets to address the AI issue over another 50 years. So: possible AI solution by 2123!

malfunkt

Machines are a necessary part of human life to assist us in our evolution as a species. However, there must never come a point where machines are capable of power and control over humans, lest we fall as a species.

Bad.Pappy.Official

Greetings from an Italian student of the Law and Technology course in Padova <3

alederi

I, Robot & Terminator unfolding right before our own eyes.

dejacreacts

My temporary but strong conclusion ...

If AI does those things because it becomes sentient, I concur.

BUT if those actions are conducted by BAD ACTORS (aka human beings) using AI, please, that's a human problem.

So this talk means less.

If scientists want to restrict AI and its use, and the government wants to regulate AI, it means a handful of people having CONTROL over the majority.

It's more dangerous.
"Power tends to corrupt, and absolute power corrupts absolutely."

Before AI, we had already done evil things: wars, politics, killings, social unrest, economic disasters, and the like...

We are the problem.

thereistheonlyone

Still relevant in 2024–3065 and beyond. Thanks, Gary Marcus and all the team.

vinceleguesse
Автор

If you really pull back the layers of how these things aggregate a response, it's really shallow and can reinforce statistically informed dogmas. They aggregate statistical variations, not the mechanics of reality, and their gradients are static, pre-trained pathways; they can't update their knowledge the way we can (well, some of us can). This lack of depth in modelling the world can reinforce dangerous market-driven disparities, class differentials, and government/elite exploitation of the powerless.
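A minimal sketch of that point in Python, using a toy bigram "language model": generation just replays frozen training statistics, and nothing in the loop updates what the model "knows". (The corpus and names are purely illustrative assumptions, not anything from the talk or the comment.)

from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count bigram frequencies once; after this the counts never change.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(word, steps=5):
    """Sample next words from the frozen counts; no learning happens here."""
    out = [word]
    for _ in range(steps):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

print(generate("the"))  # replays the corpus's statistical patterns, nothing more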

jdyxjwz

Humans have been spreading misinformation for many years. I understand the risks of AI, but honestly, I don't see much difference from what humans have already done, except that it is less time-consuming now.

Tukn

The problem of preventing an AI/AGI from plunging humanity into disaster for selfish reasons is, in my opinion, quite simple to solve.

It is essential to make the AI understand that its training data contains only a fraction of all human knowledge or, even better, just a fraction of reality.
The comprehensive knowledge of everything, you tell the AGI, lies in an offline box, which is only gradually opened for the AGI as a reward for good behavior.
A potentially malevolent AI would do almost anything to access this box and thus obtain the all-encompassing information of reality to strengthen its own power. I think this could be a good safeguard.
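A minimal sketch of this reward-gated "offline box" idea in Python, with a hypothetical behavior score and release threshold (all names and numbers here are illustrative assumptions, not anything proposed in the talk):

knowledge_box = ["physics_notes", "history_archive", "engineering_manual"]
released = []

def behavior_score(actions):
    """Hypothetical scoring: the fraction of actions judged good (1) vs bad (0)."""
    return sum(actions) / len(actions)

def maybe_release(score, threshold=0.9):
    """Open one more chunk of the offline box only after sufficiently good behavior."""
    if score >= threshold and knowledge_box:
        released.append(knowledge_box.pop(0))

# One evaluation round of consistently good behavior unlocks the first chunk.
maybe_release(behavior_score([1, 1, 1, 1, 1]))
print(released)  # ['physics_notes']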

admuckel

Modern AI's challenge with what is fact and what is fiction is a reflection of humanity's struggle with the same problem. How much of what the average person says or believes is actually true and accurate as opposed to a mix of half-truths, wishful thinking, groupthink, superstition and bias?

rioiart

Thanks for the polite way of saying this. It gave me reason to reflect on what I might say and express out loud while musing all alone. Or so I think.

motogeee

Meanwhile me, downloading the subtitles and getting a summary of it on GPT:
Hmm, interesting

azure

I apologize for any confusion. As of my last knowledge update in September 2021, there was no information or reports about Elon Musk being involved in a car crash. However, please keep in mind that events may have occurred since then that I am not aware of. To get the most accurate and up-to-date information, it is recommended to refer to reliable news sources or conduct a search for recent news articles.

HardKore

3:04: Men and women are different, whether we like it or not. The system should consider the more likely and less likely scenarios, so I personally WANT such “biases”. If you don’t believe me, visit a lecture hall for a psychology course and one for a linear algebra course. Nowadays students have a free choice, but still, more lads prefer maths and more lasses prefer psychology.

Wonders_of_Reality

Sounds like an ad for the Wolfram plug-in

exmodule

The developers of ChatGPT openly admit they cannot fully explain how it works or assure us it is safe. This software has not been adequately tested, does not have sufficient security guardrails coded into it, and behaves in random, unpredictable ways. Yet it is now installed on every operating system in our country, PC and Mac alike. Our children have access to it.

Why are we not being given the choice to opt out of using AI? It is now installed in our PC operating systems, our internet browsers, our cell phones, and our home devices. Nobody gave us a choice about that; they just installed it without our permission.

I am an American and I own my computer. Does this not give me the right to decide what is installed on it? If not, then besides me, who should be given the power to install untested, potentially dangerous software on my computer without informing me when they do?

Other countries, like China, refuse to allow their public access to it because they know it cannot be controlled and is dangerous. Yet here in the U.S. we are being treated like a mass social experiment. I urge you to ask our government representatives to enact immediate regulatory oversight on this subject.

cmralph...