Why We Should Ban Lethal Autonomous Weapons

Comments

You're here for the debate topic and you know it

gradymoxley

It cannot be stopped just by banning it...

SmartEngine-

We need John Nash's game-theory perspective (even though he is dead) on this issue: _coalitions (or agreements) are unsustainable when it benefits one party (generally the first party) to cheat._

IOW, China will not stop researching and producing AI weapons; the U.S. knows this, and thus cannot and will not stop researching and producing AI weapons, and so on...
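The arms-race logic this comment describes can be sketched as a two-player game; the payoff numbers below are illustrative assumptions (not from the comment), chosen only to show that "research" strictly dominates "ban" for each side, which is why the agreement unravels.

```python
# A minimal sketch of the arms-race argument as a 2x2 game.
# Payoff values are assumptions for illustration: mutual restraint beats
# a mutual arms race, but each side prefers to cheat on a ban.

PAYOFFS = {  # (US_choice, China_choice) -> (US_payoff, China_payoff)
    ("ban", "ban"): (3, 3),            # mutual restraint
    ("ban", "research"): (0, 4),       # unilateral disadvantage
    ("research", "ban"): (4, 0),       # unilateral advantage
    ("research", "research"): (1, 1),  # costly arms race
}

def best_response(options, their_choice, i_am_first_player):
    """Pick my option that maximizes my payoff given the other's choice."""
    def my_payoff(mine):
        key = (mine, their_choice) if i_am_first_player else (their_choice, mine)
        return PAYOFFS[key][0 if i_am_first_player else 1]
    return max(options, key=my_payoff)

options = ["ban", "research"]
# Whatever the other side does, "research" is the best response for both:
for their_choice in options:
    assert best_response(options, their_choice, True) == "research"
    assert best_response(options, their_choice, False) == "research"
print("Nash equilibrium: (research, research)")
```

With these assumed payoffs, (research, research) is the unique Nash equilibrium even though (ban, ban) would leave both sides better off, which is the commenter's point.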

paryanindoeur

Hi, I'm from the future. We didn't ban them. Actually, big tech is using you as a means of training their drone AI without you even knowing.

futavadumnezo

Criminals that don't follow the law: Meh.

autonomous

Banning criminals from criminal behavior is a joke.

turn-n-burn

Everyone is on the same playing field... period

cvofveu

In developed countries with shrinking populations, these systems can make armies that nobody wants to serve in, ensuring defence and creating industries and jobs.
They can make a dependable and expendable force to avoid conflict and respond to threats.
They end our (developed nations) dependence on mass immigration and on mass society to keep militaries staffed.
Like the nuclear bomb, these could be the next threat of total annihilation that deters conflict (nuclear bombs have done this for 70 years now... without them, despite their being totally immoral, we would certainly have had a third and maybe even a fourth World War).
The path to hell is paved with good intentions.

lorenzogiorgioni

How do you target people by ideology? How does a ban on "authorized development" combat those building such a weapon for an intentionally illegal (by any current standard) purpose? If the issue is that current tech can't make such decisions, how safe is it to use it on the road today in our cars? Current AI isn't even AI, it's ML, and that has no clearer path to "greater intelligence or understanding" than it did in the 80s.

Besides, freedom of speech is already under attack with the "dumb systems" we have today. Where is the call to ban automated flagging of social content? It's fine to say we shouldn't deploy such systems because they lack the awareness and context needed to make life-or-death decisions, but that is different from an outright ban on research, which is needed to make things like driverless cars safer.

What about defending against such autonomous systems? China has zero ethical issues with the kinds of research it is doing. Some feel-good Western platitudes have no impact on it (perhaps a tad ironic if sanctions and a blockade of China, should it be in violation of a ban, actually precipitate military action). As for the public-stigma part... videos like this create the stigma (especially the slaughterbots video). There are a lot of communities that would likely welcome a robocop (one that isn't afraid of being shot at) to replace the trigger-happy humans who respond to 911 calls.

sdmarlow

We live in a world full of private dictatorships called Corporations. Does anyone really think a tiny pocket of legislation in some Nation-State is going to prevent or halt development? AI is just a continuation of the same arms race we've been in since the dawn of perpetual warfare, a race constantly driven by scarcity itself. The real eternal struggle has always been between the concepts of zero-sum and win-win. AI could be the most powerful weapon our species has ever developed for this "first principles" conflict, one that might actually turn the tide by providing a process, like science itself, that everyone can constantly point to and say "don't worry, we'll have enough"... so we can stop the violent cycle of desperation.

mistercohaagen

0:22 That is a SUPPLY CARRIER. It CARRIES SUPPLIES. It is NOT A WEAPON.

whenyougodown

In the age of drones and robo dogs, every military has the same thing; we're all on prison earth

cvofveu

You think you can "legislate" this out of the way?

MaxPlayne

Can't stop progress, you can only fall behind

TheBossMan

Are humans and machines really different? And now AI has renewable energy while humans do not

ericpham

I think that yes, we must remain cautious about the immense progress in artificial intelligence.
Yes, it could, in the worst-case scenario, destroy our humanity.
Yes, only humans can feel emotions, make decisions, and think correctly.
If one day this intelligence dominated us, think how often it has been wrong.
We are the only ones building a future world.
However, unlike the video, I think that artificial intelligence will allow great advances in the future, while we obviously remain cautious.

FarnazCreations

This video has only been watched 50.000 times? What the f? That is ridiculous.

Doeff

The killer drone can be hacked easily, can't it?

viralvideos

THE MORAL DILEMMA:
A 100% autonomous vehicle is racing down a road. Something happens that causes an unpredictable accident and a dangerous scenario. The car has lost control and is now speeding toward two different sets of human beings. It does not have time to avoid both; it might be able to avoid one, and the only other option is to crash into a wall at potentially lethal speed for the human inside. The "robot" must now decide between smashing straight into five young school kids or five older men. The school kids are more difficult to avoid, so any logic engine would calculate that the children are to be hit. What do you think about that? And more importantly, you know damn well you would have smashed into that wall, potentially taking yourself out but sparing EVERYONE in YOUR CAR'S PATH. But your "autonomous car" won't see life that way. It's going to use its complete lack of emotional understanding as its excuse to choose which humans get to live in that scenario.

Now, I did a very poor job of describing a "moral dilemma" with autonomous AI. Others have provided much more realistic and frankly gut-wrenching scenarios; just Google them and you'll see what I was trying to convey here. Human emotion is essentially vital in every aspect of our daily lives, in how we interact, communicate, and live safely among each other. Logic alone is nowhere near enough to create a safe, functioning, effective AI, and it baffles me why these "great minds" don't seem to really care about this fundamental problem.

Anyway, whatevs, maybe you see what I'm trying to say here.... :))

Stay human ;))

Simple_Jackass

The arguments here mostly rest on the fact that these systems aren't good enough *yet*

Daniel-ihzh