AI pioneer Geoffrey Hinton leaves Google, citing 'profound risks to humanity' • FRANCE 24 English

Chris O’Brien, Editor of the French Tech Journal, spoke to FRANCE 24’s François Picard following the resignation from Google of Geoffrey Hinton, the ‘godfather’ of artificial intelligence, who warned that the field poses “profound risks to society and humanity”.
#AI #Tech #Google

Comments

Knowledge without wisdom is a dangerous endeavor.

NDY

Though he’s 75 and was supposed to retire 10 years ago, this is the scariest news yet… it says a lot when someone jumps overboard from the vessel they built. It means it’s already too late.

DrLauraRPalmer

“Scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” -Jeff Goldblum’s character in Jurassic Park

Lowclef

Anyone who has been listening and observing knows full well that this is where we would end up.

alexjordon

Sad that this is being played so low-key. This is a major issue, especially as there are no laws in place anywhere in the world to establish accountability for both the good and the bad of this technology. Perhaps an interview with Tristan from the Center for Humane Technology would help make this danger clear.

marktahu

Experts in the field have been warning about this from the start, including Alan Turing, who in 1951 warned of the loss of control of AI once it reached a certain level of intelligence. In more recent years, experts like Stuart Russell have been warning of the threat posed by deep learning and the AI it produces.

An AGI agent doesn’t even need to have hostile intentions towards people to be an existential threat; it just needs to have objectives that are at odds with human interests. And because AI produced through deep-learning algorithms is a black box, we have no way to determine what an AGI agent’s objectives even are.

Instrumentally convergent objectives, things like self-optimization, self-preservation and resource collection, make it almost inevitable that AGI will come into conflict with human objectives.

Self-optimization means that by adding hardware and through recursive learning, an AGI agent that was on par with or slightly more intelligent than a human could rapidly become thousands or even millions of times more intelligent than us.

It would be able to predict anything we might attempt in order to counter its actions and formulate "solutions" to us that we can’t even imagine.

This won’t be like The Terminator or The Matrix; it will be more like Independence Day, with an alien intelligence we will never out-think and that would have no problem wiping us out, like a human wiping out an ant hill.

dougcoombes

How can you regulate something you don't understand that keeps developing and changing?

dembydish

What we do need is acceleration in medical technology, especially in the treatment of deadly and debilitating diseases. I sure hope AI is put to good use there.

Thedeepseanomad

The 21st century needs to review all the regulations for the new millennium.

aimirror

The 2023 article "My Dinner with Sydney..." includes the following quotes:
– Progress is based on perfect technology. (Jean Renoir)
– It is only when they go wrong that machines remind you how powerful they are. (Clive James)
– I’m sorry, Dave. I’m afraid I can’t do that. (“2001: A Space Odyssey”)

youtuber

What even many of the people working with these tools seem to ignore is that many of these tools can already progress by using themselves or other tools, so progress no longer depends ONLY on humans. There could be jumps of 30 years, because we don’t know how fast machines can learn or evolve if these machines are created by other machines we didn’t conceive of ourselves. There are already a million tools created by very small users (not huge corporations) that were impossible 6 months ago and that no one thought of.

PhilipRikoZen

Where does it get the content, and is there a possibility of stealing others’ content (plagiarism)?

rhondamathis

It is a little too late for him to be warning us now! Where was his foresight? Ugh

faraboverubieskerry

Agree. The EU is more proactive and aggressive with regulations. Case in point: MS and Activision.

czarcoma

Google is a major defense contractor, and Hinton undoubtedly has access to the government’s classified uses of AI. If you think that open-source uses of AI are “concerning”, just watch what happens as it is used in full-spectrum warfare against the general public.

chriswaters

Just finished Person of Interest lol here we go

joshuamccal

Fellows, you were being fed 'Why you are... ' in mega doses daily.

cdes

Any chance it can replace our warmongering, self-serving politicians?

earthman

I'mma go live in the woods or something.

newedgegt

Don't think China gives two hoots about Western regulations.

sionprawn