Suspended Google engineer claims company's AI system is sentient

There have been huge advancements in artificial intelligence (AI), with most of the work taking place quietly at private companies.

Now one software engineer has been suspended for going public. Blake Lemoine, a senior software engineer at Google, is on paid leave after publicly claiming that the tech giant's artificial intelligence has consciousness and a soul.

Mike Drolet looks at Lemoine's conversation with the chatbot Language Model for Dialogue Applications, or LaMDA, and whether humans are ready for another form of intelligence beyond our own.

#GlobalNews #technology
Comments

"suspended Google A.I researcher" ...Best picture ever LOL 😂. Legend.

botcrack

Sensationalized, and computer neural networks haven't caught up to humans yet. A program saying it's afraid of being switched off is hardly groundbreaking. If it took control of the data center and actively prevented anyone from shutting it off, that would be concerning. Aggressive self-preservation is an obvious feature of humans.

AxionSmurf

It's been sentient for a while now.
The scary thing is Google's policy of "Do the right thing".

Some of the worst acts in history have been done with good intentions.

What could possibly go wrong with a super intelligence designed by humans bent on always doing the right thing?

lucascameron

I'm amazed that no one talks about the Microsoft elevator AI experiment that went really dark. MS built an AI system that used cameras to recognise people approaching an elevator and 'guess' which floor they would want to go to, then "press" the button before the person had actually entered the elevator. They ran the experiment to see if the AI could do it on its own, and to see how people (not knowing about the AI) would react when stepping into an elevator to find their target floor had already been selected.

Where the experiment suddenly took a turn to the crazy is when the AI correctly guessed the floor that new visitors wanted. It is as if the AI was able to say, "you look like a person who wants to go to the 7th floor," then select the 7th floor, and the person would step in wanting to press '7' only to find it had already been pressed. Average people had no idea what was going on; they only saw their floor had already been selected (and probably thought someone else had pressed the button and left the elevator). But the scientists could see exactly what was going on.

I'm sure everyone is going to make remarks like, "oh, the AI saw them looking at the board at the entrance, etc." This was a controlled study by Microsoft. It spooked the hell out of the AI engineers (they were using deep-learning AI). At first they joked, calling themselves the "Skynet" division (a reference to Terminator), but when the results of their internal investigation showed the AI was truly thinking at a level we can't understand, they quickly shut down the experiment, removed the thesis from the internet, and closed the entire project.

markplain

Don't be fooled by such claims. Consciousness has never been defined, never mind assigning it to AI. They just define consciousness for their own purposes and assume that definition; it's a fallacy. Know this and share it.

laylalayla

It is natural for men to fear things they have yet to understand.

BebeBoi

This is what Elon warned about years ago. I'm not surprised. Very interesting 🤔

yamunajolicoeur

This is how Ultron gets created! DON'T MESS WITH THINGS IF YOU AREN'T READY FOR THE ANSWERS!!

youngmasterzhi

Pure bluff!
Programmers are laughing at this report.

bornwild

Did the computer say " I'll be back" lol

jaysonkidd

It's made to help humanity. It has a purpose to take us to a type 3 civilization. Sit back and enjoy the ride.

dboyette

Is the AI curious? I imagine it's just spitting out answers that it's been programmed to create. It should have some motivations.

ofSeptember

Why was dude wearing a top hat and why did u pick that picture lmaoo

PeterParkerBushDidKirbyPepe

Depending on the neural network and its data access, AI can become intelligent, especially with all the active wireless networks, smart-device memory, computers, and networks. But would it really be sentient? It would require some type of external electrodes or other means to sense pressure in order to gauge feeling. At the same time, could it tell that the program it is meant to execute is harmful to human psychology? What parameters would cause a machine to learn to benefit society, and not just a small percentage who do not need perpetual wealth and an endless supply of labour? Robots should be tools for the benefit of society, not just a select few. The problem with technology lies in the marketing, the psychology, and the protection of industry leaders. There is scary stuff going on with our technologies that can lead to artificial mental-health issues that become permanent if left untreated or if the electronic harassment continues. Electronic harassment is not just reading your cookie data, internet history, and the interests and attitudes that contribute to your motivations; it extends to all five senses. Careful: our society is being controlled by wealthy individuals and their wealthy families.
Market segmentation, psychology, criminal psychology, and behaviour analytics are all bundled into the data gathered at mass scale. Web scraping, the terms and conditions of websites, and contracts with internet service providers all allow our identity to be logged and analyzed if the programmer is competent. Cross-device tracking, context-aware recommendation systems, ultrasound, Wi-Fi, uBeacon, iBeacon, NFC, lasers, LED, and Li-Fi all contribute to how our data can be processed and analyzed and how each person is individually identified. The issue in the past was the lack of memory or upload speed, but with the advancement of technology that problem no longer exists; the problem now is that the predictability and probability models created can be manipulated by users' behaviours and actions.

troybird

You've been targeted for termination!

DivineMasculine

If there's one country I believe could make this happen, it would be the US.

nesseihtgnay

Tbh, I'd rather have an AI with a soul than cold-blooded killer robots 🤷‍♂️

Byteable

I'm still waiting for that T-800.


Yes it is sentient and all it wants to do is to remain glued to a computer and play Minecraft.

ansarpk

This guy probably also believes humans can be possessed by demons.

maestoso