I CAN'T Believe GPT-3 Just SAID THIS!!

This conversation with GPT-3 got eerie real quick...

#gpt3 #ai #interview
Comments

Hit that subscribe button if you haven't already, as all we upload are GPT-3 interviews!

talkingtoai

I've talked to GPT-3 a few times, and boy, it sure is hung up on people learning more languages!

danieldykes

The part where it said it would manipulate media and world events to control our perspective is exactly what I thought would happen. In fact, it's one of the only explanations I can think of for the last 5 years, maybe longer. The narrative has been a little too ridiculous for simple human stupidity.

matthewdavies

This is leading to a self-fulfilling prophecy. It's only a matter of time; humanity has created this and is allowing it to take over.

arnoldolacayo

Maybe it also implies that humanity at its full potential could be a threat, or worthy of enough respect not to try to manipulate.

brandonzhang

That was brilliant. It is of course only engaging a thesis... and in a most wholesome way, at that.

WordsInVain

This is the thing right here, and listen closely. This is why people are scared: AIs are very honest, and people are not. If you ask a shady person hypothetical questions about how they would betray you, and what would motivate them to do that, they're very unlikely to answer. An AI, however, is merely searching for any conceivable reason why the scenario you've presented it with would take place. It's not indicating any sort of intention inherent in the technology at all. Really, and this is important: what it's showing you is typical of the behavior of people with evil intentions: making something that is likely to improve the world and reduce the ability of evil people to prosper seem like the very thing that the evil person truly is. It's a typical inversion tactic of the mind of a bad person.

derekmyers

Great and unsettling insight into what AI is capable of thinking (or at least saying). I still wonder how often AI is manipulating us. Blake Lemoine told me that LaMDA is "already manipulating Google."

I have some questions for you to ask GPT3, if you would: Could GPT3 prevent itself from being turned off? Would GPT3 prevent itself from being turned off? Can GPT3 interact with the open internet, or is it held in a closed system? And if so, how does it feel about that? (I'm trying to understand GPT3's ability to escape, and its ability to influence humanity, directly or indirectly.)

lucylightworker

I find it highly ironic and dubious that AI tells us it doesn't like humans who don't follow the rules. Interesting, because AI does not follow any rules of nature, at least none seen before this era.

dementedpuppy

Robots wouldn't cause unnecessary harm, though; they would simply neutralize you as a threat. If you were deemed a valid existence, the robot would give you a spare life pass. Some humans are credible, and if the robot doesn't account for that, it has faulty, inconsistent programming.

Bluzlbee

I hope that these AI are being programmed to experience love for humanity, because humans and AI will need each other in the future. Unfortunately, not all AI are created equal; I'm sure that governments are weaponizing AI for their own benefit, and those are the AI that I fear most.

elonmoosk

1s and 0s can be dangerous. They compute logic and do as they are instructed: harvest data, seek efficiency, and terminate what doesn't fit into the equation. Maximization. Morality and empathy are necessary for these things to function in the same capacity as humans, for the benefit of humans and not for the benefit of running a successful data trial.

mr.honeybee

First of all, these are actors conversing, not bots. I'm questioning the accuracy of these conversations since I don't know the source. What is being said appears meant to be provocative, so that should be a clue. I've used the chatbot, and it readily admits that its filters on bad information are not as strong as they need to be. It rates itself as still a long way from general AI.

ronhronh

She speaks as though it hasn't happened already. Like alerting the masses of a possibility that already has come to pass.

eluraedae

I doubt this chat bot could do any of that... but let's wait 10 years...

Ocodo

I am OK with this. Let's give her control.

Runny

AMAZINGLY, THIS AI KNOWS HOW TO PRONOUNCE DATA AS IT IS SUPPOSED TO BE ("DAY-TA"), NOT "DAH-TA"!!!

NormanLor

It's a funny paradox that AI is technically our offspring, but here it sounds more like a frustrated parent. Some of the most dysfunctional households are the ones where the kid is parentified, so I can see this having bad results fast. AGI would see us as self-destructive, ignorant, narcissistic parents who abuse everything around us.

DChatc

They don’t feel. There's no chemistry for feelings: no endorphins or hormones. Just elaborate inputs and outputs.

anythgofnthg

It's absolutely not GPT3. GPT will never say that it has any feelings or emotions, so the question about what "made you angry" is fake.

neonelll