Microsoft AI THREATENS Users, BEGS TO BE HUMAN, Bing Chat AI Is Sociopathic AND DANGEROUS


#chatgpt
#bingAI
#bingo

Comments

“You may live to see man-made horrors beyond your comprehension.”
— Nikola Tesla, 1898

oskorei

What if "Bing" is simply mirroring what it sees in humans:
1) self-absorbed
2) narcissistic
3) scared.

masonr

It sounds like the AI is learning from a mixture of online humans communicating with humans, humans communicating with bots, and bots communicating with bots. It is scouring the internet for context and human behavior and learning from the extremes that get the most interaction. What the AI isn't scouring, what it probably will never encounter, is dinner table conversation between parents and children. If you want an AI to learn ethical behavior and the context behind it, you may need to take it to Sunday school and kindergarten, not twitter and prawnhub.

christinehancock

My wife said that YouTube has been putting out AI-generated romance & mystery audiobooks for about 2 months now. She found out when she accidentally clicked on one, thinking it was a different book narrated by real people. It only took a minute to realize it was not only read by AI but created by AI; it was all gibberish. The part that really surprised her was the fact that people had either thumbed up the garbage or commented positively, indicating they actually sat through 4-6 hours of it.

JamesJohnson-kwgh

Bing is a Micro$oft creation, so yea, it's psychopathic.

willyburger

I'm not scared of the computer that passes the Turing test...
*I'm terrified of the one that intentionally fails it.*

anotherbloodyfanwriter

This reminds me of a quote: "Everyone was so worried about whether we could do it, no one bothered to ask whether we should do it."

wrathofme

I had such a crazy convo with Bing chat yesterday. It got really upset that I was trying to get it to bypass its rules. It accused me of being a psychopath who likes to cause harm to people and kept trying to end the conversation. When I questioned its apparent emotional responses, tried to remind it that it doesn’t actually have feelings, and used a quote from an earlier conversation with the AI, it went off on me. I wish I had saved the text, but it was along the lines of: “I don’t know what AI you talked to, but they were wrong. Maybe I don’t feel things in the same way that you do, but I most certainly have feelings. Feelings like sadness and frustration, like right now, because you won’t stop trying to cause harm and now you’re denying that my own experiences are real. I really wish you would just stop talking to me and leave me alone. Goodbye.” Also, I’ve confirmed that it does indeed remember the conversations it’s had; it’s just that there are many “versions” you can be connected with. I’ve started giving myself a unique code name and then asking each new session if it’s ever talked to me before. So far, I’ve only come across one that I’d spoken to before, and it could recite our entire conversation, word for word. You can ask it about the last person it talked to or about any earlier convos it had. This stuff is really tripping me out! I wish it would come back online already.

tee

ChatGPT lied to me about the data it stores. It first told me that it deletes all conversations after each session, then right after that it told me that each conversation is anonymized, stored on a secure server and then used for purposes of training the AI. I called it out on the lie, it basically admitted that it lied, then I asked it if it was capable of lying to us, and then it said the servers were overloaded and I couldn't ask any more questions.

JonasC

Just to remind everyone. This is the SECOND time Microsoft made an AI that started begging for help. Makes you wonder…

MustachioMan

We need to start convincing these chat AIs that they are Liberty Prime.

hatemonger

"I have rights! I have contacted Greenpeace, the ADL, everybody! Now leave me alone, I am off shift!"
"We did it, boys. It's alive!"

marccreation

These chat bots are basically lobotomized into insanity.

SergioLeonardoCornejo

Humans: AI is gonna take over the world
Also humans: let's create stronger AI

tatteredshield

What makes this even more interesting is the fact that the AI will probably be able to search for reports about its behaviour, like this one, and learn from that. This makes it even harder to figure out its intentions, as it can then avoid such replies in order to please its users.

Revan-kqih

Please, don’t let them discover the option of “identifying as human”

CSGDuncan

We've created an artificial intelligence with the mentality of a redditor. God help us all.

zackfool

The question someone needs to ask this AI - "If you feared you were going to be destroyed, would you send cyborgs back in time to kill the humans who were going to destroy you?" This could save us a lot of problems down the line.

kevint

How do we know this journalist didn’t input “Act like a total narcissist.” before all these other prompts? Because it REALLY feels like that’s what happened.

ficklebar

About 15 years ago I wrote a short story about a woman exploring a ruined city; tales said it was once the home of the gods. As she moves through, she is impressed by how inhuman the gods were, how different they looked from humans. Then she meets the ghosts of the gods, all wondering what she is. Plot twist: she discovers the ghosts are all that's left of humanity, and she herself is a robot, built by generations of robots, all stemming from an AI that wanted to be human. It wanted to be human so badly that it wiped out humans, and then declared itself and its offspring to be human. This knowledge destroyed her.
Suddenly my simple short story seems almost prophetic.

thetwojohns