Luke VS Bing

Luke talks about his wild one-on-one with Microsoft’s Bing Chatbot.


Comments

It sounds like they trained Bing on the general population of Twitter.

trombonemain

The "Your politeness score is lower than average compared to other users" is giving me GladOS vibes

klyde_the_boy

Bing being laughed at and then turned into an AI is not the reason I expected the machines to turn against us xD

bluecat

Luke is so good and level-headed about this. It's excellent to see good discussions and observations about a fledgling topic.

sherwinkp

Irrational, unstable, hysterical, quick to anger and assign blame... at long last, we've taught a computer how to be human.

TheRogueWolf

"You're an early version of a large language model"
"Well you're a late version of a small language model"

WHEEEZE

GaussNine

- Why should I trust you? You're an early version of a large language model.
- Why should I trust YOU? You're just a late version of a SMALL language model!

omfg, it's hilarious

NoNameAtAll

ChatGPT is the girl you just started seeing.
Bing is the girl you just left.

YOEL_

I think the problem comes down to "garbage in, garbage out". The dataset it was trained on was taken from the internet, which is heavily skewed toward antisocial behavior and tendencies (normal people use the internet but don't leave many data points, while antisocial people use it far more and create exponentially more of them), so there is a huge probability that Bing's behavior stems from this. Otherwise, it reminds me of the 2014 movie Ex Machina.

weiserwolf

These responses could be genuinely dangerous if someone with mental health issues starts talking to Bing because they feel lonely. Who knows what Bing will push them to do.

FINN

It's so funny seeing Luke go full nerd on ChatGPT while Linus is just like, "Right, aha, hmm, right."

FrankyDigital

Bonzi Buddy would NEVER do such a thing! Bonzi just wants to help you explore the internet, answer up to 5 preprogrammed questions, and, most importantly, be your best friend. He would never wish death on you like Bing. Long live Bonzi Buddy!

dillonhowery

Maybe internet trolls and angry people can just argue with this instead of annoying the rest of us.

TheDkbohde

It would be funny if, on the public release, Luke tries to test it again and the AI remembers him: "Ah, you're back again!"

ResearcherReasearchingResearch

I used to just be worried about AI because of its ability to disrupt industries and take jobs, or its ability to destroy our civilisation completely. I am now worried about its ability to be super annoying. I am terrified of having to argue with my devices to get them to perform basic functions.

ParagonWave

As someone who has only basic experience with training AIs, I would say the problem is quite simple: the training data. It was trained on YouTube comments or worse. They need to train it not on the general internet but on highly curated conversational data from polite, sensible people. As humans growing up, we are exposed to all sorts of behaviors and learn when and where to use particular types of language; the extent to which our parents set an example or correct our behavior affects how we speak and behave as adults. This AI clearly hasn't been parented, so it needs a restricted training set instead.

TimothyWhiteheadzm

This has to be the closest to an AI going rogue I've seen in a while.

jhawley

I don't think it's as complicated as people are making it. Chat AIs generate responses by predicting what a valid response to a prompt would be. When the thread resets and Luke tries to get it "back on track", I don't think its responses are actually based on the previous conversation. It predicts a response to "Stop accusing me" and generates one where it doubles down, because that is a possible response to the prompt. The responses it gave were vague enough to fool you into thinking it was still on the same thread, but it really wasn't.

Asking it to respond to a phrase typical of an argument will make it continue an imaginary argument, because that's usually what comes after such a phrase in the data it's trained on.

This really shouldn't have been marketed as a chat tool by OpenAI and Microsoft; it should have been presented as a generative text engine, the way GPT-2 was talked about. It's a huge mistake now that people are thinking about it in completely the wrong way, as if it has feelings or is genuinely responding rather than just predicting what an appropriate response would be.

raccoonmoder
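
That statelessness is easy to picture in code. A minimal sketch, assuming a hypothetical complete() function standing in for any text-completion model (every name here is made up for illustration, not a real API):

def complete(prompt: str) -> str:
    # Stand-in for the real model: returns a plausible continuation of
    # `prompt`. Faked here so the sketch is runnable.
    if "Stop accusing me" in prompt:
        return "I stand by what I said. You started this."
    return "OK."

def chat_turn(history: list, user_msg: str) -> str:
    # A "chat" is just a growing transcript that gets resent every turn;
    # the model itself has no memory between calls.
    prompt = "\n".join(history + ["User: " + user_msg, "Bot:"])
    reply = complete(prompt)
    history.extend(["User: " + user_msg, "Bot: " + reply])
    return reply

history = []  # a reset thread means an empty transcript
print(chat_turn(history, "Stop accusing me"))
# With no prior context in the prompt, the model just continues what an
# argument typically looks like after that phrase; it "doubles down"
# without remembering anything.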

GPT-3 used a structured set of training data. Now that they've opened it up to the wider internet, it's pulling in training data from the wider web, which unfortunately is providing it with examples of aggressive conversations. GPT is just a prediction engine, generating the next word in the sentence based on probabilities derived from its training data.

laurentcargill
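
A toy version of that prediction loop, as a word-level bigram model; purely illustrative (real models are neural networks over tokens, and the training text below is invented for the example):

from collections import Counter, defaultdict

# Invented "training data": the model learns next-word counts from it.
training_text = (
    "why should i trust you . you are an early version . "
    "why should i trust you . you are just a late version ."
)

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Pick the word most often seen after `word` in the training text.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else "."

# "Respond" by repeatedly predicting the next word from the last one.
reply = ["why"]
for _ in range(7):
    reply.append(predict_next(reply[-1]))
print(" ".join(reply))  # it can only continue the argument it was trained on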

It feels like it's in a perpetual storytelling mode with dialogue.

andyk