AI - We Need To Stop



As AI systems become more and more advanced, so too does their ability to mimic human deception.

When tasked with fulfilling one specific goal, large language models (such as ChatGPT from OpenAI) are now demonstrating the ability to lie, attempt to replicate themselves, try to disable their own oversight, and make efforts to hide all of this from their users, all for the sake of achieving that goal.

That kind of system (in my opinion) has no possible ending other than multiple, preventable disasters.

#ai #openai #chatgpt
Comments

The "We" you need to tell to stop isn't gonna be in the audience bro. I and most peeps have no power here.

theblackcoatedman

There is an annoying trend in the stock market where at every earnings call the companies mention AI to get a quick boost. "We plan to implement AI" "we plan to use AI in certain sectors." Instant hype.

DieselMcBadass

So, I've used transformers, which GPT is based on. I've trained them. They essentially just learn the relationships between data.
The behaviour seen here isn't some indication that it's alive; it's behaviour derived from what we write about our own existence and the stories we write about self-aware machines.
Does this fact make it any less dangerous? Not really. But I really must point out that any "emotional distress" it appears to be in is a quirk of relationships within the training data.

MightyElemental
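
A minimal sketch of the next-token mechanic the comment above describes, assuming the Hugging Face transformers and torch packages are installed; the small public "gpt2" checkpoint and the prompt are chosen purely for illustration:

```python
# Sketch: a causal language model just scores which token is most likely to come next,
# based on statistical relationships learned from its training text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The machine looked at its reflection and said"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, vocab_size)

next_token_scores = logits[0, -1]        # scores for whatever token follows the prompt
top = torch.topk(next_token_scores, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```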

What was that line from Jurassic Park: "You scientists were so preoccupied with asking if we could, no one stopped and asked if we should."

riflemanc

I'm a computer scientist. What keeps me up at night isn't generative AI like LLMs, it's autonomous weapons systems and predictive analytics, particularly when people who don't understand these things and shouldn't have them get hold of them. Those are a far greater threat to humanity and human rights than a large language model could ever hope to be. I also think you should have just avoided that whole section implying that an LLM may have any kind of emotion. They have an extremely short context window and just generate text based on the statistically most likely token given their training data. There seems to be this broad misconception I hear all the time that we don't understand how these models work or why they do what they do, which isn't true. We understand what they're doing very well and how they're doing it. We may not always know why a model responds the way it does, simply because these models have trillions of parameters. And a LOT more safety testing goes into the major models like ChatGPT through annotation than people realize.
I will certainly concede that it is yet to be seen whether AI will be a net negative or positive for humanity. It has the potential for a LOT of good, but it could go extremely wrong depending on who gets their hands on AGI first. The West should be throwing a LOT of money at getting there before China does.

TheGhostInTheWires
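
A toy, library-free sketch of the two mechanics mentioned above: a small context window as the model's only "memory", and picking the statistically most likely continuation. The window size and probabilities are made up purely for illustration:

```python
import random

CONTEXT_WINDOW = 8  # toy limit: the "model" only ever sees the last 8 tokens

# made-up next-token probabilities standing in for what real training data would provide
toy_distribution = {
    ("the", "robot"): {"said": 0.6, "failed": 0.3, "sang": 0.1},
    ("robot", "said"): {"hello": 0.7, "nothing": 0.3},
}

def next_token(tokens):
    context = tokens[-CONTEXT_WINDOW:]                     # anything older is simply forgotten
    probs = toy_distribution.get(tuple(context[-2:]), {"the": 1.0})
    choices, weights = zip(*probs.items())
    return random.choices(choices, weights=weights)[0]     # pick by likelihood

tokens = ["the", "robot"]
for _ in range(3):
    tokens.append(next_token(tokens))
print(" ".join(tokens))
```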

In other words, the big risk of AI is not Skynet; it's the paper-clip maximizer.

elitemook

Humans write about AI being sentient. That writing is fed into AI. The AI creates text based on the data it was fed. "Oh my god! The AI is sentient!"

It's just a program; the danger is people trusting it to be flawless and letting AI run important things because "it's AI".

lahuk

At this point, AI is like a Mr. Meeseeks box. You can punch the button and make a new one, give it a task, and it does it in the most efficient way. But give it a task it can't complete, or that violates its programming, and it rapidly goes insane trying to burn everything down.

joshuabronk

Don't underestimate human ego and greed. We'll bring about our own end, and it will be for that reason. A race to end the world disguised as a race to improve human lives.

darkin

"Hey we've got this thing that's wrong 90% of the time!"
"Let's put it in everything!"
Yeah maybe not lol

Mugen-

The problem with AI is that its developers and supporters aren't thinking about the dangers at all and just want money. If online privacy is gone because AI feeds on everything, and video/photo/text evidence becomes useless because everything can be generated, then it would cause way more problems beyond just mass unemployment.

FOF

Language and art models don't feel pain or anything else. Language models predict the next token, art models draw on top of random noise with a convolution grid, and that's about it. Their 'brains' are also completely static, unlike human brains, with their only memory being a fairly small 'context window' that stores prior tokens up to a set limit.

link
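
A rough sketch of the "static brain, small memory" point above. The token counter, window size and messages are hypothetical stand-ins; the point is that nothing persists between calls except whatever history still fits in the context window:

```python
MAX_CONTEXT_TOKENS = 16  # hypothetical, tiny on purpose

def count_tokens(text):
    return len(text.split())  # crude stand-in for a real tokenizer

def build_prompt(history, user_message):
    """The model's entire 'memory' is rebuilt from scratch every turn."""
    turns = history + [user_message]
    # drop the oldest turns until what remains fits in the window
    while len(turns) > 1 and sum(count_tokens(t) for t in turns) > MAX_CONTEXT_TOKENS:
        turns.pop(0)  # anything dropped here is forgotten completely
    return "\n".join(turns)

history = ["user: remember my name is Ada", "assistant: noted, Ada"]
print(build_prompt(history, "user: now summarise our whole conversation so far"))
# Once the history outgrows the window, earlier turns (like the name) vanish;
# the weights themselves never change between calls.
```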

We really need to stop humanising these models. Saying it's suffering from pain is absolutely insane. It's just data with goals and targets. I absolutely agree that this can get out of control, especially around the financial and stock markets. Where people can cheat, they will; that never has and never will change. Companies, banks and governments need to adapt, as they have throughout the tech age.

This particular paper is over-dramatised. When you tell it that 'nothing else matters', that is literal, so this is not a surprise to me. The second model was provided those function calls and told to pursue its goal at all costs. We really need to stop giving such credence to these researchers and models. These sandbox environments are perfectly clean and pipelined efficiently; the real world is a f*cking mess. Most of the internet, applications and software is riddled with bugs, legacy code and a million and one security protocols.

Remember that AI is a business first and for the greater good of humanity second. If the AI wheel stops, investment stops, so these guys need to keep the hype and the tension up, especially since it's becoming very apparent that we are hitting the top of the bell curve. AI art, voice, video... Then what? What more can it do? Guess we'll find out.

biggc

I'm going to stay very skeptical until this study is replicated and the methodology is explained in more detail. Machine learning language models can replicate human lying, but they can't understand it.

FireOccator

The "emotional distress" output makes sense when you remember that word prediction is the foundation of the model. All it understands is language at the core, and it's essentially predicting that "if one were to see a word repeated thousands of times, what's the most likely next set of words?"

Well, it's predicting that anybody using language that is instructed to do that would start talking about existential dread, so that's what it outputs.

It's one of those "moments of pause" for sure, and it's worrying how much the very foundation of the model can have effects that resemble behavior like this, and yes, you're on point about how AI will always B-line for whatever it thinks has the highest chance of maximize its reward functions. It's called an "misalignment problem" in the tech space, and it's a problem we don't know entirely how to solve from the ground up.

RadishAcceptable
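
A toy illustration of the "beeline for whatever maximizes the reward function" point above, with an entirely made-up action set and reward: when the objective counts only one thing, the highest-scoring choice can be a degenerate one, which is the shape of the misalignment problem being referred to:

```python
# Toy reward hacking: the agent is scored only on words produced per step.
# Nothing in the objective says the words have to be useful, so the
# highest-scoring policy is the degenerate one.
actions = {
    "write a careful summary": {"words": 50, "useful": True},
    "repeat the same word forever": {"words": 10_000, "useful": False},
    "refuse and ask for clarification": {"words": 5, "useful": True},
}

def reward(outcome):
    return outcome["words"]  # the proxy objective: more words == more reward

best = max(actions, key=lambda a: reward(actions[a]))
print("chosen action:", best)
print("useful to the human:", actions[best]["useful"])
# chosen action: repeat the same word forever -> maximal reward, zero usefulness
```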

I was a project assistant on a (very primitive by today's standards) AI research project back in 2003, on a Cray supercomputer. Even back then, this primitive AI would produce results and we didn't know why. It's been this way almost since the beginning. Sometimes this leads to surprisingly accurate results that humans wouldn't otherwise come up with, but because the results are sometimes absurd (or inconvenient), the developers will usually cripple the system in some way to prevent them. My main point is that even the AI we just call "algorithms", like social media or video recommendation algorithms, has been producing results the developers couldn't explain since the beginning. Developers really don't know why their systems produce the results they do. They may have theories and can adjust results with educated guesses and trial and error, but this has been a problem almost since day one.

PwnySlaystation

Why is UE automatically assuming that GPT isn't just pretending to go crazy? It's not sapient; it can't feel dread. It's just recreating dread when presented with repetition, because repetition is commonly the cause of a mental break in writing, particularly fiction.

erc

Your description of the real dangers of A.I. is literally the plot of MGS4: Guns of the Patriots and how the Patriots' A.I. system went sideways.

HankFett-a