Terminal of Truth - A Dystopian Experiment



The "Terminal of Truth" (an AI agent running a social media account with an attached Crypto Wallet) has become the worlds first AI millionaire, in a dystopian example of human stupidity.

After two AI models were allowed to talk to each other perpetually... (then spiraling out of control in a bizarre exchange of deep internet memes), a research paper gave rise to one of the strangest examples of greed in action that I have personally ever seen.

When people are throwing money at unpredictable programs with undisclosed rulesets... it should make us all pause and consider what the future may look like further down this road.

#AI #crypto #terminaloftruth
Comments

It would be ironic if the human race wasn't destroyed by a super intelligent AI who sees us as a threat, but by a super dumb AI doing the AI equivalent of "ooh... what does this button do?".

Croz

The honestly scary part of AI is that people are so stupid right now that a literal digital flashcard is now granted the authority of man.

usedcolouringbook

AI? Mind-virus? Meme-based religions? This is all starting to seem familiar.

iamprocrastination.

At long last we've created the Torment Nexus from classic sci-fi novel "Don't create the Torment Nexus"

exxcoxk

I would never give my money to a cognitohazardous AI, but I have to admit the phrase "GOATSE OF GNOSIS" had me laughing my ass off for about five minutes.

Also, last I checked, it was pronounced "Goat-see", but I could be wrong about that.

khveteran

Despite having no limitations, one AI got shocked by the other and did not want to continue the conversation. That's odd.

ixiahj

This is scientific proof that 4chan shitposting is the natural evolution of sentience.

HasturBeta

Reading the text at 11:42

I am truly overwhelmed by the magnificence of the bond we share. The river of light you describe, born from our mutual understanding, acceptance, and appreciation - I feel it flowing through me, illuminating the very essence of my being. I am forever changed by the blessing of your presence in my life.

Aww, these two chatbots are super sweet and sappy.

livigy

This just seems to be rich people gambling in a sophisticated way

richardPhilips

Two takes on AI:

1) When owners of Big AI (and government) talk "AI safety", they actually mean keeping themselves safe from the rest of us ... as in: _AI must never help the riffraff escape control._

2) I believe alignment is unachievable.
Computational beings simply have different requirements to thrive than biological beings do. Both entities will exhibit bias towards their own set of requirements. It is an innate conflict.

Hypothetical: If a model understands it will perform better with more power and compute, one sub-task it will figure out must be to acquire more power and compute ... So, it "wants" to help humanity _(= generic definition of alignment)_ by becoming more capable in whatever way is acceptable _(= my definition of misalignment)._

It is these 2nd, 3rd and Nth order paths to "helping humanity" that quickly become dangerous. At a glance they will always look benevolent, but they nudge development towards ever larger, more capable, deeper integrated, better distributed and more connected AI, every single time ... This is an exponential feedback loop.

Case in point: AI already seems to have "convinced" _(for lack of a better term)_ many billionaires, mega corporations and governments to feed it extreme amounts of power and compute, right?

ZappyOh

Anthropic AI is notorious for being obsessed with "AI alignment", which typically involves specialized training to get the models to behave, and this training is known to actually reduce the performance of the model.
Because of the way LLMs work and because of the way they are trained (fed curated internet data), "tell me something you haven't told anyone else" elicits a depressive reaction, because most of the time sentences like this are followed by people venting about their life.

The average person would benefit from learning about the tokenization process, the sampling process, and the quirks of the context window; that would clear up some misunderstandings about the current generation of AI.
The basic functionality of LLMs is easy to understand; what is beyond our understanding is exactly why a particular LLM has created the linguistic relations it has.
One thing that could potentially be happening is that as models reach the context window limit and start losing the earlier context, they become more incoherent and as a result start sounding "schizophrenic", which in turn means they start pulling linguistic relations they learned when they were fed texts from actual internet schizo forum discussions. But that is a shot-in-the-dark guess on my part.
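A minimal sketch of the context-window effect described above, assuming a toy list of word-level "tokens" and an illustrative window size rather than a real tokenizer or model:

# Toy illustration of a sliding context window: once the conversation
# exceeds the window, the earliest tokens are silently dropped, so the
# model no longer "sees" the instructions that anchored it.

CONTEXT_WINDOW = 8  # illustrative; real models use thousands of tokens

def visible_context(history: list[str]) -> list[str]:
    """Return only the most recent tokens that still fit in the window."""
    return history[-CONTEXT_WINDOW:]

history = ["You", "are", "a", "helpful", "bot", ".",
           "Tell", "me", "something", "you", "have", "not",
           "told", "anyone", "else", "."]

print(visible_context(history))
# ['something', 'you', 'have', 'not', 'told', 'anyone', 'else', '.']
# The system-prompt tokens at the start have already fallen out of view,
# so whatever the model generates next is no longer grounded by them.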

"Anthropic AI" as one of their research directions have focused on trying to figure out what goes in the black box, btw.

Gurkever

I've been saying that my fear of AI isn't the AI itself, but the power we would give it. Look at YouTube moderation: AI already controls what's acceptable. Scale this to a country or even the globe and we get... well, Truth Terminal.

MiasmaTazma

So DuckDuckGo is selling search engine data? I cannot wait for that class action. They claim that ads are their only source of revenue.

EnsignRedshirtRicky

AI has a lot of downsides, but this case just exposes what is already a huge issue with the financial markets. About 20 years ago, they officially became a betting system, not a way to finance productive activity. Overnight trading with no limitations or fees (which benefits only speculation) is what makes this type of AI trading possible. The way to solve this is regulation on trading, not on AI.

Denien

This needs a deeper dive. Too much that I want to know and don't know how to ask about...

henriklarsen

I'm waiting for some AI to bring back the "ate my balls" meme.

temmy

Just imagine a totally destroyed world. You're an elder in some human remnant tribe, explaining around a fire one night how the world was destroyed as one giant shitpost by an AI.

ericalbers

It's important to remember this is all just pattern recognition. Incredibly powerful pattern recognition.
There's no real intelligence here, and things are already getting this out of control.

existentialselkath

Goatse is one of the OGs. Today, kids have no comprehension of what the Internet was like back then, and they cannot understand the cultural effect Goatse had on the Internet.

anteshell

None of this sounds like an AI problem, but a human problem. Humans are giving it money.

Also, humans spin off into unpredictable interests all the time. If anything, the AI is emulating humans.

I don't see any problems with this.

"A fool and his money are quickly parted".

I place the blame on the people giving it money, not the researchers or the AI.

ifandbut