The Impending AI Model Collapse Problem

Recorded live on Twitch, GET IN

### Article

### My Stream

### Best Way To Support Me
Become a backend engineer. It's my favorite site.

This is also the best way to support me: support yourself by becoming a better backend engineer.

MY MAIN YT CHANNEL: Has well-edited engineering videos

Discord

### Comments

I did not have "AI incest" on this year's bingo card.

NostraDavid

I was told in 2002 that automated code generation would make me obsolete by 2004. I am 22 years into automation replacing me in two years.

JustBCWi

"society might need to find incentives for human creators to keep producing content" is one of the most dystopian things I've ever read.

NstalgicLeaf

Imagine creating statistical models and then expecting them to not follow the laws of statistics

Superabound

From an artist's perspective, this always seemed obvious... Even as a human, you don't really want to just practice by copying other people's work. Yes, you can do that to some extent to learn their style or better understand their thought process, but mainly you want to focus on studying things from real life. Otherwise you're just learning to copy other people's mistakes.

torbjornkallstrom

Michael Keaton said it best in Multiplicity.
"You know how sometimes when you make a copy of a copy it isn't as sharp as, well the original."

necrux

“Problem”? As an artist I see this as an absolute win!

Satnanat

Rule number 1 of LLM or GAN training datasets is "Garbage in, garbage out."

MilitantHitchhiker

We need a new internet, no AI, no big corp social media or paid anything. Just people bagging each other on poorly hosted forums.

Bodom

The internet is already broken, if we're being honest. The average Joe doesn't even know they're consuming AI garbage.

stanleybacklund

The pushback against this will be an increased value assigned to unaltered LLM training data, like iMessages. This is where quasi-monopolists like Apple and Google, who own our non-public, limitlessly scrapeable data, will leverage it to defeat alternatives like Microsoft/ChatGPT, which are not middlemen in a representative sample of private communication.

tayzonday

I work in a completely different field, doing quality control in pharmaceutical packaging, but I think it offers insight into a general problem: the more efficient and accurate automated systems become, the more “dangerous” they become and the more they need good quality control. That's because there's a tendency to trust automated systems more as they become more accurate and productive, which means they produce much more with much less oversight. That is technically good and is the ultimate goal, but the problem is that when there is a major or critical defect that might harm or kill someone, it is more likely to “slip under the radar” and bypass human quality control.

JohnMiller-mmuldoor

This will be like carbon dating and low-background-radiation steel. "Oh wow! You found man-made code from before the times?! Take my money! I want to feed it to my model!"

FizzlNet

It is like copying a document on a scanner. The first copy looks pretty close to the original, but if you keep scanning each subsequent copy, it soon becomes a mess. All the little imperfections in AI generation do the same thing. Copying the original thing it was trained on isn't that bad, but, like scanning, the imperfections pile up.

Karn
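
Below is a minimal sketch of that "copy of a copy" effect (an illustration only, not anything from the video): fit a toy model to some data, sample synthetic data from the fitted model, refit on the synthetic data, and repeat. With small samples the estimated spread tends to collapse and the tails disappear, which is the statistical version of re-scanning a scan.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Original" human-made data: 20 points from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(101):
    # The "model" is as simple as possible: an estimated mean and spread.
    mu, sigma = data.mean(), data.std()
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next generation trains only on output sampled from the previous model.
    data = rng.normal(loc=mu, scale=sigma, size=20)
```

Each run differs, but the printed std typically collapses toward zero within a hundred generations: the refit is biased toward underestimating the spread, and the sampling error compounds from one generation to the next.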

If you want another good visual example, load up any of the From Software games with character creation, select one of the default faces, and then keep selecting "Similar Face". Over time moderate values become extreme values and you end up with a horrible distortion.

Frousch
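
The "Similar Face" loop above can be sketched the same way (a toy model only; it assumes each click simply re-derives slider values from the current face with a small random tweak): because every step starts from the previous output rather than the original preset, small deviations accumulate and moderate values wander toward the extremes.

```python
import numpy as np

rng = np.random.default_rng(7)

original = np.full(16, 0.5)   # 16 sliders, all at a moderate preset value
face = original.copy()

for click in range(1, 501):
    # Each "Similar Face" click: copy the current face with a small random tweak,
    # clamped to the sliders' valid [0, 1] range.
    face = np.clip(face + rng.normal(scale=0.05, size=face.shape), 0.0, 1.0)
    if click % 100 == 0:
        drift = np.abs(face - original).mean()
        extreme = np.mean((face < 0.1) | (face > 0.9))
        print(f"click {click:3d}: mean drift from original = {drift:.2f}, "
              f"sliders at an extreme = {extreme:.0%}")
```

Anchoring each tweak to the original face instead of the previous output keeps the drift bounded, which is the same point the artist comments make about studying from real life rather than from copies.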

Google results for some types of searches have already been flooded with bad AI articles.
If those are used as training data the results are going to be disastrous.

EdmondDantèsDE

My hot take is that even if LLMs were perfect, it would still be terrible. Imagine a world where there is this one person (human) who is super cultured and knowledgeable and very talented at writing, and somehow (by magic time-warping) that person is the sole author of all content that comes out. To me, this is completely dystopian, regardless of how "intelligent" this person is.

mike

By the way, it is a perfect reflection of a real person going insane if stuck in their own mind.
Always interact with the present reality and not the one in your own thoughts.
Always.

TheLiverX

Just like copying a thousand lines makes you not a programmer, copying a thousand images doesn't make you an artist. Shocker!

ilovejesusandilovegod

0:11 Saying 'Artificial AI' is like saying 'chai tea' or 'the CIA Agency'.

jannikheidemann