237. Machine Learning Models & Reification

Machine learning algorithms are in the spotlight right now, leading some to worry about them remaking the world into something alien, but there's another, less popular concern: what if they make it into exactly what we think it is?


Comments

Been binge-watching your videos while working on a uni project all night (it's early morning now in Germany), so this video came at the perfect time! Many thanks!

Tmesis___

Hey man, I haven't been on the channel in a while, but I really miss the big THUNK in the opening. It really gives me that positive reinforcement! Or, gave.

The_SOB_II

Wow. Blown away. There's a lot of complex ideas you've managed to break down here.
Excellent video, explained in an understandable way, with historical context and examples.

You clearly know what you're talking about.
You resist anthropomorphising these systems, you call them Machine Learning models instead of "AI", you correctly identify that these models amplify the current biases and values of the people making them (and the data they were trained on).

For anyone who's interested in diving deeper into the topic, I recommend the book "Resisting AI: An Anti-fascist Approach to Artificial Intelligence", which dives DEEP on this: how ML models are already affecting bureaucratic systems and power structures like healthcare in the UK, how facial recognition is being used to falsely imprison marginalized people, and how there's a better way to introduce ML systems into society that doesn't involve amplifying current inequalities.

stealcase

You can really see the machinery with certain types of questions. GPT-4 doesn't think ahead; it's just trying to predict the next word. That's the reason it struggles with things like jokes: to tell a joke you kinda already need to know the punchline. I've been trying to talk to GPT-4 about philosophy, and it's been a real struggle, since it faces a similar problem. Sure, you can start your joke with words like "poop" to increase your odds of telling something funny, and you can start your philosophical paper with words like "epistemology" to increase your odds of saying something insightful. But ultimately, just starting with a sentence containing poop or epistemology will rarely result in something interesting. Just like with jokes, you need to know how your philosophical paper is going to end if you want to produce quality philosophy.
I'm very grateful that we're building all these models, because it crystallizes how humans *don't* think and reveals all these interesting similarities (like between jokes and philosophy) that I never realized before.

Xob_Driesestig
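
The no-lookahead structure described in the comment above is easy to see in a toy sampler. This is a minimal sketch with an invented bigram table, nothing like GPT-4's actual architecture (real models condition on the whole preceding context through learned representations), but the loop shape is the same: each word is committed before any later word is considered, so nothing can plan toward a punchline.

```python
import random

# Toy next-word model: P(next word | current word). The table is invented
# purely for illustration.
BIGRAMS = {
    "<start>": {"why": 0.6, "a": 0.4},
    "why": {"did": 1.0},
    "did": {"the": 1.0},
    "the": {"chicken": 0.5, "road": 0.5},
    "chicken": {"cross": 1.0},
    "cross": {"the": 1.0},
    "road": {"<end>": 1.0},
    "a": {"chicken": 1.0},
}

def sample_next(word):
    """Pick the next word from the conditional distribution, nothing more."""
    candidates, weights = zip(*BIGRAMS[word].items())
    return random.choices(candidates, weights=weights)[0]

def generate(max_len=10):
    word, out = "<start>", []
    for _ in range(max_len):
        word = sample_next(word)
        if word == "<end>":
            break
        out.append(word)  # committed: no revision once emitted
    return " ".join(out)

print(generate())
```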

Overall this was a good video. It does remind me of a discussion I heard about "social constructs", where most social constructs are based on some objective underlying facts, so you shouldn't dismiss them wholesale. However, you should understand that the categories come with built-in values.

jcorey

if you take the LLM as merely a reification of the internet zeitgeist one step further... you can imagine it as a sort of higher-level cultural entity, similar to 'the hive' of a bee colony. not so much an entity in itself, but an emergence reflected from a thousand communicating entities.

judgeomega

This is a wonderful counterpoint to the people who are claiming that ChatGPT-5 will produce an AGI by the end of this year.

LeeCarlson

You are absolutely right: our use of AI will cause a feedback loop where we start believing its output and feeding it back into the AI.
Positive feedback loops, boosting our own biases.
This of course can be said of every advancement in our communication media, especially when mass communication is democratized, or even slips out of the hands of the elites.

However, these models are not just stochastic parrots. There is an emergent property to LLMs, something about Wittgenstein's language games that is really special here, and you're sweeping it under the rug to appease... them.

shodanxx
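
The feedback loop this comment describes can be sketched in a few lines. A stylized simulation with all numbers invented: the SHARPEN exponent is a stand-in for the tendency of generative models to over-produce their most likely outputs, not a measured property of any real system.

```python
import random
from collections import Counter

# Stylized simulation of the feedback loop described above: a "model" is
# fit to a corpus, its output is added back to the corpus, and the cycle
# repeats. SHARPEN > 1 makes the model mode-seeking, over-representing
# already-common views.
SHARPEN = 2.0

def train_and_generate(corpus, n_outputs=500):
    freq = Counter(corpus)
    views = list(freq)
    weights = [freq[v] ** SHARPEN for v in views]  # mode-seeking "model"
    return random.choices(views, weights=weights, k=n_outputs)

corpus = ["a"] * 550 + ["b"] * 450  # starting data: 55% view "a", 45% view "b"
for generation in range(10):
    corpus += train_and_generate(corpus)  # model output becomes training data
    share_a = corpus.count("a") / len(corpus)
    print(f"gen {generation}: view 'a' = {share_a:.0%} of the corpus")
```

The minority view never vanishes from the data, but each round of retraining on the model's own output crowds it out a little further; no malice required, just a mirror fed back into itself.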

So ChatGPT is not a sophisticated parrot; it is software that can parrot sophistication. I'm old, so I remember the term GIGO: garbage in, garbage out.

patrickkelly

The values of ChatGPT are entirely based on the values of its creators, who belong to a certain "early life background" that has been hilariously consistent for the hundred years or so that such information has been generally available.

fraternitas

maybe reification is what happened to Bielefeld

sebo

Money makes the world go round but shades EVERY single truth into a subtle lie!

bthomson

One of the leading models of consciousness is IIT (Integrated Information Theory). This lends credence to the idea that statistical weights between words could be conscious.

NickGhale

I asked DALL-E to make an image of a tiny Newton, but it was not cute.

anakimluke

Too early to say. It's happening now and we don't know where this rocket will land.

peernorback

I'm still not convinced that they aren't just fancy Markov chains and I've yet to see any really great creation or discovery come from them.

Your comment about standardized tests and college admissions being reification doesn't hold water. Doing well on standardized tests is strongly correlated with outcomes among those who go to college: those with higher scores do better than those with lower, but good enough for admission, scores. People who do well on standardized tests yet don't go to college also do better than those who do poorly on those tests.

Of the common litany of complaints against the SAT, the only valid one is that it is pretty much an intelligence test. The funny thing is that this is because of all the things removed to make it culturally neutral. This also makes it less predictive than it could be, since it doesn't check for any requisite knowledge.

ferulebezel
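
For contrast with the "fancy Markov chain" framing above, here is a literal, non-fancy one: a minimal sketch over an invented corpus. LLMs differ in conditioning on long contexts through learned representations rather than a lookup table, but the word-by-word sampling loop is recognizably similar.

```python
import random
from collections import defaultdict

# A first-order Markov chain text generator: memorize which word follows
# which in a corpus, then sample accordingly. The corpus is invented for
# illustration.
corpus = ("the model predicts the next word and the next word predicts "
          "nothing about the punchline").split()

chain = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    chain[current].append(nxt)  # duplicates preserve observed frequencies

def generate(start="the", length=8):
    words = [start]
    for _ in range(length - 1):
        followers = chain.get(words[-1])
        if not followers:  # dead end: no observed successor
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate())
```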

9:20 So you mean it's accurate, and your discomfort with that is only a reflection of your recent algorithm-induced moral panic in the face of the facts of who was, is, and will be that profession's leaders.

fraternitas

The implications of all this are very large. Implications for what we call fact versus fiction, and our ability to ascertain truth. Implications for our memory, as LLMs are capable of making engaging narratives to fill in the gaps, even towards wrong conclusions.

I think it says something about the originality, or lack thereof, in a lot of human thought. The fact that a mirror of what was can be so substantive an alternative for so many people, in so many ways, raises questions about our originality. If even Picasso called himself a thief, what are the implications of something like this for novel input?

How much of what we already are, and already do, is a reified output of the web of ideologies which impact our thinking? One thing about this mirror is that it's cheap. Sam Altman talked to Lex Fridman about the idea of "radically lowering the price of intelligence". Given this context, and a wider understanding of political institutions and incentive structures, it brings us back to the question of how much we value human lives.

The fact that you can bring a scrambling of intelligence, this fuzzy mirror, to aid in whatever prompt you could give it, without negotiation? It has huge implications for our "use" to one another, and our appreciation of one another. Already, content algorithms provide people more stimulation than the people around them. What about when this fuzzy mirror does it for almost everyone, at scale? It has such broad implications that it raises a lot of questions for the ego, about the depth of our individual contributions in comparison.

justinrobertson

7:30 I'm not sure how you can be so confident here that ChatGPT doesn't have a "subjective understanding of the world". How can you prove that you have it? How can you prove you aren't just a very big LLM? Let's say that in the future we make a way bigger LLM that becomes an AGI and outsmarts all people in all tasks. Its design and architecture are still the same, so will it still lack this "understanding" that we have?

The other problem I have is that you are implying that AI is in a way limited by us because it is trained on human data. It's true that current AIs possess human biases, but I don't think this will always be the case. Human biases won't always be inherent to the AI that we make. The smarter the AI gets, the fewer biases it will have. I'm not saying that I think it will be perfectly objective in the end, just that it won't be limited by human thinking.

Macieks

This video can be summarised by: can human beings produce a tool and use that tool to inflict harm on others?

The answer will always be yes!

A more interesting question is how to avoid pitfalls in reality: practical steps to avoid inflicting harm. How can we tighten the guardrails?

I just tried your prompts on ChatGPT and they gave the bog-standard "it's unethical" blah blah. Of course, I am sure that by working hard one could overcome those guardrails.

We have to be aware that ChatGPT can be extremely useful; heck, it can produce better code than most interns.

Even if I am extremely generous here and go with the worst claim possible: that ChatGPT is incredibly biased, racist, etc., etc.

There are many Nobel laureates out there who advocated for eugenics, were racists, etc. We still use their research, no?

Considering how important this topic is, I would have expected a more in-depth video.

sunetulajunge