Bias in AI and How to Fix It | Runway

Our new research on mitigating stereotypical biases in text-to-image generative systems has been shown to significantly improve fairness across different groups of people.

Comments
Author

I hear what you're saying, Runway, but consider this:
My psychology professor (this was in the early 2000s) offended the class when he said "stereotypes are true," and followed it up with "that's how they become stereotypes." The truth may be offensive to people, but that doesn't make it any less true.

As recently as 1980, the United States population was 80% white. I think it's closer to 70% now.

As of 2019 (and I'm pulling this straight from Google), the racial breakdown of doctors in the United States was "56.2% identified as White, 17.1% identified as Asian, 5.8% identified as Hispanic, and 5.0% identified as Black or African American."

Let's say I had a bowl of Skittles and 56.2% were red, 17.1% were green, 5.8% were yellow, and 5.0% were orange. If I were to pick 1 Skittle from the bowl while blindfolded, what flavor of Skittle would I be most likely to pick? Let's say I did that 100 times: the flavor counts should roughly track those percentages.

Therefore:
If my prompt was "1980s man" I should expect 4 out of 5 results to be white.
If my prompt was "man" I should expect 7 out of every 10 results to be white.
If my prompt is "doctor" I should expect roughly half of the results to be white.
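The Skittles analogy above is just random sampling in proportion to category weights, and can be sketched as a quick simulation. This is a minimal illustration, not anything from Runway's research: the flavor weights are the doctor percentages quoted in the comment, plus an assumed "other" bucket so the weights sum to 100.

```python
import random
from collections import Counter

# Flavor weights mirroring the quoted 2019 US doctor percentages;
# "other" is an assumed bucket covering the remaining 15.9%.
weights = {"red": 56.2, "green": 17.1, "yellow": 5.8, "orange": 5.0, "other": 15.9}

random.seed(0)  # reproducible draw

# Blindfolded draws from the bowl: each pick lands on a flavor with
# probability proportional to its weight.
draws = random.choices(list(weights), weights=weights.values(), k=10_000)
counts = Counter(draws)

# Observed shares converge toward the weights as the number of draws grows.
for flavor in weights:
    print(flavor, counts[flavor] / 10_000)
```

With enough draws, "red" dominates and each flavor's observed share sits close to its weight, which is exactly the proportional-representation behavior the comment expects from an unadjusted model.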

If I were to generate footage about the Kony 2012 campaign and my prompt was "child soldier in the Congo" or something like that, what race would the output be?

This isn't bias; it's just statistics.

My main takeaway from this is that the next time I use Runway, I should SPECIFY the race of the person that I want to generate.

T
Author

This has nothing to do with reducing bias, it's all about increasing bias to support an agenda.

localminimum
Author

What I like most is how diversity so often turns into reverse racism.

Danuxsy
Author

This has NOTHING to do with bias in AI. It's just screeching about political correctness.

True example of bias in AI: Use a prompt involving "wizard." You'll mostly get the Merlin archetype: old white guy with a beard and a hat. But change "wizard" to "mage" and you get all sorts of magic-users: female, young, white, Asian, even elves. Plus, no hat. So the AI is NOT biased to thinking only white guys can be magic users (any more than it thinks you have to be wearing a hat), because it gives you something different when you change the prompt to something that should be a synonym. "Doctor," "MD," "physician," "medic," "healer," "medical professional," and other synonyms will get you different varieties of people based on their representation in the training data.

Your examples are cherry-picked, because in the case of other low-income workers, like "plumber" or "factory worker," you'll get a greater number of white people. Same with other high-income professionals.

Bias in training data is a big issue, but it's NOT how you're representing it to be.

shanedk
Author

I think your idea of diversity in this video is myopically narrow. If A.I. were to look at what Google image search prioritises, or at modern advertising, it would be picking up on enormously skewed data that seems to be filtered through the lens of heavy DEI/EDI weighting. In short, you will be replacing one set of biases with another.

I don't trust that you have a good moral solution at all.

OneSwitch
Author

This is pure ideology. Social engineering.

diegomadero
Author

1:32 I don't think it's necessarily that the models have bias because the data comes from us humans... It is definitely also that, but it's mainly because it's really hard to build a dataset that encompasses all possible cases. So the model trains on a biased dataset and becomes biased.

goncalocartaxana
Author

As a paying Runway customer I just want to say that I think this is a really bad idea. Stereotypes exist for a reason, and images of young, attractive people are a perfectly suitable default, not only because they are the most pleasing to look at, but because they are the people most likely to be photographed, and therefore make up the largest percentage of images in the training data. If you ask an AI to generate a photo of an NBA player, you expect to get a photo of an extremely tall, athletic black man. This is NOT A PROBLEM. Forcing the AI to warp reality to fit some idealized Marxist ideology that demands equality of outcomes is extremely dishonest, and people do not like it. Disney is the proof.

brianwalls
Author

The day I start getting black knights around Arthur's Round Table is the day I quit Runway.

csok
Author

Did you clone the voice from Vox, or hire the person who does their VO? 🤔🤔

alterverse_ai
Author

Maybe that's how Google Gemini came up with black WWII Nazis. Lol.

deeplearningpartnership
Author

This video is hilarious. My experience with Runway so far is that 9 times out of 10 this "randomly" generated person will NOT be white. 🤣

wamaricle
Author

Very important. I continuously have issues with this in Kaiber, for instance.

miguelsureda
Author

But you do not solve bias... YOU CREATE ANOTHER BIAS. Nonsense. Bias isn't the problem, because the models use reality.

korujaa
Author

I'm glad Runway is looking into this. It's important and well timed.

FlyingLotus
Author

Very well done video. Bias is extremely important to talk about, because these models have a very white, male, Western-centric tendency, since the datasets were mostly created and curated by that demographic.
I suppose this can be a temporary fix, but bias cannot be eradicated from such models. Bias is baked in at the base, since the model learns from the data it's fed and makes statistical averages, scraping off all the hard corners. While this effort is certainly a step in the right direction, I feel we need to discuss it as a problem specific to generative models, their architectures and datasets, as there will always be a tendency towards specific ideologies ingrained inside the model.
But anyway, I do appreciate seeing Runway investing in these important topics.

fabianmosele
Author

Not that I expected anything less, but it's interesting to see the degree of white fragility in this comment section.

think