Mozilla Explains: Bias in AI Training Data

How can artificial intelligence be biased? Bias in artificial intelligence occurs when a machine gives consistently different outputs for one group of people compared to another. Typically, these biased outputs mirror familiar human societal biases around race, gender, biological sex, nationality, or age.

Biases can result from assumptions made by the engineers who developed the AI, or from prejudices in the training data used to teach it, which is what Johann Diedrick explains in the latest edition of Mozilla Explains.
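One common way to check for the kind of bias described above is to compare a model's positive-output rate across groups, a measure often called the demographic parity gap. The sketch below is a minimal, hypothetical illustration: the group names and output values are made up, and real fairness audits use richer metrics and real model predictions.

```python
# Minimal sketch of detecting group-level output disparity
# ("demographic parity gap"). All data below is hypothetical.

def positive_rate(outputs):
    """Fraction of positive (1) model outputs in a list."""
    return sum(outputs) / len(outputs)

def parity_gap(outputs_by_group):
    """Largest gap in positive-output rate between any two groups."""
    rates = [positive_rate(o) for o in outputs_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approved, 0 = denied) per group.
outputs = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% approved
}

gap = parity_gap(outputs)
print(f"parity gap: {gap:.2f}")  # a large gap signals a consistent disparity
```

A gap near zero means the groups receive positive outputs at similar rates; a large gap, as in this toy example, is the kind of consistent group-level difference the explainer describes.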
