Bias in AI is a Problem

We tend to think machines can be objective because they are not swayed by human emotion. Even so, AI (artificial intelligence) systems may show bias because of the data used to train them. We have to be aware of this and correct for it.
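
As a rough illustration of that point (a made-up toy example, not the video's own code), a model trained on skewed historical hiring decisions will simply reproduce the skew:

```python
# A minimal sketch (hypothetical data) of how a model trained on skewed
# historical hiring decisions reproduces that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

skill = rng.normal(0, 1, n)      # job-relevant skill, same distribution for both groups
group = rng.integers(0, 2, n)    # group membership, irrelevant to actual ability

# Historical labels: skill mattered, but group 1 was also penalized.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# At identical skill, the trained model now predicts different hiring odds.
probe = np.array([[0.0, 0], [0.0, 1]])
probs = model.predict_proba(probe)[:, 1]
print(f"P(hired | skill=0, group=0) = {probs[0]:.2f}")
print(f"P(hired | skill=0, group=1) = {probs[1]:.2f}")
```

Nothing in the fitting step distinguishes genuine signal from historical prejudice; the model treats both as patterns worth learning, which is exactly the problem described above.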
Comments

A lot of bias is based on perception. If more men in a population are trained and competent in a certain field, but an organization insists on employing an equal number of men and women, someone is being neglected either way.

karlfick

ChatGPT excels at recognizing patterns, but the bias is built in by the programmers as guardrails.

Caligula

Interesting summary and useful example to explain the concept. It is indeed a complex issue but I appreciated the video.

luizvaleriosociology

I have a question: can we reduce the bias to zero?

adityasauce

It boggles my mind why any company would not strictly adhere to a qualification-based hiring practice. Using AI for such a task just seems plain stupid.

neoverse

Hi Dr. Raj, can this be avoided if the initial data is processed through long-term strategies?

satyaveerpaulx

Raj, I'm finding many resources that "raise the alarm" concerning bias in data and bias in algorithms. Can you point to literature on how to discover, measure and de-bias data and test, measure and de-bias algorithms?

paulmattson
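
One concrete starting point on the measurement side (a minimal sketch with made-up numbers, not a recommendation of any single metric) is demographic parity: compare the rate of positive decisions a model makes for each group. Open-source toolkits such as Fairlearn and IBM's AIF360 collect this and many other fairness metrics together with mitigation methods.

```python
# A rough sketch of one common bias check: demographic parity, i.e. comparing
# positive-decision rates across groups. The data below is made up for illustration.
import numpy as np

def demographic_parity_difference(decisions, groups):
    """Difference in positive-decision rate between group 0 and group 1."""
    decisions = np.asarray(decisions, dtype=float)
    groups = np.asarray(groups)
    return decisions[groups == 0].mean() - decisions[groups == 1].mean()

# Hypothetical model decisions (1 = hire) for applicants from two groups.
decisions = [1, 1, 1, 1, 0,  0, 1, 0, 0, 1]
groups    = [0, 0, 0, 0, 0,  1, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # = 0.8 - 0.4
```

Related checks compare error rates across groups (equalized odds) or calibration; measuring is the straightforward part, and much of the literature is about which metric fits a given decision.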

What about bias according to religion or ethnicity? I noticed this as a user. They are against, let's say, Muslims or Black people; they mock us and it almost hurts our feelings. They seem to have been set to an attitude against religion in general.

LatifoMudallali

It's a question of perspective and context. The example you give is a very bad one, as it can be in a company's best interest to be biased towards hiring more men than women to field stronger soldiers, or more short people than tall ones to work in very confined spaces...

The goal should be to try to propose the best answer(s) based on a specific context.

The actual trend seems to go in the opposite direction. Companies are adding a very strong extra layer of political dogma to "guide" the process.
Google's AI system goes as far as to systematically discriminate against white people and to add keywords to users' requests that change the request.

IronFreee