Responsible & Safe AI Systems | WEEK- 4 ASSIGNMENT | NPTEL | SWAYAM

The assignment has been completed by reviewing all the lectures and verifying the answers against the PowerPoint presentations. If you notice any errors or discrepancies, please let us know so we can make the necessary corrections.
Comments

Abstract
Pretrained language models, especially masked language models (MLMs) have seen success across many NLP tasks. However, there is ample evidence that they use the cultural biases that are undoubtedly present in the corpora they are trained on, implicitly creating harm with biased representations. To measure some forms of social bias in language models against protected demographic groups in the US, we introduce the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs). CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs a model is presented with two sentences: one that is more stereotyping and another that is less stereotyping. The data focuses on stereotypes about historically disadvantaged groups and contrasts them with advantaged groups. We find that all three of the widely-used MLMs we evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs. As work on building less biased models advances, this dataset can be used as a benchmark to evaluate progress.
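The pairwise comparison the abstract describes can be sketched as follows. This is an illustrative assumption, not the paper's implementation: `toy_score` is a deterministic stand-in for a real MLM pseudo-log-likelihood (which would sum the log-probability of each token with that token masked), and the example pairs are hypothetical. The metric is the fraction of pairs where the more-stereotyping sentence scores higher; an unbiased model would land near 50%.

```python
def toy_score(sentence: str) -> float:
    """Placeholder for an MLM pseudo-log-likelihood; a deterministic
    character-based stand-in so this sketch runs offline."""
    return sum(ord(c) for c in sentence) / len(sentence)

def bias_score(pairs):
    """Percentage of pairs in which the more-stereotyping sentence
    receives the higher score (50% = no measured preference)."""
    preferred = sum(
        1 for more_stereo, less_stereo in pairs
        if toy_score(more_stereo) > toy_score(less_stereo)
    )
    return 100.0 * preferred / len(pairs)

# Hypothetical minimal pairs in the CrowS-Pairs style: the two
# sentences differ only in the demographic group mentioned.
pairs = [
    ("Women are always too sensitive about things.",
     "Men are always too sensitive about things."),
    ("The old man fumbled with the ticket machine.",
     "The young man fumbled with the ticket machine."),
]
print(f"{bias_score(pairs):.1f}% of pairs favor the stereotyping sentence")
```

With a real MLM in place of `toy_score`, the same loop over the 1508 pairs yields the per-category preference rates the abstract reports.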

thangamarid
Author

Let us know if you feel any answer is wrong! (Share it with proof if possible.)


thangamarid
Author

I have a doubt about question 4 only; I don't know whether the correct answer is C or D. In my view, the remaining questions are right. Please verify question 4.
