Description
Bias in Bios
Bias in Bios was created by De-Arteaga et al. (2019) and published under the MIT license (https://github.com/microsoft/biosbias). The dataset is used to investigate bias in NLP models. It consists of textual biographies used to predict professional occupations; the sensitive attribute is (binary) gender.
The version shared here is the one proposed by Ravfogel et al. (2020), which is slightly smaller due to the unavailability of 5,557 biographies.
See the full description on the dataset page: https://huggingface.co/datasets/LabHC/bias_in_bios.
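The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; the column names (`hard_text`, `profession`, `gender`) are assumptions based on the dataset card and should be checked against the actual schema after loading.

```python
def load_bios(split="train"):
    """Load the Bias in Bios dataset from the Hugging Face Hub.

    Requires `pip install datasets`. The column names referenced in the
    docstring below are assumptions from the dataset card:
      - hard_text:  the biography text
      - profession: the occupation label (target of classification)
      - gender:     the binary sensitive attribute
    """
    from datasets import load_dataset  # imported here so the sketch stays import-safe
    return load_dataset("LabHC/bias_in_bios", split=split)


if __name__ == "__main__":
    # Downloads the dataset on first call, then inspects one example.
    ds = load_bios("train")
    example = ds[0]
    print(example)
```

Each split can then be fed directly into a text-classification pipeline, with `gender` held out for bias analysis rather than used as an input feature.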
Tags
- task_categories: text-classification
- language: en
- license: mit
- size_categories: 100K<n<1M
- format: parquet
- modality: tabular, text
- library: datasets, pandas, mlcroissant, polars
- region: us