
Title: Bias Mitigation for Machine Learning Fairness - Job Recruiting Selection as an Example
Authors: 周敬軒
Chou, Ching-Hsuan
Contributors: 胡毓忠
Hu, Yuh-Jong
Chou, Ching-Hsuan
Keywords: Machine Learning
Bias mitigation
Machine fairness
Marital discrimination
Date: 2021
Issue Date: 2021-03-02 14:56:20 (UTC+8)
Abstract: Intuitively, machine learning has been assumed to be fair and neutral because it rests on mathematical computation and statistics. This is not the case: machine learning models learn from training data, so they inevitably absorb human discrimination and prejudice as well. Some bias is in fact necessary in machine learning; a model trained on a dataset with no bias at all has learned nothing, and its classification results have no reference value. But when the bias stems from sensitive or protected attributes, it creates problems of unfairness and even illegality.
This thesis takes job recruiting as its theme and investigates pre-processing approaches for reducing discrimination and prejudice in machine learning, using the Scikit-learn and IBM AIF360 libraries to construct a standardized bias-mitigation machine learning pipeline. Experiments verify that applying a pre-processing algorithm to reduce marital-status bias in the dataset makes the model fairer, brings the classification results for the married and unmarried groups closer together, and improves the classifier's overall accuracy and classification quality.
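The pre-processing family the abstract refers to can be illustrated with the reweighing idea of Kamiran and Calders, which AIF360 exposes as `aif360.algorithms.preprocessing.Reweighing`. The sketch below is a minimal, self-contained illustration of that idea, not the thesis's actual pipeline; the toy rows and the `(married, hired)` column meanings are assumptions for the example.

```python
# Minimal sketch of reweighing pre-processing (Kamiran & Calders):
# assign each row a weight w(s, y) = P(s) * P(y) / P(s, y) so that,
# under the weights, the protected attribute s (married) becomes
# statistically independent of the label y (hired).
from collections import Counter

def reweighing_weights(rows):
    """rows: list of (protected_value, label) pairs; returns one weight per row."""
    n = len(rows)
    count_s = Counter(s for s, _ in rows)   # marginal counts of protected value
    count_y = Counter(y for _, y in rows)   # marginal counts of label
    count_sy = Counter(rows)                # joint counts
    return [(count_s[s] / n) * (count_y[y] / n) / (count_sy[(s, y)] / n)
            for s, y in rows]

# Toy dataset (1 = married / hired): married applicants are hired more often.
data = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (0, 0)]
w = reweighing_weights(data)
# Under-represented combinations such as (unmarried, hired) get weights > 1,
# over-represented ones such as (married, hired) get weights < 1, so the
# weighted hiring rate becomes equal for both groups.
```

The resulting weights can then be fed to a Scikit-learn classifier through the standard `sample_weight` argument of `fit`, e.g. `LogisticRegression().fit(X, y, sample_weight=w)`, so the downstream model trains on the de-biased distribution.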
Reference: [1] Acharyya, Rupam, et al. "Detection and Mitigation of Bias in Ted Talk Ratings." arXiv preprint arXiv:2003.00683 (2020).
[2] Angwin, Julia, et al. "Machine Bias." ProPublica (2016), accessed: 2020-03-13
[3] Bellamy, Rachel KE, et al. "AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias." arXiv preprint arXiv:1810.01943 (2018).
[4] Calders, Toon, Faisal Kamiran, and Mykola Pechenizkiy. "Building classifiers with independency constraints." 2009 IEEE International Conference on Data Mining Workshops. IEEE, 2009.
[5] Chouldechova, Alexandra, and Aaron Roth. "The frontiers of fairness in machine learning." arXiv preprint arXiv:1810.08810 (2018).
[6] d'Alessandro, Brian, Cathy O'Neil, and Tom LaGatta. "Conscientious classification: A data scientist's guide to discrimination-aware classification." Big data 5.2 (2017): 120-134.
[7] Dwork, Cynthia, et al. "Fairness through awareness." Proceedings of the 3rd innovations in theoretical computer science conference. 2012.
[8] Polli, Frida. "Using AI to Eliminate Bias from Hiring", accessed: 2020-03-18
[9] Kamishima, Toshihiro, et al. "Fairness-aware classifier with prejudice remover regularizer." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Berlin, Heidelberg, 2012.
[10] Lohia, Pranay K., et al. "Bias mitigation post-processing for individual and group fairness." 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019.
[11] Raghavan, Manish, and Solon Barocas. "Challenges for Mitigating Bias in Algorithmic Hiring", accessed: 2020-04-30
[12] Mehrabi, Ninareh, et al. "A survey on bias and fairness in machine learning." arXiv preprint arXiv:1908.09635 (2019).
[13] Peña, Alejandro, et al. "Bias in Multimodal AI: Testbed for Fair Automatic Recruitment." arXiv preprint arXiv:2004.07173 (2020).
[14] Peng, Andi, et al. "What you see is what you get? The impact of representation criteria on human bias in hiring." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. Vol. 7. No. 1. 2019.
[15] Qin, Chuan, et al. "Enhancing person-job fit for talent recruitment: An ability-aware neural network approach." The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. 2018.
[16] Raghavan, Manish, et al. "Mitigating bias in algorithmic hiring: Evaluating claims and practices." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 2020.
[17] Silberg, Jake, and James Manyika. "Notes from the AI frontier: Tackling bias in AI (and in humans)." McKinsey Global Institute (2019): 4-5.
[18] Society for Human Resource Management. 2016 Human Capital Benchmarking Report (2016). research-and-surveys/Documents/2016-Human-Capital-Report.pdf
[19] Mahoney, Trisha, Kush R. Varshney, and Michael Hind. "AI Fairness: How to Measure and Reduce Unwanted Bias in Machine Learning" (2020), accessed: 2020-04-30
[20] Xue, Songkai, Mikhail Yurochkin, and Yuekai Sun. "Auditing ML Models for Individual Bias and Unfairness." arXiv preprint arXiv:2003.05048 (2020).
[21] Zhang, Brian Hu, Blake Lemoine, and Margaret Mitchell. "Mitigating unwanted biases with adversarial learning." Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 2018.
[22] Zhang, Yukun, and Longsheng Zhou. "Fairness Assessment for Artificial Intelligence in Financial Industry." arXiv preprint arXiv:1912.07211 (2019).
[23] Zhong, Ziyuan. "A Tutorial on Fairness in Machine Learning", accessed: 2020-03-28.
Description: Master's thesis
Data Type: thesis
Appears in Collections: [資訊科學系碩士在職專班] Degree Theses

Files in This Item:

File: 100401.pdf (1509 KB, Adobe PDF)

All items in 學術集成 are protected by copyright, with all rights reserved.
