Enhancing Fairness in Financial AI Models through Constraint-Based Bias Mitigation
- Abstract
- As artificial intelligence (AI) increasingly drives decision-making in the financial sector, ensuring fairness in machine-learning models has become critical. Bias in AI models can lead to discriminatory practices, undermining public trust and restricting access to essential financial services. While existing financial services leverage AI to enhance efficiency and accuracy, these systems can inadvertently produce unfair outcomes for specific groups defined by sensitive attributes, such as gender and race. This study addresses the challenge of mitigating bias in loan-approval models by applying fairness-aware machine-learning techniques. We investigate two distinct constraint-based strategies for bias mitigation: fairness- and accuracy-constrained models. These strategies are evaluated using logistic regression (LR) and a large-scale, contemporary financial dataset from the Korea Credit Information Services. The results demonstrate that fairness-constrained models achieve a superior balance between fairness and accuracy compared to a conventional LR model. Furthermore, we highlight the importance of tailored data preprocessing and carefully selecting relevant sensitive attributes (e.g., gender, age, nationality) in enhancing fairness outcomes. The findings underscore the necessity of integrating fairness considerations into financial AI models.
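This record does not detail the constraint-based strategy itself, so the following is only a minimal sketch of the general idea it describes: a logistic-regression loan-approval model whose training loss includes a demographic-parity penalty on a sensitive attribute. The data, penalty weight, and all names below are hypothetical (synthetic data, not the Korea Credit Information Services dataset, and not the authors' exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic loan data with a binary sensitive attribute s (e.g., gender).
n = 4000
s = rng.integers(0, 2, n)                          # sensitive attribute, 0/1
x = rng.normal(0, 1, (n, 3)) + s[:, None] * 0.8    # features correlated with s
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 1, n) > 0.5).astype(float)
X = np.hstack([x, np.ones((n, 1))])                # add intercept column


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))


def train(lam, steps=2000, lr=0.1):
    """Logistic regression; lam weights a demographic-parity penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n                   # log-loss gradient
        # Penalty: squared gap between the groups' mean predicted approval rates.
        gap = p[s == 1].mean() - p[s == 0].mean()
        dp = p * (1 - p)                           # sigmoid derivative
        dgap = (X[s == 1] * dp[s == 1, None]).mean(0) \
             - (X[s == 0] * dp[s == 0, None]).mean(0)
        grad += lam * 2 * gap * dgap
        w -= lr * grad
    return w


def parity_gap(w):
    """Absolute difference in thresholded approval rates between groups."""
    approve = sigmoid(X @ w) > 0.5
    return abs(approve[s == 1].mean() - approve[s == 0].mean())


w_plain = train(lam=0.0)    # conventional LR baseline
w_fair = train(lam=20.0)    # fairness-constrained variant
```

With the penalty active, the approval-rate gap between groups shrinks relative to the unconstrained baseline, at some cost in raw accuracy, which is the fairness/accuracy trade-off the abstract refers to.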
- Author(s)
- 김성민; 최이슬; 홍지원; 이은빈; 김정아
- Issued Date
- 2025-02-28
- Type
- Article
- Keyword
- AI Systems and Applications
- DOI
- 10.3745/JIPS.01.0111
- URI
- http://repository.sungshin.ac.kr/handle/2025.oak/8696
- Publisher
- Korea Information Processing Society
- ISSN
- 1976-913X
Appears in Collections:
- Department of Convergence Security Engineering > Academic Papers
- Open Access & License
- File List
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.