Auditing Bias in AI and Machine Learning-Based Credit Algorithms: A Data Science Perspective on Fairness and Ethics in FinTech
DOI: https://doi.org/10.21590/ijtmh.11.02.10

Keywords: Algorithmic Bias, Fairness, AI Ethics, Credit Scoring, FinTech, Bias Auditing, Machine Learning.

Abstract
The legal, regulatory, and commercial environment in which artificial intelligence (AI) and machine learning (ML) are applied to credit
scoring and lending decisions has changed significantly across the financial services industry. This technological shift, however, has
intensified concerns about algorithmic bias and its implications for fairness, equity, and regulatory compliance. This paper examines
the problem of auditing AI- and ML-based credit algorithms for bias from a data science perspective, focusing on methodological
approaches for detecting, quantifying, and mitigating discriminatory patterns embedded in training data and model architectures.
By offering a critical assessment of the existing body of knowledge, a proposed auditing framework, and practical guidance for
FinTech stakeholders, the paper helps to bridge a gap in the discussion of responsible AI in financial technology. The evidence
highlights the necessity of transparent, accountable, and ethically grounded data science to ensure that credit decisions do not
reproduce or worsen existing forms of social injustice.