
dc.contributor.author	Deckker, D
dc.contributor.author	Sumanasekara, S
dc.date.accessioned	2025-10-01T07:15:19Z
dc.date.available	2025-10-01T07:15:19Z
dc.date.issued	2025-07
dc.identifier.uri	https://ir.kdu.ac.lk/handle/345/8910
dc.description.abstract	Gender bias in artificial intelligence (AI) systems, particularly within education and workplace settings, poses serious ethical and operational concerns. These biases often stem from historically skewed datasets and flawed algorithmic logic, which can reinforce existing inequalities and systematically exclude underrepresented groups, especially women. This systematic review analyses peer-reviewed literature from 2010 to 2024, sourced from IEEE Xplore, Google Scholar, PubMed, and SpringerLink. Using targeted keywords such as "AI gender bias", "algorithmic fairness", and "bias mitigation", the review assesses empirical and theoretical studies that examine the causes of gender bias, its manifestations in AI-driven decision-making systems, and proposed strategies for detection and mitigation. Findings reveal that biased training data, algorithm design flaws, and unacknowledged developer assumptions are the primary sources of gender discrimination in AI systems. In education, these systems affect grading accuracy and learning outcomes; in workplaces, they influence hiring, evaluations, and promotions. Mitigation approaches fall into three main categories: data-centric (e.g., data augmentation and data balancing), algorithm-centric (e.g., fairness-aware learning and adversarial training), and post-processing techniques (e.g., output calibration). However, each approach faces implementation challenges, including trade-offs between fairness and accuracy, a lack of transparency, and the absence of intersectional bias detection. The review concludes that gender fairness in AI requires integrated strategies that combine technical solutions with ethical governance. Ethical AI deployment must be grounded in inclusive data practices, transparent protocols, and interdisciplinary collaboration. Policymakers and organisations must strengthen accountability frameworks, such as the EU AI Act and the U.S. AI Bill of Rights, to ensure that AI technologies support equitable outcomes in education and employment.	en_US
dc.language.iso	en	en_US
dc.subject	Artificial Intelligence, Gender Bias, Algorithmic Fairness, Workplace Discrimination, Bias Mitigation in Education	en_US
dc.title	Systematic Review on AI in Gender Bias Detection and Mitigation in Education and Workplaces	en_US
dc.type	Journal article	en_US
dc.identifier.faculty	FOC	en_US
dc.identifier.journal	IJRC	en_US
dc.identifier.issue	02	en_US
dc.identifier.volume	04	en_US

