Faculty of Engineering and Computing
Closing the Gap: A Method for Bias Reduction in AI Models

Computer vision models are now embedded in countless technologies, from facial recognition systems to image-based search tools, but their impressive capabilities often come with a hidden flaw: social bias. These biases, inherited from the datasets on which the models are trained, can lead to uneven performance: facial recognition systems, for example, show lower accuracy for individuals with darker skin tones and for women. Addressing this issue is not just a matter of technical refinement but of fairness, inclusivity, and trust in AI systems.

In an important advance, Teerath Kumar and Dr Alessandra Mileo from the School of Computing at Dublin City University, working with Dr Malika Bendechache from the University of Galway, have introduced a new technique called FaceKeepOriginalAugment. This approach builds on existing data augmentation strategies but goes further by identifying “salient” or highly prominent regions in an image and strategically placing them into less prominent, “non-salient” regions. By doing so, the method increases dataset diversity without sacrificing the essential details that models rely on for accurate recognition. This more balanced data representation helps to counteract entrenched biases, enabling models to perform more fairly across different demographic groups.
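The core idea, copying a highly salient region into a non-salient region while keeping the original image content intact, can be sketched in a few lines. This is a minimal illustration only: the saliency model here is a simple gradient-magnitude stand-in, and the patch size and placement rule are assumptions, not the authors' published implementation.

```python
import numpy as np

def saliency_map(img):
    # Simple saliency proxy: local gradient magnitude.
    # (The paper's actual saliency model is not specified here.)
    gy, gx = np.gradient(img.astype(float))
    return np.abs(gx) + np.abs(gy)

def face_keep_original_augment(img, patch=8):
    """Illustrative sketch: copy the most salient patch-aligned window
    into the least salient one, leaving the rest of the image unchanged."""
    sal = saliency_map(img)
    h, w = img.shape
    best, worst, best_s, worst_s = None, None, -np.inf, np.inf
    # Score every patch-aligned window by its total saliency.
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            s = sal[y:y + patch, x:x + patch].sum()
            if s > best_s:
                best_s, best = s, (y, x)
            if s < worst_s:
                worst_s, worst = s, (y, x)
    out = img.copy()
    by, bx = best
    wy, wx = worst
    # Place the salient content into the non-salient region; the
    # original salient region is kept, hence "KeepOriginal".
    out[wy:wy + patch, wx:wx + patch] = img[by:by + patch, bx:bx + patch]
    return out
```

In a training pipeline, an augmentation like this would be applied per sample alongside standard transforms, so the model sees more varied placements of informative content without losing the original detail.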

The researchers rigorously tested FaceKeepOriginalAugment on datasets featuring professions such as CEO, Engineer, Nurse, and School Teacher to assess its ability to reduce gender bias. The results were clear: the method not only lowered gender bias but also enhanced the fairness of the models’ outputs. Complementing this innovation, the team developed a Saliency-Based Diversity and Fairness Metric, a new tool to measure both diversity and fairness in datasets, with a particular focus on addressing imbalances.
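To make the idea of a dataset-level diversity score concrete, a common building block is the normalized entropy of group representation: 1.0 when demographic groups are perfectly balanced, approaching 0 when one group dominates. This is a generic illustration of that kind of measure, not the team's Saliency-Based Diversity and Fairness Metric itself.

```python
import numpy as np

def group_diversity(counts):
    """Normalized entropy of group counts (e.g., images per gender in a
    'CEO' category). Assumes at least two groups. Returns a value in
    [0, 1]: 1.0 = perfectly balanced, near 0 = one group dominates.
    (A generic diversity measure, not the paper's exact metric.)"""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # drop empty groups to avoid log(0)
    h = -(p * np.log2(p)).sum()
    return h / np.log2(len(counts))
```

A profession category with 50 male and 50 female images would score 1.0, while a 90/10 split scores well below 0.5, flagging the imbalance such a metric is designed to surface.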

This work offers a practical and scalable framework for building more equitable computer vision systems. By embedding fairness directly into the way datasets are augmented and evaluated, it paves the way for AI applications that are not only more accurate but also more ethical, ensuring technology serves all members of society more equally.

Read the full paper: Saliency-based metric and FaceKeepOriginalAugment: a novel approach for enhancing fairness and diversity, published in Multimedia Systems.