authors

Joana Oliveira

LOBA

NEWS 25-02-2025

AEQUITAS Use Cases: Advancing Fairness in AI Across Key Sectors

Bringing AI Fairness from Theory to Practice

Artificial Intelligence (AI) is increasingly shaping critical decisions in healthcare, hiring, and public services. However, if left unchecked, AI systems can reinforce existing biases and lead to unfair outcomes, particularly for vulnerable and underrepresented groups. AEQUITAS is addressing these challenges head-on by developing tools and methodologies to assess, repair, and prevent bias in AI systems.

AEQUITAS is not just about identifying bias—it is about actively engineering fair AI systems through a rigorous, interdisciplinary approach. This includes working with diverse stakeholders, from AI developers and policymakers to affected communities.

As part of its mission, AEQUITAS is conducting real-world case studies in three key domains:

  • Healthcare – Addressing bias in medical diagnostics and predictive models.
  • Hiring & Human Resources – Ensuring fair recruitment and job-matching AI.
  • Social Inclusion & Education – Using AI to support disadvantaged groups.

Each case study is designed to evaluate existing AI models, develop fairness-aware algorithms, and create new solutions that promote equity in AI-driven decision-making.

Use Case 1: Fair AI in Healthcare

  1. AI-assisted identification of dermatological diseases

Dermatology AI models have historically been biased towards lighter skin tones, often leading to misdiagnosis for people with darker skin. AEQUITAS is tackling this by:

  • Developing a fair AI system for diagnosing pediatric dermatological diseases, ensuring equitable performance across different skin tones.
  • Creating a balanced dataset to train AI models on diverse patient populations.
  • Enhancing awareness among AI developers and medical professionals on racial bias in healthcare AI.

This initiative aims to improve diagnostic accuracy and accessibility for all patients, regardless of their skin tone.
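
For readers who want a concrete picture of what such an equity check can look like, the sketch below compares a classifier's diagnostic accuracy across skin-tone groups. It is a minimal illustration under assumed names: the columns, the grouping, and the toy data are invented for the example and are not AEQUITAS code or data.

# Minimal sketch: per-group diagnostic accuracy (all names and data are assumptions).
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(df, group_col="skin_tone"):
    """Accuracy of the model's predictions within each skin-tone group."""
    return df.groupby(group_col).apply(
        lambda g: accuracy_score(g["diagnosis"], g["prediction"])
    )

# Toy evaluation results (hypothetical):
eval_df = pd.DataFrame({
    "skin_tone":  ["I-II", "I-II", "III-IV", "III-IV", "V-VI", "V-VI"],
    "diagnosis":  [1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 1, 0, 0, 0],
})
per_group = accuracy_by_group(eval_df)
print(per_group)
# A large accuracy gap between groups would flag the model for the kind of
# dataset-balancing and repair work described above.
print("accuracy gap:", per_group.max() - per_group.min())

A check like this is typically only the first step; the balanced-dataset work mentioned above is one way to close any gap it reveals.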

  2. Bias-aware ICU healthcare outcome prediction

AI systems used in intensive care units (ICUs) often suffer from biases related to age, ethnicity, and local medical protocols. To address this, AEQUITAS is:

  • Using bias-aware predictive algorithms to forecast ICU patient outcomes such as mortality rates and length of stay.
  • Developing a synthesizer engine that generates realistic yet unbiased medical data for training AI models.
  • Ensuring that ICU decision-support systems are transparent and accountable, reducing disparities in patient treatment.

By mitigating bias in healthcare AI, AEQUITAS contributes to more equitable medical decision-making and patient care.
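
One widely used pre-processing technique for the kind of bias-aware training described in this use case is reweighing (Kamiran & Calders), which weights training samples so that the protected attribute and the outcome become statistically independent. The sketch below is a generic illustration with hypothetical column names and toy data; it is not the specific method or data used in the AEQUITAS ICU work.

# Minimal sketch of reweighing for bias-aware training (names and data are assumptions).
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    """Per-sample weights that decouple the protected attribute from the label."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)      # P(A = a)
    p_label = df[label_col].value_counts(normalize=True)      # P(Y = y)
    p_joint = df.groupby([group_col, label_col]).size() / n   # P(A = a, Y = y)

    def weight(row):
        a, y = row[group_col], row[label_col]
        return (p_group[a] * p_label[y]) / p_joint[(a, y)]

    return df.apply(weight, axis=1)

# Hypothetical ICU training frame: "ethnicity" as protected attribute,
# "mortality" as the outcome label.
icu_df = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "mortality": [1, 0, 0, 1, 1, 1, 0, 0],
})
weights = reweighing_weights(icu_df, "ethnicity", "mortality")
print(weights)
# The resulting weights can be passed as `sample_weight` to most scikit-learn
# estimators when fitting the outcome-prediction model.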

Use Case 2: Fair Hiring & Job-Matching AI

AI-powered recruitment tools are increasingly used to screen job candidates, but historical biases in hiring data can lead to unfair outcomes, particularly for women, ethnic minorities, and other marginalized groups.

  1. AI-assisted recruiting without bias

AEQUITAS is working with Adecco Group to improve AI-driven recruitment processes by:

  • Identifying and repairing biases in hiring algorithms, particularly in STEM and medical fields, where gender disparities are common.
  • Creating fair AI models that eliminate systemic biases while preserving candidate qualifications.
  • Comparing hiring data from different countries (Italy & Spain) to assess cross-regional biases and their impact on job selection.

  2. Fair job matching systems

Many job-matching algorithms unintentionally reinforce traditional biases by prioritizing candidates based on past hiring trends. AEQUITAS is:

  • Evaluating job-matching AI systems to detect and correct gender, age, and social background biases.
  • Ensuring transparency in candidate selection, making hiring decisions fairer and more inclusive.
  • Developing methodologies to detect recruiter bias, ensuring AI decisions are interpreted correctly.

Through these initiatives, AEQUITAS is reshaping the future of AI-assisted recruitment, ensuring diversity and fairness in the hiring process.
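
As a concrete illustration of the kind of audit such work involves, the sketch below computes the disparate impact ratio (the "four-fifths rule") over a set of screening decisions. The column names and toy data are hypothetical and are not drawn from the Adecco Group case study.

# Minimal sketch: disparate impact ratio over AI-assisted screening decisions
# (column names and data are assumptions for illustration).
import pandas as pd

def disparate_impact_ratio(df, group_col, selected_col):
    """Lowest selection rate divided by the highest across groups;
    values below 0.8 are the conventional warning threshold."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates.min() / rates.max()

# Hypothetical shortlisting outcomes:
candidates = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [1, 0, 0, 1, 1, 1, 1, 0],
})
ratio = disparate_impact_ratio(candidates, "gender", "selected")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Selection rates differ enough to warrant a closer fairness audit.")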

Use Case 3: AI for Social Inclusion & Educational Fairness

  1. AI-assisted detection of educational disadvantage

Educational inequalities often limit opportunities for students from disadvantaged backgrounds. AEQUITAS is developing an AI-driven tool to:

  • Identify students at risk of falling behind based on diagnostic educational data.
  • Ensure early intervention by policymakers and educators through AI-driven insights.
  • Assess AI fairness across different regions, reducing contextual biases in educational assessments.

  2. AI for identifying child abuse risks in hospitals

AI models used in child abuse detection can unintentionally exhibit biases against ethnic minorities and economically disadvantaged families. AEQUITAS is working to:

  • Create an AI system that identifies potential child abuse cases without reinforcing existing racial or socioeconomic biases.
  • Improve AI-driven risk assessments by incorporating socio-economic and cultural factors.
  • Reduce false positives and negatives, ensuring that AI assists doctors fairly and effectively.
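
To make the last point concrete, the sketch below compares false-positive and false-negative rates of a risk-flagging model across groups; equalizing these error rates is one common way to formalize the fairness goal described here. All group labels, column names, and data are invented for illustration.

# Minimal sketch: per-group false-positive and false-negative rates
# (group labels, columns, and data are assumptions).
import pandas as pd

def error_rates_by_group(df, group_col):
    """False-positive and false-negative rates per group, given binary
    ground-truth labels ("label") and model flags ("flagged")."""
    def rates(g):
        negatives = g[g["label"] == 0]
        positives = g[g["label"] == 1]
        return pd.Series({
            "false_positive_rate": negatives["flagged"].mean(),
            "false_negative_rate": (1 - positives["flagged"]).mean(),
        })
    return df.groupby(group_col).apply(rates)

# Hypothetical audit data:
audit_df = pd.DataFrame({
    "group":   ["low_income"] * 3 + ["high_income"] * 3,
    "label":   [0, 0, 1, 0, 0, 1],
    "flagged": [1, 0, 1, 0, 0, 1],
})
print(error_rates_by_group(audit_df, "group"))
# Large gaps in either rate across groups are exactly the disparities the
# use case aims to reduce.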

The Future of AI is Fair, Transparent & Accountable

AEQUITAS is committed to developing AI systems that work for everyone—regardless of race, gender, or socioeconomic status. By integrating ethical AI principles, participatory design, and state-of-the-art fairness methodologies, the project ensures that AI-driven decisions do not reinforce existing inequalities but actively counteract them.

AEQUITAS is more than a research project—it is a blueprint for the future of equitable AI.
