Call for papers

Third AEQUITAS Workshop on Fairness and Bias in AI

Aim and Scope

Fairness and bias in AI have become increasingly pertinent as AI-based decision support systems find widespread application across industries, public and private sectors, and policymaking. These systems guide decisions in critical societal domains such as hiring, university admissions, loan approvals, medical diagnoses, and crime prediction. Given the problematic rise in societal inequalities and intersectional discrimination, it is crucial to prevent AI systems from replicating these issues and instead work toward mitigating them. As we leverage automated decision support systems to formalize, scale, and expedite processes, we are presented with both the opportunity and the obligation to reassess existing procedures for the better. This entails avoiding the perpetuation of existing injustices by identifying, diagnosing, and rectifying them. Establishing trust in these systems requires the confidence of domain experts and stakeholders in the decisions made. Despite the increased focus on this area in recent years, there remains a lack of comprehensive understanding regarding the interpretation of bias and discrimination concepts in the realm of AI.

Moreover, fairness and bias in AI are deeply intertwined with the principles of inclusion, cultural representation, and responsible AI. To mitigate bias, AI systems must be inclusive by design, ensuring that diverse perspectives and underrepresented groups are meaningfully involved throughout development. Cultural representation is essential to avoid the marginalization of certain communities, ensuring that AI systems respect diverse social contexts and avoid perpetuating harmful stereotypes. Identifying socio-technical solutions to fight bias and discrimination that are both realistically achievable and ethically justified is an ongoing challenge. Incorporating the role of generative AI and the evolving legal landscape, such as the AI Act, will be critical in advancing these discussions and shaping the future of ethical AI implementation.

Call for Submissions

We encourage the submission of original contributions investigating novel methodologies and approaches for designing and implementing fair AI systems and algorithms, or for tackling bias in AI. In particular, authors may submit:

  • Regular papers (max. 12 pages + references – CEUR-WS format);
  • Short/Position/Discussion papers (max. 6 pages + references – CEUR-WS format).

To submit your paper, click here.

All submitted papers will be evaluated by at least two members of the program committee, based on originality, significance, relevance, and technical quality.

Submissions of full research papers must be written in English and submitted as PDF in the CEUR-WS conference format, available at this link, or at this link if an Overleaf template is preferred.

Submissions should be single-blind, i.e., authors' names should be included in the submissions.

Submissions must be made through the EasyChair conference system prior to the specified deadline (all deadlines refer to GMT).

Discussion papers are extended abstracts that present recent application work (published elsewhere), a position, or open problems with clear and concise formulations of current challenges.

At least one author of each accepted paper should register for and take part in the workshop to present the work.

Proceedings and Post-Proceedings

All accepted papers will be published in the CEUR-WS proceedings. A selection of the best papers accepted for presentation at the workshop will be invited to submit an extended version for publication in a journal.

For any additional information contact: roberta.calegari@unibo.it

Key Dates

  • Paper submission: March 14, 2025 - May 15, 2025
  • Notification to authors: May 15, 2025 - July 15, 2025
  • Camera-ready submission: July 15, 2025 - September 8, 2025

Indicative topics

This workshop serves as a platform for exchanging ideas, presenting findings, and exploring preliminary work in all facets linked to fairness and bias in AI. This includes, but is not restricted to:

  • Bias and Fairness by Design
  • Fairness measures and metrics
  • Counterfactual reasoning
  • Metric learning
  • Impossibility results
  • Multi-objective strategies for fairness, explainability, privacy, class imbalance, rare events, etc.
  • Federated learning
  • Resource allocation
  • Personalized interventions
  • Debiasing strategies on data, algorithms, procedures
  • Human-in-the-loop approaches
  • Methods to Audit, Measure, and Evaluate Bias and Fairness
  • Auditing methods and tools
  • Benchmarks and case studies
  • Standards and best practices
  • Explainability, traceability, data and model lineage
  • Visual analytics and HCI for understanding/auditing bias and fairness
  • HCI for bias and fairness
  • Software engineering approaches
  • Legal perspectives on fairness and bias
  • Social and critical perspectives on fairness and bias
  • Inclusive AI and cultural representation

More information

Site: https://aequitas-aod.github.io/aequitas-ecai25.github.io/index.html
