There is a need for a more holistic approach to fair AI, one that combines technical steps, sociological activities, legal compliance measures, and ethical considerations. Addressing the need for AI systems free from discrimination requires a multidisciplinary approach that brings together social, legal, and technical perspectives. Despite significant advancements in research and technical solutions, a gap remains between socio-legal and technical approaches. A fair-by-design methodology is an approach that enables the design and implementation of new AI systems that are free of bias and discrimination. The goal of this work package is to address this challenge and fill this gap.
The main goal of our WP is to build a fair-by-design methodology that integrates fairness into AI from the outset, preventing bias and discrimination during the design and development phases rather than fixing them later.
This proactive approach uses diverse datasets, fairness-aware algorithms, and transparency and accountability measures to ensure AI systems are built fairly from the ground up.
The action plan starts with a survey of fair sociological and data collection methodologies, in conjunction with an investigation and mapping of sociological theories and methods for fair AI. This activity will provide guidelines on how to select targeted user groups and other stakeholders to be involved in the design process of AI systems. Similarly, and in tight collaboration with WP6, an analysis of fair approaches from the legal and ethical perspective will be carried out. This activity corresponds to task 5.1.
In parallel with the investigation of social, legal, and ethical methods, the state of the art of fair software engineering methodologies, architectures, and techniques will be explored. A careful analysis of how dataset and model specifications can be mapped onto software engineering methodologies, architectures, and techniques for designing fair AI systems will be performed as well. This survey is part of task 5.2.
Once these surveys are complete, gaps and challenges will be identified and addressed. The aim is to devise a methodology for tackling bias from the fair-by-design stance. This activity will provide a vade mecum for the management of a specific bias at hand: a matching between all identifiable types of bias, the methodologies to follow, and the possible repair/mitigation techniques to exploit. Depending on the bias, different methodologies and repair/mitigation techniques can be taken into consideration (such as data augmentation, running different algorithms on the data, …). This is a core part of task 5.3.
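To make the idea of a bias-to-guidance matching concrete, the following is a minimal illustrative sketch in Python of how such a vade mecum could be encoded as a lookup table. The bias categories, methodologies, and technique lists shown are hypothetical placeholders, not the actual catalogue produced in task 5.3.

```python
# Minimal sketch: a lookup table pairing bias types with a suggested
# methodology and candidate repair/mitigation techniques.
# All entries below are hypothetical examples for illustration only.
from dataclasses import dataclass, field

@dataclass
class MitigationEntry:
    methodology: str                                       # design-time guidance
    techniques: list[str] = field(default_factory=list)    # candidate repairs

# Hypothetical mapping from identifiable bias types to guidance.
BIAS_CATALOGUE: dict[str, MitigationEntry] = {
    "representation_bias": MitigationEntry(
        methodology="Re-examine the sampling strategy with stakeholders.",
        techniques=["data augmentation", "re-sampling"],
    ),
    "label_bias": MitigationEntry(
        methodology="Audit the annotation guidelines for systematic skew.",
        techniques=["re-labelling", "running different algorithms on the data"],
    ),
}

def guidance_for(bias_type: str) -> MitigationEntry:
    """Return the catalogued guidance for a given bias type."""
    try:
        return BIAS_CATALOGUE[bias_type]
    except KeyError:
        raise ValueError(f"No guidance catalogued for bias type: {bias_type}")

print(guidance_for("representation_bias").techniques)
```

In a real instantiation, each entry would point to the detection components of WP3 and the mitigation components of WP4 rather than to plain strings.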
Finally, a fairness-by-design engine will be designed and developed, as described in task 5.4.
The effort has involved multiple partners due to its multidisciplinary nature. UNIBO has coordinated the activities of this work package, initiating the survey activities and the design of the fair-by-design methodology. UCC and UMU used their expertise to survey the fair-by-design algorithms currently available in the AI field; they also contributed their knowledge of how to develop the technical components that will underpin the methodology. ALLAI has led the survey of the social, legal, and ethical approaches to fair-by-design AI algorithms. ITI has collaborated across all these tasks, providing support and coordination.
The expected results of the work package are the following:
WP5 is connected with the other technical WPs (WP2, WP3, and WP4). In particular, the fair-by-design methodology will rely on specific sub-components developed in the other work packages, such as bias detection and bias mitigation techniques (WP3 and WP4, respectively); the coordination provided by WP2 has enabled an effective integration of all sub-components. The fair-by-design methodology is also strongly influenced by the outcome of WP6, as coordination with this work package is crucial to align the technical solution with the relevant social, legal, and policy elements and contexts.
WP5 is the core of Aequitas. This is where experts in IT, social sciences, and law collaborate to devise guidelines for making tomorrow’s AI fairer. It’s exciting to look at this problem from so many different perspectives: it gives us the impression of working on something which can really impact the future!
Giovanni Ciatto, WP5 Co-leader
In order to build the fair-by-design methodology, we first had to examine the existing fair-by-design approaches through a social, legal, and ethical lens (task 5.1) and a technological lens (task 5.2).
The key takeaways of this analysis are the following (summarized in deliverables 5.1 and 5.2):
We have built a meta-methodology that can be used to construct fair-by-design methodologies. This was the activity performed during task 5.3 and reported in deliverable 5.3. The meta-methodology is composed of a stable set of core principles and an evolvable pool of practices for steering end users towards a deeper understanding of the problem/domain they are dealing with, and for guiding their decision-making. The meta-methodology should then be reified into a guidelines-provisioning software system whose capabilities and degree of automation can be incrementally improved, as prescribed by the meta-methodology itself. We propose developing the fair-by-design methodology via an incremental approach, starting from an initial version to be repeatedly refined. Our approach uses a questionnaire-based system in which socio-legal and technical domain experts iteratively refine questions and responses, supported by automation.
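As a rough illustration of the questionnaire-based provisioning idea, the sketch below shows in Python how answers could be mapped to guidelines, with the question pool left open for experts to refine over time. All class names, questions, and guidelines here are hypothetical, assumed for illustration rather than taken from deliverable 5.3.

```python
# Minimal sketch of a questionnaire-based guidelines-provisioning system,
# assuming a simple answer-to-guideline mapping. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    options: dict[str, str]  # answer option -> corresponding guideline

@dataclass
class Questionnaire:
    """An evolvable pool of questions that domain experts can refine."""
    questions: list[Question]

    def guidelines(self, answers: list[str]) -> list[str]:
        """Map the user's answers to the matching guidelines."""
        return [
            q.options[a]
            for q, a in zip(self.questions, answers)
            if a in q.options
        ]

# Hypothetical example: steering a designer toward bias-aware data collection.
survey = Questionnaire(questions=[
    Question(
        text="Does your dataset cover all targeted user groups?",
        options={
            "no": "Involve under-represented stakeholders in data collection.",
            "unsure": "Run a representation audit before training.",
        },
    ),
])
print(survey.guidelines(["unsure"]))
```

The point of the design is that the question pool and its answer-to-guideline mappings, not the surrounding engine, are what the socio-legal and technical experts iterate on, which is what allows the degree of automation to grow incrementally.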
We address the problem of building fair-by-design methodologies by merging social, legal, ethical, and technical perspectives. This kind of methodology is a crucial step towards fair and unbiased AI. Our approach does not directly provide a single fair-by-design methodology: we realized that varying social and legal factors require different strategies to obtain fair AI. Instead, we propose a meta-methodology for building fair-by-design methodologies.