NEWS 20240802

Fair-by-design Methodology

There is a need for a more holistic approach to fair AI, one that includes technical steps, sociological activities, legal compliance measures, and ethical considerations. Addressing the need for AI systems free from discrimination requires a multidisciplinary approach that combines social, legal, and technical perspectives. A Fair-by-design methodology enables the design and implementation of new AI systems that are free of bias and discrimination. Despite significant advancements in research and technical solutions, however, a gap remains between socio-legal and technical approaches. The goal of this work package is to address this challenge and fill that gap.

Action Plan

The main goal of our WP is to build a Fair-by-design methodology that integrates fairness into AI from the outset, aiming to prevent bias and discrimination during the design and development phases rather than fixing them later.

This proactive approach uses diverse datasets, fairness-aware algorithms, and transparency and accountability measures to ensure AI systems are built fairly from the ground up.

The action plan starts with a survey of fair sociological and data-collection methodologies, together with an investigation and mapping of sociological theories and methods for fair AI. This activity will provide guidelines on how to select the target user groups and other stakeholders to involve in the design process of AI systems. Similarly, and in close collaboration with WP6, an analysis of fair approaches from the legal and ethical perspective will be carried out. This activity corresponds to task 5.1.

In parallel with the investigation of social, legal, and ethical methods, an exploration of the state of the art in fair software engineering methodologies, architectures, and techniques will be conducted. A careful analysis of how dataset and model specifications map onto the software engineering methodologies, architectures, and techniques that can be exploited to design fair AI systems will be performed as well. This survey is part of task 5.2.

After these surveys, gaps and challenges will be identified and addressed. The aim is to define a methodology for tackling bias from the fair-by-design stance. This activity will provide a vade mecum for managing a specific bias at hand: a matching between all identifiable types of bias, the methodologies to follow, and the repair/mitigation techniques that can be exploited. Depending on the bias, different methodologies and repair and mitigation techniques can be considered (such as data augmentation, running different algorithms on the data, …). This is a core part of task 5.3.
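The matching between bias types and mitigation techniques can be pictured as a lookup from a detected bias to its candidate countermeasures. The sketch below is purely illustrative: the bias categories, technique names, and fallback are assumptions, not part of the project's deliverables.

```python
# Hypothetical registry matching bias types to candidate repair/mitigation
# techniques, in the spirit of the vade mecum described above.
# All names are illustrative assumptions.

BIAS_PLAYBOOK = {
    "representation_bias": ["data augmentation", "stratified resampling"],
    "label_bias": ["relabelling by domain experts", "label auditing"],
    "measurement_bias": ["feature auditing", "proxy-variable removal"],
    "algorithmic_bias": ["fairness-aware training", "running alternative algorithms"],
}

def suggest_mitigations(bias_type: str) -> list[str]:
    """Return candidate mitigation techniques for a detected bias type."""
    # Unknown biases fall back to human review rather than a silent default.
    return BIAS_PLAYBOOK.get(bias_type, ["escalate to socio-legal review"])
```

In a real methodology each entry would also carry the process to follow, not just the technique names; a flat dictionary merely illustrates the one-bias-to-many-remedies shape of the mapping.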

Finally, a fairness-by-design engine will be designed and developed, as described in task 5.4.

Partners Actions

The effort has involved multiple partners due to its multidisciplinary nature. UNIBO has coordinated the activities of this work package, initiating the survey activities and the design of the fair-by-design methodology. UCC and UMU used their expertise to survey the fair-by-design algorithms currently available in the AI field; they also contributed their knowledge of how to develop the technical components that will underpin the methodology. ALLAI has led the survey of social, legal, and ethical approaches to fair-by-design AI. ITI has collaborated across all these tasks, providing support and coordination.

Expected Results

The expected results of the work package are the following:

  • A Fair-by-design methodology for AI where fairness considerations are integrated into the AI lifecycle from the outset; this methodology aims to ensure that AI systems are fairly designed.
  • A compendium of fair-by-design sociological and legal methodologies.
  • A compendium of fair-by-design software engineering methodologies and architectures.
  • Design of fair-by-design methodologies and guidelines to help practitioners and stakeholders in their implementation.
  • A fair-by-design engine that allows AI investigators to study the effect of each input factor on the fairness KPIs, as well as the effects of interactions between input factors; users will be able to build controlled experiments and simulate the design of their AI applications, testing different solutions and the robustness of each one, always in terms of fairness.
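The controlled-experiment loop such an engine would run can be sketched as sweeping input factors over a grid and recording a fairness KPI for each configuration. This is a minimal sketch under our own assumptions: the factor names, the KPI choice (demographic parity gap), and the function names are illustrative, not the engine's actual API.

```python
# Illustrative experiment loop: vary input factors over a grid and record
# a fairness KPI for every combination of factor levels.
from itertools import product

def demographic_parity_gap(positives_a, total_a, positives_b, total_b):
    """|P(pred=1 | group A) - P(pred=1 | group B)|, one possible fairness KPI."""
    return abs(positives_a / total_a - positives_b / total_b)

def run_experiments(factors, evaluate):
    """Evaluate a fairness KPI over the Cartesian product of factor levels.

    factors  -- dict mapping factor name to a list of levels to try
    evaluate -- callable taking one configuration dict, returning a KPI value
    """
    names = list(factors)
    results = []
    for levels in product(*factors.values()):
        config = dict(zip(names, levels))
        results.append((config, evaluate(config)))
    return results
```

A user would pass something like `factors = {"decision_threshold": [0.4, 0.5], "augmentation": [False, True]}` and an `evaluate` function that trains or simulates the AI application under that configuration; comparing KPI values across configurations (and pairs of factors) exposes both main effects and interactions.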

Connection with other WPs

WP5 is connected with the other technical WPs (WP2, WP3, and WP4). In particular, the fair-by-design methodology relies on specific sub-components developed in the other work packages, such as the bias detection and bias mitigation techniques of WP3 and WP4, respectively; the coordination provided by WP2 has allowed for an effective integration of all sub-components. The fair-by-design methodology is also strongly influenced by the outcome of WP6, as coordination with this work package is crucial to align the technical solution with the relevant social, legal, and policy elements and contexts.

WP5 is the core of Aequitas. This is where experts in IT, social sciences, and law collaborate to devise guidelines for making tomorrow’s AI fairer. It’s exciting to look at this problem from so many different perspectives: it gives us the impression of working on something which can really impact the future!

Giovanni Ciatto, WP5 Co-leader

Results so far

To build the fair-by-design methodology, we first had to examine the existing fair-by-design approaches through a social, legal, and ethical lens (task 5.1) and a technological lens (task 5.2).

The key takeaways of this analysis are the following (summarized in deliverables 5.1 and 5.2):

  • There is a clear gap in current fair-by-design practice.
  • The integration of social, legal, ethical, and technological perspectives presents two challenges: complexity and interdisciplinarity.
  • Each perspective operates within its own framework:
    • Social, legal, and ethical perspectives focus on human behaviour, ethical principles designed for digitalization, and regulation, while technological perspectives prioritize efficiency, functionality, and innovation.
    • Bridging these perspectives requires interdisciplinary collaboration.
    • This is compounded by cultural and contextual differences, which are crucial from the legal point of view.
  • Divergent priorities: technological perspectives often prioritize performance and scalability, whereas social and legal considerations emphasize accountability, equity and the protection of (fundamental) rights, democracy, and the rule of law.
  • Pace of change: technology evolves rapidly, outpacing the ability of social, ethical and legal frameworks to adapt. This misalignment leads to regulatory gaps and ethical dilemmas.
  • Lack of a common vocabulary and/or conceptual framework: each discipline has its own vocabulary and concepts while quite often referring to the same elements or objectives. Mapping and matching these diverging vocabularies and concepts is a lengthy but crucial process.

We have built a meta-methodology that can be used to build fair-by-design methodologies. This was the activity performed during task 5.3 and reported in deliverable 5.3. The meta-methodology is composed of a stable set of core principles and an evolvable pool of practices for steering end users towards a deeper understanding of the problem/domain they are dealing with, and for guiding their decision-making. The meta-methodology should then be reified into a guidelines-provisioning software system whose capabilities and degree of automation can be incrementally improved, as prescribed by the meta-methodology itself. We propose developing the fair-by-design methodology via an incremental approach, starting from an initial version that is repeatedly refined. Our approach relies on a questionnaire-based system in which socio-legal and technical domain experts iteratively refine questions and responses, supported by automation.
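The questionnaire-based loop above can be sketched as follows. This is a minimal toy model under our own assumptions: the `Question`/`Questionnaire` structures and the yes/no answer format are illustrative simplifications, not the data model of the actual guidelines-provisioning system.

```python
# Toy model of the questionnaire-based refinement loop: experts extend the
# pool of questions between releases, and end-user answers are mapped to
# the guidelines they trigger. Structures are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    guideline_if_yes: str
    guideline_if_no: str

@dataclass
class Questionnaire:
    questions: list = field(default_factory=list)

    def refine(self, question: Question):
        """Experts iteratively add questions, evolving the pool of practices."""
        self.questions.append(question)

    def guidelines(self, answers):
        """Map one yes/no answer per question to the guideline it triggers."""
        return [
            q.guideline_if_yes if ans else q.guideline_if_no
            for q, ans in zip(self.questions, answers)
        ]
```

In the incremental approach described above, each release of the system would ship a richer questionnaire and a higher degree of automation in turning answers into guidelines, while the core principles stay fixed.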

Conclusion

We address the problem of building fair-by-design methodologies that merge social, legal, ethical, and technical perspectives. Such a methodology is a crucial step towards fair and unbiased AI. Our approach does not directly provide a single fair-by-design methodology: we realized that varying social and legal factors require different strategies to obtain fair AI. Instead, we propose a meta-methodology for building fair-by-design methodologies.
