
NEWS 20240802

Social impact based bias diagnosis and explanations

Avoiding bias and understanding the consequences of artificial intelligence used in decision-making is of high importance to prevent mistreatment and unintended harm. There are disconnections at several levels: between fairness definitions/metrics and the rationale for choosing them, between those metrics and how they affect stakeholders, and between detected deviations and explanations of why they occur. This work package addresses, from a socio-technical perspective, the development of the bias diagnosis and understanding sub-component, which enables bias detection given input in terms of data, algorithm, and social context.

Action plan

With the objective of revealing and explaining bias in data, algorithms, and outputs by identifying the socio-technical factors involved, this work package (months 6-24) addresses, from a socio-technical perspective, the development of the bias diagnosis and awareness sub-component, given input in terms of data, algorithms, and social context. To detect and explain how these impacts arise and operate in AI systems, WP3 focuses on exploring socio-technical factors such as design choices, existing societal biases, or differing cultural perspectives, which may introduce biases beyond those present in the data or the algorithm itself.

When it comes to the introduction of bias in AI systems, we distinguish three key components: the data, the algorithm, and the social context. In a state-of-the-art review, we explored detection methods for dataset biases; we modelled how the proposed social impact-based framework connects to a selection of bias detection tools and use cases; we developed an impact-based framework by applying socio-technical modelling and scaffolding to connect bias and fairness-related effects; and we developed a bias awareness and measurement tool that serves as a diagnosis engine to be integrated into the overall AEQUITAS service. A minimal illustration of the kind of dataset-level check such a diagnosis engine can perform is sketched below.
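The sketch below computes two widely used group-fairness statistics, statistical parity difference and the disparate impact ratio, for a binary outcome split by a protected attribute. It is a generic illustration only, not the AEQUITAS tool: the column names, the toy data, and the 0.8 flagging threshold mentioned in the comments are assumptions made for the example.

```python
# Minimal sketch of a dataset-level bias check (illustrative only; not the
# AEQUITAS diagnosis engine). Column names and thresholds are assumptions.
import pandas as pd

def group_fairness_stats(df: pd.DataFrame, protected: str, outcome: str,
                         privileged, positive=1):
    """Compute positive-outcome rates per group and two common bias metrics."""
    rates = df.groupby(protected)[outcome].apply(lambda y: (y == positive).mean())
    priv_rate = rates[privileged]
    unpriv_rate = rates.drop(privileged).mean()
    return {
        "positive_rate_per_group": rates.to_dict(),
        # Statistical parity difference: 0 means equal positive rates.
        "statistical_parity_difference": unpriv_rate - priv_rate,
        # Disparate impact ratio: values below ~0.8 are often flagged
        # (the "four-fifths rule" used in some regulatory contexts).
        "disparate_impact_ratio": unpriv_rate / priv_rate if priv_rate > 0 else float("nan"),
    }

# Example usage with a toy hiring dataset
data = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "F"],
    "hired":  [0, 1, 0, 1, 1, 0, 1, 0],
})
print(group_fairness_stats(data, protected="gender", outcome="hired", privileged="M"))
```

Statistics like these only describe the data; interpreting whether a measured disparity constitutes harm is exactly where the social context examined in WP3 comes in.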

How WP3 connects with other partners and WPs

As a multi-disciplinary project, multiple partners are working together to achieve the AEQUITAS goals. Under the coordination of UNIBO, which designed the fair-by-design methodology, UCC and UMU surveyed the datasets and technological fair-by-design algorithms and developed the components for bias diagnosis and bias reparation, respectively.

WP3, led by UMU, relates to the other technical WPs (WP2, WP4, WP5, and WP8). WP3 and WP4 each construct a sub-component that serves as a foundation for WP5 (the fair-by-design methodology). Within WP8, together with other partners, WP3 will be involved in producing training and education materials to raise awareness among developers and other stakeholders of the effects of bias in design, and to introduce guidelines and tools that mitigate the introduction of bias at the root of the design process.

Bias and prejudice are attitudes to be kept in hand, not attitudes to be avoided.

Charles Curtis, American attorney, the 31st vice president of the United States (1929–33)

Conclusion

To summarize, detecting bias and, especially, explaining its impacts in AI decision-making systems from a socio-technical point of view is critical to the AEQUITAS project as well as to the AI field at large. With the aim of raising awareness and understanding of the impacts of unfairness in AI systems, Work Package 3 has been working on bridging the disconnect between existing bias detection algorithms and explanations of their impacts grounded in socio-technical analysis.

About the authors

Lili Jiang is an associate professor at the Department of Computing Science, Umeå University, Sweden. Her expertise is in data science and AI trustworthiness (e.g., privacy, fairness). Mattias Brännström is a doctoral student at the Department of Computing Science, Umeå University. He has been working on impact-based explanations for AI decision-making systems.

