Research project lends helping human hand to AI decision-makers

  • Tuesday, July 19, 2022, 5:00 pm
  • ACROFAN=Newswire
  • newswire@acrofan.com
A new research project is setting out to help artificial intelligence systems make fairer choices by lending them a helping human hand.

Researchers from the University of Glasgow and Fujitsu Ltd. have teamed up for the year-long collaboration, which is called ‘End-users fixing fairness issues’, or Effi.

In recent years, artificial intelligence (AI) has become increasingly integrated into automated decision-making systems in industries like banking, healthcare and some nations’ justice systems.

Before they can be used to make decisions, the AI systems must first be ‘trained’ through a process known as machine learning. In this process, the AI system runs through many examples of the human decisions it will be tasked with making, then learns to emulate those choices by identifying, or ‘learning’, a pattern.
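The article does not describe the project’s actual models or data, but the training it outlines is ordinary supervised learning. As a purely illustrative sketch, with invented feature names and values, a classifier can be fitted on examples of past human loan decisions like so:

    # Minimal supervised-learning sketch; all data and names are invented
    from sklearn.linear_model import LogisticRegression

    # Each row is one past application:
    # [income (thousands), debt (thousands), years employed]
    X = [
        [45, 5, 2],
        [80, 20, 10],
        [30, 15, 1],
        [60, 2, 5],
    ]
    # The human decision the model learns to emulate: 1 = approve, 0 = decline
    y = [0, 1, 0, 1]

    model = LogisticRegression().fit(X, y)

    # The trained model now reproduces whatever pattern it found above
    print(model.predict([[50, 4, 3]]))

Whatever regularities exist in the example decisions, fair or not, are what the model absorbs.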

However, an automated AI’s ability to make decisions which are fair to users can be undermined by the conscious or unconscious biases of the humans who made the example decisions it learned from. On occasion, the AI itself can even ‘go rogue’ and introduce unfairness of its own.

An AI trained to decide on loan applications, for example, might decline an unfairly high proportion of them if it has learned a pattern of rejecting applications from certain postcodes, and those postcodes happen to be home to marginalised populations.
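One simple way to surface that kind of proxy bias, sketched below with invented data (the article does not describe the project’s auditing methods), is to compare the model’s decline rates across postcode areas:

    # Fairness-audit sketch: compare decline rates by postcode area.
    # All data is invented.
    import pandas as pd

    decisions = pd.DataFrame({
        "postcode_area": ["G1", "G1", "G1", "G2", "G2", "G2"],
        "declined":      [1,    1,    1,    0,    1,    0],
    })

    # A large gap (here G1 at 1.00 vs G2 at 0.33) suggests the model
    # is using postcode as a proxy for something it should not
    print(decisions.groupby("postcode_area")["declined"].mean())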

The Effi project sets out to address some of these issues with an approach known as ‘human-in-the-loop’ machine learning, which integrates people more closely into the machine learning process to help AIs make fair decisions.

The project builds on previous collaborations between Fujitsu and Dr Simone Stumpf, of the University of Glasgow’s School of Computing Science. Those projects have explored new types of human-in-the-loop user interfaces for loan applications based on an approach called explanatory debugging.

In these interfaces, human users can see clearly how the AI has reached its decision through a graphical representation of the process. If they suspect that the decision has been affected by bias, they can ‘flag’ it and suggest corrections to the ‘bug’ they have identified. From that feedback, the AI can learn to make better decisions in the future.
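The article does not detail how the interface works internally, but one plausible shape for such an explanatory-debugging loop, sketched below with entirely invented names and data, is to show the user each feature’s contribution to a decision, let them flag a feature as a biased signal, and retrain without it:

    # Hypothetical explanatory-debugging loop, not the project's actual
    # interface: explain a decision via per-feature contributions, let
    # a user flag a biased feature, then retrain without that feature.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income", "debt", "postcode_score"]  # invented names
    X = np.array([[0.45, 0.05, 0.2],
                  [0.80, 0.20, 0.9],
                  [0.30, 0.15, 0.1],
                  [0.60, 0.02, 0.8]])
    y = np.array([0, 1, 0, 1])
    model = LogisticRegression().fit(X, y)

    # 'Explain' one decision: each feature's contribution to the score
    applicant = X[2]
    for name, c in zip(features, model.coef_[0] * applicant):
        print(f"{name}: {c:+.3f}")

    # The user flags 'postcode_score' as biased; drop it and retrain
    keep = [i for i, f in enumerate(features) if f != "postcode_score"]
    fixed_model = LogisticRegression().fit(X[:, keep], y)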

The Effi project will focus on three areas of investigation:

(1) How people can better interact with AI systems to identify and fix issues with fairness

(2) How AI systems can best incorporate user feedback to ensure improved fairness and reduced bias

(3) How people’s bias can be identified and acted upon to ensure that unfairness is not embedded in the AI’s decision-making

Dr Stumpf said: “Artificial intelligence has tremendous potential to provide support for a wide range of human activities and sectors of industry.

“However, AI is only ever as effective as it is trained to be. Greater integration of AI into existing systems has sometimes created situations where AI decision-makers have reflected the biases of their creators, to the detriment of end-users. There is an urgent need to build reliable, safe and trustworthy systems capable of making fair judgements.

“Human-in-the-loop machine learning can more effectively guide the development of decision-making AIs to ensure that happens. I’m delighted to be continuing my partnership with Fujitsu on the Effi project, and I’m looking forward to working with my colleagues and our study participants to move the field of AI decision-making forward.”

Over the course of the project, the researchers will revamp the human-in-the-loop decision-making interfaces they have previously developed. They will also create new algorithms to integrate user feedback into the machine learning process, and build a new prototype interface.

The interface will be robustly tested in a study in which a large number of participants assess and fix the AI decision-making model. Once the study is complete, the team will analyse the data to identify problematic user feedback and find new ways to prevent ‘bad’ feedback from being integrated into the system.
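The article does not say how ‘bad’ feedback will be detected. One simple, hypothetical gate, shown below with an illustrative metric, is to accept a feedback-driven model update only if it does not widen a fairness gap, here the difference in decline rates between two groups, on held-out data:

    # Hypothetical feedback gate: keep a proposed model update only if
    # it does not widen the decline-rate gap between two groups on
    # held-out data. The metric and tolerance are illustrative only.

    def decline_rate(model, X):
        # Assumes the model predicts 1 = approve, 0 = decline
        return 1.0 - model.predict(X).mean()

    def accept_update(old_model, new_model, group_a, group_b, tol=0.0):
        """Return the new model only if the fairness gap does not grow."""
        old_gap = abs(decline_rate(old_model, group_a)
                      - decline_rate(old_model, group_b))
        new_gap = abs(decline_rate(new_model, group_a)
                      - decline_rate(new_model, group_b))
        return new_model if new_gap <= old_gap + tol else old_model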

Dr Daisuke Fukuda, head of the research centre for AI Ethics at Fujitsu Research, Fujitsu Ltd, said: “Through our collaboration with Dr Simone Stumpf, we have explored the diverse senses of fairness that people around the world bring to artificial intelligence. That research led to the development of systems that reflect those diverse perspectives in AI. We see our collaboration with Dr Stumpf as a powerful way to advance Fujitsu’s work on AI ethics.

“This time, we will take on the new challenge of building fair AI technology grounded in people’s views. As demand for AI ethics grows across society, including industry and academia, we hope that Dr Stumpf and Fujitsu will continue working together so that Fujitsu’s research contributes to our society.”