Should Artificial Intelligence give reasons for its decisions even if it affects accuracy? Citizens’ Juries deliberate

27 Feb

[Image: Citizens’ Jury day 3 small group work]

Is it necessary for artificial intelligence (AI) to give reasons for its decisions, even if it means the results aren’t as accurate? When and why are explanations of AI decisions most important? A leading research team investigated these questions in Coventry and Manchester, and the results could affect future national policy.

The National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre (Greater Manchester PSTRC) and the Information Commissioner’s Office (ICO) commissioned Citizens Juries c.i.c. to find out what the general public thinks. The findings of the research will inform guidance under development by the ICO and the Alan Turing Institute to help organisations explain decisions made by AI to the individuals affected. One question the two juries considered was whether, if a computer gives a diagnosis, it is better to receive an explanation of how the computer reached that diagnosis, even if that means the diagnosis is likely to be less accurate.

Two juries, each made up of 18 people drawn from a cross-section of the public, were convened: the first week in Coventry, and the second week the process was repeated in Manchester. The jurors came together for five days to hear expert evidence before making their recommendations.

Professor Niels Peek, Research Lead for Safety Informatics at the NIHR Greater Manchester PSTRC based at The University of Manchester, and Principal Investigator for this research, said: “AI is fast becoming extremely useful in healthcare diagnosis and, in some cases, can be more accurate than a doctor. The most advanced AI systems are now so complex that some aren’t able to give a reason for a diagnosis. If people are given a diagnosis or decision by a computer but aren’t able to ask for a reason, does that affect how much they trust it? Or, are they prepared to forgo an explanation if that means greater accuracy?

“We need to find out what the general public thinks about this and that’s why we’re conducting this research and asking for their feedback to ensure they have the opportunity to give their opinion on something that will affect patient safety.”

The two juries considered the importance of explanations, and the trade-off between accuracy and explainability, for decisions made by AI in four different scenarios:

  • Healthcare: diagnosis of acute stroke
  • Healthcare: finding matches between kidney transplant donors and recipients
  • Criminal Justice: deciding which offenders should be referred to a rehabilitation programme
  • Recruitment: screening job applications and making shortlisting decisions

Dr Malcolm Oswald, Director of Citizens Juries c.i.c., said: “These are important questions now, and they will become ever more crucial as artificial intelligence increasingly affects our daily lives. It’s a complicated topic, and these citizens’ juries give us five days to bring expert evidence and the time for people to work together to reach reasoned recommendations that will inform national policy.”

To find out more about the NIHR Greater Manchester PSTRC, visit http://www.patientsafety.manchester.ac.uk. You can learn more about the breadth of projects Citizens Juries c.i.c. is working on at https://citizensjuries.org/.
