Tag Archives: Artificial Intelligence

Developing Artificial Intelligence to triage patients

13 Jan

by Sarah Darley and Niels Peek


Work is underway to develop a new system which uses Artificial Intelligence to triage patients.

We are developing the Patient Automated Triage and Clinical Hub Scheduling (PATCHS) system, which is designed to be more accurate than previous automated triage systems because it takes into account factors such as a patient’s medical history.

To access the system, patients will visit their GP practice website and enter the reason they need to contact a GP. They’ll then be asked a number of relevant questions. PATCHS will analyse this information alongside the data already stored about the patient, such as their age and medical history, to produce a tailored triage decision.

In a recently completed feasibility study, PATCHS was trained to make triage decisions in the same way as a GP. This involved the system learning from 10,000 historical triage decisions made by GPs. When the outcomes were compared, PATCHS achieved similar levels of accuracy to a GP.
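To make the training step concrete, here is a minimal sketch of how a triage model can learn from historical GP decisions, in the spirit of the feasibility study described above. It is not the actual PATCHS implementation: the model choice (TF-IDF plus logistic regression), the triage labels and the tiny inline dataset are all illustrative assumptions.

```python
# A minimal sketch of training a triage classifier on historical GP decisions.
# Everything here is illustrative: the real PATCHS features, labels and data
# are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical records: the patient's free-text reason for contact, paired
# with the triage decision a GP actually made at the time.
requests = [
    "crushing chest pain for the last hour",
    "repeat prescription for blood pressure tablets",
    "persistent cough for three weeks",
    "rash on arm, not itchy, no fever",
] * 50  # repeated so the example runs; the study used 10,000 real decisions
gp_decisions = ["urgent", "routine", "routine", "self-care"] * 50

train_X, test_X, train_y, test_y = train_test_split(
    requests, gp_decisions, test_size=0.2, random_state=0
)

# Learn a mapping from request text to the GP's triage decision.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_X, train_y)

# "Similar accuracy to a GP" is assessed by comparing the model's predictions
# with the decisions GPs actually made on held-out cases.
print("Agreement with GP decisions:", accuracy_score(test_y, model.predict(test_X)))
```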

As PATCHS has proved successful, it is now being tested on a larger scale using 150,000 triage decisions across Salford and Manchester. This next stage of testing will also examine how the system is implemented, how easy it is for patients and GP practice staff to use, and its potential to be rolled out on a large scale.

We believe that the potential for PATCHS to make a real difference to primary care is huge and wide-reaching. Not only could it reduce waiting times, it could also increase access to GP services and extend appointment times, while helping patients to receive the right care for them.

There are also benefits for GP practice staff, and any money saved can be used to improve patient care. More broadly, PATCHS will give us insight into the use of AI in healthcare.

PATCHS is funded by Innovate UK. The project is led by Dr Ben Brown, a GP and senior lecturer, and the software is being developed by the technology company Spectra Analytics.

NIHR Greater Manchester PSTRC Symposium 2019 – through the eyes of an attendee

9 Sep

by Angela Ruddock – lay member of the PSTRC’s Executive Management Board (EMB)


Delegates at the NIHR Greater Manchester PSTRC annual symposium

In May, the NIHR Greater Manchester PSTRC held its annual symposium. As a lay member of the centre’s Executive Management Board (EMB), I was looking forward to attending.

I’m one of two lay members of the EMB and my role is to ensure that the patient/public voice is fully represented and to give updates on how patients have been involved in research while encouraging more public involvement.

My personal experiences both past and present allow me to provide meaningful input into the EMB. The PSTRC has an innovative approach to public and patient involvement/engagement and I’m able to play a part in shaping it.

The symposium – on the day

As a member of the EMB I had heard about the plans for the symposium over many months. I was particularly looking forward to the presentations by Richard Preece, who spoke from a Greater Manchester perspective, and Aidan Fowler, who spoke about patient safety nationally. The other planned sessions, on transitions of care and Artificial Intelligence, interested me greatly, so knowing that international experts were speaking on these topics was genuinely exciting.

The list of attendees included academics, NHS decision-makers, researchers, policymakers and experts in their fields. Having all these people in a room to take part in panel discussions was an interesting prospect.

The session on Artificial Intelligence was particularly thought provoking for me. The recent Citizens’ Juries commissioned by the PSTRC were discussed and a booklet of the results and process was available. The work, looking at Artificial Intelligence (AI) and decision making in healthcare, was carried out in partnership with the Information Commissioner’s Office (ICO).

The juries concluded that accuracy in decision-making should be prioritised over the ability to explain how a decision is reached. It was very interesting to hear the jurors’ viewpoints being discussed by experts who are leaders in the field of Artificial Intelligence. They were able to explain some of the issues in a way that was relatable.

International speakers

As well as hearing about research currently underway at the PSTRC, the symposium was an opportunity to learn from experts working on similar projects outside of the centre.

Professor Anita Burgun, Professor of Medical Informatics at Paris Descartes University, presented on the role AI can play in providing accurate and detailed information to help ensure the safety of patients. For me, she highlighted the need for further discussions with patients and patient contributors to develop public understanding of the role AI can play in extracting accurate and timely patient information.

Professor Karina Aase, Centre Director of SHARE, the Centre for Resilience in Healthcare at the University of Stavanger in Norway, gave a keynote speech on ‘Quality and Safety in Care Transitions: Expanding our Understanding’.

She began by sharing a very human case study about an elderly patient with multiple morbidities who was nearing the end of his life. The situation was so familiar to me and, no doubt, to other ex-carers and patients. The story focussed on missed communication between the patient, his relatives and the healthcare professionals, which led to avoidable lapses in safety across care transitions. Karina stressed the importance of effective communication in ensuring his family’s wishes were taken into account, and given priority over information obtained through algorithms, so that the patient could have a peaceful end of life.

Karina’s work relates well to the PSTRC’s Safer Care Transitions theme, identifying some common issues around ensuring that the concerns of patients and their carers are fully listened to, particularly when weighed with, rather than against, other available information about the patient.

Highlight of the day

The symposium opened my eyes to the role AI has in patient safety both now and in the near future. It was clear that there’s work to do in educating patients on how AI can improve accuracy and speed in relation to diagnosis and treatment. 

The PSTRC is playing an important role and there’s potential to do more in the coming months and years. The commissioning of the Citizens’ Juries was a positive step and I look forward to hearing how this work is built upon.

Artificial Intelligence and decision making – experts discuss results of Citizens’ Juries to inform national guidance

1 Jul


Is it necessary to give reasons for decisions made using artificial intelligence (AI) software, even if the results may then be less accurate? Leading academics and experts believe that, as the development of AI advances, guidance is needed on how it can be used in decision making.

The National Institute for Health Research Greater Manchester Patient Safety Translational Research Centre (NIHR Greater Manchester PSTRC) and the Information Commissioner’s Office (ICO) commissioned two citizens’ juries to investigate AI and decision making. In healthcare, the juries prioritised accuracy over explanation, but in other scenarios they reached different conclusions; clearly, context matters.

The results were presented and discussed recently at a workshop attended by NHS National Data Guardian, Dame Fiona Caldicott, and chaired by Professor Stephen Campbell, Director of the Greater Manchester PSTRC. More than 50 people from across the country participated, including members of the two juries, NHS executives, academics and researchers working in a range of specialisms such as philosophy, computer science and the arts.

Professor Niels Peek, Theme Lead for Safety Informatics at the Greater Manchester PSTRC, said: “Before there is widespread adoption of AI across the NHS it is important to develop guidelines to make sure patient safety is assured. We were keen to work with the ICO on this to help inform the guidance it is writing. We chose to commission two citizens’ juries to find out what the public think about this complex issue.

“It was valuable to reflect on the findings of the juries with a large group of experts during the workshop. We presented the research and invited questions before taking part in table discussions about AI and decision making which helped to give extra weight to our existing findings.”

Kayshani Gibbon, from the Royal Society of Arts (RSA), also spoke about research it is due to publish on the ethical use of AI, giving attendees additional background information.

Dame Fiona Caldicott, NHS National Data Guardian, said: “I welcome this project and the way it has involved a cross-section of the public in these important considerations on AI that will have a major impact on our lives. We have also used the citizens’ jury approach, and have found it’s a valuable way to learn in more depth what members of the public think.”

Ben Bridgewater, CEO at Health Innovation Manchester, attended the event and expressed how important he felt it was to be establishing a position on AI and decision making in the NHS. Ben said: “In Manchester we’re at the forefront of innovation in healthcare so being involved in this research and having the opportunity to comment on the results has been valuable.”

Simon McDougall, Executive Director for Technology and Innovation at the ICO, spoke about the wider significance of the research: “Better understanding the public’s views on explaining AI decisions is vital for the ICO and The Alan Turing Institute, who are working with us on this, as we are developing guidance for organisations in this area.

“The citizens’ juries gave us the opportunity for an in-depth exploration and discussion about the issues arising from AI decision-making and equipped us with a unique and informed public opinion on this complex issue.

“Our research findings, informed by the juries, have recently been published in our interim report.”

The citizens’ juries were run by Dr Malcolm Oswald, Director of Citizens Juries c.i.c. He said: “Bringing together experts from across the country and from varying specialisms to discuss AI and explainability was a great opportunity to open up the debate. These questions on AI will become ever more crucial as it increasingly affects our daily lives. It’s a complicated topic, and these citizens’ juries gave us five days to bring expert evidence and the time for people to work together to reach reasoned recommendations that will inform national policy.”

To find out more about the NIHR Greater Manchester PSTRC visit http://www.patientsafety.manchester.ac.uk/ and you can learn more about the breadth of projects Citizens Juries c.i.c. are working on at https://citizensjuries.org/

Experts gathered to discuss patient safety at symposium in Manchester

26 Jun

l-r: Dr Aidan Fowler (NHS National Director of Patient Safety), Prof Stephen Campbell (Director, NIHR Greater Manchester PSTRC), Richard Preece (Executive Lead Quality, Greater Manchester Health and Social Care Partnership)

A symposium was held by the National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre (PSTRC) in Manchester to discuss some of its areas of research. Experts from Paris and Norway joined NHS National Director of Patient Safety, Dr Aidan Fowler, to speak at the event, which around 100 academics, researchers, policy makers and NHS executives attended last month.

The Greater Manchester PSTRC is one of three national NIHR-funded patient safety research centres and is based at The University of Manchester. It works in partnership with Salford Royal NHS Foundation Trust. The centre carries out research across four themes which investigate patient safety in primary care and transitions of care, and its work is being adopted by the NHS, where it is making a real difference.

The event gave attendees the opportunity to learn about crucial patient safety issues from experts and panel sessions allowed for meaningful discussion and insight.

The main areas of patient safety covered at the event were Artificial Intelligence, transitions of care, and avoidable harm, as well as the unique health and social care infrastructure in Greater Manchester.

Richard Preece, Executive Lead for Quality at the Greater Manchester Health and Social Care Partnership, spoke about the Greater Manchester Quality Improvement framework, which he published at the end of last year, highlighting that it is a framework for system safety. Richard said: “We work in partnership and this involves our universities and research centres such as the Greater Manchester PSTRC, as well as patients and staff, to make care safer and to improve the outcomes and experiences of patients and service users.

“The symposium was a valuable opportunity to hear about some of the research underway and to speak to researchers and decision makers because by working together in this way we can start to improve the care system.”

Professor Anita Burgun from Paris Descartes University and Paris Artificial Intelligence Research Institute delivered a keynote speech on hybrid approaches in AI and its relevance to patient safety. She talked about her recent research looking at the overall prevalence of adverse events reported in social media.

Anita joined a panel along with the PSTRC’s Theme Lead for Safety Informatics, Professor Niels Peek; Professor David Wong, Lecturer in Health Informatics at the University of Leeds; and Professor David Clifton from the Department of Engineering Science at the University of Oxford, who chaired the session. The panel discussed the PSTRC’s recent work on AI and patient safety.

The international input continued with Professor Karina Aase, Centre Director of SHARE, the Centre for Resilience in Healthcare at the University of Stavanger, Norway. Karina presented some highlights from her research on ‘Quality and Safety in Care Transitions: Expanding our Understanding’, speaking about creating common ground, widening the current perspective on care transitions, and working towards a framework for researching them.

Professor Stephen Campbell, Director of the NIHR Greater Manchester PSTRC, said: “We were delighted that so many experts in patient safety could join us at our first symposium where we were all in agreement that the patient should be at the centre of everything we do. Working together and sharing research will help us to continue to improve patient safety.

“We value the input from all speakers and attendees who asked probing and insightful questions during our panel sessions. The research we undertake is making a tangible difference in the NHS and we are looking forward to seeing more of our research in practice, improving safety for patients.”

Should Artificial Intelligence give reasons for decisions even if it affects accuracy? – Citizens’ Juries deliberate

27 Feb


Is it necessary for artificial intelligence (AI) to give reasons for its decisions even if it means the results aren’t as accurate? When and why are explanations of AI decisions most important? A leading research team investigated these questions in Coventry and Manchester and the results could affect future national policy.

The National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre (Greater Manchester PSTRC) and the Information Commissioner’s Office (ICO) commissioned Citizens Juries c.i.c. to find out what the general public thinks. The findings of the research will inform guidance under development by the ICO and the Alan Turing Institute to help organisations explain decisions made by AI to the individuals affected. One question considered by the two juries was: if a computer gives a diagnosis, is it better to be given an explanation of how the computer reached it, even if that means the diagnosis is likely to be less accurate?

Each jury was made up of 18 people representing a cross-section of the public. The first was held in Coventry and, the following week, the process was repeated in Manchester. The jurors came together for five days to hear expert evidence before making their recommendations.

Professor Niels Peek, Research Lead for Safety Informatics at the NIHR Greater Manchester PSTRC based at The University of Manchester, and Principal Investigator for this research, said: “AI is fast becoming extremely useful in healthcare diagnosis and, in some cases, can be more accurate than a doctor. The most advanced AI systems are now so complex that some aren’t able to give a reason for a diagnosis. If people are given a diagnosis or decision by a computer but aren’t able to ask for a reason, does that affect how much they trust it? Or, are they prepared to forgo an explanation if that means greater accuracy?

“We need to find out what the general public thinks about this and that’s why we’re conducting this research and asking for their feedback to ensure they have the opportunity to give their opinion on something that will affect patient safety.”

The two juries considered the importance of explanations and the trade-off between accuracy and explanations for decisions made by AI in four different scenarios:

  • Healthcare: diagnosis of acute stroke
  • Healthcare: finding matches between kidney transplant donors and recipients
  • Criminal Justice: deciding which offenders should be referred to a rehabilitation programme
  • Recruitment: screening job applications and making shortlisting decisions

Dr Malcolm Oswald, Director of Citizens Juries c.i.c., said: “These are important questions now, and they will become ever more crucial as artificial intelligence increasingly affects our daily lives. It’s a complicated topic, and these citizens’ juries give us five days to bring expert evidence and the time for people to work together to reach reasoned recommendations that will inform national policy.”

To find out more about the NIHR Greater Manchester PSTRC visit: http://www.patientsafety.manchester.ac.uk and you can learn more about the breadth of projects Citizens Juries c.i.c. are working on at https://citizensjuries.org/.

Citizens’ Juries: Using public opinion on Artificial Intelligence to inform policy

14 Feb

by Carly Rolfe


Citizens’ Juries offer a novel way of gathering public opinion on important issues, which is used to inform decisions and policy-making at a national level. At the beginning of a Citizens’ Jury, a question is posed to public ‘jurors’. Expert ‘witnesses’ then provide detailed evidence for and against, providing jurors with a balanced view from which they ultimately make a decision on the issue.

This month, the NIHR Greater Manchester PSTRC, in conjunction with the Information Commissioner’s Office (ICO) and Citizens Juries c.i.c., is running two Citizens’ Juries: the first in Coventry and the second in Manchester.

The juries will consider four questions around the use of Artificial Intelligence (AI) for decision making in healthcare, criminal justice and recruitment. Specifically, jurors will be asked how important it is for AI to explain how it reaches its decisions, even if the ability to do so makes those decisions less accurate.

The results of the juries will feed directly into national guidance that the ICO is producing on citizens’ rights to an explanation when decisions that affect people are made using AI.

For more information, see the Citizens’ Juries page on the Greater Manchester PSTRC website, or the Citizens Juries c.i.c. website.

Using Artificial Intelligence to Process Correspondence Sent to GPs

14 Feb


by Ben Brown, Wellcome Clinical Research Career Development Fellow and GP

GP practices receive hundreds of letters each day about their patients. These letters inform GPs of treatment undertaken by other health professionals (e.g. out-of-hours services), communicate new diagnoses (e.g. from hospital specialists), and ask them to prescribe new medications. In a typical GP practice all letters are read and processed by GPs, which is a huge administrative burden. However, it’s thought that up to 90% don’t require GP input – they are simply ‘FYI’. Freeing GPs from reading such letters could save 40 minutes per GP per day,[1] time that could be used for more appropriate tasks such as direct patient care – or a well-deserved cup of tea.

Software is already used to help GP practices read and process letters; the most popular package is Docman, used by roughly 80% of all GP practices. A team of researchers, led by academic GP Dr Ben Brown in collaboration with Spectra Analytics, has started working with Docman to produce an Artificially Intelligent (AI) algorithm (a set of rules learned by a computer to carry out a task) that can read patient letters. The intention is that the algorithm will then highlight to GPs only those letters that require their input.

The team is currently collecting and analysing thousands of letters previously read and processed by GPs so that the algorithm can learn to do the same itself. The PSTRC has recently agreed to fund the next step of the project, which will involve testing the algorithm in real life in GP practices around the country. If successful, it’s hoped the algorithm will be tested in a trial across the whole of England.
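As a rough illustration of the idea, the sketch below trains a binary “does this letter need GP input?” classifier and only auto-files a letter when the model is very confident it is FYI. This is a toy under stated assumptions, not the Docman/Spectra Analytics algorithm: the example letters, labels and confidence threshold are all hypothetical.

```python
# A toy sketch of flagging letters that need GP input. The letters, labels
# and threshold are hypothetical; the real algorithm is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

letters = [
    "Discharge summary: seen in A&E, no follow-up required.",
    "Please initiate warfarin and arrange INR monitoring.",
    "Patient attended out-of-hours service, advice given, FYI only.",
    "New diagnosis of type 2 diabetes; please add to register and review.",
] * 50  # repeated so the toy runs; the project is collecting thousands of real letters
needs_gp_input = [0, 1, 0, 1] * 50  # 1 = requires GP action, 0 = FYI only

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(letters, needs_gp_input)

# Safety-first routing: only file a letter away automatically when the model
# is very confident no GP action is needed, so errors tend towards showing
# the GP too much rather than too little.
AUTO_FILE_THRESHOLD = 0.05  # hypothetical; would be tuned on held-out letters

new_letter = "Medication changed by hospital; please update repeat prescription."
p_needs_action = model.predict_proba([new_letter])[0][1]  # classes_ is [0, 1]
route = "GP inbox" if p_needs_action > AUTO_FILE_THRESHOLD else "auto-file"
print(f"{route}: {new_letter}")
```

The threshold choice is the interesting design decision here: in a safety-critical filter you would trade some of the potential time saving for a very low rate of wrongly auto-filed letters.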

If you work at a GP practice and would like to join the project, or if you don’t but would simply like to know more about it, please contact Dr Ben Brown.

[1] NHS England, Training for reception and clerical staff, General Practice Forward View (2016). https://www.england.nhs.uk/gp/gpfv/redesign/gpdp/reception-clerical/

If a computer gives you a diagnosis, should it also give you an explanation?

15 Nov

Jurors from previous Citizens’ Juries

by Malcolm Oswald

Artificial intelligence (AI) plays an increasing role in our daily lives. Computers are being trained to do many things, including making medical diagnoses. For example, AI can diagnose skin cancer from skin images as reliably as dermatologists, and this clever software is only going to get better. But how do we know whether a diagnosis we are given is accurate? If a human doctor gives us a diagnosis, we can ask for an explanation. However, the most advanced AI systems are very complex; they do not just act according to pre-defined rules but continue to “learn”, and it may not be possible to explain to a patient how the computer reached its diagnosis.

If you were given a diagnosis by a computer, and were given the choice, would you always prefer to be given an explanation of how the computer reached its diagnosis even if that meant the computer’s diagnosis was likely to be a little less accurate?

That is one question being put to two “citizens’ juries” being commissioned by the NIHR Greater Manchester Patient Safety Translational Research Centre (Greater Manchester PSTRC) in early 2019. Citizens Juries c.i.c. will recruit 18 people from around Manchester – chosen to represent a cross-section of the public – to come together for five days to hear expert evidence and tackle difficult questions concerning how AI should be used within healthcare. The process will then be repeated with 18 different people from around Coventry to see whether they reach the same conclusions.

The Greater Manchester PSTRC is collaborating on this project with the Information Commissioner’s Office, which has the challenging task of regulating the use of AI. The results of the juries will feed directly into national guidance that the Information Commissioner’s Office is producing on citizens’ rights to an explanation when decisions that affect people are made using AI.

For more information about citizens’ juries, see the Citizens Juries c.i.c. website, or if you have a specific enquiry about this project, email Dr Malcolm Oswald or the principal investigator, Prof Niels Peek.

Using Artificial Intelligence to help primary care triage

5 Oct


by Ben Brown

Researchers at The University of Manchester and Spectra Analytics are developing an Artificial Intelligence (AI) system to help support GP practices triage requests for appointments – the Patient Automated Triage and Clinical Hub Scheduling (PATCHS) system.

It’s often difficult to get an appointment with a GP, and it’s estimated that over a quarter of GP appointments could have been dealt with in an alternative way, for example by another clinician (such as a nurse) or through patient self-care. One solution may therefore be to reserve GP appointments for patients who really need them.

While receptionists at GP practices can direct patients to the most appropriate care provider, not all practices do this, and patients are often unwilling to disclose their problems to them. PATCHS aims to tackle this by providing an online tool that will efficiently direct patients to the right place, 24 hours a day. Patients will input their reasons for requesting a GP appointment, and PATCHS will analyse each request, taking into account the patient’s medical history as well as other factors such as the weather, to reach a triage decision. It is hoped the system could ultimately be integrated into practice websites and medical records, ensuring effective triage at the beginning of a patient’s care pathway.
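To sketch how a free-text request can be combined with structured context (age, medical history flags, even the weather) in a single model, consider the following. This is a guess at the general shape of such a pipeline, not the real PATCHS implementation; every column name, value and label below is a made-up illustration.

```python
# A sketch of fusing free text with structured features for triage.
# All columns, values and labels are hypothetical illustrations.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rows = [
    ("short of breath climbing stairs", 72, 1, 1, "gp_urgent"),
    ("need a fit note for back pain", 35, 0, 0, "admin"),
    ("child with earache since yesterday", 4, 0, 0, "nurse"),
    ("sore throat, mild, for two days", 28, 0, 0, "self_care"),
] * 50  # repeated so the example runs
data = pd.DataFrame(
    rows, columns=["request_text", "age", "has_copd", "icy_weather", "triage"]
)

# Vectorise the free-text request and pass the structured context through
# unchanged, so a single model sees both kinds of signal.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "request_text"),
    ("context", "passthrough", ["age", "has_copd", "icy_weather"]),
])
model = make_pipeline(features, LogisticRegression(max_iter=1000))
model.fit(data.drop(columns="triage"), data["triage"])

new_request = pd.DataFrame([{
    "request_text": "wheezy and short of breath since this morning",
    "age": 70, "has_copd": 1, "icy_weather": 1,
}])
print(model.predict(new_request)[0])  # e.g. "gp_urgent"
```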

The project is funded by Innovate UK and is currently in the development stage: PATCHS is learning from existing data about how patients are triaged when booking a GP appointment. The team are also looking for volunteers – both patients and doctors – to participate in the project. If you’d like to know more, please contact Dr Ben Brown at benjamin.brown@manchester.ac.uk.