22 May 2020

New project will help to prevent bias and discrimination in medical AI


Widely used medical decision support systems based on algorithms carry a risk of overlooking high-risk patients if not designed carefully. Therefore, researchers from the University of Copenhagen’s Department of Computer Science and Faculty of Law have joined forces to investigate how bias and discrimination within medical AI can be detected and avoided.


In recent years, Artificial Intelligence (AI) has been introduced into healthcare in the form of decision support systems. More and more medical practitioners are using AI for support when identifying high-risk patients or, for example, when deciding on specific treatments for cancer patients. Yet there is a risk that medical AI absorbs and propagates bias and discrimination, which could have serious consequences for people’s lives.

- Data-driven healthcare can be really helpful, but we need to be very conscious of how it is designed. Recent studies have exposed bias in medical algorithms used in the United States, and we’re wondering whether algorithms used in the Danish healthcare system could also be biased. We would like to apply legal principles and frameworks, such as human rights law, and develop ethical guidelines, suggestions for legal reforms, bias awareness checklists for algorithm development, and design blueprints for healthcare AI solutions, says Professor Katarzyna Wac from the Department of Computer Science.

Professor Timo Minssen from the Faculty of Law concurs and adds:

- At our research center, CeBIL, we have touched upon some of these issues when assessing the legal and ethical challenges of medical AI in another research project with Harvard Law School. So we already knew that the issues are vast. But this project allows us for the first time to dive deeper into the technical details and to analyze not only the challenges but also the design opportunities.

These ideas provide the framework for the new, interdisciplinary research project AI@CARE, which will be co-led by Katarzyna Wac and Timo Minssen. Together with Audrey Lebret (postdoc) and Sofia Laghouila (PhD student), they will merge computer science and law perspectives to look at the algorithmic, legal and ethical factors that are relevant to bias and discrimination scenarios in healthcare.

Missing information can be fatal

To understand why medical AI risks being biased, it helps to know how AI systems work. In machine learning, a technique within AI used in healthcare solutions, machines are programmed through algorithms to recognise patterns in data and make predictions. To set up such a system, the machines need to be “fed” with representative data covering the important variables in order to provide accurate and effective feedback.

- When machines make decisions, these are based on past datasets that the system has learned from. If individual variables are missing from the algorithms, the output will be biased and will therefore not provide specific feedback for the individuals who may need it the most. We know from experience that some individuals who are in serious need of treatment are sometimes also the ones we have the least data about, because they don’t participate in the public surveys that contribute data to medical models, Katarzyna Wac explains.
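To make this mechanism concrete, here is a minimal, hypothetical sketch in Python (not code from the AI@CARE project): a risk model is trained on synthetic data in which one origin group is heavily under-represented and follows a different risk pattern, and the model then misses a larger share of the high-risk patients in that group. All group names, thresholds and sample sizes are illustrative assumptions.

```python
# Minimal illustrative sketch (not AI@CARE code): a risk model trained on data
# where one patient group is under-represented tends to miss more high-risk
# patients in that group. All numbers and names are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def simulate(n, group):
    """Simulate n patients: a single measured marker and their true high-risk status.
    The link between marker and risk is assumed to differ between the two groups."""
    marker = rng.normal(0.0, 1.0, n)
    if group == "majority":
        risk = (marker + rng.normal(0.0, 0.5, n)) > 1.0        # marker predicts risk well
    else:
        risk = (0.3 * marker + rng.normal(0.0, 0.5, n)) > 0.3  # weaker link, different pattern
    return marker.reshape(-1, 1), risk.astype(int)

# Training data: the minority group is heavily under-represented (100 vs 5000 patients).
X_maj, y_maj = simulate(5000, "majority")
X_min, y_min = simulate(100, "minority")
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluate on fresh data from each group: recall = the share of truly high-risk
# patients that the model actually flags.
for group in ["majority", "minority"]:
    X_test, y_test = simulate(2000, group)
    flagged = model.predict(X_test)
    print(group, "recall of high-risk patients:", round(recall_score(y_test, flagged), 2))
```

A model like this can look accurate on average, because the majority group dominates the evaluation, which is exactly why performance needs to be checked separately for each patient group.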

In an American study published in Science in October 2019, researchers found significant racial bias in a widely used algorithm that helps determine which patients need additional attention. The algorithm favored white patients over black patients, even though the black patients were sicker and had more chronic health conditions, NBC News reports.

- The American study came out after we had submitted our proposal, so it strongly supported the motivation for our AI@CARE project. In the Danish dataset about chronic illness that we have access to, race is not considered a specific variable. However, I know that origin, in terms of western/non-western, is a variable, because the chronic illness risk models for these groups are different. Even I, being of Polish origin, have a higher risk of cardiovascular disease than Danes. It’s extremely important that AI systems include factors like these, says Katarzyna Wac.

She and her colleagues will use the Danish dataset to conceptualize and computationally model how and why bias and discrimination can slip in, and how the problem can best be addressed, both legally and technologically. The project started on 1 April and will last for three years. It is funded by the University of Copenhagen’s DATA+ pool, which supports multi-disciplinary projects involving data science.