Project hopes to ensure trust between citizens and algorithms
DIKU is part of a new, broad consortium that has received DKK 100 million to study how data and algorithms can be used to support society, enhance democratic processes, and overcome present challenges. The aim of the research is to develop new societal and technological solutions that ensure trust between citizens and society in an even more digitalized future.
Populism, polarization and conspiracies are well-known societal phenomena, which have recently been linked to the increased and broad use of algorithmic data processing in society. This is the opinion of several researchers who are now coming together across disciplines and universities to investigate precisely how this should be addressed.
"Algorithms, Data and Democracy", abbreviated as "ADD", is the name of the 10-year cross-disciplinary project now being launched to deepen the understanding of algorithmic data processing as part of society and to ensure that it serves citizens in a healthy, humane, and supportive way.
- The aim of technology is to support people’s well-being, both as individuals and as a society, promoting trust, security, fairness, and transparency, says DIKU professor Christina Lioma.
How to develop AI that supports human societal processes
Professor Christina Lioma, head of the Machine Learning section, will lead the computer science part of the project:
- The role of algorithmic data processing in crises of public trust is vital to society and democracy. We see a definite butterfly effect here, in the sense that algorithmic choices that may seem minor in the lab can end up having enormous consequences for society. We need to address this, right now and in a principled way.
Lioma’s work will focus on three core open problems in AI today. The first is that search engines and social media technology are, unintentionally, designed in a way that spreads clickbait and fake news rather than solely accurate information, simply because such content is highly clickable (and clicks generate traffic). The second is that it is impossible to exclude bias from the data that society generates and that is used to train AI algorithms; there will always be bias along some dimension, such as gender, age, skin colour, sexual orientation, political beliefs, income, or educational level.
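The first problem, optimising only for clicks, can be sketched in a few lines. This is a hedged illustration, not any real engine: the headlines and scores below are invented.

```python
# Hypothetical content items: (headline, accuracy score, predicted clickability)
items = [
    ("measured, accurate report", 0.9, 0.2),
    ("sensational clickbait claim", 0.1, 0.8),
]

# An engine that maximises traffic ranks purely by predicted clickability,
# ignoring the accuracy score entirely
ranked = sorted(items, key=lambda item: item[2], reverse=True)

print(ranked[0][0])  # -> sensational clickbait claim
```

Nothing in the objective penalises inaccuracy, so the clickbait item comes first by construction.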
- We humans have morals guiding our behaviour even when exposed to biased situations, but algorithms do not. This means that if they are trained on data where, for example, 90% of doctors are male, they will identify this as a pattern and learn to repeat it, for instance by automatically translating every "doctor" as "he", says Lioma.
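A minimal sketch of this mechanism, using an invented toy corpus rather than any real training data: a naive frequency-based model simply memorises the majority pattern in its skewed input.

```python
from collections import Counter

# Hypothetical training corpus: 9 of 10 "doctor" mentions are paired with "he"
training_pairs = [("doctor", "he")] * 9 + [("doctor", "she")]

# A naive statistical model picks the most frequent pronoun seen for a word
counts = Counter(pronoun for word, pronoun in training_pairs if word == "doctor")
predicted = counts.most_common(1)[0][0]

print(predicted)  # -> he: the 90% skew in the data becomes the model's rule
```

Real translation models are far more complex, but the underlying dynamic is the same: a statistical regularity in the data, however unwanted, is learned and reproduced.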
The third open problem in AI is understanding the relationship between overparameterization and generalizability in deep learning models:
- This basically means improving our understanding of how we can make sure that the algorithmic behaviour we see in the lab will also be seen outside the lab, in the real world. The conditions under which this can happen are not guaranteed right now, says Lioma.
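One way to see the lab-versus-real-world gap is a deliberately overparameterised classic model: exact polynomial interpolation. This is a toy analogue of the deep-learning setting, not the setting itself, and the data points below are invented. With n points it fits a degree n-1 polynomial through every training point, so its "lab" error is zero, yet it can be wildly wrong away from the training data.

```python
def lagrange(xs, ys, x):
    """Exact (Lagrange) polynomial interpolation: fits every training
    point perfectly, i.e. zero error 'in the lab'."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# "Lab" data: a few noisy measurements of the simple truth y = x
xs = [0, 1, 2, 3, 4]
ys = [0.0, 1.1, 1.9, 3.2, 3.9]

print(abs(lagrange(xs, ys, 2) - ys[2]))  # -> 0.0: training error vanishes
print(abs(lagrange(xs, ys, 6) - 6))      # roughly 16: far off outside the lab
```

The model looks perfect on the data it was built from, but at x = 6 it misses the simple underlying trend by an order of magnitude; understanding when flexible models do or do not generalise beyond their training data is exactly the open question Lioma describes.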
She hopes that the work will help advance the understanding and development of AI, especially in the context of the democratic and political processes that it so strongly affects.
Awarded DKK 100 million
- The aim of the project is to unite computer science, the social sciences and the humanities in developing new solutions for the benefit of citizens and society. In other words, we will seek to understand technological developments and strengthen their democratic potential, says Sine Nørholm Just.
The project will run over the next 10 years. The Research Committee consists of researchers from Aalborg University, Copenhagen Business School, Aarhus University and the University of Southern Denmark, in addition to UCPH and RUC. Lisbeth Knudsen, Strategy Director at the magazine Mandag Morgen (Monday Morning), heads the outreach part of the project.
Official launch event
9 April 2021, 9.00 am - 12.00 pm
Registration is necessary. (in Danish)
- Sine Nørholm Just, professor at the Department of Communication and Arts at Roskilde University
- Christina Lioma, professor at the Department of Computer Science, University of Copenhagen
- Torben Elgaard Jensen, professor at the Department of Culture and Learning, Aalborg University
- Leonard Seabrooke, professor at the Department of Organisation, Copenhagen Business School
- Helene Friis Ratner, associate professor at the Danish School of Education, Aarhus University
- Alf Rehn, professor at the Department of Technology and Innovation, University of Southern Denmark