Inference & Retrieval Lab
In the Inference & Retrieval Lab, we conduct research in applied machine learning and artificial intelligence, in areas such as large language models and generative AI, AI agents, neurophysiological multimodal learning, and search and recommendation. We study and develop tools that provide effective, efficient, ethical and sustainable access to large-scale, heterogeneous data, and methods for processing and evaluating such data. We publish internationally in these areas, and work with government and industry partners on research and technology transfer, as well as on outreach activities for primary and secondary education. For more information, please visit this site.
Christina Lioma is interested in applied Machine Learning and Artificial Intelligence. Specifically:
- Improving how data is represented internally by models. Understanding the data-to-signal transformation as data enters the representation space (what we gain and lose). Improving the types of operations we can perform on representations.
- Improving inference (across modalities). This spans from pretraining and finetuning, to zero/few-shot learning and prompt-based learning. How is the inference capacity of a model affected by variations in data, task, tuning, prompting? What is the role of context, e.g. RAG, in inference?
- Evaluation at large: benchmarks, evaluation metrics and practices. How do we grapple with data leakage, overfitting, subpar metrics, or ground truth challenges? How do we reduce the gap between benchmarking distributions and real-life "in the wild" distributions? How do we reduce bias and incorporate human ethics & morality in evaluation at scale?
- Efforts to better interpret model representations and output (explainability). How to generate and measure explanations in reliable ways? How to feed explanations back into inference? How to communicate explanations to different audiences?
- Responsible AI and ML, in terms of regulation and codes of practice. How do we reduce misalignment of model inference & output with respect to human instruction? How do we encode human values in representation and inference? How to communicate risks and safeguards to stakeholders and the public at large?
- Sustainable AI and ML, in terms of resource efficiency and alternative forms of energy. Efficiency of models, data structures and hardware at scale. Computation on edge devices. Hybrid forms of energy and compute configurations. Reduction of architectural redundancy and brute-force GPU scaling. Measurement of environmental sustainability.
- Practical applications of the above, for instance to Natural Language Processing, Recommendation, Information Retrieval and Neurophysiological Multimodal Learning.
People
| Name | Title | Phone |
|---|---|---|
| Chengpeng Xia | Postdoc | +4535322495 |
| Christina Lioma | Professor | |
| Ervin Dervishaj | PhD Fellow | |
| Ingemar Johansson Cox | Professor | +4535335676 |
| Maria Maistro | Associate Professor | +4553625389 |
| Pietro Tropeano | PhD Fellow | |
| Sara Vera Marjanovic | PhD Fellow | |
| Shivam Adarsh | PhD Fellow | +4535325956 |
| Theresia Veronika Rampisela | Guest Researcher | +4535330068 |
| Thomas Vecchiato | PhD Fellow | |
| Tuukka Ruotsalo | Associate Professor | |
| Vadym Gryshchuk | PhD Fellow | +4535322163 |
Contact
Christina Lioma
Professor
c.lioma@di.ku.dk