Talk on Explainable NLP for Informed Human-AI Collaboration in High-Risk Domains

Title
Explainable NLP for Informed Human-AI Collaboration in High-Risk Domains
Abstract
Large language models are increasingly used in decision-critical domains, yet their opaque nature limits reliability and usability. This talk presents research from the XplaiNLP group that advances explainable NLP methods for high-stakes applications such as (semi-)automated fact-checking and medical decision support. The group works on reliable evidence retrieval and narrative monitoring, as well as multi-level explanation techniques such as attribution methods, natural language rationales, and counterfactuals. In empirical studies, model outputs and explanations are evaluated for their impact on user trust, decision quality, and overall task performance. The work aims to develop intelligent decision-support systems for high-stakes scenarios by aligning LLM outputs with domain knowledge, user expertise, and regulatory requirements, contributing toward actionable and responsible human-AI collaboration.
Bio
Vera Schmitt is head of the XplaiNLP research group at TU Berlin and the German Research Center for Artificial Intelligence, where she leads interdisciplinary research at the intersection of natural language processing, explainable AI, and human-computer interaction. Her work focuses on interpretable and robust language technologies for high-stakes decision-making, particularly in AI-supported fact-checking and medical decision support. She has raised third-party research funding to build up her research group and is PI of multiple projects, including news-polygraph, VeraXtract, FakeXplain, and VERANDA. Her research has been published in venues such as FAccT, ACL, COLING, and LREC, and she actively works to connect technical innovation with regulatory frameworks such as the AI Act and the DSA.
Lunch will be served at the talk. If you wish to attend, please use the sign-up form below.