Revisiting Softmax for Uncertainty Approximation in Text Classification

Publication: Contribution to journal › Journal article › Research › peer-reviewed


  • Fulltext

    Publisher's published version, 1.65 MB, PDF document

Uncertainty approximation in text classification is an important area with applications in domain adaptation and interpretability. One of the most widely used uncertainty approximation methods is Monte Carlo (MC) dropout, which is computationally expensive as it requires multiple forward passes through the model. A cheaper alternative is to simply use a softmax based on a single forward pass without dropout to estimate model uncertainty. However, prior work has indicated that these predictions tend to be overconfident. In this paper, we perform a thorough empirical analysis of these methods on five datasets with two base neural architectures in order to identify the trade-offs between the two. We compare both softmax and an efficient version of MC dropout on their uncertainty approximations and downstream text classification performance, while weighing their runtime (cost) against performance (benefit). We find that, while MC dropout produces the best uncertainty approximations, using a simple softmax leads to competitive, and in some cases better, uncertainty estimation for text classification at a much lower computational cost, suggesting that softmax can in fact be a sufficient uncertainty estimate when computational resources are a concern.
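To make the contrast concrete, below is a minimal PyTorch sketch of the two estimators the abstract compares: a single-pass softmax confidence score versus MC dropout averaged over several stochastic forward passes. The names `model`, `x`, and `n_passes` are placeholders, and the sketch shows vanilla MC dropout rather than the efficient variant evaluated in the paper.

```python
import torch
import torch.nn.functional as F

def softmax_uncertainty(model, x):
    """Single deterministic forward pass; uncertainty from softmax confidence."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
    # Uncertainty as 1 - max class probability (lower confidence = more uncertain).
    return probs, 1.0 - probs.max(dim=-1).values

def mc_dropout_uncertainty(model, x, n_passes=20):
    """Multiple stochastic forward passes with dropout active at inference."""
    model.train()  # Keeps dropout sampling; in practice one would switch only
                   # the dropout modules to train mode, not e.g. batch norm.
    with torch.no_grad():
        all_probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    mean_probs = all_probs.mean(dim=0)
    # Predictive entropy of the averaged distribution as the uncertainty score.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy
```

The cost asymmetry the paper weighs is visible directly in the sketch: the softmax estimate needs one forward pass, while MC dropout needs `n_passes` of them.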

Journal: Information (Switzerland)
Issue number: 7
Number of pages: 16
Status: Published - 2023

Bibliographic note

Funding Information:
This research was funded by Innovation Fund Denmark grant number 9065-00131B.

Publisher Copyright:
© 2023 by the authors.


ID: 364498618