News
December 2024: We strongly encourage submissions from researchers from underrepresented backgrounds to ensure diverse perspectives and inclusive discussions. We will also host mentoring meetings in January for those who feel they would benefit from additional guidance; please send us an email if you are interested.
Overview
How can we trust large language models (LLMs) when they generate text with confidence, but sometimes hallucinate or fail to recognize their own limitations? As foundation models like LLMs and multimodal systems become pervasive across high-stakes domains—from healthcare and law to autonomous systems—the need for uncertainty quantification (UQ) is more critical than ever. Uncertainty quantification provides a measure of how much confidence a model has in its predictions, allowing users to assess when to trust the outputs and when human oversight may be needed.
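To make this concrete, one simple family of uncertainty signals uses the token-level probabilities a model assigns while generating. The sketch below is purely illustrative and assumes access to the per-token next-token distributions; the function name and toy data are our own and do not correspond to any particular model or library.

```python
# Minimal sketch of two common white-box uncertainty baselines for
# autoregressive generation: length-normalized log-likelihood and
# mean predictive entropy over the generated tokens.
import numpy as np

def sequence_uncertainty(token_probs, chosen_ids):
    """token_probs: array of shape (T, V); each row is a next-token distribution.
    chosen_ids: length-T sequence of the token ids actually generated."""
    token_probs = np.asarray(token_probs, dtype=float)
    chosen = token_probs[np.arange(len(chosen_ids)), chosen_ids]
    # Higher average log-likelihood of the chosen tokens -> more confident.
    avg_log_lik = np.log(chosen + 1e-12).mean()
    # Higher mean entropy of the predictive distributions -> less confident.
    entropy = -(token_probs * np.log(token_probs + 1e-12)).sum(axis=1).mean()
    return avg_log_lik, entropy

# Toy example: a 3-token generation over a 4-token vocabulary.
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.40, 0.30, 0.20, 0.10],
                  [0.90, 0.05, 0.03, 0.02]])
print(sequence_uncertainty(probs, [0, 1, 0]))
```

Such token-level scores are cheap to compute but are known to be imperfect proxies for semantic uncertainty, which is precisely the kind of gap this workshop aims to examine.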
This workshop seeks to address this gap by defining, evaluating, and understanding the implications of uncertainty quantification for autoregressive and large-scale foundation models. Researchers from machine learning, statistics, cognitive science, and human-computer interaction are invited to contribute through submitted papers and structured discussions on key questions and topics, including:
- How can we create scalable and computationally efficient methods for estimating uncertainty in large language models?
- What are the theoretical foundations for understanding uncertainty in generative models?
- How can we effectively detect and mitigate hallucinations in generative models while preserving their creative capabilities?
- How does uncertainty affect multimodal systems?
- What are the best practices for communicating model uncertainty to various stakeholders, from technical experts to end users?
- What practical and realistic benchmarks and datasets can be established to evaluate uncertainty for foundation models?
- How can uncertainty estimates guide decision-making under risk, ensuring safer and more reliable deployment?
Important Dates
Submission link: https://openreview.net/group?id=ICLR.cc/2025/Workshop/QUESTION
Submission format: ICLR template, 6-8 pages for regular submissions, 3 pages for tiny papers
Submission deadline: 5th February 2025, AOE
Author notification: 28th February 2025, AOE
This year, ICLR is discontinuing the separate “Tiny Papers” track and is instead requiring each workshop to accept short paper submissions (3–5 pages in ICLR format, exact page length to be determined by each workshop), with an eye towards inclusion; see https://iclr.cc/Conferences/2025/CallForTinyPapers for more details. Authors of these papers will be earmarked for potential funding from the ICLR main conference, but must submit a separate Financial Assistance application, which will be used to evaluate their eligibility. The Financial Assistance application for ICLR 2025 will become available on https://iclr.cc/Conferences/2025/ at the beginning of February and close on March 2nd.
Tiny Paper Track: For submissions to the Tiny Paper Track, the appendix will not be reviewed, and the submission must be limited to 3 pages of content. Include the label "TINY" at the start of the title to distinguish the submission from regular ones; if this label is not included, the submission will be treated as a regular one.
Note that this workshop is non-archival: it provides a platform for researchers to present and discuss their latest findings without the pressure of formal publication.
Schedule Detail
Tentative schedule
- 9.00 AM: Introduction and preliminaries
- 9.10 AM: Invited Talk
- 9.55 AM: Break
- 10.10 AM: Invited Talk
- 10.55 AM: Break
- 11.10 AM: Poster Session
- 12.45 PM: Lunch break
- 1.30 PM: Invited Talk
- 2.15 PM: Break
- 2.30 PM: Invited Talk
- 3.15 PM: Break
- 3.30 PM: Poster Session
- 5.15 PM: Panel