Overview

How can we trust large language models (LLMs) when they generate text with confidence, but sometimes hallucinate or fail to recognize their own limitations? As foundation models like LLMs and multimodal systems become pervasive across high-stakes domains—from healthcare and law to autonomous systems—the need for uncertainty quantification (UQ) is more critical than ever. Uncertainty quantification provides a measure of how much confidence a model has in its predictions, allowing users to assess when to trust the outputs and when human oversight may be needed.
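For readers new to the topic, the short Python sketch below illustrates one of the simplest uncertainty signals: the entropy of a model's predictive distribution. The function name and the probabilities are purely illustrative assumptions, not part of the tutorial material.

    import numpy as np

    def predictive_entropy(probs):
        # Shannon entropy (in nats) of a predictive distribution; higher means more uncertain.
        probs = np.asarray(probs, dtype=float)
        probs = probs / probs.sum()  # normalize defensively
        return float(-np.sum(probs * np.log(probs + 1e-12)))

    # Hypothetical next-token probabilities from a language model (illustrative numbers only):
    confident = [0.90, 0.05, 0.03, 0.02]   # peaked distribution -> low entropy, high confidence
    uncertain = [0.30, 0.28, 0.22, 0.20]   # flat distribution -> high entropy, low confidence

    print(predictive_entropy(confident))   # roughly 0.43 nats
    print(predictive_entropy(uncertain))   # roughly 1.37 nats

In practice, a downstream system could flag outputs whose uncertainty exceeds a chosen threshold for human review rather than acting on them automatically.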

Schedule Detail

Tentative schedule

  • 9:00 AM – Introduction and preliminaries
  • 9:10 AM – Invited Talk
  • 9:55 AM – Break
  • 10:10 AM – Invited Talk
  • 10:55 AM – Break
  • 11:10 AM – Poster Session
  • 12:45 PM – Lunch break
  • 1:30 PM – Invited Talk
  • 2:15 PM – Break
  • 2:30 PM – Invited Talk
  • 3:15 PM – Break
  • 3:30 PM – Poster Session
  • 5:15 PM – Panel

Venue

ICLR'25

Singapore Expo, Singapore

FAQ

Following ICLR guidelines, this tutorial will take place on 27 April 2025. To attend, you should register for the ICLR conference.
For any questions, please reach out at chrysos [at] wisc.edu.

Organizers

Grigorios Chrysos

University of Wisconsin-Madison

Sharon Li

University of Wisconsin-Madison

Barbara Plank

LMU Munich

Anastasios Angelopoulos

University of California, Berkeley

Emtiyaz Khan

RIKEN