ICML 2025 Workshop on
Reliable and Responsible Foundation Models
July 18 or 19, 2025
The workshop will be held in a hybrid format.

News

  • Call for reviewers: We are actively looking for reviewers to join the workshop's program committee. We encourage all interested researchers to apply, especially those from underrepresented groups. Prior reviewing experience is a plus but not required; interest in and familiarity with the workshop's subject matter are required. If you are interested, please fill out the application form to join us.
  • Call for papers: The submission portal is now open on OpenReview: https://openreview.net/group?id=ICML.cc/2025/Workshop/R2-FM

About

Foundation models (FMs), with their emergent abilities and reasoning potential, are reshaping the future of scientific research and broader human society. However, as these models advance toward artificial general intelligence (AGI) or even superintelligence (ASI), pressing concerns arise around their reliable and responsible deployment, particularly in areas such as safety, privacy, transparency, sustainability, and ethics. This workshop addresses the urgent need to ensure that such powerful AI systems align with human values. The significance of this topic cannot be overstated: these models already influence everything from daily information access to critical decision-making in fields such as medicine and finance, especially for embodied FMs that interact directly with the physical world. Stakeholders, including developers, practitioners, and policymakers, care deeply about this because the reliable and responsible design, deployment, and governance of these models are essential for preserving societal norms, order, equity, and fairness in a future of human-AI symbiosis.

Key Problems We Aim to Address

Diagnosis and Evaluation
How can we effectively identify unreliable or irresponsible behaviors in FMs, and comprehensively evaluate their broader capabilities and potential societal harm? In addition, how can we benchmark FMs with limited human-annotated data?
Sources and Mitigation
How can we pinpoint and understand the known or emerging sources of FM unreliability, such as training data, optimization objectives, and model design? How can we further mitigate these identified issues effectively?
Governance and Guarantee
What principles or guidelines should inform the next generation of reliable and responsible FMs? How can real-time monitoring be enabled? How can we establish theoretical frameworks to ensure reliable and responsible behavior?
Adaptation and Applications
How can we enhance the reliability and responsibility of increasingly advanced FMs, particularly as they adopt new features (e.g., multi-modality, long chain-of-thought reasoning) and are deployed across diverse domains such as healthcare and education?

Call for Papers

The 2nd Workshop on Reliable and Responsible Foundation Models at ICML 2025 invites submissions from researchers focused on the reliability and responsibility of foundation models. Additionally, we welcome contributions from scholars in the natural sciences (such as physics, chemistry, and biology) and social sciences (including pedagogy and sociology) that necessitate the use of reliable and responsible foundation models for in-the-wild applications.

Key Dates

  • Paper Submission Open: April 29, 2025
  • Paper Submission Deadline: May 30, 2025 (AoE)
  • Paper Notification Deadline: June 26, 2025 (AoE)
  • Camera-ready Version Deadline: July 4, 2025 (AoE)

Deadlines are strict and will not be extended under any circumstances. All deadlines follow the Anywhere on Earth (AoE) timezone.

Submission Site

Submit papers through the Workshop Submission Portal on OpenReview.

Scope

We welcome contributions across a broad spectrum of topics, including but not limited to:

  • Theoretical foundations of FMs, including uncertainty quantification, continual learning, and reinforcement learning
  • Empirical investigations into the reliability and responsibility of various FMs
  • In-depth discussions exploring new dimensions of FM reliability and responsibility
  • Interventions during pre-training to enhance the reliability and responsibility of FMs
  • Innovations in post-training processes to bolster the reliability and responsibility of FMs
  • Advancements in improving the reliability and responsibility of FMs for test-time scaling
  • Discussions on aligning models with potentially superhuman capabilities to human values
  • Benchmark methodologies for assessing the reliability and responsibility of FMs
  • Issues of reliability and responsibility of FMs in broad applications

Submission Guidelines

Format:  All submissions must be a single PDF file. We welcome high-quality original papers in the following two tracks:

  • Technical Papers: up to 9 pages
  • Vision/Position Papers: up to 4 pages
References and appendices are not included in the page limit, but the main text must be self-contained. Reviewers are not required to read beyond the main text.

Style file:   You must format your submission using the ICML 2025 LaTeX style file. For your convenience, we modified the main conference style file to refer to our workshop: icml2025_r2fm.sty. Please include the references and supplementary materials in the same PDF. The maximum file size for submissions is 50MB. Submissions that violate the ICML style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review.
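
For reference, here is a minimal submission skeleton. This is a sketch only: it assumes that icml2025_r2fm.sty exposes the same macros and options as the main-conference icml2025 style (e.g., an [accepted] option for the camera-ready version); please check the style file itself for the authoritative usage.

    \documentclass{article}
    % Review (anonymized) mode is assumed to be the default, as in the
    % main-conference icml2025 style; the [accepted] option would be added
    % only for the camera-ready version.
    \usepackage{icml2025_r2fm}

    \begin{document}

    \twocolumn[
    \icmltitle{An Example Submission Title}
    \begin{icmlauthorlist}
    \icmlauthor{Anonymous Author}{inst}
    \end{icmlauthorlist}
    \icmlaffiliation{inst}{Anonymous Institution}
    \vskip 0.3in
    ]
    % \printAffiliationsAndNotice{} typically follows here in the
    % main-conference template.

    \begin{abstract}
    Abstract text goes here.
    \end{abstract}

    \section{Introduction}
    Main text: up to 9 pages for technical papers or 4 pages for
    vision/position papers; references and appendices do not count
    toward the limit.

    \end{document}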

Dual-submission and non-archival policy:  We welcome ongoing and unpublished work. We will also accept papers that are under review at the time of submission, or that have been recently accepted, provided they do not breach any dual-submission or anonymity policies of those venues. The workshop is a non-archival venue and will not have official proceedings. Workshop submissions can be subsequently or concurrently submitted to other venues.

Visibility:   Submissions and reviews will not be public. Only accepted papers will be made public.

Double-blind reviewing:   All submissions must be anonymized and must not contain identifying information that violates the double-blind reviewing policy. This policy also applies to any supplementary or linked material, including code. If you include links to external material, it is your responsibility to guarantee anonymous browsing. Please do not include acknowledgements at submission time. If you need to cite one of your own papers, do so with adequate anonymization to preserve double-blind reviewing. Any paper found to violate this policy will be rejected.

Contact:   For any questions, please contact us at r2fm2025@googlegroups.com.


Schedule

This is the tentative schedule of the workshop. All slots are provided in local time.

Morning Session

08:50 - 09:00 Introduction and opening remarks
09:00 - 09:30 Invited Talk 1
09:30 - 10:00 Invited Talk 2
10:00 - 10:15 Contributed Talk 1
10:15 - 11:15 Poster Session 1
11:15 - 11:45 Invited Talk 3
11:45 - 12:15 Invited Talk 4
12:15 - 13:30 Break

Afternoon Session

13:30 - 14:00 Invited Talk 5
14:00 - 14:30 Invited Talk 6
14:30 - 14:45 Contributed Talk 2
14:45 - 15:45 Poster Session 2
15:45 - 16:15 Invited Talk 7
16:15 - 16:30 Contributed Talk 3
16:30 - 17:00 Invited Talk 8
17:00 - 18:00 Panel discussion

Invited Speakers

  • Manish Raghavan, Massachusetts Institute of Technology
  • Richard Zemel, Columbia University
  • Sarah H. Cen, Carnegie Mellon University
  • Andrew Ilyas, Carnegie Mellon University
  • René Vidal, University of Pennsylvania

Workshop Organizers

  • Xinyu Yang, Carnegie Mellon University
  • Kate Donahue, University of Illinois Urbana-Champaign
  • Giulia Fanti, Carnegie Mellon University
  • Siwei Han, UNC-Chapel Hill
  • David Madras, Google DeepMind
  • Han Shao, University of Maryland, College Park
  • Hongyi Wang, Rutgers University
  • Steven Wu, Carnegie Mellon University
  • Peng Xia, UNC-Chapel Hill
  • Mohit Bansal, UNC-Chapel Hill
  • Zhun Deng, UNC-Chapel Hill
  • Huaxiu Yao, UNC-Chapel Hill