


Workshops
Workshop
B.V. Alaka · Maria Skoularidou · Maximilian Vötsch · Amanda Bertsch · Michelle Lin · Vishal Dey · Yanan Long · William Agnew · Pranav A · Arjun Subramonian

[ Schubert 4 - 6 ]

Abstract

{Dis}Ability & Queer in AI: ICML 2024 Joint Affinity Workshop

There are a sizable number of Queer and Disabled researchers in the AI community, and providing them with a space for discussion of issues relating specifically to their identities and experiences is paramount. As one of the central venues for AI/ML research, ICML provides an ideal setting for discussing and raising awareness of the issues that AI/ML, and the associated industry and scholarly practices, pose for diverse communities. This workshop, jointly hosted by Queer in AI and {Dis}Ability in AI, aims to bring together voices from these communities to share their experiences, rooted in collective solidarity, in an open, critical, and community-centric manner. We believe it is more important than ever to raise awareness about how the technologies we research impact the lives of Queer, Disabled, and multiply-marginalized people, as well as Global South communities.

Workshop
Laura Montoya

[ Schubert 1 - 3 ]

Abstract

The LatinX in AI research workshop is a one-day event with invited speakers, oral presentations, and posters. The event brings together faculty, graduate students, research scientists, and engineers for an opportunity to connect and exchange ideas. There will be a panel discussion and a mentoring session to discuss current research trends and career choices in artificial intelligence and machine learning, highlighting the unique challenges of LatinX identifying researchers. The workshop aims to create a platform for the work of Latinx researchers, and we invite everyone to attend. We strongly encourage students, postdocs and researchers who primarily identify as Latinx in all areas of machine learning to submit an abstract describing new, previously published, or concurrently published research. We welcome abstract submissions in theory and methodology, as well as applications. Abstracts may describe completed research or work-in-progress. While the presenting author need not be the first author of the work, we encourage authors to highlight the contribution of Latinx individuals — particularly the presenting author — in the abstract. The LatinX authors of accepted abstracts will be asked to present their work in a poster session. A few authors will be selected to give 15-minute oral presentations. Authors accepted to present will be offered …

Workshop
Caroline Weis · Tatjana Chavdarova · Mandana Samiei

[ Schubert 1 - 3 ]

Abstract

The Women in Machine Learning (WiML) workshop was founded in 2006 to forge connections within the relatively small community of women working in machine learning, to encourage mentorship and exchange of ideas, and to promote communication. This year, we aim to focus particularly on the elements that have driven high participant interaction and networking based on our experience from past WiML events, while keeping the program shorter. Instead of the participant-led breakout sessions, the invited speakers and/or panelists will lead Q&A/breakout sessions, occurring in parallel to each other in a 1-hour time slot. The idea is that after participants have heard about a topic from the respective talk, there will be more questions and engagement. In addition to the short talks and parallel Q&A sessions, the program will include mentoring and career roundtables and panel discussions. To indicate the change to a shorter program and emphasize the more interactive format, we are planning to rebrand the next iteration of this workshop. We would like to organize the first “WiML Symposium” at the ICML 2024 conference.

Workshop
Dinghuai Zhang · Yuanqi Du · Guan-Horng Liu · Chenlin Meng · Ruiqi Gao · Max Welling · Yoshua Bengio

[ Lehar 3 ]

Abstract

The workshop focuses on the theory, methodology, and application of structured probabilistic inference and generative modeling, both of which are important topics in machine learning. Specifically, probabilistic inference addresses the problem of amortization, sampling, and integration of complex quantities from graphical models, while generative modeling captures the underlying probability distributions of a dataset. Apart from applications in computer vision, natural language processing, and speech recognition, probabilistic inference and generative modeling approaches have also been widely used in natural science domains, including physics, chemistry, molecular biology, and medicine. Beyond applications in these domains, the span of tasks addressed by these methods has been expanding beyond probabilistic inference and generative modeling to optimal control, decision making, sampling, optimization, etc. Despite the promising results, probabilistic methods face challenges when applied to highly structured data, which are ubiquitous in real-world settings, limiting the applications of such methods. This workshop aims to bring experts from diverse backgrounds and related domains together to discuss the applications and challenges of probabilistic methods. The workshop will emphasize challenges in encoding domain knowledge when learning representations, performing inference, and generating samples. By bringing together experts from academia and industry, the workshop will provide a platform for researchers to share their latest results and ideas, …
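
As a toy illustration of what it means to "capture the underlying probability distribution of a dataset" (a minimal sketch with synthetic data, not workshop material):

```python
import numpy as np

# Minimal generative-modeling sketch: estimate a distribution from data,
# then draw new samples from the fitted model. All values are synthetic.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=0.5, size=10_000)  # observed dataset

mu, sigma = data.mean(), data.std()           # maximum-likelihood Gaussian fit
new_samples = rng.normal(mu, sigma, size=5)   # "generate" from the learned model
print(f"fitted mu={mu:.2f}, sigma={sigma:.2f}")
print("generated:", np.round(new_samples, 2))
```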

Workshop
Tianyu Gao · Weijia Shi · Amanda Bertsch · Tri Dao · Danqi Chen · Graham Neubig · Christopher Re

[ Hall A2 ]

Abstract

Foundation models have become a cornerstone in the advancement of artificial intelligence, widely used across both academic and practical applications. Across domains, many challenging tasks require synthesizing information over thousands to millions of individual pieces of data, which may take many forms, including images, text, audio, genomes, etc. As a result, much recent work has focused on developing long-context models capable of processing, understanding, and generating responses based on extensive inputs. Enabling foundation models to process long contexts introduces several key challenges: (1) Computational efficiency: transformers, the predominant architecture for foundation models, incur a quadratic computational complexity with respect to the input length. (2) Lack of data: the development of long-context foundation models requires access to a large amount of long-sequence data, which is difficult to satisfy due to the limited availability of such collections. (3) Evaluation complexity: evaluating the performance of long-context foundation models is inherently complex, as it is costly to collect, construct, or verify such evaluation data by humans. Our workshop aims to convene researchers to address these challenges, fostering discussions, developments, and evaluation of long-context foundation models across various AI disciplines.
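
The quadratic cost mentioned in (1) is visible directly in a naive implementation of scaled dot-product attention; the sketch below (illustrative numpy, not tied to any particular model) materializes the n x n score matrix whose size grows quadratically with input length n:

```python
import numpy as np

def attention(Q, K, V):
    """Naive scaled dot-product attention for one head."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                # (n, n): O(n^2) time and memory
    scores -= scores.max(axis=-1, keepdims=True) # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                           # doubling n quadruples the score matrix

rng = np.random.default_rng(0)
n, d = 4096, 64
Q, K, V = rng.standard_normal((3, n, d))
print(attention(Q, K, V).shape)                  # (4096, 64)
```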

Workshop
Julien Launay · Tri Dao · Daniel Y Fu · Max Ryabinin · Daniel Hesslow · Beidi Chen · Percy Liang

[ Lehar 2 ]

Abstract
As models increase in size and training budget, they not only systematically improve in upstream quality, but also exhibit novel emergent capabilities, unlocking new AI applications. These new capabilities have led to a paradigm shift: large foundation models have become predominant in natural language processing and are growing increasingly common in computer vision, audio processing, and even robotics. This increase in scale raises proportionate difficulties for practitioners: foundation model training and inference lie at a unique interdisciplinary crossroad, combining open problems in algorithms, system design, and software engineering. In response to these challenges, diverse research directions have spawned promising works: (1) training and inference either at large scale or in resource-constrained scenarios (e.g., with higher network latency and lower bandwidth, in a collaborative manner across a fleet of contributed devices, or with a single GPU); (2) large-scale distributed training approaches, such as 3D parallelism and sharding; and (3) deep system optimizations, with custom compilers and languages such as TVM and Triton. These novel interdisciplinary research directions directly shape and impact the trajectory of research across machine learning. Accordingly, these emerging lines of research are increasingly relevant to machine learning researchers. Indeed, researchers are key stakeholders: on the one hand, researchers may contribute algorithmic insights …

Workshop
Thomas Kleine Buening · Christos Dimitrakakis · Scott Niekum · Constantin Rothkopf · Aadirupa Saha · Lirong Xia

[ Schubert 4 - 6 ]

Abstract

Aligning AI agents with human intentions and values is one of the main barriers to the safe and ethical application of AI systems in the real world. Current approaches mostly rely on highly questionable assumptions about the meaning of observed human feedback or interactions. These include assumptions about rationality in decision-making and belief formation, homogeneity of the population, and other restrictive feedback assumptions. However, the role of such modeling assumptions has mostly been neglected in the literature on AI alignment. In this workshop, we want to bring together perspectives from various disciplines besides ML, including computational social choice, behavioral psychology, and economics, to share experiences and perspectives on models of human feedback and their importance for human-AI alignment and collaboration.

Workshop
Ian Kivlichan · Shibani Santurkar · Alex Beutel · Aleksander Madry · Preethi Lahoti · Ahmad Beirami · Adina Williams · Beyza Ermis · Tatsunori Hashimoto

[ Hall A1 ]

Abstract

In recent years, general-purpose AI has experienced a meteoric rise in capabilities and applications. This rise has continued to bring forth new safety challenges, requiring mitigation to ensure AI systems meet trustworthiness standards. In this workshop, we take a proactive approach to safety, focusing on five emerging trends in AI and exploring the challenges associated with deploying these technologies safely:

1. Agentic AI: As AI agents become more autonomous, concerns about unintended consequences, ethical issues, and adversarial exploitation emerge. How do we ensure these agents respect privacy and adhere to safety protocols?

2. Multimodal AI: With the evolution of AI systems to process and generate diverse modalities like audio, video, and images, concerns around content appropriateness, privacy, bias, and misinformation arise. How do we craft robust guidelines and security measures to tackle these challenges?

3. Personalized Interactions: As conversational agents evolve for social and personal interaction, risks like data privacy breaches and echo chambers grow. How do we balance tailored experiences with user safety?

4. Sensitive Applications: With AI’s integration into high-risk domains like legal, medical, and mental health, the stakes rise with risks such as overreliance on automation and potential catastrophic errors. How do we ensure that AI systems in these critical areas …

Workshop
Aviv Regev · Andrea Volkamer · Bruno Trentini · Cecilia Clementi · Charles Harris · Charlotte Deane · Christian Dallago · Ellen Zhong · Francesca Grisoni · Jinwoo Leem · Kevin Yang · Marwin Segler · Michael Pieler · Nicholas Sofroniew · Olivia Viessmann · Peter Koo · Pranam Chatterjee · Puck Van Gerwen · Rebecca Lindsay · Umberto Lupo · Ying Wai Li

[ Stolz 2 ]

Abstract

Biology and chemistry play a central role in understanding life, and are a fundamental pillar of human well-being through their roles as medicines, materials, or agro-chemicals. With increasing challenges associated with climate change, growth of the global population, diseases associated with aging, and the global supply of food and energy, it is becoming increasingly urgent to accelerate the pace at which technical discoveries can be made and translated into practical solutions to these societal issues. However, compared to other modalities such as images or language, the study of biology and chemistry with machine learning is not as industrially established. Multiple factors contribute to this delay. Different research questions require many levels and scales of representation, from electronic structure to graph and point cloud representations of (bio)molecules, to protein and nucleic acid sequences, crystals, omics data, and cell and tissue-level representations. This workshop aims to highlight translational ML research in biology and chemistry for real-world applications in life and materials science. The goal is to bridge theoretical advances with practical applications and connect academic and industry researchers. We envision a balanced scientific, industrial, and academic attendance, and propose committees and a lineup that reflect a mix of top industry scientists, academic leaders, and double-affiliated scientists, as well as emerging scientists and new voices in ML for healthcare, …

Workshop
Yinya Huang · Xiaodan Liang · Zhengying Liu · Pan Lu · Sean Welleck · Isabelle Guyon · Amaury Hayat · Bin Dong · Mateja Jamnik · Guangrun Wang

[ Lehar 1 ]

Abstract

Mathematical reasoning is one of the most advanced forms of human intelligence. Humans develop formal languages for rigorously describing mathematical problems and deriving mathematical knowledge. The machine learning community has endeavored to develop neural models with mathematical reasoning capabilities comparable to those of humans. At the same time, a shared vision in the community is that such models will collaborate with humans for mathematical discoveries. The goal of this workshop is to bring together researchers working on various domains to discuss the progress and the future of applying AI technologies to mathematics. As mathematics is fundamental for almost all modern sciences (including computer science), a vast range of related topics are also within our scope. To this end, this workshop focuses on several crucial yet underexplored problems. Specifically, we expect attendees from various backgrounds, institutions, and disciplines to discuss areas related to the following:

* Autoformalization and the reverse, auto-informalization: How can we develop methods that improve the precision of the autoformalization process from natural language proof to formal proof, and, as a dual process, describe a formal proof in natural language?

* Automated theorem proving: How do we build consistent theorem provers? How do we relieve or solve the intermediate step errors …
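
To make the autoformalization topic concrete, here is a toy informal-to-formal pair (an illustrative sketch, not workshop material; assumes Lean 4 with the omega tactic available):

```lean
-- Informal statement: "the sum of two even numbers is even."
-- One possible formal rendering: the kind of target an autoformalization
-- system would be asked to produce from the English sentence above.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ => ⟨a + b, by omega⟩  -- linear arithmetic closes the goal
```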

Workshop
Yuanqi Du · Max Welling · Marinka Zitnik · Carla Gomes · Peter Dayan · Tommi Jaakkola · Ada Fang · Bowen Jing · Lixue Cheng · Li Kevin Wenliang · Di Luo

[ Hall A8 ]

Abstract

AI is integrated into scientific discovery ever more profusely to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain new insights that might not have been possible using traditional scientific methods alone. The main goal of this series of workshops is to discover synergy across a variety of scientific fields, encourage interdisciplinary discussions, and enhance the flow of knowledge between the AI and Science communities. Throughout history, bridging seemingly different fields has brought overarching benefits, with notable examples: entropy in thermodynamics and information theory, neuroscience and AI, and algorithms inspired by discoveries in science (e.g., genetic algorithms, simulated annealing, and diffusion-based generative models). In the current AI era, successes of AI methods in different fields of science have pointed to the general effectiveness of collecting large simulated data, finding suitable architectures, enforcing invariances/equivariances, and utilizing foundation models. Our mission is to bring more scientists to attend ICML to share different perspectives on the use of AI, and to illuminate exciting research directions for AI researchers. In the following, we concentrate our discussion in this workshop on Scaling in AI for Science. Scaling models has addressed challenges once deemed insurmountable, including predicting 3D protein …

Workshop
Atish Agarwala · Courtney Paquette · Andrea Montanari · Cengiz Pehlevan · Sungyoon Lee · Murat Erdogdu · Naomi Saphra · Gowthami Somepalli · Swabha Swayamdipta · Tom Goldstein · Boaz Barak · Leshem Choshen · Shikhar Murty · Mengzhou Xia · Depen Morwani · Rosie Zhao

[ Straus 2 ]

Abstract

Modeling learning dynamics has long been a goal of the empirical science and theory communities in deep learning. These communities have grown rapidly in recent years, as our newly expanded understanding of the latent structures and capabilities of large models permits researchers to study these phenomena through the lens of the training process. Recent progress in understanding fully trained models can therefore enable understanding of their development and lead to insights that improve optimizer and architecture design, provide model interpretations, inform evaluation, and generally enhance the science of neural networks and their priors. We aim to foster discussion, discovery, and dissemination of state-of-the-art research in high-dimensional learning dynamics relevant to ML.

We invite participation in the 2nd Workshop on High-dimensional Learning Dynamics (HiLD), to be held as a part of the ICML 2024 conference. This year’s theme focuses on understanding how reasoning capabilities and internal structures develop over the course of neural network training; we encourage submissions related to our theme as well as other topics around the theoretical and empirical understanding of learning in high-dimensional spaces. We will accept high-quality submissions as poster presentations during the workshop, especially work-in-progress and state-of-the-art ideas.

We welcome any topics in …

Workshop
Antoine Moulin · Giorgia Ramponi · Dirk van der Hoeven · Alberto Maria Metelli · Audrey Huang · Felix Berkenkamp · Francesco Trovò · Csaba Szepesvari · Alizée Pace

[ Schubert 1 - 3 ]

Abstract

Reinforcement learning has evolved into a dynamic and expansive field, attracting both theorists and experimentalists. While theorists and experimentalists in reinforcement learning share a common interest in advancing the field, their research objectives, methodologies, and challenges sometimes diverge significantly. This workshop aims to bridge this gap by bringing them closer together and to shed light on recent developments and synergies in both communities.

Workshop
Xinyu Yang · Bilge Acun · Kamalika Chaudhuri · Beidi Chen · Giulia Fanti · Junlin Han · Lianhui Qin · Shengbang Tong · Phil Torr · Hao Wang · Cathy Wu · Huaxiu Yao · James Zou

[ Straus 1 ]

Abstract
In the era of AI-driven transformations, foundation models (FMs), like large-scale language and vision models, have become pivotal in various applications, from natural language processing to computer vision. These models, with their immense capabilities, reshape the future of scientific research and the broader human society, but also introduce challenges in their in-the-wild/real-world deployments. The Workshop on FMs in the wild delves into the urgent need for these models to be useful when deployed in our societies. The significance of this topic cannot be overstated, as the real-world implications of these models impact everything from daily information access to critical decision-making in fields like medicine and finance. Stakeholders, from developers to end-users, care deeply about this because the successful integration of FMs into in-the-wild frameworks necessitates a careful consideration of adaptivity, reliability, and efficiency. Some of the fundamental questions that this workshop aims to address are:

1. Real-world Adaptation: In practical applications, how can we leverage the comprehensive knowledge in FMs to adapt them for specific domains, such as drug discovery, education, or clinical health?

2. Reliability and Responsibility: How can foundation models work reliably outside their training distribution? And how can we address issues like hallucination and privacy?

3. Safety, Ethics, and Fairness …

Workshop
Ritwik Gupta · Laura Mansfield · Tian Zheng · Margarita Geleta · Jerry Lin · Yongquan Qu · Maja Rudolph · Michael Pritchard

[ Stolz 1 ]

Abstract
Climate change is a major concern for human civilization, yet significant uncertainty remains in future warming, change in precipitation patterns, and frequency of climate extremes. Proper adaptation and mitigation demands accurate climate projections capable of simulating the atmosphere, ocean, land, and their interactions. Numerical models exhaustively tuned by domain scientists have been the gold standard for modeling both weather and climate because of their interpretability and ability to simulate “what-if” scenarios not present in the historical record. Although AI forecasts have started to make operational progress in weather prediction, climate projections are a harder problem. For example, High-Impact, Low-Likelihood events are undersampled in ERA5 reanalysis data, and substantial decadal variability in modes of climate variability (like the El Niño-Southern Oscillation) limits the ability of AI forecasts to reliably extrapolate into the future. This workshop seeks to accelerate progress on using machine learning to improve climate projections, emphasizing areas that domain scientists have deemed amenable to machine learning approaches. Examples include hybrid physics-ML climate models, where machine learning is used to emulate subgrid processes too expensive to resolve explicitly, and dynamical downscaling, where high-resolution climate variables are inferred from coarse-resolution models in a physically consistent manner. In service of this, …
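
The "emulate subgrid processes" idea can be caricatured in a few lines: fit a cheap statistical model to the input-output behavior of an expensive computation, then call the cheap model inside the simulator. A toy sketch (all functions and values made up for illustration):

```python
import numpy as np

def expensive_subgrid_process(x):
    """Stand-in for a physics computation too costly to run at every grid cell."""
    return np.sin(3 * x) * np.exp(-x**2)

# Fit a cheap polynomial emulator to samples of the expensive process.
x_train = np.linspace(-2, 2, 200)
y_train = expensive_subgrid_process(x_train)
emulator = np.poly1d(np.polyfit(x_train, y_train, deg=11))

# Check emulation error on held-out points.
x_test = np.linspace(-1.9, 1.9, 57)
err = np.max(np.abs(emulator(x_test) - expensive_subgrid_process(x_test)))
print(f"max emulation error on test points: {err:.4f}")
```
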
Workshop
Felix Petersen · Marco Cuturi · Hilde Kuehne · Christian Borgelt · Lawrence Stewart · Michael Kagan · Stefano Ermon

[ Stolz 0 ]

Abstract

Gradients and derivatives are integral to machine learning, as they enable gradient-based optimization. In many real applications, however, models rest on algorithmic components that implement discrete decisions, or rely on discrete intermediate representations and structures. These discrete steps are intrinsically non-differentiable and accordingly break the flow of gradients. To use gradient-based approaches to learn the parameters of such models requires turning these non-differentiable components differentiable. This can be done with careful considerations, notably, using smoothing or relaxations to propose differentiable proxies for these components. With the advent of modular deep learning frameworks, these ideas have become more popular than ever in many fields of machine learning, generating in a short time-span a multitude of "differentiable everything" approaches, impacting topics as varied as rendering, sorting and ranking, convex optimizers, shortest-paths, dynamic programming, physics simulations, NN architecture search, top-k, graph algorithms, weakly- and self-supervised learning, and many more.
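
As a concrete instance of the smoothing idea (an illustrative sketch, not any specific paper's method): a hard argmax has zero gradient almost everywhere, but replacing it with a temperature-controlled softmax yields a differentiable proxy that recovers the hard operator as the temperature goes to zero.

```python
import numpy as np

def soft_argmax(scores, tau):
    """Differentiable relaxation of argmax: the expected index under a
    softmax distribution with temperature tau. As tau -> 0 this approaches
    the hard argmax; larger tau gives a smoother, easier-to-optimize proxy."""
    z = scores / tau
    weights = np.exp(z - z.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ np.arange(len(scores))

scores = np.array([0.2, 1.5, 0.3, 1.4])
print(soft_argmax(scores, tau=1.0))    # ~1.74: smooth blend of indices
print(soft_argmax(scores, tau=0.01))   # ~1.00: nearly the hard argmax (index 1)
```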

Workshop
Caglar Gulcehre · Razvan Pascanu · Antonio Orvieto · Carmen Amo Alonso · Maciej Wolczyk

[ Straus 3 ]

Abstract

This workshop aims to bring together various researchers to chart the course for the next generation of sequence models. The focus will be on better understanding the limitations of existing models like transformer architectures, recurrent neural networks, and state space models (e.g., S4, Mamba), as well as describing existing open problems. We will touch on topics such as memory, long-range context and in-context learning, optimization stability of these architectures, and their ability to represent different classes of problems. We will also cover interpretability and pragmatic aspects of getting these models to be efficient and perform well: how they should be scaled up, and the trade-offs and limitations imposed by current hardware. We will place additional emphasis on the discussion regarding how we should evaluate and benchmark sequential models at scale, for example, in the context of language or other domains like vision, audio, or biological signals.

Workshop
Zhenfei (Jeremy) Yin · Mahi Shafiullah · Zhenhua Xu · Quan Vuong · Jing Shao · Lu Sheng · Takayuki Osa · Hengshuang Zhao · Mohamed Elhoseiny · Xihui Liu · Tatsuya Harada · Cewu Lu · Wanli Ouyang · Pete Florence · Yu Qiao · Dacheng Tao · Phil Torr

[ Lehar 4 ]

Abstract

Multi-modal Foundation Model meets Embodied AI (MFM-EAI)

In recent years, Multi-modal Foundation Models (MFM) such as CLIP, ImageBind, DALL·E 3, GPT-4V, and Gemini have emerged as one of the most captivating and rapidly advancing areas in AI, drawing significant attention and progressing swiftly. The open-source community for MFM has also seen vigorous growth, with the emergence of models and algorithms like LLaVA, LAMM, Stable Diffusion, and OpenFlamingo. These MFMs are now actively exploring ultimate application scenarios beyond traditional computer vision tasks. Recent studies have unveiled the immense potential these models hold in empowering embodied AI agents, marking the intersection of these fields with a multitude of open questions and unexplored territories. This workshop, MFM-EAI, is dedicated to exploring these critical challenges:

- How can we train and evaluate MFM in open-ended environments?
- What constitutes an effective system architecture for MFM-based Embodied AI Agents?
- And importantly, how can MFM augment the perceptual and decision-making capabilities of these agents, balancing their high-level decision-making prowess with the nuanced requirements of low-level control in embodied systems?

Topics include but are not limited to:

- Training and evaluation of MFM in open-ended scenarios
- Data collection for training Embodied AI Agents and corresponding MFM
- Framework design for MFM-powered embodied agents
- Decision-making …

Workshop
Xinyuan Sun · Anisoara Calinescu · Christian Schroeder · Georgios Piliouras · Dawn Song · Thomas Thiery · Hawra Milani · Klaudia Krawiecka

[ Stolz 2 ]

Abstract

This is a workshop proposal, targeting the intersection of Agentic AI and Market/Incentives Design.

Workshop Summary: Recent developments in foundation models have paved the way for the wide adoption of AI agents that interact with humans and each other. The cooperation and safety of those models are a necessity, especially as they gain autonomy and participate in high-stakes markets as autonomous systems, making those markets "agentic." However, those agentic markets face significant challenges, as most existing methods for improving their performance and robustness presume critical use of policy and regulation, which are insufficient and too slow for an economy driven by a mixture of human and algorithmic participants, especially in zero-shot scenarios. As we advance towards an AI-centric future, the emergence of markets, mechanisms, and mediation platforms dedicated to preference elicitation and resource allocation for those highly agentic systems is inevitable. We expect many existing multi-agent security and cooperation approaches to break in high-stakes situations where hyper-adversarial incentives are present. This is compounded by the emergence of complexity from AI interactions, exemplified by intricate interdependencies within agentic systems. Given this complexity, how can we fully understand and assess the associated risks? How can we improve the performance and robustness of these markets? …

Workshop
Katherine Lee · A. Feder Cooper · Niloofar Mireshghallah · James Grimmelmann · Matthew Jagielski · Milad Nasresfahani · Fernando Delgado · Lydia Belkadi

[ Lehar 2 ]

Abstract

Excitement about the capabilities of generative-AI systems has touched nearly every corner of ML research and public life. Amid such exhilarating potential, there is also intensifying unease around the development and deployment of generative-AI systems. By now, it is well-known that generative models ingest vast quantities of intellectual property (IP) [8–10], which they can regurgitate verbatim [1–3, 11, 12]. Such memorization has been the continued focus of copyright-focused lawsuits [4], but memorization and copyright just scratch the surface of potential legal issues at play. In the report from our ICML workshop last year, we produced a taxonomy of emerging issues that touch on intent, privacy, misinformation and disinformation, and IP (more broadly) [5]. Indeed, based on the events of the past year alone — executive orders [13], lawsuits [4], new and amended laws [7], and labor strikes [6] — it has only become clearer that there are significant “technical, doctrinal, and policy challenges presented by law for Generative AI, and by Generative AI for law” [5]. Within this challenging and fast-moving landscape, GenLaw has played an important clarifying and cross-educational role. The first GenLaw workshop at ICML 2023 hosted over 400 attendees in person, and our workshop recording has been …

Workshop
Navid NaderiAlizadeh · Samuel Sledzieski · Kanchan Jha · Meghana Kshirsagar · Rohit Singh · Quincey Justman

[ Stolz 1 ]

Abstract

There is a growing gap between machine learning (ML) research on biology-inspired problems and the actual broad-based use of ML in the lab or the clinic. This gap is especially pressing in the context of foundation models and other large ML models. Accessibility and efficiency concerns limit the adoption of these models by biologists and clinicians. Large ML models may require extensive GPU clusters to train, while most biological labs only have access to much more modest computational resources. The usability of these models for non-expert users is also a concern, as is the need to iteratively adapt these models based on lab discoveries. This workshop seeks to bring ML and biomedical researchers together to identify interdisciplinary approaches to design and apply large, complex ML models for biomedical discovery. We invite researchers from academia and industry to submit original papers to bridge the accessibility and efficiency gap between ML research and wet lab use. All accepted papers will be invited to present posters at the workshop, and a few will be invited to give individual spotlight presentations.

Workshop
Arpit Agarwal · Tina Eliassi-Rad · Hoda Heidari · Alessandro Lazaric · Maximilian Nickel · Nicolas Usunier

[ Hall A2 ]

Abstract

With the widespread adoption of machine learning in social technologies, there are increasingly complex interactions between humans, algorithmic decision-makers, and society at large. For instance, algorithmic decisions influence the information and opportunities that are available to individuals, the news they read, the job listings they are matched to, the credit lines they receive, and the social circles they form. On a macroscopic level, such decisions can therefore affect societal outcomes such as social mobility, mental health, polarization, etc. At the same time, humans also influence algorithmic decision-makers, for instance, by expressing their preferences through observed behaviors, which might be inconsistent or strategic. To understand long-term individual and societal outcomes resulting from these interactions, and to develop algorithms that mitigate undesired outcomes, it has therefore become increasingly important to model these complex interactions as a whole. The goal of this workshop is to bring together researchers from both academia and industry who work on modeling interactions between AI systems, humans, and society. We aim to cover a wide range of topics including both theory and practice. In particular, we encourage submissions on the following topics:

- Feedback loops between human and algorithmic decisions, and their long-term impacts
- Strategic behavior and its impact …

Workshop
Michal Geyer · Joanna Materzynska · Jack Parker-Holder · Yuge Shi · Trevor Darrell · Nando de Freitas · Antonio Torralba

[ Hall A8 ]

Abstract

The past few years have seen the rapid development of Generative AI, with powerful foundation models demonstrating the ability to generate new, creative content in multiple modalities. Following breakthroughs in text and image generation, it is clear the next frontier lies in video. One challenging but compelling aspect unique to video generation is the various forms in which one could control such generation: from specifying the content of a video with text, to viewing a scene with different camera angles, or even directing the actions of characters within the video. We have also seen the use cases of these models diversify, with works that extend generation to 3D scenes, use such models to learn policies for robotics tasks or create an interactive environment for gameplay. Given the great variety of algorithmic approaches, the rapid progress, and the tremendous potential for applications, we believe now is the perfect time to engage the broader machine learning community in this exciting new research area. We thus propose the first workshop on Controllable Video Generation (CVG), focused on algorithms that can control videos with multiple modalities and frequencies, and the swathe of potential applications. We anticipate CVG would be uniquely relevant to ICML as …

Workshop
Sharvaree Vadgama · Erik Bekkers · Alison Pouplin · Robin Walters · Hannah Lawrence · Sékou-Oumar Kaba · Jakub Tomczak · Stefanie Jegelka

[ Stolz 0 ]

Abstract

By recognizing that nearly all data is rooted in our physical world, and thus inherently grounded in geometry and physics, it becomes evident that learning systems should preserve this grounding throughout the process of representation learning in order to be meaningful. For example, preserving group transformation laws and symmetries through equivariant layers is crucial in domains such as computational physics, chemistry, robotics, and medical imaging. It leads to effective and generalizable architectures and improved data efficiency. Similarly, in generative models applied to non-Euclidean data spaces, maintaining the manifold structure is essential to obtain meaningful samples. Therefore, this workshop focuses on the principle of grounding in geometry, which we define as follows: A representation, method, or theory is grounded in geometry if it can be amenable to geometric reasoning, that is, it abides by the mathematics of geometry.
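
As a minimal example of "preserving group transformation laws" (a numpy sketch of the textbook fact that circular convolution is translation-equivariant; not code from the workshop):

```python
import numpy as np

def circular_conv1d(x, w):
    """Circular 1-D correlation: a translation-equivariant linear layer."""
    n, k = len(x), len(w)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(k)) for i in range(n)])

rng = np.random.default_rng(0)
x, w = rng.standard_normal(8), rng.standard_normal(3)

# Equivariance: shifting the input and then convolving equals
# convolving first and then shifting the output.
lhs = circular_conv1d(np.roll(x, 2), w)
rhs = np.roll(circular_conv1d(x, w), 2)
print(np.allclose(lhs, rhs))  # True
```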

Workshop
Julia Gusak · Jean Kossaifi · Alena Shilova · Rocco Sedona · Jan Kautz

[ Hall A1 ]

Abstract

Join HPC and AI experts to learn how to train neural networks at an unprecedented scale with your existing infrastructure.

Workshop
Zhenfei (Jeremy) Yin · Yawen Duan · Jianfeng Chi · Jing Shao · Pavel Izmailov · Hang Su · Peyman Najafirad · Neil Gong · Cihang Xie · Bo Li · Yu Qiao · Wanli Ouyang · Alan Yuille · Jun Zhu · Dacheng Tao · Phil Torr

[ Straus 1 ]

Abstract

Advanced Multi-modal Foundation Models (MFMs) and AI Agents, equipped with diverse modalities and an increasing number of available affordances (e.g., tool use, code interpreter, API access, etc.), have the potential to accelerate and amplify their predecessors’ impact on society. Understanding and preempting the vulnerabilities of such systems and their induced harms becomes unprecedentedly crucial. Building trustworthy MFMs and AI Agents goes beyond the adversarial robustness of such models; it also emphasizes the importance of proactive harm evaluation, mitigation, safeguards, and the establishment of comprehensive safety mechanisms throughout the lifecycle of the systems’ development and deployment. This approach demands a blend of technical and socio-technical strategies, incorporating AI governance and regulatory insights to build trustworthy MFMs and AI Agents.

The goals of this workshop are threefold: 1) highlight novel directions in trustworthy MFM and AI Agent research; 2) promote interdisciplinary collaboration on trustworthy MFMs and AI Agents, for example among i) trustworthy ML research in vision, language, and other modalities, as well as among ii) technical and governance communities; 3) initiate discussions on best practices for responsible training, deployment, transparency, and security of MFMs and AI Agents.

Topics include but are not limited to:

• Robustness, attack and defense, poisoning, hijacking and security
• Privacy and watermarking
• …

Workshop
Berivan Isik · Ziteng Sun · Banghua Zhu · Enric Boix-Adserà · Nezihe Merve Gürel · Bo Li · Ahmad Beirami · Sanmi Koyejo

[ Straus 2 ]

Abstract

Recent advancements in generative foundation models (FMs) such as large language models (LLMs) and diffusion models have propelled the capability of deep neural models to seemingly magical heights. Yet, the soaring growth in model size and capability has also led to pressing concerns surrounding such modern AI systems. The scaling of the models significantly increases their energy consumption and deployment cost. Overreliance on AI may perpetuate existing inequalities and lead to widening discrimination against certain groups of people. The gap between the understanding of the internal workings of FMs and their empirical success has also reached an unprecedented level, hindering accountability and transparency. For decades, theoretical tools from statistics, information theory, and optimization have played a pivotal role in extracting information from unstructured data. Currently, the rapid pace of FM development has outstripped theoretical investigation, creating a potential gap between theoretical researchers and the challenges surrounding FMs. This workshop proposes a platform for bringing together researchers and practitioners from the foundation model and theory communities (including statistics, information theory, optimization, and learning theory) to discuss advances and challenges in addressing these concerns, with a focus on responsible AI, efficiency, and principled foundations.

Workshop
Fazl Barez · Lawrence Chan · Mor Geva · Kayo Yin · Neel Nanda · Max Tegmark

[ Lehar 1 ]

Abstract

We propose a one-day workshop on mechanistic interpretability -- reverse-engineering algorithms from the internals of neural networks.

Workshop
Beyza Ermis · Erin Grant · Frank Hutter · Julien Siems · Noah Hollmann · Jelena Bratulić

[ Lehar 4 ]

Abstract

In-context learning (ICL) is an emerging capability of large-scale models, including large language models (LLMs) like GPT-3, to acquire new capabilities directly from the context of an input example without separate training or fine-tuning, enabling these models to adapt rapidly to new tasks, datasets, and domains. This workshop brings together diverse perspectives on this new paradigm to assess progress, synthesize best practices, and chart open problems. Core topics will include architectural and other inductive biases enabling in-context skill acquisition, and reliable evaluation of ICL in application domains including reinforcement learning, representation learning, and safe and reliable machine learning.
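
A minimal illustration of the paradigm (a hypothetical prompt, not workshop material): the task is specified purely through examples placed in the context, and a capable model is expected to continue the pattern without any weight updates.

```python
# Build a few-shot prompt for English -> French translation. Nothing is
# trained or fine-tuned; the "learning" happens entirely in-context.
examples = [("sea", "mer"), ("sky", "ciel"), ("tree", "arbre")]
query = "house"

prompt = "\n".join(f"English: {en}\nFrench: {fr}" for en, fr in examples)
prompt += f"\nEnglish: {query}\nFrench:"
print(prompt)  # a capable LLM would be expected to continue with "maison"
```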

Workshop
Adam Mahdi · Ludwig Schmidt · Alexandros Dimakis · Rotem Dror · Georgia Gkioxari · Sang Truong · Lilith Bat-Leah · Fatimah Alzamzami · Georgios Smyrnis · Thao Nguyen · Nezihe Merve Gürel · Paolo Climaco · Luis Oala · Hailey Schoelkopf · Andrew M. Bean · Berivan Isik · Vaishaal Shankar · Mayee Chen · Achal Dave

[ Straus 3 ]

Abstract

This workshop addresses the growing significance of preparing high-quality datasets for the development of large-scale foundation models. With recent advancements highlighting the key role of dataset size, quality, diversity, and provenance in model performance, this workshop considers the strategies employed for enhancing data quality, including filtering, augmentation, and relabeling. The workshop draws upon the increasing interest in data-centric research. It seeks to advance understanding and methodologies for dataset composition and curation, ultimately fostering the development of more robust models capable of addressing diverse challenges across multiple domains, to the benefit of the public.

Workshop
Claire Vernade · Michael Muehlebach · Johannes Kirschner · Dylan Foster · Alexandre Proutiere · Csaba Szepesvari · Andreas Krause · Onno Eberhard

[ Schubert 4 - 6 ]

Abstract

Despite rapid advances in machine learning, solving large-scale stochastic dynamic programming problems remains a significant challenge. The combination of neural networks with RL has opened new avenues for algorithm design, but the lack of theoretical guarantees of these approaches hinders their applicability to high-stakes problems traditionally addressed using control theory, such as online supply chain optimization, industrial automation, and adaptive transportation systems. This workshop focuses on recent advances in developing a learning theory of decision (control) systems, building on techniques and concepts from two communities that have had limited interactions despite their shared target: reinforcement learning and control theory.

Workshop
Theresa Eimer · Raghu Rajan · Julian Dierkes · André Biedenkapp · Vu Nguyen · Aleksandra Faust

[ Schubert 1 - 3 ]

Abstract

The past few years have seen a surge of interest in reinforcement learning, with breakthrough successes of applying RL in games, robotics, chemistry, logistics, nuclear fusion, and more. These headlines, however, blur the picture of what remains a brittle technology, with many successes relying on heavily engineered solutions. Indeed, several recent works have demonstrated that RL algorithms are brittle to seemingly mundane design choices. Thus, it is often a significant challenge to effectively apply RL in practice, especially on novel problems, limiting its potential impact and narrowing its accessibility. In this workshop, we want to bring together different communities working on solving these problems. A variety of distinct sub-communities spanning RL, Meta-Learning, and AutoML have been working on making RL work “out-of-the-box” in arbitrary settings - this is the AutoRL setting. Recently, with the emergence of LLMs and their in-context learning abilities, they have significantly impacted all these communities. There are LLM agents tackling traditional RL tasks as well as few-shot RL agents increasing efficiency and generalization that are also trying to automate RL. LLMs have also been influencing AutoML directly with papers such as OptFormer. However, there is currently little crossover between these communities. As such, we want to create the space to …

Workshop
Payel Das · Anna Ivanova · Aurelie Lozano · Subhajit Chaudhury · Ilia Sucholutsky · Badr AlKhamissi

[ Lehar 3 ]

Abstract

Large Language Models (LLMs) have undoubtedly taken center stage in the AI revolution, showing impressive performance in a wide variety of tasks, including machine translation, standardized tests, and conversational chatbots. It is even more impressive to uncover that these models exhibit unpredictable capabilities in solving unseen tasks. This demonstration of emergent abilities, often credited to the scale of the parameters and data size in the case of LLMs, is being considered as the footprint of intelligence.

The goal of this workshop is to assess and understand the position of current LLMs’ abilities in the landscape of intelligent systems, with a strong focus on cognitive abilities. By bringing in experts from different scientific disciplines, such as AI/ML, neuroscience, cognitive science, and psychology, we aim to discuss topics that include but are not limited to:

• Where do LLMs stand in terms of performance on cognitive tasks, such as reasoning, navigation, planning, and theory of mind? What are the fundamental limits of language models with respect to cognitive abilities?
• How do LLMs fine-tuned on specific tasks end-to-end compare to augmented LLMs coupled with external modules?
• What are the similarities and differences between mechanistic interpretability approaches in AI and in neuroscience? What do they tell us about similarities and …