

Poster

Controllable Prompt Tuning For Balancing Group Distributional Robustness

Hoang Phan · Andrew Wilson · Qi Lei


Abstract: Models trained on data composed of different groups or domains can suffer from severe performance degradation under distribution shifts. While recent methods have largely focused on optimizing the worst-group objective, this often comes at the expense of good performance on the other groups. To address this problem, we introduce an optimization scheme that pursues strong performance on every group simultaneously, finding a solution that does not severely sacrifice performance on any of them. However, applying such optimization directly requires updating the parameters of the entire network, which is both computationally expensive and challenging. We therefore introduce Controllable Prompt Tuning (CPT), which couples our optimization scheme with prompt-tuning techniques. On spurious correlation benchmarks, our procedure achieves state-of-the-art results across both transformer and non-transformer architectures, as well as unimodal and multimodal data, while tuning only $0.4\%$ of the model parameters. Our implementation is available at \url{https://anonymous.4open.science/r/CPT/}.
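As a rough illustration of the idea the abstract describes, the sketch below combines per-group losses under an explicit preference vector while keeping a backbone frozen and training only a small soft-prompt tensor and head. All names (`SoftPromptClassifier`, `group_balanced_loss`) and design details are assumptions for illustration, not the authors' actual CPT implementation.

```python
# Hypothetical sketch of group-balanced prompt tuning (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftPromptClassifier(nn.Module):
    """Frozen backbone + trainable soft prompt; only the prompt and head are tuned."""

    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_len: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the full network fixed
        # Small set of learnable prompt embeddings prepended to each input.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim); assumed backbone maps
        # (batch, seq_len + prompt_len, embed_dim) -> per-token features.
        batch = token_embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        feats = self.backbone(torch.cat([prompts, token_embeds], dim=1))
        return self.head(feats.mean(dim=1))  # mean-pool, then classify


def group_balanced_loss(logits, labels, group_ids, num_groups, weights):
    """Weighted sum of per-group average losses.

    `weights` encodes a preference over groups: uniform weights treat all
    groups equally, while skewed weights emphasize selected groups.
    """
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    total = logits.new_zeros(())
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            total = total + weights[g] * per_sample[mask].mean()
    return total
```

With uniform weights this objective treats every group equally; shifting weight toward the worst-performing groups moves the trade-off closer to worst-group optimization, which is the kind of controllable balance the abstract alludes to.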
