Abstract
Performative prediction refers to scenarios where model predictions influence the underlying data distribution they aim to predict. A desirable property in this context is performative stability, where model predictions are already optimal for the distribution they induce, indicating converged model parameters and no need for further retraining. Achieving performative stability requires characterizing the data distribution map D(θ), i.e., the relationship between predictions and the resulting distribution shifts. Current studies typically quantify distribution differences using metrics like the W₁ distance or χ² divergence, which may not provide isometric embeddings or maintain metric equivalence in practical scenarios, limiting their applicability across various data distribution maps. Moreover, the crucial smoothness parameter β in existing work is often unobtainable in performative scenarios, constraining the real-world utility of current theoretical results and methods. To address these challenges, we develop an algorithm that learns a performatively stable model for arbitrary data distribution maps without requiring the joint smoothness parameter β. Specifically, we introduce a new ε̂-sensitivity measure for D(θ), quantified by the gradient of the loss function, which naturally and directly characterizes how distribution shifts affect the optimization of the objective function. Based on this sensitivity, we formulate a γ-strongly convex loss function and optimize the deployed model accordingly, where γ is derived from the defined ε̂, eliminating the need for the β-joint smoothness assumption. Our theoretical results guarantee the convergence of the deployed model to performative stability. Extensive experiments on synthetic and real-world datasets with diverse data distribution maps demonstrate the superiority of our method over state-of-the-art techniques in two key aspects: prediction accuracy and performative stability.
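To illustrate the setting the abstract describes, the sketch below shows repeated risk minimization (one of the paper's listed keywords) on a toy problem. It is not the paper's algorithm: the distribution map `D(theta)` here is a hypothetical location family whose mean shifts by `eps * theta` (the `mu`, `eps`, and squared-loss choices are illustrative assumptions), chosen so the performatively stable point has a closed form for checking convergence.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eps = 1.0, 0.5  # base mean and sensitivity of the toy distribution map (assumed)

def sample(theta, n=10_000):
    # Toy distribution map D(theta): a Gaussian whose mean shifts by eps * theta.
    return rng.normal(mu + eps * theta, 1.0, n)

theta = 0.0
for _ in range(30):
    z = sample(theta)          # deploy the model, observe the induced distribution
    theta = z.mean()           # exact minimizer of E[(theta - z)^2] over D(theta_t)

# A performatively stable theta satisfies theta = mu + eps * theta,
# i.e. theta* = mu / (1 - eps) = 2.0 here; RRM contracts to it when eps < 1.
print(theta)
```

Retraining on the distribution the current model induces, then redeploying, is exactly the fixed-point iteration whose convergence the paper studies; when the map is too sensitive (here, `eps >= 1`), the iteration diverges, which is why sensitivity measures such as the paper's ε̂ matter.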
| Original language | English |
|---|---|
| Article number | 42 |
| Number of pages | 22 |
| Journal | Machine Learning |
| Volume | 115 |
| Issue number | 3 |
| Early online date | 26 Feb 2026 |
| DOIs | |
| Publication status | Published - Mar 2026 |
User-Defined Keywords
- Performative prediction
- Performative stability
- Distribution shift
- Data distribution map
- Repeated risk minimization
Title: Performative Prediction in the Wild: Adapting to Arbitrary Data Distribution Maps