We introduce open-world motion forecasting, a novel setting in which new object classes are sequentially introduced over time and future object trajectories are estimated directly from camera images. We also present Open-World Motion PrEdictioN (OMEN), the first approach specifically designed to address this setting.
Teaser
Our approach is the first to tackle the problem of open-world motion forecasting. In contrast to (a) traditional and (b) end-to-end motion forecasting, (c) the underlying model is trained incrementally, with access to labels for only a subset of all classes $C^n$ and raw multi-view camera images. As a result, it continually learns to forecast the motion of all classes in an end-to-end manner, while handling imperfect object detections and successfully combating catastrophic forgetting.

Abstract

Motion forecasting aims to predict the future trajectories of dynamic agents in the scene, enabling autonomous vehicles to effectively reason about scene evolution. Existing approaches operate under the closed-world regime and assume a fixed object taxonomy as well as access to high-quality perception. Therefore, they struggle in real-world settings where perception is imperfect and the object taxonomy evolves over time. In this work, we bridge this fundamental gap by introducing open-world motion forecasting, a novel setting in which new object classes are sequentially introduced over time and future object trajectories are estimated directly from camera images. We tackle this setting by proposing the first end-to-end class-incremental motion forecasting framework, which mitigates catastrophic forgetting while simultaneously learning to forecast newly introduced classes. When a new class is introduced, our framework employs a pseudo-labeling strategy to first generate motion forecasting pseudo-labels for all known classes, which are then processed by a vision-language model to filter inconsistent and over-confident predictions. In parallel, our approach further mitigates catastrophic forgetting by using a novel replay sampling strategy that leverages query feature variance to sample previous sequences with informative motion patterns. Extensive evaluation on the nuScenes and Argoverse 2 datasets demonstrates that our approach successfully resists catastrophic forgetting and maintains performance on previously learned classes while improving adaptation to novel ones. Further, we demonstrate that our approach supports zero-shot transfer to real-world driving and naturally extends to end-to-end class-incremental planning, enabling continual adaptation of the full autonomous driving system.

Technical Approach

Overview of our approach

Illustration of the proposed OMEN architecture. At each incremental step $i$, we create detection and motion forecasting pseudo-labels for the old categories with the old model $\Phi^{i-1}$, filter them by matching them against the predictions of a vision-language model (VLM), and add them to the detection ($\triangle$) and motion forecasting ($\circ$) ground truth of $D^i$. Furthermore, a replay buffer is constructed based on the latent space of the old model.
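The replay buffer described above ranks previously seen sequences by how much their latent query features vary, on the assumption that high-variance sequences capture more informative motion patterns. The snippet below is a minimal sketch of this idea, not the released implementation: the function name `select_replay_sequences` and the exact scoring rule (summed per-dimension variance) are our own illustrative choices.

```python
import numpy as np

def select_replay_sequences(query_features, buffer_size):
    """Sketch of variance-based replay sampling.

    query_features: list of arrays, one per sequence, each of shape
        (num_queries, feat_dim) -- latent query features extracted
        by the old model for that sequence.
    buffer_size: number of sequences to keep in the replay buffer.

    Returns the (sorted) indices of the sequences whose query
    features have the highest total variance, taken here as a
    proxy for informative motion patterns.
    """
    # Score each sequence by the summed variance of its query features.
    scores = [feats.var(axis=0).sum() for feats in query_features]
    # Rank sequences from highest to lowest variance.
    ranked = np.argsort(scores)[::-1]
    # Keep the top-scoring sequences for the replay buffer.
    return sorted(ranked[:buffer_size].tolist())
```

Under this sketch, a sequence whose queries barely move in latent space (e.g. a static scene) is ranked below one with strongly varying query features, so the buffer is biased toward diverse motion.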


Video

Code

We will release the code upon acceptance of our paper.

Publications

If you find our work useful, please consider citing our paper:

Nicolas Schischka, Nikhil Gosala, B Ravi Kiran, Senthil Yogamani, and Abhinav Valada

Open-World Motion Forecasting
arXiv Preprint, 2026.

Authors

Nicolas Schischka

University of Freiburg

Nikhil Gosala

University of Freiburg

B Ravi Kiran

Qualcomm SARL France

Senthil Yogamani

QT Technologies Ireland Limited

Abhinav Valada

University of Freiburg

Acknowledgment

Nikhil Gosala was funded by Qualcomm Technologies Inc., as well as an academic grant from NVIDIA. Nicolas Schischka was funded by the European Union through the HIDDEN project, under grant agreement No 101202228. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Climate, Infrastructure and Environment Executive Agency (CINEA). Neither the European Union nor the granting authority can be held responsible for them.