
KITScenes LongTail Dataset

In real-world domains such as self-driving, generalization to rare scenarios remains a fundamental challenge. To address this, we introduce a new dataset for end-to-end driving that focuses on long-tail driving events. We provide multi-view video data, trajectories, high-level instructions, and detailed reasoning traces, facilitating in-context learning and few-shot generalization. The resulting benchmark for multimodal models, such as VLMs and VLAs, goes beyond safety and comfort metrics by evaluating instruction following and semantic coherence between model outputs. The multilingual reasoning traces in English 🇺🇸, Spanish 🇪🇸, and Chinese 🇨🇳 were written by domain experts with diverse cultural backgrounds. This makes our dataset a unique resource for studying how different forms of reasoning affect driving competence.


Scenarios

We collected our data over the course of two years, beginning in late 2023. Our recordings cover urban and suburban environments as well as highways; the main locations are Karlsruhe, Heidelberg, Mannheim, and the Black Forest. We adjusted our routes to include many construction zones and intersections. In particular, we filtered for rare events such as adverse weather (heavy rain, snow, fog), road closures, and accidents. Consequently, our dataset encompasses scenarios that diverge from nominal data distributions (i.e., long-tail scenarios). Overall, the dataset contains 1,000 scenarios of 9 s each, divided into three splits: train (500), test (400), and validation (100).

*Figure: distribution of scenario types.*

In addition to specifically selected challenging scenarios, adverse weather, and construction zones, we use the Pareto principle to identify further long-tail data. Specifically, we use the well-established nuScenes dataset (Caesar et al., 2020) as a reference and rank-frequency plots with an 80% cumulative-frequency threshold to define long-tail data. In nuScenes, approximately 88% of the scenarios are recorded during the day; thus, nighttime scenarios are long-tail data. For maneuver types, driving straight and regular turns account for approximately 90% of nuScenes, so overtaking and lane changing fall into the remaining long tail. As an exception, we also include nominal driving at intersections to better evaluate instruction following, since there are more viable trajectories than in most long-tail scenarios.
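
The rank-frequency criterion above can be sketched as follows. This is a minimal illustration, not our exact pipeline: categories are ranked by frequency, and everything past the cumulative-frequency threshold is treated as long tail. The maneuver counts below are hypothetical and chosen only to mirror the nuScenes proportions mentioned above.

```python
from collections import Counter

def long_tail_categories(labels, threshold=0.8):
    """Rank categories by frequency; categories whose rank falls past the
    cumulative-frequency threshold form the long tail (Pareto principle)."""
    counts = Counter(labels)
    total = sum(counts.values())
    cumulative = 0.0
    head, tail = [], []
    for category, n in counts.most_common():
        if cumulative < threshold:
            head.append(category)
        else:
            tail.append(category)
        cumulative += n / total
    return head, tail

# Hypothetical maneuver distribution (illustrative counts only):
labels = (["straight"] * 70 + ["turn"] * 20
          + ["overtake"] * 6 + ["lane_change"] * 4)
head, tail = long_tail_categories(labels)
# head → ["straight", "turn"]; tail → ["overtake", "lane_change"]
```

With these counts, driving straight and turning cover 90% of the samples and form the head, while overtaking and lane changing land in the long tail, matching the classification described above.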

Our dataset contains multi-view video data with a 360° horizontal field of view (FoV) and six viewing angles (see (a) to (f) in the video below). Furthermore, we perform frame-wise image stitching (see Fig. 3(g) in our paper). Our stitching method introduces gradual image warping to generate 360° views.
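
To give an intuition for the gradual transition between adjacent views, here is a minimal cross-fade over an overlap region, written with plain Python lists. This is a simplified stand-in for our stitching method, not its actual implementation (which warps images rather than only blending them).

```python
def blend_overlap(left_strip, right_strip):
    """Linearly cross-fade two overlapping image strips column by column:
    the left image dominates at the left edge, the right image at the
    right edge, giving a gradual transition across the overlap."""
    width = len(left_strip[0])
    blended = []
    for row_l, row_r in zip(left_strip, right_strip):
        out = []
        for x, (a, b) in enumerate(zip(row_l, row_r)):
            w = x / (width - 1)  # 0.0 at the left edge, 1.0 at the right
            out.append((1 - w) * a + w * b)
        blended.append(out)
    return blended

# Two 1x5 grayscale strips: the blend fades from the left values (100)
# to the right values (0) across the overlap.
blended = blend_overlap([[100, 100, 100, 100, 100]], [[0, 0, 0, 0, 0]])
# → [[100.0, 75.0, 50.0, 25.0, 0.0]]
```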

Reasoning Traces

We ask domain experts (i.e., researchers working on self-driving) with diverse cultural backgrounds to annotate reasoning traces about driving actions. The experts answer five questions about a given driving scenario and an expert-driven trajectory.

*Figure: reasoning traces.* Example prompts for few-shot CoT kinematic inference used in our experiments. We use an XML-like syntax for all prompts (see [Section 5 in our paper](https://arxiv.org/pdf/2603.23607#page=11)).
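
To illustrate the XML-like prompt syntax, here is a minimal sketch of how such a prompt could be assembled. The tag names and field contents below are illustrative assumptions, not the exact schema from the paper.

```python
def make_prompt(instruction, context, question):
    """Assemble a driving prompt using XML-like tags. The tag names
    (<instruction>, <context>, <question>) are illustrative only."""
    return (
        f"<instruction>{instruction}</instruction>\n"
        f"<context>{context}</context>\n"
        f"<question>{question}</question>"
    )

prompt = make_prompt(
    "Infer the kinematics of the ego vehicle.",
    "Urban intersection, heavy rain, ego speed 8 m/s.",
    "What is a safe trajectory for the next 9 seconds?",
)
```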

Citation

If you use KITScenes LongTail, please cite:

@misc{wagner2026longtaildrivingscenariosreasoning,
  title={LongTail Driving Scenarios with Reasoning Traces: The KITScenes LongTail Dataset},
  author={Royden Wagner and Omer Sahin Tas and Jaime Villa and Felix Hauser and Yinzhe Shen and
          Marlon Steiner and Dominik Strutz and Carlos Fernandez and Christian Kinzig and
          Guillermo S. Guitierrez-Cabello and Hendrik Königshof and Fabian Immel and Richard Schwarzkopf and
          Nils Alexander Rack and Kevin Rösch and Kaiwen Wang and Jan-Hendrik Pauls and Martin Lauer and
          Igor Gilitschenski and Holger Caesar and Christoph Stiller},
  year={2026},
  eprint={2603.23607},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.23607},
}

Paper: arXiv:2603.23607

Changelog

  • Mar 31, 2026: Version 1.0. We release the test split and 3 training samples for few-shot evaluations. The val and train splits and the stitched images will follow in a later version.