TAO
2D Box Tracking | Action/Event Detection | Common | ...
License: Unknown

Overview

TAO is a federated dataset for Tracking Any Object, containing 2,907 high-resolution videos captured in diverse environments, each half a minute long on average. We adopt a bottom-up approach for discovering a large vocabulary of 833 categories, an order of magnitude more than prior tracking benchmarks. To this end, we ask annotators to label tracks for objects that move at any point in the video, and to name them post factum. Our vocabulary is both significantly larger and qualitatively different from those of existing tracking datasets. To ensure scalability of annotation, we employ a federated approach that focuses manual effort on labeling tracks for the relevant objects in a video (e.g. those that move). We perform an extensive evaluation of state-of-the-art tracking methods and make a number of important discoveries regarding large-vocabulary tracking in an open world. In particular, we show that existing single- and multi-object trackers struggle when applied to this scenario, and that detection-based, multi-object trackers are in fact competitive with user-initialized ones. We hope that our dataset and analysis will boost further progress in the tracking community.
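Per-frame box annotations in tracking datasets of this kind are commonly distributed as COCO-style JSON, where each box carries a `track_id` linking it into a trajectory. The snippet below is a minimal sketch of grouping such annotations into tracks; the inline sample data and its exact keys (`videos`, `tracks`, `annotations`, `track_id`, `image_id`, `bbox`) are illustrative assumptions, not the verbatim schema of the TAO release.

```python
from collections import defaultdict

# Illustrative COCO-style sample; real TAO annotation files may use
# additional or differently named fields.
sample = {
    "videos": [{"id": 1, "name": "val/sequence_0001"}],
    "tracks": [{"id": 10, "video_id": 1, "category_id": 4}],
    "annotations": [
        # One 2D box (x, y, width, height) per object per annotated frame.
        {"id": 100, "track_id": 10, "image_id": 5, "bbox": [10, 20, 50, 80]},
        {"id": 101, "track_id": 10, "image_id": 6, "bbox": [12, 21, 50, 80]},
    ],
}

def group_by_track(annotations):
    """Group per-frame box annotations into trajectories keyed by track_id."""
    tracks = defaultdict(list)
    for ann in annotations:
        tracks[ann["track_id"]].append(ann)
    # Sort each track's boxes by frame (image_id) so they form a trajectory.
    for boxes in tracks.values():
        boxes.sort(key=lambda a: a["image_id"])
    return dict(tracks)

tracks = group_by_track(sample["annotations"])
print(len(tracks[10]))  # number of annotated frames in track 10
```

The same grouping works unchanged on the full annotation file after `json.load`, since it only relies on each annotation carrying a track identifier and a frame index.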


Citation

Please use the following citation when referencing the dataset:

@article{dave2020tao,
  title={TAO: A Large-Scale Benchmark for Tracking Any Object},
  author={Dave, Achal and Khurana, Tarasha and Tokmakov, Pavel and Schmid, Cordelia and Ramanan, Deva},
  journal={arXiv preprint arXiv:2005.10356},
  year={2020}
}
Dataset Summary

Data format: Video
Number of videos: 2,907
File size: 225.06 GB

Publisher: Achal Dave
A Ph.D. student at Carnegie Mellon University, advised by Prof. Deva Ramanan. My research focuses on open-world object detection and tracking.

Annotation provider: Scale AI, Inc.
Trusted by world-class companies, Scale delivers high-quality training data for AI applications such as self-driving cars, mapping, AR/VR, robotics, and more.