Argoverse
Fusion Box
Autonomous Driving
License: CC-BY-NC-SA 4.0

Overview

What is Argoverse?

  • One dataset with 3D tracking annotations for 113 scenes
  • One dataset with 324,557 interesting vehicle trajectories extracted from over 1000 driving hours
  • Two high-definition (HD) maps with lane centerlines, traffic direction, ground height, and more
  • One API to connect the map data with sensor information
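As a concrete illustration of connecting map data to a sensor observation, here is a minimal sketch of one common map query: given a position reported by the vehicle, find the closest lane centerline. The lane IDs, waypoints, and the `nearest_lane` helper below are all hypothetical, for illustration only; they are not part of the actual Argoverse API.

```python
import math

# Hypothetical HD-map fragment: lane id -> list of (x, y) centerline waypoints.
centerlines = {
    "lane_a": [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)],
    "lane_b": [(0.0, 5.0), (10.0, 5.0), (20.0, 5.0)],
}

def nearest_lane(point, lanes):
    """Return the id of the lane whose centerline comes closest to the query point."""
    def dist_to_lane(waypoints):
        return min(math.hypot(point[0] - x, point[1] - y) for x, y in waypoints)
    return min(lanes, key=lambda lane_id: dist_to_lane(lanes[lane_id]))

nearest_lane((9.0, 1.0), centerlines)  # -> "lane_a"
```

The real API offers far richer queries (lane direction, ground height, drivable area), but they all reduce to this pattern: a sensor-derived position in city coordinates used to index into the map.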

Data collection

Where was the data collected?

The data in Argoverse comes from a subset of the area in which Argo AI’s self-driving test vehicles are operating in Miami and Pittsburgh — two US cities with distinct urban driving challenges and local driving habits. We include recordings of our sensor data, or "log segments," across different seasons, weather conditions, and times of day to provide a broad range of real-world driving scenarios.

Total lane coverage: 204 linear kilometers in Miami and 86 linear kilometers in Pittsburgh.

img

Miami

Beverly Terrace, Edgewater, Town Square

img

Pittsburgh

Downtown, Strip District, Lower Lawrenceville

How was the data collected?

We collected all of our data using a fleet of identical Ford Fusion Hybrids, fully integrated with Argo AI self-driving technology. We include data from two LiDAR sensors, seven ring cameras, and two front-facing stereo cameras. All sensors are roof-mounted:

img

LiDAR

  • 2 roof-mounted LiDAR sensors
  • Overlapping 40° vertical field of view
  • Range of 200m
  • On average, our LiDAR sensors produce a point cloud with ~107,000 points at 10 Hz

Localization

We use a city-specific coordinate system for vehicle localization. We include 6-DOF localization for each timestamp, from a combination of GPS-based and sensor-based localization methods.
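A 6-DOF pose can be applied as a rigid-body transform, p_city = R · p_ego + t, to move sensor returns from the ego-vehicle frame into the city coordinate system. A minimal pure-Python sketch follows; the pose (a 90° yaw plus a translation) is a made-up example, and the quaternion convention is only an assumption for illustration:

```python
import math

def quat_to_rot(w, x, y, z):
    """Convert a unit quaternion to a 3x3 rotation matrix (row-major lists)."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def ego_to_city(point, rotation, translation):
    """Map a point from the ego-vehicle frame into the city frame: R @ p + t."""
    return [
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    ]

# Hypothetical pose: 90-degree yaw about +z, plus a translation in city coordinates.
yaw = math.pi / 2
R = quat_to_rot(math.cos(yaw / 2), 0.0, 0.0, math.sin(yaw / 2))
t = [1000.0, 2000.0, 10.0]

p_city = ego_to_city([1.0, 0.0, 0.0], R, t)  # a point 1 m ahead of the ego vehicle
```

After the transform, the point 1 m ahead of the vehicle lands 1 m along the +y city axis from the vehicle's position, as expected for a 90° yaw.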

Cameras

  • Seven high-resolution ring cameras (1920 x 1200) recording at 30 Hz with a combined 360° field of view
  • Two front-facing stereo cameras (2056 x 2464) sampled at 5 Hz

Calibration

Sensor measurements for each driving session are stored in “logs.” For each log, we provide intrinsic and extrinsic calibration data for the LiDAR sensors and all nine cameras.
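Together, the extrinsics (a rigid transform from the LiDAR frame into a camera frame) and the intrinsics (a pinhole model) let you project a LiDAR return to pixel coordinates. A hedged sketch: the identity extrinsics and the focal lengths and principal point below are invented values, chosen only to roughly match the 1920 x 1200 ring-camera resolution, not real calibration data.

```python
def apply_extrinsic(p_lidar, rotation, translation):
    """Rigid transform of a point from the LiDAR frame into the camera frame."""
    return [
        sum(rotation[i][j] * p_lidar[j] for j in range(3)) + translation[i]
        for i in range(3)
    ]

def project_pinhole(p_cam, fx, fy, cx, cy):
    """Project a camera-frame point (z forward) to pixel coordinates."""
    X, Y, Z = p_cam
    if Z <= 0:
        return None  # point is behind the image plane
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Identity extrinsics and illustrative intrinsics (not real calibration values).
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
p_cam = apply_extrinsic([1.0, 0.5, 10.0], R, t)
uv = project_pinhole(p_cam, fx=1000.0, fy=1000.0, cx=960.0, cy=600.0)  # (1060.0, 650.0)
```

The guard on Z matters in practice: with a 360° ring of cameras, most LiDAR points fall behind any single camera and must be discarded before projection.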

Data Annotation

Argoverse contains amodal 3D bounding cuboids on all objects of interest on or near the drivable area. By “amodal” we mean that the 3D extent of each cuboid represents the spatial extent of the object in 3D space — and not simply the extent of observed pixels or observed LiDAR returns, which is smaller for occluded objects and ambiguous for objects seen from only one face.

Our amodal annotations are automatically generated by fitting cuboids to each object’s LiDAR returns observed throughout an entire tracking sequence. If the full spatial extent of an object is ambiguous in one frame, information from previous or later frames can be used to constrain the shape. The size of each amodal cuboid is fixed over time. A few objects in the dataset change size dynamically (e.g. a car opening a door), causing an imperfect amodal cuboid fit.

To create amodal cuboids, we identify the points that belong to each object at every timestep. This information, as well as the orientation of each object, comes from human annotators.
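The accumulation step above can be sketched as pooling an object's per-frame points, expressed in the object's own orientation-aligned frame, and taking the extreme extents along each axis. The helper and toy points below are illustrative only, not the actual annotation pipeline:

```python
def fit_amodal_extent(frames):
    """Fit one fixed amodal (length, width, height) to an object by pooling its
    points, given in the object's own frame, across every tracked frame."""
    pooled = [p for frame_points in frames for p in frame_points]
    extents = []
    for axis in range(3):
        values = [p[axis] for p in pooled]
        extents.append(max(values) - min(values))
    return tuple(extents)

# Two partial observations of the same (hypothetical) object: each frame alone
# under-estimates the true extent; pooling across frames recovers it.
frame_1 = [(0.0, 0.0, 0.0), (2.0, 1.0, 1.0)]    # rear of the object occluded
frame_2 = [(-1.0, 0.0, 0.0), (2.0, 1.0, 1.5)]   # front partially visible
fit_amodal_extent([frame_1, frame_2])  # -> (3.0, 1.0, 1.5)
```

Neither frame alone spans the full 3.0 m length; the pooled fit does, which is exactly why amodal cuboids are larger than any single frame's visible extent.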

We provide ground truth labels for 15 object classes. Two of these classes include static and dynamic objects that lie outside of the key categories we defined, and are called ON_ROAD_OBSTACLE and OTHER_MOVER. The distribution of these object classes across all of the annotated objects in Argoverse 3D Tracking looks like this:

img

Citation

Please use the following citation when referencing the dataset:

@INPROCEEDINGS { Argoverse,
  author = {Ming-Fang Chang and John W Lambert and Patsorn Sangkloy and Jagjeet Singh
       and Slawomir Bak and Andrew Hartnett and De Wang and Peter Carr
       and Simon Lucey and Deva Ramanan and James Hays},
  title = {Argoverse: 3D Tracking and Forecasting with Rich Maps},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2019}
}

License

Argo AI, LLC, and its affiliates (“Argo”) strive to promote future development in the field of self-driving cars. Using its autonomous vehicles, Argo collected, annotated and organized videos, maps and lidar data in a dataset (the “Argoverse”). By using or downloading the Argoverse, you are agreeing to comply with the terms of this page and any licensing terms referenced below.

License - Data and Documentation. Argoverse and any associated data or documentation are provided free of charge under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License (“CC BY-NC-SA 4.0”). The full text of the license is accessible at https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.

License - Code and Documentation. Argoverse code and APIs are licensed under the MIT license, the full text of which can be found at https://opensource.org/licenses/MIT.

Using the Argoverse Non-Commercially. We license Argoverse data and documents for non-commercial use only. Under the terms of CC BY-NC-SA 4.0, “non-commercial” means “not primarily intended for or directed towards commercial advantage or monetary compensation.” To assist you in determining whether your contemplated use is non-commercial, we make available the following examples:

  • Example 1, Acceptable Use:

    You are a researcher at an academic institution, working on computer vision and prediction. You use the Argoverse dataset to further your research and publish results based on the Argoverse benchmark. Your published paper includes imagery and figures derived from the Argoverse dataset. The imagery and figures are attributed to Argo and a hyperlink to the text of the CC-BY-NC-SA 4.0 license is provided.

  • Example 2, Acceptable Use:

    You are a researcher at an autonomous vehicle company. You are submitting your image-based pose estimation algorithm to a major conference for public review. You use the Argoverse dataset to train and evaluate your method. You publish the results of the comparison publicly. The Argoverse data in your paper is attributed in accordance with the CC-BY-NC-SA 4.0 license.

  • Example 3, Unacceptable Use:

    You are an engineer at an autonomous vehicle company. You use the Argoverse dataset to train a prototype detection model for use on your company’s AV at a test track. You use this prototype detection model as a placeholder until you build a large enough internal dataset to train your model against.

  • Example 4, Unacceptable Use:

    You are an engineer at a company that produces a robotics platform for sale. You train the prediction and detection machine learning models in your company’s product on data from the Argoverse dataset. Your company releases the product for sale with the models you trained against Argoverse data.

Suggestions: If you use Argoverse and identify areas that could benefit from changes, we would like to hear from you. Please submit any comments on the issues page of our GitHub repository. If you submit your changes, you may be required to confirm that your changes are subject to CC BY-NC-SA 4.0.

Attribution. Please follow the attribution guidelines provided in section 3.A. of the CC BY-NC-SA 4.0 license, with the copyright notice being “© 2018-2019 Argo AI, LLC”. Attribution for the APIs shall comply with the MIT license.

Privacy. Argo takes steps to protect the privacy of individuals in Argoverse imagery by blurring faces and license plates. If you notice that your face or license plate is still identifiable, or if you have any privacy concerns pertaining to Argoverse, please submit a request here.

Marketing. You may not use Argo’s trademark, logo, or name without Argo’s express written permission in connection with Argoverse.

Data Summary
Data format
fusion, point cloud, image
Data volume
--
File size
260.91 GB
Publisher
Argo AI
Argo AI is a self-driving technology platform company. We build the software, hardware, maps, and cloud-support infrastructure that power self-driving vehicles.