OpenLORIS-2020
Classification
Robot
License: BSD-4-Clause

Overview

We provide a new lifelong robotic vision dataset (“OpenLORIS-Object”) collected via RGB-D cameras mounted on mobile robots. The dataset embeds the challenges a robot faces in real-life applications and provides new benchmarks for validating lifelong object recognition algorithms.

Data Collection

Several ground robots equipped with depth cameras and other sensors were used for data collection. The robots move through offices, homes, and malls, where the scenes are diverse and constantly changing. The OpenLORIS-Object dataset provides RGB-D videos of the target objects.

The robots actively record videos of the target objects under multiple illumination conditions, occlusions, camera-object distances/angles, and context information (clutter). The dataset thus includes the common challenges a robot typically faces (a data-loading sketch follows this list). For example:

  • Illumination. In real-world applications, illumination can vary significantly over time, e.g., between day and night. We repeat the data collection under weak, normal, and strong lighting conditions. The task becomes especially challenging when the light is very weak.
  • Occlusion. Occlusion happens when a part of an object is hidden by other objects, or only a portion of the object is visible in the field of view. Since distinctive characteristics of the object might be hidden, occlusion significantly increases the difficulty for recognition.
  • Object size. Small or elongated objects, such as dry batteries or glue sticks, make the task challenging.
  • Camera-object angles/distances. The camera's angle and distance affect which attributes of the object can be detected.
  • Clutter. Clutter refers to the presence of other objects in the vicinity of the considered object. The simultaneous presence of multiple objects may interfere with the classification task.
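
As a concrete illustration of how recordings under these factors might be consumed, here is a minimal Python sketch that iterates RGB-D pairs grouped by challenge factor and difficulty level. The directory layout, file naming, and the iter_samples helper are hypothetical illustrations for this sketch, not the dataset's actual on-disk structure.

# Minimal sketch, assuming a hypothetical on-disk layout of
#   <root>/<factor>/level<N>/<frame>_rgb.png and <frame>_depth.png.
# The layout and naming are illustrative only.
from pathlib import Path

FACTORS = ["illumination", "occlusion", "pixel_size", "clutter"]
LEVELS = [1, 2, 3]  # difficulty levels, easiest to hardest

def iter_samples(root):
    """Yield (factor, level, rgb_path, depth_path) for every RGB-D pair."""
    for factor in FACTORS:
        for level in LEVELS:
            level_dir = Path(root) / factor / f"level{level}"
            for rgb in sorted(level_dir.glob("*_rgb.png")):
                depth = rgb.with_name(rgb.name.replace("_rgb", "_depth"))
                if depth.exists():
                    yield factor, level, rgb, depth

for factor, level, rgb, depth in iter_samples("OpenLORIS-Object"):
    ...  # e.g., feed each (rgb, depth) pair to a lifelong-learning model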

Data Format

Levels 1, 2, and 3 are ranked by increasing difficulty. For each instance at each level, we provide 260 to 600 samples, each consisting of both an RGB and a depth image. The total is therefore approximately 2 (RGB and depth) × 381 (mean samples per instance) × 121 (instances) × 4 (factors per level) × 3 (difficulty levels) = 1,106,424 images. We also provide bounding boxes and masks for each RGB image. An example of two RGB-D frames of simple and complex clutter with 2D bounding box and mask annotations is shown in Fig. 2. Images under the illumination, occlusion, and clutter factors are 424×240 pixels; images under the object pixel size factor are 424×240, 320×180, or 1280×720 pixels.
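
The quoted total can be checked directly from the counts stated above; a short Python computation:

# Reproduce the total image count from the stated factors.
modalities = 2      # RGB and depth
mean_samples = 381  # mean samples per instance (range: 260 to 600)
instances = 121
factors = 4         # illumination, occlusion, object pixel size, clutter
levels = 3          # difficulty levels 1 to 3

total = modalities * mean_samples * instances * factors * levels
print(total)  # 1106424, i.e., the 1,106,424 images quoted above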

Citation

The data provided here is the first version for evaluating lifelong object recognition algorithms. Please cite the papers below in any academic work that uses this dataset.

Qi She et al., "OpenLORIS-Object: A Dataset and Benchmark towards Lifelong Object Recognition," arXiv:1911.06487, 2019.

Qi She et al., "IROS 2019 Lifelong Robotic Vision: Object Recognition Challenge [Competitions]," IEEE Robotics & Automation Magazine, vol. 27, no. 2, pp. 11-16, June 2020, doi: 10.1109/MRA.2020.2987186.


@inproceedings{she2019openlorisobject,
    title={ {OpenLORIS-Object}: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning},
    author={Qi She and Fan Feng and Xinyue Hao and Qihan Yang and Chuanlin Lan and Vincenzo Lomonaco and Xuesong Shi and Zhengwei Wang and Yao Guo and Yimin Zhang and Fei Qiao and Rosa H. M. Chan},
    booktitle={2020 International Conference on Robotics and Automation (ICRA)},
    year={2020},
    pages={4767-4773},
}


@article{9113359,
    title={IROS 2019 Lifelong Robotic Vision: Object Recognition Challenge [Competitions]},
    author={H. {Bae} and E. {Brophy} and R. H. M. {Chan} and B. {Chen} and F. {Feng} and G. {Graffieti} and V. {Goel} and X. {Hao} and H. {Han} and S. {Kanagarajah} and S. {Kumar} and S. {Lam} and T. L. {Lam} and C. {Lan} and Q. {Liu} and V. {Lomonaco} and L. {Ma} and D. {Maltoni} and G. I. {Parisi} and L. {Pellegrini} and D. {Piyasena} and S. {Pu} and Q. {She} and D. {Sheet} and S. {Song} and Y. {Son} and Z. {Wang} and T. E. {Ward} and J. {Wu} and M. {Wu} and D. {Xie} and Y. {Xu} and L. {Yang} and Q. {Yang} and Q. {Zhong} and L. {Zhou}},
    journal={IEEE Robotics \& Automation Magazine},
    year={2020},
    volume={27},
    number={2},
    pages={11-16},
    doi={10.1109/MRA.2020.2987186},
}

License

BSD-4-Clause

Data Summary

Data Format: Image
Data Volume: 1,106.424K
File Size: 125.05GB
Publisher: Qi She et al.