MSRA10K
2D Polygon
Common
License: Unknown

Overview

The MSRA Salient Object Database, which originally provided salient object annotations as bounding boxes drawn by 3-9 users, is widely used in the salient object detection and segmentation community.

Data Collection

The MSRA10K benchmark dataset (a.k.a. THUS10000) comprises per-pixel ground-truth annotations for 10,000 MSRA images (181 MB). Each image contains an unambiguous salient object, and the object region is accurately annotated with a pixel-wise ground-truth labeling (13.1 MB). We provide saliency maps (5.3 GB, containing 170,000 images) for our methods as well as for 15 other state-of-the-art methods, including FT [1], AIM [2], MSS [3], SEG [4], SeR [5], SUN [6], SWD [7], IM [8], IT [9], GB [10], SR [11], CA [12], LC [13], AC [14], and CB [15]. Saliency segmentation results (71.3 MB) for FT [1], SEG [4], and CB [15] are also available.
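The per-pixel ground-truth masks and saliency maps above are typically compared with the precision/recall/F-measure protocol used throughout the salient object detection literature. The paper's own evaluation code is not reproduced here; the sketch below is a minimal, generic version of that metric, assuming binarization at a fixed threshold and the conventional beta^2 = 0.3 weighting:

```python
import numpy as np

def f_measure(saliency, gt, beta2=0.3, threshold=0.5):
    """F-measure between a saliency map and a binary per-pixel
    ground-truth mask. beta2 = 0.3 is the weighting conventionally
    used in the salient-object-detection literature."""
    pred = saliency >= threshold          # binarize the saliency map
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # correctly predicted salient pixels
    precision = tp / max(pred.sum(), 1)   # guard against empty predictions
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

# Toy example: a 4x4 ground-truth mask with a 2x2 salient region.
gt = np.zeros((4, 4))
gt[1:3, 1:3] = 1
sal = gt.copy()                           # a prediction that matches exactly
print(f_measure(sal, gt))                 # 1.0 for a perfect prediction
```

In practice the threshold is either swept over [0, 255] to trace a precision-recall curve, or set adaptively (e.g. twice the mean saliency value per image); the fixed threshold here is only for illustration.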

Citation

@article{ChengPAMI,
  author  = {Ming-Ming Cheng and Niloy J. Mitra and Xiaolei Huang and Philip H. S. Torr and Shi-Min Hu},
  title   = {Global Contrast based Salient Region Detection},
  journal = {IEEE TPAMI},
  year    = {2015},
  volume  = {37},
  number  = {3},
  pages   = {569--582},
  doi     = {10.1109/TPAMI.2014.2345401},
}


@conference{13iccv/Cheng_Saliency,
  author    = {Ming-Ming Cheng and Jonathan Warrell and Wen-Yan Lin and Shuai Zheng and Vibhav Vineet and Nigel Crook},
  title     = {Efficient Salient Region Detection with Soft Image Abstraction},
  booktitle = {IEEE ICCV},
  year      = {2013},
  pages     = {1529--1536},
}


@article{SalObjSurvey,
  author        = {Ali Borji and Ming-Ming Cheng and Huaizu Jiang and Jia Li},
  title         = {Salient Object Detection: A Survey},
  journal       = {ArXiv e-prints},
  archivePrefix = {arXiv},
  eprint        = {1411.5878},
  year          = {2014},
}


@article{SalObjBenchmark,
  author  = {Ali Borji and Ming-Ming Cheng and Huaizu Jiang and Jia Li},
  title   = {Salient Object Detection: A Benchmark},
  journal = {IEEE TIP},
  year    = {2015},
  volume  = {24},
  number  = {12},
  pages   = {5706--5722},
  doi     = {10.1109/TIP.2015.2487833},
}
Data Summary
Data Format
Image
Data Volume
--
File Size
108.85 MB
Publisher
College of Computer Science, Nankai University
Ming-Ming Cheng is a professor with the College of Computer Science, Nankai University, where he leads the Media Computing Lab. He received his Ph.D. degree from Tsinghua University in 2012 and then worked with Prof. Philip Torr at Oxford for three years.
Dataset Feedback