Daimler Mono Pedestrian Classification Benchmark
2D Box
Person | Urban
License: Unknown

Overview

The Daimler Mono Pedestrian Classification Benchmark dataset consists of two parts:

A base data set. The base data set contains a total of 4,000 pedestrian and 5,000 non-pedestrian samples, cut out from video images and scaled to a common size of 18×36 pixels. This data set was used in Section VII-A of the paper referenced below. Pedestrian images were obtained by manually labeling and extracting the rectangular positions of pedestrians in video images. The video images were recorded at various (day) times and locations, with no particular constraints on pedestrian pose or clothing, except that pedestrians are standing upright and are fully visible. The non-pedestrian images are patterns representative of typical preprocessing steps within a pedestrian classification application, extracted from video images known not to contain any pedestrians. A shape-based pedestrian detector was used for this, which matches a given set of pedestrian shape templates to distance-transformed edge images (with a comparatively relaxed matching threshold).

Additional non-pedestrian images. An additional collection of 1,200 video images NOT containing any pedestrians, intended for the extraction of additional negative training examples. Section V of the paper referenced below describes two methods for increasing the training sample size from these images, and Section VII-B lists experimental results.

See also: S. Munder and D. M. Gavrila. An Experimental Study on Pedestrian Classification. IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 28, no. 11, pp. 1863-1868, Nov. 2006.
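As a concrete illustration, each 18×36 grayscale crop can be read into a flat feature vector for a classifier. The sketch below assumes the samples are stored as binary (P5) PGM files; the minimal parser and the synthetic sample file are illustrative assumptions, not part of any official toolkit for this dataset.

```python
import os
import tempfile


def load_pgm(path):
    """Minimal binary (P5) PGM reader: returns (width, height, pixel bytes).

    Illustrative only: no '#' comment handling, maxval assumed <= 255.
    """
    with open(path, "rb") as f:
        data = f.read()
    tokens, i = [], 0
    while len(tokens) < 4:  # magic, width, height, maxval
        while data[i:i + 1].isspace():
            i += 1
        j = i
        while not data[j:j + 1].isspace():
            j += 1
        tokens.append(data[i:j])
        i = j
    i += 1  # single whitespace byte separates the header from the raster
    if tokens[0] != b"P5":
        raise ValueError("not a binary (P5) PGM file")
    w, h = int(tokens[1]), int(tokens[2])
    return w, h, data[i:i + w * h]


# Demo on a synthetic 18x36 crop (the benchmark's sample size).
w, h = 18, 36
tmp = tempfile.NamedTemporaryFile(suffix=".pgm", delete=False)
tmp.write(b"P5\n18 36\n255\n" + bytes(x % 256 for x in range(w * h)))
tmp.close()

width, height, pixels = load_pgm(tmp.name)
feature_vector = list(pixels)  # one grayscale intensity per pixel
os.unlink(tmp.name)
print(width, height, len(feature_vector))  # 18 36 648
```

Flattening raw intensities like this matches the simplest baseline setup; the Munder and Gavrila study compares several richer feature representations and classifiers on top of these same crops.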

Data Summary

Data Format: image
Data Volume: --
File Size: --
