Lane Marker
2D Polyline
Autonomous Driving
License: Custom

Overview

Automatically annotated lane markers using Lidar maps.

  • Over 100,000 annotated images
  • Annotations of over 100 meters
  • Resolution of 1276 x 717 pixels

A SEGMENTATION CHALLENGE

Lane markers are tricky to annotate because of their median width of only 12 cm. At farther distances, the number of pixels gets very sparse and the markers start to blend with the asphalt in the camera image.
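To get a feel for the numbers, here is a quick pinhole-model estimate of how wide a 12 cm marker appears on screen; the focal length below is an assumed value for illustration, not a published parameter of the recording setup:

    # Approximate on-screen width of a 12 cm marker under a pinhole model.
    # focal_length_px is a hypothetical value, not the actual calibration.
    focal_length_px = 1400.0
    marker_width_m = 0.12
    for distance_m in (10.0, 50.0, 100.0):
        width_px = focal_length_px * marker_width_m / distance_m
        print(f"{distance_m:5.0f} m -> {width_px:.2f} px")
    # -> roughly 16.8 px at 10 m, 3.4 px at 50 m, 1.7 px at 100 m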

LANE APPROXIMATIONS

While pixel-level segmentation can be very useful for localization, some automated driving systems benefit from higher level representations such as splines, clothoids, or polynomials. This section of the dataset allows for evaluating existing and novel techniques.
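As a minimal sketch of what such a higher-level fit might look like, the snippet below fits a cubic polynomial to a handful of made-up marker pixel coordinates with NumPy; the points and the degree are illustrative only:

    import numpy as np

    # Hypothetical pixel coordinates of one marker (row y, column x).
    ys = np.array([700.0, 650.0, 600.0, 550.0, 500.0, 450.0])
    xs = np.array([640.0, 652.0, 668.0, 688.0, 712.0, 740.0])

    # Fit x = f(y): the row is the natural independent variable for
    # roughly vertical lane markers.
    poly = np.poly1d(np.polyfit(ys, xs, deg=3))
    print(poly(575.0))  # evaluate the approximation at an arbitrary row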

Data Annotation

The unsupervised LLAMAS (Labeled LAne Markers) dataset was generated automatically using map projections and reprojection optimizations. In the following, we are going to visualize the complete process of generating the lane marker labels step by step.

Highway camera image

As a first step, we need an automated vehicle with extrinsically and intrinsically calibrated camera sensor(s). Recording camera images is usually fairly straightforward.

Map visualization

We also need a high definition map of lane markers that we can localize against. Such a map can be generated with a few drives over the same area. One important principle is that we only need to detect lane markers very close to our vehicle, since we closely pass all of them over time. At short distances, lane markers are mostly very easy to detect; we did this with a lidar sensor, but that is not a necessity.
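As a toy illustration of the lidar side of this (the actual mapping pipeline is more involved, and every value here is invented):

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical lidar scan: columns are x, y, z, intensity.
    scan = rng.random((1000, 4))
    scan[:, :3] *= 50.0  # spread the fake points over ~50 m

    # Lane paint is retroreflective, so high-intensity returns close to
    # the vehicle are good marker candidates; both thresholds are made up.
    near = np.linalg.norm(scan[:, :2], axis=1) < 15.0
    bright = scan[:, 3] > 0.8
    marker_points = scan[near & bright]
    print(len(marker_points), "candidate marker points")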

Image with initial lane marker projection

After localizing against the map, we can project the mapped markers into the image for arbitrary distances. Unfortunately, this projection is not going to be completely accurate because of inaccurate localization, calibration offsets, a moving suspension, and more. Even small rotational offsets grow linearly with distance.
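A bare-bones version of such a projection could look as follows; the intrinsic matrix and the pose are placeholders, not the actual calibration:

    import numpy as np

    def project_point(p_world, T_cam_world, K):
        # World point -> camera frame -> pixel coordinates (pinhole model).
        p_cam = (T_cam_world @ np.append(p_world, 1.0))[:3]
        if p_cam[2] <= 0.0:
            return None  # point is behind the camera
        uvw = K @ p_cam
        return uvw[:2] / uvw[2]

    # Placeholder calibration and pose for illustration only.
    K = np.array([[1400.0,    0.0, 638.0],
                  [   0.0, 1400.0, 358.5],
                  [   0.0,    0.0,   1.0]])
    T_cam_world = np.eye(4)
    print(project_point(np.array([1.5, 0.0, 30.0]), T_cam_world, K))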

Image after top hat filter

Without labels, we can already (poorly) detect lane markers using very straightforward methods such as edge detectors or a top hat filter with a 9x9 kernel, as displayed above.
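With OpenCV (one common choice; the text does not name a library), such a top hat filter might be applied like this; the file name is hypothetical:

    import cv2

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path

    # White top hat: the image minus its morphological opening, which
    # emphasizes small bright structures such as lane markers.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)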

Camera image after top hat filter and thresholding

By thresholding the filter output and removing the upper parts of the image, we can already get a very simple lane marker estimate with only three lines of code.
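Continuing from the top hat output above, those three lines might look roughly like this; the threshold and the crop row are assumed values:

    import numpy as np

    binary = tophat > 30                  # 1. threshold the filter response
    binary[:300, :] = False               # 2. drop the upper image region
    marker_pixels = np.argwhere(binary)   # 3. collect candidate marker pixels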

Detections and initial lane marker projection

In this view, especially on the right, we can already see an offset between the detected (white) and projected (red) markers.

Detections with corrected lane marker projection

We can now optimize the marker projection from the map into the image to better fit the detected markers. The corrected markers are displayed in green and can already be used as labels.
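One way to think about this correction is as a small offset optimization between projected and detected marker positions. The sketch below, with synthetic 2D points and scipy, only illustrates the idea; it is not the paper's actual optimization:

    import numpy as np
    from scipy.optimize import minimize

    # Synthetic pixel positions of projected map markers and detections.
    projected = np.array([[640.0, 500.0], [660.0, 450.0], [685.0, 400.0]])
    detected  = np.array([[646.0, 498.0], [668.0, 449.0], [694.0, 398.0]])

    def cost(offset):
        # Constant pixel shift as a stand-in for a small pose correction.
        return np.sum((projected + offset - detected) ** 2)

    result = minimize(cost, x0=np.zeros(2))
    print("estimated offset:", result.x)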

Sample lane marker segmentation inference

After annotating a few thousand images like that, we can train an actual detector on those initially generated labels. The output of such a trained model is displayed above.
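A heavily simplified training step, purely to make the idea concrete; the real model architecture and training setup used for LLAMAS are not specified here, and this tiny network is a stand-in:

    import torch
    import torch.nn as nn

    # Tiny stand-in for a segmentation network.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),
    )
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on a dummy image/label pair (batch, C, H, W).
    image = torch.rand(1, 3, 128, 256)
    label = torch.randint(0, 2, (1, 1, 128, 256)).float()
    optimizer.zero_grad()
    loss = loss_fn(model(image), label)
    loss.backward()
    optimizer.step()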

Map projection and lane marker detection

These better detections (white) allow for a more accurate projection (green) of markers from the map into the image. In red, we show the initial projection based on calibration and localization only.

Camera image of freeway driving with annotated lane markers

This image shows the corrected projection with map information such as individual lane markers and lane associations in the original image. There are still offsets because of occluded markers, inaccurate detections, and flaws in the auto-generated map (for example, see the red marker on the left), but labeling up to the accuracy provided by this approach is already extremely time-consuming and tricky.

Citation

Please use the following citation when referencing the dataset:

@inproceedings{llamas2019,
  title={Unsupervised Labeled Lane Marker Dataset Generation Using Maps},
  author={Behrendt, Karsten and Soussan, Ryan},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2019}
}

License

Custom

Dataset Summary

  • Data Format: Image
  • Data Volume: --
  • File Size: 141.78 GB
  • Publisher: BOSCH

Bosch is a German multinational engineering and technology company headquartered in Gerlingen, near Stuttgart, Germany. The company was founded by Robert Bosch in Stuttgart in 1886.
