Obstacle Detection for Self-Driving Cars Using Only Monocular Cameras And Wheel Odometry

October 26, 2015 in ETHZ-CVG, Publications, year 4 by Ulrich Schwesinger

Christian Haene, Torsten Sattler and Marc Pollefeys

IROS, 2015

Mapping the environment is crucial to enable path planning and obstacle avoidance for self-driving vehicles and other robots. In this paper, we concentrate on ground-based vehicles and present an approach which extracts static obstacles from depth maps computed from multiple consecutive images. In contrast to existing approaches, our system does not require accurate visual inertial odometry estimation but solely relies on the readily available wheel odometry. To handle the resulting higher pose uncertainty, our system fuses obstacle detections over time and between cameras to estimate the free and occupied space around the vehicle. Using monocular fisheye cameras, we are able to cover a wider field of view and detect obstacles closer to the car, which are often not within the standard field of view of a classical binocular stereo camera setup. Our quantitative analysis shows that our system is accurate enough for navigation purposes of self-driving cars and runs in real time.
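The temporal fusion of obstacle detections described in the abstract can be illustrated with a standard log-odds occupancy-grid update. This is a minimal sketch, not the paper's actual implementation: the grid size, the increments `L_OCC`/`L_FREE`, and the `fuse_detection` helper are illustrative assumptions for a simple inverse sensor model.

```python
import numpy as np

# Minimal sketch: fuse per-frame obstacle detections into a 2-D log-odds
# occupancy grid. Cells > 0 are "likely occupied", cells < 0 "likely free".
# The increment values below are illustrative, not from the paper.
L_OCC = 0.85    # log-odds increment for a cell observed as obstacle
L_FREE = -0.4   # log-odds decrement for a cell observed as free space

def fuse_detection(grid, occupied_cells, free_cells):
    """Accumulate one frame's detections into the log-odds grid."""
    for (r, c) in occupied_cells:
        grid[r, c] += L_OCC
    for (r, c) in free_cells:
        grid[r, c] += L_FREE
    return grid

grid = np.zeros((5, 5))
# Two consecutive frames agree that cell (2, 3) holds an obstacle;
# cell (1, 1) is observed free once.
fuse_detection(grid, occupied_cells=[(2, 3)], free_cells=[(1, 1)])
fuse_detection(grid, occupied_cells=[(2, 3)], free_cells=[])
print(grid[2, 3] > 0, grid[1, 1] < 0)  # True True
```

Accumulating evidence this way is what makes the approach tolerant of the higher pose uncertainty of wheel odometry: a single noisy detection only nudges a cell, while consistent detections across frames and cameras drive it firmly toward occupied or free.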

@inproceedings{haene2015obstacle,
  title={Obstacle Detection for Self-Driving Cars Using Only Monocular Cameras and Wheel Odometry},
  author={H{\"a}ne, Christian and Sattler, Torsten and Pollefeys, Marc},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2015}
}

Vision-Only Fully Automated Driving in Dynamic Mixed-Traffic Scenarios

October 6, 2015 in ETHZ-ASL, Journals, Publications, year 4 by Ulrich Schwesinger

U. Schwesinger, P. Versari, A. Broggi and R. Siegwart

it – Information Technology, 2015

This work presents an overview of the motion planning and dynamic perception framework within the V-Charge project. This framework enables the V-Charge car to autonomously navigate in dynamic mixed-traffic scenarios. Other traffic participants are detected, classified and tracked from a combination of stereo and wide-angle monocular cameras. Predictions of their future movements are generated utilizing infrastructure information. Safe motion plans are computed with a system-compliant sampling-based local motion planner. We show the navigation performance of this vision-only autonomous vehicle in both simulation and real-world experiments.
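The sampling-based local planning mentioned in the abstract can be sketched as follows. This is a generic illustration, not the V-Charge planner: the unicycle rollout model, the sampled curvature set, and the distance-to-goal cost are all simplifying assumptions.

```python
import math

# Generic sketch of a sampling-based local planner: sample constant-curvature
# arcs, discard those that collide with obstacles, and keep the candidate
# whose endpoint lies closest to the goal. All parameters are illustrative.

def rollout(curvature, speed=1.0, dt=0.1, steps=20):
    """Forward-integrate a unicycle model along one candidate arc."""
    x = y = theta = 0.0
    path = []
    for _ in range(steps):
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        theta += curvature * speed * dt
        path.append((x, y))
    return path

def plan(goal, obstacles, radius=0.3):
    """Return the best collision-free curvature, or None if all collide."""
    best, best_cost = None, float("inf")
    for k in [i * 0.1 for i in range(-10, 11)]:  # sampled curvatures
        path = rollout(k)
        if any(math.dist(p, o) < radius for p in path for o in obstacles):
            continue  # candidate collides, discard it
        cost = math.dist(path[-1], goal)  # distance-to-goal cost
        if cost < best_cost:
            best, best_cost = k, cost
    return best

# Obstacle directly ahead forces the planner to pick a curving trajectory
# toward the goal at (2.0, 0.5).
k = plan(goal=(2.0, 0.5), obstacles=[(1.0, 0.0)])
```

A real system-compliant planner would score candidates with additional cost terms (comfort, rule compliance, predicted trajectories of other traffic participants), but the sample-evaluate-select loop has this basic shape.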

@article{schwesinger2015vision,
  author  = {Ulrich Schwesinger and Pietro Versari and Alberto Broggi and Roland Siegwart},
  title   = {Vision-only fully automated driving in dynamic mixed-traffic scenarios},
  journal = {it - Information Technology},
  volume  = {57},
  number  = {4},
  pages   = {231--242},
  year    = {2015},
  url     = {http://www.degruyter.com/view/j/itit.2015.57.issue-4/itit-2015-0005/itit-2015-0005.xml}
}