CVG releases an open source library for calibration of multi-camera systems

September 29, 2013 in ETHZ-CVG, News by admin

Our partners at the ETH Zurich Computer Vision and Geometry lab have released the calibration pipeline that we use in V-Charge. The code is open source and it should work for calibrating the intrinsics and extrinsics of any multi-camera system.
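For readers new to the terminology: a camera's intrinsics describe how it maps 3D points in its own frame to pixels, and its extrinsics describe the camera's pose relative to a reference frame. A minimal pure-Python sketch of this pinhole projection model (all parameter values are illustrative; this is not code from the released library, which additionally handles fisheye distortion and multiple cameras):

```python
def project(point_w, R, t, fx, fy, cx, cy):
    """Project a 3D world point through extrinsics (R, t), then pinhole
    intrinsics (focal lengths fx, fy and principal point cx, cy)."""
    # world frame -> camera frame
    X = [sum(R[i][j] * point_w[j] for j in range(3)) + t[i] for i in range(3)]
    x, y = X[0] / X[2], X[1] / X[2]   # normalized image coordinates
    return fx * x + cx, fy * y + cy   # pixel coordinates

# identity extrinsics and simple intrinsics, purely for illustration
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
u, v = project([0.5, -0.25, 2.0], R, t, fx=400, fy=400, cx=320, cy=240)
print(round(u, 1), round(v, 1))  # 420.0 190.0
```

Calibration is the inverse problem: given many observed pixel coordinates of known target points, estimate the intrinsics of each camera and the extrinsics relating the cameras to each other.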

View the overview webpage here:

Get the code on GitHub:

Evaluation of Fisheye-Camera Based Visual Multi-Session Localization in a Real-World Scenario

September 20, 2013 in ETHZ-ASL, Publications, VW, year 3 by admin

Peter Muehlfellner, Paul Furgale, Wojciech Derendarz, Roland Philippsen

IEEE Intelligent Vehicles Symposium (IV) 2013

The V-Charge Golf, showing its integrated sensors and the very subtle differences to a regular “consumer car”.

The European V-Charge project seeks to develop fully automated valet parking and charging of electric vehicles using only low-cost sensors. One of the challenges is to implement robust visual localization using only cameras and stock vehicle sensors. We integrated four monocular, wide-angle, fisheye cameras on a consumer car and implemented a mapping and localization pipeline. Visual features and odometry are combined to build and localize against a keyframe-based three-dimensional map. We report results for the first stage of the project, based on two months' worth of data acquired under varying conditions, with the objective of localizing against a map created offline.
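As a toy illustration of the keyframe-matching idea (not the V-Charge pipeline itself), one can compare binary feature descriptors from the current image against each keyframe in the map and pick the keyframe with the most matches; all names, descriptors, and thresholds below are hypothetical:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def localize(query_descs, keyframes, max_dist=2):
    """Return the id of the keyframe sharing the most descriptor matches
    with the query image (a match is a descriptor within max_dist bits)."""
    best_id, best_votes = None, -1
    for kf_id, kf_descs in keyframes.items():
        votes = sum(
            1 for q in query_descs
            if min(hamming(q, d) for d in kf_descs) <= max_dist
        )
        if votes > best_votes:
            best_id, best_votes = kf_id, votes
    return best_id

# tiny illustrative map: two keyframes with two 8-bit descriptors each
keyframes = {
    "kf0": [0b10101010, 0b11110000],
    "kf1": [0b00001111, 0b01010101],
}
best = localize([0b00001111, 0b01010111], keyframes)
print(best)  # kf1
```

A real pipeline would then refine the pose against the matched keyframe's 3D landmarks; this sketch only shows the place-recognition step.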



@inproceedings{muehlfellner2013evaluation,
  Address = {Gold Coast, Australia},
  Author = {Peter Muehlfellner and Paul Furgale and Wojciech Derendarz and Roland Philippsen},
  Booktitle = {IEEE Intelligent Vehicles Symposium (IV)},
  Month = jun,
  Pages = {57--62},
  Title = {{Evaluation of Fisheye-Camera Based Visual Multi-Session Localization in a Real-World Scenario}},
  Year = {2013}
}

Article full text

Reciprocal Collision Avoidance With Motion Continuity Constraints

September 20, 2013 in ETHZ-ASL, Journals, Publications, year 3 by admin

Martin Rufli, Javier Alonso-Mora, Roland Siegwart

IEEE Transactions on Robotics (T-RO)

This paper addresses decentralized motion planning among a homogeneous set of feedback-controlled, decision-making agents. It introduces the continuous control obstacle (C^n-CO), which describes the set of C^n-continuous control sequences (and thus trajectories) that lead to a collision between interacting agents. By selecting a feasible trajectory from C^n-CO’s complement, a collision-free motion is obtained. The approach represents an extension to the reciprocal velocity obstacle (RVO, ORCA) collision-avoidance methods so that trajectory segments verify C^n continuity rather than piecewise linearity. This allows the large class of robots capable of tracking C^n-continuous trajectories to employ it for partial motion planning directly—rather than as a mere tool for collision checking. This paper further establishes that both the original velocity obstacle method and several of its recently developed reciprocal extensions (which treat specific robot physiologies only) correspond to particular instances of C^n-CO. In addition to the described extension in trajectory continuity, C^n-CO thus represents a unification of existing RVO theory. Finally, the presented method is validated in simulation—and a parameter study reveals under which environmental and control conditions C^n-CO with n>0 admits significantly improved navigation performance compared with inflated approaches based on ORCA.
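The construction underlying this family of methods is the velocity obstacle: the set of relative velocities that lead two agents into collision within a time horizon. A minimal membership test for the basic (piecewise-linear) case can be sketched as follows; the function name and parameter values are illustrative, and C^n-CO generalizes this to higher-order trajectory parameterizations:

```python
def in_velocity_obstacle(p_rel, v_rel, combined_radius, horizon):
    """True if relative velocity v_rel drives two disc agents, separated by
    p_rel, into collision within `horizon` seconds (basic VO membership)."""
    px, py = p_rel
    vx, vy = v_rel
    # minimize |p_rel + t * v_rel| over t in [0, horizon]
    vv = vx * vx + vy * vy
    if vv == 0.0:
        t_star = 0.0
    else:
        t_star = max(0.0, min(horizon, -(px * vx + py * vy) / vv))
    dx, dy = px + t_star * vx, py + t_star * vy
    return dx * dx + dy * dy < combined_radius ** 2

# two agents 4 m apart whose radii sum to 1 m
hit = in_velocity_obstacle((4.0, 0.0), (-1.0, 0.0), 1.0, 10.0)   # head-on
miss = in_velocity_obstacle((4.0, 0.0), (0.0, 1.0), 1.0, 10.0)   # tangential
print(hit, miss)  # True False
```

A collision-free velocity is then chosen from the complement of this set; the reciprocal variants split the avoidance effort between the two agents.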

@article{rufli2013reciprocal,
  author = {M. Rufli and J. Alonso-Mora and R. Siegwart},
  title = {Reciprocal Collision Avoidance With Motion Continuity Constraints},
  journal = {IEEE Transactions on Robotics},
  volume = {29},
  number = {4},
  year = {2013},
  pages = {1--14}
}

Keyframe-Based Visual-Inertial SLAM using Nonlinear Optimization

September 20, 2013 in ETHZ-ASL, Publications, year 3 by admin

Stefan Leutenegger, Paul Furgale, Vincent Rabaud, Margarita Chli, Kurt Konolige and Roland Siegwart

Robotics: Science and Systems 2013

The fusion of visual and inertial cues has become popular in robotics due to the complementary nature of the two sensing modalities. While most fusion strategies to date rely on filtering schemes, the visual robotics community has recently turned to non-linear optimization approaches for tasks such as visual Simultaneous Localization And Mapping (SLAM), following the discovery that this comes with significant advantages in quality of performance and computational complexity. Following this trend, we present a novel approach to tightly integrate visual measurements with readings from an Inertial Measurement Unit (IMU) in SLAM. An IMU error term is integrated with the landmark reprojection error in a fully probabilistic manner, resulting in a joint non-linear cost function to be optimized. Employing the powerful concept of 'keyframes' we partially marginalize old states to maintain a bounded-size optimization window, ensuring real-time operation. Comparing against both vision-only and loosely-coupled visual-inertial algorithms, our experiments confirm the benefits of tight fusion in terms of accuracy and robustness.
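The core idea of a joint probabilistic cost can be illustrated on a deliberately simplified scalar example (this is not the paper's formulation, which optimizes full 6-DoF poses, landmarks, and IMU biases): with one visual measurement and one IMU-propagated prediction of the same quantity, each weighted by its variance, the minimizer of the joint quadratic cost is the information-weighted mean.

```python
def fuse(z_vis, var_vis, z_imu, var_imu):
    """Minimize J(x) = (z_vis - x)^2 / var_vis + (z_imu - x)^2 / var_imu.
    The closed-form minimizer is the information-weighted mean."""
    w_vis, w_imu = 1.0 / var_vis, 1.0 / var_imu
    return (w_vis * z_vis + w_imu * z_imu) / (w_vis + w_imu)

# hypothetical numbers: a precise visual cue and a noisier IMU prediction
x = fuse(z_vis=2.0, var_vis=0.04, z_imu=2.2, var_imu=0.16)
print(round(x, 2))  # 2.04
```

The estimate is pulled toward the more certain (lower-variance) term, which is exactly the behavior a tightly-coupled joint cost buys over treating either modality alone.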

@inproceedings{leutenegger2013keyframe,
  author = {Stefan Leutenegger and Paul Furgale and Vincent Rabaud and Margarita Chli and Kurt Konolige and Roland Siegwart},
  title = {Keyframe-Based Visual-Inertial SLAM using Nonlinear Optimization},
  booktitle = {Proceedings of Robotics: Science and Systems},
  address = {Berlin, Germany},
  month = {24--28 June},
  year = {2013}
}

Article full text