Summary maps for lifelong visual localization

November 13, 2015 in ETHZ-ASL, Journals, Publications, VW, year 4 by Ulrich Schwesinger

Peter Muehlfellner, Mathias Buerki, Michael Bosse, Wojciech Derendarz, Roland Philippsen and Paul Furgale

Journal of Field Robotics

Robots that use vision for localization need to handle environments which are subject to
seasonal and structural change, and operate under changing lighting and weather conditions.
We present a framework for lifelong localization and mapping designed to provide
robust and metrically accurate online localization in these kinds of changing environments.
Our system iterates between offline map building, map summarization, and online localization.
The offline mapping fuses data from multiple visually varied datasets, thus dealing with
changing environments by incorporating new information. Before passing this data to the
online localization system, the map is summarized, selecting only the landmarks that are
deemed useful for localization. This Summary Map enables online localization that is accurate
and robust to the variation of visual information in natural environments while still
being computationally efficient.

We present a number of summary policies for selecting useful features for localization from
the multi-session map and explore the tradeoff between localization performance and computational
complexity. The system is evaluated on 77 recordings, with a total length of 30
kilometers, collected outdoors over sixteen months. These datasets cover all seasons, various
times of day, and changing weather such as sunshine, rain, fog, and snow. We show that it
is possible to build consistent maps that span data collected over an entire year, and cover
day-to-night transitions. Simple statistics computed on landmark observations are enough
to produce a Summary Map that enables robust and accurate localization over a wide range
of seasonal, lighting, and weather conditions.
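
To make the idea of a statistics-based summary policy concrete, here is a minimal, hypothetical C++ sketch that ranks landmarks by the number of sessions they were observed in, breaking ties by total observation count, and keeps the best k. The Landmark fields and this particular policy are illustrative assumptions, not the exact policies evaluated in the paper.

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical landmark record, holding only the kind of simple
// observation statistics the abstract mentions (assumed fields).
struct Landmark {
    int id;
    int numObservations;   // total times the landmark was matched
    int numSessions;       // distinct recording sessions it was seen in
};

// One possible summary policy: prefer landmarks seen across many
// sessions, break ties by total observation count, keep the best k.
std::vector<Landmark> summarize(std::vector<Landmark> map, std::size_t k) {
    std::sort(map.begin(), map.end(),
              [](const Landmark& a, const Landmark& b) {
        if (a.numSessions != b.numSessions)
            return a.numSessions > b.numSessions;
        return a.numObservations > b.numObservations;
    });
    if (map.size() > k) map.resize(k);   // the Summary Map keeps only k landmarks
    return map;
}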

@article{muhlfellner2015summary,
title={Summary maps for lifelong visual localization},
author={M{\"u}hlfellner, Peter and B{\"u}rki, Mathias and Bosse, Mike and Derendarz, Wojciech and Philippsen, Roland and Furgale, Paul},
journal={Journal of Field Robotics},
year={2015},
publisher={John Wiley \& Sons}
}

Integrating Metric and Semantic Maps for Vision-Only Automated Parking

November 13, 2015 in ETHZ-ASL, Oxford-MRG, Publications, year 4 by Ulrich Schwesinger

H. Grimmett, M. Buerki, L. Paz, P. Piniés, P. Furgale, I. Posner, and P. Newman

IEEE International Conference on Robotics and Automation (ICRA), 2015

We present a framework for integrating two map layers that are often required for fully automated operation: metric and semantic. Metric maps are likely to improve with subsequent visits to the same place, while semantic maps can comprise both permanent and fluctuating features of the environment. However, it is not clear from the state of the art how to update the semantic layer as the metric map evolves.
The strengths of our method are threefold: the framework allows for the unsupervised evolution of both maps as the environment is revisited by the robot; it uses vision-only sensors, making it appropriate for production cars; and the human labelling effort is minimised as far as possible while maintaining high fidelity. We evaluate this on two different car parks with a fully automated car, performing repeated automated parking manoeuvres to demonstrate the robustness of the system.
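
One way to picture the unsupervised co-evolution of the two layers is to anchor semantic annotations to metric-map entities rather than to fixed global coordinates, so that re-optimizing the metric map automatically carries the semantics along. The C++ sketch below illustrates this idea; all types and names are assumptions, not the paper's implementation.

#include <map>
#include <string>
#include <vector>

// Assumed types, for illustration only.
struct Point3d { double x, y, z; };

// A semantic feature (e.g., a parking-space outline) references
// landmark ids in the metric map, not global coordinates, so a
// metric re-optimization moves the semantics along with the map.
struct SemanticFeature {
    std::string label;            // e.g., "parking_space"
    std::vector<int> anchorIds;   // landmark ids in the metric map
};

// Resolve a semantic feature to current coordinates after a map update.
std::vector<Point3d> resolve(const SemanticFeature& f,
                             const std::map<int, Point3d>& landmarks) {
    std::vector<Point3d> pts;
    for (int id : f.anchorIds) {
        auto it = landmarks.find(id);
        if (it != landmarks.end()) pts.push_back(it->second);
    }
    return pts;
}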

@inproceedings{GrimmettICRA2015,
Address = {Seattle, WA, USA},
Author = {Grimmett, Hugo and Buerki, Mathias and Paz, Lina and Pini{\'e}s, Pedro and Furgale, Paul and Posner, Ingmar and Newman, Paul},
Booktitle = {{P}roceedings of the {IEEE} {I}nternational {C}onference on {R}obotics and {A}utomation ({ICRA})},
Month = {May},
Pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/2015ICRA_Grimmett.pdf},
Title = {{I}ntegrating {M}etric and {S}emantic {M}aps for {V}ision-{O}nly {A}utomated {P}arking},
Year = {2015}}

Vision-Only Fully Automated Driving in Dynamic Mixed-Traffic Scenarios

October 6, 2015 in ETHZ-ASL, Journals, Publications, year 4 by Ulrich Schwesinger

U. Schwesinger, P. Versari, A. Broggi and R. Siegwart

it – Information Technology, 2015

This work presents an overview of the motion planning and dynamic perception framework within the V-Charge project. This framework enables the V-Charge car to autonomously navigate in dynamic mixed-traffic scenarios. Other traffic participants are detected, classified, and tracked from a combination of stereo and wide-angle monocular cameras. Predictions of their future movements are generated utilizing infrastructure information. Safe motion plans are generated with a system-compliant sampling-based local motion planner. We show the navigation performance of this vision-only autonomous vehicle in both simulation and real-world experiments.
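
As background for the "sampling-based local motion planner", the generic pattern is to sample candidate trajectories, reject those that conflict with the predicted motion of other traffic participants, and select the lowest-cost survivor. The C++ sketch below shows that loop under assumed names, with stubbed cost and collision hooks; it is not the V-Charge planner itself.

#include <cstddef>
#include <limits>
#include <vector>

struct State { double x, y, heading; };
struct Trajectory { std::vector<State> states; double cost = 0.0; };

// Stub hooks, assumed for illustration: a real planner would query a
// vehicle model, predicted obstacle motion, and a reference path here.
bool collidesWithPredictions(const Trajectory&) { return false; }
double referenceDeviation(const Trajectory& t) {
    return static_cast<double>(t.states.size());
}

// Generic sample-evaluate-select loop of a sampling-based local planner.
int selectBest(std::vector<Trajectory>& candidates) {
    int best = -1;
    double bestCost = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        if (collidesWithPredictions(candidates[i])) continue;  // reject unsafe samples
        candidates[i].cost = referenceDeviation(candidates[i]);
        if (candidates[i].cost < bestCost) {
            bestCost = candidates[i].cost;
            best = static_cast<int>(i);
        }
    }
    return best;   // -1 if no collision-free candidate exists
}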

@article{SchwesingerVBS15,
author = {Ulrich Schwesinger and
Pietro Versari and
Alberto Broggi and
Roland Siegwart},
title = {Vision-only fully automated driving in dynamic mixed-traffic scenarios},
journal = {it - Information Technology},
volume = {57},
number = {4},
pages = {231--242},
year = {2015},
url = {http://www.degruyter.com/view/j/itit.2015.57.issue-4/itit-2015-0005/itit-2015-0005.xml},
}

Fast Collision Detection Through Bounding Volume Hierarchies in Workspace-Time Space for Sampling-Based Motion Planners

July 1, 2015 in ETHZ-ASL, Publications, year 4 by Ulrich Schwesinger

U. Schwesinger, P. Furgale, and R. Siegwart

IEEE International Conference on Robotics and Automation (ICRA), 2015


This paper presents a fast collision-detection method for sampling-based
motion planners based on bounding volume hierarchies in workspace-time
space. By introducing time as an additional dimension to the robot’s workspace,
the method is able to quickly evaluate time-indexed candidate trajectories for
collision with the known future motions of other agents. The approach makes no
assumptions on the shape of the objects and is able to handle arbitrary motions.
We highlight implementation details regarding the application of the collision
detection technique within an online planning framework for automated
driving. Furthermore, we give detailed profiling information to show the
capability for real-time operation.
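
To illustrate the workspace-time idea, the minimal C++ sketch below lifts time into a third box dimension, so that a predicted obstacle motion becomes a static volume that a bounding volume hierarchy can be queried against. The node layout and axis-aligned boxes are simplifying assumptions; the paper handles arbitrary shapes and motions.

#include <vector>

// Axis-aligned box in workspace-time (x, y, t): by treating time as an
// extra dimension, a known future motion becomes a static volume.
struct Box3 {
    double min[3], max[3];   // indices 0,1 spatial, index 2 time
    bool overlaps(const Box3& o) const {
        for (int i = 0; i < 3; ++i)
            if (max[i] < o.min[i] || o.max[i] < min[i]) return false;
        return true;
    }
};

// Minimal BVH node over workspace-time boxes (assumed layout).
struct BvhNode {
    Box3 bounds;                 // union of all child volumes
    int left = -1, right = -1;   // child indices, -1 marks a leaf
};

// Query: does a time-indexed trajectory sample (a small box around the
// robot at its arrival time) hit anything in the hierarchy?
bool collides(const std::vector<BvhNode>& bvh, int node, const Box3& query) {
    if (node < 0 || !bvh[node].bounds.overlaps(query)) return false;
    const BvhNode& n = bvh[node];
    if (n.left < 0 && n.right < 0) return true;   // overlapping leaf: collision
    return collides(bvh, n.left, query) || collides(bvh, n.right, query);
}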

@inproceedings{Schwesinger2015,
author = {Schwesinger, U and Furgale, P and Siegwart, R},
booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
title = {{Fast Collision Detection Through Bounding Volume Hierarchies in Workspace-Time Space for Sampling-Based Motion Planners}},
year = {2015},
month = {May}
}


There and Back Again

January 21, 2015 in ETHZ-ASL, Publications, year 4 by admin

P. Furgale, P. Krüsi, F. Pomerleau, U. Schwesinger, F. Colas, and R. Siegwart

Workshop on Modelling, Estimation, Perception, and Control of All Terrain Mobile Robots at the IEEE International Conference on Robotics and Automation (ICRA), 2014

Topological/metric route following, also called teach and repeat (T&R), enables long-range autonomous navigation even without globally consistent localization. This renders T&R ideal for applications where a global positioning system may not be available, such as navigation through street canyons or forests in search and rescue, reconnaissance in underground structures, surveillance, or planetary exploration.

This talk will present our efforts to develop a T&R system suitable for long-term robot autonomy in highly dynamic, unstructured environments. We use the fast iterative closest point (ICP) algorithms from libpointmatcher to build a T&R system based on a spinning laser range finder. The system deals with dynamic elements in two ways. First, we employ a system-compliant local motion planner to react to dynamic elements in the scene during route following. Second, the system infers the static or dynamic state of each 3D point in the environment based on repeated observations. The velocity of each dynamic point is estimated without requiring object models or explicit clustering of the points. At any time, the system is able to produce a most-likely representation of the underlying static scene geometry. By storing the time history of velocities, we can infer the dominant motion patterns within the map. The result is an online mapping and localization system specifically designed to enable long-term autonomy within highly dynamic environments. We validate the approach using data collected around the campus of ETH Zurich over seven months and at an outdoor 3D test site in Thun, Switzerland.
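
The per-point static/dynamic inference can be pictured as a simple evidence update applied on every revisit, as in the assumed C++ sketch below. The binary sensor model and the threshold are illustrative choices, and the sketch omits the per-point velocity estimation described above.

#include <vector>

// Assumed per-point record: probability-like evidence that a 3D map
// point belongs to the static scene, refined by repeated observations.
struct MapPoint {
    double pStatic = 0.5;   // prior: state unknown
};

// Simple recursive update (an assumption, not the paper's exact model):
// re-observing the point raises the static evidence, observing free
// space where the point should be lowers it.
void update(MapPoint& p, bool reobserved) {
    const double hit = 0.7;                // assumed sensor model
    double l = reobserved ? hit : 1.0 - hit;
    // Binary Bayes update on the static/dynamic state.
    p.pStatic = l * p.pStatic /
                (l * p.pStatic + (1.0 - l) * (1.0 - p.pStatic));
}

// The most-likely static scene keeps points with high static evidence.
std::vector<MapPoint> staticScene(const std::vector<MapPoint>& map) {
    std::vector<MapPoint> out;
    for (const auto& p : map)
        if (p.pStatic > 0.5) out.push_back(p);
    return out;
}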

@inproceedings{furgale_workshop_icra14,
booktitle = {ICRA14 Workshop on Modelling, Estimation, Perception, and Control of All Terrain Mobile Robots},
year = {2014},
month = jun,
title = {There and Back Again---Dealing with highly-dynamic scenes and long-term change during topological/metric route following},
author = {Furgale, P and Kr{\"u}si, P and Pomerleau, F and Schwesinger, U and Colas, F and Siegwart, R},
address = {Hong Kong, China},
}


Unified Temporal and Spatial Calibration for Multi-Sensor Systems

July 17, 2014 in ETHZ-ASL, Publications, year 4 by admin

Paul Furgale, Joern Rehder and Roland Siegwart

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013

In order to increase accuracy and robustness in state estimation for robotics, a growing number of applications rely on data from multiple complementary sensors. For the best performance in sensor fusion, these different sensors must be spatially and temporally registered with respect to each other. To this end, a number of approaches have been developed to estimate these system parameters in a two-stage process, first estimating the time offset and subsequently solving for the spatial transformation between sensors. In this work, we present a novel framework for jointly estimating the temporal offset between measurements of different sensors and their spatial displacements with respect to each other. The approach is enabled by continuous-time batch estimation and extends previous work by seamlessly incorporating time offsets within the rigorous theoretical framework of maximum likelihood estimation. Experimental results for a camera to inertial measurement unit (IMU) calibration prove the ability of this framework to accurately estimate time offsets up to a fraction of the smallest measurement period.
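
The key construction is that the time offset enters the estimation problem as just another variable: each residual evaluates the continuous-time trajectory at the shifted stamp t + d. The C++ sketch below illustrates this with linear interpolation standing in for the paper's basis-function trajectory and a grid search standing in for the joint nonlinear solver; all names are assumptions.

#include <cmath>
#include <cstddef>
#include <vector>

// A continuous-time signal from uniform samples; the paper uses B-spline
// basis functions, linear interpolation stands in here.
struct Trajectory {
    double t0, dt;
    std::vector<double> samples;
    double eval(double t) const {
        double u = (t - t0) / dt;
        int i = static_cast<int>(std::floor(u));
        if (i < 0) return samples.front();
        if (i + 1 >= static_cast<int>(samples.size())) return samples.back();
        double a = u - i;
        return (1.0 - a) * samples[i] + a * samples[i + 1];
    }
};

// Residual of a measurement z stamped t under candidate offset d: the
// trajectory is evaluated at the *shifted* time t + d, so d joins the
// other calibration parameters in one least-squares problem.
double residual(const Trajectory& traj, double t, double z, double d) {
    return z - traj.eval(t + d);
}

// 1-D grid search over d, a stand-in for the joint nonlinear solver.
double bestOffset(const Trajectory& traj,
                  const std::vector<double>& times,
                  const std::vector<double>& meas) {
    double best = 0.0, bestCost = std::numeric_limits<double>::infinity();
    for (double d = -0.1; d <= 0.1; d += 0.001) {
        double cost = 0.0;
        for (std::size_t i = 0; i < times.size(); ++i) {
            double r = residual(traj, times[i], meas[i], d);
            cost += r * r;
        }
        if (cost < bestCost) { bestCost = cost; best = d; }
    }
    return best;
}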


@inproceedings{furgale_iros13,
doi = { 10.1109/IROS.2013.6696514 },
year = { 2013 },
url = { bib/furgale_iros13.pdf },
title = { Unified Temporal and Spatial Calibration for Multi-Sensor Systems },
pages = { 1280--1286 },
month = { 3--7 November },
booktitle = { Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) },
author = { Paul Furgale and Joern Rehder and Roland Siegwart },
address = { Tokyo, Japan },
}


OpenGV: A Unified and Generalized Approach to Real-Time Calibrated Geometric Vision

July 17, 2014 in ETHZ-ASL, Publications, year 4 by admin

Laurent Kneip and Paul Timothy Furgale

IEEE International Conference on Robotics and Automation (ICRA) 2014

OpenGV is a new C++ library for calibrated real-time 3D geometric vision. It unifies both central and non-central absolute and relative camera pose computation algorithms within a single library. Each problem type comes with minimal and non-minimal closed-form solvers, as well as non-linear iterative optimization and robust sample consensus methods. OpenGV therefore contains an unprecedented level of completeness with regard to calibrated geometric vision algorithms, and it is the first library with a dedicated focus on a unified real-time usage of non-central multi-camera systems, which are increasingly popular in robotics and in the automotive industry. This paper introduces OpenGV's flexible interface and abstraction for multi-camera systems, and outlines the performance of all contained algorithms. It is our hope that the introduction of this open-source platform will motivate people to use it and potentially also include more algorithms, which would further contribute to the general accessibility of geometric vision algorithms, and build a common playground for the fair comparison of different solutions.
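
Following the adapter/solver pattern shown in the OpenGV documentation, a central absolute-pose query looks roughly like the sketch below; exact headers and signatures should be checked against the library version in use.

// Sketch of OpenGV's adapter/solver pattern, based on the project
// documentation; verify against your version of the library.
#include <opengv/absolute_pose/methods.hpp>
#include <opengv/absolute_pose/CentralAbsoluteAdapter.hpp>

opengv::transformation_t estimatePose(
    const opengv::bearingVectors_t& bearingVectors,  // unit rays in the camera frame
    const opengv::points_t& points) {                // corresponding 3D world points
  // The adapter decouples the problem data from the solver implementations.
  opengv::absolute_pose::CentralAbsoluteAdapter adapter(bearingVectors, points);
  // Minimal P3P solver (Kneip); returns up to four candidate transformations.
  opengv::transformations_t candidates =
      opengv::absolute_pose::p3p_kneip(adapter);
  // A real pipeline would disambiguate with a fourth point or wrap the
  // solver in the library's sample consensus machinery; we just take
  // the first candidate here.
  if (candidates.empty()) return opengv::transformation_t::Zero();
  return candidates.front();
}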


@inproceedings{kneip_icra14,
year = { 2014 },
codeurl = { http://laurentkneip.github.io/opengv/ },
note = { (\href{http://laurentkneip.github.io/opengv/}{code}) },
pages = { 1--8 },
url = { bib/kneip_icra14.pdf },
title = { OpenGV: A Unified and Generalized Approach to Real-Time Calibrated Geometric Vision },
booktitle = { Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) },
author = { Laurent Kneip and Paul Timothy Furgale },
month = { May 31 -- June 7 },
address = { Hong Kong, China },
}


Associating Uncertainty with Three-Dimensional Poses for use in Estimation Problems

July 17, 2014 in ETHZ-ASL, Journals, Publications, year 4 by admin

Timothy D. Barfoot and Paul T. Furgale

IEEE Transactions on Robotics, 30(3), pp. 679–693

In this paper, we provide specific and practical approaches to associate uncertainty with 4 × 4 transformation matrices, which is a common representation for pose variables in 3-D space. We show constraint-sensitive means of perturbing transformation matrices using their associated exponential-map generators and demonstrate these tools on three simple-yet-important estimation problems: 1) propagating uncertainty through a compound pose change, 2) fusing multiple measurements of a pose (e.g., for use in pose-graph relaxation), and 3) propagating uncertainty on poses (and landmarks) through a nonlinear camera model. The contribution of the paper is the presentation of the theoretical tools, which can be applied in the analysis of many problems involving 3-D pose and point variables.
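
For the first of these problems, the paper's first-order result is compact: the covariance of the compound pose T = T1 T2 is Sigma ≈ Sigma1 + Ad(T1) Sigma2 Ad(T1)^T, where Ad(T1) is the 6 × 6 adjoint of T1. The Eigen-based sketch below (helper names are assumptions) builds the adjoint and applies this rule, using the translation-then-rotation ordering of the perturbation vector.

#include <Eigen/Dense>

// Skew-symmetric matrix of a 3-vector.
Eigen::Matrix3d skew(const Eigen::Vector3d& v) {
    Eigen::Matrix3d S;
    S <<     0.0, -v.z(),  v.y(),
          v.z(),    0.0, -v.x(),
         -v.y(),  v.x(),    0.0;
    return S;
}

// 6x6 adjoint of a pose (R, t), with the perturbation vector ordered
// as [translation; rotation].
Eigen::Matrix<double, 6, 6> adjoint(const Eigen::Matrix3d& R,
                                    const Eigen::Vector3d& t) {
    Eigen::Matrix<double, 6, 6> Ad = Eigen::Matrix<double, 6, 6>::Zero();
    Ad.block<3, 3>(0, 0) = R;
    Ad.block<3, 3>(0, 3) = skew(t) * R;
    Ad.block<3, 3>(3, 3) = R;
    return Ad;
}

// First-order covariance of the compound pose T = T1 * T2:
// Sigma = Sigma1 + Ad(T1) * Sigma2 * Ad(T1)^T.
Eigen::Matrix<double, 6, 6> compoundCovariance(
    const Eigen::Matrix3d& R1, const Eigen::Vector3d& t1,
    const Eigen::Matrix<double, 6, 6>& Sigma1,
    const Eigen::Matrix<double, 6, 6>& Sigma2) {
    Eigen::Matrix<double, 6, 6> Ad1 = adjoint(R1, t1);
    return Sigma1 + Ad1 * Sigma2 * Ad1.transpose();
}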


@article{barfoot_tro14,
year = { 2014 },
title = { Associating Uncertainty with Three-Dimensional Poses for use in Estimation Problems },
codeurl = { http://asrl.utias.utoronto.ca/code/barfoot_tro14.zip },
journal = { IEEE Transactions on Robotics },
author = { Barfoot, Timothy D and Furgale, Paul T },
pages = { 679--693 },
number = { 3 },
volume = { 30 },
month = { jun },
doi = { 10.1109/TRO.2014.2298059 },
}


Infrastructure-Based Calibration of a Multi-Camera Rig

June 12, 2014 in ETHZ-ASL, ETHZ-CVG, Publications, year 3 by admin

Lionel Heng, Mathias Buerki, Gim Hee Lee, Paul Furgale, Roland Siegwart, and Marc Pollefeys

2014 IEEE International Conference on Robotics and Automation (ICRA)

The online recalibration of multi-sensor systems is a fundamental problem that must be solved before complex automated systems are deployed in situations such as automated driving. In such situations, accurate knowledge of calibration parameters is critical for the safe operation of automated systems. However, most existing calibration methods for multi-sensor systems are computationally expensive, use installations of known fiducial patterns, and require expert supervision. We propose an alternative approach called infrastructure-based calibration that is efficient, requires no modification of the infrastructure, and is completely unsupervised. In a survey phase, a computationally expensive simultaneous localization and mapping (SLAM) method is used to build a highly accurate map of a calibration area. Once the map is built, many other vehicles are able to use it for calibration as if it were a known fiducial pattern.

We demonstrate the effectiveness of this method to calibrate the extrinsic parameters of a multi-camera system. The method does not assume that the cameras have an overlapping field of view and it does not require an initial guess. As the camera rig moves through the previously mapped area, we match features between each set of synchronized camera images and the map. Subsequently, we find the camera poses and inlier 2D-3D correspondences. From the camera poses, we obtain an initial estimate of the camera extrinsics and rig poses, and optimize these extrinsics and rig poses via non-linear refinement. The calibration code is publicly available as a standalone C++ package.
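
The initialization step can be read as simple pose chaining: for each frame where the map yields a camera pose T_W_C and the rig pose T_W_R is known, the extrinsic follows as T_R_C = inverse(T_W_R) * T_W_C. The Eigen-based sketch below (assumed names; assumes at least one frame) averages these per-frame estimates into an initial guess for the subsequent non-linear refinement.

#include <Eigen/Geometry>
#include <vector>

// Per-frame pose pair recovered from the pre-built map (assumed input):
// T_W_R = rig pose in the world, T_W_C = camera pose in the world.
struct FramePoses {
    Eigen::Isometry3d T_W_R;
    Eigen::Isometry3d T_W_C;
};

// Initial extrinsic estimate for one camera: chain the poses per frame
// and average. Translations are averaged directly; rotations via
// quaternion averaging, adequate for an initial guess that the
// non-linear refinement then polishes.
Eigen::Isometry3d initialExtrinsic(const std::vector<FramePoses>& frames) {
    Eigen::Vector3d t = Eigen::Vector3d::Zero();
    Eigen::Vector4d q = Eigen::Vector4d::Zero();
    for (const auto& f : frames) {
        Eigen::Isometry3d T_R_C = f.T_W_R.inverse() * f.T_W_C;
        t += T_R_C.translation();
        Eigen::Quaterniond qi(T_R_C.rotation());
        if (qi.coeffs().dot(q) < 0.0) qi.coeffs() = -qi.coeffs();  // same hemisphere
        q += qi.coeffs();
    }
    Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
    T.linear() = Eigen::Quaterniond(q.normalized()).toRotationMatrix();
    T.translation() = t / static_cast<double>(frames.size());
    return T;
}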


@inproceedings{hengICRA14,
author = {Lionel Heng and
Mathias Buerki and
Gim Hee Lee and
Paul Furgale and
Roland Siegwart and
Marc Pollefeys},
title = {Infrastructure-Based Calibration of a Multi-Camera Rig},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2014},
pages = {}
}


Evaluation of Fisheye-Camera Based Visual Multi-Session Localization in a Real-World Scenario

September 20, 2013 in ETHZ-ASL, Publications, VW, year 3 by admin

Peter Muehlfellner, Paul Furgale, Wojciech Derendarz, Roland Philippsen

IEEE Intelligent Vehicles Symposium (IV), 2013

Figure: The V-Charge Golf, showing its integrated sensors and the very subtle differences from a regular “consumer car”.

The European V-Charge project seeks to develop fully automated valet parking and charging of electric vehicles using only low-cost sensors. One of the challenges is to implement robust visual localization using only cameras and stock vehicle sensors. We integrated four monocular, wide-angle, fisheye cameras on a consumer car and implemented a mapping and localization pipeline. Visual features and odometry are combined to build and localize against a keyframe-based three-dimensional map. We report results for the first stage of the project, based on two months' worth of data acquired under varying conditions, with the objective of localizing against a map created offline.
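
A minimal picture of localizing against a keyframe-based map: use odometry to predict the current pose, pick the nearest keyframe, then match features against its landmarks to refine the pose. The C++ sketch below covers only the keyframe selection step, with assumed types; it is not the paper's pipeline.

#include <Eigen/Geometry>
#include <cstddef>
#include <limits>
#include <vector>

// Assumed keyframe record: pose in the world plus its landmark ids.
struct Keyframe {
    Eigen::Isometry3d T_W_K;
    std::vector<int> landmarkIds;
};

// Pick the map keyframe nearest to the odometry-predicted pose; feature
// matching and pose refinement against its landmarks would follow.
int nearestKeyframe(const std::vector<Keyframe>& map,
                    const Eigen::Isometry3d& predicted) {
    int best = -1;
    double bestDist = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < map.size(); ++i) {
        double d = (map[i].T_W_K.translation() - predicted.translation()).norm();
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;   // -1 if the map is empty
}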


@inproceedings{fisheye_iv13,
Address = {Gold Coast, Australia},
Author = {Peter Muehlfellner AND Paul Furgale AND Wojciech Derendarz AND Roland Philippsen},
Booktitle = {IEEE Intelligent Vehicles Symposium (IV)},
Month = jun,
Pages = {57--62},
Title = {{Evaluation of Fisheye-Camera Based Visual Multi-Session Localization in a Real-World Scenario}},
Year = {2013}
}
