Auto Bild grants V-Charge the Connected Car Award

January 7, 2016 in News

Auto Bild has honoured V-Charge with the Connected Car Award 2015. The award was presented on January 6, 2016, during the Consumer Electronics Show in Las Vegas; the head of VW Electronics Research accepted it on behalf of the V-Charge consortium.
The Connected Car Award covers all aspects of automotive connectivity and has been awarded by Auto Bild and Computer Bild since 2013. Experts from both magazines pre-selected a shortlist, from which readers chose their favorites in nine categories via online voting.

Euronews Futuris episode on V-Charge

December 8, 2015 in News

Camera Pose Voting for Large-Scale Image-Based Localization

November 18, 2015 in ETHZ-CVG, Publications, year 4

Bernhard Zeisl, Torsten Sattler, Marc Pollefeys

IEEE Int. Conf. on Computer Vision (ICCV) 2015

Image-based localization approaches aim to determine the camera pose from which an image was taken. Finding correct 2D-3D correspondences between query image features and 3D points in the scene model becomes harder as the size of the model increases. Current state-of-the-art methods therefore combine elaborate matching schemes with camera pose estimation techniques that are able to handle large fractions of wrong matches. In this work we study the benefits and limitations of spatial verification compared to appearance-based filtering. We propose a voting-based pose estimation strategy that exhibits O(n) complexity in the number of matches and thus makes it possible to consider many more matches than previous approaches, whose complexity grows at least quadratically. This new outlier rejection formulation enables us to evaluate pose estimation for 1-to-many matches and to surpass the state of the art. At the same time, we show that using more matches does not automatically lead to better performance.
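The O(n) voting idea can be illustrated with a toy sketch. This is not the paper's actual algorithm: the restriction to a ground-plane position grid, the function name, and all grid parameters are illustrative assumptions; the point is only that each match casts a constant number of votes, so runtime stays linear in the number of matches.

```python
import numpy as np

def vote_camera_position(matches, grid_size=50, extent=10.0):
    """Toy voting-based localization on the ground plane.

    Each 2D-3D match casts votes along the ray of camera positions
    consistent with it; runtime is O(n) in the number of matches,
    since every match contributes a constant number of votes.
    """
    accumulator = np.zeros((grid_size, grid_size))
    cell = 2 * extent / grid_size
    for point3d, bearing in matches:
        # A match constrains the camera to lie on the ray from the
        # 3D point opposite the (unit) viewing bearing.
        for depth in np.linspace(0.5, extent, 60):
            cam = point3d[:2] - depth * bearing
            i = int((cam[0] + extent) / cell)
            j = int((cam[1] + extent) / cell)
            if 0 <= i < grid_size and 0 <= j < grid_size:
                accumulator[i, j] += 1
    # Outlier matches vote inconsistently and never concentrate in a
    # single cell, so the accumulator maximum is a robust hypothesis.
    i, j = np.unravel_index(np.argmax(accumulator), accumulator.shape)
    return (-extent + (i + 0.5) * cell, -extent + (j + 0.5) * cell)
```

Because every vote is a constant-time accumulator increment, adding more (possibly 1-to-many) matches only grows the cost linearly, which is what distinguishes this style of outlier rejection from quadratic-or-worse verification schemes.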

@inproceedings{zeisl2015locationvoting,
  author    = {Zeisl, Bernhard and Sattler, Torsten and Pollefeys, Marc},
  title     = {{Camera Pose Voting for Large-Scale Image-Based Localization}},
  booktitle = {Int. Conf. on Computer Vision (ICCV)},
  year      = {2015}
}

Query-response geocast for vehicular crowd sensing

November 13, 2015 in Journals, Publications, TUB, year 4

Timpner, Julian and Wolf, Lars

Ad Hoc Networks

Modern vehicles are essentially mobile sensor platforms collecting a vast amount of information, which can be shared in vehicular ad hoc networks. A prime example of a resulting vehicular crowd sensing application might be the search for a parking spot in a specific geographic area. The interested vehicle sends a corresponding query into the destination area—a technique known as geocast. As the query originator, however, is likely to have moved relatively far away from the location from where the query was started by the time the response arrives, an efficient routing approach towards the originator is required. In this paper, we extend Breadcrumb Geocast Routing (BGR), a georouting protocol for vehicular networks that closes this functional gap. We introduce several performance improvements. In particular, we focus on further reducing both the delivery delay and network overhead and on the dynamic adaptation of breadcrumbs to the street layout, node density, and other scenario-specific parameters. Extensive simulations in four different urban scenarios show a significant improvement over BGR, especially in terms of delivery delay, which can be reduced by an average of 24%.
Breadcrumb Geocast Routing Version 2 (BGR2) thus not only avoids up to 93% of the traffic overhead of Epidemic, but increases the delivery rate of the underlying georouting protocol significantly from about 48% to almost 100% even in difficult scenarios. In sum, it is shown that BGR2 and breadcrumbs in general are a feasible and efficient approach for the routing of query responses to moving nodes via geocast, enabling a variety of vehicular crowd sensing applications.
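The breadcrumb idea can be sketched at a very high level: the moving originator leaves a trail of timestamped positions, and the response follows that trail from the query's origin towards the originator's latest position. This is a heavily simplified illustration; all class and function names are assumptions, and BGR's actual crumb placement adapts to street layout and node density.

```python
from dataclasses import dataclass, field

@dataclass
class Breadcrumb:
    position: tuple   # (x, y) coordinate where the crumb was dropped
    timestamp: float

@dataclass
class Originator:
    trail: list = field(default_factory=list)

    def move_to(self, position, t):
        # Drop a breadcrumb at each visited position (BGR would
        # adapt crumb density to the scenario; one per move here).
        self.trail.append(Breadcrumb(position, t))

def route_response(trail, query_origin):
    """Follow the breadcrumb trail from the position where the query
    was issued towards the originator's most recent known position."""
    hops = [c.position for c in trail]
    start = hops.index(query_origin)  # crumbs before the query are irrelevant
    return hops[start:]
```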

@article{timpner2015geocast,
  author   = {Timpner, Julian and Wolf, Lars},
  title    = {{Query-response geocast for vehicular crowd sensing}},
  journal  = {Ad Hoc Networks},
  number   = {Special Issue on Vehicular Crowd Sensing},
  month    = jun,
  year     = {2015},
  doi      = {10.1016/j.adhoc.2015.06.003},
  issn     = {1570-8705},
  keywords = {Crowd sensing, Geocast, Query response, Routing, V2V, VANET}
}

Trustworthy Parking Communities: Helping Your Neighbor to Find a Space

November 13, 2015 in Journals, Publications, TUB, year 4

Timpner, Julian and Schürmann, Dominik and Wolf, Lars

IEEE Transactions on Dependable and Secure Computing

Cooperation between vehicles facilitates traffic management, road safety and infotainment applications. Cooperation, however, requires trust in the validity of the received information. In this paper, we tackle the challenge of securely exchanging parking spot availability information. Trust is crucial in order to support the decision of whether the querying vehicle should rely on the received information about free parking spots close to its destination and thus ignore other potentially free spots on the way. Therefore, we propose Parking Communities, which provide a distributed and dynamic means to establish trusted groups of vehicles helping each other to securely find parking in their respective community area. Our approach is based on high-performance state-of-the-art encryption and signature algorithms as well as a well-understood mathematical trust rating model. This approach allows end-to-end encrypted request-response communications in combination with geocast and can be used as an overlay to existing vehicular networking technologies. We provide a comprehensive comparison with other security architectures and simulation results showing the feasibility of our approach.
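The abstract refers to a well-understood mathematical trust rating model without detailing it here. A common choice in vehicular trust systems is a beta-reputation update, sketched below as an assumption (the paper's actual model, function names, and the admission threshold may differ):

```python
def beta_trust(positive, negative):
    """Expected trustworthiness of a vehicle under a Beta(p+1, n+1)
    reputation model: 0.5 with no evidence, converging towards the
    observed fraction of honest reports as evidence accumulates."""
    return (positive + 1) / (positive + negative + 2)

def admit_to_community(positive, negative, threshold=0.8):
    """A parking community could admit a vehicle once its rating
    clears a trust threshold (the value 0.8 is illustrative)."""
    return beta_trust(positive, negative) >= threshold
```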

@article{timpner2015parkingcommunities,
  author    = {Timpner, Julian and Sch\"{u}rmann, Dominik and Wolf, Lars},
  title     = {{Trustworthy Parking Communities: Helping Your Neighbor to Find a Space}},
  journal   = {IEEE Transactions on Dependable and Secure Computing},
  publisher = {IEEE Computer Society},
  year      = {2015}
}

k-Stacks: High-Density Valet Parking for Automated Vehicles

November 13, 2015 in Publications, TUB, year 4

Timpner, Julian and Friedrichs, Stephan and van Balen, Johannes and Wolf, Lars

Intelligent Vehicles Symposium (IV)

Automated valet parking not only improves driving comfort, but can have a considerable impact on the urban landscape by reducing the required parking space. We present the first study of parking space optimization for automated valet parking with an in-depth theoretical analysis of the parking lot properties under various aspects, including the worst-case extraction time, total shunting distance, and the number of shunting operations (each per car). Most importantly, the proposed model bounds all these values. We verify the theoretical properties of our model in four simulated scenarios, one of which is based on real-world data from a downtown parking garage. We show that very good pick-up times of about 1 min are possible with very little overhead in terms of shunting distance and time, while providing a significantly improved parking density as compared to conventional parking lots.
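The stack abstraction behind these bounds can be sketched as follows. This is a simplification under assumed semantics: cars enter and leave a depth-k stack at one end, so extracting a car requires shunting at most k-1 blockers; the paper's model additionally bounds extraction time and total shunting distance, which this sketch does not compute.

```python
class KStack:
    """A parking stack of depth k: cars enter and leave at one end,
    so retrieving a parked car requires shunting the cars in front."""

    def __init__(self, k):
        self.k = k
        self.cars = []  # index 0 = innermost slot

    def park(self, car):
        if len(self.cars) >= self.k:
            raise ValueError("stack full")
        self.cars.append(car)

    def extract(self, car):
        """Return (car, number of shunting operations performed)."""
        idx = self.cars.index(car)
        blocking = len(self.cars) - idx - 1  # cars parked in front
        # Shunt blockers out, remove the target, shunt blockers back.
        movers = [self.cars.pop() for _ in range(blocking)]
        target = self.cars.pop()
        self.cars.extend(reversed(movers))
        return target, blocking
```

Since at most k-1 cars can block any target, the number of shunting operations per extraction is bounded by k-1, which is the kind of per-car guarantee the abstract refers to.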

@inproceedings{timpner2015kstacks,
  author    = {Timpner, Julian and Friedrichs, Stephan and van Balen, Johannes and Wolf, Lars},
  title     = {{k-Stacks: High-Density Valet Parking for Automated Vehicles}},
  booktitle = {Proceedings of the IEEE Intelligent Vehicles Symposium (IV)},
  address   = {Seoul, Korea},
  month     = jun,
  publisher = {IEEE},
  keywords  = {Automated Vehicles, Cooperative Vehicle-infrastructure Systems, V2X Communication},
  year      = {2015}
}

Summary maps for lifelong visual localization

November 13, 2015 in ETHZ-ASL, Journals, Publications, VW, year 4

Peter Muehlfellner, Mathias Buerki, Michael Bosse, Wojciech Derendarz, Roland Philippsen and Paul Furgale

Journal of Field Robotics

Robots that use vision for localization need to handle environments which are subject to seasonal and structural change, and operate under changing lighting and weather conditions. We present a framework for lifelong localization and mapping designed to provide robust and metrically accurate online localization in these kinds of changing environments. Our system iterates between offline map building, map summary, and online localization. The offline mapping fuses data from multiple visually varied datasets, thus dealing with changing environments by incorporating new information. Before passing this data to the online localization system, the map is summarized, selecting only the landmarks that are deemed useful for localization. This Summary Map enables online localization that is accurate and robust to the variation of visual information in natural environments while still being computationally efficient.

We present a number of summary policies for selecting useful features for localization from the multi-session map and explore the tradeoff between localization performance and computational complexity. The system is evaluated on 77 recordings, with a total length of 30 kilometers, collected outdoors over sixteen months. These datasets cover all seasons, various times of day, and changing weather such as sunshine, rain, fog, and snow. We show that it is possible to build consistent maps that span data collected over an entire year, and cover day-to-night transitions. Simple statistics computed on landmark observations are enough to produce a Summary Map that enables robust and accurate localization over a wide range of seasonal, lighting, and weather conditions.
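One of the "simple statistics" summary policies can be sketched as ranking landmarks by how many distinct mapping sessions observed them, then keeping a fixed budget. This is an illustrative assumption about the policy, not the paper's exact formulation; the function name and data layout are invented for the sketch.

```python
def summarize_map(landmarks, budget):
    """Select landmarks for online localization.

    `landmarks` maps a landmark id to the set of mapping sessions in
    which it was observed. Ranking by the number of distinct sessions
    prefers landmarks that stay visible across seasons and lighting;
    `budget` caps the map size to keep online localization efficient.
    """
    ranked = sorted(landmarks, key=lambda lm: len(landmarks[lm]),
                    reverse=True)
    return ranked[:budget]
```

The budget parameter is exactly where the tradeoff between localization performance and computational complexity mentioned in the abstract would be tuned.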

@article{muehlfellner2015summarymaps,
  author    = {M{\"u}hlfellner, Peter and B{\"u}rki, Mathias and Bosse, Mike and Derendarz, Wojciech and Philippsen, Roland and Furgale, Paul},
  title     = {Summary maps for lifelong visual localization},
  journal   = {Journal of Field Robotics},
  publisher = {John Wiley \& Sons}
}

Introspective Classification for Robot Perception

November 13, 2015 in Journals, Oxford-MRG, Publications, year 4

H. Grimmett, R. Triebel, R. Paul, and I. Posner

International Journal of Robotics Research (IJRR)

In robotics, the use of a classification framework which produces scores with inappropriate confidences will ultimately lead to the robot making dangerous decisions. In order to select a framework which will make the best decisions, we should pay careful attention to the ways in which it generates scores. Precision and recall have been widely adopted as canonical metrics to quantify the performance of learning algorithms, but for robotics applications involving mission-critical decision making, good performance in relation to these metrics is insufficient. We introduce and motivate the importance of a classifier's introspective capacity: the ability to associate an appropriate assessment of confidence with any test case. We propose that a key ingredient for introspection is a framework's potential to increase its uncertainty with the distance between a test datum and its training data.
We compare the introspective capacities of a number of commonly used classification frameworks in both classification and detection tasks, and show that better introspection leads to improved decision-making in the context of tasks such as autonomous driving or semantic map generation.
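The notion of uncertainty growing with distance from the training data can be illustrated with a kernel-weighted score that decays towards 0.5 far from any training point. This is a generic proxy for the introspective behaviour of, e.g., Gaussian process classifiers, not the paper's method; the function, its shrinkage heuristic, and all parameters are assumptions.

```python
import numpy as np

def introspective_confidence(x, train_X, train_y, length_scale=1.0):
    """Probability-like score for class 1 that reverts to maximal
    uncertainty (0.5) as the query moves away from the training set,
    instead of extrapolating a confident answer."""
    # RBF weights: nearby training points dominate the vote.
    w = np.exp(-np.sum((train_X - x) ** 2, axis=1)
               / (2 * length_scale ** 2))
    total = w.sum()
    if total < 1e-9:      # far from all training data:
        return 0.5        # report ignorance, not a confident guess
    p = w[train_y == 1].sum() / total
    # Shrink towards 0.5 when overall support is weak.
    return 0.5 + (p - 0.5) * min(total / len(train_y), 1.0)
```

A non-introspective classifier (an SVM decision value, say) can report high confidence arbitrarily far from its training data; the sketch above makes the opposite, safer choice, which is the behaviour the paper argues matters for mission-critical decisions.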

@article{grimmett2015introspective,
  author  = {Grimmett, Hugo and Triebel, Rudolph and Paul, Rohan and Posner, Ingmar},
  title   = {{Introspective Classification for Robot Perception}},
  journal = {International Journal of Robotics Research (IJRR)},
  year    = {2015}
}

Integrating Metric and Semantic Maps for Vision-Only Automated Parking

November 13, 2015 in ETHZ-ASL, Oxford-MRG, Publications, year 4

H. Grimmett, M. Buerki, L. Paz, P. Piniés, P. Furgale, I. Posner, and P. Newman

IEEE International Conference on Robotics and Automation (ICRA)
We present a framework for integrating two map layers that are often required for fully automated operation: metric and semantic. Metric maps are likely to improve with subsequent visitations to the same place, while semantic maps can comprise both permanent and fluctuating features of the environment. However, it is not clear from the state of the art how to update the semantic layer as the metric map evolves.
The strengths of our method are threefold: the framework allows for the unsupervised evolution of both maps as the environment is revisited by the robot; it uses vision-only sensors, making it appropriate for production cars; and the human labelling effort is minimised as far as possible while maintaining high fidelity. We evaluate this on two different car parks with a fully automated car, performing repeated automated parking manoeuvres to demonstrate the robustness of the system.

@inproceedings{grimmett2015integrating,
  author    = {Grimmett, Hugo and Buerki, Mathias and Paz, Lina and Pini{\'e}s, Pedro and Furgale, Paul and Posner, Ingmar and Newman, Paul},
  title     = {{Integrating Metric and Semantic Maps for Vision-Only Automated Parking}},
  booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
  address   = {Seattle, WA, USA},
  month     = {May},
  year      = {2015}
}

Obstacle Detection for Self-Driving Cars Using Only Monocular Cameras And Wheel Odometry

October 26, 2015 in ETHZ-CVG, Publications, year 4

Christian Haene, Torsten Sattler and Marc Pollefeys

IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) 2015

Mapping the environment is crucial to enable path planning and obstacle avoidance for self-driving vehicles and other robots. In this paper, we concentrate on ground-based vehicles and present an approach which extracts static obstacles from depth maps computed out of multiple consecutive images. In contrast to existing approaches, our system does not require accurate visual inertial odometry estimation but solely relies on the readily available wheel odometry. To handle the resulting higher pose uncertainty, our system fuses obstacle detections over time and between cameras to estimate the free and occupied space around the vehicle. Using monocular fisheye cameras, we are able to cover a wider field of view and detect obstacles closer to the car, which are often not within the standard field of view of a classical binocular stereo camera setup. Our quantitative analysis shows that our system is accurate enough for navigation purposes of self-driving cars and runs in real-time.
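The temporal fusion step can be illustrated with a standard log-odds occupancy grid update, which averages out spurious single-frame detections — useful under the higher pose uncertainty of wheel odometry. This is a generic sketch, not the paper's exact formulation; the inverse sensor model probabilities and function names are assumptions.

```python
import numpy as np

def fuse_detections(grid, detections, hit=0.85, miss=0.4):
    """Log-odds occupancy update over a grid of cells.

    `detections` is a list of ((i, j), occupied) pairs from one or
    more frames/cameras. Repeated consistent evidence drives a cell
    towards occupied or free; a single spurious detection barely
    moves it, which is what makes temporal fusion robust.
    """
    for (i, j), occupied in detections:
        p = hit if occupied else miss
        grid[i, j] += np.log(p / (1 - p))
    return grid

def occupancy(grid):
    """Convert accumulated log-odds back to probabilities."""
    return 1.0 / (1.0 + np.exp(-grid))
```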

@inproceedings{haene2015obstacle,
  author    = {H{\"a}ne, Christian and Sattler, Torsten and Pollefeys, Marc},
  title     = {Obstacle Detection for Self-Driving Cars Using Only Monocular Cameras and Wheel Odometry},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2015}
}