Camera Pose Voting for Large-Scale Image-Based Localization

November 18, 2015 in ETHZ-CVG, Publications, year 4 by Ulrich Schwesinger

Bernhard Zeisl, Torsten Sattler, Marc Pollefeys

IEEE International Conference on Computer Vision (ICCV) 2015

Image-based localization approaches aim to determine the camera pose from which an image was taken. Finding correct 2D-3D correspondences between query image features and 3D points in the scene model becomes harder as the size of the model increases. Current state-of-the-art methods therefore combine elaborate matching schemes with camera pose estimation techniques that are able to handle large fractions of wrong matches. In this work we study the benefits and limitations of spatial verification compared to appearance-based filtering. We propose a voting-based pose estimation strategy that exhibits O(n) complexity in the number of matches and thus makes it possible to consider many more matches than previous approaches, whose complexity grows at least quadratically. This new outlier rejection formulation enables us to evaluate pose estimation for 1-to-many matches and to surpass the state of the art. At the same time, we show that using more matches does not automatically lead to better performance.
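
To make the linear-complexity voting idea concrete, here is a minimal, self-contained Python sketch of pose voting in 2D. It is illustrative only and not the paper's actual formulation: it assumes the camera orientation is already known (e.g. gravity-aligned with known yaw), so that every 2D-3D match constrains the camera center to a ray in the ground plane; each match votes along its ray in a discretized grid, and the peak cell is taken as the position estimate.

import numpy as np

def vote_camera_position(points, bearings, grid_size=200, cell=0.1, max_depth=15.0):
    """points: (n, 2) ground-plane positions of matched 3D points; bearings: (n, 2)
    unit viewing directions from the camera to each point, in the world frame."""
    grid = np.zeros((grid_size, grid_size))
    origin = -0.5 * grid_size * cell                      # grid covers a square around (0, 0)
    for p, d in zip(points, bearings):
        depths = np.arange(0.5, max_depth, cell)          # sample candidate depths along the ray
        centers = p[None, :] - depths[:, None] * d[None, :]
        idx = np.round((centers - origin) / cell).astype(int)
        ok = np.all((idx >= 0) & (idx < grid_size), axis=1)
        cells = np.unique(idx[ok], axis=0)                # one vote per traversed cell per match
        grid[cells[:, 0], cells[:, 1]] += 1
    best = np.unravel_index(np.argmax(grid), grid.shape)  # peak of the accumulator
    return origin + np.array(best) * cell, grid

# Toy data: the true camera sits at the origin; one third of the matches are outliers.
rng = np.random.default_rng(0)
pts = rng.uniform(-8, 8, size=(300, 2)) + np.array([0.0, 6.0])
dirs = pts / np.linalg.norm(pts, axis=1, keepdims=True)   # correct bearings
dirs[:100] = rng.normal(size=(100, 2))                    # wrong matches vote along random rays
dirs[:100] /= np.linalg.norm(dirs[:100], axis=1, keepdims=True)
print(vote_camera_position(pts, dirs)[0])                 # close to (0, 0) despite the outliers

Each match touches a bounded number of grid cells, so the accumulation is linear in the number of matches, whereas pairwise consistency checks between matches would grow quadratically.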

@inproceedings{zeisl2015locationvoting,
author = {Zeisl, Bernhard and Sattler, Torsten and Pollefeys, Marc},
title = {{Camera Pose Voting for Large-Scale Image-Based Localization}},
booktitle = {IEEE International Conference on Computer Vision (ICCV)},
year = {2015}
}

Obstacle Detection for Self-Driving Cars Using Only Monocular Cameras And Wheel Odometry

October 26, 2015 in ETHZ-CVG, Publications, year 4 by Ulrich Schwesinger

Christian Haene, Torsten Sattler, and Marc Pollefeys

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2015

Mapping the environment is crucial to enable path planning and obstacle avoidance for self-driving vehicles and other robots. In this paper, we concentrate on ground-based vehicles and present an approach which extracts static obstacles from depth maps computed from multiple consecutive images. In contrast to existing approaches, our system does not require accurate visual-inertial odometry estimation but relies solely on the readily available wheel odometry. To handle the resulting higher pose uncertainty, our system fuses obstacle detections over time and between cameras to estimate the free and occupied space around the vehicle. Using monocular fisheye cameras, we are able to cover a wider field of view and detect obstacles closer to the car, which are often not within the standard field of view of a classical binocular stereo camera setup. Our quantitative analysis shows that our system is accurate enough for the navigation purposes of self-driving cars and runs in real time.
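
The temporal fusion step can be illustrated with a small occupancy-grid sketch in Python. This is a simplification, not the paper's pipeline: poses are planar (x, y, yaw) wheel-odometry estimates, detections are obstacle points already expressed in the vehicle frame, and a log-odds update accumulates evidence so that individual noisy frames do not dominate.

import numpy as np

GRID, CELL = 200, 0.2                                  # 40 m x 40 m grid with 20 cm cells
log_odds = np.zeros((GRID, GRID))

def to_world(pose, pts_vehicle):
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return pts_vehicle @ R.T + np.array([x, y])

def fuse(pose, pts_vehicle, hit=0.9):
    """Add one frame of detections; cells containing detections become more occupied.
    (A full system would also trace rays and decrease the log-odds of free cells.)"""
    pts = to_world(pose, pts_vehicle)
    idx = np.round(pts / CELL).astype(int) + GRID // 2
    ok = np.all((idx >= 0) & (idx < GRID), axis=1)
    log_odds[idx[ok, 0], idx[ok, 1]] += hit

# Toy usage: the vehicle drives forward while repeatedly observing a wall at x = 10 m.
rng = np.random.default_rng(0)
for k in range(20):
    pose = (0.2 * k, 0.0, 0.0)                         # wheel-odometry pose estimate
    wall_in_vehicle = np.column_stack([np.full(50, 10.0 - 0.2 * k),
                                       np.linspace(-5.0, 5.0, 50)])
    fuse(pose, wall_in_vehicle + rng.normal(0.0, 0.05, size=(50, 2)))

occupied = 1.0 - 1.0 / (1.0 + np.exp(log_odds))        # log-odds back to probability
print(int((occupied > 0.8).sum()), "cells classified as obstacles")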

@inproceedings{haene2015obstacle,
title={Obstacle Detection for Self-Driving Cars Using Only Monocular Cameras and Wheel Odometry},
author={H{\"a}ne, Christian and Sattler, Torsten and Pollefeys, Marc},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2015}
}


Relative Pose Estimation for a Multi-Camera System with Known Vertical Direction

June 12, 2014 in ETHZ-CVG, Publications, year 3 by admin

Gim Hee Lee, Marc Pollefeys, and Friedrich Fraundorfer

2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

In this paper, we present our minimal 4-point and linear 8-point algorithms to estimate the relative pose of a multi-camera system with known vertical direction, i.e. known absolute roll and pitch angles. We solve the minimal 4-point algorithm with the hidden variable resultant method and show that it leads to an 8-degree univariate polynomial that gives up to 8 real solutions. We identify a degenerate case of the linear 8-point algorithm when it is solved with the standard Singular Value Decomposition (SVD) method and adopt a simple alternative solution which is easy to implement. We show that our proposed algorithms can be used efficiently within RANSAC for robust estimation. We evaluate the accuracy of our proposed algorithms by comparison with various existing algorithms for multi-camera systems in simulations, and show the feasibility of our proposed algorithms with results from multiple real-world datasets.
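
The core simplification behind such solvers can be shown in a few lines of Python: once the absolute roll and pitch are known (e.g. from an IMU), each camera frame can be pre-rotated so that its vertical axis coincides with gravity, and the unknown relative rotation reduces to a single yaw angle. The snippet below only demonstrates this reduction with synthetic rotations; it is not the paper's 4-point solver.

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Two camera orientations with arbitrary yaw but known roll and pitch.
roll1, pitch1, yaw1 = 0.10, -0.05, 0.7
roll2, pitch2, yaw2 = -0.02, 0.08, 1.9
R1 = rot_z(yaw1) @ rot_y(pitch1) @ rot_x(roll1)
R2 = rot_z(yaw2) @ rot_y(pitch2) @ rot_x(roll2)

# Undo the known roll and pitch of each frame ("gravity alignment").
A1 = rot_y(pitch1) @ rot_x(roll1)
A2 = rot_y(pitch2) @ rot_x(roll2)
R_rel_aligned = (R1 @ A1.T).T @ (R2 @ A2.T)

# Only one rotational unknown remains: the relative yaw about the vertical axis.
print(np.allclose(R_rel_aligned, rot_z(yaw2 - yaw1)))   # True

With the rotation reduced to a single yaw angle, the remaining unknowns are that angle plus the three translation components, which is why four point correspondences form a minimal set.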


@inproceedings{leeCVPR14,
author = {Gim Hee Lee and
Marc Pollefeys and
Friedrich Fraundorfer},
title = {Relative Pose Estimation for a Multi-Camera System with Known Vertical Direction},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2014}
}


Class Specific 3D Object Shape Priors Using Surface Normals

June 12, 2014 in ETHZ-CVG, Publications, year 3 by admin

Christian Haene, Nikolay Savinov, and Marc Pollefeys

2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Dense 3D reconstruction of real-world objects containing textureless, reflective and specular parts is a challenging task. Using general smoothness priors such as surface area regularization can lead to defects in the form of disconnected parts or unwanted indentations. We argue that this problem can be solved by exploiting the object-class-specific local surface orientations, e.g. the surface of a car is always close to horizontal in the roof area. Therefore, we formulate an object-class-specific shape prior in the form of spatially varying anisotropic smoothness terms. The parameters of the shape prior are extracted from training data. We detail how our shape prior formulation directly fits into recently proposed volumetric multi-label reconstruction approaches. This allows a segmentation between the object and its supporting ground. In our experimental evaluation we show reconstructions using our trained shape prior on several challenging datasets.
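
As a rough illustration of what an anisotropic, orientation-dependent smoothness term looks like, the Python sketch below penalizes a surface element according to its normal direction through a matrix learned from training normals. The function names, the quadratic form, and the toy fitting procedure are hypothetical simplifications, not the paper's actual energy or training procedure.

import numpy as np

def anisotropic_cost(normal, M):
    """phi(n) = sqrt(n^T M n); with M = I this reduces to ordinary surface-area
    (isotropic) regularization."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return float(np.sqrt(n @ M @ n))

def learn_M(training_normals, strength=0.8):
    """Crude stand-in for the learned prior: surface orientations that occur often
    in the training data receive a smaller smoothness penalty."""
    N = np.asarray(training_normals, dtype=float)
    N = N / np.linalg.norm(N, axis=1, keepdims=True)
    C = (N.T @ N) / len(N)                  # second-moment matrix of observed normals
    return np.eye(3) - strength * C         # remains positive definite for strength < 1

# Toy usage: roof-like (upward) normals dominate the training set, so an upward-facing
# surface element is cheaper than a sideways-facing one.
rng = np.random.default_rng(1)
roof_normals = np.array([0.0, 0.0, 1.0]) + 0.1 * rng.normal(size=(500, 3))
M = learn_M(roof_normals)
print(anisotropic_cost([0, 0, 1], M), "<", anisotropic_cost([1, 0, 0], M))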


@inproceedings{haeneCVPR14,
author = {Christian Haene and
Nikolay Savinov and
Marc Pollefeys},
title = {Class Specific 3D Object Shape Priors Using Surface Normals},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2014}
}


Unsupervised Learning of Threshold for Geometric Verification in Visual-Based Loop-Closure

June 12, 2014 in ETHZ-CVG, Publications, year 3 by admin

Gim Hee Lee, and Marc Pollefeys

2014 IEEE International Conference on Robotics and Automation (ICRA)

A potential loop-closure image pair passes the geometric verification test if the number of inliers from the computation of the geometric constraint with RANSAC exceeds a pre-defined threshold. The choice of this threshold is critical to the success of identifying the correct loop-closure image pairs. However, the value of the threshold often varies for different datasets and is chosen empirically. In this paper, we propose an unsupervised method that learns the threshold for geometric verification directly from the observed inlier counts of all the potential loop-closure image pairs. We model the distribution of the inlier counts from all the potential loop-closure image pairs with a two-component log-normal mixture model, where one component represents the state of non loop-closure and the other the state of loop-closure, and learn the parameters with the Expectation-Maximization algorithm. The intersection of the log-normal mixture components is the optimal threshold for geometric verification, i.e. the threshold that gives the minimum number of false positive and false negative loop-closures. Our algorithm degenerates when there are too few or no loop-closures, and we propose a chi-squared test to detect this degeneracy. We verify our proposed method on several large-scale datasets collected with both a multi-camera setup and a stereo camera.
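
Since a log-normal mixture on inlier counts is simply a Gaussian mixture on log-counts, the idea is easy to prototype with off-the-shelf EM. The Python sketch below uses scikit-learn and synthetic inlier counts purely for illustration; it is not the paper's implementation, and the intersection of the two weighted component densities is found numerically on a grid.

import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Synthetic inlier counts: many non-loop-closure pairs with few inliers, and a
# smaller set of genuine loop closures with many inliers.
rng = np.random.default_rng(0)
counts = np.concatenate([rng.lognormal(mean=2.0, sigma=0.5, size=900),
                         rng.lognormal(mean=4.5, sigma=0.4, size=100)])

# Fit a two-component Gaussian mixture to the log of the inlier counts with EM.
log_counts = np.log(counts).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(log_counts)
mus = gmm.means_.ravel()
sds = np.sqrt(gmm.covariances_.ravel())
ws = gmm.weights_

# The threshold is where the two weighted component densities cross, between the means.
lo, hi = sorted(mus)
xs = np.linspace(lo, hi, 10000)
d0 = ws[0] * norm.pdf(xs, mus[0], sds[0])
d1 = ws[1] * norm.pdf(xs, mus[1], sds[1])
threshold = np.exp(xs[np.argmin(np.abs(d0 - d1))])      # back to inlier-count units
print("learned inlier-count threshold:", round(float(threshold), 1))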


@inproceedings{leeICRA14,
author = {Gim Hee Lee and
Marc Pollefeys},
title = {Unsupervised Learning of Threshold for Geometric Verification in Visual-Based Loop-Closure},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2014}
}


Infrastructure-Based Calibration of a Multi-Camera Rig

June 12, 2014 in ETHZ-ASL, ETHZ-CVG, Publications, year 3 by admin

Lionel Heng, Mathias Buerki, Gim Hee Lee, Paul Furgale, Roland Siegwart, and Marc Pollefeys

2014 IEEE International Conference on Robotics and Automation (ICRA)

The online recalibration of multi-sensor systems is a fundamental problem that must be solved before complex automated systems can be deployed in applications such as automated driving, where accurate knowledge of the calibration parameters is critical for safe operation. However, most existing calibration methods for multi-sensor systems are computationally expensive, use installations of known fiducial patterns, and require expert supervision. We propose an alternative approach called infrastructure-based calibration that is efficient, requires no modification of the infrastructure, and is completely unsupervised. In a survey phase, a computationally expensive simultaneous localization and mapping (SLAM) method is used to build a highly accurate map of a calibration area. Once the map is built, many other vehicles are able to use it for calibration as if it were a known fiducial pattern.

We demonstrate the effectiveness of this method to calibrate the extrinsic parameters of a multi-camera system. The method does not assume that the cameras have an overlapping field of view and it does not require an initial guess. As the camera rig moves through the previously mapped area, we match features between each set of synchronized camera images and the map. Subsequently, we find the camera poses and inlier 2D-3D correspondences. From the camera poses, we obtain an initial estimate of the camera extrinsics and rig poses, and optimize these extrinsics and rig poses via non-linear refinement. The calibration code is publicly available as a standalone C++ package.
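
The extrinsics-from-poses step can be sketched compactly. Assuming per-frame camera poses in the map frame (from 2D-3D matching against the prebuilt map) and the corresponding rig poses, every synchronized frame yields one estimate of the camera-to-rig transform, and the per-frame estimates can be averaged to initialize the non-linear refinement. The Python code below is an illustrative simplification, not the released camodocal implementation.

import numpy as np

def inv_se3(T):
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = R.T, -R.T @ t
    return Ti

def average_se3(Ts):
    """Chordal mean: average the rotation matrices and re-project onto SO(3);
    translations are averaged arithmetically."""
    R_mean = np.mean([T[:3, :3] for T in Ts], axis=0)
    U, _, Vt = np.linalg.svd(R_mean)
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    out = np.eye(4)
    out[:3, :3] = R
    out[:3, 3] = np.mean([T[:3, 3] for T in Ts], axis=0)
    return out

def camera_extrinsic(T_map_rig_list, T_map_cam_list):
    """Each synchronized frame gives T_rig_cam = inv(T_map_rig) @ T_map_cam."""
    per_frame = [inv_se3(Tr) @ Tc for Tr, Tc in zip(T_map_rig_list, T_map_cam_list)]
    return average_se3(per_frame)

# Toy check: with noise-free poses the true extrinsic is recovered exactly.
true_ext = np.eye(4)
true_ext[:3, 3] = [0.5, 0.0, 1.2]
rig_poses = [np.eye(4) for _ in range(5)]
for k, T in enumerate(rig_poses):
    T[:3, 3] = [float(k), 0.0, 0.0]
cam_poses = [T @ true_ext for T in rig_poses]
print(np.allclose(camera_extrinsic(rig_poses, cam_poses), true_ext))   # True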


@inproceedings{hengICRA14,
author = {Lionel Heng and
Mathias Buerki and
Gim Hee Lee and
Paul Furgale and
Roland Siegwart and
Marc Pollefeys},
title = {Infrastructure-Based Calibration of a Multi-Camera Rig},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2014}
}


Minimal Solutions for Pose Estimation of a Multi-Camera System

February 3, 2014 in ETHZ-CVG, Publications, year 3 by admin

Gim Hee Lee, Bo Li, Marc Pollefeys, and Friedrich Fraundorfer

International Symposium on Robotics Research (ISRR) 2013

In this paper, we propose a novel formulation to solve the pose estimation problem of a calibrated multi-camera system. The non-central rays that pass through the 3D world points and the multi-camera system are elegantly represented as Pluecker lines. This allows us to solve for the depths of the points along the Pluecker lines with a minimal set of 3-point correspondences. We show that the minimal solution for the depths of the points along the Pluecker lines is an 8-degree polynomial that gives up to 8 real solutions. The coordinates of the 3D world points in the multi-camera frame are computed from the known depths. Consequently, the pose of the multi-camera system, i.e. the rigid transformation between the world and multi-camera frames, can be obtained from absolute orientation. We also derive a closed-form minimal solution for the absolute orientation. This removes the need for computationally expensive Singular Value Decompositions (SVD) during the evaluation of the possible solutions for the depths. We identify the correct solution and do robust estimation with RANSAC. Finally, the solution is further refined by including all the inlier correspondences in a non-linear refinement step. We verify our approach by showing comparisons with other existing approaches and results from large-scale real-world datasets.
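
The final absolute-orientation step is standard enough to sketch: once the depths give the 3D points in the multi-camera frame, the rigid transformation follows from aligning the two 3D point sets. The Python snippet below uses the common SVD-based (Kabsch/Umeyama) solution for illustration, whereas the paper's contribution is a closed form that avoids the SVD inside the RANSAC loop; the toy data and names are not from the paper.

import numpy as np

def absolute_orientation(P_world, P_rig):
    """Return R, t such that P_rig ~ R @ P_world + t (points given as (n, 3) arrays)."""
    cw, cr = P_world.mean(axis=0), P_rig.mean(axis=0)
    H = (P_world - cw).T @ (P_rig - cr)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, cr - R @ cw

# Toy check with a random rigid motion and a minimal set of 3 points.
rng = np.random.default_rng(3)
P_world = rng.normal(size=(3, 3))
axis = rng.normal(size=3)
axis = axis / np.linalg.norm(axis)
K = np.array([[0, -axis[2], axis[1]], [axis[2], 0, -axis[0]], [-axis[1], axis[0], 0]])
R_true = np.eye(3) + np.sin(0.8) * K + (1 - np.cos(0.8)) * K @ K    # Rodrigues' formula
t_true = np.array([0.3, -1.0, 2.0])
P_rig = P_world @ R_true.T + t_true
R_est, t_est = absolute_orientation(P_world, P_rig)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))       # True True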


@inproceedings{leeISRR13,
author = {Gim Hee Lee and
Bo Li and
Marc Pollefeys and
Friedrich Fraundorfer},
title = {Minimal Solutions for Pose Estimation of a Multi-Camera System},
booktitle = {International Symposium on Robotics Research (ISRR)},
year = {2013}
}


CVG releases an open source library for calibration of multi-camera systems

September 29, 2013 in ETHZ-CVG, News by admin


Our partners at the ETH Zurich Computer Vision and Geometry lab have released the calibration pipeline that we use in V-Charge. The code is open source and it should work for calibrating the intrinsics and extrinsics of any multi-camera system.

View the overview webpage here: http://people.inf.ethz.ch/hengli/camodocal/

Get the code on github: https://github.com/hengli/camodocal


A Multiple-Camera System Calibration Toolbox Using a Feature Descriptor-Based Calibration Pattern

July 18, 2013 in ETHZ-CVG, Publications, year 3 by admin

Bo Li, Lionel Heng, Kevin Koeser, and Marc Pollefeys

2013 IEEE/RSJ International Conference on Intelligent Robots and Systems

This paper presents a novel feature descriptor-based calibration pattern and a Matlab toolbox which uses the specially designed pattern to easily calibrate both the intrinsics and extrinsics of a multiple-camera system. In contrast to existing calibration patterns, in particular the ubiquitous chessboard, the proposed pattern contains many more features of varying scales; such features can be easily and automatically detected. The proposed toolbox supports the calibration of a camera system which can comprise either normal pinhole cameras or catadioptric cameras. The calibration only requires that neighboring cameras observe parts of the calibration pattern at the same time; the observed parts need not overlap at all, and no overlapping fields of view are assumed for the camera system. We show that the toolbox can easily be used to automatically calibrate camera systems.
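
The reason no overlapping fields of view are needed is that the pattern itself defines a single reference frame: whenever two neighboring cameras see (different) parts of it in the same frame, both camera-to-pattern poses are known, and pairwise extrinsics can then be chained around the rig. The Python sketch below only illustrates this chaining on a toy four-camera rig; it is not the toolbox's Matlab code.

import numpy as np

def compose_chain(pairwise):
    """pairwise[k] is the 4x4 pose of camera k+1 expressed in camera k's frame;
    returns the pose of every camera in camera 0's frame."""
    poses = [np.eye(4)]
    for T in pairwise:
        poses.append(poses[-1] @ T)
    return poses

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

# Toy rig: four cameras on the corners of a 1 m square, each rotated 90 degrees
# about the vertical axis relative to its neighbor.
step = rot_z(np.pi / 2)
step[:3, 3] = [1.0, 0.0, 0.0]
extrinsics = compose_chain([step, step, step])        # cameras 0..3 in camera 0's frame
print(np.round(extrinsics[3][:3, 3], 3))              # position of camera 3: [0. 1. 0.]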


@inproceedings{liIROS13b,
author = {Bo Li and
Lionel Heng and
Kevin Koeser and
Marc Pollefeys},
title = {A Multiple-Camera System Calibration Toolbox Using a Feature Descriptor-Based Calibration Pattern},
booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2013}
}



A 4-Point Algorithm for Relative Pose Estimation of a Calibrated Camera with a Known Relative Rotation Angle

July 18, 2013 in ETHZ-CVG, Publications, year 3 by admin

Bo Li, Lionel Heng, Gim Hee Lee, and Marc Pollefeys

2013 IEEE/RSJ International Conference on Intelligent Robots and Systems

We propose an algorithm to estimate the relative camera pose using four feature correspondences and one relative rotation angle measurement. The algorithm can be used for relative pose estimation of a rigid body equipped with a camera and a relative rotation angle sensor, which can be an odometer, an IMU, or a GPS/INS system. The algorithm exploits the fact that the relative rotation angle of the camera is the same as that of the sensor, since both are rigidly mounted to the same body; therefore, knowledge of the extrinsic calibration between the camera and the sensor is not required. We carry out a quantitative comparison of our algorithm with the well-known 5-point and 1-point algorithms, and show that our algorithm exhibits the highest level of accuracy.
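
The observation the algorithm relies on, namely that the relative rotation angle is unaffected by the unknown camera-to-sensor mounting, follows from the fact that conjugating a rotation by a fixed rotation leaves its angle (and hence its trace) unchanged. The Python snippet below only demonstrates this invariance with synthetic rotations; it is not the 4-point solver itself.

import numpy as np

def rotation_angle(R):
    """Rotation angle from the trace: cos(theta) = (trace(R) - 1) / 2."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def rodrigues(axis, angle):
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

R_sensor = rodrigues([0, 0, 1], 0.35)                     # relative rotation seen by the sensor
R_mount = rodrigues([0.3, -0.7, 0.5], 1.2)                # unknown camera-to-sensor rotation
R_camera = R_mount @ R_sensor @ R_mount.T                 # the camera's relative rotation
print(rotation_angle(R_sensor), rotation_angle(R_camera)) # both 0.35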


@inproceedings{liIROS13a,
author = {Bo Li and
Lionel Heng and
Gim Hee Lee and
Marc Pollefeys},
title = {A 4-Point Algorithm for Relative Pose Estimation of a Calibrated Camera with a Known Relative Rotation Angle},
booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2013}
}
