Intelligent Systems Research Group
We present a visual odometry dataset for the evaluation and comparison of plenoptic, monocular, and stereo-camera-based visual odometry and SLAM algorithms. The dataset contains 11 sequences recorded by a hand-held platform consisting of a plenoptic camera and a pair of stereo cameras. The sequences comprise different indoor and outdoor scenes, with trajectory lengths ranging from 25 meters up to several hundred meters. The recorded sequences include moving objects as well as changing lighting conditions.
To the best of our knowledge, the presented dataset is the first public dataset of its kind and the first to include sequences of large-scale scenes recorded by a plenoptic camera.
While the light field (or plenoptic) sequences were recorded by a Raytrix camera, which is a focused plenoptic camera with a multi-focal micro lens array (MLA), the stereo images were recorded by a pair of industrial-grade cameras. All three cameras were hardware-triggered to obtain time-synchronized image sequences.
All sequences are recorded in a large loop whose start and end show the same scene. Thus, the start and end segments of a sequence can be used to measure the pose error an algorithm accumulates over the entire loop. For all sequences, loop-closure trajectories are supplied and serve as ground truth.
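The loop-closure idea above can be sketched as follows. This is an illustrative Python/NumPy snippet, not the official evaluation (the supplied Matlab scripts implement the actual drift measurement); the function name and the assumption of 4x4 camera-to-world pose matrices are ours.

```python
import numpy as np

def pose_drift(T_est_start, T_est_end, T_gt_rel=None):
    """Accumulated drift over a loop, from estimated start/end poses.

    T_est_start, T_est_end: estimated 4x4 camera-to-world poses at the
        start and end of the loop.
    T_gt_rel: ground-truth relative pose from start to end
        (identity if the loop closes exactly).
    Returns (translational error, rotational error in radians).
    """
    if T_gt_rel is None:
        T_gt_rel = np.eye(4)
    # Estimated relative motion over the whole loop.
    T_est_rel = np.linalg.inv(T_est_start) @ T_est_end
    # Residual transform = drift accumulated over the loop.
    T_err = np.linalg.inv(T_gt_rel) @ T_est_rel
    t_err = np.linalg.norm(T_err[:3, 3])
    # Rotation angle recovered from the trace of the rotation part.
    cos_angle = np.clip((np.trace(T_err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, float(np.arccos(cos_angle))
```

Because start and end of each sequence show the same scene, a perfect algorithm would yield zero drift; the residual transform quantifies the error accumulated over the full trajectory.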
For all cameras, intrinsic camera parameters as well as white images, which are used for vignetting correction, are supplied.
The dataset contains two different camera calibration datasets. The calibration dataset “dataset_01” is valid for the sequences “seq_001” to “seq_005”, while “dataset_02” is valid for “seq_006” to “seq_011”.
More information regarding the recorded sequences, the sensor setup, the camera calibration as well as the proposed evaluation method can be found in related publications listed below.
Zeller N., Quint F., Stilla U. (2018)
A Synchronized Stereo and Plenoptic Visual Odometry Dataset
The following table contains all image sequences of the dataset.
Matlab scripts to preprocess the image sequences and to measure the drift of an algorithm with respect to the ground truth can be found below.
seq_001 [plenoptic] [stereo]
seq_002 [plenoptic] [stereo]
seq_003 [plenoptic] [stereo]
seq_004 [plenoptic] [stereo]
seq_005 [plenoptic] [stereo]
seq_006 [plenoptic] [stereo]
seq_007 [plenoptic] [stereo]
seq_008 [plenoptic] [stereo]
seq_009 [plenoptic] [stereo]
seq_010 [plenoptic] [stereo]
seq_011 [plenoptic] [stereo]
Camera Calibration Data
The following table contains two complete calibration datasets, including the images used for calibration, coordinates of the detected marker points, and predefined standards for scale observation.
While calibration dataset “dataset_01” is valid for the sequences “seq_001” to “seq_005”, “dataset_02” is valid for “seq_006” to “seq_011”.
Attention: Currently, in "CameraModel.xml" the principal point c = (cx, cy) is given in pixel coordinates for an image resolution of 1024 x 1024 pixels. If one wants to use the camera model at the full image resolution of 2048 x 2048 pixels, the principal point coordinates change as follows:
cxnew = (cx + 0.5) ⋅ 2 - 0.5
cynew = (cy + 0.5) ⋅ 2 - 0.5
At the same time the pixel size is halved. The shift of 0.5 occurs because the origin of the image coordinate frame is in the center of the upper left pixel.
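The conversion above can be written as a small helper. This is a sketch in Python (the function name is ours); it implements exactly the formulas given above, with the 0.5 shift accounting for the origin lying at the center of the upper-left pixel.

```python
def rescale_principal_point(cx, cy, scale=2.0):
    """Convert a principal point given at a downscaled resolution to the
    corresponding coordinates at a higher resolution.

    For the 1024 x 1024 -> 2048 x 2048 case described above, scale = 2.
    The +/- 0.5 shift occurs because the image coordinate origin is at
    the center of the upper-left pixel, not its corner.
    """
    cx_new = (cx + 0.5) * scale - 0.5
    cy_new = (cy + 0.5) * scale - 0.5
    return cx_new, cy_new
```

Remember that the pixel size given in the camera model must be halved at the same time.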
Preprocessing and Evaluation Code
At the following link one can find a set of Matlab scripts to preprocess the raw image sequences and to evaluate estimated trajectories.
Preprocessing comprises vignetting correction and debayering for the images of the plenoptic camera, and vignetting correction and rectification for the stereo image pairs:
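The vignetting-correction step can be sketched as follows. This Python/NumPy snippet is illustrative only (the supplied Matlab scripts perform the actual preprocessing); it assumes a raw image and a white image of the same camera and resolution, and divides out the per-pixel falloff captured by the white image.

```python
import numpy as np

def correct_vignetting(raw, white, eps=1e-6):
    """Devignette a raw image using a white image of the same camera.

    raw:   recorded image (2-D array).
    white: white image capturing the per-pixel vignetting falloff.
    The white image is normalized to [0, 1] and divided out; eps guards
    against division by (near-)zero gains at the image borders.
    """
    w = white.astype(np.float64)
    gain = w / max(float(w.max()), eps)
    return raw.astype(np.float64) / np.maximum(gain, eps)
```

A pixel where the white image is at half intensity is brightened by a factor of two, flattening the radial falloff before debayering or rectification.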
This dataset was developed at Karlsruhe University of Applied Sciences. The dataset, as well as all corresponding code, may be used freely for non-commercial purposes (see license terms for details).