- For transparency and reproducibility, we have added the evaluation code to the development kits.
- Added more detailed coordinate transformation descriptions to the raw data development kit.
- The Velodyne laser scan data has been released for the odometry benchmark.
- Uploaded the missing oxts file for raw data sequence 2011_09_26_drive_0093.
- Added demo code to read and project tracklets into images to the raw data development kit.
- Added pre-trained LSVM baseline models for download.
- Added demo code to read and project 3D Velodyne points into images to the raw data development kit.
- The right color images and the Velodyne laser scans have been released for the object detection benchmark.
- We are looking for a PhD student in 3D semantic scene parsing (position available at MPI Tübingen).
- More complete calibration information (cameras, Velodyne, IMU) has been added to the object detection benchmark.
- A preprint of our IJRR data paper is available for download now!
- The tracking benchmark has been released!
- The road and lane estimation benchmark has been released!
- The evaluation for the odometry benchmark has been modified so that longer sequences are taken into account.
- We are organizing a workshop on reconstruction meets recognition at ICCV 2013!
- The pose files for the odometry benchmark have been replaced with a properly interpolated (subsampled) version that does not exhibit artifacts when computing velocities from the poses.
- The ground truth disparity maps and flow fields have been refined/improved.
- The server evaluation scripts have been updated to also evaluate the bird's eye view metrics and to provide more detailed results for each evaluated method.
- The KITTI road devkit has been updated and some bugs have been fixed in the training ground truth.
- For detection methods that use flow features, the 3 preceding frames have been made available in the object detection benchmark.
- Added colored versions of the images and ground truth for reflective regions to the stereo/flow dataset.
- We are organizing a workshop on reconstruction meets recognition at ECCV 2014!
- Fixed a bug in the sorting of the object detection benchmark (ordering should be according to the moderate level of difficulty).
- We have fixed some bugs in the ground truth of the road segmentation benchmark and updated the data, devkit and results.
- We have released our new stereo 2015, flow 2015, and scene flow 2015 benchmarks. In contrast to the stereo 2012 and flow 2012 benchmarks, they provide more difficult sequences as well as ground truth for dynamic objects.
- For flexibility, we now allow a maximum of 3 submissions per month and count submissions to different benchmarks separately.
- We have added novel benchmarks for 3D object detection, including 3D and bird's eye view evaluation.
- We have added novel benchmarks for depth completion and single image depth prediction!
- We have added novel benchmarks for semantic segmentation and semantic instance segmentation!
- We have added a novel benchmark for multi-object tracking and segmentation (MOTS)!
- We have updated the evaluation procedure for Tracking and MOTS. Evaluation now uses the HOTA metrics and is performed with the TrackEval codebase.

When using this dataset in your research, we will be happy if you cite us!
(or bring us some self-made cake or ice-cream!)

We take advantage of our autonomous driving platform Annieway to develop novel challenging real-world computer vision benchmarks. Our tasks of interest are: stereo, optical flow, visual odometry, 3D object detection and 3D tracking. For this purpose, we equipped a standard station wagon with two high-resolution color and grayscale video cameras. Accurate ground truth is provided by a Velodyne laser scanner and a GPS localization system. Our datasets are captured by driving around the mid-size city of Karlsruhe, in rural areas and on highways. Up to 15 cars and 30 pedestrians are visible per image. Besides providing all data in raw format, we extract benchmarks for each task. For each of our benchmarks, we also provide an evaluation metric and this evaluation website. Preliminary experiments show that methods ranking high on established benchmarks such as Middlebury perform below average when moved outside the laboratory to the real world. Our goal is to reduce this bias and complement existing benchmarks by providing real-world benchmarks with novel difficulties to the community.

For the stereo 2012, flow 2012, odometry, object detection or tracking benchmarks, please cite:

@inproceedings{Geiger2012CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}
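The Velodyne scans mentioned in the news above are distributed as flat binary files of little-endian float32 values, four per point (x, y, z, reflectance). A minimal reading sketch (the file path in the comment is a hypothetical example, not a path from this page):

```python
import numpy as np

def read_velodyne_scan(path):
    """Load one KITTI Velodyne scan.

    Each point is stored as four little-endian float32 values:
    x, y, z (metres, in the sensor frame) and reflectance.
    """
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)

# Example (hypothetical path):
# points = read_velodyne_scan("velodyne/000000.bin")
# xyz, reflectance = points[:, :3], points[:, 3]
```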
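Projecting Velodyne points into a camera image, as the devkit demo code does, chains three calibration transforms: the rigid transform from the Velodyne to the camera frame, the rectifying rotation, and the camera projection matrix. A sketch under the usual KITTI conventions; the argument names mirror the calibration file entries (Tr_velo_to_cam, R0_rect, P2), but the matrices here are supplied by the caller, not read from an actual calibration file:

```python
import numpy as np

def project_velo_to_image(pts_velo, Tr_velo_to_cam, R0_rect, P2):
    """Project Nx3 Velodyne points into pixel coordinates.

    pts_velo:       (N, 3) points in the Velodyne frame
    Tr_velo_to_cam: (3, 4) rigid transform, Velodyne -> camera
    R0_rect:        (3, 3) rectifying rotation
    P2:             (3, 4) projection matrix of the left color camera
    Returns (N, 2) pixel coordinates and (N,) depths.
    """
    n = pts_velo.shape[0]
    hom = np.hstack([pts_velo, np.ones((n, 1))])   # homogeneous (N, 4)
    cam = R0_rect @ (Tr_velo_to_cam @ hom.T)       # rectified camera frame (3, N)
    img = P2 @ np.vstack([cam, np.ones((1, n))])   # image plane (3, N)
    depth = img[2]
    uv = (img[:2] / depth).T                       # perspective division
    return uv, depth
```

In practice, points with non-positive depth (behind the camera) are discarded before drawing.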