All datasets should be downloaded into `$DATA_PATH`. We give additional details below, as well as the expected directory structures.
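As a quick sanity check, the following minimal sketch verifies that the expected top-level dataset directories (detailed below) exist under `$DATA_PATH`; adjust the list to the datasets you actually use:

```python
import os
from pathlib import Path

# Top-level dataset directories expected under $DATA_PATH
# (see the structures listed below).
EXPECTED = ["aachen", "robotcar", "cmu", "hpatches", "sfm",
            "google_landmarks", "bdd"]

data_path = Path(os.environ["DATA_PATH"])
for name in EXPECTED:
    status = "ok" if (data_path / name).is_dir() else "MISSING"
    print(f"{name:20s} {status}")
```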
The localization datasets (Aachen Day-Night, RobotCar Seasons, and Extended CMU Seasons) are introduced in Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions by Sattler et al., and can be downloaded from the associated website.
For each dataset, there are separate directories for the images, the SfM models, and the localization databases.
aachen/
├── aachen.db
├── day_time_queries_with_intrinsics.txt
├── night_time_queries_with_intrinsics.txt
├── databases/
├── images_upright/
│   ├── db/
│   │   └── ...
│   └── query/
│       └── ...
└── models/
    └── hfnet_model/
        ├── cameras.bin
        ├── images.bin
        └── points3D.bin
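The `cameras.bin`, `images.bin`, and `points3D.bin` files are in COLMAP's binary model format. A minimal inspection sketch, assuming COLMAP's `read_write_model.py` helper script (shipped with COLMAP, not with this repository) is importable:

```python
import os
from read_write_model import read_model  # ships with COLMAP (scripts/python/)

# Path follows the aachen/ structure shown above.
model_dir = os.path.join(os.environ["DATA_PATH"],
                         "aachen", "models", "hfnet_model")
cameras, images, points3D = read_model(model_dir, ext=".bin")
print(f"{len(cameras)} cameras, {len(images)} images, "
      f"{len(points3D)} 3D points")
```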
robotcar/
├── overcast-reference.db
├── query.db
├── images/
│   ├── overcast-reference/
│   ├── sun/
│   ├── dusk/
│   ├── night/
│   └── night-rain/
├── intrinsics/
│   ├── left_intrinsics.txt
│   ├── rear_intrinsics.txt
│   └── right_intrinsics.txt
├── queries/
│   ├── dusk_queries_with_intrinsics.txt
│   ├── night_queries_with_intrinsics.txt
│   ├── night-rain_queries_with_intrinsics.txt
│   └── sun_queries_with_intrinsics.txt
└── models/
    └── hfnet_model/
        ├── cameras.bin
        ├── images.bin
        └── points3D.bin
The query lists generated with setup/utils/generate_robotcar_query_list.py
are available here.
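Each line of a `*_queries_with_intrinsics.txt` file is expected to list the image name followed by the COLMAP camera model, the image width and height, and the model-specific intrinsic parameters. A minimal parsing sketch under that assumption:

```python
def parse_query_list(path):
    """Parse a *_queries_with_intrinsics.txt file.

    Each line: <image name> <camera model> <width> <height> <params...>
    """
    queries = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            name, model, width, height, *params = line.split()
            queries.append({
                "name": name,
                "model": model,  # a COLMAP model name, e.g. SIMPLE_RADIAL
                "size": (int(width), int(height)),
                "params": [float(p) for p in params],
            })
    return queries

queries = parse_query_list("robotcar/queries/sun_queries_with_intrinsics.txt")
print(len(queries), "queries;", queries[0]["model"])
```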
cmu/
├── images/
│   ├── slice2/
│   │   ├── database/
│   │   └── query/
│   └── ...
├── slice2/
│   ├── sift_database.db
│   ├── sift_queries.db
│   ├── slice2.queries_with_intrinsics.txt
│   └── models/
│       └── hfnet_model/
│           ├── cameras.bin
│           ├── images.bin
│           └── points3D.bin
├── slice3/
│   └── ...
└── ...
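The `.db` files above (`aachen.db`, `overcast-reference.db`, `sift_database.db`, ...) are SQLite feature databases in the COLMAP format. Assuming the standard COLMAP schema, they can be inspected directly with the standard library:

```python
import sqlite3

# COLMAP databases are SQLite files; the `images` table maps
# each image_id to an image name and camera_id.
with sqlite3.connect("cmu/slice2/sift_database.db") as db:
    n = db.execute("SELECT COUNT(*) FROM images").fetchone()[0]
    print(f"{n} images in the database")
    for image_id, name in db.execute(
            "SELECT image_id, name FROM images LIMIT 5"):
        print(image_id, name)
```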
Local feature detectors and descriptors can be evaluated on the HPatches and SfM datasets, as reported in our paper.
The dataset is described in the paper HPatches: A benchmark and evaluation of handcrafted and learned local descriptors by Balntas et al., and can be downloaded here.
hpatches/
├── i_ajuntament/
└── ...
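Each HPatches sequence directory (`i_*` for illumination changes, `v_*` for viewpoint changes) contains six images `1.ppm` to `6.ppm` and the homographies `H_1_2` to `H_1_6` relating the first image to the others. A minimal loading sketch, assuming that standard layout:

```python
from pathlib import Path
import cv2
import numpy as np

def load_hpatches_sequence(seq_dir):
    """Load the 6 images and 5 homographies of one HPatches sequence."""
    seq_dir = Path(seq_dir)
    images = [cv2.imread(str(seq_dir / f"{i}.ppm")) for i in range(1, 7)]
    # H_1_k maps pixels of image 1 into image k (k = 2..6),
    # stored as plain-text 3x3 matrices.
    homographies = [np.loadtxt(str(seq_dir / f"H_1_{k}")) for k in range(2, 7)]
    return images, homographies

images, Hs = load_hpatches_sequence("hpatches/i_ajuntament")
print(len(images), "images,", len(Hs), "homographies")
```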
The dataset is introduced by Ono et al. in their paper LF-Net: Learning Local Features from Images, and can be obtained here.
sfm/
├── british_museum/
│   └── dense/
├── florence_cathedral_side/
│   └── dense/
├── lincoln_memorial_statue/
│   └── dense/
├── london_bridge/
│   └── dense/
├── milan_cathedral/
│   └── dense/
├── mount_rushmore/
│   └── dense/
├── piazza_san_marco/
│   └── dense/
├── reichstag/
│   └── dense/
├── sacre_coeur/
│   └── dense/
├── sagrada_familia/
│   └── dense/
├── st_pauls_cathedral/
│   └── dense/
├── united_states_capitol/
│   └── dense/
├── scales.txt
└── exif/
    ├── brandenburg_gate
    ├── british_museum
    ├── buckingham
    ├── colosseum_exterior
    ├── florence_cathedral_side
    ├── grand_place_brussels
    ├── hagia_sophia_interior
    ├── lincoln_memorial_statue
    ├── london_bridge
    ├── milan_cathedral
    ├── mount_rushmore
    ├── notre_dame_front_facade
    ├── pantheon_exterior
    ├── piazza_san_marco
    ├── sagrada_familia
    ├── st_pauls_cathedral
    ├── st_peters_square
    ├── taj_mahal
    ├── temple_nara_japan
    ├── trevi_fountain
    ├── united_states_capitol
    └── westminster_abbey
HF-Net is trained on the Google Landmarks and Berkeley Deep Drive datasets. For the former, first download the index of images, then the dataset itself using the script setup/scripts/download_google_landmarks.py. The latter can be downloaded from the dataset website (we used the night and dawn sequences).
The training labels are predictions of SuperPoint (keypoints and local descriptors) and NetVLAD (global descriptors). Their export is described in the training documentation.
google_landmarks/
├── images/
├── global_descriptors/
└── superpoint_predictions/
bdd/
├── dawn_images_vga/
├── night_images_vga/
├── global_descriptors/
└── superpoint_predictions/
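Both `global_descriptors/` and `superpoint_predictions/` hold one prediction file per image. Assuming they are stored as `.npz` archives (the exact array names are set by the export step described in the training documentation, and the file name below is hypothetical), their contents can be inspected as follows:

```python
import numpy as np

# File name is illustrative; the actual keys and shapes depend on the
# export scripts described in the training documentation.
pred = np.load("google_landmarks/superpoint_predictions/some_image.npz")
for key in pred.files:
    print(key, pred[key].shape)
```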