The workshop consists of two types of challenges. The winner of each challenge will receive 1000 USD, while the runner-up will receive 500 USD.
Concretely, there are three challenges:
For each dataset and challenge, we evaluate the pose accuracy of a method. To this end, we follow [Sattler et al., Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions, CVPR 2018] and define a set of thresholds on the position and orientation errors of the estimated pose. For each (X meters, Y degrees) threshold, we report the percentage of query images localized within X meters and Y degrees of the ground truth pose.
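As an illustration, here is a minimal Python sketch of how such an evaluation could be computed. This is not the official evaluation code; the pose convention (world-to-camera) and the threshold values shown are assumptions for illustration only.

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Return (position error in meters, orientation error in degrees).

    Assumes poses map world points to camera coordinates,
    x_cam = R @ x_world + t, so the camera center is c = -R^T @ t.
    """
    c_est = -R_est.T @ t_est
    c_gt = -R_gt.T @ t_gt
    pos_err = np.linalg.norm(c_est - c_gt)
    # Angle of the relative rotation between estimate and ground truth.
    cos_angle = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    rot_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return pos_err, rot_err

def percent_localized(errors, thresholds=((0.25, 2.0), (0.5, 5.0), (5.0, 10.0))):
    """Percentage of queries within each (X meters, Y degrees) threshold pair.

    errors: array of shape (N, 2) with columns (meters, degrees).
    The default thresholds here are placeholders, not the official ones.
    """
    errors = np.asarray(errors)
    return [100.0 * np.mean((errors[:, 0] <= x) & (errors[:, 1] <= y))
            for x, y in thresholds]
```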
For ranking the methods, we follow the Robust Vision Challenge from CVPR 2018: for each dataset and challenge, we rank the submitted results based on these percentages. We then rank all methods submitted to a challenge based on their ranks on the individual datasets. The rankings are computed using the Schulze Proportional Ranking method from [Markus Schulze, A new monotonic, clone-independent, reversal symmetric, and Condorcet-consistent single-winner election method, Social Choice and Welfare 2011].
The Schulze Proportional Ranking method is based on pairwise comparisons of results. If a method's results are not available for a dataset, the comparison assumes that it performs worse than any method for which results are available.
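To make the pairwise-comparison idea concrete, here is a minimal Python sketch of a Schulze-style ranking in which each dataset acts as one "ballot" that orders methods by their percentages. This is a simplified illustration, not the exact proportional-ranking variant cited above; the function name and inputs are hypothetical.

```python
def schulze_ranking(methods, scores_per_dataset):
    """Rank methods with a Schulze-style procedure.

    scores_per_dataset: list of dicts {method: score}, one per dataset;
    higher scores are better. A method missing from a dict is treated
    as worse than every method with results on that dataset.
    """
    n = len(methods)
    idx = {m: i for i, m in enumerate(methods)}
    # d[a][b] = number of datasets on which method a beats method b.
    d = [[0] * n for _ in range(n)]
    for scores in scores_per_dataset:
        for a in methods:
            for b in methods:
                if a == b:
                    continue
                sa = scores.get(a, float("-inf"))
                sb = scores.get(b, float("-inf"))
                if sa > sb:
                    d[idx[a]][idx[b]] += 1
    # Strongest-path strengths (Floyd-Warshall-style widest paths).
    p = [[d[a][b] if d[a][b] > d[b][a] else 0 for b in range(n)]
         for a in range(n)]
    for k in range(n):
        for a in range(n):
            if a == k:
                continue
            for b in range(n):
                if b == a or b == k:
                    continue
                p[a][b] = max(p[a][b], min(p[a][k], p[k][b]))
    # Order by number of pairwise Schulze wins (more wins = better).
    wins = [sum(p[a][b] > p[b][a] for b in range(n) if b != a)
            for a in range(n)]
    return sorted(methods, key=lambda m: -wins[idx[m]])

# Example with made-up percentages; method B has no results on dataset 2.
ranking = schulze_ranking(
    ["A", "B", "C"],
    [{"A": 80.1, "B": 75.3, "C": 70.0},
     {"A": 60.2, "C": 65.5}],
)
```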
Challenge submissions will be handled via the evaluation service set up at https://visuallocalization.net/:
The deadline for submitting to any of the three challenges is June 1st, 23:59 (PST). To be eligible for a prize, you need to notify us of the corresponding publication or arXiv paper by June 1st, 23:59 (PST) (contact Torsten Sattler at [email protected]). We will notify the winners by June 4th.
The following datasets will be used for the End-to-End Visual Localization and Visual Localization challenges:
The following datasets will be used for the Local Feature Evaluation challenge:
The following is provided for the challenge:
See this GitHub repository for the code, the data, and information on using both.
The workflow for submitting to the challenge is: