Progressive Correspondence Pruning by Consensus Learning

Chen Zhao1*      Yixiao Ge3*      Feng Zhu2      Rui Zhao2,4      Hongsheng Li3      Mathieu Salzmann1     
1. Computer Vision Laboratory, École polytechnique fédérale de Lausanne (EPFL)          
2. SenseTime Research    
3. The Chinese University of Hong Kong    
4. Qing Yuan Research Institute, Shanghai Jiao Tong University

Abstract [Full Paper]


Fig. 1 - Progressive correspondence pruning via local-to-global consensus learning.


Correspondence selection aims to correctly identify the consistent matches (inliers) in an initial set of putative correspondences. This selection is challenging because putative matches are typically highly unbalanced and largely dominated by outliers, and the random distribution of these outliers further complicates learning-based methods.

Our Contributions:

  • We propose to progressively prune correspondences for better inlier identification, which alleviates the effects of unbalanced initial matches and random outlier distribution.

  • We introduce a local-to-global consensus learning network for robust correspondence pruning, achieved by establishing dynamic graphs on-the-fly and estimating both local and global consensus scores to prune correspondences.
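To make the second contribution concrete, here is a minimal numpy sketch of the *idea* behind local-to-global consensus scoring, not the paper's learned network: each correspondence gets a "local" score from its agreement with its k nearest neighbours in feature space (a dynamic graph, rebuilt on-the-fly from the current features) and a "global" score from its agreement with all candidates. The function name, the cosine-similarity proxy, and the choice of k are all illustrative assumptions.

```python
# Illustrative stand-in for local-to-global consensus scoring
# (an assumption-laden sketch, NOT the paper's learned network).
import numpy as np

def consensus_scores(feats: np.ndarray, k: int = 8):
    """feats: (N, D) per-correspondence feature vectors."""
    # Cosine similarity between all pairs of correspondences.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = f @ f.T                           # (N, N) similarity matrix
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    # Local consensus: mean similarity to the k most similar neighbours,
    # i.e. the node's neighbourhood in a dynamic k-NN graph.
    nn = np.argpartition(-sim, k, axis=1)[:, :k]
    local = np.take_along_axis(sim, nn, axis=1).mean(axis=1)
    # Global consensus: mean similarity to every other candidate.
    np.fill_diagonal(sim, 0.0)
    glob = sim.sum(axis=1) / (sim.shape[0] - 1)
    return local, glob

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 32))
local, glob = consensus_scores(feats, k=8)
```

Because the dynamic graph is rebuilt from the current features at every pruning step, neighbourhoods adapt as outliers are removed, which is what distinguishes this from a fixed, precomputed graph.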

Method Overview


Fig. 2 - We gradually prune the raw data into $\hat{N}$ candidates via $K$ pruning blocks guided by local-to-global consensus learning. A parametric model is then estimated, employing inliers identified among the $\hat{N}$ candidates. A full-size verification is further conducted based on the estimated model, yielding $N\times 1$ inlier/outlier predictions for the initial correspondences.
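The pipeline in Fig. 2 can be sketched end-to-end on a toy problem. This is a hedged illustration, not the paper's method: a 1-D slope model stands in for the parametric (essential-matrix) model, and a median-based score stands in for the learned consensus scores. It shows the three stages: $K$ pruning blocks that each keep the top-scoring candidates, model estimation from the $\hat{N}$ survivors, and a full-size verification that labels all $N$ initial correspondences.

```python
# Toy sketch of the Fig. 2 pipeline (illustrative stand-ins only:
# "slope" model instead of an essential matrix, median agreement
# instead of learned consensus scores).
import numpy as np

def progressive_prune(corrs, score_fn, K=2, keep_ratio=0.5):
    """Run K pruning blocks; return indices of the N_hat survivors."""
    idx = np.arange(len(corrs))
    for _ in range(K):
        s = score_fn(corrs[idx])                   # consensus scores
        keep = max(1, int(round(len(idx) * keep_ratio)))
        idx = idx[np.argsort(-s)[:keep]]           # keep top candidates
    return idx

def slope_score(c):
    # Agreement of each candidate's slope y/x with the median slope of
    # the current (shrinking) candidate set.
    slope = c[:, 1] / c[:, 0]
    return -np.abs(slope - np.median(slope))

# Synthetic correspondences: inliers lie on y = 0.7 x, ~60% are outliers.
rng = np.random.default_rng(0)
N = 200
x = rng.uniform(-1.0, 1.0, N)
y = 0.7 * x
is_outlier = rng.random(N) < 0.6
y[is_outlier] += rng.uniform(-1.0, 1.0, is_outlier.sum())
corrs = np.stack([x, y], axis=1)

# K pruning blocks: N candidates -> N_hat survivors.
survivors = progressive_prune(corrs, slope_score, K=2, keep_ratio=0.5)
# Parametric model (here: a robust slope) from the survivors.
model = np.median(corrs[survivors, 1] / corrs[survivors, 0])
# Full-size verification: N x 1 inlier/outlier predictions for ALL
# initial correspondences, not just the survivors.
pred = np.abs(corrs[:, 1] - model * corrs[:, 0]) < 0.05
```

The point of the progressive schedule is visible even in this toy: the inlier ratio among the survivors is far higher than in the raw set, so the model is fitted on much cleaner data, and the final verification still covers every initial correspondence.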


Fig. 3 - Detailed architecture of the proposed pruning block.

Results on YFCC100M and SUN3D


Fig. 4 - Pose estimation on YFCC100M and SUN3D



Fig. 5 - Visualization of progressive pruning.


Citation

        @article{zhao2021progressive,
          title={Progressive Correspondence Pruning by Consensus Learning},
          author={Zhao, Chen and Ge, Yixiao and Zhu, Feng and Zhao, Rui and Li, Hongsheng and Salzmann, Mathieu},
          journal={arXiv preprint arXiv:2101.00591},
          year={2021}
        }


If you have any questions, please contact Chen Zhao at