Abstract
As the need for sensor fusion systems has grown, developing methods to find correspondences between images with different spectral ranges has become increasingly important. Since cross-spectral image pairs share little low-level information, such as textures and edges, existing matching approaches fail even with convolutional neural networks (CNNs). In this paper, we propose an end-to-end metric learning method, called SPIMNet (SPectral-Invariant Matching Network), for robust cross- and multi-spectral image patch matching. While existing CNN-based methods learn matching features directly from cross- and multi-spectral image patches, SPIMNet translates patches across spectral bands and then discriminates their similarity in three steps: (1) a domain translation network adjusts the input patches to a common feature domain; (2) two Siamese networks learn to match the adjusted features within the same spectral domain; and (3) the matching features are fed to fully-connected layers that decide, as a classification task, whether the patches correspond. By effectively combining these steps, SPIMNet achieves competitive results on a variety of challenging datasets, including both VIS–NIR and VIS–Thermal image pairs. Our code is available at https://github.com/koyeongmin/SPIMNet.
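The three-step pipeline described above can be sketched in PyTorch as follows. This is a minimal illustration only, not the authors' implementation: all class names (`DomainTranslator`, `SiameseEncoder`, `MatchHead`), layer sizes, and the single-channel 64×64 patch assumption are hypothetical, and a single shared Siamese encoder stands in for the paper's two Siamese networks.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the three-step pipeline from the abstract;
# architecture details are illustrative assumptions, not the paper's design.

class DomainTranslator(nn.Module):
    """Step 1: translate a patch from one spectral band toward the other."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class SiameseEncoder(nn.Module):
    """Step 2: shared-weight encoder applied to both same-domain patches."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(feat_dim),  # infers the flattened size on first use
        )

    def forward(self, x):
        return self.net(x)

class MatchHead(nn.Module):
    """Step 3: fully-connected layers classifying the pair as match / non-match."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 2),  # logits for {non-match, match}
        )

    def forward(self, f1, f2):
        return self.net(torch.cat([f1, f2], dim=1))

class SpimNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.translate = DomainTranslator()
        self.encoder = SiameseEncoder()
        self.head = MatchHead()

    def forward(self, vis_patch, nir_patch):
        vis_as_nir = self.translate(vis_patch)   # (1) domain translation
        f1 = self.encoder(vis_as_nir)            # (2) Siamese encoding of both
        f2 = self.encoder(nir_patch)             #     same-domain patches
        return self.head(f1, f2)                 # (3) match classification

model = SpimNetSketch()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

Under this reading, the matching decision is framed as a two-class classification over concatenated features, consistent with the abstract's description of fully-connected layers determining patch identity.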
| Original language | English |
|---|---|
| Pages (from-to) | 623-632 |
| Number of pages | 10 |
| Journal | Information Fusion |
| Volume | 91 |
| DOIs | |
| State | Published - Mar 2023 |
Keywords
- Domain conversion
- Image matching
- Sensor fusion
- Stereo matching
- VIS–NIR