TY - GEN
T1 - A robust matching network for gradually estimating geometric transformation on remote sensing imagery
AU - Kim, Dong Geon
AU - Nam, Woo Jeoung
AU - Lee, Seong Whan
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - In this paper, we propose a matching network for gradually estimating the geometric transformation parameters between two aerial images taken in the same area but in different environments. To precisely match two aerial images, there are important factors to consider, such as differences in acquisition time and variations in viewpoint, scale, and rotation. Conventional methods for matching aerial image pairs with such large variations are extremely time-consuming and have limitations in finding correct correspondences, because the image gradients and grayscale intensities used to generate feature descriptors are not robust to these variations. We design the network architecture as an end-to-end trainable deep neural network that reflects the characteristics of aerial images. The hierarchical structure, which estimates the rotation and the affine transformation in order, makes it possible to reduce the range of predictions and minimize errors caused by misalignment, resulting in more precise matching performance. Furthermore, we apply transfer learning to make the feature extraction networks more robust and better suited to the aerial image domain with its large variations. For the experiments, we use remote sensing image datasets from Google Earth and the International Society for Photogrammetry and Remote Sensing (ISPRS). To evaluate our method quantitatively, we measure the probability of correct keypoints (PCK) metric, which objectively compares the degree of matching. In both qualitative and quantitative assessments, our method demonstrates state-of-the-art performance compared to existing methods.
UR - http://www.scopus.com/inward/record.url?scp=85076795250&partnerID=8YFLogxK
U2 - 10.1109/SMC.2019.8913881
DO - 10.1109/SMC.2019.8913881
M3 - Conference contribution
AN - SCOPUS:85076795250
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 3889
EP - 3894
BT - 2019 IEEE International Conference on Systems, Man and Cybernetics, SMC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE International Conference on Systems, Man and Cybernetics, SMC 2019
Y2 - 6 October 2019 through 9 October 2019
ER -
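The abstract describes a staged estimation in which a rotation is predicted first and an affine transformation second, so that each stage regresses over a narrower range. The following is a minimal NumPy sketch of how such staged estimates could be composed into a single 2x3 affine transform; the function names and the regressed values are hypothetical illustrations, not the paper's implementation.

import numpy as np

def rotation_matrix(theta):
    """2x3 affine matrix for a pure rotation by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0]])

def compose_affine(outer, inner):
    """Compose two 2x3 affine transforms: apply `inner` first, then `outer`."""
    H_inner = np.vstack([inner, [0.0, 0.0, 1.0]])
    H_outer = np.vstack([outer, [0.0, 0.0, 1.0]])
    return (H_outer @ H_inner)[:2, :]

# Stage 1: a rotation angle regressed by the first sub-network (hypothetical value).
theta = np.deg2rad(30.0)
# Stage 2: a residual affine correction regressed on the pre-rotated image
# (hypothetical value, close to identity because stage 1 removed most rotation).
residual = np.array([[1.02, 0.01, 3.0],
                     [0.00, 0.98, -2.0]])
full_transform = compose_affine(residual, rotation_matrix(theta))

Composing the stage outputs into one transform lets the final warp be applied to the source image once, avoiding repeated resampling.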
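The evaluation uses the probability of correct keypoints (PCK). A common convention, assumed here, counts a transferred keypoint as correct when its distance to the ground-truth keypoint is at most alpha * max(H, W); the paper may use a different threshold definition. A minimal sketch:

import numpy as np

def pck(pred_pts, gt_pts, img_size, alpha=0.05):
    """Fraction of keypoints within alpha * max(H, W) of ground truth.

    pred_pts, gt_pts: arrays of shape (N, 2) holding (x, y) coordinates.
    img_size: (H, W) of the image; alpha scales the correctness threshold.
    """
    pred_pts = np.asarray(pred_pts, dtype=float)
    gt_pts = np.asarray(gt_pts, dtype=float)
    thresh = alpha * max(img_size)
    dists = np.linalg.norm(pred_pts - gt_pts, axis=1)
    return float(np.mean(dists <= thresh))

Sweeping alpha (e.g., 0.01 to 0.10) yields a PCK curve, which gives a fuller picture of alignment accuracy than a single threshold.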