Contrastive Self-Supervised Learning with Smoothed Representation for Remote Sensing

Heechul Jung, Yoonju Oh, Seongho Jeong, Chaehyeon Lee, Taegyun Jeon

Research output: Contribution to journal › Article › peer-review

44 Scopus citations

Abstract

In remote sensing, numerous unlabeled images are continuously accumulated over time, and it is difficult to annotate all the data. Therefore, a self-supervised learning technique that can improve the recognition rate using unlabeled data is useful for remote sensing. This letter presents contrastive self-supervised learning with smoothed representation for remote sensing, based on the SimCLR framework. Self-supervised learning for remote sensing usually exploits the well-known property that images within a short geographic distance are likely to be semantically similar. Our algorithm builds on this knowledge and, unlike existing methods such as Tile2Vec, simultaneously utilizes several neighboring images as positives for the anchor image. Furthermore, whereas MoCo and SimCLR, which are among the state-of-the-art self-supervised learning approaches, use only two augmented views of a single input image, our proposed approach uses multiple input images and averages their representations (i.e., a smoothed representation). Consequently, the proposed approach outperforms state-of-the-art self-supervised learning methods, such as Tile2Vec, MoCo, and SimCLR, on the Cropland Data Layer (CDL), RESISC-45, UCMerced, and EuroSAT data sets. The proposed approach is comparable to the pretrained ImageNet model in the CDL classification task.
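The core idea in the abstract — averaging the representations of several geographically neighboring tiles into one smoothed positive for a SimCLR-style (NT-Xent) contrastive loss — can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the single-anchor formulation, and the toy embeddings are all assumptions for clarity.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit hypersphere, as in SimCLR."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def smoothed_nt_xent(anchor, neighbors, negatives, tau=0.1):
    """NT-Xent-style loss where the positive is the averaged
    (smoothed) representation of several neighboring tiles.

    anchor:    (d,)   embedding of the anchor tile
    neighbors: (k, d) embeddings of k spatially nearby tiles
    negatives: (n, d) embeddings of unrelated tiles
    """
    z_a = l2_normalize(anchor)
    # Smoothed representation: average the neighbor embeddings,
    # then renormalize so similarities stay on the same scale.
    z_pos = l2_normalize(neighbors.mean(axis=0))
    z_neg = l2_normalize(negatives, axis=1)

    pos_sim = np.dot(z_a, z_pos) / tau          # scalar
    neg_sim = z_neg @ z_a / tau                 # (n,)
    logits = np.concatenate([[pos_sim], neg_sim])
    # Cross-entropy with the smoothed positive as the target class.
    return float(-pos_sim + np.log(np.exp(logits).sum()))

# Toy usage: neighbors are small perturbations of the anchor,
# mimicking the "nearby tiles are semantically similar" assumption.
rng = np.random.default_rng(0)
d = 8
anchor = rng.normal(size=d)
neighbors = anchor + 0.05 * rng.normal(size=(4, d))
negatives = rng.normal(size=(16, d))
loss = smoothed_nt_xent(anchor, neighbors, negatives)
print(loss)
```

Averaging before normalization acts as a low-pass filter on the neighbor embeddings, so tile-specific noise cancels while the shared (semantic) component survives; the anchor is then pulled toward that shared component rather than toward any single neighbor.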

Original language: English
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 19
DOIs
State: Published - 2022

Keywords

  • Classification
  • contrastive learning
  • remote sensing
  • self-supervised learning
  • smoothed representation

