Segmentation-Guided Context Learning Using EO Object Labels for Stable SAR-to-EO Translation

Jaehyup Lee, Hyun Ho Kim, Doochun Seo, Munchurl Kim

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Recently, the analysis and use of synthetic aperture radar (SAR) imagery have become crucial for surveillance, military operations, and environmental monitoring. A common challenge with SAR images is the presence of speckle noise, which can hinder their interpretability. To enhance the clarity of SAR images, this letter introduces a novel SAR-to-electro-optical (EO) image translation (SET) network, called SGCL-SET, which is the first to incorporate EO object label information for stable translation. A pretrained segmentation network supplies segmentation regions and their labels to the SET learning process, so SGCL-SET can effectively learn the translation for regions with confusing contexts. Comprehensive experiments on our KOMPSAT dataset show that SGCL-SET significantly outperforms all previous methods by large margins across nine image quality evaluation metrics.
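The letter itself publishes no code, but the abstract's key idea, feeding segmentation regions and their EO object labels into the translation network, can be sketched in a general way. The snippet below is only an illustration of label conditioning (one-hot segmentation maps concatenated to the SAR input); the function names, channel layout, and NumPy formulation are assumptions, not the authors' implementation.

```python
import numpy as np

def one_hot_labels(seg_map, num_classes):
    """Convert an HxW integer label map (e.g. from a pretrained
    segmentation network) into an HxWxC one-hot array."""
    h, w = seg_map.shape
    onehot = np.zeros((h, w, num_classes), dtype=np.float32)
    # Advanced indexing: set the channel matching each pixel's label to 1.
    onehot[np.arange(h)[:, None], np.arange(w)[None, :], seg_map] = 1.0
    return onehot

def build_generator_input(sar_image, seg_map, num_classes):
    """Concatenate a single-channel SAR image with one-hot segmentation
    labels along the channel axis, so a translation network can condition
    on object context (hypothetical layout, for illustration only)."""
    sar = sar_image[..., None].astype(np.float32)   # H x W x 1
    labels = one_hot_labels(seg_map, num_classes)   # H x W x C
    return np.concatenate([sar, labels], axis=-1)   # H x W x (1 + C)
```

In a GAN-based SET pipeline of the kind the keywords suggest, such a stacked tensor would replace the raw SAR image as the generator's input, letting the network disambiguate regions whose backscatter looks similar but whose object labels differ.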

Original language: English
Article number: 4001305
Pages (from-to): 1-5
Number of pages: 5
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 21
DOIs
State: Published - 2024

Keywords

  • Generative adversarial network
  • SAR-to-EO translation
  • synthetic aperture radar (SAR) image

