Revisiting video super-resolution: you only look outstanding frames

Jaehyun Bae, Sang Hyo Park

Research output: Contribution to journal › Article › peer-review

Abstract

Video super-resolution (VSR) has advanced through various deep learning architectures and datasets that mostly contain clean images. However, most real-world videos are compressed, which raises a critical issue: the frames of a video usually vary in quality. We identify a simple but important coding prior that affects the performance of existing VSR models, because it can tell us which frames stand out from the others in terms of quality; we call these outstanding frames. Exploiting this prior, we propose a method, you only look outstanding frames (YOLOF), that enhances existing VSR models as a universal approach: it feeds a VSR model the highest-quality frames within a given distance of the reference frame. Extensive evaluations with various VSR models show that our YOLOF method enhances existing VSR models substantially without altering their original architectures.
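To make the frame-selection idea concrete, below is a minimal sketch assuming the coding prior is approximated by a per-frame quality indicator such as the HEVC quantization parameter (lower QP is treated as higher quality). The function and parameter names (select_outstanding_frames, max_distance, num_frames) are illustrative assumptions, not the authors' implementation or API.

```python
from typing import List, Sequence


def select_outstanding_frames(
    qp_per_frame: Sequence[int],
    reference_idx: int,
    max_distance: int,
    num_frames: int,
) -> List[int]:
    """Return indices of the highest-quality frames within `max_distance`
    of the reference frame, sorted by temporal position.

    Assumption: lower QP indicates a better-quality (outstanding) frame.
    """
    lo = max(0, reference_idx - max_distance)
    hi = min(len(qp_per_frame) - 1, reference_idx + max_distance)
    candidates = range(lo, hi + 1)
    # Pick the num_frames candidates with the lowest QP values.
    best = sorted(candidates, key=lambda i: qp_per_frame[i])[:num_frames]
    return sorted(best)


# Example: pick the 3 best neighbours within a distance of 4 of frame 10.
qp = [32, 28, 35, 35, 22, 34, 34, 34, 26, 33, 33, 33, 24, 35, 35]
neighbors = select_outstanding_frames(qp, reference_idx=10, max_distance=4, num_frames=3)
print(neighbors)  # -> [8, 9, 12]
# These frames (plus the reference frame itself) would then be passed to an
# existing VSR model in place of its usual fixed sliding window of neighbours.
```

This selection step sits entirely in front of the VSR model, which is consistent with the abstract's claim that the approach is universal and leaves the original architectures untouched.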

Original language: English
Article number: 023012
Journal: Journal of Electronic Imaging
Volume: 32
Issue number: 2
DOIs
State: Published - 1 Mar 2023

Keywords

  • compression domain
  • deep learning
  • high efficiency video coding
  • image enhancement
  • video coding
  • video compression
  • video super-resolution
