Abstract
Video super-resolution (VSR) has been improved with various deep learning architectures and datasets that mostly contain clean images. However, most real-world videos are compressed, and this raises a critical issue: the frames within a compressed video usually vary in quality. We identify a simple but important coding prior that affects the performance of existing VSR models: the prior indicates which frames have higher quality than others, which we call outstanding frames. Exploiting this prior, we propose a method, You Only Look Outstanding Frames (YOLOF), that enhances existing VSR models as a universal approach by feeding them the highest-quality frames near the reference frame within a given distance. Extensive evaluations with various VSR models show that YOLOF substantially enhances existing VSR models without modifying their original architectures.
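To make the frame-selection idea concrete, below is a minimal sketch (not the authors' code) of how outstanding frames near a reference frame could be picked from a per-frame quality prior. It assumes the coding prior is available as a list of per-frame QP values (lower QP roughly meaning higher quality); the function name `select_outstanding_frames` and the tie-breaking rule are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch, not the authors' implementation: pick the best-quality
# neighbouring frames (lowest QP) within a given distance of the reference frame.
from typing import List


def select_outstanding_frames(qp_per_frame: List[int],
                              ref_idx: int,
                              distance: int,
                              num_frames: int) -> List[int]:
    """Return indices of the `num_frames` highest-quality frames (lowest QP)
    within `distance` of the reference frame `ref_idx`, excluding the reference."""
    lo = max(0, ref_idx - distance)
    hi = min(len(qp_per_frame), ref_idx + distance + 1)
    window = [i for i in range(lo, hi) if i != ref_idx]
    # Lower QP is assumed to mean higher quality; ties broken by temporal proximity.
    ranked = sorted(window, key=lambda i: (qp_per_frame[i], abs(i - ref_idx)))
    return sorted(ranked[:num_frames])


if __name__ == "__main__":
    # Hypothetical per-frame QP values following a hierarchical-GOP-like pattern.
    qps = [22, 30, 27, 30, 25, 30, 27, 30, 22]
    print(select_outstanding_frames(qps, ref_idx=4, distance=3, num_frames=2))
    # -> [2, 6]: the outstanding neighbours that would be fed to the VSR model
```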
Original language | English |
---|---|
Article number | 023012 |
Journal | Journal of Electronic Imaging |
Volume | 32 |
Issue number | 2 |
DOIs | |
State | Published - 1 Mar 2023 |
Keywords
- compression domain
- deep learning
- high efficiency video coding
- image enhancement
- video coding
- video compression
- video super-resolution