Ghostformer: A GhostNet-Based Two-Stage Transformer for Small Object Detection

Sijia Li, Furkat Sultonov, Jamshid Tursunboev, Jun Hyun Park, Sangseok Yun, Jae Mo Kang

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, we propose a novel two-stage transformer with GhostNet that improves performance on the small object detection task. Specifically, building on the original Deformable Transformers for End-to-End Object Detection (deformable DETR), we chose GhostNet as the backbone for feature extraction, since it is better suited to efficient feature extraction. Furthermore, at the target detection stage, we selected the 300 best bounding box results as regional proposals, which were subsequently set as the initial object queries of the decoder layer. Finally, in the decoder layer, we optimized and refined the queries to increase detection accuracy. To validate the performance of the proposed model, we adopted the widely used COCO 2017 dataset. Extensive experiments demonstrated that the proposed scheme yields a higher average precision (AP) score in detecting small objects than the existing deformable DETR model.
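The proposal-selection step described above (taking the 300 highest-scoring boxes and using them to seed the decoder's object queries) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the score and embedding arrays are assumed inputs standing in for the encoder's per-proposal outputs.

```python
import numpy as np

def select_top_queries(scores, embeddings, k=300):
    """Pick the k highest-scoring proposals and return their indices
    and embeddings, which would seed the decoder's object queries."""
    # argsort ascending, reverse for descending, keep the top k
    idx = np.argsort(scores)[::-1][:k]
    return idx, embeddings[idx]

# toy example: 1000 candidate proposals with 256-dim embeddings
rng = np.random.default_rng(0)
scores = rng.random(1000)
emb = rng.random((1000, 256))
idx, queries = select_top_queries(scores, emb)
print(queries.shape)  # (300, 256)
```

In the two-stage setting, these selected embeddings replace the learned query embeddings of a one-stage decoder, so the decoder starts from content-aware initializations rather than fixed learned vectors.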

Original language: English
Article number: 6939
Journal: Sensors
Volume: 22
Issue number: 18
DOIs
State: Published - Sep 2022

Keywords

  • GhostNet
  • regional proposals
  • small object detection
  • two-stage transformer
