Joint Semantic Understanding with a Multilevel Branch for Driving Perception

Dong Gyu Lee, Yoon Ki Kim

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

Visual perception is a critical task for autonomous driving. Understanding the driving environment in real time helps a vehicle drive safely. In this study, we propose a multi-task learning framework that efficiently performs traffic object detection, drivable area segmentation, and lane line segmentation simultaneously. A shared encoder extracts features from an input image, and three decoders attached at multilevel branches handle the specific tasks. Decoders for more similar tasks share feature maps, enabling joint semantic understanding. Multiple loss functions are summed with automatically learned weights so that all objectives are optimized simultaneously. We demonstrate the effectiveness of this framework on the BerkeleyDeepDrive100K (BDD100K) dataset. In our experiments, the proposed method outperforms competing multi-task and single-task methods in accuracy while maintaining real-time inference at more than 37 frames per second.
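The abstract mentions that multiple task losses are summed with automatically learned weights. One common scheme for this kind of automatic loss balancing is homoscedastic-uncertainty weighting, where each task loss is scaled by a learnable log-variance term; the sketch below illustrates that idea in plain Python. This is an assumption for illustration only — the paper's exact weighting method may differ, and the function name and loss values here are hypothetical.

```python
import math

def weighted_multitask_loss(task_losses, log_vars):
    """Combine per-task losses with uncertainty-based weights.

    Each task loss L_i is scaled by exp(-s_i), where s_i is a
    learnable log-variance; the added +s_i term keeps the optimizer
    from driving every weight toward zero. (A sketch of one common
    automatic weighting scheme, not necessarily the paper's.)
    """
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += math.exp(-s) * loss + s
    return total

# Hypothetical per-task losses: detection, drivable area, lane line.
losses = [1.2, 0.8, 0.5]
log_vars = [0.0, 0.0, 0.0]   # s_i = 0 gives unit weights at init
print(weighted_multitask_loss(losses, log_vars))  # 2.5 at init
```

In a training loop, the `log_vars` would be registered as trainable parameters alongside the network weights, so gradient descent balances the three tasks without hand-tuned coefficients.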

Original language: English
Article number: 2877
Journal: Applied Sciences (Switzerland)
Volume: 12
Issue number: 6
DOIs
State: Published - 1 Mar 2022

Keywords

  • Drivable area segmentation
  • Joint semantic understanding
  • Lane line segmentation
  • Multi-level branch network
  • Multi-task learning
  • Real-time inference
  • Traffic object detection
