An accurate and cost-effective stereo matching algorithm and processor for real-time embedded multimedia systems

Kyeong ryeol Bae, Byungin Moon

Research output: Contribution to journal › Article › peer-review

Abstract

Stereo matching is a vision technique for obtaining three-dimensional (3D) distance information in various multimedia applications by calculating the pixel disparities between matching points of a stereo image pair captured by a stereo camera. The most important considerations in stereo matching are high matching accuracy and real-time performance. This paper therefore proposes an accurate stereo matching algorithm that uses the census transform and sum of absolute differences (SAD) algorithms in a complementary manner, together with its real-time hardware architecture. In addition, the proposed algorithm uses a vertical census transform with cost aggregation (VCTCA) to reduce hardware costs while maintaining high matching accuracy. We model the proposed algorithm in the C language and verify it in several environments. Using a hardware description language, we implement the proposed hardware architecture and verify it on a field-programmable gate array (FPGA)-based platform to confirm its cost and performance. The experimental results show that the proposed algorithm using the VCTCA produces accurate 3D distance information in real environments while reducing hardware complexity. Thus, the algorithm and its hardware architecture are suitable for real-time embedded multimedia systems.
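
The abstract describes combining census-transform and SAD matching costs in a complementary manner. As a rough illustrative sketch only, and not the paper's VCTCA pipeline, cross-check, or hardware architecture, the C fragment below shows one common way such a combined cost can drive a simple winner-takes-all matcher; the image dimensions, window size, disparity range, and the LAMBDA weighting factor are all assumptions made for this sketch.

/*
 * Illustrative sketch (not the paper's exact method): a basic
 * winner-takes-all stereo matcher that fuses a SAD window cost with a
 * census-transform Hamming cost. Image size, window size, disparity
 * range, and LAMBDA are assumed values for illustration only.
 */
#include <stdint.h>
#include <stdlib.h>

#define W       64   /* image width  (assumed)                 */
#define H       48   /* image height (assumed)                 */
#define MAX_D   16   /* disparity search range (assumed)       */
#define RADIUS   2   /* 5x5 matching window (assumed)          */
#define LAMBDA   2   /* weight of the census (Hamming) cost    */

/* 5x5 census transform: each neighbour contributes one bit that is set
 * when that neighbour is darker than the centre pixel.               */
static uint32_t census5x5(const uint8_t *img, int x, int y)
{
    uint32_t sig = 0;
    uint8_t  c   = img[y * W + x];
    for (int dy = -RADIUS; dy <= RADIUS; ++dy)
        for (int dx = -RADIUS; dx <= RADIUS; ++dx) {
            if (dx == 0 && dy == 0) continue;
            sig = (sig << 1) | (img[(y + dy) * W + (x + dx)] < c);
        }
    return sig;
}

/* Hamming distance between two census signatures. */
static int hamming32(uint32_t a, uint32_t b)
{
    uint32_t v = a ^ b;
    int n = 0;
    while (v) { v &= v - 1; ++n; }
    return n;
}

/* Combined SAD + census cost of matching left (x,y) to right (x-d,y). */
static int match_cost(const uint8_t *l, const uint8_t *r, int x, int y, int d)
{
    int sad = 0;
    for (int dy = -RADIUS; dy <= RADIUS; ++dy)
        for (int dx = -RADIUS; dx <= RADIUS; ++dx)
            sad += abs(l[(y + dy) * W + (x + dx)] -
                       r[(y + dy) * W + (x + dx - d)]);
    int ham = hamming32(census5x5(l, x, y), census5x5(r, x - d, y));
    return sad + LAMBDA * ham;
}

/* Winner-takes-all disparity selection over the search range. */
void stereo_match(const uint8_t *l, const uint8_t *r, uint8_t *disp)
{
    for (int y = RADIUS; y < H - RADIUS; ++y)
        for (int x = RADIUS + MAX_D; x < W - RADIUS; ++x) {
            int best_d = 0, best_c = match_cost(l, r, x, y, 0);
            for (int d = 1; d < MAX_D; ++d) {
                int c = match_cost(l, r, x, y, d);
                if (c < best_c) { best_c = c; best_d = d; }
            }
            disp[y * W + x] = (uint8_t)best_d;
        }
}

In general, the census term is robust to radiometric differences between the two cameras, while the SAD term preserves fine intensity detail, which is why fusing the two costs can improve accuracy over either cue alone; the paper's vertical census transform with cost aggregation additionally targets reduced hardware cost.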

Original language: English
Pages (from-to): 17907-17922
Number of pages: 16
Journal: Multimedia Tools and Applications
Volume: 76
Issue number: 17
DOIs
State: Published - 1 Sep 2017

Keywords

  • 3D content
  • Algorithm cross-check
  • Hardware implementation
  • Stereo matching
  • Vertical census transform
