Stereo Vision-Based Gamma-Ray Imaging for 3D Scene Data Fusion

Pathum Rathnayaka, Seung Hae Baek, Soon Yong Park

Research output: Contribution to journal › Article › peer-review


Abstract

Modern gamma-ray imagers that integrate multi-contextual sensors with advanced computer vision techniques have enabled unprecedented capabilities in the detection, imaging, reconstruction, and mapping of radioactive sources. Notwithstanding these remarkable capabilities, adding multiple sensors such as light detection and ranging (LiDAR) units, RGB-D sensors (e.g., Microsoft Kinect), and inertial measurement units (IMU) is mostly expensive. Instead of using such costly sensors, this paper introduces a modest, low-cost three-dimensional (3D) gamma-ray imaging method that exploits advances in modern stereo vision. A stereo line equation model is proposed to properly identify the distribution area of the gamma-ray intensities used for two-dimensional (2D) visualization. Scene data of the surrounding environment captured at different locations are reconstructed by re-projecting disparity images computed with the semi-global matching (SGM) algorithm and are merged using the point-to-point iterative closest point (ICP) algorithm. Instead of superimposing 2D radioisotope images on the merged scene, reconstructions of the 2D gamma images are fused with it to create a detailed 3D volume. Experimental results are presented to demonstrate the accuracy of the proposed fusion method.
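
The abstract outlines a scene-reconstruction pipeline of SGM disparity computation, re-projection of disparity to 3D, and point-to-point ICP merging of views. The sketch below illustrates one way these steps could be assembled with OpenCV and Open3D; it is not the authors' implementation, and the matcher parameters, the re-projection matrix Q, and the ICP distance threshold are illustrative assumptions.

    # Minimal sketch (not the paper's code) of the pipeline named in the abstract:
    # SGM disparity -> 3D re-projection -> point-to-point ICP merging.
    # Matcher parameters, Q, and max_dist are assumed values for illustration.
    import cv2
    import numpy as np
    import open3d as o3d

    def reconstruct_scene(left_gray, right_gray, Q):
        """Compute an SGM disparity map and re-project it to a 3D point cloud."""
        sgm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,        # must be divisible by 16; assumed value
            blockSize=5,
            P1=8 * 5 * 5,              # typical smoothness-penalty heuristic
            P2=32 * 5 * 5,
            mode=cv2.STEREO_SGBM_MODE_SGBM)
        # OpenCV returns fixed-point disparity scaled by 16
        disparity = sgm.compute(left_gray, right_gray).astype(np.float32) / 16.0
        points = cv2.reprojectImageTo3D(disparity, Q)   # H x W x 3 points
        mask = disparity > disparity.min()              # drop invalid pixels
        cloud = o3d.geometry.PointCloud()
        cloud.points = o3d.utility.Vector3dVector(points[mask].reshape(-1, 3))
        return cloud

    def merge_clouds(source, target, max_dist=0.05):
        """Align two reconstructed views with point-to-point ICP and merge them."""
        result = o3d.pipelines.registration.registration_icp(
            source, target, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return target + source.transform(result.transformation)

Under these assumptions, each stereo pair captured at a different location would yield one point cloud from reconstruct_scene, and successive clouds would be folded into a common frame by repeated calls to merge_clouds before the 2D gamma images are fused with the merged scene.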

Original language: English
Article number: 8754741
Pages (from-to): 89604-89613
Number of pages: 10
Journal: IEEE Access
Volume: 7
DOIs
State: Published - 2019

Keywords

  • 3D imaging
  • gamma-ray imaging
  • scene data fusion
  • stereo matching
  • stereo vision
