First-person activity recognition based on three-stream deep features

Ye Ji Kim, Dong Gyu Lee, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

In this paper, we present a novel three-stream deep feature fusion technique for recognizing interaction-level human activities from a first-person viewpoint. Specifically, the proposed approach distinguishes human motion from camera ego-motion in order to focus on the human's movement. Features describing the human and the camera ego-motion are extracted by the three-stream architecture and then fused by considering the relationship between the human action and the camera ego-motion. To validate the effectiveness of our approach, we perform experiments on the UTKinect-FirstPerson dataset and achieve state-of-the-art performance.
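The abstract describes extracting one feature per stream and fusing them into a single activity representation. The following is a minimal illustrative sketch of such late fusion, not the paper's actual method: the stream names, feature dimensions, and the L2-normalize-then-concatenate fusion rule are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-stream feature vectors; the paper does not specify
# the extractors or dimensions, so 128-d placeholders are used here.
appearance = rng.standard_normal(128)    # spatial (RGB) stream
human_motion = rng.standard_normal(128)  # human-motion stream
ego_motion = rng.standard_normal(128)    # camera ego-motion stream

def fuse_three_streams(f_app, f_human, f_ego):
    """Fuse three stream features by L2-normalizing each and
    concatenating them (a common late-fusion choice, assumed here)."""
    norm = lambda v: v / (np.linalg.norm(v) + 1e-8)
    return np.concatenate([norm(f_app), norm(f_human), norm(f_ego)])

fused = fuse_three_streams(appearance, human_motion, ego_motion)
print(fused.shape)  # one 384-d vector fed to the activity classifier
```

Normalizing each stream before concatenation keeps any one stream (e.g. large ego-motion during camera shake) from dominating the fused representation.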

Original language: English
Title of host publication: International Conference on Control, Automation and Systems
Publisher: IEEE Computer Society
Pages: 297-299
Number of pages: 3
ISBN (Electronic): 9788993215151
State: Published - 10 Dec 2018
Event: 18th International Conference on Control, Automation and Systems, ICCAS 2018 - PyeongChang, Korea, Republic of
Duration: 17 Oct 2018 – 20 Oct 2018

Publication series

Name: International Conference on Control, Automation and Systems
Volume: 2018-October
ISSN (Print): 1598-7833

Conference

Conference: 18th International Conference on Control, Automation and Systems, ICCAS 2018
Country/Territory: Korea, Republic of
City: PyeongChang
Period: 17/10/18 – 20/10/18

Keywords

  • First-person activity recognition
  • Human-robot interaction
  • Robot surveillance
  • Three-stream deep features
