EENet: embedding enhancement network for compositional image-text retrieval using generated text

Chan Hur, Hyeyoung Park

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we consider the compositional image-text retrieval task, which searches for appropriate target images given a query consisting of a reference image and feedback text. For instance, when a user finds a dress on an E-commerce site that meets all their needs except for the length and decoration, the user can give sentence-form feedback to the system, e.g., "I like this dress, but I wish it was a little shorter and had no ribbon." This is a practical scenario for advanced retrieval systems such as interactive search and E-commerce applications. To tackle this task, we propose a model, the Embedding Enhancement Network (EENet), which includes a text generation module and an image feature enhancement module that uses the generated text. While conventional works mainly focus on developing an efficient composition module for a given image and text query, EENet actively generates an additional textual description to enhance the image feature vector in the embedding space, inspired by the human ability to recognize an object using both visual input and prior textual knowledge. In addition, a new training loss is introduced to ensure that the image and the additionally generated text are well combined. The experimental results show that EENet achieves considerable improvements in retrieval performance; on the Recall@1 metric, it outperforms the baseline model by 3.4% on Fashion200k and 1.4% on MIT-States.
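To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the overall idea: a caption generated from the reference image enhances the image embedding, the enhanced embedding is composed with the feedback text, and a batch-based matching loss pulls the composed query toward the target image. Everything here is an illustrative assumption, not the paper's actual architecture: the encoders are stand-in linear layers, the text generation module is stubbed out (pre-extracted caption features are assumed given), and the paper's additional enhancement loss is not reproduced.

```python
# Hypothetical sketch of the EENet idea from the abstract. All module names,
# dimensions, fusion layers, and the InfoNCE-style loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EENetSketch(nn.Module):
    """Sketch of the pipeline:
    1. a captioner generates a textual description of the reference image,
    2. the generated text enhances the image embedding,
    3. the enhanced embedding is composed with the feedback text,
    4. a matching loss pulls the composed query toward the target image.
    """

    def __init__(self, dim: int = 512):
        super().__init__()
        # Stand-ins for real encoders (e.g., a CNN backbone and a text encoder).
        self.image_encoder = nn.Linear(2048, dim)  # image features -> embedding
        self.text_encoder = nn.Linear(300, dim)    # text features -> embedding
        # Enhancement: fuse the image embedding with the generated caption.
        self.enhance = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Composition of the enhanced image embedding with the feedback text.
        self.compose = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, ref_img, gen_caption, feedback, tgt_img):
        v = F.normalize(self.image_encoder(ref_img), dim=-1)
        c = F.normalize(self.text_encoder(gen_caption), dim=-1)
        t = F.normalize(self.text_encoder(feedback), dim=-1)
        # Enhanced image embedding, then composition with the feedback text.
        v_enh = self.enhance(torch.cat([v, c], dim=-1))
        q = F.normalize(self.compose(torch.cat([v_enh, t], dim=-1)), dim=-1)
        g = F.normalize(self.image_encoder(tgt_img), dim=-1)
        return q, g


def batch_matching_loss(q, g, temperature: float = 0.07):
    """Batch-based classification (InfoNCE-style) loss commonly used for this
    task: each composed query should match its own target image in the batch."""
    logits = q @ g.t() / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    model = EENetSketch()
    B = 4  # toy batch of random features
    q, g = model(torch.randn(B, 2048), torch.randn(B, 300),
                 torch.randn(B, 300), torch.randn(B, 2048))
    print(batch_matching_loss(q, g).item())
```

In this sketch the generated caption only enriches the reference-image side of the query; the batch-based loss is one common choice for compositional retrieval and stands in for the paper's combination of matching and enhancement objectives.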

Original language: English
Pages (from-to): 49689-49705
Number of pages: 17
Journal: Multimedia Tools and Applications
Volume: 83
Issue number: 16
DOIs
State: Published - May 2024

Keywords

  • Compositional Image-Text Retrieval
  • Image-Captioning
  • Joint embedding
  • Textual Feature Generation
  • Visual Feature Enhancement

