Optimizing Prompts Using In-Context Few-Shot Learning for Text-to-Image Generative Models

Seunghun Lee, Jihoon Lee, Chan Ho Bae, Myung Seok Choi, Ryong Lee, Sangtae Ahn

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Recently, various text-to-image generative models have been released, demonstrating their ability to generate high-quality synthesized images from text prompts. Despite these advancements, determining the appropriate text prompts to obtain desired images remains challenging. The quality of the synthesized images heavily depends on the user input, making it difficult to achieve consistent and satisfactory results. This limitation has sparked the need for an effective method that automatically generates optimized text prompts for text-to-image generative models. Thus, this study proposes a prompt optimization method that uses in-context few-shot learning in a pretrained language model. The proposed approach generates optimized text prompts to guide the image synthesis process by leveraging the contextual information available in a few text examples. The results revealed that images synthesized using the proposed prompt optimization method achieved 18% higher performance on average, based on an evaluation metric that measures the similarity between the generated images and the prompts used to generate them. The significance of this research lies in its potential to provide a more efficient and automated approach to obtaining high-quality synthesized images. The findings indicate that prompt optimization may offer a promising pathway for text-to-image generative models.
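To make the idea in the abstract concrete, the sketch below illustrates one way in-context few-shot prompt optimization could be set up; it is not the authors' implementation. The example pairs (FEW_SHOT_EXAMPLES), the helper functions (build_few_shot_prompt, optimize_prompt), and the stub language-model call are all illustrative assumptions: a real system would substitute an actual pretrained language model and pass the resulting prompt to a text-to-image model, then score the output with an image-text similarity metric (the paper's exact metric is defined in the full text).

```python
# A minimal sketch (not the paper's code) of in-context few-shot prompt
# optimization for a text-to-image model. The example pairs, helper names,
# and the language-model call are illustrative placeholders.

from typing import Callable, List, Tuple

# Hypothetical few-shot examples: (plain user prompt, hand-optimized prompt).
FEW_SHOT_EXAMPLES: List[Tuple[str, str]] = [
    ("a cat on a sofa",
     "a fluffy tabby cat lounging on a velvet sofa, soft window light, "
     "shallow depth of field, highly detailed"),
    ("a castle at night",
     "a gothic castle under a starry night sky, moonlit mist, "
     "dramatic lighting, cinematic wide shot"),
]


def build_few_shot_prompt(user_prompt: str) -> str:
    """Assemble the in-context examples plus the new prompt into one query
    for a pretrained language model."""
    lines = ["Rewrite each prompt so a text-to-image model produces a "
             "higher-quality image.\n"]
    for plain, optimized in FEW_SHOT_EXAMPLES:
        lines.append(f"Prompt: {plain}\nOptimized: {optimized}\n")
    lines.append(f"Prompt: {user_prompt}\nOptimized:")
    return "\n".join(lines)


def optimize_prompt(user_prompt: str,
                    language_model: Callable[[str], str]) -> str:
    """Query any pretrained language model (passed in as a function mapping
    a query string to a completion) with the few-shot context."""
    return language_model(build_few_shot_prompt(user_prompt)).strip()


if __name__ == "__main__":
    # Stand-in for a real LM call (e.g., a text-generation pipeline);
    # it returns a fixed string so the sketch runs end to end.
    dummy_lm = lambda query: "a cat on a sofa, golden hour lighting, highly detailed"
    print(optimize_prompt("a cat on a sofa", dummy_lm))
```

In such a setup, the optimized prompt would then be fed to the text-to-image model, and an image-prompt similarity score (for example, a CLIP-style metric, assumed here for illustration) would quantify whether the optimized prompt yields images closer to the intended content than the original prompt.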

Original language: English
Pages (from-to): 2660-2673
Number of pages: 14
Journal: IEEE Access
Volume: 12
DOIs
State: Published - 2024

Keywords

  • In-context few-shot learning
  • pretrained language model
  • prompt optimization
  • text-to-image generation
