
Facial Attribute Editing with Diffusion Models using Data-Efficient SVMs

Research output: Contribution to journal › Conference article › peer-review

Abstract

Facial image editing via the latent space of generative models has recently gained significant attention, particularly because it eliminates the need for model retraining. Among these methods, support vector machines (SVMs) are widely used to define semantic edit directions. However, existing methods offer no clear guidelines on how much training data the SVM needs or how to select it, making preparation time-consuming, especially for diffusion model-based approaches. In this paper, we progressively reduce the number of training images and evaluate the results in terms of image quality, identity preservation, and attribute consistency. Based on our findings, we propose a practical lower bound on the number of images required for effective SVM training, along with criteria to ensure attribute-specific editing, thus improving editing efficiency.
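The SVM-based editing approach the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: it assumes latent codes labeled by attribute presence/absence, fits a linear SVM, and uses the unit normal of the separating hyperplane as the semantic edit direction. The latent dimensionality, sample counts, and synthetic data below are all placeholders.

```python
# Hedged sketch of SVM-derived edit directions in a generative latent space.
# The latent codes here are synthetic stand-ins; in practice they would come
# from encoding/inverting real images with the diffusion model.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
dim = 512        # hypothetical latent dimensionality
n_per_class = 200  # per-class image count -- the quantity the paper studies

# Synthetic latents: the "attribute" shifts the first latent axis.
pos = rng.normal(size=(n_per_class, dim)); pos[:, 0] += 2.0
neg = rng.normal(size=(n_per_class, dim)); neg[:, 0] -= 2.0
X = np.vstack([pos, neg])
y = np.array([1] * n_per_class + [0] * n_per_class)

# Fit a linear SVM; its weight vector is normal to the decision boundary.
svm = LinearSVC(C=1.0).fit(X, y)
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])  # unit edit direction

# Editing a latent code: move it along the direction by strength alpha.
z = rng.normal(size=dim)
alpha = 3.0
z_edited = z + alpha * direction
```

Reducing `n_per_class` and re-measuring edit quality, identity preservation, and attribute consistency on the decoded images is the data-efficiency experiment the abstract describes.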

