Abstract
Facial image editing via the latent space of generative models has recently gained significant attention, particularly because it eliminates the need for model training. Among these methods, support vector machines (SVMs) are widely used to define semantic edit directions. However, existing methods offer no clear guidelines on how much training data the SVM needs or how to select it, which makes the preparation process time-consuming, especially for diffusion model-based approaches. In this paper, we progressively reduce the number of training images and evaluate the results in terms of quality, identity preservation, and attribute consistency. Based on our findings, we propose a practical lower bound on the number of images required for effective SVM training, along with criteria that ensure attribute-specific editing, thereby improving editing efficiency.
| Field | Value |
|---|---|
| Original language | English |
| Journal | Proceedings - IEEE International Conference on Advanced Video and Signal-Based Surveillance, AVSS |
| Issue number | 2025 |
| DOIs | |
| State | Published - 2025 |
| Event | 2025 IEEE International Conference on Advanced Visual and Signal-Based Systems, AVSS 2025 - Tainan, Taiwan, Province of China |
| Duration | 11 Aug 2025 → 13 Aug 2025 |
Facial Attribute Editing with Diffusion Models using Data-Efficient SVMs