TY - GEN
T1 - StyleBoost
T2 - 14th International Conference on Information and Communication Technology Convergence, ICTC 2023
AU - Park, Junseo
AU - Ko, Beomseok
AU - Jang, Hyeryung
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Recent advancements in text-to-image models, such as Stable Diffusion, have demonstrated their ability to synthesize visual images through natural language prompts. One approach to personalizing text-to-image models, exemplified by DreamBooth, fine-tunes the pre-trained model by binding a unique text identifier to a few images of a specific subject. Although existing fine-tuning methods have demonstrated competence in rendering images according to the styles of famous painters, it remains challenging to learn to produce images encapsulating distinct art styles due to the abstract and broad visual perception of stylistic attributes such as lines, shapes, textures, and colors. In this paper, we present a new fine-tuning method, called StyleBoost, that equips pre-trained text-to-image models to produce diverse images in specified styles from text prompts. By leveraging around 15 to 20 images each of StyleRef and Aux images, our approach establishes a foundational binding of a unique token identifier with a broad realm of the target style, where the Aux images are carefully selected to strengthen the binding. This dual-binding strategy grasps the essential concept of art styles and accelerates the learning of diverse and comprehensive attributes of the target style. Experimental evaluation conducted on three distinct styles - realism art, SureB art, and anime - demonstrates substantial improvements in both the quality of generated images and perceptual fidelity metrics, such as FID and CLIP scores.
AB - Recent advancements in text-to-image models, such as Stable Diffusion, have demonstrated their ability to synthesize visual images through natural language prompts. One approach to personalizing text-to-image models, exemplified by DreamBooth, fine-tunes the pre-trained model by binding a unique text identifier to a few images of a specific subject. Although existing fine-tuning methods have demonstrated competence in rendering images according to the styles of famous painters, it remains challenging to learn to produce images encapsulating distinct art styles due to the abstract and broad visual perception of stylistic attributes such as lines, shapes, textures, and colors. In this paper, we present a new fine-tuning method, called StyleBoost, that equips pre-trained text-to-image models to produce diverse images in specified styles from text prompts. By leveraging around 15 to 20 images each of StyleRef and Aux images, our approach establishes a foundational binding of a unique token identifier with a broad realm of the target style, where the Aux images are carefully selected to strengthen the binding. This dual-binding strategy grasps the essential concept of art styles and accelerates the learning of diverse and comprehensive attributes of the target style. Experimental evaluation conducted on three distinct styles - realism art, SureB art, and anime - demonstrates substantial improvements in both the quality of generated images and perceptual fidelity metrics, such as FID and CLIP scores.
KW - diffusion models
KW - fine-tuning
KW - personalization
KW - text-to-image models
UR - https://www.scopus.com/pages/publications/85184568897
U2 - 10.1109/ICTC58733.2023.10392676
DO - 10.1109/ICTC58733.2023.10392676
M3 - Conference contribution
AN - SCOPUS:85184568897
T3 - International Conference on ICT Convergence
SP - 93
EP - 98
BT - ICTC 2023 - 14th International Conference on Information and Communication Technology Convergence
PB - IEEE Computer Society
Y2 - 11 October 2023 through 13 October 2023
ER -