SPF-Portrait : Towards Pure Portrait Customization with Semantic Pollution-Free Fine-tuning

Xiaole Xian★ ♱ ♡, Zhichao Liao★ ♱ ♤, Qingyu Li, Wenyu Qin, Pengfei Wan, Weicheng Xie✉ ♡,
Long Zeng✉ ♤, Linlin Shen, Pingfa Feng
★ Co-first authors (equal contribution)  ✉ Corresponding author
♱ Work done during internship at KwaiVGI, Kuaishou Technology
♡ Shenzhen University  ♤ Tsinghua University  Kuaishou Technology

Overview of SPF-Portrait

SPF-Portrait: We introduce a training pipeline that eliminates semantic pollution when fine-tuning on human attributes.

Abstract

Fine-tuning a pre-trained Text-to-Image (T2I) model on a tailored portrait dataset is the mainstream method for text-to-portrait customization. However, existing methods often severely impact the original model’s behavior (e.g., changes in ID, layout, etc.) while customizing portrait attributes. To address this issue, we propose SPF-Portrait, a pioneering work to purely understand customized target semantics and minimize disruption to the original model. In our SPF-Portrait, we design a dual-path contrastive learning pipeline, which introduces the original model as a behavioral alignment reference for the conventional fine-tuning path. During the contrastive learning, we propose a novel Semantic-Aware Fine Control Map that indicates the intensity of response regions of the target semantics, to spatially guide the alignment process between the contrastive paths. It adaptively balances the behavioral alignment across different regions and the responsiveness of the target semantics. Furthermore, we propose a novel response enhancement mechanism to reinforce the presentation of target semantics, while mitigating representation discrepancy inherent in direct cross-modal supervision. Through the above strategies, we achieve incremental learning of customized target semantics for pure text-to-portrait customization. Extensive experiments show that SPF-Portrait achieves state-of-the-art performance.

SPF-Portrait Pipeline

Given a batch of image-text pairs, SPF-Portrait adapts the T2I model to the new concepts without semantic pollution. The fine-tuned T2I model achieves the target attributes while inheriting the original model's pretrained priors, including text understanding, layout details, and so on.
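To make the dual-path idea concrete, here is a minimal PyTorch sketch of the contrastive alignment step, under loud assumptions: `ToyDenoiser` is a hypothetical stand-in for the pre-trained T2I network, and the behavioral alignment is approximated as a pixel-wise MSE between the two paths, weighted by `(1 - control_map)` so that regions responding strongly to the target semantics may deviate while the rest is pulled toward the frozen original model. This is an illustrative sketch, not the paper's actual loss.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Hypothetical stand-in for a pre-trained T2I denoising network."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

def dual_path_alignment_loss(finetuned: nn.Module,
                             frozen_ref: nn.Module,
                             x: torch.Tensor,
                             control_map: torch.Tensor) -> torch.Tensor:
    """Pull the fine-tuned path toward the frozen original model.

    control_map (B x 1 x H x W, values in [0, 1]) plays the role of a
    Semantic-Aware Fine Control Map: high values mark regions that
    respond to the target semantics and are therefore weighted less
    in the alignment term.
    """
    with torch.no_grad():
        ref_out = frozen_ref(x)          # reference path, no gradients
    out = finetuned(x)                    # fine-tuning path
    weight = 1.0 - control_map            # align everywhere except target regions
    return ((out - ref_out) ** 2 * weight).mean()

torch.manual_seed(0)
frozen = ToyDenoiser()
for p in frozen.parameters():
    p.requires_grad_(False)               # original model stays fixed

tuned = ToyDenoiser()
tuned.load_state_dict(frozen.state_dict())  # fine-tuning starts from the same weights

x = torch.randn(2, 8, 16, 16)
cmap = torch.rand(2, 1, 16, 16)
loss = dual_path_alignment_loss(tuned, frozen, x, cmap)
print(float(loss))  # 0.0 at initialization, since both paths share weights
```

In practice this alignment term would be combined with the usual fine-tuning objective on the portrait dataset; the control map then balances target responsiveness against behavior preservation region by region.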

More Visualization Results

Qualitative comparisons with SOTA methods. We compare ours with naive fine-tuning, PEFT-based methods (LoRA, AdaLoRA), and decoupled methods (TokenCompose, Magenet). Please zoom in for more details. Our approach not only achieves the target semantics but also better preserves the behavior of the original model.

Extension to the General T2I Domain

Results of extending our method to the general Text-to-Image domain.

BibTeX

If you find this project useful in your research, please consider citing:


      @article{xian2025spf,
        title={SPF-Portrait: Towards Pure Portrait Customization with Semantic Pollution-Free Fine-tuning},
        author={Xian, Xiaole and Liao, Zhichao and Li, Qingyu and Qin, Wenyu and Wan, Pengfei and Xie, Weicheng and Zeng, Long and Shen, Linlin and Feng, Pingfa},
        journal={arXiv preprint arXiv:2504.00396},
        year={2025}
      }