diff --git a/docs/pulid_for_flux.md b/docs/pulid_for_flux.md
index 83f5e7b..1992cf9 100644
--- a/docs/pulid_for_flux.md
+++ b/docs/pulid_for_flux.md
@@ -70,7 +70,7 @@ As shown in the above image, in terms of ID fidelity, using fake CFG is similar
 
 ## Some Technical Details
 - We switch the ID encoder from an MLP structure to a Transformer structure. Interested users can refer to [source code](https://github.com/ToTheBeginning/PuLID/blob/cce7cdd65b5bf283c1a39c29f2726902a3c135ca/pulid/encoders_flux.py#L122)
 - Inspired by [Flamingo](https://arxiv.org/abs/2204.14198), we insert additional cross-attention blocks every few DIT blocks to interact ID features with DIT image features
-- We would like to clarify that the acceleration method (lile SDXL-Lightning) serves as an
+- We would like to clarify that the acceleration method (like SDXL-Lightning) serves as an
 optional acceleration trick, but it is not indispensable for training PuLID. We will update the
 arxiv paper with the relevant details in the near future. Please stay tuned.
@@ -81,4 +81,4 @@ The model is currently in beta version, and we have observed that the ID fidelit
 As long as you use FLUX.1-dev model, you should follow the [FLUX.1-dev model license](https://github.com/black-forest-labs/flux/tree/main/model_licenses)
 
 ## contact
-If you have any questions or suggestions about the model, please contact [Yanze Wu](https://tothebeginning.github.io/) or open an issue/discussion here.
\ No newline at end of file
+If you have any questions or suggestions about the model, please contact [Yanze Wu](https://tothebeginning.github.io/) or open an issue/discussion here.
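
Note (not part of the patch): the doc's "Some Technical Details" bullets describe a Flamingo-style design, where cross-attention blocks are interleaved every few DiT blocks so ID features can interact with image features. Below is a minimal PyTorch sketch of that interleaving pattern; the module names, dimensions, and the interval of 4 are illustrative assumptions, not PuLID's actual implementation (see the linked `encoders_flux.py` for the real code).

```python
import torch
import torch.nn as nn


class IDCrossAttention(nn.Module):
    """Cross-attention block: image tokens (queries) attend to ID tokens (keys/values).
    Hypothetical sketch, not taken from the PuLID codebase."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, img_tokens: torch.Tensor, id_tokens: torch.Tensor) -> torch.Tensor:
        # Residual update of the image tokens, conditioned on the ID embedding.
        out, _ = self.attn(self.norm(img_tokens), id_tokens, id_tokens)
        return img_tokens + out


class InterleavedDiT(nn.Module):
    """Toy DiT stack with an ID cross-attention block inserted every `interval` blocks,
    mimicking the Flamingo-style interleaving described in the doc."""

    def __init__(self, dim: int = 256, depth: int = 12, interval: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True) for _ in range(depth)]
        )
        # One cross-attention block per insertion point (after every `interval`-th DiT block).
        self.id_attn = nn.ModuleDict(
            {str(i): IDCrossAttention(dim) for i in range(depth) if i % interval == interval - 1}
        )

    def forward(self, img_tokens: torch.Tensor, id_tokens: torch.Tensor) -> torch.Tensor:
        for i, block in enumerate(self.blocks):
            img_tokens = block(img_tokens)
            if str(i) in self.id_attn:
                # Interact ID features with DiT image features at this insertion point.
                img_tokens = self.id_attn[str(i)](img_tokens, id_tokens)
        return img_tokens


# Usage: batch of 2, 64 image tokens, 8 ID tokens (e.g. from a Transformer ID encoder).
model = InterleavedDiT()
x = model(torch.randn(2, 64, 256), torch.randn(2, 8, 256))
print(x.shape)  # torch.Size([2, 64, 256])
```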