A comprehensive set of ComfyUI nodes for using Large Language Models (LLMs) as text encoders for SDXL image generation through a trained adapter.

Trained adapter for using Gemma-3-1b as the text encoder for Rouwei v0.8 (vpred, epsilon, or base).

Requirements:
- Python 3.8+
- ComfyUI
- Latest transformers library (tested on 4.53.1)
```bash
pip install "transformers>=4.53.1" safetensors einops torch
```
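Before moving on, you can optionally confirm the environment. The short check below is a generic sketch (not part of the node pack) that prints the installed versions of the packages from the pip command above:

```python
# Generic sanity check: print installed versions of the packages from the pip command above.
# transformers should report a version >= 4.53.1.
from importlib.metadata import version

for pkg in ("transformers", "safetensors", "einops", "torch"):
    print(pkg, version(pkg))
```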
- Clone the repository to `ComfyUI/custom_nodes/`:

  ```bash
  cd ComfyUI/custom_nodes/
  git clone https://github.com/NeuroSenko/ComfyUI_LLM_SDXL_Adapter.git
  ```

- Restart ComfyUI
- Download the adapter:
  - Download from CivitAI or HuggingFace
  - Place the adapter file in `ComfyUI/models/llm_adapters/`
- Download Gemma-3-1b-it model:
  - Download gemma-3-1b-it (non-gated mirror); a scripted alternative is sketched after this list
  - Place in `ComfyUI/models/llm/gemma-3-1b-it/`
  - Note: you need ALL files from the original model for proper functionality (not just the .safetensors)
- Download Rouwei checkpoint:
  - Get Rouwei v0.8 (vpred, epsilon, or base) if you don't have it
  - Place it in your regular ComfyUI checkpoints folder
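If you prefer a scripted download for the Gemma model, `snapshot_download` from huggingface_hub (installed alongside transformers) pulls every file of a repository in one call. This is only a sketch: the repo id below is the official, gated `google/gemma-3-1b-it`, and `local_dir` assumes the script runs from the directory that contains `ComfyUI/`; substitute the non-gated mirror's repo id and your own paths as needed.

```python
# Sketch: fetch ALL files of the Gemma model into the folder the nodes expect.
# Assumption: repo_id is the official (gated) repository; swap in the mirror's id if you use it.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="google/gemma-3-1b-it",
    local_dir="ComfyUI/models/llm/gemma-3-1b-it",
)
```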
The resulting directory layout should look like this:

```
ComfyUI/models/
├── llm/gemma-3-1b-it/
│   ├── added_tokens.json
│   ├── config.json
│   ├── generation_config.json
│   ├── model.safetensors
│   ├── special_tokens_map.json
│   ├── tokenizer.json
│   ├── tokenizer.model
│   └── tokenizer_config.json
├── llm_adapters/
│   └── rouweiGemma_g31b27k.safetensors
└── checkpoints/
    └── rouwei_v0.8_vpred.safetensors
```
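To verify the layout above, a short script (purely illustrative, not part of the node pack) can check for the key Gemma files and open the adapter with safetensors. It assumes it is run from the directory that contains `ComfyUI/`; adjust the paths otherwise.

```python
# Illustrative check only: confirm the expected files exist and the adapter file opens.
from pathlib import Path
from safetensors import safe_open

models = Path("ComfyUI/models")  # adjust if your ComfyUI lives elsewhere

gemma_dir = models / "llm" / "gemma-3-1b-it"
required = ["config.json", "tokenizer.json", "tokenizer_config.json", "model.safetensors"]
missing = [name for name in required if not (gemma_dir / name).exists()]
print("Missing Gemma files:", missing or "none")

adapter_path = models / "llm_adapters" / "rouweiGemma_g31b27k.safetensors"
with safe_open(str(adapter_path), framework="pt") as f:
    print("Adapter tensor count:", len(f.keys()))
```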
To enable detailed logging, edit `__init__.py`:

```python
# Change from:
logger.setLevel(logging.WARN)

# To:
logger.setLevel(logging.INFO)
```
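If you would rather not edit the file, the level can also be raised at runtime. This is only a sketch and the logger name is an assumption; check which name `__init__.py` passes to `logging.getLogger()` and substitute it.

```python
# Sketch: raise verbosity without editing __init__.py.
# The logger name here is an assumption; use the name found in the node pack's __init__.py.
import logging

logging.getLogger("ComfyUI_LLM_SDXL_Adapter").setLevel(logging.INFO)
```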