Expected Behavior

for it to make images

Actual Behavior

it doesn't make images

Steps to Reproduce

have a 5080 video card

Debug Logs
# ComfyUI Error Report ## Error Details - **Node ID:** 7 - **Node Type:** CLIPTextEncode - **Exception Type:** RuntimeError - **Exception Message:** CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ## Stack Trace File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 327, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 202, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 174, in _map_node_over_list process_inputs(input_dict, i) File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 163, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 69, in encode return (clip.encode_from_tokens_scheduled(tokens), ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 149, in encode_from_tokens_scheduled pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 211, in encode_from_tokens o = self.cond_stage_model.encode_token_weights(tokens) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 640, in encode_token_weights out = getattr(self, self.clip).encode_token_weights(token_weight_pairs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights o = self.encode(to_encode) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 252, in encode return self(tokens) ^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 224, in forward outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 137, in forward x = self.text_model(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 101, in forward x = self.embeddings(input_tokens, dtype=dtype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 82, in forward return self.token_embedding(input_tokens, out_dtype=dtype) + comfy.ops.cast_to(self.position_embedding.weight, dtype=dtype, device=input_tokens.device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\ops.py", line 203, in forward return self.forward_comfy_cast_weights(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\ops.py", line 199, in forward_comfy_cast_weights return torch.nn.functional.embedding(input, weight, self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq, self.sparse).to(dtype=output_dtype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\functional.py", line 2551, in embedding return torch.embedding(weight, 
input, padding_idx, scale_grad_by_freq, sparse) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ## System Information - **ComfyUI Version:** 0.3.14 - **Arguments:** C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\main.py --user-directory C:\Users\WiseM\Documents\ComfyUI\user --input-directory C:\Users\WiseM\Documents\ComfyUI\input --output-directory C:\Users\WiseM\Documents\ComfyUI\output --front-end-root C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app --base-directory C:\Users\WiseM\Documents\ComfyUI --extra-model-paths-config C:\Users\WiseM\AppData\Roaming\ComfyUI\extra_models_config.yaml --listen 127.0.0.1 --port 8000 - **OS:** nt - **Python Version:** 3.12.8 (main, Jan 14 2025, 22:49:36) [MSC v.1942 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.6.0+cu126 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 5080 : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 17094475776 - **VRAM Free:** 15329295868 - **Torch VRAM Total:** 268435456 - **Torch VRAM Free:** 21134844 ## Logs 2025-02-15T18:39:35.719190 - Adding extra search path custom_nodes C:\Users\WiseM\Documents\ComfyUI\custom_nodes/ 2025-02-15T18:39:35.719190 - Adding extra search path download_model_base C:\Users\WiseM\Documents\ComfyUI\models 2025-02-15T18:39:35.719190 - Adding extra search path custom_nodes C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes 2025-02-15T18:39:35.719190 - Setting output directory to: C:\Users\WiseM\Documents\ComfyUI\output 2025-02-15T18:39:35.719190 - Setting input directory to: C:\Users\WiseM\Documents\ComfyUI\input 2025-02-15T18:39:35.719190 - Setting user directory to: C:\Users\WiseM\Documents\ComfyUI\user 2025-02-15T18:39:35.881850 - [START] Security scan2025-02-15T18:39:35.881850 - 2025-02-15T18:39:36.468114 - [DONE] Security scan2025-02-15T18:39:36.468114 - 2025-02-15T18:39:36.548135 - ## ComfyUI-Manager: installing dependencies done.2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - ** ComfyUI startup time:2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - 2025-02-15 18:39:36.5482025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - ** Platform:2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - Windows2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - ** Python version:2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - 3.12.8 (main, Jan 14 2025, 22:49:36) [MSC v.1942 64 bit (AMD64)]2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - ** Python executable:2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - C:\Users\WiseM\Documents\ComfyUI\.venv\Scripts\python.exe2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - ** ComfyUI Path:2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - ** ComfyUI Base Folder Path:2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - ** User directory:2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - C:\Users\WiseM\Documents\ComfyUI\user2025-02-15T18:39:36.548135 - 2025-02-15T18:39:36.548135 - ** ComfyUI-Manager config path:2025-02-15T18:39:36.549134 - 2025-02-15T18:39:36.549134 - 
C:\Users\WiseM\Documents\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-02-15T18:39:36.549134 - 2025-02-15T18:39:36.549134 - ** Log path:2025-02-15T18:39:36.549134 - 2025-02-15T18:39:36.549134 - C:\Users\WiseM\Documents\ComfyUI\user\comfyui.log2025-02-15T18:39:36.549134 - 2025-02-15T18:39:37.152211 - Prestartup times for custom nodes: 2025-02-15T18:39:37.152211 - 1.4 seconds: C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager 2025-02-15T18:39:37.152211 - 2025-02-15T18:39:38.306145 - Checkpoint files will always be loaded safely. 2025-02-15T18:39:38.334127 - C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\cuda\__init__.py:235: UserWarning: NVIDIA GeForce RTX 5080 with CUDA capability sm_120 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90. If you want to use the NVIDIA GeForce RTX 5080 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/ warnings.warn( 2025-02-15T18:39:38.433370 - Total VRAM 16303 MB, total RAM 65298 MB 2025-02-15T18:39:38.433370 - pytorch version: 2.6.0+cu126 2025-02-15T18:39:38.434500 - Set vram state to: NORMAL_VRAM 2025-02-15T18:39:38.434500 - Device: cuda:0 NVIDIA GeForce RTX 5080 : cudaMallocAsync 2025-02-15T18:39:39.299653 - Using pytorch attention 2025-02-15T18:39:40.513983 - ComfyUI version: 0.3.14 2025-02-15T18:39:40.526065 - [Prompt Server] web root: C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app 2025-02-15T18:39:40.845922 - ### Loading: ComfyUI-Manager (V3.17.7) 2025-02-15T18:39:40.846928 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository) 2025-02-15T18:39:40.849405 - Import times for custom nodes: 2025-02-15T18:39:40.850907 - 0.0 seconds: C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\websocket_image_save.py 2025-02-15T18:39:40.850907 - 0.0 seconds: C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager 2025-02-15T18:39:40.850907 - 2025-02-15T18:39:40.856977 - Starting server 2025-02-15T18:39:40.857973 - To see the GUI go to: http://127.0.0.1:8000 2025-02-15T18:39:40.967159 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json 2025-02-15T18:39:41.019700 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json 2025-02-15T18:39:41.034851 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json 2025-02-15T18:39:41.050992 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json 2025-02-15T18:39:41.068930 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json 2025-02-15T18:39:42.296186 - FETCH DATA from: C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json2025-02-15T18:39:42.296186 - 2025-02-15T18:39:42.300272 - [DONE]2025-02-15T18:39:42.300272 - 2025-02-15T18:39:44.174412 - FETCH ComfyRegistry Data: 5/332025-02-15T18:39:44.174412 - 2025-02-15T18:39:44.258712 - got prompt 2025-02-15T18:39:44.381880 - model 
weight dtype torch.float16, manual cast: None 2025-02-15T18:39:44.383321 - model_type EPS 2025-02-15T18:39:44.612453 - Using pytorch attention in VAE 2025-02-15T18:39:44.612453 - Using pytorch attention in VAE 2025-02-15T18:39:44.675901 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16 2025-02-15T18:39:44.719085 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16 2025-02-15T18:39:44.999523 - Requested to load SD1ClipModel 2025-02-15T18:39:45.039424 - loaded completely 13447.8 235.84423828125 True 2025-02-15T18:39:45.055183 - !!! Exception during processing !!! CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 2025-02-15T18:39:45.056179 - Traceback (most recent call last): File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 327, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 202, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 174, in _map_node_over_list process_inputs(input_dict, i) File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 163, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 69, in encode return (clip.encode_from_tokens_scheduled(tokens), ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 149, in encode_from_tokens_scheduled pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 211, in encode_from_tokens o = self.cond_stage_model.encode_token_weights(tokens) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 640, in encode_token_weights out = getattr(self, self.clip).encode_token_weights(token_weight_pairs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights o = self.encode(to_encode) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", 
line 252, in encode return self(tokens) ^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 224, in forward outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 137, in forward x = self.text_model(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 101, in forward x = self.embeddings(input_tokens, dtype=dtype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 82, in forward return self.token_embedding(input_tokens, out_dtype=dtype) + comfy.ops.cast_to(self.position_embedding.weight, dtype=dtype, device=input_tokens.device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\ops.py", line 203, in forward return self.forward_comfy_cast_weights(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\ops.py", line 199, in 
forward_comfy_cast_weights return torch.nn.functional.embedding(input, weight, self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq, self.sparse).to(dtype=output_dtype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\functional.py", line 2551, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 2025-02-15T18:39:45.058864 - Prompt executed in 0.80 seconds 2025-02-15T18:39:47.734162 - FETCH ComfyRegistry Data: 10/332025-02-15T18:39:47.734162 - 2025-02-15T18:39:51.644429 - FETCH ComfyRegistry Data: 15/332025-02-15T18:39:51.644429 - 2025-02-15T18:39:55.199168 - FETCH ComfyRegistry Data: 20/332025-02-15T18:39:55.199168 - 2025-02-15T18:39:59.169441 - FETCH ComfyRegistry Data: 25/332025-02-15T18:39:59.169441 - 2025-02-15T18:40:02.757731 - FETCH ComfyRegistry Data: 30/332025-02-15T18:40:02.757731 - 2025-02-15T18:40:05.379335 - FETCH ComfyRegistry Data [DONE]2025-02-15T18:40:05.379335 - 2025-02-15T18:40:05.403113 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes 2025-02-15T18:40:05.432550 - nightly_channel: 2025-02-15T18:40:05.432550 - 2025-02-15T18:40:05.432550 - https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote2025-02-15T18:40:05.432550 - 2025-02-15T18:40:05.432550 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-02-15T18:40:05.432550 - 2025-02-15T18:40:05.623385 - [DONE]2025-02-15T18:40:05.623385 - 2025-02-15T18:40:05.641415 - [ComfyUI-Manager] All startup tasks have been completed. 2025-02-15T18:40:48.083118 - got prompt 2025-02-15T18:40:48.086100 - !!! Exception during processing !!! CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 
2025-02-15T18:40:48.087343 - Traceback (most recent call last): File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 327, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 202, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 174, in _map_node_over_list process_inputs(input_dict, i) File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 163, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 69, in encode return (clip.encode_from_tokens_scheduled(tokens), ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 149, in encode_from_tokens_scheduled pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 211, in encode_from_tokens o = self.cond_stage_model.encode_token_weights(tokens) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 640, in encode_token_weights out = getattr(self, self.clip).encode_token_weights(token_weight_pairs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights o = self.encode(to_encode) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 252, in encode return self(tokens) ^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd1_clip.py", line 224, in forward outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 137, in forward x = self.text_model(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 101, in forward x = self.embeddings(input_tokens, dtype=dtype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\clip_model.py", line 82, in forward return self.token_embedding(input_tokens, out_dtype=dtype) + comfy.ops.cast_to(self.position_embedding.weight, dtype=dtype, device=input_tokens.device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\ops.py", line 203, in forward return self.forward_comfy_cast_weights(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\ops.py", line 199, in forward_comfy_cast_weights return torch.nn.functional.embedding(input, weight, self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq, self.sparse).to(dtype=output_dtype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\WiseM\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\functional.py", line 2551, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 
2025-02-15T18:40:48.088424 - Prompt executed in 0.00 seconds ## Attached Workflow Please make sure that workflow does not contain any sensitive information such as API keys or passwords. {"last_node_id":9,"last_link_id":9,"nodes":[{"id":7,"type":"CLIPTextEncode","pos":[413,389],"size":[425.27801513671875,180.6060791015625],"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":5}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["text, watermark"]},{"id":6,"type":"CLIPTextEncode","pos":[415,186],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":2,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":3}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"]},{"id":5,"type":"EmptyLatentImage","pos":[473,609],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[2],"slot_index":0}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[512,512,1]},{"id":3,"type":"KSampler","pos":[863,186],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":1},{"name":"positive","type":"CONDITIONING","link":4},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":2}],"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[495080922788724,"randomize",20,8,"euler","normal",1]},{"id":8,"type":"VAEDecode","pos":[1209,188],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":7},{"name":"vae","type":"VAE","link":8}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":9,"type":"SaveImage","pos":[1451,189],"size":[210,58],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":4,"type":"CheckpointLoaderSimple","pos":[26,474],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[1],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[3,5],"slot_index":1},{"name":"VAE","type":"VAE","links":[8],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["v1-5-pruned-emaonly-fp16.safetensors"]}],"links":[[1,4,0,3,0,"MODEL"],[2,5,0,3,3,"LATENT"],[3,4,1,6,0,"CLIP"],[4,6,0,3,1,"CONDITIONING"],[5,4,1,7,0,"CLIP"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[8,4,2,8,1,"VAE"],[9,8,0,9,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":[0.66650390625,-0.666656494140625]},"node_versions":{"comfy-core":"0.3.14"}},"version":0.4} ## Additional Context (Please add any additional context or steps to reproduce the error here)
No response
"pytorch version: 2.6.0+cu126"
Maybe you need to try this first: #6643
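For context: the startup log already flags the root cause. PyTorch 2.6.0+cu126 was built for compute capabilities sm_50 through sm_90, while the RTX 5080 reports sm_120 (Blackwell), so that build simply contains no CUDA kernel image for the card, which is exactly the RuntimeError ComfyUI hits in CLIPTextEncode. A small check like the sketch below makes the mismatch visible (it assumes only that `torch` can be imported inside the same `.venv` ComfyUI uses, and it ignores PTX forward-compatibility):

```python
# Minimal diagnostic sketch: does this PyTorch build ship CUDA kernels
# for the installed GPU? Run it inside the same .venv that ComfyUI uses.
import torch

print("torch version :", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    device_arch = f"sm_{major}{minor}"      # RTX 5080 reports sm_120
    built_for = torch.cuda.get_arch_list()  # e.g. ['sm_50', ..., 'sm_90']
    print("GPU             :", torch.cuda.get_device_name(0))
    print("GPU architecture:", device_arch)
    print("Build supports  :", built_for)
    if device_arch not in built_for:
        print("Mismatch: this PyTorch build has no kernels for this GPU, "
              "which matches the 'no kernel image is available' error.")
```

If the GPU's sm_120 is missing from the build's architecture list, the remedy is a PyTorch build compiled with Blackwell support; at the time of this report that meant a CUDA 12.8 nightly wheel rather than 2.6.0+cu126, which is presumably what #6643 walks through.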
I have the same problem; I just bought a 5080 as an upgrade from my 3060.