No response when using cloud-provided LLMs... #1491
Comments
@arhanjain11 Can you please provide logs and a screenshot of bolt, as well as the dev-console and terminal logs? There must be a problem/error.
@leex279 It loads forever. I even double-checked the API key being used, both from .env.example (renamed to .env) and from the UI. Even when I use a model like qwen qwq 32b from OpenRouter, it gets stuck right at the start with a long error message. Note: I have not tested all the models and providers yet, but most of the ones I have tested simply do not work. A few, like qwen qwq 32b and qwen qwq 32b (free), work halfway and then get stuck most of the time. I managed to get qwen qwq 32b to work once. One way to isolate the problem is to call the provider directly with the same key and model, as in the sketch below.
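The following is a minimal sketch for testing an OpenRouter key outside of bolt.diy, using OpenRouter's OpenAI-compatible chat-completions endpoint. It assumes Node 18+ (global fetch), that the key is exported in the shell as `OPEN_ROUTER_API_KEY`, and that `qwen/qwq-32b` is the model id; both names are assumptions, so substitute the values from your own setup and OpenRouter dashboard.

```ts
// check-openrouter.ts — sketch: test an OpenRouter key and model outside bolt.diy.
// Run with e.g. `npx tsx check-openrouter.ts`.

const apiKey = process.env.OPEN_ROUTER_API_KEY; // assumed variable name
if (!apiKey) {
  throw new Error("OPEN_ROUTER_API_KEY is not set in this shell");
}

const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "qwen/qwq-32b", // assumed model id; check your OpenRouter model list
    messages: [{ role: "user", content: "Reply with the single word: ok" }],
  }),
});

// A non-2xx status here points at the key or model, not at bolt.diy.
console.log(response.status, await response.text());
```

If this call succeeds while bolt.diy still hangs, the key and model are fine and the problem is somewhere in bolt.diy's request path.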
Describe the bug
When using a cloud-provided LLM like DeepSeek, Anthropic, OpenRouter, etc., the API key just doesn't seem to work. Many providers fail to fetch. The OpenRouter API key does fetch, but even then there is no response from bolt.diy and it keeps loading forever. I have set the API key from both the UI and the .env file. Locally installed LLMs work fine, though.
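Since the key was set both in the renamed .env file and in the UI, one quick sanity check is to confirm that the process actually sees the variable from .env. This is only a sketch: it assumes the `dotenv` package for loading the file standalone and the variable name `OPEN_ROUTER_API_KEY` as it appears in bolt.diy's .env.example; verify the exact name for your provider in your copy of that file.

```ts
// env-check.ts — sketch: confirm the variable from the renamed .env is loadable.
import "dotenv/config"; // loads .env from the current working directory

const key = process.env.OPEN_ROUTER_API_KEY; // adjust to the provider under test
console.log(
  key
    ? `key loaded (${key.length} chars)`
    : "key missing — .env not loaded or variable name mismatch",
);
```

If the key loads here but bolt.diy still reports fetch failures, the .env file itself is not the culprit.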
Steps to reproduce
Expected behavior
It should respond when using a cloud provider, but instead it gets stuck.
Platform
Provider Used
Just any provider
Model Used
Just any model