Here's the error in terminal (not browser console):
INFO stream-text Sending llm call to HuggingFace with model Qwen/Qwen2.5-Coder-32B-Instruct
ERROR api.chat [object Object]
DEBUG api.chat usage {"promptTokens":null,"completionTokens":null,"totalTokens":null}
I read here that others could use Qwen2.5-Coder-32B-Instruct; would you please share how you do it?
Thanks!
@leex279 this was not a question. "ERROR api.chat [object Object]" is not helpful as an error 😸
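The "[object Object]" output usually comes from interpolating an Error or response object directly into a log string. A minimal sketch of how a logger could serialize unknown error values before printing them; "describeError" is a hypothetical helper for illustration, not bolt.diy's actual logging code:

```javascript
// Sketch: turn an unknown thrown value into a readable string,
// so logs show details instead of "[object Object]".
function describeError(err) {
  if (err instanceof Error) {
    // Errors stringify poorly via JSON.stringify; use name + message.
    return `${err.name}: ${err.message}`;
  }
  if (typeof err === 'object' && err !== null) {
    try {
      return JSON.stringify(err);
    } catch {
      // Circular structures etc. fall back to default stringification.
      return String(err);
    }
  }
  return String(err);
}

// Example: an API error object now logs its actual contents.
console.log(describeError({ status: 422, message: 'Input validation error' }));
console.log(describeError(new Error('request failed')));
```

With something like this in place, the terminal would show the provider's validation message (as the browser eventually did) instead of "[object Object]".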
My API key has full write access, so that's not the issue.
Retrying, I get another message in the browser:
There was an error processing your request: Custom error: Input validation error: inputs tokens + max_new_tokens must be <= 16000. Given: 146038 inputs tokens and 8000 max_new_tokens
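This validation error means the HuggingFace endpoint enforces a 16000-token context window: the 146038 input tokens plus the requested 8000 max_new_tokens far exceed it, so the chat history has to be trimmed before the call. A minimal sketch of the budgeting check, assuming a 16000-token window; "fitBudget" and the hard-coded limit are illustrative assumptions, not bolt.diy or HuggingFace APIs:

```javascript
// Sketch: keep inputTokens + maxNewTokens within the model's context
// window (16000 for this HF endpoint, per the error message).
const CONTEXT_WINDOW = 16000;

function fitBudget(inputTokens, requestedMaxNewTokens) {
  // Cap the completion budget to whatever room the prompt leaves.
  const maxNewTokens = Math.min(
    requestedMaxNewTokens,
    CONTEXT_WINDOW - inputTokens,
  );
  if (maxNewTokens <= 0) {
    // The prompt alone exceeds the window: the caller must drop or
    // summarize older messages before sending the request.
    return { ok: false, maxNewTokens: 0 };
  }
  return { ok: true, maxNewTokens };
}

console.log(fitBudget(146038, 8000)); // prompt alone too large: must trim first
console.log(fitBudget(7000, 8000));   // fits unchanged
console.log(fitBudget(9000, 8000));   // completion budget capped to 7000
```

In the failing case above no cap can help, which suggests the real fix is reducing the conversation/context size sent to the model, not just lowering max_new_tokens.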
Describe the bug
Here's the error in terminal (not browser console):
I read here that others could use Qwen2.5-Coder-32B-Instruct; would you please share how you do it? Thanks!
Link to the Bolt URL that caused the error
http://localhost:5173/chat/1?rewindTo=d3cy3j0zj9b
Steps to reproduce
Expected behavior
code 😄
Screen Recording / Screenshot
No response
Platform
Provider Used
HF
Model Used
Qwen2.5-Coder-32B-Instruct
Additional context
Running bolt.diy with
npm run dev