AuthenticationError: litellm.AuthenticationError: AuthenticationError: Lm_studioException - The api_key client option must be set either by passing api_key to the client or by setting the LM_STUDIO_API_KEY environment variable
```
  File "/tmp/windmill/wk-default-ba88b0a5a6db-rXH7K/01946fe6-19c2-4f33-a97c-2d076050b3fe/f/default/chat_completion.py", line 40, in main
    response = completion(**completion_kw)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/utils.py", line 1030, in wrapper
    raise e
  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/utils.py", line 906, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/main.py", line 2967, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/litellm_core_utils/exception_mapping_utils.py", line 2189, in exception_type
    raise e
  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/litellm_core_utils/exception_mapping_utils.py", line 355, in exception_type
    raise AuthenticationError(
```
```json
{
  "body": null,
  "code": null,
  "type": null,
  "model": "typhoon2-quen2.5-7b-instruct",
  "param": null,
  "message": "litellm.AuthenticationError: AuthenticationError: Lm_studioException - The api_key client option must be set either by passing api_key to the client or by setting the LM_STUDIO_API_KEY environment variable",
  "request": "<Request('POST', 'https://api.openai.com/v1')>",
  "response": "<Response [500 Internal Server Error]>",
  "request_id": null,
  "max_retries": null,
  "num_retries": null,
  "status_code": 500,
  "llm_provider": "lm_studio",
  "litellm_debug_info": "\nModel: typhoon2-quen2.5-7b-instruct\nAPI Base: `http://host.docker.internal:1234/v1`\nMessages: `[{'content': 'สวัสดี', 'role': 'user'}]`",
  "litellm_response_headers": null
}
```
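A common workaround until the fix lands is to supply a placeholder key, since a local LM Studio server does not normally validate it. Below is a minimal sketch, assuming `litellm.completion` accepts `api_key` and `api_base` keyword arguments for the `lm_studio/` provider (it does in recent versions); the `build_completion_kwargs` helper is hypothetical, not part of LiteLLM:

```python
import os

def build_completion_kwargs(model, messages, api_base, api_key=None):
    """Assemble kwargs for litellm.completion, supplying a placeholder key
    when none is configured so the api_key client-option check passes.
    (Assumption: LM Studio ignores the key's actual value.)"""
    kwargs = {
        "model": model,        # e.g. "lm_studio/typhoon2-quen2.5-7b-instruct"
        "messages": messages,
        "api_base": api_base,  # e.g. "http://host.docker.internal:1234/v1"
    }
    # Prefer an explicit key, then the env var LiteLLM looks for,
    # and finally a harmless placeholder.
    kwargs["api_key"] = (
        api_key or os.environ.get("LM_STUDIO_API_KEY") or "lm-studio"
    )
    return kwargs

kw = build_completion_kwargs(
    "lm_studio/typhoon2-quen2.5-7b-instruct",
    [{"role": "user", "content": "สวัสดี"}],  # "Hello" (Thai), as in the report
    "http://host.docker.internal:1234/v1",
)
# response = litellm.completion(**kw)  # requires a running LM Studio server
```

Equivalently, exporting `LM_STUDIO_API_KEY=lm-studio` in the environment before the script runs should satisfy the same check without code changes.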
Are you a ML Ops Team?
No
What LiteLLM version are you on ?
1.58.2
Twitter / LinkedIn details
No response
* fix(lm_studio/chat/transformation.py): Fix #7811
* fix(router.py): fix mock timeout check
* fix: drop model name from fallback args since it causes a conflict with the model=model that is provided later on. (#7806)
This error occurs when you pass multiple fallback models to the completion function, each with its own model name defined.
* fix(router.py): remove mock_timeout before sending to request
prevents reuse in fallbacks
* test: update test
* test: revert test change - wrong pr
---------
Co-authored-by: Dudu Lasry <[email protected]>
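The fallback conflict described in the commit above can be illustrated with a minimal sketch. This is not LiteLLM's actual router code; the `completion` stub below stands in for any function that receives both an explicit `model=` argument and fallback kwargs that still contain a `"model"` key:

```python
# Stand-in for a completion-style call; any keyword-accepting function
# exhibits the same conflict.
def completion(model, messages, **kwargs):
    return {"model": model, **kwargs}

fallback = {"model": "fallback-model", "temperature": 0.2}

# Passing model= explicitly while the fallback dict also carries "model"
# raises: TypeError: completion() got multiple values for argument 'model'
try:
    completion(model="fallback-model", messages=[], **fallback)
    conflicted = False
except TypeError:
    conflicted = True

# The fix: drop "model" from the fallback kwargs before re-invoking,
# since model= is supplied explicitly on the retry.
safe_kwargs = {k: v for k, v in fallback.items() if k != "model"}
result = completion(model="fallback-model", messages=[], **safe_kwargs)
```

Dropping the duplicate key, rather than dropping the explicit argument, keeps the retry path uniform: the router always decides which model to call, and the fallback entry only contributes its remaining parameters.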