
[Bug]: lm_studio requires non-empty api_key even if there is no auth #7811

Closed · Lodimup opened this issue Jan 16, 2025 · 0 comments · Fixed by #7826
Labels: bug (Something isn't working)

Lodimup commented Jan 16, 2025

What happened?

from litellm import completion

model = "lm_studio/typhoon2-quen2.5-7b-instruct"
api_base = "http://host.docker.internal:1234/v1"
...
    completion_kw = {
        "model": model,
        "messages": messages,
        "api_base": api_base,
        "max_tokens": max_tokens,
    }
    if api_base == "http://host.docker.internal:1234/v1":
        # dummy key: LM Studio has no auth, but a non-empty api_key is still required
        completion_kw["api_key"] = "lm_studio_needs_one"
    response = completion(**completion_kw)

works fine

    completion_kw = {
        "model": model,
        "messages": messages,
        "api_base": api_base,
        "max_tokens": max_tokens,
    }
    if api_base == "http://host.docker.internal:1234/v1":
        completion_kw["api_key"] = ""
    response = completion(**completion_kw)

fails

    completion_kw = {
        "model": model,
        "messages": messages,
        "api_base": api_base,
        "max_tokens": max_tokens,
    }
    response = completion(**completion_kw)

fails
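
Until this is fixed, the only way I can get a local LM Studio server to work is to pass a throwaway key, as in the first snippet. Based on the error message below, setting the LM_STUDIO_API_KEY environment variable to any non-empty placeholder should behave the same way (a minimal sketch; the placeholder value is arbitrary and LM Studio never checks it):

    import os

    from litellm import completion

    # Assumption: any non-empty placeholder satisfies litellm's client-side check,
    # since a local LM Studio server does not validate API keys at all.
    os.environ["LM_STUDIO_API_KEY"] = "placeholder"

    response = completion(
        model="lm_studio/typhoon2-quen2.5-7b-instruct",
        messages=[{"content": "สวัสดี", "role": "user"}],
        api_base="http://host.docker.internal:1234/v1",
        max_tokens=512,  # arbitrary value for this sketch
    )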

Relevant log output

AuthenticationError: litellm.AuthenticationError: AuthenticationError: Lm_studioException - The api_key client option must be set either by passing api_key to the client or by setting the LM_STUDIO_API_KEY environment variable
  File "/tmp/windmill/wk-default-ba88b0a5a6db-rXH7K/01946fe6-19c2-4f33-a97c-2d076050b3fe/f/default/chat_completion.py", line 40, in main
    response = completion(**completion_kw)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/utils.py", line 1030, in wrapper
    raise e

  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/utils.py", line 906, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/main.py", line 2967, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^

  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/litellm_core_utils/exception_mapping_utils.py", line 2189, in exception_type
    raise e

  File "/tmp/windmill/cache/python_311/litellm==1.58.2/litellm/litellm_core_utils/exception_mapping_utils.py", line 355, in exception_type
    raise AuthenticationError(
{
    "body": null,
    "code": null,
    "type": null,
    "model": "typhoon2-quen2.5-7b-instruct",
    "param": null,
    "message": "litellm.AuthenticationError: AuthenticationError: Lm_studioException - The api_key client option must be set either by passing api_key to the client or by setting the LM_STUDIO_API_KEY environment variable",
    "request": "<Request('POST', 'https://api.openai.com/v1')>",
    "response": "<Response [500 Internal Server Error]>",
    "request_id": null,
    "max_retries": null,
    "num_retries": null,
    "status_code": 500,
    "llm_provider": "lm_studio",
    "litellm_debug_info": "\nModel: typhoon2-quen2.5-7b-instruct\nAPI Base: `http://host.docker.internal:1234/v1`\nMessages: `[{'content': 'สวัสดี', 'role': 'user'}]`",
    "litellm_response_headers": null
}

Are you an ML Ops Team?

No

What LiteLLM version are you on?

1.58.2

Twitter / LinkedIn details

No response

Lodimup added the bug (Something isn't working) label on Jan 16, 2025
krrishdholakia added a commit that referenced this issue on Jan 18, 2025
* fix(lm_studio/chat/transformation.py): Fix #7811

* fix(router.py): fix mock timeout check

* fix: drop model name from fallback args since it causes a conflict with the model=model that is provided later on. (#7806)

This error happens if you provide multiple fallback models to the completion function with model name defined in each one.

* fix(router.py): remove mock_timeout before sending to request

prevents reuse in fallbacks

* test: update test

* test: revert test change - wrong pr

---------

Co-authored-by: Dudu Lasry <[email protected]>
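
For context on the fallback fix mentioned in the commit above: the conflict it describes arises when completion() is given fallbacks as per-model dicts that each repeat a "model" key. A rough illustration of that call shape, assuming the fallbacks kwarg accepts such dicts as the commit text implies (the fallback model names here are placeholders):

    from litellm import completion

    # Hypothetical call shape for the conflict described in the commit message:
    # each fallback dict carries its own "model" key, which then clashes with the
    # model=... argument that litellm supplies again when it retries on a fallback.
    response = completion(
        model="lm_studio/typhoon2-quen2.5-7b-instruct",
        messages=[{"content": "สวัสดี", "role": "user"}],
        fallbacks=[
            {"model": "gpt-3.5-turbo"},            # placeholder fallback model
            {"model": "claude-3-haiku-20240307"},  # placeholder fallback model
        ],
    )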