[Bug]: Cannot see the http request that was sent #7802

Open
mrm1001 opened this issue Jan 16, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@mrm1001

mrm1001 commented Jan 16, 2025

What happened?

This is related to: #489.

I'm trying to see the raw request that was sent to the LLM endpoint. When I set litellm.set_verbose=True, I only get to see the raw response. I've also tried setting os.environ['LITELLM_LOG'] = 'DEBUG'.

My code:

import litellm

response = litellm.completion(
    model="gemini/gemini-2.0-flash-exp",
    messages=[{"role": "user", "content": "hi there"}],
)
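
For reference, the debug settings mentioned above look roughly like this (a sketch; whether LITELLM_LOG must be set before importing litellm is an assumption):

import os

# assumption: set the env var before importing litellm so it is picked up
os.environ["LITELLM_LOG"] = "DEBUG"

import litellm

# deprecated, but this is what produces the warning shown below
litellm.set_verbose = True

response = litellm.completion(
    model="gemini/gemini-2.0-flash-exp",
    messages=[{"role": "user", "content": "hi there"}],
)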

The message that gets printed in stdout:

11:02:02 - LiteLLM:WARNING: utils.py:322 - `litellm.set_verbose` is deprecated. Please set `os.environ['LITELLM_LOG'] = 'DEBUG'` for debug logs.
SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
Final returned optional params: {}
RAW RESPONSE:
{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "Hi! How can I help you today?\n"
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "safetyRatings": [
        {
          "category": "HARM_CATEGORY_HATE_SPEECH",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_HARASSMENT",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
          "probability": "NEGLIGIBLE"
        }
      ],
      "avgLogprobs": -0.12977735996246337
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 3,
    "candidatesTokenCount": 10,
    "totalTokenCount": 13
  },
  "modelVersion": "gemini-2.0-flash-exp"
}

Relevant log output

Are you an ML Ops Team?

No

What LiteLLM version are you on?

v1.58.2

Twitter / LinkedIn details

No response

mrm1001 added the bug label Jan 16, 2025
@superpoussin22
Contributor

If you want to see the prompts and responses, you could use Langfuse.
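
A minimal sketch of that setup, based on LiteLLM's Langfuse callback integration (the keys below are placeholders, not real credentials):

import os
import litellm

# Langfuse credentials (placeholders)
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-..."

# log both successful and failed calls so the prompt/response pair
# shows up as a trace in Langfuse
litellm.success_callback = ["langfuse"]
litellm.failure_callback = ["langfuse"]

response = litellm.completion(
    model="gemini/gemini-2.0-flash-exp",
    messages=[{"role": "user", "content": "hi there"}],
)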

@mrm1001
Author

mrm1001 commented Jan 16, 2025

Langfuse is great, but this is very basic functionality that the library should provide. I have spent hours trying to see the raw request while debugging another issue I'm experiencing (#7804) and have not managed to!
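
One client-side workaround might be a custom callback with a pre-call hook, along these lines (a sketch assuming CustomLogger.log_pre_api_call behaves this way in v1.58.2):

import litellm
from litellm.integrations.custom_logger import CustomLogger

class PrintRequestLogger(CustomLogger):
    # called before the HTTP request is sent
    def log_pre_api_call(self, model, messages, kwargs):
        print("PRE-CALL model:", model)
        print("PRE-CALL messages:", messages)
        # kwargs holds whatever call details LiteLLM has assembled so far
        print("PRE-CALL kwargs keys:", list(kwargs.keys()))

litellm.callbacks = [PrintRequestLogger()]

response = litellm.completion(
    model="gemini/gemini-2.0-flash-exp",
    messages=[{"role": "user", "content": "hi there"}],
)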

Also, the docs are wrong.
