usage_metadata returns None in LangGraph Studio #3936
Comments
Are you sure you're running from the same Python environment? LangGraph has no concept of LLMs, etc., so I'm not sure how the two would interact in this way.
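For example, a minimal check like the following (a hypothetical snippet, not part of the original comment) can be run from both entry points to compare interpreters and installed versions:

```python
# Hypothetical environment check: run this from both the LangGraph Studio
# project and the standalone script to compare interpreter and package versions.
import sys
from importlib.metadata import version

print(sys.executable)
for pkg in ("langgraph", "langchain-core", "langchain-openai"):
    print(pkg, version(pkg))
```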
I'm not sure about the situation with AzureChatOpenAI, but the output from ChatOpenAI seems normal?

```python
from typing import Any

from dotenv import load_dotenv
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import MessagesState, StateGraph
from rich import get_console

load_dotenv()

llm = ChatOpenAI(model="gpt-4o-mini")
system_message = SystemMessage(content="You are a helpful assistant.")


def assistant(state: MessagesState) -> dict[str, Any]:
    result = llm.invoke([system_message] + state["messages"])
    print("Usage metadata: ", result.usage_metadata)
    return {"messages": [result]}


builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.set_entry_point("assistant")
builder.set_finish_point("assistant")
graph = builder.compile()

result = graph.invoke({"messages": [("human", "hello")]})
get_console().print(result)
```
Yes, absolutely. I double-checked that the library versions are the same.
When you're running using You can use
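The inline code in this comment appears to have been lost. One possibility worth checking (an assumption, not necessarily what the comment suggested): when the model call is streamed, as it is when the graph is served via LangGraph Studio / `langgraph dev`, ChatOpenAI only attaches usage metadata if `stream_usage=True` is set:

```python
from langchain_openai import ChatOpenAI

# Assumption: if the run streams the model call, usage metadata is only
# included on the resulting AIMessage when stream_usage=True is passed.
llm = ChatOpenAI(model="gpt-4o-mini", stream_usage=True)
```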
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
Description
When I try to access token usage inside the graph, it always returns None:
Usage metadata: None.
However, if I run the same code outside of LangGraph (in a test.py file), I get the expected usage metadata:
{'input_tokens': 8, 'output_tokens': 11, 'total_tokens': 19, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}
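The standalone test.py itself is not included in the issue; a minimal sketch of a script that produces output of this shape (assuming a direct invoke on the same model) would look like this:

```python
# Hypothetical test.py: the same model call, but without a LangGraph graph.
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()

llm = ChatOpenAI(model="gpt-4o-mini")
result = llm.invoke("hello")
print(result.usage_metadata)
```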
The same happens with llm.with_structured_output(myPydanticClass, include_raw=True): if I use it inside LangGraph, token usage information is not returned, but if I run it outside, everything works as expected.
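With include_raw=True, the result is a dict whose "raw" entry is the underlying AIMessage, so token usage would normally be read like this (a sketch with a placeholder schema standing in for the real myPydanticClass):

```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel


class myPydanticClass(BaseModel):
    # Placeholder schema standing in for the real one used in the graph.
    answer: str


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(myPydanticClass, include_raw=True)

# With include_raw=True the call returns a dict with "raw", "parsed",
# and "parsing_error" keys; token usage lives on the raw AIMessage.
result = structured_llm.invoke("hello")
print(result["raw"].usage_metadata)
print(result["parsed"])
```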
Am I doing something wrong?
System Info
System Information
Package Information
Optional packages not installed
Other Dependencies