improve instrumentation docs #1625


Merged · 9 commits · May 13, 2025
Binary file modified docs/img/logfire-evals-case-trace.png
Binary file modified docs/img/logfire-evals-case.png
Binary file modified docs/img/logfire-evals-overview.png
Binary file modified docs/img/logfire-monitoring-pydanticai.png
Binary file modified docs/img/logfire-run-python-code.png
Binary file added docs/img/logfire-simple-agent.png
Binary file modified docs/img/logfire-weather-agent.png
Binary file modified docs/img/logfire-with-httpx.png
Binary file modified docs/img/logfire-without-httpx.png
Binary file added docs/img/otel-tui-simple.png
Binary file added docs/img/otel-tui-weather.png
204 changes: 154 additions & 50 deletions docs/logfire.md
@@ -15,7 +15,7 @@ LLM Observability tools that just let you understand how your model is performin

## Pydantic Logfire

[Pydantic Logfire](https://pydantic.dev/logfire) is an observability platform developed by the team who created and maintain Pydantic and PydanticAI. Logfire aims to let you understand your entire application: Gen AI, classic predictive AI, HTTP traffic, database queries and everything else a modern application needs, all using OpenTelemetry.

!!! tip "Pydantic Logfire is a commercial product"
Logfire is a commercially supported, hosted platform with an extremely generous and perpetual [free tier](https://pydantic.dev/pricing/).
@@ -27,15 +27,17 @@ Here's an example showing details of running the [Weather Agent](examples/weathe

![Weather Agent Logfire](img/logfire-weather-agent.png)

A trace is generated for the agent run, and spans are emitted for each model request and tool call.
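Schematically, the resulting trace nests like this (the span names below are illustrative, not the exact names PydanticAI emits):

```python
# Illustrative span tree for one agent run with a single tool call;
# each key is a span name, each value is a list of child spans.
trace = {
    'agent run': [
        {'chat gpt-4o': []},                # first model request
        {'running tool: get_weather': []},  # tool execution
        {'chat gpt-4o': []},                # follow-up model request
    ],
}


def show(node: dict, indent: int = 0) -> None:
    """Print the span tree with two-space indentation per level."""
    for name, children in node.items():
        print(' ' * indent + name)
        for child in children:
            show(child, indent + 2)


show(trace)
```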

## Using Logfire

To use Logfire, you'll need a Logfire [account](https://logfire.pydantic.dev), and the Logfire Python SDK installed:

```bash
pip/uv-add "pydantic-ai[logfire]"
```

Then authenticate your local environment with Logfire:

```bash
py-cli logfire auth
Expand All @@ -49,34 +51,40 @@ py-cli logfire projects new

(Or use an existing project with `logfire projects use`)

This will write to a `.logfire` directory in the current working directory, which the Logfire SDK will use for configuration at run time.

With that, you can start using Logfire to instrument PydanticAI code:

```python {title="instrument_pydantic_ai.py" hl_lines="1 5 6"}
import logfire

from pydantic_ai import Agent

logfire.configure()  # (1)!
logfire.instrument_pydantic_ai()  # (2)!

agent = Agent('openai:gpt-4o', instructions='Be concise, reply with one sentence.')
result = agent.run_sync('Where does "hello world" come from?')  # (3)!
print(result.output)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
```

> **Review (Contributor):** It seems like you've removed all mentions of `Agent(instrument=...)` which seems extreme
>
> **Review (Member Author):** it's still in the API docs.

1. [`logfire.configure()`][logfire.configure] configures the SDK; by default it will find the write token from the `.logfire` directory, but you can also pass a token directly.
2. [`logfire.instrument_pydantic_ai()`][logfire.Logfire.instrument_pydantic_ai] enables instrumentation of PydanticAI.
3. Since we've enabled instrumentation, a trace will be generated for each run, with spans emitted for model calls and tool function execution.

_(This example is complete, it can be run "as is")_

Which will display in Logfire thus:

![Logfire Simple Agent Run](img/logfire-simple-agent.png)

The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use Logfire,
including how to instrument other libraries like [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) and [FastAPI](https://logfire.pydantic.dev/docs/integrations/web-frameworks/fastapi/).

Since Logfire is built on [OpenTelemetry](https://opentelemetry.io/), you can use the Logfire Python SDK to send data to any OpenTelemetry collector, see [below](#using-opentelemetry).

Once you have Logfire set up, there are two primary ways it can help you understand your application:

* **Debugging** — Using the live view to see what's happening in your application in real-time.
* **Monitoring** — Using SQL and dashboards to observe the behavior of your application; Logfire is effectively a SQL database that stores information about how your application is running.
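The write token can also be supplied without the `.logfire` directory; one way, assuming the standard `LOGFIRE_TOKEN` environment variable (the value below is a placeholder), is:

```python
import os

# Placeholder token value; the Logfire SDK picks up LOGFIRE_TOKEN at
# logfire.configure() time when no token is passed explicitly.
os.environ['LOGFIRE_TOKEN'] = 'your-write-token'
```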

### Debugging
> **Review (Contributor):** The video here really needs updating

@@ -90,65 +98,161 @@ We can also query data with SQL in Logfire to monitor the performance of an appl

![Logfire monitoring PydanticAI](img/logfire-monitoring-pydanticai.png)

### Monitoring HTTP Requests

!!! tip "\"F**k you, show me the prompt.\""
    As per Hamel Husain's influential 2024 blog post ["Fuck You, Show Me The Prompt."](https://hamel.dev/blog/posts/prompt/)
    (bear with the capitalization, the point is valid), it's often useful to be able to view the raw HTTP requests and responses made to model providers.

To observe raw HTTP requests made to model providers, you can use `logfire`'s [HTTPX instrumentation](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) since all provider SDKs use the [HTTPX](https://www.python-httpx.org/) library internally.

=== "With HTTP instrumentation"

    ```py {title="with_logfire_instrument_httpx.py" hl_lines="7"}
    import logfire

    from pydantic_ai import Agent

    logfire.configure()
    logfire.instrument_pydantic_ai()
    logfire.instrument_httpx(capture_all=True)  # (1)!

    agent = Agent('openai:gpt-4o')
    result = agent.run_sync('What is the capital of France?')
    print(result.output)
    #> Paris
    ```

    1. See the [`logfire.instrument_httpx` docs][logfire.Logfire.instrument_httpx] for more details; `capture_all=True` means both headers and body are captured for both the request and response.

    ![Logfire with HTTPX instrumentation](img/logfire-with-httpx.png)

=== "Without HTTP instrumentation"

    ```py {title="without_logfire_instrument_httpx.py"}
    import logfire

    from pydantic_ai import Agent

    logfire.configure()
    logfire.instrument_pydantic_ai()

    agent = Agent('openai:gpt-4o')
    result = agent.run_sync('What is the capital of France?')
    print(result.output)
    #> Paris
    ```

    ![Logfire without HTTPX instrumentation](img/logfire-without-httpx.png)

## Using OpenTelemetry

PydanticAI's instrumentation uses [OpenTelemetry](https://opentelemetry.io/) (OTel), which Logfire is based on.

This means you can debug and monitor PydanticAI with any OpenTelemetry backend.

PydanticAI follows the [OpenTelemetry Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/), so while we think you'll have the best experience using the Logfire platform :wink:, you should be able to use any OTel service with GenAI support.
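As a rough sketch of what those conventions define, a model-request span carries `gen_ai.*` attributes like the following (the attribute names are from the OTel spec; the values here are invented):

```python
# A few gen_ai.* span attributes from the OTel GenAI semantic conventions;
# a backend with GenAI support keys off attribute names like these.
span_attributes = {
    'gen_ai.system': 'openai',
    'gen_ai.request.model': 'gpt-4o',
    'gen_ai.usage.input_tokens': 52,
    'gen_ai.usage.output_tokens': 12,
}

total_tokens = (
    span_attributes['gen_ai.usage.input_tokens']
    + span_attributes['gen_ai.usage.output_tokens']
)
print(total_tokens)
#> 64
```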

### Logfire with an alternative OTel backend

You can use the Logfire SDK completely freely and send the data to any OpenTelemetry backend.

Here's an example of configuring the Logfire library to send data to the excellent [otel-tui](https://github.com/ymtdzzz/otel-tui) — an open source terminal based OTel backend and viewer (no association with Pydantic).

Run `otel-tui` with docker (see [the otel-tui readme](https://github.com/ymtdzzz/otel-tui) for more instructions):

```txt title="Terminal"
docker run --rm -it -p 4318:4318 --name otel-tui ymtdzzz/otel-tui:latest
```

then run:
```python {title="otel_tui.py" hl_lines="7 8" test="skip"}
import os

import logfire

from pydantic_ai import Agent

os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318'  # (1)!
logfire.configure(send_to_logfire=False)  # (2)!
logfire.instrument_pydantic_ai()
logfire.instrument_httpx(capture_all=True)

agent = Agent('openai:gpt-4o')
result = agent.run_sync('What is the capital of France?')
print(result.output)
#> Paris
```

1. Set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable to the URL of your OpenTelemetry backend. If you're using a backend that requires authentication, you may need to set [other environment variables](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/). Of course, these can also be set outside the process, e.g. with `export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318`.
2. We [configure][logfire.configure] Logfire to disable sending data to the Logfire OTel backend itself. If you removed `send_to_logfire=False`, data would be sent to both Logfire and your OpenTelemetry backend.
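For a backend that requires authentication, the standard OTLP exporter environment variables can carry both the endpoint and credential headers; a sketch with placeholder values:

```python
import os

# Standard OTLP exporter settings; both values below are placeholders for
# whatever backend and credentials you actually use.
os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318'
os.environ['OTEL_EXPORTER_OTLP_HEADERS'] = 'Authorization=Bearer your-token'
```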

Running the above code will send tracing data to `otel-tui`, which will display like this:

![otel tui simple](img/otel-tui-simple.png)
> **Review (Contributor):** this is missing the httpx instrumentation spans
>
> **Review (Member Author):** I think it's right.
>
> **Review (Contributor):** _(screenshot: Screenshot 2025-05-13 at 13 12 02)_ i'm saying that it says "running the above code containing logfire.instrument_httpx will display this" and that's not what it will display


Running the [weather agent](examples/weather-agent.md) example connected to `otel-tui` shows how it can be used to visualise a more complex trace:

![otel tui weather agent](img/otel-tui-weather.png)

For more information on using the Logfire SDK to send data to alternative backends, see
[the Logfire documentation](https://logfire.pydantic.dev/docs/how-to-guides/alternative-backends/).

### OTel without Logfire

You can also emit OpenTelemetry data from PydanticAI without using Logfire at all.

To do this, you'll need to install and configure the appropriate OpenTelemetry packages. To run the following examples, use

```txt title="Terminal"
uv run \
--with 'pydantic-ai-slim[openai]' \
--with opentelemetry-sdk \
--with opentelemetry-exporter-otlp \
raw_otel.py
```

```python {title="raw_otel.py" test="skip"}
import os

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace import set_tracer_provider

from pydantic_ai.agent import Agent

os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318'
# Reviewer note (Contributor): OTLPSpanExporter uses this endpoint value by
# default anyway, and `OTLPSpanExporter(endpoint=...)` can be used instead of
# setting the environment variable.
exporter = OTLPSpanExporter()
span_processor = BatchSpanProcessor(exporter)
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(span_processor)

set_tracer_provider(tracer_provider)

Agent.instrument_all()
agent = Agent('openai:gpt-4o')
result = agent.run_sync('What is the capital of France?')
print(result.output)
#> Paris
```

## Data format

PydanticAI follows the [OpenTelemetry Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/), with one caveat. The semantic conventions specify that messages should be captured as individual events (logs) that are children of the request span. By default, PydanticAI instead collects these events into a JSON array which is set as a single large attribute called `events` on the request span. To change this, use [`InstrumentationSettings(event_mode='logs')`][pydantic_ai.agent.InstrumentationSettings].

```python {title="instrumentation_settings_event_mode.py"}
import logfire

from pydantic_ai import Agent

logfire.configure()
logfire.instrument_pydantic_ai(event_mode='logs')

agent = Agent('openai:gpt-4o')
result = agent.run_sync('What is the capital of France?')
print(result.output)
#> Paris
```

For now, this won't look as good in the Logfire UI, but we're working on it.
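To illustrate the difference between the two modes with plain data (a schematic sketch, not Logfire's or PydanticAI's actual internal structures): in the default mode the events end up JSON-encoded in a single `events` span attribute, while `event_mode='logs'` emits each one individually:

```python
import json

# Hypothetical captured events for one model request:
events = [
    {'event.name': 'gen_ai.user.message', 'content': 'What is the capital of France?'},
    {'event.name': 'gen_ai.choice', 'message': {'content': 'Paris'}},
]

# Default mode: one large JSON-array attribute on the request span.
span_attributes = {'events': json.dumps(events)}

# event_mode='logs': each event becomes its own child event/log instead.
for event in events:
    print(event['event.name'])
#> gen_ai.user.message
#> gen_ai.choice
```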
2 changes: 1 addition & 1 deletion docs/troubleshooting.md
@@ -24,4 +24,4 @@ If you're running into issues with setting the API key for your model, visit the

You can use custom `httpx` clients in your models in order to access specific requests, responses, and headers at runtime.

It's particularly helpful to use `logfire`'s [HTTPX integration](logfire.md#monitoring-http-requests) to monitor the above.
3 changes: 2 additions & 1 deletion examples/pydantic_ai_examples/chat_app.py
@@ -38,8 +38,9 @@

# 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
logfire.configure(send_to_logfire='if-token-present')
logfire.instrument_pydantic_ai()

agent = Agent('openai:gpt-4o')
THIS_DIR = Path(__file__).parent


2 changes: 1 addition & 1 deletion examples/pydantic_ai_examples/flight_booking.py
@@ -17,6 +17,7 @@

# 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
logfire.configure(send_to_logfire='if-token-present')
logfire.instrument_pydantic_ai()


class FlightDetails(BaseModel):
@@ -49,7 +50,6 @@ class Deps:
system_prompt=(
'Your job is to find the cheapest flight for the user on the given date. '
),
)


3 changes: 2 additions & 1 deletion examples/pydantic_ai_examples/pydantic_model.py
@@ -14,6 +14,7 @@

# 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
logfire.configure(send_to_logfire='if-token-present')
logfire.instrument_pydantic_ai()


class MyModel(BaseModel):
@@ -23,7 +24,7 @@ class MyModel(BaseModel):

model = os.getenv('PYDANTIC_AI_MODEL', 'openai:gpt-4o')
print(f'Using model: {model}')
agent = Agent(model, output_type=MyModel)

if __name__ == '__main__':
result = agent.run_sync('The windy city in the US of A.')
3 changes: 2 additions & 1 deletion examples/pydantic_ai_examples/question_graph.py
@@ -25,8 +25,9 @@

# 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
logfire.configure(send_to_logfire='if-token-present')
logfire.instrument_pydantic_ai()

ask_agent = Agent('openai:gpt-4o', output_type=str)


@dataclass
3 changes: 2 additions & 1 deletion examples/pydantic_ai_examples/rag.py
@@ -40,6 +40,7 @@
# 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
logfire.configure(send_to_logfire='if-token-present')
logfire.instrument_asyncpg()
logfire.instrument_pydantic_ai()


@dataclass
@@ -48,7 +49,7 @@ class Deps:
pool: asyncpg.Pool


agent = Agent('openai:gpt-4o', deps_type=Deps)


@agent.tool
1 change: 0 additions & 1 deletion examples/pydantic_ai_examples/roulette_wheel.py
@@ -28,7 +28,6 @@ class Deps:
system_prompt=(
'Use the `roulette_wheel` function to determine if the customer has won based on the number they bet on.'
),
)


2 changes: 1 addition & 1 deletion examples/pydantic_ai_examples/sql_gen.py
@@ -30,6 +30,7 @@
# 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
logfire.configure(send_to_logfire='if-token-present')
logfire.instrument_asyncpg()
logfire.instrument_pydantic_ai()

DB_SCHEMA = """
CREATE TABLE records (
@@ -96,7 +97,6 @@ class InvalidRequest(BaseModel):
# Type ignore while we wait for PEP-0747, nonetheless unions will work fine everywhere else
output_type=Response, # type: ignore
deps_type=Deps,
)


3 changes: 2 additions & 1 deletion examples/pydantic_ai_examples/stream_markdown.py
@@ -20,8 +20,9 @@

# 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
logfire.configure(send_to_logfire='if-token-present')
logfire.instrument_pydantic_ai()

agent = Agent()

# models to try, and the appropriate env var
models: list[tuple[KnownModelName, str]] = [