diff --git a/docs/img/logfire-evals-case-trace.png b/docs/img/logfire-evals-case-trace.png index df3a0d7d6..a6a30983f 100644 Binary files a/docs/img/logfire-evals-case-trace.png and b/docs/img/logfire-evals-case-trace.png differ diff --git a/docs/img/logfire-evals-case.png b/docs/img/logfire-evals-case.png index 0bd8b1c2d..108b3c8e4 100644 Binary files a/docs/img/logfire-evals-case.png and b/docs/img/logfire-evals-case.png differ diff --git a/docs/img/logfire-evals-overview.png b/docs/img/logfire-evals-overview.png index fde5e2d82..516df8f51 100644 Binary files a/docs/img/logfire-evals-overview.png and b/docs/img/logfire-evals-overview.png differ diff --git a/docs/img/logfire-monitoring-pydanticai.png b/docs/img/logfire-monitoring-pydanticai.png index 51cf2e1d0..16094ab78 100644 Binary files a/docs/img/logfire-monitoring-pydanticai.png and b/docs/img/logfire-monitoring-pydanticai.png differ diff --git a/docs/img/logfire-run-python-code.png b/docs/img/logfire-run-python-code.png index d5d692f1b..e9eccefae 100644 Binary files a/docs/img/logfire-run-python-code.png and b/docs/img/logfire-run-python-code.png differ diff --git a/docs/img/logfire-simple-agent.png b/docs/img/logfire-simple-agent.png new file mode 100644 index 000000000..8f8e07e6b Binary files /dev/null and b/docs/img/logfire-simple-agent.png differ diff --git a/docs/img/logfire-weather-agent.png b/docs/img/logfire-weather-agent.png index 81fa9b3c5..3d690d7c4 100644 Binary files a/docs/img/logfire-weather-agent.png and b/docs/img/logfire-weather-agent.png differ diff --git a/docs/img/logfire-with-httpx.png b/docs/img/logfire-with-httpx.png index 181a090fb..a34e1b564 100644 Binary files a/docs/img/logfire-with-httpx.png and b/docs/img/logfire-with-httpx.png differ diff --git a/docs/img/logfire-without-httpx.png b/docs/img/logfire-without-httpx.png index ab21e3c0f..457299e81 100644 Binary files a/docs/img/logfire-without-httpx.png and b/docs/img/logfire-without-httpx.png differ diff --git a/docs/img/otel-tui-simple.png b/docs/img/otel-tui-simple.png new file mode 100644 index 000000000..5e14ca74b Binary files /dev/null and b/docs/img/otel-tui-simple.png differ diff --git a/docs/img/otel-tui-weather.png b/docs/img/otel-tui-weather.png new file mode 100644 index 000000000..0e417fa7a Binary files /dev/null and b/docs/img/otel-tui-weather.png differ diff --git a/docs/logfire.md b/docs/logfire.md index 4aa269dbd..54d01f966 100644 --- a/docs/logfire.md +++ b/docs/logfire.md @@ -15,7 +15,7 @@ LLM Observability tools that just let you understand how your model is performin ## Pydantic Logfire -[Pydantic Logfire](https://pydantic.dev/logfire) is an observability platform developed by the team who created and maintain Pydantic and PydanticAI. Logfire aims to let you understand your entire application: Gen AI, classic predictive AI, HTTP traffic, database queries and everything else a modern application needs. +[Pydantic Logfire](https://pydantic.dev/logfire) is an observability platform developed by the team who created and maintain Pydantic and PydanticAI. Logfire aims to let you understand your entire application: Gen AI, classic predictive AI, HTTP traffic, database queries and everything else a modern application needs, all using OpenTelemetry. !!! tip "Pydantic Logfire is a commercial product" Logfire is a commercially supported, hosted platform with an extremely generous and perpetual [free tier](https://pydantic.dev/pricing/). 
@@ -27,15 +27,17 @@ Here's an example showing details of running the [Weather Agent](examples/weathe ![Weather Agent Logfire](img/logfire-weather-agent.png) +A trace is generated for the agent run, and spans are emitted for each model request and tool call. + ## Using Logfire -To use logfire, you'll need a logfire [account](https://logfire.pydantic.dev), and logfire installed: +To use Logfire, you'll need a Logfire [account](https://logfire.pydantic.dev), and the Logfire Python SDK installed: ```bash pip/uv-add "pydantic-ai[logfire]" ``` -Then authenticate your local environment with logfire: +Then authenticate your local environment with Logfire: ```bash py-cli logfire auth @@ -49,34 +51,40 @@ py-cli logfire projects new (Or use an existing project with `logfire projects use`) -Then add logfire to your code: - -```python {title="adding_logfire.py"} -import logfire +This will write to a `.logfire` directory in the current working directory, which the Logfire SDK will use for configuration at run time. -logfire.configure() -``` +With that, you can start using Logfire to instrument PydanticAI code: -and enable instrumentation in your agent: +```python {title="instrument_pydantic_ai.py" hl_lines="1 5 6"} +import logfire -```python {title="instrument_agent.py"} from pydantic_ai import Agent -agent = Agent('openai:gpt-4o', instrument=True) -# or instrument all agents to avoid needing to add `instrument=True` to each agent: -Agent.instrument_all() +logfire.configure() # (1)! +logfire.instrument_pydantic_ai() # (2)! + +agent = Agent('openai:gpt-4o', instructions='Be concise, reply with one sentence.') +result = agent.run_sync('Where does "hello world" come from?') # (3)! +print(result.output) +""" +The first known use of "hello, world" was in a 1974 textbook about the C programming language. +""" ``` -The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use logfire, -including how to instrument other libraries like [Pydantic](https://logfire.pydantic.dev/docs/integrations/pydantic/), -[HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) and [FastAPI](https://logfire.pydantic.dev/docs/integrations/web-frameworks/fastapi/). +1. [`logfire.configure()`][logfire.configure] configures the SDK; by default it will find the write token in the `.logfire` directory, but you can also pass a token directly, as sketched below. +2. [`logfire.instrument_pydantic_ai()`][logfire.Logfire.instrument_pydantic_ai] enables instrumentation of PydanticAI. +3. Since we've enabled instrumentation, a trace will be generated for each run, with spans emitted for model requests and tool function execution. + +_(This example is complete, it can be run "as is")_ -Since Logfire is built on [OpenTelemetry](https://opentelemetry.io/), you can use the Logfire Python SDK to send data to any OpenTelemetry collector. +The run will display in Logfire like this: -Once you have logfire set up, there are two primary ways it can help you understand your application: +![Logfire Simple Agent Run](img/logfire-simple-agent.png) -* **Debugging** — Using the live view to see what's happening in your application in real-time. -* **Monitoring** — Using SQL and dashboards to observe the behavior of your application, Logfire is effectively a SQL database that stores information about how your application is running.
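+If you'd rather not rely on the `.logfire` directory (for example in CI), you can pass the write token to [`logfire.configure()`][logfire.configure] directly. Here's a minimal sketch, assuming the token is stored in the `LOGFIRE_TOKEN` environment variable:
+
+```python {title="configure_with_token.py" test="skip"}
+import os
+
+import logfire
+
+from pydantic_ai import Agent
+
+# sketch: pass the write token explicitly instead of relying on `.logfire/`
+logfire.configure(token=os.environ['LOGFIRE_TOKEN'])
+logfire.instrument_pydantic_ai()
+
+agent = Agent('openai:gpt-4o')
+```
+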
+The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use Logfire, +including how to instrument other libraries like [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) and [FastAPI](https://logfire.pydantic.dev/docs/integrations/web-frameworks/fastapi/). + +Since Logfire is built on [OpenTelemetry](https://opentelemetry.io/), you can use the Logfire Python SDK to send data to any OpenTelemetry collector; see [below](#using-opentelemetry). ### Debugging @@ -90,65 +98,161 @@ We can also query data with SQL in Logfire to monitor the performance of an appl ![Logfire monitoring PydanticAI](img/logfire-monitoring-pydanticai.png) -### Monitoring HTTPX Requests +### Monitoring HTTP Requests -In order to monitor HTTPX requests made by models, you can use `logfire`'s [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) integration. +!!! tip "F**k you, show me the prompt." + As per Hamel Husain's influential 2024 blog post ["Fuck You, Show Me The Prompt."](https://hamel.dev/blog/posts/prompt/) + (bear with the capitalization, the point is valid), it's often useful to be able to view the raw HTTP requests and responses made to model providers. -Instrumentation is as easy as adding the following three lines to your application: +To observe raw HTTP requests made to model providers, you can use `logfire`'s [HTTPX instrumentation](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/), since all provider SDKs use the [HTTPX](https://www.python-httpx.org/) library internally. -```py {title="instrument_httpx.py" test="skip" lint="skip"} -import logfire -logfire.configure() -logfire.instrument_httpx(capture_all=True) # (1)! + +=== "With HTTP instrumentation" + + ```py {title="with_logfire_instrument_httpx.py" hl_lines="7"} + import logfire + + from pydantic_ai import Agent + + logfire.configure() + logfire.instrument_pydantic_ai() + logfire.instrument_httpx(capture_all=True) # (1)! + agent = Agent('openai:gpt-4o') + result = agent.run_sync('What is the capital of France?') + print(result.output) + #> Paris + ``` + + 1. See the [`logfire.instrument_httpx` docs][logfire.Logfire.instrument_httpx] for more details; `capture_all=True` means both headers and body are captured for both the request and response. + + ![Logfire with HTTPX instrumentation](img/logfire-with-httpx.png) + +=== "Without HTTP instrumentation" + + ```py {title="without_logfire_instrument_httpx.py"} + import logfire + + from pydantic_ai import Agent + + logfire.configure() + logfire.instrument_pydantic_ai() + + agent = Agent('openai:gpt-4o') + result = agent.run_sync('What is the capital of France?') + print(result.output) + #> Paris + ``` + + ![Logfire without HTTPX instrumentation](img/logfire-without-httpx.png) + +## Using OpenTelemetry + +PydanticAI's instrumentation uses [OpenTelemetry](https://opentelemetry.io/) (OTel), which Logfire is based on. + +This means you can debug and monitor PydanticAI with any OpenTelemetry backend. + +PydanticAI follows the [OpenTelemetry Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/), so while we think you'll have the best experience using the Logfire platform :wink:, you should be able to use any OTel service with GenAI support. + +### Logfire with an alternative OTel backend + +You can use the Logfire SDK completely freely and send the data to any OpenTelemetry backend.
+ +Here's an example of configuring the Logfire library to send data to the excellent [otel-tui](https://github.com/ymtdzzz/otel-tui) — an open-source, terminal-based OTel backend and viewer (no association with Pydantic). + +Run `otel-tui` with Docker (see [the otel-tui readme](https://github.com/ymtdzzz/otel-tui) for more instructions): + +```txt title="Terminal" +docker run --rm -it -p 4318:4318 --name otel-tui ymtdzzz/otel-tui:latest ``` -1. See the [logfire docs](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) for more `httpx` instrumentation details. +Then run: -In particular, this can help you to trace specific requests, responses, and headers: +```python {title="otel_tui.py" hl_lines="7 8" test="skip"} +import os -```py {title="instrument_httpx_example.py", test="skip" lint="skip"} import logfire + from pydantic_ai import Agent -logfire.configure() -logfire.instrument_httpx(capture_all=True) # (1)! +os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318' # (1)! +logfire.configure(send_to_logfire=False) # (2)! +logfire.instrument_pydantic_ai() +logfire.instrument_httpx(capture_all=True) -agent = Agent('openai:gpt-4o', instrument=True) +agent = Agent('openai:gpt-4o') result = agent.run_sync('What is the capital of France?') print(result.output) -# > The capital of France is Paris. +#> Paris ``` -1. Capture all of headers, request body, and response body. +1. Set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable to the URL of your OpenTelemetry backend. If you're using a backend that requires authentication, you may need to set [other environment variables](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/). Of course, these can also be set outside the process, e.g. with `export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318`. +2. We [configure][logfire.configure] Logfire to disable sending data to the Logfire OTel backend itself. If you removed `send_to_logfire=False`, data would be sent to both Logfire and your OpenTelemetry backend. -=== "With `httpx` instrumentation" +Running the above code will send tracing data to `otel-tui`, which will display like this: - ![Logfire with HTTPX instrumentation](img/logfire-with-httpx.png) +![otel tui simple](img/otel-tui-simple.png) -=== "Without `httpx` instrumentation" +Running the [weather agent](examples/weather-agent.md) example connected to `otel-tui` shows how it can be used to visualise a more complex trace: - ![Logfire without HTTPX instrumentation](img/logfire-without-httpx.png) +![otel tui weather agent](img/otel-tui-weather.png) -!!! tip - `httpx` instrumentation might be of particular utility if you're using a custom `httpx` client in your model in order to get insights into your custom requests. +For more information on using the Logfire SDK to send data to alternative backends, see +[the Logfire documentation](https://logfire.pydantic.dev/docs/how-to-guides/alternative-backends/). -## Using OpenTelemetry +### OTel without Logfire + +You can also emit OpenTelemetry data from PydanticAI without using Logfire at all. + +To do this, you'll need to install and configure the required OpenTelemetry packages. To run the following examples, use: -PydanticAI's instrumentation uses [OpenTelemetry](https://opentelemetry.io/), which Logfire is based on.
You can use the Logfire SDK completely freely and follow the [Alternative backends](https://logfire.pydantic.dev/docs/how-to-guides/alternative-backends/) guide to send the data to any OpenTelemetry collector, such as a self-hosted Jaeger instance. Or you can skip Logfire entirely and use the OpenTelemetry Python SDK directly. +```txt title="Terminal" +uv run \ + --with 'pydantic-ai-slim[openai]' \ + --with opentelemetry-sdk \ + --with opentelemetry-exporter-otlp \ + raw_otel.py +``` + +```python {title="raw_otel.py" test="skip"} +import os + +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import BatchSpanProcessor +from opentelemetry.trace import set_tracer_provider + +from pydantic_ai.agent import Agent + +os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318' +exporter = OTLPSpanExporter() +span_processor = BatchSpanProcessor(exporter) +tracer_provider = TracerProvider() +tracer_provider.add_span_processor(span_processor) + +set_tracer_provider(tracer_provider) + +Agent.instrument_all() +agent = Agent('openai:gpt-4o') +result = agent.run_sync('What is the capital of France?') +print(result.output) +#> Paris +``` ## Data format PydanticAI follows the [OpenTelemetry Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/), with one caveat. The semantic conventions specify that messages should be captured as individual events (logs) that are children of the request span. By default, PydanticAI instead collects these events into a JSON array which is set as a single large attribute called `events` on the request span. To change this, use [`InstrumentationSettings(event_mode='logs')`][pydantic_ai.agent.InstrumentationSettings]. ```python {title="instrumentation_settings_event_mode.py"} -from pydantic_ai import Agent -from pydantic_ai.agent import InstrumentationSettings +import logfire -instrumentation_settings = InstrumentationSettings(event_mode='logs') +from pydantic_ai import Agent -agent = Agent('openai:gpt-4o', instrument=instrumentation_settings) -# or instrument all agents: -Agent.instrument_all(instrumentation_settings) +logfire.configure() +logfire.instrument_pydantic_ai(event_mode='logs') +agent = Agent('openai:gpt-4o') +result = agent.run_sync('What is the capital of France?') +print(result.output) +#> Paris ``` For now, this won't look as good in the Logfire UI, but we're working on it. diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index 17b4c3e80..7f73d0fb4 100644 --- a/docs/troubleshooting.md +++ b/docs/troubleshooting.md @@ -24,4 +24,4 @@ If you're running into issues with setting the API key for your model, visit the You can use custom `httpx` clients in your models in order to access specific requests, responses, and headers at runtime. -It's particularly helpful to use `logfire`'s [HTTPX integration](logfire.md#monitoring-httpx-requests) to monitor the above. +It's particularly helpful to use `logfire`'s [HTTPX integration](logfire.md#monitoring-http-requests) to monitor the above. 
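+
+For illustration, here's a minimal sketch of instrumenting just one custom client, assuming the OpenAI model is used via `OpenAIProvider` (which accepts an `http_client`); adapt the model and provider to your setup:
+
+```python {title="custom_httpx_client.py" test="skip"}
+import httpx
+import logfire
+
+from pydantic_ai import Agent
+from pydantic_ai.models.openai import OpenAIModel
+from pydantic_ai.providers.openai import OpenAIProvider
+
+logfire.configure()
+logfire.instrument_pydantic_ai()
+
+# a custom client, e.g. with a non-default timeout
+client = httpx.AsyncClient(timeout=30)
+# instrument only this client, rather than all of HTTPX
+logfire.instrument_httpx(client, capture_all=True)
+
+agent = Agent(OpenAIModel('gpt-4o', provider=OpenAIProvider(http_client=client)))
+result = agent.run_sync('What is the capital of France?')
+print(result.output)
+#> Paris
+```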
diff --git a/examples/pydantic_ai_examples/chat_app.py b/examples/pydantic_ai_examples/chat_app.py index e3250af7c..5807c9ef9 100644 --- a/examples/pydantic_ai_examples/chat_app.py +++ b/examples/pydantic_ai_examples/chat_app.py @@ -38,8 +38,9 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') +logfire.instrument_pydantic_ai() -agent = Agent('openai:gpt-4o', instrument=True) +agent = Agent('openai:gpt-4o') THIS_DIR = Path(__file__).parent diff --git a/examples/pydantic_ai_examples/flight_booking.py b/examples/pydantic_ai_examples/flight_booking.py index 2721a468e..5029c2038 100644 --- a/examples/pydantic_ai_examples/flight_booking.py +++ b/examples/pydantic_ai_examples/flight_booking.py @@ -17,6 +17,7 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') +logfire.instrument_pydantic_ai() class FlightDetails(BaseModel): @@ -49,7 +50,6 @@ class Deps: system_prompt=( 'Your job is to find the cheapest flight for the user on the given date. ' ), - instrument=True, ) diff --git a/examples/pydantic_ai_examples/pydantic_model.py b/examples/pydantic_ai_examples/pydantic_model.py index 980c7ab61..2ad754a32 100644 --- a/examples/pydantic_ai_examples/pydantic_model.py +++ b/examples/pydantic_ai_examples/pydantic_model.py @@ -14,6 +14,7 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') +logfire.instrument_pydantic_ai() class MyModel(BaseModel): @@ -23,7 +24,7 @@ class MyModel(BaseModel): model = os.getenv('PYDANTIC_AI_MODEL', 'openai:gpt-4o') print(f'Using model: {model}') -agent = Agent(model, output_type=MyModel, instrument=True) +agent = Agent(model, output_type=MyModel) if __name__ == '__main__': result = agent.run_sync('The windy city in the US of A.') diff --git a/examples/pydantic_ai_examples/question_graph.py b/examples/pydantic_ai_examples/question_graph.py index 70bc9c9ed..e5d18c9a3 100644 --- a/examples/pydantic_ai_examples/question_graph.py +++ b/examples/pydantic_ai_examples/question_graph.py @@ -25,8 +25,9 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') +logfire.instrument_pydantic_ai() -ask_agent = Agent('openai:gpt-4o', output_type=str, instrument=True) +ask_agent = Agent('openai:gpt-4o', output_type=str) @dataclass diff --git a/examples/pydantic_ai_examples/rag.py b/examples/pydantic_ai_examples/rag.py index a51d877c6..6d10b03a5 100644 --- a/examples/pydantic_ai_examples/rag.py +++ b/examples/pydantic_ai_examples/rag.py @@ -40,6 +40,7 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') logfire.instrument_asyncpg() +logfire.instrument_pydantic_ai() @dataclass @@ -48,7 +49,7 @@ class Deps: pool: asyncpg.Pool -agent = Agent('openai:gpt-4o', deps_type=Deps, instrument=True) +agent = Agent('openai:gpt-4o', deps_type=Deps) @agent.tool diff --git a/examples/pydantic_ai_examples/roulette_wheel.py b/examples/pydantic_ai_examples/roulette_wheel.py index f72719ba8..7df3229d2 100644 --- a/examples/pydantic_ai_examples/roulette_wheel.py +++ b/examples/pydantic_ai_examples/roulette_wheel.py @@ -28,7 +28,6 @@ class 
Deps: system_prompt=( 'Use the `roulette_wheel` function to determine if the customer has won based on the number they bet on.' ), - instrument=True, ) diff --git a/examples/pydantic_ai_examples/sql_gen.py b/examples/pydantic_ai_examples/sql_gen.py index ade55b006..28b5459fb 100644 --- a/examples/pydantic_ai_examples/sql_gen.py +++ b/examples/pydantic_ai_examples/sql_gen.py @@ -30,6 +30,7 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') logfire.instrument_asyncpg() +logfire.instrument_pydantic_ai() DB_SCHEMA = """ CREATE TABLE records ( @@ -96,7 +97,6 @@ class InvalidRequest(BaseModel): # Type ignore while we wait for PEP-0747, nonetheless unions will work fine everywhere else output_type=Response, # type: ignore deps_type=Deps, - instrument=True, ) diff --git a/examples/pydantic_ai_examples/stream_markdown.py b/examples/pydantic_ai_examples/stream_markdown.py index 6fed4ea07..53f61737b 100644 --- a/examples/pydantic_ai_examples/stream_markdown.py +++ b/examples/pydantic_ai_examples/stream_markdown.py @@ -20,8 +20,9 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') +logfire.instrument_pydantic_ai() -agent = Agent(instrument=True) +agent = Agent() # models to try, and the appropriate env var models: list[tuple[KnownModelName, str]] = [ diff --git a/examples/pydantic_ai_examples/stream_whales.py b/examples/pydantic_ai_examples/stream_whales.py index 1a99a5c98..aca14a3bd 100644 --- a/examples/pydantic_ai_examples/stream_whales.py +++ b/examples/pydantic_ai_examples/stream_whales.py @@ -21,6 +21,7 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') +logfire.instrument_pydantic_ai() class Whale(TypedDict): @@ -38,7 +39,7 @@ class Whale(TypedDict): description: NotRequired[Annotated[str, Field(description='Short Description')]] -agent = Agent('openai:gpt-4', output_type=list[Whale], instrument=True) +agent = Agent('openai:gpt-4', output_type=list[Whale]) async def main(): diff --git a/examples/pydantic_ai_examples/weather_agent.py b/examples/pydantic_ai_examples/weather_agent.py index 99fe5bcad..791e5326b 100644 --- a/examples/pydantic_ai_examples/weather_agent.py +++ b/examples/pydantic_ai_examples/weather_agent.py @@ -13,6 +13,7 @@ import asyncio import os +import urllib.parse from dataclasses import dataclass from typing import Any @@ -24,6 +25,7 @@ # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured logfire.configure(send_to_logfire='if-token-present') +logfire.instrument_pydantic_ai() @dataclass @@ -37,14 +39,13 @@ class Deps: 'openai:gpt-4o', # 'Be concise, reply with one sentence.' is enough for some models (like openai) to use # the below tools appropriately, but others like anthropic and gemini require a bit more direction. - system_prompt=( + instructions=( 'Be concise, reply with one sentence.' 'Use the `get_lat_lng` tool to get the latitude and longitude of the locations, ' 'then use the `get_weather` tool to get the weather.' 
), deps_type=Deps, retries=2, - instrument=True, ) @@ -62,18 +63,17 @@ async def get_lat_lng( # if no API key is provided, return a dummy response (London) return {'lat': 51.1, 'lng': -0.1} - params = { - 'q': location_description, - 'api_key': ctx.deps.geo_api_key, - } - with logfire.span('calling geocode API', params=params) as span: - r = await ctx.deps.client.get('https://geocode.maps.co/search', params=params) - r.raise_for_status() - data = r.json() - span.set_attribute('response', data) - - if data: - return {'lat': data[0]['lat'], 'lng': data[0]['lon']} + params = {'access_token': ctx.deps.geo_api_key} + loc = urllib.parse.quote(location_description) + r = await ctx.deps.client.get( + f'https://api.mapbox.com/geocoding/v5/mapbox.places/{loc}.json', params=params + ) + r.raise_for_status() + data = r.json() + + if features := data['features']: + lat, lng = features[0]['center'] + return {'lat': lat, 'lng': lng} else: raise ModelRetry('Could not find the location') @@ -139,9 +139,10 @@ async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str async def main(): async with AsyncClient() as client: + logfire.instrument_httpx(client, capture_all=True) # create a free API key at https://www.tomorrow.io/weather-api/ weather_api_key = os.getenv('WEATHER_API_KEY') - # create a free API key at https://geocode.maps.co/ + # create a free API key at https://www.mapbox.com/ geo_api_key = os.getenv('GEO_API_KEY') deps = Deps( client=client, weather_api_key=weather_api_key, geo_api_key=geo_api_key diff --git a/examples/pyproject.toml b/examples/pyproject.toml index 25bcf8fde..1e7ec0473 100644 --- a/examples/pyproject.toml +++ b/examples/pyproject.toml @@ -46,7 +46,7 @@ dependencies = [ "pydantic-evals=={{ version }}", "asyncpg>=0.30.0", "fastapi>=0.115.4", - "logfire[asyncpg,fastapi,sqlite3]>=2.6", + "logfire[asyncpg,fastapi,sqlite3,httpx]>=2.6", "python-multipart>=0.0.17", "rich>=13.9.2", "uvicorn>=0.32.0", diff --git a/mkdocs.yml b/mkdocs.yml index 02cf3e703..2eefa3509 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -208,6 +208,7 @@ plugins: # 3 because docs are in pages with an H2 just above them heading_level: 3 import: + - url: https://logfire.pydantic.dev/docs/objects.inv - url: https://docs.python.org/3/objects.inv - url: https://docs.pydantic.dev/latest/objects.inv - url: https://dirty-equals.helpmanual.io/latest/objects.inv diff --git a/uv.lock b/uv.lock index 227c424b5..5fdc93270 100644 --- a/uv.lock +++ b/uv.lock @@ -1466,6 +1466,9 @@ asyncpg = [ fastapi = [ { name = "opentelemetry-instrumentation-fastapi" }, ] +httpx = [ + { name = "opentelemetry-instrumentation-httpx" }, +] sqlite3 = [ { name = "opentelemetry-instrumentation-sqlite3" }, ] @@ -2293,6 +2296,22 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/55/1c/ec2d816b78edf2404d7b3df6d09eefb690b70bfd191b7da06f76634f1bdc/opentelemetry_instrumentation_fastapi-0.51b0-py3-none-any.whl", hash = "sha256:10513bbc11a1188adb9c1d2c520695f7a8f2b5f4de14e8162098035901cd6493", size = 12117 }, ] +[[package]] +name = "opentelemetry-instrumentation-httpx" +version = "0.51b0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "opentelemetry-api" }, + { name = "opentelemetry-instrumentation" }, + { name = "opentelemetry-semantic-conventions" }, + { name = "opentelemetry-util-http" }, + { name = "wrapt" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b7/d5/4a3990c461ae7e55212115e0f8f3aa412b5ce6493579e85c292245ac69ea/opentelemetry_instrumentation_httpx-0.51b0.tar.gz", hash = 
"sha256:061d426a04bf5215a859fea46662e5074f920e5cbde7e6ad6825a0a1b595802c", size = 17700 } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c3/ba/23d4ab6402408c01f1c3f32e0c04ea6dae575bf19bcb9a0049c9e768c983/opentelemetry_instrumentation_httpx-0.51b0-py3-none-any.whl", hash = "sha256:2e3fdf755ba6ead6ab43031497c3d55d4c796d0368eccc0ce48d304b7ec6486a", size = 14109 }, +] + [[package]] name = "opentelemetry-instrumentation-sqlite3" version = "0.51b0" @@ -2867,7 +2886,7 @@ dependencies = [ { name = "devtools" }, { name = "fastapi" }, { name = "gradio", marker = "python_full_version >= '3.10'" }, - { name = "logfire", extra = ["asyncpg", "fastapi", "sqlite3"] }, + { name = "logfire", extra = ["asyncpg", "fastapi", "httpx", "sqlite3"] }, { name = "mcp", extra = ["cli"], marker = "python_full_version >= '3.10'" }, { name = "pydantic-ai-slim", extra = ["anthropic", "groq", "openai", "vertexai"] }, { name = "pydantic-evals" }, @@ -2882,7 +2901,7 @@ requires-dist = [ { name = "devtools", specifier = ">=0.12.2" }, { name = "fastapi", specifier = ">=0.115.4" }, { name = "gradio", marker = "python_full_version >= '3.10'", specifier = ">=5.9.0" }, - { name = "logfire", extras = ["asyncpg", "fastapi", "sqlite3"], specifier = ">=2.6" }, + { name = "logfire", extras = ["asyncpg", "fastapi", "httpx", "sqlite3"], specifier = ">=2.6" }, { name = "mcp", extras = ["cli"], marker = "python_full_version >= '3.10'", specifier = ">=1.4.1" }, { name = "pydantic-ai-slim", extras = ["anthropic", "groq", "openai", "vertexai"], editable = "pydantic_ai_slim" }, { name = "pydantic-evals", editable = "pydantic_evals" },