
release: 1.87.0 #2410


Merged (6 commits) on Jun 16, 2025
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.86.0"
".": "1.87.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-3ae9c18dd7ccfc3ac5206f24394665f563a19015cfa8847b2801a2694d012abc.yml
-openapi_spec_hash: 48175b03b58805cd5c80793c66fd54e5
-config_hash: 4caff63b74a41f71006987db702f2918
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-9e41d2d5471d2c28bff0d616f4476f5b0e6c541ef4cb51bdaaef5fdf5e13c8b2.yml
+openapi_spec_hash: 86f765e18d00e32cf2ce9db7ab84d946
+config_hash: fd2af1d5eff0995bb7dc02ac9a34851d
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,25 @@
# Changelog

+## 1.87.0 (2025-06-16)
+
+Full Changelog: [v1.86.0...v1.87.0](https://github.com/openai/openai-python/compare/v1.86.0...v1.87.0)
+
+### Features
+
+* **api:** add reusable prompt IDs ([36bfe6e](https://github.com/openai/openai-python/commit/36bfe6e8ae12a31624ba1a360d9260f0aeec448a))
+
+
+### Bug Fixes
+
+* **client:** update service_tier on `client.beta.chat.completions` ([aa488d5](https://github.com/openai/openai-python/commit/aa488d5cf210d8640f87216538d4ff79d7181f2a))
+
+
+### Chores
+
+* **internal:** codegen related update ([b1a31e5](https://github.com/openai/openai-python/commit/b1a31e5ef4387d9f82cf33f9461371651788d381))
+* **internal:** update conftest.py ([bba0213](https://github.com/openai/openai-python/commit/bba0213842a4c161f2235e526d50901a336eecef))
+* **tests:** add tests for httpx client instantiation & proxies ([bc93712](https://github.com/openai/openai-python/commit/bc9371204f457aee9ed9b6ec1b61c2084f32faf1))
+
## 1.86.0 (2025-06-10)

Full Changelog: [v1.85.0...v1.86.0](https://github.com/openai/openai-python/compare/v1.85.0...v1.86.0)
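The "reusable prompt IDs" feature in the changelog lets a Responses API call reference a prompt saved in the dashboard instead of inlining the text. A minimal sketch of the request shape, where the prompt ID, variable names, and model are hypothetical placeholders and the call itself is shown commented out:

```python
# Sketch of the new reusable-prompt parameter on the Responses API.
# "pmpt_abc123" and the "city" variable are hypothetical placeholders.
params = {
    "model": "gpt-4o",
    "prompt": {
        "id": "pmpt_abc123",            # ID of a prompt saved in the dashboard
        "variables": {"city": "Oslo"},  # substituted into the stored template
    },
}
# With a configured client this would become:
# response = client.responses.create(**params)
print(params["prompt"]["id"])
```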
1 change: 1 addition & 0 deletions api.md
@@ -750,6 +750,7 @@ from openai.types.responses import (
ResponseOutputRefusal,
ResponseOutputText,
ResponseOutputTextAnnotationAddedEvent,
+ResponsePrompt,
ResponseQueuedEvent,
ResponseReasoningDeltaEvent,
ResponseReasoningDoneEvent,
5 changes: 3 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "1.86.0"
version = "1.87.0"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
@@ -68,6 +68,7 @@ dev-dependencies = [
"types-pyaudio > 0",
"trio >=0.22.2",
"nest_asyncio==1.6.0",
"pytest-xdist>=3.6.1",
]

[tool.rye.scripts]
@@ -139,7 +140,7 @@ replacement = '[\1](https://github.com/openai/openai-python/tree/main/\g<2>)'

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "--tb=short"
addopts = "--tb=short -n auto"
xfail_strict = true
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "session"
4 changes: 4 additions & 0 deletions requirements-dev.lock
@@ -54,6 +54,8 @@ exceptiongroup==1.2.2
# via anyio
# via pytest
# via trio
+execnet==2.1.1
+# via pytest-xdist
executing==2.1.0
# via inline-snapshot
filelock==3.12.4
@@ -129,7 +131,9 @@ pyjwt==2.8.0
pyright==1.1.399
pytest==8.3.3
# via pytest-asyncio
+# via pytest-xdist
pytest-asyncio==0.24.0
+pytest-xdist==3.7.0
python-dateutil==2.8.2
# via pandas
# via time-machine
18 changes: 16 additions & 2 deletions src/openai/_base_client.py
@@ -1088,7 +1088,14 @@ def _process_response(

origin = get_origin(cast_to) or cast_to

-if inspect.isclass(origin) and issubclass(origin, BaseAPIResponse):
+if (
+    inspect.isclass(origin)
+    and issubclass(origin, BaseAPIResponse)
+    # we only want to actually return the custom BaseAPIResponse class if we're
+    # returning the raw response, or if we're not streaming SSE, as if we're streaming
+    # SSE then `cast_to` doesn't actively reflect the type we need to parse into
+    and (not stream or bool(response.request.headers.get(RAW_RESPONSE_HEADER)))
+):
if not issubclass(origin, APIResponse):
raise TypeError(f"API Response types must subclass {APIResponse}; Received {origin}")

@@ -1606,7 +1613,14 @@ async def _process_response(

origin = get_origin(cast_to) or cast_to

-if inspect.isclass(origin) and issubclass(origin, BaseAPIResponse):
+if (
+    inspect.isclass(origin)
+    and issubclass(origin, BaseAPIResponse)
+    # we only want to actually return the custom BaseAPIResponse class if we're
+    # returning the raw response, or if we're not streaming SSE, as if we're streaming
+    # SSE then `cast_to` doesn't actively reflect the type we need to parse into
+    and (not stream or bool(response.request.headers.get(RAW_RESPONSE_HEADER)))
+):
if not issubclass(origin, AsyncAPIResponse):
raise TypeError(f"API Response types must subclass {AsyncAPIResponse}; Received {origin}")

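The condition added to both `_process_response` hunks can be summarized as a small predicate (a simplified sketch, not the SDK's actual code): the custom `BaseAPIResponse` subclass is only returned when the request is not streaming SSE, or when the raw-response header was explicitly set on the request.

```python
# Simplified model of the gating added above. `is_response_cls` stands in for
# the isclass/issubclass checks on `cast_to`; `raw_header_set` stands in for
# the RAW_RESPONSE_HEADER lookup on the request headers.
def should_return_response_class(is_response_cls: bool, stream: bool, raw_header_set: bool) -> bool:
    return is_response_cls and (not stream or raw_header_set)

# Non-streaming requests keep the old behaviour:
assert should_return_response_class(True, stream=False, raw_header_set=False)
# Streaming SSE without the raw-response header now skips the custom class,
# since `cast_to` doesn't reflect the type to parse into:
assert not should_return_response_class(True, stream=True, raw_header_set=False)
# Explicitly requested raw responses still get it, even when streaming:
assert should_return_response_class(True, stream=True, raw_header_set=True)
```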
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "1.86.0" # x-release-please-version
__version__ = "1.87.0" # x-release-please-version
8 changes: 4 additions & 4 deletions src/openai/resources/beta/chat/completions.py
@@ -81,7 +81,7 @@ def parse(
presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
@@ -228,7 +228,7 @@ def stream(
presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
@@ -360,7 +360,7 @@ async def parse(
presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
@@ -507,7 +507,7 @@ def stream(
presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
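The widened `Literal` means `"scale"` is now accepted everywhere `service_tier` is taken, in both the beta and standard chat-completions resources. A sketch of a request using it, with the model name and message purely illustrative and the live call shown commented out:

```python
# Sketch: requesting the newly added "scale" service tier.
# Model and message content are illustrative placeholders.
params = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
    "service_tier": "scale",  # accepted as of 1.87.0, alongside auto/default/flex
}
# With a configured client this would become:
# completion = client.chat.completions.create(**params)
print(params["service_tier"])
```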
16 changes: 8 additions & 8 deletions src/openai/resources/chat/completions/completions.py
@@ -95,7 +95,7 @@ def create(
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream: Optional[Literal[False]] | NotGiven = NOT_GIVEN,
@@ -365,7 +365,7 @@ def create(
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
@@ -634,7 +634,7 @@ def create(
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
@@ -902,7 +902,7 @@ def create(
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream: Optional[Literal[False]] | Literal[True] | NotGiven = NOT_GIVEN,
@@ -1198,7 +1198,7 @@ async def create(
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream: Optional[Literal[False]] | NotGiven = NOT_GIVEN,
@@ -1468,7 +1468,7 @@ async def create(
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
@@ -1737,7 +1737,7 @@ async def create(
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
@@ -2005,7 +2005,7 @@ async def create(
reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
-service_tier: Optional[Literal["auto", "default", "flex"]] | NotGiven = NOT_GIVEN,
+service_tier: Optional[Literal["auto", "default", "flex", "scale"]] | NotGiven = NOT_GIVEN,
stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
store: Optional[bool] | NotGiven = NOT_GIVEN,
stream: Optional[Literal[False]] | Literal[True] | NotGiven = NOT_GIVEN,
20 changes: 12 additions & 8 deletions src/openai/resources/fine_tuning/jobs/jobs.py
@@ -84,7 +84,7 @@ def create(
Response includes details of the enqueued job including job status and the name
of the fine-tuned models once complete.

-[Learn more about fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
+[Learn more about fine-tuning](https://platform.openai.com/docs/guides/model-optimization)

Args:
model: The name of the model to fine-tune. You can select one of the
@@ -105,7 +105,8 @@
[preference](https://platform.openai.com/docs/api-reference/fine-tuning/preference-input)
format.

-See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
+See the
+[fine-tuning guide](https://platform.openai.com/docs/guides/model-optimization)
for more details.

hyperparameters: The hyperparameters used for the fine-tuning job. This value is now deprecated
@@ -142,7 +143,8 @@ def create(
Your dataset must be formatted as a JSONL file. You must upload your file with
the purpose `fine-tune`.

-See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
+See the
+[fine-tuning guide](https://platform.openai.com/docs/guides/model-optimization)
for more details.

extra_headers: Send extra headers
@@ -189,7 +191,7 @@ def retrieve(
"""
Get info about a fine-tuning job.

-[Learn more about fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
+[Learn more about fine-tuning](https://platform.openai.com/docs/guides/model-optimization)

Args:
extra_headers: Send extra headers
@@ -462,7 +464,7 @@ async def create(
Response includes details of the enqueued job including job status and the name
of the fine-tuned models once complete.

-[Learn more about fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
+[Learn more about fine-tuning](https://platform.openai.com/docs/guides/model-optimization)

Args:
model: The name of the model to fine-tune. You can select one of the
@@ -483,7 +485,8 @@
[preference](https://platform.openai.com/docs/api-reference/fine-tuning/preference-input)
format.

-See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
+See the
+[fine-tuning guide](https://platform.openai.com/docs/guides/model-optimization)
for more details.

hyperparameters: The hyperparameters used for the fine-tuning job. This value is now deprecated
@@ -520,7 +523,8 @@
Your dataset must be formatted as a JSONL file. You must upload your file with
the purpose `fine-tune`.

-See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
+See the
+[fine-tuning guide](https://platform.openai.com/docs/guides/model-optimization)
for more details.

extra_headers: Send extra headers
@@ -567,7 +571,7 @@ async def retrieve(
"""
Get info about a fine-tuning job.

-[Learn more about fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
+[Learn more about fine-tuning](https://platform.openai.com/docs/guides/model-optimization)

Args:
extra_headers: Send extra headers
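For context on the docstrings touched above, a minimal fine-tuning job request looks roughly like this. The model name and file ID are hypothetical placeholders; `model` and `training_file` are the parameters described in this section's docstrings, and the live call is shown commented out:

```python
# Sketch of a fine-tuning job request (illustrative values only).
params = {
    "model": "gpt-4o-mini",          # placeholder base model to fine-tune
    "training_file": "file-abc123",  # JSONL file uploaded with purpose "fine-tune"
}
# With a configured client this would become:
# job = client.fine_tuning.jobs.create(**params)
print(params["training_file"])
```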
24 changes: 24 additions & 0 deletions src/openai/resources/images.py
@@ -123,6 +123,8 @@ def edit(
mask: FileTypes | NotGiven = NOT_GIVEN,
model: Union[str, ImageModel, None] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
+output_compression: Optional[int] | NotGiven = NOT_GIVEN,
+output_format: Optional[Literal["png", "jpeg", "webp"]] | NotGiven = NOT_GIVEN,
quality: Optional[Literal["standard", "low", "medium", "high", "auto"]] | NotGiven = NOT_GIVEN,
response_format: Optional[Literal["url", "b64_json"]] | NotGiven = NOT_GIVEN,
size: Optional[Literal["256x256", "512x512", "1024x1024", "1536x1024", "1024x1536", "auto"]]
@@ -171,6 +173,14 @@

n: The number of images to generate. Must be between 1 and 10.

+output_compression: The compression level (0-100%) for the generated images. This parameter is only
+supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
+defaults to 100.
+
+output_format: The format in which the generated images are returned. This parameter is only
+supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
+default value is `png`.

quality: The quality of the image that will be generated. `high`, `medium` and `low` are
only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
Defaults to `auto`.
@@ -204,6 +214,8 @@
"mask": mask,
"model": model,
"n": n,
"output_compression": output_compression,
"output_format": output_format,
"quality": quality,
"response_format": response_format,
"size": size,
@@ -447,6 +459,8 @@ async def edit(
mask: FileTypes | NotGiven = NOT_GIVEN,
model: Union[str, ImageModel, None] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
+output_compression: Optional[int] | NotGiven = NOT_GIVEN,
+output_format: Optional[Literal["png", "jpeg", "webp"]] | NotGiven = NOT_GIVEN,
quality: Optional[Literal["standard", "low", "medium", "high", "auto"]] | NotGiven = NOT_GIVEN,
response_format: Optional[Literal["url", "b64_json"]] | NotGiven = NOT_GIVEN,
size: Optional[Literal["256x256", "512x512", "1024x1024", "1536x1024", "1024x1536", "auto"]]
@@ -495,6 +509,14 @@

n: The number of images to generate. Must be between 1 and 10.

+output_compression: The compression level (0-100%) for the generated images. This parameter is only
+supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
+defaults to 100.
+
+output_format: The format in which the generated images are returned. This parameter is only
+supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
+default value is `png`.

quality: The quality of the image that will be generated. `high`, `medium` and `low` are
only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
Defaults to `auto`.
@@ -528,6 +550,8 @@
"mask": mask,
"model": model,
"n": n,
"output_compression": output_compression,
"output_format": output_format,
"quality": quality,
"response_format": response_format,
"size": size,
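A sketch of an edit request using the two new parameters from this file, with the file name and prompt as placeholders and the live call shown commented out. The value ranges come straight from the docstrings above:

```python
# Sketch: image edit with the new output_format / output_compression params.
# File name and prompt are illustrative placeholders.
params = {
    "model": "gpt-image-1",         # the only model supporting these parameters
    "prompt": "Make the sky teal",
    "output_format": "webp",        # png (default), jpeg, or webp
    "output_compression": 80,       # 0-100; webp/jpeg only, defaults to 100
}
# With a configured client this would become:
# result = client.images.edit(image=open("input.png", "rb"), **params)
assert params["output_format"] in ("png", "jpeg", "webp")
assert 0 <= params["output_compression"] <= 100
```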