# Integrations: OpenAI
remsky edited this page Feb 2, 2025 · 5 revisions
You can use Kokoro as the model with the OpenAI Python client. Here's how to integrate it:
1. **Install the OpenAI Python library** (if you haven't already):

   ```sh
   pip install openai
   ```
2. **Use the OpenAI client** to generate speech:

   ```python
   from openai import OpenAI

   client = OpenAI(
       base_url="http://localhost:8880/v1",
       api_key="not-needed"
   )

   with client.audio.speech.with_streaming_response.create(
       model="kokoro",
       voice="af_sky+af_bella",  # single voice or a multiple-voicepack combo
       input="Hello world!"
   ) as response:
       response.stream_to_file("output.mp3")
   ```
Or try streaming directly to the speakers (you may have to install and/or configure PyAudio):

```python
import time

from openai import OpenAI

# No real API key is needed for a local Kokoro server
openai = OpenAI(base_url="http://localhost:8880/v1", api_key="not-needed-for-local")


def stream_to_speakers() -> None:
    # Imported here so the rest of the script works without PyAudio installed
    import pyaudio

    player_stream = pyaudio.PyAudio().open(
        format=pyaudio.paInt16, channels=1, rate=24000, output=True
    )

    start_time = time.time()

    with openai.audio.speech.with_streaming_response.create(
        model="kokoro",
        voice="af_bella+af_irulan",
        response_format="pcm",  # similar to WAV, but without a header chunk at the start
        input="""I see skies of blue and clouds of white
            The bright blessed days, the dark sacred nights
            And I think to myself
            What a wonderful world""",
    ) as response:
        print(f"Time to first byte: {int((time.time() - start_time) * 1000)}ms")
        for chunk in response.iter_bytes(chunk_size=1024):
            player_stream.write(chunk)

    print(f"Done in {int((time.time() - start_time) * 1000)}ms.")


if __name__ == "__main__":
    stream_to_speakers()
```
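If PyAudio isn't available, the same raw PCM stream can instead be wrapped in a WAV container with Python's standard `wave` module. Below is a minimal sketch: `pcm_to_wav` is a hypothetical helper (not part of this project), and the `pcm_chunks` iterable stands in for `response.iter_bytes(chunk_size=1024)` from the example above; the 24 kHz, mono, 16-bit parameters match the format assumed by the speaker example.

```python
import wave


def pcm_to_wav(pcm_chunks, path, rate=24000):
    """Wrap raw 16-bit mono PCM chunks in a WAV container."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)     # mono
        wav.setsampwidth(2)     # 16-bit samples
        wav.setframerate(rate)  # Kokoro's PCM output is assumed to be 24 kHz
        for chunk in pcm_chunks:
            wav.writeframes(chunk)


# Synthetic silence in place of the streamed response chunks:
pcm_to_wav([b"\x00\x00" * 1024], "output.wav")
```

In practice you would pass `response.iter_bytes(chunk_size=1024)` as `pcm_chunks` inside the `with` block, producing a playable `output.wav` without any audio-device dependency.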