
[inference provider] Add wavespeed.ai as an inference provider #1424


Open · wants to merge 30 commits into main from feat/wavespeedai (changes shown from 8 commits)

Commits (30):
a4d8504
add wavespeed.ai as an inference provider
arabot777 May 5, 2025
686931e
delete debug log
arabot777 May 5, 2025
0e71b88
Merge branch 'main' into feat/wavespeedai
arabot777 May 5, 2025
4461225
Merge branch 'main' into feat/wavespeedai
arabot777 May 6, 2025
e0bf580
Merge branch 'main' into feat/wavespeedai
arabot777 May 9, 2025
07af35f
Merge branch 'main' into feat/wavespeedai
arabot777 May 16, 2025
fa3afa4
Merge branch 'main' into feat/wavespeedai
arabot777 May 17, 2025
214ff99
support lora
arabot777 May 17, 2025
47c64c6
code review
arabot777 May 20, 2025
7270c5c
Merge branch 'main' into feat/wavespeedai
arabot777 May 20, 2025
ba35791
code review
arabot777 May 20, 2025
ca35eab
Merge branch 'main' into feat/wavespeedai
arabot777 May 20, 2025
80d4640
delete unused import
arabot777 May 20, 2025
77be0c6
Merge branch 'main' into feat/wavespeedai
arabot777 May 21, 2025
0c77b3b
Update packages/inference/src/lib/getProviderHelper.ts
arabot777 May 21, 2025
3ab254e
Update packages/inference/src/lib/getProviderHelper.ts
arabot777 May 21, 2025
a8fe74c
Merge branch 'main' into feat/wavespeedai
arabot777 May 22, 2025
f706e02
Merge branch 'main' into feat/wavespeedai
arabot777 May 22, 2025
47f41f0
code review modification
arabot777 May 22, 2025
0cfefe8
Merge branch 'main' into feat/wavespeedai
arabot777 May 23, 2025
f162e89
Merge branch 'main' into feat/wavespeedai
arabot777 May 23, 2025
b23a000
Merge branch 'main' into feat/wavespeedai
arabot777 May 23, 2025
6cabc5a
import js file
arabot777 May 23, 2025
71e4939
Merge branch 'main' into feat/wavespeedai
arabot777 May 24, 2025
6341233
Merge branch 'main' into feat/wavespeedai
arabot777 May 26, 2025
bf5ccb4
lora optimize and image-to-image getresponse use header
arabot777 May 26, 2025
554bd19
Merge branch 'main' into feat/wavespeedai
arabot777 May 27, 2025
8507385
Merge branch 'main' into feat/wavespeedai
arabot777 May 28, 2025
054ecb9
Merge branch 'main' into feat/wavespeedai
arabot777 May 29, 2025
1a1f672
Merge branch 'main' into feat/wavespeedai
arabot777 May 29, 2025
3 changes: 3 additions & 0 deletions packages/inference/README.md
@@ -63,6 +63,7 @@ Currently, we support the following providers:
- [Cohere](https://cohere.com)
- [Cerebras](https://cerebras.ai/)
- [Groq](https://groq.com)
- [Wavespeed.ai](https://wavespeed.ai/)

To send requests to a third-party provider, you have to pass the `provider` parameter to the inference function. The default value of the `provider` parameter is "auto", which will select the first of the providers available for the model, sorted by your preferred order in https://hf.co/settings/inference-providers.

@@ -96,6 +97,8 @@ Only a subset of models are supported when requesting third-party providers. You
- [Cohere supported models](https://huggingface.co/api/partners/cohere/models)
- [Cerebras supported models](https://huggingface.co/api/partners/cerebras/models)
- [Groq supported models](https://console.groq.com/docs/models)
- [Wavespeed.ai supported models](https://huggingface.co/api/partners/wavespeed-ai/models)
- [HF Inference API (serverless)](https://huggingface.co/models?inference=warm&sort=trending)

Contributor:

Suggested change:
- [HF Inference API (serverless)](https://huggingface.co/models?inference=warm&sort=trending)

Author:

Deleted as suggested.

❗**Important note:** To be compatible, the third-party API must adhere to the "standard" shape API we expect on HF model pages for each pipeline task type.
This is not an issue for LLMs as everyone converged on the OpenAI API anyways, but can be more tricky for other tasks like "text-to-image" or "automatic-speech-recognition" where there exists no standard API. Let us know if any help is needed or if we can make things easier for you!
6 changes: 6 additions & 0 deletions packages/inference/src/lib/getProviderHelper.ts
@@ -47,6 +47,7 @@ import type {
import * as Replicate from "../providers/replicate";
import * as Sambanova from "../providers/sambanova";
import * as Together from "../providers/together";
import * as WavespeedAI from "../providers/wavespeed-ai";
import type { InferenceProvider, InferenceTask } from "../types";

export const PROVIDERS: Record<InferenceProvider, Partial<Record<InferenceTask, TaskProviderHelper>>> = {
@@ -146,6 +147,11 @@ export const PROVIDERS: Record<InferenceProvider, Partial<Record<InferenceTask,
conversational: new Together.TogetherConversationalTask(),
"text-generation": new Together.TogetherTextGenerationTask(),
},
"wavespeed-ai": {
"text-to-image": new WavespeedAI.WavespeedAITextToImageTask(),
"text-to-video": new WavespeedAI.WavespeedAITextToVideoTask(),
"image-to-image": new WavespeedAI.WavespeedAIImageToImageTask(),
},
};

/**
1 change: 1 addition & 0 deletions packages/inference/src/providers/consts.ts
@@ -36,4 +36,5 @@ export const HARDCODED_MODEL_INFERENCE_MAPPING: Record<
replicate: {},
sambanova: {},
together: {},
"wavespeed-ai": {},
};
193 changes: 193 additions & 0 deletions packages/inference/src/providers/wavespeed-ai.ts
@@ -0,0 +1,193 @@
import { InferenceOutputError } from "../lib/InferenceOutputError";
import { ImageToImageArgs } from "../tasks";
import type { BodyParams, HeaderParams, RequestArgs, UrlParams } from "../types";
import { delay } from "../utils/delay";
import { omit } from "../utils/omit";
import { base64FromBytes } from "../utils/base64FromBytes";
import {
TaskProviderHelper,
TextToImageTaskHelper,
TextToVideoTaskHelper,
ImageToImageTaskHelper,
} from "./providerHelper";

Contributor:

We use `import type` when the import is only used as a type.

Suggested change:
import { InferenceOutputError } from "../lib/InferenceOutputError";
import { ImageToImageArgs } from "../tasks";
import type { BodyParams, HeaderParams, RequestArgs, UrlParams } from "../types";
import { delay } from "../utils/delay";
import { omit } from "../utils/omit";
import { base64FromBytes } from "../utils/base64FromBytes";
import {
TaskProviderHelper,
TextToImageTaskHelper,
TextToVideoTaskHelper,
ImageToImageTaskHelper,
} from "./providerHelper";
import { InferenceOutputError } from "../lib/InferenceOutputError";
import type { ImageToImageArgs } from "../tasks";
import type { BodyParams, HeaderParams, RequestArgs, UrlParams } from "../types";
import { delay } from "../utils/delay";
import { omit } from "../utils/omit";
import { base64FromBytes } from "../utils/base64FromBytes";
import type {
TaskProviderHelper,
TextToImageTaskHelper,
TextToVideoTaskHelper,
ImageToImageTaskHelper,
} from "./providerHelper";

Author:

Modified as suggested.

const WAVESPEEDAI_API_BASE_URL = "https://api.wavespeed.ai";

/**
* Common response structure for all WaveSpeed AI API responses
*/
interface WaveSpeedAICommonResponse<T> {
code: number;
message: string;
data: T;
}

Contributor:

This abstraction is not necessary IMO, let's remove it (see my other comment).

Suggested change:
/**
* Common response structure for all WaveSpeed AI API responses
*/
interface WaveSpeedAICommonResponse<T> {
code: number;
message: string;
data: T;
}

Author:

Modified as suggested.

/**
* Response structure for task status and results
*/
interface WaveSpeedAITaskResponse {
id: string;
model: string;
outputs: string[];
urls: {
get: string;
};
has_nsfw_contents: boolean[];
status: "created" | "processing" | "completed" | "failed";
created_at: string;
error: string;
executionTime: number;
timings: {
inference: number;
};
}

/**
* Response structure for initial task submission
*/
interface WaveSpeedAISubmitResponse {
id: string;
urls: {
get: string;
};
}

type WaveSpeedAIResponse<T = WaveSpeedAITaskResponse> = WaveSpeedAICommonResponse<T>;
Contributor:

I'm not sure this type alias is needed, can we remove it?

Suggested change:
type WaveSpeedAIResponse<T = WaveSpeedAITaskResponse> = WaveSpeedAICommonResponse<T>;

WaveSpeedAICommonResponse can be renamed to WaveSpeedAIResponse

Author:

This type is needed and is used in two places; it is uncertain whether it will be used again in the future.
- It follows the DRY (Don't Repeat Yourself) principle.
- It provides better type safety (through default generic parameters).
- It makes the code more readable and maintainable.

Contributor:

Following the previous comment: let's remove one level of abstraction.

Suggested change:
type WaveSpeedAIResponse<T = WaveSpeedAITaskResponse> = WaveSpeedAICommonResponse<T>;
interface WaveSpeedAIResponse {
code: number;
message: string;
data: WaveSpeedAITaskResponse;
}

Author:

Modified as suggested.

abstract class WavespeedAITask extends TaskProviderHelper {
private accessToken: string | undefined;

constructor(url?: string) {
super("wavespeed-ai", url || WAVESPEEDAI_API_BASE_URL);
}

makeRoute(params: UrlParams): string {
return `/api/v2/${params.model}`;
}
preparePayload(params: BodyParams): Record<string, unknown> {
const payload: Record<string, unknown> = {
...omit(params.args, ["inputs", "parameters"]),
...(params.args.parameters as Record<string, unknown>),
prompt: params.args.inputs,
};
// Add LoRA support if adapter is specified in the mapping
Contributor:

We don't need to cast into Record<string, unknown> if the params have the proper type. ImageToImageArgs, TextToImageArgs, and TextToVideoArgs need to be imported from "../tasks".

Suggested change:
preparePayload(params: BodyParams): Record<string, unknown> {
const payload: Record<string, unknown> = {
...omit(params.args, ["inputs", "parameters"]),
...(params.args.parameters as Record<string, unknown>),
prompt: params.args.inputs,
};
// Add LoRA support if adapter is specified in the mapping
preparePayload(params: BodyParams<ImageToImageArgs | TextToImageArgs | TextToVideoArgs>): Record<string, unknown> {
const payload: Record<string, unknown> = {
...omit(params.args, ["inputs", "parameters"]),
...params.args.parameters,
prompt: params.args.inputs,
};
// Add LoRA support if adapter is specified in the mapping

Author:

Modified as suggested.

if (params.mapping?.adapter === "lora" && params.mapping.adapterWeightsPath) {
payload.loras = [
{
path: params.mapping.adapterWeightsPath,
Contributor:

For reference, adapterWeightsPath is the path to the LoRA weights inside the associated HF repo. E.g., for nerijs/pixel-art-xl, it will be "pixel-art-xl.safetensors". Let's make sure that is indeed what your API expects when running LoRAs.

Author:

Here I see that fal concatenates its endpoint with the HF path. Can I directly set adapterWeightsPath to a LoRA HTTP address, or any other address?

Author:

In the test cases, I tested it this way: adapterWeightsPath was passed directly as the LoRA input parameter.

"wavespeed-ai/flux-dev-lora": {
	hfModelId: "wavespeed-ai/flux-dev-lora",
	providerId: "wavespeed-ai/flux-dev-lora",
	status: "live",
	task: "text-to-image",
	adapter: "lora",
	adapterWeightsPath:
		"https://d32s1zkpjdc4b1.cloudfront.net/predictions/599f3739f5354afc8a76a12042736bfd/1.safetensors",
},
"wavespeed-ai/flux-dev-lora-ultra-fast": {
	hfModelId: "wavespeed-ai/flux-dev-lora-ultra-fast",
	providerId: "wavespeed-ai/flux-dev-lora-ultra-fast",
	status: "live",
	task: "text-to-image",
	adapter: "lora",
	adapterWeightsPath: "linoyts/yarn_art_Flux_LoRA",
},

In the WaveSpeed task it looks like this: (screenshots omitted)

However, I'm not sure whether the LoRA input parameter HF submits must be the short in-repo file path of the HF model, which the code then concatenates with the HF address. If that is the specification, I can implement it the same way as fal.

Contributor:

I think your API can just take the HF model id as the loras path, right?

Suggested change:
path: params.mapping.adapterWeightsPath,
path: params.mapping.hfModelId,

As mentioned by @SBrandeis, this part depends on what your API expects as inputs when using LoRA weights.

Author:

Yes, you're correct. In the example, linoyts/yarn_art_Flux_LoRA is the HF LoRA model id; we will automatically match and download the HF model.

Author:

I completed the modification and the test case ran successfully.

scale: 1, // Default scale value
},
];
}
return payload;
}
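The preparePayload logic above, including the LoRA branch, can be exercised standalone. The following sketch uses simplified types and a hypothetical buildPayload name; the real method receives BodyParams and the provider mapping from the package:

```typescript
// Hypothetical standalone sketch of the payload-building logic above.
// Assumptions: args carries `inputs`/`parameters` plus extra top-level fields,
// and mapping carries the adapter fields from the model inference mapping.
interface Mapping {
  adapter?: string;
  adapterWeightsPath?: string;
}

function buildPayload(
  args: { inputs: string; parameters?: Record<string, unknown>; [k: string]: unknown },
  mapping?: Mapping
): Record<string, unknown> {
  // Drop the task-generic keys, then spread parameters and rename inputs -> prompt.
  const { inputs, parameters, ...rest } = args;
  const payload: Record<string, unknown> = { ...rest, ...parameters, prompt: inputs };
  // Add LoRA support if an adapter is specified in the mapping.
  if (mapping?.adapter === "lora" && mapping.adapterWeightsPath) {
    payload.loras = [{ path: mapping.adapterWeightsPath, scale: 1 }];
  }
  return payload;
}
```

With a mapping such as the flux-dev-lora-ultra-fast entry shown later in this PR, the payload gains a `loras` array while `inputs`/`parameters` are flattened away.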

override prepareHeaders(params: HeaderParams, isBinary: boolean): Record<string, string> {
this.accessToken = params.accessToken;
const headers: Record<string, string> = { Authorization: `Bearer ${params.accessToken}` };
if (!isBinary) {
headers["Content-Type"] = "application/json";
}
return headers;
}
Contributor:

This is the same behavior as the blanket implementation here:
https://github.com/arabot777/huggingface.js/blob/f706e02d6128f559bd5551072344ff6e31b9c4be/packages/inference/src/providers/providerHelper.ts#L114-L124

No need for an override IMO.

Suggested change:
override prepareHeaders(params: HeaderParams, isBinary: boolean): Record<string, string> {
this.accessToken = params.accessToken;
const headers: Record<string, string> = { Authorization: `Bearer ${params.accessToken}` };
if (!isBinary) {
headers["Content-Type"] = "application/json";
}
return headers;
}

Author (May 22, 2025):

I removed this logic at first, but the getResponse method of imageToImage.ts did not pass in header information (screenshot omitted). I have to override prepareHeaders here and assign this.accessToken = params.accessToken to ensure the complete access-key information in the header can be passed on when getResponse is called (screenshot omitted).

Contributor:

I'd rather update imageToImage to be able to pass headers to getResponse:
export async function imageToImage(args: ImageToImageArgs, options?: Options): Promise<Blob> {
	const provider = await resolveProvider(args.provider, args.model, args.endpointUrl);
	const providerHelper = getProviderHelper(provider, "image-to-image");
	const payload = await providerHelper.preparePayloadAsync(args);
	const { data: res } = await innerRequest<Blob>(payload, providerHelper, {
		...options,
		task: "image-to-image",
	});
	const { url, info } = await makeRequestOptions(args, providerHelper, { ...options, task: "image-to-image" });
	return providerHelper.getResponse(res, url, info.headers as Record<string, string>);
}

rather than overriding prepareHeaders and doing this.accessToken = params.accessToken

Author:
Your suggestion makes sense. Initially, this was a common/public function, so I took a minimalistic approach and didn't modify it. Now, let me try making some changes here.

Author:

I completed the modification and the test case ran successfully.


override async getResponse(
response: WaveSpeedAIResponse<WaveSpeedAISubmitResponse>,
url?: string,
headers?: Record<string, string>
): Promise<Blob> {
if (!headers && this.accessToken) {
headers = { Authorization: `Bearer ${this.accessToken}` };
}
if (!headers) {
throw new InferenceOutputError("Headers are required for WaveSpeed AI API calls");
}

const resultUrl = response.data.urls.get;

// Poll for results until completion
while (true) {
const resultResponse = await fetch(resultUrl, { headers });

if (!resultResponse.ok) {
throw new InferenceOutputError(`Failed to get result: ${resultResponse.statusText}`);
}

const result: WaveSpeedAIResponse = await resultResponse.json();
if (result.code !== 200) {
throw new InferenceOutputError(`API request failed with code ${result.code}: ${result.message}`);
}

const taskResult = result.data;

switch (taskResult.status) {
case "completed": {
// Get the video data from the first output URL
if (!taskResult.outputs?.[0]) {
throw new InferenceOutputError("No video URL in completed response");
}
const videoResponse = await fetch(taskResult.outputs[0]);
if (!videoResponse.ok) {
throw new InferenceOutputError("Failed to fetch video data");
}
return await videoResponse.blob();
Contributor:

From what I understand, the payload can be something other than a video (e.g. an image). Let's update the error message to reflect that.

Author:

Yes, I revised it.

}
case "failed": {
throw new InferenceOutputError(taskResult.error || "Task failed");
}
case "processing":
case "created":
// Wait before polling again
await delay(100);
continue;

default: {
throw new InferenceOutputError(`Unknown status: ${taskResult.status}`);
}
}
}
}
}
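The getResponse implementation above submits a task and then polls the result URL until it reaches a terminal status. The polling pattern, decoupled from fetch so it can run without the network, can be sketched as follows (pollUntilDone, TaskStatus, and the injected check callback are illustrative names, not part of the package):

```typescript
// Hypothetical sketch of the poll-until-terminal pattern used in getResponse.
type TaskStatus = "created" | "processing" | "completed" | "failed";

interface TaskSnapshot {
  status: TaskStatus;
  outputs?: string[];
  error?: string;
}

const delay = (ms: number): Promise<void> => new Promise((resolve) => setTimeout(resolve, ms));

async function pollUntilDone(
  check: () => Promise<TaskSnapshot>, // injected in place of fetch(resultUrl, { headers })
  intervalMs = 100
): Promise<string> {
  for (;;) {
    const result = await check();
    switch (result.status) {
      case "completed":
        // The first output URL points at the generated asset (image or video).
        if (!result.outputs?.[0]) throw new Error("No output URL in completed response");
        return result.outputs[0];
      case "failed":
        throw new Error(result.error || "Task failed");
      default:
        // "created" | "processing": wait before polling again.
        await delay(intervalMs);
    }
  }
}
```

A fixed 100 ms interval matches the code above; a production version might add a timeout or backoff so a stuck task cannot poll forever.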

export class WavespeedAITextToImageTask extends WavespeedAITask implements TextToImageTaskHelper {
constructor() {
super(WAVESPEEDAI_API_BASE_URL);
}
}

export class WavespeedAITextToVideoTask extends WavespeedAITask implements TextToVideoTaskHelper {
constructor() {
super(WAVESPEEDAI_API_BASE_URL);
}
}

export class WavespeedAIImageToImageTask extends WavespeedAITask implements ImageToImageTaskHelper {
constructor() {
super(WAVESPEEDAI_API_BASE_URL);
}

async preparePayloadAsync(args: ImageToImageArgs): Promise<RequestArgs> {
if (!args.parameters) {
return {
...args,
model: args.model,
data: args.inputs,
};
} else {
return {
...args,
inputs: base64FromBytes(
new Uint8Array(args.inputs instanceof ArrayBuffer ? args.inputs : await (args.inputs as Blob).arrayBuffer())
),
Contributor:

Does the WaveSpeed API support base64-encoded images as inputs?

Author:

Yes.

};
}
}

override preparePayload(params: BodyParams): Record<string, unknown> {
return {
...omit(params.args, ["inputs", "parameters"]),
...(params.args.parameters as Record<string, unknown>),
image: params.args.inputs,
};
}
Contributor:

I think only one of the two (preparePayload or preparePayloadAsync) should be responsible for building the payload; I'd rather move the rename of inputs to image into preparePayloadAsync and have preparePayload as dumb as possible.

cc @hanouticelina, would love your opinion on that specific point.

Author:

I only kept the preparePayloadAsync function.

Contributor:

> I think only one of the two (preparePayload or preparePayloadAsync) should be responsible for building the payload; I'd rather move the rename of inputs to image into preparePayloadAsync and have preparePayload as dumb as possible.

Yes, agreed!

}
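For reference, the omit utility imported at the top of wavespeed-ai.ts strips the task-generic keys (inputs, parameters) before provider-specific fields are added. A minimal sketch of such a helper follows; this is an assumption for illustration, and the package's actual utils/omit may differ:

```typescript
// Hypothetical sketch of an `omit` helper: returns a shallow copy of obj
// without the listed keys, preserving the remaining property types.
function omit<T extends Record<string, unknown>, K extends keyof T>(obj: T, keys: K[]): Omit<T, K> {
  const dropped = new Set<string>(keys as string[]);
  // Keep only the entries whose key was not requested for removal.
  return Object.fromEntries(Object.entries(obj).filter(([k]) => !dropped.has(k))) as Omit<T, K>;
}
```

Used as in the provider code, `omit(args, ["inputs", "parameters"])` leaves only the extra top-level fields to be spread into the payload.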
1 change: 1 addition & 0 deletions packages/inference/src/types.ts
@@ -55,6 +55,7 @@ export const INFERENCE_PROVIDERS = [
"replicate",
"sambanova",
"together",
"wavespeed-ai",
] as const;

export const PROVIDERS_OR_POLICIES = [...INFERENCE_PROVIDERS, "auto"] as const;
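For context, the `as const` assertion on this list is what allows a string-literal union to be derived from it, so adding "wavespeed-ai" here widens the accepted provider values everywhere the union is used. A simplified standalone sketch of that pattern (abbreviated provider list; not the package's exact code):

```typescript
// The "as const" tuple gives each entry a literal type; indexing the tuple
// type with [number] derives the union of those literals.
const INFERENCE_PROVIDERS = ["replicate", "sambanova", "together", "wavespeed-ai"] as const;

type InferenceProvider = (typeof INFERENCE_PROVIDERS)[number];
// "replicate" | "sambanova" | "together" | "wavespeed-ai"

const PROVIDERS_OR_POLICIES = [...INFERENCE_PROVIDERS, "auto"] as const;

// Compile-time check: only listed providers (or "auto") are assignable.
const example: (typeof PROVIDERS_OR_POLICIES)[number] = "wavespeed-ai";
```

A value outside the list, such as "not-a-provider", would fail to type-check against InferenceProvider.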
115 changes: 115 additions & 0 deletions packages/inference/test/InferenceClient.spec.ts
@@ -2023,4 +2023,119 @@ describe.skip("InferenceClient", () => {
},
TIMEOUT
);
describe.concurrent(
"Wavespeed AI",
() => {
const client = new InferenceClient(env.HF_WAVESPEED_KEY ?? "dummy");

HARDCODED_MODEL_INFERENCE_MAPPING["wavespeed-ai"] = {
"wavespeed-ai/flux-schnell": {
hfModelId: "wavespeed-ai/flux-schnell",
providerId: "wavespeed-ai/flux-schnell",
status: "live",
task: "text-to-image",
},
"wavespeed-ai/wan-2.1/t2v-480p": {
hfModelId: "wavespeed-ai/wan-2.1/t2v-480p",
providerId: "wavespeed-ai/wan-2.1/t2v-480p",
status: "live",
task: "text-to-video",
},
"wavespeed-ai/hidream-e1-full": {
hfModelId: "wavespeed-ai/hidream-e1-full",
providerId: "wavespeed-ai/hidream-e1-full",
status: "live",
task: "image-to-image",
},
"wavespeed-ai/wan-2.1/i2v-480p": {
hfModelId: "wavespeed-ai/wan-2.1/i2v-480p",
providerId: "wavespeed-ai/wan-2.1/i2v-480p",
status: "live",
task: "image-to-video",
Contributor:

This task is not supported in the client code; let's remove it for now.

Author:

Deleted.

},
"wavespeed-ai/flux-dev-lora": {
hfModelId: "wavespeed-ai/flux-dev-lora",
providerId: "wavespeed-ai/flux-dev-lora",
status: "live",
task: "text-to-image",
adapter: "lora",
adapterWeightsPath:
"https://d32s1zkpjdc4b1.cloudfront.net/predictions/599f3739f5354afc8a76a12042736bfd/1.safetensors",
},
"wavespeed-ai/flux-dev-lora-ultra-fast": {
hfModelId: "wavespeed-ai/flux-dev-lora-ultra-fast",
providerId: "wavespeed-ai/flux-dev-lora-ultra-fast",
status: "live",
task: "text-to-image",
adapter: "lora",
adapterWeightsPath: "linoyts/yarn_art_Flux_LoRA",
},
};

it(`textToImage - wavespeed-ai/flux-schnell`, async () => {
const res = await client.textToImage({
model: "wavespeed-ai/flux-schnell",
provider: "wavespeed-ai",
inputs:
"Cute boy with a hat, exploring nature, holding a telescope, backpack, surrounded by flowers, cartoon style, vibrant colors.",
});
expect(res).toBeInstanceOf(Blob);
});

it(`textToImage - wavespeed-ai/flux-dev-lora`, async () => {
const res = await client.textToImage({
model: "wavespeed-ai/flux-dev-lora",
provider: "wavespeed-ai",
inputs:
"Cute boy with a hat, exploring nature, holding a telescope, backpack, surrounded by flowers, cartoon style, vibrant colors.",
});
expect(res).toBeInstanceOf(Blob);
});

it(`textToImage - wavespeed-ai/flux-dev-lora-ultra-fast`, async () => {
const res = await client.textToImage({
model: "wavespeed-ai/flux-dev-lora-ultra-fast",
provider: "wavespeed-ai",
inputs:
"Cute boy with a hat, exploring nature, holding a telescope, backpack, surrounded by flowers, cartoon style, vibrant colors.",
});
expect(res).toBeInstanceOf(Blob);
});

it(`textToVideo - wavespeed-ai/wan-2.1/t2v-480p`, async () => {
const res = await client.textToVideo({
model: "wavespeed-ai/wan-2.1/t2v-480p",
provider: "wavespeed-ai",
inputs:
"A cool street dancer, wearing a baggy hoodie and hip-hop pants, dancing in front of a graffiti wall, night neon background, quick camera cuts, urban trends.",
parameters: {
guidance_scale: 5,
num_inference_steps: 30,
seed: -1,
},
duration: 5,
enable_safety_checker: true,
flow_shift: 2.9,
size: "480*832",
});
expect(res).toBeInstanceOf(Blob);
});

it(`imageToImage - wavespeed-ai/hidream-e1-full`, async () => {
const res = await client.imageToImage({
model: "wavespeed-ai/hidream-e1-full",
provider: "wavespeed-ai",
inputs: new Blob([readTestFile("cheetah.png")], { type: "image/png" }),
parameters: {
prompt: "The leopard chases its prey",
guidance_scale: 5,
num_inference_steps: 30,
seed: -1,
},
});
expect(res).toBeInstanceOf(Blob);
});
},
60000 * 5
);
});