[inference provider] Add wavespeed.ai as an inference provider #1424

base: main
Conversation
Hello, thank you for your contribution!
The code is of great quality overall - I left a few comments regarding our code style.
Please make sure the client can be used to query your API for all supported tasks, and that the payloads match what your API expects.
Thanks again!
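For instance, a smoke test along these lines should work for each supported task once the provider is wired up (a minimal sketch - the provider key `"wavespeed"` and the model ID are illustrative assumptions, not values confirmed in this PR):

```ts
import { InferenceClient } from "@huggingface/inference";

// Minimal sketch: the provider key "wavespeed" and the model ID below are
// assumptions for illustration, not values confirmed in this PR.
const client = new InferenceClient(process.env.HF_TOKEN);

const image = await client.textToImage({
	model: "wavespeed-ai/flux-dev",
	provider: "wavespeed",
	inputs: "An astronaut riding a horse",
});
// `image` is a Blob containing the generated image bytes.
```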
- [HF Inference API (serverless)](https://huggingface.co/models?inference=warm&sort=trending) |
```ts
hfModelId: "wavespeed-ai/wan-2.1/i2v-480p",
providerId: "wavespeed-ai/wan-2.1/i2v-480p",
status: "live",
task: "image-to-video",
```
this task is not supported in the client code - let's remove it for now
```ts
import { InferenceOutputError } from "../lib/InferenceOutputError";
import { ImageToImageArgs } from "../tasks";
import type { BodyParams, HeaderParams, RequestArgs, UrlParams } from "../types";
import { delay } from "../utils/delay";
import { omit } from "../utils/omit";
import { base64FromBytes } from "../utils/base64FromBytes";
import {
	TaskProviderHelper,
	TextToImageTaskHelper,
	TextToVideoTaskHelper,
	ImageToImageTaskHelper,
} from "./providerHelper";
```
We use `import type` when the import is only used as a type.
Suggested change:

```diff
 import { InferenceOutputError } from "../lib/InferenceOutputError";
-import { ImageToImageArgs } from "../tasks";
+import type { ImageToImageArgs } from "../tasks";
 import type { BodyParams, HeaderParams, RequestArgs, UrlParams } from "../types";
 import { delay } from "../utils/delay";
 import { omit } from "../utils/omit";
 import { base64FromBytes } from "../utils/base64FromBytes";
-import {
+import type {
 	TaskProviderHelper,
 	TextToImageTaskHelper,
 	TextToVideoTaskHelper,
 	ImageToImageTaskHelper,
 } from "./providerHelper";
```
```ts
type WaveSpeedAIResponse<T = WaveSpeedAITaskResponse> = WaveSpeedAICommonResponse<T>;
```
I'm not sure this type alias is needed - can we remove it? `WaveSpeedAICommonResponse` can then be renamed to `WaveSpeedAIResponse`.
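A sketch of what that rename might look like (the interface body here is hypothetical - only the type names come from this thread):

```ts
// Hypothetical shape for illustration; the real fields live in the PR's
// provider file. Only the type names come from this review comment.
interface WaveSpeedAIResponse<T = WaveSpeedAITaskResponse> {
	code: number;
	message: string;
	data: T;
}
```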
case "completed": { | ||
// Get the video data from the first output URL | ||
if (!taskResult.outputs?.[0]) { | ||
throw new InferenceOutputError("No video URL in completed response"); | ||
} | ||
const videoResponse = await fetch(taskResult.outputs[0]); | ||
if (!videoResponse.ok) { | ||
throw new InferenceOutputError("Failed to fetch video data"); | ||
} | ||
return await videoResponse.blob(); |
From what I understand, the payload can be something other than a video (e.g. an image).
Let's update the error messages to reflect that.
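For example, the branch could be made media-agnostic along these lines (a sketch - the exact wording of the messages is a suggestion, not final code):

```ts
case "completed": {
	// The first output URL may point to a video, an image, or another
	// media type, so keep the error messages media-agnostic.
	if (!taskResult.outputs?.[0]) {
		throw new InferenceOutputError("No output URL in completed response");
	}
	const mediaResponse = await fetch(taskResult.outputs[0]);
	if (!mediaResponse.ok) {
		throw new InferenceOutputError("Failed to fetch output data");
	}
	return await mediaResponse.blob();
}
```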
```ts
if (!args.parameters) {
	return {
		...args,
		model: args.model,
		data: args.inputs,
	};
} else {
	return {
		...args,
		inputs: base64FromBytes(
			new Uint8Array(args.inputs instanceof ArrayBuffer ? args.inputs : await (args.inputs as Blob).arrayBuffer())
		),
	};
}
}
```
```ts
override preparePayload(params: BodyParams): Record<string, unknown> {
	return {
		...omit(params.args, ["inputs", "parameters"]),
		...(params.args.parameters as Record<string, unknown>),
		image: params.args.inputs,
	};
}
```
I think only one of the two (`preparePayload` or `preparePayloadAsync`) should be responsible for building the payload - meaning I'd rather move the rename of `inputs` to `image` into `preparePayloadAsync` and have `preparePayload` be as dumb as possible.

cc @hanouticelina - would love your opinion on that specific point
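A sketch of that split, assuming `preparePayloadAsync` runs before `preparePayload` so the rename can happen once, upstream (the helper names come from this PR; the exact shapes and call order are assumptions):

```ts
// Sketch only: preparePayloadAsync owns the inputs -> image rename...
async preparePayloadAsync(args: ImageToImageArgs): Promise<RequestArgs> {
	return {
		...omit(args, ["inputs"]),
		image: base64FromBytes(
			new Uint8Array(args.inputs instanceof ArrayBuffer ? args.inputs : await (args.inputs as Blob).arrayBuffer())
		),
	} as unknown as RequestArgs;
}

// ...while preparePayload stays a dumb pass-through that only flattens parameters.
override preparePayload(params: BodyParams): Record<string, unknown> {
	return {
		...omit(params.args, ["parameters"]),
		...(params.args.parameters as Record<string, unknown>),
	};
}
```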
```ts
inputs: base64FromBytes(
	new Uint8Array(args.inputs instanceof ArrayBuffer ? args.inputs : await (args.inputs as Blob).arrayBuffer())
),
```
Does the wavespeed API support base64-encoded images as inputs?
What’s in this PR
WaveSpeedAI is a high-performance AI image and video generation platform offering industry-leading generation speeds, and we would now like to be listed as an Inference Provider on the Hugging Face Hub.
The JS client integration was completed based on the inference-providers documentation and passes the tests. I am submitting the PR now and look forward to further communication with you.
Test