Add dat1.co as an inference provider #1460


Open
wants to merge 7 commits into main

Conversation

ArsenyYankovsky

🚀 Added dat1.co as a Hugging Face inference provider

Dat1.co is a serverless inference provider focused on low cold-start latency, high performance, and privacy.

What we do:

  • provide access to popular models while protecting our clients’ privacy and complying with major data-protection laws (GDPR, CCPA, etc.)
  • serve a wide variety of image-generation models, using our low cold-start times to switch between them efficiently

Changes

  • README.md - added links to the dat1.co website and to the list of supported models on Hugging Face
  • packages/inference/src/lib/getProviderHelper.ts, packages/inference/src/providers/consts.ts, packages/inference/src/providers/dat1.ts, packages/inference/src/types.ts - integrated dat1.co with the Hugging Face JS client
  • packages/inference/test/InferenceClient.spec.ts - added automated tests that can be run with a fresh dat1.co account without adding a payment method
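The files above follow the usual pattern for third-party providers in the JS client: the provider is registered under a name, and a helper maps each task to the provider's endpoint. The sketch below illustrates that routing idea only; the base URL, route paths, and model id are placeholder assumptions, not taken from this PR or from dat1.co's actual API.

```typescript
// Illustrative sketch of provider registration and task routing,
// loosely modeled on the provider-helper pattern in packages/inference.
// The base URL, routes, and model id are placeholder assumptions.

type Task = "chat-completion" | "text-to-image";

interface ProviderConfig {
  baseUrl: string; // provider API root (assumed value below)
  makeRoute: (task: Task, model: string) => string;
}

const PROVIDERS: Record<string, ProviderConfig> = {
  dat1: {
    baseUrl: "https://api.dat1.co", // assumption, not the documented endpoint
    makeRoute: (task, model) =>
      task === "chat-completion"
        ? `/v1/chat/completions?model=${encodeURIComponent(model)}`
        : `/v1/images/generations?model=${encodeURIComponent(model)}`,
  },
};

function buildRequestUrl(provider: string, task: Task, model: string): string {
  const cfg = PROVIDERS[provider];
  if (!cfg) throw new Error(`Unknown inference provider: ${provider}`);
  return cfg.baseUrl + cfg.makeRoute(task, model);
}

console.log(buildRequestUrl("dat1", "chat-completion", "meta-llama/Llama-3.1-8B-Instruct"));
```

In the real client, the provider name is what users pass as the `provider` option; the helper resolves it to the correct URL and payload shape for each task.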

Testing

We added automated tests and ran them using a freshly created dat1.co account (which doesn't require a payment method until you reach a certain usage threshold).

pnpm test -- --reporter verbose test/InferenceClient.spec.ts -t "dat1"

@huggingface/[email protected] test D:\dev\workspace\huggingface.js\packages\inference
vitest run --config vitest.config.mts "--reporter" "verbose" "\" "test/InferenceClient.spec.ts" "-t" "dat1"

RUN v0.34.6 D:/dev/workspace/huggingface.js/packages/inference

stderr | unknown test
Set HF_TOKEN in the env to run the tests for better rate limits

   ↓ chatCompletion [skipped]
   ↓ chatCompletion stream [skipped]
   ↓ textToImage [skipped]
   ↓ textGeneration [skipped]
 ↓ Nebius (4) [skipped]
   ↓ chatCompletion [skipped]
   ↓ chatCompletion stream [skipped]
   ↓ textToImage [skipped]
   ↓ featureExtraction [skipped]

↓ src/vendor/fetch-event-source/parse.spec.ts (17) [skipped]
↓ parse (17) [skipped]
↓ getLines (11) [skipped]
↓ single line [skipped]
↓ multiple lines [skipped]
↓ single line split across multiple arrays [skipped]
↓ multiple lines split across multiple arrays [skipped]
↓ new line [skipped]
↓ comment line [skipped]
↓ line with no field [skipped]
↓ line with multiple colons [skipped]
↓ single byte array with multiple lines separated by \n [skipped]
↓ single byte array with multiple lines separated by \r [skipped]
↓ single byte array with multiple lines separated by \r\n [skipped]
↓ getMessages (6) [skipped]
↓ happy path [skipped]
↓ skip unknown fields [skipped]
↓ ignore non-integer retry [skipped]
↓ skip comment-only messages [skipped]
↓ should append data split across multiple lines [skipped]
↓ should reset id if sent multiple times [skipped]
✓ test/InferenceClient.spec.ts (106) 21740ms
✓ InferenceClient (106) 21740ms
↓ backward compatibility (1) [skipped]
↓ works with old HfInference name [skipped]
↓ HF Inference (49) [skipped]
↓ throws error if model does not exist [skipped]
↓ fillMask [skipped]
↓ works without model [skipped]
↓ summarization [skipped]
↓ questionAnswering [skipped]
↓ tableQuestionAnswering [skipped]
↓ documentQuestionAnswering [skipped]
↓ documentQuestionAnswering with non-array output [skipped]
↓ visualQuestionAnswering [skipped]
↓ textClassification [skipped]
↓ textGeneration - gpt2 [skipped]
↓ textGeneration - openai-community/gpt2 [skipped]
↓ textGenerationStream - meta-llama/Llama-3.2-3B [skipped]
↓ textGenerationStream - catch error [skipped]
↓ textGenerationStream - Abort [skipped]
↓ tokenClassification [skipped]
↓ translation [skipped]
↓ zeroShotClassification [skipped]
↓ sentenceSimilarity [skipped]
↓ FeatureExtraction [skipped]
↓ FeatureExtraction - auto-compatibility sentence similarity [skipped]
↓ FeatureExtraction - facebook/bart-base [skipped]
↓ FeatureExtraction - facebook/bart-base, list input [skipped]
↓ automaticSpeechRecognition [skipped]
↓ audioClassification [skipped]
↓ audioToAudio [skipped]
↓ textToSpeech [skipped]
↓ imageClassification [skipped]
↓ zeroShotImageClassification [skipped]
↓ objectDetection [skipped]
↓ imageSegmentation [skipped]
↓ imageToImage [skipped]
↓ imageToImage blob data [skipped]
↓ textToImage [skipped]
↓ textToImage with parameters [skipped]
↓ imageToText [skipped]
↓ request - openai-community/gpt2 [skipped]
↓ tabularRegression [skipped]
↓ tabularClassification [skipped]
↓ endpoint - makes request to specified endpoint [skipped]
↓ endpoint - makes request to specified endpoint - alternative syntax [skipped]
↓ chatCompletion modelId - OpenAI Specs [skipped]
↓ chatCompletionStream modelId - OpenAI Specs [skipped]
↓ chatCompletionStream modelId Fail - OpenAI Specs [skipped]
↓ chatCompletion - OpenAI Specs [skipped]
↓ chatCompletionStream - OpenAI Specs [skipped]
↓ custom mistral - OpenAI Specs [skipped]
↓ custom openai - OpenAI Specs [skipped]
↓ OpenAI client side routing - model should have provider as prefix [skipped]
✓ dat1 (3) 21740ms
✓ chatCompletion 4946ms
✓ chatCompletion stream 9329ms
✓ textToImage 21702ms
↓ Fal AI (4) [skipped]
↓ textToImage - black-forest-labs/FLUX.1-schnell [skipped]
↓ textToImage - SD LoRAs [skipped]
↓ textToImage - Flux LoRAs [skipped]
↓ automaticSpeechRecognition - openai/whisper-large-v3 [skipped]
↓ Featherless (3) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ textGeneration [skipped]
↓ Replicate (11) [skipped]
↓ textToImage canonical - black-forest-labs/FLUX.1-schnell [skipped]
↓ textToImage canonical - black-forest-labs/FLUX.1-dev [skipped]
↓ textToImage - all Flux LoRAs [skipped]
↓ textToImage canonical - stabilityai/stable-diffusion-3.5-large-turbo [skipped]
↓ textToImage versioned - ByteDance/SDXL-Lightning [skipped]
↓ textToImage versioned - ByteDance/Hyper-SD [skipped]
↓ textToImage versioned - playgroundai/playground-v2.5-1024px-aesthetic [skipped]
↓ textToImage versioned - stabilityai/stable-diffusion-xl-base-1.0 [skipped]
↓ textToSpeech versioned [skipped]
↓ textToSpeech OuteTTS - usually Cold [skipped]
↓ textToSpeech Kokoro [skipped]
↓ SambaNova (3) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ featureExtraction [skipped]
↓ Together (4) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ textToImage [skipped]
↓ textGeneration [skipped]
↓ Nebius (4) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ textToImage [skipped]
↓ featureExtraction [skipped]
↓ 3rd party providers (1) [skipped]
↓ chatCompletion - fails with unsupported model [skipped]
↓ Fireworks (2) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ Hyperbolic (4) [skipped]
↓ chatCompletion - hyperbolic [skipped]
↓ chatCompletion stream [skipped]
↓ textToImage [skipped]
↓ textGeneration [skipped]
↓ Novita (2) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ Black Forest Labs (2) [skipped]
↓ textToImage [skipped]
↓ textToImage URL [skipped]
↓ Cohere (2) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ Cerebras (2) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ Nscale (3) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ textToImage [skipped]
↓ Groq (2) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ OVHcloud (4) [skipped]
↓ chatCompletion [skipped]
↓ chatCompletion stream [skipped]
↓ textGeneration [skipped]
↓ textGeneration stream [skipped]

Test Files 1 passed | 1 skipped (2)
Tests 3 passed | 120 skipped (123)
Start at 12:43:22
Duration 22.58s (transform 397ms, setup 28ms, collect 507ms, tests 21.74s, environment 0ms, prepare 140ms)

@hanouticelina hanouticelina added the inference-providers integration of a new or existing Inference Provider label May 20, 2025