
Releases: huggingface/huggingface_hub

[v1.0.1] Remove `aiohttp` from extra dependencies

28 Oct 12:49
c8b9350


In the huggingface_hub v1.0 release, we removed our dependency on aiohttp in favor of httpx, but forgot to remove it from the huggingface_hub[inference] extra dependencies in setup.py. This patch release removes it, which removes the inference extra as well.

The internal method _import_aiohttp, now unused, has been removed as well.

Full Changelog: v1.0.0...v1.0.1

v1.0: Building for the Next Decade

24 Oct 08:01
cd57d24



Check out our blog post announcement!

🚀 HTTPx migration

The huggingface_hub library now uses httpx instead of requests for HTTP requests. This change improves performance and supports synchronous and asynchronous requests in a unified way. We have therefore dropped both the requests and aiohttp dependencies.

The get_session and hf_raise_for_status helpers still exist: the former now returns an httpx.Client and the latter processes an httpx.Response object. An additional get_async_client utility has been added for async logic.
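
For illustration, here is a minimal sketch of the new request flow. The endpoint URL is illustrative, and the exact import paths for get_session and get_async_client are assumptions based on this release's description:

import asyncio

from huggingface_hub import get_async_client
from huggingface_hub.utils import get_session, hf_raise_for_status

# Synchronous: get_session() now returns an httpx.Client
response = get_session().get("https://huggingface.co/api/models/gpt2")
hf_raise_for_status(response)  # raises a Hub-specific error on 4xx/5xx
print(response.json()["id"])

# Asynchronous: the equivalent flow with the new get_async_client()
async def main() -> None:
    client = get_async_client()
    response = await client.get("https://huggingface.co/api/models/gpt2")
    hf_raise_for_status(response)

asyncio.run(main())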

The exhaustive list of breaking changes can be found here.

🪄 CLI revamp

huggingface_hub 1.0 marks a complete transformation of our command-line experience. We've reimagined the CLI from the ground up, creating a tool that feels native to modern ML workflows while maintaining the simplicity the community loves.

One CLI to Rule Them All: Goodbye huggingface-cli

This release marks the end of an era with the complete removal of the huggingface-cli command. The new hf command (introduced in v0.34.0) takes its place with a cleaner, more intuitive design that follows a logical "resource-action" pattern. This breaking change simplifies the user experience and aligns with modern CLI conventions - no more typing those extra 11 characters!

  • Remove huggingface-cli entirely in favor of hf  by @Wauplin in #3404

hf CLI Revamp

The new CLI introduces a comprehensive set of commands for repository and file management that expose powerful HfApi functionality directly from the terminal:

> hf repo --help
Usage: hf repo [OPTIONS] COMMAND [ARGS]...

  Manage repos on the Hub.

Options:
  --help  Show this message and exit.

Commands:
  branch    Manage branches for a repo on the Hub.
  create    Create a new repo on the Hub.
  delete    Delete a repo from the Hub.
  move      Move a repository from a namespace to another namespace.
  settings  Update the settings of a repository.
  tag       Manage tags for a repo on the Hub.

A dry run mode has been added to hf download, which lets you preview exactly what will be downloaded before committing to the transfer—showing file sizes, what's already cached, and total bandwidth requirements in a clean table format:

> hf download gpt2 --dry-run   
[dry-run] Fetching 26 files: 100%|██████████████████████████████████████████████████████████| 26/26 [00:00<00:00, 50.66it/s]
[dry-run] Will download 26 files (out of 26) totalling 5.6G.
File                              Bytes to download 
--------------------------------- ----------------- 
.gitattributes                    445.0             
64-8bits.tflite                   125.2M            
64-fp16.tflite                    248.3M            
64.tflite                         495.8M            
README.md                         8.1K              
config.json                       665.0             
flax_model.msgpack                497.8M            
generation_config.json            124.0             
merges.txt                        456.3K            
model.safetensors                 548.1M            
onnx/config.json                  879.0             
onnx/decoder_model.onnx           653.7M            
onnx/decoder_model_merged.onnx    655.2M 
...

The CLI now provides intelligent shell auto-completion that suggests available commands, subcommands, options, and arguments as you type - making command discovery effortless and reducing the need to constantly check --help.

CLI auto-completion Demo

The CLI now also checks for updates in the background, ensuring you never miss important improvements or security fixes. Once every 24 hours, the CLI silently checks PyPI for newer versions and notifies you when an update is available - with personalized upgrade instructions based on your installation method.

The cache management CLI has been completely revamped: hf cache scan and hf cache delete have been removed in favor of docker-inspired commands that are more intuitive. The new hf cache ls provides rich filtering capabilities, hf cache rm enables targeted deletion, and hf cache prune cleans up detached revisions.

# List cached repos
>>> hf cache ls
ID                          SIZE     LAST_ACCESSED LAST_MODIFIED REFS        
--------------------------- -------- ------------- ------------- ----------- 
dataset/nyu-mll/glue          157.4M 2 days ago    2 days ago    main script 
model/LiquidAI/LFM2-VL-1.6B     3.2G 4 days ago    4 days ago    main        
model/microsoft/UserLM-8b      32.1G 4 days ago    4 days ago    main  

Found 3 repo(s) for a total of 5 revision(s) and 35.5G on disk.

# List cached repos with filters
>>> hf cache ls --filter "type=model" --filter "size>3G" --filter "accessed>7d"

# Output in different format
>>> hf cache ls --format json
>>> hf cache ls --revisions  # Replaces the old --verbose flag

# Cache removal
>>> hf cache rm model/meta-llama/Llama-2-70b-hf
>>> hf cache rm $(hf cache ls --filter "accessed>1y" -q)  # Remove old items

# Clean up detached revisions
>>> hf cache prune  # Removes all unreferenced revisions

Under the hood, this transformation is powered by Typer, significantly reducing boilerplate and making the CLI easier to maintain and extend with new features.
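
As a rough illustration of that pattern (this is not the actual hf source code), a resource-action command group in Typer looks like this:

import typer

# Top-level app with a "repo" resource group, mirroring `hf repo create ...`
app = typer.Typer(help="Interact with the Hugging Face Hub.")
repo = typer.Typer(help="Manage repos on the Hub.")
app.add_typer(repo, name="repo")

@repo.command()
def create(repo_id: str, private: bool = False):
    """Create a new repo on the Hub."""
    typer.echo(f"Creating {repo_id} (private={private})...")

if __name__ == "__main__":
    app()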

CLI Installation: Zero-Friction Setup

The new cross-platform installers simplify CLI installation by creating isolated sandboxed environments without interfering with your existing Python setup or project dependencies. The installers work seamlessly across macOS, Linux, and Windows, automatically handling dependencies and PATH configuration.

# On macOS and Linux
>>> curl -LsSf https://hf.co/cli/install.sh | sh

# On Windows
>>> powershell -ExecutionPolicy ByPass -c "irm https://hf.co/cli/install.ps1 | iex"

Finally, the [cli] extra has been removed: the CLI now ships with the core huggingface_hub package.

💔 Breaking changes

The v1.0 release is a major milestone for the huggingface_hub library. It marks our commitment to API stability and the maturity of the library. We have made several improvements and breaking changes to make the library more robust and easier to use. A migration guide has been written to reduce friction as much as possible: https://huggingface.co/docs/huggingface_hub/concepts/migration.

We'll list all breaking changes below:

  • Minimum Python version is now 3.9 (instead of 3.8).

  • HTTP backend migrated from requests to httpx. Expect some breaking changes on advanced features and errors. The exhaustive list can be found here.

  • The deprecated huggingface-cli has been removed; hf (introduced in v0.34) replaces it with a clearer resource-action CLI.

    • Remove huggingface-cli entirely in favor of hf  by @Wauplin in #3404
  • The [cli] extra has been removed: the CLI now ships with the core huggingface_hub package.

  • Long deprecated classes like HfFolder, InferenceAPI, and Repository have been removed.

  • constants.hf_cache_home has been removed. Use constants.HF_HOME instead.

  • use_auth_token is not supported anymore. Use token instead. Previously, use_auth_token was automatically redirected to token with a warning.

  • Removed get_token_permission. It became useless when fine-grained tokens arrived.

  • Removed update_repo_visibility. Use update_repo_settings instead.

  • Removed is_write_action in all build_hf_headers methods. Not relevant since fine-grained tokens arrived.

  • Removed the write_permission arg from login methods. Not relevant anymore.

  • Renamed login(new_session) to login(skip_if_logged_in) in login methods. Not announced beforehand, but hopefully very little friction; only some notebooks on the Hub need updating (will be done once released).

  • Removed resume_download / force_filename / local_dir_use_symlinks parameters from hf_hub_download/snapshot_downlo...


[v0.36.0] Last Stop Before 1.0

23 Oct 12:16
5af5644


This is the final minor release before v1.0.0. This release focuses on performance optimizations to HfFileSystem and adds a new get_organization_overview API endpoint.

We'll continue to release security patches as needed, but v0.37 will not happen. The next release will be 1.0.0. We’re also deeply grateful to the entire Hugging Face community for their feedback, bug reports, and suggestions that have shaped this library.

Full Changelog: v0.35.0...v0.36.0

📁 HfFileSystem

Major optimizations have been implemented in HfFileSystem:

  • Cache is kept when pickling a filesystem instance. This is particularly useful when streaming datasets in a distributed training environment: each worker no longer has to rebuild its cache (see the sketch below).
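
A minimal sketch of what this enables (the repo id is illustrative):

import pickle

from huggingface_hub import HfFileSystem

fs = HfFileSystem()
fs.ls("datasets/nyu-mll/glue")  # populates the instance's directory cache

# Pickling no longer drops that cache: a worker receiving this instance
# (e.g. through a DataLoader) reuses the cached listings instead of
# re-fetching them from the Hub.
restored = pickle.loads(pickle.dumps(fs))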

Listing files with .glob() has been greatly optimized:

from huggingface_hub import HfFileSystem

HfFileSystem().glob("datasets/HuggingFaceFW/fineweb-edu/data/*/*")
# Before: ~100 /tree calls (one per subdirectory)
# Now: 1 /tree call

  • [HfFileSystem] Optimize maxdepth: do less /tree calls in glob()  by @lhoestq in #3389


🌍 HfApi

It is now possible to get high-level information about an organization, just as is already possible for users:

>>> from huggingface_hub import get_organization_overview
>>> get_organization_overview("huggingface")
Organization(
    avatar_url='https://cdn-avatars.huggingface.co/v1/production/uploads/1583856921041-5dd96eb166059660ed1ee413.png',
    name='huggingface',
    fullname='Hugging Face',
    details='The AI community building the future.',
    is_verified=True,
    is_following=True,
    num_users=198,
    num_models=164,
    num_spaces=96,
    num_datasets=1043,
    num_followers=64814
)

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes

🏗️ internal

Community contributions

The following contributors have made changes to the library over the last release. Thank you!

[v0.35.3] Fix `image-to-image` target size parameter mapping & tiny agents allow tools list bug

29 Sep 14:35


This release includes two bug fixes: a fix for the image-to-image target size parameter mapping and a fix for the tiny agents allowed-tools list.

Full Changelog: v0.35.2...v0.35.3

[v0.35.2] Welcoming Z.ai as Inference Providers!

29 Sep 09:58
41630de


Full Changelog: v0.35.1...v0.35.2

New inference provider! 🔥

Z.ai is now officially an Inference Provider on the Hub. See full documentation here: https://huggingface.co/docs/inference-providers/providers/zai-org.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="zai-org")
completion = client.chat.completions.create(
    model="zai-org/GLM-4.5",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print("\nThinking:")
print(completion.choices[0].message.reasoning_content)
print("\nOutput:")
print(completion.choices[0].message.content)

Thinking:
Okay, the user is asking about the capital of France. That's a pretty straightforward geography question. 

Hmm, I wonder if this is just a casual inquiry or if they need it for something specific like homework or travel planning. The question is very basic though, so probably just general knowledge. 

Paris is definitely the correct answer here. It's been the capital for centuries, since the Capetian dynasty made it the seat of power. Should I mention any historical context? Nah, the user didn't ask for details - just the capital. 

I recall Paris is also France's largest city and major cultural hub. But again, extra info might be overkill unless they follow up. Better keep it simple and accurate. 

The answer should be clear and direct: "Paris". No need to overcomplicate a simple fact. If they want more, they'll ask.

Output:
The capital of France is **Paris**.  

Paris has been the political and cultural center of France for centuries, serving as the seat of government, the residence of the President (Élysée Palace), and home to iconic landmarks like the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. It is also France's largest city and a global hub for art, fashion, gastronomy, and history.

Misc:

  • [HfFileSystem] Optimize maxdepth: do less /tree calls in glob() by @lhoestq in #3389

[v0.35.1] Do not retry on 429 and skip forward ref in strict dataclass

23 Sep 13:45
f46eb28


  • Do not retry on 429 (only on 5xx) #3377
  • Skip unresolved forward ref in strict dataclasses #3376

Full Changelog: v0.35.0...v0.35.1

[v0.35.0] Announcing Scheduled Jobs: run cron jobs on GPU on the Hugging Face Hub!

16 Sep 13:48
0f12365


Scheduled Jobs

In the v0.34.0 release, we announced Jobs, a new way to run compute on the Hugging Face Hub. In this new release, we are announcing Scheduled Jobs to run Jobs on a regular basis. Think "cron jobs running on GPU".

This comes with a fully-fledged CLI:

hf jobs scheduled run @hourly ubuntu echo hello world
hf jobs scheduled run "0 * * * *" ubuntu echo hello world
hf jobs scheduled ps -a
hf jobs scheduled inspect <id>
hf jobs scheduled delete <id>
hf jobs scheduled suspend <id>
hf jobs scheduled resume <id>
hf jobs scheduled uv run @weekly train.py

It is now possible to run a command with uv run:

hf jobs uv run --with lighteval -s HF_TOKEN lighteval endpoint inference-providers "model_name=openai/gpt-oss-20b,provider=groq" "lighteval|gsm8k|0|0"

Some other improvements have been added to the existing Jobs API for a better UX.

And finally, the Jobs documentation has been updated with new examples (and some fixes).

CLI updates

In addition to the Scheduled Jobs, some improvements have been added to the hf CLI.

Inference Providers

Welcome Scaleway and PublicAI!

Two new partners have been integrated into Inference Providers: Scaleway and PublicAI (as part of releases 0.34.5 and 0.34.6)!

Image-to-video

Image to video is now supported in the InferenceClient:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")

video = client.image_to_video(
    "cat.png",
    prompt="The cat starts to dance",
    model="Wan-AI/Wan2.2-I2V-A14B",
)
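
The call returns the raw video bytes, so the result can be written straight to disk:

with open("cat_dance.mp4", "wb") as f:
    f.write(video)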

Miscellaneous

The content-type header is now correctly set when sending an image or audio request (e.g. for the image-to-image task). It is inferred either from the filename or from the URL provided by the user. If the user passes raw bytes directly, the content-type header has to be set manually (see the sketch below).

  • [InferenceClient] Add content-type header whenever possible + refacto by @Wauplin in #3321
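
A hedged sketch of both cases (the model id is illustrative, and passing a Content-Type header at client level is one possible way to set it manually):

from huggingface_hub import InferenceClient

# File path or URL: the content-type header is now inferred automatically.
client = InferenceClient()
image = client.image_to_image(
    "cat.png",
    prompt="Turn the cat into a tiger",
    model="stabilityai/stable-diffusion-xl-refiner-1.0",  # illustrative model
)

# Raw bytes: nothing to infer from, so set the header manually.
client = InferenceClient(headers={"Content-Type": "image/png"})
with open("cat.png", "rb") as f:
    image = client.image_to_image(
        f.read(),
        prompt="Turn the cat into a tiger",
        model="stabilityai/stable-diffusion-xl-refiner-1.0",
    )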

A .reasoning field has been added to the chat completion output. Some providers use it to return reasoning tokens separately from the .content stream of tokens (see the sketch below).

  • Add reasoning field in chat completion output by @Wauplin in #3338
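
A minimal sketch of reading the new field (model and provider are illustrative; only some providers populate .reasoning, and some expose provider-specific attributes instead, like reasoning_content in the Z.ai example above):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="zai-org")
completion = client.chat.completions.create(
    model="zai-org/GLM-4.5",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

print(completion.choices[0].message.reasoning)  # reasoning tokens, if provided
print(completion.choices[0].message.content)    # final answer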

MCP & tiny-agents updates

tiny-agents now handles the AGENTS.md instruction file (see https://agents.md/).

Tool filtering has also been improved to avoid loading irrelevant tools from an MCP server.

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes

🏗️ internal

Community contributions

The following contributors have made changes to the library over the last release. Thank you!

[v0.34.6]: Welcoming PublicAI as Inference Providers!

16 Sep 08:14
0666153


Full Changelog: v0.34.5...v0.34.6

⚡ New provider: PublicAI

Tip

All supported PublicAI models can be found here.

Public AI Inference Utility is a nonprofit, open-source project building products and organizing advocacy to support the work of public AI model builders like the Swiss AI Initiative, AI Singapore, AI Sweden, and the Barcelona Supercomputing Center. Think of a BBC for AI, a public utility for AI, or public libraries for AI.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="publicai")
completion = client.chat.completions.create(
    model="swiss-ai/Apertus-70B-Instruct-2509",
    messages=[{"role": "user", "content": "What is the capital of Switzerland?"}],
)

print(completion.choices[0].message.content)

[v0.34.5]: Welcoming Scaleway as Inference Providers!

15 Sep 14:29
ebf4311


Full Changelog: v0.34.4...v0.34.5

⚡ New provider: Scaleway

Tip

All supported Scaleway models can be found here. For more details, check out its documentation page.

Scaleway is a European cloud provider, serving the latest LLMs through its Generative APIs alongside a complete cloud ecosystem.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

[v0.34.4] Support Image to Video inference + QoL in jobs API, auth and utilities

08 Aug 09:19
84a92a9


The biggest update is support for the image-to-video task with the Fal AI inference provider:

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> video = client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)

Along with some quality-of-life improvements.

Full Changelog: v0.34.3...v0.34.4