
Fix MLX-VLM snippet and prioritize hf_xet for MLX models #1463


Merged: 5 commits, May 20, 2025
11 changes: 6 additions & 5 deletions packages/tasks/src/model-libraries-snippets.ts
@@ -1355,15 +1355,14 @@ model = SwarmFormerModel.from_pretrained("${model.id}")

const mlx_unknown = (model: ModelData): string[] => [
`# Download the model from the Hub
-pip install huggingface_hub hf_transfer
+pip install huggingface_hub[hf_xet]

Member Author commented:
cc: @jsulz - should we put any other env variables here? IIRC no, but wondering if you'd recommend any!


Member Author replied:
put this here: 3c5ab82


-export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir ${nameWithoutNamespace(model.id)} ${model.id}`,
];
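
Net effect of this hunk: the download snippet now installs huggingface_hub with the hf_xet extra instead of hf_transfer, and the HF_HUB_ENABLE_HF_TRANSFER toggle goes away (per the thread above, hf_xet should need no extra environment variable once installed). As a sanity check, a minimal Python sketch of an equivalent download; the repo id is a hypothetical placeholder, and the snippet itself keeps using the CLI:

```python
# pip install huggingface_hub[hf_xet]
from huggingface_hub import snapshot_download

# Hypothetical MLX repo id, for illustration only.
snapshot_download(
    repo_id="mlx-community/Example-Model-4bit",
    local_dir="Example-Model-4bit",
)
```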

const mlxlm = (model: ModelData): string[] => [
`# Make sure mlx-lm is installed
-pip install --upgrade mlx-lm
+# pip install --upgrade mlx-lm

Member commented:
ah good call, no longer a cli


# Generate text with mlx-lm
from mlx_lm import load, generate
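
Rendered for a user, the mlxlm (and mlxchat, below) snippet is now one valid Python block: the install hint becomes a comment instead of a bare shell line, which is what the "no longer a cli" remark refers to. A sketch of the rendered result, assuming a hypothetical model id in place of ${model.id}:

```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

# Hypothetical placeholder for ${model.id}.
model, tokenizer = load("mlx-community/Example-Model-4bit")

prompt = "Once upon a time"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```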
@@ -1376,7 +1375,7 @@ text = generate(model, tokenizer, prompt=prompt, verbose=True)`,

const mlxchat = (model: ModelData): string[] => [
`# Make sure mlx-lm is installed
-pip install --upgrade mlx-lm
+# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate
@@ -1393,7 +1392,9 @@ text = generate(model, tokenizer, prompt=prompt, verbose=True)`,
];

const mlxvlm = (model: ModelData): string[] => [
-`Make sure mlx-vlm is installed
+`# Make sure mlx-vlm is installed
+# pip install --upgrade mlx-vlm
+
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
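
For the MLX-VLM fix itself: the old snippet opened with a bare `Make sure mlx-vlm is installed` line inside the Python block, so it was not valid Python; the new version comments out both the hint and the pip command. A hedged sketch of the rendered snippet, following mlx-vlm's documented usage; the model id and image path are hypothetical placeholders, and argument details may differ between mlx-vlm versions:

```python
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Hypothetical placeholders for ${model.id} and the input image.
model_path = "mlx-community/Example-VLM-4bit"
model, processor = load(model_path)
config = load_config(model_path)

prompt = "Describe this image."
image = ["path/to/image.png"]

# Format the prompt with the model's chat template, then generate.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))
output = generate(model, processor, formatted_prompt, image, verbose=True)
```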