
Conversation

@AlphaINF commented Nov 29, 2024

Currently, vLLM can't load LoRA models from ModelScope.
Besides, if a model on ModelScope is private, it can't be loaded at all; you have to log in manually before starting vLLM.
This pull request fixes both of these problems.
If you want to use private models, you can set MODELSCOPE_ACCESS_TOKEN as an environment variable.
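A rough sketch of the intended flow for private models (MODELSCOPE_ACCESS_TOKEN and the modelscope hub APIs are the real pieces; the helper itself is just illustrative, not the exact code in this PR):

```python
# Illustrative helper showing the intended flow, not the exact code in this PR.
import os
from typing import Optional

from modelscope.hub.api import HubApi
from modelscope.hub.snapshot_download import snapshot_download


def download_from_modelscope(repo_id: str, cache_dir: Optional[str] = None) -> str:
    """Download a (possibly private) model or LoRA adapter from ModelScope."""
    token = os.getenv("MODELSCOPE_ACCESS_TOKEN")
    if token:
        # Authenticate once so private repos can be downloaded without a manual login.
        HubApi().login(token)
    # Returns the local directory containing the downloaded files.
    return snapshot_download(repo_id, cache_dir=cache_dir)
```

With this, serving a private ModelScope model would only require exporting MODELSCOPE_ACCESS_TOKEN before launching vLLM (together with the existing VLLM_USE_MODELSCOPE=True).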


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@AlphaINF
Author

/ready

@jeejeelee
Collaborator

IMHO, downloading LoRA weights from remote servers should not be handled internally by vllm, as this would increase the risk of service crashes and impact performance. What are your thoughts on this? cc @simon-mo @youkaichao @DarkLight1337

@AlphaINF
Author

Indeed, if the download path is incorrect or some network error occurs, it will crash the server (e.g. the LoRA pulled from the internet is missing some files). However, we can add additional try-except handling, or add a download step when starting vLLM or when interacting with /upload_lora .

Actually, vLLM already supports downloading LoRA weights from Hugging Face, and that hasn't been a problem.
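For example, a minimal guard along these lines (the error handling and the adapter_config.json check are just a sketch, not the code in this PR):

```python
# Sketch of a guard that fails the single request instead of crashing the server.
import os

from modelscope.hub.snapshot_download import snapshot_download


def safe_download_lora(lora_repo_id: str) -> str:
    try:
        local_dir = snapshot_download(lora_repo_id)
    except Exception as e:  # bad repo id, network error, interrupted download, ...
        raise ValueError(f"Failed to download LoRA '{lora_repo_id}': {e}") from e
    # A LoRA repo that is missing its adapter config would crash later at load
    # time, so reject it up front.
    if not os.path.isfile(os.path.join(local_dir, "adapter_config.json")):
        raise ValueError(f"LoRA '{lora_repo_id}' is missing adapter_config.json")
    return local_dir
```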

@jeejeelee
Collaborator

@AlphaINF I know, so I want to hear their thoughts and suggestions.
BTW, I knew you would point out that we already support Hugging Face downloads; then other users might also say they want to support downloads from S3...

@DarkLight1337
Member

Not sure about dynamic LoRA adapters (I don't have context on this), but since we currently support downloading models from both HF and ModelScope, it makes sense to enable this for LoRA as well.

@AlphaINF
Author

@jeejeelee After a few days of trials, your concern turned out to be right!
I found several bugs.
One of them is a download bug. For example, if we deploy the main model on four GPUs, then when a user sends a request, vLLM starts four download threads (as many as there are workers). Multiple threads downloading the same LoRA adapter at the same time eventually make the download fail.
What's even worse, when receiving multiple requests, vLLM starts even more download threads.
So if we want to load a LoRA model from a remote path, we need to move the snapshot_download call into add_lora.
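Roughly what I mean, as a simplified sketch (the helper names below are placeholders, not the actual vLLM internals):

```python
# Simplified sketch, not the actual vLLM code paths; helper names are placeholders.
import os

from modelscope.hub.snapshot_download import snapshot_download


def add_lora(self, lora_request):
    # Resolve the remote repo to a local path ONCE, in the engine frontend,
    # before the request is handed to the tensor-parallel workers.
    if not os.path.isdir(lora_request.lora_path):
        # Treat anything that is not an existing local directory as a remote
        # ModelScope repo id and download it here, exactly once.
        lora_request.lora_path = snapshot_download(lora_request.lora_path)
    # Each worker now only sees a local directory, so no worker ever
    # triggers its own (racy) download.
    return self._add_lora_to_workers(lora_request)  # placeholder for the real dispatch
```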

@jeejeelee
Collaborator

Have you tried using filelock?
BTW, @Jeffwan have you encountered similar issues?

@AlphaINF
Author

That's a great idea, I will try adding it and test it for a few days!
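Something like this is what I have in mind (paths and naming below are just illustrative):

```python
# Possible shape of the filelock guard around the download; paths/naming are illustrative.
import os

from filelock import FileLock
from modelscope.hub.snapshot_download import snapshot_download


def download_lora_once(lora_repo_id: str, cache_dir: str) -> str:
    os.makedirs(cache_dir, exist_ok=True)
    lock_path = os.path.join(cache_dir, lora_repo_id.replace("/", "--") + ".lock")
    # Whichever process/thread acquires the lock first performs the download;
    # the others block, then reuse the snapshot already on disk.
    with FileLock(lock_path):
        return snapshot_download(lora_repo_id, cache_dir=cache_dir)
```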


This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

@github-actions bot added the stale (Over 90 days of inactivity) label on Mar 13, 2025

This pull request has been automatically closed due to inactivity. Please feel free to reopen if you intend to continue working on it. Thank you!

@github-actions bot closed this on Apr 12, 2025