
Conversation

houseroad
Collaborator

find_loaded_library is a platform-specific implementation, so create "platform_aware_utils.py". For a different platform (such as one using PAR to pack the binary), we can ship our own version of "platform_aware_utils.py" to override these platform-aware utility functions.
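To make the override idea concrete, here is a minimal sketch of what a default implementation of such a function might look like; a packaging environment (e.g. a PAR bundle) would simply ship its own copy of the module with a different body. The details below (scanning /proc/self/maps) are an illustrative assumption, not necessarily vLLM's actual implementation.

```python
from typing import Optional


def find_loaded_library(lib_name: str) -> Optional[str]:
    """Return the path of a shared library already mapped into this
    process, by scanning /proc/self/maps (Linux-only sketch).

    A platform-specific build can override this function by replacing
    the whole platform_aware_utils.py module.
    """
    try:
        with open("/proc/self/maps") as f:
            for line in f:
                if lib_name in line:
                    # The mapped file path is the last field on the line.
                    return line.rsplit(maxsplit=1)[-1]
    except FileNotFoundError:
        # Not on Linux, or /proc is unavailable.
        pass
    return None
```

The point is not this particular lookup strategy, but that isolating it in one small module gives downstream packagers a single, well-defined override point.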


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@houseroad houseroad force-pushed the create_platform_aware_utils branch from 515f144 to f0af1d9 on January 20, 2025 at 22:50
@youkaichao
Member

The change of separating out the function makes sense to me.

In vLLM, "platform" mainly refers to accelerators, e.g. CUDA, ROCm, XPU, etc.

Can you provide more intuitive names for them?

Maybe you can share your implementation of find_loaded_library, and we can see how to create a proper abstraction for it.

@houseroad
Collaborator Author

How about deployment_aware_utils.py instead? Our internal changes can be found in https://github.com/vllm-project/vllm/pull/12129/files. So far there are two things to change: 1) the libcudart path, and 2) PYTHONPATH, due to the PAR packing method.

@youkaichao
Member

In this case, I think we can have a VLLM_CUDART_SO_PATH env var, similar to how we handle VLLM_NCCL_SO_PATH. See

"VLLM_NCCL_SO_PATH":
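The env-var pattern being suggested can be sketched as follows; the helper name and the default filename are placeholders for illustration, not vLLM's actual lookup logic.

```python
import os


def get_cudart_so_path(default: str = "libcudart.so.12") -> str:
    """Prefer an explicit VLLM_CUDART_SO_PATH override, otherwise fall
    back to a default (here a placeholder soname).

    Mirrors the style of vLLM's VLLM_NCCL_SO_PATH handling: deployments
    with unusual library layouts set the env var instead of patching code.
    """
    return os.environ.get("VLLM_CUDART_SO_PATH", default)
```

This keeps the customization surface to a documented environment variable rather than a replaceable source file.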

@houseroad
Collaborator Author

I see, I can handle VLLM_CUDART_SO_PATH in a similar way. How about a solution for subprocess.run? I was about to add a wrapper around subprocess.run to pass in the PYTHONPATH internally. Any suggestions on that?
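The wrapper being proposed might look roughly like this; the function name and the choice to forward the parent's sys.path are illustrative assumptions about the internal PAR setup, not code from this PR.

```python
import os
import subprocess
import sys


def subprocess_run_with_pythonpath(cmd, **kwargs):
    """Run a subprocess with PYTHONPATH set so the child resolves the
    same modules as the parent (useful when a PAR-style bundle only
    exposes modules via the parent's sys.path).
    """
    env = kwargs.pop("env", None) or os.environ.copy()
    # Prepend the parent's import paths to any existing PYTHONPATH.
    extra = os.pathsep.join(p for p in sys.path if p)
    if env.get("PYTHONPATH"):
        env["PYTHONPATH"] = extra + os.pathsep + env["PYTHONPATH"]
    else:
        env["PYTHONPATH"] = extra
    return subprocess.run(cmd, env=env, **kwargs)
```

Callers would swap subprocess.run for this wrapper wherever the child process needs to import the packed modules.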

@youkaichao
Member

How about the solution for subprocess.run?

commented at #12129 (comment)

@simon-mo
Collaborator

simon-mo commented Feb 7, 2025

Hi @houseroad @youkaichao, checking in on this PR, anything blocking landing it?

@houseroad
Collaborator Author

Thanks for checking. @youkaichao feels that using file replacement is a bit hacky, but it's quite common in PyTorch internal usage. I am adding another hook in _version.py internally, and will create a PR to see if it's enough. Let's keep this PR open until the internal hook meets our requirements. :-)

@houseroad
Collaborator Author

Closing this one, since the new proposal has landed.

@houseroad houseroad closed this Feb 13, 2025
