Conversation

@tharittk (Collaborator) commented Aug 28, 2025

Objective

This PR has two main goals:

  • Move all MPI/NCCL communication calls from within DistributedArray and the various linear operators to a single common place, namely the DistributedMixIn class, whose methods are used by both DistributedArray and the linear operators.
  • Implement support for mpi4py buffered communications, to be used instead of object communications in the NumPy+MPI (better performance, supported with any version of MPI/mpi4py) and CuPy+CUDA-aware MPI scenarios, whilst still falling back to object communications in the CuPy+non-CUDA-aware MPI scenario. Note that, to allow users to force object communications when dealing with CuPy, a new environment variable PYLOPS_MPI_CUDA_AWARE is introduced (it defaults to 1 but can be set to 0 to force object communications). See the sketch after this list for the intended branching logic.
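A minimal sketch of that branching (the helper names _use_buffered_comms and mpi_allreduce are illustrative, not the actual pylops-mpi API): NumPy always takes the buffered path, while CuPy takes it only when PYLOPS_MPI_CUDA_AWARE is not set to 0.

import os

import numpy as np
from mpi4py import MPI


def _use_buffered_comms(arr) -> bool:
    """Decide if buffered (uppercase) mpi4py calls can be used for `arr`."""
    if isinstance(arr, np.ndarray):
        return True  # NumPy arrays can always go through buffered comms
    # CuPy arrays: only if MPI is CUDA-aware and the user has not
    # disabled it via PYLOPS_MPI_CUDA_AWARE=0
    return int(os.environ.get("PYLOPS_MPI_CUDA_AWARE", 1)) == 1


def mpi_allreduce(comm: MPI.Comm, send_buf, recv_buf=None, op: MPI.Op = MPI.SUM):
    """Allreduce that picks buffered or object communication per backend."""
    if _use_buffered_comms(send_buf):
        if recv_buf is None:
            # dispatches to CuPy via the array-function protocol when needed
            recv_buf = np.empty_like(send_buf)
        comm.Allreduce(send_buf, recv_buf, op=op)  # buffered (fast) path
        return recv_buf
    # object-communication fallback (pickles the array; works with any MPI)
    return comm.allreduce(send_buf, op=op)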

CUDA-Aware MPI

In order to have a CUDA-aware mpi4py installation, mpi4py must be built against a CUDA-aware MPI installation. Since conda installations of mpi4py do not ship with a CUDA-aware MPI, pip must be used to install mpi4py. In the case of NCSA Delta, I created a new conda environment and ran
module load openmpi/5.0.5+cuda
then
MPICC=/path/to/mpicc pip install --no-cache-dir --force-reinstall mpi4py
(where --force-reinstall is only needed because mpi4py is already installed as part of the conda environment creation).
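As a quick sanity check (a standalone script, not part of pylops-mpi), one can try a buffered send/receive of a CuPy array between two ranks; if the MPI installation is not CUDA-aware this typically crashes instead of printing the received values.

# check_cuda_aware.py - run with: mpirun -n 2 python check_cuda_aware.py
import cupy as cp
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = cp.arange(10, dtype=cp.float64) if rank == 0 else cp.empty(10, dtype=cp.float64)
cp.cuda.get_current_stream().synchronize()  # make sure the device buffer is ready

if rank == 0:
    comm.Send(buf, dest=1, tag=0)
elif rank == 1:
    comm.Recv(buf, source=0, tag=0)
    print("CUDA-aware MPI works, received:", buf[:3])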

And to run the tests (assuming you are already on a compute node):

module load openmpi/5.0.5+cuda
export PYLOPS_MPI_CUDA_AWARE=1
echo "TESTING **WITH** CUDA_AWARE"

echo "TEST NUMPY MPI"
export TEST_CUPY_PYLOPS=0
mpirun -n 2 pytest tests/ --with-mpi

echo "TEST CUPY MPI"
export TEST_CUPY_PYLOPS=1
mpirun -n 2 pytest tests/ --with-mpi

echo " TEST NCCL "
mpirun -n 2 pytest tests_nccl/ --with-mpi

To Do

  • So far the mpi_allgather method uses the _prepare_allgather_inputs method to prepare inputs so that they are all of the same size (via zero-padding). Whilst this is strictly needed for NCCL, for MPI we should consider leveraging Allgatherv instead, to avoid the extra padding and trimming of arrays (see the sketch after this list).
  • Modify the building process in the Makefile and environment/requirement files: I suggest adding some targets for CUDA-aware MPI where we put mpi4py in the pip section of the environment file and ask users to set MPICC upfront (this can be documented in the installation.rst section).
  • Modify MatrixMult to remove any direct call to mpi4py communication methods - note that we need to either merge main or do this in a separate PR, as MatrixMult has changed from what we have in this branch.
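A minimal illustration of the Allgatherv idea from the first to-do item (a standalone mpi4py example, not pylops-mpi code): per-rank counts and displacements let ranks contribute arrays of different sizes without any zero-padding.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# local arrays of different lengths per rank (rank 0 has 1 element, rank 1 has 2, ...)
local = np.arange(rank + 1, dtype=np.float64)

# gather the local sizes first, then build counts/displacements for Allgatherv
counts = np.array(comm.allgather(local.size))
displs = np.insert(np.cumsum(counts), 0, 0)[:-1]

recv = np.empty(counts.sum(), dtype=np.float64)
comm.Allgatherv(local, [recv, counts, displs, MPI.DOUBLE])

if rank == 0:
    print(recv)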

tharittk and others added 5 commits August 17, 2025 04:07
A new DistributedMix class is created with the aim of simplifying and unifying
all comm. calls in both DistributedArray and operators (further hiding
away all implementation details).
@mrava87 (Contributor) commented Sep 7, 2025

@tharittk great start!

Regarding the setup, I completely agree with the need to change the installation process for CUDA-aware MPI. So far I have mostly relied on conda to install MPI as part of the installation of mpi4py, but it seems this cannot be done to get CUDA-aware MPI (see https://chatgpt.com/share/68bdf141-0658-800d-9c6c-e85aa4ab6d87); so whilst the module load ... part may change (one may be as lucky as you were and find a pre-installed MPI with CUDA support, or may need to install it themselves), the second part should be universal, so we may want to add some Makefile targets for this setup 😄

Regarding the code, as I briefly mentioned offline, whilst I think this is the right way to go:

  • buffer comms for NumPy
  • have the PYLOPS_MPI_CUDA_AWARE env variable for CuPy to allow using object comms for non CUDA-Aware MPI + CuPy

I am starting to feel that the number of branches in the code is growing and it is about time to put it all in one place... What concerns me most is that this kind of branching will not only be present in DistributedArray but will start to permeate into the operators. I had a first go at it, only with the allgather method, to give you an idea and to discuss together whether you think this is a good approach before we implement it for all the other comm methods. The approach I took is two-fold:

  • create a _mpi subpackage (similar to _nccl) where all MPI methods are implemented with the various branches - essentially what you so far had in the else branch of the _allreduce method in DistributedArray
  • create a mixin class DistributedMixIn (in the Distributed file) where we can basically move all comm methods that are currently in DistributedArray. By doing so, operators can also inherit this class and access those methods - I used VStack as an example (a rough sketch of this pattern follows below).
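A rough sketch of the mixin pattern described above (class and method names here are illustrative and may differ from the actual PR): all backend branching lives in the mixin, so DistributedArray and operators like VStack only call the high-level communication methods.

from mpi4py import MPI


class DistributedMixIn:
    """Shared communication methods for DistributedArray and MPI operators."""

    def _allreduce(self, base_comm, send_buf, op=MPI.SUM):
        # in the real code this would dispatch to the _mpi (or _nccl) subpackage,
        # choosing buffered vs object communication as sketched earlier
        return base_comm.allreduce(send_buf, op=op)


class MPIVStackLike(DistributedMixIn):
    """Toy operator showing that operators inherit the comm methods."""

    def __init__(self, base_comm=MPI.COMM_WORLD):
        self.base_comm = base_comm

    def rmatvec_sum(self, local_contribution):
        # no MPI details here: just call the mixin method
        return self._allreduce(self.base_comm, local_contribution)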

@astroC86 we have also talked a bit about this in the context of your MatrixMult operator. Pinging you so you can follow this space; hopefully, once this PR is merged, the bar for implementing operators that support all backends (NumPy+MPI, CuPy+MPI, CuPy+NCCL) will be lowered, as one would just need to know which communication pattern they want to use and call it from the mixin class without worrying about the subtleties of the different backends.

@mrava87 mrava87 mentioned this pull request Sep 9, 2025
@mrava87 mrava87 changed the title Buffered communication for CUDA-Aware MPI Feat: restructuring of communication methods (and buffered communication for CUDA-Aware MPI) Sep 23, 2025
@mrava87 (Contributor) commented Sep 23, 2025

@tharittk I worked a bit more on this, but there is still quite a bit to do (added to the to-do list in the PR's main comment)...

Also, I am not really sure why some tests fail on specific combinations of Python/MPI/number of ranks but not on others... I have not investigated this yet.
