Various documentation improvements #547
Conversation
"python": ("https://docs.python.org/3/", None), | ||
"numpy": ("https://numpy.org/doc/stable/", None), | ||
"nvvm": ("https://docs.nvidia.com/cuda/libnvvm-api/", None), | ||
"nvjitlink": ("https://docs.nvidia.com/cuda/nvjitlink/", None), |
Any reason to add nvrtc or the CUDA runtime / driver APIs here?
Currently the two codegens set different expectations for API doc generation. The codegen used by driver/runtime/nvrtc regenerates the entire C API reference in the docs (with signatures adjusted to match Python), whereas the codegen used by nvvm/nvjitlink generates basic docs with a "see also" link to the corresponding C API. The mappings added here point to sites that publish a proper objects.inv, which allows cross-linking, e.g.
https://nvidia.github.io/cuda-python/pr-preview/pr-547/cuda-bindings/latest/module/nvjitlink.html#cuda.bindings.nvjitlink.create
The "see also" link there works.
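For illustration, here is a minimal sketch of the docstring style the nvvm/nvjitlink codegen produces, with the "See Also" entry written as a cross-reference that intersphinx resolves against the C API's objects.inv. The signature and the :c:func: role below are assumptions, not the actual generated code.

# Illustrative sketch only; the real binding is code-generated and the exact
# signature/markup may differ.
def create(num_options, options):
    """Create an nvJitLink handle (hypothetical docstring).

    .. seealso:: :c:func:`nvJitLinkCreate`
       Resolved by intersphinx through the "nvjitlink" entry added to the
       mapping above, using that site's objects.inv.
    """
    ...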
(FWIW regarding objects.inv: it's a binary file generated by Sphinx that can be introspected by the intersphinx plugin.)
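As a quick, hedged illustration (not part of this PR), a published inventory can be dumped to see which names intersphinx can cross-link against. The snippet below just shells out to Sphinx's documented `python -m sphinx.ext.intersphinx` helper; the nvjitlink URL is the one added to the mapping above.

# Dump a published objects.inv inventory (requires Sphinx to be installed).
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "sphinx.ext.intersphinx",
     "https://docs.nvidia.com/cuda/nvjitlink/objects.inv"],
    check=True,
)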
$ export CUDA_HOME=/usr/local/cuda
$ export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH
Note for the future: we really shouldn't need to set these when things are installed in standard locations.
Yes, at some point we should revisit the build-time search behavior. Currently we require full explicitness at build time (no implicit auto-discovery behind users' backs), while the ongoing path finder project (#451) focuses on run-time use cases. Once the path finder is mature we can consider using it at build time too (cc @rwgk for vis).
FWIW, though, right now it is not as bad as it seems. If CUDA is installed via the Linux system package manager or conda, we need at most $CUDA_HOME defined; the system or conda compiler knows where the static library is. So this is really only required for CUDA installed to non-default locations.
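To make the "full explicitness" point concrete, here is a minimal sketch of the kind of lookup described above, where nothing is auto-discovered and the build stops if the environment variable is absent. This is illustrative only, not the actual cuda-bindings build code; CUDA_PATH is included only as the conventional alias.

# Minimal sketch of an explicit, no-auto-discovery CUDA lookup at build time.
import os

def find_cuda_home() -> str:
    cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
    if not cuda_home:
        raise RuntimeError(
            "Set CUDA_HOME (or CUDA_PATH) to your CUDA Toolkit location; "
            "nothing is searched for implicitly at build time."
        )
    return cuda_home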
BTW, Python projects can be built against CUDA wheels, as long as they don't contain device code that needs to be compiled by nvcc. I've enabled this for cuquantum-python/nvmath-python, and it's quite handy actually. It's only a matter of time before we also propagate this capability to cuda-bindings. The only downside is that if the C libraries are not yet public, building against public C wheels does not work for obvious reasons, and we need a fallback (i.e. the current behavior).
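As a hedged sketch of what "building against CUDA wheels" can look like, the snippet below locates headers shipped by a CUDA runtime wheel. The package name (e.g. nvidia-cuda-runtime-cu12) and the nvidia/cuda_runtime/include layout are assumptions about the wheel contents, not something specified in this PR.

# Locate CUDA headers installed from a wheel into the "nvidia" namespace package.
import importlib.util
from pathlib import Path

spec = importlib.util.find_spec("nvidia")  # present if any CUDA wheel is installed
if spec is not None and spec.submodule_search_locations:
    for root in spec.submodule_search_locations:
        include_dir = Path(root) / "cuda_runtime" / "include"  # assumed layout
        if include_dir.is_dir():
            print("CUDA runtime headers from wheel:", include_dir)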
Co-authored-by: Keith Kraus <[email protected]>
Nice!
The only super tiny thing that stood out to me: maybe keep nvJitLink, NVRTC, and NVVM in lexicographical order? It doesn't matter much for just three items, but it might be helpful if we add more.
Linked issues:
- Add numba-cuda to the list of CUDA Python projects (#537)
- cuda.cooperative and cuda.parallel
- cuda.bindings installation instructions