Lazy load code modules #269


Merged
merged 9 commits into from
Dec 10, 2024

Conversation

ksimpson-work
Contributor

@ksimpson-work ksimpson-work commented Dec 5, 2024

Load modules in CodeObject lazily.
Closes #268
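A minimal sketch of the lazy-loading pattern this PR applies, using a simplified stand-in class (the class layout and the `_load` helper here are illustrative assumptions, not the actual cuda.core implementation): construction only stores the code, and the expensive driver-side load is deferred until the module is first accessed, then cached.

```python
class CodeObject:
    """Illustrative stand-in for a lazily loading code object."""

    def __init__(self, code):
        self._code = code
        self._module = None  # nothing is loaded at construction time

    @property
    def module(self):
        # Load on first access, then cache the result for later accesses.
        if self._module is None:
            self._module = self._load()
        return self._module

    def _load(self):
        # Placeholder for the real driver call (e.g. a cuModuleLoadData-style
        # API in the actual implementation).
        return ("loaded", self._code)
```

This keeps `CodeObject.__init__` cheap, which matters when many code objects are created but only some are ever executed.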

@ksimpson-work ksimpson-work self-assigned this Dec 5, 2024
Contributor

copy-pr-bot bot commented Dec 5, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@leofang leofang added this to the cuda.core beta 2 milestone Dec 5, 2024
@leofang leofang added enhancement Any code-related improvements P1 Medium priority - Should do cuda.core Everything related to the cuda.core module labels Dec 5, 2024
@ksimpson-work ksimpson-work marked this pull request as ready for review December 6, 2024 23:32
@ksimpson-work ksimpson-work force-pushed the lazy_load_code_modules branch from cba8d04 to 9fba2b7 Compare December 9, 2024 22:23
@ksimpson-work
Contributor Author

/ok to test

@ksimpson-work
Contributor Author

/ok to test

@@ -7,20 +7,12 @@
# is strictly prohibited.

import pytest
from conftest import can_load_generated_ptx
Member

My understanding is that we don't need explicit imports for things from conftest?

Contributor Author

I believe this only applies to fixtures. Helper functions still need to be imported. This is what my research and tests have shown.

Contributor Author

I could add a new file called utils or helpers and import that instead. There are mixed opinions on best practice online.

Collaborator

This is what my research and tests have shown.

I was only relying on ChatGPT before. It told me the imports are not needed. If that's not true, keeping it simple seems best to me, unless you anticipate that we're accumulating many helper functions. I.e., just keep what you have?

Contributor Author

Yeah, it told me that too, then changed its mind. Keeping it simple was my idea as well; with only one helper function it seems premature to split it into a new file, but perhaps down the line it will make sense.

Member

yeah we can move it to tests/utils next time we touch this file
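The distinction discussed above can be illustrated with a small sketch (the file layout and names below are hypothetical, not the repository's actual test code): pytest auto-discovers fixtures defined in `conftest.py` and injects them by argument name, but a plain helper function defined there is just an ordinary module-level name that test files must import explicitly.

```python
import pytest

# --- conftest.py (illustrative) ---

@pytest.fixture
def compiled_ptx():
    # Fixtures are injected by pytest via the parameter name;
    # no import is needed in test modules.
    return b".version 8.0"

def can_load_generated_ptx():
    # A plain helper: pytest does NOT inject this, so a test module
    # needs `from conftest import can_load_generated_ptx`.
    return True

# --- test_program.py (illustrative) ---

def test_compile(compiled_ptx):
    assert can_load_generated_ptx()
    assert compiled_ptx.startswith(b".version")
```

This is why removing the explicit import would break the helper call while leaving fixture usage unaffected.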

rwgk
rwgk previously approved these changes Dec 10, 2024
Collaborator

@rwgk rwgk left a comment

LGTM (assuming the explicit conftest imports are removed).

rwgk
rwgk previously approved these changes Dec 10, 2024
@ksimpson-work
Contributor Author

/ok to test

rwgk
rwgk previously approved these changes Dec 10, 2024
@leofang
Member

leofang commented Dec 10, 2024

FYI, the PR title can be renamed

@ksimpson-work ksimpson-work changed the title WIP Lazy load code modules Lazy load code modules Dec 10, 2024
if isinstance(module, str):
# TODO: this option is only taken by the new library APIs, but we have
# a bug that we can't easily support it just yet (NVIDIA/cuda-python#73).
if jit_options is not None:
raise ValueError
module = module.encode()
Member

Good catch, thanks.

Contributor Author

don't thank me, thank the good folks at ruff

@leofang
Member

leofang commented Dec 10, 2024

/ok to test

@leofang
Member

leofang commented Dec 10, 2024

/ok to test

@ksimpson-work ksimpson-work merged commit 64cbc4c into NVIDIA:main Dec 10, 2024
24 of 30 checks passed
@ksimpson-work ksimpson-work deleted the lazy_load_code_modules branch December 10, 2024 22:02
Labels
cuda.core Everything related to the cuda.core module enhancement Any code-related improvements P1 Medium priority - Should do
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Make the ObjectCode class lazy load modules
3 participants