[Request impl] Devtool end-to-end tests #9778
It's an interesting and helpful issue for me to start. Would it be alright if I take this?
Sure!
Hey bro! I've seen a directory named I was thinking of creating a new file in this directory to add a unittest class. Within this class, I plan to write methods to create
@HonestDeng Thanks for your response!
Which directory here are you working on? I think executorch/devtools/test/test_end2end.py would be great.
Yeah, as long as they will be composed into a single e2e test pipeline, that should be fine. The target of the e2e test is to exercise the pipeline as a whole, rather than testing each module individually.
Oh no, my bad! I mistakenly wrote I'll implement it in
Based on my reading of the documentation, here is my proposed approach:
Finally, assertions, such as self.assertEqual, are needed to verify that the test executed successfully.
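The proposed approach could be sketched as a unittest class like the one below. This is a structural sketch only: the class name, file paths, and method bodies are my assumptions, and the executorch-specific calls are indicated as comments rather than executed.

```python
# Structural sketch of the proposed e2e test class. Names and paths are
# hypothetical; the executorch calls are left as comments.
import unittest


class TestDevtoolsEndToEnd(unittest.TestCase):
    def setUp(self):
        # hypothetical artifact paths
        self.etrecord_path = "etrecord.bin"
        self.etdump_path = "etdump.etdp"

    def generate_etrecord_(self):
        # export the model, lower it with to_edge(), call to_executorch(),
        # then generate_etrecord(self.etrecord_path, ...)
        pass

    def generate_etdump(self):
        # load the program with enable_etdump=True, call forward() with
        # example inputs, then write_etdump_result_to_file(self.etdump_path)
        pass

    def test_profile(self):
        self.generate_etrecord_()
        self.generate_etdump()
        # build an Inspector from the two artifacts and assert on the
        # results, e.g. self.assertEqual(...) on the parsed event data
        self.assertTrue(self.etdump_path.endswith(".etdp"))
```

Running this skeleton with `unittest` verifies the test wiring before the real devtools calls are filled in.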
Thanks for sharing! The overall logic looks great!
Hey team, I've successfully generated the I looked at the I also reviewed the C++ code in If execution is indeed required, how can I achieve this using the Python bindings? Specifically, how do I run the loaded module via Python in a way that triggers the Thanks a lot!
you can use
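For reference, a hedged sketch of running a loaded module so that an ETDump is produced. The calls mirror those used in the snippet later in this thread (`_load_for_executorch_from_buffer` with `enable_etdump=True`); the helper is guarded so it degrades gracefully when executorch is not installed in the current environment.

```python
# Sketch: run a serialized ExecuTorch program with ETDump collection on.
# Guarded import so the module still loads without executorch installed.
try:
    from executorch.extension.pybindings.portable_lib import (
        _load_for_executorch_from_buffer,
    )

    HAVE_EXECUTORCH = True
except ImportError:
    HAVE_EXECUTORCH = False


def run_with_etdump(program_buffer, inputs, etdump_path):
    """Load a .pte buffer with ETDump enabled, run it, and dump the trace."""
    if not HAVE_EXECUTORCH:
        raise RuntimeError("executorch pybindings are not available")
    # enable_etdump=True makes the runtime record profiling events
    module = _load_for_executorch_from_buffer(program_buffer, enable_etdump=True)
    outputs = module.forward(inputs)
    # persist the collected events for the Inspector to consume
    module.write_etdump_result_to_file(etdump_path)
    return outputs
```

Note that ETDump collection also requires executorch to be built with ETDump support enabled.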
I built the project using
Then tried to run:
But got:
I also tried to use:
But it still failed. Why? Here is the commit e449350
@HonestDeng thanks for sharing.
I also left some comments on your commit. Please let me know if there's anything I can help with.
I'm stuck because I'm encountering the following error: |
```python
# imports inferred from the calls below (not present in the pasted snippet)
import copy

import torch
from torch.export import ExportedProgram, export

from executorch.devtools import generate_etrecord
from executorch.exir import EdgeCompileConfig, to_edge
from executorch.extension.pybindings.portable_lib import (
    _load_for_executorch_from_buffer,
)


def generate_etrecord_(self):
    aten_model: ExportedProgram = export(
        self.model,
        (torch.randn(1, 1, 32, 32), torch.randn(1, 1, 32, 32)),
    )
    edge_program_manager = to_edge(
        aten_model,
        compile_config=EdgeCompileConfig(
            _use_edge_ops=False,
            _check_ir_validity=False,
        ),
    )
    edge_program_manager_copy = copy.deepcopy(edge_program_manager)
    et_program_manager = edge_program_manager.to_executorch()
    self.et_program_manager = et_program_manager
    generate_etrecord(self.etrecord_path, edge_program_manager_copy, et_program_manager)


def generate_etdump(self):
    # load executorch program from buffer, and set enable_etdump to True
    program = _load_for_executorch_from_buffer(
        self.et_program_manager.buffer, enable_etdump=True
    )
    # run program with example inputs to generate etdump
    program.forward((torch.randn(1, 1, 32, 32), torch.randn(1, 1, 32, 32)))
    # write etdump to file
    program.write_etdump_result_to_file(self.etdump_path)
```

I tried not using a bundled program, but it still failed. Is there any problem with my code?
Currently, I have installed executorch from source using
Interesting, I didn't get this issue before.
I've committed my demo code to GitHub. Thanks!
Is this issue done @HonestDeng ? |
@metascroy nope |
🚀 The feature, motivation and pitch
Devtools now has multiple components, such as etdump, etrecord, and inspector. The Inspector takes an etdump and (optionally) an etrecord as input, but today we only have tests for each module separately and no end-to-end tests, which introduces risk for our debugging pipeline. We would love for you to help us build end-to-end testing.
RFC
The overall picture of the e2e test will be like: generate the etdump and etrecord artifacts, build an Inspector from them, call its APIs (e.g. to_dataframe) and compare the result with your expectation.
This doc might be a good resource for you to understand the end2end pipeline better.
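The flow described above can be composed into a single pipeline driver. Below is a minimal, library-free sketch: every name here (`run_e2e` and the three stage callables) is a hypothetical stand-in for the real generate_etrecord / etdump / Inspector steps, shown only to illustrate how the stages chain into one e2e test.

```python
# Hypothetical composition of the e2e stages; the stage functions are
# stand-ins for the real etrecord / etdump / Inspector calls.
from typing import Any, Callable


def run_e2e(
    make_etrecord: Callable[[], Any],
    make_etdump: Callable[[], Any],
    inspect: Callable[[Any, Any], Any],
) -> Any:
    """Run the whole devtools pipeline as one test, not module by module."""
    etrecord = make_etrecord()
    etdump = make_etdump()
    # the Inspector consumes the etdump plus the (optional) etrecord
    return inspect(etdump, etrecord)


# usage with stub stages, standing in for the real artifacts
report = run_e2e(
    make_etrecord=lambda: "etrecord.bin",
    make_etdump=lambda: "etdump.etdp",
    inspect=lambda d, r: {"etdump": d, "etrecord": r, "rows": 0},
)
```

The final assertion step of the real test would then compare the Inspector's output (for example, the dataframe rows) against expected values.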