[mlir][test] Test conversion of TOSA to EmitC via LinAlg #94640
Conversation
@mtrofin @simon-camp, sorry that this is about a week behind schedule. I had most of this done last week, but I was trying to figure out why some of the transforms weren't working as expected. I believe that using the […]. Once we settle on the precise input MLIR, all that's left to do is to validate the generated header via CHECK lines. @simon-camp, perhaps you have an idea why the second RUN line is failing? These are all basically translations from your script, and there shouldn't be any material deviations from the original.
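For context, a lit test for this kind of pipeline would roughly take the following shape. This is only a sketch: the function name, the pass names and their order, and the CHECK line are assumptions for illustration, not the RUN lines actually used in this patch.

```mlir
// Rough sketch only; the exact pass pipeline and CHECK lines in the patch differ.
// RUN: mlir-opt %s --pass-pipeline="builtin.module(func.func(tosa-to-linalg))" \
// RUN:   | mlir-opt --one-shot-bufferize="bufferize-function-boundaries" \
// RUN:   | mlir-opt --convert-linalg-to-loops --convert-scf-to-emitc \
// RUN:   | mlir-opt --convert-memref-to-emitc --convert-arith-to-emitc --convert-func-to-emitc \
// RUN:   | mlir-translate --mlir-to-cpp | FileCheck %s

// The real test would CHECK the contents of the generated header; here we only
// check that the function name survives the round trip.
// CHECK: add_one
func.func @add_one(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %cst = "tosa.const"() {value = dense<1.0> : tensor<4xf32>} : () -> tensor<4xf32>
  %0 = "tosa.add"(%arg0, %cst) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}
```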
What's happening is that the […]. TL;DR: remove the […]. Interestingly, the test then runs to line 15, where it crashes. The input to that invocation is this. I expected the lowering to fail there, as we support neither dynamic shapes nor 0-d memrefs in the conversion to EmitC, but the passes should fail with an error message instead of an assertion. I can take a look at this next week.
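To make the limitation concrete, here is a hedged illustration (not taken from the failing invocation) of the two kinds of inputs the EmitC lowering currently rejects: dynamically shaped memrefs and 0-d memrefs. The expectation is that the conversion reports a diagnostic on these instead of asserting.

```mlir
// Illustrative inputs only; neither form is supported by the conversion to
// EmitC, which should emit an error rather than assert.
func.func @dynamic_shape(%arg0: memref<?xf32>, %i: index) -> f32 {
  %0 = memref.load %arg0[%i] : memref<?xf32>   // dynamic extent '?' is unsupported
  return %0 : f32
}

func.func @zero_d(%arg0: memref<f32>) -> f32 {
  %0 = memref.load %arg0[] : memref<f32>       // 0-d memref is unsupported
  return %0 : f32
}
```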
Thanks. I'll update the patch in a bit.
I guess that may require some of the post-processing you mentioned? I'll see if I can add some of those Python cleanups you provided in the meantime. We can add some TODOs and remove them as the functionality improves.
I pushed a fix for the crash in #94936. I would suggest that you test the script on an actual model, as it should have fixed shapes. That will not run into most of the unsupported features seen in the current tests. Feel free to share the input model in the TOSA dialect if things don't work directly.
Thanks for the suggestion. Since this was testing the conversion from TOSA -> LinAlg -> EmitC, I assumed using the tests for TOSA -> LinAlg would be a good starting point. I'm not all that familiar with how the models are derived. Do you have a suggestion as to how we should generate a minimal case? On the LLVM side, I'd either start with some minimal C code and generate IR or write it by hand. I'm a little out of my depth on the MLIR side of things, and it's never been clear to me how the MLIR used for MLGO is derived in the first place.
✅ With the latest revision this PR passed the Python code formatter.
@simon-camp I finally made some progress here after getting some help from @mtrofin on the saved model. For now I've gone with a super simple model from TF Lite, and we can use more complex ones once we fix the issues with the test/conversion pipeline. Right now the main blocker I have is that the final […]. I think this is happening because we don't have a materialization callback available in […]. Error: […]
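The concrete error output was not preserved above, but as a hedged illustration: a missing materialization callback in a dialect conversion typically surfaces as leftover builtin.unrealized_conversion_cast ops that a later pass or the verifier then rejects. The types shown here are assumptions, not the ones from this failure.

```mlir
// Illustrative only: without a source/target materialization, the conversion
// framework bridges mismatched types with unresolved casts like this one.
%0 = builtin.unrealized_conversion_cast %arg0 : memref<4xf32> to !emitc.array<4xf32>
```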
Hi @ilovepi, @mtrofin. I had some time to look into the failure. I have an idea how to fix the error; I'm just waiting on feedback from other folks before sending a PR. Alongside that, I stripped down the pass pipeline to a minimal version that works for the new test case. I have a working prototype here that gets rid of the Python hack completely.
@simon-camp, that's great news! The prototype looks much nicer than the complicated nest of RUN lines. Is there a PR for the prototype yet? If so, we can probably abandon this, since your implementation looks much better, and we can keep incrementally adding more test cases after that.
@simon-camp I saw #114204 landed. What's the delta between upstream and your prototype? It seems like there are a few more conversion changes to land. Are these things we can help with?
Great timing, @ilovepi. I wanted to post an update today before I go on vacation. There are 2-3 PRs missing: […]
I will be back at work in mid-November and can continue working on it. |
Hi @ilovepi, are you still working on this PR? Should we close it?
Ah, right, @simon-camp's conversion work in #117549 supersedes this work. Thanks for the reminder to close. |
This is an important use case for MLGO: it simplifies maintenance and makes it easier to share models via C headers.