MTK v11 release blog post #193
This post fully explains the licensing changes
AayushSabharwal left a comment:
Mostly just grammatical/phrasing changes.
> ModelingToolkit compiler when ready. The ModelingToolkit tearing passes still need to improve their rules around
> arrays to remove some scalarization, and this will happen in the v11 time frame as no breaking changes are required
> for that. When completed, the generated code for array expressions will be O(1) in code size and compilation time.
> This will solve a long-standing issue in the ModelingToolkit compiler, and approximately 80% of the work is now
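For readers outside the review thread, here is a minimal sketch of the kind of array model the quoted passage is about. It assumes the v9-era ModelingToolkit API (names such as `ODESystem` and `structural_simplify` differ across versions), so treat it as illustrative rather than authoritative:

```julia
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D

# An array-valued unknown and parameter; the model is written as a single
# symbolic array equation rather than three scalar equations.
@variables x(t)[1:3]
@parameters A[1:3, 1:3]

eqs = [D(x) ~ A * x]
@named sys = ODESystem(eqs, t)

# Today the tearing/simplification passes scalarize this into one equation
# per array element; the goal described in the quoted text is to keep it as
# an array expression so the generated code stays O(1) in size and compile time.
simplified = structural_simplify(sys)
```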
80% seems a little over the top 😅 considering we still need tensor calculus. We also need to update StateSelection.jl's core data structures to support array equations, which it remains (rather blissfully) unaware of.
What's a better guesstimate?
Uhm. I'd say around 60% best case.
This feels very premature to me. We need to make a community post on Discourse explaining the situation and soliciting feedback before proceeding here. The community may feel strongly that they do not support the proposed split of MTK and would prefer to just continue with an MIT MTK 10 as the SciML version (note, I'm not saying I'd personally advocate this direction).
I talked with quite a few people, and the issue with that is that it gives a false sense of choice. Continuing with the MIT MTK v10 version as the SciML version just isn't really on the docket. Either we make it the project, in which case to compensate for the losses we need a private donor (because both EU and US sources have already been fully checked) of ~$15mil plus 2 people, not covered by that funding, who will leave their jobs to go full time on maintenance; or we take this route. The first is such a ridiculous ask that it sounds more like satire, but we have all come to terms with that being the reality of the project by now, as explained in the post. I don't think the community will just find something to satisfy that in the next week or two, especially after we've had open calls for donations and maintainers since the start of the project. So either we leave the repo dead and abandoned, which would probably still require some compensation because that too can be a downside in further grant reviews, or we take this approach. At the end of the day, it's a false sense of choice unless some miracle happens. There are grants to submit for the next iteration of all entities, including MIT, and they have all had this requirement for years. And of course we can always merge backports to v10 (and v9) if anyone does show up with commits.
Edit: I have edited this as I thought the post had already been merged and was no longer open for comments. My apologies.
At least for me, I would not vote to make this post / official announcement without providing an opportunity for community feedback, even if the feedback post states that this is the only way to realistically continue MTK development.
What do you think is the best way to share it? A link to this PR?
I agree with @isaacsas. I haven't even had time yet to read Pamela's reply, so it's still not clear to me what the legal consequences of this license change are for open-source packages and commercial users such as PumasAI. For commercial users, I assume it would also be interesting to know whether there's a possibility to purchase a commercial license. Has this been discussed yet?
I don't see why Pumas wouldn't just use ModelingToolkitBase; it doesn't rely on the other functionality. Neither should Catalyst.jl or the SBML readers.
I have to deal with teaching and grading till later tonight, but I'll try to review what is stated here and reply about how to proceed. If there is a desire to stop waiting on the lawyer, we could draft a Discourse post explaining the situation, all review and approve it, and then make the post. We can link here for more details, but the post should cover the main points about why this is happening; the reality of who is carrying out and funding MTK development; and the options we discussed, why they don't seem workable, and what the SciML Council concluded was probably the most workable practical solution (i.e. the split mentioned here). This lets our users know our thought process, and that we were trying to balance their best interests in using MTK with the realities of ensuring MTK is sustainably developed. We then need to allow a bit of time for people to process the post and reply (one week before proceeding with any changes?). I am happy to make the draft and circulate it tonight. It can avoid discussing any detailed legal implications and just tell everyone that if they use base MTK only they can stay MIT, but if they use AGPL components they will need to research that and draw their own conclusions.
Also, I'd add that if grant reviewers are requiring commercialization protections across multiple agencies, that's a message that should be in the Discourse post. Chris, when we draft the post, could you provide info on the funders and programs with these requirements, along with (redacted) reviewer comments we could include? That would really make the point to users that this is a major issue with some agencies now, and I think some community members would be even more sympathetic to this issue than to JuliaHub's original concerns (not that those aren't important too!).
I have some short general comments on the post text, but I think in either case it might be useful to have a quick council meeting as soon as possible to go through the details before we post it.
Here are some details that are being asked for. In general, ModelingToolkit has had a lot of difficulty getting academic funds because it doesn't fall into the topics generally seen as mathematics, computer science, or engineering by agencies such as the NSF and NIH. Numerical methods for solving equations and scientific machine learning fall squarely into that domain, and so that is what has tended to work for MIT Julia Lab funding of most SciML projects. But notably, those two topics exclude MTK, and so most grants that are funded from the traditional sources, like the NSF grant we share, exclude MTK. Thus any student funding has to come from discretionary funds or some other pot. From that we really only funded Shashi, and that was with the ARPA-E grant joint with JuliaHub, which I will describe below. That is why we've only had one student at MIT on the topic; this has had many, many failures. The only ones that have worked were Digiwell, which was actually a Norwegian Research Council project that MIT had a loose association with to help on some aims related to PDE modeling (specifically MethodOfLines.jl, so it wasn't ModelingToolkit directly, because it was originally DiffEqOperators.jl as the auto-PDE discretizer, but it worked ModelingToolkit in there via MethodOfLines, which then justified work on ModelingToolkit 😉; you can see how this game is played), and now the NSF grant with the EarthSciML group, which is a small amount. So with most of the traditional math, computer science, and engineering research grants having an abnormally low success rate, we have focused a lot more effort on grant sources beyond the traditional NSF/NIH. Here are some clips from the Research Council of Norway rejection on Tuesday, November 25th:
It was the second attempt at that grant; the first attempt was also a rejection, and it had this line:
That's just a recent grant at the top of my email, but there are many others. For example, it's pretty publicly known that most of MTK was funded through ARPA-E DIFFERENTIATE (https://arpa-e.energy.gov/programs-and-initiatives/view-all-programs/differentiate), which has heavy commercialization requirements:
https://arpa-e.energy.gov/about/tech-to-market/commercialization We were denied the follow-on funding because of a weak score on the T2M portion, in large part due to the open source nature of part of the project. I don't know if I can share the exact wording, but it's very similar to the NRC response. All follow-ons from that point on thus completely removed and excluded ModelingToolkit from the potential projects because it was seen as unfundable. One of the other sources has been things like https://www.sbir.gov/. In fact, the MIT Julia Lab is now having to move more to SBIR/STTR-type funding sources given the changes in the US funding agencies: NSF and NIH have generally seen broad decreases in the available funds, while the amount of funds in SBIR/STTR sources has steadily risen. One successful MIT grant was the one with Deliniate. I suspect that one of the things we will do more is help students in the lab become entrepreneurs and co-PIs on an SBIR/STTR. But these of course have to have a path to commercialization:
We did an SBIR once before on a QSP simulator built on ModelingToolkit, which had a Phase I but never received a Phase II, and we have since not tried to include ModelingToolkit in future grants. In particular, the MIT Julia Lab has been looking at some Army and Navy SBIR/STTRs related to some previous work with Lincoln Labs, but all of those focused more on building a commercial element on SciML/Reactant/Lux/PINNs due to topic alignment. In order to align with the combustion, thrust, etc. types of projects they would want, ModelingToolkit would need to build out chemical process libraries in a similar vein to NPSS, which was one of the NRC grants that was rejected for lack of a valid commercialization path. DARPA funds have been the other source holding MTK up, but we have not been able to apply successfully with ModelingToolkit contributions due to not meeting the commercialization requirements (https://www.darpa.mil/sites/default/files/attachment/2025-01/Transition-and-Commercialization-Guide.pdf), both from MIT and JuliaHub.

So as it stands, ModelingToolkit projects are the ones for which we have written the most grants and gotten the most rejections. There has not been a single grant we have gotten that considers symbolic-numerics a proper area of mathematical or computer science research; those have all been rejected. It's not considered actual engineering either, just a compiler for engineering models, so those have been rejected too. Every single grant it has gotten, except the one small sub-award of the NSF EarthSciML grant, has thus required a commercialization element, because a compiler for actually making good, fast code is seen as industrially useful work, not research. And the vast majority of those submissions get rejected on the commercialization aspect because of the open source nature, with all of the language looking just like the clips I sent. Even the ones we did get tend to fail to win the follow-on funds (which are the larger amount). And even then, the grants that get funded are the ones which satisfy the commercialization element by having almost all of the aims not be about the open source ModelingToolkit but instead the GUIs and other aspects of the DSLs, such as Dyad, built on ModelingToolkit.

It is the project which takes the most maintenance work, the most cost, and the most grant writing time, yet pulls in the absolute least for both the MIT Julia Lab and JuliaHub. And for this specific project we repeatedly hear the same exact thing over and over again: that it requires a commercialization element. So, I wish the reality was different. I think symbolic-numerics could be a cool research area. The reality is, it's not a supported area of NSF/NIH. Even if it were, those agencies don't have money anymore. It has all shifted to things that require commercialization, and the reviewers have been repeatedly loud and clear about this issue. But again, the change here does not make anything not open source. AGPL is still an open source license. It just gives the bare minimum of protection that can satisfy the granting requirements. I think some people here will try to pin this as a JuliaHub thing, but think about it: think about how many students we have had at MIT since I got there in 2018, about 16 graduated? And the only one that did anything remotely close to ModelingToolkit was Shashi, whose work was actually on Symbolics, and it was only funded by ARPA-E DIFFERENTIATE with a commercialization requirement (i.e. JuliaSim/Dyad).
This project only works because of unfunded nights and weekends from people who are actually funded to do other things, while it also has the most remaining issues, the most Discourse threads, and the most constant discussions about the growing requirements of making the compiler bigger, better, faster, stronger. This is a project where I have begged the Julia-related companies to give internships to people so we can keep it afloat, have self-funded contracts to get things fixed up in the standard library, and have continued to support it even while every single source has told me no. I believe in this project so much that I have put my lifeblood into it, but it has been going on for so long that we are being bled dry. And we are given a way out by taking one small portion of it, a portion which was originally slated to not be open source in the ARPA-E grant that funded it and was kept as a separate module knowing that we needed to do something with it to make it maintainable, a part that Catalyst.jl and all of the ODE stuff doesn't rely on or use, and we are asking to take that part and give it just a slightly more restrictive AGPL license (still an OSI-approved open source license) in order to satisfy the countless grant reviewers and other funders who have demanded this change. I never ask for much from the open source community and try to serve as a bedrock that it can always rely on, but no, this is a cry for help and an ask for a compromise to stop the bleeding. This post is a bit more direct and raw than the polish of the blog post, but I leave it online because those who are searching for it should know the hidden life toll it takes to keep such open source projects alive. Agencies don't fund compilers. I wish they did, but they don't. I am glad I actually found any option that could let this project still be a reality, and an open source one at that.
Chris, I think you are misinterpreting my comments / thoughts on this. I am not trying to push back on a hybrid approach as we previously discussed. I do, however, think we need to give our users a chance to understand the change and its ramifications, and to comment on it. I don't expect anyone to offer a better solution, but they deserve our letting them know the situation, what we think is the best resolution we can construct, and a chance to at least give us their feedback. Let's give users a heads up about this change and its necessity instead of surprising them with it and thereby giving them no ability to give us feedback before we officially make any switch. On another note, we did get MTK funding from Wellcome and CZI, though the funding ultimately primarily contributed to MethodOfLines and Catalyst. However, in both cases I believe the funders were happy to fund MTK in addition to its ecosystem (isn't MTK part of what pulled Wellcome in originally?). That said, I don't think either of these funders is continuing such programs, so it completely reinforces your comments about the lack of funding mechanisms these days.
I think it might be easiest to just have a quick meeting to go through the details, ideally soon so that we don't have to delay too much with sorting this out. I'm mostly available; it probably just hinges on finding a time Chris and Sam can both make.
A primitive form of
That's the crucial part in the SBML/QSP workflow (and I guess more generally when integrating MTK models, but that's where it came up repeatedly recently) 🙂
That's great to hear and was not clear to me from previous discussions.
Pumas and SBML do rely on a very primitive version of this though, which is effectively just expressions like
I've actually had a pass since early in v10 which does exactly this, since it speeds up
I think that's worth adding.
If someone takes the time and effort to port pieces of the MTK 10 functionality that have moved to the JuliaHub library, will those be accepted into the new base MIT MTK 11?
It would probably be best as a subpackage / sublibrary, so that it stays maintainable. Part of the refactoring is that it's just better to have a flexible pass system in the compiler: this will enable future alternative algorithms for doing tearing / index reduction (of which there are many possible alternatives with different trade-offs, which we have not been able to explore because we have had one hardcoded path). So MTKBase having a flexible pass system means there can be many different forms of tearing and index reduction, and if someone wants to contribute a sublibrary that adds new passes, such as the kind you suggest, that is a very reasonable thing to live in the

Note that I will likely be submitting some grants from the MIT Julia Lab during the spring around work on optimal tearing algorithms, which would be an alternative tearing implementation that uses a SAT solver to directly solve the NP-complete problem of finding the minimal tearing solution. This also gives uniqueness, something that heuristic greedy methods like the one we have don't guarantee (this is where the sorting stuff comes from). However, it necessarily has issues with scaling and requires scalarization of all array equations, so it likely isn't so useful for a lot of the things people are doing in Dyad; but as a symbolic-numeric push for research and a drive to get more stable methods with provable characteristics, this would be a very fun and interesting algorithm that would probably take about 3 years of a PhD student to get robust. This, if funded at MIT, would be something that would go into the

The MTKBase refactor is simply a better infrastructure for continuing this research and development because it does not favor any pass, and it opens up the research and development of others. MTK defaulting to the JuliaHub one makes sense because it's the one that is the fastest, most tested, and most robust, but the whole goal of this is to foster more passes being created. Again, the whole point is to get more contributors into the ecosystem, to get more people playing with all of this, and to foster more growth in people writing such passes and add-ons. So no, it wouldn't go into the core of MTK, because nothing should: there should be no hardcoded pass. All should be add-ons, and then MTK just chooses whatever is generally the best for most people to use. If someone spends enough time to build one that is strictly better than the one that has been developed over the last 5 years, MTK should change the default to it. I just think it's pretty unlikely that will be the case, looking at the history of what it has taken to even get to this point, so what will likely happen is that there will be some partial alternatives which are more research focused and solve some interesting problems, but the core recommended ones will likely remain the StateSelection.jl and ModelingToolkitTearing.jl ones.
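To make the "flexible pass system" idea concrete, here is a purely hypothetical sketch; none of these type or function names exist in ModelingToolkitBase, and it is only meant to illustrate how alternative tearing / index-reduction passes could be swapped in:

```julia
# Hypothetical sketch of a pluggable structural-pass interface (not real
# ModelingToolkitBase API): each pass rewrites the system representation,
# and a compile pipeline is just an ordered list of passes.
abstract type StructuralPass end

struct GreedyTearing <: StructuralPass end     # heuristic, fast (the current style)
struct SATOptimalTearing <: StructuralPass     # SAT-based minimal tearing (research idea)
    time_limit_seconds::Float64
end

# Placeholder implementations: a real pass would transform the system's
# incidence / bipartite-graph representation here.
run_pass(::GreedyTearing, sys) = sys
run_pass(::SATOptimalTearing, sys) = sys

# The compiler then applies whatever passes the user (or a sublibrary) selects.
function compile_system(sys; passes = StructuralPass[GreedyTearing()])
    for p in passes
        sys = run_pass(p, sys)
    end
    return sys
end

# A contributed sublibrary could ship `SATOptimalTearing` and users would opt in with:
# compile_system(sys; passes = [SATOptimalTearing(60.0)])
```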
> ModelingToolkit.jl already had some GPL implications. However, with this change, the implications include
> the symbolic transformation libraries specifically for handling acausal models and high-index DAEs.
> This was a difficult decision to make, and the evolution of ModelingToolkit.jl shows many scars of previous
I think we should maybe make it clearer that this decision would be made and supported by the SciML Steering Council. That way readers know the mechanisms by which things like this are handled.




