add a flag to control the hash checking #77
Conversation
|
I would prefer if this were an additional constructor parameter and/or command-line argument. The way you wrote this, it looks like you're going to modify this field with reflection, which is fine, but if there's no other way to access it then it defeats the purpose. |
Signed-off-by: Trial97 <[email protected]>
|
Sure thing, I updated the PR to make it a command-line argument. |
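For illustration, the shape of what the reviewer asked for — a flag that enters through the command line and is carried as a constructor parameter instead of being poked via reflection — might look roughly like the sketch below. The flag name and class are hypothetical, not the installer's actual API.

```java
// Hypothetical sketch only: plumbing a --skip-hash-check style flag from the
// command line into a constructor parameter. All names are made up for illustration.
public final class HashCheckOptions {
    private final boolean skipHashCheck;

    public HashCheckOptions(boolean skipHashCheck) {
        this.skipHashCheck = skipHashCheck;
    }

    public boolean skipHashCheck() {
        return skipHashCheck;
    }

    // Parse the hypothetical flag out of the installer's argument list.
    public static HashCheckOptions fromArgs(String[] args) {
        boolean skip = false;
        for (String arg : args) {
            if ("--skip-hash-check".equals(arg)) {
                skip = true;
            }
        }
        return new HashCheckOptions(skip);
    }
}
```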
|
The better option, since we have to rebuild the installers anyways, is to rebuild them to just specify the correct compression level. It's already been fixed in modern versions; it's just older shit that's the issue. |
|
What is the correct compression level? Or does it need to be changed in multiple places? Also, it would be nice to still have the option to skip the hash checks even if it is fixed, but as long as it is fixed, I do not complain. |
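For readers unfamiliar with where that knob lives: in Java's standard zip APIs the compression level is chosen per ZipOutputStream (or Deflater) at the point each jar is written, so every tool that builds jars has to be updated separately. The snippet below only illustrates that knob; it is not the actual JarSplitter/installer code, and the "correct" level is whatever the linked issue settled on.

```java
// Illustration only: where a compression level is chosen when a jar is written
// with java.util.zip. This is not the installers' actual build code.
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class CompressionLevelExample {
    public static void main(String[] args) throws IOException {
        try (ZipOutputStream jar = new ZipOutputStream(new FileOutputStream("example.jar"))) {
            jar.setLevel(Deflater.DEFAULT_COMPRESSION); // the per-tool knob in question
            jar.putNextEntry(new ZipEntry("example.txt"));
            jar.write("hello".getBytes(StandardCharsets.UTF_8));
            jar.closeEntry();
        }
    }
}
```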
|
MinecraftForge/JarSplitter#2 |
|
Continuing from: #78 (comment) |
|
Honestly I don't want to include this because knowing when things are going wrong is important. This whole issue is that a somewhat common Linux library has a behavioral bug in it so that it doesn't respect standard defaults. Adding this flag would just cause people to not report issues. If/when someone wants to work on making an installer updater pass that fixes this properly then I may consider that side. But regarding this flag, it is not needed as the update will fix the issue. Also, we need a way to actually validate that this is an issue on our end, as the assumption is that it exists but it has never been reproduced. If someone can produce something like a docker container or virtual machine image, I would appreciate it. |
|
I'm not sure I would be able to regenerate the old version myself as that is a bit too much for me. Not sure exactly where the issue is or why it is happening, but I had people complaining about this: PrismLauncher/PrismLauncher#3406 and https://www.reddit.com/r/PrismLauncher/comments/1ifpbgp/issue_with_modpacks/ . And from the change you linked previously, it should have already been fixed a long time ago, but this was reported this year. If this is not desired as a public flag, can it at least be accepted as a private one, as I proposed originally? My use case is just wanting to add a way for ForgeWrapper (modifying the field with reflection) to be able to bypass this check without needing to reimplement the logic. To my knowledge, this setting is already present in the FTB App, but I would want to avoid reimplementing the install process. |
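A rough sketch of the ForgeWrapper-side approach described above — flipping a flag on the installer via reflection rather than reimplementing the install logic — is below. The class and field names are assumptions for illustration; the installer's real internals differ.

```java
// Hypothetical sketch: disabling the installer's hash check from a wrapper via
// reflection. "HashChecker" and "CHECK_HASHES" are invented names, assumed to be
// a non-final static boolean on the installer's side.
import java.lang.reflect.Field;

public class DisableHashCheck {
    public static void disable(ClassLoader installerLoader) throws Exception {
        Class<?> checker = installerLoader.loadClass("net.minecraftforge.installer.HashChecker"); // assumed name
        Field flag = checker.getDeclaredField("CHECK_HASHES"); // assumed field
        flag.setAccessible(true);
        flag.setBoolean(null, false); // null receiver because the field is static
    }
}
```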
|
Okay, an update on this, regarding both this PR and the actual issue. I have been able to reproduce the issue, FINALLY, by creating a docker container. So I have a few options, all of which have the con of needing to rebuild every installer. |
|
This is on the back burner as it's not a super critical error, for already stated reasons. But once I was able to get a dynamically linked distro on Ubuntu, I am able to reliably reproduce the issue.

Some fun things: the basic take from that is that zlib is very fragmented in modern development. Tons of variants, some that follow spec, some that don't. Some that are better at one aspect, but horrible at others. For our purposes though, any solution that mitigates zlib-ng's issues yet still keeps compression will just cause issues with other implementations. I am still not willing to just strip out caching that works for the VAST MAJORITY of users for the few who decide to go out of their way to install a custom implementation and a statically linked Java distro. (See my unhinged rants about this on Discord if you wanna see how hard it was to get a reliable reproduction case.)

So this leaves us with two options. Store: I rebuild all the old installers/tools used by them to use no compression when creating the jars needed. Or Deep: making the installer check the uncompressed contents of processor steps. With hard drive space being cheap, I'm really leaning towards Store. So I'm looking into what processors we run, and if it's possible to rebuild the installers to use versions that function with that ability. Some statistics on issues with old versions:

I'll eventually be setting up a system to do automated tests of all older client installations, but the server is the easiest for now. There are other projects that are on my higher priority; I wanna get an initial release of the new toolchain out. So if you wanna see this any time soon, it'd be worth it to help. |
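For the "Store" option above, a minimal sketch of what writing uncompressed (STORED) jar entries looks like with java.util.zip follows. STORED entries involve no zlib deflate step, so their bytes — and therefore their hashes — do not depend on which zlib variant is installed. This is an illustration, not the installers' actual build code.

```java
// Minimal sketch of the "Store" approach: jar entries written with
// ZipEntry.STORED skip deflate entirely, so output is byte-for-byte stable.
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class StoredJarWriter {
    public static void writeStored(ZipOutputStream jar, String name, byte[] data) throws IOException {
        // STORED entries require size, compressed size, and CRC up front.
        CRC32 crc = new CRC32();
        crc.update(data);

        ZipEntry entry = new ZipEntry(name);
        entry.setMethod(ZipEntry.STORED);      // no deflate step at all
        entry.setSize(data.length);
        entry.setCompressedSize(data.length);  // same as size for STORED
        entry.setCrc(crc.getValue());

        jar.putNextEntry(entry);
        jar.write(data);
        jar.closeEntry();
    }

    public static void main(String[] args) throws IOException {
        try (ZipOutputStream jar = new ZipOutputStream(new FileOutputStream("stored.jar"))) {
            writeStored(jar, "example.txt", "hello".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```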
|
I must admit that I'm still unclear on what exactly is expected from me regarding the installer processor work. If there is any form of documentation available on how the installer and its processors function — particularly regarding how they are initialized and managed — I would greatly appreciate being pointed to it, as I am not yet familiar with the surrounding ecosystem. I did make an effort to review the relevant code, but I’m currently unsure where the processor definitions are populated from, which makes it difficult to fully grasp the workflow or identify the appropriate starting point. From what I can tell so far, addressing this issue seems to involve a significant refactoring of the installer infrastructure. Given that I have had no prior interaction with this codebase, this appears to be too large in scope for me to take on at this time. As such, I would prefer to focus solely on this topic if I am to contribute, rather than shift to unrelated or preparatory tasks. |
|
If that's the case then my answer is what it has been for a while. There are far bigger priorities at this point, and combined with this being a highly niche issue, that means this is pretty much my lowest priority. So it'll be addressed eventually. Until then you can detect if a non-standard zlib is installed on your end, and then nuke the hash checks yourself. This PR is not an acceptable solution as it is incomplete and requires the work to be done to retroactively deploy this anyways. |
|
Just to add a little context that I feel is missing here: zlib does not guarantee bit-identical compressed output, and that is not part of the spec. In fact zlib recently (past year or so) implemented a small change (…). Zlib-ng and others also do not guarantee bit-identical output; in fact several also generate slightly different output depending on the kind of CPU being used during compression, because different instruction-set-specific optimizations can yield differences in things like the hash tables used to find data matches during compression. So comparing the hash of the compressed data is going to break from time to time, and has been breaking for many other projects that also make this false assumption. What they all do guarantee, however, is bit-identical decompressed data, so hashing that will work reliably. I am not going to tell you what to do with your own software, I just want to make sure this information was presented correctly. I'll throw in a reference to what the author of zlib, Mark Adler, says as well. Just skip the question and go directly to the answer: https://stackoverflow.com/questions/52121632/different-same-but-same-result-with-zlib Edit: |
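To make the "hash the decompressed data" point concrete, a minimal sketch follows: it digests each entry's decompressed contents (plus the entry name) instead of the compressed jar bytes. The hash algorithm and entry ordering are illustrative choices; a real implementation would likely sort or canonicalize entries first.

```java
// Sketch: hash the decompressed contents of a jar's entries rather than the
// compressed bytes, so the result does not depend on the zlib variant used to
// build the jar.
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class DecompressedHash {
    public static byte[] hashContents(String jarPath) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (ZipFile zip = new ZipFile(jarPath)) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                if (entry.isDirectory()) continue;
                // Mix in the entry name so renames are detected too.
                md.update(entry.getName().getBytes(StandardCharsets.UTF_8));
                try (InputStream in = zip.getInputStream(entry)) { // decompressed stream
                    byte[] buf = new byte[8192];
                    int read;
                    while ((read = in.read(buf)) != -1) {
                        md.update(buf, 0, read);
                    }
                }
            }
        }
        return md.digest();
    }
}
```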
|
I understand that; I've done the research needed to find zlib's/zlib-ng's stance on it, which is completely understandable. It'll be addressed eventually.

I am not willing to completely throw away caching for the majority of users who are not running into this issue. Especially considering the explicit use case of this request is something that forces our installer to run every time a user plays the game, instead of just doing it once at install time, so the time savings for users is quite significant. On their end, they could opt into this solution by simply removing the hashes from the installer data in memory before running it. Should be a 1-line change on their end if they want to shotgun solve it. A better option in my opinion would be to only remove hashes when you know that you're running on a native zlib implementation.

On our end the best/simplest option would be to migrate our system to simply not use zlib compression. But doing that requires quite a bit of work, which I haven't had the motivation to deal with. (Seriously, dealing with gradle to solve this is like pulling teeth.) It's on my list of things to address when I am working through the back-porting process of the new toolchain. But as it sits, since it's not an issue for the majority of users, and those who do have issues typically have to go out of their way to have the issue arise, it's not a high priority. |
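One way the "only remove hashes when you know you're running on a native zlib" idea could be approximated is sketched below: deflate a fixed buffer through java.util.zip.Deflater and compare the result against output recorded once on a known-good system. The reference value is a placeholder, and whether Deflater even reaches the system zlib depends on how the JDK build links it, so treat this purely as an assumption-laden sketch.

```java
// Sketch: probe whether the local deflate output matches what a stock zlib
// produced for the same input. REFERENCE_SHA256 is a placeholder, not a real
// measured value; it would have to be recorded on a known-good system.
import java.security.MessageDigest;
import java.util.zip.Deflater;

public class ZlibProbe {
    private static final String REFERENCE_SHA256 = "<hash recorded on a stock zlib system>"; // placeholder

    public static boolean looksLikeStockZlib() throws Exception {
        // Fixed, repeating input so the comparison is deterministic on our side.
        byte[] input = new byte[64 * 1024];
        for (int i = 0; i < input.length; i++) {
            input[i] = (byte) (i * 31);
        }

        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();

        byte[] out = new byte[input.length + 1024]; // more than enough for this input
        int len = deflater.deflate(out);
        deflater.end();

        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(out, 0, len);
        return toHex(md.digest()).equals(REFERENCE_SHA256);
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```

If the probe fails, a wrapper could then fall back to stripping the processor hashes from the in-memory installer data, as suggested above.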
|
See my response on #80 |
Address the compatibility issue with zlib-ng-compat, which is now shipped in place of zlib in various Linux distros. This change introduces an option to ignore the hash mismatches reported on processors.
It doesn't need to be visible in the project, as I plan to integrate it with ForgeWrapper.