cargo test fails from crates.io tarball #38
I guess this is slightly ironic. Please endeavour to have some sort of useful tests in the shipped crate. It's important for Linux-vendor QA purposes (pretend Linux vendors are a kind of upstream who have to make sure everything works on a new Rust the same way you would, but who have to use a shipped crate as a baseline). |
Adding tests to this crate bloats the size of the published crate (which matters for many consumers); they were explicitly stripped because the tests rely on Unicode tables. |
That's fine; I just find it funny that the problem emerges from an incomplete exorcising of the tests :D Though it would be nice if there were some middle ground, with enough tests in place to provide something useful to end users, and the more exhaustive tests gated behind something so as not to cause that kind of pain. |
Both is_combining_mark and UnicodeNormalization are public items of this crate, and they are the only two things used by the test module. As far as I can tell, this module and the stuff from normalization_tests can be moved to an integration test in tests/ to be properly excluded from the published package, letting downstream vendors run the other tests successfully. |
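A sketch of the layout that suggestion implies (file names here are illustrative, not the crate's actual ones): the data-driven module moves out of src/ into tests/, so the package exclusion rules can drop it while cargo test in a git checkout still picks it up.

```
unicode-normalization/
├── Cargo.toml            # exclude list keeps the published package small
├── src/
│   └── lib.rs            # exports is_combining_mark, UnicodeNormalization
└── tests/
    └── normalization.rs  # integration test driving the public API only
```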
Totally okay with merging a PR for that, though really, without the NormalizationTests tests there's not much point to running cargo test.
|
Yeah, turns out to be a little more complicated than nox thought. I've managed so far to move a bunch of the tests to tests/, in a way that would make it easier to nuke the ones that were "heavy" without consequence. Just a few stragglers remain that annoyingly require access to private interfaces to function. Got a few potential ways of making this work though; one idea is a build.rs that probes for the existence of the normalization_tests.rs file, and passes a flag to enable the relevant tests. That ought to work transparently for both authors and consumers. |
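One concrete shape for that idea, purely as a sketch (the file name and cfg flag come from the comment above, but the exact wiring is an assumption, not the crate's code):

```rust
// Hypothetical build.rs: probe for the heavy data file and emit a cfg
// flag only when it exists, i.e. in a git checkout. The published
// tarball excludes the file, so the flag (and the tests gated on it)
// silently disappear for downstream consumers.
use std::path::Path;

/// True when the generated normalization test data is present.
fn has_normalization_data(dir: &Path) -> bool {
    dir.join("normalization_tests.rs").exists()
}

fn main() {
    if has_normalization_data(Path::new("src")) {
        // Anything guarded by #[cfg(normalization_tests)] now compiles.
        println!("cargo:rustc-cfg=normalization_tests");
    }
}
```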
Using a build.rs that way seems quite brittle :/
|
Well... you don't have to use build.rs; it just means you'll need |
Given that these tests really, really should be run by default, that solution is not acceptable either. |
Ultimately, "we want to run cargo test from packaging" is not a use case we particularly care about. I'm fine supporting it, but it should not make things worse for the default use case. Those tests are the important ones; the other tests don't really matter much. If you're excluding those I don't even see the point of running cargo test. It's a pretty common exercise to exclude test files from the published package, fwiw. Some of the regex dependencies do this as well, since they include large text corpuses. |
That translates to "do not care about Linux vendors at all", which translates to limiting your own adoption (and seriously limits the quality of Rust at large, as there are plenty of crates that do fail tests for reasons that are real problems, and we can expect to see this get worse when features get deprecated). Some of this misery is worse than it ought to be because there's no good way to have tests outside Abusing If tests could be entirely in |
Something that would be gross, but survivable, is to repeat the PR, but with the flag inverted. That way at least, people who want to run tests downstream only have to dance with setting RUSTFLAGS=, not with patching the entire crate. (But by default it would still be horribly broken.) |
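A minimal sketch of that inverted gate (the cfg name skip_normalization_tests is hypothetical, not something the crate defines): the heavy tests compile by default, and a downstream packager passes RUSTFLAGS="--cfg skip_normalization_tests" to compile them out when the data file is absent from the tarball.

```rust
// Hypothetical inverted gate: compiled in by default, compiled out only
// when the builder explicitly passes --cfg skip_normalization_tests.
#[cfg(all(test, not(skip_normalization_tests)))]
mod normalization_tests {
    #[test]
    fn conformance_fixtures() {
        // ...would iterate the generated NORMALIZATION_TESTS table here...
    }
}

/// Ordinary code can observe the same flag via cfg!(); without the
/// RUSTFLAGS override this returns true, i.e. heavy tests are on.
pub fn heavy_tests_enabled() -> bool {
    !cfg!(skip_normalization_tests)
}
```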
You haven't sufficiently made the case for why you need to run tests here. You're effectively asking for Similarly, why not run If you can figure out a way to stick the normalization tests in |
This comes off as super arrogant, given that your original solution was to make this crate similarly horribly broken for developers. The normalization tests are the important tests; if they're not run by default there's not much point to having them. We're not going to flag off the main tests in this crate to support your use case. That's ridiculous. |
Here are my constraints:
You can still use |
Not being able to run "the most important tests" is still, agreeably, not ideal. But in general, being able to run some tests is better than being able to run no tests. If we're desperate and running tests can't remotely be feasible, a last-ditch alternative is "just have something run cargo build at some point before we say it's fine; at least we know it compiles". But having some tests is better than that.
Mostly because adding Python to the build chain makes matters more complicated. Doable, maybe, but as-is the exclusion rules remove more than just that file (see unicode-normalization/Cargo.toml, line 21 in c469e87). It removes the |
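For illustration, an exclusion list of roughly the shape being discussed (the globs below are examples, not the crate's exact line 21): everything matching these patterns is dropped from the tarball that cargo publish uploads, which is how broad patterns can strip more than just the one heavy data file.

```toml
[package]
name = "unicode-normalization"
# Example exclusion globs; a pattern wider than the single data file
# takes other test sources out of the published package with it.
exclude = ["target/*", "Cargo.lock", "scripts/*", "*.txt"]
```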
I have recurring problems where I seem about 1000% more of an ass than I am. It's @nox 's fault, I swear. Something about too many fires to put out. But to reiterate what that proposal does:
I'm under no illusions that this is a perfect solution, but the goal at this point is merely to improve the status quo for vendors from "no tests" or "compile-only testing" to "something tangentially plausible to break". Improving that situation so that vendors and end users get useful tests could happen down the road, but I'm at a loss how to do this within the tools currently at our disposal, especially seeing the motivation for excluding this file is "keep the crate size down", and there's no real way I can see that happening while also providing comprehensive tests for vendors and other end users. Maybe this data could be stored in some packed form that can be loaded more efficiently, but doing that is currently beyond my skills/patience.
I don't see a dev-dependency presently for |
I could be completely wrong, but from my few interactions with vendors, it seems like it is valuable to them to have On a funny side note, I too thought this crate was depending on quickcheck multiple times, hah. |
More-or-less. The context that may help is:
But diagnosing that is much more straightforward if the "previous" state didn't fail. And we have to run this for thousands of packages at a time. And that's why "some tests, no matter how scant, are better than no tests, and also better than broken-by-design tests" (not intended to be malicious; just that, as-is, design constraints and criteria are why tests are currently broken). It's all about making the squeakiest wheels stand out. Just presently, "test suite is not working" is highly squeaky. 😉 |
Hopefully this provides the best of all worlds. Importantly, this re-introduces src/test.rs, as that's something that can't be trivially generated. Both approaches are documented, so the path to the easiest working solution is clear for vendors, and they can pick either the more expensive solution or the simpler (but less comprehensive) one. If this PR meets general approval, I'd like to add a .travis.yml step that automatically tests the --cfg minimal_tests path, so that at least will be a "stable" target, as testing that the deployed crate is free from errors is presently too much work to expect the average maintainer or contributor to remember to do. Closes: unicode-rs#38
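The CI step floated above might look something like this (a hypothetical sketch; the crate's actual .travis.yml may differ):

```yaml
# Hypothetical .travis.yml addition: exercise both the full suite and
# the stripped-down path a published tarball would take.
script:
  - cargo test
  - RUSTFLAGS="--cfg minimal_tests" cargo test
```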
As mentioned in the PR, I feel like the solution I proposed with |
I'm not sure I fully understand what you're suggesting here. Are you suggesting splitting the file src/normalization_tests.rs to be an independent crate, and then simply Wherein you're suggesting That could work, I guess. |
No, I'm proposing |
Oh, I see, a test-proxy of sorts. That should work nicely. |
Yeah, something like |
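One plausible shape for such a test proxy, offered purely as an assumption since the concrete proposal is elided above: a sibling crate that holds the heavy tests and pulls the main crate in as a path dev-dependency, so the tests exercise the public API but never ship with the published package.

```toml
# Hypothetical test-proxy crate; the name and layout are assumptions.
[package]
name = "unicode-normalization-test-proxy"
publish = false   # never uploaded to crates.io

[dev-dependencies]
unicode-normalization = { path = ".." }
```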
This is my progress so far: master...kentfredric:attempt-3 Any thoughts? This currently means:
However, the one remaining downside is that it's not trivial for anyone to perform the comprehensive test suite on the published crate, as the data file, and the consumers of that data file, are all by necessity missing. |
@kentfredric I don't get it: why is that set of changesets moving all the tests out? Just do it for the one test causing problems; it's a much simpler change. |
Like, let's do this with minimal churn |
Redone: master...kentfredric:attempt-4 This time I only migrated the ones that required NORMALIZATION_TESTS. I also migrated all those tests into a single unit in order to keep the test output looking lean. This is what the test output looks like.
Does that look PR-ready yet or is there anything else significant I need to handle still? |
@kentfredric that looks great! Thanks! |
I would appreciate it if you would just put it under some cfg or something like that, so that it does not fail for us.