Use hashes in Key instead of allocated strings #2616
Conversation
Visit the preview URL for this PR (updated for commit 923efa0): https://yew-rs-api--pr2616-key-as-bytes-mkqpfsb3.web.app (expires Thu, 28 Apr 2022 08:11:17 GMT) 🔥 via Firebase Hosting GitHub Action 🌎
Size Comparison
Instead of converting the internal representation of keys from `Rc<str>` to `Rc<[u8]>`, would it make more sense to store a hash of the key instead?
That would make sense to me! If you use something like twox-hash you could store keys as 64 or 128-bit integers. It still might be desirable that things are salted appropriately (i.e. hash some constant to distinguish between byte arrays and integers) and that you encode integers somehow so that the same number in different widths or signedness results in the same hash.
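For illustration, a minimal sketch of that salting idea using std's `DefaultHasher`; the tag constants and helper names here are hypothetical, not part of any proposed API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical type tags: hashing a tag first "salts" the hash so that a
// byte array and an integer with identical raw bytes cannot collide trivially.
const TAG_BYTES: u8 = 0;
const TAG_INT: u8 = 1;

fn hash_bytes(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    TAG_BYTES.hash(&mut hasher);
    bytes.hash(&mut hasher);
    hasher.finish()
}

fn hash_int(value: i128) -> u64 {
    let mut hasher = DefaultHasher::new();
    TAG_INT.hash(&mut hasher);
    // Widening every integer to i128 before hashing makes 0u8 and 0u64 agree.
    value.hash(&mut hasher);
    hasher.finish()
}
```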
If it's done with Hash, it's worth benchmarking the performance (not sure how good our CI is for that; @voidpumpkin can comment on that). While I doubt there will be a performance impact, we may see a noticeable increase in binary size by adding another dependency.
We can use `std::collections::hash_map::DefaultHasher`, which doesn't require adding an external dependency.
@hamza1311 Tried a branch which uses hashing (but does not use zig-zag encoding): https://github.com/udoprog/yew/commit/key-hashing Size differences when cherry-picked on top of 0.19.3 and used in one of my applications (twox-hash added as a dependency):
@futursolo It's an option. You'd have to be comfortable with 64-bit wide keys, and it does do a bit of extra work by virtue of being cryptographic and randomly seeded. But I don't think that matters much.
I assume that is in bytes? An increase of ~40 KB is a significant increase in binary size.
I replaced the current proposal with one that does hashing. Note that benches do not take allocation into account very well; the situation in a benchmark is an ideal allocation circumstance for simply copying the specified segment of memory. But we can note that copying is roughly as fast as calculating the hash over it, which is all we should really care about, since the newly proposed key isn't allocating at all. SipHasher is apparently really good and consistent, and it has a constructor that avoids random seeding. The std implementation is excellently implemented and tuned.

Current Key:

DefaultHasher (which is SipHasher using 64-bit keys and without random seeding):

twox-hash (using 128-bit keys):
Up to you to decide. I get something like 10 KB of variability between builds, so it isn't that big of a deal for me. But I put twox-hash behind a non-default feature flag in this proposal as a result of your comment. If you don't want it at all, please tell me and I'll remove it.
It sounds like we might be using different measurement processes. There is a CI test for size comparison, and any change above ~30 bytes is out of the ordinary if functionality is unchanged. Be sure to run the comparison with build flags optimized for size (see the size-cmp workflow), which runs wasm-opt and uses nightly Rust for tree-shaking and the best size performance.
What about hash collisions? The std implementation of the default hasher is randomly seeded. Although hash collisions should be relatively rare, especially with the default hasher, they can still occur. And the biggest problem that I see with this is that hashes are not deterministic, which means that an error caused by a hash collision would not be reproducible.
Hash collisions in a HashMap are common because the hash is used to pick between a much lower number of buckets, which are sized according to the capacity of the hash map. The way we'd use keys, and given that the hash has a decent distribution (which I believe SipHash provides), the chance of a collision comes down to the birthday problem.
If I'm reading the table here correctly (https://en.wikipedia.org/wiki/Birthday_attack), a 64-bit hash reaches a collision probability of about 1e-12 at ~6.1k elements, while a 128-bit hash needs on the order of 1e13 elements for the same probability. I prefer the latter (which is why I at least want it as a feature to test it). But I don't see much of an issue with the former, which is why I also think a 64-bit hash is fine. Random UUIDs, for example, are 128-bit, which is considered sufficient to be globally unique.
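These figures follow from the usual birthday approximation, p ≈ n² / 2^(b+1) for n keys and a b-bit hash; a quick sketch to check them:

```rust
// Birthday-bound approximation: p ≈ n^2 / 2^(b + 1) for n keys, b-bit hash.
fn collision_probability(keys: f64, bits: i32) -> f64 {
    keys * keys / 2f64.powi(bits + 1)
}

fn main() {
    // ~6.1k keys against a 64-bit hash: roughly 1e-12.
    println!("{:e}", collision_probability(6.1e3, 64));
    // The same number of keys against a 128-bit hash: roughly 5e-32.
    println!("{:e}", collision_probability(6.1e3, 128));
}
```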
The hashes proposed here are deterministic. Both hashers are constructed without random seeding.
In release builds with […]

Side note: @udoprog, can you run our benchmark "suite"? You can refer to the benchmark workflow to see how it's run.
It's unclear to me how to run the full suite of benchmarks using that action, since it looks rather complicated. Is there a simplified way we can run it, or can we simply invoke the action itself? Furthermore, is that or anything else blocking this PR? Is this something you want to adopt?
I see two blockers, one of which you can't do anything about. I'd want to run the performance benchmarks locally to see how exactly this affects them, since the GitHub runner is a bit unstable and fluctuates too much for accurate results. The other thing is clippy complaining about the use of `unsafe`.

Note: I'm also convinced that hash collisions basically never occur. I have yet to see a website with more than a few thousand items in a single element. Usually, browsers begin to struggle with other things way before we hit a point where I'd start worrying about hash collisions and the birthday problem, even with 64-bit hashes, but going for 128 bits for good measure seems reasonable. Perhaps we should add a lint for when the number of elements in a list is greater than some X, e.g. 10000, if we opt to go with 64 bits.
Switched to criterion, since it's way harder to selectively disable the default benches otherwise. @WorldSEnder If I read you correctly, you personally wanted to run the webdriver benchmarks?
Yep. Running the webdriver benchmarks now.
Update: looks like wins across the board. I think it's definitely something for the release profile, and it has some nice performance wins on update. Measurements. There seems to be a persistent small overhead when creating "trivial" keys. Which brings me to three more questions I want to ask:
I suppose we can add a loop which tries to construct a `NonZeroU64`/`NonZeroU128`; see the sketch below. Alternatively, provide a default constructor which returns an all-zero key that can be used instead. Does seem like a good follow-up PR to me.
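A minimal sketch of the loop idea, assuming std's `DefaultHasher`; the helper name is hypothetical. Since a zero hash is a one-in-2^64 event, the loop all but never iterates more than once:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::num::NonZeroU64;

// Hypothetical helper: retry with an incrementing salt until the hash is
// non-zero, so it fits in NonZeroU64 and Option<Key> gets the niche
// optimization for free.
fn hash_nonzero<T: Hash>(value: &T) -> NonZeroU64 {
    let mut salt = 0u64;
    loop {
        let mut hasher = DefaultHasher::new();
        salt.hash(&mut hasher);
        value.hash(&mut hasher);
        if let Some(hash) = NonZeroU64::new(hasher.finish()) {
            return hash;
        }
        salt += 1;
    }
}
```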
FxHash, to my mind, is very similar to a single round of FNV, which does not provide very randomly distributed outputs, making it more prone to collisions. How much more? Not sure. Increasing the number of rounds helps, but then it's supposedly much slower than something like xxHash (or SipHash).
I'm not following. Could you elaborate? The proposed algorithms all have good avalanche effects, which should mean that crafting collisions shouldn't be easy.
Sorry, I bunched a few things together. The crafted part was conditional on using a faster but worse hasher such as Fx. The overall recommendation still stands though, especially when using u64 hashes. A chance of 1e-12 for ~6.1k elements "will" happen if you start testing with actually random data (it's like 1/1000th of winning the Powerball or thereabouts, right?), and it will be very hard to debug. If you use controlled keys, such as a running index in a collection, debuggability should be a lot higher, and tying that in with testing against the full, unhashed key in debug mode should give at least reproducible issues if they do happen.

EDIT: at least a flag to turn it off? I do like the speedup, I just like to be able to debug errors, too :)
Looks good for the most part, just some minor issues. In addition, we should also look into why the code size of `function_router` has increased by 8 KB (the example itself does not use `key`).
```rust
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
    self.key.fmt(f)
```

```rust
// Default implementation for byte sequences.
impl From<&[u8]> for Key {
```
Why not `impl<T> From<T> for Key where T: Hash` and only specialise on types that need special handling?
Providing a blanket `impl<T> From<T> for Key where T: Hash` would prevent any specialization, since `Hash` might be implemented upstream for any `T` in the future (trait coherence rules), and we don't have access to stable specialization.
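A small sketch of the coherence problem with a stand-in `Key` type; this intentionally does not compile:

```rust
use std::hash::Hash;

struct Key(u64);

// error[E0119]: rustc rejects this blanket impl because it overlaps with
// std's reflexive `impl<T> From<T> for T`, and it would likewise overlap
// with any specialized impl such as the one below, since `&str` already
// implements `Hash` upstream.
impl<T: Hash> From<T> for Key {
    fn from(_: T) -> Key {
        unimplemented!()
    }
}

// The specialization we would actually want becomes impossible to write.
impl From<&str> for Key {
    fn from(_: &str) -> Key {
        unimplemented!()
    }
}
```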
I guess instead of using `From<T>`, having `pub fn new<H: Hash>(h: H) -> Key` as an associated function would work. Usage of `From` (`Into`) for conversion into `Key`s is designated by the `html!` macro and can be changed to something else.
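A sketch of that shape, assuming a 64-bit hash and std's `DefaultHasher`; an inherent method has no coherence constraints, so specialized `From` impls could still exist alongside it:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

pub struct Key(u64);

impl Key {
    // An associated constructor sidesteps trait coherence entirely: it is
    // not a trait impl, so it cannot overlap with anything upstream.
    pub fn new<H: Hash>(h: H) -> Key {
        let mut hasher = DefaultHasher::new();
        h.hash(&mut hasher);
        Key(hasher.finish())
    }
}
```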
I don't see how it semantically makes sense to construct a `Key` from an abstract hashable type. I see the hashing as a performance improvement that acts as if it would compare the actual values the `Key` was constructed from. I wouldn't expand this. Users can just hash those types themselves before constructing the keys and wrestle with their own collision problems.
```rust
key_impl_from_to_string!(i64);
key_impl_from_to_string!(i128);
key_impl_from_to_string!(isize);
macro_rules! signed {
```
Does this mean that `Key::from(0_u8) == Key::from(0_u16)` but `Key::from(0_i8) != Key::from(0_u8)`?
Yeah, this can be remedied with zig-zag encoding, as mentioned under Solving the shortcomings here.
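For reference, a minimal sketch of zig-zag encoding after widening to `i64`: it maps 0 → 0, -1 → 1, 1 → 2, -2 → 3, and so on, so the same number agrees across widths and signedness:

```rust
// Zig-zag encoding: interleaves negative and positive values so small
// magnitudes stay small and sign information is preserved portably.
fn zigzag64(n: i64) -> u64 {
    ((n << 1) ^ (n >> 63)) as u64
}

fn main() {
    // After widening to i64, 0_i8 and 0_u16 encode identically...
    assert_eq!(zigzag64(0_i8 as i64), zigzag64(0_u16 as i64));
    // ...while -1 and 1 remain distinct.
    assert_eq!(zigzag64(-1), 1);
    assert_eq!(zigzag64(1), 2);
}
```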
Personally, I feel that passing a key of `0_u16` into an element / component whose last key was `0_u8`, and having it treated as the same element / component, may not be intentional behaviour in most cases. I would prefer to treat them as different values if a correct implementation could potentially negatively impact code size. We can emit a warning in debug mode if the type of key changes between renders.
Uh, as a heads up, I didn't intend to close this. I removed my fork, not remembering this was still open. But it didn't seem like anyone moved on this anyway.
Description
Changes `Key` to wrap an `Rc<[u8]>` instead of an `Rc<str>`, which allows `Key` to, among other things, implement `From<&[u8]>`. This in turn allows it to more efficiently wrap types which directly have a binary representation.

This does add some unsafe which we wouldn't need if `From<Rc<str>>` existed in std, see rust-lang/rust#96078.

This also introduces a handful of breaking changes which need to be considered, especially since the binary representations of seemingly unrelated things might overlap or not:
- `0u32` would never match `0u64`. Previously this was glossed over, since they both compared equal to the string `"0"`.
- A `Vec<u8>` which has the same byte sequence as a `String` would now compare equal to it, like `"foo"` and `b"foo"`.
In my mind the shortcomings are mitigated, since keys tend to be uniform in the context in which they are used. When you're iterating over something and generating keys, they tend to be generated in the same manner:
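For example, a hypothetical keyed list where every key derives from the same field, so keys of mixed types never meet (a sketch, not code from this PR):

```rust
use yew::prelude::*;

// All keys come from the same u64 id and are generated in the same manner,
// so cross-type comparisons (e.g. integer vs. string keys) never happen.
fn view_items(items: &[(u64, String)]) -> Html {
    html! {
        <ul>
            { for items.iter().map(|(id, name)| html! {
                <li key={*id}>{ name.clone() }</li>
            }) }
        </ul>
    }
}
```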
The upside of this is that using keys is made more efficient, sometimes avoiding the extra allocation necessary to stringify things. My motivating use case was that I wanted to use a `Uuid` as a `key`. With this change I could directly use `Uuid::as_bytes`.
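Under this PR's `From<&[u8]>` impl, that use case would look roughly like the following; this assumes the `uuid` crate and is a sketch rather than code from the PR:

```rust
use uuid::Uuid;
use yew::virtual_dom::Key;

// Uuid::as_bytes returns &[u8; 16]; with From<&[u8]> on Key, the raw bytes
// become a key directly, with no intermediate String allocation.
fn key_for(id: &Uuid) -> Key {
    Key::from(&id.as_bytes()[..])
}
```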
.Solving the shortcomings
If we want to, I do believe we could solve the above shortcomings with a bit of extra work:

- Integers could be encoded with zig-zag encoding, so that the same number in different widths or signedness produces the same key.
- We could either reuse an existing encoding for `Key` or define our own. E.g. I personally think it would be reasonable to compare Rust strings with their binary equivalents, since UTF-8 is intentionally ASCII-compatible.

Checklist
- `cargo make pr-flow`