Use little-endian encoding for Blake2 hashing on all architectures #38960
Conversation
Thanks for the fix!
I think that one comment should say little-endian instead of big-endian. And we should probably add a comment that all data going in must be converted to little-endian (which the hasher should already do).
```diff
 if cfg!(target_endian = "big") {
     for word in &mut m[..] {
-        *word = word.to_be();
+        *word = u64::from_le(*word);
```
Good catch!
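For anyone following along, here is a minimal, standalone sketch (not the compiler's code; the function and buffer names are illustrative) of why `u64::from_le` is the right call here: it is a no-op on little-endian targets and a byte swap on big-endian ones, so the message word comes out the same everywhere, whereas `to_be` is a no-op on big-endian targets and would leave the word in big-endian order there.

```rust
// Sketch only: loading one 8-byte group of a Blake2 input block as a message
// word, the way the patched loop effectively does after the transmute.
fn load_word(bytes: [u8; 8]) -> u64 {
    // Native-endian reinterpretation of the raw bytes (what the transmuted
    // `[u64; 16]` view of the byte buffer contains).
    let native = u64::from_ne_bytes(bytes);
    // `from_le` treats that value as little-endian storage and converts it to
    // the machine representation: a no-op on little-endian targets, a byte
    // swap on big-endian ones.
    u64::from_le(native)
}

fn main() {
    let bytes = [0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08];
    // The same value on every architecture: least-significant byte first,
    // as RFC 7693 specifies.
    assert_eq!(load_word(bytes), 0x0807_0605_0403_0201);
}
```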
```diff
 let m: &mut [u64; 16] = unsafe {
     let b: &mut [u8; 128] = &mut ctx.b;
     ::std::mem::transmute(b)
 };

-// It's OK to modify the buffer in place since this is the last time
-// this data will be accessed before it's overwritten
```
Why remove this comment?
```diff
-// Re-interpret the input buffer in the state as u64s
+// Re-interpret the input buffer in the state as
+// an array of big-endian u64s, converting them
+// to machine endianness.
```
Isn't it the other way round: we always make sure that this buffer is little-endian, and now we are making sure to convert it to machine endianness?
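To make the intent of that comment concrete, here is a hedged sketch (not the code in this PR) of the same reinterpretation written without `transmute`: the 128-byte block is read as 16 little-endian words, which is the RFC 7693 convention, and the result is already in machine endianness.

```rust
// Sketch only: an endianness-agnostic way to express "read the 128-byte block
// as 16 little-endian u64 words". The actual patch transmutes the buffer and
// then fixes up the words with `u64::from_le`; this version avoids unsafe code.
use std::convert::TryInto; // only needed on editions before 2021

fn block_to_words(block: &[u8; 128]) -> [u64; 16] {
    let mut m = [0u64; 16];
    for (word, chunk) in m.iter_mut().zip(block.chunks_exact(8)) {
        // `from_le_bytes` interprets each chunk with the least-significant
        // byte first, regardless of the host's endianness.
        *word = u64::from_le_bytes(chunk.try_into().unwrap());
    }
    m
}

fn main() {
    let mut block = [0u8; 128];
    block[0] = 0x2a; // 42 in the low byte of the first word
    assert_eq!(block_to_words(&block)[0], 42);
}
```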
Fixed nits.
Blake2b is a standard bytestream hasher. It should not do anything weird with its Hasher impl. @bors r=michaelwoerister
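To illustrate the "standard bytestream hasher" point, here is a hypothetical sketch (the type and field names are mine, not rustc's) of a `Hasher` impl that does nothing endian-dependent, because every integer write is lowered to a fixed little-endian byte sequence before it reaches the underlying bytestream.

```rust
use std::hash::Hasher;

// Hypothetical hasher, not rustc's code: integer writes are defined purely in
// terms of a byte stream, using a fixed (little-endian) encoding so the digest
// does not depend on the host architecture.
struct ByteStreamHasher {
    bytes_seen: u64, // stand-in for the real hash state
}

impl Hasher for ByteStreamHasher {
    fn write(&mut self, bytes: &[u8]) {
        // A real implementation would feed `bytes` into the Blake2 block
        // buffer here; we just count them for the sketch.
        self.bytes_seen += bytes.len() as u64;
    }

    fn write_u64(&mut self, x: u64) {
        // Lower the integer to a fixed byte encoding instead of hashing its
        // in-memory representation, so big- and little-endian hosts agree.
        self.write(&x.to_le_bytes());
    }

    fn finish(&self) -> u64 {
        self.bytes_seen
    }
}

fn main() {
    let mut h = ByteStreamHasher { bytes_seen: 0 };
    h.write_u64(0xdead_beef);
    assert_eq!(h.finish(), 8); // eight octets entered the stream
}
```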
📌 Commit bccd756 has been approved by michaelwoerister
Like many hash functions, the blake2 hash is mathematically defined on a sequence of 64-bit words. As Rust's hash interface operates on sequences of octets, some encoding must be used to bridge that difference.

The Blake2 RFC (RFC 7693) specifies that:

```
Byte (octet) streams are interpreted as words in little-endian order,
with the least-significant byte first.
```

So use that encoding consistently.

Fixes rust-lang#38891.
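As a concrete illustration of that bridge (a sketch under my own naming, showing only the word interpretation, not Blake2's full padding and finalization rules): the octet stream is split into 8-byte groups, each group is read least-significant byte first, and a short trailing group is padded with zero bytes, matching RFC 7693's convention for the final block.

```rust
// Sketch only: interpret an arbitrary octet stream as little-endian 64-bit
// words, zero-padding the trailing group. This shows the encoding convention
// the commit message refers to, not an implementation of Blake2 itself.
fn words_le(stream: &[u8]) -> Vec<u64> {
    stream
        .chunks(8)
        .map(|chunk| {
            let mut buf = [0u8; 8]; // zero padding for a short final chunk
            buf[..chunk.len()].copy_from_slice(chunk);
            u64::from_le_bytes(buf)
        })
        .collect()
}

fn main() {
    // "abc" = 0x61 0x62 0x63, least-significant byte first.
    assert_eq!(words_le(b"abc"), vec![0x0063_6261]);
}
```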
@bors r=michaelwoerister
📌 Commit a89475d has been approved by michaelwoerister
@bors rollup
@bors rollup-
@bors p=1
Accepting for beta. Small patch, regression. cc @rust-lang/compiler
⌛ Testing commit a89475d with merge e4fee52...
Use little-endian encoding for Blake2 hashing on all architectures

Like many hash functions, the blake2 hash is mathematically defined on a sequence of 64-bit words. As Rust's hash interface operates on sequences of octets, some encoding must be used to bridge that difference.

The Blake2 RFC (RFC 7693) specifies that:

```
Byte (octet) streams are interpreted as words in little-endian order,
with the least-significant byte first.
```

So use that encoding consistently. Fixes #38891.

Beta-nominating since this is a regression since 1.15.

r? @michaelwoerister
☀️ Test successful - status-appveyor, status-travis