
Add assertions for in-order block [dis]connection in ChannelManager #831


Conversation

TheBlueMatt
Collaborator


This was initially part of #823 but was dropped to avoid needlessly delaying it.

Sadly the connected-in-order tests have to be skipped in our normal
test suite as many tests violate it. Luckily we can still enforce
it in the tests which run in other crates.

Co-authored-by: Matt Corallo <[email protected]>
Co-authored-by: Jeffrey Czyz <[email protected]>
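
For context, a minimal sketch of the kind of in-order [dis]connection assertion the PR describes, assuming a tracked `last_block_hash` and `height`; the field and method names here are illustrative, not the exact ChannelManager code:

```rust
use bitcoin::blockdata::block::BlockHeader;
use bitcoin::hash_types::BlockHash;

// Minimal in-order tracker; the real assertions would live inside ChannelManager.
struct InOrderTracker {
    last_block_hash: BlockHash,
    height: u32,
}

impl InOrderTracker {
    fn block_connected(&mut self, header: &BlockHeader, height: u32) {
        // Each new block must build directly on the last one we saw,
        // at exactly the next height.
        assert_eq!(header.prev_blockhash, self.last_block_hash);
        assert_eq!(height, self.height + 1);
        self.last_block_hash = header.block_hash();
        self.height = height;
    }

    fn block_disconnected(&mut self, header: &BlockHeader, height: u32) {
        // Only the current tip may be disconnected.
        assert_eq!(header.block_hash(), self.last_block_hash);
        assert_eq!(height, self.height);
        self.last_block_hash = header.prev_blockhash;
        self.height = height - 1;
    }
}
```

As the description notes, many in-tree tests deliberately connect blocks out of order, which is why assertions like these can only run in the tests in other crates.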
@codecov

codecov bot commented Mar 7, 2021

Codecov Report

Merging #831 (07d1e83) into main (b5d88a5) will increase coverage by 0.01%.
The diff coverage is 94.73%.


@@            Coverage Diff             @@
##             main     #831      +/-   ##
==========================================
+ Coverage   90.98%   90.99%   +0.01%     
==========================================
  Files          48       48              
  Lines       26538    26546       +8     
==========================================
+ Hits        24145    24155      +10     
+ Misses       2393     2391       -2     
Impacted Files                              Coverage Δ
lightning/src/ln/channelmanager.rs          85.45% <75.00%> (-0.02%) ⬇️
lightning-persister/src/lib.rs              92.52% <100.00%> (ø)
lightning/src/ln/functional_test_utils.rs   95.04% <100.00%> (+0.02%) ⬆️
lightning/src/ln/reorg_tests.rs             99.58% <100.00%> (ø)
lightning/src/ln/functional_tests.rs        96.94% <0.00%> (+0.05%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b5d88a5...07d1e83.

```diff
         }
         self.max_height = cmp::max(self.height, self.max_height);
     }

     fn disconnect_block(&mut self) {
         if self.height > 0 && (self.max_height < 6 || self.height >= self.max_height - 6) {
-            let header = BlockHeader { version: 0x20000000, prev_blockhash: self.header_hashes[self.height], merkle_root: Default::default(), time: 42, bits: 42, nonce: 42 };
+            let header = BlockHeader { version: 0x20000000, prev_blockhash: self.header_hashes[self.height - 1].0, merkle_root: Default::default(), time: self.header_hashes[self.height].1, bits: 42, nonce: 42 };
```
Contributor

Thanks for fixing! Was the time here the problem? Wondering why it was fine as a constant prior to the change.

Collaborator Author

It has to match what was used in connect_block (as otherwise the hashes won't match), so we have to store it somehow. We had both an off-by-one in the prev_blockhash lookup and no way to make the time match.
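
A minimal sketch of the pattern being discussed, assuming a hypothetical test harness with a `header_hashes: Vec<(BlockHash, u32)>` that records the hash and the `time` used for each connected block; names and the time-derivation are illustrative, not the exact code under review:

```rust
use bitcoin::blockdata::block::BlockHeader;
use bitcoin::hash_types::BlockHash;

struct TestChain {
    // header_hashes[h] = (hash of the block at height h, `time` used in its header)
    header_hashes: Vec<(BlockHash, u32)>,
    height: usize,
}

impl TestChain {
    fn new(genesis_hash: BlockHash) -> Self {
        // Seed with the genesis block so index 0 is valid.
        TestChain { header_hashes: vec![(genesis_hash, 42)], height: 0 }
    }

    fn connect_block(&mut self) {
        let time = 42 + self.height as u32; // arbitrary, but must be remembered
        let header = BlockHeader {
            version: 0x20000000,
            prev_blockhash: self.header_hashes[self.height].0, // build on the tip
            merkle_root: Default::default(),
            time, bits: 42, nonce: 42,
        };
        self.height += 1;
        // Record both the resulting hash and the time used, so the header
        // can later be reconstructed bit-for-bit.
        self.header_hashes.push((header.block_hash(), time));
    }

    fn disconnect_block(&mut self) {
        // Rebuild exactly the header connect_block produced: prev_blockhash
        // comes from height - 1 (the off-by-one fixed in the diff above),
        // and time is the stored value rather than a constant.
        let header = BlockHeader {
            version: 0x20000000,
            prev_blockhash: self.header_hashes[self.height - 1].0,
            merkle_root: Default::default(),
            time: self.header_hashes[self.height].1,
            bits: 42, nonce: 42,
        };
        self.header_hashes.pop();
        self.height -= 1;
        let _ = header; // would be handed to the disconnect handler
    }
}
```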

@valentinewallace
Contributor

A thought -- instead of adding last_block_hash, we could go straight to pulling in part of https://github.com/rust-bitcoin/rust-lightning/pull/838/commits/8642de55e1895c61149f7214d7a549f38ace2589 and storing all the connected block headers, if we'll need them eventually anyway.
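
For illustration, a sketch of that alternative, assuming we keep every connected header rather than a single last_block_hash; this is a hypothetical type, not the actual code in #838:

```rust
use bitcoin::blockdata::block::BlockHeader;

struct HeaderChain {
    headers: Vec<BlockHeader>, // every block connected so far, in order
}

impl HeaderChain {
    fn connect(&mut self, header: BlockHeader) {
        if let Some(tip) = self.headers.last() {
            // The in-order assertion falls out of the stored headers.
            assert_eq!(header.prev_blockhash, tip.block_hash());
        }
        self.headers.push(header);
    }

    fn disconnect(&mut self) -> Option<BlockHeader> {
        // The full header of the disconnected tip is available for free,
        // so disconnect_block no longer has to reconstruct it.
        self.headers.pop()
    }
}
```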

@TheBlueMatt
Collaborator Author

Gonna close this; as @valentinewallace points out, it likely makes sense to pull more bits of #838 into its own PR than just this commit.
