Make Channel's block connection API more electrum-friendly #838

Merged
merged 8 commits, Apr 5, 2021

Conversation

TheBlueMatt
Collaborator

This is very much a WIP, but we're gonna YOLO this one into the Java bindings to see if the API works. It breaks a ton of tests which rely on bogus blockchain structure, but we need to fix those either way. At least some tests pass, right?

@@ -3400,7 +3400,7 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
let short_to_id = &mut channel_state.short_to_id;
let pending_msg_events = &mut channel_state.pending_msg_events;
channel_state.by_id.retain(|_, v| {
if v.block_disconnected(header) {
if v.block_disconnected(header, new_height) {
Contributor

Shouldn't this be the old height of the disconnected block?

Collaborator Author

Hmm, I don't think so? The new argument is passed into update_best_block in channel and handled the same as a new block (with the header part ignored, except to get the time).

Contributor

I guess my confusion here is that in ChainMonitor and ChannelMonitor, the height is that of the disconnected header, which is inconsistent with this.

Collaborator Author

Note that new_height is calculated by fetching the height from the local latest_block_height variable, not from the user. In later commits, that variable is checked against what the user passes in.
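(For illustration, a minimal sketch of that idea, using hypothetical simplified types rather than the real ChannelManager internals: the height handed to the channel on a disconnect comes from a locally tracked counter, so it is the height of the new tip after the disconnect.)

use std::sync::atomic::{AtomicUsize, Ordering};

struct HeightTracker {
    // Stands in for the manager's locally tracked latest_block_height.
    latest_block_height: AtomicUsize,
}

impl HeightTracker {
    // Called when the current tip is disconnected; returns the new best height,
    // i.e. the value that would be handed down as `new_height`.
    fn on_block_disconnected(&self) -> u32 {
        let prev = self.latest_block_height.fetch_sub(1, Ordering::AcqRel);
        (prev - 1) as u32
    }
}

fn main() {
    let tracker = HeightTracker { latest_block_height: AtomicUsize::new(100_000) };
    assert_eq!(tracker.on_block_disconnected(), 99_999);
}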

if Some(header.block_hash()) == self.funding_tx_confirmed_in {
self.funding_tx_confirmations = self.minimum_depth as u64 - 1;
pub fn block_disconnected(&mut self, header: &BlockHeader, new_height: u32) -> bool {
if self.update_best_block(new_height, header.time).is_err() {
Contributor

Won't this be the header of the disconnected block rather than the best block? Presumably the best block will come from the next call to block_connected.

Collaborator Author

Yea, that API needs to be clarified - I dropped the Channel::block_disconnected method.

@ariard ariard left a comment

Do you also plan to dry up ChannelManager::block_disconnected into ChannelManager::update_best_block?

///
/// If we return Err, the channel may have been closed, at which point the standard
/// requirements apply - no calls may be made except those explicitly stated to be allowed
/// post-shutdown.

Does this requirement hold right now? We don't have a ChannelState::Reorged blocking any further non-post-shutdown calls; should it rather say "force_shutdown() should be the next call"?

Collaborator Author

Hmm? It could be clarified, but it's not just that force_shutdown is the next call - force_shutdown is the only allowed call ever again on this Channel (plus the stuff for get_channel_update). This is true in any "return channel-closing-error" case.


I agree on the only call allowed; my remark was more on the how of this restriction. After this PR, if we return Err from Channel::update_best_block, we'll call force_shutdown in ChannelManager::do_chain_event.

I believe a more straightforward API would be to call force_shutdown directly inside update_best_block, thus preventing the caller from forgetting to call force_shutdown. The state transition to ShutdownComplete is only done in this latter method.

At least the comment should say, "If we return Err, force_shutdown must be called"?
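(To make the contract under discussion concrete, here is a rough caller-side sketch with hypothetical, simplified types - not the real ChannelManager::do_chain_event - showing that an Err from update_best_block is followed by force_shutdown and removal of the channel.)

use std::collections::HashMap;

struct ErrorMessage(String);

struct Channel;
impl Channel {
    fn update_best_block(&mut self, _height: u32, _header_time: u32) -> Result<(), ErrorMessage> {
        Ok(())
    }
    fn force_shutdown(&mut self) {
        // In the real code this is where the transition to ShutdownComplete happens.
    }
}

fn do_chain_event(by_id: &mut HashMap<[u8; 32], Channel>, height: u32, header_time: u32) {
    by_id.retain(|_, chan| {
        match chan.update_best_block(height, header_time) {
            Ok(()) => true,
            Err(_msg) => {
                // If update_best_block errored, force_shutdown is the only call still
                // allowed on this channel; it is then dropped from the map.
                chan.force_shutdown();
                false
            }
        }
    });
}

fn main() {
    let mut by_id: HashMap<[u8; 32], Channel> = HashMap::new();
    by_id.insert([0u8; 32], Channel);
    do_chain_event(&mut by_id, 700_000, 1_600_000_000);
}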

log_trace!(logger, "Detected channel-closing tx {} spending {}:{}, closing channel {}", tx.txid(), inp.previous_output.txid, inp.previous_output.vout, log_bytes!(self.channel_id()));
return Err(msgs::ErrorMessage {
channel_id: self.channel_id(),
data: "Commitment transaction was confirmed on chain.".to_owned()

In the future we might consider drying up that kind of message. Otherwise, even if your light client shelters its connections to an Electrum server behind Tor, a malicious server colluding with a channel peer can still delay announcement of the commitment transaction to observe a concomitant error message, and thus link your Lightning node to its validation backend.

Collaborator Author

Yea, possibly. We'd need to be very careful, however, that we handle several cases identically - e.g. it would be very easy to return a dummy error message, but it would still be detectable exactly which case we hit by observing other behavior.

// may have already happened for this block).
// XXX: Test this case!
if let Some(funding_locked) = self.check_get_funding_locked(height) {
return Ok(Some(funding_locked));

Okay, I understand the rationale of this special case - making 1-conf lock-in functional with regard to the order of block confirmation/transaction announcement. But it should be documented pretty clearly in our API that either transactions_confirmed or update_best_block might generate FundingLocked depending on the minimum_depth value. It's really counter-intuitive to have a channel configuration variable changing which method an event is expected to be delivered by.

Collaborator Author

We could, but we don't document which message events a function may generate anywhere else? What makes this site special - in general the PeerHandler should just be polling for new message events regularly.


I think it would serve more as a friendly note for library developers than library users. It was surprising at first sight with the old API model in mind; now I get it.

/// XXX: Note that in the case that a transaction which was provided via this method is
/// reorganized out of the best chain, the only current way to correctly handle any
/// associated channel force-closure is to walk the chain back from the current tip with
/// repeated `block_disconnected` calls, followed by an `update_best_block` call.

Is the requirement to call update_best_block actually true? ChannelManager::block_disconnected is calling Channel::update_best_block?

Collaborator Author

Mmm, the docs are unclear - I meant to go from the fork point to the new tip. In any case, I'm going to work on updating block_disconnected to build a cleaner API, so I'll just fix it with that.


The docs are better now, mentioning both the reorg fork point and the new best-chain tip as required calls to update_best_block. Thanks.
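(A rough usage sketch of the reorg handling those docs describe, with stub types standing in for the real bitcoin/LDK ones; the two manager methods mirror the signatures quoted elsewhere in this PR, though the real txdata entries hold transaction references.)

struct Header;   // stands in for bitcoin::BlockHeader
struct Tx;       // stands in for bitcoin::Transaction
struct Manager;  // stands in for ChannelManager

impl Manager {
    fn transactions_confirmed(&self, _header: &Header, _height: u32, _txdata: &[(usize, Tx)]) {}
    fn update_best_block(&self, _header: &Header, _height: u32) {}
}

// On a reorg: first rewind the manager's best-block view to the fork point (the
// last block shared by the old and new chains), then walk the new chain forward,
// confirming relevant transactions and finishing at the new tip.
fn handle_reorg(manager: &Manager, fork_point: (&Header, u32), new_chain: &[(Header, u32, Vec<(usize, Tx)>)]) {
    let (fork_header, fork_height) = fork_point;
    manager.update_best_block(fork_header, fork_height);

    for (header, height, txdata) in new_chain {
        manager.transactions_confirmed(header, *height, txdata);
        manager.update_best_block(header, *height);
    }
}

fn main() {
    let manager = Manager;
    handle_reorg(&manager, (&Header, 700_000), &[(Header, 700_001, vec![])]);
}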


/// Updates channel state with the current best blockchain tip. You should attempt to call this
/// quickly after a new block becomes available, however it does not need to be called for each
/// new block.

"It does not need to be called for each new block" What about check_get_funding_locked() to account for confirmations ?

Collaborator Author

I'm not really sure what you're asking? check_get_funding_locked handles skipped blocks fine - or at least it should?


Better now. I think you can add a small comment on check_get_funding_locked: "Confirmations are accounted for with an absolute height rather than incrementally. This method might only be called when a new chain tip is reached, after multiple block connections".

@valentinewallace
Contributor

valentinewallace commented Mar 10, 2021

Fixing tests over here: https://github.com/valentinewallace/rust-lightning/tree/make-tests-compile (will update this comment as I go)

  • ln::functional_tests::htlc_claim_single_commitment_only_a
  • ln::functional_tests::htlc_claim_single_commitment_only_b
  • ln::functional_tests::test_bump_penalty_txn_on_revoked_commitment
  • ln::functional_tests::test_bump_penalty_txn_on_revoked_htlcs
  • ln::functional_tests::test_check_htlc_underpaying
  • ln::functional_tests::test_concurrent_monitor_claim
  • ln::functional_tests::test_holding_cell_htlc_add_timeouts
  • ln::functional_tests::test_htlc_on_chain_timeout
  • ln::functional_tests::test_htlc_timeout
  • test_unconf_chan
  • test_test_onchain_htlc_settlement_after_close

} else {
self.monitor_pending_funding_locked = true;
return Ok((None, timed_out_htlcs));
for inp in tx.input.iter() {
Contributor

this code block looks like it might be convenient to isolate in a utility method somewhere

Collaborator Author

It depends on the if let Some(funding_txo), though, and it's also nice to do it in the same transaction-walk loop to avoid looping twice per channel.


self.update_time_counter = cmp::max(self.update_time_counter, highest_header_time);

if self.funding_tx_confirmation_height > 0 {
Contributor

perhaps too short a repeated code fragment to extract, but maybe worth considering? Something like update_confirmation_height or similar?

Collaborator Author

We can move it around a bit to clean it up, actually, relying on the fact that check_get_funding_locked will never return a funding_locked if we also want to fail the channel due to the < minimum_depth / 2 check.

@TheBlueMatt
Collaborator Author

TheBlueMatt commented Mar 19, 2021

Thanks for the work on this last week @valentinewallace. This is now rebased on #846 and should be passing all the tests. I'll add a few tests for things tomorrow and then it should be good to go.

@TheBlueMatt TheBlueMatt force-pushed the 2021-03-skip-blocks branch 2 times, most recently from c406072 to 6abdd28 on March 19, 2021 19:27
@codecov

codecov bot commented Mar 19, 2021

Codecov Report

Merging #838 (22f280c) into main (8a8c75a) will increase coverage by 0.04%.
The diff coverage is 91.72%.

❗ Current head 22f280c differs from pull request most recent head 47ad3d6. Consider uploading reports for the commit 47ad3d6 to get more accurate results

@@            Coverage Diff             @@
##             main     #838      +/-   ##
==========================================
+ Coverage   90.59%   90.63%   +0.04%     
==========================================
  Files          51       51              
  Lines       27109    27204      +95     
==========================================
+ Hits        24560    24657      +97     
+ Misses       2549     2547       -2     
Impacted Files Coverage Δ
lightning/src/ln/channel.rs 87.32% <86.74%> (+0.19%) ⬆️
lightning/src/ln/channelmanager.rs 83.42% <90.90%> (+0.21%) ⬆️
lightning/src/ln/functional_test_utils.rs 95.02% <93.75%> (-0.20%) ⬇️
lightning/src/ln/functional_tests.rs 96.83% <93.97%> (-0.02%) ⬇️
lightning-persister/src/lib.rs 95.61% <100.00%> (ø)
lightning/src/ln/chanmon_update_fail_tests.rs 97.56% <100.00%> (ø)
lightning/src/ln/reorg_tests.rs 99.57% <100.00%> (+0.01%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@TheBlueMatt TheBlueMatt force-pushed the 2021-03-skip-blocks branch 2 times, most recently from 4940ffa to 3a55b08 on March 20, 2021 04:29
@TheBlueMatt TheBlueMatt marked this pull request as ready for review March 20, 2021 04:39
@TheBlueMatt TheBlueMatt force-pushed the 2021-03-skip-blocks branch from 3a55b08 to 9a1e392 on March 20, 2021 04:39
@TheBlueMatt
Collaborator Author

Still needs more tests, but now has no PR dependencies and I think is code/doc-complete (though I'd very much welcome suggestions on making the docs more clear).

@TheBlueMatt TheBlueMatt force-pushed the 2021-03-skip-blocks branch 5 times, most recently from 879a162 to 4dadc9d on March 20, 2021 17:29
@TheBlueMatt
Collaborator Author

Added tests which I'm somewhat happy with. I think there may end up being a followup to tweak the API further, but I'd prefer to land this as-is first.

Contributor
@jkczyz jkczyz left a comment

Went through commit-by-commit, so please disregard anything that changed in a later commit.

if self.funding_tx_confirmations > 0 {
if self.funding_tx_confirmations == self.minimum_depth as u64 {
if self.funding_tx_confirmation_height > 0 {
let funding_tx_confirmations = height as i64 - self.funding_tx_confirmation_height as i64 + 1;
Contributor

Is i64 needed in case of a reorg?

Collaborator Author

Yes, if you call with a lower height it could underflow and go negative.
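(A toy illustration of that underflow concern, with made-up numbers: computing the count in u32 would wrap around after a reorg, while i64 simply goes negative.)

// Toy illustration of why the confirmation count is computed in i64.
fn confirmations(best_height: u32, funding_conf_height: u32) -> i64 {
    best_height as i64 - funding_conf_height as i64 + 1
}

fn main() {
    // Normal case: funding confirmed at height 100, tip at 105 -> 6 confirmations.
    assert_eq!(confirmations(105, 100), 6);
    // Reorg case: tip dropped back below the funding confirmation height.
    // In u32 arithmetic 98 - 100 would wrap to a huge value; in i64 it is just -1.
    assert_eq!(confirmations(98, 100), -1);
}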

Comment on lines 3633 to 3672
let non_shutdown_state = self.channel_state & (!MULTI_STATE_FLAGS);
if (non_shutdown_state >= ChannelState::ChannelFunded as u32 ||
(non_shutdown_state & ChannelState::OurFundingLocked as u32) == ChannelState::OurFundingLocked as u32) &&
funding_tx_confirmations < self.minimum_depth as i64 / 2 {
return true;
}
Contributor

A one-line comment explaining this would be useful for the reader.

@ariard ariard left a comment

Mostly minor points. Test changes sound good to me, will look twice.

true
} else if non_shutdown_state == (ChannelState::FundingSent as u32 | ChannelState::OurFundingLocked as u32) {
// We got a reorg but not enough to trigger a force close, just update
// funding_tx_confirmed_in and return.

After this PR, funding_tx_confirmed_in is directly set in transactions_confirmed, so I don't think it makes sense to mention it here?

I even think you can cut this branch if you initialize need_commitment_update to false?

Collaborator Author

No, it's an exception to the next (channel_state < ChannelFunded) panic. Updated the comments.

false
};

//TODO: Note that this must be a duplicate of the previous commitment point they sent us,

Why does this comment mention the commitment point they send us when in fact it's us sending our commitment point to be used in the next holder commitment transaction?

Collaborator Author

It's completely stale nonsense based on my misunderstanding of the spec in 2017, and it hasn't been touched since. I've removed it.

} else {
if self.is_outbound() {
for input in tx.input.iter() {
if input.witness.is_empty() {

If we hit this branch, it does mean the confirmed transaction matches our expectation (tx.txid() == funding_txo.txid), so we might be okay there, even if we're exposed to malleation during tx-relay.

That said, our expected funding_txo might belong to a malleable transaction given as such by get_outbound_funding_created, and in fact this method should recall that funding_txo MUST NOT belong to a malleable transaction.

Collaborator Author

Yea, technically by then we're ok, but it's also a critical bug in the implementation of the wallet calling LDK, so we should do something. Honestly, I think we should probably replace the whole FundingTransactionBroadcastable path to make the user provide us the transaction, but that's a separate PR.

@@ -3537,112 +3647,32 @@ impl<Signer: Sign> Channel<Signer> {
}
});

if self.funding_tx_confirmations > 0 {
self.funding_tx_confirmations += 1;
self.update_time_counter = cmp::max(self.update_time_counter, highest_header_time);

Don't we risk breaking the BOLT 7 requirement by setting update_time_counter to the highest header time? "MUST set timestamp to greater than 0, AND to greater than any previously-sent channel_update for this short_channel_id."

Collaborator Author

It's a cmp::max, so it should never go down? But we definitely need to do this because many clients filter updates based on the timestamp, so we need the time to be at least as of very recently (without having a "real" time source).

return Ok((None, timed_out_htlcs));
}
}
if funding_tx_confirmations < self.minimum_depth as i64 / 2 {

Can you document the divide-by-2 usage?

Collaborator Author

Honestly, it's arbitrary. I think we should drop it entirely and replace it with "funding tx conf is now 0", but there are enough changes in this PR already.

Contributor
@jkczyz jkczyz left a comment

Mostly some confusion on my part it seems. :P

Comment on lines -205 to +207
self.manager.block_connected(&header, &txdata, self.height as u32);
self.manager.transactions_confirmed(&header, self.height as u32, &txdata);
self.manager.update_best_block(&header, self.height as u32);
Contributor

Oh, I missed that this wasn't using the Listen version of block_connected. Nevermind.

node.node.block_connected(&block.header, &txdata, height);
node.node.block_connected(&block, height);
Contributor

I think that I confused block_connected with update_best_block when I wrote that comment. I understand the change now.

Comment on lines 3509 to 3557
let non_shutdown_state = self.channel_state & (!MULTI_STATE_FLAGS);
if non_shutdown_state & !(ChannelState::TheirFundingLocked as u32) == ChannelState::FundingSent as u32 {
for &(index_in_block, tx) in txdata.iter() {
let funding_txo = self.get_funding_txo().unwrap();
if tx.txid() == funding_txo.txid {
let txo_idx = funding_txo.index as usize;
if txo_idx >= tx.output.len() || tx.output[txo_idx].script_pubkey != self.get_funding_redeemscript().to_v0_p2wsh() ||
tx.output[txo_idx].value != self.channel_value_satoshis {
if self.is_outbound() {
// If we generated the funding transaction and it doesn't match what it
// should, the client is really broken and we should just panic and
// tell them off. That said, because hash collisions happen with high
// probability in fuzztarget mode, if we're fuzzing we just close the
// channel and move on.
#[cfg(not(feature = "fuzztarget"))]
panic!("Client called ChannelManager::funding_transaction_generated with bogus transaction!");
for &(index_in_block, tx) in txdata.iter() {
if let Some(funding_txo) = self.get_funding_txo() {
Contributor

May have misread this. I think I was referring to:

let non_shutdown_state = self.channel_state & (!MULTI_STATE_FLAGS);
if non_shutdown_state & !(ChannelState::TheirFundingLocked as u32) == ChannelState::FundingSent as u32 {

and

if let Some(funding_txo) = self.get_funding_txo() {

but didn't catch the move inside the for-loop.

@TheBlueMatt TheBlueMatt force-pushed the 2021-03-skip-blocks branch 2 times, most recently from 6ea63e2 to 22f280c on March 26, 2021 19:09
@ariard ariard left a comment

Sounds good to me, just have to squash?

panic!("Client called ChannelManager::funding_transaction_generated with bogus transaction!");
}
self.channel_state = ChannelState::ShutdownComplete as u32;
self.update_time_counter += 1;

Are we going to have sent any channel gossip before we fail here?

Collaborator Author

Presumably not - we can't have announced the channel yet. I'm gonna leave this unless you feel strongly - it's generally "correct" to increment the counter before failure, even if we don't need it here.

///
/// May return some HTLCs (and their payment_hash) which have timed out and should be failed
/// back.
pub fn block_connected(&mut self, header: &BlockHeader, txdata: &TransactionData, height: u32) -> Result<(Option<msgs::FundingLocked>, Vec<(HTLCSource, PaymentHash)>), msgs::ErrorMessage> {
pub fn update_best_block(&mut self, height: u32, highest_header_time: u32) -> Result<(Option<msgs::FundingLocked>, Vec<(HTLCSource, PaymentHash)>), msgs::ErrorMessage> {

What's highest_header_time there? The block timestamp, the MTP, or the local client's reception time of the Electrum header?

Collaborator Author

In this case it's just the header.time, which is at most current time + 2 hours. We just use it to ensure our gossip messages aren't too far in the past.

// the funding transaction's confirmation count has dipped below minimum_depth / 2,
// close the channel and hope we can get the latest state on chain (because presumably
// the funding transaction is at least still in the mempool of most nodes).
if funding_tx_confirmations < self.minimum_depth as i64 / 2 {

For low-conf chans (e.g. 1 or 2 confs), eventually we could attach a CPFP and rebroadcast to avoid canceling payments already sold out, if the bumping feerate is lower than the about-to-be-lost HTLCs received as holder balance.

Collaborator Author

Note that this only triggers on a disconnect, i.e. if the transaction was confirmed but is now less-confirmed. Honestly I think we should drop this entirely, it's kinda dumb, but that shouldn't be in this PR.

// the funding transaction's confirmation count has dipped below minimum_depth / 2,
// close the channel and hope we can get the latest state on chain (because presumably
// the funding transaction is at least still in the mempool of most nodes).
if funding_tx_confirmations < self.minimum_depth as i64 / 2 {

Is this expected to break test coverage?

diff --git a/lightning/src/ln/channel.rs b/lightning/src/ln/channel.rs
index dd026398..48602544 100644
--- a/lightning/src/ln/channel.rs
+++ b/lightning/src/ln/channel.rs
@@ -3664,7 +3664,7 @@ impl<Signer: Sign> Channel<Signer> {
                        // the funding transaction's confirmation count has dipped below minimum_depth / 2,
                        // close the channel and hope we can get the latest state on chain (because presumably
                        // the funding transaction is at least still in the mempool of most nodes).
-                       if funding_tx_confirmations < self.minimum_depth as i64 / 2 {
+                       if funding_tx_confirmations < self.minimum_depth as i64 / 4 {
                                return Err(msgs::ErrorMessage {
                                        channel_id: self.channel_id(),
                                        data: format!("Funding transaction was un-confirmed. Locked at {} confs, now have {} confs.", self.minimum_depth, funding_tx_confirmations),

Collaborator Author

Hmm, likely not, I'm not sure how much coverage we have for the "confs went down, close channel" stuff, and we should drop it anyway.

Previously, we expected every block to be connected in-order,
allowing us to track confirmations by simply incrementing a counter
for each new block connected. In anticipation of moving to an
update-height model in the next commit, this moves to tracking
confirmations by simply storing the height at which the funding
transaction was confirmed.

This commit also corrects our "funding was reorganized out of the
best chain" heuristic: instead of a flat 6 blocks, it uses half the
required confirmation count as the point at which we force-close.

Even still, for low confirmation counts (eg 1 block), an ill-timed
reorg may still cause spurious force-closes, though that behavior
is not new in this commit.

Electrum clients primarily operate in a world where they query (and
subscribe to notifications for) transactions by script_pubkeys.
They may never learn very much about the actual blockchain and
orient their events around individual transactions, not the
blockchain.

This makes our ChannelManager interface somewhat more amenable to
such a client by splitting `block_connected` into
`transactions_confirmed` and `update_best_block`. The first handles
checking the funding transaction and storing its height/confirmation
block, whereas the second handles funding_locked and reorg logic.

Sadly, this interface is somewhat easy to misuse - notifying the
channel of the funding transaction being reorganized out of the
chain is complicated when the only notification received is that
a new block is connected at a given height. This will be addressed
in a future commit.
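(A loose usage sketch of the split this commit message describes, with hypothetical client-side types; the two manager methods mirror the signatures quoted elsewhere in this PR, though the real txdata entries hold transaction references. Per-transaction confirmation notices map to transactions_confirmed and new-tip notices to update_best_block, and the two can arrive independently.)

struct Header;   // stands in for bitcoin::BlockHeader
struct Tx;       // stands in for bitcoin::Transaction
struct Manager;  // stands in for ChannelManager

impl Manager {
    fn transactions_confirmed(&self, _header: &Header, _height: u32, _txdata: &[(usize, Tx)]) {}
    fn update_best_block(&self, _header: &Header, _height: u32) {}
}

// Events as an Electrum-style backend might deliver them: per-transaction
// confirmation notices (from script_pubkey subscriptions) and new-tip notices
// (from the headers subscription), rather than ordered, in-sequence blocks.
enum ElectrumEvent {
    TxConfirmed { header: Header, height: u32, pos_in_block: usize, tx: Tx },
    NewTip { header: Header, height: u32 },
}

fn handle_event(manager: &Manager, event: ElectrumEvent) {
    match event {
        ElectrumEvent::TxConfirmed { header, height, pos_in_block, tx } => {
            // May refer to an older block, even after a later tip was reported.
            manager.transactions_confirmed(&header, height, &[(pos_in_block, tx)]);
        }
        ElectrumEvent::NewTip { header, height } => {
            // Does not need to be called for every block, only when the tip moves.
            manager.update_best_block(&header, height);
        }
    }
}

fn main() {
    let manager = Manager;
    handle_event(&manager, ElectrumEvent::NewTip { header: Header, height: 700_000 });
}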
@TheBlueMatt TheBlueMatt force-pushed the 2021-03-skip-blocks branch from 22f280c to 945f213 on April 2, 2021 17:33
@TheBlueMatt
Collaborator Author

Squashed fixup commits with only one wording diff:

$ git diff-tree -U1 22f280c8d 945f2136c
diff --git a/lightning/src/ln/channel.rs b/lightning/src/ln/channel.rs
index dd026398a..174eed9b9 100644
--- a/lightning/src/ln/channel.rs
+++ b/lightning/src/ln/channel.rs
@@ -3610,3 +3610,3 @@ impl<Signer: Sign> Channel<Signer> {
 							channel_id: self.channel_id(),
-							data: "Commitment transaction was confirmed on chain.".to_owned()
+							data: "Commitment or closing transaction was confirmed on chain.".to_owned()
 						});

When we force-close a channel, for whatever reason, it is nice to
send an error message to our peer. This allows them to close the
channel on their end instead of trying to send through it and
failing. Further, it may induce them to broadcast their commitment
transaction, possibly getting that confirmed and saving us on fees.

This commit adds a few more cases where we should have been sending
error messages but weren't. It also includes an almost-global
replace in tests of the second argument in
`check_closed_broadcast!()` from false to true (indicating an error
message is expected). There are only a few exceptions, notably
those where the closure is the result of our counterparty having
sent *us* an error message.

This also moves the scanning of the block for commitment
transactions into Channel, unifying the error path.
@TheBlueMatt TheBlueMatt force-pushed the 2021-03-skip-blocks branch from 945f213 to 7f2f046 on April 5, 2021 17:03
Contributor
@jkczyz jkczyz left a comment

ACK 7f2f046

No other concerns after my last review.

@TheBlueMatt TheBlueMatt added this to the 0.0.14 milestone Apr 5, 2021
@ariard

ariard commented Apr 5, 2021

Code Review ACK 7f2f046

Okay to revise the closing-channel-on-reorg feature later.

See the similar commit that operates on `Channel`'s internal API
for more details on the reasoning.
It is now entirely redundant with ChannelManager::update_best_block
and is still accessible via `Listen::block_disconnected`.
@TheBlueMatt TheBlueMatt force-pushed the 2021-03-skip-blocks branch from 7f2f046 to 47ad3d6 on April 5, 2021 21:33
@TheBlueMatt
Collaborator Author

Ugh, noticed CI was broken due to bad doc links. The fix diff is pure doc and trivial, so I'm going to merge this after CI passes:

$ git diff-tree -U1 7f2f046f 47ad3d6b
diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index 25013780..31c8dcca 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -3422,7 +3422,9 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 	///
-	/// This method may be called for a previous block after an [`update_best_block()`] call has
+	/// This method may be called for a previous block after an [`update_best_block`] call has
 	/// been made for a later block, however it must *not* be called with transaction data from a
-	/// block which is no longer in the best chain (ie where [`update_best_block()`] has already
+	/// block which is no longer in the best chain (ie where [`update_best_block`] has already
 	/// been informed about a blockchain reorganization which no longer includes the block which
 	/// corresponds to `header`).
+	///
+	/// [`update_best_block`]: `Self::update_best_block`
 	pub fn transactions_confirmed(&self, header: &BlockHeader, height: u32, txdata: &TransactionData) {
