
Electrum interface for ChannelMonitor #858


Merged
16 commits merged into lightningdevkit:main from the 2021-03-electrum-interface branch on Apr 15, 2021

Conversation

@jkczyz (Contributor) commented Mar 29, 2021

Define an Electrum-friendly interface for ChannelMonitor where transactions are confirmed and unconfirmed independently from updating the latest block.

@jkczyz (Contributor Author) commented Mar 29, 2021

This was more straight-forward than I had imagined, but we may want to take the opportunity to flatten onchain_events_waiting_threshold_conf from

onchain_events_waiting_threshold_conf: HashMap<u32, Vec<OnchainEvent>>,

to something like this

onchain_events_waiting_threshold_conf: Vec<(Txid, u32, OnchainEvent)>,

which would likely simplify the code a bit.

@ariard @TheBlueMatt Let me know what you think about that before I go too far down that path.
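For illustration only, a minimal sketch of what the flattened storage might look like (the struct, field, and helper names here are assumptions, not the merged code):

use bitcoin::hash_types::Txid;

// Placeholder for the crate-internal event enum; variants and fields elided.
enum OnchainEvent { HTLCUpdate, MaturingOutput }

// Each pending event carries its own txid and confirmation height, so entries
// can be filtered by either key without a height-keyed map.
struct OnchainEventEntry {
    txid: Txid,
    height: u32,
    event: OnchainEvent,
}

// Example: dropping all entries at or above a disconnected height.
fn remove_entries_at_or_above(entries: &mut Vec<OnchainEventEntry>, height: u32) {
    entries.retain(|entry| entry.height < height);
}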

@codecov (bot) commented Mar 29, 2021

Codecov Report

Merging #858 (524c532) into main (9397db6) will increase coverage by 0.02%.
The diff coverage is 93.29%.


@@            Coverage Diff             @@
##             main     #858      +/-   ##
==========================================
+ Coverage   90.18%   90.20%   +0.02%     
==========================================
  Files          55       55              
  Lines       28985    29088     +103     
==========================================
+ Hits        26141    26240      +99     
- Misses       2844     2848       +4     
Impacted Files Coverage Δ
lightning/src/ln/channelmanager.rs 83.11% <86.36%> (+0.05%) ⬆️
lightning/src/ln/onchaintx.rs 87.27% <89.23%> (+0.47%) ⬆️
lightning/src/ln/functional_test_utils.rs 95.08% <92.85%> (-0.10%) ⬇️
lightning/src/chain/channelmonitor.rs 95.45% <94.83%> (+0.03%) ⬆️
background-processor/src/lib.rs 96.00% <100.00%> (-0.04%) ⬇️
lightning/src/chain/chainmonitor.rs 94.81% <100.00%> (+0.84%) ⬆️
lightning/src/ln/channel.rs 87.24% <100.00%> (ø)
lightning/src/ln/functional_tests.rs 96.79% <100.00%> (-0.04%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 9397db6...524c532.

fn update_best_block<L: Deref>(&mut self, header: &BlockHeader, height: u32, logger: &L)
where L::Target: Logger {
let block_hash = header.block_hash();
log_trace!(logger, "New best block {} at height {}", block_hash, height);
Collaborator

All of the height-based triggers should fire if only update_best_block is called but transactions_confirmed isn't at a given height.

Contributor Author

How would you define "height-based triggers"? AFAICT, there are places where only height is considered:

https://github.com/rust-bitcoin/rust-lightning/blob/cb442f8271f21d25fb50b82c215bf2dd9cf5bfa5/lightning/src/chain/channelmonitor.rs#L2144-L2163

And other places where, at first, only height is considered:

https://github.com/rust-bitcoin/rust-lightning/blob/cb442f8271f21d25fb50b82c215bf2dd9cf5bfa5/lightning/src/chain/channelmonitor.rs#L2129-L2143

But where the produced claimable_outpoints are later used along with both txn_matched and height:

https://github.com/rust-bitcoin/rust-lightning/blob/cb442f8271f21d25fb50b82c215bf2dd9cf5bfa5/lightning/src/chain/channelmonitor.rs#L2165

But then digging into update_claims_view, sometimes only height is needed while other times tx_matched is also needed.

Just want some clarification before pursuing this as it seems there may be a lot to untangle.

Collaborator

Yea, I do think there's gonna be some stuff to untangle, sadly. I think both of those cases, indeed, need to be untangled, which will definitely require a similar refactor/untangling in OnchainTxHandler, though it may not be too crazy in there.

@jkczyz (Contributor Author) commented Apr 2, 2021

This all fell into place in a much cleaner way than I had imagined. AFAICT, OnchainTxHandler::update_claims_view works with or without any transactions, which makes things easy. So the ChannelMonitor refactoring was very straight-forward.

Could you give this a quick pass in case I'm missing something crucial? If all looks good, I can add wrappers in ChainMonitor.

Collaborator

At first glance this looks good. Obviously would be nice to test, though.

@TheBlueMatt (Collaborator) left a comment

It would be nice to expose these functions via a wrapper in ChainMonitor as well that calls out to all the individual ChannelMonitors. Also, it would be nice to have a method to get the list of Txids which may be relevant, presumably that would be pretty easy.

@TheBlueMatt (Collaborator) commented

This was more straight-forward than I had imagined, but we may want to take the opportunity to flatten onchain_events_waiting_threshold_conf from

I don't really feel super strongly about this - I guess flattening it is nice given we are gonna walk it on reorgs and it's pretty small, but whatever.

@valentinewallace (Contributor) commented

Code looks reasonable :) Maybe you were already planning to add this but it'd be nice to be clear that a user should pick either block_connected(..)-type API or transaction_(un)confirmed API, not both.

@jkczyz (Contributor Author) commented Mar 31, 2021

This was more straight-forward than I had imagined, but we may want to take the opportunity to flatten onchain_events_waiting_threshold_conf from

I don't really feel super strongly about this - I guess flattening it is nice given we are gonna walk it on reorgs and it's pretty small, but whatever.

Went ahead and flattened this. It makes it much simpler since the txid gets pulled up rather than needing to match on each event type in order to extract it.

@jkczyz force-pushed the 2021-03-electrum-interface branch 3 times, most recently from 7fa50a7 to cdb2c89 on April 2, 2021 03:24
@jkczyz (Contributor Author) commented Apr 2, 2021

It would be nice to expose these functions via a wrapper in ChainMonitor as well that calls out to all the individual ChannelMonitors. Also, it would be nice to have a method to get the list of Txids which may be relevant, presumably that would be pretty easy.

Regarding relevant Txids, your comment on #853 seems to indicate this is not desirable for ChannelMonitors. Am I misinterpreting this?

@jkczyz force-pushed the 2021-03-electrum-interface branch from cdb2c89 to 4682ae3 on April 2, 2021 22:26
@TheBlueMatt (Collaborator) commented

Regarding relevant Txids, your comment on #853 seems to indicate this is not desirable for ChannelMonitors. Am I misinterpreting this?

Oh, oops, what I meant by that was that we can't know all the txids fully in advance; instead, ChannelMonitor's get_relevant_txids method will need to walk the list of txids which have already been confirmed on chain (and which we now care about if they are reorganized-out).

@TheBlueMatt added this to the 0.0.14 milestone Apr 5, 2021
@jkczyz (Contributor Author) commented Apr 5, 2021

Regarding relevant Txids, your comment on #853 seems to indicate this is not desirable for ChannelMonitors. Am I misinterpreting this?

Oh, oops, what I meant by that was that we can't know all the txids fully in advance; instead, ChannelMonitor's get_relevant_txids method will need to walk the list of txids which have already been confirmed on chain (and which we now care about if they are reorganized-out).

Ah, so basically the txid field of all events waiting for confirmation threshold.
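For context, a rough sketch of what that walk could look like (using a minimal stand-in for the pending-event entry; not the merged get_relevant_txids body):

use bitcoin::hash_types::Txid;

// Minimal stand-in for a pending on-chain event entry.
struct PendingEvent { txid: Txid }

// Sketch: the relevant txids are simply those of events still awaiting their
// confirmation threshold, deduplicated so callers can watch them for reorgs.
fn relevant_txids(entries: &[PendingEvent]) -> Vec<Txid> {
    let mut txids: Vec<Txid> = entries.iter().map(|e| e.txid).collect();
    txids.sort_unstable();
    txids.dedup();
    txids
}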

log_trace!(logger, "New best block {} at height {}", block_hash, height);
self.last_block_hash = block_hash;

self.block_confirmed(height, vec![], vec![], vec![], broadcaster, fee_estimator, logger)
Collaborator

We need to handle the height-decreased case somehow here.

Contributor Author

Right, this slipped through. Added some conditional logic which required tracking the last height rather than just the last hash.

//We may discard:
//- htlc update there as failure-trigger tx (revoked commitment tx, non-revoked commitment tx, HTLC-timeout tx) has been disconnected
//- maturing spendable output has transaction paying us has been disconnected
self.onchain_events_waiting_threshold_conf.retain(|ref entry| entry.height != height);
Collaborator

Depending on how we handle height decreases in update_best_block this may need to be a < or > instead of !=.

Contributor Author

Given update_best_block won't call this, I think this can remain as is since block_disconnected is meant to be called for each disconnected block. However, I changed this to < for consistency with the check in update_best_block. Let me know if you have a preference.

let onchain_events_waiting_threshold_conf =
self.onchain_events_waiting_threshold_conf.drain(..).collect::<Vec<_>>();
for entry in onchain_events_waiting_threshold_conf {
if entry.height == height {
Collaborator

Same as in ChannelMonitor, depending on how we handle height decreases, this may need to be < or > not ==.

Contributor Author

Changed to >= since the height here is for the disconnected block. FYI, also changed a couple other places within the method.

This method could probably use a better name now that it is called by both ChannelMonitor::update_best_block and ChannelMonitor::block_disconnected directly and ChannelMonitor::transaction_unconfirmed indirectly. Any suggestions?

@TheBlueMatt (Collaborator) commented

Ah, so basically the txid field of all events waiting for confirmation threshold.

Yes, I believe that will work. May want to add a general thing in the test harness to check that any transaction we are calling transaction_unconfirmed on is in the set.
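A sketch of the kind of harness check being suggested (the helper name and shape are assumptions, not existing test-util code):

use bitcoin::hash_types::Txid;

// Before simulating an unconfirmation in tests, assert the txid is one the
// monitor reported as relevant, catching harness mistakes early.
fn assert_txid_is_relevant(relevant: &[Txid], txid: &Txid) {
    assert!(relevant.contains(txid), "transaction_unconfirmed called on an unwatched txid");
}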

@jkczyz (Contributor Author) left a comment

Addressed comments but still need to add tests and docs. There may be some edge cases that need to be considered regarding order of transactions_confirmed / transaction_unconfirmed and update_best_block. So any feedback there would be helpful.


@jkczyz force-pushed the 2021-03-electrum-interface branch 2 times, most recently from 9ea1f83 to 23bf97a on April 7, 2021 19:32
@jkczyz (Contributor Author) commented Apr 7, 2021

Updated functional_test_utils to test the different use cases. Will wait for #853 to test transaction_unconfirmed if that sounds alright.

@jkczyz force-pushed the 2021-03-electrum-interface branch from 23bf97a to 73c2967 on April 8, 2021 02:26
@jkczyz marked this pull request as ready for review April 8, 2021 02:26
@jkczyz (Contributor Author) commented Apr 8, 2021

Filled in the docs. Should be good for review. Requesting additional feedback on how the stated contracts compare to those in ChannelManager and if they need to be better aligned.

}
});
e.push(OnchainEvent::HTLCUpdate { htlc_update: ((**source).clone(), htlc.payment_hash.clone())});
self.onchain_events_waiting_threshold_conf.retain(|ref entry| {
Collaborator

Just a note, but this doc on transactions_confirmed in ChannelManager applies double here - if we accept a transaction that is (now) on a fork, without getting a transaction_unconfirmed call before it matures, we may end up failing back an HTLC before the fail-back transaction actually confirms.

	/// This method may be called for a previous block after an [`update_best_block`] call has
	/// been made for a later block, however it must *not* be called with transaction data from a
	/// block which is no longer in the best chain (ie where [`update_best_block`] has already
	/// been informed about a blockchain reorganization which no longer includes the block which
	/// corresponds to `header`).

Contributor Author

Would you like me to add this to the corresponding methods in ChainMonitor and ChannelMonitor? I'm wondering if we should move this to some module-level documentation and reference it rather than repeating the comment. Though note that the code is across two different modules currently.

Collaborator

Yea, I was thinking after writing this that maybe a parallel trait to chain::Listen may make sense.

Contributor Author

Any objection to expanding chain::Listen to include the three new methods? Otherwise, I can ponder a good name for a new trait.

Collaborator

Hmm, I guess we could, but it feels strange to me. Really you should only ever use one or the other, never both. Sure, different traits isn't really an enforcement of that, but it at least exposes it somewhat.

Contributor Author

Good point. chain::Update is the first thing that comes to mind but will let the subconscious work on it while making the change.

Contributor Author

Will make this change in a follow-up PR as discussed offline.
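For reference, one possible shape for such a trait (the trait name and exact signatures are placeholders picked for this sketch; the methods mirror the ones added in this PR):

use bitcoin::blockdata::block::BlockHeader;
use bitcoin::blockdata::transaction::Transaction;
use bitcoin::hash_types::Txid;

// Transaction data as (index-within-block, transaction) pairs, mirroring the
// existing chain interfaces.
type TransactionData<'a> = [(usize, &'a Transaction)];

// Hypothetical Electrum-oriented counterpart to chain::Listen: confirmations,
// unconfirmations, and best-block updates are reported independently.
trait ConfirmationsListen {
    fn transactions_confirmed(&self, header: &BlockHeader, txdata: &TransactionData, height: u32);
    fn transaction_unconfirmed(&self, txid: &Txid);
    fn update_best_block(&self, header: &BlockHeader, height: u32);
    fn get_relevant_txids(&self) -> Vec<Txid>;
}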

});
e.push(OnchainEvent::HTLCUpdate { htlc_update: ((**source).clone(), htlc.payment_hash.clone())});
self.onchain_events_waiting_threshold_conf.retain(|ref entry| {
if entry.height != height { return true; }
Collaborator

Hmm, on second thought, it feels like this could even be a panic!() if the sources match and heights are different. It implies that we have two transactions which we currently believe are confirmed on-chain at different heights which resolve the same HTLC (either spending the same commitment transaction output or via two different commitment transactions). It likely implies either a bug in the client application (forgetting to call transaction_unconfirmed), or a race in the client application (calling transaction_unconfirmed in the background later). I don't think we should panic here, as the race condition is legitimate, and should hopefully just resolve itself before the confirmation timer finishes, but we could maybe add a panic check when we resolve to check that there isn't a second resolution of the same HTLC still pending when the confirmation timer finishes (indicating a severe bug in the client application that should be resolved).

Contributor Author

Not sure I follow how the race condition would occur. This method is only called either directly by transactions_confirmed or indirectly via block_connected.

Collaborator

I guess my mental model for a user is two completely separate processes - one which checks with a server regularly as to whether any of the txids to monitor for have been unconfirmed, and one which is scanning the chain as it moves forward for transactions matching the filter. Those could run on different schedules and come in at different times.

ie you could have the transaction_unconfirmed process inform us of an unconfirmed transaction 30 seconds after we call transactions_confirmed on a conflicting transaction, and that's ok, as long as its before we get a bunch of blocks.

Contributor Author

Added debug assertions as discussed.
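A sketch of the kind of check discussed (stand-in types only; the real entries live in onchain_events_awaiting_threshold_conf and the real assertion sits where new resolutions are recorded):

// Stand-in types for the sketch.
#[derive(PartialEq, Eq, Clone, Copy)]
struct PaymentHash([u8; 32]);
struct PendingHtlcResolution { payment_hash: PaymentHash, height: u32 }

// Debug-only check: a second pending resolution of the same HTLC at a different
// height suggests the client missed a transaction_unconfirmed call.
fn debug_check_no_conflicting_resolution(pending: &[PendingHtlcResolution], new: &PendingHtlcResolution) {
    debug_assert!(pending.iter().all(|e| e.payment_hash != new.payment_hash || e.height == new.height));
}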

@@ -318,20 +336,18 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
}

writer.write_all(&byte_utils::be64_to_array(self.onchain_events_waiting_threshold_conf.len() as u64))?;
for (ref target, ref events) in self.onchain_events_waiting_threshold_conf.iter() {
Collaborator

Just to make sure we agree - I think there exist mainnet ChannelMonitors out there that we'd like to be able to deserialize, but in this case it's OK, because the ones that exist shouldn't have any onchain events waiting threshold conf currently, correct?

Contributor Author

I was working off the understanding that it's OK to break the serialization format for now. But maybe that's no longer the case?

As for your question, any such ChannelMonitor could have been serialized with such events, so I would think that deserialization would fail in that case. They'd have to use the old code to deserialize, clear the events without producing any new ones, and then re-serialize. Then it would work with the new code.

Collaborator

Right, given a monitor with no such events, it should roundtrip old code -> new code fine.

@@ -2693,33 +2893,30 @@ impl<'a, Signer: Sign, K: KeysInterface<Signer = Signer>> ReadableArgs<&'a K>
}

let last_block_hash: BlockHash = Readable::read(reader)?;
let last_block_height: u32 = Readable::read(reader)?;
Collaborator

Could you move this to the end and add support for reading (old) channel monitors without it (presumably if last block hash is null, defaulting to 0, otherwise maybe failing is OK).

Contributor Author

If we're going to fail for one case (non-zero block hash) is it worth supporting a zero hash? I'm just a little worried we're going to accumulate some ad hoc versioning support this way.

This is related to another comment for OnchainTxHandler serialization. If we want to support older versions, we'll not only have to move new fields to the end as you mention but also read old fields (converting if possible) and write using a new format, which would need to write out dummy values for old fields (e.g., a zero length for older onchain_events_waiting_threshold_conf).

So moving the height down is not a problem. But supporting older onchain_events_waiting_threshold_conf serializations may not be possible given the Txid would be missing.

Do you think we should avoid breaking the format from now on?

Collaborator

Right, I think we should start caring at least weakly in 0.0.14; for now I'm just worried about a few ChannelMonitors in the wild that we know about. I think post-0.0.14, we can drop the code, so we could bump the version number...or not. Up to you.

@jkczyz force-pushed the 2021-03-electrum-interface branch from 73c2967 to c4297bc on April 9, 2021 06:56
@@ -2090,7 +2090,13 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
self.last_block_hash = block_hash;
self.last_block_height = height;

self.block_confirmed(height, vec![], vec![], vec![], broadcaster, fee_estimator, logger)
if height > self.last_block_height {
Contributor Author

FYI, this is wrong since self.last_block_height is set to height above. No tests for decreasing height yet so unfortunately not caught. Will be fixed in my next push.
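The fix amounts to comparing against the previously stored height before overwriting it; a tiny sketch of the corrected ordering (illustrative only, not the actual method body):

// Capture whether the chain actually advanced *before* updating the stored
// height; comparing after the assignment always yields false.
fn best_block_advanced(last_block_height: &mut u32, new_height: u32) -> bool {
    let advanced = new_height > *last_block_height;
    *last_block_height = new_height;
    advanced
}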

Contributor Author

Actually, perhaps the tests should have caught this. The setup might not be capturing the behavior that would have caused this to fail.

Contributor Author

I tried running the functional tests by changing the default ConnectStyle (thanks for the suggestion, @TheBlueMatt). I'm able to get ln::functional_tests::test_htlc_on_chain_timeout to fail under ConnectStyle::BestBlockFirstSkippingBlocks and ConnectStyle::TransactionsFirstSkippingBlocks prior to the bug fix and to pass after it.

thread 'ln::functional_tests::test_htlc_on_chain_timeout' panicked at 'assertion failed: `(left == right)`
  left: `0`,
 right: `1`', lightning/src/ln/functional_tests.rs:3040:9

https://github.com/rust-bitcoin/rust-lightning/blob/c4297bcccfcbb6f12276dcef3c494ae81bb6284a/lightning/src/ln/functional_tests.rs#L3040

Collaborator

Can we add those connection types in test_htlc_on_chain_timeout to find any regressions?

Contributor Author

Done.

@TheBlueMatt (Collaborator) left a comment

I think I'm basically happy with this after fixing the one bug you noted.

///
/// Must be called after new blocks become available for the most recent block. Intermediary
/// blocks, however, may be safely skipped. For a chain reorganization, this must be called at
/// least at the fork point and then again for the most recent block; intermediary blocks may be
Collaborator

With transaction_unconfirmed, do we care about the fork point+ requirement?

Contributor Author

Hmmm... right, I suppose with the way it is implemented users can do either given update_best_block will take care of any transactions with a greater height when the new height decreases.

Later, we'll want to have consistent requirements at the trait level even if an implementation's requirements are not as strict.

Collaborator

Right, I think that requirement also goes away with #853, so we won't need it at all.

@jkczyz force-pushed the 2021-03-electrum-interface branch 2 times, most recently from 8a8b0d9 to 796e4ab on April 10, 2021 07:40
///
/// [`block_connected`]: Self::block_connected
/// [`update_best_block`]: Self::update_best_block
pub fn transactions_confirmed(&self, header: &BlockHeader, txdata: &TransactionData, height: u32) {
Contributor

Could be a bit ambiguous whether it means are_transactions_confirmed or transactions_got_confirmed. Not sure I have a good idea for a better name, but food for thought?

Contributor Author

Yeah, I think the intended meaning for this and block_connected is that they are named as events with an implicit "handle" prefix. Let's punt a naming discussion to the follow-up PR.

@jkczyz force-pushed the 2021-03-electrum-interface branch from ca03f44 to d908ed0 on April 12, 2021 19:11
}

/// Returns the set of txids that should be monitored for confirmation on chain or for
/// re-organization out of the chain.
Collaborator

I don't think "confirmation on chain" is correct here - we only care about these transactions for reorganization out of the chain (and probably it could be named better in the next PR).

Contributor Author

Right, that makes a ton more sense given the transactions returned are already confirmed on chain. :P

Collaborator

We should ultimately end up with one set of/the same docs, but for the sake of having two starting points, I took a crack at from-scratch docs at 4d11f31

@ariard left a comment

I've only this blocker comment; the remaining ones can be addressed in follow-ups as they're more OnchainTxHandler-specific.

});
}

fn process_transactions<FN>(&self, header: &BlockHeader, txdata: &TransactionData, process: FN)

What do you think about calling this process_chain_event? More illuminating than the current name in the case of process=block_connected or process=update_best_block?

Contributor Author

Up for a better name, but note that events are very much a ChannelMonitor concept, whereas here we are dealing with the output of processing those events (i.e. the abstraction of outputs to watch) -- the processing itself is an implementation detail of FN.


That's a good observation; what do you think about process_watched_outputs? Otherwise, we can just leave it as it is.

Contributor Author

So it's processing chain data (header and/or txdata) and as a side effect registering outputs to watch. So maybe process_chain_data would be a good alternative.

Contributor Author

FYI, renamed to process_chain_data.

monitor.update_best_block(
header, height, &*self.broadcaster, &*self.fee_estimator, &*self.logger)
} else {
monitor.transactions_confirmed(

What case is this alternative path covering?

I can think of the following scenario:

  • broadcast of a holder commitment transaction in case of reaching some timeout
  • this transaction is in-flight, it is super-quickly propagated and confirmed
  • local client fetches it from an Electrum server through register_output
  • process_transaction recurses the second time with transactions_confirmed

This seems plausible, but a) it sounds super-racy and b) in fact transactions_confirmed belongs to the next block update and is going to be called with the wrong header?

Or do you have another scenario in mind that this API tweak is covering?

Codecov is right, mutating this branch doesn't break test coverage.

Contributor Author

What case is this alternative path covering?

That's a good question! I thought of this more in terms of what is theoretically possible based on call graph analysis but didn't analyze what can happen in practice.

I can think of the following scenario:

  • broadcast of a holder commitment transaction in case of reaching some timeout
  • this transaction is in-flight, it is super-quickly propagated and confirmed
  • local client fetches it from an Electrum server through register_output
  • process_transaction recurses the second time with transactions_confirmed

This seems plausible, but a) it sounds super-racy and b) in fact transactions_confirmed belongs to the next block update and is going to be called with the wrong header?

Note: register_output must only return a transaction from the same block.

Or do you have another scenario in mind that this API tweak is covering?

Nothing in mind as per above. Do you think we should assert if the txdata is not empty?

Codecov is right, mutating this branch doesn't break test coverage.

Collaborator

Hmm, right, it's tricky - we don't really want to have dead code, and it certainly seems like if there are no transactions which matched there shouldn't be any new transactions to register, but it's also good to handle such cases. Note that update_best_block may be called for blocks where there is a transaction that matches, where we'd have a transactions_confirmed call in parallel either before or after the update_best_block call. I don't think that changes anything, but it's good to keep in mind.

Collaborator

I think I lean towards dropping it and asserting we never even call register_output if txdata is empty.

Contributor Author

So essentially in ChainMonitor::process_transactions, if txdata is empty the result of process should also be empty. @ariard How does that sound?

Contributor Author

D'oh! Right, I think that's fine to leave?

Could we still replace the conditional code with an assert in ChainMonitor::update_best_block ? I'm presuming the transaction we are to broadcast couldn't have had its output spent yet -- since we have not yet broadcast the transaction -- so register_output shouldn't return anything... which is a little weird because the WatchOutput used in the call will have the current block hash set even though the output hasn't made it on chain.

Let me know if I'm mistaken in the above explanation.

I guess probably leave the code as-is, though I'm somewhat surprised we can't hit it with ConnectStyle::BestBlockFirst in those failing tests.

FWIW, I was running from a rebase prior to modifying do_connect_block to use the new ChainMonitor interface. So those failures were from block_connected hitting those tests.

Collaborator

Could we still replace the conditional code with an assert in ChainMonitor::update_best_block ? I'm presuming the transaction we are to broadcast couldn't have had its output spent yet

For the current case, yes, that's true, but something like this we could easily break and not catch in tests, so it worries me. It's not a lot of code and it's not hard to see how it's correct, even if currently unreachable. If you feel strongly, I'm happy to go with an assertion (or maybe debug assertion?) but for now, meh.

Contributor Author

Re-wrote the comment and kept as is.


I think we can get rid of the transactions_confirmed alternative; inserting an assert about txdata.is_empty() doesn't break our test coverage, which confirms my intuition that this scenario isn't possible per the current protocol while respecting our current chain::Filter API.

diff --git a/lightning/src/chain/chainmonitor.rs b/lightning/src/chain/chainmonitor.rs
index 2f8c7a15..fe7e9f84 100644
--- a/lightning/src/chain/chainmonitor.rs
+++ b/lightning/src/chain/chainmonitor.rs
@@ -126,13 +126,9 @@ where C::Target: chain::Filter,
                self.process_transactions(header, &[], |monitor, txdata| {
                        // While in practice there shouldn't be any recursive calls when given empty txdata,
                        // it's still possible for a chain::Filter implementation to return a transaction.
-                       if txdata.is_empty() {
-                               monitor.update_best_block(
-                                       header, height, &*self.broadcaster, &*self.fee_estimator, &*self.logger)
-                       } else {
-                               monitor.transactions_confirmed(
-                                       header, txdata, height, &*self.broadcaster, &*self.fee_estimator, &*self.logger)
-                       }
+                       assert!(txdata.is_empty());
+                       monitor.update_best_block(
+                               header, height, &*self.broadcaster, &*self.fee_estimator, &*self.logger)
                });
        }

Contributor Author

Sounds good as discussed offline.

@@ -2039,11 +2205,28 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {

self.is_paying_spendable_output(&tx, height, &logger);
}

self.block_confirmed(height, txn_matched, watch_outputs, claimable_outpoints, broadcaster, fee_estimator, logger)

Isn't this redundant with block_confirmed within update_best_block, or does the API assume non-block-connected clients can't call update_best_block?

I think that's okay as a temporary location because you need txn_matched for OnchainTxHandler::update_claims_view, and we might need to split update_claims_view into two halves, update_best_block/event_confirmed.

I can do it as a follow-up if you wanna :)

Contributor Author

Isn't this redundant with block_confirmed within update_best_block, or does the API assume non-block-connected clients can't call update_best_block?

I suppose it may be redundant. @TheBlueMatt If a user must call transaction_confirmed for relevant transactions, why does update_best_block need to call block_confirmed? I think we discussed this in an earlier conversation.

I think that's okay as a temporary location because you need txn_matched for OnchainTxHandler::update_claims_view, and we might need to split update_claims_view into two halves, update_best_block/event_confirmed.

I can do it as a follow-up if you wanna :)

By all means. :)

Collaborator

I suppose it may be redundant. @TheBlueMatt If a user must call transaction_confirmed for relevant transactions, why does update_best_block need to call block_confirmed? I think we discussed this in an earlier conversation.

Hmm, your link is dead, so it's entirely possible I said something different, but I think it doesn't. I guess really it's a question of what the API should be, on the ChannelManager side we kinda went with (but can change) "you call them in any order, but you always have to call both transactions_confirmed and update_best_block". We could drop the update_best_block call if you've called transactions_confirmed, making this line required, but I think we'll have users who want to call both in many cases, so we can't/shouldn't move to "you cannot call both, only one".

Contributor Author

For reference, I think this is what I had tried to link: #858 (comment).

IIUC, the question here is not whether a user should call update_best_block but rather if update_best_block should call block_confirmed (i.e., to handle height-based triggers described in the linked comment).

Collaborator

but rather if update_best_block should call block_confirmed

Hmm? This line is in transactions_confirmed, though. I read the question as should we handle the height-based triggers in transactions_confirmed, even if the user doesn't call update_best_block.

Contributor Author

So in that world would you call transactions_confirmed with empty txdata in lieu of update_best_block? Doesn't that bring us back full circle to the equivalent of calling block_connected? Or did I misunderstand / am I missing something about the reorg case?

Collaborator

Hmm, I wasn't assuming going that far, I dunno, it's hard. Thinking about it from the client's perspective, having two calls I think is useful - I believe it wouldn't be strange for an electrum client to receive both a notification that a new block is connected and a notification that transactions matched the installed filter, and ideally you'd be able to just map 1:1 notifications to LDK calls.


Yeah, I think here the question of how we refactor block_confirmed to extract only what matters in the case of newly confirmed transactions depends on the API we want to expose to an Electrum user. If we go for a 1:1 mapping, we can repeat the onchain_tx_handler.update_claims_view() call in transactions_confirmed and not care about the rest of the method. The remaining logic of block_confirmed is height-triggered so shouldn't be impacted, at first sight.

Contributor Author

Agreed that a 1:1 mapping would be preferable.

Contributor Author

I tried this but many tests failed. Haven't looked into them though. Is this the change you had in mind?

diff --git a/lightning/src/chain/channelmonitor.rs b/lightning/src/chain/channelmonitor.rs
index 777bc06b..f73e7630 100644
--- a/lightning/src/chain/channelmonitor.rs
+++ b/lightning/src/chain/channelmonitor.rs
@@ -2205,7 +2205,29 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
                        self.is_paying_spendable_output(&tx, height, &logger);
                }
 
-               self.block_confirmed(height, txn_matched, watch_outputs, claimable_outpoints, broadcaster, fee_estimator, logger)
+               self.onchain_tx_handler.update_claims_view(&txn_matched, claimable_outpoints, Some(height), &&*broadcaster, &&*fee_estimator, &&*logger);
+
+               // Determine new outputs to watch by comparing against previously known outputs to watch,
+               // updating the latter in the process.
+               watch_outputs.retain(|&(ref txid, ref txouts)| {
+                       let idx_and_scripts = txouts.iter().map(|o| (o.0, o.1.script_pubkey.clone())).collect();
+                       self.outputs_to_watch.insert(txid.clone(), idx_and_scripts).is_none()
+               });
+               #[cfg(test)]
+               {
+                       // If we see a transaction for which we registered outputs previously,
+                       // make sure the registered scriptpubkey at the expected index match
+                       // the actual transaction output one. We failed this case before #653.
+                       for tx in &txn_matched {
+                               if let Some(outputs) = self.get_outputs_to_watch().get(&tx.txid()) {
+                                       for idx_and_script in outputs.iter() {
+                                               assert!((idx_and_script.0 as usize) < tx.output.len());
+                                               assert_eq!(tx.output[idx_and_script.0 as usize].script_pubkey, idx_and_script.1);
+                                       }
+                               }
+                       }
+               }
+               watch_outputs
        }
 
        fn block_confirmed<B: Deref, F: Deref, L: Deref>(

// Find which on-chain events have reached their confirmation threshold.
let onchain_events_awaiting_threshold_conf =
self.onchain_events_awaiting_threshold_conf.drain(..).collect::<Vec<_>>();
let mut onchain_events_reaching_threshold_conf = Vec::new();

nit: _reached_ ?

Contributor Author

"reached" is really short for "having reached" which is comparable to "reaching", so leaving as is to mirror "awaiting".

}

if let Some(height) = height {
self.block_disconnected(height, broadcaster, fee_estimator, logger);

I think we could have OnchainTxHandler::process_disconnect_event called by either transaction_unconfirmed or block_disconnected, sharing the API-independent code of the current block_disconnected. By API-independent, I mean the regenerate-and-broadcast part, which is not dependent on either txid or height.

I think it will make the code more proper rather than iterating twice on onchain_events_awaiting_threshold_conf and also make OnchainTxHandler API-ready if we expose it as its own abstract interface in the future. Though it can be done in a follow-up and for now we can bite the bullet as it's internal logic.

Contributor Author

I think I understand what you're suggesting. Just to clarify, are you saying that process_disconnect_event is the name of the proposed method refactored from block_disconnected?


Yes, process_disconnect_event would be the name of the code extracted from block_disconnected!
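A sketch of the suggested split (the method name comes from this thread; the types and bodies are stand-ins, not OnchainTxHandler's actual code):

use bitcoin::hash_types::Txid;

// Stand-ins for the sketch.
struct OnchainEventEntry { txid: Txid, height: u32 }
struct OnchainTxHandlerSketch { onchain_events_awaiting_threshold_conf: Vec<OnchainEventEntry> }

impl OnchainTxHandlerSketch {
    // Shared, API-independent half: regenerate and rebroadcast claims for
    // events that were reorged out, however the reorg was detected.
    fn process_disconnect_event(&mut self, _disconnected: Vec<OnchainEventEntry>) {
        // regenerate-and-broadcast logic elided
    }

    // Height-keyed entry point...
    fn block_disconnected(&mut self, height: u32) {
        let (gone, kept): (Vec<_>, Vec<_>) = self.onchain_events_awaiting_threshold_conf
            .drain(..)
            .partition(|e| e.height >= height);
        self.onchain_events_awaiting_threshold_conf = kept;
        self.process_disconnect_event(gone);
    }

    // ...and txid-keyed entry point, both delegating to the shared helper.
    fn transaction_unconfirmed(&mut self, txid: &Txid) {
        let (gone, kept): (Vec<_>, Vec<_>) = self.onchain_events_awaiting_threshold_conf
            .drain(..)
            .partition(|e| e.txid == *txid);
        self.onchain_events_awaiting_threshold_conf = kept;
        self.process_disconnect_event(gone);
    }
}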

@jkczyz force-pushed the 2021-03-electrum-interface branch 2 times, most recently from 5d1615a to ed42c53 on April 14, 2021 06:52
@jkczyz force-pushed the 2021-03-electrum-interface branch from ed42c53 to eba3cc3 on April 14, 2021 15:41
@jkczyz force-pushed the 2021-03-electrum-interface branch from eba3cc3 to d09d23f on April 14, 2021 19:52
jkczyz added 16 commits April 14, 2021 12:57
Rather than mapping height to a vector of events, use a single vector
for all events. This allows for easily processing events by either
height or transaction. The latter will be used for an interface suitable
for Electrum.
When using Electrum, transactions are individually unconfirmed during a
reorg rather than by block. Store the txid of the transaction creating
the on-chain event so that it can be used to determine which events need
to be removed when a transaction is unconfirmed.
There is a possible race condition when both the latest block hash and
height are needed. Combine these in one struct and place them behind a
single lock.
Define an Electrum-friendly interface for ChannelMonitor where
transactions are confirmed independently of updating the last connected
block.
Expose a way for Electrum users to update the best block on a
ChannelMonitor independently of confirming transactions.
Define an Electrum-friendly interface for ChannelMonitor where
transactions are unconfirmed independently from updating the latest
block.
Define an Electrum-friendly interface for ChannelMonitor where txids of
relevant transactions can be obtained. For any of these transactions
that are re-orged out of the chain, users must call
transaction_unconfirmed.
Add an interface to ChainMonitor for Electrum users, delegating to the
corresponding methods in each ChannelMonitor.
This test failed when ConnectStyle was set to a SkippingBlocks
variant because of a bug in ChannelMonitor::update_best_block.
Parameterize the test with these styles to catch any regressions.
@jkczyz force-pushed the 2021-03-electrum-interface branch from d09d23f to 524c532 on April 14, 2021 20:04
@jkczyz (Contributor Author) commented Apr 14, 2021

Squashed and rebased.

@ariard commented Apr 15, 2021

Code Review ACK 524c532, thanks for taking the latest suggestion :)

@TheBlueMatt merged commit feeb893 into lightningdevkit:main on Apr 15, 2021