Add method to note transaction unconfirmed/reorged-out #853

Merged

30 changes: 28 additions & 2 deletions lightning/src/ln/channel.rs
@@ -377,7 +377,7 @@ pub(super) struct Channel<Signer: Sign> {

/// The hash of the block in which the funding transaction was included.
funding_tx_confirmed_in: Option<BlockHash>,
funding_tx_confirmation_height: u64,
funding_tx_confirmation_height: u32,
short_channel_id: Option<u64>,

counterparty_dust_limit_satoshis: u64,
@@ -3591,7 +3591,7 @@ impl<Signer: Sign> Channel<Signer> {
}
}
}
self.funding_tx_confirmation_height = height as u64;
self.funding_tx_confirmation_height = height;
self.funding_tx_confirmed_in = Some(*block_hash);
self.short_channel_id = match scid_from_parts(height as u64, index_in_block as u64, txo_idx as u64) {
Ok(scid) => Some(scid),
@@ -3678,6 +3678,32 @@ impl<Signer: Sign> Channel<Signer> {
Ok((None, timed_out_htlcs))
}

/// Indicates the funding transaction is no longer confirmed in the main chain. This may
/// force-close the channel, but may also indicate a harmless reorganization of a block or two
/// before the channel has reached funding_locked and we can just wait for more blocks.
pub fn funding_transaction_unconfirmed(&mut self) -> Result<(), msgs::ErrorMessage> {
if self.funding_tx_confirmation_height != 0 {
Contributor: Hm, I don't see funding_tx_confirmation_height being reset back to 0 at any point -- should it be?

Collaborator Author: Hmm, we do reset it back to zero in update_best_block in the funding_tx_confirmation_height == 0 check.

Contributor: Hmm, that doesn't reset self.funding_tx_confirmation_height, I think? https://i.imgur.com/kgXgXLV.png

Contributor: Also, should we be resetting funding_tx_confirmed_in and short_channel_id as well? (And maybe unit test this if it's easy?)

Collaborator Author: Currently we'll always close the channel in this case due to the `if funding_tx_confirmations < self.minimum_depth as i64 / 2` check. I'll add a test and stop doing that in a followup.

// We handle the funding disconnection by calling update_best_block with a height one
// below where our funding was connected, implying a reorg back to conf_height - 1.
let reorg_height = self.funding_tx_confirmation_height - 1;
// We use the time field to bump the current time we set on channel updates if it's
// larger. If we don't know that time has moved forward, we can just set it to the last
// time we saw and it will be ignored.
let best_time = self.update_time_counter;
match self.update_best_block(reorg_height, best_time) {
Ok((funding_locked, timed_out_htlcs)) => {
assert!(funding_locked.is_none(), "We can't generate a funding with 0 confirmations?");
assert!(timed_out_htlcs.is_empty(), "We can't have accepted HTLCs with a timeout before our funding confirmation?");
Ok(())
},
Err(e) => Err(e)
}
} else {
// We never learned about the funding confirmation anyway, just ignore
Ok(())
Contributor: Is this theoretically supposed to be unreachable, but it seems harmless so we can just Ok(()) in practice?

Collaborator Author: Yea, I think it should be really rare, but you could see some async process getting delayed and ending up calling transaction_unconfirmed twice in a row, which could hit this.

}
}

// Methods to get unprompted messages to send to the remote end (or where we already returned
// something in the handler for the message that prompted this message):

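For readers unfamiliar with scid_from_parts (called in the transactions_confirmed hunk above), a short channel id packs the confirmation height, the transaction's index within its block, and the funding output index into a single u64 per BOLT 7. The following is a minimal, standalone sketch of that encoding, not the library's exact implementation; the function name and bounds checks are invented for illustration.

```rust
/// Illustrative sketch only (not the library's scid_from_parts): BOLT 7 packs a
/// short channel id as 3 bytes of block height, 3 bytes of transaction index
/// within that block, and 2 bytes of funding output index.
fn scid_sketch(block_height: u64, tx_index: u64, vout_index: u64) -> Option<u64> {
    // Each component must fit in its field, otherwise the scid is unrepresentable.
    if block_height > 0x00ff_ffff || tx_index > 0x00ff_ffff || vout_index > 0xffff {
        return None;
    }
    Some((block_height << 40) | (tx_index << 16) | vout_index)
}

#[test]
fn scid_sketch_packs_fields() {
    let scid = scid_sketch(500_000, 1_028, 1).unwrap();
    assert_eq!(scid >> 40, 500_000);               // block height
    assert_eq!((scid >> 16) & 0x00ff_ffff, 1_028); // transaction index
    assert_eq!(scid & 0xffff, 1);                  // output index
}
```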
109 changes: 87 additions & 22 deletions lightning/src/ln/channelmanager.rs
@@ -28,7 +28,7 @@ use bitcoin::hashes::hmac::{Hmac, HmacEngine};
use bitcoin::hashes::sha256::Hash as Sha256;
use bitcoin::hashes::sha256d::Hash as Sha256dHash;
use bitcoin::hashes::cmp::fixed_time_eq;
use bitcoin::hash_types::BlockHash;
use bitcoin::hash_types::{BlockHash, Txid};

use bitcoin::secp256k1::key::{SecretKey,PublicKey};
use bitcoin::secp256k1::Secp256k1;
@@ -851,6 +851,11 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
}
}

/// Gets the current configuration applied to all new channels, as
pub fn get_current_default_configuration(&self) -> &UserConfig {
Comment: Need to correct the comment -- "as" what?

Contributor: Yeah, still needs to be fixed.

&self.default_configuration
}

/// Creates a new outbound channel to the given remote node and with the given value.
///
/// user_id will be provided back as user_channel_id in FundingGenerationReady events to allow
@@ -3353,7 +3358,7 @@ where
"Blocks must be disconnected in chain-order - the disconnected block must have the correct height");
*self.last_block_hash.write().unwrap() = header.prev_blockhash;

self.do_chain_event(new_height, |channel| channel.update_best_block(new_height, header.time));
self.do_chain_event(Some(new_height), |channel| channel.update_best_block(new_height, header.time));
}
}

@@ -3364,8 +3369,11 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
F::Target: FeeEstimator,
L::Target: Logger,
{
/// Calls a function which handles an on-chain event (blocks dis/connected, transactions
/// un/confirmed, etc) on each channel, handling any resulting errors or messages generated by
/// the function.
fn do_chain_event<FN: Fn(&mut Channel<Signer>) -> Result<(Option<msgs::FundingLocked>, Vec<(HTLCSource, PaymentHash)>), msgs::ErrorMessage>>
(&self, height: u32, f: FN) {
(&self, height_opt: Option<u32>, f: FN) {
// Note that we MUST NOT end up calling methods on self.chain_monitor here - we're called
// during initialization prior to the chain_monitor being fully configured in some cases.
// See the docs for `ChannelManagerReadArgs` for more.
@@ -3424,24 +3432,26 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
true
});

channel_state.claimable_htlcs.retain(|&(ref payment_hash, _), htlcs| {
htlcs.retain(|htlc| {
// If height is approaching the number of blocks we think it takes us to get
// our commitment transaction confirmed before the HTLC expires, plus the
// number of blocks we generally consider it to take to do a commitment update,
// just give up on it and fail the HTLC.
if height >= htlc.cltv_expiry - HTLC_FAIL_BACK_BUFFER {
let mut htlc_msat_height_data = byte_utils::be64_to_array(htlc.value).to_vec();
htlc_msat_height_data.extend_from_slice(&byte_utils::be32_to_array(height));
timed_out_htlcs.push((HTLCSource::PreviousHopData(htlc.prev_hop.clone()), payment_hash.clone(), HTLCFailReason::Reason {
failure_code: 0x4000 | 15,
data: htlc_msat_height_data
}));
false
} else { true }
if let Some(height) = height_opt {
channel_state.claimable_htlcs.retain(|&(ref payment_hash, _), htlcs| {
htlcs.retain(|htlc| {
// If height is approaching the number of blocks we think it takes us to get
// our commitment transaction confirmed before the HTLC expires, plus the
// number of blocks we generally consider it to take to do a commitment update,
// just give up on it and fail the HTLC.
if height >= htlc.cltv_expiry - HTLC_FAIL_BACK_BUFFER {
let mut htlc_msat_height_data = byte_utils::be64_to_array(htlc.value).to_vec();
htlc_msat_height_data.extend_from_slice(&byte_utils::be32_to_array(height));
timed_out_htlcs.push((HTLCSource::PreviousHopData(htlc.prev_hop.clone()), payment_hash.clone(), HTLCFailReason::Reason {
failure_code: 0x4000 | 15,
data: htlc_msat_height_data
}));
false
} else { true }
});
!htlcs.is_empty() // Only retain this entry if htlcs has at least one entry.
});
!htlcs.is_empty() // Only retain this entry if htlcs has at least one entry.
});
}
}

self.handle_init_event_channel_failures(failed_channels);
@@ -3477,7 +3487,7 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
log_trace!(self.logger, "{} transactions included in block {} at height {} provided", txdata.len(), block_hash, height);

let _persistence_guard = PersistenceNotifierGuard::new(&self.total_consistency_lock, &self.persistence_notifier);
self.do_chain_event(height, |channel| channel.transactions_confirmed(&block_hash, height, txdata, &self.logger).map(|a| (a, Vec::new())));
self.do_chain_event(Some(height), |channel| channel.transactions_confirmed(&block_hash, height, txdata, &self.logger).map(|a| (a, Vec::new())));
}

/// Updates channel state with the current best blockchain tip. You should attempt to call this
Expand Down Expand Up @@ -3506,7 +3516,7 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
self.latest_block_height.store(height as usize, Ordering::Release);
*self.last_block_hash.write().unwrap() = block_hash;

self.do_chain_event(height, |channel| channel.update_best_block(height, header.time));
self.do_chain_event(Some(height), |channel| channel.update_best_block(height, header.time));

loop {
// Update last_node_announcement_serial to be the max of its current value and the
@@ -3522,6 +3532,61 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
}
}

/// Gets the set of txids which should be monitored for their confirmation state.
///
/// If you're providing information about reorganizations via [`transaction_unconfirmed`], this
/// is the set of transactions which you may need to call [`transaction_unconfirmed`] for.
///
/// This may be useful to poll to determine the set of transactions which must be registered
/// with an Electrum server or for which an Electrum server needs to be polled to determine
/// transaction confirmation state.
///
/// This may update after any [`transactions_confirmed`] or [`block_connected`] call.
Contributor: What's the use case for calling this with block_connected?

///
/// Note that this is NOT the set of transactions which must be included in calls to
/// [`transactions_confirmed`] if they are confirmed, but a small subset of it.
Comment on lines +3546 to +3547
Contributor: I assume the rest will be registered in one of the register_tx calls? Could be worth adding a note, since this line may come across naively as a bit scary but unactionable... No worries, though, if Jeff is addressing the docs more thoroughly in #858.

Collaborator Author: I believe the plan is still for @jkczyz to do a followup PR after this + #858 that will add a new top-level trait similar to chain::Listen and merge all the docs/functions then.

///
/// [`transactions_confirmed`]: Self::transactions_confirmed
/// [`transaction_unconfirmed`]: Self::transaction_unconfirmed
Contributor: Hm, it's not obvious to me in what exact situations this is supposed to be used. Are Electrum users supposed to call it on startup so they can register these txs w/ Electrum?

Collaborator Author: Tried to address this, let me know if that's clearer.

Contributor: FWIW, I'll try to unify and clean up the docs for this and those in #858 when making a trait for these to implement.

Collaborator Author: Yep, sounds good. Good to have an example here to pull from/merge from, too.

/// [`block_connected`]: chain::Listen::block_connected
pub fn get_relevant_txids(&self) -> Vec<Txid> {
Contributor: Would this be considered an alternative interface to chain::Filter::register_tx then?

Collaborator Author: In the ChannelManager context, yes, but it wouldn't work for ChannelMonitor - ChannelManager only ever cares about one, known-in-advance-txid transaction, whereas ChannelMonitor doesn't know which txids it cares about until they are confirmed (or rather, it shouldn't expose them; the list is potentially huge).

let channel_state = self.channel_state.lock().unwrap();
let mut res = Vec::with_capacity(channel_state.short_to_id.len());
for chan in channel_state.by_id.values() {
if let Some(funding_txo) = chan.get_funding_txo() {
res.push(funding_txo.txid);
}
}
res
}

/// Marks a transaction as having been reorganized out of the blockchain.
///
/// If a transaction is included in [`get_relevant_txids`], and is no longer in the main branch
/// of the blockchain, this function should be called to indicate that the transaction should
/// be considered reorganized out.
///
/// Once this is called, the given transaction will no longer appear on [`get_relevant_txids`],
/// though this may be called repeatedly for a given transaction without issue.
///
/// Note that if the transaction is confirmed on the main chain in a different block (indicated
/// via a call to [`transactions_confirmed`]), it may re-appear in [`get_relevant_txids`], thus
/// be very wary of race-conditions wherein the final state of a transaction indicated via
/// these APIs is not the same as its state on the blockchain.
///
/// [`transactions_confirmed`]: Self::transactions_confirmed
/// [`get_relevant_txids`]: Self::get_relevant_txids
pub fn transaction_unconfirmed(&self, txid: &Txid) {
let _persistence_guard = PersistenceNotifierGuard::new(&self.total_consistency_lock, &self.persistence_notifier);
self.do_chain_event(None, |channel| {
if let Some(funding_txo) = channel.get_funding_txo() {
if funding_txo.txid == *txid {
channel.funding_transaction_unconfirmed().map(|_| (None, Vec::new()))
} else { Ok((None, Vec::new())) }
} else { Ok((None, Vec::new())) }
});
}

/// Blocks until ChannelManager needs to be persisted or a timeout is reached. It returns a bool
/// indicating whether persistence is necessary. Only one listener on
/// `await_persistable_update` or `await_persistable_update_timeout` is guaranteed to be woken
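The new get_relevant_txids / transaction_unconfirmed pair is aimed at Electrum-style clients that learn about confirmation state per transaction rather than per block. Below is a minimal sketch of a polling pass using only the two methods added in this PR; the BackendClient and UnconfirmedSink traits and the is_in_best_chain query are stand-ins invented for this example, not part of rust-lightning or any Electrum library.

```rust
use bitcoin::hash_types::Txid;

/// Hypothetical view of a chain backend that can answer per-txid confirmation queries.
trait BackendClient {
    /// Returns true if the transaction is currently confirmed in the best chain.
    fn is_in_best_chain(&self, txid: &Txid) -> bool;
}

/// The two ChannelManager methods this sketch relies on, abstracted so the
/// example stays self-contained.
trait UnconfirmedSink {
    fn get_relevant_txids(&self) -> Vec<Txid>;
    fn transaction_unconfirmed(&self, txid: &Txid);
}

/// One polling pass: ask which txids are still being watched and report any
/// that have fallen out of the best chain.
fn poll_for_reorgs<B: BackendClient, M: UnconfirmedSink>(backend: &B, manager: &M) {
    for txid in manager.get_relevant_txids() {
        if !backend.is_in_best_chain(&txid) {
            // The channel's funding tx is no longer confirmed; the manager may
            // simply wait for it to re-confirm or may force-close the channel.
            manager.transaction_unconfirmed(&txid);
        }
    }
}
```

Confirmations learned later would still be reported through the existing transactions_confirmed / update_best_block calls shown above.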
32 changes: 21 additions & 11 deletions lightning/src/ln/functional_test_utils.rs
@@ -62,7 +62,7 @@ pub fn mine_transaction<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, tx: &Transac
pub fn confirm_transaction_at<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, tx: &Transaction, conf_height: u32) {
let first_connect_height = node.best_block_info().1 + 1;
assert!(first_connect_height <= conf_height);
if conf_height - first_connect_height >= 1 {
if conf_height > first_connect_height {
connect_blocks(node, conf_height - first_connect_height);
}
let mut block = Block {
@@ -77,6 +77,7 @@ pub fn confirm_transaction_at<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, tx: &T
}

/// The possible ways we may notify a ChannelManager of a new block
#[derive(Clone, Copy, PartialEq)]
pub enum ConnectStyle {
/// Calls update_best_block first, detecting transactions in the block only after receiving the
/// header and height information.
@@ -281,7 +282,7 @@ impl<'a, 'b, 'c> Drop for Node<'a, 'b, 'c> {
let mut w = test_utils::TestVecWriter(Vec::new());
self.node.write(&mut w).unwrap();
<(BlockHash, ChannelManager<EnforcingSigner, &test_utils::TestChainMonitor, &test_utils::TestBroadcaster, &test_utils::TestKeysInterface, &test_utils::TestFeeEstimator, &test_utils::TestLogger>)>::read(&mut ::std::io::Cursor::new(w.0), ChannelManagerReadArgs {
default_config: UserConfig::default(),
default_config: *self.node.get_current_default_configuration(),
keys_manager: self.keys_manager,
fee_estimator: &test_utils::TestFeeEstimator { sat_per_kw: 253 },
chain_monitor: self.chain_monitor,
@@ -1413,7 +1414,7 @@ pub fn check_preimage_claim<'a, 'b, 'c>(node: &Node<'a, 'b, 'c>, prev_txn: &Vec<
res
}

pub fn get_announce_close_broadcast_events<'a, 'b, 'c>(nodes: &Vec<Node<'a, 'b, 'c>>, a: usize, b: usize) {
pub fn handle_announce_close_broadcast_events<'a, 'b, 'c>(nodes: &Vec<Node<'a, 'b, 'c>>, a: usize, b: usize, needs_err_handle: bool, expected_error: &str) {
let events_1 = nodes[a].node.get_and_clear_pending_msg_events();
assert_eq!(events_1.len(), 2);
let as_update = match events_1[0] {
@@ -1425,25 +1426,30 @@ pub fn get_announce_close_broadcast_events<'a, 'b, 'c>(nodes: &Vec<Node<'a, 'b,
match events_1[1] {
MessageSendEvent::HandleError { node_id, action: msgs::ErrorAction::SendErrorMessage { ref msg } } => {
assert_eq!(node_id, nodes[b].node.get_our_node_id());
assert_eq!(msg.data, "Commitment or closing transaction was confirmed on chain.");
assert_eq!(msg.data, expected_error);
if needs_err_handle {
nodes[b].node.handle_error(&nodes[a].node.get_our_node_id(), msg);
}
},
_ => panic!("Unexpected event"),
}

let events_2 = nodes[b].node.get_and_clear_pending_msg_events();
assert_eq!(events_2.len(), 2);
assert_eq!(events_2.len(), if needs_err_handle { 1 } else { 2 });
let bs_update = match events_2[0] {
MessageSendEvent::BroadcastChannelUpdate { ref msg } => {
msg.clone()
},
_ => panic!("Unexpected event"),
};
match events_2[1] {
MessageSendEvent::HandleError { node_id, action: msgs::ErrorAction::SendErrorMessage { ref msg } } => {
assert_eq!(node_id, nodes[a].node.get_our_node_id());
assert_eq!(msg.data, "Commitment or closing transaction was confirmed on chain.");
},
_ => panic!("Unexpected event"),
if !needs_err_handle {
match events_2[1] {
MessageSendEvent::HandleError { node_id, action: msgs::ErrorAction::SendErrorMessage { ref msg } } => {
assert_eq!(node_id, nodes[a].node.get_our_node_id());
assert_eq!(msg.data, expected_error);
},
_ => panic!("Unexpected event"),
}
}

for node in nodes {
@@ -1452,6 +1458,10 @@ pub fn get_announce_close_broadcast_events<'a, 'b, 'c>(nodes: &Vec<Node<'a, 'b,
}
}

pub fn get_announce_close_broadcast_events<'a, 'b, 'c>(nodes: &Vec<Node<'a, 'b, 'c>>, a: usize, b: usize) {
handle_announce_close_broadcast_events(nodes, a, b, false, "Commitment or closing transaction was confirmed on chain.");
}

#[cfg(test)]
macro_rules! get_channel_value_stat {
($node: expr, $channel_id: expr) => {{
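The generalized helper above lets reorg tests reuse the close-broadcast checks with a different error string and, via needs_err_handle, with node b feeding node a's error message back through handle_error (so only one message event is expected from node b). A hypothetical call site is shown below; the error string is illustrative and not taken from this PR.

```rust
// Hypothetical usage from a reorg test: `true` means node b handles the error
// message itself, so only a BroadcastChannelUpdate is expected from it. The
// expected_error text here is an assumption, not this PR's actual string.
handle_announce_close_broadcast_events(&nodes, 0, 1, true,
    "Funding transaction was reorganized out of the chain");
```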