Commit 728ea70

Process monitor update events in block_[dis]connected asynchronously
The instructions for `ChannelManagerReadArgs` indicate that you need to connect blocks on a newly-deserialized `ChannelManager` in a separate pass from the newly-deserialized `ChannelMonitors`, as the `ChannelManager` assumes the ability to update the monitors during block [dis]connected events, saying that users need to:

```
4) Reconnect blocks on your ChannelMonitors
5) Move the ChannelMonitors into your local chain::Watch.
6) Disconnect/connect blocks on the ChannelManager.
```

This is fine for `ChannelManager`'s purpose, but is very awkward for users. Notably, our new `lightning-block-sync` implemented on-load reconnection in the most obvious (and performant) way - connecting the blocks all at once, violating the `ChannelManagerReadArgs` API.

Luckily, the events in question really don't need to be processed with the same urgency as most channel monitor updates. The only two monitor updates which can occur in block_[dis]connected are: (a) in block_connected, we identify a now-confirmed commitment transaction, closing one of our channels, or (b) in block_disconnected, the funding transaction is reorganized out of the chain, making our channel no longer funded.

In case (a), sending a monitor update which broadcasts a conflicting holder commitment transaction is far from time-critical, though we should still ensure we do it. In case (b), we should try to broadcast our holder commitment transaction when we can, but doing so within a few minutes is fine on the scale of block mining anyway.

Note that in both cases we cannot simply move the logic to ChannelMonitor::block_[dis]connected, as this could result in us broadcasting a commitment transaction from ChannelMonitor, then revoking the now-broadcasted state, and only then receiving the block_[dis]connected event in the ChannelManager.

Thus, we move both events into an internal event queue and process them in timer_chan_freshness_every_min().
1 parent ba6eee2 commit 728ea70
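
Since the deferred monitor updates are only flushed from `timer_chan_freshness_every_min()`, a node must keep calling that method on schedule or queued force-close broadcasts will sit unprocessed. A minimal sketch of such a driver follows; the `FreshnessTimer` trait and `spawn_freshness_timer` helper are illustrative stand-ins, not real lightning APIs:

```rust
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Stand-in for the fully-parameterized ChannelManager type; in a real
// application you would call the inherent method directly.
trait FreshnessTimer: Send + Sync + 'static {
    fn timer_chan_freshness_every_min(&self);
}

// Spawn a background thread that ticks roughly once per minute, as the
// method's docs require. After this commit each tick also drains the
// pending_background_events queue, delivering any deferred force-close
// ChannelMonitorUpdates to the chain::Watch.
fn spawn_freshness_timer<M: FreshnessTimer>(manager: Arc<M>) -> thread::JoinHandle<()> {
    thread::spawn(move || loop {
        manager.timer_chan_freshness_every_min();
        thread::sleep(Duration::from_secs(60));
    })
}
```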

4 files changed: +109 -11 lines

lightning/src/ln/channel.rs

Lines changed: 4 additions & 0 deletions
```diff
@@ -4180,6 +4180,10 @@ impl<Signer: Sign> Channel<Signer> {
 	/// Also returns the list of payment_hashes for channels which we can safely fail backwards
 	/// immediately (others we will have to allow to time out).
 	pub fn force_shutdown(&mut self, should_broadcast: bool) -> (Option<(OutPoint, ChannelMonitorUpdate)>, Vec<(HTLCSource, PaymentHash)>) {
+		// Note that we MUST only generate a monitor update that indicates force-closure - we're
+		// called during initialization prior to the chain_monitor in the encompassing ChannelManager
+		// being fully configured in some cases. Thus, it's likely any monitor events we generate will
+		// be delayed in being processed! See the docs for `ChannelManagerReadArgs` for more.
 		assert!(self.channel_state != ChannelState::ShutdownComplete as u32);

 		// We go ahead and "free" any holding cell HTLCs or HTLCs we haven't yet committed to and
```

lightning/src/ln/channelmanager.rs

Lines changed: 102 additions & 11 deletions
```diff
@@ -333,6 +333,15 @@ pub(super) struct ChannelHolder<Signer: Sign> {
 	pub(super) pending_msg_events: Vec<MessageSendEvent>,
 }

+/// Events which we process internally but cannot be processed immediately at the generation site
+/// for some reason. They are handled in timer_chan_freshness_every_min, so may be processed with
+/// quite some time lag.
+enum BackgroundEvent {
+	/// Handle a ChannelMonitorUpdate that closes a channel, broadcasting its current latest holder
+	/// commitment transaction.
+	ClosingMonitorUpdate((OutPoint, ChannelMonitorUpdate)),
+}
+
 /// State we hold per-peer. In the future we should put channels in here, but for now we only hold
 /// the latest Init features we heard from the peer.
 struct PeerState {
@@ -436,6 +445,7 @@ pub struct ChannelManager<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref,
 	per_peer_state: RwLock<HashMap<PublicKey, Mutex<PeerState>>>,

 	pending_events: Mutex<Vec<events::Event>>,
+	pending_background_events: Mutex<Vec<BackgroundEvent>>,
 	/// Used when we have to take a BIG lock to make sure everything is self-consistent.
 	/// Essentially just when we're serializing ourselves out.
 	/// Taken first everywhere where we are making changes before any other locks.
@@ -794,6 +804,7 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 			per_peer_state: RwLock::new(HashMap::new()),

 			pending_events: Mutex::new(Vec::new()),
+			pending_background_events: Mutex::new(Vec::new()),
 			total_consistency_lock: RwLock::new(()),
 			persistence_notifier: PersistenceNotifier::new(),

```
```diff
@@ -1854,13 +1865,40 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 		events.append(&mut new_events);
 	}

+	/// Free the background events, generally called from timer_chan_freshness_every_min.
+	///
+	/// Exposed for testing to allow us to process events quickly without generating accidental
+	/// BroadcastChannelUpdate events in timer_chan_freshness_every_min.
+	///
+	/// Expects the caller to have a total_consistency_lock read lock.
+	fn process_background_events(&self) {
+		let mut background_events = Vec::new();
+		mem::swap(&mut *self.pending_background_events.lock().unwrap(), &mut background_events);
+		for event in background_events.drain(..) {
+			match event {
+				BackgroundEvent::ClosingMonitorUpdate((funding_txo, update)) => {
+					// The channel has already been closed, so no use bothering to care about the
+					// monitor update completing.
+					let _ = self.chain_monitor.update_channel(funding_txo, update);
+				},
+			}
+		}
+	}
+
+	#[cfg(any(test, feature = "_test_utils"))]
+	pub(crate) fn test_process_background_events(&self) {
+		self.process_background_events();
+	}
+
 	/// If a peer is disconnected we mark any channels with that peer as 'disabled'.
 	/// After some time, if channels are still disabled we need to broadcast a ChannelUpdate
 	/// to inform the network about the uselessness of these channels.
 	///
 	/// This method handles all the details, and must be called roughly once per minute.
 	pub fn timer_chan_freshness_every_min(&self) {
 		let _persistence_guard = PersistenceNotifierGuard::new(&self.total_consistency_lock, &self.persistence_notifier);
+		self.process_background_events();
+
 		let mut channel_state_lock = self.channel_state.lock().unwrap();
 		let channel_state = &mut *channel_state_lock;
 		for (_, chan) in channel_state.by_id.iter_mut() {
```
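
One detail worth noting in `process_background_events` above is the `mem::swap`: the queue contents are moved out during a short critical section and the `Mutex` is released before any event is handled, so handlers are free to take other locks or enqueue new background events without deadlocking. A self-contained model of that pattern, with a placeholder event type:

```rust
use std::mem;
use std::sync::Mutex;

struct EventQueue {
    pending: Mutex<Vec<u32>>, // stands in for Vec<BackgroundEvent>
}

impl EventQueue {
    fn process(&self) {
        // Take the whole queue in one short critical section...
        let mut events = Vec::new();
        mem::swap(&mut *self.pending.lock().unwrap(), &mut events);
        // ...then handle the events with the lock released, so a handler may
        // safely re-lock `pending` (or take unrelated locks) without deadlock.
        for event in events.drain(..) {
            println!("handling event {}", event);
        }
    }
}

fn main() {
    let queue = EventQueue { pending: Mutex::new(vec![1, 2, 3]) };
    queue.process();
}
```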
```diff
@@ -1953,6 +1991,10 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 		//identify whether we sent it or not based on the (I presume) very different runtime
 		//between the branches here. We should make this async and move it into the forward HTLCs
 		//timer handling.
+
+		// Note that we MUST NOT end up calling methods on self.chain_monitor here - we're called
+		// from block_connected which may run during initialization prior to the chain_monitor
+		// being fully configured. See the docs for `ChannelManagerReadArgs` for more.
 		match source {
 			HTLCSource::OutboundRoute { ref path, .. } => {
 				log_trace!(self.logger, "Failing outbound payment HTLC with payment_hash {}", log_bytes!(payment_hash.0));
```
```diff
@@ -3100,6 +3142,29 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 			self.finish_force_close_channel(failure);
 		}
 	}
+
+	/// Handle a list of channel failures during a block_connected or block_disconnected call,
+	/// pushing the channel monitor update (if any) to the background events queue and removing the
+	/// Channel object.
+	fn handle_init_event_channel_failures(&self, mut failed_channels: Vec<ShutdownResult>) {
+		for mut failure in failed_channels.drain(..) {
+			// Either a commitment transaction has been confirmed on-chain or
+			// Channel::block_disconnected detected that the funding transaction has been
+			// reorganized out of the main chain.
+			// We cannot broadcast our latest local state via monitor update (as
+			// Channel::force_shutdown tries to make us do) as we may still be in initialization,
+			// so we track the update internally and handle it when the user next calls
+			// timer_chan_freshness_every_min, guaranteeing we're running normally.
+			if let Some((funding_txo, update)) = failure.0.take() {
+				assert_eq!(update.updates.len(), 1);
+				if let ChannelMonitorUpdateStep::ChannelForceClosed { should_broadcast } = update.updates[0] {
+					assert!(should_broadcast);
+				} else { unreachable!(); }
+				self.pending_background_events.lock().unwrap().push(BackgroundEvent::ClosingMonitorUpdate((funding_txo, update)));
+			}
+			self.finish_force_close_channel(failure);
+		}
+	}
 }

 impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> MessageSendEventsProvider for ChannelManager<Signer, M, T, K, F, L>
```
```diff
@@ -3167,6 +3232,9 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 {
 	/// Updates channel state based on transactions seen in a connected block.
 	pub fn block_connected(&self, header: &BlockHeader, txdata: &TransactionData, height: u32) {
+		// Note that we MUST NOT end up calling methods on self.chain_monitor here - we're called
+		// during initialization prior to the chain_monitor being fully configured in some cases.
+		// See the docs for `ChannelManagerReadArgs` for more.
 		let header_hash = header.block_hash();
 		log_trace!(self.logger, "Block {} at height {} connected", header_hash, height);
 		let _persistence_guard = PersistenceNotifierGuard::new(&self.total_consistency_lock, &self.persistence_notifier);
@@ -3218,9 +3286,7 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 					if let Some(short_id) = channel.get_short_channel_id() {
 						short_to_id.remove(&short_id);
 					}
-					// It looks like our counterparty went on-chain. We go ahead and
-					// broadcast our latest local state as well here, just in case its
-					// some kind of SPV attack, though we expect these to be dropped.
+					// It looks like our counterparty went on-chain. Close the channel.
 					failed_channels.push(channel.force_shutdown(true));
 					if let Ok(update) = self.get_channel_update(&channel) {
 						pending_msg_events.push(events::MessageSendEvent::BroadcastChannelUpdate {
@@ -3254,9 +3320,8 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 				!htlcs.is_empty() // Only retain this entry if htlcs has at least one entry.
 			});
 		}
-		for failure in failed_channels.drain(..) {
-			self.finish_force_close_channel(failure);
-		}
+
+		self.handle_init_event_channel_failures(failed_channels);

 		for (source, payment_hash, reason) in timed_out_htlcs.drain(..) {
 			self.fail_htlc_backwards_internal(self.channel_state.lock().unwrap(), source, &payment_hash, reason);
```
```diff
@@ -3282,6 +3347,9 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 	/// If necessary, the channel may be force-closed without letting the counterparty participate
 	/// in the shutdown.
 	pub fn block_disconnected(&self, header: &BlockHeader) {
+		// Note that we MUST NOT end up calling methods on self.chain_monitor here - we're called
+		// during initialization prior to the chain_monitor being fully configured in some cases.
+		// See the docs for `ChannelManagerReadArgs` for more.
 		let _persistence_guard = PersistenceNotifierGuard::new(&self.total_consistency_lock, &self.persistence_notifier);
 		let mut failed_channels = Vec::new();
 		{
@@ -3306,9 +3374,7 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
 				}
 			});
 		}
-		for failure in failed_channels.drain(..) {
-			self.finish_force_close_channel(failure);
-		}
+		self.handle_init_event_channel_failures(failed_channels);
 		self.latest_block_height.fetch_sub(1, Ordering::AcqRel);
 		*self.last_block_hash.try_lock().expect("block_(dis)connected must not be called in parallel") = header.block_hash();
 	}
```
```diff
@@ -3914,6 +3980,18 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> Writeable f
 			event.write(writer)?;
 		}

+		let background_events = self.pending_background_events.lock().unwrap();
+		(background_events.len() as u64).write(writer)?;
+		for event in background_events.iter() {
+			match event {
+				BackgroundEvent::ClosingMonitorUpdate((funding_txo, monitor_update)) => {
+					0u8.write(writer)?;
+					funding_txo.write(writer)?;
+					monitor_update.write(writer)?;
+				},
+			}
+		}
+
 		(self.last_node_announcement_serial.load(Ordering::Acquire) as u32).write(writer)?;

 		Ok(())
```
```diff
@@ -3932,8 +4010,11 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> Writeable f
 /// 3) Register all relevant ChannelMonitor outpoints with your chain watch mechanism using
 ///    ChannelMonitor::get_outputs_to_watch() and ChannelMonitor::get_funding_txo().
 /// 4) Reconnect blocks on your ChannelMonitors.
-/// 5) Move the ChannelMonitors into your local chain::Watch.
-/// 6) Disconnect/connect blocks on the ChannelManager.
+/// 5) Disconnect/connect blocks on the ChannelManager.
+/// 6) Move the ChannelMonitors into your local chain::Watch.
+///
+/// Note that the ordering of steps #4-6 is not important; however, all three must occur before
+/// you call any other methods on the newly-deserialized ChannelManager.
 ///
 /// Note that because some channels may be closed during deserialization, it is critical that you
 /// always deserialize only the latest version of a ChannelManager and ChannelMonitors available to
```
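
To make the relaxed ordering concrete, here is a minimal end-to-end sketch of the reload sequence. Every identifier in it is a hypothetical stand-in (real code deserializes via `ChannelManagerReadArgs` and a `chain::Watch` implementation); only the step ordering comes from the documentation above:

```rust
// Every type and function in this sketch is a hypothetical stand-in; only the
// step ordering is taken from the ChannelManagerReadArgs documentation.
struct Monitors;
struct Manager;

fn deserialize_monitors() -> Monitors { Monitors } // steps 1-3
fn deserialize_manager(_monitors: &Monitors) -> Manager { Manager }

fn replay_blocks(_monitors: &Monitors, _manager: &Manager) {
    // Steps 4-5: reconnect blocks on the ChannelMonitors and the
    // ChannelManager. After this commit the two replays may be interleaved or
    // even done in a single pass (as lightning-block-sync does); any monitor
    // updates the manager generates are queued as BackgroundEvents rather
    // than sent to the chain::Watch directly.
}

fn move_into_chain_watch(_monitors: Monitors) {
    // Step 6: hand the monitors to your chain::Watch. Steps 4-6 may run in
    // any order, but all must complete first.
}

fn main() {
    let monitors = deserialize_monitors();
    let manager = deserialize_manager(&monitors);
    replay_blocks(&monitors, &manager);
    move_into_chain_watch(monitors);
    // Only now may other methods be called on the newly-deserialized manager.
    let _ = manager;
}
```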
```diff
@@ -4135,6 +4216,15 @@ impl<'a, Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
 			}
 		}

+		let background_event_count: u64 = Readable::read(reader)?;
+		let mut pending_background_events_read: Vec<BackgroundEvent> = Vec::with_capacity(cmp::min(background_event_count as usize, MAX_ALLOC_SIZE/mem::size_of::<BackgroundEvent>()));
+		for _ in 0..background_event_count {
+			match <u8 as Readable>::read(reader)? {
+				0 => pending_background_events_read.push(BackgroundEvent::ClosingMonitorUpdate((Readable::read(reader)?, Readable::read(reader)?))),
+				_ => return Err(DecodeError::InvalidValue),
+			}
+		}
+
 		let last_node_announcement_serial: u32 = Readable::read(reader)?;

 		let mut secp_ctx = Secp256k1::new();
@@ -4164,6 +4254,7 @@ impl<'a, Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
 			per_peer_state: RwLock::new(per_peer_state),

 			pending_events: Mutex::new(pending_events_read),
+			pending_background_events: Mutex::new(pending_background_events_read),
 			total_consistency_lock: RwLock::new(()),
 			persistence_notifier: PersistenceNotifier::new(),

```
lightning/src/ln/functional_test_utils.rs

Lines changed: 2 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -83,11 +83,13 @@ pub fn connect_block<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, block: &Block,
8383
let txdata: Vec<_> = block.txdata.iter().enumerate().collect();
8484
node.chain_monitor.chain_monitor.block_connected(&block.header, &txdata, height);
8585
node.node.block_connected(&block.header, &txdata, height);
86+
node.node.test_process_background_events();
8687
}
8788

8889
pub fn disconnect_block<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, header: &BlockHeader, height: u32) {
8990
node.chain_monitor.chain_monitor.block_disconnected(header, height);
9091
node.node.block_disconnected(header);
92+
node.node.test_process_background_events();
9193
}
9294

9395
pub struct TestChanMonCfg {

lightning/src/ln/reorg_tests.rs

Lines changed: 1 addition & 0 deletions
```diff
@@ -207,6 +207,7 @@ fn test_unconf_chan() {
 		nodes[0].node.block_disconnected(&headers.pop().unwrap());
 	}
 	check_closed_broadcast!(nodes[0], false);
+	nodes[0].node.test_process_background_events(); // Required to free the pending background monitor update
 	check_added_monitors!(nodes[0], 1);
 	let channel_state = nodes[0].node.channel_state.lock().unwrap();
 	assert_eq!(channel_state.by_id.len(), 0);
```
