
Fail back HTLCs that fail to be freed from the holding cell #640

Conversation

valentinewallace
Contributor

Also update the comment explaining when this case would be hit.

This case was discussed on Slack a while ago because I was confused by the original comment, and adding a test was suggested.


codecov bot commented Jun 18, 2020

Codecov Report

Merging #640 into master will increase coverage by 0.07%.
The diff coverage is 91.32%.


@@            Coverage Diff             @@
##           master     #640      +/-   ##
==========================================
+ Coverage   91.30%   91.38%   +0.07%     
==========================================
  Files          35       35              
  Lines       21400    21663     +263     
==========================================
+ Hits        19539    19796     +257     
- Misses       1861     1867       +6     
Impacted Files Coverage Δ
lightning/src/ln/channelmanager.rs 85.26% <87.23%> (+<0.01%) ⬆️
lightning/src/ln/channel.rs 87.18% <87.32%> (+0.53%) ⬆️
lightning/src/ln/functional_tests.rs 96.99% <93.42%> (-0.16%) ⬇️


@jkczyz jkczyz self-requested a review on June 18, 2020 20:51
Contributor

@jkczyz jkczyz left a comment

Test looks great! Just one comment, though I'm not sure if it can be addressed.

@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch 5 times, most recently from 22950b8 to 38b44f5 on June 24, 2020 19:48
Collaborator

@TheBlueMatt TheBlueMatt left a comment

Didn't do a super detailed review, but the new logic is definitely the right direction.

// needs to be surfaced to the user.
fn fail_htlcs(&self, mut htlcs_to_fail: Vec<(HTLCSource, PaymentHash)>, channel_id: [u8; 32]) -> Result<(), MsgHandleErrInternal> {
for (htlc_src, payment_hash) in htlcs_to_fail.drain(..) {
match htlc_src {
Collaborator

I feel like this could be DRY'd, but I didn't dig into it.

Contributor Author

I DRY'd it a liiittle bit but LMK if you have a better way in mind.
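For readers skimming this later, here is a minimal, self-contained sketch of the kind of shared fail-back loop being discussed. The types are simplified stand-ins rather than rust-lightning's real HTLCSource/PaymentHash definitions, and the match arms just print instead of queuing real failure messages or events:

enum HtlcSource {
    // We forwarded this HTLC, so it must be failed back to the previous hop.
    PreviousHop { short_channel_id: u64 },
    // We originated this HTLC, so the failure is surfaced to the user instead.
    OutboundRoute,
}

type PaymentHash = [u8; 32];

fn fail_holding_cell_htlcs(mut htlcs_to_fail: Vec<(HtlcSource, PaymentHash)>) {
    for (source, payment_hash) in htlcs_to_fail.drain(..) {
        match source {
            HtlcSource::PreviousHop { short_channel_id } => {
                // Real code would queue an update_fail_htlc towards the previous hop.
                println!("failing {:x?} back over channel {}", payment_hash, short_channel_id);
            }
            HtlcSource::OutboundRoute => {
                // Real code would generate a payment-failed event for the user.
                println!("reporting local payment {:x?} as failed", payment_hash);
            }
        }
    }
}

fn main() {
    fail_holding_cell_htlcs(vec![
        (HtlcSource::PreviousHop { short_channel_id: 42 }, [0u8; 32]),
        (HtlcSource::OutboundRoute, [1u8; 32]),
    ]);
}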

@valentinewallace valentinewallace changed the title from "Test adding an HTLC to the holding cell when a fee update is pending" to "Fail back HTLCs that fail to be freed from the holding cell" on Jun 24, 2020
@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch 2 times, most recently from 4f38bad to 6cf1a5d on June 26, 2020 23:01
@@ -2115,7 +2115,7 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {

/// Used to fulfill holding_cell_htlcs when we get a remote ack (or implicitly get it by them
/// fulfilling or failing the last pending HTLC)
fn free_holding_cell_htlcs<L: Deref>(&mut self, logger: &L) -> Result<Option<(msgs::CommitmentUpdate, ChannelMonitorUpdate)>, ChannelError> where L::Target: Logger {
fn free_holding_cell_htlcs<L: Deref>(&mut self, logger: &L) -> Result<(Option<(msgs::CommitmentUpdate, ChannelMonitorUpdate)>, Vec<(HTLCSource, PaymentHash)>), ChannelError> where L::Target: Logger {
Contributor

I'm getting concerned that the return values are growing more complicated. These bleed out into the public interface (e.g., revoke_and_ack, channel_reestablish), which have even more complicated return values -- lots of Nones and empty Vecs returned.

Not sure how much this should be considered now (they were already quite complicated), but it may be an indication that the interaction between Channel and ChannelManager may need to be rethought in the future. Would love to hear any insights you may have had working with these modules.

Contributor Author

I'd agree that there are a lot of return values, though it reassures me that Channel's "public" API isn't intended to be seen by RL users.

I looked into refactoring it to have fewer return values, but nothing jumped out as low-hanging fruit. I'd have to take a deeper look. Open an issue, maybe?
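As a purely hypothetical illustration (simplified stand-in types, not the actual rust-lightning API), one way to keep these signatures from growing indefinitely would be to bundle the revoke_and_ack return values into a named struct, so that adding something like htlcs_to_fail doesn't ripple a wider tuple through every caller:

type PaymentHash = [u8; 32];

struct CommitmentUpdate;     // stand-in for msgs::CommitmentUpdate
struct ChannelMonitorUpdate; // stand-in for the monitor update
struct HtlcSource;           // stand-in for HTLCSource

#[derive(Default)]
struct RaaUpdates {
    commitment_update: Option<(CommitmentUpdate, ChannelMonitorUpdate)>,
    holding_cell_failed_htlcs: Vec<(HtlcSource, PaymentHash)>,
    // ...pending forwards, pending failures, closing_signed, etc. would live here too.
}

// Callers name only the fields they care about; everything else defaults,
// which avoids the "lots of Nones and empty Vecs" at each return site.
fn revoke_and_ack_sketch() -> RaaUpdates {
    RaaUpdates { holding_cell_failed_htlcs: Vec::new(), ..Default::default() }
}

fn main() {
    let updates = revoke_and_ack_sketch();
    assert!(updates.commitment_update.is_none());
    assert!(updates.holding_cell_failed_htlcs.is_empty());
}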

}

#[test]
fn test_holding_cell_htlc_with_pending_fee_update_multihop() {
Contributor

It would be valuable for posterity if these two tests had high-level documentation. The difference between the two names is the single word "multihop" which -- while accurately differentiating the scenarios -- abstracts away much of the additional behavior that is tested.

Would you mind adding a few lines to each stating the scenario and behaviors tested? In general, doing so may also help in finding succinct test names.

Contributor Author

Updated -- let me know if you think the comments still need work.

// into the holding cell without ever being
// successfully forwarded/failed/fulfilled, causing
// our counterparty to eventually close on us.
htlcs_to_fail.push((source.clone(), *payment_hash));
Collaborator

After the outer match we do this:

					if err.is_some() {
						self.holding_cell_htlc_updates.push(htlc_update);
						if let Some(ChannelError::Ignore(_)) = err {
							// If we failed to add the HTLC, but got an Ignore error, we should
							// still send the new commitment_signed, so reset the err to None.
							err = None;
						}

Which, IIUC, re-pushes the send onto the holding cell, even though we're about to fail it. ISTM this means we'll try to forward that HTLC again later?

Contributor Author

Yikes, should've caught that. Do you think it's worth (maybe in a follow-up PR) adding to this area to make sure there are no holding cell HTLCs when a test finishes?

Collaborator

Hmm, yeah, that would be cool! It would probably hit a few tests that fail just because it's a new requirement, but it's probably worth doing.
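A purely hypothetical sketch of what such an end-of-test check could look like. The ChannelUnderTest type and its field below are made up for illustration (the real holding_cell_htlc_updates vector is a private field of Channel, so an actual implementation would need a test-only accessor):

struct ChannelUnderTest {
    // Stand-in for the channel's holding_cell_htlc_updates vector.
    pending_holding_cell_updates: Vec<&'static str>,
}

fn assert_holding_cells_empty(channels: &[ChannelUnderTest]) {
    for (idx, chan) in channels.iter().enumerate() {
        assert!(
            chan.pending_holding_cell_updates.is_empty(),
            "channel {} finished the test with {} update(s) stuck in its holding cell",
            idx,
            chan.pending_holding_cell_updates.len()
        );
    }
}

fn main() {
    let chans = vec![ChannelUnderTest { pending_holding_cell_updates: vec![] }];
    assert_holding_cells_empty(&chans); // would panic if anything were left behind
}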

@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch 2 times, most recently from 2c333cf to 5db8841 on July 6, 2020 21:07
if let Some(ChannelError::Ignore(_)) = err {
// If we failed to add the HTLC, but got an Ignore error, we should
// still send the new commitment_signed, so reset the err to None.
err = None;
} else {
self.holding_cell_htlc_updates.push(htlc_update);
Collaborator

Hmm, I think it's fine to move this here, but it would be nice to add a comment explaining why it's fine to drop ChannelError::Ignore responses from get_update_fail_htlc and get_update_fulfill_htlc.

Contributor Author

Hope I interpreted this correctly

@TheBlueMatt
Collaborator

Once you add that comment, feel free to squash, I think this is fine.

@TheBlueMatt
Collaborator

Hmm, ok, looking at this again; sorry for the delay. Looks like the tests all pass dropping the err tracking stuff entirely (see patch below). I think this would totally be the right direction - the previous comment calls out a possible case of hitting Err(Ignore) when we end up back in AwaitingRemoteRevoke, but it seems like that's no longer true - in send_htlc as well as fulfill/fail we always return Ok(None) if we go to send and succeed but are in AwaitingRemoteRevoke, so mostly the only cases where we hit Err(Ignore) are when we don't have channel balance available.

The one exception which I think may need longer-term work is the MonitorUpdateFailed case for async monitor updating clients, but that's not specific to this PR; see issue #661.

diff --git a/lightning/src/ln/channel.rs b/lightning/src/ln/channel.rs
index 907a137f..563b5780 100644
--- a/lightning/src/ln/channel.rs
+++ b/lightning/src/ln/channel.rs
@@ -2135,3 +2135,2 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {
                        let mut htlcs_to_fail = Vec::new();
-                       let mut err = None;
                        for htlc_update in htlc_updates.drain(..) {
@@ -2142,5 +2141,2 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {
                                // to rebalance channels.
-                               if err.is_some() { // We're back to AwaitingRemoteRevoke (or are about to fail the channel)
-                                       self.holding_cell_htlc_updates.push(htlc_update);
-                               } else {
                                        match &htlc_update {
@@ -2162,6 +2158,5 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {
                                                                                _ => {
-                                                                                       log_info!(logger, "Failed to send HTLC with payment_hash {} resulting in a channel closure during holding_cell freeing", log_bytes!(payment_hash.0));
+                                                                                       panic!("Why can't we just panic here?");
                                                                                },
                                                                        }
-                                                                       err = Some(e);
                                                                }
@@ -2197,19 +2192,3 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {
                                        }
-                                       if err.is_some() {
-                                               if let Some(ChannelError::Ignore(_)) = err {
-                                                       // If we failed to add the HTLC, but got an Ignore error, we should
-                                                       // still send the new commitment_signed, so reset the err to None.
-                                                       // If we failed to fail or fulfill an HTLC, but got an Ignore error,
-                                                       // it's OK to drop the error because these errors are caused by
-                                                       // the ChannelManager generating duplicate claim/fail events during
-                                                       // block rescan.
-                                                       err = None;
-                                               } else {
-                                                       self.holding_cell_htlc_updates.push(htlc_update);
-                                               }
-                                       }
-                               }
                        }
-                       match err {
-                               None => {
                                        if update_add_htlcs.is_empty() && update_fulfill_htlcs.is_empty() && update_fail_htlcs.is_empty() && self.holding_cell_update_fee.is_none() {
@@ -2244,5 +2223,2 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {
                                        }, monitor_update)), htlcs_to_fail))
-                               },
-                               Some(e) => Err(e)
-                       }
                } else {

@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch 2 times, most recently from d436792 to ed48943 on July 30, 2020 16:51
@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch from ed48943 to 492041b on July 30, 2020 19:14
@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch 2 times, most recently from 0df7bb5 to f9290a7 on July 31, 2020 18:28
if commitment_update.is_none() {
order = RAACommitmentOrder::RevokeAndACKFirst;
}
return_monitor_err!(self, e, channel_state, chan, order, revoke_and_ack.is_some(), commitment_update.is_some());
Collaborator

I think it's a bug if we get htlcs_to_fail and then return early here without actually failing the HTLCs before returning (of course I don't know that we can get HTLCs to fail here to begin with, but it's not ideal).

Contributor Author

That makes sense. However, my understanding from our Slack conversation on Friday was that we can just assert that there are no HTLCs to fail backwards (given that we drop all holding cell HTLC forwards on peer disconnect). Let me know if that isn't right.

Collaborator

Yep! Can you add a more conservative assert?

diff --git a/lightning/src/ln/channel.rs b/lightning/src/ln/channel.rs
index cf2ccb9b..9e7b653e 100644
--- a/lightning/src/ln/channel.rs
+++ b/lightning/src/ln/channel.rs
@@ -2827,6 +2827,11 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {
                                // update_adds should have been dropped on peer disconnect. If this changes in
                                // the future, corresponding changes will need to be made in the ChannelManager's
                                // reestablish logic, because the logic assumes there are no HTLCs to fail backwards.
+                               for htlc_update in self.holding_cell_htlc_updates.iter() {
+                                       if let &HTLCUpdateAwaitingACK::AddHTLC { .. } = htlc_update {
+                                               debug_assert!(false, "We don't handle some pending add-HTLC edge-cases properly on reconnect, and they shouldn't be there to begin with");
+                                       }
+                               }
                                match self.free_holding_cell_htlcs(logger) {
                                        Err(ChannelError::Close(msg)) => return Err(ChannelError::Close(msg)),
                                        Err(ChannelError::Ignore(_)) | Err(ChannelError::CloseDelayBroadcast(_)) => panic!("Got non-channel-failing result from free_holding_cell_htlcs"),

@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch 5 times, most recently from 7b0841f to a79e075 on August 3, 2020 20:12
@@ -2679,7 +2718,7 @@ impl<ChanSigner: ChannelKeys, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
return Err(MsgHandleErrInternal::send_err_msg_no_close("Got a message for a channel from the wrong node!".to_owned(), msg.channel_id));
}
let was_frozen_for_monitor = chan.get().is_awaiting_monitor_update();
let (commitment_update, pending_forwards, pending_failures, closing_signed, monitor_update) =
let (commitment_update, pending_forwards, pending_failures, closing_signed, monitor_update, htlcs_to_fail) =
Collaborator

Hmm, isn't it an issue that we may return early two lines down and thus not fail back the HTLCs which were returned?

Contributor Author

@valentinewallace valentinewallace Aug 4, 2020

Hm yeah, I think so. Is that an existing problem for the pending failures here too?

Edit: never mind, I see that those failures are saved back in the Channel to be processed later.

@TheBlueMatt
Collaborator

Sadly, due to codecov breakage, I think this needs a rebase on master.

Collaborator

@TheBlueMatt TheBlueMatt left a comment

Ok, looks good sans the comments left.

@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch from a79e075 to b8750c7 on August 4, 2020 19:05
}
};

// Failing back holding cell HTLCs requires acquiring the channel state
Collaborator

I don't think we can drop the channel_state lock between getting the return value from revoke_and_ack and handling it - otherwise we introduce race issues where we may, e.g., drop the lock, have a different thread forward HTLCs into the channel, and then take the lock and try to submit monitor updates, etc. That would result in messages getting sent to the peer out of order or monitor updates happening out of order.

Contributor Author

That makes sense. To be clear, I'm interpreting this as: these lines should also be moved to within the first lock (in addition to moving the holding-cell-htlc-failings to within the first lock).

Collaborator

I think basically everything which was under the lock previously needs to remain under the lock; we just need to also be able to process the fail-backs even if we want to return early.

@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch 4 times, most recently from 4f299e4 to 47230b3 on August 5, 2020 19:42
@TheBlueMatt
Collaborator

Hmm, instead of holding the lock through the fail-backs, maybe it's easier to just wrap internal_revoke_and_ack as an internal_revoke_and_ack_pre_fails and return the list of fails to an outer function. That would also make it easier if we move forward with per-channel locks.

@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch from 47230b3 to 14b02f2 on August 6, 2020 18:59
@valentinewallace
Contributor Author

valentinewallace commented Aug 6, 2020

I'm unsure if there's a way around making try_chan_entry super repetitive here. Tried a bunch of stuff. But I've also been staring at it for a long time so I'll take a fresh look later.

Collaborator

@TheBlueMatt TheBlueMatt left a comment

Hmm, right, that is awkward. One alternative to try is to use the break_chan_entry!() macro instead and put the whole thing inside of something you can break out of (i.e. a dummy loop {}). That gives you manual control of the control flow without having to do a full return - then you can just store the htlcs_to_fail on a separate stack entry and set it when it gets returned to you.
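A tiny, self-contained illustration of the suggested pattern (the names below are made up and unrelated to the real rust-lightning types): breaking out of a loop with a value gives the same early-exit control flow as return, but the result lands in a local, so cleanup such as failing back the holding-cell HTLCs can still run before the function actually returns.

fn process(fail_fast: bool) -> Result<&'static str, &'static str> {
    let mut htlcs_to_fail: Vec<u32> = Vec::new();
    let res = loop {
        if fail_fast {
            // Early exit: plays the role of `return Err(...)`, but the cleanup
            // below still runs.
            break Err("monitor update failed");
        }
        htlcs_to_fail.push(1); // pretend the channel handed some HTLCs back
        break Ok("revoke_and_ack handled");
    };
    // Runs on every path, including the early-exit one above.
    for htlc in htlcs_to_fail.drain(..) {
        println!("failing back holding-cell HTLC {}", htlc);
    }
    res
}

fn main() {
    assert!(process(true).is_err());
    assert!(process(false).is_ok());
}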

@valentinewallace
Contributor Author

Good to know about break_chan_entry! The issue I'm having with this is that it moves the chan into the loop, and then the compiler complains when we move chan again into return_monitor_err!.

I'm sorta back to my original solution? But I might just make a bespoke try_raa macro. Because the last clause of try_chan_entry isn't necessary for try_raa, it's not as much code to copy and paste as I thought 😬
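One plausible reading of the bespoke try_raa idea (a hypothetical sketch, not actual rust-lightning code): a macro shaped like try_chan_entry!, except that on error it returns the (htlcs_to_fail, Result) tuple internal_revoke_and_ack had at this point, with an empty fail list.

macro_rules! try_raa {
    ($res: expr) => {
        match $res {
            Ok(val) => val,
            // On failure, bail out with no HTLCs to fail plus the error.
            Err(e) => return (Vec::new(), Err(e)),
        }
    };
}

// Toy caller with the same (fail list, Result) return shape.
fn demo(input: Result<u32, &'static str>) -> (Vec<(u64, [u8; 32])>, Result<u32, &'static str>) {
    let val = try_raa!(input);
    (Vec::new(), Ok(val))
}

fn main() {
    assert_eq!(demo(Err("boom")).1, Err("boom"));
    assert_eq!(demo(Ok(7)).1, Ok(7));
}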

@TheBlueMatt
Collaborator

This compiles for me (though I didn't test it):

diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index efc2a6cf..1ef01c60 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -2735,24 +2735,28 @@ impl<ChanSigner: ChannelKeys, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
                }
        }
 
-       fn internal_revoke_and_ack(&self, their_node_id: &PublicKey, msg: &msgs::RevokeAndACK) -> (Vec<(HTLCSource, PaymentHash)>, Result<(), MsgHandleErrInternal>){
-               let (pending_forwards, mut pending_failures, short_channel_id, htlcs_to_fail) = {
+       fn internal_revoke_and_ack(&self, their_node_id: &PublicKey, msg: &msgs::RevokeAndACK) -> Result<(), MsgHandleErrInternal> {
+               let mut htlcs_to_fail = Vec::new();
+               let res = loop {
                        let mut channel_state_lock = self.channel_state.lock().unwrap();
                        let channel_state = &mut *channel_state_lock;
                        match channel_state.by_id.entry(msg.channel_id) {
                                hash_map::Entry::Occupied(mut chan) => {
                                        if chan.get().get_their_node_id() != *their_node_id {
-                                               return (Vec::new(), Err(MsgHandleErrInternal::send_err_msg_no_close("Got a message for a channel from the wrong node!".to_owned(), msg.channel_id)));
+                                               break Err(MsgHandleErrInternal::send_err_msg_no_close("Got a message for a channel from the wrong node!".to_owned(), msg.channel_id));
                                        }
                                        let was_frozen_for_monitor = chan.get().is_awaiting_monitor_update();
-                                       let (commitment_update, pending_forwards, pending_failures, closing_signed, monitor_update, htlcs_to_fail) =
-                                               try_chan_entry!(self, chan.get_mut().revoke_and_ack(&msg, &self.fee_estimator, &self.logger), channel_state, chan, Vec::new());
+                                       let (commitment_update, pending_forwards, pending_failures, closing_signed, monitor_update, htlcs_to_fail_in) =
+                                               break_chan_entry!(self, chan.get_mut().revoke_and_ack(&msg, &self.fee_estimator, &self.logger), channel_state, chan);
+                                       htlcs_to_fail = htlcs_to_fail_in;
                                        if let Err(e) = self.monitor.update_monitor(chan.get().get_funding_txo().unwrap(), monitor_update) {
                                                if was_frozen_for_monitor {
                                                        assert!(commitment_update.is_none() && closing_signed.is_none() && pending_forwards.is_empty() && pending_failures.is_empty());
-                                                       return (htlcs_to_fail, Err(MsgHandleErrInternal::ignore_no_close("Previous monitor update failure prevented responses to RAA".to_owned())));
+                                                       break Err(MsgHandleErrInternal::ignore_no_close("Previous monitor update failure prevented responses to RAA".to_owned()));
                                                } else {
-                                                       return_monitor_err!(self, e, channel_state, chan, RAACommitmentOrder::CommitmentFirst, false, commitment_update.is_some(), pending_forwards, pending_failures, htlcs_to_fail);
+                                                       if let Err(e) = handle_monitor_err!(self, e, channel_state, chan, RAACommitmentOrder::CommitmentFirst, false, commitment_update.is_some(), pending_forwards, pending_failures) {
+                                                               break Err(e);
+                                                       } else { unreachable!(); }
                                                }
                                        }
                                        if let Some(updates) = commitment_update {
@@ -2767,17 +2771,22 @@ impl<ChanSigner: ChannelKeys, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
                                                        msg,
                                                });
                                        }
-                                       (pending_forwards, pending_failures, chan.get().get_short_channel_id().expect("RAA should only work on a short-id-available channel"), htlcs_to_fail)
+                                       break Ok((pending_forwards, pending_failures, chan.get().get_short_channel_id().expect("RAA should only work on a short-id-available channel")))
                                },
-                               hash_map::Entry::Vacant(_) => return (Vec::new(), Err(MsgHandleErrInternal::send_err_msg_no_close("Failed to find corresponding channel".to_owned(), msg.channel_id)))
+                               hash_map::Entry::Vacant(_) => break Err(MsgHandleErrInternal::send_err_msg_no_close("Failed to find corresponding channel".to_owned(), msg.channel_id))
                        }
                };
-               for failure in pending_failures.drain(..) {
-                       self.fail_htlc_backwards_internal(self.channel_state.lock().unwrap(), failure.0, &failure.1, failure.2);
+               self.fail_holding_cell_htlcs(htlcs_to_fail, msg.channel_id);
+               match res {
+                       Ok((pending_forwards, mut pending_failures, short_channel_id)) => {
+                               for failure in pending_failures.drain(..) {
+                                       self.fail_htlc_backwards_internal(self.channel_state.lock().unwrap(), failure.0, &failure.1, failure.2);
+                               }
+                               self.forward_htlcs(&mut [(short_channel_id, pending_forwards)]);
+                               Ok(())
+                       },
+                       Err(e) => Err(e)
                }
-               self.forward_htlcs(&mut [(short_channel_id, pending_forwards)]);
-
-               (htlcs_to_fail, Ok(()))
        }
 
        fn internal_update_fee(&self, their_node_id: &PublicKey, msg: &msgs::UpdateFee) -> Result<(), MsgHandleErrInternal> {
@@ -3278,8 +3287,7 @@ impl<ChanSigner: ChannelKeys, M: Deref + Sync + Send, T: Deref + Sync + Send, K:
 
        fn handle_revoke_and_ack(&self, their_node_id: &PublicKey, msg: &msgs::RevokeAndACK) {
                let _ = self.total_consistency_lock.read().unwrap();
-               let (htlcs_to_fail, res) = self.internal_revoke_and_ack(their_node_id, msg);
-               self.fail_holding_cell_htlcs(htlcs_to_fail, msg.channel_id);
+               let res = self.internal_revoke_and_ack(their_node_id, msg);
                let _ = handle_error!(self, res, *their_node_id);
        }
 

@valentinewallace
Contributor Author

Oh, that works great 🤦‍♀️ I was putting the loop inside the lock.

@valentinewallace valentinewallace force-pushed the test-holding-cell-edge-case branch from 14b02f2 to 523cab8 on August 8, 2020 20:32
@TheBlueMatt TheBlueMatt merged commit 99eef23 into lightningdevkit:master on Aug 8, 2020