
PeerStorage: Add feature and store peer storage in ChannelManager #3575


Merged

Conversation

@adi2011 adi2011 commented Jan 30, 2025

Breaking up PR #2943 into smaller chunks to make it easier to review.

Key Changes:

  • Added the ProvidePeerBackupStorage feature.
  • Added message handlers for peer storage, i.e., handle_your_peer_storage and handle_peer_storage (the two message types are sketched just below this list).
  • Store peer data inside ChannelManager and handle its persistence.
  • Send back the data when reconnecting.
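For orientation, here is a minimal sketch of roughly what the two wire messages look like, using the final names they end up with in this PR (PeerStorage and PeerStorageRetrieval); the doc comments are simplified:

```rust
/// Sketch only; the real definitions live in `lightning/src/ln/msgs.rs`.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct PeerStorage {
    /// Opaque (typically encrypted) blob a peer asks us to store on its behalf.
    pub data: Vec<u8>,
}

#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct PeerStorageRetrieval {
    /// The latest blob we stored for a peer, echoed back to them on reconnection.
    pub data: Vec<u8>,
}
```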

@adi2011 adi2011 force-pushed the peer-storage/channel-manager branch 3 times, most recently from d95d7e1 to 0ade437 Compare January 30, 2025 17:42

codecov bot commented Jan 30, 2025

Codecov Report

Attention: Patch coverage is 28.80000% with 89 lines in your changes missing coverage. Please review.

Project coverage is 88.48%. Comparing base (c7c3973) to head (ee684f2).

Files with missing lines | Patch % | Lines
lightning/src/ln/channelmanager.rs | 21.21% | 50 Missing and 2 partials ⚠️
lightning/src/ln/peer_handler.rs | 0.00% | 14 Missing ⚠️
lightning/src/util/test_utils.rs | 0.00% | 8 Missing ⚠️
lightning/src/ln/wire.rs | 0.00% | 6 Missing ⚠️
lightning-net-tokio/src/lib.rs | 0.00% | 5 Missing ⚠️
lightning/src/ln/functional_test_utils.rs | 0.00% | 4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3575      +/-   ##
==========================================
- Coverage   88.52%   88.48%   -0.05%     
==========================================
  Files         149      149              
  Lines      115023   115149     +126     
  Branches   115023   115149     +126     
==========================================
+ Hits       101829   101888      +59     
- Misses      10699    10767      +68     
+ Partials     2495     2494       -1     

☔ View full report in Codecov by Sentry.

@jkczyz jkczyz requested review from TheBlueMatt and tnull January 30, 2025 18:16
@jkczyz jkczyz added the weekly goal Someone wants to land this this week label Jan 30, 2025
@adi2011 adi2011 force-pushed the peer-storage/channel-manager branch 2 times, most recently from e766ee9 to 79b6458 Compare January 31, 2025 05:47
Collaborator

@TheBlueMatt TheBlueMatt left a comment

LGTM once these comments are addressed, I think.

@@ -725,6 +725,24 @@ pub struct UpdateFulfillHTLC {
pub payment_preimage: PaymentPreimage,
}

/// A [`PeerStorage`] message to be sent to or received from a peer.
Collaborator

A bit more documentation would be nice on both these structs (just because the other structs around it have bad docs doesn't mean we have to copy that :p).

Author

Thanks! I've improved the documentation :)

/// [`PeerStorage`]: https://github.com/lightning/bolts/pull/1110
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct PeerStorageMessage {
/// Data included in the msg
Collaborator

Okay, but what is the data for? This doesn't tell me anything.

Author

Now it does.
Thanks for pointing this out.


// Check if we have any channels with the peer (Currently we only provide the servie to peers we have a channel with).
if !peer_state.channel_by_id.values().any(|phase| phase.is_funded()) {
log_debug!(logger, "We do not have any channel with {}", log_pubkey!(counterparty_node_id));
Collaborator

This log doesn't tell me why it's being printed nor what we're doing. Something more like "Ignoring peer storage request from {} as we don't have any funded channels with them" would be much more informative.

Author

Yes, my bad. I should've written them more clearly. Fixed it now!


#[cfg(not(test))]
if msg.data.len() > 1024 {
log_debug!(logger, "We do not allow more than 1 KiB of data for each peer in peer storage. Sending warning to peer {}", log_pubkey!(counterparty_node_id));
Collaborator

Similarly, would be nice to phrase this in terms of what we're doing and why, rather than just stating a restriction we have. Something more like "Sending warning to peer and ignoring peer storage request from {} as its over 1KiB" might be more readable.

Author

Fixed

@@ -12858,6 +12904,9 @@ where
if !peer_state.ok_to_remove(false) {
peer_pubkey.write(writer)?;
peer_state.latest_features.write(writer)?;

peer_state.peer_storage.write(writer)?;
Collaborator

We can't just go writing new fields in the middle of the existing file format. Instead, we have to write them as TLVs below in the write_tlv_fields call - we'll need to build a Vec<(&PublicKey, &Vec<u8>)> (note the references to avoid unnecessary clones) and fill it with our stored data per-peer, then on the read side read the same, iterate it, and insert the data into the peer structs.

Author

Thanks for pointing that out—clearly, my brain was on vacation.
Fixed!
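A simplified, self-contained sketch of the collection step suggested above (not LDK's exact code): the input type here is a plain slice standing in for the real per-peer state, and the `write_tlv_fields` wiring is elided. The real version would walk `per_peer_state` under its locks, hand the resulting `Vec` to the existing TLV write call, and on the read side re-insert each blob into the matching `PeerState`.

```rust
use bitcoin::secp256k1::PublicKey;

// Build the borrowed per-peer directory that gets appended as a new TLV at the
// end of the ChannelManager serialization, rather than splicing a field into
// the legacy per-peer format. Borrowing avoids cloning each pubkey/blob.
fn collect_peer_storage(peers: &[(PublicKey, Vec<u8>)]) -> Vec<(&PublicKey, &Vec<u8>)> {
    peers
        .iter()
        // Skipping empty blobs is illustrative; whether to do so is a policy choice.
        .filter(|(_, blob)| !blob.is_empty())
        .map(|(pk, blob)| (pk, blob))
        .collect()
}
```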

Contributor

@tnull tnull left a comment

Cool! Did a first high-level pass to get started.

Note that I might not be aware of the full extent of previous discussions to this point, but I'd love if we could split out the peer store logic into a dedicated peer_store.rs module holding PeerStoreManager (or similar), and preferably even introduce a new PeerStoreMessageHandler rather than tagging the methods onto the ChannelMessageHandler trait.

It would be even better if we could find a way around persisting it as part of ChannelManager (e.g., by having PeerStoreManager take a KVStore), but if we're set on persisting it as part of ChannelManager for now, we could at least already isolate the logic more and prepare for such a refactoring in the future.

43,
ProvidePeerBackupStorage,
[InitContext, NodeContext],
"Feature flags for `provide_peer_backup_storage`.",
Contributor

I believe this should be option_provide_storage?

Below method names should be adjusted accordingly.

Author

Thanks for pointing it out. Fixed it.

@@ -72,6 +72,8 @@
//! (see the [`Trampoline` feature proposal](https://github.com/lightning/bolts/pull/836) for more information).
//! - `DnsResolver` - supports resolving DNS names to TXT DNSSEC proofs for BIP 353 payments
//! (see [bLIP 32](https://github.com/lightning/blips/blob/master/blip-0032.md) for more information).
//! - `ProvidePeerBackupStorage` - Indicates that we offer the capability to store data of our peers
Contributor

Probably also best to rename this to ProvideStorage to align with the feature name?

Author

Yes, I should have modified it as per the spec... Thanks!

},
/// Sends a channel partner their own peer storage which we store and update when they send
/// a [`msgs::PeerStorageMessage`].
SendYourPeerStorageMessage {
Contributor

Should these be called ..RetrievalMessage to align with the spec? This would potentially avoid confusion around the difference going forward?

Author

Fixed. Thanks!

Contributor

Given that peer storage is not necessarily connected to channel operation (as, IIUC, the spec allows storing/retrieving for non-channel peers), I wonder if this should introduce a new message handler type rather than just adding onto ChannelMessageHandler? Splitting it out now might also make it easier to move the storage into a dedicated object rather than handling everything via ChannelManager?
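For illustration, a hypothetical dedicated handler of the kind floated here might look like the sketch below. This trait does not exist in LDK as of this PR (the methods land on ChannelMessageHandler instead), and the trait name is made up:

```rust
use bitcoin::secp256k1::PublicKey;
use lightning::ln::msgs::{PeerStorage, PeerStorageRetrieval};

pub trait PeerStorageMessageHandler {
    /// Called when a peer asks us to store an opaque blob on its behalf.
    fn handle_peer_storage(&self, their_node_id: PublicKey, msg: PeerStorage);
    /// Called when a peer returns the blob it previously stored for us.
    fn handle_peer_storage_retrieval(&self, their_node_id: PublicKey, msg: PeerStorageRetrieval);
}
```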

@@ -4527,6 +4559,26 @@ mod tests {
assert_eq!(encoded_value, target_value);
}

#[test]
fn encoding_peer_storage() {
let peerstorage = msgs::PeerStorageMessage {
Contributor

nit: Prefer snake_case for variable names.

Author

Fixed this one and the test right after it. Thanks!

@@ -1379,6 +1379,8 @@ pub(super) struct PeerState<SP: Deref> where SP::Target: SignerProvider {
/// [`ChannelMessageHandler::peer_connected`] and no corresponding
/// [`ChannelMessageHandler::peer_disconnected`].
pub is_connected: bool,
/// Holds the peer storage data for the channel partner on a per-peer basis.
peer_storage: Vec<u8>,
Contributor

Can we encapsulate this in a (maybe even optional?) PeerStorageManager or similar type that we can transition to in the future, even if for now we're storing it in ChannelManager?

FWIW, such a type and its associated logic could live in a new src/ln/peer_store.rs module, instead of adding to the (already humongous) channelmanager.rs (which we're planning to split up going forward anyways).

Author

@TheBlueMatt, what's your opinion on this? Would love to hear your thoughts!

Collaborator

Right, I commented on discord that we should store them in ChannelManager because we should include them in the PeerState, whether that means a Vec or another struct doesn't matter much, but I imagine it wouldn't actually encapsulate very much (this PR is only +300, including the test), so I'm not quite sure I understand the desire for it. To be clear, @tnull, the stuff for actually storing our own stuff (rather than storing stuff for our peers) is relatively encapsulated.

Contributor

Right, I commented on discord that we should store them in ChannelManager because we should include them in the PeerState, whether that means a Vec or another struct doesn't matter much, but I imagine it wouldn't actually encapsulate very much (this PR is only +300, including the test), so I'm not quite sure I understand the desire for it.

Well, IIRC we know that we want to considerably refactor and move things out of channelmanager.rs at some point in the future, and we ofc will have to eventually run the formatter on it. So from my perspective, anything that could be added in separate modules should be, as otherwise we'll only have to move the code anyways going forward. Essentially, by starting to modularize now we not only make it easier to keep context on certain parts of the code (e.g., peer storage), but also make our future-selves' jobs easier.

I imagine if we introduce a separate message handler and dedicated types, essentially almost all of the peer storage related logic (and tests) could live in a peer_storage.rs, even if the data itself is kept as part of PeerState?

Collaborator

In principle, sure, but looking at this PR we'd move about 10 LoC into the new module? Basically we'd take the peer_state lock in channelmanager.rs, look up the peer entry, then pass the message off to the module which would only check if the message is too long and then store it? I don't see how that makes for cleaner code.

Contributor

Well, sure, but from a look at #2943 it seems there are quite some more changes ahead, also to other huge files. But, I don't want to hold this PR over this too long, I guess at some point we'll have to go through refactoring pain either way.

Collaborator

Hmm? That PR only adds about another 50 LoC in channelmanager.rs beyond what this one adds. And it's all working with channels in the ChannelManager so not sure it's easily extract-able.

Author

I think having a separate module for peer storage would be useful if, in the future, we decide to sell this as a service to our peers and increase our storage limit to 64 KiB. For now, I believe this approach is simple and good enough.

Contributor

@tnull tnull Feb 10, 2025

Alright, while I still think it would be nice to have at least a separate message handler, I don't want to delay this PR too much given there have been prior discussions. So probably fine to move forward now, and possibly refactor later.

@adi2011 adi2011 force-pushed the peer-storage/channel-manager branch 5 times, most recently from 079bb9a to ee684f2 Compare February 1, 2025 19:08
Author

adi2011 commented Feb 1, 2025

@tnull Initially, I wrote a peer_store_manager to handle all the peer storages and discussed this with @TheBlueMatt on Discord (screenshot attached for reference). He suggested that we should instead store them in the ChannelManager.

[Screenshot of the Discord conversation, attached 2025-02-02]

@adi2011 adi2011 requested review from TheBlueMatt and tnull February 1, 2025 19:26
@adi2011 adi2011 force-pushed the peer-storage/channel-manager branch 2 times, most recently from fe9f248 to 86d98ab Compare February 2, 2025 10:15
///
/// [`PeerStorageRetrievalMessage`]: https://github.com/lightning/bolts/pull/1110
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct PeerStorageRetrievalMessage {
Collaborator

Why "Message" at the end (same for PeerStorageMessage)? We don't do that elsewhere and IMO the name stands on its own.

Suggested change:
- pub struct PeerStorageRetrievalMessage {
+ pub struct PeerStorageRetrieval {

Author

Yes, my bad. Fixed!

@adi2011 adi2011 force-pushed the peer-storage/channel-manager branch from 86d98ab to 569c642 Compare February 6, 2025 18:26
Contributor

@tnull tnull left a comment

Mostly looks good to me, mod some comments (mostly nits, but I think we should consider starting to have the handlers take messages by-value).

SendPeerStorage {
/// The node_id of this message recipient
node_id: PublicKey,
/// The PeerStorage which should be sent.
Contributor

nit: To keep it consistent this should be peer_storage, same below with peer_storage_retrieval.

Author

Thanks for pointing this out.

@@ -725,6 +725,36 @@ pub struct UpdateFulfillHTLC {
pub payment_preimage: PaymentPreimage,
}

/// A [`PeerStorage`] message that can be sent to or received from a peer.
Contributor

Same as mentioned above, you'll want to use peer_storage, i.e., the name from the BOLTs in these docs, and not shadow the object link here.

@@ -625,6 +637,14 @@ impl Encode for msgs::GossipTimestampFilter {
const TYPE: u16 = 265;
}

impl Encode for msgs::PeerStorage {
Contributor

Seems the other types are roughly ordered by type number, so these should likely get moved waaay up.

let peer_state = &mut *peer_state_lock;
let logger = WithContext::from(&self.logger, Some(*counterparty_node_id), None, None);

// Check if we have any channels with the peer (Currently we only provide the servie to peers we have a channel with).
Contributor

s/servie/service/

}

#[cfg(not(test))]
if msg.data.len() > 1024 {
Contributor

Can we make this magic number an appropriately named const? Let's reuse it in the log/warning messages below then to make sure they never get stale.

Author

Definitely
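A minimal sketch of what such a named constant could look like; the constant name and message wording here are assumptions, not necessarily what the PR ends up with:

```rust
/// Maximum peer-storage blob size we accept per peer (name is illustrative).
const MAX_PEER_STORAGE_SIZE: usize = 1024;

fn check_peer_storage_size(data: &[u8]) -> Result<(), String> {
    if data.len() > MAX_PEER_STORAGE_SIZE {
        // Reusing the constant in the message keeps the text from going stale.
        Err(format!("Invalid peer_storage: exceeds the {} byte limit", MAX_PEER_STORAGE_SIZE))
    } else {
        Ok(())
    }
}
```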


// Check if we have any channels with the peer (Currently we only provide the servie to peers we have a channel with).
if !peer_state.channel_by_id.values().any(|phase| phase.is_funded()) {
log_debug!(logger, "Ignoring peer storage request from {} as we don't have any funded channels with them.", log_pubkey!(counterparty_node_id));
Contributor

Do we want to return a warning in this case to let the peer know we're not gonna store the data?

Author

By protocol, we don’t need to do it, but yes, it’s a good suggestion.

return;
}

log_trace!(logger, "Received Peer Storage from {}", log_pubkey!(counterparty_node_id));
Contributor

s/Peer Storage/peer_storage message/

@@ -760,6 +760,11 @@ mod tests {
fn handle_tx_init_rbf(&self, _their_node_id: PublicKey, _msg: &TxInitRbf) {}
fn handle_tx_ack_rbf(&self, _their_node_id: PublicKey, _msg: &TxAckRbf) {}
fn handle_tx_abort(&self, _their_node_id: PublicKey, _msg: &TxAbort) {}
fn handle_peer_storage(&self, _their_node_id: PublicKey, _msg: &PeerStorage) {}
Contributor

Can we give these msgs by value rather than by reference? We should eventually refactor the other variants too, but for these new variants it will avoid unnecessary cloning later.

Author

Yes, that is what I thought, but then I was tempted to follow the convention...

@@ -8170,9 +8172,65 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
}
}

fn internal_peer_storage_retrieval(&self, _counterparty_node_id: &PublicKey, _msg: &msgs::PeerStorageRetrieval) {}
fn internal_peer_storage_retrieval(&self, counterparty_node_id: &PublicKey, _msg: &msgs::PeerStorageRetrieval) {
Contributor

These methods should take both arguments by value, IMO.

}

log_trace!(logger, "Received Peer Storage from {}", log_pubkey!(counterparty_node_id));
peer_state.peer_storage = msg.data.clone();
Contributor

@tnull tnull Feb 10, 2025

Can avoid this clone by making use of move semantics.
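A hedged sketch with simplified local stand-in types (the real msgs::PeerStorage and PeerState are richer and generic): if the handler owns the message, the blob can be moved into the per-peer state instead of cloned.

```rust
struct PeerStorage { data: Vec<u8> }
struct PeerState { peer_storage: Vec<u8> }

fn store_peer_storage(peer_state: &mut PeerState, msg: PeerStorage) {
    // `msg.data` is moved, not copied; no `.clone()` needed.
    peer_state.peer_storage = msg.data;
}
```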

Contributor

@tnull tnull left a comment

Looks good, mod one comment, I think.

It's fine for now, but for the next PRs please add fixups in fixup commits under the respective 'main' commits and squash them only after asking the reviewers if it's fine to do so. Also, please only rebase on main if really necessary to resolve conflicts, not on every push. Following these steps makes it easier for reviewers to follow what changed in-between review rounds. Thank you!

// Check if we have any channels with the peer (Currently we only provide the service to peers we have a channel with).
if !peer_state.channel_by_id.values().any(|phase| phase.is_funded()) {
log_debug!(logger, "Ignoring peer storage request from {} as we don't have any funded channels with them.", log_pubkey!(counterparty_node_id));
peer_state.pending_msg_events.push(events::MessageSendEvent::HandleError {
Contributor

Sorry for catching this only now, but I think these internal_ methods should follow the usual pattern and return a Result<(), MsgHandleErrInternal>? Then, we can just return with the respective ErrorAction here, instead of "manually" pushing to the pending_msg_events?

Author

Thanks for catching this! You're absolutely right—having the internal_ methods return a Result<(), MsgHandleErrInternal> would align better with the usual pattern and simplify the error handling.
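A simplified, self-contained sketch of the suggested pattern (all types here are local stand-ins for the real LDK types): the internal handler returns a Result, and the caller maps an Err into the usual warning path instead of the handler pushing events itself.

```rust
struct MsgHandleErrInternal { warning: String }

fn internal_peer_storage(has_funded_channel: bool, data: Vec<u8>) -> Result<(), MsgHandleErrInternal> {
    if !has_funded_channel {
        return Err(MsgHandleErrInternal {
            warning: "Ignoring peer_storage message, as peer storage is currently supported \
                only for peers with an active funded channel.".to_string(),
        });
    }
    let _stored = data; // would be written into the per-peer storage here
    Ok(())
}

fn handle_peer_storage(has_funded_channel: bool, data: Vec<u8>) {
    if let Err(e) = internal_peer_storage(has_funded_channel, data) {
        // In LDK this becomes a MessageSendEvent::HandleError carrying an
        // ErrorAction with the warning; here we just print it.
        eprintln!("sending warning to peer: {}", e.warning);
    }
}
```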

@adi2011 adi2011 requested a review from tnull February 14, 2025 11:52
Author

adi2011 commented Feb 14, 2025

Thanks for the feedback @tnull! I’ll make sure to follow these guidelines for future PRs.

@TheBlueMatt
Collaborator

LGTM. Please squash the fix up into the appropriate commit and let's get this over the line!

@adi2011 adi2011 force-pushed the peer-storage/channel-manager branch from e2298d0 to 551f1f2 Compare February 14, 2025 19:46
Aditya Sharma added 2 commits February 15, 2025 01:17
Introduce the 'ProvideStorage' feature to enable nodes to distribute
and store peer storage backups for channel partners. This functionality enhances
resilience by ensuring critical peer data is backed up and can be retrieved if needed.

- Added 'ProvideStorage' to the 'InitContext' & 'NodeContext'.
- Set feature bit for this feature inside 'provide_init_features()'
Introduce message types and handlers to enable the exchange of peer storage data between nodes.
PeerStorageMessage: Used to send encrypted backups to peers.
PeerStorageRetrievalMessage: Used to return a peer's stored data upon reconnection.

- Define two new message types: PeerStorageMessage and PeerStorageRetrievalMessage.
- Implement handlers for these messages in ChannelMessageHandler.
- Add SendPeerStorageMessage and SendPeerStorageRetrievalMessage to MessageSendEvent.
@adi2011 adi2011 force-pushed the peer-storage/channel-manager branch from 551f1f2 to e55cc1e Compare February 14, 2025 19:49
TheBlueMatt previously approved these changes Feb 14, 2025
Collaborator

@TheBlueMatt TheBlueMatt left a comment

Thanks! Gonna go ahead and land this after CI passes since the changes since @tnull's "Looks good" were straightforward.

Aditya Sharma added 2 commits February 15, 2025 03:31
This commit introduces the handling and persistence of PeerStorage messages on a per-peer basis.
The peer storage is stored within the PeerState to simplify management, ensuring we do not need to remove it
when there are no active channels with the peer.

Key changes include:

 - Add PeerStorage to PeerState for persistent storage.
 - Implement internal_peer_storage to manage PeerStorage and its updates.
 - Add resend logic in peer_connected() to resend PeerStorage before sending the channel reestablish message upon reconnection.
 - Update PeerState's write() and read() methods to support PeerStorage persistence.
This test ensures that PeerStorage & PeerStorageRetrieval handling behaves as expected. It simulates
receiving a dummy peer storage message, disconnecting and reconnecting peers, and
validates that the correct messages are exchanged during reestablishment.

- Added a test case `test_peer_storage` to verify the handling of `PeerStorageMessage`
  and the validation of warning messages in the event of invalid peer storage retrieval.

- Simulated peer storage retrieval upon reconnection between nodes.

- Validated that a warning message is generated when `PeerStorageRetrievalMessage`
  is received.

- Ensured the warning message contains the expected error description.
Collaborator

@TheBlueMatt TheBlueMatt left a comment

Similarly:

$ git diff-tree -U1 e55cc1ecb 6c8e7e40b
diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index 0db5bfbd6..91df6a709 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -8263,3 +8263,3 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
 		Err(MsgHandleErrInternal::from_chan_no_close(ChannelError::Warn(
-			format!("Invalid peer_storage_retrieval message received.")
+			"Invalid peer_storage_retrieval message received.".into(),
 		), ChannelId([0; 32])))
@@ -8283,3 +8283,4 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
 			return Err(MsgHandleErrInternal::from_chan_no_close(ChannelError::Warn(
-				format!("Ignoring peer_storage message, as peer storage is currently supported only for peers with an active funded channel.")
+				"Ignoring peer_storage message, as peer storage is currently supported only for \
+				peers with an active funded channel.".into(),
 			), ChannelId([0; 32])));

Author

adi2011 commented Feb 15, 2025

I've updated these lines as suggested. Are there other instances in the code where similar changes should be made?

@TheBlueMatt TheBlueMatt merged commit bce5db7 into lightningdevkit:main Feb 15, 2025
22 of 26 checks passed