PeerStorage: Add feature and store peer storage in ChannelManager #3575
Conversation
Force-pushed d95d7e1 to 0ade437
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #3575      +/-   ##
==========================================
- Coverage   88.52%   88.48%   -0.05%
==========================================
  Files         149      149
  Lines      115023   115149     +126
  Branches   115023   115149     +126
==========================================
+ Hits       101829   101888      +59
- Misses      10699    10767      +68
+ Partials     2495     2494       -1

☔ View full report in Codecov by Sentry.
Force-pushed e766ee9 to 79b6458
LGTM once these comments are addressed, I think.
lightning/src/ln/msgs.rs
Outdated
@@ -725,6 +725,24 @@ pub struct UpdateFulfillHTLC {
	pub payment_preimage: PaymentPreimage,
}

/// A [`PeerStorage`] message to be sent to or received from a peer.
A bit more documentation would be nice on both these structs (just because the other structs around it have bad docs doesn't mean we have to copy that :p).
Thanks! I've improved the documentation :)
lightning/src/ln/msgs.rs
Outdated
/// [`PeerStorage`]: https://github.com/lightning/bolts/pull/1110
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct PeerStorageMessage {
	/// Data included in the msg
Okay, but what is the data for? This doesn't tell me anything.
Now it does.
Thanks for pointing this out.
lightning/src/ln/channelmanager.rs
Outdated
// Check if we have any channels with the peer (Currently we only provide the servie to peers we have a channel with).
if !peer_state.channel_by_id.values().any(|phase| phase.is_funded()) {
	log_debug!(logger, "We do not have any channel with {}", log_pubkey!(counterparty_node_id));
This log doesn't tell me why it's being printed nor what we're doing. Something more like "Ignoring peer storage request from {} as we don't have any funded channels with them" would be much more informative.
Yes, my bad. I should've written them more clearly. Fixed it now!
lightning/src/ln/channelmanager.rs
Outdated
#[cfg(not(test))]
if msg.data.len() > 1024 {
	log_debug!(logger, "We do not allow more than 1 KiB of data for each peer in peer storage. Sending warning to peer {}", log_pubkey!(counterparty_node_id));
Similarly, would be nice to phrase this in terms of what we're doing and why, rather than just stating a restriction we have. Something more like "Sending warning to peer and ignoring peer storage request from {} as it's over 1KiB" might be more readable.
Fixed
lightning/src/ln/channelmanager.rs
Outdated
@@ -12858,6 +12904,9 @@ where
	if !peer_state.ok_to_remove(false) {
		peer_pubkey.write(writer)?;
		peer_state.latest_features.write(writer)?;

		peer_state.peer_storage.write(writer)?;
We can't just go writing new fields in the middle of the existing file format. Instead, we have to write them as TLVs below in the `write_tlv_fields` call - we'll need to build a `Vec<(&PublicKey, &Vec<u8>)>` (note the references to avoid unnecessary clones) and fill it with our stored data per-peer, then on the read side read the same, iterate it, and insert the data into the peer structs.
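The suggested round-trip could be sketched roughly as follows. This is a stand-alone simplification: LDK's `write_tlv_fields!` machinery and `Writeable` trait are elided, the 33-byte arrays stand in for `PublicKey`, and the length-prefixed byte layout is purely illustrative, not LDK's actual TLV encoding.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for the per-peer state holding the stored blob.
struct PeerState { peer_storage: Vec<u8> }

// Collect references (avoiding clones) into the suggested Vec shape.
fn collect_peer_storage(peers: &BTreeMap<[u8; 33], PeerState>) -> Vec<(&[u8; 33], &Vec<u8>)> {
    peers.iter().map(|(pk, st)| (pk, &st.peer_storage)).collect()
}

// Illustrative (not LDK's) encoding: 33-byte key, u16 length, then the blob.
fn write_records(records: &[(&[u8; 33], &Vec<u8>)]) -> Vec<u8> {
    let mut out = Vec::new();
    for (pk, data) in records {
        out.extend_from_slice(&pk[..]);
        out.extend_from_slice(&(data.len() as u16).to_be_bytes());
        out.extend_from_slice(data);
    }
    out
}

// Read side: iterate the records and rebuild the per-peer map.
fn read_records(mut buf: &[u8]) -> BTreeMap<[u8; 33], Vec<u8>> {
    let mut map = BTreeMap::new();
    while buf.len() >= 35 {
        let mut pk = [0u8; 33];
        pk.copy_from_slice(&buf[..33]);
        let len = u16::from_be_bytes([buf[33], buf[34]]) as usize;
        map.insert(pk, buf[35..35 + len].to_vec());
        buf = &buf[35 + len..];
    }
    map
}

fn main() {
    let mut peers = BTreeMap::new();
    peers.insert([2u8; 33], PeerState { peer_storage: vec![1, 2, 3] });
    let encoded = write_records(&collect_peer_storage(&peers));
    let decoded = read_records(&encoded);
    assert_eq!(decoded[&[2u8; 33]], vec![1, 2, 3]);
    println!("round-trip ok");
}
```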
Thanks for pointing that out—clearly, my brain was on vacation.
Fixed!
Cool! Did a first high-level pass to get started.
Note that I might not be aware of the full extent of previous discussions to this point, but I'd love if we could split out the peer store logic into a dedicated `peer_store.rs` module holding `PeerStoreManager` (or similar), and preferably even introduce a new `PeerStoreMessageHandler` rather than tagging the methods onto the `ChannelMessageHandler` trait.
It would be even better if we could find a way around persisting it as part of `ChannelManager` (e.g., by having `PeerStoreManager` take a `KVStore`), but if we're set on persisting it as part of `ChannelManager` for now, we could at least already isolate the logic more and prepare for such a refactoring in the future.
lightning-types/src/features.rs
Outdated
43,
ProvidePeerBackupStorage,
[InitContext, NodeContext],
"Feature flags for `provide_peer_backup_storage`.",
I believe this should be `option_provide_storage`? Below method names should be adjusted accordingly.
Thanks for pointing it out. Fixed it.
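As background for the bit number in the snippet above: BOLT 9 feature vectors are big-endian byte strings in which odd bits signal optional support. A minimal stand-alone sketch of setting and checking bit 43 (this is not LDK's `Features` type, just the underlying bit arithmetic):

```rust
// Set a BOLT 9-style feature bit in a big-endian byte vector: byte 0 holds
// the highest-numbered bits, so we index from the end of the vector.
fn set_feature_bit(flags: &mut Vec<u8>, bit: usize) {
    let byte_idx = bit / 8;
    if flags.len() <= byte_idx {
        flags.resize(byte_idx + 1, 0);
    }
    let len = flags.len();
    flags[len - 1 - byte_idx] |= 1 << (bit % 8);
}

fn supports_feature_bit(flags: &[u8], bit: usize) -> bool {
    let byte_idx = bit / 8;
    byte_idx < flags.len() && flags[flags.len() - 1 - byte_idx] & (1 << (bit % 8)) != 0
}

fn main() {
    let mut flags = Vec::new();
    set_feature_bit(&mut flags, 43); // odd bit: optional support
    assert!(supports_feature_bit(&flags, 43));
    assert!(!supports_feature_bit(&flags, 42));
    assert_eq!(flags.len(), 6); // bit 43 lives in the 6th byte from the end
}
```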
lightning-types/src/features.rs
Outdated
@@ -72,6 +72,8 @@
//! (see the [`Trampoline` feature proposal](https://github.com/lightning/bolts/pull/836) for more information).
//! - `DnsResolver` - supports resolving DNS names to TXT DNSSEC proofs for BIP 353 payments
//! (see [bLIP 32](https://github.com/lightning/blips/blob/master/blip-0032.md) for more information).
//! - `ProvidePeerBackupStorage` - Indicates that we offer the capability to store data of our peers
Probably also best to rename this `ProvideStorage` to align with the feature name?
Yes, I should have modified it as per the spec... Thanks!
lightning/src/events/mod.rs
Outdated
},
/// Sends a channel partner their own peer storage which we store and update when they send
/// a [`msgs::PeerStorageMessage`].
SendYourPeerStorageMessage {
Should these be called `..RetrievalMessage` to align with the spec? This would potentially avoid confusion around the difference going forward?
Fixed. Thanks!
Given that peer storage is not necessarily connected to channel operation (as, IIUC, the spec allows to store/retrieve for non-channel peers), I wonder if this should introduce a new message handler type rather than just adding onto `ChannelMessageHandler`? Splitting it out now might also make it easier to move the storage into a dedicated object rather than handling everything via `ChannelManager`?
lightning/src/ln/msgs.rs
Outdated
@@ -4527,6 +4559,26 @@ mod tests {
	assert_eq!(encoded_value, target_value);
}

#[test]
fn encoding_peer_storage() {
	let peerstorage = msgs::PeerStorageMessage {
nit: Prefer `snake_case` for variable names.
Fixed this one and the test right after it. Thanks!
@@ -1379,6 +1379,8 @@ pub(super) struct PeerState<SP: Deref> where SP::Target: SignerProvider {
	/// [`ChannelMessageHandler::peer_connected`] and no corresponding
	/// [`ChannelMessageHandler::peer_disconnected`].
	pub is_connected: bool,
	/// Holds the peer storage data for the channel partner on a per-peer basis.
	peer_storage: Vec<u8>,
Can we encapsulate this in a (maybe even optional?) `PeerStorageManager` or similar type that we can transition to in the future, even if for now we're storing it in `ChannelManager`?
FWIW, such a type and its associated logic could live in a new `src/ln/peer_store.rs` module, instead of adding to the (already humongous) `channelmanager.rs` (which we're planning to split up going forward anyways).
@TheBlueMatt, what's your opinion on this? Would love to hear your thoughts!
Right, I commented on discord that we should store them in `ChannelManager` because we should include them in the `PeerState`; whether that means a `Vec` or another struct doesn't matter much, but I imagine it wouldn't actually encapsulate very much (this PR is only +300, including the test), so I'm not quite sure I understand the desire for it. To be clear, @tnull, the stuff for actually storing our own stuff (rather than storing stuff for our peers) is relatively encapsulated.
> Right, I commented on discord that we should store them in ChannelManager because we should include them in the PeerState, whether that means a Vec or another struct doesn't matter much, but I imagine it wouldn't actually encapsulate very much (this PR is only +300, including the test), so I'm not quite sure I understand the desire for it.
Well, IIRC we know that we want to considerably refactor and move things out of `channelmanager.rs` at some point in the future, and we ofc will have to eventually run the formatter on it. So from my perspective, anything that could be added in separate modules should be, as otherwise we'll only have to move the code anyways going forward. Essentially, by starting the modularization now we not only make it easier to keep context on certain parts of the code (e.g., peer storage), but also make our future selves' jobs easier.
I imagine if we introduce a separate message handler and dedicated types, essentially almost all of the peer storage related logic (and tests) could live in a `peer_storage.rs`, even if the data itself is kept as part of `PeerState`?
In principle, sure, but looking at this PR we'd move about 10 LoC into the new module? Basically we'd take the `peer_state` lock in `channelmanager.rs`, look up the peer entry, then pass the message off to the module which would only check if the message is too long and then store it? I don't see how that makes for cleaner code.
Well, sure, but from a look at #2943 it seems there are quite some more changes ahead, also to other huge files. But, I don't want to hold this PR over this too long, I guess at some point we'll have to go through refactoring pain either way.
Hmm? That PR only adds about another 50 LoC in `channelmanager.rs` beyond what this one adds. And it's all working with channels in the `ChannelManager`, so not sure it's easily extract-able.
I think having a separate module for peer storage would be useful if, in the future, we decide to sell this as a service to our peers and increase our storage limit to 64 KiB. For now, I believe this simple approach is good enough.
Alright, while I still think it would be nice to have at least a separate message handler, I don't want to delay this PR too much given there have been prior discussions. So probably fine to move forward now, and possibly refactor later.
Force-pushed 079bb9a to ee684f2
@tnull Initially, I wrote a
Force-pushed fe9f248 to 86d98ab
lightning/src/ln/msgs.rs
Outdated
///
/// [`PeerStorageRetrievalMessage`]: https://github.com/lightning/bolts/pull/1110
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct PeerStorageRetrievalMessage {
Why "Message" at the end (same for `PeerStorageMessage`)? We don't do that elsewhere and IMO the name stands on its own.

Suggested change:
- pub struct PeerStorageRetrievalMessage {
+ pub struct PeerStorageRetrieval {
Yes, my bad. Fixed!
Force-pushed 86d98ab to 569c642
Mostly looks good to me, mod some comments (mostly nits, but I think we should consider starting to have the handlers take messages by-value).
lightning/src/events/mod.rs
Outdated
SendPeerStorage {
	/// The node_id of this message recipient
	node_id: PublicKey,
	/// The PeerStorage which should be sent.
nit: To keep it consistent this should be `peer_storage`, same below with `peer_storage_retrieval`.
Thanks for pointing this out.
lightning/src/ln/msgs.rs
Outdated
@@ -725,6 +725,36 @@ pub struct UpdateFulfillHTLC {
	pub payment_preimage: PaymentPreimage,
}

/// A [`PeerStorage`] message that can be sent to or received from a peer.
Same as mentioned above, you'll want to use `peer_storage`, i.e., the name from the BOLTs, in these docs, and not shadow the object link here.
lightning/src/ln/wire.rs
Outdated
@@ -625,6 +637,14 @@ impl Encode for msgs::GossipTimestampFilter {
	const TYPE: u16 = 265;
}

impl Encode for msgs::PeerStorage {
Seems the other types are roughly ordered by type number, so these should likely get moved waaay up.
lightning/src/ln/channelmanager.rs
Outdated
let peer_state = &mut *peer_state_lock;
let logger = WithContext::from(&self.logger, Some(*counterparty_node_id), None, None);

// Check if we have any channels with the peer (Currently we only provide the servie to peers we have a channel with).
s/servie/service/
lightning/src/ln/channelmanager.rs
Outdated
}

#[cfg(not(test))]
if msg.data.len() > 1024 {
Can we make this magic number an appropriately named `const`? Let's reuse it in the log/warning messages below then to make sure they never get stale.
Definitely
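A sketch of the suggested change; the constant name here is illustrative, not necessarily the one used in the PR:

```rust
// Illustrative constant name; the point is that the limit is defined once.
const MAX_PEER_STORAGE_SIZE: usize = 1024;

fn check_peer_storage_size(data: &[u8]) -> Result<(), String> {
    if data.len() > MAX_PEER_STORAGE_SIZE {
        // The limit quoted in the warning is derived from the const,
        // so the message can never go stale if the limit changes.
        Err(format!(
            "Ignoring peer storage message as it exceeds {} bytes",
            MAX_PEER_STORAGE_SIZE
        ))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(check_peer_storage_size(&[0u8; 1024]).is_ok());
    assert!(check_peer_storage_size(&[0u8; 1025]).is_err());
}
```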
// Check if we have any channels with the peer (Currently we only provide the servie to peers we have a channel with).
if !peer_state.channel_by_id.values().any(|phase| phase.is_funded()) {
	log_debug!(logger, "Ignoring peer storage request from {} as we don't have any funded channels with them.", log_pubkey!(counterparty_node_id));
Do we want to return a warning in this case to let the peer know we're not gonna store the data?
By protocol, we don’t need to do it, but yes, it’s a good suggestion.
lightning/src/ln/channelmanager.rs
Outdated
	return;
}

log_trace!(logger, "Received Peer Storage from {}", log_pubkey!(counterparty_node_id));
s/Peer Storage/peer_storage message/
lightning-net-tokio/src/lib.rs
Outdated
@@ -760,6 +760,11 @@ mod tests {
fn handle_tx_init_rbf(&self, _their_node_id: PublicKey, _msg: &TxInitRbf) {}
fn handle_tx_ack_rbf(&self, _their_node_id: PublicKey, _msg: &TxAckRbf) {}
fn handle_tx_abort(&self, _their_node_id: PublicKey, _msg: &TxAbort) {}
fn handle_peer_storage(&self, _their_node_id: PublicKey, _msg: &PeerStorage) {}
Can we give these `msg`s by value rather than by reference? We should eventually refactor the other variants too, but for these new variants it will avoid unnecessary cloning later.
Yes, that is what I thought, but then I was tempted to follow the convention...
lightning/src/ln/channelmanager.rs
Outdated
@@ -8170,9 +8172,65 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
	}
}

-fn internal_peer_storage_retrieval(&self, _counterparty_node_id: &PublicKey, _msg: &msgs::PeerStorageRetrieval) {}
+fn internal_peer_storage_retrieval(&self, counterparty_node_id: &PublicKey, _msg: &msgs::PeerStorageRetrieval) {
These methods should take both arguments by value, IMO.
lightning/src/ln/channelmanager.rs
Outdated
}

log_trace!(logger, "Received Peer Storage from {}", log_pubkey!(counterparty_node_id));
peer_state.peer_storage = msg.data.clone();
Can avoid this `clone` by making use of move semantics.
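For illustration, the difference between the by-reference and by-value approaches, with the surrounding types reduced to stand-ins for the real message and peer-state structs:

```rust
// Simplified stand-ins for the real LDK types.
struct PeerStorage { data: Vec<u8> }
struct PeerState { peer_storage: Vec<u8> }

// By reference: the handler must clone to keep the bytes around.
fn store_by_ref(state: &mut PeerState, msg: &PeerStorage) {
    state.peer_storage = msg.data.clone(); // extra allocation + copy
}

// By value: ownership of the Vec moves into PeerState; no byte copy at all.
fn store_by_value(state: &mut PeerState, msg: PeerStorage) {
    state.peer_storage = msg.data;
}

fn main() {
    let mut st = PeerState { peer_storage: Vec::new() };
    store_by_ref(&mut st, &PeerStorage { data: vec![1, 2] });
    assert_eq!(st.peer_storage, vec![1, 2]);
    store_by_value(&mut st, PeerStorage { data: vec![3, 4] });
    assert_eq!(st.peer_storage, vec![3, 4]);
}
```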
Force-pushed 569c642 to af8ff1b
Looks good, mod one comment, I think.
It's fine for now, but for the next PRs please add fixups in fixup commits under the respective 'main' commits and squash them only after asking the reviewers if it's fine to do so. Also, please only rebase on `main` if really necessary to resolve conflicts, not on every push. Following these steps makes it easier for reviewers to follow what changed in-between review rounds. Thank you!
lightning/src/ln/channelmanager.rs
Outdated
// Check if we have any channels with the peer (Currently we only provide the service to peers we have a channel with).
if !peer_state.channel_by_id.values().any(|phase| phase.is_funded()) {
	log_debug!(logger, "Ignoring peer storage request from {} as we don't have any funded channels with them.", log_pubkey!(counterparty_node_id));
	peer_state.pending_msg_events.push(events::MessageSendEvent::HandleError {
Sorry for catching this only now, but I think these `internal_` methods should follow the usual pattern and return a `Result<(), MsgHandleErrInternal>`? Then, we can just return with the respective `ErrorAction` here, instead of "manually" pushing to the `pending_msg_events`?
Thanks for catching this! You're absolutely right: having the `internal_` methods return a `Result<(), MsgHandleErrInternal>` would align better with the usual pattern and simplify the error handling.
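A reduced sketch of that error-return pattern: LDK's `MsgHandleErrInternal` and `ErrorAction` are internal types, so a placeholder enum stands in here, and the channel-state lookup is collapsed into a boolean.

```rust
// Placeholder for LDK's internal error type carrying the action to take
// (e.g., send a warning message to the peer).
enum HandleError { SendWarning(String) }

// Early-return the error instead of pushing to pending_msg_events directly;
// the caller maps the Err into the appropriate message-send event.
fn internal_peer_storage(
    has_funded_channel: bool, data: Vec<u8>, storage: &mut Vec<u8>,
) -> Result<(), HandleError> {
    if !has_funded_channel {
        return Err(HandleError::SendWarning(
            "Ignoring peer_storage message: no funded channels with peer".into(),
        ));
    }
    *storage = data; // move, no clone
    Ok(())
}

fn main() {
    let mut storage = Vec::new();
    assert!(internal_peer_storage(false, vec![1], &mut storage).is_err());
    assert!(internal_peer_storage(true, vec![1], &mut storage).is_ok());
    assert_eq!(storage, vec![1]);
}
```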
Thanks for the feedback @tnull! I'll make sure to follow these guidelines for future PRs.
LGTM. Please squash the fixup into the appropriate commit and let's get this over the line!
Force-pushed e2298d0 to 551f1f2
Introduce the 'ProvideStorage' feature to enable nodes to distribute and store peer storage backups for channel partners. This functionality enhances resilience by ensuring critical peer data is backed up and can be retrieved if needed.
- Added 'ProvideStorage' to the 'InitContext' & 'NodeContext'.
- Set the feature bit for this feature inside 'provide_init_features()'.
Introduce message types and handlers to enable the exchange of peer storage data between nodes. PeerStorageMessage is used to send encrypted backups to peers; PeerStorageRetrievalMessage is used to return a peer's stored data upon reconnection.
- Define two new message types: PeerStorageMessage and PeerStorageRetrievalMessage.
- Implement handlers for these messages in ChannelMessageHandler.
- Add SendPeerStorageMessage and SendPeerStorageRetrievalMessage to MessageSendEvent.
Force-pushed 551f1f2 to e55cc1e
Thanks! Gonna go ahead and land this after CI passes since the changes since @tnull's "Looks good" were straightforward.
This commit introduces the handling and persistence of PeerStorage messages on a per-peer basis. The peer storage is stored within the PeerState to simplify management, ensuring we do not need to remove it when there are no active channels with the peer.
Key changes include:
- Add PeerStorage to PeerState for persistent storage.
- Implement internal_peer_storage to manage PeerStorage and its updates.
- Add resend logic in peer_connected() to resend PeerStorage before sending the channel reestablish message upon reconnection.
- Update PeerState's write() and read() methods to support PeerStorage persistence.
This test ensures that PeerStorage & PeerStorageRetrieval handling behaves as expected. It simulates receiving a dummy peer storage message, disconnecting and reconnecting peers, and validates that the correct messages are exchanged during reestablishment.
- Added a test case `test_peer_storage` to verify the handling of `PeerStorageMessage` and the validation of warning messages in the event of invalid peer storage retrieval.
- Simulated peer storage retrieval upon reconnection between nodes.
- Validated that a warning message is generated when `PeerStorageRetrievalMessage` is received.
- Ensured the warning message contains the expected error description.
Force-pushed e55cc1e to 6c8e7e4
Similarly:
$ git diff-tree -U1 e55cc1ecb 6c8e7e40b
diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index 0db5bfbd6..91df6a709 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -8263,3 +8263,3 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
Err(MsgHandleErrInternal::from_chan_no_close(ChannelError::Warn(
- format!("Invalid peer_storage_retrieval message received.")
+ "Invalid peer_storage_retrieval message received.".into(),
), ChannelId([0; 32])))
@@ -8283,3 +8283,4 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
return Err(MsgHandleErrInternal::from_chan_no_close(ChannelError::Warn(
- format!("Ignoring peer_storage message, as peer storage is currently supported only for peers with an active funded channel.")
+ "Ignoring peer_storage message, as peer storage is currently supported only for \
+ peers with an active funded channel.".into(),
), ChannelId([0; 32])));
I've updated these lines as suggested. Are there other instances in the code where similar changes should be made?
Breaking up PR #2943 into smaller chunks to make it easier to review.
Key Changes: