Extract encode_and_send_msg into a method. #624

Conversation

@arik-so (Contributor) commented May 18, 2020

This is part of splitting #585 into smaller components. Extracting this logic now will permit a future extraction of all message handling into a separate method, and later into a separate trait accessible through language bindings.
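For orientation, here is a sketch of how the extracted method takes shape over the review below. The doc comment, signature, and serialization lines are quoted from the diff; the encrypt-and-push line is paraphrased from the surrounding peer_handler code, so treat this as an approximation rather than the merged code.

/// Append a message to a peer's pending outbound/write buffer, and update the map of peers needing sends accordingly
fn enqueue_message<M: Encode + Writeable>(&self, peers_needing_send: &mut HashSet<Descriptor>, peer: &mut Peer, descriptor: Descriptor, message: &M) {
	// Serialize the message; VecWriter pre-allocates via size_hint (see the discussion below).
	let mut buffer = VecWriter(Vec::new());
	wire::write(message, &mut buffer).unwrap(); // crash if the write failed
	let encoded_message = buffer.0;

	log_trace!(self.logger, "Encoding and sending message of type {} to {}", message.type_id(), log_pubkey!(peer.their_node_id.unwrap()));
	// Paraphrased: encrypt, append to the peer's outbound buffer, and signal a pending send.
	peer.pending_outbound_buffer.push_back(peer.channel_encryptor.encrypt_message(&encoded_message[..]));
	peers_needing_send.insert(descriptor);
}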

@arik-so requested a review from jkczyz May 18, 2020 18:01
codecov bot commented May 18, 2020

Codecov Report

Merging #624 into master will increase coverage by 0.01%.
The diff coverage is 90.90%.

@@            Coverage Diff             @@
##           master     #624      +/-   ##
==========================================
+ Coverage   91.29%   91.31%   +0.01%     
==========================================
  Files          35       35              
  Lines       20776    20784       +8     
==========================================
+ Hits        18967    18978      +11     
+ Misses       1809     1806       -3     
Impacted Files                          Coverage          Δ
lightning/src/ln/peer_handler.rs        58.65% <90.90%>   (+0.17%) ⬆️
lightning/src/ln/functional_tests.rs    97.41% <0.00%>    (+0.09%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@jkczyz (Contributor) left a comment

Thanks for pulling this out of #585!

@@ -508,7 +508,7 @@ impl<Descriptor: SocketDescriptor, CM: Deref, L: Deref> PeerManager<Descriptor,
 				},
 				msgs::ErrorAction::SendErrorMessage { msg } => {
 					log_trace!(self.logger, "Got Err handling message, sending Error message because {}", e.err);
-					encode_and_send_msg!(msg);
+					self.enqueue_message(&mut peers.peers_needing_send, peer,peer_descriptor.clone(), &msg);
Contributor

nit: space after peer, here and other call sites.

Contributor Author

thank you!

let mut encoded_message: Vec<u8> = vec![];
wire::write(message, &mut encoded_message);

log_trace!(self.logger, "Encoding and sending message of type {} to {}", message.type_id().to_string(), log_pubkey!(peer.their_node_id.unwrap()));
Contributor

to_string not needed

/// Append a message to a peer's pending outbound/write buffer, and update the map of peers needing sends accordingly
fn enqueue_message<M: Encode + Writeable>(&self, peers_needing_send: &mut HashSet<Descriptor>, peer: &mut Peer, descriptor: Descriptor, message: &M) {
let mut encoded_message: Vec<u8> = vec![];
wire::write(message, &mut encoded_message);
Contributor

I'm seeing a new warning that the result is not used. Looks like encode_msg! explicitly unwraps it.
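For reference, the later push (quoted further down) silences the warning by unwrapping the result:

	wire::write(message, &mut encoded_message).unwrap(); // crash if the write failed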

Comment on lines 464 to 465
let mut encoded_message: Vec<u8> = vec![];
wire::write(message, &mut encoded_message);
Contributor

Was it intentional to not use VecWriter here? It tries to optimize by reserving space, but I noticed the implementation of Write for Vec<u8> already does that under the hood. Maybe we don't need VecWriter then.
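For context, the VecWriter under discussion looks roughly like this (paraphrased from util/ser.rs of the time; treat the details as approximate):

pub(crate) struct VecWriter(pub Vec<u8>);

impl Writer for VecWriter {
	fn write_all(&mut self, buf: &[u8]) -> Result<(), ::std::io::Error> {
		self.0.extend_from_slice(buf);
		Ok(())
	}
	fn size_hint(&mut self, size: usize) {
		// Reserve capacity up front based on the hinted serialized length.
		self.0.reserve_exact(size);
	}
}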

Contributor Author

yes, though I can change it back to VecWriter for consistency's sake

Contributor

Honestly, I'd prefer to remove the other occurrence of VecWriter if it really isn't needed. Then we get both consistency and reduced complexity.

let mut encoded_message: Vec<u8> = vec![];
wire::write(message, &mut encoded_message);

log_trace!(self.logger, "Encoding and sending message of type {} to {}", message.type_id().to_string(), log_pubkey!(peer.their_node_id.unwrap()));
Contributor

Would it be worth updating this logging, given the message is only enqueued but not sent?

I know the behavior hasn't changed, but the logging accuracy is now dependent on the call site. All the call sites are currently in do_read_event, which always calls do_attempt_write_data. So in practice the message is sent. (For background, see #456.)

But if enqueue_message is called from elsewhere then the logging may not be necessarily accurate.

@@ -459,6 +459,16 @@ impl<Descriptor: SocketDescriptor, CM: Deref, L: Deref> PeerManager<Descriptor,
}
}

/// Append a message to a peer's pending outbound/write buffer, and update the map of peers needing sends accordingly
fn enqueue_message<M: Encode + Writeable>(&self, peers_needing_send: &mut HashSet<Descriptor>, peer: &mut Peer, descriptor: Descriptor, message: &M) {
Contributor

Not necessary to address now, but it would be worth thinking about the various ways we are sending messages in this module. Currently:

  • log, enqueue, mark as needing send (here; called by do_read_event)
  • log, enqueue (macro in do_attempt_write_data)
  • enqueue (process_events and timer_tick_occured)

Contributor Author

do_attempt_write_data differs from do_read_event in that it doesn't update the peers-needing-send map, but it's also a very tiny method where the sending is self-contained. I'm open to future brainstorming on refactoring it. In principle, all sends should look the same anyway, and having the same method/macro name do different things would have been confusing, too.
At its core, enqueue comprises three components: serialization, the enqueueing itself, and the signaling mechanism. The signaling mechanism doubles the number of necessary arguments. Perhaps splitting the method into two, where one calls the other, would be appropriate?
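A hypothetical sketch of that split (method names and bodies invented for illustration; not the merged code):

/// Serialize, encrypt, and append a message to the peer's outbound buffer, without signaling.
fn enqueue_encoded_message<M: Encode + Writeable>(&self, peer: &mut Peer, message: &M) {
	let mut buffer = VecWriter(Vec::new());
	wire::write(message, &mut buffer).unwrap();
	peer.pending_outbound_buffer.push_back(peer.channel_encryptor.encrypt_message(&buffer.0[..]));
}

/// Enqueue as above, and additionally mark the peer's descriptor as needing a send.
fn enqueue_message<M: Encode + Writeable>(&self, peers_needing_send: &mut HashSet<Descriptor>, peer: &mut Peer, descriptor: Descriptor, message: &M) {
	self.enqueue_encoded_message(peer, message);
	peers_needing_send.insert(descriptor);
}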

Contributor

It's possible that peers_needing_send is just a historical artifact and thus no longer needed. See discussion in #456. So there may be some refactoring that will get us to one simple method.

The method could also be on Peer rather than PeerManager, though I think Peer would then need a logger...

@arik-so force-pushed the encode_and_send_msg_method_refactor branch from 95a91c2 to dea9da3 on May 18, 2020 20:08
/// Append a message to a peer's pending outbound/write buffer, and update the map of peers needing sends accordingly
fn enqueue_message<M: Encode + Writeable>(&self, peers_needing_send: &mut HashSet<Descriptor>, peer: &mut Peer, descriptor: Descriptor, message: &M) {
let mut encoded_message: Vec<u8> = vec![];
wire::write(message, &mut encoded_message).unwrap(); // crash if the write failed
Collaborator

I'd be surprised if this weren't a good bit slower in some cases than VecWriter. VecWriter implements the size_hint by allocating capacity in the Vec, which we use extensively in the message serialization path.

Contributor

Our default implementation of Writer delegates to Write. And then the Write implementation for Vec<u8> calls extend_from_slice, which uses Iterator's size_hint to allocate enough space.

The implementation of Writer for VecWriter seems to be reinventing this. Or worse, it's attempting to reserve enough capacity twice: once on its own in size_hint, and then again when calling extend_from_slice on the underlying vector (as per above).

Contributor Author

yeah, we might wanna abandon VecWriter in this file altogether

Contributor

I suppose with VecWriter you only need to allocate once, though, if there is a size hint for the entire message vs needing to allocate for each of the message's component parts.

Collaborator

Right, VecWriter does it at a higher level - we tell it how long the entire message is going to be vs as we write individual fields. This should avoid quite a few allocations in our serialization path, which tends to be a common performance issue.
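A minimal sketch of the contrast, assuming the blanket Writer-for-Write impl described above (total_len and msg are hypothetical):

// With VecWriter: one up-front reservation for the whole message.
let mut w = VecWriter(Vec::new());
w.size_hint(total_len); // hypothetical precomputed serialized length
msg.write(&mut w)?;     // individual field writes append without reallocating

// With a plain Vec<u8> (blanket impl; its size_hint is a no-op):
// each field write reserves only for its own bytes as it goes.
let mut v: Vec<u8> = Vec::new();
msg.write(&mut v)?;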

Contributor Author

Per your original comment, I had switched it to VecWriter, so we can just keep it.

@ariard left a comment

Code Review ACK d6de5eb

We can defer the optimization of dropping VecWriter to future work, IMO.

@@ -459,6 +459,17 @@ impl<Descriptor: SocketDescriptor, CM: Deref, L: Deref> PeerManager<Descriptor,
}
}

/// Append a message to a peer's pending outbound/write buffer, and update the map of peers needing sends accordingly

"and insert its descriptor in the map of peers needing sends" ?

Contributor Author

I'm open to either phrasing. I believe you and Matt have the power to edit this though, so feel free to update it to whichever you prefer.

@jkczyz (Contributor) left a comment

ACK d6de5eb

Please squash commits.

@arik-so force-pushed the encode_and_send_msg_method_refactor branch from d6de5eb to 70274a0 on May 19, 2020 02:57
@arik-so (Contributor Author) commented May 20, 2020

I'm really not sure why the code coverage check failed after squashing the commits when it hadn't before.

@arik-so force-pushed the encode_and_send_msg_method_refactor branch from 70274a0 to dbdef7f on May 22, 2020 09:11
@TheBlueMatt merged commit fd71f8b into lightningdevkit:master on May 22, 2020