reduce buffer size of "unused" side of socket #4313
Conversation
Force-pushed from 85a196a to e62193f.
Ahh, this one slipped through the cracks. I doubt it will make any difference, but can you please rebase to tip as a precaution? I'm guessing your branch is on a two-month-old tip of master right now.
Force-pushed from e62193f to 83a9a82.
rebased!
Testing on an unstaked mnb node does not reveal any adverse impacts. Will start on testnet; if all goes well I'll merge it in the morning.
OK, tested on a staked node, TPU works fine, LGTM!
Follow-up to PR #3929
Problem
We allocate large send and receive buffers for all sockets. However, not all sockets read from and write to their buffers equally: six sockets are primarily read and two are primarily write, which means we are allocating memory that is never used.
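To illustrate the idea, here is a minimal sketch (not this PR's actual code) of binding a UDP socket with asymmetric buffer sizes via the `socket2` crate. The `bind_udp` helper and the buffer-size constants are hypothetical; the validator uses its own net-utils wrappers and its own sizes.

```rust
use socket2::{Domain, Protocol, Socket, Type};
use std::net::SocketAddr;

// Hypothetical sizes, for illustration only; the real values differ.
const LARGE_BUF: usize = 64 * 1024 * 1024; // side that carries real traffic
const SMALL_BUF: usize = 64 * 1024;        // "unused" side, kept minimal

// SO_RCVBUF and SO_SNDBUF are set independently, so a socket that is
// primarily read can keep a large receive buffer while its send buffer
// shrinks, and vice versa for write-heavy sockets.
fn bind_udp(addr: SocketAddr, recv_buf: usize, send_buf: usize) -> std::io::Result<Socket> {
    let socket = Socket::new(Domain::IPV4, Type::DGRAM, Some(Protocol::UDP))?;
    socket.set_recv_buffer_size(recv_buf)?;
    socket.set_send_buffer_size(send_buf)?;
    socket.bind(&addr.into())?;
    Ok(socket)
}
```

Note that on Linux the kernel doubles the requested value and caps it at `net.core.rmem_max`/`net.core.wmem_max`, so the effective sizes may differ from what is requested.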
Summary of Changes
| Services/Sockets | Read/Write |
| --- | --- |
| Gossip | Read/Write |
| RPC - TCP | Read/Write |
| Ip_echo - TCP | Read/Write |
| Tvu | Read/Write |
| tvu_quic | Read/Write |
| Repair | Read/Write |
| Repair_quic | Read/Write |
| Serve_repair | Read/Write |
| Serve_repair_quic | Read/Write |
| Ancestor_hashes_requests | Read/Write |
| Ancestor_hashes_requests_quic | Read/Write |
| Tpu | Primarily Read |
| Tpu_forwards | Primarily Read |
| Tpu_vote | Primarily Read |
| Tpu_quic | Primarily Read |
| Tpu_forwards_quic | Primarily Read |
| Tpu_vote_quic | Primarily Read |
| Retransmitter | Primarily Write |
| Broadcast | Primarily Write |
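Applying that classification with the hypothetical `bind_udp` helper from the sketch above might look like the following (it reuses the imports and constants from that sketch; the addresses are placeholders, not the validator's real port assignments):

```rust
fn bind_validator_sockets() -> std::io::Result<()> {
    let tvu_addr: SocketAddr = "0.0.0.0:0".parse().unwrap();        // placeholder
    let tpu_addr: SocketAddr = "0.0.0.0:0".parse().unwrap();        // placeholder
    let retransmit_addr: SocketAddr = "0.0.0.0:0".parse().unwrap(); // placeholder

    // Read/Write: both sides see real traffic, so both buffers stay large.
    let _tvu = bind_udp(tvu_addr, LARGE_BUF, LARGE_BUF)?;
    // Primarily read (e.g. TPU): shrink the rarely used send buffer.
    let _tpu = bind_udp(tpu_addr, LARGE_BUF, SMALL_BUF)?;
    // Primarily write (e.g. retransmitter): shrink the rarely used receive buffer.
    let _retransmit = bind_udp(retransmit_addr, SMALL_BUF, LARGE_BUF)?;
    Ok(())
}
```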
Follow-up PR: