Iceberg backlog controller #24990
base: dev
Conversation
Previously `disk_log_impl::size_bytes_after_offset` returned a very rough approximation of the partition size after a given offset, as its accuracy was limited to whole segment sizes. Changed the code to return a better approximation by using the segment index. When the requested offset falls within a segment, the closest segment index entry is queried; the entry contains a position in the segment file, expressed as a byte offset from the beginning of the file. By subtracting that position from the file size, the code calculates the remaining part of the segment with an accuracy equal to the segment index entry interval, which defaults to 32 KiB. Signed-off-by: Michał Maślanka <[email protected]>
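For illustration only, a minimal C++ sketch of the index-based estimate described above; `segment_view` and `index_entry` are simplified stand-ins, not the actual Redpanda log or index types.

#include <cstdint>
#include <vector>

// Simplified stand-ins for a segment and its sparse index (roughly one
// entry per 32 KiB by default); not the actual Redpanda types.
struct index_entry {
    int64_t offset;    // first offset covered by this entry
    uint64_t file_pos; // byte position of that offset in the segment file
};

struct segment_view {
    int64_t base_offset;
    int64_t last_offset;
    uint64_t file_size;
    std::vector<index_entry> index; // sorted by offset
};

// Bytes of `seg` located at or after `target_offset`, accurate to the
// index sampling interval.
uint64_t size_after_offset(const segment_view& seg, int64_t target_offset) {
    if (target_offset <= seg.base_offset) {
        return seg.file_size; // the whole segment is after the offset
    }
    if (target_offset > seg.last_offset) {
        return 0;
    }
    // Find the closest index entry at or below the requested offset and
    // subtract its file position from the segment size.
    uint64_t pos = 0;
    for (const auto& e : seg.index) {
        if (e.offset > target_offset) {
            break;
        }
        pos = e.file_pos;
    }
    return seg.file_size - pos;
}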
Added a `partition_translator` method that calculates the translation backlog. The backlog is the number of bytes in a partition that are ready to be translated but have not yet gone through the translation process. Signed-off-by: Michał Maślanka <[email protected]>
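A hedged sketch of how such a backlog could be computed, assuming the partition exposes a size-after-offset query like the one improved in the previous commit; the names below are hypothetical, not the actual translator API.

#include <cstdint>
#include <functional>
#include <optional>

// Hypothetical helper: `size_bytes_after` answers "how many bytes does the
// partition log hold at or after the given offset".
std::optional<uint64_t> translation_backlog(
  std::optional<int64_t> last_translated_offset,
  const std::function<uint64_t(int64_t)>& size_bytes_after) {
    if (!last_translated_offset) {
        // No translation progress recorded yet, so the backlog is unknown.
        return std::nullopt;
    }
    // Bytes already written to the log that have not been translated yet.
    return size_bytes_after(*last_translated_offset + 1);
}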
Added properties that control the behavior of the backlog controller. Signed-off-by: Michał Maślanka <[email protected]>
Added a datalake backlog controller to dynamically adjust the `datalake` scheduling group shares. A simple proportional controller tracks the average size of the partition translation backlog and calculates how far it is from the desired value. The number of shares given to the `datalake` scheduling group is proportional to the difference between the desired and current backlog values. The rationale behind the controller is that datalake translation is almost always CPU bound, so the translation rate (translated bytes per second) can be controlled by giving the translator more CPU time when required. Signed-off-by: Michał Maślanka <[email protected]>
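A rough sketch of the proportional update described above; the member names, sign convention (a negative coefficient, see the tuning discussion below), and clamping bounds are assumptions drawn from the commit message and the review comments, not the exact implementation.

#include <algorithm>

struct backlog_controller_sketch {
    double proportional_coeff; // assumed negative, so a backlog above the
                               // setpoint yields a positive share count
    double setpoint_bytes;     // desired average per-partition backlog
    double min_shares{1};
    double max_shares{1000};

    // `current_backlog_bytes` is the sampled average per-partition backlog.
    double compute_shares(double current_backlog_bytes) const {
        // error < 0 when the backlog exceeds the setpoint
        const double error = setpoint_bytes - current_backlog_bytes;
        return std::clamp(proportional_coeff * error, min_shares, max_shares);
    }
};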
CI test results on build #61441
This is cool! Have you tried it out on a cluster yet? Curious if you have any empirical improvements worth sharing
@@ -1845,8 +1845,21 @@ uint64_t disk_log_impl::size_bytes_after_offset(model::offset o) const {
    uint64_t size = 0;
    for (size_t i = _segs.size(); i-- > 0;) {
        auto& seg = _segs[i];
        if (seg->offsets().get_base_offset() < o) {
            break;
        auto& offsets = seg->offsets();
Just curious if you think this is worth backporting? Seems this function is used in Raft at least, and I'm wondering about the effects on other callers.
@@ -340,6 +340,32 @@ partition_translator::do_translation_for_range(
    co_return std::move(result.value());
}

std::optional<size_t> partition_translator::translation_backlog() const {
I'm curious if there's actually correlation between translation time and bytes on disk, particularly in the face of compressed batches, or complex schemas. I guess the thinking is larger messages likely imply more complex payload, but just thinking out loud I wonder if something as simple as number of records pending would be usable here.
Not really advocating for any changes, but just thought I'd pose the question, and also point out that this won't work for read replicas or restored/FPM-ed partitions without implementing a similar sizing method for cloud.
"Average size of the datalake translation backlog, per partition, that " | ||
"the " | ||
"backlog controller will try to maintain.", |
Just want to make sure I understand, does this mean that when we have more than this value not translated, translation is scheduled more aggressively? If so, maybe worth explicitly writing that
"Proportional coefficient for the Iceberg backlog controller. Number of " | ||
"shares assigned to the datalake scheduling group will be proportional " | ||
"to the backlog size error.", |
I think it's worth leaving a note around tuning this, like if more negative values mean that the system has more aggressive adaptivity to workloads or something
    config::binding<double> _proportional_coeff;
    config::binding<uint32_t> _setpoint;
    std::chrono::steady_clock::duration _sampling_interval;
    ss::timer<> _sampling_timer;
    long double _current_sample{0};
nit: mind adding some docs about these? In particular I think it's helpful to at least understand:
- setpoint is the target value we're trying to maintain
- current sample has the same units as setpoint
- current sample is an output of _sampling_f
- maybe how to think about the coefficient, though maybe that's more useful to put in the cluster config property description?
      _setpoint(),
      _current_sample,
      current_err,

nit: extra newline, and `update` used twice?
lgtm generally, few clarifying questions
    ss::timer<> _sampling_timer;
    long double _current_sample{0};

    int _min_shares{1};
wondering if the minimum should default to at least 100.
      update,
      update);

    _scheduling_group.set_shares(static_cast<float>(update));
Is there any use to just adjust the weights of translation and produce, something like
translation_shares + delta
produce_shares - delta
In the current form I think it shuffles the entire share distribution which effectively means (for example) fetch gets fewer shares if there is a translation lag, is that the intention?
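A small sketch of the alternative suggested here, purely to illustrate trading a fixed delta of shares between the two groups while leaving the rest of the share distribution untouched; the names and bounds are made up for illustration.

#include <algorithm>

struct share_pair {
    float translation_shares;
    float produce_shares;
};

// Move `delta` shares from produce to translation (or back when negative),
// keeping the total constant so the remaining groups (e.g. fetch) keep
// their relative weight.
share_pair rebalance(share_pair current, float delta, float min_shares = 1.0F) {
    delta = std::min(delta, current.produce_shares - min_shares);
    delta = std::max(delta, -(current.translation_shares - min_shares));
    return {
      current.translation_shares + delta, current.produce_shares - delta};
}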
      *this,
      "iceberg_target_backlog_size",
      "Average size of the datalake translation backlog, per partition, that "
      "the "
nit: formatting
        total_lag += backlog_size.value();
    }

    return total_lag / _translators.size();
/ _translators.size();
nit: only account for translators that actually report something?
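A tiny sketch of what the nit suggests: divide only by the number of translators that actually report a backlog (names hypothetical, not the PR's code).

#include <cstddef>
#include <optional>
#include <vector>

std::optional<size_t>
average_backlog(const std::vector<std::optional<size_t>>& backlogs) {
    size_t total = 0;
    size_t reporting = 0;
    for (const auto& b : backlogs) {
        if (b) {
            total += *b;
            ++reporting; // count only translators that report a backlog
        }
    }
    if (reporting == 0) {
        return std::nullopt;
    }
    return total / reporting;
}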
"the " | ||
"backlog controller will try to maintain.", | ||
{.needs_restart = needs_restart::no, .visibility = visibility::tunable}, | ||
5_MiB, |
I was initially considering whether the default size should be slightly larger than 5 MiB. However, a smaller default has the advantage of smaller proportional changes to scheduling weights. This allows the system to correct itself more quickly than with a larger default (e.g., 1 GiB), where weight changes would be drastic and throttling more significant (bigger sawtooth). Therefore, we likely prefer smaller values for faster system adjustment, am I thinking about it correctly?
Added datalake backlog controller to dynamically adjust the `datalake` scheduling group shares. A simple proportional controller tracks the average size of the partition translation backlog and calculates how far it is from the desired value. The number of shares given to the `datalake` scheduling group is proportional to the difference between the desired and current backlog values.
The rationale behind the controller is that datalake translation is almost always CPU bound, so the translation rate (translated bytes per second) can be controlled by giving the translator more CPU time when required.
Backports Required
Release Notes
Improvements