[RFC] Adding API for parallel block to task_arena to warm-up/retain/release worker threads #1522

Open · wants to merge 11 commits into master
Conversation

Contributor

@pavelkumbrasev pavelkumbrasev commented Oct 1, 2024

Adding API for parallel block to task_arena to warm-up/retain/release worker threads

Signed-off-by: pavelkumbrasev <[email protected]>
@vossmjp vossmjp changed the title Adding API for parallel block to task_arena to warm-up/retain/release worker threads [RFC} Adding API for parallel block to task_arena to warm-up/retain/release worker threads Oct 3, 2024
@vossmjp vossmjp changed the title [RFC} Adding API for parallel block to task_arena to warm-up/retain/release worker threads [RFC] Adding API for parallel block to task_arena to warm-up/retain/release worker threads Oct 3, 2024
Contributor

@aleksei-fedotov aleksei-fedotov left a comment


Overall, it reads as too certain about the things that will or will not happen when the new API is utilized. I think the explanation should be written in vaguer terms, using more words like "may", "might", etc., essentially conveying the idea that all of this is up to the implementation and serves as a hint rather than concrete behavior.

What do others think?

@pavelkumbrasev
Contributor Author

Overall, it reads as too certain about the things that will or will not happen when the new API is utilized. I think the explanation should be written in vaguer terms, using more words like "may", "might", etc., essentially conveying the idea that all of this is up to the implementation and serves as a hint rather than concrete behavior.

What do others think?

I tried to indicate that this set of APIs is a hint to the scheduler. But if you believe that we can relax these guarantees even further, I think we should do so.

Signed-off-by: pavelkumbrasev <[email protected]>
@pavelkumbrasev
Copy link
Contributor Author

Ping @aleksei-fedotov @vossmjp @akukanov

@isaevil isaevil marked this pull request as ready for review November 20, 2024 14:45
isaevil and others added 2 commits November 25, 2024 10:53
Contributor

@aleksei-fedotov aleksei-fedotov left a comment


A bunch of comments from my side. I have not reviewed the new API yet.

Comment on lines 140 to 141
void start_parallel_block();
void end_parallel_block(bool set_one_time_fast_leave = false);
Contributor

  1. To make the new API more composable, I would indicate that the setting primarily affects this parallel block, while in the absence of other parallel blocks with conflicting requests it affects the behavior of the arena as a whole.
  2. It looks as if it is tailored to a single scenario. Along with the first bullet, I believe this is the reason for that "NOP" case.

Therefore, my suggestion is to address both of these by changing the API (here and in other places) to something like the following:

Suggested change

Before:
    void start_parallel_block();
    void end_parallel_block(bool set_one_time_fast_leave = false);
After:
    void start_parallel_block();
    void end_parallel_block(workers_leave this_block_leave = workers_leave::delayed);

Then add somewhere an explanation of how this affects/changes the behavior of the current parallel block and how it composes with the arena's setting and other parallel blocks within it. For example, it may be like:

This start/end of parallel block API allows making a one-time change to the behavior of the arena setting with which it was initialized. If this behavior matches the arena's setting, then the workers' leave behavior does not change. In case of conflicting requests coming from multiple parallel blocks simultaneously, the scheduler chooses the behavior it considers optimal.

Contributor

There is no composability problem really, as all but the last end-of-block call are simply ignored, and only the last one has a one-time impact on the leave policy. Also, it does not affect the arena settings, according to the design.

Of course if the calls come from different threads, in general it is impossible to predict which one will be the last. However, even if the code is designed to create parallel blocks in the same arena by multiple threads, all these blocks might have the same leave policy so that it does not matter which one is the last to end.

Using the same enum for the end of block as for the construction of the arena seems more confusing than helpful to me, as it may be perceived as changing the arena state permanently.

Contributor

in general it is impossible to predict which one will be the last

My guess is that it is the last one that decreases the ref counter to zero. I don't see any issue with this. Later blocks use the arena's policy if not specified explicitly.

Using the same enum for the end of block as for the construction of the arena seems more confusing than helpful to me, as it may be perceived as changing the arena state permanently.

I indicated the difference in the parameter name this_block_leave, but if that is not enough, we can also indicate it more explicitly with additional types: arena_workers_leave and phase_workers_leave. Nevertheless, in my opinion it would not be a problem if the documentation/specification includes an explanation of this.
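The "ref counter" semantics discussed in this thread can be sketched with a small stand-in class. This is illustrative only; phase_tracker and its members are hypothetical names modeling the described behavior, not oneTBB API: start calls increment a counter, and only the end call that drops it to zero applies its one-time request.

```cpp
#include <atomic>
#include <cassert>

// Hypothetical model of the discussed semantics: only the end call that
// decreases the ref counter to zero has a one-time effect on the leave
// policy; earlier end calls are simply ignored. Not oneTBB API.
class phase_tracker {
    std::atomic<int> active_blocks{0};
    std::atomic<bool> one_time_fast_leave{false};
public:
    void start_parallel_block() { active_blocks.fetch_add(1); }

    // Returns true if this call ended the outermost block, i.e. its
    // request took effect.
    bool end_parallel_block(bool set_one_time_fast_leave = false) {
        if (active_blocks.fetch_sub(1) == 1) {
            one_time_fast_leave.store(set_one_time_fast_leave);
            return true;
        }
        return false;
    }

    bool fast_leave_requested() const { return one_time_fast_leave.load(); }
};
```

With atomics, this also shows why concurrent blocks with the same leave request compose trivially: whichever end call is last, the stored request is identical.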

Contributor

@aleksei-fedotov aleksei-fedotov Nov 27, 2024

One more question here: what if the default value were the opposite of the one specified in the arena setting? Meaning that if the arena is constructed with the "fast leave" policy, then each parallel block/phase would have "delayed leave". I understand that this might be perceived as even more confusing, but I just don't quite understand the idea of giving the user the additional possibility of specifying a parallel phase that ends with the same workers' leave policy as the arena. What would the user be saying by this? Why use the "parallel block" API in this case at all?

Since we only have two policies, perhaps, it would be better to introduce something like:

class task_arena {
public:
    // ... current declarations go here, including the constructor with the new parameter
    task_arena(/*...*/, workers_leave wl = workers_leave::delayed);

    // Denote a parallel phase that has the alternative workers' leave behavior,
    // in this case "fast leave". If the arena was initialized with the "fast leave"
    // setting, then such an alternative phase will have the alternative,
    // i.e. "delayed leave", behavior.
    void alternative_parallel_phase_begin();
    void alternative_parallel_phase_end();

    // or even (in addition?)
    template <typename F>
    void alternative_parallel_phase(F&& functor);
};

If later demand appears, other parameters could be added to these functions.

Contributor

It seems to me that this discussion is not just about API semantics but really about a different architecture, where each parallel block/stage might have its own customizable retention policy. It differs significantly from what is proposed, so I think it needs deeper elaboration, perhaps with new state change diagrams etc.

Contributor

@akukanov akukanov Nov 27, 2024

More on this:

I just don't quite understand the idea of having additional possibility for the user to specify a parallel phase that ends with the same arena's workers leave policy.

The primary point of a parallel phase is not to set a certain leave policy when the phase ends (for which a single "switch the state" method would suffice). The parallel phase allows using a distinct retention policy during the phase, for example, to prolong the default busy-wait duration or to utilize different heuristics. That is, it does not switch between "fast" and "delayed" but introduces a third possible state of thread retention.

Once all initiated parallel phases end, the retention policy returns, according to the proposed design, to the state set at arena construction. However, the use case for threads to leave as soon as possible still remains. For that reason, the extra argument at the end of the block is useful to indicate a "one time fast leave" request.

Hope that helps.
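The description above amounts to three retention states rather than two. A minimal model of those transitions, where all names and the transitions themselves are assumptions drawn from this comment rather than oneTBB internals:

```cpp
#include <cassert>

// Illustrative state machine for the retention behavior described above:
// an active parallel phase introduces a third retention state on top of
// the "fast"/"delayed" setting chosen at arena construction, and ending
// the last phase may request a one-time fast leave. Not oneTBB API.
enum class retention { fast, delayed, phase_active };

struct arena_model {
    retention construction_setting;  // fixed at arena construction
    retention current;               // current retention behavior
    int open_phases = 0;

    explicit arena_model(retention s) : construction_setting(s), current(s) {}

    void start_parallel_phase() {
        ++open_phases;
        current = retention::phase_active;  // distinct retention heuristics
    }

    void end_parallel_phase(bool one_time_fast_leave = false) {
        if (--open_phases == 0)
            current = one_time_fast_leave ? retention::fast
                                          : construction_setting;
    }

    // New work submitted: any one-time override is dropped and the
    // construction-time setting is restored.
    void submit_work() {
        if (open_phases == 0) current = construction_setting;
    }
};
```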

Comment on lines 140 to 141
void start_parallel_block();
void end_parallel_block(bool set_one_time_fast_leave = false);
Contributor

With the arena's workers_leave behavior and scoped_parallel_block both specified in the constructors, this change in behavior set at the end of a parallel block looks inconsistent.

Would it be better to have this setting be specified at the start of a parallel block rather than at its end?

Contributor

I think it would make the API harder from a usability standpoint. The user would need to somehow link this parameter from the start of the block to its end.
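A scoped_parallel_block constructor was mentioned earlier in the thread; an RAII guard is one way a start-side setting could be carried to the matching end call without the user linking the two by hand. A minimal sketch, with arena_stub standing in for tbb::task_arena and all names being illustrative rather than oneTBB API:

```cpp
#include <cassert>

// Stand-in for the proposed task_arena members; not oneTBB API.
struct arena_stub {
    int active = 0;
    bool fast_leave = false;
    void start_parallel_block() { ++active; }
    void end_parallel_block(bool one_time_fast_leave) {
        if (--active == 0) fast_leave = one_time_fast_leave;
    }
};

// The request is given once, at construction; the destructor forwards it
// to the matching end call, so start and end cannot be mismatched.
class scoped_parallel_block {
    arena_stub& arena_;
    bool fast_leave_;
public:
    explicit scoped_parallel_block(arena_stub& a, bool fast_leave = false)
        : arena_(a), fast_leave_(fast_leave) {
        arena_.start_parallel_block();
    }
    ~scoped_parallel_block() { arena_.end_parallel_block(fast_leave_); }
};
```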

Comment on lines +188 to +189
* What if different types of workloads are mixed in one application?
* What if there are concurrent calls to this API?
Contributor

See my comment above about making the approach a bit more generic. Essentially, I think we can write something like "implementation-defined" for the case of concurrent calls to this API. However, it seems to me that the behavior should be somewhat relaxed, so to speak: if there is at least one "delayed leave" request happening concurrently with possibly a number of "fast leave" requests, then the "delayed leave" policy prevails.

Also, having the request stated up front allows the scheduler to learn the runtime situation earlier, hence making better decisions about the optimal workers' behavior.

Comment on lines 119 to 122
enum class workers_leave : /* unspecified type */ {
fast = /* unspecified */,
delayed = /* unspecified */
};
Contributor

We need to find better names for the enum class and its values.
I am specifically concerned about the use of "delayed", given that the actual behavior might be platform specific, not always delayed. But workers_leave is not a very good name either.

Contributor

@isaevil isaevil Nov 27, 2024

I've been thinking... Combined with your previous comment about automatic, perhaps we could have 3 modes instead:

  • automatic - platform specific default setting
  • fast (or any other more appropriate name)
  • delayed (or any other more appropriate name)

If we assume that we have these 3 modes now, the fast and delayed modes would enforce the behavior regardless of the platform. That would give the user more control while preserving usability (for example, automatic would translate to the fast option on hybrid systems).

What do you think?
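The three-mode idea above boils down to a resolution step inside the implementation. A sketch under stated assumptions: the enum name, the resolve function, and the hybrid-CPU rule are all taken from the examples in this discussion, not from shipped oneTBB behavior.

```cpp
#include <cassert>

// Illustrative three-mode policy: "automatic" is resolved by the
// implementation to a concrete policy per platform, while "fast" and
// "delayed" are enforced as requested. Names are assumptions.
enum class leave_policy { automatic, fast, delayed };

// Hypothetical resolution inside the scheduler: automatic maps to fast
// on hybrid systems and to delayed otherwise.
leave_policy resolve(leave_policy requested, bool is_hybrid_cpu) {
    if (requested != leave_policy::automatic)
        return requested;  // user-enforced, platform-independent
    return is_hybrid_cpu ? leave_policy::fast : leave_policy::delayed;
}
```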

Contributor

@akukanov akukanov Nov 27, 2024

Yes, that's an option we can consider. We'd need to think through, though, what the "enforced" delayed mode will guarantee.

For the "automatic" mode, we say that after work completion threads might be retained in the arena for an unspecified time chosen by internal heuristics. How would the definition of an "enforced" delayed leave mode differ, and what additional guarantees would it provide to make it worth choosing for users?

Contributor

It seems that if we have a dedicated automatic policy, the delayed policy should guarantee at least some level of thread retention relative to the fast policy.

Contributor Author

I think the whole structure is just a hint to the scheduler, with no real guarantees provided; therefore, from the description we get:

  1. threads will leave without delay with "Fast"
  2. threads might have a delay before leaving with "Delayed"
  3. "Automatic" will let the implementation decide which state to choose

From the implementation standpoint it makes a lot of sense, since we will have clear invariants for the arena, e.g., a default arena on a hybrid platform will have the "Fast" leave state.
So it definitely improves the implementation logic while bringing some potential value to the users ("Delayed" will behave as thread retention if the user explicitly specified it).

Contributor

@akukanov Regarding the enumeration class name: do you find leave_policy a better name than workers_leave? It seems more natural to me when used during arena construction:

tbb::task_arena ta{..., tbb::task_arena::leave_policy::fast};

Contributor

@akukanov akukanov Nov 28, 2024

@pavelkumbrasev please describe the semantics of the delayed hint in a way that is meaningful for users.

For example, the description I used above

after work completion threads might be retained in the arena for unspecified time chosen by internal heuristics.

as well as what you mentioned

threads might have a delay before leaving with "Delayed"

are good for automatic but rather bad for delayed, because all aspects of the decision are left to the implementation. Even if changed to "will be retained for unspecified time", it would still be rather weak, because the time can be anything, including arbitrarily close to 0; that is, it's not really different from automatic, and there is no clear reason to prefer it.

Contributor Author

Sure. Perhaps we are not on the same page. I would like it to be:

  1. delayed - after work completion threads might be retained in the arena for unspecified time chosen by internal heuristics.
  2. automatic - implementation will choose between "fast" and "delayed"

Automatic is basically another heuristic for choosing between "fast" and "delayed" based on the underlying HW. Perhaps automatic is not the best name.

Contributor

@akukanov akukanov Nov 28, 2024

With this definition, I see no difference for users between "automatic" and "delayed", because in both cases the decision of whether to delay or not to delay, and for how long, is left to the implementation. If that is the intended behavior, let's not complicate the API with a redundant enum value.

Contributor

Okay, let's have the fast and automatic policies then, since we're not sure right now whether we can provide the user with meaningful guarantees for a delayed policy.

Comment on lines +104 to +106
* The semantics for retaining threads is a hint to the scheduler;
thus, no real guarantee is provided. The scheduler can ignore the hint and
move threads to another arena or to sleep if conditions are met.
Contributor

Yes, it's a hint without any strong guarantees, but at least the intent should be outlined, and/or the use cases where introduction of a parallel phase could positively impact performance.

Contributor

I am not entirely sure how to formulate that better. I provided an example of how a possible implementation of "parallel phase" can benefit its users.

* End of a parallel phase:
* Indicates the point from which the scheduler may drop a hint and
no longer retain threads in the arena.
* Indicates that arena should enter the “One-time Fast leave” thus workers can leave sooner.
Contributor

The wording "should enter..." might create or add to confusion. As already mentioned, from the usage standpoint "one time fast leave" is not a state to enter, but rather a command to execute when the parallel phase has ended and no other one is active.
So I would change it to something like "Indicates that worker threads should avoid busy-waiting once there is no more work in the arena".

Contributor

Replaced with the suggested wording.

Comment on lines 111 to 113
* If work was submitted immediately after the end of the parallel phase,
the default arena behavior with regard to "workers leave" policy is restored.
* If the default "workers leave" policy was the "Fast leave", the result is NOP.
Contributor

I would replace both these bullets with e.g. "Temporarily overrides the default arena policy/behavior, which will be restored when new work is submitted"

Contributor

Done.

Comment on lines 165 to 166
By the contract, users should indicate the end of _parallel phase_ for each
previous start of _parallel phase_.<br>
Contributor

Maybe "The parallel phase continues until for each previous start_parallel_phase call a matching end_parallel_phase call has been made."

Contributor

What about "The parallel phase continues until each previous start_parallel_phase call has a matching end_parallel_phase call"?

Signed-off-by: Isaev, Ilya <[email protected]>