Conversation
The bifurcation of `AnyOf`/`Conjunction` and `RuleBased` doesn't feel right. They are all in the realm of Boolean logic, with or without short-circuiting; can we not come up with a more streamlined scheme?
For example, why does the `RuleBased` sampler need to take a list of predicates, instead of a single predicate, achieving multi-predicate behavior through composition?
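For illustration, a minimal Go sketch (hypothetical `Predicate` type, not part of the OTEP) of how a single-predicate rule could still cover multi-predicate cases through Boolean composition:

```go
package sketch

// SpanInfo stands in for whatever span data a Predicate gets to inspect
// (name, attributes, ...); the real shape would be defined by the OTEP.
type SpanInfo struct {
	Name       string
	Attributes map[string]string
}

// Predicate is a hypothetical span matcher.
type Predicate func(SpanInfo) bool

// AnyOfPredicates matches when at least one part matches (Boolean OR).
func AnyOfPredicates(parts ...Predicate) Predicate {
	return func(s SpanInfo) bool {
		for _, p := range parts {
			if p(s) {
				return true
			}
		}
		return false
	}
}

// AllOfPredicates matches only when every part matches (Boolean AND).
func AllOfPredicates(parts ...Predicate) Predicate {
	return func(s SpanInfo) bool {
		for _, p := range parts {
			if !p(s) {
				return false
			}
		}
		return true
	}
}
```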
text/0250-Composite_Samplers.md (outdated):

> The `RuleBased` sampler SHOULD NOT accept lists of different length, i.e. it SHOULD report an error. Implementations MAY allow for replacing both lists with a list of pairs (`Predicate`, `Sampler`), if this is supported by the platform.
>
> For making the sampling decision, if the `Span` kind matches the specified kind, the sampler goes through the lists in the declared order. If a Predicate matches the `Span` in question, the corresponding `Sampler` will be called to make the final sampling decision. If the `SpanKind` does not match, or none of the Predicates evaluates to `true`, the final decision is `DROP`.
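Read literally, the quoted paragraph describes a first-match walk over the rules. A minimal Go sketch of that flow, with hypothetical types standing in for the OTEP's `Predicate` and `Sampler`:

```go
package sketch

// Illustrative stand-ins; the OTEP defines the real types.
type SpanData struct {
	Kind       string
	Attributes map[string]string
}

type Predicate func(SpanData) bool

type Decision int

const (
	Drop Decision = iota
	RecordOnly
	RecordAndSample
)

type Sampler interface {
	ShouldSample(SpanData) Decision
}

type Rule struct {
	Match    Predicate
	Delegate Sampler
}

// ruleBasedDecision walks the rules in declared order: the first matching
// Predicate hands the final decision to its Sampler; if the span kind does
// not match or no Predicate matches, the span is dropped.
func ruleBasedDecision(configuredKind string, rules []Rule, s SpanData) Decision {
	if s.Kind != configuredKind {
		return Drop
	}
	for _, r := range rules {
		if r.Match(s) {
			return r.Delegate.ShouldSample(s)
		}
	}
	return Drop
}
```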
> If a Predicate matches the `Span` in question, the corresponding `Sampler` will be called to make the final sampling decision.
This will result in poor observability, as the underlying sampler can be a primitive one, so the only attributes it will be able to add will be related to its algorithm, not to the fact that a specific predicate was matched. I would much rather see an ability for the user to give names to individual decision points (this applies to all composite samplers). In addition to attributes, these samplers could be emitting metrics with corresponding names.
Interesting ideas - do you have a specific use case in mind?
Anyway, I do not quite agree with your claim of poor observability. Most Predicates are expected to test span attributes only and therefore their underlying logic can be re-evaluated at any later time.
In the example at the top you have an English description of different categories of spans being sampled. As a user, I would appreciate a mechanism to not only keep those descriptions (maybe in a shorter form of "names"), but also to have observability where I can see which triggers fire how often (either by recording the trigger name as a span attribute or by emitting metrics labeled with those names). Your current mechanism does not allow for that, afaict.
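One hypothetical shape for that kind of observability, with named rules whose fire counts could back a metric and whose names could be recorded on the span (the attribute key and all types here are made up for illustration):

```go
package sketch

import "sync/atomic"

// NamedRule pairs a user-visible name with a matcher, so the sampler can
// report which decision point fired.
type NamedRule struct {
	Name  string
	Match func(map[string]string) bool // matches span attributes
	fires atomic.Int64                 // simple per-rule counter standing in for a metric
}

// matchAndRecord returns the name of the first matching rule and, as a side
// effect, bumps its counter; a caller could also attach the name as a span
// attribute under some agreed key such as the (made-up) "sampling.rule".
func matchAndRecord(rules []*NamedRule, attrs map[string]string) (string, bool) {
	for _, r := range rules {
		if r.Match(attrs) {
			r.fires.Add(1)
			return r.Name, true
		}
	}
	return "", false
}
```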
…w clarifying sentences.
Haven't been able to finish this review. (Still need to understand Approach 2.) Sorry for the delay!
text/0250-Composite_Samplers.md (outdated):

> - The set of span Attributes to be added to the `Span` is the sum of the sets of Attributes as provided by the delegate samplers within their `SamplingResults`s.
> - The `TraceState` to be used with the new `Span` is obtained by cumulatively applying all the potential modfications of the parent `TraceState` by the delegate samplers, with special handling of the `th` sub-key (the sampling rejection `THRESHOLD`) for the `ot` entry as described below.
>
> If the final sampling Decision is `DROP` or `RECORD_ONLY`, the `th` entry MUST be removed.
If all delegated samplers are consistent, it still makes sense to set the `th` value even in the case of negative sampling decisions.

One could perhaps say that, in general, for both `EachOf` and `AnyOf`, the `th` value must be set if and only if all delegated samplers are consistent (regardless of the sampling decision).
I like this idea, but didn't we expect that `th` is removed after a negative sampling decision? Similarly to how the p-values were handled?
I see. If we never put `th` to the trace state for negative sampling decisions, which I think makes sense, then again what is the purpose of an empty `th` field representing 0% probability? @jmacd
I had to read through the latest iteration of this document before coming back to this thread.

I don't think an "empty" `th` field should be meaningful except to represent unknown information, in which case the field should be absent, not present with an empty value.

I guess the reason this might matter, the explicit unsetting of th-values for negative sampling decisions, is that for approach-1 spans where a `RECORD_ONLY` decision is reached, there will be a copy of the span for reading, in memory. When the span is accessed, both the former `th` value and the unset `th` value are somehow meaningful.

I would like us to come back to the invariants we believe there are, including the `sampled` flag.

If the `sampled` flag is NOT set and `th` is non-empty, I believe we have an inconsistency. This is why I believe we should unset `th`.

If the `sampled` flag is set and the `th` is unset (or empty), I believe we have unknown sampling.
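To make those invariants concrete, a small Go sketch (my reading of this comment, not normative) classifying the four combinations of the `sampled` flag and the `th` field:

```go
package sketch

// thConsistency classifies a span's sampled flag against the presence of a
// th value in the OpenTelemetry trace state, following the invariants
// proposed above. An absent th field is modeled here as the empty string.
func thConsistency(sampledFlag bool, th string) string {
	switch {
	case sampledFlag && th != "":
		return "consistent: sampled with a known threshold"
	case sampledFlag && th == "":
		return "unknown sampling: sampled, but the probability is not recorded"
	case !sampledFlag && th != "":
		return "inconsistent: th should have been unset on a negative decision"
	default:
		return "consistent: not sampled, no threshold recorded"
	}
}
```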
text/0250-Composite_Samplers.md (outdated):

> Upon invocation of its `GetThreshold` function, the composite sampler MUST get the threshold from the delegate sampler, passing the same arguments as received.
> If using the obtained threshold value as the final threshold would entail sampling more spans than the declared target rate, the sampler SHOULD increase the threshold to a value that would meet the target rate. Several algorithms can be used for threshold adjustment, no particular behavior is prescried by the specification though.
>
> When using `ConsistentRateLimiting` in our requirements example as a replacement for `EachOf` and `RateLimiting`, we are left with no use case for any direct equivalent to `EachOf` in Approach Two.
Can you clarify this sentence for me? I am not sure I follow.
For symmetry, one would expect that we introduce something like `ConsistentEachOf`. But the only situation I see where we needed that kind of logic (each of) was with combining one sampler with a rate-limiting sampler. So now, when we have a delegate for `ConsistentRateLimiting`, that need disappears. Unless we find a use case where it is useful.
I see -- would you agree that the concept of a `ConsistentEachOf` sampler can be defined easily, and that it reduces to the maximum threshold of the delegates? It's well-defined, but not necessary for the example being developed, I think.
Yes. I do not see any difficulties in defining the behavior of `ConsistentEachOf`; it is just its usefulness that is questionable. As soon as we find a nice use case for it, it can be added to this spec.
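For what it's worth, here is a minimal Go sketch of that reduction (hypothetical interface and names; the threshold is treated as OTEP 235's 56-bit rejection threshold, so "most restrictive" means "largest value"):

```go
package sketch

// ConsistentSamplerSketch is a hypothetical minimal form of the Approach Two
// interface: it returns a rejection threshold, with ok=false standing for a
// non-probabilistic DROP.
type ConsistentSamplerSketch interface {
	GetThreshold() (threshold uint64, ok bool)
}

// consistentEachOfThreshold would be essentially the whole decision logic of
// a hypothetical ConsistentEachOf: every delegate must sample, so the
// combined rejection threshold is the maximum of the delegates' thresholds,
// and any non-probabilistic DROP makes the whole composite drop.
func consistentEachOfThreshold(delegates []ConsistentSamplerSketch) (uint64, bool) {
	var maxT uint64
	for _, d := range delegates {
		t, ok := d.GetThreshold()
		if !ok {
			return 0, false // one delegate drops unconditionally
		}
		if t > maxT {
			maxT = t
		}
	}
	return maxT, true
}
```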
text/0250-Composite_Samplers.md (outdated):

> ### Limitations of composite samplers in Approach Two
>
> While making sampling decisions with samplers from Approach Two is more efficient and avoids dealing with non-mainstream cases, it puts some limits on the capabilities of the Consistent Probability Samplers. In particular, a custom CP sampler that wishes to add a span `Attribute` or modify TraceState will be out of luck if it is used as a delegate.
This leaves me thinking about future work, possibly out-of-scope in this document. It looks like both approaches are necessary, one for historical compatibility and one for enabling future span-to-metrics and logs-to-metrics pipelines.

Do you agree that the two approaches may be complementary, and can co-exist? This leads to two suggestions:

- For the approach-one case, is there a set of rules that ensure, when any non-probabilistic or CP-incompatible samplers are used, that the `th` value becomes "unknown" by being unset? (I think yes; this is discussed in comment threads above.)
- To handle the limitations here, for approach 2, is there a way to combine approach-1 and approach-2 samplers so the `th` field is governed but otherwise Samplers can extend attributes, and so on? (I think yes.)
For the second case, I imagine an existing sampler that is written like:

    if arbitraryPredicate(context) {
      result.decision = RECORD_AND_SAMPLE
      result.attributes = someNewAttributes()
    } else {
      result.decision = DROP
    }
can be rewritten, probably, as:

    S = Conjunction(
      ConsistentRuleBased(
        (arbitraryPredicate => ConsistentAlwaysOn),
        true => ConsistentAlwaysOff,
      ),
      AttributeInserter(
        (result) => {
          result.attributes = someNewAttributes
          return result
        }))
If this works, it seems we ought to be able to construct an implementation that combines approach-1 and approach-2 samplers via, essentially, this rewrite. Does this sound possible?
Yes, I expect both approaches to be useful, targeting different users. I do not think we should abandon either one in favor of the other.
With respect to your point 1 above, the intention is to handle the `th` value "correctly". But we cannot just say that in all cases if any non-CP sampler is used then the `th` value needs to be erased. Both `RuleBased` and `AnyOf` samplers can have non-CP delegates, yet still deliver valid `th` values.
It is possible to use Approach Two delegates within Approach One samplers, and this is more or less what your example does. So it is a workaround for some limitations of Approach Two.
Got it. I think we're in agreement on this topic. I think our next step is really to see what this looks like in a prototype implementation.
text/0250-Composite_Samplers.md (outdated):

> - If all of the delegate Decisions are `RECORD_AND_SAMPLE`, the composite sampler MUST return `RECORD_AND_SAMPLE` Decision as well.
>   If any of the delegate Decisions is `DROP`, the composite sampler MUST return `DROP` Decision.
>   Otherwise, if any of the delegate Decisions is `RECORD_ONLY`, the composite sampler MUST return `RECORD_ONLY` Decision.
> - If the resulting sampling Decision is `DROP`, the set of span Attributes to be added to the `Span` is empty. Otherwise, it is the sum of the sets of Attributes as provided by the delegate samplers within their `SamplingResults`s. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect.
If we want to have symmetry (see above):
Current: "If the resulting sampling Decision is `DROP`, the set of span Attributes to be added to the `Span` is empty. Otherwise, it is the sum of the sets of Attributes as provided by the delegate samplers within their `SamplingResults`s. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect."

Suggested: "If the resulting sampling Decision is `DROP`, the union of span Attributes to be added to the `Span` is empty. Otherwise, it is the sum of the sets of Attributes as provided by the delegate samplers within their `SamplingResults`s. Conflicting attribute keys must be removed."
Removing conflicting attributes is an interesting concept. However, if we were to set attributes using `Span.SetAttribute`, the specification says "Setting an attribute with the same key as an existing attribute SHOULD overwrite the existing attribute's value", so there would be an inconsistency. Adding entries with conflicting keys to `TraceState` also overwrites the previous entries - another inconsistency.

And we would be left with an interesting corner case: what to do if two samplers attempt to set Attributes with the same keys and values?
> Upon invocation of its `shouldSample` method, it MUST go through the whole list and invoke `shouldSample` method on each delegate sampler, passing the same arguments as received, and collecting the delegates' sampling Decisions.

I understand this means the delegate samplers must be called with the same attributes. After invoking the delegates we have different sets of attributes that must be merged to a final result. Given an initial set of attributes, individual copies of this initial set are then passed to the delegates, who may set/modify attributes. As the final step we have to combine all the resulting sets of attributes, including the initial one. To achieve symmetry, one could define that an attribute is only changed/set if the modifications by the delegates are not conflicting, meaning that they don't try to set different values for the same key. If we take the initial set of attributes and apply only the non-conflicting modifications using `Span.SetAttribute`, we should be fine.
okay, so you suggest looking only at the conflicts between the delegates - it makes a lot of sense. I had thought about conflicts with the initial attributes as well, my bad. I like this idea. Unfortunately, similar treatment of TraceState modifications does not seem possible with the current implementation of Java SDK. I did not look at other platforms.
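A minimal Go sketch of the merge policy discussed in this thread, using plain string maps instead of the SDK's attribute types (illustrative only, not part of the OTEP):

```go
package sketch

// mergeDelegateAttributes keeps only the attribute modifications on which the
// delegates do not conflict: a key set by several delegates survives only if
// they all agree on its value. The surviving entries would then be applied on
// top of the initial attributes (e.g. via Span.SetAttribute).
func mergeDelegateAttributes(delegateAttrs []map[string]string) map[string]string {
	merged := map[string]string{}
	conflicting := map[string]bool{}
	for _, attrs := range delegateAttrs {
		for k, v := range attrs {
			if prev, seen := merged[k]; seen && prev != v {
				conflicting[k] = true
			}
			merged[k] = v
		}
	}
	for k := range conflicting {
		delete(merged, k)
	}
	return merged
}
```

With this shape, the corner case raised above (two delegates setting the same key to the same value) simply does not count as a conflict.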
text/0250-Composite_Samplers.md (outdated):

> - If all of the delegate Decisions are `RECORD_AND_SAMPLE`, the composite sampler MUST return `RECORD_AND_SAMPLE` Decision as well.
>   If any of the delegate Decisions is `DROP`, the composite sampler MUST return `DROP` Decision.
>   Otherwise, if any of the delegate Decisions is `RECORD_ONLY`, the composite sampler MUST return `RECORD_ONLY` Decision.
> - If the resulting sampling Decision is `DROP`, the set of span Attributes to be added to the `Span` is empty. Otherwise, it is the sum of the sets of Attributes as provided by the delegate samplers within their `SamplingResults`s. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect.
Suggested change:

Current: "If the resulting sampling Decision is `DROP`, the set of span Attributes to be added to the `Span` is empty. Otherwise, it is the sum of the sets of Attributes as provided by the delegate samplers within their `SamplingResults`s. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect."

Suggested: "If the resulting sampling Decision is `DROP`, the set of span Attributes to be added to the `Span` is empty. Otherwise, it is the union of the sets of Attributes as provided by the delegate samplers within their `SamplingResults`s. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect."
text/0250-Composite_Samplers.md (outdated):

> - If all of the delegate Decisions are `DROP`, the composite sampler MUST return `DROP` Decision as well.
>   If any of the delegate Decisions is `RECORD_AND_SAMPLE`, the composite sampler MUST return `RECORD_AND_SAMPLE` Decision.
>   Otherwise, if any of the delegate Decisions is `RECORD_ONLY`, the composite sampler MUST return `RECORD_ONLY` Decision.
> - The set of span Attributes to be added to the `Span` is the sum of the sets of Attributes as provided by those delegate samplers which produced a sampling Decision other than `DROP`. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect.
Suggested change:

Current: "The set of span Attributes to be added to the `Span` is the sum of the sets of Attributes as provided by those delegate samplers which produced a sampling Decision other than `DROP`. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect."

Suggested: "The set of span Attributes to be added to the `Span` is the sum of the union of Attributes as provided by those delegate samplers which produced a sampling Decision other than `DROP`. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect."
text/0250-Composite_Samplers.md (outdated):

> If any of the delegate Decisions is `RECORD_AND_SAMPLE`, the composite sampler MUST return `RECORD_AND_SAMPLE` Decision.
> Otherwise, if any of the delegate Decisions is `RECORD_ONLY`, the composite sampler MUST return `RECORD_ONLY` Decision.
> - The set of span Attributes to be added to the `Span` is the sum of the sets of Attributes as provided by those delegate samplers which produced a sampling Decision other than `DROP`. In case of conflicting attribute keys, the attribute definition from the last delegate that uses that key takes effect.
> - The `TraceState` to be used with the new `Span` is obtained by cumulatively applying all the potential modfications of the parent `TraceState` by the delegate samplers. In case of conflicting entry keys, the entry definition provided by the last delegate that uses that key takes effect. However, the `th` sub-key (the sampling rejection `THRESHOLD`) for the `ot` entry gets special handling as described below.
If we want to have symmetry:
Current: "The `TraceState` to be used with the new `Span` is obtained by cumulatively applying all the potential modfications of the parent `TraceState` by the delegate samplers. In case of conflicting entry keys, the entry definition provided by the last delegate that uses that key takes effect. However, the `th` sub-key (the sampling rejection `THRESHOLD`) for the `ot` entry gets special handling as described below."

Suggested: "The `TraceState` to be used with the new `Span` is obtained by cumulatively applying all the potential modifications of the parent `TraceState` by the delegate samplers. Conflicting entry keys must be removed. However, the `th` sub-key (the sampling rejection `THRESHOLD`) for the `ot` entry gets special handling as described below."
It would be nice to have, but I see it as difficult to implement. In the Java SDK the `SamplingResult` can modify a given `TraceState`, but the modification itself (delta) is not tangible. Also, in contrast with the Attributes, a trace state modification can remove keys (at least I have not found anything that would prohibit doing that).
text/0250-Composite_Samplers.md (outdated):

> ### Limitations of composite samplers in Approach Two
>
> While making sampling decisions with samplers from Approach Two is more efficient and avoids dealing with non-mainstream cases, it puts some limits on the capabilities of the Consistent Probability Samplers. In particular, a custom CP sampler that wishes to add a span `Attribute` or modify TraceState will be out of luck if it is used as a delegate.
Since it is explicitly mentioned as a limitation, I wonder why a consistent sampler would add any span attributes or change the trace state at all (other than `th` and `rv`). What would be a use case for this?
Good question. With trace state, I imagine a head sampler can mark some traces for downstream (head) samplers and for further trace processing, effectively classifying traces according to arbitrary criteria. Obviously, this requires writing custom samplers, but these samplers can still be CP compliant.
Adding Attributes is similar, even though it works only on the Span level and only for sampled Spans.
Got it. I think this could also be achieved with Approach Two by extending the ConsistentSampler interface with methods other than `getThreshold`, such as `getAttributes`, and defining some logic for how those attributes are finally added in case the overall sampling decision is positive. So I would not necessarily see that as a limitation of Approach Two, as we also have to define some logic for the attributes for Approach One.
Yes, and we could do a similar thing with `TraceState` by providing `adjustTraceState`. For that call, we could pass the final `SamplingDecision` (which can be different from what the delegate decided). One difficulty is that for `RuleBased`, the Predicates would get re-evaluated in order to route the call to the correct delegate.
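Put together, the extension discussed in the last few comments might look roughly like the following Go interface. The method names mirror the comments above but are hypothetical; this is not the actual contrib API.

```go
package sketch

import (
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

type SamplingDecision int

const (
	Drop SamplingDecision = iota
	RecordOnly
	RecordAndSample
)

// ExtendedConsistentSampler sketches the idea of keeping a threshold getter
// as the core of Approach Two while letting a sampler contribute attributes
// and trace-state edits once the overall decision is known.
type ExtendedConsistentSampler interface {
	// GetThreshold returns the 56-bit rejection threshold; ok=false stands
	// for a non-probabilistic DROP.
	GetThreshold() (threshold uint64, ok bool)

	// GetAttributes returns attributes to add when the final decision is
	// positive.
	GetAttributes(final SamplingDecision) []attribute.KeyValue

	// AdjustTraceState lets the sampler amend the trace state, given the
	// final decision (which may differ from what this delegate wanted).
	AdjustTraceState(ts trace.TraceState, final SamplingDecision) trace.TraceState
}
```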
I am thinking in the same direction. I have a sandbox where I've begun drafting my hopes&dreams for a V2 sampler API, right now I have something like:

    type SamplingResult struct {
      Decision SamplingDecision
      Mutation func(trace.TraceState) ([]attribute.KeyValue, trace.TraceState)
    }
For an Approach-2 sampler, the idea is that a composition of sampler delegates will first evaluate the decision, then it would modify the special TraceState fields, then for the positive decision makers it would allow them (sequentially) to apply modified attributes and modify the trace state in other ways. I think it will be fine to say that attributes are applied in the same order as the original delegate list, and that later samplers clobber earlier samplers in case of conflicts. I will flesh this out more, but I think we can get past the trouble associated with modifying attributes and tracestate independent of the decision & essential tracestate updates.
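A small Go sketch of that two-phase composition, reusing the `SamplingResult` shape from the comment above; the composition logic itself is only my reading of the description, not a committed design:

```go
package sketch

import (
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

type SamplingDecision int

const (
	Drop SamplingDecision = iota
	RecordOnly
	RecordAndSample
)

// SamplingResult follows the V2 shape sketched in the comment above.
type SamplingResult struct {
	Decision SamplingDecision
	Mutation func(trace.TraceState) ([]attribute.KeyValue, trace.TraceState)
}

// applyMutations is phase two: after the decision and the essential trace
// state fields are settled, the mutations of the positive deciders are
// applied sequentially in delegate order, so later delegates clobber earlier
// ones when attribute keys conflict downstream.
func applyMutations(ts trace.TraceState, results []SamplingResult) ([]attribute.KeyValue, trace.TraceState) {
	var attrs []attribute.KeyValue
	for _, r := range results {
		if r.Decision == Drop || r.Mutation == nil {
			continue
		}
		var added []attribute.KeyValue
		added, ts = r.Mutation(ts)
		attrs = append(attrs, added...)
	}
	return attrs, ts
}
```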
I modified the document to present this two-phase process.
I've begun to prototype jmacd/opentelemetry-go#877
Co-authored-by: Otmar Ertl <[email protected]>
Co-authored-by: J. Kalyana Sundaram <[email protected]>
Adding clarifications and cleaning up inconsistencies.
text/0250-Composite_Samplers.md (outdated):

> Many users are interested in [Consistent Probability Sampling](https://opentelemetry.io/docs/specs/otel/trace/tracestate-probability-sampling/#consistent-probability-sampler) (CP), as it gives them a chance to calculate span-based metrics even when sampling is active. The configuration presented above uses the traditional samplers, which do not offer this benefit.
>
> Here is how an equivalent configuration can be put together using [CP samplers](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/tracestate-probability-sampling.md#samplers). In this example, the following implementations are used:
I think sampling configurations 1 and 2 are not equivalent. To demonstrate this, consider the following example:
Assume you have two types of root spans A and B, which should be sampled with probabilities of 40% and 20%, respectively. Overall, at most 750 spans should be sampled per minute. The corresponding configuration 1 for this would be

    Conjunction(
      RuleBased(ROOT, {IsTypeA => TraceIdRatioBased(0.4), IsTypeB => TraceIdRatioBased(0.2)}),
      RateLimitingSampler(750 * 60)
    )
If the original rates are 1000/min for A and 2000/min for B, 325/min would be sampled for each of both, giving a total of exactly 750/min as desired.
Correspondingly, configuration 2 would be

    EachOf(
      RuleBased(ROOT, {IsTypeA => ConsistentFixedThreshold(0.4), IsTypeB => ConsistentFixedThreshold(0.2)}),
      ConsistentRateLimiting(750 * 60)
    )
This would give 250/min for A and 400/min for B, which gives a total of 650/min. Hence, the result differs from that of configuration 1.
To make configuration 2 equivalent, we must replace `EachOf` with `Conjunction`. Furthermore, we would need a weighted variant of the `ConsistentRateLimiting` sampler, which analyzes the recent distribution of the `th` values given by the first sampler of the conjunction and which adapts the `th` values in such a way that the desired sampling rate is met. Ideally, the `ConsistentWeightedRateLimiting` would increase the thresholds for A and B from 0.6 to 0.675 and from 0.8 to 0.8375, respectively. (Assuming thresholds are values from [0,1].) This would again give 325/min for both, like configuration 1.

    Conjunction(
      RuleBased(ROOT, {IsTypeA => ConsistentFixedThreshold(0.4), IsTypeB => ConsistentFixedThreshold(0.2)}),
      ConsistentWeightedRateLimiting(750 * 60)
    )
> EachOf(
>   RuleBased(ROOT, {IsTypeA => ConsistentFixedThreshold(0.4), IsTypeB => ConsistentFixedThreshold(0.2)}),
>   ConsistentRateLimiting(750 * 60)
> )
>
> This would give 250/min for A and 400/min for B, which gives a total of 650/min. Hence, the result differs from that of configuration 1.

The total of 650/min is a concern; I'm less worried about the individual distribution between type A and B. After all, Consistent samplers have very different behavior than leaky-bucket (balancing vs. proportional).
> To make configuration 2 equivalent, we must replace `EachOf` with `Conjunction`.

Yes, `Conjunction` will work better here. Which raises another question: do we need `EachOf` at all (neither in Approach One nor Approach Two)?
> Furthermore, we would need a weighted variant of the `ConsistentRateLimiting` sampler, which analyzes the recent distribution of the `th` values given by the first sampler of the conjunction and which adapts the `th` values in such a way that the desired sampling rate is met. Ideally, the `ConsistentWeightedRateLimiting` would increase the thresholds for A and B from 0.6 to 0.675 and from 0.8 to 0.8375, respectively. (Assuming thresholds are values from [0,1].) This would again give 325/min for both, like configuration 1.
Looking at the `th` values only makes sense if the given sampler is used as the second delegate in `Conjunction`.

So how about generalizing this idea by defining a `Weighted` or `Proportional` sampler which analyzes the `rv` values in the population to sample? We cannot use a sampling probability that is influenced by the span's own `rv`, but that does not mean it cannot be influenced by the `rv` values from previous spans. Keeping track of the distribution of the `rv` values at a given sampling stage allows us not only to define a proportional variant of `RateLimitingSampler`, but also something like a `ProportionalFixedRate` sampler. It would always reduce the volume of spans according to the defined factor. Not much use in head sampling perhaps, but I believe it would be useful in tail sampling.

More specifically, such a sampler would track the average `rv` of the incoming spans. For an unsampled population this average is expected to be 0.5. If the population has been pre-sampled, the spans with smaller `rv` would have been eliminated, and the observed average would be higher. This would allow the samplers with proportional behavior to adjust their sampling probability accordingly (just as you proposed in your example).
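One way to make the rv-tracking idea concrete, under the simplifying assumption that upstream consistent samplers removed exactly the spans whose randomness value fell below a single threshold (hypothetical Go code, not part of the OTEP):

```go
package sketch

// If upstream sampling dropped every span whose rv (normalized here to
// [0, 1)) fell below a threshold T, the surviving rv values are uniform on
// [T, 1), so their mean m satisfies m = (T+1)/2 and the upstream pass
// probability is roughly 2*(1-m). A proportional sampler could use that
// estimate to scale its own probability.
type rvTracker struct {
	sum   float64
	count int
}

func (t *rvTracker) observe(rv float64) {
	t.sum += rv
	t.count++
}

// estimatedUpstreamProbability estimates the fraction of the original
// population still present at this sampling stage.
func (t *rvTracker) estimatedUpstreamProbability() float64 {
	if t.count == 0 {
		return 1.0
	}
	mean := t.sum / float64(t.count)
	p := 2 * (1 - mean)
	if p > 1 {
		p = 1
	}
	return p
}
```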
I will re-review in three weeks as I set out to prototype this. I expect to find / hope to find little resistance to Approach 2 samplers from the community. Thanks @PeterF778!
> Each delegate sampler MUST be given a chance to participate in the sampling decision as described above and MUST see the same _parent_ state. The resulting sampling Decision does not depend on the order of the delegate samplers.
>
> ### Conjunction
`Conditional`, maybe, or `Contingent`?
text/0250-Composite_Samplers.md (outdated):

> The return value is a structure (`SamplingIntent`) with the following elements:
>
> - The THRESHOLD value represented as a 14-character hexadecimal string, with value of `null` representing non-probabilistic `DROP` decision,
Suggest permitting an implementation to use an unsigned representation, too.
Suggested change:

Current: "The THRESHOLD value represented as a 14-character hexadecimal string, with value of `null` representing non-probabilistic `DROP` decision,"

Suggested: "The THRESHOLD value indicating the 56-bits defined in OTEP 235, with a special value (e.g., `null`) representing the non-probabilistic `DROP` decision,"
(Now I see you used "lexicographical" below, which explains why you spelled out the 14-character hex-string part here. I can accept it this way, but in https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/93a78c6ad8f57c5e4ff29a5f41779af03fc85145/pkg/sampling/threshold.go#L37 I've used an unsigned representation ;)
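For reference, a small Go sketch of the two representations being compared, the unsigned 56-bit value and a fully padded 14-character hex string (the padding choice and helper names are illustrative; the probability math follows OTEP 235's "sample when randomness is at or above the threshold" rule):

```go
package sketch

import "fmt"

const thresholdBits = 56

// thresholdToHex renders a 56-bit rejection threshold as 14 hex characters.
func thresholdToHex(t uint64) string {
	return fmt.Sprintf("%014x", t&((1<<thresholdBits)-1))
}

// samplingProbability is the fraction of randomness values at or above t,
// assuming t < 2^56.
func samplingProbability(t uint64) float64 {
	return float64((uint64(1)<<thresholdBits)-t) / float64(uint64(1)<<thresholdBits)
}
```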
Seems related to open-telemetry/opentelemetry-specification#1844, I was looking for a link to this OTEP.

@yurishkuro Yes, this group is beginning to prototype the approach described in this OTEP in order to address open-telemetry/opentelemetry-specification#1844.
This PR modifies the Consistent Probability Samplers to make them conform to the proposed Composite Samplers at open-telemetry/oteps#250
Here is a follow-up to @jmacd's suggestion posted in the slack channel (#otel-sampling) about backward compatibility of Consistent Probability Samplers. A particular challenge while trying to inter-operate with legacy samplers is with […]

The former breaks the new structural approach to Consistent Probability Sampling, and fails if the […] The suggested solution then is that […]

The composite samplers would propagate this flag upstream. And in some cases reset it when it can be determined that the adjusted count is correct after all, in given circumstances.
OTEPs have been moved to the Specification repository. Please consider re-opening this PR against the new location. Closing.

This PR has been re-opened at the new repository, see open-telemetry/opentelemetry-specification#4321
This is another approach for introducing Composite Samplers. The previous one (A Sampling Configuration proposal) proved too large, with some controversial elements.
This OTEP is a split-off from that one, focusing just on one area - new Composite Samplers.