fix: downgrade NoopMeterProvider log to TRACE #2970
base: main
Conversation
Oh, I don't really want to bother with this CLA, as signing it would require giving away my home address. I guess this PR still serves as a good discussion point. Perhaps one of the maintainers can take this over?
Not sure if this changes your mind, but that's not true anymore.
@@ -26,7 +26,7 @@ impl NoopMeterProvider {

 impl MeterProvider for NoopMeterProvider {
     fn meter_with_scope(&self, scope: crate::InstrumentationScope) -> Meter {
-        otel_debug!(name: "NoopMeterProvider.MeterCreation", meter_name = scope.name(), message = "Meter was obtained from a NoopMeterProvider. No metrics will be recorded. If global::meter_with_scope()/meter() was used, ensure that a valid MeterProvider is set globally before creating Meter.");
+        otel_trace!(name: "NoopMeterProvider.MeterCreation", meter_name = scope.name(), message = "Meter was obtained from a NoopMeterProvider. No metrics will be recorded. If global::meter_with_scope()/meter() was used, ensure that a valid MeterProvider is set globally before creating Meter.");
Isn't debug the right severity? This is useful when users don't see metrics flowing because the no-op provider is active... You can filter out otel debug logs easily too, as in the example below:
let filter_fmt = EnvFilter::new("info").add_directive("opentelemetry=info".parse().unwrap());
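For context, here is a self-contained sketch of how such a filter might be installed, assuming the tracing-subscriber crate with its env-filter feature (the subscriber setup around the directive is an assumption, not part of the original comment):

use tracing_subscriber::EnvFilter;

fn main() {
    // Allow INFO and above globally, and pin the opentelemetry crates to
    // INFO so that their DEBUG logs (including the one from
    // NoopMeterProvider) are filtered out.
    let filter_fmt = EnvFilter::new("info")
        .add_directive("opentelemetry=info".parse().expect("valid directive"));
    tracing_subscriber::fmt().with_env_filter(filter_fmt).init();

    // Application code runs here.
}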
That is what I am doing right now, but I am not sure what other logs are being suppressed here if I only filter by info. Typically, these logs are namespaced by their crate and module path, but it seems that opentelemetry is explicitly overwriting this, so I cannot filter out logs just from the noop module.
My personal guideline for logs is:
- ERROR: Something is unrecoverably broken, likely affecting the user
- WARN: Something weird is happening, stuff might be broken in the future but for now we can continue
- INFO: Major state change in the system, likely triggered by user activity
- DEBUG: We hit an odd condition somewhere, an early return from a function, etc. In most cases, it should be conditional on something; the happy path should not log DEBUGs because that is likely too spammy.
- TRACE: Follow along with the current function's execution. Usually placed unconditionally, i.e. it logs every request, packet, interaction, etc.
The line between DEBUG and TRACE is obviously blurry, which is why this is a guideline. In this particular case, I'd argue that DEBUG is too spammy. To give you some context: your application emitted over 3000 of these logs over the course of 25 minutes. We attach multiple counters to each socket we create, and we create one UDP socket for each outgoing DNS query (DNS should always use randomized ports so it can't be fingerprinted as easily).
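As a hypothetical illustration of the DEBUG/TRACE split described above, using the tracing crate's macros (the function and its contents are made up for this example):

use tracing::{debug, trace};

fn handle_packet(payload: &[u8]) {
    // TRACE: unconditional, fires for every packet that flows through.
    trace!(len = payload.len(), "handling packet");

    if payload.is_empty() {
        // DEBUG: conditional on an odd situation, an early return.
        debug!("received empty packet, ignoring");
        return;
    }

    // ... process the packet ...
}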
It is my understanding that the NoopMeterProvider is explicitly there to avoid the computational cost of metric collection when metrics aren't being exported, so this is very much a module that gets used in production code.
This log executes unconditionally on every meter creation. If you want to keep logging it at DEBUG, it should IMO be gated to once per unique meter name to avoid the log spam (see the sketch below).
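For illustration, a minimal sketch of what once-per-meter-name gating could look like (the names and the standalone function are hypothetical, and eprintln! stands in for the crate's internal logging macros):

use std::collections::HashSet;
use std::sync::{Mutex, OnceLock};

// Hypothetical process-wide set of meter names that have already been logged.
static LOGGED_METERS: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();

fn warn_noop_meter_once(meter_name: &str) {
    let seen = LOGGED_METERS.get_or_init(|| Mutex::new(HashSet::new()));
    // HashSet::insert returns true only the first time a value is added,
    // so the message is emitted once per unique meter name.
    if seen.lock().expect("lock poisoned").insert(meter_name.to_owned()) {
        eprintln!("Meter '{meter_name}' was obtained from a NoopMeterProvider; no metrics will be recorded.");
    }
}

This is also the extra cost discussed below: a lock plus a set lookup, and an ever-growing set of names, on a path that is meant to be close to zero cost.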
> That is what I am doing right now, but I am not sure what other logs are being suppressed here if I only filter by info
You'll miss all other debug-level logs. (Not sure if that was the question; sorry if I misread...) I think this should be okay, as you won't find DEBUG-level logs useful in production scenarios anyway.
> Typically, these logs are namespaced by their crate and module path, but it seems that opentelemetry is explicitly overwriting this, so I cannot filter out logs just from the noop module
Yes, only the crate name is currently used as the target. We could consider including the module name too, to achieve finer-grained filtering.
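To make the granularity difference concrete, here is a sketch of target-based directives with tracing-subscriber (the module-level directive is hypothetical; with the current crate-name-only targets it would match nothing):

use tracing_subscriber::EnvFilter;

fn build_filter() -> EnvFilter {
    EnvFilter::new("info")
        // Works today: the crate name is the target, so this adjusts all
        // opentelemetry logs at once.
        .add_directive("opentelemetry=warn".parse().expect("valid directive"))
        // Hypothetical: if module paths were included in the target, a
        // directive like this could silence just the noop module.
        .add_directive("opentelemetry::metrics::noop=off".parse().expect("valid directive"))
}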
We can gate it to once per meter name, but that would require adding more complexity/code to the no-op provider, which would make it non-zero cost (it is supposed to cost as close to zero as possible!).
I understand the DEBUG vs. TRACE choice is a bit subjective, but my thinking was:
- If a user is not seeing metrics from their app and wants to understand why, this DEBUG-level log is perfectly suited to helping them realize that they missed setting the global provider or similar. It is not tracing (TRACE) the entire flow of execution, which is why I think DEBUG is the apt level here.
Happy to hear more thoughts/ideas to improve this situation.
Well, I tried to go through the process and it still takes me to DocuSign.
Having already signed, I am not sure how to replicate this. However, according to open-telemetry/community#1252 (comment), it no longer requires your home address.
Changes
Downgrade the log about a meter being obtained from the NoopMeterProvider to TRACE. This can be very spammy on systems that request many meters but might have metric collection temporarily disabled.
Merge requirement checklist
- CHANGELOG.md files updated for non-trivial, user-facing changes