Slow log request performance #539
Hello @deathalt, can you count this over the last 7 days? |
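For context, such a count could also be taken straight from ClickHouse. A hedged sketch, assuming a typical qryn schema (the `samples_v3` table and `timestamp_ns` column are assumptions; verify against the actual deployment):

```sql
-- Raw row count over the last 7 days.
-- samples_v3 / timestamp_ns are assumptions about the qryn schema.
SELECT count()
FROM samples_v3
WHERE timestamp_ns >= toUnixTimestamp64Nano(now64(9) - INTERVAL 7 DAY);
```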
Another question: I see no use of
do you really need to extract these two labels? |
sum by (job) (count_over_time({level=~"WARN|INFO|ERROR|FATAL", job="Service"} |= took 1.83 mins and then fails.
I cannot answer, because it is one of our devs' use cases, and I think he just wants to get this label in the log body. |
I have to know the amount of data we deal with so I can reproduce it locally. @deathalt what about
? |
1.59s |
@deathalt ClickHouse is definitely unable to parse about 2.5B rows in 1 minute.
starts passing.
Thanks for the information. I'll try to ingest a comparable amount of data and try the request. |
I don't understand what the problem is.
but I still get ~1.5 minutes after the request. |
@deathalt no errors in the qryn logs preceding this response? Just to be sure could you also confirm you have no proxy between the client and qryn? |
only this: clickhouse -> consul -> qryn, no proxy at all |
Is consul providing the connectivity when the queries run? I'm not familiar with it. |
@deathalt thanks for the update. Which ClickHouse setting was affecting this? I'll add it to the configuration notes. |
@deathalt so you're trying to filter about 350 rows out of 3.3B. Mmmm... It's an interesting experiment to import 1B rows and check the difference between skip indexes and nothing. |
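For reference, such an experiment might look like the following. This is a hedged sketch only: the table layout is illustrative, not qryn's actual schema, and the `tokenbf_v1` parameters would need tuning for real data:

```sql
-- Illustrative experiment table (NOT qryn's schema).
CREATE TABLE logs_with_skip_index
(
    timestamp_ns UInt64,
    body         String,
    -- Bloom-filter token index over the log line; the
    -- (bytes, hash functions, seed) parameters need tuning.
    INDEX body_tokens body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY timestamp_ns;

-- Compare against an identical table created without the INDEX clause:
SELECT count() FROM logs_with_skip_index WHERE hasToken(body, 'Stage');
```

Running the same needle query against both tables would show how much the skip index prunes granules for this selectivity.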
log format:
Loki request:
{level=~"WARN|INFO|ERROR|FATAL", job="Service"} |= `85577686` | json body="body", Environment="resources[\"deployment.environment\"]", Scope="instrumentation_scope[\"name\"]" | Environment = `Stage`
Took more than 30 seconds and then crashed.
{level=~"WARN|INFO|ERROR|FATAL", job="Service"}
works fine.
{level=~"WARN|INFO|ERROR|FATAL", job="Service"} | json body="body"
7 days gap -> 15.2 seconds response time, the "Logs volume" panel in Grafana returns 502
{level=~"WARN|INFO|ERROR|FATAL", job="Service"} | json body="body", Environment="resources[\"deployment.environment\"]", Scope="instrumentation_scope[\"name\"]" | Environment = `Stage`
7 days gap -> 30 seconds response time -> 502 Client.Timeout exceeded while awaiting headers
Any chance to speed it up?
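One option worth trying, as a sketch rather than something verified against this dataset: LogQL applies stages left to right, so a cheap line filter placed before the `json` stage lets non-matching lines be discarded before the relatively expensive JSON parsing. This assumes the literal substring `Stage` actually appears in every line you want to keep:

```logql
{level=~"WARN|INFO|ERROR|FATAL", job="Service"}
  |= `Stage`
  | json body="body", Environment="resources[\"deployment.environment\"]", Scope="instrumentation_scope[\"name\"]"
  | Environment = `Stage`
```

The final label filter is still needed for exactness; the line filter only narrows the candidate set. Whether qryn's translation of this to ClickHouse benefits equally is worth measuring, and narrowing the time range below 7 days reduces the scanned rows in the same way.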