
How to use x2i with Telegraf or VictoriaMetrics or InfluxDB 2? #2

Open
polarnik opened this issue Aug 25, 2024 · 2 comments
polarnik commented Aug 25, 2024

Hello!

I'm using nginx as a metric proxy.

Telegraf

The chain is: x2i => nginx:8088 => telegraf:8087 => influxdb2/influxdb1:8086

nginx.conf

A basic nginx config in front of Telegraf:

http {
    upstream telegraf_8087_x2i_gatling {
        server telegraf:8087;
        keepalive 5000;
    }
    server {
        listen 8088;
        default_type application/json;
        location /ping {
            add_header 'X-Influxdb-Build' 'OSS';
            add_header 'X-Influxdb-Version' '1.8.10';
            return 204;
        }
        location /write {
            proxy_pass http://telegraf_8087_x2i_gatling;
        }
        location /query {
            add_header 'X-Influxdb-Build' 'OSS';
            add_header 'X-Influxdb-Version' '1.8.10';
            return 200 '{"results":[{"statement_id":0}]}';
        }
    }
}
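As a quick sanity check of this stub, a hedged sketch (it assumes the nginx container is reachable on localhost:8088; the measurement and tag names below are made up for illustration):

```shell
# A minimal InfluxDB line-protocol point: measurement, tags, one field, nanosecond timestamp.
TS=1724544000
LINE="requests,simulation=basic,scope=ok count=42i ${TS}000000000"

# Uncomment to exercise the nginx front:
# curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8088/ping        # expect 204
# curl -s -XPOST 'http://localhost:8088/write?db=x2i_gatling' --data-binary "$LINE"
echo "$LINE"
```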

In Telegraf we can combine two plugins in one main config (telegraf.conf):

  • an input that listens on the /write endpoint
  • an output that sends the metrics to the storage

telegraf/telegraf.conf

The main config defines some global tags. Global tags help link metrics to CI/CD runs:

# Global tags can be specified here in key="value" format.
[global_tags]
    # dc = "us-east-1" # will tag all metrics with dc=us-east-1
    # rack = "1a"
    ## Environment variables can be used as tags, and throughout the config file
    # user = "$USER"
    GITHUB_EVENT_NAME = "$GITHUB_EVENT_NAME"
    GITHUB_JOB = "$GITHUB_JOB"
    GITHUB_RUN_URL = "$GITHUB_RUN_URL"
    GITHUB_WORKFLOW = "$GITHUB_WORKFLOW"
    GITHUB_RUN_ID = "$GITHUB_RUN_ID"
    TEST_NAME = "$TEST_NAME"

# Configuration for telegraf agent
[agent]
    ## Default data collection interval for all inputs
    interval = "20s"
    ## Rounds collection interval to 'interval'
    ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
    round_interval = false

    ## Telegraf will send metrics to outputs in batches of at most
    ## metric_batch_size metrics.
    ## This controls the size of writes that Telegraf sends to output plugins.
    metric_batch_size = 1000

    ## Maximum number of unwritten metrics per output.  Increasing this value
    ## allows for longer periods of output downtime without dropping metrics at the
    ## cost of higher maximum memory usage.
    metric_buffer_limit = 100000

telegraf/telegraf.d/inputs.influxdb_listener.8087.x2i_gatling.conf

The input config

# Accept metrics over the InfluxDB 1.x HTTP API
[[inputs.influxdb_listener]]
    ## Address and port to host InfluxDB listener on
    service_address = ":8087"
    parser_type = "upstream"

    [inputs.influxdb_listener.tags]
        influxdb_bucket = "x2i_gatling"

telegraf/telegraf.d/outputs.influxdb_v2.many-buckets.conf

The output config

[[outputs.influxdb_v2]]

  urls = ["$INFLUX_HOST"]
  token = "$INFLUX_TOKEN"
  organization = "$INFLUX_ORG"
  bucket = ""
  bucket_tag = "influxdb_bucket"
  exclude_bucket_tag = true

In this case Telegraf uses a simple InfluxDB output. It works well for low to medium workloads (0-300 RPS).
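The tag-based routing generalizes to many buckets: `bucket_tag` sends each metric to the bucket named by that tag, and `exclude_bucket_tag` strips the tag before writing. A hypothetical second listener pair (port, tag key, and bucket names are made up) could look like:

```toml
# Hypothetical: the tag key set in the input must equal bucket_tag in the output.
[[inputs.influxdb_listener]]
    service_address = ":8089"
    [inputs.influxdb_listener.tags]
        target_bucket = "x2i_jmeter"

[[outputs.influxdb_v2]]
    urls = ["$INFLUX_HOST"]
    token = "$INFLUX_TOKEN"
    organization = "$INFLUX_ORG"
    bucket = ""                     # left empty: routing is driven by the tag
    bucket_tag = "target_bucket"
    exclude_bucket_tag = true       # drop the routing tag before writing
```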

For high workloads (1000+ RPS) we can pair other outputs with matching inputs, putting a message queue between two Telegraf instances. The chain becomes one of:

  • x2i => nginx:8088 => telegraf_1.influxdb_listener:8087 => telegraf_1.outputs.mqtt => RabbitMQ => telegraf_2.inputs.mqtt_consumer => telegraf_2.outputs.influxdb_v2 => influxdb2:8086
  • x2i => nginx:8088 => telegraf_1.influxdb_listener:8087 => telegraf_1.outputs.kafka => Kafka => telegraf_2.inputs.kafka_consumer => telegraf_2.outputs.influxdb_v2 => influxdb2:8086
  • x2i => nginx:8088 => telegraf_1.influxdb_listener:8087 => telegraf_1.outputs.nats => NATS => telegraf_2.inputs.nats_consumer => telegraf_2.outputs.influxdb_v2 => influxdb2:8086
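As a sketch of the Kafka variant (broker address, topic name, and bucket are assumptions, not values from the project), the two Telegraf instances could be wired like this:

```toml
# telegraf_1: buffer metrics into Kafka instead of writing to InfluxDB directly
[[outputs.kafka]]
    brokers = ["kafka:9092"]
    topic = "x2i_gatling_metrics"

# telegraf_2: drain the topic and write to InfluxDB 2
[[inputs.kafka_consumer]]
    brokers = ["kafka:9092"]
    topics = ["x2i_gatling_metrics"]
    data_format = "influx"          # metrics travel as line protocol

[[outputs.influxdb_v2]]
    urls = ["$INFLUX_HOST"]
    token = "$INFLUX_TOKEN"
    organization = "$INFLUX_ORG"
    bucket = "x2i_gatling"
```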

We can also run x2i over static result files, without online metric sending:

  • files => x2i => (any target)

VictoriaMetrics

We can use VictoriaMetrics as well.

The chain is: x2i => nginx:8086 => victoria-metrics:8428

nginx.conf

http {
    upstream victoria {
        server victoria-metrics:8428;
        keepalive 5000;
    }
    server {
        listen 8086;
        default_type application/json;
        location /ping {
            add_header 'X-Influxdb-Build' 'OSS';
            add_header 'X-Influxdb-Version' '1.8.10';
            return 204;
        }
        location /write {
            proxy_pass http://victoria;
        }
        location /query {
            add_header 'X-Influxdb-Build' 'OSS';
            add_header 'X-Influxdb-Version' '1.8.10';
            return 200 '{"results":[{"statement_id":0}]}';
        }
    }
}

In this case we use PromQL or MetricsQL to build reports in Grafana.
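For example, a hedged sketch of querying VictoriaMetrics' Prometheus-compatible API (the host and the metric name are assumptions about the deployment and about what x2i exports):

```shell
# VictoriaMetrics exposes /api/v1/query for instant PromQL/MetricsQL queries.
VM_URL="http://localhost:8428/api/v1/query"
QUERY='sum(rate(gatling_requests_count[1m]))'

# Uncomment to run against a live instance:
# curl -s "$VM_URL" --data-urlencode "query=$QUERY"
echo "$VM_URL?query=$QUERY"
```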

InfluxDB 2

We can use InfluxDB 2.0 as well.

The chain is: x2i => influxdb2:8086

We can create an InfluxDB 2 bucket and an InfluxDB v1-compatible login/password for it:

#!/bin/bash -x

influx bucket create --name x2i_gatling --description "x2i_gatling metrics" \
    --retention "90d" --shard-group-duration "12h"

export BUCKET_ID=$( influx bucket list --hide-headers \
    --name x2i_gatling | awk '{ print $1}' )

export PASSWORD=password

influx v1 auth create --description "x2i_gatling" \
    --username "x2i_gatling" --password ${PASSWORD} \
    --read-bucket ${BUCKET_ID} --write-bucket ${BUCKET_ID}

echo "Login is: x2i_gatling"
echo "Password is: ${PASSWORD}"

The /query and /write endpoints work as expected, and so does /ping.

polarnik (Author) commented:

nginx can also improve the performance of the /query endpoint:
https://gist.github.com/polarnik/cb6f22751e8d1590342198609243c529

PeterPaul-Perfana (Contributor) commented:

Hi @polarnik, thanks for this valuable information! Apologies for the late reply.

Please let us know what the expected outcome of this issue is. For instance, do you propose adding this as additional documentation, possibly with example nginx config templates in the project? Or should x2i be updated to cater for better integration in these chains?
