The SigLens Helm Chart provides a simple deployment of a highly performant, low-overhead observability solution that supports automatic exporting of Kubernetes events and container logs.
```bash
helm repo add siglens-repo https://siglens.github.io/charts
helm install siglens siglens-repo/siglens
```

Please ensure that helm is installed.
To install SigLens from source:

```bash
git clone https://github.com/siglens/charts.git
cd charts/siglens
helm install siglens .
```
Important configs in `values.yaml`:
| Values | Description |
|---|---|
| `siglens.configs` | Server configs for SigLens |
| `siglens.storage` | Defines the storage class to use for the SigLens StatefulSet |
| `siglens.storage.size` | Storage size for the persistent volume claim. Recommended to be half of the license limit |
| `siglens.ingest.service` | Configurations to expose an ingest service |
| `siglens.query.service` | Configurations to expose a query service |
| `k8sExporter.enabled` | Enable automatic exporting of k8s events using an exporting tool |
| `k8sExporter.configs.index` | Output index name for Kubernetes events |
| `logsExporter.enabled` | Enable automatic exporting of logs using a fluentd DaemonSet |
| `logsExporter.configs.indexPrefix` | Prefix of the index name used by the logs exporter. The suffix will be the namespace of the log source |
| `affinity` | Affinity rules for pod scheduling |
| `tolerations` | Tolerations for pod scheduling |
| `ingress.enabled` | Enable or disable ingress for the service |
| `ingress.className` | Ingress class to use |
| `ingress.annotations` | Annotations for the ingress resource |
| `ingress.hosts` | List of hosts for the ingress |
| `ingress.tls` | TLS configuration for the ingress |
If `k8sExporter` or `logsExporter` is enabled, a ClusterRole will be created with get/watch/list access to all resources in all apiGroups. The resources and apiGroups it covers can be edited in `serviceAccount.yaml`.
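A minimal sketch of enabling both exporters in your values file, using the keys from the table above (the index names here are placeholders):

```yaml
k8sExporter:
  enabled: true
  configs:
    index: kube-events # output index for Kubernetes events (placeholder name)
logsExporter:
  enabled: true
  configs:
    indexPrefix: kube-logs- # the source namespace is appended as a suffix
```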
Currently, only the `awsEBS` and `local` storage class provisioners can be configured, by setting `storage.defaultClass: false` and setting the required configs. To add more types of storage classes, add the necessary provisioner info to `storage.yaml`.

It is recommended to use a storage class that supports volume expansion.
Example configuration to use an EBS storage class:

```yaml
storage:
  defaultClass: false
  size: 20Gi
  awsEBS:
    parameters:
      type: "gp2"
      fsType: "ext4"
```
Example configuration to use a local storage class:

```yaml
storage:
  defaultClass: false
  size: 20Gi
  local:
    hostname: minikube
    capacity: 5Gi
    path: /data # must be present on the local machine
```
To add AWS credentials, add the following configuration:

```yaml
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: <<arn-of-role-to-use>>
```
If issues with AWS credentials are encountered, refer to this guide.
To use `abc.txt` as a license, add the following configmap:

```bash
kubectl create configmap siglens-license --from-file=license.txt=abc.txt
```

Set the following config:

```yaml
siglens:
  configs:
    license: abc.txt
```
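Then deploy (or re-deploy) with the updated values, assuming your overrides live in a local `values.yaml`:

```bash
helm upgrade --install siglens siglens-repo/siglens -f values.yaml
```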
1. **Prepare Configuration:**
   - Begin by creating a `custom-values.yaml` file, where you'll provide your license key and other necessary configurations.
   - Please look at this sample `values.yaml` file for all the available configs.
   - By default, the Helm chart installs in the `siglensent` namespace. If needed, you can change this in your `custom-values.yaml`, or manually create the namespace with the command: `kubectl create namespace siglensent`
2. **Add Helm Repository:**
   Add the SigLens Helm repository with the following command:

   ```bash
   helm repo add siglens-repo https://siglens.github.io/charts
   ```

   If you've previously added the repository, ensure it's updated:

   ```bash
   helm repo update siglens-repo
   ```
3. **Update License and TLS Settings:**
   - Update your `licenseBase64` with your Base64-encoded license key. For a license key, please reach out at [email protected].
   - If TLS is enabled, ensure you also update `acme.registrationEmail`, `ingestHost`, and `queryHost` in your configuration.
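   For example, you can Base64-encode a license file with the standard `base64` tool (the file name here is a placeholder):

   ```bash
   # GNU coreutils; on macOS, use `base64 -i license.txt` instead
   base64 -w0 license.txt
   ```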
4. **Apply Cert-Manager (If TLS is enabled):** If TLS is enabled, apply the Cert-Manager CRDs using the following command:

   ```bash
   kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.crds.yaml
   ```
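   You can verify that the CRDs were installed with:

   ```bash
   kubectl get crds | grep cert-manager.io
   ```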
5. **Update the Resources Config:**
   - Update the CPU and memory resources for both raft and worker nodes (see the sketch after this list):
     - `raft.deployment.cpu.request`
     - `raft.deployment.memory.request`
     - `worker.deployment.cpu.request`
     - `worker.deployment.memory.request`
     - `worker.deployment.replicas`
   - Set the required storage size for the PVC of the worker node: `pvc.size`, and the storage class type: `storageClass.diskType`
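   A minimal sketch of these keys in `custom-values.yaml`; all values below are placeholders to be sized for your workload:

   ```yaml
   raft:
     deployment:
       cpu:
         request: "2"      # placeholder
       memory:
         request: 4Gi      # placeholder
   worker:
     deployment:
       replicas: 2         # placeholder
       cpu:
         request: "4"      # placeholder
       memory:
         request: 8Gi      # placeholder
   pvc:
     size: 100Gi           # placeholder
   storageClass:
     diskType: pd-ssd      # see the storage class section below
   ```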
5.5. **(Optional) Customize Storage Classes:**
   By default, Siglensent uses GCP Persistent Disk (`pd.csi.storage.gke.io`) as the provisioner and defines two `StorageClass` objects:
   - `gcp-pd-rwo`: used for worker node volumes (`pd-ssd` by default)
   - `gcp-pd-standard-rwo`: used for raft node volumes (`pd-standard` by default)

   These are customizable through the `storageClass` section in your `custom-values.yaml`. You can define shared defaults under `storageClass.<key>` and override them per component (`worker`, `raft`) only if needed.
   ```yaml
   storageClass:
     provisioner: pd.csi.storage.gke.io # GCP default provisioner
     diskType: pd-standard # Default disk type
     reclaimPolicy: Retain
     allowVolumeExpansion: true
     volumeBindingMode: WaitForFirstConsumer
     worker:
       name: gcp-pd-rwo # name for worker PVCs
     raft:
       name: gcp-pd-standard-rwo # name for raft PVCs
   ```
   For example, on AWS with the EBS CSI driver:

   ```yaml
   storageClass:
     provisioner: ebs.csi.aws.com # AWS EBS CSI driver
     diskType: gp2
     reclaimPolicy: Delete
     worker:
       name: aws-ebs-gp2-rwo-worker
       diskType: gp3 # Override the default gp2 value
     raft:
       name: aws-ebs-gp2-rwo-raft
   ```
   > 💡 You only need to override fields that differ from the shared defaults.

   > ⚠️ **Important:** Avoid changing the `name` of an existing `StorageClass` in a running cluster unless you know what you're doing. Doing so may break existing PersistentVolumeClaims and lead to data loss or pod scheduling issues.
6. **Update the RBAC Database Config (If SaaS is Enabled):**

   ```yaml
   config:
     rbac:
       provider: "postgresql" # Valid options are: postgresql, sqlite
       dbname: db1
       # Postgres configuration for RBAC
       host: "postgresDbHost"
       port: 5432
       user: "username"
       password: "password"
   ```
7. **(Optional) Enable Blob Storage:**
   - **Use S3:**
     1. **Update Config:** Update the `config` section in `values.yaml`:

        ```yaml
        config:
          ... # other config params
          blobStoreMode: "S3"
          s3:
            enabled: true
            bucketName: "bucketName"
            bucketPrefix: "subdir"
            regionName: "us-east-1"
          ... # other config params
        ```
     2. **Setup Permissions:**

        **Option 1: AWS access keys:**
        - Create a secret with IAM keys that have access to S3 using the below command:

          ```bash
          kubectl create secret generic aws-keys \
            --from-literal=aws_access_key_id=<accessKey> \
            --from-literal=aws_secret_access_key=<secretKey> \
            --namespace=siglensent
          ```

        - Set `s3.accessThroughAwsKeys: true` in your `custom-values.yaml`
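        You can confirm the secret exists with:

        ```bash
        kubectl get secret aws-keys -n siglensent
        ```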
        **Option 2: IAM Role:**
        - Get the OpenID Connect provider URL for your cluster
        - Go to IAM -> Identity providers -> Add provider, and set up a new OpenID Connect provider with that URL and the audience as `sts.amazonaws.com`
        - Set up a role:
          - Go to IAM -> Roles -> Create role, and select the OIDC provider you just created
          - Add the condition `<IDENTITY_PROVIDER>:sub = system:serviceaccount:<NAMESPACE>:<RELEASE_NAME>-service-account`. The `<NAMESPACE>` and `<RELEASE_NAME>` are the namespace and release name of your Helm chart; they'll both be `siglensent` if you follow this README exactly.
          - Add S3 full access permissions, and create the role
        - Add the role to the `serviceAccountAnnotations` section in `values.yaml` (see the sketch after this list)
        - Ensure your `custom-values.yaml` has `s3.accessThroughAwsKeys: false`
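        A minimal sketch, assuming `serviceAccountAnnotations` takes annotation key/value pairs directly, mirroring the `eks.amazonaws.com/role-arn` annotation shown earlier (the ARN is a placeholder):

        ```yaml
        serviceAccountAnnotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>
        ```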
   - **Use GCS:**
     1. **Update Config:** Update the `config` section in the `values.yaml`:

        ```yaml
        config:
          ... # other config params
          blobStoreMode: "GCS"
          s3:
            enabled: true
          gcs:
            bucketName: "bucketName"
            bucketPrefix: "subdir"
            regionName: "us-east1"
          ... # other config params
        ```
     2. **Create GCS secret:**
        - Create a service account with these permissions:
          - Storage Admin
          - Storage Object Admin
        - Create a key for the service account and download the JSON file
        - Create a secret with the key using the below command (use the absolute path):

          ```bash
          kubectl create secret generic gcs-key \
            --from-file=key.json=/path/to/your-key.json \
            --namespace=siglensent
          ```

        - Add the service account to the `serviceAccountAnnotations` section in `values.yaml`
8. **Install Siglensent:** Install Siglensent using Helm with your custom configuration file:

   ```bash
   helm install siglensent siglens-repo/siglensent -f custom-values.yaml --namespace siglensent
   ```
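   After installation, you can check that the pods come up with:

   ```bash
   kubectl get pods -n siglensent
   ```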
9. **Update DNS for TLS (If Applicable):**
   - Run:

     ```bash
     kubectl get svc -n siglensent
     ```

   - Find the External IP of the `ingress-nginx-controller` service. Then create two A records in your DNS to point to this IP: one for `ingestHost` and one for `queryHost`, as defined in your `custom-values.yaml`.

   Note: If you uninstall and reinstall the chart, you'll need to update your DNS again. But if you do a `helm upgrade` instead, the ingress controller will persist, so you won't have to update your DNS.