DOC-136 #129

Merged: 37 commits, Jun 3, 2025

Commits
- `f3bf4b7` Final version (AnneLaure1307, Feb 21, 2025)
- `27d3e6b` Alright (AnneLaure1307, Feb 21, 2025)
- `a7ec320` Change (AnneLaure1307, Feb 21, 2025)
- `0e49640` Fix (AnneLaure1307, Feb 21, 2025)
- `07a2895` Adustements from Guillaume (AnneLaure1307, Feb 24, 2025)
- `29142e3` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, Feb 27, 2025)
- `bd2f366` Steps (AnneLaure1307, Feb 28, 2025)
- `55df3fd` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, Mar 3, 2025)
- `4e04ba3` Removed step 2 in AWS example (AnneLaure1307, Mar 4, 2025)
- `c8a9966` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, Apr 22, 2025)
- `1ed539c` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, Apr 28, 2025)
- `1a91571` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, Apr 29, 2025)
- `9d96d84` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, Apr 30, 2025)
- `8afb828` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 5, 2025)
- `5bf767d` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 13, 2025)
- `470c26b` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 13, 2025)
- `6536116` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 13, 2025)
- `5e91008` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 13, 2025)
- `2c90625` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 19, 2025)
- `ed6fe93` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 19, 2025)
- `eb5306b` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 19, 2025)
- `a107704` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 19, 2025)
- `bed5004` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 22, 2025)
- `e9289da` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 26, 2025)
- `92e1f52` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 26, 2025)
- `291166a` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 27, 2025)
- `a92919b` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 27, 2025)
- `22860cb` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 27, 2025)
- `433b7ae` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 27, 2025)
- `3dc2661` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, May 27, 2025)
- `9354430` Small fixes (AnneLaure1307, May 28, 2025)
- `eca4734` Vale checks (AnneLaure1307, May 28, 2025)
- `84c732f` Rendering checks (AnneLaure1307, May 28, 2025)
- `a21e52d` Adjustments (AnneLaure1307, May 28, 2025)
- `5dbea01` Merge remote-tracking branch 'origin/main' into DOC-136 (AnneLaure1307, Jun 2, 2025)
- `a3acf62` Changed title (AnneLaure1307, Jun 3, 2025)
- `1a4b021` Rendering fixes (AnneLaure1307, Jun 3, 2025)
174 changes: 174 additions & 0 deletions docs/cortex/installation-and-configuration/deploy-cortex-on-kubernetes.md
@@ -0,0 +1,174 @@
# Deploy Cortex on Kubernetes

Deploying Cortex on Kubernetes improves scalability, reliability, and resource management. Kubernetes handles automated deployment, dynamic resource allocation, and isolated execution of analyzers and responders, boosting performance and security. This setup simplifies the management of large workloads.

This guide provides step-by-step instructions for deploying Cortex on a Kubernetes cluster using [the StrangeBee Helm chart repository](https://github.com/StrangeBeeCorp/helm-charts).

!!! warning "Prerequisites"
Make sure you have:
- A running Kubernetes cluster (version 1.23.0 or later)
- [Helm](https://helm.sh/) installed (version 3.8.0 or later)
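You can confirm both versions from your workstation before you start:

```bash
kubectl version   # Cluster (server) version must be 1.23.0 or later
helm version      # Helm version must be 3.8.0 or later
```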

## Step 1: Ensure all users can access files on the shared filesystem

!!! warning "Configuration errors"
Improperly configured shared filesystems can cause errors when running jobs with Cortex.

!!! info "Why a shared filesystem"
When running on Kubernetes, Cortex launches a new pod for each analyzer or responder execution. After the job completes and Cortex retrieves the result, the pod is terminated. Without a shared filesystem accessible by both the Cortex pod and these analyzer and responder pods, they can't access the data they need to operate, causing the jobs to fail.

Kubernetes supports several methods for sharing filesystems between pods, including:
- [PersistentVolume (PV) using an NFS server](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
- Dedicated storage solutions like [Longhorn](https://longhorn.io/) or [Rook](https://rook.io/)

This guide focuses on configuring a PV using an NFS server, with an example for [AWS Elastic File System (EFS)](#example-deploy-cortex-using-aws-efs).

At runtime, Cortex and its jobs run on different pods and may use different user IDs (UIDs) and group IDs (GIDs):

* Cortex defaults to uid:gid `1001:1001`.
* Analyzers may use different uid:gid, such as `1000:1000` or `0:0` if running as root.

To prevent permission errors when reading or writing files on the shared filesystem, [configure the NFS server](https://manpages.ubuntu.com/manpages/noble/man5/exports.5.html) with the `all_squash` parameter. This ensures all filesystem operations use uid:gid `65534:65534`, regardless of the user's actual UID and GID.
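For example, an NFS export configured this way might look like the following `/etc/exports` entry. This is a sketch only; the export path and client network are placeholders to adapt to your environment:

```bash
# /etc/exports: squash every client UID/GID to 65534:65534
/srv/cortex-shared 10.0.0.0/16(rw,sync,all_squash,anonuid=65534,anongid=65534)
```

After editing the file, apply the change with `exportfs -ra`.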

## Step 2: Configure PersistentVolume, PersistentVolumeClaim, service account, and deployment for Cortex

!!! info "Definitions"
- A [PersistentVolume (PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) represents a piece of storage provisioned by an administrator or dynamically provisioned using storage classes. When using an NFS server, the PV allows multiple pods to access shared storage concurrently.
- A [PersistentVolumeClaim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) is a request for storage by a pod. It connects to an existing PV or dynamically creates one, specifying the required storage capacity.
- A service account (SA) allows a pod to authenticate and interact with the Kubernetes API, enabling it to perform specific actions within the cluster. When deploying Cortex, a dedicated SA is essential for creating and managing Kubernetes jobs that run analyzers and responders. Without proper configuration, Cortex can't execute these jobs.

Use the `cortex` Helm chart to automate the creation of the PV, PVC, and SA during deployment.
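The chart creates the SA and its associated permissions for you. For context only, the access Cortex needs boils down to managing Kubernetes jobs and reading their pods; a hypothetical Role granting that could look like the following sketch, which isn't the chart's actual manifest:

```yaml
# Illustrative only: a namespaced Role allowing job management for analyzers and responders
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cortex-job-runner
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "delete"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
```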

!!! note "Using an existing PersistentVolume"
The `cortex` Helm chart can operate without creating a new PV, provided that an existing PV created by the cluster administrator matches the PVC configuration specified in the Helm chart.

### Quick start

1. Add the StrangeBee Helm repository

```bash
helm repo add strangebee https://strangebeecorp.github.io/helm-charts
```

2. Update your local Helm repositories

```bash
helm repo update
```

3. Create a release using the `cortex` Helm chart

```bash
helm install <release_name> strangebee/cortex
```

!!! warning "First start"
At first start, you must access the Cortex web page to update the Elasticsearch database.
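One way to reach the web page before exposing the service publicly is a local port-forward. This is a sketch; the exact service name depends on your release name and the chart's naming convention, so check `kubectl get svc` first:

```bash
# Forward local port 9001 to the Cortex service, then browse to http://localhost:9001
# The service name below is an assumption; confirm it with: kubectl get svc
kubectl port-forward svc/<release_name>-cortex 9001:9001
```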

For more options, see [the Helm documentation for installation](https://helm.sh/docs/helm/helm_install/).

!!! info "Dependency"
The `cortex` Helm chart relies on the [Bitnami Elasticsearch Stack](https://github.com/bitnami/charts/tree/main/bitnami/elasticsearch) by default as the search index.

!!! note "Upgrades"
To upgrade your release to the latest version of the `cortex` Helm chart, run:
```bash
helm upgrade <release_name> strangebee/cortex
```
For additional options and best practices, see [the Helm upgrade documentation](https://helm.sh/docs/helm/helm_upgrade/).

### Advanced configuration

For convenience, the `cortex` Helm chart includes all required components out of the box. While this setup is suitable for a development environment, it's highly recommended to review and configure both Cortex and its dependency before deploying to production.

Use the following command to view all available configuration options for the `cortex` Helm chart:

```bash
helm show values strangebee/cortex
```

For more information on customization, see [the dedicated Helm documentation](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). You can also review the available options for the dependency.
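A common workflow is to export the default values, edit them, and pass the file back at install time. The file name below is only an example:

```bash
# Save the chart's default values, edit them, then install with your overrides
helm show values strangebee/cortex > cortex-values.yaml
helm install <release_name> strangebee/cortex -f cortex-values.yaml
```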

#### StorageClasses

!!! warning "Shared storage requirement"
Cortex requires a shared PVC with `ReadWriteMany` access mode to allow multiple pods to read and write data simultaneously—essential for job inputs and outputs.

By default, this chart attempts to create such a PVC using your cluster’s default StorageClass. Ensure this StorageClass supports `ReadWriteMany` to avoid deployment issues, or target a StorageClass in your cluster that's compatible with this access mode.
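To check which StorageClass is the default in your cluster and which provisioner backs it, you can run:

```bash
kubectl get storageclass
```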

Common solutions include:

* Running an NFS server reachable from your cluster and creating a PV targeting it (see the example manifest after this list)
* Using dedicated storage solutions like [Longhorn](https://longhorn.io/) or [Rook](https://rook.io/)
* Leveraging cloud provider-specific solutions like [AWS EFS with the EFS CSI Driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver)
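If you go the NFS route, a minimal PV sketch could look like the following; the server address, export path, and capacity are placeholders, and the PV must expose `ReadWriteMany` so the chart's PVC can bind to it:

```yaml
# Illustrative only: a ReadWriteMany PV backed by an existing NFS export
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cortex-shared-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10        # NFS server reachable from the cluster nodes
    path: /srv/cortex-shared # Exported path on the NFS server
```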

Also note that Cortex stores data in Elasticsearch. Regular backups are strongly recommended to prevent data loss, especially when deploying Elasticsearch on Kubernetes using the [Bitnami Elasticsearch Stack](https://github.com/bitnami/charts/tree/main/bitnami/elasticsearch).

#### Elasticsearch

!!! warning "Elasticsearch support"
Cortex currently supports only Elasticsearch version 7.x.

By default, this chart deploys an Elasticsearch cluster with two nodes, both master-eligible and general-purpose.

You can review the [Bitnami Elasticsearch Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/elasticsearch) for available configuration options.

!!! note "Same Elasticsearch instance for both TheHive and Cortex"
Using the same Elasticsearch instance for both TheHive and Cortex isn't recommended. If this setup is necessary, ensure proper connectivity and configuration for both pods and use Elasticsearch version 7.x. Be aware that sharing an Elasticsearch instance creates an interdependency that may lead to issues during updates or downtime.

#### Cortex server in TheHive

See [Add a Cortex Server](../../thehive/administration/cortex/add-a-cortex-server.md) for detailed instructions.

When TheHive and Cortex are deployed in the same Kubernetes cluster, use the Cortex service Domain Name System (DNS) name as the server URL.

```bash
http://cortex.<namespace>.svc:9001
```

## Example: Deploy Cortex using AWS EFS

### Prerequisites

Before setting up the PV for AWS EFS, complete the following steps:

* [Create an Identity and Access Management (IAM) role](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html) to allow the EFS CSI driver to interact with EFS.
* Install the EFS CSI driver on your Kubernetes cluster using one of the following methods:
* [EKS add-ons](https://www.eksworkshop.com/docs/fundamentals/storage/efs/efs-csi-driver) (recommended)
* [Official Helm chart](https://github.com/kubernetes-sigs/aws-efs-csi-driver/releases?q=helm-chart&expanded=true) (see the example after this list)
* [Create an EFS filesystem](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/efs-create-filesystem.md) and note the associated EFS filesystem ID.
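If you install the driver with its official Helm chart, the steps typically look like the following; verify the repository URL and chart name against the releases page linked above:

```bash
# Install the AWS EFS CSI driver into the kube-system namespace
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system
```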

### 1. Create a StorageClass for EFS

!!! note "Reference example"
The following manifests are based on the [EFS CSI driver multiple pods example](https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/multiple_pods).

Create a StorageClass that references your EFS filesystem:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
# https://github.com/kubernetes-sigs/aws-efs-csi-driver?tab=readme-ov-file#storage-class-parameters-for-dynamic-provisioning
parameters:
  provisioningMode: efs-ap # EFS access point provisioning mode
  fileSystemId: fs-01234567 # Replace with your EFS filesystem ID
  directoryPerms: "700" # Permissions for newly created directories
  uid: "1001" # User ID to set file permissions
  gid: "1001" # Group ID to set file permissions
  ensureUniqueDirectory: "false" # Set to false to allow shared folder access between Cortex and job containers
  subPathPattern: "${.PVC.namespace}/${.PVC.name}" # Optional subfolder structure inside the NFS filesystem
```

### 2. Create a PVC using the EFS StorageClass

Kubernetes automatically creates a PV when you define a PVC that uses the EFS StorageClass.

Set [the storageClass value in the chart’s settings](https://github.com/StrangeBeeCorp/helm-charts/blob/cortex-initial-helm-chart/cortex-charts/cortex/values.yaml#L67-L79) to the EFS StorageClass created in the previous step, and the `cortex` Helm chart creates the PVC automatically during deployment.
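As a sketch, the override could be passed in a small values file. The key names below are assumptions for illustration; take the exact structure from the values file linked above:

```yaml
# cortex-efs-values.yaml (illustrative; confirm key names against the chart's values.yaml)
persistence:
  storageClass: "efs-sc"   # StorageClass created in the previous step
  accessModes:
    - ReadWriteMany
```

Then install or upgrade the release with `helm install <release_name> strangebee/cortex -f cortex-efs-values.yaml`.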

## Next steps

* [Analyzers & Responders](analyzers-responders.md)
* [Advanced Configuration](advanced-configuration.md)
5 changes: 2 additions & 3 deletions docs/cortex/installation-and-configuration/index.md
@@ -1,4 +1,4 @@
# Installation & configuration guides
# Installation and Configuration Guides

## Overview
Cortex relies on Elasticsearch to store its data. A basic setup installs Elasticsearch, then Cortex, on a standalone, dedicated server (physical or virtual).
@@ -38,10 +38,9 @@ Cortex has been tested and is supported on the following operating systems:
3. Install Cortex and all its dependencies to run Analyzers & Responders as Docker images
4. Install Cortex and all its dependencies to run Analyzers & Responders on the host (Debian and Ubuntu **ONLY**)



For each release, DEB, RPM and ZIP binary packages are built and provided.

For deploying Cortex on a Kubernetes cluster, refer to our detailed [Kubernetes deployment guide](deploy-cortex-on-kubernetes.md).

The [following guide](step-by-step-guide.md) lets you **prepare**, **install**, and **configure** Cortex and its prerequisites for Debian and RPM package-based operating systems, as well as for other systems, using our binary packages.

35 changes: 22 additions & 13 deletions docs/thehive/installation/kubernetes.md
@@ -1,6 +1,6 @@
# How to Deploy TheHive on Kubernetes

This topic provides step-by-step instructions for deploying TheHive on a Kubernetes cluster using [the TheHive Helm chart](https://github.com/StrangeBeeCorp/helm-charts/tree/main/thehive-charts/thehive).
This topic provides step-by-step instructions for deploying TheHive on a Kubernetes cluster using [the StrangeBee Helm chart repository](https://github.com/StrangeBeeCorp/helm-charts).

!!! info "License"
The Community license supports only a single node. To deploy multiple TheHive nodes, you must upgrade to a Gold or Platinum license. A fresh deployment of TheHive on an empty database includes a two-week Platinum trial, allowing you to test multi-node setups.
@@ -14,7 +14,7 @@ TheHive provides an [official Helm chart for Kubernetes deployments](https://git
- A running Kubernetes cluster (version 1.23.0 or later)
- [Helm](https://helm.sh/) installed (version 3.8.0 or later)

1. Add the TheHive Helm repository
1. Add the StrangeBee Helm repository

```bash
helm repo add strangebee https://strangebeecorp.github.io/helm-charts
@@ -26,7 +26,7 @@ TheHive provides an [official Helm chart for Kubernetes deployments](https://git
helm repo update
```

3. Create a release using the TheHive Helm chart
3. Create a release using the `thehive` Helm chart

```bash
helm install <release_name> strangebee/thehive
@@ -35,23 +35,23 @@ TheHive provides an [official Helm chart for Kubernetes deployments](https://git
For more options, see [the Helm documentation for installation](https://helm.sh/docs/helm/helm_install/).

!!! info "Dependencies"
The TheHive Helm chart relies on the following charts by default:
The `thehive` Helm chart relies on the following charts by default:
- [Bitnami Apache Cassandra](https://github.com/bitnami/charts/tree/main/bitnami/cassandra) - used as the database
- [Bitnami Elasticsearch Stack](https://github.com/bitnami/charts/tree/main/bitnami/elasticsearch) - used as the search index
- [MinIO Community Helm Chart](https://github.com/minio/minio/tree/master/helm/minio) - used as S3-compatible object storage

!!! note "Upgrades"
To upgrade your release to the latest version of TheHive Helm chart, run:
To upgrade your release to the latest version of the `thehive` Helm chart, run:
```bash
helm upgrade <release_name> strangebee/thehive
```
For additional options and best practices, see [the Helm upgrade documentation](https://helm.sh/docs/helm/helm_upgrade/).

## Advanced configuration

For convenience, the TheHive Helm chart includes all required components out of the box. While this setup is suitable for a development environment, it's highly recommended to review and configure both TheHive and its dependencies before deploying to production.
For convenience, the `thehive` Helm chart includes all required components out of the box. While this setup is suitable for a development environment, it's highly recommended to review and configure both TheHive and its dependencies before deploying to production.

Use the following command to view all available configuration options for the TheHive Helm chart:
Use the following command to view all available configuration options for the `thehive` Helm chart:

```bash
helm show values strangebee/thehive
@@ -61,7 +61,7 @@ For more information on customization, see [the dedicated Helm documentation](ht

### Pods resources

Resources allocated to pods are optimized for production use. If you need to adjust these values, especially memory limits, make sure you update the `JVM_OPTS` environment variable accordingly to avoid OOM (out of memory) crashes.
Resources allocated to pods are optimized for production use. If you need to adjust these values, especially memory limits, make sure you update the `JVM_OPTS` environment variable accordingly to avoid out of memory (OOM) crashes.

```bash
# JVM_OPTS variable for TheHive
@@ -88,15 +88,15 @@ Refer to the official [Cassandra](https://cassandra.apache.org/doc/latest/cassan

### StorageClasses

By default, this chart uses your cluster's default `StorageClass` to create persistent volumes (PVs).
By default, this chart uses your cluster's default StorageClass to create persistent volumes (PVs).

You can customize the `StorageClass` to suit your environment. In all cases, whether you use the default or a custom configuration, make sure it meets the following criteria:
You can customize the StorageClass to suit your environment. In all cases, whether you use the default or a custom configuration, make sure it meets the following criteria:

* It is regularly backed up to prevent data loss. Tools like [Velero](https://velero.io/) can help automate this process.
* It's regularly backed up to prevent data loss. Tools like [Velero](https://velero.io/) can help automate this process.
* It has an appropriate `reclaimPolicy` to minimize the risk of data loss when volumes are deleted or released.
* It provides sufficient performance to ensure reliable operation of databases and applications.

To configure `StorageClasses` based on your needs, refer to the relevant CSI drivers for your infrastructure. For example, use the EBS CSI driver for AWS or the Persistent Disk CSI driver for GCP.
To configure StorageClasses based on your needs, refer to the relevant CSI drivers for your infrastructure. For example, use the EBS CSI driver for AWS or the Persistent Disk CSI driver for Google Cloud Platform.

### Cassandra

@@ -110,6 +110,9 @@ By default, this chart deploys an Elasticsearch cluster with two nodes, both mas

You can review the [Bitnami Elasticsearch Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/elasticsearch) for available configuration options.

!!! note "Same Elasticsearch instance for both TheHive and Cortex"
Using the same Elasticsearch instance for both TheHive and Cortex isn't recommended. If this setup is necessary, ensure proper connectivity and configuration for both pods and use Elasticsearch version 7.x. Be aware that sharing an Elasticsearch instance creates an interdependency that may lead to issues during updates or downtime.

### Object storage

To support multiple replicas of TheHive, this chart defines an [object storage](../configuration/file-storage.md) in the configuration and deploys a single instance of MinIO.
@@ -121,7 +124,7 @@ For production environments, use a managed object storage service to ensure opti

Network shared filesystems, like NFS, are supported but can be more complex to implement and may offer lower performance.

### Cortex
### Cortex server in TheHive

No Cortex server is defined by default in TheHive configuration.

@@ -145,6 +148,12 @@ cortex:
#k8sSecretKey: "api-keys"
```

When TheHive and Cortex are deployed in the same Kubernetes cluster, use the Cortex service Domain Name System (DNS) name as the server URL.

```bash
http://cortex.<namespace>.svc:9001
```

<h2>Next steps</h2>

* [Monitoring TheHive](../operations/monitoring.md)
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -770,6 +770,7 @@ nav:
- ./cortex/installation-and-configuration/proxy-settings.md
- ./cortex/installation-and-configuration/docker.md
- ./cortex/installation-and-configuration/database.md
- ./cortex/installation-and-configuration/deploy-cortex-on-kubernetes.md
- 'User Guides':
- 'First start' : 'cortex/user-guides/first-start.md'
- 'User roles' : 'cortex/user-guides/roles.md'