diff --git a/content/kubermatic/main/_index.en.md b/content/kubermatic/main/_index.en.md index 0ead10bba..0b8012732 100644 --- a/content/kubermatic/main/_index.en.md +++ b/content/kubermatic/main/_index.en.md @@ -17,10 +17,12 @@ KKP is the easiest and most effective software for managing cloud native IT infr ## Features -#### Powerful & Intuitive Dashboard to Visualize your Kubernetes Deployment +### Powerful & Intuitive Dashboard to Visualize your Kubernetes Deployment + Manage your [projects and clusters with the KKP dashboard]({{< ref "./tutorials-howtos/project-and-cluster-management/" >}}). Scale your cluster by adding and removing nodes in just a few clicks. As an admin, the dashboard also allows you to [customize the theme]({{< ref "./tutorials-howtos/dashboard-customization/" >}}) and disable theming options for other users. -#### Deploy, Scale & Update Multiple Kubernetes Clusters +### Deploy, Scale & Update Multiple Kubernetes Clusters + Kubernetes environments must be highly distributed to meet the performance demands of modern cloud native applications. Organizations can ensure consistent operations across all environments with effective cluster management. KKP empowers you to take advantage of all the advanced features that Kubernetes has to offer and increases the speed, flexibility and scalability of your cloud deployment workflow. At Kubermatic, we have chosen to do multi-cluster management with Kubernetes Operators. Operators (a method of packaging, deploying and managing a Kubernetes application) allow KKP to automate creation as well as the full lifecycle management of clusters. With KKP you can create a cluster for each need, fine-tune it, reuse it and continue this process hassle-free. This results in: @@ -29,15 +31,17 @@ At Kubermatic, we have chosen to do multi-cluster management with Kubernetes Ope - Smaller individual clusters being more adaptable than one big cluster. - Faster development thanks to less complex environments. -#### Kubernetes Autoscaler Integration +### Kubernetes Autoscaler Integration + Autoscaling in Kubernetes refers to the ability to increase or decrease the number of nodes as the demand for service response changes. Without autoscaling, teams would manually first provision and then scale up or down resources every time conditions change. This means, either services fail at peak demand due to the unavailability of enough resources or you pay at peak capacity to ensure availability. [The Kubernetes Autoscaler in a cluster created by KKP]({{< ref "./tutorials-howtos/kkp-autoscaler/cluster-autoscaler/" >}}) can automatically scale up/down when one of the following conditions is satisfied: 1. Some pods fail to run in the cluster due to insufficient resources. -2. There are nodes in the cluster that have been underutilized for an extended period (10 minutes by default) and pods running on those nodes can be rescheduled to other existing nodes. +1. There are nodes in the cluster that have been underutilized for an extended period (10 minutes by default) and pods running on those nodes can be rescheduled to other existing nodes. + +### Manage all KKP Users Directly from a Single Panel -#### Manage all KKP Users Directly from a Single Panel The admin panel allows KKP administrators to manage the global settings that impact all KKP users directly. As an administrator, you can do the following: - Customize the way custom links (example: Twitter, Github, Slack) are displayed in the Kubermatic dashboard. 
@@ -46,32 +50,39 @@ The admin panel allows KKP administrators to manage the global settings that imp - Define Preset types in a Kubernetes Custom Resource Definition (CRD) allowing the assignment of new credential types to supported providers. - Enable and configure etcd backups for your clusters through Backup Buckets. -#### Manage Worker Nodes via the UI or the CLI +### Manage Worker Nodes via the UI or the CLI + Worker nodes can be managed [via the KKP web dashboard]({{< ref "./tutorials-howtos/manage-workers-node/via-ui/" >}}). Once you have installed kubectl, you can also manage them [via CLI]({{< ref "./tutorials-howtos/manage-workers-node/via-command-line" >}}) to automate the creation, deletion, and upgrade of nodes. -#### Monitoring, Logging & Alerting +### Monitoring, Logging & Alerting + When it comes to monitoring, no approach fits all use cases. KKP allows you to adjust things to your needs by enabling certain customizations to enable easy and tactical monitoring. KKP provides two different levels of Monitoring, Logging, and Alerting. 1. The first targets only the management components (master, seed, CRDs) and is independent. This is the Master/Seed Cluster MLA Stack and only the KKP Admins can access this monitoring data. -2. The other component is the User Cluster MLA Stack which is a true multi-tenancy solution for all your end-users as well as a comprehensive overview for the KKP Admin. It helps to speed up individual progress but lets the Admin keep an overview of the big picture. It can be configured per seed to match the requirements of the organizational structure. All users can access monitoring data of the user clusters under the projects that they are members of. +1. The other component is the User Cluster MLA Stack which is a true multi-tenancy solution for all your end-users as well as a comprehensive overview for the KKP Admin. It helps to speed up individual progress but lets the Admin keep an overview of the big picture. It can be configured per seed to match the requirements of the organizational structure. All users can access monitoring data of the user clusters under the projects that they are members of. Integrated Monitoring, Logging and Alerting functionality for applications and services in KKP user clusters are built using Prometheus, Loki, Cortex and Grafana. Furthermore, this can be enabled with a single click on the KKP UI. -#### OIDC Provider Configuration +### OIDC Provider Configuration + Since Kubernetes does not provide an OpenID Connect (OIDC) Identity Provider, KKP allows the user to configure a custom OIDC. This way you can grant access and information to the right stakeholders and fulfill security requirements by managing user access in a central identity provider across your whole infrastructure. -#### Easily Upgrading Control Plane and Nodes +### Easily Upgrading Control Plane and Nodes + A specific version of Kubernetes’ control plane typically supports a specific range of kubelet versions connected to it. KKP enforces the rule “kubelet must not be newer than kube-apiserver, and maybe up to two minor versions older” on its own. KKP ensures this rule is followed by checking during each upgrade of the clusters’ control plane or node’s kubelet. Additionally, only compatible versions are listed in the UI as available for upgrades. -#### Open Policy Agent (OPA) +### Open Policy Agent (OPA) + To enforce policies and improve governance in Kubernetes, Open Policy Agent (OPA) can be used. 
KKP integrates it using OPA Gatekeeper as a kubernetes-native policy engine supporting OPA policies. As an admin you can enable and enforce OPA integration during cluster creation by default via the UI. -#### Cluster Templates +### Cluster Templates + Clusters can be created in a few clicks with the UI. To take the user experience one step further and make repetitive tasks redundant, cluster templates allow you to save data entered into a wizard to create multiple clusters from a single template at once. Templates can be saved to be used subsequently for new cluster creation. -#### Use Default Addons to Extend the Functionality of Kubernetes +### Use Default Addons to Extend the Functionality of Kubernetes + [Addons]({{< ref "./architecture/concept/kkp-concepts/addons/" >}}) are specific services and tools extending the functionality of Kubernetes. Default addons are installed in each user cluster in KKP. The KKP Operator comes with a tool to output full default KKP configuration, serving as a starting point for adjustments. Accessible addons can be installed in each user cluster in KKP on user demand. {{% notice tip %}} diff --git a/content/kubermatic/main/architecture/compatibility/os-support-matrix/_index.en.md b/content/kubermatic/main/architecture/compatibility/os-support-matrix/_index.en.md index 1eab1534c..98e979c8a 100644 --- a/content/kubermatic/main/architecture/compatibility/os-support-matrix/_index.en.md +++ b/content/kubermatic/main/architecture/compatibility/os-support-matrix/_index.en.md @@ -11,11 +11,12 @@ KKP supports a multitude of operating systems. One of the unique features of KKP The following operating systems are currently supported by Kubermatic: -* Ubuntu 20.04, 22.04 and 24.04 -* RHEL beginning with 8.0 (support is cloud provider-specific) -* Flatcar (Stable channel) -* Rocky Linux beginning with 8.0 -* Amazon Linux 2 +- Ubuntu 20.04, 22.04 and 24.04 +- RHEL beginning with 8.0 (support is cloud provider-specific) +- Flatcar (Stable channel) +- Rocky Linux beginning with 8.0 +- Amazon Linux 2 + **Note:** CentOS was removed as a supported OS in KKP 2.26.3 This table shows the combinations of operating systems and cloud providers that KKP supports: diff --git a/content/kubermatic/main/architecture/compatibility/supported-versions/_index.en.md b/content/kubermatic/main/architecture/compatibility/supported-versions/_index.en.md index bfc0b4eb0..17811a9f3 100644 --- a/content/kubermatic/main/architecture/compatibility/supported-versions/_index.en.md +++ b/content/kubermatic/main/architecture/compatibility/supported-versions/_index.en.md @@ -39,8 +39,7 @@ recommend upgrading to a supported Kubernetes release as soon as possible. Refer [Kubernetes website](https://kubernetes.io/releases/) for more information on the supported releases. -Upgrades from a previous Kubernetes version are generally supported whenever a version is -marked as supported, for example KKP 2.27 supports updating clusters from Kubernetes 1.30 to 1.31. +Upgrades from a previous Kubernetes version are generally supported whenever a version is marked as supported, for example KKP 2.27 supports updating clusters from Kubernetes 1.30 to 1.31. 
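For admins who prefer the command line over the dashboard upgrade dialog, the version bump ultimately comes down to raising the Kubernetes version recorded on the Cluster object in the seed. The snippet below is a minimal sketch only: it assumes kubeconfig access to the seed cluster, that the version is exposed under `.spec.version`, and uses `<cluster-id>` as a placeholder for the cluster's internal name; the KKP dashboard remains the usual way to trigger an upgrade.

```bash
# Hedged sketch: read and bump a user cluster's control plane version from the seed cluster.
# The field name (.spec.version) is an assumption; only upgrade paths marked as supported
# (e.g. 1.30 -> 1.31 on KKP 2.27) should be applied.
kubectl get clusters.kubermatic.k8c.io <cluster-id> -o jsonpath='{.spec.version}'
kubectl patch clusters.kubermatic.k8c.io <cluster-id> --type=merge -p '{"spec":{"version":"1.31.1"}}'  # illustrative 1.31.x patch release
```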
## Provider Incompatibilities diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/addons/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/addons/_index.en.md index 9fa9d2ac6..feb4805ea 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/addons/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/addons/_index.en.md @@ -24,22 +24,22 @@ In general, we recommend the usage of Applications for workloads running inside Default addons are installed in each user-cluster in KKP. The default addons are: -* [Canal](https://github.com/projectcalico/canal): policy based networking for cloud native applications -* [Dashboard](https://github.com/kubernetes/dashboard): General-purpose web UI for Kubernetes clusters -* [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/): Kubernetes network proxy -* [rbac](https://kubernetes.io/docs/reference/access-authn-authz/rbac/): Kubernetes Role-Based Access Control, needed for +- [Canal](https://github.com/projectcalico/canal): policy based networking for cloud native applications +- [Dashboard](https://github.com/kubernetes/dashboard): General-purpose web UI for Kubernetes clusters +- [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/): Kubernetes network proxy +- [rbac](https://kubernetes.io/docs/reference/access-authn-authz/rbac/): Kubernetes Role-Based Access Control, needed for [TLS node bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) -* [OpenVPN client](https://openvpn.net/index.php/open-source/overview.html): virtual private network (VPN). Lets the control +- [OpenVPN client](https://openvpn.net/index.php/open-source/overview.html): virtual private network (VPN). Lets the control plan access the Pod & Service network. Required for functionality like `kubectl proxy` & `kubectl port-forward`. -* pod-security-policy: Policies to configure KKP access when PSPs are enabled -* default-storage-class: A cloud provider specific StorageClass -* kubeadm-configmap & kubelet-configmap: A set of ConfigMaps used by kubeadm +- pod-security-policy: Policies to configure KKP access when PSPs are enabled +- default-storage-class: A cloud provider specific StorageClass +- kubeadm-configmap & kubelet-configmap: A set of ConfigMaps used by kubeadm Installation and configuration of these addons is done by 2 controllers which are part of the KKP seed-controller-manager: -* `addon-installer-controller`: Ensures a given set of addons will be installed in all clusters -* `addon-controller`: Templates the addons & applies the manifests in the user clusters +- `addon-installer-controller`: Ensures a given set of addons will be installed in all clusters +- `addon-controller`: Templates the addons & applies the manifests in the user clusters The KKP binaries come with a `kubermatic-installer` tool, which can output a full default `KubermaticConfiguration` (`kubermatic-installer print`). This will also include the default configuration for addons and can serve as @@ -86,7 +86,7 @@ regular addons, which are always installed and cannot be removed by the user). I and accessible, then it will be installed in the user-cluster, but also be visible to the user, who can manage it from the KKP dashboard like the other accessible addons. 
The accessible addons are: -* [node-exporter](https://github.com/prometheus/node_exporter): Exports metrics from the node +- [node-exporter](https://github.com/prometheus/node_exporter): Exports metrics from the node Accessible addons can be managed in the UI from the cluster details view: @@ -256,6 +256,7 @@ spec: ``` There is a short explanation of the single `formSpec` fields: + - `displayName` is the name that is displayed in the UI as the control label. - `internalName` is the name used internally. It can be referenced with template variables (see the description below). - `required` indicates if the control should be required in the UI. @@ -317,7 +318,7 @@ the exact templating syntax. KKP injects an instance of the `TemplateData` struct into each template. The following Go snippet shows the available information: -``` +```plaintext {{< readfile "kubermatic/main/data/addondata.go" >}} ``` diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/addons/aws-node-termination-handler/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/addons/aws-node-termination-handler/_index.en.md index c4191f0a8..e3e440b9d 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/addons/aws-node-termination-handler/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/addons/aws-node-termination-handler/_index.en.md @@ -32,6 +32,7 @@ AWS node termination handler is deployed with any aws user cluster created by KK cluster once the spot instance is interrupted. ## AWS Spot Instances Creation + To create a user cluster which runs some spot instance machines, the user can specify the machine type whether it's a spot instance or not at the step number four (Initial Nodes). A checkbox that has the label "Spot Instance" should be checked. diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/addons/kubeflow/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/addons/kubeflow/_index.en.md index da43a0e4f..d50274b7f 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/addons/kubeflow/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/addons/kubeflow/_index.en.md @@ -28,12 +28,12 @@ Before this addon can be deployed in a KKP user cluster, the KKP installation ha as an [accessible addon](../#accessible-addons). This needs to be done by the KKP installation administrator, once per KKP installation. -* Request the KKP addon Docker image with Kubeflow Addon matching your KKP version from Kubermatic +- Request the KKP addon Docker image with Kubeflow Addon matching your KKP version from Kubermatic (or [build it yourself](../#creating-a-docker-image) from the [Flowmatic repository](https://github.com/kubermatic/flowmatic)). -* Configure KKP - edit `KubermaticConfiguration` as follows: - * modify `spec.userClusters.addons.kubernetes.dockerRepository` to point to the provided addon Docker image repository, - * add `kubeflow` into `spec.api.accessibleAddons`. -* Apply the [AddonConfig from the Flowmatic repository](https://raw.githubusercontent.com/kubermatic/flowmatic/master/addon/addonconfig.yaml) in your KKP installation. +- Configure KKP - edit `KubermaticConfiguration` as follows: + - modify `spec.userClusters.addons.kubernetes.dockerRepository` to point to the provided addon Docker image repository, + - add `kubeflow` into `spec.api.accessibleAddons`. 
+- Apply the [AddonConfig from the Flowmatic repository](https://raw.githubusercontent.com/kubermatic/flowmatic/master/addon/addonconfig.yaml) in your KKP installation. ### Kubeflow prerequisites @@ -66,7 +66,8 @@ For a LoadBalancer service, an external IP address will be assigned by the cloud This address can be retrieved by reviewing the `istio-ingressgateway` Service in `istio-system` Namespace, e.g.: ```bash -$ kubectl get service istio-ingressgateway -n istio-system +kubectl get service istio-ingressgateway -n istio-system + NAME TYPE CLUSTER-IP EXTERNAL-IP istio-ingressgateway LoadBalancer 10.240.28.214 a286f5a47e9564e43ab4165039e58e5e-1598660756.eu-central-1.elb.amazonaws.com ``` @@ -162,33 +163,33 @@ This section contains a list of known issues in different Kubeflow components: **Kubermatic Kubernetes Platform** -* Not all GPU instances of various providers can be started from the KKP UI: +- Not all GPU instances of various providers can be started from the KKP UI: **Istio RBAC in Kubeflow:** -* If enabled, this issue can be hit in the pipelines: +- If enabled, this issue can be hit in the pipelines: **Kubeflow UI issues:** -* Error by adding notebook server: 500 Internal Server Error: +- Error by adding notebook server: 500 Internal Server Error: -* Experiment run status shows as unknown: +- Experiment run status shows as unknown: **Kale Pipeline:** -* "Namespace is empty" exception: +- "Namespace is empty" exception: **NVIDIA GPU Operator** -* Please see the official NVIDIA GPU documentation for known limitations: +- Please see the official NVIDIA GPU documentation for known limitations: **AMD GPU Support** -* The latest AMD GPU -enabled instances in AWS ([EC2 G4ad](https://aws.amazon.com/blogs/aws/new-amazon-ec2-g4ad-instances-featuring-amd-gpus-for-graphics-workloads/)) +- The latest AMD GPU -enabled instances in AWS ([EC2 G4ad](https://aws.amazon.com/blogs/aws/new-amazon-ec2-g4ad-instances-featuring-amd-gpus-for-graphics-workloads/)) featuring Radeon Pro V520 GPUs do not seem to be working with Kubeflow (yet). The GPUs are successfully attached to the pods but the notebook runtime does not seem to recognize them. diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/_index.en.md index af587dace..c41fc5d04 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/_index.en.md @@ -15,6 +15,7 @@ Currently, helm is exclusively supported as a templating method, but integration Helm Applications can both be installed from helm registries directly or from a git repository. ## Concepts + KKP manages Applications using two key mechanisms: [ApplicationDefinitions]({{< ref "./application-definition" >}}) and [ApplicationInstallations]({{< ref "./application-installation" >}}). `ApplicationDefinitions` are managed by KKP Admins and contain all the necessary information for an application's installation. 
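To make the split between the two objects concrete, here is a minimal sketch of an `ApplicationInstallation` that points at an admin-managed `ApplicationDefinition`. Field names such as `applicationRef` are assumptions inferred from the reference examples linked in this section and may differ between KKP versions.

```yaml
# Hedged sketch (not authoritative): an ApplicationInstallation created in a user cluster
# that references an ApplicationDefinition published by a KKP admin.
apiVersion: apps.kubermatic.k8c.io/v1
kind: ApplicationInstallation
metadata:
  name: apache
  namespace: default
spec:
  applicationRef:       # assumed field: selects the ApplicationDefinition and one of its versions
    name: apache
    version: 1.0.0      # illustrative version string
  namespace:
    name: apache        # namespace in the user cluster where the workload is installed
  values: {}            # schemaless overrides, e.g. Helm values when the templating method is Helm
```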
diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-definition/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-definition/_index.en.md index d8455e6e4..af197ac32 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-definition/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-definition/_index.en.md @@ -8,8 +8,9 @@ weight = 1 An `ApplicationDefinition` represents a single Application and contains all its versions. It holds the necessary information to install an application. Two types of information are required to install an application: -* How to download the application's source (i.e Kubernetes manifest, helm chart...). We refer to this as `source`. -* How to render (i.e. templating) the application's source to install it into user-cluster. We refer to this as`templating method`. + +- How to download the application's source (i.e Kubernetes manifest, helm chart...). We refer to this as `source`. +- How to render (i.e. templating) the application's source to install it into user-cluster. We refer to this as`templating method`. Each version can have a different `source` (`.spec.version[].template.source`) but share the same `templating method` (`.spec.method`). Here is the minimal example of `ApplicationDefinition`. More advanced configurations are described in subsequent paragraphs. @@ -43,13 +44,17 @@ spec: In this example, the `ApplicationDefinition` allows the installation of two versions of apache using the [helm method](#helm-method). Notice that one source originates from a [Helm repository](#helm-source) and the other from a [git repository](#git-source) ## Templating Method + Templating Method describes how the Kubernetes manifests are being packaged and rendered. ### Helm Method + This method use [Helm](https://helm.sh/docs/) to install, upgrade and uninstall the application into the user-cluster. ## Templating Source + ### Helm Source + The Helm Source allows downloading the application's source from a Helm [HTTP repository](https://helm.sh/docs/topics/chart_repository/) or an [OCI repository](https://helm.sh/blog/storing-charts-in-oci/#helm). The following parameters are required: @@ -57,8 +62,8 @@ The following parameters are required: - `chartName` -> Name of the chart within the repository - `chartVersion` -> Version of the chart; corresponds to the chartVersion field - **Example of Helm source with HTTP repository:** + ```yaml - template: source: @@ -69,6 +74,7 @@ The following parameters are required: ``` **Example of Helm source with OCI repository:** + ```yaml - template: source: @@ -77,11 +83,12 @@ The following parameters are required: chartVersion: 1.13.0-rc5 url: oci://quay.io/kubermatic/helm-charts ``` + For private git repositories, please check the [working with private registries](#working-with-private-registries) section. Currently, the best way to obtain `chartName` and `chartVersion` for an HTTP repository is to make use of `helm search`: -```sh +```bash # initial preparation helm repo add helm repo update @@ -99,9 +106,11 @@ helm search repo prometheus-community/prometheus --versions --version ">=15" For OCI repositories, there is currently [no native helm search](https://github.com/helm/helm/issues/9983). Instead, you have to rely on the capabilities of your OCI registry. 
For example, harbor supports searching for helm-charts directly [in their UI](https://goharbor.io/docs/2.4.0/working-with-projects/working-with-images/managing-helm-charts/#list-charts). ### Git Source + The Git source allows you to download the application's source from a Git repository. **Example of Git Source:** + ```yaml - template: source: @@ -121,7 +130,6 @@ The Git source allows you to download the application's source from a Git reposi For private git repositories, please check the [working with private registries](#working-with-private-registries) section. - ## Working With Private Registries For private registries, the Applications Feature supports storing credentials in Kubernetes secrets in the KKP master and referencing the secrets in your ApplicationDefinitions. @@ -134,67 +142,68 @@ In order for the controller to sync your secrets, they must be annotated with `a ### Git Repositories KKP supports three types of authentication for git repositories: -* `password`: authenticate with a username and password. -* `Token`: authenticate with a Bearer token -* `SSH-Key`: authenticate with an ssh private key. + +- `password`: authenticate with a username and password. +- `Token`: authenticate with a Bearer token +- `SSH-Key`: authenticate with an ssh private key. Their setup is comparable: 1. Create a secret containing our credentials + ```bash + # inside KKP master + + # user-pass + kubectl create secret -n generic --from-literal=pass= --from-literal=user= + + # token + kubectl create secret -n generic --from-literal=token= + + # ssh-key + kubectl create secret -n generic --from-literal=sshKey= + + # after creation, annotate + kubectl annotate secret apps.kubermatic.k8c.io/secret-type="git" + ``` + +1. Reference the secret in the ApplicationDefinition + ```yaml + spec: + versions: + - template: + source: + git: + path: + ref: + branch: + remote: # for ssh-key, an ssh url must be chosen (e.g. git@example.com/repo.git) + credentials: + method: + # user-pass + username: + key: user + name: + password: + key: pass + name: + # token + token: + key: token + name: + # ssh-key + sshKey: + key: sshKey + name: + ``` -```sh -# inside KKP master - -# user-pass -kubectl create secret -n generic --from-literal=pass= --from-literal=user= - -# token -kubectl create secret -n generic --from-literal=token= - -# ssh-key -kubectl create secret -n generic --from-literal=sshKey= - -# after creation, annotate -kubectl annotate secret apps.kubermatic.k8c.io/secret-type="git" -``` - -2. Reference the secret in the ApplicationDefinition - -```yaml -spec: - versions: - - template: - source: - git: - path: - ref: - branch: - remote: # for ssh-key, an ssh url must be chosen (e.g. git@example.com/repo.git) - credentials: - method: - # user-pass - username: - key: user - name: - password: - key: pass - name: - # token - token: - key: token - name: - # ssh-key - sshKey: - key: sshKey - name: -``` #### Compatibility Warning Be aware that all authentication methods may be available on your git server. More and more servers disable the authentication with username and password. More over on some providers like GitHub, to authenticate with an access token, you must use `password` method instead of `token`. 
Example of secret to authenticate with [GitHub access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#using-a-token-on-the-command-line): -```sh + +```bash kubectl create secret -n generic --from-literal=pass= --from-literal=user= ``` @@ -205,73 +214,71 @@ For other providers, please refer to their official documentation. [Helm OCI registries](https://helm.sh/docs/topics/registries/#enabling-oci-support) are being accessed using a JSON configuration similar to the `~/.docker/config.json` on the local machine. It should be noted, that all OCI server urls need to be prefixed with `oci://`. 1. Create a secret containing our credentials - -```sh -# inside KKP master -kubectl create secret -n docker-registry --docker-server= --docker-username= --docker-password= -kubectl annotate secret apps.kubermatic.k8c.io/secret-type="helm" - -# example -kubectl create secret -n kubermatic docker-registry --docker-server=harbor.example.com/my-project --docker-username=someuser --docker-password=somepaswword oci-cred -kubectl annotate secret oci-cred apps.kubermatic.k8c.io/secret-type="helm" -``` - -2. Reference the secret in the ApplicationDefinition - -```yaml -spec: - versions: - - template: - source: - helm: - chartName: examplechart - chartVersion: 0.1.0 - credentials: - registryConfigFile: - key: .dockerconfigjson # `kubectl create secret docker-registry` stores by default the creds under this key - name: - url: -``` + ```bash + # inside KKP master + kubectl create secret -n docker-registry --docker-server= --docker-username= --docker-password= + kubectl annotate secret apps.kubermatic.k8c.io/secret-type="helm" + + # example + kubectl create secret -n kubermatic docker-registry --docker-server=harbor.example.com/my-project --docker-username=someuser --docker-password=somepaswword oci-cred + kubectl annotate secret oci-cred apps.kubermatic.k8c.io/secret-type="helm" + ``` + +1. Reference the secret in the ApplicationDefinition + ```yaml + spec: + versions: + - template: + source: + helm: + chartName: examplechart + chartVersion: 0.1.0 + credentials: + registryConfigFile: + key: .dockerconfigjson # `kubectl create secret docker-registry` stores by default the creds under this key + name: + url: + ``` ### Helm Userpass Registries To use KKP Applications with a helm [userpass auth](https://helm.sh/docs/topics/registries/#auth) registry, you can configure the following: 1. Create a secret containing our credentials - -```sh -# inside KKP master -kubectl create secret -n generic --from-literal=pass= --from-literal=user= -kubectl annotate secret apps.kubermatic.k8c.io/secret-type="helm" -``` - -2. Reference the secret in the ApplicationDefinition - -```yaml -spec: - versions: - - template: - source: - helm: - chartName: examplechart - chartVersion: 0.1.0 - credentials: - password: - key: pass - name: - username: - key: user - name: - url: -``` + ```bash + # inside KKP master + kubectl create secret -n generic --from-literal=pass= --from-literal=user= + kubectl annotate secret apps.kubermatic.k8c.io/secret-type="helm" + ``` + +1. Reference the secret in the ApplicationDefinition + ```yaml + spec: + versions: + - template: + source: + helm: + chartName: examplechart + chartVersion: 0.1.0 + credentials: + password: + key: pass + name: + username: + key: user + name: + url: + ``` ### Templating Credentials + There is a particular case where credentials may be needed at the templating stage to render the manifests. 
For example, if the template method is `helm` and the source is git. To install the chart into the user cluster, we have to build the chart dependencies. These dependencies may be hosted on a private registry requiring authentication. You can specify the templating credentials by settings `.spec.version[].template.templateCredentials`. It works the same way as source credentials. **Example of template credentials:** + ```yaml spec: versions: @@ -293,7 +300,9 @@ spec: ``` ## Advanced Configuration + ### Default Values + The `.spec.defaultValuesBlock` field describes overrides for manifest-rendering in UI when creating an application. For example if the method is Helm, then this field contains the Helm values. **Example for helm values** @@ -308,20 +317,23 @@ spec: ``` ### Customize Deployment + You can tune how the application will be installed by setting `.spec.defaultDeployOptions`. The options depend on the template method (i.e. `.spec.method`). -*note: `defaultDeployOptions` can be overridden at `ApplicationInstallation` level by settings `.spec.deployOptions`* +*Note: `defaultDeployOptions` can be overridden at `ApplicationInstallation` level by settings `.spec.deployOptions`* #### Customize Deployment For Helm Method + You may tune how Helm deploys the application with the following options: -* `atomic`: corresponds to the `--atomic` flag on Helm CLI. If set, the installation process deletes the installation on failure; the upgrade process rolls back changes made in case of failed upgrade. -* `wait`: corresponds to the `--wait` flag on Helm CLI. If set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as `--timeout` -* `timeout`: corresponds to the `--timeout` flag on Helm CLI. It's time to wait for any individual Kubernetes operation. -* `enableDNS`: corresponds to the `-enable-dns ` flag on Helm CLI. It enables DNS lookups when rendering templates. if you enable this flag, you have to verify that helm template function 'getHostByName' is not being used in a chart to disclose any information you do not want to be passed to DNS servers.(c.f. [CVE-2023-25165](https://github.com/helm/helm/security/advisories/GHSA-pwcw-6f5g-gxf8)) +- `atomic`: corresponds to the `--atomic` flag on Helm CLI. If set, the installation process deletes the installation on failure; the upgrade process rolls back changes made in case of failed upgrade. +- `wait`: corresponds to the `--wait` flag on Helm CLI. If set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as `--timeout` +- `timeout`: corresponds to the `--timeout` flag on Helm CLI. It's time to wait for any individual Kubernetes operation. +- `enableDNS`: corresponds to the `-enable-dns ` flag on Helm CLI. It enables DNS lookups when rendering templates. if you enable this flag, you have to verify that helm template function 'getHostByName' is not being used in a chart to disclose any information you do not want to be passed to DNS servers.(c.f. [CVE-2023-25165](https://github.com/helm/helm/security/advisories/GHSA-pwcw-6f5g-gxf8)) Example: + ```yaml apiVersion: apps.kubermatic.k8c.io/v1 kind: ApplicationDefinition @@ -335,11 +347,11 @@ spec: timeout: "5m" ``` -*note: if `atomic` is true, then wait must be true. 
If `wait` is true then `timeout` must be defined.* - +*Note: if `atomic` is true, then wait must be true. If `wait` is true then `timeout` must be defined.* ## ApplicationDefinition Reference -**The following is an example of ApplicationDefinition, showing all the possible options**. + +**The following is an example of ApplicationDefinition, showing all the possible options** ```yaml {{< readfile "kubermatic/main/data/applicationDefinition.yaml" >}} diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-installation/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-installation/_index.en.md index ac785df1c..add323fe1 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-installation/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-installation/_index.en.md @@ -37,19 +37,23 @@ The `.spec.namespace` defines in which namespace the application will be install The `values` is a schemaless field that describes overrides for manifest-rendering (e.g., if the method is Helm, then this field contains the Helm values.) ## Application Life Cycle + It mainly composes of 2 steps: download the application's source and install or upgrade the application. You can monitor these steps thanks to the conditions in the applicationInstallation's status. - `ManifestsRetrieved` condition indicates if the application's source has been correctly downloaded. - `Ready` condition indicates the installation / upgrade status. It can have four states: + - `{status: "Unknown", reason: "InstallationInProgress"}`: meaning the application installation / upgrade is in progress. - `{status: "True", reason: "InstallationSuccessful"}`: meaning the application installation / upgrade was successful. - `{status: "False", reason: "InstallationFailed"}`: meaning the installation / upgrade has failed. - `{status: "False", reason: "InstallationFailedRetriesExceeded"}`: meaning the max number of retries was exceeded. ### Helm additional information + If the [templating method]({{< ref "../application-definition#templating-method" >}}) is `Helm`, then additional information regarding the install or upgrade is provided under `.status.helmRelease`. Example: + ```yaml status: [...] @@ -83,12 +87,15 @@ status: ``` ## Advanced Configuration + This section is relevant to advanced users. However, configuring advanced parameters may impact performance, load, and workload stability. Consequently, it must be treated carefully. ### Periodic Reconciliation + By default, Applications are only reconciled on changes in the spec, annotations, or the parent application definition. Meaning that if the user manually deletes the workload deployed by the application, nothing will happen until the `ApplicationInstallation` CR changes. You can periodically force the reconciliation of the application by setting `.spec.reconciliationInterval`: + - a value greater than zero forces reconciliation even if no changes occurred on application CR. - a value equal to 0 disables the force reconciliation of the application (default behavior). @@ -99,20 +106,23 @@ Setting this too low can cause a heavy load and disrupt your application workloa The application will not be reconciled if the maximum number of retries is exceeded. ### Customize Deployment + You can tune how the application will be installed by setting `.spec.deployOptions`. 
The options depend on the template method (i.e., `.spec.method`) of the `ApplicationDefinition`. *Note: if `deployOptions` is not set, then it uses the default defined at the `ApplicationDefinition` level (`.spec.defaultDeployOptions`)* #### Customize Deployment for Helm Method + You may tune how Helm deploys the application with the following options: -* `atomic`: corresponds to the `--atomic` flag on Helm CLI. If set, the installation process deletes the installation on failure; the upgrade process rolls back changes made in case of a failed upgrade. -* `wait`: corresponds to the `--wait` flag on Helm CLI. If set, will wait until all Pods, PVCs, Services, and a minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as `--timeout` -* `timeout`: corresponds to the `--timeout` flag on Helm CLI. It's time to wait for any individual Kubernetes operation. -* `enableDNS`: corresponds to the `-enable-dns ` flag on Helm CLI. It enables DNS lookups when rendering templates. If you enable this flag, you have to verify that the Helm template function 'getHostByName' is not being used in a chart to disclose any information you do not want to be passed to DNS servers. (c.f. [CVE-2023-25165](https://github.com/helm/helm/security/advisories/GHSA-pwcw-6f5g-gxf8)) +- `atomic`: corresponds to the `--atomic` flag on Helm CLI. If set, the installation process deletes the installation on failure; the upgrade process rolls back changes made in case of a failed upgrade. +- `wait`: corresponds to the `--wait` flag on Helm CLI. If set, will wait until all Pods, PVCs, Services, and a minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as `--timeout` +- `timeout`: corresponds to the `--timeout` flag on Helm CLI. It's time to wait for any individual Kubernetes operation. +- `enableDNS`: corresponds to the `-enable-dns ` flag on Helm CLI. It enables DNS lookups when rendering templates. If you enable this flag, you have to verify that the Helm template function 'getHostByName' is not being used in a chart to disclose any information you do not want to be passed to DNS servers. (c.f. [CVE-2023-25165](https://github.com/helm/helm/security/advisories/GHSA-pwcw-6f5g-gxf8)) Example: + ```yaml apiVersion: apps.kubermatic.k8c.io/v1 kind: ApplicationInstallation @@ -133,6 +143,7 @@ If it reaches the max number of retries (hardcoded to 5), then the ApplicationIn This behavior reduces the load on the cluster and avoids an infinite loop that disrupts the workload. ## ApplicationInstallation Reference + **The following is an example of ApplicationInstallation, showing all the possible options**. ```yaml diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-templating/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-templating/_index.en.md index 634bbec58..27f78546c 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-templating/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/application-templating/_index.en.md @@ -15,7 +15,7 @@ the exact templating syntax. KKP injects an instance of the `TemplateData` struct into each template. 
The following Go snippet shows the available information: -``` +```text {{< readfile "kubermatic/main/data/applicationdata.go" >}} ``` diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/aikit/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/aikit/_index.en.md index 0bce1a83c..46b57e768 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/aikit/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/aikit/_index.en.md @@ -23,16 +23,16 @@ For more information on AIKit, please refer to the [official documentation](http AIKit is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready (existing cluster) from the Applications tab via UI. -* Select the AIKit application from the Application Catalog. +- Select the AIKit application from the Application Catalog. ![Select AIKit Application](01-select-application-aikit-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for AIKit Application](02-settings-aikit-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the AIKit application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the AIKit application to the user cluster. ![Application Values for AIKit Application](03-applicationvalues-aikit-app.png) -To further configure the `values.yaml`, find more information on the [AIKit Helm Chart Configuration](https://github.com/sozercan/aikit/tree/v0.16.0/charts/aikit) \ No newline at end of file +To further configure the `values.yaml`, find more information on the [AIKit Helm Chart Configuration](https://github.com/sozercan/aikit/tree/v0.16.0/charts/aikit) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/argocd/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/argocd/_index.en.md index 445948042..f516e2b39 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/argocd/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/argocd/_index.en.md @@ -7,7 +7,8 @@ weight = 1 +++ -# What is ArgoCD? +## What is ArgoCD? + ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. ArgoCD follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. Kubernetes manifests can be specified in several ways: @@ -18,23 +19,22 @@ ArgoCD follows the GitOps pattern of using Git repositories as the source of tru - Plain directory of YAML/json manifests - Any custom config management tool configured as a config management plugin - For more information on the ArgoCD, please refer to the [official documentation](https://argoproj.github.io/cd/) -# How to deploy? 
+## How to deploy? ArgoCD is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the ArgoCD application from the Application Catalog. +- Select the ArgoCD application from the Application Catalog. ![Select ArgoCD Application](01-select-application-argocd-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for ArgoCD Application](02-settings-argocd-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the ArgoCD application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the ArgoCD application to the user cluster. ![Application Values for ArgoCD Application](03-applicationvalues-argocd-app.png) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/cert-manager/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/cert-manager/_index.en.md index da9d4dad5..7f797ada8 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/cert-manager/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/cert-manager/_index.en.md @@ -7,7 +7,7 @@ weight = 3 +++ -# What is cert-manager? +## What is cert-manager? cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters. It simplifies the process of obtaining, renewing and using certificates. @@ -17,20 +17,20 @@ It will ensure certificates are valid and up to date, and attempt to renew certi For more information on the cert-manager, please refer to the [official documentation](https://cert-manager.io/) -# How to deploy? +## How to deploy? cert-manager is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the cert-manager application from the Application Catalog. +- Select the cert-manager application from the Application Catalog. ![Select cert-manager Application](01-select-application-cert-manager-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for cert-manager Application](02-settings-cert-manager-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the cert-manager application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the cert-manager application to the user cluster. 
![Application Values for cert-manager Application](03-applicationvalues-cert-manager-app.png) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/cluster-autoscaler/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/cluster-autoscaler/_index.en.md index 013815e66..a7c194c95 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/cluster-autoscaler/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/cluster-autoscaler/_index.en.md @@ -6,24 +6,24 @@ +++ -# What is the Kubernetes Cluster Autoscaler? +## What is the Kubernetes Cluster Autoscaler? Kubernetes Cluster Autoscaler is a tool that automatically adjusts the size of the worker’s node up or down depending on the consumption. This means that the cluster autoscaler, for example, automatically scale up a cluster by increasing the node count when there are not enough node resources for cluster workload scheduling and scale down when the node resources have continuously staying idle, or there are more than enough node resources available for cluster workload scheduling. In a nutshell, it is a component that automatically adjusts the size of a Kubernetes cluster so that all pods have a place to run and there are no unneeded nodes. -# How to deploy? +## How to deploy? Kubernetes Cluster Autoscaler is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Cluster Autoscaler application from the Application Catalog. +- Select the Cluster Autoscaler application from the Application Catalog. ![Select Cluster Autoscaler Application](01-select-application-cluster-autoscaler-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Cluster Autoscaler Application](02-settings-cluster-autoscaler-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Clustet Autoscaler application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Cluster Autoscaler application to the user cluster. ![Application Values for Cluster Autoscaler Application](03-applicationvalues-cluster-autoscaler-app.png) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/falco/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/falco/_index.en.md index 1579a2b95..ee6118a1d 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/falco/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/falco/_index.en.md @@ -7,25 +7,25 @@ +++ -# What is Falco? +## What is Falco? Falco is a cloud-native security tool designed for Linux systems.
It employs custom rules on kernel events, which are enriched with container and Kubernetes metadata, to provide real-time alerts. Falco helps you gain visibility into abnormal behavior, potential security threats, and compliance violations, contributing to comprehensive runtime security. For more information on the Falco, please refer to the [official documentation](https://falco.org/) -# How to deploy? +## How to deploy? Falco is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Falco application from the Application Catalog. +- Select the Falco application from the Application Catalog. ![Select Falco Application](01-select-application-falco-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Falco Application](02-settings-falco-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Falco application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Falco application to the user cluster. To further configure the values.yaml, find more information on the [Falco Helm chart documentation](https://github.com/falcosecurity/charts/tree/master/charts/falco). diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/flux2/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/flux2/_index.en.md index f63b1532e..f261968e1 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/flux2/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/flux2/_index.en.md @@ -7,7 +7,7 @@ weight = 2 +++ -# What is Flux2? +## What is Flux2? Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories and OCI artifacts), automating updates to configuration when there is new code to deploy. @@ -19,19 +19,19 @@ Flux is a Cloud Native Computing Foundation [CNCF](https://www.cncf.io/) project For more information on the Flux2, please refer to the [official documentation](https://github.com/fluxcd-community/helm-charts) -# How to deploy? +## How to deploy? Flux2 is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Flux2 application from the Application Catalog. +- Select the Flux2 application from the Application Catalog. ![Select Flux2 Application](01-select-application-flux2-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. 
![Settings for Flux2 Application](02-settings-flux2-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Flux2 application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Flux2 application to the user cluster. A full list of available Helm values is on [flux2's ArtifactHub page](https://artifacthub.io/packages/helm/fluxcd-community/flux2) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt-operator/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt-operator/_index.en.md index 474b59f4e..f317b8b3d 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt-operator/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt-operator/_index.en.md @@ -7,7 +7,8 @@ weight = 11 +++ -# What is K8sGPT-Operator? +## What is K8sGPT-Operator? + This operator is designed to enable K8sGPT within a Kubernetes cluster. It will allow you to create a custom resource that defines the behaviour and scope of a managed K8sGPT workload. @@ -16,20 +17,20 @@ Analysis and outputs will also be configurable to enable integration into existi For more information on the K8sGPT-Operator, please refer to the [official documentation](https://docs.k8sgpt.ai/reference/operator/overview/) -# How to deploy? +## How to deploy? K8sGPT-Operator is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the K8sGPT-Operator application from the Application Catalog. +- Select the K8sGPT-Operator application from the Application Catalog. ![Select K8sGPT-Operator Application](01-select-application-k8sgpt-operator-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for K8sGPT-Operator Application](02-settings-k8sgpt-operator-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the K8sGPT-Operator application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the K8sGPT-Operator application to the user cluster. 
![Application Values for K8sGPT-Operator Application](03-applicationvalues-k8sgpt-operator-app.png) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt/_index.en.md index 3e8d17dcd..e85a05ba8 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt/_index.en.md @@ -7,7 +7,8 @@ weight = 11 +++ -# What is K8sGPT? +## What is K8sGPT? + K8sGPT gives Kubernetes SRE superpowers to everyone. It is a tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English. It has SRE experience codified into its analyzers and helps to pull out the most relevant information to enrich it with AI. @@ -16,20 +17,20 @@ Out of the box integration with OpenAI, Azure, Cohere, Amazon Bedrock and local For more information on the K8sGPT, please refer to the [official documentation](https://docs.k8sgpt.ai/) -# How to deploy? +## How to deploy? K8sGPT is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the K8sGPT application from the Application Catalog. +- Select the K8sGPT application from the Application Catalog. ![Select K8sGPT Application](01-select-application-k8sgpt-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for K8sGPT Application](02-settings-k8sgpt-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the K8sGPT application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the K8sGPT application to the user cluster. ![Application Values for K8sGPT Application](03-applicationvalues-k8sgpt-app.png) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/kube-vip/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/kube-vip/_index.en.md index 66420c42d..fc5f18cd6 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/kube-vip/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/kube-vip/_index.en.md @@ -7,25 +7,25 @@ weight = 6 +++ -# What is Kube-VIP? +## What is Kube-VIP? Kube-VIP provides Kubernetes clusters with a virtual IP and load balancer for both the control plane (for building a highly-available cluster) and Kubernetes Services of type LoadBalancer without relying on any external hardware or software. For more information on the Kube-VIP, please refer to the [official documentation](https://kube-vip.io/) -# How to deploy? +## How to deploy? Kube-VIP is available as part of the KKP's default application catalog. 
It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Kube-VIP application from the Application Catalog. +- Select the Kube-VIP application from the Application Catalog. ![Select Kube-VIP Application](01-select-application-kube-vip-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Kube-VIP Application](02-settings-kube-vip-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Kube-VIP application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Kube-VIP application to the user cluster. To further configure the values.yaml, find more information on the [Kube-vip Helm chart documentation](https://github.com/kube-vip/helm-charts). diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/kubevirt/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/kubevirt/_index.en.md index 7eb3f4e31..37ba16493 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/kubevirt/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/kubevirt/_index.en.md @@ -7,7 +7,7 @@ weight = 10 +++ -# What is KubeVirt? +## What is KubeVirt? KubeVirt is a virtual machine management add-on for Kubernetes. Its aim is to provide a common ground for virtualization solutions on top of Kubernetes. @@ -21,15 +21,15 @@ As of today KubeVirt can be used to declaratively: For more information on the KubeVirt, please refer to the [official documentation](https://kubevirt.io/) -# How to deploy? +## How to deploy? KubeVirt is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the KubeVirt application from the Application Catalog. +- Select the KubeVirt application from the Application Catalog. ![Select KubeVirt Application](01-select-application-kubevirt-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. 
![Settings for KubeVirt Application](02-settings-kubevirt-app.png) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/_index.en.md index f69bace53..5f467e68d 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/_index.en.md @@ -15,18 +15,18 @@ LocalAI is an open-source alternative to OpenAI’s API, designed to run AI mode Local AI is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready (existing cluster) from the Applications tab via UI. -* Select the Local AI application from the Application Catalog. +- Select the Local AI application from the Application Catalog. ![Select Local AI Application](01-select-local-ai-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Local AI Application](02-settings-local-ai-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the LocalAI application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the LocalAI application to the user cluster. ![Application Values for LocalAI Application](03-applicationvalues-local-ai-app.png) To further configure the `values.yaml`, find more information on the [LocalAI Helm Chart Configuration](https://github.com/go-skynet/helm-charts/tree/main/charts/local-ai) -Please take care about the size of the default models which can vary from the default configuration. \ No newline at end of file +Please take care about the size of the default models which can vary from the default configuration. diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/metallb/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/metallb/_index.en.md index 88c79c916..d1f64d83f 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/metallb/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/metallb/_index.en.md @@ -7,26 +7,25 @@ weight = 4 +++ -# What is MetalLB? +## What is MetalLB? MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. - For more information on the MetalLB, please refer to the [official documentation](https://metallb.universe.tf/) -# How to deploy? +## How to deploy? MetalLB is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the MetalLB application from the Application Catalog. 
+- Select the MetalLB application from the Application Catalog. ![Select MetalLB Application](01-select-application-metallb-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for MetalLB Application](02-settings-metallb-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the MetalLB application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the MetalLB application to the user cluster. To further configure the values.yaml, find more information on the [MetalLB Helm chart documentation](https://github.com/metallb/metallb/tree/main/charts/metallb). diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/_index.en.md index 9f54a2063..572292997 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/_index.en.md @@ -7,25 +7,25 @@ weight = 5 +++ -# What is Nginx? +## What is Nginx? Nginx is an ingress-controller for Kubernetes using NGINX as a reverse proxy and load balancer. For more information on the Nginx, please refer to the [official documentation](https://kubernetes.github.io/ingress-nginx/) -# How to deploy? +## How to deploy? Nginx is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Nginx application from the Application Catalog. +- Select the Nginx application from the Application Catalog. ![Select Nginx Application](01-select-application-nginx-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Nginx Application](02-settings-nginx-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Nginx application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Nginx application to the user cluster. To further configure the values.yaml, find more information on the [Nginx Helm chart documentation](https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx). 
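As an illustration of what the Application values step accepts for the Nginx application, a minimal override might look like the sketch below. The key names (`controller.replicaCount`, `controller.service.type`, `controller.ingressClassResource.default`) follow the upstream ingress-nginx Helm chart and should be verified against the chart version bundled with your KKP release.

```yaml
# Hedged example: override values for the Nginx (ingress-nginx) application.
# Verify key names against the ingress-nginx chart version shipped with your KKP release.
controller:
  replicaCount: 2            # run two controller replicas for basic redundancy
  service:
    type: LoadBalancer       # expose the controller through a LoadBalancer Service
  ingressClassResource:
    default: true            # mark this controller's IngressClass as the cluster default
```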
diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/nvidia-gpu-operator/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/nvidia-gpu-operator/_index.en.md index b0f19d963..a86820c73 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/nvidia-gpu-operator/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/nvidia-gpu-operator/_index.en.md @@ -7,24 +7,25 @@ weight = 12 +++ -# What is Nvidia GPU Operator? +## What is Nvidia GPU Operator? + The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPU. For more information on the Nvidia GPU Operator, please refer to the [official documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html) -# How to deploy? +## How to deploy? Nvidia GPU Operator is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Nvidia GPU Operator application from the Application Catalog. +- Select the Nvidia GPU Operator application from the Application Catalog. ![Select Nvidia GPU Operator Application](01-select-application-nvidia-gpu-operator-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Nvidia GPU Operator Application](02-settings-nvidia-gpu-operator-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Nvidia GPU Operator application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Nvidia GPU Operator application to the user cluster. To further configure the values.yaml, find more information on the [Nvidia GPU Operator Helm chart documentation](https://github.com/NVIDIA/gpu-operator/) diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy-operator/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy-operator/_index.en.md index fd7d1c713..27723d0cb 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy-operator/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy-operator/_index.en.md @@ -7,7 +7,7 @@ weight = 9 +++ -# What is Trivy Operator? +## What is Trivy Operator? The Trivy Operator leverages Trivy to continuously scan your Kubernetes cluster for security issues. The scans are summarised in security reports as Kubernetes Custom Resources, which become accessible through the Kubernetes API. The Operator does this by watching Kubernetes for state changes and automatically triggering security scans in response. For example, a vulnerability scan is initiated when a new Pod is created. 
This way, users can find and view the risks that relate to different resources in a Kubernetes-native way. @@ -15,23 +15,22 @@ Trivy Operator can be deployed and used for scanning the resources deployed on t For more information on the Trivy Operator, please refer to the [official documentation](https://aquasecurity.github.io/trivy-operator/latest/) -# How to deploy? +## How to deploy? Trivy Operator is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Trivy Operator application from the Application Catalog. +- Select the Trivy Operator application from the Application Catalog. ![Select Trivy Operator Application](01-select-application-trivy-operator-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Trivy Operator Application](02-settings-trivy-operator-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy Operator application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy Operator application to the user cluster. ![Application Values for Trivy Operator Application](03-applicationvalues-trivy-operator-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy Operator application to the user cluster. To further configure the values.yaml, find more information on the [Trivy Operator Helm chart documentation](https://github.com/aquasecurity/trivy-operator/tree/main/deploy/helm). diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy/_index.en.md index d70dcb4fa..5e052daa8 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy/_index.en.md @@ -7,7 +7,7 @@ weight = 8 +++ -# What is Trivy? +## What is Trivy? Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can find those issues. @@ -32,19 +32,19 @@ Trivy supports most popular programming languages, operating systems, and platfo For more information on the Trivy, please refer to the [official documentation](https://aquasecurity.github.io/trivy/v0.49/docs/) -# How to deploy? +## How to deploy? Trivy is available as part of the KKP's default application catalog.
It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Trivy application from the Application Catalog. +- Select the Trivy application from the Application Catalog. ![Select Trivy Application](01-select-application-trivy-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Trivy Application](02-settings-trivy-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy application to the user cluster. To further configure the values.yaml, find more information on the [Trivy Helm chart documentation](https://github.com/aquasecurity/trivy/tree/main/helm/trivy). diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/cluster-templates/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/cluster-templates/_index.en.md index 792fb6880..a62adbfee 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/cluster-templates/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/cluster-templates/_index.en.md @@ -7,6 +7,7 @@ weight = 1 +++ ## Understanding Cluster Templates + Cluster templates are designed to standardize and simplify the creation of Kubernetes clusters. A cluster template is a reusable cluster template object. It guarantees that every cluster it provisions from the template is uniform and consistent in the way it is produced. @@ -15,22 +16,26 @@ A cluster template allows you to specify a provider, node layout, and configurat via Kubermatic API or UI. ## Scope + The cluster templates are accessible from different levels. - - global: (managed by admin user) visible to everyone - - project: accessible to the project users - - user: accessible to the template owner in every project, where the user is in the owner or editor group + +- global: (managed by admin user) visible to everyone +- project: accessible to the project users +- user: accessible to the template owner in every project, where the user is in the owner or editor group Template management is available from project level. The regular user with owner or editor privileges can create template in project or user scope. The admin user can create a template for every project in every scope. Template in `global` scope can be created only by admins. ## Credentials + Creating a cluster from the template requires credentials to authenticate with the cloud provider. During template creation, the credentials are stored in the secret which is assigned to the cluster template. The credential secret is independent. It's just a copy of credentials specified manually by the user or taken from the preset. Any credentials update must be processed on the cluster template. ## Creating and Using Templates + Cluster templates can be created from scratch to pre-define the cluster configuration. The whole process is done in the UI wizard for the cluster creation. 
During the cluster creation process, the end user can pick a template and specify the desired number of cluster instances. diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/kkp-security/pod-security-policy/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/kkp-security/pod-security-policy/_index.en.md index 30bdc650c..4b120a99e 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/kkp-security/pod-security-policy/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/kkp-security/pod-security-policy/_index.en.md @@ -6,7 +6,7 @@ weight = 130 [Pod Security Policy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/), (PSP), is a key security feature in Kubernetes. It allows cluster administrators to set [granular controls](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-reference) over security sensitive aspects of pod and container specs. -PSP is implemented using an optional admission controller that's disabled by default. It's important to have an initial authorizing policy on the cluster _before_ enabling the PSP admission controller. +PSP is implemented using an optional admission controller that's disabled by default. It's important to have an initial authorizing policy on the cluster *before* enabling the PSP admission controller. This is also true for existing clusters. Without an authorizing policy, the controller will prevent all pods from being created on the cluster. PSP objects are cluster-level objects. They define a set of conditions that a pod must pass to be accepted by the PSP admission controller. The most common way to apply this is using RBAC. For a pod to use a specific Pod Security Policy, the pod should run using a Service Account or a User that has `use` permission to that particular Pod Security policy. @@ -29,12 +29,12 @@ For existing clusters, it's also possible to enable/disable PSP: ![Edit Cluster](@/images/ui/psp-edit.png?classes=shadow,border "Edit Cluster") - {{% notice note %}} Activating Pod Security Policy will mean that a lot of Pod specifications, Operators and Helm charts will not work out of the box. KKP will apply a default authorizing policy to prevent this. Additionally, all KKP user-clusters are configured to be compatible with enabled PSPs. Make sure that you know the consequences of activating this feature on your workloads. {{% /notice %}} ### Datacenter Level Support + It is also possible to enforce enabling Pod Security Policies on the datacenter level. In this case, user cluster level configuration will be ignored, and PSP will be enabled for all user clusters in the datacenter. To enable this, you will need to update your [Seed Cluster CRD]({{< ref "../../../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}), and set `enforcePodSecurityPolicy` to `true` in the datacenter spec. diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/networking/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/networking/_index.en.md index 9afdaa93d..c291cfba4 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/networking/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/networking/_index.en.md @@ -13,7 +13,6 @@ The [expose strategy]({{< ref "../../../../tutorials-howtos/networking/expose-st This section explains how the connection between user clusters and the control plane is established, as well as the general networking concept in KKP. 
- ![KKP Network](images/network.png?classes=shadow,border "This diagram illustrates the necessary connections for KKP.") The following diagrams illustrate all available [expose strategy]({{< ref "../../../../tutorials-howtos/networking/expose-strategies" >}}) available in KKP. @@ -33,11 +32,11 @@ Any port numbers marked with * are overridable, so you will need to ensure any c ** Default port range for [NodePort Services](https://kubernetes.io/docs/concepts/services-networking/service/). All ports listed are using TCP. -#### Worker Nodes +### Worker Nodes Worker nodes in user clusters must have full connectivity to each other to ensure the functionality of various components, including different Container Network Interfaces (CNIs) and Container Storage Interfaces (CSIs) supported by KKP. -#### API Server +### API Server For each user cluster, an API server is deployed in the Seed and exposed depending on the chosen expose strategy. Its purpose is not only to make the apiserver accessible to users, but also to ensure the proper functioning of the cluster. @@ -46,7 +45,7 @@ In addition, the apiserver is used for [in-cluster API](https://kubernetes.io/do In Tunneling mode, to forward traffic to the correct apiserver, an envoy proxy is deployed on each node, serving as an endpoint for the Kubernetes cluster service to proxy traffic to the apiserver. -#### Kubernetes Konnectivity proxy +### Kubernetes Konnectivity proxy To enable Kubernetes to work properly, parts of the control plane need to be connected to the internal Kubernetes cluster network. This is done via the [konnectivity proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/), which is deployed for each cluster. diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/resource-quotas/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/resource-quotas/_index.en.md index 5b2ac7022..6406889c3 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/resource-quotas/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/resource-quotas/_index.en.md @@ -10,6 +10,7 @@ Resource Quotas in KKP allow administrators to set quotas on the amount of resou subject which is supported is Project, so the resource quotas currently limit the amount of resources that can be used project-wide. The resources in question are the resources of the user cluster: + - CPU - the cumulated CPU used by the nodes on all clusters. - Memory - the cumulated RAM used by the nodes on all clusters. - Storage - the cumulated disk size of the nodes on all clusters. @@ -21,12 +22,12 @@ This feature is available in the EE edition only. That one just controls the size of the machines suggested to users in the KKP Dashboard during the cluster creation. {{% /notice %}} - ## Setting up Resource Quotas The resource quotas are managed by administrators either through the KKP UI/API or through the Resource Quota CRDs. Example ResourceQuota: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: ResourceQuota @@ -53,6 +54,7 @@ set in the ResourceQuota is done automatically by the API. ## Calculating Quota Usage The ResourceQuota has 2 status fields: + - `globalUsage` which shows the resource usage across all seeds - `localUsage` which shows the resource usage on the local seed @@ -98,7 +100,6 @@ resulting K8s Node `.status.capacity`. 
| Anexia | CPUs (set in Machine spec) | Memory (from Machine spec) | DiskSize (from Machine spec) | | VMWare Cloud Director | CPU * CPUCores (Machine spec) | MemoryMB (from Machine spec) | DiskSizeGB (from Machine spec) | - ## Enforcing Quotas The quotas are enforced through a validating webhook on Machine resources in the user clusters. This means that the quota validation diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/service-account/service-account-token-projection/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/service-account/service-account-token-projection/_index.en.md index c381e7d3c..1c38fb091 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/service-account/service-account-token-projection/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/service-account/service-account-token-projection/_index.en.md @@ -12,14 +12,17 @@ is used by some applications to enhance security when using service accounts (e.g. [Istio uses it by default](https://istio.io/latest/docs/ops/best-practices/security/#configure-third-party-service-account-tokens) as of version v1.3). As of KKP version v2.16, KKP supports Service Account Token Volume Projection as follows: + - in clusters with Kubernetes version v1.20+, it is enabled by default with the default configuration as described below, - in clusters with Kubernetes below v1.20, it has to be explicitly enabled. ## Prerequisites + `TokenRequest` and `TokenRequestProjection` Kubernetes feature gates have to be enabled (enabled by default since Kubernetes v1.11 and v1.12 respectively). ## Configuration + In KKP v2.16, the Service Account Token Volume Projection feature can be configured only via KKP API. The `Cluster` API object provides the `serviceAccount` field of the `ServiceAccountSettings` type, with the following definition: @@ -58,8 +61,8 @@ The following table summarizes the supported properties of the `ServiceAccountSe | `issuer` | Identifier of the service account token issuer. The issuer will assert this identifier in `iss` claim of issued tokens. | The URL of the apiserver, e.g., `https://`. | | `apiAudiences` | Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. Multiple audiences can be separated by comma (`,`). | Equal to `issuer`. | - ### Example: Configuration using a Request to KKP API + To configure the feature in an existing cluster, execute a `PATCH` request to URL: `https:///api/v1/projects//dc//clusters/` @@ -78,8 +81,8 @@ with the following content: You can use the Swagger UI at `https:///rest-api` to construct and send the API request. - ### Example: Configuration using Cluster CR + Alternatively, the feature can be also configured via the `Cluster` Custom Resource in the KKP seed cluster. 
For example, to enable the feature in an existing cluster via kubectl, edit the `Cluster` CR with `kubectl edit cluster ` and add the following configuration: diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/service-account/using-service-account/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/service-account/using-service-account/_index.en.md index 850b8bb63..89699e749 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/service-account/using-service-account/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/service-account/using-service-account/_index.en.md @@ -37,35 +37,39 @@ You can also change a token name. It is possible to delete a service account tok You can see when a token was created and when will expire. ## Using Service Accounts with KKP + You can control service account access in your project by provided groups. There are three basic access level groups: + - viewers - editors - project managers -#### Viewers +### Viewers **A viewer can:** - - list projects - - get project details - - get project SSH keys - - list clusters - - get cluster details - - get cluster resources details + +- list projects +- get project details +- get project SSH keys +- list clusters +- get cluster details +- get cluster resources details Permissions for read-only actions that do not affect state, such as viewing. + - viewers are not allowed to interact with service accounts (User) - viewers are not allowed to interact with members of a project (UserProjectBinding) - -#### Editors +### Editors **All viewer permissions, plus permissions to create, edit & delete cluster** - - editors are not allowed to delete a project - - editors are not allowed to interact with members of a project (UserProjectBinding) - - editors are not allowed to interact with service accounts (User) -#### Project Managers +- editors are not allowed to delete a project +- editors are not allowed to interact with members of a project (UserProjectBinding) +- editors are not allowed to interact with service accounts (User) + +### Project Managers **The `project managers` is service account specific group. Which allows** @@ -90,6 +94,6 @@ Authorization: Bearer aaa.bbb.ccc You can also use `curl` command to reach API endpoint: -``` +```bash curl -i -H "Accept: application/json" -H "Authorization: Bearer aaa.bbb.ccc" -X GET http://localhost:8080/api/v2/projects/jnpllgp66z/clusters ``` diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/_index.en.md index d63f1765c..8877d168d 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/_index.en.md @@ -9,15 +9,18 @@ Get information on how to get the most ouf of the Kubermatic Dashboard, the offi ![Admin Panel](dashboard.png?height=400px&classes=shadow,border "Kubermatic Dashboard") ## Preparing New Themes + A set of [tutorials]({{< ref "./theming" >}}) that will teach you how to prepare custom themes and apply them to be used by the KKP Dashboard. ## Admin Panel + The Admin Panel is a place for the Kubermatic administrators where they can manage the global settings that directly impact all Kubermatic users. Check out the [Admin Panel]({{< ref "../../../../tutorials-howtos/administration/admin-panel" >}}) section for more details. 
## Theming + Theme and customize the KKP Dashboard according to your needs, but be aware that theming capabilities are available in the Enterprise Edition only. Check out [Customizing the Dashboard]({{< ref "../../../../tutorials-howtos/dashboard-customization" >}}) section diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/theming/with-src/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/theming/with-src/_index.en.md index 4ad79d5d1..2114012e6 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/theming/with-src/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/theming/with-src/_index.en.md @@ -20,20 +20,20 @@ All available themes can be found inside `src/assets/themes` directory. Follow t - `name` - refers to the theme file name stored inside `assets/themes` directory. - `displayName` - will be used by the theme picker available in the `Account` view to display a new theme. - `isDark` - defines the icon to be used by the theme picker (sun/moon). - ```json - { - "openstack": { - "wizard_use_default_user": false - }, - "themes": [ - { - "name": "custom", - "displayName": "Custom", - "isDark": false - } - ] - } - ``` + ```json + { + "openstack": { + "wizard_use_default_user": false + }, + "themes": [ + { + "name": "custom", + "displayName": "Custom", + "isDark": false + } + ] + } + ``` - Make sure that theme is registered in the `angular.json` file before running the application locally. It is done for `custom` theme by default. - Run the application using `npm start`, open the `Account` view under `User settings`, select your new theme and update `custom.scss` according to your needs. diff --git a/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/theming/without-src/_index.en.md b/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/theming/without-src/_index.en.md index cab747001..366605ae2 100644 --- a/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/theming/without-src/_index.en.md +++ b/content/kubermatic/main/architecture/concept/kkp-concepts/user-interface/theming/without-src/_index.en.md @@ -6,55 +6,62 @@ weight = 50 +++ ### Preparing a New Theme Without Access to the Sources + In this case the easiest way of preparing a new theme is to download one of the existing themes light/dark. This can be done in a few different ways. We'll describe here two possible ways of downloading enabled themes. #### Download Theme Using the Browser + 1. Open KKP UI -2. Open `Developer tools` and navigate to `Sources` tab. -3. There should be a CSS file of a currently selected theme available to be downloaded inside `assts/themes` directory. +1. Open `Developer tools` and navigate to `Sources` tab. +1. There should be a CSS file of a currently selected theme available to be downloaded inside `assts/themes` directory. ![Dev tools](@/images/ui/developer-tools.png?height=300px&classes=shadow,border "Dev tools") #### Download Themes Directly From the KKP Dashboard container + Assuming that you know how to exec into the container and copy resources from/to it, themes can be simply copied over to your machine from the running KKP Dashboard container. They are stored inside the container in `dist/assets/themes` directory. 
##### Kubernetes + Assuming that the KKP Dashboard pod name is `kubermatic-dashboard-5b96d7f5df-mkmgh` you can copy themes to your `${HOME}/themes` directory using below command: + ```bash kubectl -n kubermatic cp kubermatic-dashboard-5b96d7f5df-mkmgh:/dist/assets/themes ~/themes ``` ##### Docker + Assuming that the KKP Dashboard container name is `kubermatic-dashboard` you can copy themes to your `${HOME}/themes` directory using below command: + ```bash docker cp kubermatic-dashboard:/dist/assets/themes/. ~/themes ``` #### Using Compiled Theme to Prepare a New Theme + Once you have a base theme file ready, we can use it to prepare a new theme. To easier understand the process, let's assume that we have downloaded a `light.css` file and will be preparing a new theme called `solar.css`. 1. Rename `light.css` to `solar.css`. -2. Update `solar.css` file according to your needs. Anything in the file can be changed or new rules can be added. +1. Update `solar.css` file according to your needs. Anything in the file can be changed or new rules can be added. In case you are changing colors, remember to update it in the whole file. -3. Mount new `solar.css` file to `dist/assets/themes` directory inside the application container. **Make sure not to override whole directory.** -4. Update `config.json` file inside `dist/config` directory and register the new theme. - - ```json - { - "openstack": { - "wizard_use_default_user": false - }, - "themes": [ - { - "name": "solar", - "displayName": "Solar", - "isDark": true - } - ] - } - ``` +1. Mount new `solar.css` file to `dist/assets/themes` directory inside the application container. **Make sure not to override whole directory.** +1. Update `config.json` file inside `dist/config` directory and register the new theme. + ```json + { + "openstack": { + "wizard_use_default_user": false + }, + "themes": [ + { + "name": "solar", + "displayName": "Solar", + "isDark": true + } + ] + } + ``` That's it. After restarting the application, theme picker in the `Account` view should show your new `Solar` theme. diff --git a/content/kubermatic/main/architecture/feature-stages/_index.en.md b/content/kubermatic/main/architecture/feature-stages/_index.en.md index f897eec92..ece457538 100644 --- a/content/kubermatic/main/architecture/feature-stages/_index.en.md +++ b/content/kubermatic/main/architecture/feature-stages/_index.en.md @@ -15,7 +15,6 @@ weight = 4 - The whole feature can be revoked immediately and without notice - Recommended only for testing and providing feedback - ## Beta / Technical Preview - Targeted users: experienced KKP administrators @@ -27,7 +26,6 @@ weight = 4 - The whole feature can still be revoked, but with prior notice and respecting a deprecation cycle - Recommended for only non-business-critical uses, testing usability, performance, and compatibility in real-world environments - ## General Availability (GA) - Users: All users diff --git a/content/kubermatic/main/architecture/iam-role-based-access-control/_index.en.md b/content/kubermatic/main/architecture/iam-role-based-access-control/_index.en.md index 68cebea75..8719ec556 100644 --- a/content/kubermatic/main/architecture/iam-role-based-access-control/_index.en.md +++ b/content/kubermatic/main/architecture/iam-role-based-access-control/_index.en.md @@ -11,23 +11,26 @@ By default, KKP provides [Dex](#authentication-with-dex) as OIDC provider, but y please refer to the [OIDC provider]({{< ref "../../tutorials-howtos/oidc-provider-configuration" >}}) chapter. 
## Authentication with Dex + [Dex](https://dexidp.io/) is an identity service that uses OIDC to drive authentication for KKP components. It acts as a portal to other identity providers through [connectors](https://dexidp.io/docs/connectors/). This lets Dex defer authentication to these connectors. Multiple connectors may be configured at the same time. Most popular are: -* [GitHub](https://dexidp.io/docs/connectors/github/) -* [Google](https://dexidp.io/docs/connectors/google/) -* [LDAP](https://dexidp.io/docs/connectors/ldap/) -* [Microsoft](https://dexidp.io/docs/connectors/microsoft/) -* [OAuth 2.0](https://dexidp.io/docs/connectors/oauth/) -* [OpenID Connect](https://dexidp.io/docs/connectors/oidc/) -* [SAML2.0](https://dexidp.io/docs/connectors/saml/) + +- [GitHub](https://dexidp.io/docs/connectors/github/) +- [Google](https://dexidp.io/docs/connectors/google/) +- [LDAP](https://dexidp.io/docs/connectors/ldap/) +- [Microsoft](https://dexidp.io/docs/connectors/microsoft/) +- [OAuth 2.0](https://dexidp.io/docs/connectors/oauth/) +- [OpenID Connect](https://dexidp.io/docs/connectors/oidc/) +- [SAML2.0](https://dexidp.io/docs/connectors/saml/) Check out the [Dex documentation](https://dexidp.io/docs/connectors/) for a list of available providers and how to setup their configuration. To configure Dex connectors, edit `.dex.connectors` in the `values.yaml` Example to update or set up Github connector: -``` + +```yaml dex: ingress: [...] @@ -50,17 +53,18 @@ And apply the changes to the cluster: ``` ## Authorization + Authorization is managed at multiple levels to ensure users only have access to authorized resources. KKP uses its own authorization system to control access to various resources within the platform, including projects and clusters. Administrators and project owners define and manage these policies and provide specific access control rules for users and groups. - The Kubernetes Role-Based Access Control (RBAC) system is also used to control access to user cluster level resources, such as namespaces, pods, and services. Please refer to [Cluster Access]({{< ref "../../tutorials-howtos/cluster-access" >}}) to configure RBAC. ### Kubermatic Kubernetes Platform (KKP) Users + There are two kinds of users in KKP: **admin** and **non-admin** users. **Admin** users can manage settings that impact the whole Kubermatic installation and users. For example, they can set default diff --git a/content/kubermatic/main/architecture/known-issues/_index.en.md b/content/kubermatic/main/architecture/known-issues/_index.en.md index a895a4d47..ec2a809be 100644 --- a/content/kubermatic/main/architecture/known-issues/_index.en.md +++ b/content/kubermatic/main/architecture/known-issues/_index.en.md @@ -15,7 +15,6 @@ This page documents the list of known issues and possible work arounds/solutions For oidc authentication to user cluster there is always the same issuer used. This leads to invalidation of refresh tokens when a new authentication happens with the same user because existing refresh tokens for the same user/client pair are invalidated when a new one is requested. - ### Root Cause By default it is only possible to have one refresh token per user/client pair in dex for security reasons. There is an open issue regarding this in the [upstream repository](https://github.com/dexidp/dex/issues/981). The refresh token has by default also no expiration set. This is useful to stay logged in over a longer time because the id_token can be refreshed unless the refresh token is invalidated. 
@@ -26,7 +25,7 @@ One example would be to download a kubeconfig of one cluster and then of another You can either change this in dex configuration by setting `userIDKey` to `jti` in the connector section or you could configure an other oidc provider which supports multiple refresh tokens per user-client pair like keycloak does by default. -#### dex +#### Dex The following yaml snippet is an example how to configure an oidc connector to keep the refresh tokens. @@ -45,17 +44,17 @@ The following yaml snippet is an example how to configure an oidc connector to k userNameKey: email ``` -#### external provider +#### External Provider For an explanation how to configure an other oidc provider than dex take a look at [oidc-provider-configuration]({{< ref "../../tutorials-howtos/oidc-provider-configuration" >}}). -### security implications regarding dex solution +### Security Implications Regarding the Dex Solution For dex this has some implications. With this configuration a token is generated for each user session. The number of objects stored in kubernetes regarding refresh tokens has no limit anymore. The principle that one refresh belongs to one user/client pair is a security consideration which would be ignored in that case. The only way to revoke a refresh token is then to do it via grpc api which is not exposed by default or by manually deleting the related refreshtoken resource in the kubernetes cluster. ## API server Overload Leading to Instability in Seed due to Konnectivity -Issue: https://github.com/kubermatic/kubermatic/issues/13321 +Issue: <https://github.com/kubermatic/kubermatic/issues/13321> Status: Fixed diff --git a/content/kubermatic/main/architecture/monitoring-logging-alerting/master-seed/_index.en.md b/content/kubermatic/main/architecture/monitoring-logging-alerting/master-seed/_index.en.md index f0c3d49bf..e49cdbfeb 100644 --- a/content/kubermatic/main/architecture/monitoring-logging-alerting/master-seed/_index.en.md +++ b/content/kubermatic/main/architecture/monitoring-logging-alerting/master-seed/_index.en.md @@ -32,10 +32,11 @@ When working with Grafana please keep in mind, that **ALL CHANGES** done using t Depending on how user clusters are used, disk usage for Prometheus can vary greatly. As the operator you should however plan for -* 100 MiB used by the seed-level Prometheus for each user cluster -* 50-300 MiB used by the user-level Prometheus, depending on its WAL size. +- 100 MiB used by the seed-level Prometheus for each user cluster +- 50-300 MiB used by the user-level Prometheus, depending on its WAL size. These values can also vary, if you tweak the retention periods. ## Installation + Please follow the [Installation of the Master / Seed MLA Stack Guide]({{< relref "../../../tutorials-howtos/monitoring-logging-alerting/master-seed/installation/" >}}). diff --git a/content/kubermatic/main/architecture/monitoring-logging-alerting/user-cluster/_index.en.md b/content/kubermatic/main/architecture/monitoring-logging-alerting/user-cluster/_index.en.md index 38c5ad986..206380b0a 100644 --- a/content/kubermatic/main/architecture/monitoring-logging-alerting/user-cluster/_index.en.md +++ b/content/kubermatic/main/architecture/monitoring-logging-alerting/user-cluster/_index.en.md @@ -25,11 +25,13 @@ Unlike the [Master / Seed Cluster MLA stack]({{< ref "../master-seed/">}}), it i ![Monitoring architecture diagram](architecture.png) ### User Cluster Components + When User Cluster MLA is enabled in a KKP user cluster, it automatically deploys two components into it - Prometheus and Loki Promtail.
These components are configured to stream (remote write) the logs and metrics into backends running in the Seed Cluster (Cortex for metrics and Loki-Distributed for logs). The connection between the user cluster components and Seed cluster components is secured by HTTPS with mutual TLS certificate authentication. This makes the MLA setup in user clusters very simple and low footprint, as no MLA data is stored in the user clusters and user clusters are not involved when doing data lookups. Data of all user clusters can be accessed from a central place (Grafana UI) in the Seed Cluster. ### Seed Cluster Components + As mentioned above, metrics and logs data from all user clusters are streamed into their Seed Cluster, where they are processed and stored in a long term object store (Minio). Data can be looked up in a multi-tenant Grafana instance which is running in the Seed, and provides each user a view to metrics and logs of all clusters which they have privileges to access in the KKP platform. **MLA Gateway**: @@ -47,4 +49,5 @@ The backend for processing, storing and retrieving metrics data from user Cluste The backend for processing, storing and retrieving logs data from user Cluster Clusters is based on the [Loki](https://grafana.com/docs/loki/latest/) - distributed deployment. It allows horizontal scalability of individual Loki components that can be fine-tuned to fit any use-case. For more details about Loki architecture, please refer to the [Loki Architecture](https://grafana.com/docs/loki/latest/architecture/) documentation. ## Installation + Please follow the [User Cluster MLA Stack Admin Guide]({{< relref "../../../tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/" >}}). diff --git a/content/kubermatic/main/architecture/requirements/cluster-requirements/_index.en.md b/content/kubermatic/main/architecture/requirements/cluster-requirements/_index.en.md index ed1461203..8aa7eb68e 100644 --- a/content/kubermatic/main/architecture/requirements/cluster-requirements/_index.en.md +++ b/content/kubermatic/main/architecture/requirements/cluster-requirements/_index.en.md @@ -6,39 +6,43 @@ weight = 15 +++ ## Master Cluster + The Master Cluster hosts the KKP components and might also act as a seed cluster and host the master components of user clusters (see [Architecture]({{< ref "../../../architecture/">}})). Therefore, it should run in a highly-available setup with at least 3 master nodes and 3 worker nodes. **Minimal Requirements:** -* Six or more machines running one of: - * Ubuntu 20.04+ - * Debian 10 - * RHEL 7 - * Flatcar -* 4 GB or more of RAM per machine (any less will leave little room for your apps) -* 2 CPUs or more + +- Six or more machines running one of: + - Ubuntu 20.04+ + - Debian 10 + - RHEL 7 + - Flatcar +- 4 GB or more of RAM per machine (any less will leave little room for your apps) +- 2 CPUs or more ## User Cluster + The User Cluster is a Kubernetes cluster created and managed by KKP. The exact requirements may depend on the type of workloads that will be running in the user cluster. **Minimal Requirements:** -* One or more machines running one of: - * Ubuntu 20.04+ - * Debian 10 - * RHEL 7 - * Flatcar -* 2 GB or more of RAM per machine (any less will leave little room for your apps) -* 2 CPUs or more -* Full network connectivity between all machines in the cluster (public or private network is fine) -* Unique hostname, MAC address, and product\_uuid for every node. 
See more details in the next [**topic**](#Verify-the-MAC-Address-and-product-uuid-Are-Unique-for-Every-Node). -* Certain ports are open on your machines. See below for more details. -* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. + +- One or more machines running one of: + - Ubuntu 20.04+ + - Debian 10 + - RHEL 7 + - Flatcar +- 2 GB or more of RAM per machine (any less will leave little room for your apps) +- 2 CPUs or more +- Full network connectivity between all machines in the cluster (public or private network is fine) +- Unique hostname, MAC address, and product\_uuid for every node. See more details in the next [**topic**](#Verify-the-MAC-Address-and-product-uuid-Are-Unique-for-Every-Node). +- Certain ports are open on your machines. See below for more details. +- Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. ## Verify Node Uniqueness You will need to verify that MAC address and `product_uuid` are unique on every node. This should usually be the case but might not be, especially for on-premise providers. -* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a` -* The product\_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid` +- You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a` +- The product\_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid` It is very likely that hardware devices will have unique addresses, although some virtual machines may have identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster. If these values are not unique to each node, the installation process may [fail](https://github.com/kubernetes/kubeadm/issues/31). diff --git a/content/kubermatic/main/architecture/requirements/storage/_index.en.md b/content/kubermatic/main/architecture/requirements/storage/_index.en.md index 832624b73..8c8e90ef4 100644 --- a/content/kubermatic/main/architecture/requirements/storage/_index.en.md +++ b/content/kubermatic/main/architecture/requirements/storage/_index.en.md @@ -6,7 +6,7 @@ weight = 15 +++ -Running KKP requires at least one persistent storage layer that can be accessed via a Kubernetes [CSI driver](https://kubernetes-csi.github.io/docs/drivers.html). The Kubermatic Installer attempts to discover pre-existing CSI drivers for known cloud providers to create a suitable _kubermatic-fast_ `StorageClass`. +Running KKP requires at least one persistent storage layer that can be accessed via a Kubernetes [CSI driver](https://kubernetes-csi.github.io/docs/drivers.html). The Kubermatic Installer attempts to discover pre-existing CSI drivers for known cloud providers to create a suitable *kubermatic-fast* `StorageClass`. In particular for setups in private datacenters, setting up a dedicated storage layer might be necessary to reach adequate performance. Make sure to configure and install the corresponding CSI driver (from the list linked above) for your storage solution onto the KKP Seed clusters before installing KKP. 
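If no pre-existing CSI driver is detected, the `kubermatic-fast` StorageClass has to be created manually. The sketch below uses the AWS EBS CSI driver purely as an example provisioner; replace the provisioner and parameters with those of the CSI driver you actually installed on the Seed cluster.

```yaml
# Illustrative only: a kubermatic-fast StorageClass backed by the AWS EBS CSI driver.
# Substitute the provisioner and parameters of the CSI driver deployed on your Seed cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubermatic-fast
provisioner: ebs.csi.aws.com      # replace with your storage solution's CSI provisioner
parameters:
  type: gp3                       # driver-specific parameter; consult your CSI driver docs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```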
diff --git a/content/kubermatic/main/architecture/supported-providers/azure/_index.en.md b/content/kubermatic/main/architecture/supported-providers/azure/_index.en.md index e432e279d..9687f2119 100644 --- a/content/kubermatic/main/architecture/supported-providers/azure/_index.en.md +++ b/content/kubermatic/main/architecture/supported-providers/azure/_index.en.md @@ -25,7 +25,7 @@ az account show --query id -o json Create a role that is used by the service account. -``` +```text az role definition create --role-definition '{ "Name": "Kubermatic", "Description": "Manage VM and Networks as well to manage Resource Groups and Tags", @@ -47,7 +47,7 @@ Get your Tenant ID az account show --query tenantId -o json ``` -create a new app with +Create a new app with ```bash az ad sp create-for-rbac --role="Kubermatic" --scopes="/subscriptions/********-****-****-****-************" @@ -73,6 +73,7 @@ Enter provider credentials using the values from step "Prepare Azure Environment - `Subscription ID`: your subscription ID ### Resources cleanup + During the machines cleanup, if KKP's Machine-Controller failed to delete the Cloud Provider instance and the user deleted that instance manually, Machine-Controller won't be able to delete any referenced resources to that machine, such as Public IPs, Disks and NICs. In that case, the user should cleanup those resources manually due to the fact that, Azure won't cleanup diff --git a/content/kubermatic/main/architecture/supported-providers/baremetal/_index.en.md b/content/kubermatic/main/architecture/supported-providers/baremetal/_index.en.md index 71a803bdc..bcf96a864 100644 --- a/content/kubermatic/main/architecture/supported-providers/baremetal/_index.en.md +++ b/content/kubermatic/main/architecture/supported-providers/baremetal/_index.en.md @@ -17,12 +17,13 @@ KKP’s Baremetal provider uses Tinkerbell to automate the setup and management With Tinkerbell, the provisioning process is driven by workflows that ensure each server is configured according to the desired specifications. Whether you are managing servers in a single location or across multiple data centers, Tinkerbell provides a reliable and automated way to manage your physical infrastructure, making it as easy to handle as cloud-based resources. ## Requirement + To successfully use the KKP Baremetal provider with Tinkerbell, ensure the following: -* **Tinkerbell Cluster**: A working Tinkerbell cluster must be in place. -* **Direct Access to Servers**: You must have access to your bare-metal servers, allowing you to provision and manage them. -* **Network Connectivity**: Establish a network connection between the API server of Tinkerbell cluster and the KKP seed cluster. This allows the Kubermatic Machine Controller to communicate with the Tinkerbell stack. -* **Tinkerbell Hardware Objects**: Create Hardware Objects within Tinkerbell that represent each bare-metal server you want to provision as a worker node in your Kubernetes cluster. +- **Tinkerbell Cluster**: A working Tinkerbell cluster must be in place. +- **Direct Access to Servers**: You must have access to your bare-metal servers, allowing you to provision and manage them. +- **Network Connectivity**: Establish a network connection between the API server of Tinkerbell cluster and the KKP seed cluster. This allows the Kubermatic Machine Controller to communicate with the Tinkerbell stack. 
+- **Tinkerbell Hardware Objects**: Create Hardware Objects within Tinkerbell that represent each bare-metal server you want to provision as a worker node in your Kubernetes cluster. ## Usage @@ -53,9 +54,9 @@ In Tinkerbell, Hardware Objects represent your physical bare-metal servers. To s Before proceeding, ensure you gather the following information for each server: -* **Disk Devices**: Specify the available disk devices, including bootable storage. -* **Network Interfaces**: Define the network interfaces available on the server, including MAC addresses and interface names. -* **Network Configuration**: Configure the IP addresses, gateways, and DNS settings for the server's network setup. +- **Disk Devices**: Specify the available disk devices, including bootable storage. +- **Network Interfaces**: Define the network interfaces available on the server, including MAC addresses and interface names. +- **Network Configuration**: Configure the IP addresses, gateways, and DNS settings for the server's network setup. It’s essential to allow PXE booting and workflows for the provisioning process. This is done by ensuring the following settings in the hardware spec object: @@ -68,6 +69,7 @@ netboot: This configuration allows Tinkerbell to initiate network booting and enables iPXE to start the provisioning workflow for your bare-metal server. This is an example for Hardware Object Configuration + ```yaml apiVersion: tinkerbell.org/v1alpha1 kind: Hardware @@ -118,10 +120,10 @@ Once the MachineDeployment is created and reconciled, the provisioning workflow The Machine Controller generates the necessary actions for this workflow, which are then executed on the bare-metal server by the `tink-worker` container. The key actions include: -* **Wiping the Disk Devices**: All existing data on the disk will be erased to prepare for the new OS installation. -* **Installing the Operating System**: The specified OS image (e.g., Ubuntu 20.04 or 22.04) will be installed on the server. -* **Network Configuration**: The server’s network settings will be configured based on the Hardware Object and the defined network settings. -* **Cloud-init Propagation**: The Operating System Manager (OSM) will propagate the cloud-init settings to the node to ensure proper configuration of the OS and related services. +- **Wiping the Disk Devices**: All existing data on the disk will be erased to prepare for the new OS installation. +- **Installing the Operating System**: The specified OS image (e.g., Ubuntu 20.04 or 22.04) will be installed on the server. +- **Network Configuration**: The server’s network settings will be configured based on the Hardware Object and the defined network settings. +- **Cloud-init Propagation**: The Operating System Manager (OSM) will propagate the cloud-init settings to the node to ensure proper configuration of the OS and related services. Once the provisioning workflow is complete, the bare-metal server will be fully operational as a worker node in the Kubernetes cluster. @@ -131,4 +133,4 @@ Currently, the baremetal provider only support Ubuntu as an operating system. Mo ## Future Enhancements -Currently, the Baremetal provider requires users to manually create Hardware Objects in Tinkerbell and manually boot up bare-metal servers for provisioning. However, future improvements aim to automate these steps to make the process smoother and more efficient. 
The goal is to eliminate the need for manual intervention by automatically detecting hardware, creating the necessary objects, and initiating the provisioning process without user input. This will make the Baremetal provider more dynamic and scalable, allowing users to manage their infrastructure with even greater ease and flexibility. \ No newline at end of file +Currently, the Baremetal provider requires users to manually create Hardware Objects in Tinkerbell and manually boot up bare-metal servers for provisioning. However, future improvements aim to automate these steps to make the process smoother and more efficient. The goal is to eliminate the need for manual intervention by automatically detecting hardware, creating the necessary objects, and initiating the provisioning process without user input. This will make the Baremetal provider more dynamic and scalable, allowing users to manage their infrastructure with even greater ease and flexibility. diff --git a/content/kubermatic/main/architecture/supported-providers/edge/_index.en.md b/content/kubermatic/main/architecture/supported-providers/edge/_index.en.md index c37bccdad..4714e537e 100644 --- a/content/kubermatic/main/architecture/supported-providers/edge/_index.en.md +++ b/content/kubermatic/main/architecture/supported-providers/edge/_index.en.md @@ -13,6 +13,7 @@ staging environment for testing before. {{% /notice %}} ## Requirement + To leverage KKP's edge capabilities, you'll need to: * Provide Target Devices: Identify the edge devices you want to function as worker nodes within your Kubermatic user cluster. diff --git a/content/kubermatic/main/architecture/supported-providers/kubevirt/_index.en.md b/content/kubermatic/main/architecture/supported-providers/kubevirt/_index.en.md index d4ad27fac..08b15e1ff 100644 --- a/content/kubermatic/main/architecture/supported-providers/kubevirt/_index.en.md +++ b/content/kubermatic/main/architecture/supported-providers/kubevirt/_index.en.md @@ -14,16 +14,18 @@ weight = 5 ### Requirements A Kubernetes cluster (KubeVirt infrastructure cluster), which consists of nodes that **have a hardware virtualization support** with at least: -* 3 Bare Metal Server -* CPUs: Minimum 8-core for testing; minimum 16-core or more for production -* Memory: Minimum 32 GB for testing; minimum 64 GB or more for production -* Storage: Minimum 100 GB for testing; minimum 500 GB or more for production + +- 3 Bare Metal Server +- CPUs: Minimum 8-core for testing; minimum 16-core or more for production +- Memory: Minimum 32 GB for testing; minimum 64 GB or more for production +- Storage: Minimum 100 GB for testing; minimum 500 GB or more for production Software requirement: -* KubeOne = 1.7 or higher -* KubeOVN = 1.12 or higher or Canal = 3.26 or higher -* KubeVirt = 1.2.2 -* Containerized Data Importer (CDI) = v1.60 + +- KubeOne = 1.7 or higher +- KubeOVN = 1.12 or higher or Canal = 3.26 or higher +- KubeVirt = 1.2.2 +- Containerized Data Importer (CDI) = v1.60 The cluster version must be in the scope of [supported KKP Kubernetes clusters]({{< ref "../../../tutorials-howtos/operating-system-manager/compatibility/#kubernetes-versions" >}}) and it must be in the [KubeVirt Support Matrix](https://github.com/kubevirt/sig-release/blob/main/releases/k8s-support-matrix.md). @@ -37,6 +39,7 @@ Follow [KubeVirt](https://kubevirt.io/user-guide/operations/installation/#instal documentation to find out how to install them. 
We require the following KubeVirt configuration: + ```yaml apiVersion: kubevirt.io/v1 kind: KubeVirt @@ -85,31 +88,31 @@ Currently, it is not recommended to use local or any topology constrained storag Once you have Kubernetes with all needed components, the last thing is to configure KubeVirt datacenter on seed. We allow to configure: -* `customNetworkPolicies` - Network policies that are deployed on the infrastructure cluster (where VMs run). - * Check [Network Policy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource) to see available options in the spec. - * Also check a [common services connectivity issue](#i-created-a-load-balancer-service-on-a-user-cluster-but-services-outside-cannot-reach-it) that can be solved by a custom network policy. -* `ccmZoneAndRegionEnabled` - Indicates if region and zone labels from the cloud provider should be fetched. This field is enabled by default and should be disabled if the infra kubeconfig that is provided for KKP has no permission to access cluster role resources such as node objects. -* `dnsConfig` and `dnsPolicy` - DNS config and policy which are set up on a guest. Defaults to `ClusterFirst`. - * You should set those fields when you suffer from DNS loop or collision issue. [Refer to this section for more details.](#i-discovered-a-dns-collision-on-my-cluster-why-does-it-happen) -* `images` - Images for Virtual Machines that are selectable from KKP dashboard. - * Set this field according to [supported operating systems]({{< ref "../../compatibility/os-support-matrix/" >}}) to make sure that users can select operating systems for their VMs. -* `infraStorageClasses` - Storage classes that are initialized on user clusters that end users can work with. - * `isDefaultClass` - If true, the created StorageClass in the tenant cluster will be annotated with. - * `labels` - Is a map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. - * `regions` - Represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. - * `volumeBindingMode` - indicates how PersistentVolumeClaims should be provisioned and bound. When unset, VolumeBindingImmediate is used. - * `volumeProvisioner` - The field specifies whether a storage class will be utilized by the infra cluster csi driver where the Containerized Data Importer (CDI) can use to create VM disk images or by the KubeVirt CSI Driver to provision volumes in the user cluster. If not specified, the storage class can be used as a VM disk image or user clusters volumes. - * `infra-csi-driver` - When set in the infraStorageClass, the storage class can be listed in the UI while creating the machine deployments and won't be available in the user cluster. - * `kubevirt-csi-driver` - When set in the infraStorageClass, the storage class won't be listed in the UI and will be available in the user cluster. - * `zones` - Represent a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. -* `namespacedMode(experimental)` - Represents the configuration for enabling the single namespace mode for all user-clusters in the KubeVirt datacenter. -* `vmEvictionStrategy` - Indicates the strategy to follow when a node drain occurs. If not set the default value is External and the VM will be protected by a PDB. 
Currently, we only support two strategies, `External` or `LiveMigrate`. - * `LiveMigrate`: the VirtualMachineInstance will be migrated instead of being shutdown. - * `External`: the VirtualMachineInstance will be protected by a PDB and `vmi.Status.EvacuationNodeName` will be set on eviction. This is mainly useful for machine-controller which needs a way for VMI's to be blocked from eviction, yet inform machine-controller that eviction has been called on the VMI, so it can handle tearing the VMI down. -* `csiDriverOperator` - Contains the KubeVirt CSI Driver Operator configurations, where users can override the default configurations of the csi driver. - * `overwriteRegistry`: overwrite the images registry for the csi driver daemonset that runs in the user cluster. -* `enableDedicatedCPUs` (deprecated) - Represents the configuration for virtual machine cpu assignment by using `domain.cpu` when set to `true` or using `resources.requests` and `resources.limits` when set to `false` which is the default -* `usePodResourcesCPU` - Represents the new way of configuring for cpu assignment virtual machine by using `domain.cpu` when set to `false` which is the default or using `resources.requests` and `resources.limits` when set to `true` +- `customNetworkPolicies` - Network policies that are deployed on the infrastructure cluster (where VMs run). + - Check [Network Policy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource) to see available options in the spec. + - Also check a [common services connectivity issue](#i-created-a-load-balancer-service-on-a-user-cluster-but-services-outside-cannot-reach-it) that can be solved by a custom network policy. +- `ccmZoneAndRegionEnabled` - Indicates if region and zone labels from the cloud provider should be fetched. This field is enabled by default and should be disabled if the infra kubeconfig that is provided for KKP has no permission to access cluster role resources such as node objects. +- `dnsConfig` and `dnsPolicy` - DNS config and policy which are set up on a guest. Defaults to `ClusterFirst`. + - You should set those fields when you suffer from DNS loop or collision issue. [Refer to this section for more details.](#i-discovered-a-dns-collision-on-my-cluster-why-does-it-happen) +- `images` - Images for Virtual Machines that are selectable from KKP dashboard. + - Set this field according to [supported operating systems]({{< ref "../../compatibility/os-support-matrix/" >}}) to make sure that users can select operating systems for their VMs. +- `infraStorageClasses` - Storage classes that are initialized on user clusters that end users can work with. + - `isDefaultClass` - If true, the created StorageClass in the tenant cluster will be annotated with. + - `labels` - Is a map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. + - `regions` - Represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. + - `volumeBindingMode` - indicates how PersistentVolumeClaims should be provisioned and bound. When unset, VolumeBindingImmediate is used. + - `volumeProvisioner` - The field specifies whether a storage class will be utilized by the infra cluster csi driver where the Containerized Data Importer (CDI) can use to create VM disk images or by the KubeVirt CSI Driver to provision volumes in the user cluster. 
If not specified, the storage class can be used as a VM disk image or user clusters volumes. + - `infra-csi-driver` - When set in the infraStorageClass, the storage class can be listed in the UI while creating the machine deployments and won't be available in the user cluster. + - `kubevirt-csi-driver` - When set in the infraStorageClass, the storage class won't be listed in the UI and will be available in the user cluster. + - `zones` - Represent a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. +- `namespacedMode(experimental)` - Represents the configuration for enabling the single namespace mode for all user-clusters in the KubeVirt datacenter. +- `vmEvictionStrategy` - Indicates the strategy to follow when a node drain occurs. If not set the default value is External and the VM will be protected by a PDB. Currently, we only support two strategies, `External` or `LiveMigrate`. + - `LiveMigrate`: the VirtualMachineInstance will be migrated instead of being shutdown. + - `External`: the VirtualMachineInstance will be protected by a PDB and `vmi.Status.EvacuationNodeName` will be set on eviction. This is mainly useful for machine-controller which needs a way for VMI's to be blocked from eviction, yet inform machine-controller that eviction has been called on the VMI, so it can handle tearing the VMI down. +- `csiDriverOperator` - Contains the KubeVirt CSI Driver Operator configurations, where users can override the default configurations of the csi driver. + - `overwriteRegistry`: overwrite the images registry for the csi driver daemonset that runs in the user cluster. +- `enableDedicatedCPUs` (deprecated) - Represents the configuration for virtual machine cpu assignment by using `domain.cpu` when set to `true` or using `resources.requests` and `resources.limits` when set to `false` which is the default +- `usePodResourcesCPU` - Represents the new way of configuring for cpu assignment virtual machine by using `domain.cpu` when set to `false` which is the default or using `resources.requests` and `resources.limits` when set to `true` {{% notice note %}} The `infraStorageClasses` pass names of KubeVirt storage classes that can be used from user clusters. @@ -140,6 +143,7 @@ only inside the cluster. You should use `customNetworkPolicies` to customize the Install [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) on the KubeVirt cluster. Then update `KubeVirt` configuration with the following spec: + ```yaml apiVersion: kubevirt.io/v1 kind: KubeVirt @@ -168,12 +172,14 @@ We provide a Virtual Machine templating functionality over [Instance Types and P You can use our standard Instance Types: -* standard-2 - 2 CPUs, 8Gi RAM -* standard-4 - 4 CPUs, 16Gi RAM -* standard-8 - 8 CPUs, 32Gi RAM + +- standard-2 - 2 CPUs, 8Gi RAM +- standard-4 - 4 CPUs, 16Gi RAM +- standard-8 - 8 CPUs, 32Gi RAM and Preferences (which are optional): -* sockets-advantage - cpu guest topology where number of cpus is equal to number of sockets + +- sockets-advantage - cpu guest topology where number of cpus is equal to number of sockets or you can just simply adjust the amount of CPUs and RAM of our default template according to your needs. @@ -183,6 +189,7 @@ instance types and preferences that users can select later. 
[Read how to add new ### Virtual Machine Scheduling KubeVirt can take advantage of Kubernetes inner features to provide an advanced scheduling mechanism to virtual machines (VMs): + - [Kubernetes topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) - [Kubernetes node affinity/anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) @@ -192,6 +199,7 @@ This allows you to restrict KubeVirt VMs ([see architecture](#architecture)) to {{% notice note %}} Note that topology spread constraints and node affinity presets are applicable to KubeVirt infra nodes. {{% /notice %}} + #### Default Scheduling Behavior Each Virtual Machine you create has default [topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) applied: @@ -202,7 +210,7 @@ topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway ``` -this allows us to spread Virtual Machine equally across a cluster. +This allows us to spread Virtual Machine equally across a cluster. #### Customize Scheduling Behavior @@ -215,6 +223,7 @@ You can do it by expanding *ADVANCED SCHEDULING SETTINGS* on the initial nodes d - `Node Affinity Preset Values` refers to the values of KubeVirt infra node labels. Node Affinity Preset type can be `hard` or `soft` and refers to the same notion of [Pod affinity/anti-affinity types](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#types-of-inter-pod-affinity-and-anti-affinity): + - `hard`: the scheduler can't schedule the VM unless the rule is met. - `soft`: the scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the VM. @@ -281,16 +290,17 @@ parameter of Machine Controller that sets the timeout for workload eviction. Usually it happens when both infrastructure and user clusters points to the same address of NodeLocal DNS Cache servers, even if they have separate server instances running. Let us imagine that: -* On the infrastructure cluster there is a running NodeLocal DNS Cache under 169.254.20.10 address. -* Then we create a new user cluster, start a few Virtual Machines that finally gives a fully functional k8s cluster that runs on another k8s cluster. -* Next we observe that on the user cluster there is another NodeLocal DNS Cache that has the same 169.254.20.10 address. -* Since Virtual Machine can have access to subnets on the infra and user clusters (depends on your network policy rules) having the same address of DNS cache leads to conflict. + +- On the infrastructure cluster there is a running NodeLocal DNS Cache under 169.254.20.10 address. +- Then we create a new user cluster, start a few Virtual Machines that finally gives a fully functional k8s cluster that runs on another k8s cluster. +- Next we observe that on the user cluster there is another NodeLocal DNS Cache that has the same 169.254.20.10 address. +- Since Virtual Machine can have access to subnets on the infra and user clusters (depends on your network policy rules) having the same address of DNS cache leads to conflict. One way to prevent that situation is to set a `dnsPolicy` and `dnsConfig` rules that Virtual Machines do not copy DNS configuration from their pods and points to different addresses. Follow [Configure KKP With KubeVirt](#configure-kkp-with-kubevirt) to learn how set DNS config correctly. 
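+For reference, a datacenter entry in the Seed that overrides the guest DNS settings could look roughly like the sketch below. This is an illustration only: the datacenter name and the nameserver address are placeholders and the exact nesting should be verified against your Seed manifest; the relevant part is setting `dnsPolicy: None` together with a `dnsConfig` that does not point to the NodeLocal DNS Cache address.
+
+```yaml
+spec:
+  datacenters:
+    kubevirt-dc:              # placeholder datacenter name
+      spec:
+        kubevirt:
+          dnsPolicy: None     # do not copy the DNS configuration of the pod
+          dnsConfig:
+            nameservers:
+              # Any resolver that is not the NodeLocal DNS Cache address (169.254.20.10).
+              - 8.8.8.8
+```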
-### I created a load balancer service on a user cluster but services outside cannot reach it. +### I created a load balancer service on a user cluster but services outside cannot reach it In most cases it is due to `cluster-isolation` network policy that is deployed as default on each user cluster. It only allows in-cluster communication. You should adjust network rules to your needs by adding [customNetworkPolicies configuration]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster/" >}})). @@ -326,16 +336,18 @@ Kubermatic Virtualization graduates to GA from KKP 2.22! On the way, we have changed many things that improved our implementation of KubeVirt Cloud Provider. Just to highlight the most important: -* Safe Virtual Machine workload eviction has been implemented. -* Virtual Machine templating is based on InstanceTypes and Preferences. -* KubeVirt CSI controller has been moved to control plane of a user cluster. -* Users can influence scheduling of VMs over topology spread constraints and node affinity presets. -* KubeVirt Cloud Controller Manager has been improved and optimized. -* Cluster admin can define the list of supported OS images and initialized storage classes. + +- Safe Virtual Machine workload eviction has been implemented. +- Virtual Machine templating is based on InstanceTypes and Preferences. +- KubeVirt CSI controller has been moved to control plane of a user cluster. +- Users can influence scheduling of VMs over topology spread constraints and node affinity presets. +- KubeVirt Cloud Controller Manager has been improved and optimized. +- Cluster admin can define the list of supported OS images and initialized storage classes. Additionally, we removed some features that didn't leave technology preview stage, those are: -* Custom Local Disks -* Secondary Disks + +- Custom Local Disks +- Secondary Disks {{% notice warning %}} The official upgrade procedure will not break clusters that already exist, however, **scaling cluster nodes will not lead to expected results**. @@ -358,12 +370,12 @@ Or if you provisioned the cluster over KubeOne please follow [the update procedu Next you can update KubeVirt control plane and Containerized Data Importer by executing: -```shell +```bash export RELEASE= kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml ``` -```shell +```bash export RELEASE= kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${RELEASE}/cdi-operator.yaml ``` diff --git a/content/kubermatic/main/architecture/supported-providers/vmware-cloud-director/_index.en.md b/content/kubermatic/main/architecture/supported-providers/vmware-cloud-director/_index.en.md index b2cf2e8ec..8b9eac150 100644 --- a/content/kubermatic/main/architecture/supported-providers/vmware-cloud-director/_index.en.md +++ b/content/kubermatic/main/architecture/supported-providers/vmware-cloud-director/_index.en.md @@ -10,9 +10,9 @@ weight = 7 Prerequisites for provisioning Kubernetes clusters with the KKP are as follows: 1. An Organizational Virtual Data Center (VDC). -2. `Edge Gateway` is required for connectivity with the internet, network address translation, and network firewall. -3. Organizational Virtual Data Center network is connected to the edge gateway. -4. Ensure that the distributed firewalls are configured in a way that allows traffic flow within and out of the VDC. +1. 
`Edge Gateway` is required for connectivity with the internet, network address translation, and network firewall. +1. Organizational Virtual Data Center network is connected to the edge gateway. +1. Ensure that the distributed firewalls are configured in a way that allows traffic flow within and out of the VDC. Kubermatic Kubernetes Platform (KKP) integration has been tested with `VMware Cloud Director 10.4`. @@ -57,7 +57,7 @@ spec: CSI driver settings can be configured at the cluster level when creating a cluster using UI or API. The following settings are required: 1. Storage Profile: Used for creating persistent volumes. -2. Filesystem: Filesystem to use for named disks. Allowed values are ext4 or xfs. +1. Filesystem: Filesystem to use for named disks. Allowed values are ext4 or xfs. ## Known Limitations diff --git a/content/kubermatic/main/architecture/supported-providers/vsphere/_index.en.md b/content/kubermatic/main/architecture/supported-providers/vsphere/_index.en.md index 9ae4d15cb..8b0fe217d 100644 --- a/content/kubermatic/main/architecture/supported-providers/vsphere/_index.en.md +++ b/content/kubermatic/main/architecture/supported-providers/vsphere/_index.en.md @@ -17,10 +17,9 @@ When creating worker nodes for a user cluster, the user can specify an existing ### Supported Operating Systems -* Ubuntu 20.04 [ova](https://cloud-images.ubuntu.com/releases/20.04/release/ubuntu-20.04-server-cloudimg-amd64.ova) -* Ubuntu 22.04 [ova](https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.ova) -* Flatcar (Stable channel) [ova](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vmware_ova.ova) - +- Ubuntu 20.04 [ova](https://cloud-images.ubuntu.com/releases/20.04/release/ubuntu-20.04-server-cloudimg-amd64.ova) +- Ubuntu 22.04 [ova](https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.ova) +- Flatcar (Stable channel) [ova](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vmware_ova.ova) ### Importing the OVA @@ -55,7 +54,7 @@ are needed to manage VMs, storage, networking and tags. The vSphere provider allows to split permissions into two sets of credentials: 1. Credentials passed to the [vSphere Cloud Controller Manager (CCM) and CSI Storage driver](#cloud-controller-manager-ccm--csi). These credentials are currently inherited into the user cluster and should therefore be individual per user cluster. This type of credentials can be passed when creating a user cluster or setting up a preset. -2. Credentials used for [creating and managing infrastructure](#infrastructure-management) (VMs, tags, networks). This set of credentials is not shared with the user cluster and is kept on the seed cluster. This type of credentials can either be passed in the Seed configuration ([.spec.datacenters.EXAMPLEDC.vpshere.infraManagementUser]({{< ref "../../../references/crds/#datacenterspecvsphere" >}})) for all user clusters created in this datacenter or individually while creating a user cluster. +1. Credentials used for [creating and managing infrastructure](#infrastructure-management) (VMs, tags, networks). This set of credentials is not shared with the user cluster and is kept on the seed cluster. 
This type of credentials can either be passed in the Seed configuration ([.spec.datacenters.EXAMPLEDC.vpshere.infraManagementUser]({{< ref "../../../references/crds/#datacenterspecvsphere" >}})) for all user clusters created in this datacenter or individually while creating a user cluster. If such a split is not desired, one set of credentials used for both use cases can be provided instead. Providing two sets of credentials is optional. @@ -64,6 +63,7 @@ If such a split is not desired, one set of credentials used for both use cases c The vsphere users has to have to following permissions on the correct resources. Note that if a shared set of credentials is used, roles for both use cases need to be assigned to the technical user which will be used for credentials. #### Cloud Controller Manager (CCM) / CSI + **Note:** Below roles were updated based on [vsphere-storage-plugin-roles] for external CCM which is available from kkp v2.18+ and vsphere v7.0.2+ For the Cloud Controller Manager (CCM) and CSI components used to provide cloud provider and storage integration to the user cluster, @@ -71,23 +71,25 @@ a technical user (e.g. `cust-ccm-cluster`) is needed. The user should be assigne {{< tabs name="CCM/CSI User Roles" >}} {{% tab name="k8c-ccm-storage-vmfolder-propagate" %}} + ##### Role `k8c-ccm-storage-vmfolder-propagate` -* Granted at **VM Folder** and **Template Folder**, propagated -* Permissions - * Virtual machine - * Change Configuration - * Add existing disk - * Add new disk - * Add or remove device - * Remove disk - * Folder - * Create folder - * Delete dolder + +- Granted at **VM Folder** and **Template Folder**, propagated +- Permissions + - Virtual machine + - Change Configuration + - Add existing disk + - Add new disk + - Add or remove device + - Remove disk + - Folder + - Create folder + - Delete dolder --- -``` -$ govc role.ls k8c-ccm-storage-vmfolder-propagate +```bash +govc role.ls k8c-ccm-storage-vmfolder-propagate Folder.Create Folder.Delete VirtualMachine.Config.AddExistingDisk @@ -95,50 +97,61 @@ VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.RemoveDisk ``` + {{% /tab %}} {{% tab name="k8c-ccm-storage-datastore-propagate" %}} + ##### Role `k8c-ccm-storage-datastore-propagate` -* Granted at **Datastore**, propagated -* Permissions - * Datastore - * Allocate space - * Low level file operations + +- Granted at **Datastore**, propagated +- Permissions + - Datastore + - Allocate space + - Low level file operations --- -``` -$ govc role.ls k8c-ccm-storage-datastore-propagate +```bash +govc role.ls k8c-ccm-storage-datastore-propagate Datastore.AllocateSpace Datastore.FileManagement ``` + {{% /tab %}} {{% tab name="k8c-ccm-storage-cns" %}} + ##### Role `k8c-ccm-storage-cns` -* Granted at **vcenter** level, not propagated -* Permissions - * CNS - * Searchable + +- Granted at **vcenter** level, not propagated +- Permissions + - CNS + - Searchable + --- -``` -$ govc role.ls k8c-ccm-storage-cns +```bash +govc role.ls k8c-ccm-storage-cns Cns.Searchable ``` + {{% /tab %}} {{% tab name="Read-only (predefined)" %}} + ##### Role `Read-only` (predefined) -* Granted at ..., **not** propagated - * Datacenter - * All hosts where the nodes VMs reside. + +- Granted at ..., **not** propagated + - Datacenter + - All hosts where the nodes VMs reside. --- -``` -$ govc role.ls ReadOnly +```bash +govc role.ls ReadOnly System.Anonymous System.Read System.View ``` + {{% /tab %}} {{< /tabs >}} @@ -148,33 +161,36 @@ For infrastructure (e.g. 
VMs, tags and networking) provisioning actions of KKP i {{< tabs name="Infrastructure Management" >}} {{% tab name="k8c-user-vcenter" %}} + ##### Role `k8c-user-vcenter` -* Granted at **vcenter** level, **not** propagated -* Needed to customize VM during provisioning -* Permissions - * CNS - * Searchable - * Profile-driven storage - * Profile-driven storage view - * VirtualMachine - * Provisioning - * Modify customization specification - * Read customization specifications - * vSphere Tagging - * Assign or Unassign vSphere Tag - * Assign or Unassign vSphere Tag on Object - * Create vSphere Tag - * Create vSphere Tag Category - * Delete vSphere Tag - * Delete vSphere Tag Category - * Edit vSphere Tag - * Edit vSphere Tag Category - * Modify UsedBy Field For Category - * Modify UsedBy Field For Tag + +- Granted at **vcenter** level, **not** propagated +- Needed to customize VM during provisioning +- Permissions + - CNS + - Searchable + - Profile-driven storage + - Profile-driven storage view + - VirtualMachine + - Provisioning + - Modify customization specification + - Read customization specifications + - vSphere Tagging + - Assign or Unassign vSphere Tag + - Assign or Unassign vSphere Tag on Object + - Create vSphere Tag + - Create vSphere Tag Category + - Delete vSphere Tag + - Delete vSphere Tag Category + - Edit vSphere Tag + - Edit vSphere Tag Category + - Modify UsedBy Field For Category + - Modify UsedBy Field For Tag + --- -``` -$ govc role.ls k8c-user-vcenter +```bash +govc role.ls k8c-user-vcenter Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory @@ -193,34 +209,37 @@ System.View VirtualMachine.Provisioning.ModifyCustSpecs VirtualMachine.Provisioning.ReadCustSpecs ``` + {{% /tab %}} {{% tab name="k8c-user-datacenter" %}} + ##### Role `k8c-user-datacenter` -* Granted at **datacenter** level, **not** propagated -* Needed for cloning the template VM (obviously this is not done in a folder at this time) -* Permissions - * Datastore - * Allocate space - * Browse datastore - * Low level file operations - * Remove file - * vApp - * vApp application configuration - * vApp instance configuration - * Virtual Machine - * Change Configuration - * Change CPU count - * Change Memory - * Change Settings - * Edit Inventory - * Create from existing - * vSphere Tagging - * Assign or Unassign vSphere Tag on Object + +- Granted at **datacenter** level, **not** propagated +- Needed for cloning the template VM (obviously this is not done in a folder at this time) +- Permissions + - Datastore + - Allocate space + - Browse datastore + - Low level file operations + - Remove file + - vApp + - vApp application configuration + - vApp instance configuration + - Virtual Machine + - Change Configuration + - Change CPU count + - Change Memory + - Change Settings + - Edit Inventory + - Create from existing + - vSphere Tagging + - Assign or Unassign vSphere Tag on Object --- -``` -$ govc role.ls k8c-user-datacenter +```bash +govc role.ls k8c-user-datacenter Datastore.AllocateSpace Datastore.Browse Datastore.DeleteFile @@ -236,40 +255,44 @@ VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Inventory.CreateFromExisting ``` + {{% /tab %}} {{% tab name="k8c-user-cluster-propagate" %}} -* Role `k8c-user-cluster-propagate` - * Granted at **cluster** level, propagated - * Needed for upload of `cloud-init.iso` (Ubuntu) or defining the Ignition config into Guestinfo (CoreOS) - * Permissions - * AutoDeploy - * Rule - * Create - * Delete - * Edit - * Folder - * 
Create folder - * Host - * Configuration - * Storage partition configuration - * System Management - * Local operations - * Reconfigure virtual machine - * Inventory - * Modify cluster - * Resource - * Assign virtual machine to resource pool - * Migrate powered off virtual machine - * Migrate powered on virtual machine - * vApp - * vApp application configuration - * vApp instance configuration - * vSphere Tagging - * Assign or Unassign vSphere Tag on Object + +##### Role `k8c-user-cluster-propagate` + +- Granted at **cluster** level, propagated +- Needed for upload of `cloud-init.iso` (Ubuntu) or defining the Ignition config into Guestinfo (CoreOS) +- Permissions + - AutoDeploy + - Rule + - Create + - Delete + - Edit + - Folder + - Create folder + - Host + - Configuration + - Storage partition configuration + - System Management + - Local operations + - Reconfigure virtual machine + - Inventory + - Modify cluster + - Resource + - Assign virtual machine to resource pool + - Migrate powered off virtual machine + - Migrate powered on virtual machine + - vApp + - vApp application configuration + - vApp instance configuration + - vSphere Tagging + - Assign or Unassign vSphere Tag on Object + --- -``` -$ govc role.ls k8c-user-cluster-propagate +```bash +govc role.ls k8c-user-cluster-propagate AutoDeploy.Rule.Create AutoDeploy.Rule.Delete AutoDeploy.Rule.Edit @@ -285,19 +308,23 @@ Resource.HotMigrate VApp.ApplicationConfig VApp.InstanceConfig ``` + {{% /tab %}} {{% tab name="k8c-network-attach" %}} + ##### Role `k8c-network-attach` -* Granted for each network that should be used (distributed switch + network) -* Permissions - * Network - * Assign network - * vSphere Tagging - * Assign or Unassign vSphere Tag on Object + +- Granted for each network that should be used (distributed switch + network) +- Permissions + - Network + - Assign network + - vSphere Tagging + - Assign or Unassign vSphere Tag on Object + --- -``` -$ govc role.ls k8c-network-attach +```bash +govc role.ls k8c-network-attach InventoryService.Tagging.ObjectAttachable Network.Assign System.Anonymous @@ -307,27 +334,30 @@ System.View {{% /tab %}} {{% tab name="k8c-user-datastore-propagate" %}} + ##### Role `k8c-user-datastore-propagate` -* Granted at **datastore / datastore cluster** level, propagated -* Also provides permission to create vSphere tags for a dedicated category, which are required by KKP seed controller manager -* Please note below points about tagging. + +- Granted at **datastore / datastore cluster** level, propagated +- Also provides permission to create vSphere tags for a dedicated category, which are required by KKP seed controller manager +- Please note below points about tagging. **Note**: If a category id is assigned to a user cluster, KKP would claim the ownership of any tags it creates. KKP would try to delete tags assigned to the cluster upon cluster deletion. Thus, make sure that the assigned category isn't shared across other lingering resources. **Note**: Tags can be attached to machine deployments regardless if the tags are created via KKP or not. If a tag was not attached to the user cluster, machine controller will only detach it. 
-* Permissions - * Datastore - * Allocate space - * Browse datastore - * Low level file operations - * vSphere Tagging - * Assign or Unassign vSphere Tag on an Object + +- Permissions + - Datastore + - Allocate space + - Browse datastore + - Low level file operations + - vSphere Tagging + - Assign or Unassign vSphere Tag on an Object --- -``` -$ govc role.ls k8c-user-datastore-propagate +```bash +govc role.ls k8c-user-datastore-propagate Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement @@ -336,34 +366,37 @@ System.Anonymous System.Read System.View ``` + {{% /tab %}} {{% tab name="k8c-user-folder-propagate" %}} + ##### Role `k8c-user-folder-propagate` -* Granted at **VM Folder** and **Template Folder** level, propagated -* Needed for managing the node VMs -* Permissions - * Folder - * Create folder - * Delete folder - * Global - * Set custom attribute - * Virtual machine - * Change Configuration - * Edit Inventory - * Guest operations - * Interaction - * Provisioning - * Snapshot management - * vSphere Tagging - * Assign or Unassign vSphere Tag - * Assign or Unassign vSphere Tag on an Object - * Create vSphere Tag - * Delete vSphere Tag + +- Granted at **VM Folder** and **Template Folder** level, propagated +- Needed for managing the node VMs +- Permissions + - Folder + - Create folder + - Delete folder + - Global + - Set custom attribute + - Virtual machine + - Change Configuration + - Edit Inventory + - Guest operations + - Interaction + - Provisioning + - Snapshot management + - vSphere Tagging + - Assign or Unassign vSphere Tag + - Assign or Unassign vSphere Tag on an Object + - Create vSphere Tag + - Delete vSphere Tag --- -``` -$ govc role.ls k8c-user-folder-propagate +```bash +govc role.ls k8c-user-folder-propagate Folder.Create Folder.Delete Global.SetCustomField @@ -459,20 +492,15 @@ VirtualMachine.State.RenameSnapshot VirtualMachine.State.RevertToSnapshot ``` + {{% /tab %}} {{< /tabs >}} - - - - - - The described permissions have been tested with vSphere 8.0.2 and might be different for other vSphere versions. ## Datastores and Datastore Clusters @@ -483,8 +511,8 @@ shared management interface. In KKP *Datastores* are used for two purposes: -* Storing the VMs files for the worker nodes of vSphere user clusters. -* Generating the vSphere cloud provider storage configuration for user clusters. +- Storing the VMs files for the worker nodes of vSphere user clusters. +- Generating the vSphere cloud provider storage configuration for user clusters. In particular to provide the `default-datastore` value, that is the default datastore for dynamic volume provisioning. @@ -494,24 +522,20 @@ specified directly in [vSphere cloud configuration][vsphere-cloud-config]. There are three places where Datastores and Datastore Clusters can be configured in KKP: -* At datacenter level (configured in the [Seed CRD]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}))) +- At datacenter level (configured in the [Seed CRD]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}))) it is possible to specify the default *Datastore* that will be used for user clusters dynamic volume provisioning and workers VMs placement in case no *Datastore* or *Datastore Cluster* is specified at cluster level. 
-* At *Cluster* level it is possible to provide either a *Datastore* or a +- At *Cluster* level it is possible to provide either a *Datastore* or a *Datastore Cluster* respectively with `spec.cloud.vsphere.datastore` and `spec.cloud.vsphere.datastoreCluster` fields. -* It is possible to specify *Datastore* or *Datastore Clusters* in a preset +- It is possible to specify *Datastore* or *Datastore Clusters* in a preset than is later used to create a user cluster from it. These settings can also be configured as part of the "Advanced Settings" step when creating a user cluster from the [KKP dashboard]({{< ref "../../../tutorials-howtos/project-and-cluster-management/#create-cluster" >}}). -[vsphere-cloud-config]: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-BFF39F1D-F70A-4360-ABC9-85BDAFBE8864.html?hWord=N4IghgNiBcIMYQK4GcAuBTATgWgJYBMACAYQGUBJEAXyA -[vsphere-storage-plugin-roles]: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-0AB6E692-AA47-4B6A-8CEA-38B754E16567.html#GUID-043ACF65-9E0B-475C-A507-BBBE2579AA58__GUID-E51466CB-F1EA-4AD7-A541-F22CDC6DE881 - - ## Known Issues ### Volume Detach Bug @@ -520,24 +544,24 @@ After a node is powered-off, the Kubernetes vSphere driver doesn't detach disks Upstream Kubernetes has been working on the issue for a long time now and tracking it under the following tickets: -* -* -* -* -* +- +- +- +- +- ## Internal Kubernetes endpoints unreachable ### Symptoms -* Unable to perform CRUD operations on resources governed by webhooks (e.g. ValidatingWebhookConfiguration, MutatingWebhookConfiguration, etc.). The following error is observed: +- Unable to perform CRUD operations on resources governed by webhooks (e.g. ValidatingWebhookConfiguration, MutatingWebhookConfiguration, etc.). The following error is observed: -```sh +```bash Internal error occurred: failed calling webhook "webhook-name": failed to call webhook: Post "https://webhook-service-name.namespace.svc:443/webhook-endpoint": context deadline exceeded ``` -* Unable to reach internal Kubernetes endpoints from pods/nodes. -* ICMP is working but TCP/UDP is not. +- Unable to reach internal Kubernetes endpoints from pods/nodes. +- ICMP is working but TCP/UDP is not. ### Cause @@ -545,7 +569,7 @@ On recent enough VMware hardware compatibility version (i.e >=15 or maybe >=14), ### Solution -```sh +```bash sudo ethtool -K ens192 tx-udp_tnl-segmentation off sudo ethtool -K ens192 tx-udp_tnl-csum-segmentation off ``` @@ -554,10 +578,13 @@ These flags are related to the hardware segmentation offload done by the vSphere We have two options to configure these flags for KKP installations: -* When configuring the VM template, set these flags as well. -* Create a [custom Operating System Profile]({{< ref "../../../tutorials-howtos/operating-system-manager/usage#custom-operatingsystemprofiles" >}}) and configure the flags there. +- When configuring the VM template, set these flags as well. +- Create a [custom Operating System Profile]({{< ref "../../../tutorials-howtos/operating-system-manager/usage#custom-operatingsystemprofiles" >}}) and configure the flags there. 
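+If you go with a custom Operating System Profile, one way to apply the flags at boot time is a cloud-init fragment along the lines of the sketch below. This is only a sketch: the interface name `ens192` is taken from the commands above and has to match the NIC name in your VM template.
+
+```yaml
+#cloud-config
+runcmd:
+  # Disable UDP tunnel segmentation offload on the vSphere NIC (see the ethtool commands above).
+  - ethtool -K ens192 tx-udp_tnl-segmentation off
+  - ethtool -K ens192 tx-udp_tnl-csum-segmentation off
+```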
### References -* -* +- +- + +[vsphere-cloud-config]: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-BFF39F1D-F70A-4360-ABC9-85BDAFBE8864.html?hWord=N4IghgNiBcIMYQK4GcAuBTATgWgJYBMACAYQGUBJEAXyA +[vsphere-storage-plugin-roles]: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-0AB6E692-AA47-4B6A-8CEA-38B754E16567.html#GUID-043ACF65-9E0B-475C-A507-BBBE2579AA58__GUID-E51466CB-F1EA-4AD7-A541-F22CDC6DE881 diff --git a/content/kubermatic/main/cheat-sheets/etcd/etcd-launcher/_index.en.md b/content/kubermatic/main/cheat-sheets/etcd/etcd-launcher/_index.en.md index a01669d2a..2102b076b 100644 --- a/content/kubermatic/main/cheat-sheets/etcd/etcd-launcher/_index.en.md +++ b/content/kubermatic/main/cheat-sheets/etcd/etcd-launcher/_index.en.md @@ -12,12 +12,12 @@ API and flexibly control how the user cluster etcd ring is started. - **v2.19.0**: Peer TLS connections have been added to etcd-launcher. - **v2.22.0**: `EtcdLauncher` feature gate is enabled by default in `KubermaticConfiguration`. - ## Comparison to static etcd Prior to v2.15.0, user cluster etcd ring was based on a static StatefulSet with 3 pods running the etcd ring nodes. With `etcd-launcher`, the etcd `StatefulSet` is updated to include: + - An init container that is responsible for copying the etcd-launcher into the main etcd pod. - Additional environment variables used by the etcd-launcher and etcdctl binary for simpler operations. - A liveness probe to improve stability. @@ -58,6 +58,7 @@ spec: If the feature gate was disabled explicitly, etcd Launcher can still be configured for individual user clusters. ### Enabling etcd Launcher + In this mode, the feature is only enabled for a specific user cluster. This can be done by editing the object cluster and enabling the feature gate for `etcdLauncher`: diff --git a/content/kubermatic/main/cheat-sheets/vsphere-cluster-id/_index.en.md b/content/kubermatic/main/cheat-sheets/vsphere-cluster-id/_index.en.md index a1051f238..88a0411ab 100644 --- a/content/kubermatic/main/cheat-sheets/vsphere-cluster-id/_index.en.md +++ b/content/kubermatic/main/cheat-sheets/vsphere-cluster-id/_index.en.md @@ -43,15 +43,16 @@ The following steps should be done in the **seed cluster** for each vSphere user cluster. + First, get all user clusters and filter vSphere user clusters using `grep`: -```shell +```bash kubectl --kubeconfig= get clusters | grep vsphere ``` You should get output similar to the following: -``` +```bash NAME HUMANREADABLENAME OWNER VERSION PROVIDER DATACENTER PHASE PAUSED AGE s8kkpcccfq focused-spence test@kubermatic.com 1.23.8 vsphere your-dc Running false 16h ``` @@ -60,14 +61,14 @@ s8kkpcccfq focused-spence test@kubermatic.com 1.23.8 vspher `s8kkpcccfq`) and inspect the vSphere CSI cloud-config to check value of the `cluster-id` field. -```shell +```bash kubectl --kubeconfig= get configmap -n cluster- cloud-config-csi -o yaml ``` The following excerpt shows the most important part of the output. You need to locate the `cluster-id` field under the `[Global]` group. -``` +```yaml apiVersion: v1 data: config: |+ @@ -102,8 +103,9 @@ The second approach assumes changing `cluster-id` without stopping the CSI driver. This approach is **not documented** by VMware, however, it worked in our environment. In this case, there's no significant downtime. 
Since this approach is not documented by VMware, we **heavily advise** that you: - - follow the first approach - - if you decide to follow this approach, make sure to extensively test it in + + * follow the first approach + * if you decide to follow this approach, make sure to extensively test it in a staging/testing environment before applying it in the production ### Approach 1 (recommended) @@ -141,7 +143,7 @@ user cluster. First, pause affected user clusters by running the following command in the **seed cluster** for **each affected** user cluster: -```shell +```bash clusterPatch='{"spec":{"pause":true,"features":{"vsphereCSIClusterID":true}}}' kubectl --kubeconfig= patch cluster --type=merge -p $clusterPatch ... @@ -151,7 +153,7 @@ kubectl --kubeconfig= patch cluster --type=merge Once done, scale down the vSphere CSI driver deployment in **each affected user cluster**: -```shell +```bash kubectl --kubeconfig= scale deployment -n kube-system vsphere-csi-controller --replicas=0 ... kubectl --kubeconfig= scale deployment -n kube-system vsphere-csi-controller --replicas=0 @@ -190,13 +192,13 @@ config and update the Secret. The following command reads the config stored in the Secret, decodes it and then saves it to a file called `cloud-config-csi`: -```shell +```bash kubectl --kubeconfig= get secret -n kube-system cloud-config-csi -o=jsonpath='{.data.config}' | base64 -d > cloud-config-csi ``` Open the `cloud-config-csi` file in some text editor: -```shell +```bash vi cloud-config-csi ``` @@ -205,7 +207,7 @@ locate the `cluster-id` field under the `[Global]` group, and replace `` with the name of your user cluster (e.g. `s8kkpcccfq`). -``` +```yaml [Global] user = "username" password = "password" @@ -218,13 +220,13 @@ cluster-id = "" Save the file, exit your editor, and then encode the file: -```shell +```bash cat cloud-config-csi | base64 -w0 ``` Copy the encoded output and run the following `kubectl edit` command: -```shell +```bash kubectl --kubeconfig= edit secret -n kube-system cloud-config-csi ``` @@ -268,7 +270,7 @@ the `cluster-id` value to the name of the user cluster. Run the following `kubectl edit` command. Replace `` in the command with the name of user cluster (e.g. `s8kkpcccfq`). -```shell +```bash kubectl --kubeconfig= edit configmap -n cluster- cloud-config-csi ``` @@ -303,7 +305,7 @@ to vSphere to de-register all volumes. cluster. The `vsphereCSIClusterID` feature flag enabled at the beginning ensures that your `cluster-id` changes are persisted once the clusters are unpaused. -```shell +```bash clusterPatch='{"spec":{"pause":false}}' kubectl patch cluster --type=merge -p $clusterPatch ... @@ -351,7 +353,7 @@ Start with patching the Cluster object for **each affected** user clusters to enable the `vsphereCSIClusterID` feature flag. Enabling this feature flag automatically changes the `cluster-id` value to the cluster name. -```shell +```bash clusterPatch='{"spec":{"features":{"vsphereCSIClusterID":true}}}' kubectl patch cluster --type=merge -p $clusterPatch ... @@ -375,7 +377,7 @@ the seed cluster **AND** the `cloud-config-csi` Secret in the user cluster the ConfigMap in the user cluster namespace in seed cluster, and the second commands reads the config from the Secret in the user cluster. 
-```shell +```bash kubectl --kubeconfig= get configmap -n cluster- cloud-config-csi kubectl --kubeconfig= get secret -n kube-system cloud-config-csi -o jsonpath='{.data.config}' | base64 -d ``` @@ -383,7 +385,7 @@ kubectl --kubeconfig= get secret -n kube-system cloud-c Both the Secret and the ConfigMap should have config with `cluster-id` set to the user cluster name (e.g. `s8kkpcccfq`). -``` +```yaml [Global] user = "username" password = "password" @@ -402,7 +404,7 @@ to the next section. Finally, restart the vSphere CSI controller pods in the **each affected user cluster** to put those changes in the effect: -```shell +```bash kubectl --kubeconfig= delete pods -n kube-system -l app=vsphere-csi-controller ... kubectl --kubeconfig= delete pods -n kube-system -l app=vsphere-csi-controller diff --git a/content/kubermatic/main/how-to-contribute/_index.en.md b/content/kubermatic/main/how-to-contribute/_index.en.md index 3136c9988..7f5ba5e0a 100644 --- a/content/kubermatic/main/how-to-contribute/_index.en.md +++ b/content/kubermatic/main/how-to-contribute/_index.en.md @@ -12,34 +12,34 @@ KKP is an open-source project to centrally manage the global automation of thous There are few things to note when contributing to the KKP project, which are highlighted below: -* KKP project is hosted on GitHub; thus, GitHub knowledge is one of the essential pre-requisites -* The KKP documentation is written in markdown (.md) and located in the [docs repository](https://github.com/kubermatic/docs/tree/main/content/kubermatic) -* See [CONTRIBUTING.md](https://github.com/kubermatic/kubermatic/blob/main/CONTRIBUTING.md) for instructions on the developer certificate of origin that we require -* Familiarization with Hugo for building static site locally is suggested for documentation contribution -* Kubernetes knowledge is also recommended -* The KKP documentation is currently available only in English -* We have a simple code of conduct that should be adhered to +- KKP project is hosted on GitHub; thus, GitHub knowledge is one of the essential pre-requisites +- The KKP documentation is written in markdown (.md) and located in the [docs repository](https://github.com/kubermatic/docs/tree/main/content/kubermatic) +- See [CONTRIBUTING.md](https://github.com/kubermatic/kubermatic/blob/main/CONTRIBUTING.md) for instructions on the developer certificate of origin that we require +- Familiarization with Hugo for building static site locally is suggested for documentation contribution +- Kubernetes knowledge is also recommended +- The KKP documentation is currently available only in English +- We have a simple code of conduct that should be adhered to ## Steps in Contributing to KKP -* Please familiarise yourself with our [Code of Conduct](https://github.com/kubermatic/kubermatic/blob/main/CODE_OF_CONDUCT.md) -* Check the [opened issues](https://github.com/kubermatic/kubermatic/issues) on our GitHub repo peradventure there might be anyone that will be of interest -* Fork the repository on GitHub -* Read the [README](https://github.com/kubermatic/kubermatic/blob/main/README.md) for build and test instructions +- Please familiarise yourself with our [Code of Conduct](https://github.com/kubermatic/kubermatic/blob/main/CODE_OF_CONDUCT.md) +- Check the [opened issues](https://github.com/kubermatic/kubermatic/issues) on our GitHub repo peradventure there might be anyone that will be of interest +- Fork the repository on GitHub +- Read the [README](https://github.com/kubermatic/kubermatic/blob/main/README.md) for build and test 
instructions ## Contribution Workflow The below outlines show an example of what a contributor's workflow looks like: -* Fork the repository on GitHub -* Create a topic branch from where you want to base your work (usually main) -* Make commits of logical units. -* Make sure your commit messages are in the proper format -* Push your changes to the topic branch in your fork repository -* Make sure the tests pass and add any new tests as appropriate -* Submit a pull request to the original repository -* Assign a reviewer if you wish and wait for the PR to be reviewed -* If everything works fine, your PR will be merged into the project's main branch +- Fork the repository on GitHub +- Create a topic branch from where you want to base your work (usually main) +- Make commits of logical units. +- Make sure your commit messages are in the proper format +- Push your changes to the topic branch in your fork repository +- Make sure the tests pass and add any new tests as appropriate +- Submit a pull request to the original repository +- Assign a reviewer if you wish and wait for the PR to be reviewed +- If everything works fine, your PR will be merged into the project's main branch Congratulations! You have successfully contributed to the KKP project. diff --git a/content/kubermatic/main/installation/install-kkp-ce/_index.en.md b/content/kubermatic/main/installation/install-kkp-ce/_index.en.md index 18b70c3bc..2e8afcbb6 100644 --- a/content/kubermatic/main/installation/install-kkp-ce/_index.en.md +++ b/content/kubermatic/main/installation/install-kkp-ce/_index.en.md @@ -35,6 +35,7 @@ For this guide you need to have [kubectl](https://kubernetes.io/docs/tasks/tools You should be familiar with core Kubernetes concepts and the YAML file format before proceeding. + In addition, we recommend familiarizing yourself with the resource quota system of your infrastructure provider. It is important to provide enough capacity to let KKP provision infrastructure for your future user clusters, but also to enforce a maximum to protect against overspending. {{< tabs name="resource-quotas" >}} @@ -132,14 +133,14 @@ The release archive hosted on GitHub contains examples for both of the configura The key items to consider while preparing your configuration files are described in the table below. -| Description | YAML Paths and File | -| ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------- | -| The base domain under which KKP shall be accessible (e.g. `kkp.example.com`). | `.spec.ingress.domain` (`kubermatic.yaml`), `.dex.ingress.hosts[0].host` and `dex.ingress.tls[0].hosts[0]` (`values.yaml`); also adjust `.dex.config.staticClients[*].RedirectURIs` (`values.yaml`) according to your domain. | -| The certificate issuer for KKP (KKP requires it since the dashboard and Dex are accessible only via HTTPS); by default cert-manager is used, but you have to reference an issuer that you need to create later on. | `.spec.ingress.certificateIssuer.name` (`kubermatic.yaml`) | -| For proper authentication, shared secrets must be configured between Dex and KKP. Likewise, Dex uses yet another random secret to encrypt cookies stored in the users' browsers. 
| `.dex.config.staticClients[*].secret` (`values.yaml`), `.spec.auth.issuerClientSecret` (`kubermatic.yaml`); this needs to be equal to `.dex.config.staticClients[name=="kubermaticIssuer"].secret` (`values.yaml`), `.spec.auth.issuerCookieKey` and `.spec.auth.serviceAccountKey` (both `kubermatic.yaml`) | -| To authenticate via an external identity provider, you need to set up connectors in Dex. Check out [the Dex documentation](https://dexidp.io/docs/connectors/) for a list of available providers. This is not required, but highly recommended for multi-user installations. | `.dex.config.connectors` (`values.yaml`; commented in example file) | -| The expose strategy which controls how control plane components of a User Cluster are exposed to worker nodes and users. See [the expose strategy documentation]({{< ref "../../tutorials-howtos/networking/expose-strategies/" >}}) for available options. Defaults to `NodePort` strategy, if not set. | `.spec.exposeStrategy` (`kubermatic.yaml`; not included in example file) | -| Telemetry used to track the KKP and k8s cluster usage, uuid field is required and will print an error message when that entry is missing. | `.telemetry.uuid` (`values.yaml`) | +| Description | YAML Paths and File | +| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| The base domain under which KKP shall be accessible (e.g. `kkp.example.com`). | `.spec.ingress.domain` (`kubermatic.yaml`), `.dex.ingress.hosts[0].host` and `dex.ingress.tls[0].hosts[0]` (`values.yaml`); also adjust `.dex.config.staticClients[*].RedirectURIs` (`values.yaml`) according to your domain. | +| The certificate issuer for KKP (KKP requires it since the dashboard and Dex are accessible only via HTTPS); by default cert-manager is used, but you have to reference an issuer that you need to create later on. | `.spec.ingress.certificateIssuer.name` (`kubermatic.yaml`) | +| For proper authentication, shared secrets must be configured between Dex and KKP. Likewise, Dex uses yet another random secret to encrypt cookies stored in the users' browsers. | `.dex.config.staticClients[*].secret` (`values.yaml`), `.spec.auth.issuerClientSecret` (`kubermatic.yaml`); this needs to be equal to `.dex.config.staticClients[name=="kubermaticIssuer"].secret` (`values.yaml`), `.spec.auth.issuerCookieKey` and `.spec.auth.serviceAccountKey` (both `kubermatic.yaml`) | +| To authenticate via an external identity provider, you need to set up connectors in Dex. Check out [the Dex documentation](https://dexidp.io/docs/connectors/) for a list of available providers. This is not required, but highly recommended for multi-user installations. | `.dex.config.connectors` (`values.yaml`; commented in example file) | +| The expose strategy which controls how control plane components of a User Cluster are exposed to worker nodes and users. See [the expose strategy documentation]({{< ref "../../tutorials-howtos/networking/expose-strategies/" >}}) for available options. 
Defaults to `NodePort` strategy, if not set. | `.spec.exposeStrategy` (`kubermatic.yaml`; not included in example file) | +| Telemetry used to track the KKP and k8s cluster usage, uuid field is required and will print an error message when that entry is missing. | `.telemetry.uuid` (`values.yaml`) | There are many more options, but these are essential to get a minimal system up and running. A full reference of all options can be found in the [KubermaticConfiguration Reference]({{< relref "../../references/crds/#kubermaticconfigurationspec" >}}). The secret keys mentioned above can be generated using any password generator or on the shell using diff --git a/content/kubermatic/main/installation/install-kkp-ce/add-seed-cluster/_index.en.md b/content/kubermatic/main/installation/install-kkp-ce/add-seed-cluster/_index.en.md index c0aa6b013..f1677601f 100644 --- a/content/kubermatic/main/installation/install-kkp-ce/add-seed-cluster/_index.en.md +++ b/content/kubermatic/main/installation/install-kkp-ce/add-seed-cluster/_index.en.md @@ -29,9 +29,9 @@ about the cluster relationships. In this chapter, you will find the following KKP-specific terms: -* **Master Cluster** -- A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster. -* **Seed Cluster** -- A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster. -* **User Cluster** -- A Kubernetes cluster created and managed by KKP, hosting applications managed by users. +- **Master Cluster** -- A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster. +- **Seed Cluster** -- A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster. +- **User Cluster** -- A Kubernetes cluster created and managed by KKP, hosting applications managed by users. ## Overview @@ -82,6 +82,7 @@ a separate storage class with a different location/security level. The following {{< tabs name="StorageClass Creation" >}} {{% tab name="AWS" %}} + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -91,8 +92,10 @@ provisioner: kubernetes.io/aws-ebs parameters: type: sc1 ``` + {{% /tab %}} {{% tab name="Azure" %}} + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -103,8 +106,10 @@ parameters: kind: Managed storageaccounttype: Standard_LRS ``` + {{% /tab %}} {{% tab name="GCP" %}} + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -114,8 +119,10 @@ provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd ``` + {{% /tab %}} {{% tab name="vSphere" %}} + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -123,8 +130,10 @@ metadata: name: kubermatic-backup provisioner: csi.vsphere.vmware.com ``` + {{% /tab %}} {{% tab name="Other Providers" %}} + For other providers, please refer to the respective CSI driver documentation. It should guide you through setting up a `StorageClass`. Ensure that the `StorageClass` you create is named `kubermatic-backup`. 
The final resource should look something like this: ```yaml @@ -139,6 +148,7 @@ parameters: parameter1: value1 parameter2: value2 ``` + {{% /tab %}} {{< /tabs >}} @@ -369,7 +379,7 @@ Key considerations for creating your `Seed` resource are: ### Configure Datacenters -Each `Seed` has a map of so-called _Datacenters_ (under `.spec.datacenters`), which define the cloud +Each `Seed` has a map of so-called *Datacenters* (under `.spec.datacenters`), which define the cloud provider locations that User Clusters can be deployed to. Every datacenter name is globally unique in a KKP setup. Users will select from a list of datacenters when creating User Clusters and their clusters will automatically get scheduled to the seed that defines that datacenter. @@ -380,6 +390,7 @@ datacenters: {{< tabs name="Datacenter Examples" >}} {{% tab name="AWS" %}} + ```yaml # Datacenter for AWS 'eu-central-1' region aws-eu-central-1a: @@ -396,8 +407,10 @@ aws-eu-west-1a: aws: region: eu-west-1 ``` + {{% /tab %}} {{% tab name="Azure" %}} + ```yaml # Datacenter for Azure 'westeurope' location azure-westeurope: @@ -407,8 +420,10 @@ azure-westeurope: azure: location: westeurope ``` + {{% /tab %}} {{% tab name="GCP" %}} + ```yaml # Datacenter for GCP 'europe-west3' region # this is configured to use three availability zones and spread cluster resources across them @@ -421,8 +436,10 @@ gce-eu-west-3: regional: true zoneSuffixes: [a,b,c] ``` + {{% /tab %}} {{% tab name="vSphere" %}} + ```yaml # Datacenter for a vSphere setup available under https://vsphere.hamburg.example.com vsphere-hamburg: @@ -438,10 +455,13 @@ vsphere-hamburg: templates: ubuntu: ubuntu-20.04-server-cloudimg-amd64 ``` + {{% /tab %}} {{% tab name="Other Providers" %}} + For additional providers supported by KKP, please check out our [DatacenterSpec CRD documentation]({{< ref "../../../references/crds/#datacenterspec" >}}) for the respective provider you want to use. + {{% /tab %}} {{< /tabs >}} @@ -535,6 +555,7 @@ kubectl apply -f seed-with-secret.yaml #Secret/kubeconfig-kubermatic created. #Seed/kubermatic created. ``` + You can watch the progress by using `kubectl` and `watch` on the master cluster: ```bash @@ -543,7 +564,7 @@ watch kubectl -n kubermatic get seeds #kubermatic 0 Hamburg v2.21.2 v1.24.8 Healthy 5m ``` -Watch the `PHASE` column until it shows "_Healthy_". If it does not after a couple of minutes, you can check +Watch the `PHASE` column until it shows "*Healthy*". If it does not after a couple of minutes, you can check the `kubermatic` namespace on the new seed cluster and verify if there are any Pods showing signs of issues: ```bash diff --git a/content/kubermatic/main/installation/install-kkp-ee/add-seed-cluster/_index.en.md b/content/kubermatic/main/installation/install-kkp-ee/add-seed-cluster/_index.en.md index 86ca0cdcb..8874885ca 100644 --- a/content/kubermatic/main/installation/install-kkp-ee/add-seed-cluster/_index.en.md +++ b/content/kubermatic/main/installation/install-kkp-ee/add-seed-cluster/_index.en.md @@ -18,9 +18,9 @@ Please [contact sales](mailto:sales@kubermatic.com) to receive your credentials. In this chapter, you will find the following KKP-specific terms: -* **Master Cluster** -- A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster. 
-* **Seed Cluster** -- A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster. -* **User Cluster** -- A Kubernetes cluster created and managed by KKP, hosting applications managed by users. +- **Master Cluster** -- A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster. +- **Seed Cluster** -- A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster. +- **User Cluster** -- A Kubernetes cluster created and managed by KKP, hosting applications managed by users. It is also recommended to make yourself familiar with our [architecture documentation]({{< ref "../../../architecture/" >}}). diff --git a/content/kubermatic/main/installation/local-installation/_index.en.md b/content/kubermatic/main/installation/local-installation/_index.en.md index 122caeb8b..b8463282c 100644 --- a/content/kubermatic/main/installation/local-installation/_index.en.md +++ b/content/kubermatic/main/installation/local-installation/_index.en.md @@ -80,7 +80,6 @@ tar -xzvf "kubermatic-${KUBERMATIC_EDITION}-v${VERSION}-darwin-${ARCH}.tar.gz" You can find more information regarding the download instructions in the [CE installation guide](../install-kkp-ce/#download-the-installer) or [EE installation guide](../install-kkp-ee/#download-the-installer). **2. Provide the image pull secret (EE)** - This step is only required if you are using the enterprise edition installer. Replace `${AUTH_TOKEN}` with the Docker authentication JSON provided by Kubermatic and run the following command: ```bash @@ -135,8 +134,8 @@ By default, KubeVirt is configured to use hardware virtualization. If this is no On Linux, KubeVirt uses the inode notify kernel subsystem `inotify` to watch for changes in certain files. Usually you shouldn't need to configure this but in case you can observe the `virt-handler` failing with -``` -kubectl log -nkubevirt ds/virt-handler +```bash +kubectl log -n kubevirt ds/virt-handler ... {"component":"virt-handler","level":"fatal","msg":"Failed to create an inotify watcher","pos":"cert-manager.go:105","reason":"too many open files","timestamp":"2023-06-22T09:58:24.284130Z"} ``` diff --git a/content/kubermatic/main/installation/offline-mode/_index.en.md b/content/kubermatic/main/installation/offline-mode/_index.en.md index 5b182f9c7..fb9f63c7a 100644 --- a/content/kubermatic/main/installation/offline-mode/_index.en.md +++ b/content/kubermatic/main/installation/offline-mode/_index.en.md @@ -23,13 +23,13 @@ without Docker. There are a number of sources for container images used in a KKP setup: -* The container images used by KKP itself (e.g. `quay.io/kubermatic/kubermatic`) -* The images used by the various Helm charts used to deploy KKP (nginx, cert-manager, +- The container images used by KKP itself (e.g. `quay.io/kubermatic/kubermatic`) +- The images used by the various Helm charts used to deploy KKP (nginx, cert-manager, Grafana, ...) -* The images used for creating a user cluster control plane (the Kubernetes apiserver, +- The images used for creating a user cluster control plane (the Kubernetes apiserver, scheduler, metrics-server, ...). -* The images referenced by cluster [Addons]({{< ref "../../architecture/concept/kkp-concepts/addons/" >}}). 
-* The images referenced in system [Applications]({{< ref "../../tutorials-howtos/applications/" >}}). +- The images referenced by cluster [Addons]({{< ref "../../architecture/concept/kkp-concepts/addons/" >}}). +- The images referenced in system [Applications]({{< ref "../../tutorials-howtos/applications/" >}}). To make it easier to collect all required images, the `kubermatic-installer mirror-images` utility is provided. It will scan KKP source code and Helm charts included in a KKP release to determine all images that need to be mirrored. @@ -93,7 +93,6 @@ pass `--registry-prefix 'docker.io'` to `kubermatic-installer mirror-images`. ### Addons - Note that by default, `kubermatic-installer mirror-images` will determine the addons container image based on the `KubermaticConfiguration` file, pull it down and then extract the addon manifests from the image, so that it can then scan them for container images to mirror. @@ -117,6 +116,7 @@ you should pass the `--addons-image` flag instead to reference a non-standard ad The `mirrorImages` field in the `KubermaticConfiguration` allows you to specify additional container images to mirror during the `kubermatic-installer mirror-images` command, simplifying air-gapped setups. Example: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: KubermaticConfiguration @@ -130,7 +130,8 @@ spec: ## Mirroring Binaries -The `kubermatic-installer mirror-binaries` command is designed to **mirror and host essential binaries** required by the Operating System Profiles for provisioning user clusters in **offline/airgapped environments**. This includes critical components like: +The `kubermatic-installer mirror-binaries` command is designed to **mirror and host essential binaries** required by the Operating System Profiles for provisioning user clusters in **offline/airgapped environments**. This includes critical components like: + - **Kubernetes binaries**: `kubeadm`, `kubelet`, `kubectl` - **CNI plugins** (e.g., bridge, ipvlan, loopback, macvlan, etc) - **CRI tools** (e.g., `crictl`) @@ -142,7 +143,8 @@ The default output directory (`/usr/share/nginx/html/`) requires root permission ### Key Features -#### Mirrors Original Domain Structure: +#### Mirrors Original Domain Structure + Binaries are stored in the **exact directory hierarchy** as their original domains (e.g., `containernetworking/plugins/releases/v1.5.1/...`). This allows **DNS-based redirection** of domains like `github.com` or `k8s.gcr.io` to your local/offline server, ensuring the OSP fetches binaries from the mirrored paths **without URL reconfiguration** or **Operating System Profile** changes. ### Example Workflow @@ -162,7 +164,7 @@ INFO[0033] ✅ Finished loading images. ### Example of the Directory Structure -``` +```bash . ├── containernetworking # CNI plugins (Container Network Interface) │ └── plugins @@ -248,6 +250,7 @@ kubectl -n kubermatic get seeds ``` Output will be similar to this: + ```bash #NAME AGE #hamburg 143d diff --git a/content/kubermatic/main/installation/single-node-setup/_index.en.md b/content/kubermatic/main/installation/single-node-setup/_index.en.md index aa2daaf66..984cf2424 100644 --- a/content/kubermatic/main/installation/single-node-setup/_index.en.md +++ b/content/kubermatic/main/installation/single-node-setup/_index.en.md @@ -18,7 +18,7 @@ In this **Get Started with KKP** guide, we will be using AWS Cloud as our underl ## Prerequisites 1. [Terraform >v1.0.0](https://www.terraform.io/downloads) -2. [KubeOne](https://github.com/kubermatic/kubeone/releases) +1. 
[KubeOne](https://github.com/kubermatic/kubeone/releases) ## Download the Repository @@ -95,18 +95,18 @@ export KUBECONFIG=$PWD/aws/-kubeconfig ## Validate the KKP Master Setup -* Get the LoadBalancer External IP by following command. +- Get the LoadBalancer External IP by following command. ```bash kubectl get svc -n ingress-nginx ``` -* Update DNS mapping with External IP of the nginx ingress controller service. In case of AWS, the CNAME record mapping for $TODO_DNS with External IP should be created. +- Update DNS mapping with External IP of the nginx ingress controller service. In case of AWS, the CNAME record mapping for $TODO_DNS with External IP should be created. -* Nginx Ingress Controller Load Balancer configuration - Add the node to backend pool manually. +- Nginx Ingress Controller Load Balancer configuration - Add the node to backend pool manually. > **Known Issue**: Should be supported in the future as part of Feature request[#1822](https://github.com/kubermatic/kubeone/issues/1822) -* Verify the Kubermatic resources and certificates +- Verify the Kubermatic resources and certificates ```bash kubectl -n kubermatic get deployments,pods @@ -122,5 +122,6 @@ export KUBECONFIG=$PWD/aws/-kubeconfig Finally, you should be able to login to KKP dashboard! -Login to https://$TODO_DNS/ +Login to + > Use username/password configured as part of Kubermatic configuration. diff --git a/content/kubermatic/main/installation/upgrading/upgrade-from-2.19-to-2.20/_index.en.md b/content/kubermatic/main/installation/upgrading/upgrade-from-2.19-to-2.20/_index.en.md index 56de7db27..8e84c6133 100644 --- a/content/kubermatic/main/installation/upgrading/upgrade-from-2.19-to-2.20/_index.en.md +++ b/content/kubermatic/main/installation/upgrading/upgrade-from-2.19-to-2.20/_index.en.md @@ -20,13 +20,13 @@ Migrating to KKP 2.20 requires a downtime of all reconciling and includes restar The general migration procedure is as follows: -* Shutdown KKP controllers/dashboard/API. -* Create duplicate of all KKP resources in the new API groups. -* Adjust the owner references in the new resources. -* Remove finalizers and owner references from old objects. -* Delete old objects. -* Deploy new KKP 2.20 Operator. -* The operator will reconcile and restart the remaining KKP controllers, dashboard and API. +- Shutdown KKP controllers/dashboard/API. +- Create duplicate of all KKP resources in the new API groups. +- Adjust the owner references in the new resources. +- Remove finalizers and owner references from old objects. +- Delete old objects. +- Deploy new KKP 2.20 Operator. +- The operator will reconcile and restart the remaining KKP controllers, dashboard and API. {{% notice note %}} Creating clones of, for example, Secrets in a cluster namespace will lead to new resource versions on those cloned Secrets. These new resource versions will affect Deployments like the kube-apiserver once KKP is restarted and reconciles. This will in turn cause all Deployments/StatefulSets to rotate. @@ -52,11 +52,11 @@ tar xzf kubermatic-ce-v2.20.0-linux-amd64.tar.gz Before the migration can begin, a number of preflight checks need to happen first: -* No KKP resource must be marked as deleted. -* The new CRD files must be available on disk. -* All seed clusters must be reachable. -* Deprecated features which were removed in KKP 2.20 must not be used anymore. -* (only before actual migration) No KKP controllers/webhooks must be running. +- No KKP resource must be marked as deleted. +- The new CRD files must be available on disk. 
+- All seed clusters must be reachable. +- Deprecated features which were removed in KKP 2.20 must not be used anymore. +- (only before actual migration) No KKP controllers/webhooks must be running. The first step is to get the kubeconfig file for the KKP **master** cluster. Set the `KUBECONFIG` variable pointing to it: @@ -199,12 +199,12 @@ When you're ready, start the migration: The installer will now -* perform the same preflight checks as the `preflight` command, plus it checks that no KKP controllers are running, -* create a backup of all KKP resources per seed cluster, -* install the new CRDs, -* migrate all KKP resources, -* adjust the owner references and -* optionally remove the old resources if `--remove-old-resources` was given (this can be done manually at any time later on). +- perform the same preflight checks as the `preflight` command, plus it checks that no KKP controllers are running, +- create a backup of all KKP resources per seed cluster, +- install the new CRDs, +- migrate all KKP resources, +- adjust the owner references and +- optionally remove the old resources if `--remove-old-resources` was given (this can be done manually at any time later on). {{% notice note %}} The command is idempotent and can be interrupted and restarted at any time. It will have to go through already migrated resources again, though. diff --git a/content/kubermatic/main/installation/upgrading/upgrade-from-2.21-to-2.22/_index.en.md b/content/kubermatic/main/installation/upgrading/upgrade-from-2.21-to-2.22/_index.en.md index 128b20062..67f3e3379 100644 --- a/content/kubermatic/main/installation/upgrading/upgrade-from-2.21-to-2.22/_index.en.md +++ b/content/kubermatic/main/installation/upgrading/upgrade-from-2.21-to-2.22/_index.en.md @@ -21,7 +21,7 @@ container runtime in KKP 2.22 is therefore containerd. As such, the upgrade will with Docker as container runtime. It is necessary to migrate **existing clusters and cluster templates** to containerd before proceeding. This can be done either via the Kubermatic Dashboard -or with `kubectl`. On the Dashboard, just edit the cluster or cluster template, change the _Container Runtime_ field to `containerd` and save your changes. +or with `kubectl`. On the Dashboard, just edit the cluster or cluster template, change the *Container Runtime* field to `containerd` and save your changes. ![Change Container Runtime](upgrade-container-runtime.png?classes=shadow,border&height=200 "Change Container Runtime") @@ -68,8 +68,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.22.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. Make sure you have the `values.yaml` you used to deploy KKP 2.21 available and already adjusted for any 2.22 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. 
From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.22.0 @@ -120,8 +120,8 @@ Upgrading seed clusters is no longer necessary in KKP 2.22, unless you are runni You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2023-02-16T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2023-02-14T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2023-02-14T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.24.10","kubermatic":"v2.22.0"}} ``` @@ -183,7 +183,7 @@ If a custom values file is required and is ready for use, `kubermatic-installer` uncomment the command flags that you need (e.g. `--helm-values` if you have a `mlavalues.yaml` to pass and `--mla-include-iap` if you are using IAP for MLA; both flags are optional). -```sh +```bash ./kubermatic-installer deploy usercluster-mla \ # uncomment if you are providing non-standard values # --helm-values mlavalues.yaml \ @@ -194,7 +194,6 @@ using IAP for MLA; both flags are optional). ## Post-Upgrade Considerations - ### KubeVirt Migration KubeVirt cloud provider support graduates to GA in KKP 2.22 and has gained several new features. However, KubeVirt clusters need to be migrated after the KKP 2.22 upgrade. [Instructions are available in KubeVirt provider documentation]({{< ref "../../../architecture/supported-providers/kubevirt#migration-from-kkp-221" >}}). diff --git a/content/kubermatic/main/installation/upgrading/upgrade-from-2.22-to-2.23/_index.en.md b/content/kubermatic/main/installation/upgrading/upgrade-from-2.22-to-2.23/_index.en.md index 66e4c4620..f2229b1a3 100644 --- a/content/kubermatic/main/installation/upgrading/upgrade-from-2.22-to-2.23/_index.en.md +++ b/content/kubermatic/main/installation/upgrading/upgrade-from-2.22-to-2.23/_index.en.md @@ -32,31 +32,31 @@ The JSON file contains a `format` key. If the output looks like {"version":"1","format":"xl-single","id":"5dc676ac-92f3-4c19-81d0-2304b366293c","xl":{"version":"3","this":"888f699a-2f22-402a-9e49-2e0fc9abd5c5","sets":[["888f699a-2f22-402a-9e49-2e0fc9abd5c5"]],"distributionAlgo":"SIPMOD+PARITY"}} ``` -you're good to go, no migration required. However if you receive +You're good to go, no migration required. 
However if you receive ```json {"version":"1","format":"fs","id":"baa787b5-43b6-4bcb-b1d7-acf46bcc0a05","fs":{"version":"2"}} ``` -you must either +You must either -* migrate according to the [migration guide](https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html), which effectively involves setting up a second MinIO and copying each file over, or -* wipe your MinIO's storage (e.g. by deleting the PVC, see below), or -* pin the MinIO version to the last version that supports `fs`, which is `RELEASE.2022-10-24T18-35-07Z`, using the Helm values file (set `minio.image.tag=RELEASE.2022-10-24T18-35-07Z`). +- migrate according to the [migration guide](https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html), which effectively involves setting up a second MinIO and copying each file over, or +- wipe your MinIO's storage (e.g. by deleting the PVC, see below), or +- pin the MinIO version to the last version that supports `fs`, which is `RELEASE.2022-10-24T18-35-07Z`, using the Helm values file (set `minio.image.tag=RELEASE.2022-10-24T18-35-07Z`). The KKP installer will, when installing the seed dependencies, perform an automated check and will refuse to upgrade if the existing MinIO volume uses the old `fs` driver. If the contents of MinIO is expendable, instead of migrating it's also possible to wipe (**deleting all data**) MinIO's storage entirely. There are several ways to go about this, for example: ```bash -$ kubectl --namespace minio scale deployment/minio --replicas=0 +kubectl --namespace minio scale deployment/minio --replicas=0 #deployment.apps/minio scaled -$ kubectl --namespace minio delete pvc minio-data +kubectl --namespace minio delete pvc minio-data #persistentvolumeclaim "minio-data" deleted # re-install MinIO chart manually -$ helm --namespace minio upgrade minio ./charts/minio --values myhelmvalues.yaml +helm --namespace minio upgrade minio ./charts/minio --values myhelmvalues.yaml #Release "minio" has been upgraded. Happy Helming! #NAME: minio #LAST DEPLOYED: Mon Jul 24 13:40:51 2023 @@ -65,7 +65,7 @@ $ helm --namespace minio upgrade minio ./charts/minio --values myhelmvalues.yaml #REVISION: 2 #TEST SUITE: None -$ kubectl --namespace minio scale deployment/minio --replicas=1 +kubectl --namespace minio scale deployment/minio --replicas=1 #deployment.apps/minio scaled ``` @@ -97,8 +97,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.23.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. Make sure you have the `values.yaml` you used to deploy KKP 2.22 available and already adjusted for any 2.23 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. 
From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.23.0 @@ -152,12 +152,13 @@ A breaking change in the `minio` Helm chart shipped in KKP v2.23.0 has been iden Upgrading seed cluster is not necessary unless User Cluster MLA has been installed. All other KKP components on the seed will be upgraded automatically. + You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2023-02-16T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2023-02-14T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2023-02-14T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.24.10","kubermatic":"v2.23.0"}} ``` diff --git a/content/kubermatic/main/installation/upgrading/upgrade-from-2.24-to-2.25/_index.en.md b/content/kubermatic/main/installation/upgrading/upgrade-from-2.24-to-2.25/_index.en.md index 428f82e70..f413cff4f 100644 --- a/content/kubermatic/main/installation/upgrading/upgrade-from-2.24-to-2.25/_index.en.md +++ b/content/kubermatic/main/installation/upgrading/upgrade-from-2.24-to-2.25/_index.en.md @@ -59,31 +59,31 @@ The JSON file contains a `format` key. If the output looks like {"version":"1","format":"xl-single","id":"5dc676ac-92f3-4c19-81d0-2304b366293c","xl":{"version":"3","this":"888f699a-2f22-402a-9e49-2e0fc9abd5c5","sets":[["888f699a-2f22-402a-9e49-2e0fc9abd5c5"]],"distributionAlgo":"SIPMOD+PARITY"}} ``` -you're good to go, no migration required. However if you receive +You're good to go, no migration required. However if you receive ```json {"version":"1","format":"fs","id":"baa787b5-43b6-4bcb-b1d7-acf46bcc0a05","fs":{"version":"2"}} ``` -you must either +You must either -* migrate according to the [migration guide](https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html), which effectively involves setting up a second MinIO and copying each file over, or -* wipe your MinIO's storage (e.g. by deleting the PVC, see below), or -* pin the MinIO version to the last version that supports `fs`, which is `RELEASE.2022-10-24T18-35-07Z`, using the Helm values file (set `minio.image.tag=RELEASE.2022-10-24T18-35-07Z`). +- migrate according to the [migration guide](https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html), which effectively involves setting up a second MinIO and copying each file over, or +- wipe your MinIO's storage (e.g. 
by deleting the PVC, see below), or +- pin the MinIO version to the last version that supports `fs`, which is `RELEASE.2022-10-24T18-35-07Z`, using the Helm values file (set `minio.image.tag=RELEASE.2022-10-24T18-35-07Z`). The KKP installer will, when installing the `usercluster-mla` stack, perform an automated check and will refuse to upgrade if the existing MinIO volume uses the old `fs` driver. If the contents of MinIO is expendable, instead of migrating it's also possible to wipe (**deleting all data**) MinIO's storage entirely. There are several ways to go about this, for example: ```bash -$ kubectl --namespace mla scale deployment/minio --replicas=0 +kubectl --namespace mla scale deployment/minio --replicas=0 #deployment.apps/minio scaled -$ kubectl --namespace mla delete pvc minio-data +kubectl --namespace mla delete pvc minio-data #persistentvolumeclaim "minio-data" deleted # re-install MinIO chart manually -$ helm --namespace mla upgrade minio ./charts/minio --values myhelmvalues.yaml +helm --namespace mla upgrade minio ./charts/minio --values myhelmvalues.yaml #Release "minio" has been upgraded. Happy Helming! #NAME: minio #LAST DEPLOYED: Mon Jul 24 13:40:51 2023 @@ -92,7 +92,7 @@ $ helm --namespace mla upgrade minio ./charts/minio --values myhelmvalues.yaml #REVISION: 2 #TEST SUITE: None -$ kubectl --namespace mla scale deployment/minio --replicas=1 +kubectl --namespace mla scale deployment/minio --replicas=1 #deployment.apps/minio scaled ``` @@ -108,8 +108,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.25.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. Make sure you have the `values.yaml` you used to deploy KKP 2.24 available and already adjusted for any 2.25 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.25.0 @@ -160,8 +160,8 @@ Upgrading seed clusters is not necessary, unless you are running the `minio` Hel You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. 
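For the first option, a simple watch loop is usually sufficient (a generic sketch; adjust the namespace if your installation deviates from the default `kubermatic`):

```bash
# Refresh the Pod overview on the master/seed cluster every few seconds
watch -n 5 kubectl get pods -n kubermatic
```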
A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2024-03-11T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2024-03-11T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2024-03-11T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.27.11","kubermatic":"v2.25.0"}} ``` diff --git a/content/kubermatic/main/installation/upgrading/upgrade-from-2.25-to-2.26/_index.en.md b/content/kubermatic/main/installation/upgrading/upgrade-from-2.25-to-2.26/_index.en.md index 72dde1168..bbfcab911 100644 --- a/content/kubermatic/main/installation/upgrading/upgrade-from-2.25-to-2.26/_index.en.md +++ b/content/kubermatic/main/installation/upgrading/upgrade-from-2.25-to-2.26/_index.en.md @@ -26,8 +26,8 @@ Beginning with KKP 2.26, Helm chart versions now use strict semvers without a le KKP 2.26 ships a lot of major version upgrades for the Helm charts, most notably -* Loki & Promtail v2.5 to v2.9.x -* Grafana 9.x to 10.4.x +- Loki & Promtail v2.5 to v2.9.x +- Grafana 9.x to 10.4.x Some of these updates require manual intervention or at least checking whether a given KKP system is affected by upstream changes. Please read the following sections carefully before beginning the upgrade. @@ -41,17 +41,18 @@ Due to labelling changes, and in-place upgrade of Velero is not possible. It's r The switch to the upstream Helm chart requires adjusting the `values.yaml` used to install Velero. Most existing settings have a 1:1 representation in the new chart: -* `velero.podAnnotations` is now `velero.annotations` -* `velero.serverFlags` is now `velero.configuration.*` (each CLI flag is its own field in the YAML file, e.g. `serverFlags:["--log-format=json"]` would become `configuration.logFormat: "json"`) -* `velero.uploaderType` is now `velero.configuration.uploaderType`; note that the default has changed from restic to Kopia, see the next section below for more information. -* `velero.credentials` is now `velero.credentials.*` -* `velero.schedulesPath` is not available anymore, since putting additional files into a Helm chart before installing it is a rather unusual process. Instead, specify the desired schedules directly inside the `values.yaml` in `velero.schedules` -* `velero.backupStorageLocations` is now `velero.configuration.backupStorageLocation` -* `velero.volumeSnapshotLocations` is now `velero.configuration.volumeSnapshotLocation` -* `velero.defaultVolumeSnapshotLocations` is now `velero.configuration.defaultBackupStorageLocation` +- `velero.podAnnotations` is now `velero.annotations` +- `velero.serverFlags` is now `velero.configuration.*` (each CLI flag is its own field in the YAML file, e.g. `serverFlags:["--log-format=json"]` would become `configuration.logFormat: "json"`) +- `velero.uploaderType` is now `velero.configuration.uploaderType`; note that the default has changed from restic to Kopia, see the next section below for more information. 
+- `velero.credentials` is now `velero.credentials.*` +- `velero.schedulesPath` is not available anymore, since putting additional files into a Helm chart before installing it is a rather unusual process. Instead, specify the desired schedules directly inside the `values.yaml` in `velero.schedules` +- `velero.backupStorageLocations` is now `velero.configuration.backupStorageLocation` +- `velero.volumeSnapshotLocations` is now `velero.configuration.volumeSnapshotLocation` +- `velero.defaultVolumeSnapshotLocations` is now `velero.configuration.defaultBackupStorageLocation` {{< tabs name="Velero Helm Chart Upgrades" >}} {{% tab name="old Velero Chart" %}} + ```yaml velero: podAnnotations: @@ -89,9 +90,11 @@ velero: schedulesPath: schedules/* ``` + {{% /tab %}} {{% tab name="new Velero Chart" %}} + ```yaml velero: annotations: @@ -136,6 +139,7 @@ velero: aws_access_key_id=itsme aws_secret_access_key=andthisismypassword ``` + {{% /tab %}} {{< /tabs >}} @@ -155,15 +159,15 @@ If you decide to switch to Kopia and do not need the restic repository anymore, The configuration syntax for cert-manager has changed slightly. -* Breaking: If you have `.featureGates` value set in `values.yaml`, the features defined there will no longer be passed to cert-manager webhook, only to cert-manager controller. Use the `webhook.featureGates` field instead to define features to be enabled on webhook. -* Potentially breaking: Webhook validation of CertificateRequest resources is stricter now: all `KeyUsages` and `ExtendedKeyUsages` must be defined directly in the CertificateRequest resource, the encoded CSR can never contain more usages that defined there. +- Breaking: If you have `.featureGates` value set in `values.yaml`, the features defined there will no longer be passed to cert-manager webhook, only to cert-manager controller. Use the `webhook.featureGates` field instead to define features to be enabled on webhook. +- Potentially breaking: Webhook validation of CertificateRequest resources is stricter now: all `KeyUsages` and `ExtendedKeyUsages` must be defined directly in the CertificateRequest resource, the encoded CSR can never contain more usages that defined there. ### oauth2-proxy (IAP) 7.6 This upgrade includes one breaking change: -* A change to how auth routes are evaluated using the flags `skip-auth-route`/`skip-auth-regex`: the new behaviour uses the regex you specify to evaluate the full path including query parameters. For more details please read the [detailed PR description](https://github.com/oauth2-proxy/oauth2-proxy/issues/2271). -* The environment variable `OAUTH2_PROXY_GOOGLE_GROUP` has been deprecated in favor of `OAUTH2_PROXY_GOOGLE_GROUPS`. Next major release will remove this option. +- A change to how auth routes are evaluated using the flags `skip-auth-route`/`skip-auth-regex`: the new behaviour uses the regex you specify to evaluate the full path including query parameters. For more details please read the [detailed PR description](https://github.com/oauth2-proxy/oauth2-proxy/issues/2271). +- The environment variable `OAUTH2_PROXY_GOOGLE_GROUP` has been deprecated in favor of `OAUTH2_PROXY_GOOGLE_GROUPS`. Next major release will remove this option. 
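If you rely on `skip-auth-route`/`skip-auth-regex`, it is worth re-checking your patterns against the new matching behaviour. The sketch below uses a hypothetical health-check path purely for illustration; only the flag and environment variable names are upstream oauth2-proxy options:

```bash
# Before 7.6, this anchored pattern only had to match the request path:
#   --skip-auth-regex="^/-/healthy$"
# From 7.6 on, the full request URI including query parameters is evaluated,
# so anchored patterns may need to tolerate an optional query string:
--skip-auth-regex="^/-/healthy(\?.*)?$"

# Replace the deprecated singular variable with its plural successor
# (placeholder group address):
export OAUTH2_PROXY_GOOGLE_GROUPS="platform-admins@example.com"
```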
### Loki & Promtail 2.9 (Seed MLA) @@ -171,16 +175,16 @@ The Loki upgrade from 2.5 to 2.9 might be the most significant bump in this KKP Before upgrading, review your `values.yaml` for Loki, as a number of syntax changes were made: -* Most importantly, `loki.config` is now a templated string that aggregates many other individual values specified in `loki`, for example `loki.tableManager` gets rendered into `loki.config.table_manager`, and `loki.loki.schemaConfig` gets rendered into `loki.config.schema_config`. To follow these changes, if you have `loki.config` in your `values.yaml`, rename it to `loki.loki`. Ideally you should not need to manually override the templating string in `loki.config` from the upstream chart anymore. Additionally, some values are moved out or renamed slightly: - * `loki.config.schema_config` becomes `loki.loki.schemaConfig` - * `loki.config.table_manager` becomes `loki.tableManager` (sic) - * `loki.config.server` was removed, if you need to specify something, use `loki.loki.server` -* The base volume path for the Loki PVC was changed from `/data/loki` to `/var/loki`. -* Configuration for the default image has changed, there is no `loki.image.repository` field anymore, it's now `loki.image.registry` and `loki.image.repository`. -* `loki.affinity` is now a templated string and enabled by default; if you use multiple Loki replicas, your cluster needs to have multiple nodes to host these pods. -* All fields related to the Loki pod (`loki.tolerations`, `loki.resources`, `loki.nodeSelector` etc.) were moved below `loki.singleBinary`. -* Self-monitoring, Grafana Agent and selftests are disabled by default now, reducing the default resource requirements for the logging stack. -* `loki.singleBinary.persistence.enableStatefulSetAutoDeletePVC` is set to `false` to ensure that when the StatefulSet is deleted, the PVCs will not also be deleted. This allows for easier upgrades in the +- Most importantly, `loki.config` is now a templated string that aggregates many other individual values specified in `loki`, for example `loki.tableManager` gets rendered into `loki.config.table_manager`, and `loki.loki.schemaConfig` gets rendered into `loki.config.schema_config`. To follow these changes, if you have `loki.config` in your `values.yaml`, rename it to `loki.loki`. Ideally you should not need to manually override the templating string in `loki.config` from the upstream chart anymore. Additionally, some values are moved out or renamed slightly: + - `loki.config.schema_config` becomes `loki.loki.schemaConfig` + - `loki.config.table_manager` becomes `loki.tableManager` (sic) + - `loki.config.server` was removed, if you need to specify something, use `loki.loki.server` +- The base volume path for the Loki PVC was changed from `/data/loki` to `/var/loki`. +- Configuration for the default image has changed, there is no `loki.image.repository` field anymore, it's now `loki.image.registry` and `loki.image.repository`. +- `loki.affinity` is now a templated string and enabled by default; if you use multiple Loki replicas, your cluster needs to have multiple nodes to host these pods. +- All fields related to the Loki pod (`loki.tolerations`, `loki.resources`, `loki.nodeSelector` etc.) were moved below `loki.singleBinary`. +- Self-monitoring, Grafana Agent and selftests are disabled by default now, reducing the default resource requirements for the logging stack. 
+- `loki.singleBinary.persistence.enableStatefulSetAutoDeletePVC` is set to `false` to ensure that when the StatefulSet is deleted, the PVCs will not also be deleted. This allows for easier upgrades in the future, but if you scale down Loki, you would have to manually deleted the leftover PVCs. ### Alertmanager 0.27 (Seed MLA) @@ -205,39 +209,39 @@ Afterwards you can install the new release from the chart. As is typical for kube-state-metrics, the upgrade simple, but the devil is in the details. There were many minor changes since v2.8, please review [the changelog](https://github.com/kubernetes/kube-state-metrics/releases) carefully if you built upon metrics provided by kube-state-metrics: -* The deprecated experimental VerticalPodAutoscaler metrics are no longer supported, and have been removed. It's recommend to use CustomResourceState metrics to gather metrics from custom resources like the Vertical Pod Autoscaler. -* Label names were regulated to adhere with OTel-Prometheus standards, so existing label names that do not follow the same may be replaced by the ones that do. Please refer to [the PR](https://github.com/kubernetes/kube-state-metrics/pull/2004) for more details. -* Label and annotation metrics aren't exposed by default anymore to reduce the memory usage of the default configuration of kube-state-metrics. Before this change, they used to only include the name and namespace of the objects which is not relevant to users not opting in these metrics. +- The deprecated experimental VerticalPodAutoscaler metrics are no longer supported, and have been removed. It's recommend to use CustomResourceState metrics to gather metrics from custom resources like the Vertical Pod Autoscaler. +- Label names were regulated to adhere with OTel-Prometheus standards, so existing label names that do not follow the same may be replaced by the ones that do. Please refer to [the PR](https://github.com/kubernetes/kube-state-metrics/pull/2004) for more details. +- Label and annotation metrics aren't exposed by default anymore to reduce the memory usage of the default configuration of kube-state-metrics. Before this change, they used to only include the name and namespace of the objects which is not relevant to users not opting in these metrics. 
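If dashboards or alerts in your setup depend on the removed label/annotation metrics, they can be re-enabled selectively. The flags below are upstream kube-state-metrics options; where they are wired in (for example via your chart values) depends on your setup, and the listed label/annotation keys are only placeholders:

```bash
# Re-expose selected labels and annotations per resource type
--metric-labels-allowlist=pods=[app.kubernetes.io/name,app.kubernetes.io/instance]
--metric-annotations-allowlist=namespaces=[example.com/owner]
```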
### node-exporter 1.7 (Seed MLA) This new version comes with a few minor backwards-incompatible changes: -* metrics of offline CPUs in CPU collector were removed -* bcache cache_readaheads_totals metrics were removed -* ntp collector was deprecated -* supervisord collector was deprecated +- metrics of offline CPUs in CPU collector were removed +- bcache cache_readaheads_totals metrics were removed +- ntp collector was deprecated +- supervisord collector was deprecated ### Prometheus 2.51 (Seed MLA) Prometheus had many improvements and some changes to the remote-write functionality that might affect you: -* Remote-write: - * raise default samples per send to 2,000 - * respect `Retry-After` header on 5xx errors - * error `storage.ErrTooOldSample` is now generating HTTP error 400 instead of HTTP error 500 -* Scraping: - * Do experimental timestamp alignment even if tolerance is bigger than 1% of scrape interval +- Remote-write: + - raise default samples per send to 2,000 + - respect `Retry-After` header on 5xx errors + - error `storage.ErrTooOldSample` is now generating HTTP error 400 instead of HTTP error 500 +- Scraping: + - Do experimental timestamp alignment even if tolerance is bigger than 1% of scrape interval ### nginx-ingress-controller 1.10 nginx v1.10 brings quite a few potentially breaking changes: -* does not support chroot image (this will be fixed on a future minor patch release) -* dropped Opentracing and zipkin modules, just Opentelemetry is supported as of this release -* dropped support for PodSecurityPolicy -* dropped support for GeoIP (legacy), only GeoIP2 is supported -* The automatically generated `NetworkPolicy` from nginx 1.9.3 is now disabled by default, refer to https://github.com/kubernetes/ingress-nginx/pull/10238 for more information. +- does not support chroot image (this will be fixed on a future minor patch release) +- dropped Opentracing and zipkin modules, just Opentelemetry is supported as of this release +- dropped support for PodSecurityPolicy +- dropped support for GeoIP (legacy), only GeoIP2 is supported +- The automatically generated `NetworkPolicy` from nginx 1.9.3 is now disabled by default, refer to for more information. ### Dex 2.40 @@ -253,8 +257,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.26.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. Make sure you have the `values.yaml` you used to deploy KKP 2.26 available and already adjusted for any 2.26 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. 
From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.26.0 @@ -305,8 +309,8 @@ Upgrading seed clusters is not necessary, unless you are running the `minio` Hel You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2024-03-11T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2024-03-11T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2024-03-11T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.27.11","kubermatic":"v2.25.0"}} ``` diff --git a/content/kubermatic/main/installation/upgrading/upgrade-from-2.26-to-2.27/_index.en.md b/content/kubermatic/main/installation/upgrading/upgrade-from-2.26-to-2.27/_index.en.md index 37b92567c..6ecd399fd 100644 --- a/content/kubermatic/main/installation/upgrading/upgrade-from-2.26-to-2.27/_index.en.md +++ b/content/kubermatic/main/installation/upgrading/upgrade-from-2.26-to-2.27/_index.en.md @@ -32,17 +32,20 @@ A regression in KKP v2.26.0 caused the floatingIPPool field in OpenStack cluster If your OpenStack clusters use a floating IP pool other than the default, you may need to manually update Cluster objects after upgrading to v2.27. -* Action Required: - * After the upgrade, check your OpenStack clusters and manually reset the correct floating IP pool if needed. - * Example command to check the floating IP pool - ```sh - kubectl get clusters -o jsonpath="{.items[*].spec.cloud.openstack.floatingIPPool}" - ``` - * If incorrect, manually edit the Cluster object: - ```sh - kubectl edit cluster - ``` +- Action Required: + - After the upgrade, check your OpenStack clusters and manually reset the correct floating IP pool if needed. + - Example command to check the floating IP pool + ```bash + kubectl get clusters -o jsonpath="{.items[*].spec.cloud.openstack.floatingIPPool}" + ``` + + - If incorrect, manually edit the Cluster object: + + ```bash + kubectl edit cluster + ``` + ### Velero Configuration Changes By default, Velero backups and snapshots are turned off. If you were using Velero for etcd backups and/or volume backups, you must explicitly enable them in your values.yaml file. 
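As a minimal sketch, re-enabling both features could look like this in the Velero section of your `values.yaml` (the key names follow the upstream Velero chart bundled with KKP; double-check them against the chart shipped in your release archive):

```yaml
velero:
  # Both toggles default to false as of this release; set them explicitly if you
  # relied on Velero for etcd backups and/or volume snapshots before upgrading.
  backupsEnabled: true
  snapshotsEnabled: true
```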
@@ -88,19 +91,20 @@ Because the namespace changes, both old and new Dex can temporarily live side-by To begin the migration, create a new `values.yaml` section for Dex (both old and new chart use `dex` as the top-level key in the YAML file) and migrate your existing configuration as follows: -* `dex.replicas` is now `dex.replicaCount` -* `dex.env` is now `dex.envVars` -* `dex.extraVolumes` is now `dex.volumes` -* `dex.extraVolumeMounts` is now `dex.volumeMounts` -* `dex.certIssuer` has been removed, admins must manually set the necessary annotations on the +- `dex.replicas` is now `dex.replicaCount` +- `dex.env` is now `dex.envVars` +- `dex.extraVolumes` is now `dex.volumes` +- `dex.extraVolumeMounts` is now `dex.volumeMounts` +- `dex.certIssuer` has been removed, admins must manually set the necessary annotations on the ingress to integrate with cert-manager. -* `dex.ingress` has changed internally: - * `class` is now `className` (the value "non-existent" is not supported anymore, use the `dex.ingress.enabled` field instead) - * `host` and `path` are gone, instead admins will have to manually define their Ingress configuration - * `scheme` is likewise gone and admins have to configure the `tls` section in the Ingress configuration +- `dex.ingress` has changed internally: + - `class` is now `className` (the value "non-existent" is not supported anymore, use the `dex.ingress.enabled` field instead) + - `host` and `path` are gone, instead admins will have to manually define their Ingress configuration + - `scheme` is likewise gone and admins have to configure the `tls` section in the Ingress configuration {{< tabs name="Dex Helm Chart values" >}} {{% tab name="old oauth Chart" %}} + ```yaml dex: replicas: 2 @@ -129,9 +133,11 @@ dex: name: letsencrypt-prod kind: ClusterIssuer ``` + {{% /tab %}} {{% tab name="new dex Chart" %}} + ```yaml # Tell the KKP installer to install the new dex Chart into the # "dex" namespace, instead of the old oauth Chart. @@ -166,19 +172,20 @@ dex: # above. - "kkp.example.com" ``` + {{% /tab %}} {{< /tabs >}} Additionally, Dex's own configuration is now more clearly separated from how Dex's Kubernetes manifests are configured. The following changes are required: -* In general, Dex's configuration is everything under `dex.config`. -* `dex.config.issuer` has to be set explicitly (the old `oauth` Chart automatically set it), usually to `https:///dex`, e.g. `https://kkp.example.com/dex`. -* `dex.connectors` is now `dex.config.connectors` -* `dex.expiry` is now `dex.config.expiry` -* `dex.frontend` is now `dex.config.frontend` -* `dex.grpc` is now `dex.config.grpc` -* `dex.clients` is now `dex.config.staticClients` -* `dex.staticPasswords` is now `dex.config.staticPasswords` (when using static passwords, you also have to set `dex.config.enablePasswordDB` to `true`) +- In general, Dex's configuration is everything under `dex.config`. +- `dex.config.issuer` has to be set explicitly (the old `oauth` Chart automatically set it), usually to `https:///dex`, e.g. `https://kkp.example.com/dex`. +- `dex.connectors` is now `dex.config.connectors` +- `dex.expiry` is now `dex.config.expiry` +- `dex.frontend` is now `dex.config.frontend` +- `dex.grpc` is now `dex.config.grpc` +- `dex.clients` is now `dex.config.staticClients` +- `dex.staticPasswords` is now `dex.config.staticPasswords` (when using static passwords, you also have to set `dex.config.enablePasswordDB` to `true`) Finally, theming support has changed. 
The old `oauth` Helm chart allowed to inline certain assets, like logos, as base64-encoded blobs into the Helm values. This mechanism is not available in the new `dex` Helm chart and admins have to manually provision the desired theme. KKP's Dex chart will setup a `dex-theme-kkp` ConfigMap, which is mounted into Dex and then overlays files over the default theme that ships with Dex. To customize, create your own ConfigMap/Secret and adjust `dex.volumes`, `dex.volumeMounts` and `dex.config.frontend.theme` / `dex.config.frontend.dir` accordingly. @@ -192,6 +199,7 @@ kubectl rollout restart deploy kubermatic-api -n kubermatic ``` #### Important: Update OIDC Provider URL for Hostname Changes + Before configuring the UI to use the new URL, ensure that the new Dex installation is healthy by checking that the pods are running and the logs show no suspicious errors. ```bash @@ -200,6 +208,7 @@ kubectl get pods -n dex # To check the logs kubectl get logs -n dex deploy/dex ``` + Next, verify the OpenID configuration by running: ```bash @@ -236,16 +245,16 @@ spec: Once you have verified that the new Dex installation is up and running, you can either -* point KKP to the new Dex installation (if its new URL is meant to be permanent) by changing the `tokenIssuer` in the `KubermaticConfiguration`, or -* delete the old `oauth` release (`helm -n oauth delete oauth`) and then re-deploy the new Dex release, but with the same host+path as the old `oauth` chart used, so that no further changes are necessary in downstream components like KKP. This will incur a short downtime, while no Ingress exists for the issuer URL configured in KKP. +- point KKP to the new Dex installation (if its new URL is meant to be permanent) by changing the `tokenIssuer` in the `KubermaticConfiguration`, or +- delete the old `oauth` release (`helm -n oauth delete oauth`) and then re-deploy the new Dex release, but with the same host+path as the old `oauth` chart used, so that no further changes are necessary in downstream components like KKP. This will incur a short downtime, while no Ingress exists for the issuer URL configured in KKP. ### API Changes -* New Prometheus Overrides - * Added `spec.componentsOverride.prometheus` to allow overriding Prometheus replicas and tolerations. +- New Prometheus Overrides + - Added `spec.componentsOverride.prometheus` to allow overriding Prometheus replicas and tolerations. -* Container Image Tagging - * Tagged KKP releases will no longer tag KKP images twice (with the Git tag and the Git hash), but only once with the Git tag. This ensures that existing hash-based container images do not suddenly change when a Git tag is set and the release job is run. Users of tagged KKP releases are not affected by this change. +- Container Image Tagging + - Tagged KKP releases will no longer tag KKP images twice (with the Git tag and the Git hash), but only once with the Git tag. This ensures that existing hash-based container images do not suddenly change when a Git tag is set and the release job is run. Users of tagged KKP releases are not affected by this change. ## Upgrade Procedure @@ -255,8 +264,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.27.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. 
Make sure you have the `values.yaml` you used to deploy KKP 2.27 available and already adjusted for any 2.27 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.27.0 @@ -307,8 +316,8 @@ Upgrading seed clusters is not necessary, unless you are running the `minio` Hel You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2025-02-20T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2025-02-20T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2025-02-20T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.29.13","kubermatic":"v2.27.0"}} ``` @@ -320,13 +329,13 @@ Of particular interest to the upgrade process is if the `ResourcesReconciled` co Some functionality of KKP has been deprecated or removed with KKP 2.27. You should review the full [changelog](https://github.com/kubermatic/kubermatic/blob/main/docs/changelogs/CHANGELOG-2.27.md) and adjust any automation or scripts that might be using deprecated fields or features. Below is a list of changes that might affect you: -* The custom `oauth` Helm chart in KKP has been deprecated and will be replaced with a new Helm chart, `dex`, which is based on the [official upstream chart](https://github.com/dexidp/helm-charts/tree/master/charts/dex), in KKP 2.27. +- The custom `oauth` Helm chart in KKP has been deprecated and will be replaced with a new Helm chart, `dex`, which is based on the [official upstream chart](https://github.com/dexidp/helm-charts/tree/master/charts/dex), in KKP 2.27. -* Canal v3.19 and v3.20 addons have been removed. +- Canal v3.19 and v3.20 addons have been removed. -* kubermatic-installer `--docker-binary` flag has been removed from the kubermatic-installer `mirror-images` subcommand. +- kubermatic-installer `--docker-binary` flag has been removed from the kubermatic-installer `mirror-images` subcommand. -* The `K8sgpt` non-operator application has been deprecated and replaced by the `K8sgpt-operator`. The old application will be removed in future releases. +- The `K8sgpt` non-operator application has been deprecated and replaced by the `K8sgpt-operator`. The old application will be removed in future releases. 
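Regarding the removed Canal addons listed above, a quick inventory of the CNI versions in use can help spot clusters that still need attention; this sketch assumes the CNI is recorded under `spec.cniPlugin` on the Cluster objects:

```bash
# Print each cluster with its CNI type and version.
kubectl get clusters -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.cniPlugin.type}{" "}{.spec.cniPlugin.version}{"\n"}{end}'
```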
## Next Steps diff --git a/content/kubermatic/main/installation/upgrading/upgrade-from-2.27-to-2.28/_index.en.md b/content/kubermatic/main/installation/upgrading/upgrade-from-2.27-to-2.28/_index.en.md index 0ff69f675..c9a120c28 100644 --- a/content/kubermatic/main/installation/upgrading/upgrade-from-2.27-to-2.28/_index.en.md +++ b/content/kubermatic/main/installation/upgrading/upgrade-from-2.27-to-2.28/_index.en.md @@ -111,6 +111,7 @@ kubectl -n monitoring delete statefulset alertmanager Afterwards you can install the new release from the chart using Helm CLI or using your favourite GitOps tool. Finally, cleanup the leftover PVC resources from old helm chart installation. + ```bash kubectl delete pvc -n monitoring -l app=alertmanager ``` @@ -253,9 +254,9 @@ Additionally, Dex's own configuration is now more clearly separated from how Dex Finally, theming support has changed. The old `oauth` Helm chart allowed to inline certain assets, like logos, as base64-encoded blobs into the Helm values. This mechanism is not available in the new `dex` Helm chart and admins have to manually provision the desired theme. KKP's Dex chart will setup a `dex-theme-kkp` ConfigMap, which is mounted into Dex and then overlays files over the default theme that ships with Dex. To customize, create your own ConfigMap/Secret and adjust `dex.volumes`, `dex.volumeMounts` and `dex.config.frontend.theme` / `dex.config.frontend.dir` accordingly. -**Note that you cannot have two Ingress objects with the same host names and paths. So if you install the new Dex in parallel to the old one, you will have to temporarily use a different hostname (e.g. `kkp.example.com/dex` for the old one and `kkp.example.com/dex2` for the new Dex installation).** +**Note** that you cannot have two Ingress objects with the same host names and paths. So if you install the new Dex in parallel to the old one, you will have to temporarily use a different hostname (e.g. `kkp.example.com/dex` for the old one and `kkp.example.com/dex2` for the new Dex installation). -**Restarting Kubermatic API After Dex Migration**: +**Restarting Kubermatic API After Dex Migration:** If you choose to delete the `oauth` chart and immediately switch to the new `dex` chart without using a different hostname, it is recommended to restart the `kubermatic-api` to ensure proper functionality. You can do this by running the following command: ```bash @@ -331,7 +332,7 @@ Upgrading seed clusters is not necessary, unless you are running the `minio` Hel You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh +```bash $ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" # Place holder for output @@ -351,10 +352,10 @@ This retirement affects all customers using the Azure Basic Load Balancer SKU, w If you have Basic Load Balancers deployed within Azure Cloud Services (extended support), these deployments will not be affected by this retirement, and no action is required for these specific instances. 
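To get an overview of which KKP user clusters on Azure might still reference the Basic SKU, a query along these lines can be used; it assumes the SKU is stored under `spec.cloud.azure.loadBalancerSKU` (non-Azure clusters simply print an empty value):

```bash
# List clusters with their configured Azure load balancer SKU, if any.
kubectl get clusters -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.cloud.azure.loadBalancerSKU}{"\n"}{end}'
```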
For more details about this deprecation, please refer to the official Azure announcement: -[https://azure.microsoft.com/en-us/updates?id=azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer](https://azure.microsoft.com/en-us/updates?id=azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer) + The Azure team has created an upgrade guideline, including required scripts to automate the migration process. -Please refer to the official documentation for detailed upgrade instructions: [https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-basic-upgrade-guidance#upgrade-using-automated-scripts-recommended](https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-basic-upgrade-guidance#upgrade-using-automated-scripts-recommended) +Please refer to the official documentation for detailed upgrade instructions: ## Next Steps diff --git a/content/kubermatic/main/references/setup-checklist/_index.en.md b/content/kubermatic/main/references/setup-checklist/_index.en.md index c10a61e58..5a0b6257d 100644 --- a/content/kubermatic/main/references/setup-checklist/_index.en.md +++ b/content/kubermatic/main/references/setup-checklist/_index.en.md @@ -66,7 +66,6 @@ A helpful shortcut, we recommend our KubeOne tooling container, which contains a Kubermatic exposes an NGINX server and user clusters API servers via Load Balancers. Therefore, KKP is using the native Kubernetes Service Type `LoadBalancer` implementation. More details about the different expose points you find at the next chapter “DHCP/Networking” - ### **On-Premise/Bring-your-own Load Balancer** If no external load balancer is provided for the setup, we recommend [KubeLB](https://docs.kubermatic.com/kubelb) for Multi-Tenant Load Balancing. @@ -75,7 +74,6 @@ If no external load balancer is provided for the setup, we recommend [KubeLB](ht As frontend IPAM solution and IP announcement, KubeLB could use on-premise non-multi-tenant LB implementations like Cilium or MetalLB in Layer 2 ARP or BGP mode. (Also commercial Kubernetes conform implementation like [F5 Big IP](https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.0/#) would work). KubeLB will add the multi-tenant plus central DNS, Certificate and Ingress management. KubeLB deliver for each Kubernetes Cluster one tenant separated authentication token, what get used via the so called [KubeLB CCM](https://docs.kubermatic.com/kubelb/v1.1/installation/tenant-cluster/), what automatically get configured for KKP clusters. The KubeLB CCM is then handling service and node announcements. For Setups where multi-tenant automated LB is not required, direct [MetalLB](https://metallb.universe.tf/) or [Cilium](https://docs.cilium.io/) setups could be used as well. For the best performance and stability of the platform, we recommend to talk with our consultants to advise you what is the best fit for your environment. - #### **Layer 2 ARP Announcement** If you choose to use Layer 2 ARP Announcements, you require a set of usable IP addresses in the target Layer 2 network segment, that are not managed by DHCP (at least 2 for Kubermatic itself + your workload load balancers). For deeper information about how Layer 2 ARP works, take a look at [MetalLB in layer 2 mode](https://metallb.universe.tf/concepts/layer2/). The role of this LB is different from e.g. go-between, which is only used to access the master clusters API server. 
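As an illustration only (not a configuration shipped with KKP), a minimal MetalLB layer-2 setup for such a reserved, DHCP-excluded range could look like this; the pool name, namespace and addresses are placeholders:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kkp-pool
  namespace: metallb-system
spec:
  # IPs reserved for KKP and workload load balancers, excluded from DHCP.
  addresses:
    - 10.10.10.240-10.10.10.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kkp-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - kkp-pool
```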
Some Reference for the settings you find at: @@ -84,7 +82,6 @@ If you choose to use Layer 2 ARP Announcements, you require a set of usable IP a - [Cilium L2 Announcements](https://docs.cilium.io/en/stable/network/l2-announcements/) - #### **BGP Advertisement (recommended)** It’s recommend to use BGP for IP announcement as BGP can handle failovers and Kubernetes node updates way better as the L2 Announcement. Also, BGP supports dedicated load balancing hashing algorithm, for more information see [MetalLB in BGP Mode](https://metallb.io/concepts/bgp/). A load balancer in BGP mode advertises each allocated IP to the configured peers with no additional BGP attributes. The peer router(s) will receive one /32 route for each service IP, with the BGP localpref set to zero and no BGP communities. For the different configurations take a look at the reference settings at: @@ -93,7 +90,6 @@ It’s recommend to use BGP for IP announcement as BGP can handle failovers and - [Cilium BGP Control Plane](https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane/) - ### **Public/Private Cloud Load Balancers** For other Load Balancer scenarios we strongly recommend to use cloud environment specific Load Balancer that comes with the dedicated cloud CCM. These native cloud LBs can interact dynamically with Kubernetes to provide updates for service type Load Balancer or Ingress objects. For more detail see [Kubernetes - Services, Load Balancing, and Networking](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer). @@ -124,7 +120,6 @@ To access a user cluster via API, a wildcard DNS entry per seed cluster (in your Optional: An alternative expose strategy Load Balancer can be chosen. Therefore, every control plane gets its own Load Balancer with an external IP, see [Expose Strategy](https://docs.kubermatic.com/kubermatic/main/tutorials-howtos/networking/expose-strategies/). - ### **Example of DNS Entries for KKP Services** Root DNS-Zone: `*.kubermatic.example.com` @@ -222,8 +217,8 @@ For each Cloud Provider, there will be some requirements and Todo’s (e.g. crea In the following section, you find SOME examples of setups, that don’t need to match 100% to your use case. Please reach out to your technical contact person at Kubermatic, who could provide you with a tailored technical solution for your use case. 
{{< tabs name="Cloud Provider Specifics" >}} - {{% tab name="Azure" %}} + ### **Additional Requirements for Azure** #### **General Requirements** @@ -254,7 +249,6 @@ Azure Account described at [Kubermatic Docs > Supported Provider > Azure](https: - Ensure to share the needed parameter of [Azure - machine.spec.providerConfig.cloudProviderSpec](https://github.com/kubermatic/machine-controller/blob/main/docs/cloud-provider.md#azure) - #### **Integration Option to existing KKP** ##### Option I - Workers only in Azure @@ -269,7 +263,6 @@ Azure Account described at [Kubermatic Docs > Supported Provider > Azure](https: - Application traffic gets exposed at Azure workers (Cloud LBs) - ##### Option II - Additional Seed at Azure + Worker in Azure ![multi seed setup - on-premise and azure](integration-additional-seed-azure.png) @@ -293,8 +286,8 @@ Azure Account described at [Kubermatic Docs > Supported Provider > Azure](https: - Host for Seed provisioning (KubeOne setup) needs to reach the Azure network VMs by SSH {{% /tab %}} - {{% tab name="vSphere" %}} + ### **Cloud Provider vSphere** #### **Access to vSphere API** @@ -305,17 +298,14 @@ For dynamic provisioning of nodes, Kubermatic needs access to the vSphere API en - Alternative for managing via terraform [kubermatic-vsphere-permissions-terraform](https://github.com/kubermatic-labs/kubermatic-vsphere-permissions-terraform) (outdated) - #### **User Cluster / Network separation** The separation and multi-tenancy of KKP and their created user clusters is highly dependent on the provided network and user management of the vSphere Infrastructure. Due to the individuality of such setups, it’s recommended to create a dedicated concept per installation together with Kubermatic engineering team. Please provide at least one separate network CIDR and technical vSphere user for the management components and each expected tenant. - #### **Routable virtual IPs (for metalLB)** To set up Kubermatic behind [MetalLB](https://metallb.universe.tf/), we need a few routable address ranges. This could be sliced into one CIDR. The CIDR should be routed to the target network, but not used for machines. - #### **Master/Seed Cluster(s)** CIDR for @@ -324,12 +314,10 @@ CIDR for - Node-Port-Proxy: 1 IP (if expose strategy NodePort or Tunneling), multiple IPs at expose strategy LoadBalancer (for each cluster one IP) - #### **User Cluster** Depending on the concept of how the application workload gets exposed, IPs need to be reserved for exposing the workload at the user cluster side. As a recommendation, at least one virtual IP need is needed e.g. the [MetalLB user cluster addon](https://docs.kubermatic.com/kubermatic/main/tutorials-howtos/networking/ipam/#metallb-addon-integration) + NGINX ingress. Note: during the provisioning of the user cluster, the IP must be entered for the MetalLB addon or you need to configure a [Multi-Cluster IPAM Pool](https://docs.kubermatic.com/kubermatic/main/tutorials-howtos/networking/ipam/). On manual IP configuration, the user must ensure that there will be no IP conflict. - #### **(if no DHCP) Machine CIDRs** Depending on the target network setup, we need ranges for: @@ -342,7 +330,6 @@ Depending on the target network setup, we need ranges for: To provide a “cloud native” experience to the end user of KKP, we recommend the usage of a DHCP at all layers, otherwise, the management layer (master/seed cluster) could not breathe with the autoscaler. 
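If you go with the Multi-Cluster IPAM Pool approach mentioned in the User Cluster section above, a pool definition might look roughly like the following sketch; the datacenter name, CIDR and allocation size are placeholders, and the exact schema should be checked against the linked IPAM documentation:

```yaml
apiVersion: kubermatic.k8c.io/v1
kind: IPAMPool
metadata:
  name: metallb-ips
spec:
  datacenters:
    vsphere-dc: # must match the datacenter name configured in the Seed
      type: range
      poolCidr: 192.168.100.0/24
      allocationRange: 8 # number of IPs handed out to each user cluster
```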
- #### **Integration** ##### Option I - Workers only in vSphere Datacenter(s) @@ -355,7 +342,6 @@ To provide a “cloud native” experience to the end user of KKP, we recommend - Application traffic get exposed at vSphere workers by the chosen ingress/load balancing solution - ##### Option II - Additional Seed at vSphere Datacenter(s) - Seed Cluster Kubernetes API endpoint at the dedicated vSphere seed cluster (provisioned by e.g. KubeOne) needs to be reachable @@ -378,8 +364,8 @@ To provide a “cloud native” experience to the end user of KKP, we recommend {{% /tab %}} {{% tab name="OpenStack" %}} -### **Cloud Provider OpenStack** +### **Cloud Provider OpenStack** #### **Access to OpenStack API** @@ -393,7 +379,6 @@ For dynamic provisioning of nodes, Kubermatic needs access to the OpenStack API - User / Password or Application Credential ID / Secret - #### **User Cluster / Network separation** The separation and multi-tenancy of KKP and their created user clusters is dependent on the provided network and project structure. Due to the individuality of such setups, it’s recommended to create a dedicated concept per installation together with Kubermatic engineering team. Please provide at least for the management components and for each expected tenant: @@ -416,7 +401,6 @@ The separation and multi-tenancy of KKP and their created user clusters is depen - OpenStack user or application credentials - #### **Further Information** Additional information about the usage of Open Stack within in Kubernetes you could find at: @@ -439,7 +423,6 @@ Additional information about the usage of Open Stack within in Kubernetes you co - OpenStack Cloud Controller Manager: - #### **Integration** ##### Option I - Workers only in OpenStack Datacenter(s) @@ -452,7 +435,6 @@ Additional information about the usage of Open Stack within in Kubernetes you co - Application traffic get exposed at OpenStack workers by the chosen ingress/load balancing solution - ##### Option II - Additional Seed at vSphere Datacenter(s) - Seed Cluster Kubernetes API endpoint at the dedicated OpenStack seed cluster (provisioned by e.g.[KubeOne](https://docs.kubermatic.com/kubeone/main/tutorials/creating-clusters/)) needs to be reachable diff --git a/content/kubermatic/main/release/_index.en.md b/content/kubermatic/main/release/_index.en.md index 5492f0b06..66bd796e8 100644 --- a/content/kubermatic/main/release/_index.en.md +++ b/content/kubermatic/main/release/_index.en.md @@ -4,7 +4,6 @@ date = 2025-07-22T11:07:15+02:00 weight = 80 +++ - ## Kubermatic Release Process This document provides comprehensive information about the Kubermatic release process, outlining release types, cadence, upgrade paths, and artifact delivery mechanisms. This guide is intended for technical users and system administrators managing Kubermatic deployments. @@ -92,4 +91,4 @@ Effective upgrade management is crucial for Kubermatic users. This section detai * Backward Compatibility Guarantees: * API Versions: Kubermatic generally adheres to the Kubernetes API versioning policy. Stable API versions are guaranteed to be backward compatible. Beta API versions may introduce breaking changes, and alpha API versions offer no compatibility guarantees. Users should be aware of the API versions they are utilizing. * Components: While best efforts are made, some internal component changes in minor releases might not be fully backward compatible. Refer to release notes for specific component compatibility details. 
- * Configuration: Configuration schemas may evolve with minor releases. Tools and documentation will be provided to assist with configuration migration. \ No newline at end of file + * Configuration: Configuration schemas may evolve with minor releases. Tools and documentation will be provided to assist with configuration migration. diff --git a/content/kubermatic/main/tutorials-howtos/_index.en.md b/content/kubermatic/main/tutorials-howtos/_index.en.md index f124df7fe..aa11ac6fd 100644 --- a/content/kubermatic/main/tutorials-howtos/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/_index.en.md @@ -4,7 +4,6 @@ date = 2020-02-10T11:07:15+02:00 weight = 20 +++ - Are you just embarking on your cloud adoption journey but feeling a bit overwhelmed on how to get started? Then you’re in the right place here. The purpose of our tutorials is to show how to accomplish a goal that is larger than a single task. Typically a tutorial page has several sections, each of which has a sequence of steps. In our tutorials, we provide detailed walk-throughs of how to get started with Kubermatic Kubernetes Platform (KKP). diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/_index.en.md index cba9e50a1..67827614b 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/_index.en.md @@ -9,8 +9,8 @@ impact all Kubermatic users directly. Admin rights can be granted from the admin setting the `spec.admin` field of the user object to `true`. ```bash -$ kubectl get user -o=custom-columns=INTERNAL_NAME:.metadata.name,NAME:.spec.name,EMAIL:.spec.email,ADMIN:.spec.admin -$ kubectl edit user ... +kubectl get user -o=custom-columns=INTERNAL_NAME:.metadata.name,NAME:.spec.name,EMAIL:.spec.email,ADMIN:.spec.admin +kubectl edit user ... ``` After logging in to the dashboard as an administrator, you should be able to access the admin panel from the menu up @@ -22,7 +22,7 @@ top. Global settings can also be modified from the command line with kubectl. It can be done by editing the `globalsettings` in `KubermaticSetting` CRD. This resource has the following structure: -``` +```yaml apiVersion: kubermatic.k8c.io/v1 kind: KubermaticSetting metadata: @@ -66,8 +66,8 @@ spec: It can be edited directly from the command line: -``` -$ kubectl edit kubermaticsetting globalsettings +```bash +kubectl edit kubermaticsetting globalsettings ``` **Note:** Custom link `icon` is not required and defaults will be used if field is not specified. `icon` URL can @@ -78,32 +78,39 @@ point to the images inside the container as well, i.e. `/assets/images/icons/cus The below global settings are managed via UI: ### Manage Custom Links + Control the way custom links are displayed in the Kubermatic Dashboard. Choose the place that suits you best, whether it is a sidebar, footer or a help & support panel. Check out the [Custom Links]({{< ref "./custom-links" >}}) section for more details. ### Control Cluster Settings + Control number of initial Machine Deployment replicas, cluster deletion cleanup settings, availability of Kubernetes Dashboard for user clusters and more. Check out the [Cluster Settings]({{< ref "./cluster-settings" >}}) section for more details. 
### Manage Dynamic Datacenters + Use number of filtering options to find and control existing dynamic datacenters or simply create a new one.Check out the [Dynamic Datacenters]({{< ref "./dynamic-datacenters-management" >}}) section for more details. ### Manage Administrators + Manage all Kubermatic Dashboard Administrator in a single place. Decide who should be granted or revoked an administrator privileges. Check out the [Administrators]({{< ref "./administrators" >}}) section for more details. ### Manage Presets + Prepare custom provider presets for a variety of use cases. Control which presets will be visible to the users down to the per-provider level. Check out the [Presets]({{< ref "./presets-management" >}}) section for more details. ### OPA Constraint Templates + Constraint Templates allow you to declare new Constraints. They are intended to work as a schema for Constraint parameters and enforce their behavior. Check out the [OPA Constraint Templates]({{< ref "./opa-constraint-templates" >}}) section for more details. ### Backup Buckets + Through the Backup Buckets settings you can enable and configure the new etcd backups per Seed. Check out the [Etcd Backup Settings]({{< ref "./backup-buckets" >}}) section for more details. diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/admin-announcements/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/admin-announcements/_index.en.md index 4f5e94a6e..8e809cc1a 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/admin-announcements/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/admin-announcements/_index.en.md @@ -11,16 +11,19 @@ The Admin Announcement feature allows administrators to broadcast important upda - ### [User View](#user-view) ## Announcement Page + The Announcement page provides the admin with a centralized location to manage all announcements. From here, you can add, edit, and delete announcements. ![Announcements](images/announcements-page.png "Announcements View") -## Add Announcement +## Add Announcement + The dialog allows admins to add new announcements by customizing the message, setting an expiration date, and activating the announcement. ![Add Announcement](images/announcements-dialog.png "Announcements Add Dialog") ## User View + Users can see the latest unread active announcement across all pages from the announcement banner. ![Announcement Banner](images/announcement-banner.png "Announcements Banner") @@ -32,4 +35,3 @@ Users can also see a list of all active announcements by clicking the "See All" You can also view all active announcements from the Help Panel. ![Help Panel](images/help-panel.png "Help Panel") - diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/administrators/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/administrators/_index.en.md index c41ef641d..fcffbb05d 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/administrators/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/administrators/_index.en.md @@ -12,6 +12,7 @@ Administrators view in the Admin Panel allows adding and deleting administrator - ### [Deleting Administrators](#deleting-administrators) ## Adding Administrators + Administrators can be added after clicking on the plus icon in the top right corner of the Administrators view. 
![Add Administrator](images/admin-add.png?classes=shadow,border&height=200 "Administrator Add Dialog") @@ -20,6 +21,7 @@ Email address is the only field that is available in the dialog. After providing provided user should be able to access and use the Admin Panel. ## Deleting Administrators + Administrator rights can be taken away after clicking on the trash icon that appears after putting mouse over one of the rows with administrators. Please note, that it is impossible to take away the rights from current user. diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/backup-buckets/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/backup-buckets/_index.en.md index a8c1566b0..9c0825764 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/backup-buckets/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/backup-buckets/_index.en.md @@ -8,7 +8,6 @@ Through the Backup Destinations settings you can enable and configure the new et ![Backup Destinations](images/backup-destinations.png?classes=shadow,border "Backup Destinations Settings View") - ### Etcd Backup Settings Setting a Bucket and Endpoint for a Seed turns on the automatic etcd Backups and Restore feature, for that Seed only. For now, @@ -44,13 +43,13 @@ For security reasons, the API/UI does not offer a way to get the current credent To see how to make backups and restore your cluster, check the [Etcd Backup and Restore Tutorial]({{< ref "../../../etcd-backups" >}}). - ### Default Backups Since 2.20, default destinations are required if the automatic etcd backups are configured. A default EtcdBackupConfig is created for all the user clusters in the Seed. It has to be a destination that is present in the backup destination list for that Seed. Example Seed with default destination: + ```yaml ... etcdBackupRestore: @@ -66,6 +65,7 @@ Example Seed with default destination: ``` Default EtcdBackupConfig that is created: + ```yaml ... spec: diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/cluster-settings/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/cluster-settings/_index.en.md index 614e54bb6..55e98c715 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/cluster-settings/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/cluster-settings/_index.en.md @@ -4,7 +4,6 @@ date = 2025-07-24T09:45:00+05:00 weight = 20 +++ - Interface section in the Admin Panel allows user to control various cluster-related settings. They can influence cluster creation, management and cleanup after deletion. diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/dynamic-datacenters-management/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/dynamic-datacenters-management/_index.en.md index 2e392ffc6..85ad21888 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/dynamic-datacenters-management/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/dynamic-datacenters-management/_index.en.md @@ -15,6 +15,7 @@ datacenters. All of these will be described below. - ### [Deleting Datacenters](#del) ## Listing & Filtering Datacenters {#add} + Besides traditional list functionalities the Dynamic Datacenter view provides filtering options. 
Datacenters can be filtered by: @@ -25,6 +26,7 @@ filtered by: Filters are applied together, that means result datacenters have to match all the filtering criteria. ## Creating & Editing Datacenters {#cre} + Datacenters can be added after clicking on the plus icon in the top right corner of the Dynamic Datacenters view. To edit datacenter Administrator should click on the pencil icon that appears after putting mouse over one of the rows with datacenters. @@ -49,6 +51,7 @@ Fields available in the dialogs: - Provider Configuration - provider configuration in the YAML format. ## Deleting Datacenters {#del} + Datacenters can be deleted after clicking on the trash icon that appears after putting mouse over one of the rows with datacenters. diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/opa-constraint-templates/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/opa-constraint-templates/_index.en.md index 0881a88e4..0a458b30e 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/opa-constraint-templates/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/opa-constraint-templates/_index.en.md @@ -10,6 +10,7 @@ Constraint Templates allow you to declare new Constraints. They are intended to The Constraint Templates view under OPA menu in Admin Panel allows adding, editing and deleting Constraint Templates. ## Adding Constraint Templates + Constraint Templates can be added after clicking on the `+ Add Constraint Template` icon in the top right corner of the view. ![Add Constraint Template](@/images/ui/opa-admin-add-ct.png?classes=shadow,border&height=350px "Constraint Template Add Dialog") @@ -45,9 +46,11 @@ targets: ``` ## Editing Constraint Templates + Constraint Templates can be edited after clicking on the pencil icon that appears when hovering over one of the rows. The form is identical to the one from creation. ## Deleting Constraint Templates + Constraint Templates can be deleted after clicking on the trash icon that appears when hovering over one of the rows. Please note, that the deletion of a Constraint Template will also delete all Constraints that are assigned to it. ![Delete Constraint Template](@/images/ui/opa-admin-delete-ct.png?classes=shadow,border&height=200 "Constraint Template Delete Dialog") diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/opa-default-constraints/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/opa-default-constraints/_index.en.md index df93e8687..426258ef8 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/opa-default-constraints/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/opa-default-constraints/_index.en.md @@ -17,7 +17,7 @@ To add a new default constraint click on the `+Add Default Constraint` icon on t ![Create Default Constraint](@/images/ui/create-default-constraint-dialog.png?height=300px&classes=shadow,border "Create Default Constraint") -``` +```yaml constraintType: K8sPSPAllowPrivilegeEscalationContainer match: kinds: @@ -60,7 +60,7 @@ In case of no filtering applied Default Constraints are synced to all User Clust for example, Admin wants to apply a policy only on clusters with the provider as `aws` and label selector as `filtered:true` To enable this add the following selectors in the constraint spec for the above use case. 
-``` +```yaml selector: providers: - aws @@ -90,7 +90,6 @@ Disabled Constraint in the Applied cluster View disabled-default-constraint-cluster-view.png ![Disabled Default Constraint](@/images/ui/disabled-default-constraint-cluster-view.png?classes=shadow,border "Disabled Default Constraint") - Enable the constraint by clicking the same button ![Enable Default Constraint](@/images/ui/disabled-default-constraint.png?classes=shadow,border "Enable Default Constraint") diff --git a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/seed-configurations/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/seed-configurations/_index.en.md index 932279bd9..e0d234356 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/admin-panel/seed-configurations/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/admin-panel/seed-configurations/_index.en.md @@ -4,7 +4,6 @@ date = 2023-02-20 weight = 20 +++ - The Seed Configurations page in the Admin Panel allows administrators to see Seeds available in the KKP instance. This page presents Seed's utilization broken down into Providers and Datacenters. {{% notice note %}} @@ -15,7 +14,6 @@ The Seed Configuration section is a readonly view page. This page does not provi ![Seed Configurations](images/seed-confgurations.png?classes=shadow,border "Seed Configurations List View") - ### Seed Details The following page presents Seed's statistics along with tables of utilization broken down respectively per Providers and Datacenters. diff --git a/content/kubermatic/main/tutorials-howtos/administration/kkp-user/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/kkp-user/_index.en.md index ea7e560ef..f513fc58e 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/kkp-user/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/kkp-user/_index.en.md @@ -10,7 +10,7 @@ When a user authenticates for the first time at the Dashboard, an internal User Example User representation: -``` +```yaml apiVersion: kubermatic.k8c.io/v1 kind: User metadata: @@ -22,13 +22,13 @@ spec: name: Jane Doe ``` -# Initial Admin +## Initial Admin -After the installation of Kubermatic Kubernetes Platform the first account that authenticates at the Dashboard is elected as an admin. +After the installation of Kubermatic Kubernetes Platform, the first account that authenticates at the Dashboard is elected as an admin. The account is then capable of setting admin permissions via the [dashboard]({{< ref "../admin-panel/administrators" >}}) . -# Granting Admin Permission via kubectl +## Granting Admin Permission via kubectl Make sure the account logged in once at the Kubermatic Dashboard. diff --git a/content/kubermatic/main/tutorials-howtos/administration/presets/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/presets/_index.en.md index da1bee199..c62a8276d 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/presets/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/presets/_index.en.md @@ -38,9 +38,9 @@ Preset list offers multiple options that allow Administrators to manage Presets. 1. Create a new Preset 1. Manage existing Preset - - Edit Preset (allows showing/hiding specific providers) - - Add a new provider to the Preset - - Edit configure provider + 1. Edit Preset (allows showing/hiding specific providers) + 1. Add a new provider to the Preset + 1. Edit configure provider 1. Show/Hide the Preset. 
Allows hiding Presets from the users and block new cluster creation based on them. 1. A list of providers configured for the Preset. @@ -72,7 +72,7 @@ All configured providers will be available on this step and only a single provid #### Step 3: Settings -The _Settings_ step will vary depending on the provider selected in the previous step. In our example, we have selected +The *Settings* step will vary depending on the provider selected in the previous step. In our example, we have selected an AWS provider. ![Third step of creating a preset](images/create-preset-third-step.png?height=500px&classes=shadow,border) @@ -143,20 +143,19 @@ To delete a preset, select the dotted menu on the Preset list entry and choose t Deleting a preset is **permanent** and will remove the entire preset and any associated provider configurations. {{% /notice %}} - ![Delete Preset Option](images/delete-preset-action-item.png) {{% notice info %}} The system displays a confirmation message that differs depending on whether the preset is associated with any resources: {{% /notice %}} +#### For presets with no associations -**For presets with no associations:** - Simple confirmation message asking if you want to delete the preset permanently and has no associations with existing preset. ![Delete Preset with no associations](images/delete-preset-with-no-associations.png) -**For presets with associations:** +#### For presets with associations Displays a warning showing the number of associated clusters and cluster templates. @@ -178,7 +177,6 @@ The linkages view dialog displays associated clusters and templates, providing d ![View Preset Linkages](images/view-preset-linkages.png) - ### Showing/Hiding Providers Inside the Preset {#show-hide-provider-inside-the-preset} Open `Edit Preset` option through dotted menu on the Preset list entry. @@ -190,7 +188,6 @@ be hidden/shown instead of hiding the whole Preset it can be managed here. ![Showing or hiding specific providers inside the preset dialog](@/images/ui/edit-preset-dialog.png?height=400px&classes=shadow,border) - ## Managing Presets via kubectl While the Kubermatic Dashboard offers easy options to manage Presets, it is also possible to use `kubectl` diff --git a/content/kubermatic/main/tutorials-howtos/administration/user-settings/user-ssh-key-agent/_index.en.md b/content/kubermatic/main/tutorials-howtos/administration/user-settings/user-ssh-key-agent/_index.en.md index 938d2f2c1..0c4845023 100644 --- a/content/kubermatic/main/tutorials-howtos/administration/user-settings/user-ssh-key-agent/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/administration/user-settings/user-ssh-key-agent/_index.en.md @@ -6,6 +6,7 @@ weight = 6 +++ ## Overview + The user ssh key agent is responsible for syncing the defined user ssh keys to the worker nodes, when users attach ssh keys to the user clusters. When users choose to add a user ssh key to a cluster after it was created the agent will sync those keys to the worker machines by fetching the ssh keys and write them to the `authorized_keys` @@ -18,7 +19,7 @@ the content of the file based on the attached user ssh keys. The agent is deployed to the user clusters by default and it is not possible to change whether to deploy it or not once the cluster has been created. The reason behind that is, once the agent is deployed after the cluster is created, any previously added ssh keys in the worker nodes(except the keys that have been added during the cluster creation) will be -deleted. 
If the user disables the agent after the cluster creation, any pre-existing keys won’t be cleaned up. +deleted. If the user disables the agent after the cluster creation, any pre-existing keys won't be cleaned up. Due to the previously mentioned reasons, the agent state cannot be changed once the cluster is created. If users decide to disable the agent(during cluster creation), they should take care of adding ssh keys to the worker nodes by themselves. @@ -27,6 +28,7 @@ During the user cluster creation steps(at the second step), the users have the p is not affected by the agent, whether it was deployed or not. ## Disable user SSH Key Agent feature + To disable the User SSH Key Agent feature completely, enable the following feature flag in the Kubermatic configuration: ```yaml @@ -36,7 +38,9 @@ spec: ``` When this feature flag is enabled, the User SSH Key Agent will be disabled. The SSH Keys page and all SSH key options in the cluster view will also be hidden from the dashboard. + ## Migration + Starting from KKP 2.16.0 on-wards, it was made possible to enable and disable the user ssh key agent during cluster creation. Users can enable the agent in KKP dashboard as it is mentioned above, or by enabling the newly added `enableUserSSHKeyAgent: true` in the cluster spec. For user clusters which were created using KKP 2.15.x and earlier, this has introduced an issue, due to diff --git a/content/kubermatic/main/tutorials-howtos/admission-plugins/_index.en.md b/content/kubermatic/main/tutorials-howtos/admission-plugins/_index.en.md index 23b7fb79e..8057d98dc 100644 --- a/content/kubermatic/main/tutorials-howtos/admission-plugins/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/admission-plugins/_index.en.md @@ -13,7 +13,8 @@ The Kubermatic Kubernetes Platform manages the Kubernetes API server by setting list of admission control plugins to be enabled during cluster creation. In the current version, the default ones are: -``` + +```bash NamespaceLifecycle NodeRestriction LimitRanger @@ -39,6 +40,7 @@ They can be selected in the UI wizard. ![Admission Plugin Selection](@/images/ui/admission-plugins.png?height=400px&classes=shadow,border "Admission Plugin Selection") ### PodNodeSelector Configuration + Selecting the `PodNodeSelector` plugin expands an additional view for the plugin-specific configuration. ![PodNodeSelector Admission Plugin Configuration](@/images/ui/admission-plugin-configuration.png?classes=shadow,border "PodNodeSelector Admission Plugin Configuration") diff --git a/content/kubermatic/main/tutorials-howtos/applications/add-remove-application-version/_index.en.md b/content/kubermatic/main/tutorials-howtos/applications/add-remove-application-version/_index.en.md index ecfa4aa96..b9511f6f3 100644 --- a/content/kubermatic/main/tutorials-howtos/applications/add-remove-application-version/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/applications/add-remove-application-version/_index.en.md @@ -31,6 +31,7 @@ spec: url: https://charts.bitnami.com/bitnami version: 9.2.9 ``` + And want to make the new version `9.2.11` available. Then, all you have to do is to add the new version as described below: ```yaml @@ -60,6 +61,7 @@ spec: url: https://charts.bitnami.com/bitnami version: 9.2.11 ``` + Users will now be able to reference this version in their `ApplicationInstallation`. For additional details, see the [update an application guide]({{< ref "../update-application" >}}). 
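For reference, a user cluster could then consume the new version with an `ApplicationInstallation` along these lines; the names and namespaces are placeholders and the field layout is a sketch based on the `apps.kubermatic.k8c.io/v1` resources used later in this guide:

```yaml
apiVersion: apps.kubermatic.k8c.io/v1
kind: ApplicationInstallation
metadata:
  name: my-apache
  namespace: default
spec:
  namespace:
    name: apache # target namespace for the Helm release
  applicationRef:
    name: apache
    version: 9.2.11 # the newly added version
```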
{{% notice warning %}} @@ -68,11 +70,14 @@ For more details, see how to delete a version from an `ApplicationDefinition`. {{% /notice %}} ## How to delete a version from an ApplicationDefinition + Deleting a version from `ApplicationDefinition` will trigger the deletion of all `ApplicationInstallations` that reference this version! It guarantees that only desired versions are installed in user clusters, which is helpful if a version contains a critical security breach. Under normal circumstances, we recommend following the deprecation policy to delete a version. ### Deprecation policy + Our recommended deprecation policy is as follows: + * stop the user from creating or upgrading to the deprecated version. But let them edit the application using a deprecated version (it may be needed for operational purposes). * notify the user running this version that it's deprecated. @@ -85,6 +90,7 @@ This deprecation policy is an example and may have to be adapted to your organiz The best way to achieve that is using the [Gatekeeper / OPA integration]({{< ref "../../opa-integration" >}}) to create a `ConstraintTemplate` and two [Default Constraints]({{< ref "../../opa-integration#default-constraints" >}}) (one for each point of the deprecation policy) **Example Kubermatic Constraint Template to deprecate a version:** + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: ConstraintTemplate @@ -191,8 +197,8 @@ spec: If users try to create an `ApplicationInstallation` using the deprecation version, they will get the following error message: -``` -$ kubectl create -f app.yaml +```bash +kubectl create -f app.yaml Error from server ([deprecate-app-apache-9-2-9] application `apache` in version `9.2.9` is deprecated. Please upgrade to next version): error when creating "app.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [deprecate-app-apache-9-2-9] application `apache` in version `9.2.9` is deprecated. Please upgrade to the next version ``` @@ -223,17 +229,19 @@ spec: selector: labelSelector: {} ``` + This constraint will raise a warning if a user tries to create, edit, or upgrade to the deprecated version: -``` -$ kubectl edit applicationInstallation my-apache +```bash +kubectl edit applicationInstallation my-apache + Warning: [warn-app-apache-9-2-9] application `apache` in version `9.2.9` is deprecated. Please upgrade to the next version applicationinstallation.apps.kubermatic.k8c.io/my-apache edited ``` We can see which applications are using the deprecated version by looking at the constraint status. -``` +```bash status: [...] auditTimestamp: "2023-01-23T14:55:47Z" diff --git a/content/kubermatic/main/tutorials-howtos/applications/update-application/_index.en.md b/content/kubermatic/main/tutorials-howtos/applications/update-application/_index.en.md index bc672eae7..cb9b10c4a 100644 --- a/content/kubermatic/main/tutorials-howtos/applications/update-application/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/applications/update-application/_index.en.md @@ -8,6 +8,7 @@ This guide targets Cluster Admins and details how to update an Application insta For more details on Applications, please refer to our [Applications Primer]({{< ref "../../../architecture/concept/kkp-concepts/applications/" >}}). ## Update an Application via the UI + Go to the Applications Tab and click on the pen icon to edit the application. 
![Applications Tab](@/images/applications/applications-edit-icon.png?classes=shadow,border "Applications edit button") @@ -18,13 +19,13 @@ Then you can update the values and or version using the editor. If you update the application's version, you may have to update the values accordingly. {{% /notice %}} - ![Applications Tab](@/images/applications/applications-edit-values.png?classes=shadow,border "Applications edit values and version") ## Update an Application via GitOps + Use `kubectl` to edit the applicationInstallation CR. -```sh +```bash kubectl -n edit applicationinstallation ``` @@ -32,5 +33,4 @@ kubectl -n edit applicationinstallation If you update the application's version, you may have to update the values accordingly. {{% /notice %}} - Then you can check the progress of your upgrade in `status.conditions`. For more information, please refer to [Application Life Cycle]({{< ref "../../../architecture/concept/kkp-concepts/applications/application-installation#application-life-cycle" >}}). diff --git a/content/kubermatic/main/tutorials-howtos/audit-logging/_index.en.md b/content/kubermatic/main/tutorials-howtos/audit-logging/_index.en.md index 89fc0576c..13e7aa8fd 100644 --- a/content/kubermatic/main/tutorials-howtos/audit-logging/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/audit-logging/_index.en.md @@ -167,7 +167,8 @@ To enable this, you will need to edit your [datacenter definitions in a Seed]({{ Centrally define audit logging for **user-clusters** (via `auditLogging` in the Seed spec). Configure sidecar settings , webhook backends, and policy presets. Enforce datacenter-level controls with `EnforceAuditLogging` (mandatory logging) and `EnforcedAuditWebhookSettings` (override user-cluster webhook configs). -**Example**: +**Example**: + ```yaml spec: auditLogging: diff --git a/content/kubermatic/main/tutorials-howtos/aws-assume-role/_index.en.md b/content/kubermatic/main/tutorials-howtos/aws-assume-role/_index.en.md index 2fc98b725..675113d70 100644 --- a/content/kubermatic/main/tutorials-howtos/aws-assume-role/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/aws-assume-role/_index.en.md @@ -14,23 +14,26 @@ Using KKP you are able to use the `AssumeRole` feature to easily deploy user clu ![Running user clusters using an assumed IAM role](aws-assume-role-sequence-diagram.png?width=1000&classes=shadow,border "Running user clusters using an assumed IAM role") ## Benefits - * Privilege escalation + + - Privilege escalation - Get access to someones elses AWS account (e.g. a customer) to run user clusters on their behalf - While not described here, it is also possible to assume a role belonging to the same AWS account to escalate your privileges inside of your account - * Billing: All user cluster resources will be billed to **AWS account B** (the "external" account) - * Control: The owner of **AWS account B** (e.g. the customer) has control over all resources created in his account + - Billing: All user cluster resources will be billed to **AWS account B** (the "external" account) + - Control: The owner of **AWS account B** (e.g. 
the customer) has control over all resources created in his account ## Prerequisites - * An **AWS account A** that is allowed to assume the **IAM role R** of a second **AWS account B** + + - An **AWS account A** that is allowed to assume the **IAM role R** of a second **AWS account B** - **A** needs to be able to perform the API call `sts:AssumeRole` - You can test assuming the role by running the following AWS CLI command as **AWS account A**: \ `aws sts assume-role --role-arn "arn:aws:iam::YOUR_AWS_ACCOUNT_B_ID:role/YOUR_IAM_ROLE" --role-session-name "test" --external-id "YOUR_EXTERNAL_ID_IF_SET"` - * An **IAM role R** on **AWS account B** + - An **IAM role R** on **AWS account B** - The role should have all necessary permissions to run user clusters (IAM, EC2, Route53) - The role should have a trust relationship configured that allows **A** to assume the role **R**. Please refer to this [AWS article about trust relationships][aws-docs-how-to-trust-policies] for more information - Setting an `External ID` is optional but recommended when configuring the trust relationship. It helps avoiding the [confused deputy problem][aws-docs-confused-deputy] ## Usage + Creating a new cluster using an assumed role is a breeze. During cluster creation choose AWS as your provider and configure the cluster to your liking. After entering your AWS access credentials (access key ID and secret access key) choose "Enable Assume Role" (1), enter the ARN of the IAM role you would like to assume in field (2) (IAM role ARN should be in the format `arn:aws:iam::ID_OF_AWS_ACCOUNT_B:role/ROLE_NAME`) and if the IAM role has an optional `External ID` add it in field (3). @@ -39,6 +42,7 @@ After that you can proceed as usual. ![Enabling AWS AssumeRole in the cluster creation wizard](aws-assume-role-wizard.png?classes=shadow,border "Enabling AWS AssumeRole in the cluster creation wizard") ## Notes + Please note that KKP has no way to clean up clusters after a trust relationship has been removed. You should assure that all resources managed by KKP have been shut down before removing access. diff --git a/content/kubermatic/main/tutorials-howtos/ccm-migration/_index.en.md b/content/kubermatic/main/tutorials-howtos/ccm-migration/_index.en.md index 2daa8a640..ea517ffa4 100644 --- a/content/kubermatic/main/tutorials-howtos/ccm-migration/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/ccm-migration/_index.en.md @@ -29,6 +29,7 @@ needed a mechanism to allow users to migrate their clusters to the out-of-tree i ### Support and Prerequisites The CCM/CSI migration is supported for the following providers: + * Amazon Web Services (AWS) * OpenStack * [Required OpenStack services and cloudConfig properties for the external CCM][openstack-ccm-reqs] diff --git a/content/kubermatic/main/tutorials-howtos/ccm-migration/via-ui/_index.en.md b/content/kubermatic/main/tutorials-howtos/ccm-migration/via-ui/_index.en.md index 888db5382..fec4a5ce7 100644 --- a/content/kubermatic/main/tutorials-howtos/ccm-migration/via-ui/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/ccm-migration/via-ui/_index.en.md @@ -19,6 +19,7 @@ to the Kubernetes core code. Then, the Kubernetes community moved toward the out a plugin mechanism that allows different cloud providers to integrate their platforms with Kubernetes. 
## CCM Migration Status + To allow migration from in-tree to out-of-tree CCM for existing clusters, the cluster details view has been extended with a section in the top area, the "External CCM Migration Status", that indicates the status of the CCM migration. @@ -27,10 +28,12 @@ section in the top area, the "External CCM Migration Status", that indicates the The "External CCM Migration Status" can have four different possible values: ### Not Needed + The cluster already uses the external CCM. ![ccm_migration_not_needed](ccm-migration-not-needed.png?height=60px&classes=shadow,border) ### Supported + KKP supports the external CCM for the given cloud provider, therefore the cluster can be migrated. ![ccm_migration_supported](ccm-migration-supported.png?height=130px&classes=shadow,border) When clicking on this button, a window pops up to confirm the migration. ![ccm_migration_supported](ccm-migration-confirm.png?height=200px&classes=shadow,border) ### In Progress + External CCM migration has already been enabled for the given cluster, and the migration is in progress. ![ccm_migration_in_progress](ccm-migration-in-progress.png?height=60px&classes=shadow,border) ### Unsupported + KKP does not yet support the external CCM for the given cloud provider. ![ccm_migration_unsupported](ccm-migration-unsupported.png?height=60px&classes=shadow,border) ## Roll out MachineDeployments + Once the CCM migration has been enabled by clicking on the "Supported" button, the migration procedure will remain in the "In progress" status until all the `machineDeployments` have been rolled out. To roll out a `machineDeployment`, go to the `machineDeployment` view and click on the circular arrow in the top right. diff --git a/content/kubermatic/main/tutorials-howtos/cluster-access/_index.en.md b/content/kubermatic/main/tutorials-howtos/cluster-access/_index.en.md index 8c85039e1..f3cca3cf7 100644 --- a/content/kubermatic/main/tutorials-howtos/cluster-access/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/cluster-access/_index.en.md @@ -8,7 +8,9 @@ weight = 14 This manual explains how to configure Role-Based Access Control (a.k.a. RBAC) on user clusters. ## Concepts + You can grant permission to 3 types of subjects: + * `user`: end user identified by their email * `group`: named collection of users * `service account`: a Kubernetes service account that authenticates a process (e.g. Continuous integration) @@ -21,13 +23,13 @@ permission by adding or removing binding. ![list group rbac](@/images/ui/rbac-group-view.png?classes=shadow,border "list group rbac") ![list service account rbac](@/images/ui/rbac-sa-view.png?classes=shadow,border "list service account rbac") - ## Role-Based Access Control Predefined Roles + KKP provides predefined roles and cluster roles to help implement granular permissions for specific resources and simplify access control across the user cluster. All of the default roles and cluster roles are labeled with `component=userClusterRole`. -### Cluster Level +### Cluster Level | Default ClusterRole | Description | |---------------------|----------------------------------------------------------------------------------------------------| @@ -38,15 +40,14 @@
### Namespace Level -| Default Role | Description | -|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| -| namespace-admin | Allows admin access. Allows read/write access to most resources in a namespace. | -| namespace-editor | Allows read/write access to most objects in a namespace. This role allows accessing secrets and running pods as any service account in the namespace| -| namespace-viewer | Allows read-only access to see most objects in a namespace. | - - +| Default Role | Description | +|------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------| +| namespace-admin | Allows admin access. Allows read/write access to most resources in a namespace. | +| namespace-editor | Allows read/write access to most objects in a namespace. This role allows accessing secrets and running pods as any service account in the namespace | +| namespace-viewer | Allows read-only access to see most objects in a namespace. | # Manage User Permissions + You can grant permissions to a group by clicking on `add Bindings`. ![Grant permission to a user](@/images/ui/rbac-user-binding.png?classes=shadow,border "Grant permission to a user") @@ -55,6 +56,7 @@ The cluster owner is automatically connected to the `cluster-admin` cluster role {{% /notice %}} ## Manage Group Permissions + Group are named collection of users. You can grant permission to a group by clicking on `add Bindings`. ![Grant permission to a group](@/images/ui/rbac-group-binding.png?classes=shadow,border "Grant permission to a Group") @@ -65,15 +67,17 @@ If you want to bind an OIDC group, you must prefix the group's name with `oidc:` The kubernetes API Server automatically adds this prefix to prevent conflicts with other authentication strategies {{% /notice %}} - ## Manage Service Account Permissions + Service accounts are designed to authenticate processes like Continuous integration (a.k.a CI). In this example, we will: + * create a Service account * grant permission to 2 namespaces * download the associated kubeconfig that can be used to deploy workload into these namespaces. ### Create a Service Account + Service accounts are namespaced objects. So you must choose in which namespace you will create it. The namespace where the service account live is not related to the granted permissions. To create a service account, click on `Add Service Account` @@ -82,6 +86,7 @@ To create a service account, click on `Add Service Account` In this example, we create a service account named `ci` into `kube-system` namespace. ## Grant Permissions to Service Account + You can grant permission by clicking on `Add binding` ![Grant permission to service account](@/images/ui/rbac-sa-binding.png?classes=shadow,border "Grant permission to service account") @@ -91,8 +96,8 @@ In this example, we grant the permission `namespace-admin` on the namespace `app You can see and remove binding by unfolding the service account. {{% /notice %}} - ### Download Service Account kubeconfig + Finally, you can download the service account's kubeconfig by clicking on the download icon. ![download service account's kubeconfig](@/images/ui/rbac-sa-download-kc.png?classes=shadow,border "Download service account's kubeconfig") @@ -101,8 +106,10 @@ You can edit service account's permissions at any time. 
There is no need to download the kubeconfig again. {{% /notice %}} ### Delete a Service Account + You can delete a service account by clicking on the trash icon. Deleting a service account also deletes all associated bindings. ## Debugging + The best way to debug authorization problems is to enable [audit logging]({{< ref "../audit-logging/" >}}) and check the audit logs. For example, check that the user belongs to the expected groups (see `.user.groups`) diff --git a/content/kubermatic/main/tutorials-howtos/cluster-backup/_index.en.md b/content/kubermatic/main/tutorials-howtos/cluster-backup/_index.en.md index b746656a6..629005d08 100644 --- a/content/kubermatic/main/tutorials-howtos/cluster-backup/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/cluster-backup/_index.en.md @@ -30,7 +30,6 @@ The `ClusterBackupStorageLocation` resource is a simple wrapper for Velero's [Ba When Cluster Backup is enabled for a user cluster, KKP will deploy a managed instance of Velero on the user cluster, and propagate the required Velero `BackupStorageLocation` to the user cluster, with a special prefix to avoid collisions with other user clusters that could be using the same storage. - ![Create ClusterBackupStorageLocation](images/create-cbsl.png?classes=shadow,border "Create ClusterBackupStorageLocation") For simplicity, the KKP UI requires only the minimal values needed to enable a working Velero Backup Storage Location. If further parameters are needed, they can be added by editing the `ClusterBackupStorageLocation` resources on the Seed cluster: @@ -46,12 +45,13 @@ The KKP control plane nodes need to have access to S3 endpoint defined in the St {{% /notice %}} #### Enabling Backup for User Clusters + Cluster Backup can be used for existing clusters and newly created ones. When creating a new user cluster, you can enable the `Cluster Backup` option in the cluster creation wizard. Once enabled, you will be required to assign the Backup Storage Location that you have created previously. ![Enable Backup for New Clusters](images/enable-backup-new-cluster.png?classes=shadow,border "Enable Backup for New Clusters") - For existing clusters, you can edit the cluster to assign the required Cluster Backup Location: + ![Edit Existing Cluster](images/enable-backup-edit-cluster.png?classes=shadow,border "Edit Existing Cluster") {{% notice note %}} @@ -63,17 +63,18 @@ Currently, KKP supports defining a single storage location per cluster. The stora {{% /notice %}} #### Configuring Backups and Schedules + Using the KKP UI, you can configure one-time backups that run as soon as they are defined, or recurring backup schedules that run at specific intervals. Using the user cluster Kubernetes API, you can also create these resources using the Velero command line tool. They should show up automatically on the KKP UI once created. ##### One-time Backups + As with the `CBSL`, the KKP UI allows the user to set the minimal required options to create a backup configuration. Since the backup process is started immediately after creation, it's not possible to edit it via the Kubernetes API. If you need to customize your backup further, you should use a Schedule. To configure a new one-time backup, go to the Backups list, select the cluster you would like to create the backup for from the drop-down list, and click `Create Cluster Backup`. ![Create Backup](images/create-backup.png?classes=shadow,border "Create Backup") - You can select the Namespaces that you want to include in this backup configuration from the dropdown list.
Note that this list of Namespaces is directly fetched from your cluster, so you need to create the Namespaces before configuring the backup. You can define the backup expiration period, which defaults to **30 days**, and you can also choose whether or not to back up Persistent Volumes. The KKP integration uses Velero's [File System Backup](https://velero.io/docs/v1.12/file-system-backup/) to cover the widest range of use cases. Persistent Volume backup is enabled by default. @@ -83,20 +84,23 @@ Backing up Persistent Volume data to S3 backend can be resource intensive, espec {{% /notice %}} ##### Scheduled Backups + Creating scheduled backups is almost identical to one-time backups. Configuration is available from the "Schedules" submenu by selecting your user cluster and clicking `Create Backup Schedule`. For schedules, you also need to add a cron-style schedule to perform the backups. ![Create Backup Schedule](images/create-schedule.png?classes=shadow,border "Create Backup Schedule") - ##### Downloading Backups + KKP UI provides a convenient button to download backups. You can simply go to the "Backups" list, select a user cluster and a specific backup, then click the "Download Backup" button. + {{% notice note %}} The S3 endpoint defined in the Cluster Backup Storage Location must be accessible to the device used to download the backup. {{% /notice %}} #### Performing Restores + To restore a specific backup, go to the Backups page, select your cluster and find the required backup. Click on the `Restore Backup` button. Velero restores backups by creating a `Restore` API resource on the cluster and then reconciling it. @@ -145,6 +149,7 @@ KKP UI will use multipart upload for files larger than the 100MB size and maximu After all files have been uploaded successfully, users can follow the instructions mentioned above for Importing External Backups. ### Security Consideration + KKP administrators and project owners/editors should carefully plan the backup storage strategy of projects and user clusters. Velero Backup is not currently designed with multi-tenancy in mind. While the upstream project is working on that, it's not there yet. As a result, Velero is expected to be managed by the cluster administrator who has full access to Velero resources as well as the backup storage backend.
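In addition to the UI flow described above, the same backup resources can be created with the Velero command line tool against the user cluster, as mentioned earlier in this section. The sketch below is a minimal, illustrative example; the schedule name, namespace and cron expression are assumptions, and it presumes the KKP-managed Velero instance and a working storage location are already in place.

```bash
# Minimal sketch: create a recurring backup of a single namespace with the Velero CLI.
# The schedule name, namespace and cron expression below are illustrative.
velero schedule create nightly-apps \
  --schedule="0 2 * * *" \
  --include-namespaces my-app \
  --ttl 720h

# List the backups produced by the schedule; they should also show up in the KKP UI.
velero backup get
```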
diff --git a/content/kubermatic/main/tutorials-howtos/dashboard-customization/_index.en.md b/content/kubermatic/main/tutorials-howtos/dashboard-customization/_index.en.md index f2c6b360e..0bc29dcc2 100644 --- a/content/kubermatic/main/tutorials-howtos/dashboard-customization/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/dashboard-customization/_index.en.md @@ -9,11 +9,11 @@ This manual explains multiple approaches to add custom themes to the application Here are some quick links to the different chapters: -* [Modifying Available Themes]({{< ref "#modifying-available-themes" >}}) -* [Disabling Theming Functionality]({{< ref "#disabling-theming-functionality" >}}) -* [Possible Customizing Approaches]({{< ref "#possible-customizing-approaches" >}}) - * [Preparing a New Theme With Access to the Sources]({{< ref "#customizing-the-application-sources" >}}) - * [Preparing a New Theme Without Access to the Sources]({{< ref "#customizing-the-application-sources-inside-custom-container" >}}) +- [Modifying Available Themes]({{< ref "#modifying-available-themes" >}}) +- [Disabling Theming Functionality]({{< ref "#disabling-theming-functionality" >}}) +- [Possible Customizing Approaches]({{< ref "#possible-customizing-approaches" >}}) + - [Preparing a New Theme With Access to the Sources]({{< ref "#customizing-the-application-sources" >}}) + - [Preparing a New Theme Without Access to the Sources]({{< ref "#customizing-the-application-sources-inside-custom-container" >}}) ## Modifying Available Themes @@ -28,10 +28,12 @@ In order to disable theming options for all users and enforce using only the def `enforced_theme` property in the application `config.json` file to the name of the theme that should be enforced (i.e. `light`). ## Possible Customizing Approaches + There are two possible approaches of preparing custom themes. They all rely on the same functionality. It all depends on user access to the application code in order to prepare and quickly test the new theme before using it in the official deployment. ### Preparing a New Theme With Access to the Sources + This approach gives user the possibility to reuse already defined code, work with `scss` instead of `css` and quickly test your new theme before uploading it to the official deployment. @@ -40,22 +42,22 @@ All available themes can be found inside `src/assets/themes` directory. Follow t - Create a new `scss` theme file inside `src/assets/themes` directory called `custom.scss`. This is only a temporary name that can be changed later. - As a base reuse code from one of the default themes, either `light.scss` or `dark.scss`. - Register a new style in `src/assets/config/config.json` similar to how it's done for `light` and `dark` themes. As the `name` use `custom`. - - `name` - refers to the theme file name stored inside `assets/themes` directory. - - `displayName` - will be used by the theme picker available in the `Account` view to display a new theme. - - `isDark` - defines the icon to be used by the theme picker (sun/moon). + - `name` - refers to the theme file name stored inside `assets/themes` directory. + - `displayName` - will be used by the theme picker available in the `Account` view to display a new theme. + - `isDark` - defines the icon to be used by the theme picker (sun/moon). 
```json - { - "openstack": { - "wizard_use_default_user": false - }, - "themes": [ - { - "name": "custom", - "displayName": "Custom", - "isDark": false - } - ] - } + { + "openstack": { + "wizard_use_default_user": false + }, + "themes": [ + { + "name": "custom", + "displayName": "Custom", + "isDark": false + } + ] + } ``` - Run the application using `npm start`, open the `Account` view under `User settings`, select your new theme and update `custom.scss` according to your needs. @@ -79,41 +81,49 @@ All available themes can be found inside `src/assets/themes` directory. Follow t ![Theme picker](@/images/ui/custom-theme.png?classes=shadow,border "Theme picker") ### Preparing a New Theme Without Access to the Sources + In this case, the easiest way of preparing a new theme is to download one of the existing themes light/dark. This can be done in a few different ways. We'll describe here two possible ways of downloading enabled themes. #### Download Theme Using the Browser -1. Open KKP UI -2. Open `Developer tools` and navigate to `Sources` tab. -3. There should be a CSS file of a currently selected theme available to be downloaded inside `assts/themes` directory. + +- Open KKP UI +- Open `Developer tools` and navigate to the `Sources` tab. +- There should be a CSS file of the currently selected theme available to be downloaded inside the `assets/themes` directory. ![Dev tools](@/images/ui/developer-tools.png?classes=shadow,border "Dev tools") #### Download Themes Directly From the KKP Dashboard container + Assuming that you know how to exec into the container and copy resources from/to it, themes can be simply copied over to your machine from the running KKP Dashboard container. They are stored inside the container in the `dist/assets/themes` directory. ##### Kubernetes + Assuming that the KKP Dashboard pod name is `kubermatic-dashboard-5b96d7f5df-mkmgh`, you can copy themes to your `${HOME}/themes` directory using the command below: + ```bash kubectl -n kubermatic cp kubermatic-dashboard-5b96d7f5df-mkmgh:/dist/assets/themes ~/themes ``` ##### Docker + Assuming that the KKP Dashboard container name is `kubermatic-dashboard`, you can copy themes to your `${HOME}/themes` directory using the command below: + ```bash docker cp kubermatic-dashboard:/dist/assets/themes/. ~/themes ``` #### Using Compiled Theme to Prepare a New Theme + Once you have a base theme file ready, you can use it to prepare a new theme. To make the process easier to understand, let's assume that we have downloaded a `light.css` file and will be preparing a new theme called `solar.css`. -1. Rename `light.css` to `solar.css`. -2. Update `solar.css` file according to your needs. Anything in the file can be changed or new rules can be added. +- Rename `light.css` to `solar.css`. +- Update `solar.css` file according to your needs. Anything in the file can be changed or new rules can be added. In case you are changing colors, remember to update it in the whole file. -3. Mount new `solar.css` file to `dist/assets/themes` directory inside the application container. **Make sure not to override whole directory.** -4. Update `config.json` file inside `dist/config` directory and register the new theme. +- Mount new `solar.css` file to `dist/assets/themes` directory inside the application container. **Make sure not to override the whole directory.** +- Update `config.json` file inside `dist/config` directory and register the new theme.
```json { diff --git a/content/kubermatic/main/tutorials-howtos/deploy-your-application/_index.en.md b/content/kubermatic/main/tutorials-howtos/deploy-your-application/_index.en.md index 605828f80..dcea62bc4 100644 --- a/content/kubermatic/main/tutorials-howtos/deploy-your-application/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/deploy-your-application/_index.en.md @@ -11,7 +11,8 @@ Log into Kubermatic Kubernetes Platform (KKP), then [create and connect to the c We are using a [hello-world app](https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/tree/master/hello-app) whose image is available at gcr.io/google-samples/node-hello:1.0. First, create a Deployment: -``` + +```yaml apiVersion: apps/v1 kind: Deployment metadata: @@ -38,29 +39,34 @@ spec: ```bash kubectl apply -f load-balancer-example.yaml ``` + To expose the Deployment, create a Service object of type LoadBalancer. + ```bash kubectl expose deployment hello-world --type=LoadBalancer --name=my-service ``` + Now you need to find out the external IP of that service. ```bash kubectl get services my-service ``` + The response on AWS should look like this: -``` +```bash NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service LoadBalancer 10.240.29.100 8080:30574/TCP 19m ``` + If you curl against that external IP: ```bash curl :8080 ``` -you should get this response: +You should get this response: -``` +```bash Hello Kubernetes! ``` diff --git a/content/kubermatic/main/tutorials-howtos/encryption-at-rest/_index.en.md b/content/kubermatic/main/tutorials-howtos/encryption-at-rest/_index.en.md index 5e7be95e8..d442f13f6 100644 --- a/content/kubermatic/main/tutorials-howtos/encryption-at-rest/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/encryption-at-rest/_index.en.md @@ -15,7 +15,7 @@ Data will either be encrypted with static encryption keys or via envelope encryp ## Important Notes -- Data is only encrypted _at rest_ and not when requested by users with sufficient RBAC to access it. This means that the output of `kubectl get secret -o yaml` (or similar commands/actions) remains unencrypted and is only base64-encoded. Proper RBAC management is mandatory to secure secret data at all stages. +- Data is only encrypted *at rest* and not when requested by users with sufficient RBAC to access it. This means that the output of `kubectl get secret -o yaml` (or similar commands/actions) remains unencrypted and is only base64-encoded. Proper RBAC management is mandatory to secure secret data at all stages. - Due to multiple revisions of data existing in etcd, [etcd backups]({{< ref "../etcd-backups/" >}}) might contain previous revisions of a resource that are unencrypted if the etcd backup is taken less than five minutes after data has been encrypted. Previous revisions are compacted every five minutes by `kube-apiserver`. ## Configuring Encryption at Rest @@ -70,7 +70,7 @@ spec: value: ynCl8otobs5NuHu$3TLghqwFXVpv6N//SE6ZVTimYok= ``` -``` +```yaml # snippet for referencing a secret spec: encryptionConfiguration: @@ -95,8 +95,8 @@ Once configured, encryption at rest can be disabled via setting `spec.encryption Since encryption at rest needs to reconfigure the control plane and re-encrypt existing data in a user cluster, applying changes to the encryption configuration can take a while. 
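If you need a fresh static key for the `secretbox` configuration shown above, one quick way to generate a random 32-byte, base64-encoded value (assuming a shell with coreutils or OpenSSL available) is:

```bash
# Generate a random 32-byte key and base64-encode it for use as a secretbox key value.
head -c 32 /dev/urandom | base64

# Alternatively, using OpenSSL:
openssl rand -base64 32
```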
Encryption status can be queried via `kubectl`: -```sh -$ kubectl get cluster -o jsonpath="{.status.encryption.phase}" +```bash +kubectl get cluster -o jsonpath="{.status.encryption.phase}" Active ``` @@ -131,7 +131,6 @@ This will configure the contents of `encryption-key-2022-02` as secondary encryp After control plane components have been rotated, switch the position of the two keys in the `keys` array. The given example will look like this: - ```yaml # only a snippet, not valid on its own! spec: diff --git a/content/kubermatic/main/tutorials-howtos/etcd-backups/_index.en.md b/content/kubermatic/main/tutorials-howtos/etcd-backups/_index.en.md index 39a7bb79d..2d9fd0e8c 100644 --- a/content/kubermatic/main/tutorials-howtos/etcd-backups/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/etcd-backups/_index.en.md @@ -10,7 +10,7 @@ Through KKP you can set up automatic scheduled etcd backups for your user cluste Firstly, you need to enable and configure at least one backup destination (backup bucket, endpoint and credentials). To see how, check [Etcd Backup Destination Settings]({{< ref "../administration/admin-panel/backup-buckets" >}}). It is recommended to enable [EtcdLauncher]({{< ref "../../cheat-sheets/etcd/etcd-launcher" >}}) on the clusters. -It is _required_ for the restore to work. +It is *required* for the restore to work. ## Etcd Backups @@ -48,11 +48,12 @@ It is also possible to do one-time backups (snapshots). The only change to the Y In **Kubermatic Kubernetes Platform (KKP)**, you can configure backup settings (`backupInterval` and `backupCount`) at two levels: 1. **Global Level** (KubermaticConfiguration): Defines default values for all Seeds. -2. **Seed Level** (Seed CRD): Overrides the global settings for a specific Seed. +1. **Seed Level** (Seed CRD): Overrides the global settings for a specific Seed. The **Seed-level configuration takes precedence** over the global KubermaticConfiguration. This allows fine-grained control over backup behavior for individual Seeds. #### **1. Global Configuration (KubermaticConfiguration)** + The global settings apply to all Seeds unless overridden by a Seed's configuration. These are defined in the `KubermaticConfiguration` CRD under `spec.seedController`: ```yaml @@ -70,6 +71,7 @@ spec: --- #### **2. Seed-Level Configuration (Seed CRD)** + Each Seed can override the global settings using the `etcdBackupRestore` field in the `Seed` CRD. These values take precedence over the global configuration: ```yaml @@ -173,7 +175,6 @@ This will create an `EtcdRestore` object for your cluster. You can observe the p ![Etcd Restore List](@/images/ui/etcd-restores-list.png?classes=shadow,border "Etcd Restore List") - #### Starting Restore via kubectl To restore a cluster from am existing backup via `kubectl`, you simply create a restore resource in the cluster namespace: @@ -195,7 +196,6 @@ spec: This needs to reference the backup name from the list of backups (shown above). - ### Restore Progress In the cluster view, you may notice that your cluster is in a `Restoring` state, and you can not interact with it until it is done. 
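If you prefer to follow the restore from the seed cluster rather than the dashboard, a minimal sketch is shown below; it assumes `kubectl` access to the seed cluster, that `cluster-abcd1234` stands in for your cluster's namespace, and that the `EtcdRestore` objects are exposed under the `etcdrestores` resource name.

```bash
# Watch the EtcdRestore objects created for the cluster (the namespace name is illustrative).
kubectl -n cluster-abcd1234 get etcdrestores --watch
```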
diff --git a/content/kubermatic/main/tutorials-howtos/external-clusters/_index.en.md b/content/kubermatic/main/tutorials-howtos/external-clusters/_index.en.md index b98b9e8ce..d6642f6d6 100644 --- a/content/kubermatic/main/tutorials-howtos/external-clusters/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/external-clusters/_index.en.md @@ -7,8 +7,10 @@ weight = 7 This section describes how to add, create and manage Kubernetes clusters known as external clusters in KKP. You can create a new cluster or import/connect an existing cluster. + - Import: You can import a cluster via credentials. Imported Cluster can be viewed and edited. Supported Providers: + - Azure Kubernetes Service (AKS) - Amazon Elastic Kubernetes Service (EKS) - Google Kubernetes Engine (GKE) @@ -22,6 +24,7 @@ Every cluster update is performed only by the cloud provider client. There is no ## Prerequisites The following requirements must be met to add an external Kubernetes cluster: + - The external Kubernetes cluster must already exist before you begin the import/connect process. Please refer to the cloud provider documentation for instructions. - The external Kubernetes cluster must be accessible using kubectl to get the information needed to add that cluster. - Make sure the cluster kubeconfig or provider credentials have sufficient rights to manage the cluster (get, list, upgrade,get kubeconfig) @@ -66,7 +69,7 @@ KKP allows creating a Kubernetes cluster on AKS/GKE/EKS and import it as an Exte ![External Cluster List](@/images/tutorials/external-clusters/externalcluster-list.png "External Cluster List") -## Delete Cluster: +## Delete Cluster {{% notice info %}} Delete operation is not allowed for imported clusters. @@ -109,12 +112,10 @@ You can `Disconnect` an external cluster by clicking on the disconnect icon next ![Disconnect Dialog](@/images/tutorials/external-clusters/disconnect.png "Disconnect Dialog") - ## Delete Cluster {{% notice info %}} Delete Cluster displays information in case nodes are attached {{% /notice %}} - ![Delete External Cluster](@/images/tutorials/external-clusters/delete-external-cluster-dialog.png "Delete External Cluster") diff --git a/content/kubermatic/main/tutorials-howtos/external-clusters/aks/_index.en.md b/content/kubermatic/main/tutorials-howtos/external-clusters/aks/_index.en.md index 7050d8d49..2166f2ed7 100644 --- a/content/kubermatic/main/tutorials-howtos/external-clusters/aks/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/external-clusters/aks/_index.en.md @@ -43,6 +43,7 @@ Validation performed will only check if the credentials have `Read` access. ![Select AKS cluster](@/images/tutorials/external-clusters/select-aks-cluster.png "Select AKS cluster") ## Create AKS Preset + Admin can create a preset on a KKP cluster using KKP `Admin Panel`. This Preset can then be used to Create/Import an AKS cluster. @@ -62,7 +63,7 @@ This Preset can then be used to Create/Import an AKS cluster. ![Choose AKS Preset](@/images/tutorials/external-clusters/choose-akspreset.png "Choose AKS Preset") -- Enter AKS credentials and Click on `Create` button. +- Enter AKS credentials and Click on `Create` button. 
!["Enter Credentials](@/images/tutorials/external-clusters/enter-aks-credentials-preset.png "Enter Credentials") @@ -137,7 +138,7 @@ Navigate to the cluster overview, scroll down to machine deployments and click o ![Update AKS Machine Deployment](@/images/tutorials/external-clusters/delete-md.png "Delete AKS Machine Deployment") -## Cluster State: +## Cluster State {{% notice info %}} `Provisioning State` is used to indicate AKS Cluster State diff --git a/content/kubermatic/main/tutorials-howtos/external-clusters/eks/_index.en.md b/content/kubermatic/main/tutorials-howtos/external-clusters/eks/_index.en.md index ded93a207..0f0436a65 100644 --- a/content/kubermatic/main/tutorials-howtos/external-clusters/eks/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/external-clusters/eks/_index.en.md @@ -42,6 +42,7 @@ You should see the list of all available clusters in the region specified. Selec ![Select EKS cluster](@/images/tutorials/external-clusters/select-eks-cluster.png "Select EKS cluster") ## Create EKS Preset + Admin can create a preset on a KKP cluster using KKP `Admin Panel`. This Preset can then be used to Create/Import an EKS cluster. @@ -61,7 +62,7 @@ This Preset can then be used to Create/Import an EKS cluster. ![Choose EKS Preset](@/images/tutorials/external-clusters/choose-akspreset.png "Choose EKS Preset") -- Enter EKS credentials and Click on `Create` button. +- Enter EKS credentials and Click on `Create` button. ![Enter Credentials](@/images/tutorials/external-clusters/enter-eks-credentials-preset.png "Enter Credentials") @@ -151,7 +152,7 @@ Example: `~/.aws/credentials` -``` +```bash [default] aws_access_key_id=AKIAIOSFODNN7EXAMPLE aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY @@ -161,7 +162,7 @@ aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Now you can create kubeconfig file automatically using the following command: -``` +```bash aws eks update-kubeconfig --region region-code --name cluster-name ``` diff --git a/content/kubermatic/main/tutorials-howtos/external-clusters/eks/create-eks/_index.en.md b/content/kubermatic/main/tutorials-howtos/external-clusters/eks/create-eks/_index.en.md index 143dc5861..83590c1b4 100644 --- a/content/kubermatic/main/tutorials-howtos/external-clusters/eks/create-eks/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/external-clusters/eks/create-eks/_index.en.md @@ -39,11 +39,13 @@ Supported kubernetes versions 1.21.0, 1.22.0, 1.23.0, 1.24.0 currently available ## Configure the Cluster ### Basic Settings + - Name: Provide a unique name for your cluster - Kubernetes Version: Select the Kubernetes version for this cluster. - Cluster Service Role: Select the IAM role to allow the Kubernetes control plane to manage AWS resources on your behalf. This property cannot be changed after the cluster is created. ### Networking + - VPC: Select a VPC to use for your EKS cluster resources - Subnets: Choose the subnets in your VPC where the control plane may place elastic network interfaces (ENIs) to facilitate communication with your cluster. @@ -62,6 +64,7 @@ Both Subnet and Security Groups list depends on chosen VPC. - Add NodeGroup configurations: ### Basic Settings + - Name: Assign a unique name for this node group. The node group name should begin with letter or digit and can have any of the following characters: the set of Unicode letters, digits, hyphens and underscores. Maximum length of 63. - Kubernetes Version: Cluster Control Plane Version is prefilled. 
@@ -69,11 +72,14 @@ Both Subnet and Security Groups list depends on chosen VPC. - Disk Size: Select the size of the attached EBS volume for each node. ### Networking + - VPC: VPC of the cluster is pre-filled. - Subnet: Specify the subnets in your VPC where your nodes will run. ### Autoscaling + Node group scaling configuration: + - Desired Size: Set the desired number of nodes that the group should launch with initially. - Max Count: Set the maximum number of nodes that the group can scale out to. - Min Count: Set the minimum number of nodes that the group can scale in to. diff --git a/content/kubermatic/main/tutorials-howtos/external-clusters/gke/_index.en.md b/content/kubermatic/main/tutorials-howtos/external-clusters/gke/_index.en.md index d6f2ef55e..2f6a5dd14 100644 --- a/content/kubermatic/main/tutorials-howtos/external-clusters/gke/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/external-clusters/gke/_index.en.md @@ -41,6 +41,7 @@ Validation performed will only check if the credentials have `Read` access. {{% /notice %}} ## Create GKE Preset + Admin can create a preset on a KKP cluster using KKP `Admin Panel`. This Preset can then be used to Create/Import an GKE cluster. @@ -60,7 +61,7 @@ This Preset can then be used to Create/Import an GKE cluster. ![Choose EKS Preset](@/images/tutorials/external-clusters/choose-akspreset.png "Choose GKE Preset") -- Enter GKE credentials and Click on `Create` button. +- Enter GKE credentials and Click on `Create` button. ![Enter Credentials](@/images/tutorials/external-clusters/enter-gke-credentials-preset.png "Enter Credentials") @@ -136,25 +137,24 @@ The KKP platform allows getting kubeconfig file for the GKE cluster. ![Get GKE kubeconfig](@/images/tutorials/external-clusters/gke-kubeconfig.png "Get cluster kubeconfig") - The end-user must be aware that the kubeconfig expires after some short period of time. To mitigate this disadvantage you can extend the kubeconfig for the provider information and use exported JSON with the service account for the authentication. - Add `name: gcp` for the users: -``` +```bash users: - name: gke_kubermatic-dev_europe-central2-a_test user: auth-provider: name: gcp ``` + Provide authentication credentials to your application code by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS. This variable applies only to your current shell session. If you want the variable to apply to future shell sessions, set the variable in your shell startup file, for example in the `~/.bashrc` or `~/.profile` file. -``` +```bash export GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH" ``` @@ -162,6 +162,6 @@ Replace `KEY_PATH` with the path of the JSON file that contains your service acc For example: -``` +```bash export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json" ``` diff --git a/content/kubermatic/main/tutorials-howtos/gitops-argocd/_index.en.md b/content/kubermatic/main/tutorials-howtos/gitops-argocd/_index.en.md index 60486cad3..578a05efb 100644 --- a/content/kubermatic/main/tutorials-howtos/gitops-argocd/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/gitops-argocd/_index.en.md @@ -9,8 +9,8 @@ linktitle = "GitOps via ArgoCD" This is an Alpha version of Kubermatic management via GitOps which can become the default way to manage KKP in future. This article explains how you can kickstart that journey with ArgoCD right now. But this feature can also change significantly and in backward incompatible ways. 
**Please use this setup in production at your own risk.** {{% /notice %}} - ## Need of GitOps solution + Kubermatic Kubernetes Platform is a versatile solution to create and manage Kubernetes clusters (user-clusters) a plethora of cloud providers and on-prem virtualizaton platforms. But this flexibility also means that there is a good amount of moving parts. KKP provides various tools to manage user-clusters across various regions and clouds. This is why, if we utilize a GitOps solution to manage KKP and its upgrades, KKP administrators would have better peace of mind. We have now provided [an alpha release of ArgoCD based management of KKP master and seeds](https://github.com/kubermatic/kubermatic/tree/main/charts/gitops/kkp-argocd-apps). @@ -39,6 +39,7 @@ For the demonstration, 1. ArgoCD will be installed in each of the seeds (and master/seed) to manage the respective seed's KKP components A high-level procedure to get ArgoCD to manage the seed would be as follows: + 1. Setup a Kubernetes cluster to be used as master (or master-seed combo) 1. Install ArgoCD Helm chart and KKP ArgoCD Applications Helm chart 1. Install KKP on the master seed using kubermatic-installer @@ -51,38 +52,39 @@ A high-level procedure to get ArgoCD to manage the seed would be as follows: The `Folder and File Structure` section in the [README.md of ArgoCD Apps Component](https://github.com/kubermatic/kubermatic/blob/main/charts/gitops/kkp-argocd-apps/README.md#folder-and-file-structure) explains what files should be present for each seed in what folders and how to customize the behavior of ArgoCD apps installation. -**Note:** Configuring values for all the components of KKP is a humongous task. Also - each KKP installation might like a different directory structure to manage KKP installation. This ArgoCD Apps based approach is an _opinionated attempt_ to provide a standard structure that can be used in most of the KKP installations. If you need different directory structure, refer to [README.md of ArgoCD Apps Component](https://github.com/kubermatic/kubermatic/blob/main/charts/gitops/kkp-argocd-apps/README.md) to understand how you can customize this, if needed. +**Note:** Configuring values for all the components of KKP is a humongous task. Also - each KKP installation might like a different directory structure to manage KKP installation. This ArgoCD Apps based approach is an *opinionated attempt* to provide a standard structure that can be used in most of the KKP installations. If you need different directory structure, refer to [README.md of ArgoCD Apps Component](https://github.com/kubermatic/kubermatic/blob/main/charts/gitops/kkp-argocd-apps/README.md) to understand how you can customize this, if needed. ### ArgoCD Apps + We will install ArgoCD on both the clusters and we will install following components on both clusters via ArgoCD. In non-GitOps scenario, some of these components are managed via kubermatic-installer and rest are left to be managed by KKP administrator in master/seed clusters by themselves. With ArgoCD, except for kubermatic-operator, everything else can be managed via ArgoCD. Choice remains with KKP Administrator to include which apps to be managed by ArgoCD. 1. Core KKP components - 1. Dex (in master) - 1. ngix-ingress-controller - 1. cert-manager + 1. Dex (in master) + 1. ngix-ingress-controller + 1. cert-manager 1. Backup components - 1. Velero + 1. Velero 1. Seed monitoring tools - 1. Prometheus - 1. alertmanager - 1. Grafana - 1. kube-state-metrics - 1. node-exporter - 1. 
blackbox-exporter - 1. Identity aware proxy (IAP) for seed monitoring components + 1. Prometheus + 1. alertmanager + 1. Grafana + 1. kube-state-metrics + 1. node-exporter + 1. blackbox-exporter + 1. Identity aware proxy (IAP) for seed monitoring components 1. Logging components - 1. Promtail - 1. Loki + 1. Promtail + 1. Loki 1. S3-like object storage, like Minio 1. User-cluster MLA components - 1. Minio and Minio Lifecycle Manager - 1. Grafana - 1. Consul - 1. Cortex - 1. Loki - 1. Alertmanager Proxy - 1. IAP for user-mla - 1. secrets - Grafana and Minio secrets + 1. Minio and Minio Lifecycle Manager + 1. Grafana + 1. Consul + 1. Cortex + 1. Loki + 1. Alertmanager Proxy + 1. IAP for user-mla + 1. secrets - Grafana and Minio secrets 1. Seed Settings - Kubermatic configuration, Seed objects, Preset objects and such misc objects needed for Seed configuration 1. Seed Extras - This is a generic ArgoCD app to deploy arbitrary resources not covered by above things and as per needs of KKP Admin. @@ -91,6 +93,7 @@ We will install ArgoCD on both the clusters and we will install following compon > You can find code for this tutorial with sample values in [this git repository](https://github.com/kubermatic-labs/kkp-using-argocd). For ease of installation, a `Makefile` has been provided to just make commands easier to read. Internally, it just depends on Helm, kubectl and kubermatic-installer binaries. But you will need to look at `make` target definitions in `Makefile` to adjust DNS names. While for the demo, provided files would work, you would need to look through each file under `dev` folder and customize the values as per your need. ### Setup two Kubernetes Clusters + > This step install two Kubernetes clusters using KubeOne in AWS. You can skip this step if you already have access to two Kubernetes clusters. Use KubeOne to create 2 clusters in DEV env - master-seed combo (c1) and regular seed (c2). The steps below are generic to any KubeOne installation. a) We create basic VMs in AWS using Terraform and then b) Use KubeOne to bootstrap the control plane on these VMs and then rollout worker node machines. @@ -129,7 +132,8 @@ make k1-apply-seed This same folder structure can be further expanded to add KubeOne installations for additional environments like staging and prod. -### Note about URLs: +### Note about URLs + The [demo codebase](https://github.com/kubermatic-labs/kkp-using-argocd) assumes `argodemo.lab.kubermatic.io` as base URL for KKP. The KKP Dashboard is available at this URL. This also means that ArgoCD for master-seed, all tools like Prometheus, Grafana, etc are accessible at `*.argodemo.lab.kubermatic.io` The seed need its own DNS prefix which is configured as `self.seed`. This prefix needs to be configured in Route53 or similar DNS provider in your setup. @@ -138,20 +142,21 @@ Similarly, the demo creates a 2nd seed named `india-seed`. Thus, 2nd seed's Argo These names would come handy to understand the references below to them and customize these values as per your setup. ### Installation of KKP Master-seed combo + 1. Install ArgoCD and all the ArgoCD Apps - ```shell + ```shell cd make deploy-argo-dev-master deploy-argo-apps-dev-master # get ArgoCD admin password via below command kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d - ``` + ``` 1. Create a git tag with right label. The `make` target creates a git tag with a pre-configured name: `dev-kkp-` and pushes it to your git repository. 
This way, when you want to upgrade KKP version, you just need to update the KKP version at the top of Makefile and run this make target again. - ```shell + ```shell make push-git-tag-dev - ``` + ``` 1. ArgoCD syncs nginx ingress and cert-manager automatically 1. Manually update the DNS records so that ArgoCD is accessible. (In the demo, this step is automated via external-dns app) - ```shell + ```shell # Apply the DNS CNAME record below manually in AWS Route53: # argodemo.lab.kubermatic.io # *.argodemo.lab.kubermatic.io @@ -159,76 +164,85 @@ These names would come handy to understand the references below to them and cust # alertmanager-user.self.seed.argodemo.lab.kubermatic.io # You can get load balancer details from `k get svc -n nginx-ingress-controller nginx-ingress-controller` # After DNS setup, you can access ArgoCD at https://argocd.argodemo.lab.kubermatic.io - ``` + ``` 1. Install KKP EE without Helm charts. If you would want a complete ArgoCD setup with separate seeds, we will need Enterprise Edition of KKP. You can run the demo with master-seed combo. For this, community edition of KKP is sufficient. - ```shell + ```shell make install-kkp-dev - ``` + ``` 1. Add Seed CR for seed called `self` - ```shell + + ```shell make create-long-lived-master-seed-kubeconfig # commit changes to git and push latest changes make push-git-tag-dev - ``` + ``` 1. Wait for all apps to sync in ArgoCD (depending on setup - you can choose to sync all apps manually. In the demo, all apps are configured to sync automatically.) 1. Add Seed DNS record AFTER seed has been added (needed for usercluster creation). Seed is added as part of ArgoCD apps reconciliation above (In the demo, this step is automated via external-dns app) - ```shell + + ```shell # Apply DNS record manually in AWS Route53 # *.self.seed.argodemo.lab.kubermatic.io # Loadbalancer details from `k get svc -n kubermatic nodeport-proxy` - ``` -1. Access KKP dashboard at https://argodemo.lab.kubermatic.io + ``` +1. Access KKP dashboard at 1. Now you can create user-clusters on this master-seed cluster 1. (only for staging Let's Encrypt) We need to provide the staging Let's Encrypt cert so that monitoring IAP components can work. For this, one needs to save the certificate issuer for `https://argodemo.lab.kubermatic.io/dex/` from browser / openssl and insert the certificate in `dev/common/custom-ca-bundle.yaml` for the secret `letsencrypt-staging-ca-cert` under key `ca.crt` in base64 encoded format. After saving the file, commit the change to git and re-apply the tag via `make push-git-tag-dev` and sync the ArgoCD App. ### Installation of dedicated KKP seed + > **Note:** You can follow these steps only if you have a KKP EE license. With KKP CE licence, you can only work with one seed (which is master-seed combo above) We follow similar procedure as the master-seed combo but with slightly different commands. We execute most of the commands below, unless noted otherwise, in a 2nd shell where we have exported kubeconfig of dev-seed cluster above. 1. Install ArgoCD and all the ArgoCD Apps - ```shell + ```shell cd make deploy-argo-dev-seed deploy-argo-apps-dev-seed # get ArgoCD admin password via below command kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d - ``` + ``` 1. 
Add Seed nginx-ingress DNS record (In the demo, this step is automated via external-dns app) - ```shell + + ```shell # Apply below DNS CNAME record manually in AWS Route53 # *.india.argodemo.lab.kubermatic.io # grafana-user.india.seed.argodemo.lab.kubermatic.io # alertmanager-user.india.seed.argodemo.lab.kubermatic.io # You can get load balancer details from `k get svc -n nginx-ingress-controller nginx-ingress-controller` # After DNS setup, you can access the seed ArgoCD at https://argocd.india.argodemo.lab.kubermatic.io - ``` + ``` + 1. Prepare kubeconfig with cluster-admin privileges so that it can be added as secret and then this cluster can be added as Seed in master cluster configuration - ```shell + + ```shell make create-long-lived-seed-kubeconfig # commit changes to git and push latest changes in make push-git-tag-dev - ``` + ``` 1. Sync all apps in ArgoCD by accessing ArgoCD UI and syncing apps manually 1. Add Seed nodeport proxy DNS record - ```shell + + ```shell # Apply DNS record manually in AWS Route53 # *.india.seed.argodemo.lab.kubermatic.io # Loadbalancer details from `k get svc -n kubermatic nodeport-proxy` - ``` + ``` + 1. Now we can create user-clusters on this dedicated seed cluster as well. > NOTE: If you receive timeout errors, you should restart node-local-dns daemonset and/or coredns / cluster-autoscaler deployment to resolve these errors. ----- +--- ## Verification that this entire setup works + 1. Clusters creation on both the seeds (**Note:** If your VPC does not have a NAT Gateway, then ensure that you selected public IP for worker nodes during cluster creation wizard) 1. Access All Monitoring, Logging, Alerting links - available in left nav on any project within KKP. 1. Check Minio and Velero setup 1. Check User-MLA Grafana and see you can access user-cluster metrics and logs. You must remember to enable user-cluster monitoring and logging during creation of user-cluster. 1. KKP upgrade scenario - 1. Change the KKP version in Makefile - 1. Rollout KKP installer target again - 1. Create new git tag and push this new tag - 1. rollout argo-apps again and sync all apps on both seeds. + 1. Change the KKP version in Makefile + 1. Rollout KKP installer target again + 1. Create new git tag and push this new tag + 1. rollout argo-apps again and sync all apps on both seeds. diff --git a/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/_index.en.md index bff6357d2..8048f750f 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/_index.en.md @@ -7,4 +7,4 @@ weight = 7 This section details how various autoscalers have been integrated in KKP. 
* Cluster Autoscaler (as Addon/Application) [integration](./cluster-autoscaler/) for user clusters -* Vertical Pod Autoscaler [integration](./vertical-pod-autoscaler/) as Feature for user-cluster control plane components \ No newline at end of file +* Vertical Pod Autoscaler [integration](./vertical-pod-autoscaler/) as Feature for user-cluster control plane components diff --git a/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/cluster-autoscaler/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/cluster-autoscaler/_index.en.md index 1aed162ed..26065470b 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/cluster-autoscaler/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/cluster-autoscaler/_index.en.md @@ -14,8 +14,8 @@ Kubernetes Cluster Autoscaler is a tool that automatically adjusts the size of t The Kubernetes Autoscaler in the KKP User cluster automatically scaled up/down when one of the following conditions is satisfied: -* Some pods failed to run in the cluster due to insufficient resources. -* There are nodes in the cluster that have been underutilised for an extended period (10 minutes by default) and can place their Pods on other existing nodes. +- Some pods failed to run in the cluster due to insufficient resources. +- There are nodes in the cluster that have been underutilised for an extended period (10 minutes by default) and can place their Pods on other existing nodes. ## Installing Kubernetes Autoscaler on User Cluster @@ -31,18 +31,19 @@ It is possible to migrate from cluster autoscaler addon to app. For that it is r ### Installing kubernetes autoscaler as an addon [DEPRECATED] -**Step 1** +#### Step 1 Create a KKP User cluster by selecting your project on the dashboard and click on "Create Cluster". More details can be found on the official [documentation]({{< ref "../../project-and-cluster-management/" >}}) page. -**Step 2** +#### Step 2 When the User cluster is ready, check the pods in the `kube-system` namespace to know if any autoscaler is running. ![KKP Dashboard](../images/kkp-autoscaler-dashboard.png?classes=shadow,border "KKP Dashboard") ```bash -$ kubectl get pods -n kube-system +kubectl get pods -n kube-system + NAME READY STATUS RESTARTS AGE canal-gq9gc 2/2 Running 0 21m canal-tnms8 2/2 Running 0 21m @@ -55,49 +56,41 @@ node-local-dns-4p8jr 1/1 Running 0 21m As shown above, the cluster autoscaler is not part of the running Kubernetes components within the namespace. -**Step 3** +#### Step 3 Add the Autoscaler to the User cluster under the addon section on the dashboard by clicking on the Addons and then `Install Addon.` ![Add Addon](../images/add-autoscaler-addon.png?classes=shadow,border "Add Addon") - Select Cluster Autoscaler: - ![Select Autoscaler](../images/select-autoscaler.png?classes=shadow,border "Select Autoscaler") - Select install: - ![Select Install](../images/install-autoscaler.png?classes=shadow,border "Select Install") - - ![Installation Confirmation](../images/autoscaler-confirmation.png?classes=shadow,border "Installation Confirmation") - -**Step 4** +#### Step 4 Go over to the cluster and check the pods in the `kube-system` namespace using the `kubectl` command. 
```bash -$ kubectl get pods -n kube-system -NAME READY STATUS RESTARTS AGE -canal-gq9gc 2/2 Running 0 32m -canal-tnms8 2/2 Running 0 33m -cluster-autoscaler-58c6c755bb-9g6df 1/1 Running 0 39s -coredns-666448b887-s8wv8 1/1 Running 0 36m -coredns-666448b887-vldzz 1/1 Running 0 36m +kubectl get pods -n kube-system + +NAME READY STATUS RESTARTS AGE +canal-gq9gc 2/2 Running 0 32m +canal-tnms8 2/2 Running 0 33m +cluster-autoscaler-58c6c755bb-9g6df 1/1 Running 0 39s +coredns-666448b887-s8wv8 1/1 Running 0 36m +coredns-666448b887-vldzz 1/1 Running 0 36m ``` As shown above, the cluster autoscaler has been provisioned and is running. - ## Annotating MachineDeployments for Autoscaling - The Cluster Autoscaler only considers MachineDeployments with valid annotations. The annotations are used to control the minimum and the maximum number of replicas per MachineDeployment. You don't need to apply those annotations to all MachineDeployment objects, but only on MachineDeployments that Cluster Autoscaler should consider. Annotations can be set either using the KKP Dashboard or manually with kubectl. ### KKP Dashboard @@ -120,26 +113,26 @@ cluster.k8s.io/cluster-api-autoscaler-node-group-max-size - the maximum number o You can apply the annotations to MachineDeployments once the User cluster is provisioned and the MachineDeployments are created and running by following the steps below. -**Step 1** +#### Step 1 Run the following kubectl command to check the available MachineDeployments: ```bash -$ kubectl get machinedeployments -n kube-system +kubectl get machinedeployments -n kube-system NAME AGE DELETED REPLICAS AVAILABLEREPLICAS PROVIDER OS VERSION test-cluster-worker-v5drmq 3h56m 2 2 aws ubuntu 1.19.9 test-cluster-worker-pndqd 3h59m 1 1 aws ubuntu 1.19.9 ``` -**Step 2** +#### Step 2 -The annotation command will be used with one of the MachineDeployments above to annotate the desired MachineDeployments. In this case, the `test-cluster-worker-v5drmq` will be annotated, and the minimum and maximum will be set. +The annotation command will be used with one of the MachineDeployments above to annotate the desired MachineDeployments. In this case, the `test-cluster-worker-v5drmq` will be annotated, and the minimum and maximum will be set.
### Minimum Annotation ```bash -$ kubectl annotate machinedeployment -n kube-system test-cluster-worker-v5drmq cluster.k8s.io/cluster-api-autoscaler-node-group-min-size="1" +kubectl annotate machinedeployment -n kube-system test-cluster-worker-v5drmq cluster.k8s.io/cluster-api-autoscaler-node-group-min-size="1" machinedeployment.cluster.k8s.io/test-cluster-worker-v5drmq annotated ``` @@ -147,18 +140,18 @@ machinedeployment.cluster.k8s.io/test-cluster-worker-v5drmq annotated ### Maximum Annotation ```bash -$ kubectl annotate machinedeployment -n kube-system test-cluster-worker-v5drmq cluster.k8s.io/cluster-api-autoscaler-node-group-max-size="5" +kubectl annotate machinedeployment -n kube-system test-cluster-worker-v5drmq cluster.k8s.io/cluster-api-autoscaler-node-group-max-size="5" machinedeployment.cluster.k8s.io/test-cluster-worker-v5drmq annotated ``` - -**Step 3** +#### Step 3 Check the MachineDeployment description: ```bash -$ kubectl describe machinedeployments -n kube-system test-cluster-worker-v5drmq +kubectl describe machinedeployments -n kube-system test-cluster-worker-v5drmq + Name: test-cluster-worker-v5drmq Namespace: kube-system Labels: @@ -189,25 +182,21 @@ To edit KKP Autoscaler, click on the three dots in front of the Cluster Autoscal ![Edit Autoscaler](../images/edit-autoscaler.png?classes=shadow,border "Edit Autoscaler") - ## Delete KKP Autoscaler You can delete autoscaler from where you edit it above and select delete. ![Delete Autoscaler](../images/delete-autoscaler.png?classes=shadow,border "Delete Autoscaler") - - Once it has been deleted, you can check the cluster to ensure that the cluster autoscaler has been deleted using the command `kubectl get pods -n kube-system`. - +Once it has been deleted, you can check the cluster to ensure that the cluster autoscaler has been deleted using the command `kubectl get pods -n kube-system`. ## Customize KKP Autoscaler You can customize the cluster autoscaler addon in order to override the cluster autoscaler deployment definition to set or pass the required flag(s) by following the instructions provided [in the Addons document]({{< relref "../../../architecture/concept/kkp-concepts/addons/#custom-addons" >}}). -* [My cluster is below minimum / above maximum number of nodes, but CA did not fix that! Why?](https://github.com/kubernetes/autoscaler/blob/aff50d773e42f95baaae300f27e3b2e9cba1ea1b/cluster-autoscaler/FAQ.md#my-cluster-is-below-minimum--above-maximum-number-of-nodes-but-ca-did-not-fix-that-why) - -* [I'm running cluster with nodes in multiple zones for HA purposes. Is that supported by Cluster Autoscaler?](https://github.com/kubernetes/autoscaler/blob/aff50d773e42f95baaae300f27e3b2e9cba1ea1b/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler) +- [My cluster is below minimum / above maximum number of nodes, but CA did not fix that! Why?](https://github.com/kubernetes/autoscaler/blob/aff50d773e42f95baaae300f27e3b2e9cba1ea1b/cluster-autoscaler/FAQ.md#my-cluster-is-below-minimum--above-maximum-number-of-nodes-but-ca-did-not-fix-that-why) +- [I'm running cluster with nodes in multiple zones for HA purposes. Is that supported by Cluster Autoscaler?](https://github.com/kubernetes/autoscaler/blob/aff50d773e42f95baaae300f27e3b2e9cba1ea1b/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler) ## Summary @@ -215,5 +204,5 @@ That is it! 
You have successfully deployed a Kubernetes Autoscaler on a KKP Clus ## Learn More -* Read more on [Kubernetes autoscaler here](https://github.com/kubernetes/autoscaler/blob/main/cluster-autoscaler/FAQ.md#what-is-cluster-autoscaler). -* You can easily provision a Kubernetes User Cluster using [KKP here]({{< relref "../../../tutorials-howtos/project-and-cluster-management/" >}}) +- Read more on [Kubernetes autoscaler here](https://github.com/kubernetes/autoscaler/blob/main/cluster-autoscaler/FAQ.md#what-is-cluster-autoscaler). +- You can easily provision a Kubernetes User Cluster using [KKP here]({{< relref "../../../tutorials-howtos/project-and-cluster-management/" >}}) diff --git a/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/vertical-pod-autoscaler/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/vertical-pod-autoscaler/_index.en.md index f6d868e6b..84191846c 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/vertical-pod-autoscaler/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-autoscaler/vertical-pod-autoscaler/_index.en.md @@ -7,14 +7,17 @@ weight = 9 This section explains how the Kubernetes Vertical Pod Autoscaler helps in scaling the control plane components of a user cluster as the load on that user cluster rises. ## What is a Vertical Pod Autoscaler in Kubernetes? + [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) (VPA) frees users from the necessity of setting up-to-date resource limits and requests for the containers in their pods. When configured, it will set the requests automatically based on usage and thus allow proper scheduling onto nodes so that the appropriate resource amount is available for each pod. It will also maintain ratios between limits and requests that were specified in the initial container configuration. Unlike HPA, VPA is not bundled with Kubernetes itself; it must be installed in the cluster separately. ## KKP Components controllable via VPA + KKP natively integrates VPA resource creation, reconciliation and management for all user-cluster control plane components. This allows these components to have optimal resource allocations that can grow with the cluster's needs, which reduces the administration burden on KKP administrators. Components controlled by VPA are: + 1. apiserver 1. controller-manager 1. etcd @@ -29,6 +32,7 @@ All these components have default resources allocated by KKP. You can either use > Note: If you enable VPA and add a `componentsOverride` block as well for a given cluster to specify resources, `componentsOverride` takes precedence. ## How to enable VPA in KKP + To enable VPA controlled control-plane components for user-clusters, we just need to turn on a featureFlag in the Kubermatic Configuration. ```yaml @@ -40,4 +44,5 @@ spec: This installs the necessary VPA components in the `kube-system` namespace of each seed. It also creates VPA custom resources for each of the control-plane components as noted above. ## Customizing VPA installation + You can customize various aspects of VPA deployments themselves (i.e.
admissionController, recommender and updater) via [KKP configuration](../../../tutorials-howtos/kkp-configuration/) diff --git a/content/kubermatic/main/tutorials-howtos/kkp-configuration/custom-certificates/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-configuration/custom-certificates/_index.en.md index 682c186d5..55ba5aac4 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-configuration/custom-certificates/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-configuration/custom-certificates/_index.en.md @@ -18,7 +18,7 @@ Changes to the CA bundle are automatically reconciled across these locations. If is invalid, no further reconciliation happens, so if the master cluster's CA bundle breaks, seed clusters are not affected. -Do note that the CA bundle configured in KKP is usually _the only_ source of CA certificates +Do note that the CA bundle configured in KKP is usually *the only* source of CA certificates for all of these components, meaning that no certificates are mounted from any of the Seed cluster host systems. @@ -82,8 +82,8 @@ If issuing certificates inside the cluster is not possible, static certificates `Secret` resources. The cluster admin is responsible for renewing and updating them as needed. A TLS secret can be created via `kubectl` when certificate and private key are available: -```sh -$ kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key +```bash +kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key ``` Going forward, it is assumed that proper certificates have already been created and now need to be configured into KKP. @@ -186,7 +186,6 @@ spec: name: ca-bundle ``` - ### KKP The KKP Operator manages a single `Ingress` for the KKP API/dashboard. This by default includes setting up @@ -221,7 +220,6 @@ If the static certificate is signed by a private CA, it is necessary to add that used by KKP. Otherwise, components will not be able to properly communicate with each other. {{% /notice %}} - #### User Cluster KKP automatically synchronizes the relevant CA bundle into each user cluster. The `ConfigMap` diff --git a/content/kubermatic/main/tutorials-howtos/kkp-configuration/dynamic-kubelet-config/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-configuration/dynamic-kubelet-config/_index.en.md index 511c5fee2..251adbc7d 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-configuration/dynamic-kubelet-config/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-configuration/dynamic-kubelet-config/_index.en.md @@ -12,9 +12,10 @@ Dynamic kubelet configuration is a deprecated feature in Kubernetes. It will no Dynamic kubelet configuration allows for live reconfiguration of some or all nodes' kubelet options. ### See Also -* https://kubernetes.io/blog/2018/07/11/dynamic-kubelet-configuration/ -* https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/ -* https://github.com/kubernetes/enhancements/issues/281 + +* <https://kubernetes.io/blog/2018/07/11/dynamic-kubelet-configuration/> +* <https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/> +* <https://github.com/kubernetes/enhancements/issues/281> ### Enabling diff --git a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/_index.en.md index b5a024ddf..b3e16cfbe 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/_index.en.md @@ -10,6 +10,7 @@ This section describes how to import and manage KubeOne clusters in KKP. We can import/connect an existing KubeOne cluster.
Imported Cluster can be viewed and edited. Currently Supported Providers: + - [AWS]({{< ref "./aws" >}}) - [Google Cloud Provider]({{< ref "./gcp" >}}) - [Azure]({{< ref "./azure" >}}) @@ -18,17 +19,17 @@ We can import/connect an existing KubeOne cluster. Imported Cluster can be viewe - [OpenStack]({{< ref "./openstack" >}}) - [vSphere]({{< ref "./vsphere" >}}) - ## Prerequisites The following requirements must be met to import a KubeOne cluster: - - The KubeOne cluster must already exist before we begin the import/connect process. - - KubeOne configuration manifest: YAML manifest file that describes the KubeOne cluster configuration. - If you don't have the manifest of your cluster, it can be generated by running `kubeone config dump -m kubeone.yaml -t tf.json` from your KubeOne terraform directory. - - Private SSH Key used to create the KubeOne cluster: KubeOne connects to instances over SSH to perform any management operation. - - Provider Specific Credentials used to create the cluster. - > For more information on the KubeOne configuration for different environments, checkout the [Creating the Kubernetes Cluster using KubeOne]({{< relref "../../../../kubeone/main/tutorials/creating-clusters/" >}}) documentation. +- The KubeOne cluster must already exist before we begin the import/connect process. +- KubeOne configuration manifest: YAML manifest file that describes the KubeOne cluster configuration. + If you don't have the manifest of your cluster, it can be generated by running `kubeone config dump -m kubeone.yaml -t tf.json` from your KubeOne terraform directory. +- Private SSH Key used to create the KubeOne cluster: KubeOne connects to instances over SSH to perform any management operation. +- Provider Specific Credentials used to create the cluster. + +> For more information on the KubeOne configuration for different environments, checkout the [Creating the Kubernetes Cluster using KubeOne]({{< relref "../../../../kubeone/main/tutorials/creating-clusters/" >}}) documentation. ## Import KubeOne Cluster @@ -102,6 +103,7 @@ We can `Disconnect` a KubeOne cluster by clicking on the disconnect icon next to ![Disconnect Dialog](@/images/tutorials/kubeone-clusters/disconnect-cluster-dialog.png "Disconnect Dialog") ## Troubleshoot + To Troubleshoot a failing imported cluster we can `Pause` cluster by editing the external cluster CR. ```bash diff --git a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/digitalocean/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/digitalocean/_index.en.md index ffb8db1b7..fdcb72b62 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/digitalocean/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/digitalocean/_index.en.md @@ -29,7 +29,6 @@ You can add an existing DigitalOcean KubeOne cluster and then manage it using KK - Manually enter the credentials `Token` used to create the KubeOne cluster you are importing. - ![DigitalOcean credentials](@/images/tutorials/kubeone-clusters/digitalocean-credentials-step.png "DigitalOcean credentials") - Review provided settings and click `Import KubeOne Cluster`. 
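The `Pause` operation mentioned in the Troubleshoot section above comes down to editing the imported cluster's external cluster resource in the KKP master cluster. A minimal sketch, assuming the resource is called `externalcluster` and exposes a boolean pause flag (the exact field name is an assumption and may differ between KKP versions, so verify it against your CRD first):

```bash
# List the imported (external) clusters known to KKP.
kubectl get externalclusters

# Open the failing cluster's CR and set the pause flag (field name assumed)
# to stop reconciliation while you investigate; unset it again afterwards.
kubectl edit externalcluster <external-cluster-id>
```

Pausing only affects KKP's own reconciliation; it does not change the underlying KubeOne cluster.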
diff --git a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/hetzner/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/hetzner/_index.en.md index 6d6ccf3d4..9a62e7609 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/hetzner/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/hetzner/_index.en.md @@ -29,7 +29,6 @@ You can add an existing Hetzner KubeOne cluster and then manage it using KKP. - Manually enter the credentials `Token` used to create the KubeOne cluster you are importing. - ![Hetzner credentials](@/images/tutorials/kubeone-clusters/hetzner-credentials-step.png "Hetzner credentials") - Review provided settings and click `Import KubeOne Cluster`. diff --git a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/openstack/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/openstack/_index.en.md index 4b20688d2..c08cb6102 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/openstack/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/openstack/_index.en.md @@ -26,7 +26,6 @@ You can add an existing OpenStack KubeOne cluster and then manage it using KKP. - Enter the credentials `AuthURL`, `Username`, `Password`, `Domain`, `Project Name`, `Project ID` and `Region` used to create the KubeOne cluster you are importing. - ![OpenStack credentials](@/images/tutorials/kubeone-clusters/openstack-credentials-step.png "OpenStack credentials") - Review provided settings and click `Import KubeOne Cluster`. diff --git a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/vsphere/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/vsphere/_index.en.md index fa4e0fe30..97c9f781a 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/vsphere/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-kubeone-integration/vsphere/_index.en.md @@ -26,7 +26,6 @@ You can add an existing vSphere KubeOne cluster and then manage it using KKP. - Enter the credentials `Username`, `Password`, and `ServerURL` used to create the KubeOne cluster you are importing. - ![vSphere credentials](@/images/tutorials/kubeone-clusters/vsphere-credentials-step.png "vSphere credentials") - Review provided settings and click `Import KubeOne Cluster`. diff --git a/content/kubermatic/main/tutorials-howtos/kkp-os-support/coreos-eos/_index.en.md b/content/kubermatic/main/tutorials-howtos/kkp-os-support/coreos-eos/_index.en.md index 3733533c6..098c34fc0 100644 --- a/content/kubermatic/main/tutorials-howtos/kkp-os-support/coreos-eos/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kkp-os-support/coreos-eos/_index.en.md @@ -13,6 +13,7 @@ Please read this blog post to learn how to migrate your clusters to be able to u You can check the operating system from the KKP dashboard (machine deployments list) or over kubectl commands, for instance: + * `kubectl get machines -nkube-system` * `kubectl get nodes -owide`. @@ -32,6 +33,7 @@ With the new deployment, you can then migrate the containers to the newly create Additionally, it is a good idea to consider a pod disruption budget for each application you want to transfer to other nodes. 
Example PDB resource: + ```yaml apiVersion: policy/v1beta1 kind: PodDisruptionBudget @@ -43,6 +45,7 @@ spec: matchLabels: app: nginx ``` + Find more information [here](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/). ## Do I Have to Rely on the Kubermatic Eviction Mechanism? diff --git a/content/kubermatic/main/tutorials-howtos/kyverno-policies/_index.en.md b/content/kubermatic/main/tutorials-howtos/kyverno-policies/_index.en.md index ea6fefe01..484d4457c 100644 --- a/content/kubermatic/main/tutorials-howtos/kyverno-policies/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/kyverno-policies/_index.en.md @@ -25,7 +25,6 @@ You can also enable or disable it after creation from the **Edit Cluster** dialo ![edit cluster](images/enable-kyverno-edit-cluster.png?classes=shadow,border "Edit Cluster") - ## Policy Templates Admin View Admins can manage global policy templates directly from the **Kyverno Policies** page in the **Admin Panel.** @@ -61,4 +60,3 @@ This page displays a list of all applied policies. You can also create a policy ![add policy binding](images/add-policy-binding.png?classes=shadow,border "Add Policy Binding") You can choose a template from the list of all available templates. Note that templates already applied will not be available. - diff --git a/content/kubermatic/main/tutorials-howtos/manage-workers-node/via-ui/_index.en.md b/content/kubermatic/main/tutorials-howtos/manage-workers-node/via-ui/_index.en.md index f9cf3cd7e..65283f49d 100644 --- a/content/kubermatic/main/tutorials-howtos/manage-workers-node/via-ui/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/manage-workers-node/via-ui/_index.en.md @@ -5,7 +5,6 @@ date = 2021-04-20T12:16:38+02:00 weight = 15 +++ - ## Find the Edit Setting To add or delete a worker node you can easily edit the machine deployment in your cluster. Navigate to the cluster overview, scroll down and hover over `Machine Deployments` and click on the edit icon next to the deployment you want to edit. diff --git a/content/kubermatic/main/tutorials-howtos/metering/_index.en.md b/content/kubermatic/main/tutorials-howtos/metering/_index.en.md index 41429aa91..974156b9b 100644 --- a/content/kubermatic/main/tutorials-howtos/metering/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/metering/_index.en.md @@ -28,11 +28,11 @@ work properly the s3 endpoint needs to be available from the browser. ### Prerequisites -* S3 bucket +- S3 bucket - Any S3-compatible endpoint can be used - The bucket is required to store report csv files - Should be available via browser -* Administrator access to dashboard +- Administrator access to dashboard - Administrator access can be gained by - asking other administrators to follow the instructions for [Adding administrators][adding-administrators] via the dashboard - or by using `kubectl` to give a user admin access. Please refer to the [Admin Panel][admin-panel] @@ -75,7 +75,6 @@ according to your wishes. - When choosing a volume size, please take into consideration that old usage data files will not be deleted automatically - In the end it is possible to create different report schedules. Click on **Create Schedule**, to open the Schedule configuration dialog. 
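One of the metering prerequisites above is administrator access, which the text notes can also be granted with `kubectl`. A minimal sketch, assuming the KKP `User` object in the master cluster carries an admin flag under `spec.admin` (resource and field names follow the Admin Panel documentation referenced above and may differ between versions):

```bash
# Find the KKP User object that corresponds to the account in question.
kubectl get users

# Flip the (assumed) admin flag on that User object to grant admin access.
kubectl patch user <kkp-user-name> --type=merge -p '{"spec":{"admin":true}}'
```

The dashboard flow described above remains the simpler route when another administrator is available.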
diff --git a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/_index.en.md b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/_index.en.md index ceb4bad0a..1d7aa4f4f 100644 --- a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/_index.en.md @@ -9,10 +9,10 @@ This chapter describes the customization of the KKP [Master / Seed Monitoring, L When it comes to monitoring, no approach fits all use cases. It's expected that you will want to adjust things to your needs and this page describes the various places where customizations can be applied. In broad terms, there are four main areas that are discussed: -* customer-cluster Prometheus -* seed-cluster Prometheus -* alertmanager rules -* Grafana dashboards +- customer-cluster Prometheus +- seed-cluster Prometheus +- alertmanager rules +- Grafana dashboards You will want to familiarize yourself with the [Installation of the Master / Seed MLA Stack]({{< relref "../installation/" >}}) before reading any further. @@ -148,12 +148,14 @@ prometheus: Managing the `ruleFiles` is also the way to disable the predefined rules by just removing the applicable item from the list. You can also keep the list completely empty to disable any and all alerts. ### Long-term metrics storage + By default, the seed prometheus is configured to store 1 days worth of metrics. It can be customized via overriding `prometheus.tsdb.retentionTime` field in `values.yaml` used for chart installation. If you would like to store the metrics for longer term, typically other solutions like Thanos are used. Thanos integration is a more involved process. Please read more about [thanos integration]({{< relref "./thanos.md" >}}). ## Alertmanager + Alertmanager configuration can be tweaked via `values.yaml` like so: ```yaml @@ -175,6 +177,7 @@ alertmanager: - channel: '#alerting' send_resolved: true ``` + Please review the [Alertmanager Configuration Guide](https://prometheus.io/docs/alerting/latest/configuration/) for detailed configuration syntax. You can review the [Alerting Runbook]({{< relref "../../../../cheat-sheets/alerting-runbook" >}}) for a reference of alerts that Kubermatic Kubernetes Platform (KKP) monitoring setup can fire, alongside a short description and steps to debug. @@ -183,9 +186,9 @@ You can review the [Alerting Runbook]({{< relref "../../../../cheat-sheets/alert Customizing Grafana entails three different aspects: -* Datasources (like Prometheus, InfluxDB, ...) -* Dashboard providers (telling Grafana where to load dashboards from) -* Dashboards themselves +- Datasources (like Prometheus, InfluxDB, ...) +- Dashboard providers (telling Grafana where to load dashboards from) +- Dashboards themselves In all cases, you have two general approaches: Either take the Grafana Helm chart and place additional files into the existing directory structure or leave the Helm chart as-is and use the `values.yaml` and your own ConfigMaps/Secrets to hold your customizations. This is very similar to how customizing the seed-level Prometheus works, so if you read that chapter, you will feel right at home. 
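As a concrete illustration of the ConfigMap approach for Grafana customization, the sketch below uses Grafana's standard datasource provisioning format. The ConfigMap name, the `monitoring` namespace and the way it is wired into the Grafana Helm chart are assumptions; only the inner provisioning file format is fixed by Grafana itself:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-extra-datasources   # assumed name; reference it from the chart's values.yaml
  namespace: monitoring
data:
  influxdb.yaml: |
    # standard Grafana datasource provisioning file
    apiVersion: 1
    datasources:
      - name: InfluxDB
        type: influxdb
        access: proxy
        url: http://influxdb.monitoring.svc.cluster.local:8086
```

The same pattern applies to dashboard providers and dashboards: keep the JSON or YAML payload in your own ConfigMaps and point the chart at them via `values.yaml`.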
diff --git a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/thanos.md b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/thanos.md index d8768eed4..aa2025957 100644 --- a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/thanos.md +++ b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/thanos.md @@ -8,12 +8,15 @@ weight = 20 This page explains how we can integrate [Thanos](https://thanos.io/) long-term storage of metrics with the KKP seed Prometheus. ## Pre-requisites + 1. Helm is installed. 1. KKP v2.22.4+ is installed in the cluster. 1. KKP Prometheus chart has been deployed in each seed where you want to store long-term metrics. ## Integration steps + This page outlines: + 1. Installation of Thanos components in your Kubernetes cluster via Helm chart 1. Customization of KKP Prometheus chart to augment Prometheus pod with Thanos sidecar 1. Customization of KKP Prometheus chart values to monitor and get alerts for Thanos components @@ -21,6 +24,7 @@ Below page outlines ## Install thanos chart You can install the Thanos Helm chart from the Bitnami chart repository. + ```shell HELM_EXPERIMENTAL_OCI=1 helm upgrade --install thanos \ --namespace monitoring --create-namespace\ @@ -30,6 +34,7 @@ HELM_EXPERIMENTAL_OCI=1 helm upgrade --install thanos \ ``` ### Basic Thanos Customization file + You can configure Thanos to store the metrics in any S3-compatible storage as well as many other popular cloud storage solutions. The YAML snippet below uses an Azure Blob Storage configuration. You can refer to all [supported object storage configurations](https://thanos.io/tip/thanos/storage.md/#supported-clients). @@ -58,7 +63,6 @@ storegateway: enabled: true ``` - ## Augment prometheus to use Thanos sidecar In order to receive metrics from Prometheus into Thanos, Thanos provides two mechanisms. @@ -142,12 +146,12 @@ prometheus: mountPath: /etc/thanos ``` - ## Add scraping and alerting rules to monitor thanos itself To monitor Thanos effectively, we must scrape the Thanos components and define some Prometheus alerting rules to get notified when Thanos is not working correctly. The sections below outline changes in the `prometheus` section of `values.yaml` to enable such scraping and alerting for the Thanos components. ### Scraping config + Add the `scraping` configuration below to scrape the Thanos sidecar as well as the various Thanos components deployed via the Helm chart.
```yaml @@ -183,6 +187,7 @@ prometheus: ``` ### Alerting Rules + Add Below configmap and then refer this configMap in KKP Prometheus chart's `values.yaml` customization ```yaml @@ -197,6 +202,7 @@ prometheus: ```` The configmap + ```yaml apiVersion: v1 kind: ConfigMap diff --git a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/health-assessment/_index.en.md b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/health-assessment/_index.en.md index 14ad63e25..cb0c2a9fe 100644 --- a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/health-assessment/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/health-assessment/_index.en.md @@ -11,6 +11,7 @@ All screenshots below were taken from live Grafana instance installed with Kuber ## Dashboard categories Dashboards list consists of categories as shown below, each containing Dashboards relevant to the specific area of KKP: + - **Go Applications** - Go metrics for applications running in the cluster. - **Kubermatic** - dashboards provide insight into KKP components (described [below](#monitoring-kubermatic-kubernetes-platform)). - **Kubernetes** - dashboards used for monitoring Kubernetes resources of the seed cluster (described [below](#monitoring-kubernetes)). diff --git a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/installation/_index.en.md b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/installation/_index.en.md index d2c006ed2..6818bc4b6 100644 --- a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/installation/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/master-seed/installation/_index.en.md @@ -11,14 +11,14 @@ This chapter describes how to setup the [KKP Master / Seed MLA (Monitoring, Logg The exact requirements for the stack depend highly on the expected cluster load; the following are the minimum viable resources: -* 4 GB RAM -* 2 CPU cores -* 200 GB disk storage +- 4 GB RAM +- 2 CPU cores +- 200 GB disk storage This guide assumes the following tools are available: -* Helm 3.x -* kubectl 1.16+ +- Helm 3.x +- kubectl 1.16+ ## Monitoring, Logging & Alerting Components @@ -43,6 +43,7 @@ Download the [release archive from our GitHub release page](https://github.com/k {{< tabs name="Download the installer" >}} {{% tab name="Linux" %}} + ```bash # For latest version: VERSION=$(curl -w '%{url_effective}' -I -L -s -S https://github.com/kubermatic/kubermatic/releases/latest -o /dev/null | sed -e 's|.*/v||') @@ -51,8 +52,10 @@ VERSION=$(curl -w '%{url_effective}' -I -L -s -S https://github.com/kubermatic/k wget https://github.com/kubermatic/kubermatic/releases/download/v${VERSION}/kubermatic-ce-v${VERSION}-linux-amd64.tar.gz tar -xzvf kubermatic-ce-v${VERSION}-linux-amd64.tar.gz ``` + {{% /tab %}} {{% tab name="MacOS" %}} + ```bash # Determine your macOS processor architecture type # Replace 'amd64' with 'arm64' if using an Apple Silicon (M1) Mac. 
@@ -64,6 +67,7 @@ VERSION=$(curl -w '%{url_effective}' -I -L -s -S https://github.com/kubermatic/k wget "https://github.com/kubermatic/kubermatic/releases/download/v${VERSION}/kubermatic-ce-v${VERSION}-darwin-${ARCH}.tar.gz" tar -xzvf "kubermatic-ce-v${VERSION}-darwin-${ARCH}.tar.gz" ``` + {{% /tab %}} {{< /tabs >}} @@ -71,16 +75,16 @@ tar -xzvf "kubermatic-ce-v${VERSION}-darwin-${ARCH}.tar.gz" As with KKP itself, it's recommended to use a single `values.yaml` to configure all Helm charts. There are a few important options you might want to override for your setup: -* `prometheus.host` is used for the external URL in Prometheus, e.g. `prometheus.kkp.example.com`. -* `alertmanager.host` is used for the external URL in Alertmanager, e.g. `alertmanager.kkp.example.com`. -* `prometheus.storageSize` (default: `100Gi`) controls the volume size for each Prometheus replica; this should be large enough to hold all data as per your retention time (see next option). Long-term storage for Prometheus blocks is provided by Thanos, an optional extension to the Prometheus chart. -* `prometheus.tsdb.retentionTime` (default: `15d`) controls how long metrics are stored in Prometheus before they are deleted. Larger retention times require more disk space. Long-term storage is accomplished by Thanos, so the retention time for Prometheus itself should not be set to extremely large values (like multiple months). -* `prometheus.ruleFiles` is a list of Prometheus alerting rule files to load. Depending on whether or not the target cluster is a master or seed, the `/etc/prometheus/rules/kubermatic-master-*.yaml` entry should be removed in order to not trigger bogus alerts. -* `prometheus.blackboxExporter.enabled` is used to enable integration between Prometheus and Blackbox Exporter, used for monitoring of API endpoints of user clusters created on the seed. `prometheus.blackboxExporter.url` should be adjusted accordingly (default value would be `blackbox-exporter:9115`) -* `grafana.user` and `grafana.password` should be set with custom values if no identity-aware proxy is configured. In this case, `grafana.provisioning.configuration.disable_login_form` should be set to `false` so that a manual login is possible. -* `loki.persistence.size` (default: `10Gi`) controls the volume size for the Loki pods. -* `promtail.scrapeConfigs` controls for which pods the logs are collected. The default configuration should be sufficient for most cases, but adjustment can be made. -* `promtail.tolerations` might need to be extended to deploy a Promtail pod on every node in the cluster. By default, master-node NoSchedule taints are ignored. +- `prometheus.host` is used for the external URL in Prometheus, e.g. `prometheus.kkp.example.com`. +- `alertmanager.host` is used for the external URL in Alertmanager, e.g. `alertmanager.kkp.example.com`. +- `prometheus.storageSize` (default: `100Gi`) controls the volume size for each Prometheus replica; this should be large enough to hold all data as per your retention time (see next option). Long-term storage for Prometheus blocks is provided by Thanos, an optional extension to the Prometheus chart. +- `prometheus.tsdb.retentionTime` (default: `15d`) controls how long metrics are stored in Prometheus before they are deleted. Larger retention times require more disk space. Long-term storage is accomplished by Thanos, so the retention time for Prometheus itself should not be set to extremely large values (like multiple months). 
+- `prometheus.ruleFiles` is a list of Prometheus alerting rule files to load. Depending on whether or not the target cluster is a master or seed, the `/etc/prometheus/rules/kubermatic-master-*.yaml` entry should be removed in order to not trigger bogus alerts. +- `prometheus.blackboxExporter.enabled` is used to enable integration between Prometheus and Blackbox Exporter, used for monitoring of API endpoints of user clusters created on the seed. `prometheus.blackboxExporter.url` should be adjusted accordingly (default value would be `blackbox-exporter:9115`) +- `grafana.user` and `grafana.password` should be set with custom values if no identity-aware proxy is configured. In this case, `grafana.provisioning.configuration.disable_login_form` should be set to `false` so that a manual login is possible. +- `loki.persistence.size` (default: `10Gi`) controls the volume size for the Loki pods. +- `promtail.scrapeConfigs` controls for which pods the logs are collected. The default configuration should be sufficient for most cases, but adjustment can be made. +- `promtail.tolerations` might need to be extended to deploy a Promtail pod on every node in the cluster. By default, master-node NoSchedule taints are ignored. An example `values.yaml` could look like this if all options mentioned above are customized: @@ -124,6 +128,7 @@ With this file prepared, we can now install all required charts: ``` Output will be similar to this: + ```bash INFO[0000] 🚀 Initializing installer… edition="Community Edition" version=X.Y INFO[0000] 🚦 Validating the provided configuration… diff --git a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/_index.en.md b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/_index.en.md index 439e2b5f9..928551dbf 100644 --- a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/_index.en.md @@ -36,16 +36,16 @@ For specific information about estimating the resource usage, please refer to [C Some key parameters to consider are: -* The number of active series -* Sampling rate -* The rate at which series are added and removed -* How compressible the time-series data are +- The number of active series +- Sampling rate +- The rate at which series are added and removed +- How compressible the time-series data are Other parameters which can become important if you have particularly high values: -* Number of different series under one metric name -* Number of labels per series -* Rate and complexity of queries +- Number of different series under one metric name +- Number of labels per series +- Rate and complexity of queries ### Installing MLA Stack in a Seed Cluster @@ -58,6 +58,7 @@ kubermatic-installer deploy usercluster-mla --config --helm-va ``` Additional options that can be used for the installation include: + ```bash --mla-force-secrets (UserCluster MLA) force reinstallation of mla-secrets Helm chart --mla-include-iap (UserCluster MLA) Include Identity-Aware Proxy installation @@ -183,7 +184,9 @@ There are several options in the KKP “Admin Panel” which are related to user - Seed name and the base domain under which KKP is running will be appended to it, e.g. for prefix `grafana` the final URL would be `https://grafana..`. 
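Putting the installer flags listed above together, a typical invocation could look like the following sketch; the configuration and values file names are placeholders, and `--mla-include-iap` is only needed if you want the Identity-Aware Proxy:

```bash
# Deploy the User Cluster MLA stack into the seed, reusing the KKP configuration
# and a dedicated Helm values file, and include the optional Identity-Aware Proxy.
kubermatic-installer deploy usercluster-mla \
  --config kubermatic.yaml \
  --helm-values mlavalues.yaml \
  --mla-include-iap
```

The other `--mla-*` flags shown above (for example `--mla-force-secrets`) can be appended in the same way when needed.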
### Addons Configuration + KKP provides several addons for user clusters, that can be helpful when the User Cluster Monitoring feature is enabled, namely: + - **node-exporter** addon: exposes hardware and OS metrics of worker nodes to Prometheus, - **kube-state-metrics** addon: exposes cluster-level metrics of Kubernetes API objects (like pods, deployments, etc.) to Prometheus. @@ -196,7 +199,9 @@ addons are part of the KKP default accessible addons, so they should be availabl administrator has changed it. ### Enabling alerts for MLA stack in a Seed + To enable alerts in seed cluster for user cluster MLA stack(cortex and loki) , update the `values.yaml` used for installation of [Master / Seed MLA stack]({{< relref "../../master-seed/installation/" >}}). Add the following line under `prometheus.ruleFiles` label: + ```yaml - /etc/prometheus/rules/usercluster-mla-*.yaml ``` @@ -229,6 +234,7 @@ For larger scales, you will may start with tweaking the following: - Cortex Ingester volume sizes (cortex values.yaml - `cortex.ingester.persistentVolume.size`) - default 10Gi - Loki Ingester replicas (loki values.yaml - `loki-distributed.ingester.replicas`) - default 3 - Loki Ingester Storage as follows: + ```yaml loki-distributed: ingester: @@ -292,11 +298,13 @@ cortex: By default, a MinIO instance will also be deployed as the S3 storage backend for MLA components. It is also possible to use an existing MinIO instance in your cluster or any other S3-compatible services. There are three Helm charts which are related to MinIO in MLA repository: + - [mla-secrets](https://github.com/kubermatic/kubermatic/tree/main/charts/mla/mla-secrets) is used to create and manage MinIO and Grafana credentials Secrets. - [minio](https://github.com/kubermatic/kubermatic/tree/main/charts/mla/minio) is used to deploy MinIO instance in Kubernetes cluster. - [minio-lifecycle-mgr](https://github.com/kubermatic/kubermatic/tree/main/charts/mla/minio-lifecycle-mgr) is used to manage the lifecycle of the stored data, and to take care of data retention. If you want to disable the MinIO installation and use your existing MinIO instance or other S3 services, you need to: + - Disable the Secret creation for MinIO in mla-secrets Helm chart. In the [mla-secrets Helm chart values.yaml](https://github.com/kubermatic/kubermatic/blob/main/charts/mla/mla-secrets/values.yaml#L18), set `mlaSecrets.minio.enabled` to `false`. - Modify the S3 storage settings in `values.yaml` of other MLA components to use the existing MinIO instance or other S3 services: - In [cortex Helm chart values.yaml](https://github.com/kubermatic/kubermatic/blob/main/charts/mla/cortex/values.yaml), change the `cortex.config.ruler_storage.s3`, `cortex.config.alertmanager_storage.s3`, and `cortex.config.blocks_storage.s3` to point to your existing MinIO instance or other S3 service. Modify the `cortex.alertmanager.env`, `cortex.ingester.env`, `cortex.querier.env`, `cortex.ruler.env` and `cortex.storage_gateway.env` to get credentials from your Secret. @@ -304,7 +312,6 @@ If you want to disable the MinIO installation and use your existing MinIO instan - If you still want to use MinIO lifecycle manager to manage data retention for MLA data in your MinIO instance, in [minio-lefecycle-mgr Helm chart values.yaml](https://github.com/kubermatic/kubermatic/blob/main/charts/mla/minio-lifecycle-mgr/values.yaml), set `lifecycleMgr.minio.endpoint` and `lifecycleMgr.minio.secretName` to your MinIO endpoint and Secret. 
- Use `--mla-skip-minio` or `--mla-skip-minio-lifecycle-mgr` flag when you execute `kubermatic-installer deploy usercluster-mla`. If you want to disable MinIO but still use MinIO lifecycle manager to take care of data retention, you can use `--mla-skip-minio` flag. Otherwise, you can use both flags to disable both MinIO and lifecycle manager. Please note that if you are redeploying the stack on existing cluster, you will have to manually uninstall MinIO and/or lifecycle manager. To do that, you can use commands: `helm uninstall --namespace mla minio` and `helm uninstall --namespace mla minio-lifecycle-mgr` accordingly. - ### Managing Grafana Dashboards In the User Cluster MLA Grafana, there are several predefined Grafana dashboards that are automatically available across all Grafana organizations (KKP projects). The KKP administrators have ability to modify the list of these dashboards. @@ -356,25 +363,25 @@ By default, no rate-limiting is applied. Configuring the rate-limiting options w For **metrics**, the following rate-limiting options are supported as part of the `monitoringRateLimits`: -| Option | Direction | Enforced by | Description -| -------------------- | -----------| ----------- | ---------------------------------------------------------------------- -| `ingestionRate` | Write path | Cortex | Ingestion rate limit in samples per second (Cortex `ingestion_rate`). -| `ingestionBurstSize` | Write path | Cortex | Maximum number of series per metric (Cortex `max_series_per_metric`). -| `maxSeriesPerMetric` | Write path | Cortex | Maximum number of series per this user cluster (Cortex `max_series_per_user`). -| `maxSeriesTotal` | Write path | Cortex | Maximum number of series per this user cluster (Cortex `max_series_per_user`). -| `queryRate` | Read path | MLA Gateway | Query request rate limit per second (NGINX `rate` in `r/s`). -| `queryBurstSize` | Read path | MLA Gateway | Query burst size in number of requests (NGINX `burst`). -| `maxSamplesPerQuery` | Read path | Cortex | Maximum number of samples during a query (Cortex `max_samples_per_query`). -| `maxSeriesPerQuery` | Read path | Cortex | Maximum number of timeseries during a query (Cortex `max_series_per_query`). +| Option | Direction | Enforced by | Description | +| -------------------- | -----------| ----------- | --------------------------------------------------------------------------------| +| `ingestionRate` | Write path | Cortex | Ingestion rate limit in samples per second (Cortex `ingestion_rate`). | +| `ingestionBurstSize` | Write path | Cortex | Maximum number of series per metric (Cortex `max_series_per_metric`). | +| `maxSeriesPerMetric` | Write path | Cortex | Maximum number of series per this user cluster (Cortex `max_series_per_user`). | +| `maxSeriesTotal` | Write path | Cortex | Maximum number of series per this user cluster (Cortex `max_series_per_user`). | +| `queryRate` | Read path | MLA Gateway | Query request rate limit per second (NGINX `rate` in `r/s`). | +| `queryBurstSize` | Read path | MLA Gateway | Query burst size in number of requests (NGINX `burst`). | +| `maxSamplesPerQuery` | Read path | Cortex | Maximum number of samples during a query (Cortex `max_samples_per_query`). | +| `maxSeriesPerQuery` | Read path | Cortex | Maximum number of timeseries during a query (Cortex `max_series_per_query`). 
| For **logs**, the following rate-limiting options are supported as part of the `loggingRateLimits`: -| Option | Direction | Enforced by | Description -| -------------------- | -----------| ----------- | ---------------------------------------------------------------------- -| `ingestionRate` | Write path | MLA Gateway | Ingestion rate limit in requests per second (NGINX `rate` in `r/s`). -| `ingestionBurstSize` | Write path | MLA Gateway | Ingestion burst size in number of requests (NGINX `burst`). -| `queryRate` | Read path | MLA Gateway | Query request rate limit per second (NGINX `rate` in `r/s`). -| `queryBurstSize` | Read path | MLA Gateway | Query burst size in number of requests (NGINX `burst`). +| Option | Direction | Enforced by | Description | +| -------------------- | -----------| ----------- | ----------------------------------------------------------------------| +| `ingestionRate` | Write path | MLA Gateway | Ingestion rate limit in requests per second (NGINX `rate` in `r/s`). | +| `ingestionBurstSize` | Write path | MLA Gateway | Ingestion burst size in number of requests (NGINX `burst`). | +| `queryRate` | Read path | MLA Gateway | Query request rate limit per second (NGINX `rate` in `r/s`). | +| `queryBurstSize` | Read path | MLA Gateway | Query burst size in number of requests (NGINX `burst`). | ## Debugging @@ -400,6 +407,7 @@ kubectl get pods -n mla-system ``` Output will be similar to this: + ```bash NAME READY STATUS RESTARTS AGE monitoring-agent-68f7485456-jj7v6 1/1 Running 0 11m @@ -414,6 +422,7 @@ kubectl get pods -n cluster-cxfmstjqkw | grep mla-gateway ``` Output will be similar to this: + ```bash mla-gateway-6dd8c68d67-knmq7 1/1 Running 0 22m ``` @@ -479,6 +488,7 @@ To incorporate the helm-charts upgrade, follow the below steps: ### Upgrade Loki to version 2.4.0 Add the following configuration inside `loki.config` key, under `ingester` label in the Loki's `values.yaml` file: + ```yaml wal: dir: /var/loki/wal @@ -489,11 +499,13 @@ wal: Statefulset `store-gateway` refers to a headless service called `cortex-store-gateway-headless`, however, due to a bug in the upstream helm-chart(v0.5.0), the `cortex-store-gateway-headless` doesn’t exist at all, and headless service is named `cortex-store-gateway`, which is not used by the statefulset. Because `cortex-store-gateway` is not referred at all, we can safely delete it, and do helm upgrade to fix the issue (Refer to this [pull-request](https://github.com/cortexproject/cortex-helm-chart/pull/166) for details). 
Delete the existing `cortex-store-gateway` service by running the below command: + ```bash kubectl delete svc cortex-store-gateway -n mla ``` After doing the above-mentioned steps, MLA stack can be upgraded using the Kubermatic Installer: + ```bash kubermatic-installer deploy usercluster-mla --config --helm-values ``` diff --git a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/setting-up-alertmanager-with-slack-notifications/_index.en.md b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/setting-up-alertmanager-with-slack-notifications/_index.en.md index 87d03a661..c5e250dc7 100644 --- a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/setting-up-alertmanager-with-slack-notifications/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/setting-up-alertmanager-with-slack-notifications/_index.en.md @@ -60,6 +60,7 @@ alertmanager_config: | title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}" text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}" ``` + Don’t forget to add the Slack Webhook URL that you have generated in the previous setup to `slack_api_url`, change the slack channel under `slack_configs` to the channel that you are going to use and save it by clicking **Edit** button: diff --git a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/user-guide/_index.en.md b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/user-guide/_index.en.md index 9e6b11b34..54c1d9c7f 100644 --- a/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/user-guide/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/monitoring-logging-alerting/user-cluster/user-guide/_index.en.md @@ -19,6 +19,7 @@ Users can enable monitoring and logging independently, and also can disable or e ## Enabling MLA Addons in a User Cluster KKP provides several addons for user clusters, that can be helpful when the User Cluster Monitoring feature is enabled, namely: + - **node-exporter** addon: exposes hardware and OS metrics of worker nodes to Prometheus, - **kube-state-metrics** addon: exposes cluster-level metrics of Kubernetes API objects (like pods, deployments, etc.) to Prometheus. 
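Tying the Slack pieces above together, a full `alertmanager_config` document typically nests the webhook, route and receiver as in the hedged sketch below; the webhook URL and channel are placeholders and the receiver name is an assumption:

```yaml
alertmanager_config: |
  global:
    # Placeholder: use the Slack webhook URL generated earlier.
    slack_api_url: https://hooks.slack.com/services/YOUR/WEBHOOK/URL
  route:
    receiver: slack-notifications
  receivers:
    - name: slack-notifications
      slack_configs:
        - channel: '#alerting'
          send_resolved: true
          title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
          text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}"
```

After saving it via the **Edit** button as described above, verify the setup by triggering a harmless test alert and checking the configured channel.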
@@ -54,16 +55,17 @@ The metric endpoints exposed via annotations will be automatically discovered by The following annotations are supported: -| Annotation | Example value | Description -| ------------------------- | ------------- | ------------ -| prometheus.io/scrape | `"true"` | Only scrape pods / service endpoints that have a value of `true` -| prometheus.io/scrape-slow | `"true"` | The same as `prometheus.io/scrape`, but will scrape metrics in longer intervals (5 minutes) -| prometheus.io/path | `/metrics` | Overrides the metrics path, the default is `/metrics` -| prometheus.io/port | `"8080"` | Scrape the pod / service endpoints on the indicated port +| Annotation | Example value | Description | +| ------------------------- | ------------- | --------------------------------------------------------------------------------------------| +| prometheus.io/scrape | `"true"` | Only scrape pods / service endpoints that have a value of `true` | +| prometheus.io/scrape-slow | `"true"` | The same as `prometheus.io/scrape`, but will scrape metrics in longer intervals (5 minutes) | +| prometheus.io/path | `/metrics` | Overrides the metrics path, the default is `/metrics` | +| prometheus.io/port | `"8080"` | Scrape the pod / service endpoints on the indicated port | For more information on exact scraping configuration and annotations, reference the user cluster Grafana Agent configuration in the `monitoring-agent` ConfigMap (`kubectl get configmap monitoring-agent -n mla-system -oyaml`) against the prometheus documentation for [kubernetes_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) and [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config). ### Extending Scrape Config + It is also possible to extend User Cluster Grafana Agent with custom `scrape_config` targets. This can be achieved by adding ConfigMaps with a pre-defined name prefix `monitoring-scraping` in the `mla-system` namespace in the user cluster. For example, a file `example.yaml` which contains customized scrape configs can look like the following: ```yaml @@ -158,17 +160,17 @@ As described on the [User Cluster MLA Stack Architecture]({{< relref "../../../. 
**monitoring-agent**: -| Resource | Requests | Limits -| -------- | -------- | ------ -| CPU | 100m | 1 -| Memory | 256Mi | 4Gi +| Resource | Requests | Limits | +| -------- | -------- | -------| +| CPU | 100m | 1 | +| Memory | 256Mi | 4Gi | **logging-agent**: -| Resource | Requests | Limits -| -------- | -------- | ------ -| CPU | 50m | 200m -| Memory | 64Mi | 128Mi +| Resource | Requests | Limits | +| -------- | -------- | -------| +| CPU | 50m | 200m | +| Memory | 64Mi | 128Mi | Non-default resource requests & limits for user cluster Prometheus and Loki Promtail can be configured via KKP API endpoint for managing clusters (`/api/v2/projects/{project_id}/clusters/{cluster_id}`): diff --git a/content/kubermatic/main/tutorials-howtos/networking/apiserver-policies/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/apiserver-policies/_index.en.md index 7e3c9531f..7b7fc8bf9 100644 --- a/content/kubermatic/main/tutorials-howtos/networking/apiserver-policies/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/networking/apiserver-policies/_index.en.md @@ -58,9 +58,10 @@ This feature is available only for user clusters with the `LoadBalancer` [Expose {{% notice warning %}} When restricting access to the API server, it is important to allow the following IP ranges : -* Worker nodes of the user cluster. -* Worker nodes of the KKP Master cluster. -* Worker nodes of the KKP seed cluster in case you are using separate Master/Seed Clusters. + +- Worker nodes of the user cluster. +- Worker nodes of the KKP Master cluster. +- Worker nodes of the KKP seed cluster in case you are using separate Master/Seed Clusters. Since Kubernetes in version v1.25, it is also needed to add Pod IP range of KKP seed cluster, because of the [change](https://github.com/kubernetes/kubernetes/pull/110289) to kube-proxy. @@ -86,7 +87,8 @@ or in an existing cluster via the "Edit Cluster" dialog: ## Seed-Level API Server IP Ranges Whitelisting -The `defaultAPIServerAllowedIPRanges` field in the Seed specification allows administrators to define a **global set of CIDR ranges** that are **automatically appended** to the allowed IP ranges for all user cluster API servers within that Seed. These ranges act as a security baseline to: +The `defaultAPIServerAllowedIPRanges` field in the Seed specification allows administrators to define a **global set of CIDR ranges** that are **automatically appended** to the allowed IP ranges for all user cluster API servers within that Seed. These ranges act as a security baseline to: + - Ensure KKP components (e.g., seed-manager, dashboard) retain access to cluster APIs - Enforce organizational IP restrictions across all clusters in the Seed - Prevent accidental misconfigurations in cluster-specific settings diff --git a/content/kubermatic/main/tutorials-howtos/networking/cilium-cluster-mesh/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/cilium-cluster-mesh/_index.en.md index b2db3df3f..93a67fb27 100644 --- a/content/kubermatic/main/tutorials-howtos/networking/cilium-cluster-mesh/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/networking/cilium-cluster-mesh/_index.en.md @@ -8,12 +8,14 @@ This guide describes the setup for configuring Cilium Cluster Mesh between 2 KKP running with Cilium CNI. 
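Before moving on to the Cluster Mesh guide that follows, here is a hedged sketch of the Seed-level whitelisting described above. Only the `defaultAPIServerAllowedIPRanges` field name comes from the text; the surrounding metadata and the CIDR values are placeholders:

```yaml
apiVersion: kubermatic.k8c.io/v1
kind: Seed
metadata:
  name: example-seed        # placeholder
  namespace: kubermatic     # assumed namespace for Seed resources
spec:
  defaultAPIServerAllowedIPRanges:
    - 10.10.0.0/16          # e.g. worker node range of the KKP master/seed clusters
    - 203.0.113.0/24        # e.g. an organizational egress range
```

These ranges are automatically appended to each cluster's own allowed ranges, acting as the baseline described above.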
## Versions + This guide was made for the following versions of KKP and Cilium: - KKP 2.22.0 - Cilium 1.13.0 ## Prerequisites + Before proceeding, please review that your intended setup meets the [Prerequisites for Cilium Cluster Mesh](https://docs.cilium.io/en/latest/network/clustermesh/clustermesh/#prerequisites). @@ -25,11 +27,13 @@ Especially, keep in mind that nodes in all clusters must have IP connectivity be ## Deployment Steps ### 1. Create 2 KKP User Clusters with non-overlapping pod CIDRs + Create 2 user clusters with Cilium CNI and `ebpf` proxy mode (necessary to have Cluster Mesh working also for cluster-ingress traffic via LoadBalancer or NodePort services). The clusters need to have non-overlapping pod CIDRs, so at least one of them needs to have the `spec.clusterNetwork.pods.cidrBlocks` set to a non-default value (e.g. `172.26.0.0/16`). We will be referring to these clusters as `Cluster 1` and `Cluster 2` in this guide. ### 2. Enable Cluster Mesh in the Cluster 1 + **In Cluster 1**, edit the Cilium ApplicationInstallation values (via UI, or `kubectl edit ApplicationInstallation cilium -n kube-system`), and add the following snippet to it: @@ -50,24 +54,29 @@ clustermesh: ``` ### 3. Retrieve Cluster Mesh data from the Cluster 1 + **In Cluster 1**, retrieve the information necessary for the next steps: Retrieve CA cert & key: -``` + +```bash kubectl get secret cilium-ca -n kube-system -o yaml ``` Retrieve clustermesh-apiserver external IP: -``` + +```bash kubectl get svc clustermesh-apiserver -n kube-system ``` Retrieve clustermesh-apiserver remote certs: -``` + +```bash kubectl get secret clustermesh-apiserver-remote-cert -n kube-system -o yaml ``` ### 4. Enable Cluster Mesh in the Cluster 2 + **In Cluster 2**, the Cilium ApplicationInstallation values, and add the following snippet to it (after replacing the values below the lines with comments with the actual values retrieved in the previous step): @@ -105,19 +114,23 @@ clustermesh: ``` ### 5. Retrieve Cluster Mesh data from the Cluster 2 + **In Cluster 2**, retrieve the information necessary for the next steps: Retrieve clustermesh-apiserver external IP: -```shell + +```bash kubectl get svc clustermesh-apiserver -n kube-system ``` Retrieve clustermesh-apiserver remote certs: -```shell + +```bash kubectl get secret clustermesh-apiserver-remote-cert -n kube-system -o yaml ``` ### 6. Update Cluster Mesh config in the Cluster 1 + **In Cluster 1**, update the Cilium ApplicationInstallation values, and add the following clustermesh config with cluster-2 details into it: ```yaml @@ -151,21 +164,25 @@ clustermesh: ``` ### 7. Allow traffic between worker nodes of different clusters + If any firewalling is in place between the worker nodes in different clusters, the following ports need to be allowed between them: - UDP 8472 (VXLAN) - TCP 4240 (HTTP health checks) ### 8. Check Cluster Mesh status + At this point, check Cilium health status in each cluster with: -```shell + +```bash kubectl exec -it cilium- -n kube-system -- cilium-health status ``` It should show all local and remote cluster's nodes and not show any errors. It may take a few minutes until things settle down since the last configuration. Example output: -``` + +```yaml Nodes: cluster-1/f5m2nzcb4z-worker-p7m58g-7f44796457-wv5fq (localhost): Host connectivity to 10.0.0.2: @@ -184,11 +201,12 @@ Nodes: ``` In case of errors, check again for firewall settings mentioned in the previous point. 
It may also help to manually restart: + - first `clustermesh-apiserver` pods in each cluster, - then `cilium` agent pods in each cluster. - ## Example Cross-Cluster Application Deployment With Failover / Migration + After Cilium Cluster Mesh has been set up, it is possible to use global services across the meshed clusters. In this example, we will deploy a global deployment into 2 clusters, where each cluster will be acting a failover for the other. Normally, all traffic will be handled by backends in the local cluster. Only in case of no local backends, it will be handled by backends running in the other cluster. That will be true for local (pod-to-service) traffic, as well as ingress traffic provided by LoadBalancer services in each cluster. @@ -218,6 +236,7 @@ spec: ``` Now, in each cluster, lets create a service of type LoadBalancer with the necessary annotations: + - `io.cilium/global-service: "true"` - `io.cilium/service-affinity: "local"` @@ -242,12 +261,14 @@ spec: ``` The list of backends for a service can be checked with: -```shell + +```bash kubectl exec -it cilium- -n kube-system -- cilium service list --clustermesh-affinity ``` Example output: -``` + +```bash ID Frontend Service Type Backend 16 10.240.27.208:80 ClusterIP 1 => 172.25.0.160:80 (active) (preferred) 2 => 172.25.0.12:80 (active) (preferred) @@ -258,13 +279,14 @@ ID Frontend Service Type Backend At this point, the service should be available in both clusters, either locally or via assigned external IP of the `nginx-deployment` service. Let's scale the number of nginx replicas in one of the clusters (let's say Cluster 1) to 0: -```shell + +```bash kubectl scale deployment nginx-deployment --replicas=0 ``` The number of backends for the service has been lowered down to 2, and only lists remote backends in the `cilium service list` output: -``` +```bash ID Frontend Service Type Backend 16 10.240.27.208:80 ClusterIP 1 => 172.26.0.31:80 (active) 2 => 172.26.0.196:80 (active) diff --git a/content/kubermatic/main/tutorials-howtos/networking/cni-cluster-network/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/cni-cluster-network/_index.en.md index 816a349a7..7f126aa13 100644 --- a/content/kubermatic/main/tutorials-howtos/networking/cni-cluster-network/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/networking/cni-cluster-network/_index.en.md @@ -74,10 +74,12 @@ When this option is selected, the user cluster will be left without any CNI, and When deploying your own CNI, please make sure you pass proper pods & services CIDRs to your CNI configuration - matching with the KKP user-cluster level configuration in the [Advanced Network Configuration](#advanced-network-configuration). ### Deploying CNI as a System Application + As of Cilium version `1.13.0`, Cilium CNI is deployed as a "System Application" instead of KKP Addon (as it is the case for older Cilium versions and all Canal CNI versions). Apart from internally relying on KKP's [Applications]({{< relref "../../applications" >}}) infrastructure rather than [Addons]({{< relref "../../../architecture/concept/kkp-concepts/addons" >}}) infrastructure, it provides the users with full flexibility of CNI feature usage and configuration. 
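Because the CNI is now just another Application, you can inspect what KKP currently deploys before changing anything; a minimal sketch using the resource and namespace names from the editing examples in the next subsections:

```bash
# List system applications installed in the user cluster; the Cilium CNI shows up here.
kubectl get applicationinstallations -n kube-system

# Show the full ApplicationInstallation, including the Helm values KKP applied
# under spec.values, for the Cilium CNI.
kubectl get applicationinstallation cilium -n kube-system -o yaml
```

This is a read-only complement to the editing workflows described below.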
#### Editing the CNI Configuration During Cluster Creation + When creating a new user cluster via KKP UI, it is possible to specify Helm values used to deploy the CNI via the "Edit CNI Values" button at the bottom of the "Advanced Network Configuration" section on the step 2 of the cluster creation wizard: ![Edit CNI Values](images/edit-cni-app-values.png?classes=shadow,border "Edit CNI Values") @@ -88,6 +90,7 @@ Please note that the final Helm values applied in the user cluster will be autom This option is also available when creating cluster templates and the CNI configuration saved in the cluster template is automatically applied to all clusters created from the template. #### Editing the CNI Configuration in Existing Cluster + In an existing cluster, the CNI configuration can be edited in two ways: via KKP UI, or by editing CNI `ApplicationInstallation` in the user cluster. For editing CNI configuration via KKP UI, navigate to the "Applications" tab on the cluster details page, switch the "Show System Applications" toggle, and click on the "Edit Application" button of the CNI. After that a new dialog window with currently applied CNI Helm values will be open and allow their modification. @@ -95,22 +98,28 @@ For editing CNI configuration via KKP UI, navigate to the "Applications" tab on ![Edit CNI Application](images/edit-cni-app.png?classes=shadow,border "Edit CNI Application") The other option is to edit the CNI `ApplicationInstallation` in the user cluster directly, e.g. like this for the Cilium CNI: + ```bash kubectl edit ApplicationInstallation cilium -n kube-system ``` + and edit the configuration in ApplicationInstallation's `spec.values`. This approach can be used e.g. to turn specific CNI features on or off, or modify arbitrary CNI configuration. Please note that some parts of the CNI configuration (e.g. pod CIDR etc.) is managed by KKP, and its change will not be allowed, or may be overwritten upon next reconciliation of the ApplicationInstallation. #### Changing the Default CNI Configuration + The default CNI configuration that will be used to deploy CNI in new KKP user clusters can be defined at two places: + - in a cluster template, if the cluster is being created from a template (which takes precedence over the next option), - in the CNI ApplicationDefinition's `spec.defaultValues` in the KKP master cluster (editable e.g. via `kubectl edit ApplicationDefinition cilium`). #### CNI Helm Chart Source + The Helm charts used to deploy CNI are hosted in a Kubermatic OCI registry (`oci://quay.io/kubermatic/helm-charts`). This registry needs to be accessible from the KKP Seed cluster to allow successful CNI deployment. In setups with restricted Internet connectivity, a different (e.g. private) OCI registry source for the CNI charts can be configured in `KubermaticConfiguration` (`spec.systemApplications.helmRepository` and `spec.systemApplications.helmRegistryConfigFile`). To mirror a Helm chart into a private OCI repository, you can use the helm CLI, e.g.: + ```bash CHART_VERSION=1.13.0 helm pull oci://quay.io/kubermatic/helm-charts/cilium --version ${CHART_VERSION} @@ -118,10 +127,12 @@ helm push cilium-${CHART_VERSION}.tgz oci://// ``` #### Upgrading Cilium CNI to Cilium 1.13.0 / Downgrading + For user clusters originally created with the Cilium CNI version lower than `1.13.0` (which was managed by the Addons mechanism rather than Applications), the migration to the management via Applications infra happens automatically during the CNI version upgrade to `1.13.0`. 
During the upgrade, if the Hubble Addon was installed in the cluster before, the Addon will be automatically removed, as Hubble is now enabled by default. If there are such clusters in your KKP installation, it is important to preserve the following part of the configuration in the [default configuration](#changing-the-default-cni-configuration) of the ApplicationInstallation: + ```bash hubble: tls: @@ -158,36 +169,45 @@ Some newer Kubernetes versions may not be compatible with already deprecated CNI Again, please note that it is not a good practice to keep the clusters on an old CNI version and try to upgrade as soon as new CNI version is available next time. ## IPv4 / IPv4 + IPv6 (Dual Stack) + This option allows for switching between IPv4-only and IPv4+IPv6 (dual-stack) networking in the user cluster. This feature is described in detail on an individual page: [Dual-Stack Networking]({{< relref "../dual-stack/" >}}). ## Advanced Network Configuration + After Clicking on the "Advanced Networking Configuration" button in the cluster creation wizard, several more network configuration options are shown to the user: ![Cluster Settings - Advanced Network Configuration](images/ui-cluster-networking-advanced.png?classes=shadow,border "Cluster Settings - Network Configuration") ### Proxy Mode + Configures kube-proxy mode for k8s services. Can be set to `ipvs`, `iptables` or `ebpf` (`ebpf` is available only if Cilium CNI is selected and [Konnectivity](#konnectivity) is enabled). Defaults to `ipvs` for Canal CNI clusters and `ebpf` / `iptables` (based on whether Konnectivity is enabled or not) for Cilium CNI clusters. Note that IPVS kube-proxy mode is not recommended with Cilium CNI due to [a known issue]({{< relref "../../../architecture/known-issues/" >}}#2-connectivity-issue-in-pod-to-nodeport-service-in-cilium--ipvs-proxy-mode). ### Pods CIDR + The network range from which POD networks are allocated. Defaults to `[172.25.0.0/16]` (or `[172.26.0.0/16]` for Kubevirt clusters, `[172.25.0.0/16, fd01::/48]` for `IPv4+IPv6` ipFamily). ### Services CIDR + The network range from which service VIPs are allocated. Defaults to `[10.240.16.0/20]` (or `[10.241.0.0/20]` for Kubevirt clusters, `[10.240.16.0/20, fd02::/120]` for `IPv4+IPv6` ipFamily). ### Node CIDR Mask Size + The mask size (prefix length) used to allocate a node-specific pod subnet within the provided Pods CIDR. It has to be larger than the provided Pods CIDR prefix length. ### Allowed IP Range for NodePorts + IP range from which NodePort access to the worker nodes will be allowed. Defaults to `0.0.0.0/0` (allowed from anywhere). This option is available only for some cloud providers that support it. ### Node Local DNS Cache + Enables NodeLocal DNS Cache - caching DNS server running on each worker node in the cluster. ### Konnectivity + Konnectivity provides TCP level proxy for the control plane (seed cluster) to worker nodes (user cluster) communication. It is based on the upstream [apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy/) project and is aimed to be the replacement of the older KKP-specific solution based on OpenVPN and network address translation. Since the old solution was facing several limitations, it has been replaced with Konnectivity and will be removed in future KKP releases. 
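Taken together, the options described in this section end up under `spec.clusterNetwork` in the Cluster object. The following excerpt is only an illustrative sketch with example values; double-check the exact field names (e.g. `konnectivityEnabled`) against the Cluster CRD of your KKP version:

```yaml
# Illustrative Cluster spec excerpt (example values, not recommendations)
spec:
  clusterNetwork:
    proxyMode: ebpf            # ipvs, iptables or ebpf (see Proxy Mode above)
    pods:
      cidrBlocks:
        - 172.25.0.0/16        # Pods CIDR
    services:
      cidrBlocks:
        - 10.240.16.0/20       # Services CIDR
    konnectivityEnabled: true  # Konnectivity instead of the legacy OpenVPN-based solution
```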
{{% notice warning %}} diff --git a/content/kubermatic/main/tutorials-howtos/networking/cni-migration/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/cni-migration/_index.en.md index 2c9a80c71..6d6400799 100644 --- a/content/kubermatic/main/tutorials-howtos/networking/cni-migration/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/networking/cni-migration/_index.en.md @@ -110,6 +110,7 @@ kubectl delete -f https://docs.projectcalico.org/v3.21/manifests/canal.yaml At this point, your cluster should be running on Cilium CNI. ### Step 5 (Optional) + Consider changing the kube-proxy mode, especially if it was IPVS previously. See [Changing the Kube-Proxy Mode](#changing-the-kube-proxy-mode) for more details. @@ -121,7 +122,6 @@ As the last step, we recommend to perform rolling restart of machine deployments Please verify that everything works normally in the cluster. If there are any problems, you can revert the migration procedure and go back to the previously used CNI type and version as described in the next section. - ## Migrating User Cluster with Cilium CNI to Canal CNI Please follow the same steps as in [Migrating User Cluster with the Canal CNI to the Cilium CNI](#migrating-user-cluster-with-the-canal-cni-to-the-cilium-cni), with the following changes: @@ -138,8 +138,8 @@ helm template cilium cilium/cilium --version 1.11.0 --namespace kube-system | ku - [(Step 5)](#step-5-optional) Restart all already running non-host-networking pods as in the [Step 3](#step-3). We then recommend to perform rolling restart of machine deployments in the cluster as well. - ## Changing the Kube-Proxy Mode + If you migrated your cluster from Canal CNI to Cilium CNI, you may want to change the kube-proxy mode of the cluster. As the `ipvs` kube-proxy mode is not recommended with Cilium CNI due to [a known issue]({{< relref "../../../architecture/known-issues/" >}}#2-connectivity-issue-in-pod-to-nodeport-service-in-cilium--ipvs-proxy-mode), we strongly recommend migrating to `ebpf` or `iptables` proxy mode after Canal -> Cilium migration. @@ -159,6 +159,7 @@ At this point, you are able to change the proxy mode in the Cluster API. Change - or by editing the cluster CR in the Seed Cluster (`kubectl edit cluster `). ### Step 3 + When switching to/from ebpf, wait until all Cilium pods are redeployed (you will notice a restart of all Cilium pods). It can take up to 5 minutes until this happens. diff --git a/content/kubermatic/main/tutorials-howtos/networking/dual-stack/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/dual-stack/_index.en.md index 60b480664..c2f8cdc90 100644 --- a/content/kubermatic/main/tutorials-howtos/networking/dual-stack/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/networking/dual-stack/_index.en.md @@ -34,6 +34,7 @@ KKP supports dual-stack networking for KKP-managed user clusters for the followi Dual-stack [specifics & limitations of individual cloud-providers](#cloud-provider-specifics-and-limitations) are listed below. ## Compatibility Matrix + The following table lists the provider / operating system combinations compatible with dual-stack clusters on KKP: | | Ubuntu | Flatcar | RHEL | Rocky Linux | @@ -47,14 +48,13 @@ The following table lists the provider / operating system combinations compatibl | Openstack | ✓ | ✓ | ✓ | ✓ | | VMware vSphere | ✓ | - | - | - | - **NOTES:** - A hyphen(`-`) denotes that the operating system is available / not tested on the given platform. 
- An asterisk (`*`) denotes a minor issue described in [specifics & limitations of individual cloud-providers](#cloud-provider-specifics-and-limitations). - ## Enabling Dual-Stack Networking for a User Cluster + Dual-stack networking can be enabled for each user-cluster across one of the supported cloud providers. Please refer to [provider-specific documentation](#cloud-provider-specifics-and-limitations) below to see if it is supported globally, or it needs to be enabled on the datacenter level. @@ -62,6 +62,7 @@ or it needs to be enabled on the datacenter level. Dual-stack can be enabled for each supported CNI (both Canal and Cilium). In case of Canal CNI, the minimal supported version is 3.22. ### Enabling Dual-Stack Networking from KKP UI + If dual-stack networking is available for the given provider and datacenter, an option for choosing between `IPv4` and `IPv4 and IPv6 (Dual Stack)` becomes automatically available on the cluster details page in the cluster creation wizard: @@ -85,7 +86,7 @@ without specifying pod / services CIDRs for individual address families, just se `spec.clusterNetwork.ipFamily` to `IPv4+IPv6` and leave `spec.clusterNetwork.pods` and `spec.clusterNetwork.services` empty. They will be defaulted as described on the [CNI & Cluster Network Configuration page]({{< relref "../cni-cluster-network/" >}}#default-cluster-network-configuration). -2. The other option is to specify both IPv4 and IPv6 CIDRs in `spec.clusterNetwork.pods` and `spec.clusterNetwork.services`. +1. The other option is to specify both IPv4 and IPv6 CIDRs in `spec.clusterNetwork.pods` and `spec.clusterNetwork.services`. For example, a valid `clusterNetwork` configuration excerpt may look like: ```yaml @@ -106,9 +107,9 @@ spec: Please note that the order of address families in the `cidrBlocks` is important and KKP right now only supports IPv4 as the primary IP family (meaning that IPv4 address must always be the first in the `cidrBlocks` list). - ## Verifying Dual-Stack Networking in a User Cluster -in order to verify the connectivity in a dual-stack enabled user cluster, please refer to the + +In order to verify the connectivity in a dual-stack enabled user cluster, please refer to the [Validate IPv4/IPv6 dual-stack](https://kubernetes.io/docs/tasks/network/validate-dual-stack/) page in the Kubernetes documentation. Please note the [cloud-provider specifics & limitations](#cloud-provider-specifics-and-limitations) section below, as some features may not be supported on the given cloud-provider. @@ -116,40 +117,49 @@ section below, as some features may not be supported on the given cloud-provider ## Cloud-Provider Specifics and Limitations ### AWS + Dual-stack feature is available automatically for all new user clusters in AWS. Please note however, that the VPC and subnets used to host the worker nodes need to be dual-stack enabled - i.e. must have both IPv4 and IPv6 CIDR assigned. Limitations: + - In the Clusters with control plane version < 1.24, Worker nodes do not have their IPv6 IP addresses published in k8s API (`kubectl describe nodes`), but have them physically applied on their network interfaces (can be seen after SSH-ing to the node). Because of this, pods in the host network namespace do not have IPv6 address assigned. - Dual-Stack services of type `LoadBalancer` are not yet supported by AWS cloud-controller-manager. Only `NodePort` services can be used to expose services outside the cluster via IPv6. 
Related issues: - - https://github.com/kubermatic/kubermatic/issues/9899 - - https://github.com/kubernetes/cloud-provider-aws/issues/477 + + - + - Docs: + - [AWS: Subnets for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html) ### Microsoft Azure + Dual-stack feature is available automatically for all new user clusters in Azure. Please note however that the VNet used to host the worker nodes needs to be dual-stack enabled - i.e. must have both IPv4 and IPv6 CIDR assigned. In case that you are not using a pre-created VNet, but leave the VNet creation on KKP, it will automatically create a dual-stack VNet for your dual-stack user clusters. Limitations: + - Dual-Stack services of type `LoadBalancer` are not yet supported by Azure cloud-controller-manager. Only `NodePort` services can be used to expose services outside the cluster via IPv6. Related issues: - - https://github.com/kubernetes-sigs/cloud-provider-azure/issues/814 - - https://github.com/kubernetes-sigs/cloud-provider-azure/issues/1831 + + - + - Docs: + - [Overview of IPv6 for Azure Virtual Network](https://docs.microsoft.com/en-us/azure/virtual-network/ip-services/ipv6-overview) ### BYO / kubeadm + Dual-stack feature is available automatically for all new Bring-Your-Own (kubeadm) user clusters. Before joining a KKP user cluster, the worker node needs to have both IPv4 and IPv6 address assigned. @@ -159,6 +169,7 @@ flag of the kubelet. This can be done as follows: - As instructed by KKP UI, run the `kubeadm token --kubeconfig create --print-join-command` command and use its output in the next step. - Create a yaml file with kubeadm `JoinConfiguration`, e.g. `kubeadm-join-config.yaml` with the content similar to this: + ```yaml apiVersion: kubeadm.k8s.io/v1beta3 kind: JoinConfiguration @@ -174,58 +185,72 @@ nodeRegistration: # change the node-ip below to match your desired IPv4 and IPv6 addresses of the node node-ip: 10.0.6.114,2a05:d014:937:4500:a324:767b:38da:2bff ``` + - Join the node with the provided config file, e.g.: `kubeadm join --config kubeadm-join-config.yaml`. Limitations: + - Services of type `LoadBalancer` don't work out of the box in BYO/kubeadm clusters. You can use additional addon software, such as [MetalLB](https://metallb.universe.tf/) to make them work in your custom kubeadm setup. Docs: + - [Dual-stack support with kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/dual-stack-support/) ### DigitalOcean + Dual-stack feature is available automatically for all new user clusters in DigitalOcean. Limitations: + - Services of type `LoadBalancer` are not yet supported in KKP on DigitalOcean (not even for IPv4-only clusters). - On some operating systems (e.g. Rocky Linux) IPv6 address assignment on the node may take longer time during the node provisioning. In that case, the IPv6 address may not be detected when the kubelet starts, and because of that, worker nodes may not have their IPv6 IP addresses published in k8s API (`kubectl describe nodes`). This can be work-arounded by restarting the kubelet manually / rebooting the node. Related issues: -- https://github.com/kubermatic/kubermatic/issues/8847 + +- ### Equinix Metal + Dual-stack feature is available automatically for all new user clusters in Equinix Metal. Limitations: + - Services of type `LoadBalancer` are not yet supported in KKP on Equinix Metal (not even for IPv4-only clusters). - On some operating systems (e.g. 
Rocky Linux, Flatcar) IPv6 address assignment on the node may take longer time during the node provisioning. In that case, the IPv6 address may not be detected when the kubelet starts, and because of that, worker nodes may not have their IPv6 IP addresses published in k8s API (`kubectl describe nodes`). This can be work-arounded by restarting the kubelet manually / rebooting the node. Related issues: -- https://github.com/kubermatic/kubermatic/issues/10648 -- https://github.com/equinix/cloud-provider-equinix-metal/issues/179 + +- +- ### Google Cloud Platform (GCP) + Dual-stack feature is available automatically for all new user clusters in GCP. Please note however, that the subnet used to host the worker nodes need to be dual-stack enabled - i.e. must have both IPv4 and IPv6 CIDR assigned. Limitations: + - Worker nodes do not have their IPv6 IP addresses published in k8s API (`kubectl describe nodes`), but have them physically applied on their network interfaces (can be seen after SSH-ing to the node). Because of this, pods in the host network namespace do not have IPv6 address assigned. - Dual-Stack services of type `LoadBalancer` are not yet supported by GCP cloud-controller-manager. Only `NodePort` services can be used to expose services outside the cluster via IPv6. Related issues: -- https://github.com/kubermatic/kubermatic/issues/9899 -- https://github.com/kubernetes/cloud-provider-gcp/issues/324 + +- +- Docs: + - [GCP: Create and modify VPC Networks](https://cloud.google.com/vpc/docs/create-modify-vpc-networks) ### Hetzner + Dual-stack feature is available automatically for all new user clusters in Hetzner. Please note that all services of type `LoadBalancer` in Hetzner need to have a @@ -234,14 +259,17 @@ for example `load-balancer.hetzner.cloud/network-zone: "eu-central"` or `load-ba Without one of these annotations, the load-balancer will be stuck in the Pending state. Limitations: + - Due to the [issue with node ExternalIP ordering](https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/305), we recommend using dual-stack clusters on Hetzner only with [Konnectivity]({{< relref "../cni-cluster-network/#konnectivity" >}}) enabled, otherwise errors can be seen when issuing `kubectl logs` / `kubectl exec` / `kubectl cp` commands on the cluster. Related Issues: -- https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/305 + +- ### OpenStack + As IPv6 support in OpenStack highly depends on the datacenter setup, dual-stack feature in KKP is available only in those OpenStack datacenters where it is explicitly enabled in the datacenter config of the KKP (datacenter's `spec.openstack.ipv6Enabled` config flag is set to `true`). @@ -254,29 +282,34 @@ but a default IPv6 subnet pool exists in the datacenter, the default one will be specified and the default IPv6 subnet pool does not exist, the IPv6 subnet will be created with the CIDR `fd00::/64`. Limitations: + - Dual-Stack services of type `LoadBalancer` are not yet supported by the OpenStack cloud-controller-manager. The initial work has been -finished as part of https://github.com/kubernetes/cloud-provider-openstack/pull/1901 and should be released as of +finished as part of and should be released as of Kubernetes version 1.25. 
Related Issues: -- https://github.com/kubernetes/cloud-provider-openstack/issues/1937 + +- Docs: + - [IPv6 in OpenStack](https://docs.openstack.org/neutron/yoga/admin/config-ipv6.html) - [Subnet pools](https://docs.openstack.org/neutron/yoga/admin/config-subnet-pools.html) ### VMware vSphere + As IPv6 support in VMware vSphere highly depends on the datacenter setup, dual-stack feature in KKP is available only in those vSphere datacenters where it is explicitly enabled in the datacenter config of the KKP (datacenter's `spec.vsphere.ipv6Enabled` config flag is set to `true`). Limitations: + - Services of type `LoadBalancer` don't work out of the box in vSphere clusters, as they are not implemented by the vSphere cloud-controller-manager. You can use additional addon software, such as [MetalLB](https://metallb.universe.tf/) to make them work in your environment. - ## Operating System Specifics and Limitations + Although IPv6 is usually enabled by most modern operating systems by default, there can be cases when the particular provider's IPv6 assignment method is not automatically enabled in the given operating system image. Even though we tried to cover most of the cases in the Machine Controller and Operating System Manager code, in some cases @@ -287,6 +320,7 @@ These cases can be still addressed by introducing of custom Operating System Pro specific configuration (see [Operating System Manager]({{< relref "../../operating-system-manager/" >}}) docs). ### RHEL / Rocky Linux + RHEL & Rocky Linux provide an extensive set of IPv6 settings for NetworkManager (see "Table 22. ipv6 setting" in the [NetworkManager ifcfg-rh settings plugin docs](https://developer-old.gnome.org/NetworkManager/unstable/nm-settings-ifcfg-rh.html)). Depending on the IPv6 assignment method used in the datacenter, you may need the proper combination diff --git a/content/kubermatic/main/tutorials-howtos/networking/ipam/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/ipam/_index.en.md index 29f7de106..50c5b1612 100644 --- a/content/kubermatic/main/tutorials-howtos/networking/ipam/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/networking/ipam/_index.en.md @@ -7,17 +7,21 @@ weight = 170 Multi-Cluster IPAM is a feature responsible for automating the allocation of IP address ranges/subnets per user-cluster, based on a predefined configuration ([IPAMPool](#input-resource-ipampool)) per datacenter that defines the pool subnet and the allocation size. The user cluster allocated ranges are available in the [KKP Addon](#kkp-addon-template-integration) `TemplateData`, so it can be used by various Addons running in the user cluster. ## Motivation and Background + Networking applications deployed in KKP user clusters need automated IP Address Management (IPAM) for IP ranges that they use, in a way that prevents address overlaps between multiple user clusters. An example for such an application is MetalLB load-balancer, for which a unique IP range from a larger CIDR range needs to be configured in each user cluster in the same datacenter. The goal is to provide a simple solution that is automated and less prone to human errors. ## Allocation Types + Each IPAM pool in a datacenter should define an allocation type: "range" or "prefix". ### Range + Results in a set of IPs based on an input size. E.g. the first allocation for a range of size **8** in a pool subnet `192.168.1.0/26` would be + ```txt 192.168.1.0-192.168.1.7 ``` @@ -25,21 +29,27 @@ E.g. 
the first allocation for a range of size **8** in a pool subnet `192.168.1. *Note*: There is a minimal allowed pool subnet mask based on the IP version (**20** for IPv4 and **116** for IPv6). So, if you need a large range of IPs, it's recommended to use the "prefix" type. ### Prefix + Results in a subnet of the pool subnet based on an input subnet prefix. Recommended when a large range of IPs is necessary. E.g. the first allocation for a prefix **30** in a pool subnet `192.168.1.0/26` would be + ```txt 192.168.1.0/30 ``` + and the second would be + ```txt 192.168.1.4/30 ``` ## Input Resource (IPAMPool) + KKP exposes a global-scoped Custom Resource Definition (CRD) `IPAMPool` in the seed cluster. The administrators are able to define the `IPAMPool` CR with a specific name with multiple pool CIDRs with predefined allocation ranges tied to specific datacenters. The administrators can also manage the IPAM pools via [API endpoints]({{< relref "../../../references/rest-api-reference/#/ipampool" >}}) (`/api/v2/seeds/{seed_name}/ipampools`). E.g. containing both allocation types for different datacenters: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: IPAMPool @@ -66,6 +76,7 @@ Optionally, you can configure range/prefix exclusions in IPAMPools, in order to For that, you need to extend the IPAM Pool datacenter spec to include a list of subnets CIDR to exclude (`excludePrefixes` for prefix allocation type) or a list of particular IPs or IP ranges to exclude (`excludeRanges` for range allocation type). E.g. from previous example, containing both allocation types exclusions: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: IPAMPool @@ -90,7 +101,9 @@ spec: ``` ### Restrictions + Required `IPAMPool` spec fields: + - `datacenters` list cannot be empty. - `type` for a datacenter is mandatory. - `poolCidr` for a datacenter is mandatory. @@ -98,15 +111,18 @@ Required `IPAMPool` spec fields: - `allocationPrefix` for a datacenter with "prefix" allocation type is mandatory. For the "range" allocation type: + - `allocationRange` should be a positive integer and cannot be greater than the pool subnet possible number of IP addresses. - IPv4 `poolCIDR` should have a prefix (i.e. mask) equal or greater than **20**. - IPv6 `poolCIDR` should have a prefix (i.e. mask) equal or greater than **116**. For the "prefix" allocation type: + - `allocationPrefix` should be between **1** and **32** for IPv4 pool, and between **1** and **128** for IPv6 pool. - `allocationPrefix` should be equal or greater than the pool subnet mask size. ### Modifications + In general, modifications of the `IPAMPool` are not allowed, with the following exceptions: - It is possible to add a new datacenter into the `IPAMPool`. @@ -115,13 +131,14 @@ In general, modifications of the `IPAMPool` are not allowed, with the following If you need to change an already applied `IPAMPool`, you should first delete it and then apply it with the changes. Note that by `IPAMPool` deletion, all user clusters allocations (`IPAMAllocation`) will be deleted as well. - ## Generated Resource (IPAMAllocation) + The IPAM controller in the seed-controller-manager is in charge of the allocation of IP ranges from the defined pools for user clusters. For each user cluster which runs in a datacenter for which an `IPAMPool` is defined, it will automatically allocate a free IP range from the available pool. The persisted allocation is an `IPAMAllocation` CR that will be installed in the seed cluster in the user cluster's namespace. E.g. 
for "prefix" type: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: IPAMAllocation @@ -135,6 +152,7 @@ spec: ``` E.g. for "range" type: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: IPAMAllocation @@ -149,24 +167,30 @@ spec: ``` Note that the ranges of addresses may be disjoint for the "range" type, e.g.: + ```yaml spec: addresses: - "192.168.1.0-192.168.1.7" - "192.168.1.16-192.168.1.23" ``` + The reason for that is to allow for some `IPAMPool` modifications (i.e. increase of the allocation range) in the future. ### Allocations Cleanup + The allocations (i.e. `IPAMAllocation` CRs) for a user cluster are deleted in two occasions: + - Related pool (i.e. `IPAMPool` CR with same name) is deleted. - User cluster itself is deleted. ## KKP Addon Template Integration + The user cluster allocated ranges (i.e. `IPAMAllocation` CRs values) are available in the [Addon template data]({{< relref "../../../architecture/concept/kkp-concepts/addons/" >}}#manifest-templating) (attribute `.Cluster.Network.IPAMAllocations`) to be rendered in the Addons manifests. That allows consumption of the user cluster's IPAM allocations in any KKP [Addon]({{< relref "../../../architecture/concept/kkp-concepts/addons/" >}}). For example, looping over all user cluster IPAM pools allocations in an addon template can be done as follows: + ```yaml ... @@ -185,18 +209,21 @@ For example, looping over all user cluster IPAM pools allocations in an addon te ``` ## MetalLB Addon Integration + KKP provides a [MetalLB](https://metallb.universe.tf/) [accessible addon]({{< relref "../../../architecture/concept/kkp-concepts/addons/#accessible-addons" >}}) integrated with the Multi-Cluster IPAM feature. The addon deploys standard MetalLB manifests into the user cluster. On top of that, if an IPAM allocation from an IPAM pool with a specific name is available for the user-cluster, the addon automatically installs the equivalent MetalLB IP address pool in the user cluster (in the `IPAddressPool` custom resource from the `metallb.io/v1beta1` API). The KKP `IPAMPool` from which the allocations are made need to have the following name: + - `metallb` if a single-stack (either IPv4 or IPv6) IP address pool needs to be created in the user cluster. - `metallb-ipv4` and `metallb-ipv6` if a dual-stack (both IPv4 and IPv6) IP address pool needs to be created in the user cluster. In this case, allocations from both address pools need to exist. The created [`IPAddressPool`](https://metallb.universe.tf/configuration/#defining-the-ips-to-assign-to-the-load-balancer-services) custom resource (from the `metallb.io/v1beta1` API) will have the following name: + - `kkp-managed-pool` in case of a single-stack address pool, - `kkp-managed-pool-dualstack` in case of a dual-stack address pool. diff --git a/content/kubermatic/main/tutorials-howtos/networking/multus/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/multus/_index.en.md index 34fd8c9c5..321684d20 100644 --- a/content/kubermatic/main/tutorials-howtos/networking/multus/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/networking/multus/_index.en.md @@ -7,17 +7,20 @@ weight = 160 The Multus-CNI Addon allows automated installation of [Multus-CNI](https://github.com/k8snetworkplumbingwg/multus-cni) in KKP user clusters. ## About Multus-CNI + Multus-CNI enables attaching multiple network interfaces to pods in Kubernetes. It is not a standard CNI plugin - it acts as a CNI "meta-plugin", a CNI that can call multiple other CNI plugins. 
This implies that clusters still need a primary CNI to function properly. In KKP, Multus can be installed into user clusters with any [supported CNI]({{< relref "../cni-cluster-network/" >}}). Multus addon can be deployed into a user cluster with a working primary CNI at any time. ## Installing the Multus Addon in KKP + Before this addon can be deployed in a KKP user cluster, the KKP installation has to be configured to enable `multus` addon as an [accessible addon]({{< relref "../../../architecture/concept/kkp-concepts/addons/#accessible-addons" >}}). This needs to be done by the KKP installation administrator, once per KKP installation. As an administrator you can use the [AddonConfig](#multus-addonconfig) listed at the end of this page. ## Deploying the Multus Addon in a KKP User Cluster + Once the Multus Addon is installed in KKP, it can be deployed into a user cluster via the KKP UI as shown below: ![Multus Addon](@/images/ui/addon-multus.png?height=400px&classes=shadow,border "Multus Addon") @@ -25,6 +28,7 @@ Once the Multus Addon is installed in KKP, it can be deployed into a user cluste Multus will automatically configure itself with the primary CNI running in the user cluster. If the primary CNI is not yet running at the time of Multus installation, Multus will wait for it for up to 10 minutes. ## Using Multus CNI + When Multus addon is installed, all pods will be still managed by the primary CNI. At this point, it is possible to define additional networks with `NetworkAttachmentDefinition` custom resources. As an example, the following `NetworkAttachmentDefinition` defines a network named `macvlan-net` managed by the [macvlan CNI plugin](https://www.cni.dev/plugins/current/main/macvlan/) (a simple standard CNI plugin usually installed together with the primary CNIs): @@ -97,6 +101,7 @@ $ kubectl exec -it samplepod -- ip address ``` ## Multus AddonConfig + As an KKP administrator, you can use the following AddonConfig for Multus to display Multus logo in the addon list in KKP UI: ```yaml diff --git a/content/kubermatic/main/tutorials-howtos/networking/proxy-whitelisting/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/proxy-whitelisting/_index.en.md index 647ba8494..335a85b49 100644 --- a/content/kubermatic/main/tutorials-howtos/networking/proxy-whitelisting/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/networking/proxy-whitelisting/_index.en.md @@ -225,32 +225,38 @@ projects.registry.vmware.com/vmware-cloud-director/cloud-director-named-disk-csi ``` ### OS Resources -Additional to the kubelet dependencies, the [OperatingSystemManager](https://docs.kubermatic.com/operatingsystemmanager) installs some operating-system-specific packages over cloud-init: +Additional to the kubelet dependencies, the [OperatingSystemManager](https://docs.kubermatic.com/operatingsystemmanager) installs some operating-system-specific packages over cloud-init: ### Flatcar Linux + Init script: [osp-flatcar-cloud-init.yaml](https://github.com/kubermatic/operating-system-manager/blob/main/deploy/osps/default/osp-flatcar-cloud-init.yaml) - no additional targets ### Ubuntu 20.04/22.04/24.04 + Init script: [osp-ubuntu.yaml](https://github.com/kubermatic/operating-system-manager/blob/main/deploy/osps/default/osp-ubuntu.yaml) - default apt repositories - docker apt repository: `download.docker.com/linux/ubuntu` ### Other OS + Other supported operating system details are visible by the dedicated [default 
OperatingSystemProfiles](https://github.com/kubermatic/operating-system-manager/tree/main/deploy/osps/default). # KKP Seed Cluster Setup ## Cloud Provider API Endpoints + KKP interacts with the different cloud provider directly to provision the required infrastructure to manage Kubernetes clusters: ### AWS + API endpoint documentation: [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) KKP interacts in several ways with different cloud providers, e.g.: + - creating EC2 instances - creating security groups - access instance profiles @@ -263,7 +269,9 @@ ec2.eu-central-1.amazonaws.com ``` ### Azure + API endpoint documentation: [Azure API Docs - Request URI](https://docs.microsoft.com/en-us/rest/api/azure/#request-uri) + ```bash # Resource Manager API management.azure.com @@ -275,8 +283,8 @@ login.microsoftonline.com ``` ### vSphere -API Endpoint URL of all targeted vCenters specified in [seed cluster `spec.datacenters.EXAMPLEDC.vsphere.endpoint`]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}), e.g. `vcenter.example.com`. +API Endpoint URL of all targeted vCenters specified in [seed cluster `spec.datacenters.EXAMPLEDC.vsphere.endpoint`]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}), e.g. `vcenter.example.com`. ## KubeOne Seed Cluster Setup @@ -299,7 +307,9 @@ github.com/containernetworking/plugins/releases/download # gobetween (if used, e.g. at vsphere terraform setup) github.com/yyyar/gobetween/releases ``` + **At installer host / bastion server**: + ```bash ## terraform modules registry.terraform.io @@ -313,6 +323,7 @@ quay.io/kubermatic-labs/kubeone-tooling ``` ## cert-manager (if used) + For creating certificates with let's encrypt we need access: ```bash diff --git a/content/kubermatic/main/tutorials-howtos/oidc-provider-configuration/_index.en.md b/content/kubermatic/main/tutorials-howtos/oidc-provider-configuration/_index.en.md index 2553a06d5..17c2073ee 100644 --- a/content/kubermatic/main/tutorials-howtos/oidc-provider-configuration/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/oidc-provider-configuration/_index.en.md @@ -107,13 +107,13 @@ spec: "oidc_logout_url": "https://keycloak.kubermatic.test/auth/realms/test/protocol/openid-connect/logout" } ``` + {{% notice note %}} When the user token size exceeds the browser's cookie size limit (e.g., when the user is a member of many groups), the token is split across multiple cookies to ensure proper authentication. External tools outside of KKP (e.g., Kubernetes Dashboard, Grafana, Prometheus) are not supported with multi-cookie tokens. {{% /notice %}} - ### Seed Configuration In some cases a Seed may require an independent OIDC provider. For this reason a `Seed` CRD contains relevant fields under `spec.oidcProviderConfiguration`. Filling those fields results in overwriting a configuration from `KubermaticConfiguration` CRD. The following snippet presents an example of `Seed` CRD configuration: @@ -138,7 +138,5 @@ reconfigure the components accordingly. After a few seconds the new pods should running. {{% notice note %}} -If you are using _Keycloak_ as a custom OIDC provider, make sure that you set the option `Implicit Flow Enabled: On` -on the `kubermatic` and `kubermaticIssuer` clients. Without this option, you won't be properly -redirected to the login page. 
+If you are using *Keycloak* as a custom OIDC provider, make sure that you set the option `Implicit Flow Enabled: On` on the `kubermatic` and `kubermaticIssuer` clients. Without this option, you won't be properly redirected to the login page. {{% /notice %}} diff --git a/content/kubermatic/main/tutorials-howtos/oidc-provider-configuration/share-clusters-via-delegated-oidc-authentication/_index.en.md b/content/kubermatic/main/tutorials-howtos/oidc-provider-configuration/share-clusters-via-delegated-oidc-authentication/_index.en.md index f4d044c5f..5dcddc054 100644 --- a/content/kubermatic/main/tutorials-howtos/oidc-provider-configuration/share-clusters-via-delegated-oidc-authentication/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/oidc-provider-configuration/share-clusters-via-delegated-oidc-authentication/_index.en.md @@ -143,4 +143,5 @@ kubectl -n kubermatic apply -f kubermaticconfig.yaml After the operator has reconciled the KKP installation, OIDC auth will become available. ### Grant Permission to an OIDC group + Please take a look at [Cluster Access - Manage Group's permissions]({{< ref "../../cluster-access#manage-group-permissions" >}}) diff --git a/content/kubermatic/main/tutorials-howtos/opa-integration/_index.en.md b/content/kubermatic/main/tutorials-howtos/opa-integration/_index.en.md index 734556c78..6e776ff78 100644 --- a/content/kubermatic/main/tutorials-howtos/opa-integration/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/opa-integration/_index.en.md @@ -17,7 +17,6 @@ policy engine. More info about OPA and Gatekeeper can be read from their docs and tutorials, but the general idea is that by using the Constraint Template CRD the users can create rule templates whose parameters are then filled out by the corresponding Constraints. - ## How to activate OPA Integration on your Cluster The integration is specific per user cluster, meaning that it is activated by a flag in the cluster spec. @@ -44,6 +43,7 @@ Constraint Templates are managed by the Kubermatic platform admins. Kubermatic i Kubermatic CT's which designated controllers to reconcile to the seed and to user cluster with activated OPA integration as Gatekeeper CT's. Example of a Kubermatic Constraint Template: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: ConstraintTemplate @@ -94,7 +94,6 @@ Constraints need to be associated with a Constraint Template. To add a new constraint click on the `+ Add Constraint` icon on the right at the bottom of cluster view. A new dialog will appear, where you can specify the name, the constraint template, and the spec: Spec is the only field that needs to be filled with a yaml. - ![Add Constraints Dialog](@/images/ui/opa-add-constraint.png?height=350px&classes=shadow,border "Add Constraints Dialog") `Note: You can now manage Default Constraints from the Admin Panel.` @@ -134,6 +133,7 @@ Kubermatic operator/admin creates a Constraint in the admin panel, it gets propa The following example is regarding `Restricting escalation to root privileges` in Pod Security Policy but implemented as Constraints and Constraint Templates with Gatekeeper. Constraint Templates + ```yaml crd: spec: @@ -169,6 +169,7 @@ selector: ``` Constraint + ```yaml constraintType: K8sPSPAllowPrivilegeEscalationContainer match: @@ -291,6 +292,7 @@ selector: matchLabels: filtered: 'true' ``` + ### Deleting Default Constraint Deleting Default Constraint causes all related Constraints on the user clusters to be deleted as well. 
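Besides the Admin Panel and the API, a Default Constraint can also be removed with kubectl on the master cluster. The sketch below is hedged: it assumes Default Constraints are stored as `Constraint` objects in the `kubermatic` namespace, so verify the namespace in your installation first:

```bash
# Assumption: Default Constraints live as Constraint objects in the kubermatic namespace of the master cluster
kubectl -n kubermatic get constraints.kubermatic.k8c.io
kubectl -n kubermatic delete constraints.kubermatic.k8c.io <constraint-name>
```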
@@ -316,6 +318,7 @@ OPA matches these prefixes with the Pods container `image` field and if it match They are cluster-scoped and reside in the KKP Master cluster. Example of a AllowedRegistry: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: AllowedRegistry @@ -396,7 +399,8 @@ For the existing `allowedregistry` [Default Constraint]({{< ref "#default-constr When a user tries to create a Pod with an image coming from a registry that is not prefixed by one of the AllowedRegistries, they will get a similar error: -``` + +```bash container has an invalid image registry , allowed image registries are ["quay.io"] ``` @@ -419,7 +423,7 @@ You can manage the config in the user cluster view, per user cluster. OPA integration on a user cluster can simply be removed by disabling the OPA Integration flag on the Cluster object. Be advised that this action removes all Constraint Templates, Constraints, and Config related to the cluster. -**Exempting Namespaces** +### Exempting Namespaces `gatekeeper-system` and `kube-system` namespace are by default entirely exempted from Gatekeeper webhook which means they are exempted from the Admission Webhook and Auditing. diff --git a/content/kubermatic/main/tutorials-howtos/opa-integration/via-ui/_index.en.md b/content/kubermatic/main/tutorials-howtos/opa-integration/via-ui/_index.en.md index 4a6570398..ccd503541 100644 --- a/content/kubermatic/main/tutorials-howtos/opa-integration/via-ui/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/opa-integration/via-ui/_index.en.md @@ -14,6 +14,7 @@ As an admin, you will find a few options in the `Admin Panel`. You can access th ![Access Admin Panel](@/images/ui/admin-panel.png?classes=shadow,border "Accessing the Admin Panel") In here you can see the `OPA Options` with two checkboxes attached. + - `Enable by Default`: Set the `OPA Integration` checkbox on cluster creation to enabled by default. - `Enforce`: Enable to make users unable to edit the checkbox. @@ -30,13 +31,13 @@ Here you navigate to the OPA menu and then to Default Constraints. ## Cluster Details View The cluster details view is extended by some more information if OPA is enabled. + - `OPA Integration` in the top area is indicating if OPA is enabled or not. - `OPA Gatekeeper Controller` and `OPA Gatekeeper Audit` provide information about the status of those controllers. - `OPA Constraints` and `OPA Gatekeeper Config` are added to the tab menu on the bottom. More details are in the following sections. ![Cluster Details View](@/images/ui/opa-cluster-view.png?classes=shadow,border "Cluster Details View") - ## Activating OPA To create a new cluster with OPA enabled you only have to enable the `OPA Integration` checkbox during the cluster creation process. It is placed in Step 2 `Cluster` and can be enabled by default as mentioned in the [Admin Panel for OPA Options]({{< ref "#admin-panel-for-opa-options" >}}) section. @@ -69,6 +70,7 @@ Spec is the only field that needs to be filled with a yaml. ![Add Constraint Template](@/images/ui/opa-admin-add-ct.png?classes=shadow,border&height=350px "Constraint Template Add Dialog") The following example requires all labels that are described by the constraint to be present: + ```yaml crd: spec: @@ -116,6 +118,7 @@ To add a new constraint click on the `+ Add Constraint` icon on the right. 
A new ![Add Constraints Dialog](@/images/ui/opa-add-constraint.png?classes=shadow,border "Add Constraints Dialog") The following example will make sure that the gatekeeper label is defined on all namespaces, if you are using the `K8sRequiredLabels` constraint template from above: + ```yaml match: kinds: @@ -238,10 +241,8 @@ In Admin View to disable Default Constraints, click on the green button under `O Kubermatic adds a label `disabled: true` to the Disabled Constraint ![Disabled Default Constraint](@/images/ui/default-constraint-default-true.png?height=400px&classes=shadow,border "Disabled Default Constraint") - ![Disabled Default Constraint](@/images/ui/disabled-default-constraint-cluster-view.png?classes=shadow,border "Disabled Default Constraint") - Enable the constraint by clicking the same button ![Enable Default Constraint](@/images/ui/disabled-default-constraint.png?classes=shadow,border "Enable Default Constraint") @@ -287,7 +288,8 @@ Click on this button to create a config. A new dialog will appear, where you can ![Add Gatekeeper Config](@/images/ui/opa-add-config.png?height=350px&classes=shadow,border "Add Gatekeeper Config") The following example will dynamically update what objects are synced: -``` + +```yaml sync: syncOnly: - group: "" diff --git a/content/kubermatic/main/tutorials-howtos/operating-system-manager/_index.en.md b/content/kubermatic/main/tutorials-howtos/operating-system-manager/_index.en.md index c6a8479ef..593556851 100644 --- a/content/kubermatic/main/tutorials-howtos/operating-system-manager/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/operating-system-manager/_index.en.md @@ -44,7 +44,7 @@ Its dedicated controller runs in the **seed** cluster, in user cluster namespace For each cluster there are at least two OSC objects: 1. **Bootstrap**: OSC used for initial configuration of machine and to fetch the provisioning OSC object. -2. **Provisioning**: OSC with the actual cloud-config that provision the worker node. +1. **Provisioning**: OSC with the actual cloud-config that provision the worker node. OSCs are processed by controllers to eventually generate **secrets inside each user cluster**. These secrets are then consumed by worker nodes. 
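To see this in practice, the generated objects can be listed directly. This is a hedged sketch; `cluster-<cluster-id>` on the seed and `cloud-init-settings` in the user cluster are the usual locations, but verify them in your installation:

```bash
# On the seed cluster: OSCs are created in the user cluster's namespace
kubectl -n cluster-<cluster-id> get operatingsystemconfigs

# In the user cluster: the rendered cloud-config secrets consumed by the worker nodes
kubectl -n cloud-init-settings get secrets
```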
diff --git a/content/kubermatic/main/tutorials-howtos/operating-system-manager/compatibility/_index.en.md b/content/kubermatic/main/tutorials-howtos/operating-system-manager/compatibility/_index.en.md index c0196d331..4df4a2450 100644 --- a/content/kubermatic/main/tutorials-howtos/operating-system-manager/compatibility/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/operating-system-manager/compatibility/_index.en.md @@ -16,25 +16,25 @@ The following operating systems are currently supported by the default Operating ## Operating System -| | Ubuntu | Flatcar | Amazon Linux 2 | RHEL | Rocky Linux | -|---|---|---|---|---|---| -| AWS | ✓ | ✓ | ✓ | ✓ | ✓ | -| Azure | ✓ | ✓ | x | ✓ | ✓ | -| DigitalOcean | ✓ | x | x | x | ✓ | -| Equinix Metal | ✓ | ✓ | x | x | ✓ | -| Google Cloud Platform | ✓ | ✓ | x | x | x | -| Hetzner | ✓ | x | x | x | ✓ | -| KubeVirt | ✓ | ✓ | x | ✓ | ✓ | -| Nutanix | ✓ | x | x | x | x | -| Openstack | ✓ | ✓ | x | ✓ | ✓ | -| VMware Cloud Director | ✓ | ✓ | x | x | x | -| VSphere | ✓ | ✓ | x | ✓ | ✓ | +| | Ubuntu | Flatcar | Amazon Linux 2 | RHEL | Rocky Linux | +| --------------------- | ------ | ------- | -------------- | ---- | ----------- | +| AWS | ✓ | ✓ | ✓ | ✓ | ✓ | +| Azure | ✓ | ✓ | x | ✓ | ✓ | +| DigitalOcean | ✓ | x | x | x | ✓ | +| Equinix Metal | ✓ | ✓ | x | x | ✓ | +| Google Cloud Platform | ✓ | ✓ | x | x | x | +| Hetzner | ✓ | x | x | x | ✓ | +| KubeVirt | ✓ | ✓ | x | ✓ | ✓ | +| Nutanix | ✓ | x | x | x | x | +| Openstack | ✓ | ✓ | x | ✓ | ✓ | +| VMware Cloud Director | ✓ | ✓ | x | x | x | +| VSphere | ✓ | ✓ | x | ✓ | ✓ | ## Kubernetes Versions Currently supported K8S versions are: -* 1.33 -* 1.32 -* 1.31 -* 1.30 +- 1.33 +- 1.32 +- 1.31 +- 1.30 diff --git a/content/kubermatic/main/tutorials-howtos/operating-system-manager/usage/_index.en.md b/content/kubermatic/main/tutorials-howtos/operating-system-manager/usage/_index.en.md index 21932998a..f44c34c2b 100644 --- a/content/kubermatic/main/tutorials-howtos/operating-system-manager/usage/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/operating-system-manager/usage/_index.en.md @@ -45,69 +45,68 @@ To use custom OperatingSystemProfiles, users can do the following: 1. Create their `CustomOperatingSystemProfile` resource on the seed cluster in the `kubermatic` namespace. These resources will be automatically synced to the `kube-system` namespace of the user-clusters. -```yaml -apiVersion: operatingsystemmanager.k8c.io/v1alpha1 -kind: CustomOperatingSystemProfile -metadata: - name: osp-install-curl - namespace: kubermatic -spec: - osName: "ubuntu" - osVersion: "20.04" - version: "v1.0.0" - provisioningUtility: "cloud-init" - supportedCloudProviders: - - name: "aws" - bootstrapConfig: - files: - - path: /opt/bin/bootstrap - permissions: 755 - content: - inline: - encoding: b64 - data: | - #!/bin/bash - - apt update && apt install -y curl jq - - - path: /etc/systemd/system/bootstrap.service - permissions: 644 - content: - inline: - encoding: b64 - data: | - [Install] - WantedBy=multi-user.target - - [Unit] - Requires=network-online.target - After=network-online.target - [Service] - Type=oneshot - RemainAfterExit=true - ExecStart=/opt/bin/bootstrap - - modules: - runcmd: - - systemctl restart bootstrap.service - - provisioningConfig: - files: - - path: /opt/hello-world - permissions: 644 - content: - inline: - encoding: b64 - data: echo "hello world" -``` - -2. Create `OperatingSystemProfile` resources in the `kube-system` namespace of the user cluster, after cluster creation. 
+ ```yaml + apiVersion: operatingsystemmanager.k8c.io/v1alpha1 + kind: CustomOperatingSystemProfile + metadata: + name: osp-install-curl + namespace: kubermatic + spec: + osName: "ubuntu" + osVersion: "20.04" + version: "v1.0.0" + provisioningUtility: "cloud-init" + supportedCloudProviders: + - name: "aws" + bootstrapConfig: + files: + - path: /opt/bin/bootstrap + permissions: 755 + content: + inline: + encoding: b64 + data: | + #!/bin/bash + + apt update && apt install -y curl jq + + - path: /etc/systemd/system/bootstrap.service + permissions: 644 + content: + inline: + encoding: b64 + data: | + [Install] + WantedBy=multi-user.target + + [Unit] + Requires=network-online.target + After=network-online.target + [Service] + Type=oneshot + RemainAfterExit=true + ExecStart=/opt/bin/bootstrap + + modules: + runcmd: + - systemctl restart bootstrap.service + + provisioningConfig: + files: + - path: /opt/hello-world + permissions: 644 + content: + inline: + encoding: b64 + data: echo "hello world" + ``` + +1. Create `OperatingSystemProfile` resources in the `kube-system` namespace of the user cluster, after cluster creation. {{% notice note %}} OSM uses a dedicated resource CustomOperatingSystemProfile in seed cluster. These CustomOperatingSystemProfiles are converted to OperatingSystemProfiles and then propagated to the user clusters. {{% /notice %}} - ## Updating existing OperatingSystemProfiles OSPs are immutable by design and any modifications to an existing OSP requires a version bump in `.spec.version`. Users can create custom OSPs in the seed namespace or in the user cluster and manage them. diff --git a/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/_index.en.md b/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/_index.en.md index 67b0291ac..5b3f2ec9c 100644 --- a/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/_index.en.md @@ -21,14 +21,12 @@ You can assign key-label pairs to your projects. These will be inherited by the After you click `Save`, the project will be created. If you click on it now, you will see options for adding clusters, managing project members, service accounts and SSH keys. - ### Delete a Project To delete a project, move the cursor over the line with the project name and click the trash bucket icon. ![Delete Project](images/project-delete.png?classes=shadow,border "Delete Project") - ### Add an SSH Key If you want to ssh into the project VMs, you need to provide your SSH public key. SSH keys are tied to a project. During cluster creation you can choose which SSH keys should be added to nodes. To add an SSH key, navigate to `SSH Keys` in the Dashboard and click on `Add SSH Key`: @@ -41,7 +39,6 @@ This will create a pop up. Enter a unique name and paste the complete content of After you click on `Add SSH key`, your key will be created and you can now add it to clusters in the same project. - ## Manage Clusters ### Create Cluster @@ -68,7 +65,6 @@ Disabling the User SSH Key Agent at this point can not be reverted after the clu ![General Cluster Settings](images/wizard-step-2.png?classes=shadow,border "General Cluster Settings") - In the next step of the installer, enter the credentials for the chosen provider. 
A good option is to use [Presets]({{< ref "../administration/presets/" >}}) instead putting in credentials for every cluster creation: ![Provider Credentials](images/wizard-step-3.png?classes=shadow,border "Provider Credentials") @@ -126,7 +122,6 @@ To confirm the deletion, type the name of the cluster into the text box: The cluster will switch into deletion state afterwards, and will be removed from the list when the deletion succeeds. - ## Add a New Machine Deployment To add a new machine deployment navigate to your cluster view and click on the `Add Machine Deployment` button: diff --git a/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/using-kubectl/_index.en.md b/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/using-kubectl/_index.en.md index 480584fb1..244f91c89 100644 --- a/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/using-kubectl/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/using-kubectl/_index.en.md @@ -4,7 +4,6 @@ date = 2019-11-13T12:07:15+02:00 weight = 70 +++ - Using kubectl requires the installation of kubectl on your system as well as downloading of kubeconfig on the cluster UI page. See the [Official kubectl Install Instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for a tutorial on how to install kubectl on your system. Once you have installed it, download the kubeconfig. The below steps will guide you on how to download a kubeconfig. @@ -20,10 +19,8 @@ Users in the groups `Owner` and `Editor` have an admin token in their kubeconfig ![Revoke the token](revoke-token-dialog.png?classes=shadow,border "Revoke the token") - Once you have installed the kubectl and downloaded the kubeconfig, change into the download directory and export it to your environment: - ```bash $ export KUBECONFIG=$PWD/kubeconfig-admin-czmg7r2sxm $ kubectl version diff --git a/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/web-terminal/_index.en.md b/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/web-terminal/_index.en.md index 66637815f..f9d5b5350 100644 --- a/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/web-terminal/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/project-and-cluster-management/web-terminal/_index.en.md @@ -53,7 +53,7 @@ spec: KKP nginx ingress controller is configured with 1 hour proxy timeout to support long-lasting connections of webterminal. In case that you use a different ingress controller in your setup, you may need to extend the timeouts for the `kubermatic` ingress - e.g. 
in case of nginx ingress controller, you can add these annotations to have a 1 hour "read" and "send" timeouts: -``` +```yaml nginx.ingress.kubernetes.io/proxy-read-timeout: "3600" nginx.ingress.kubernetes.io/proxy-send-timeout: "3600" ``` diff --git a/content/kubermatic/main/tutorials-howtos/storage/disable-csi-driver/_index.en.md b/content/kubermatic/main/tutorials-howtos/storage/disable-csi-driver/_index.en.md index f49aba51a..83eda571f 100644 --- a/content/kubermatic/main/tutorials-howtos/storage/disable-csi-driver/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/storage/disable-csi-driver/_index.en.md @@ -11,7 +11,7 @@ KKP installs the CSI drivers on user clusters that have the external CCM enabled To disable the CSI driver installation for all user clusters in a data center the admin needs to set ` disableCsiDriver: true` in the data center spec in the seed resource. -``` +```yaml apiVersion: kubermatic.k8c.io/v1 kind: Seed metadata: @@ -38,7 +38,7 @@ This will not impact the clusters which were created prior to enabling this opti To disable the CSI driver installation for a user cluster, admin needs to set `disableCsiDriver: true` in the cluster spec, this is possible only if it is not disabled at the data center. -``` +```yaml apiVersion: kubermatic.k8c.io/v1 kind: Cluster metadata: diff --git a/content/kubermatic/main/tutorials-howtos/telemetry/_index.en.md b/content/kubermatic/main/tutorials-howtos/telemetry/_index.en.md index 6dcbf4360..35ee7dc16 100644 --- a/content/kubermatic/main/tutorials-howtos/telemetry/_index.en.md +++ b/content/kubermatic/main/tutorials-howtos/telemetry/_index.en.md @@ -12,6 +12,7 @@ Telemetry helm chart can be found in the [Kubermatic repository](https://github. ## Installation ### Kubermatic Installer + Telemetry will be enabled by default if you use the Kubermatic installer to deploy KKP. For more information about how to use the Kubermatic installer to deploy KKP, please refer to the [installation guide]({{< relref "../../installation/" >}}). Kubermatic installer will use a `values.yaml` file to configure all Helm charts, including Telemetry. The following is an example of the configuration of the Telemetry tool: @@ -35,22 +36,29 @@ Then you can use the Kubermatic installer to install KKP by using the following ``` After this command finishes, a CronJob will be created in the `telemetry-system` namespace on the master cluster. The CronJob includes the following components: + - Agents, including Kubermatic Agent and Kubernetes Agent. They will collect data based on the predefined report schema. Each agent will collect data as an initContainer and write data to local storage. -- Reporter. It will aggregate data that was collected by Agents from local storage, and send it to the public Telemetry endpoint (https://telemetry.k8c.io) based on the `schedule` you defined in the `values.yaml` (or once per day by default). +- Reporter. It will aggregate data that was collected by Agents from local storage, and send it to the public [Telemetry endpoint](https://telemetry.k8c.io) based on the `schedule` you defined in the `values.yaml` (or once per day by default). 
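To verify the installation, check that the CronJob exists on the master cluster, for example:

```bash
# The Telemetry CronJob is created in the telemetry-system namespace on the master cluster
kubectl -n telemetry-system get cronjobs
```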
### Helm Chart + Telemetry can also be installed by using Helm chart, which is included in the release, prepare a `values.yaml` as we mentioned in the previous section, and install it on the master cluster by using the following command: + ```bash helm --namespace telemetry-system upgrade --atomic --create-namespace --install telemetry /path/to/telemetry/chart --values values.yaml ``` ## Disable Telemetry + If you don’t want to send usage data to us to improve our product, or your KKP will be running in offline mode which doesn’t have access to the public Telemetry endpoint, you can disable it by using `--disable-telemetry` flag as following: + ```bash ./kubermatic-installer deploy --disable-telemetry --config kubermatic.yaml --helm-values values.yaml ``` ## Data that Telemetry Collects + Telemetry tool collects the following metadata in an anonymous manner with UUIDs, the data schemas can be found in [Telemetry-Client repository](https://github.com/kubermatic/telemetry-client): + - For Kubermatic usage: [Kubermatic Record](https://github.com/kubermatic/telemetry-client/blob/release/v0.3/pkg/agent/kubermatic/v2/record.go) - For Kubernetes usage: [Kubernetes Record](https://github.com/kubermatic/telemetry-client/blob/release/v0.3/pkg/agent/kubernetes/v2/record.go) diff --git a/content/kubermatic/v2.28/_index.en.md b/content/kubermatic/v2.28/_index.en.md index 0ead10bba..0b8012732 100644 --- a/content/kubermatic/v2.28/_index.en.md +++ b/content/kubermatic/v2.28/_index.en.md @@ -17,10 +17,12 @@ KKP is the easiest and most effective software for managing cloud native IT infr ## Features -#### Powerful & Intuitive Dashboard to Visualize your Kubernetes Deployment +### Powerful & Intuitive Dashboard to Visualize your Kubernetes Deployment + Manage your [projects and clusters with the KKP dashboard]({{< ref "./tutorials-howtos/project-and-cluster-management/" >}}). Scale your cluster by adding and removing nodes in just a few clicks. As an admin, the dashboard also allows you to [customize the theme]({{< ref "./tutorials-howtos/dashboard-customization/" >}}) and disable theming options for other users. -#### Deploy, Scale & Update Multiple Kubernetes Clusters +### Deploy, Scale & Update Multiple Kubernetes Clusters + Kubernetes environments must be highly distributed to meet the performance demands of modern cloud native applications. Organizations can ensure consistent operations across all environments with effective cluster management. KKP empowers you to take advantage of all the advanced features that Kubernetes has to offer and increases the speed, flexibility and scalability of your cloud deployment workflow. At Kubermatic, we have chosen to do multi-cluster management with Kubernetes Operators. Operators (a method of packaging, deploying and managing a Kubernetes application) allow KKP to automate creation as well as the full lifecycle management of clusters. With KKP you can create a cluster for each need, fine-tune it, reuse it and continue this process hassle-free. This results in: @@ -29,15 +31,17 @@ At Kubermatic, we have chosen to do multi-cluster management with Kubernetes Ope - Smaller individual clusters being more adaptable than one big cluster. - Faster development thanks to less complex environments. -#### Kubernetes Autoscaler Integration +### Kubernetes Autoscaler Integration + Autoscaling in Kubernetes refers to the ability to increase or decrease the number of nodes as the demand for service response changes. 
Without autoscaling, teams would manually first provision and then scale up or down resources every time conditions change. This means, either services fail at peak demand due to the unavailability of enough resources or you pay at peak capacity to ensure availability. [The Kubernetes Autoscaler in a cluster created by KKP]({{< ref "./tutorials-howtos/kkp-autoscaler/cluster-autoscaler/" >}}) can automatically scale up/down when one of the following conditions is satisfied: 1. Some pods fail to run in the cluster due to insufficient resources. -2. There are nodes in the cluster that have been underutilized for an extended period (10 minutes by default) and pods running on those nodes can be rescheduled to other existing nodes. +1. There are nodes in the cluster that have been underutilized for an extended period (10 minutes by default) and pods running on those nodes can be rescheduled to other existing nodes. + +### Manage all KKP Users Directly from a Single Panel -#### Manage all KKP Users Directly from a Single Panel The admin panel allows KKP administrators to manage the global settings that impact all KKP users directly. As an administrator, you can do the following: - Customize the way custom links (example: Twitter, Github, Slack) are displayed in the Kubermatic dashboard. @@ -46,32 +50,39 @@ The admin panel allows KKP administrators to manage the global settings that imp - Define Preset types in a Kubernetes Custom Resource Definition (CRD) allowing the assignment of new credential types to supported providers. - Enable and configure etcd backups for your clusters through Backup Buckets. -#### Manage Worker Nodes via the UI or the CLI +### Manage Worker Nodes via the UI or the CLI + Worker nodes can be managed [via the KKP web dashboard]({{< ref "./tutorials-howtos/manage-workers-node/via-ui/" >}}). Once you have installed kubectl, you can also manage them [via CLI]({{< ref "./tutorials-howtos/manage-workers-node/via-command-line" >}}) to automate the creation, deletion, and upgrade of nodes. -#### Monitoring, Logging & Alerting +### Monitoring, Logging & Alerting + When it comes to monitoring, no approach fits all use cases. KKP allows you to adjust things to your needs by enabling certain customizations to enable easy and tactical monitoring. KKP provides two different levels of Monitoring, Logging, and Alerting. 1. The first targets only the management components (master, seed, CRDs) and is independent. This is the Master/Seed Cluster MLA Stack and only the KKP Admins can access this monitoring data. -2. The other component is the User Cluster MLA Stack which is a true multi-tenancy solution for all your end-users as well as a comprehensive overview for the KKP Admin. It helps to speed up individual progress but lets the Admin keep an overview of the big picture. It can be configured per seed to match the requirements of the organizational structure. All users can access monitoring data of the user clusters under the projects that they are members of. +1. The other component is the User Cluster MLA Stack which is a true multi-tenancy solution for all your end-users as well as a comprehensive overview for the KKP Admin. It helps to speed up individual progress but lets the Admin keep an overview of the big picture. It can be configured per seed to match the requirements of the organizational structure. All users can access monitoring data of the user clusters under the projects that they are members of. 
Integrated Monitoring, Logging and Alerting functionality for applications and services in KKP user clusters are built using Prometheus, Loki, Cortex and Grafana. Furthermore, this can be enabled with a single click on the KKP UI. -#### OIDC Provider Configuration +### OIDC Provider Configuration + Since Kubernetes does not provide an OpenID Connect (OIDC) Identity Provider, KKP allows the user to configure a custom OIDC. This way you can grant access and information to the right stakeholders and fulfill security requirements by managing user access in a central identity provider across your whole infrastructure. -#### Easily Upgrading Control Plane and Nodes +### Easily Upgrading Control Plane and Nodes + A specific version of Kubernetes’ control plane typically supports a specific range of kubelet versions connected to it. KKP enforces the rule “kubelet must not be newer than kube-apiserver, and maybe up to two minor versions older” on its own. KKP ensures this rule is followed by checking during each upgrade of the clusters’ control plane or node’s kubelet. Additionally, only compatible versions are listed in the UI as available for upgrades. -#### Open Policy Agent (OPA) +### Open Policy Agent (OPA) + To enforce policies and improve governance in Kubernetes, Open Policy Agent (OPA) can be used. KKP integrates it using OPA Gatekeeper as a kubernetes-native policy engine supporting OPA policies. As an admin you can enable and enforce OPA integration during cluster creation by default via the UI. -#### Cluster Templates +### Cluster Templates + Clusters can be created in a few clicks with the UI. To take the user experience one step further and make repetitive tasks redundant, cluster templates allow you to save data entered into a wizard to create multiple clusters from a single template at once. Templates can be saved to be used subsequently for new cluster creation. -#### Use Default Addons to Extend the Functionality of Kubernetes +### Use Default Addons to Extend the Functionality of Kubernetes + [Addons]({{< ref "./architecture/concept/kkp-concepts/addons/" >}}) are specific services and tools extending the functionality of Kubernetes. Default addons are installed in each user cluster in KKP. The KKP Operator comes with a tool to output full default KKP configuration, serving as a starting point for adjustments. Accessible addons can be installed in each user cluster in KKP on user demand. {{% notice tip %}} diff --git a/content/kubermatic/v2.28/architecture/compatibility/os-support-matrix/_index.en.md b/content/kubermatic/v2.28/architecture/compatibility/os-support-matrix/_index.en.md index 1eab1534c..98e979c8a 100644 --- a/content/kubermatic/v2.28/architecture/compatibility/os-support-matrix/_index.en.md +++ b/content/kubermatic/v2.28/architecture/compatibility/os-support-matrix/_index.en.md @@ -11,11 +11,12 @@ KKP supports a multitude of operating systems. 
One of the unique features of KKP The following operating systems are currently supported by Kubermatic: -* Ubuntu 20.04, 22.04 and 24.04 -* RHEL beginning with 8.0 (support is cloud provider-specific) -* Flatcar (Stable channel) -* Rocky Linux beginning with 8.0 -* Amazon Linux 2 +- Ubuntu 20.04, 22.04 and 24.04 +- RHEL beginning with 8.0 (support is cloud provider-specific) +- Flatcar (Stable channel) +- Rocky Linux beginning with 8.0 +- Amazon Linux 2 + **Note:** CentOS was removed as a supported OS in KKP 2.26.3 This table shows the combinations of operating systems and cloud providers that KKP supports: diff --git a/content/kubermatic/v2.28/architecture/compatibility/supported-versions/_index.en.md b/content/kubermatic/v2.28/architecture/compatibility/supported-versions/_index.en.md index bfc0b4eb0..17811a9f3 100644 --- a/content/kubermatic/v2.28/architecture/compatibility/supported-versions/_index.en.md +++ b/content/kubermatic/v2.28/architecture/compatibility/supported-versions/_index.en.md @@ -39,8 +39,7 @@ recommend upgrading to a supported Kubernetes release as soon as possible. Refer [Kubernetes website](https://kubernetes.io/releases/) for more information on the supported releases. -Upgrades from a previous Kubernetes version are generally supported whenever a version is -marked as supported, for example KKP 2.27 supports updating clusters from Kubernetes 1.30 to 1.31. +Upgrades from a previous Kubernetes version are generally supported whenever a version is marked as supported, for example KKP 2.27 supports updating clusters from Kubernetes 1.30 to 1.31. ## Provider Incompatibilities diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/_index.en.md index 04420a613..8fa6b641d 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/_index.en.md @@ -24,22 +24,22 @@ In general, we recommend the usage of Applications for workloads running inside Default addons are installed in each user-cluster in KKP. The default addons are: -* [Canal](https://github.com/projectcalico/canal): policy based networking for cloud native applications -* [Dashboard](https://github.com/kubernetes/dashboard): General-purpose web UI for Kubernetes clusters -* [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/): Kubernetes network proxy -* [rbac](https://kubernetes.io/docs/reference/access-authn-authz/rbac/): Kubernetes Role-Based Access Control, needed for +- [Canal](https://github.com/projectcalico/canal): policy based networking for cloud native applications +- [Dashboard](https://github.com/kubernetes/dashboard): General-purpose web UI for Kubernetes clusters +- [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/): Kubernetes network proxy +- [rbac](https://kubernetes.io/docs/reference/access-authn-authz/rbac/): Kubernetes Role-Based Access Control, needed for [TLS node bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) -* [OpenVPN client](https://openvpn.net/index.php/open-source/overview.html): virtual private network (VPN). Lets the control +- [OpenVPN client](https://openvpn.net/index.php/open-source/overview.html): virtual private network (VPN). Lets the control plan access the Pod & Service network. 
Required for functionality like `kubectl proxy` & `kubectl port-forward`. -* pod-security-policy: Policies to configure KKP access when PSPs are enabled -* default-storage-class: A cloud provider specific StorageClass -* kubeadm-configmap & kubelet-configmap: A set of ConfigMaps used by kubeadm +- pod-security-policy: Policies to configure KKP access when PSPs are enabled +- default-storage-class: A cloud provider specific StorageClass +- kubeadm-configmap & kubelet-configmap: A set of ConfigMaps used by kubeadm Installation and configuration of these addons is done by 2 controllers which are part of the KKP seed-controller-manager: -* `addon-installer-controller`: Ensures a given set of addons will be installed in all clusters -* `addon-controller`: Templates the addons & applies the manifests in the user clusters +- `addon-installer-controller`: Ensures a given set of addons will be installed in all clusters +- `addon-controller`: Templates the addons & applies the manifests in the user clusters The KKP binaries come with a `kubermatic-installer` tool, which can output a full default `KubermaticConfiguration` (`kubermatic-installer print`). This will also include the default configuration for addons and can serve as @@ -86,7 +86,7 @@ regular addons, which are always installed and cannot be removed by the user). I and accessible, then it will be installed in the user-cluster, but also be visible to the user, who can manage it from the KKP dashboard like the other accessible addons. The accessible addons are: -* [node-exporter](https://github.com/prometheus/node_exporter): Exports metrics from the node +- [node-exporter](https://github.com/prometheus/node_exporter): Exports metrics from the node Accessible addons can be managed in the UI from the cluster details view: @@ -256,6 +256,7 @@ spec: ``` There is a short explanation of the single `formSpec` fields: + - `displayName` is the name that is displayed in the UI as the control label. - `internalName` is the name used internally. It can be referenced with template variables (see the description below). - `required` indicates if the control should be required in the UI. @@ -317,7 +318,7 @@ the exact templating syntax. KKP injects an instance of the `TemplateData` struct into each template. The following Go snippet shows the available information: -``` +```plaintext {{< readfile "kubermatic/v2.28/data/addondata.go" >}} ``` diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/aws-node-termination-handler/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/aws-node-termination-handler/_index.en.md index c4191f0a8..e3e440b9d 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/aws-node-termination-handler/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/aws-node-termination-handler/_index.en.md @@ -32,6 +32,7 @@ AWS node termination handler is deployed with any aws user cluster created by KK cluster once the spot instance is interrupted. ## AWS Spot Instances Creation + To create a user cluster which runs some spot instance machines, the user can specify the machine type whether it's a spot instance or not at the step number four (Initial Nodes). A checkbox that has the label "Spot Instance" should be checked. 
diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/kubeflow/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/kubeflow/_index.en.md index da43a0e4f..d50274b7f 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/kubeflow/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/addons/kubeflow/_index.en.md @@ -28,12 +28,12 @@ Before this addon can be deployed in a KKP user cluster, the KKP installation ha as an [accessible addon](../#accessible-addons). This needs to be done by the KKP installation administrator, once per KKP installation. -* Request the KKP addon Docker image with Kubeflow Addon matching your KKP version from Kubermatic +- Request the KKP addon Docker image with Kubeflow Addon matching your KKP version from Kubermatic (or [build it yourself](../#creating-a-docker-image) from the [Flowmatic repository](https://github.com/kubermatic/flowmatic)). -* Configure KKP - edit `KubermaticConfiguration` as follows: - * modify `spec.userClusters.addons.kubernetes.dockerRepository` to point to the provided addon Docker image repository, - * add `kubeflow` into `spec.api.accessibleAddons`. -* Apply the [AddonConfig from the Flowmatic repository](https://raw.githubusercontent.com/kubermatic/flowmatic/master/addon/addonconfig.yaml) in your KKP installation. +- Configure KKP - edit `KubermaticConfiguration` as follows: + - modify `spec.userClusters.addons.kubernetes.dockerRepository` to point to the provided addon Docker image repository, + - add `kubeflow` into `spec.api.accessibleAddons`. +- Apply the [AddonConfig from the Flowmatic repository](https://raw.githubusercontent.com/kubermatic/flowmatic/master/addon/addonconfig.yaml) in your KKP installation. 
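Putting the `KubermaticConfiguration` edits listed above together, a hedged sketch could look like the following; the `dockerRepository` value is a placeholder and must point to the addon Docker image provided by Kubermatic (or the one you built yourself).

```yaml
apiVersion: kubermatic.k8c.io/v1
kind: KubermaticConfiguration
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  api:
    # Makes the kubeflow addon selectable from the KKP dashboard.
    accessibleAddons:
      - kubeflow
  userClusters:
    addons:
      kubernetes:
        # Placeholder: use the addon image repository that contains the Kubeflow addon.
        dockerRepository: quay.io/example/addons
```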
### Kubeflow prerequisites @@ -66,7 +66,8 @@ For a LoadBalancer service, an external IP address will be assigned by the cloud This address can be retrieved by reviewing the `istio-ingressgateway` Service in `istio-system` Namespace, e.g.: ```bash -$ kubectl get service istio-ingressgateway -n istio-system +kubectl get service istio-ingressgateway -n istio-system + NAME TYPE CLUSTER-IP EXTERNAL-IP istio-ingressgateway LoadBalancer 10.240.28.214 a286f5a47e9564e43ab4165039e58e5e-1598660756.eu-central-1.elb.amazonaws.com ``` @@ -162,33 +163,33 @@ This section contains a list of known issues in different Kubeflow components: **Kubermatic Kubernetes Platform** -* Not all GPU instances of various providers can be started from the KKP UI: +- Not all GPU instances of various providers can be started from the KKP UI: **Istio RBAC in Kubeflow:** -* If enabled, this issue can be hit in the pipelines: +- If enabled, this issue can be hit in the pipelines: **Kubeflow UI issues:** -* Error by adding notebook server: 500 Internal Server Error: +- Error by adding notebook server: 500 Internal Server Error: -* Experiment run status shows as unknown: +- Experiment run status shows as unknown: **Kale Pipeline:** -* "Namespace is empty" exception: +- "Namespace is empty" exception: **NVIDIA GPU Operator** -* Please see the official NVIDIA GPU documentation for known limitations: +- Please see the official NVIDIA GPU documentation for known limitations: **AMD GPU Support** -* The latest AMD GPU -enabled instances in AWS ([EC2 G4ad](https://aws.amazon.com/blogs/aws/new-amazon-ec2-g4ad-instances-featuring-amd-gpus-for-graphics-workloads/)) +- The latest AMD GPU -enabled instances in AWS ([EC2 G4ad](https://aws.amazon.com/blogs/aws/new-amazon-ec2-g4ad-instances-featuring-amd-gpus-for-graphics-workloads/)) featuring Radeon Pro V520 GPUs do not seem to be working with Kubeflow (yet). The GPUs are successfully attached to the pods but the notebook runtime does not seem to recognize them. diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/_index.en.md index af587dace..c41fc5d04 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/_index.en.md @@ -15,6 +15,7 @@ Currently, helm is exclusively supported as a templating method, but integration Helm Applications can both be installed from helm registries directly or from a git repository. ## Concepts + KKP manages Applications using two key mechanisms: [ApplicationDefinitions]({{< ref "./application-definition" >}}) and [ApplicationInstallations]({{< ref "./application-installation" >}}). `ApplicationDefinitions` are managed by KKP Admins and contain all the necessary information for an application's installation. diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-definition/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-definition/_index.en.md index a46abafde..87b5a9efd 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-definition/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-definition/_index.en.md @@ -8,8 +8,9 @@ weight = 1 An `ApplicationDefinition` represents a single Application and contains all its versions. 
It holds the necessary information to install an application. Two types of information are required to install an application: -* How to download the application's source (i.e Kubernetes manifest, helm chart...). We refer to this as `source`. -* How to render (i.e. templating) the application's source to install it into user-cluster. We refer to this as`templating method`. + +- How to download the application's source (i.e Kubernetes manifest, helm chart...). We refer to this as `source`. +- How to render (i.e. templating) the application's source to install it into user-cluster. We refer to this as`templating method`. Each version can have a different `source` (`.spec.version[].template.source`) but share the same `templating method` (`.spec.method`). Here is the minimal example of `ApplicationDefinition`. More advanced configurations are described in subsequent paragraphs. @@ -43,13 +44,17 @@ spec: In this example, the `ApplicationDefinition` allows the installation of two versions of apache using the [helm method](#helm-method). Notice that one source originates from a [Helm repository](#helm-source) and the other from a [git repository](#git-source) ## Templating Method + Templating Method describes how the Kubernetes manifests are being packaged and rendered. ### Helm Method + This method use [Helm](https://helm.sh/docs/) to install, upgrade and uninstall the application into the user-cluster. ## Templating Source + ### Helm Source + The Helm Source allows downloading the application's source from a Helm [HTTP repository](https://helm.sh/docs/topics/chart_repository/) or an [OCI repository](https://helm.sh/blog/storing-charts-in-oci/#helm). The following parameters are required: @@ -57,8 +62,8 @@ The following parameters are required: - `chartName` -> Name of the chart within the repository - `chartVersion` -> Version of the chart; corresponds to the chartVersion field - **Example of Helm source with HTTP repository:** + ```yaml - template: source: @@ -69,6 +74,7 @@ The following parameters are required: ``` **Example of Helm source with OCI repository:** + ```yaml - template: source: @@ -77,11 +83,12 @@ The following parameters are required: chartVersion: 1.13.0-rc5 url: oci://quay.io/kubermatic/helm-charts ``` + For private git repositories, please check the [working with private registries](#working-with-private-registries) section. Currently, the best way to obtain `chartName` and `chartVersion` for an HTTP repository is to make use of `helm search`: -```sh +```bash # initial preparation helm repo add helm repo update @@ -99,9 +106,11 @@ helm search repo prometheus-community/prometheus --versions --version ">=15" For OCI repositories, there is currently [no native helm search](https://github.com/helm/helm/issues/9983). Instead, you have to rely on the capabilities of your OCI registry. For example, harbor supports searching for helm-charts directly [in their UI](https://goharbor.io/docs/2.4.0/working-with-projects/working-with-images/managing-helm-charts/#list-charts). ### Git Source + The Git source allows you to download the application's source from a Git repository. **Example of Git Source:** + ```yaml - template: source: @@ -121,7 +130,6 @@ The Git source allows you to download the application's source from a Git reposi For private git repositories, please check the [working with private registries](#working-with-private-registries) section. 
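Picking up the note above that OCI repositories have no native `helm search`: if you already know the chart reference, recent Helm versions can usually inspect its metadata directly with `helm show chart` on an `oci://` URL. The registry path, chart name and version below are placeholders.

```bash
# Inspect chart metadata (name, version, appVersion) for a chart stored in an OCI registry.
helm show chart oci://quay.io/kubermatic/helm-charts/<chart-name> --version <chart-version>
```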
- ## Working With Private Registries For private registries, the Applications Feature supports storing credentials in Kubernetes secrets in the KKP master and referencing the secrets in your ApplicationDefinitions. @@ -134,67 +142,68 @@ In order for the controller to sync your secrets, they must be annotated with `a ### Git Repositories KKP supports three types of authentication for git repositories: -* `password`: authenticate with a username and password. -* `Token`: authenticate with a Bearer token -* `SSH-Key`: authenticate with an ssh private key. + +- `password`: authenticate with a username and password. +- `Token`: authenticate with a Bearer token +- `SSH-Key`: authenticate with an ssh private key. Their setup is comparable: 1. Create a secret containing our credentials + ```bash + # inside KKP master + + # user-pass + kubectl create secret -n generic --from-literal=pass= --from-literal=user= + + # token + kubectl create secret -n generic --from-literal=token= + + # ssh-key + kubectl create secret -n generic --from-literal=sshKey= + + # after creation, annotate + kubectl annotate secret apps.kubermatic.k8c.io/secret-type="git" + ``` + +1. Reference the secret in the ApplicationDefinition + ```yaml + spec: + versions: + - template: + source: + git: + path: + ref: + branch: + remote: # for ssh-key, an ssh url must be chosen (e.g. git@example.com/repo.git) + credentials: + method: + # user-pass + username: + key: user + name: + password: + key: pass + name: + # token + token: + key: token + name: + # ssh-key + sshKey: + key: sshKey + name: + ``` -```sh -# inside KKP master - -# user-pass -kubectl create secret -n generic --from-literal=pass= --from-literal=user= - -# token -kubectl create secret -n generic --from-literal=token= - -# ssh-key -kubectl create secret -n generic --from-literal=sshKey= - -# after creation, annotate -kubectl annotate secret apps.kubermatic.k8c.io/secret-type="git" -``` - -2. Reference the secret in the ApplicationDefinition - -```yaml -spec: - versions: - - template: - source: - git: - path: - ref: - branch: - remote: # for ssh-key, an ssh url must be chosen (e.g. git@example.com/repo.git) - credentials: - method: - # user-pass - username: - key: user - name: - password: - key: pass - name: - # token - token: - key: token - name: - # ssh-key - sshKey: - key: sshKey - name: -``` #### Compatibility Warning Be aware that all authentication methods may be available on your git server. More and more servers disable the authentication with username and password. More over on some providers like GitHub, to authenticate with an access token, you must use `password` method instead of `token`. Example of secret to authenticate with [GitHub access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#using-a-token-on-the-command-line): -```sh + +```bash kubectl create secret -n generic --from-literal=pass= --from-literal=user= ``` @@ -205,73 +214,71 @@ For other providers, please refer to their official documentation. [Helm OCI registries](https://helm.sh/docs/topics/registries/#enabling-oci-support) are being accessed using a JSON configuration similar to the `~/.docker/config.json` on the local machine. It should be noted, that all OCI server urls need to be prefixed with `oci://`. 1. 
Create a secret containing our credentials - -```sh -# inside KKP master -kubectl create secret -n docker-registry --docker-server= --docker-username= --docker-password= -kubectl annotate secret apps.kubermatic.k8c.io/secret-type="helm" - -# example -kubectl create secret -n kubermatic docker-registry --docker-server=harbor.example.com/my-project --docker-username=someuser --docker-password=somepaswword oci-cred -kubectl annotate secret oci-cred apps.kubermatic.k8c.io/secret-type="helm" -``` - -2. Reference the secret in the ApplicationDefinition - -```yaml -spec: - versions: - - template: - source: - helm: - chartName: examplechart - chartVersion: 0.1.0 - credentials: - registryConfigFile: - key: .dockerconfigjson # `kubectl create secret docker-registry` stores by default the creds under this key - name: - url: -``` + ```bash + # inside KKP master + kubectl create secret -n docker-registry --docker-server= --docker-username= --docker-password= + kubectl annotate secret apps.kubermatic.k8c.io/secret-type="helm" + + # example + kubectl create secret -n kubermatic docker-registry --docker-server=harbor.example.com/my-project --docker-username=someuser --docker-password=somepaswword oci-cred + kubectl annotate secret oci-cred apps.kubermatic.k8c.io/secret-type="helm" + ``` + +1. Reference the secret in the ApplicationDefinition + ```yaml + spec: + versions: + - template: + source: + helm: + chartName: examplechart + chartVersion: 0.1.0 + credentials: + registryConfigFile: + key: .dockerconfigjson # `kubectl create secret docker-registry` stores by default the creds under this key + name: + url: + ``` ### Helm Userpass Registries To use KKP Applications with a helm [userpass auth](https://helm.sh/docs/topics/registries/#auth) registry, you can configure the following: 1. Create a secret containing our credentials - -```sh -# inside KKP master -kubectl create secret -n generic --from-literal=pass= --from-literal=user= -kubectl annotate secret apps.kubermatic.k8c.io/secret-type="helm" -``` - -2. Reference the secret in the ApplicationDefinition - -```yaml -spec: - versions: - - template: - source: - helm: - chartName: examplechart - chartVersion: 0.1.0 - credentials: - password: - key: pass - name: - username: - key: user - name: - url: -``` + ```bash + # inside KKP master + kubectl create secret -n generic --from-literal=pass= --from-literal=user= + kubectl annotate secret apps.kubermatic.k8c.io/secret-type="helm" + ``` + +1. Reference the secret in the ApplicationDefinition + ```yaml + spec: + versions: + - template: + source: + helm: + chartName: examplechart + chartVersion: 0.1.0 + credentials: + password: + key: pass + name: + username: + key: user + name: + url: + ``` ### Templating Credentials + There is a particular case where credentials may be needed at the templating stage to render the manifests. For example, if the template method is `helm` and the source is git. To install the chart into the user cluster, we have to build the chart dependencies. These dependencies may be hosted on a private registry requiring authentication. You can specify the templating credentials by settings `.spec.version[].template.templateCredentials`. It works the same way as source credentials. **Example of template credentials:** + ```yaml spec: versions: @@ -293,7 +300,9 @@ spec: ``` ## Advanced Configuration + ### Default Values + The `.spec.defaultValuesBlock` field describes overrides for manifest-rendering in UI when creating an application. 
For example if the method is Helm, then this field contains the Helm values. **Example for helm values** @@ -308,20 +317,23 @@ spec: ``` ### Customize Deployment + You can tune how the application will be installed by setting `.spec.defaultDeployOptions`. The options depend on the template method (i.e. `.spec.method`). -*note: `defaultDeployOptions` can be overridden at `ApplicationInstallation` level by settings `.spec.deployOptions`* +*Note: `defaultDeployOptions` can be overridden at `ApplicationInstallation` level by settings `.spec.deployOptions`* #### Customize Deployment For Helm Method + You may tune how Helm deploys the application with the following options: -* `atomic`: corresponds to the `--atomic` flag on Helm CLI. If set, the installation process deletes the installation on failure; the upgrade process rolls back changes made in case of failed upgrade. -* `wait`: corresponds to the `--wait` flag on Helm CLI. If set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as `--timeout` -* `timeout`: corresponds to the `--timeout` flag on Helm CLI. It's time to wait for any individual Kubernetes operation. -* `enableDNS`: corresponds to the `-enable-dns ` flag on Helm CLI. It enables DNS lookups when rendering templates. if you enable this flag, you have to verify that helm template function 'getHostByName' is not being used in a chart to disclose any information you do not want to be passed to DNS servers.(c.f. [CVE-2023-25165](https://github.com/helm/helm/security/advisories/GHSA-pwcw-6f5g-gxf8)) +- `atomic`: corresponds to the `--atomic` flag on Helm CLI. If set, the installation process deletes the installation on failure; the upgrade process rolls back changes made in case of failed upgrade. +- `wait`: corresponds to the `--wait` flag on Helm CLI. If set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as `--timeout` +- `timeout`: corresponds to the `--timeout` flag on Helm CLI. It's time to wait for any individual Kubernetes operation. +- `enableDNS`: corresponds to the `-enable-dns ` flag on Helm CLI. It enables DNS lookups when rendering templates. if you enable this flag, you have to verify that helm template function 'getHostByName' is not being used in a chart to disclose any information you do not want to be passed to DNS servers.(c.f. [CVE-2023-25165](https://github.com/helm/helm/security/advisories/GHSA-pwcw-6f5g-gxf8)) Example: + ```yaml apiVersion: apps.kubermatic.k8c.io/v1 kind: ApplicationDefinition @@ -335,11 +347,11 @@ spec: timeout: "5m" ``` -*note: if `atomic` is true, then wait must be true. If `wait` is true then `timeout` must be defined.* - +*Note: if `atomic` is true, then wait must be true. If `wait` is true then `timeout` must be defined.* ## ApplicationDefinition Reference -**The following is an example of ApplicationDefinition, showing all the possible options**. 
+ +**The following is an example of ApplicationDefinition, showing all the possible options** ```yaml {{< readfile "kubermatic/v2.28/data/applicationDefinition.yaml" >}} diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-installation/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-installation/_index.en.md index 9755a1f95..a642e3999 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-installation/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-installation/_index.en.md @@ -10,6 +10,7 @@ An `ApplicationInstallation` is an instance of an application to install into us It abstracts away from the end user how to get the application deployment sources (i.e. the k8s manifests, helm chart...) and how to install them into the cluster, so that they can install and use the application with minimal knowledge of Kubernetes. ## Anatomy of an Application + ```yaml apiVersion: apps.kubermatic.k8c.io/v1 kind: ApplicationInstallation @@ -35,6 +36,7 @@ The `.spec.namespace` defines in which namespace the application will be install The `values` is a schemaless field that describes overrides for manifest-rendering (e.g. if the method is Helm, then this field contains the Helm values). ## Application Life Cycle + It is mainly composed of 2 steps: downloading the application's source and installing or upgrading the application. You can monitor these steps thanks to the conditions in the applicationInstallation's status. - `ManifestsRetrieved` condition indicates if the application's source has been correctly downloaded. @@ -45,9 +47,11 @@ It mainly composes of 2 steps: download the application's source and install or - `{status: "False", reason: "InstallationFailedRetriesExceeded"}`: meaning the max number of retries was exceeded. ### Helm additional information + If the [templating method]({{< ref "../application-definition#templating-method" >}}) is `Helm`, then additional information regarding the install or upgrade is provided under `.status.helmRelease`. Example: + ```yaml status: [...] @@ -81,13 +85,16 @@ ``` ## Advanced Configuration + This section is relevant to advanced users. However, configuring advanced parameters may impact performance, load, and workload stability. Consequently, it must be treated carefully. ### Periodic Reconciliation + By default, Applications are only reconciled on changes in the spec, annotations, or the parent application definition, meaning that if the user manually deletes the workload deployed by the application, nothing will happen until the `ApplicationInstallation` CR changes. -You can periodically force the reconciliation of the application by settings `.spec.reconciliationInterval`: -- a value greater than zero force reconciliation even if no changes occurred on application CR. +You can periodically force the reconciliation of the application by setting `.spec.reconciliationInterval`: + +- a value greater than zero forces reconciliation even if no changes occurred on application CR. - a value equal to 0 disables the force reconciliation of the application (default behavior). {{% notice warning %}} @@ -97,20 +104,23 @@ Setting this too low can cause a heavy load and disrupt your application workloa The application will not be reconciled if the maximum number of retries is exceeded. ### Customize Deployment + You can tune how the application will be installed by setting `.spec.deployOptions`.
The options depends of the template method (i.e. `.spec.method`) of the `ApplicationDefinition`. *note: if `deployOptions` is not set then it used the default defined at the `ApplicationDefinition` level (`.spec.defaultDeployOptions`)* #### Customize Deployment for Helm Method + You may tune how Helm deploys the application with the following options: -* `atomic`: corresponds to the `--atomic` flag on Helm CLI. If set, the installation process deletes the installation on failure; the upgrade process rolls back changes made in case of failed upgrade. -* `wait`: corresponds to the `--wait` flag on Helm CLI. If set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as `--timeout` -* `timeout`: corresponds to the `--timeout` flag on Helm CLI. It's time to wait for any individual Kubernetes operation. -* `enableDNS`: corresponds to the `-enable-dns ` flag on Helm CLI. It enables DNS lookups when rendering templates. if you enable this flag, you have to verify that helm template function 'getHostByName' is not being used in a chart to disclose any information you do not want to be passed to DNS servers.(c.f. [CVE-2023-25165](https://github.com/helm/helm/security/advisories/GHSA-pwcw-6f5g-gxf8)) +- `atomic`: corresponds to the `--atomic` flag on Helm CLI. If set, the installation process deletes the installation on failure; the upgrade process rolls back changes made in case of a failed upgrade. +- `wait`: corresponds to the `--wait` flag on Helm CLI. If set, will wait until all Pods, PVCs, Services, and a minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as `--timeout` +- `timeout`: corresponds to the `--timeout` flag on Helm CLI. It's time to wait for any individual Kubernetes operation. +- `enableDNS`: corresponds to the `-enable-dns ` flag on Helm CLI. It enables DNS lookups when rendering templates. If you enable this flag, you have to verify that the Helm template function 'getHostByName' is not being used in a chart to disclose any information you do not want to be passed to DNS servers. (c.f. [CVE-2023-25165](https://github.com/helm/helm/security/advisories/GHSA-pwcw-6f5g-gxf8)) Example: + ```yaml apiVersion: apps.kubermatic.k8c.io/v1 kind: ApplicationInstallation @@ -124,13 +134,14 @@ spec: timeout: "5m" ``` -*note: if `atomic` is true, then wait must be true. If `wait` is true then `timeout` must be defined.* +*Note: if `atomic` is true, then wait must be true. If `wait` is true, then `timeout` must be defined.* If `.spec.deployOptions.helm.atomic` is true, then when installation or upgrade of an application fails, `ApplicationsInstallation.Status.Failures` counter is incremented. If it reaches the max number of retries (hardcoded to 5), then the applicationInstallation controller will stop trying to install or upgrade the application until applicationInstallation 's spec changes. This behavior reduces the load on the cluster and avoids an infinite loop that disrupts workload. ## ApplicationInstallation Reference + **The following is an example of ApplicationInstallation, showing all the possible options**. 
```yaml diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-templating/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-templating/_index.en.md index efd4a5d4d..0b2dc7ba0 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-templating/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/application-templating/_index.en.md @@ -15,7 +15,7 @@ the exact templating syntax. KKP injects an instance of the `TemplateData` struct into each template. The following Go snippet shows the available information: -``` +```text {{< readfile "kubermatic/v2.28/data/applicationdata.go" >}} ``` diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/aikit/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/aikit/_index.en.md index 0bce1a83c..46b57e768 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/aikit/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/aikit/_index.en.md @@ -23,16 +23,16 @@ For more information on AIKit, please refer to the [official documentation](http AIKit is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready (existing cluster) from the Applications tab via UI. -* Select the AIKit application from the Application Catalog. +- Select the AIKit application from the Application Catalog. ![Select AIKit Application](01-select-application-aikit-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for AIKit Application](02-settings-aikit-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the AIKit application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the AIKit application to the user cluster. ![Application Values for AIKit Application](03-applicationvalues-aikit-app.png) -To further configure the `values.yaml`, find more information on the [AIKit Helm Chart Configuration](https://github.com/sozercan/aikit/tree/v0.16.0/charts/aikit) \ No newline at end of file +To further configure the `values.yaml`, find more information on the [AIKit Helm Chart Configuration](https://github.com/sozercan/aikit/tree/v0.16.0/charts/aikit) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/argocd/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/argocd/_index.en.md index 445948042..f516e2b39 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/argocd/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/argocd/_index.en.md @@ -7,7 +7,8 @@ weight = 1 +++ -# What is ArgoCD? 
+## What is ArgoCD? + ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. ArgoCD follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. Kubernetes manifests can be specified in several ways: @@ -18,23 +19,22 @@ ArgoCD follows the GitOps pattern of using Git repositories as the source of tru - Plain directory of YAML/json manifests - Any custom config management tool configured as a config management plugin - For more information on the ArgoCD, please refer to the [official documentation](https://argoproj.github.io/cd/) -# How to deploy? +## How to deploy? ArgoCD is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the ArgoCD application from the Application Catalog. +- Select the ArgoCD application from the Application Catalog. ![Select ArgoCD Application](01-select-application-argocd-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for ArgoCD Application](02-settings-argocd-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the ArgoCD application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the ArgoCD application to the user cluster. ![Application Values for ArgoCD Application](03-applicationvalues-argocd-app.png) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/cert-manager/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/cert-manager/_index.en.md index da9d4dad5..7f797ada8 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/cert-manager/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/cert-manager/_index.en.md @@ -7,7 +7,7 @@ weight = 3 +++ -# What is cert-manager? +## What is cert-manager? cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters. It simplifies the process of obtaining, renewing and using certificates. @@ -17,20 +17,20 @@ It will ensure certificates are valid and up to date, and attempt to renew certi For more information on the cert-manager, please refer to the [official documentation](https://cert-manager.io/) -# How to deploy? +## How to deploy? cert-manager is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the cert-manager application from the Application Catalog. +- Select the cert-manager application from the Application Catalog. ![Select cert-manager Application](01-select-application-cert-manager-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. 
+- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for cert-manager Application](02-settings-cert-manager-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the cert-manager application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the cert-manager application to the user cluster. ![Application Values for cert-manager Application](03-applicationvalues-cert-manager-app.png) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/cluster-autoscaler/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/cluster-autoscaler/_index.en.md index 013815e66..a7c194c95 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/cluster-autoscaler/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/cluster-autoscaler/_index.en.md @@ -6,24 +6,24 @@ +++ -# What is the Kubernetes Cluster Autoscaler? +## What is the Kubernetes Cluster Autoscaler? Kubernetes Cluster Autoscaler is a tool that automatically adjusts the number of worker nodes up or down depending on consumption. This means that the cluster autoscaler, for example, automatically scales a cluster up by increasing the node count when there are not enough node resources for cluster workload scheduling, and scales it down when node resources have been idle continuously or when there are more than enough node resources available for cluster workload scheduling. In a nutshell, it is a component that automatically adjusts the size of a Kubernetes cluster so that all pods have a place to run and there are no unneeded nodes. -# How to deploy? +## How to deploy? Kubernetes Cluster Autoscaler is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready (existing cluster) from the Applications tab via the UI. -* Select the Cluster Autoscaler application from the Application Catalog. +- Select the Cluster Autoscaler application from the Application Catalog. ![Select Cluster Autoscaler Application](01-select-application-cluster-autoscaler-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Cluster Autoscaler Application](02-settings-cluster-autoscaler-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Clustet Autoscaler application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Cluster Autoscaler application to the user cluster.
![Application Values for Cluster Autoscaler Application](03-applicationvalues-cluster-autoscaler-app.png) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/falco/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/falco/_index.en.md index 1579a2b95..ee6118a1d 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/falco/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/falco/_index.en.md @@ -7,25 +7,25 @@ weight = 7 +++ -# What is Falco? +## What is Falco? Falco is a cloud-native security tool designed for Linux systems. It employs custom rules on kernel events, which are enriched with container and Kubernetes metadata, to provide real-time alerts. Falco helps you gain visibility into abnormal behavior, potential security threats, and compliance violations, contributing to comprehensive runtime security. For more information on the Falco, please refer to the [official documentation](https://falco.org/) -# How to deploy? +## How to deploy? Falco is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Falco application from the Application Catalog. +- Select the Falco application from the Application Catalog. ![Select Falco Application](01-select-application-falco-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Falco Application](02-settings-falco-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Falco application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Falco application to the user cluster. To further configure the values.yaml, find more information on the [Falco Helm chart documentation](https://github.com/falcosecurity/charts/tree/master/charts/falco). diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/flux2/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/flux2/_index.en.md index f63b1532e..f261968e1 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/flux2/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/flux2/_index.en.md @@ -7,7 +7,7 @@ weight = 2 +++ -# What is Flux2? +## What is Flux2? Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories and OCI artifacts), automating updates to configuration when there is new code to deploy. @@ -19,19 +19,19 @@ Flux is a Cloud Native Computing Foundation [CNCF](https://www.cncf.io/) project For more information on the Flux2, please refer to the [official documentation](https://github.com/fluxcd-community/helm-charts) -# How to deploy? 
+## How to deploy? Flux2 is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Flux2 application from the Application Catalog. +- Select the Flux2 application from the Application Catalog. ![Select Flux2 Application](01-select-application-flux2-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Flux2 Application](02-settings-flux2-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Flux2 application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Flux2 application to the user cluster. A full list of available Helm values is on [flux2's ArtifactHub page](https://artifacthub.io/packages/helm/fluxcd-community/flux2) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt-operator/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt-operator/_index.en.md index 474b59f4e..f317b8b3d 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt-operator/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt-operator/_index.en.md @@ -7,7 +7,8 @@ weight = 11 +++ -# What is K8sGPT-Operator? +## What is K8sGPT-Operator? + This operator is designed to enable K8sGPT within a Kubernetes cluster. It will allow you to create a custom resource that defines the behaviour and scope of a managed K8sGPT workload. @@ -16,20 +17,20 @@ Analysis and outputs will also be configurable to enable integration into existi For more information on the K8sGPT-Operator, please refer to the [official documentation](https://docs.k8sgpt.ai/reference/operator/overview/) -# How to deploy? +## How to deploy? K8sGPT-Operator is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the K8sGPT-Operator application from the Application Catalog. +- Select the K8sGPT-Operator application from the Application Catalog. ![Select K8sGPT-Operator Application](01-select-application-k8sgpt-operator-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for K8sGPT-Operator Application](02-settings-k8sgpt-operator-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the K8sGPT-Operator application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. 
Finally click on the `+ Add Application` to deploy the K8sGPT-Operator application to the user cluster. ![Application Values for K8sGPT-Operator Application](03-applicationvalues-k8sgpt-operator-app.png) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt/_index.en.md index 3e8d17dcd..e85a05ba8 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/k8sgpt/_index.en.md @@ -7,7 +7,8 @@ weight = 11 +++ -# What is K8sGPT? +## What is K8sGPT? + K8sGPT gives Kubernetes SRE superpowers to everyone. It is a tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English. It has SRE experience codified into its analyzers and helps to pull out the most relevant information to enrich it with AI. @@ -16,20 +17,20 @@ Out of the box integration with OpenAI, Azure, Cohere, Amazon Bedrock and local For more information on the K8sGPT, please refer to the [official documentation](https://docs.k8sgpt.ai/) -# How to deploy? +## How to deploy? K8sGPT is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the K8sGPT application from the Application Catalog. +- Select the K8sGPT application from the Application Catalog. ![Select K8sGPT Application](01-select-application-k8sgpt-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for K8sGPT Application](02-settings-k8sgpt-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the K8sGPT application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the K8sGPT application to the user cluster. ![Application Values for K8sGPT Application](03-applicationvalues-k8sgpt-app.png) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/kube-vip/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/kube-vip/_index.en.md index 66420c42d..fc5f18cd6 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/kube-vip/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/kube-vip/_index.en.md @@ -7,25 +7,25 @@ weight = 6 +++ -# What is Kube-VIP? +## What is Kube-VIP? Kube-VIP provides Kubernetes clusters with a virtual IP and load balancer for both the control plane (for building a highly-available cluster) and Kubernetes Services of type LoadBalancer without relying on any external hardware or software. For more information on the Kube-VIP, please refer to the [official documentation](https://kube-vip.io/) -# How to deploy? 
+## How to deploy? Kube-VIP is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Kube-VIP application from the Application Catalog. +- Select the Kube-VIP application from the Application Catalog. ![Select Kube-VIP Application](01-select-application-kube-vip-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Kube-VIP Application](02-settings-kube-vip-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Kube-VIP application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Kube-VIP application to the user cluster. To further configure the values.yaml, find more information on the [Kube-vip Helm chart documentation](https://github.com/kube-vip/helm-charts). diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/kubevirt/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/kubevirt/_index.en.md index 7eb3f4e31..37ba16493 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/kubevirt/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/kubevirt/_index.en.md @@ -7,7 +7,7 @@ weight = 10 +++ -# What is KubeVirt? +## What is KubeVirt? KubeVirt is a virtual machine management add-on for Kubernetes. Its aim is to provide a common ground for virtualization solutions on top of Kubernetes. @@ -21,15 +21,15 @@ As of today KubeVirt can be used to declaratively: For more information on the KubeVirt, please refer to the [official documentation](https://kubevirt.io/) -# How to deploy? +## How to deploy? KubeVirt is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the KubeVirt application from the Application Catalog. +- Select the KubeVirt application from the Application Catalog. ![Select KubeVirt Application](01-select-application-kubevirt-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. 
![Settings for KubeVirt Application](02-settings-kubevirt-app.png) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/_index.en.md index f69bace53..5f467e68d 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/_index.en.md @@ -15,18 +15,18 @@ LocalAI is an open-source alternative to OpenAI’s API, designed to run AI mode Local AI is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready (existing cluster) from the Applications tab via UI. -* Select the Local AI application from the Application Catalog. +- Select the Local AI application from the Application Catalog. ![Select Local AI Application](01-select-local-ai-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Local AI Application](02-settings-local-ai-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the LocalAI application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the LocalAI application to the user cluster. ![Application Values for LocalAI Application](03-applicationvalues-local-ai-app.png) To further configure the `values.yaml`, find more information on the [LocalAI Helm Chart Configuration](https://github.com/go-skynet/helm-charts/tree/main/charts/local-ai) -Please take care about the size of the default models which can vary from the default configuration. \ No newline at end of file +Please take care about the size of the default models which can vary from the default configuration. diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/metallb/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/metallb/_index.en.md index 88c79c916..d1f64d83f 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/metallb/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/metallb/_index.en.md @@ -7,26 +7,25 @@ weight = 4 +++ -# What is MetalLB? +## What is MetalLB? MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. - For more information on the MetalLB, please refer to the [official documentation](https://metallb.universe.tf/) -# How to deploy? +## How to deploy? MetalLB is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the MetalLB application from the Application Catalog. 
+- Select the MetalLB application from the Application Catalog. ![Select MetalLB Application](01-select-application-metallb-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for MetalLB Application](02-settings-metallb-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the MetalLB application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the MetalLB application to the user cluster. To further configure the values.yaml, find more information on the [MetalLB Helm chart documentation](https://github.com/metallb/metallb/tree/main/charts/metallb). diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/_index.en.md index 9f54a2063..572292997 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/_index.en.md @@ -7,25 +7,25 @@ weight = 5 +++ -# What is Nginx? +## What is Nginx? Nginx is an ingress-controller for Kubernetes using NGINX as a reverse proxy and load balancer. For more information on the Nginx, please refer to the [official documentation](https://kubernetes.github.io/ingress-nginx/) -# How to deploy? +## How to deploy? Nginx is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Nginx application from the Application Catalog. +- Select the Nginx application from the Application Catalog. ![Select Nginx Application](01-select-application-nginx-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Nginx Application](02-settings-nginx-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Nginx application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Nginx application to the user cluster. To further configure the values.yaml, find more information on the [Nginx Helm chart documentation](https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx). 
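As an illustration of the kind of overrides that can be supplied in the Application values step, the following sketch shows a few commonly tuned keys of the upstream ingress-nginx chart. The field names (`controller.replicaCount`, `controller.service.type`, `controller.metrics.enabled`) follow the upstream chart and should be verified against the chart version bundled with your KKP release; treat this as an illustrative example rather than a definitive reference.

```yaml
# Illustrative Application values for the Nginx application (upstream ingress-nginx chart).
# Verify the keys against the chart version shipped with your KKP release before applying.
controller:
  replicaCount: 2          # run two controller replicas for basic redundancy
  service:
    type: LoadBalancer     # expose the ingress controller through a LoadBalancer Service
  metrics:
    enabled: true          # expose Prometheus metrics for the controller
```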
diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/nvidia-gpu-operator/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/nvidia-gpu-operator/_index.en.md index b0f19d963..a86820c73 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/nvidia-gpu-operator/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/nvidia-gpu-operator/_index.en.md @@ -7,24 +7,25 @@ weight = 12 +++ -# What is Nvidia GPU Operator? +## What is Nvidia GPU Operator? + The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPU. For more information on the Nvidia GPU Operator, please refer to the [official documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html) -# How to deploy? +## How to deploy? Nvidia GPU Operator is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Nvidia GPU Operator application from the Application Catalog. +- Select the Nvidia GPU Operator application from the Application Catalog. ![Select Nvidia GPU Operator Application](01-select-application-nvidia-gpu-operator-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Nvidia GPU Operator Application](02-settings-nvidia-gpu-operator-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Nvidia GPU Operator application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Nvidia GPU Operator application to the user cluster. To further configure the values.yaml, find more information on the [Nvidia GPU Operator Helm chart documentation](https://github.com/NVIDIA/gpu-operator/) diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy-operator/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy-operator/_index.en.md index fd7d1c713..27723d0cb 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy-operator/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy-operator/_index.en.md @@ -7,7 +7,7 @@ weight = 9 +++ -# What is Trivy Operator? +## What is Trivy Operator? The Trivy Operator leverages Trivy to continuously scan your Kubernetes cluster for security issues. The scans are summarised in security reports as Kubernetes Custom Resources, which become accessible through the Kubernetes API. The Operator does this by watching Kubernetes for state changes and automatically triggering security scans in response. 
For example, a vulnerability scan is initiated when a new Pod is created. This way, users can find and view the risks that relate to different resources in a Kubernetes-native way. @@ -15,23 +15,23 @@ Trivy Operator can be deployed and used for scanning the resources deployed on t For more information on the Trivy Operator, please refer to the [official documentation](https://aquasecurity.github.io/trivy-operator/latest/) -# How to deploy? +## How to deploy? Trivy Operator is available as part of the KKP's default application catalog. It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Trivy Operator application from the Application Catalog. +- Select the Trivy Operator application from the Application Catalog. ![Select Trivy Operator Application](01-select-application-trivy-operator-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Trivy Operator Application](02-settings-trivy-operator-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy Operator application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy Operator application to the user cluster. ![Application Values for Trivy Operator Application](03-applicationvalues-trivy-operator-app.png) To further configure the values.yaml, find more information on the [Trivy Operator Helm chart documentation](https://github.com/aquasecurity/trivy-operator/tree/main/deploy/helm). diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy/_index.en.md index d70dcb4fa..5e052daa8 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/applications/default-applications-catalog/trivy/_index.en.md @@ -7,7 +7,7 @@ weight = 8 +++ -# What is Trivy? +## What is Trivy? Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can find those issues. @@ -32,19 +32,19 @@ Trivy supports most popular programming languages, operating systems, and platfo For more information on the Trivy, please refer to the [official documentation](https://aquasecurity.github.io/trivy/v0.49/docs/) -# How to deploy? +## How to deploy? Trivy is available as part of the KKP's default application catalog.
It can be deployed to the user cluster either during the cluster creation or after the cluster is ready(existing cluster) from the Applications tab via UI. -* Select the Trivy application from the Application Catalog. +- Select the Trivy application from the Application Catalog. ![Select Trivy Application](01-select-application-trivy-app.png) -* Under the Settings section, select and provide appropriate details and click `-> Next` button. +- Under the Settings section, select and provide appropriate details and click `-> Next` button. ![Settings for Trivy Application](02-settings-trivy-app.png) -* Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy application to the user cluster. +- Under the Application values page section, check the default values and add values if any required to be configured explicitly. Finally click on the `+ Add Application` to deploy the Trivy application to the user cluster. To further configure the values.yaml, find more information on the [Trivy Helm chart documentation](https://github.com/aquasecurity/trivy/tree/main/helm/trivy). diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/cluster-templates/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/cluster-templates/_index.en.md index 792fb6880..a62adbfee 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/cluster-templates/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/cluster-templates/_index.en.md @@ -7,6 +7,7 @@ weight = 1 +++ ## Understanding Cluster Templates + Cluster templates are designed to standardize and simplify the creation of Kubernetes clusters. A cluster template is a reusable cluster template object. It guarantees that every cluster it provisions from the template is uniform and consistent in the way it is produced. @@ -15,22 +16,26 @@ A cluster template allows you to specify a provider, node layout, and configurat via Kubermatic API or UI. ## Scope + The cluster templates are accessible from different levels. - - global: (managed by admin user) visible to everyone - - project: accessible to the project users - - user: accessible to the template owner in every project, where the user is in the owner or editor group + +- global: (managed by admin user) visible to everyone +- project: accessible to the project users +- user: accessible to the template owner in every project, where the user is in the owner or editor group Template management is available from project level. The regular user with owner or editor privileges can create template in project or user scope. The admin user can create a template for every project in every scope. Template in `global` scope can be created only by admins. ## Credentials + Creating a cluster from the template requires credentials to authenticate with the cloud provider. During template creation, the credentials are stored in the secret which is assigned to the cluster template. The credential secret is independent. It's just a copy of credentials specified manually by the user or taken from the preset. Any credentials update must be processed on the cluster template. ## Creating and Using Templates + Cluster templates can be created from scratch to pre-define the cluster configuration. The whole process is done in the UI wizard for the cluster creation. 
During the cluster creation process, the end user can pick a template and specify the desired number of cluster instances. diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/kkp-security/pod-security-policy/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/kkp-security/pod-security-policy/_index.en.md index 30bdc650c..4b120a99e 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/kkp-security/pod-security-policy/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/kkp-security/pod-security-policy/_index.en.md @@ -6,7 +6,7 @@ weight = 130 [Pod Security Policy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/), (PSP), is a key security feature in Kubernetes. It allows cluster administrators to set [granular controls](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-reference) over security sensitive aspects of pod and container specs. -PSP is implemented using an optional admission controller that's disabled by default. It's important to have an initial authorizing policy on the cluster _before_ enabling the PSP admission controller. +PSP is implemented using an optional admission controller that's disabled by default. It's important to have an initial authorizing policy on the cluster *before* enabling the PSP admission controller. This is also true for existing clusters. Without an authorizing policy, the controller will prevent all pods from being created on the cluster. PSP objects are cluster-level objects. They define a set of conditions that a pod must pass to be accepted by the PSP admission controller. The most common way to apply this is using RBAC. For a pod to use a specific Pod Security Policy, the pod should run using a Service Account or a User that has `use` permission to that particular Pod Security policy. @@ -29,12 +29,12 @@ For existing clusters, it's also possible to enable/disable PSP: ![Edit Cluster](@/images/ui/psp-edit.png?classes=shadow,border "Edit Cluster") - {{% notice note %}} Activating Pod Security Policy will mean that a lot of Pod specifications, Operators and Helm charts will not work out of the box. KKP will apply a default authorizing policy to prevent this. Additionally, all KKP user-clusters are configured to be compatible with enabled PSPs. Make sure that you know the consequences of activating this feature on your workloads. {{% /notice %}} ### Datacenter Level Support + It is also possible to enforce enabling Pod Security Policies on the datacenter level. In this case, user cluster level configuration will be ignored, and PSP will be enabled for all user clusters in the datacenter. To enable this, you will need to update your [Seed Cluster CRD]({{< ref "../../../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}), and set `enforcePodSecurityPolicy` to `true` in the datacenter spec. 
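A minimal sketch of such a datacenter entry in the Seed resource is shown below. The Seed name, datacenter name, and provider block are placeholders to be adapted to your own Seed definition; only the `enforcePodSecurityPolicy` field is the setting described above.

```yaml
apiVersion: kubermatic.k8c.io/v1
kind: Seed
metadata:
  name: example-seed            # placeholder Seed name
  namespace: kubermatic
spec:
  datacenters:
    example-dc:                 # placeholder datacenter name
      country: DE
      location: Example Datacenter
      spec:
        enforcePodSecurityPolicy: true   # PSP is enforced for every user cluster in this datacenter
        # provider-specific settings (e.g. openstack, vsphere) remain as defined in your Seed
```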
diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/networking/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/networking/_index.en.md index 9afdaa93d..c291cfba4 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/networking/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/networking/_index.en.md @@ -13,7 +13,6 @@ The [expose strategy]({{< ref "../../../../tutorials-howtos/networking/expose-st This section explains how the connection between user clusters and the control plane is established, as well as the general networking concept in KKP. - ![KKP Network](images/network.png?classes=shadow,border "This diagram illustrates the necessary connections for KKP.") The following diagrams illustrate all available [expose strategy]({{< ref "../../../../tutorials-howtos/networking/expose-strategies" >}}) available in KKP. @@ -33,11 +32,11 @@ Any port numbers marked with * are overridable, so you will need to ensure any c ** Default port range for [NodePort Services](https://kubernetes.io/docs/concepts/services-networking/service/). All ports listed are using TCP. -#### Worker Nodes +### Worker Nodes Worker nodes in user clusters must have full connectivity to each other to ensure the functionality of various components, including different Container Network Interfaces (CNIs) and Container Storage Interfaces (CSIs) supported by KKP. -#### API Server +### API Server For each user cluster, an API server is deployed in the Seed and exposed depending on the chosen expose strategy. Its purpose is not only to make the apiserver accessible to users, but also to ensure the proper functioning of the cluster. @@ -46,7 +45,7 @@ In addition, the apiserver is used for [in-cluster API](https://kubernetes.io/do In Tunneling mode, to forward traffic to the correct apiserver, an envoy proxy is deployed on each node, serving as an endpoint for the Kubernetes cluster service to proxy traffic to the apiserver. -#### Kubernetes Konnectivity proxy +### Kubernetes Konnectivity proxy To enable Kubernetes to work properly, parts of the control plane need to be connected to the internal Kubernetes cluster network. This is done via the [konnectivity proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/), which is deployed for each cluster. diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/resource-quotas/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/resource-quotas/_index.en.md index 5b2ac7022..6406889c3 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/resource-quotas/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/resource-quotas/_index.en.md @@ -10,6 +10,7 @@ Resource Quotas in KKP allow administrators to set quotas on the amount of resou subject which is supported is Project, so the resource quotas currently limit the amount of resources that can be used project-wide. The resources in question are the resources of the user cluster: + - CPU - the cumulated CPU used by the nodes on all clusters. - Memory - the cumulated RAM used by the nodes on all clusters. - Storage - the cumulated disk size of the nodes on all clusters. @@ -21,12 +22,12 @@ This feature is available in the EE edition only. That one just controls the size of the machines suggested to users in the KKP Dashboard during the cluster creation. 
{{% /notice %}} - ## Setting up Resource Quotas The resource quotas are managed by administrators either through the KKP UI/API or through the Resource Quota CRDs. Example ResourceQuota: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: ResourceQuota @@ -53,6 +54,7 @@ set in the ResourceQuota is done automatically by the API. ## Calculating Quota Usage The ResourceQuota has 2 status fields: + - `globalUsage` which shows the resource usage across all seeds - `localUsage` which shows the resource usage on the local seed @@ -98,7 +100,6 @@ resulting K8s Node `.status.capacity`. | Anexia | CPUs (set in Machine spec) | Memory (from Machine spec) | DiskSize (from Machine spec) | | VMWare Cloud Director | CPU * CPUCores (Machine spec) | MemoryMB (from Machine spec) | DiskSizeGB (from Machine spec) | - ## Enforcing Quotas The quotas are enforced through a validating webhook on Machine resources in the user clusters. This means that the quota validation diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/service-account/service-account-token-projection/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/service-account/service-account-token-projection/_index.en.md index c381e7d3c..1c38fb091 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/service-account/service-account-token-projection/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/service-account/service-account-token-projection/_index.en.md @@ -12,14 +12,17 @@ is used by some applications to enhance security when using service accounts (e.g. [Istio uses it by default](https://istio.io/latest/docs/ops/best-practices/security/#configure-third-party-service-account-tokens) as of version v1.3). As of KKP version v2.16, KKP supports Service Account Token Volume Projection as follows: + - in clusters with Kubernetes version v1.20+, it is enabled by default with the default configuration as described below, - in clusters with Kubernetes below v1.20, it has to be explicitly enabled. ## Prerequisites + `TokenRequest` and `TokenRequestProjection` Kubernetes feature gates have to be enabled (enabled by default since Kubernetes v1.11 and v1.12 respectively). ## Configuration + In KKP v2.16, the Service Account Token Volume Projection feature can be configured only via KKP API. The `Cluster` API object provides the `serviceAccount` field of the `ServiceAccountSettings` type, with the following definition: @@ -58,8 +61,8 @@ The following table summarizes the supported properties of the `ServiceAccountSe | `issuer` | Identifier of the service account token issuer. The issuer will assert this identifier in `iss` claim of issued tokens. | The URL of the apiserver, e.g., `https://`. | | `apiAudiences` | Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. Multiple audiences can be separated by comma (`,`). | Equal to `issuer`. | - ### Example: Configuration using a Request to KKP API + To configure the feature in an existing cluster, execute a `PATCH` request to URL: `https:///api/v1/projects//dc//clusters/` @@ -78,8 +81,8 @@ with the following content: You can use the Swagger UI at `https:///rest-api` to construct and send the API request. - ### Example: Configuration using Cluster CR + Alternatively, the feature can be also configured via the `Cluster` Custom Resource in the KKP seed cluster. 
For example, to enable the feature in an existing cluster via kubectl, edit the `Cluster` CR with `kubectl edit cluster ` and add the following configuration: diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/service-account/using-service-account/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/service-account/using-service-account/_index.en.md index 850b8bb63..89699e749 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/service-account/using-service-account/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/service-account/using-service-account/_index.en.md @@ -37,35 +37,39 @@ You can also change a token name. It is possible to delete a service account tok You can see when a token was created and when will expire. ## Using Service Accounts with KKP + You can control service account access in your project by provided groups. There are three basic access level groups: + - viewers - editors - project managers -#### Viewers +### Viewers **A viewer can:** - - list projects - - get project details - - get project SSH keys - - list clusters - - get cluster details - - get cluster resources details + +- list projects +- get project details +- get project SSH keys +- list clusters +- get cluster details +- get cluster resources details Permissions for read-only actions that do not affect state, such as viewing. + - viewers are not allowed to interact with service accounts (User) - viewers are not allowed to interact with members of a project (UserProjectBinding) - -#### Editors +### Editors **All viewer permissions, plus permissions to create, edit & delete cluster** - - editors are not allowed to delete a project - - editors are not allowed to interact with members of a project (UserProjectBinding) - - editors are not allowed to interact with service accounts (User) -#### Project Managers +- editors are not allowed to delete a project +- editors are not allowed to interact with members of a project (UserProjectBinding) +- editors are not allowed to interact with service accounts (User) + +### Project Managers **The `project managers` is service account specific group. Which allows** @@ -90,6 +94,6 @@ Authorization: Bearer aaa.bbb.ccc You can also use `curl` command to reach API endpoint: -``` +```bash curl -i -H "Accept: application/json" -H "Authorization: Bearer aaa.bbb.ccc" -X GET http://localhost:8080/api/v2/projects/jnpllgp66z/clusters ``` diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/_index.en.md index d63f1765c..8877d168d 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/_index.en.md @@ -9,15 +9,18 @@ Get information on how to get the most ouf of the Kubermatic Dashboard, the offi ![Admin Panel](dashboard.png?height=400px&classes=shadow,border "Kubermatic Dashboard") ## Preparing New Themes + A set of [tutorials]({{< ref "./theming" >}}) that will teach you how to prepare custom themes and apply them to be used by the KKP Dashboard. ## Admin Panel + The Admin Panel is a place for the Kubermatic administrators where they can manage the global settings that directly impact all Kubermatic users. Check out the [Admin Panel]({{< ref "../../../../tutorials-howtos/administration/admin-panel" >}}) section for more details. 
## Theming + Theme and customize the KKP Dashboard according to your needs, but be aware that theming capabilities are available in the Enterprise Edition only. Check out [Customizing the Dashboard]({{< ref "../../../../tutorials-howtos/dashboard-customization" >}}) section diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/theming/with-src/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/theming/with-src/_index.en.md index 4ad79d5d1..2114012e6 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/theming/with-src/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/theming/with-src/_index.en.md @@ -20,20 +20,20 @@ All available themes can be found inside `src/assets/themes` directory. Follow t - `name` - refers to the theme file name stored inside `assets/themes` directory. - `displayName` - will be used by the theme picker available in the `Account` view to display a new theme. - `isDark` - defines the icon to be used by the theme picker (sun/moon). - ```json - { - "openstack": { - "wizard_use_default_user": false - }, - "themes": [ - { - "name": "custom", - "displayName": "Custom", - "isDark": false - } - ] - } - ``` + ```json + { + "openstack": { + "wizard_use_default_user": false + }, + "themes": [ + { + "name": "custom", + "displayName": "Custom", + "isDark": false + } + ] + } + ``` - Make sure that theme is registered in the `angular.json` file before running the application locally. It is done for `custom` theme by default. - Run the application using `npm start`, open the `Account` view under `User settings`, select your new theme and update `custom.scss` according to your needs. diff --git a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/theming/without-src/_index.en.md b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/theming/without-src/_index.en.md index cab747001..366605ae2 100644 --- a/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/theming/without-src/_index.en.md +++ b/content/kubermatic/v2.28/architecture/concept/kkp-concepts/user-interface/theming/without-src/_index.en.md @@ -6,55 +6,62 @@ weight = 50 +++ ### Preparing a New Theme Without Access to the Sources + In this case the easiest way of preparing a new theme is to download one of the existing themes light/dark. This can be done in a few different ways. We'll describe here two possible ways of downloading enabled themes. #### Download Theme Using the Browser + 1. Open KKP UI -2. Open `Developer tools` and navigate to `Sources` tab. -3. There should be a CSS file of a currently selected theme available to be downloaded inside `assts/themes` directory. +1. Open `Developer tools` and navigate to `Sources` tab. +1. There should be a CSS file of a currently selected theme available to be downloaded inside `assts/themes` directory. ![Dev tools](@/images/ui/developer-tools.png?height=300px&classes=shadow,border "Dev tools") #### Download Themes Directly From the KKP Dashboard container + Assuming that you know how to exec into the container and copy resources from/to it, themes can be simply copied over to your machine from the running KKP Dashboard container. They are stored inside the container in `dist/assets/themes` directory. 
##### Kubernetes + Assuming that the KKP Dashboard pod name is `kubermatic-dashboard-5b96d7f5df-mkmgh` you can copy themes to your `${HOME}/themes` directory using below command: + ```bash kubectl -n kubermatic cp kubermatic-dashboard-5b96d7f5df-mkmgh:/dist/assets/themes ~/themes ``` ##### Docker + Assuming that the KKP Dashboard container name is `kubermatic-dashboard` you can copy themes to your `${HOME}/themes` directory using below command: + ```bash docker cp kubermatic-dashboard:/dist/assets/themes/. ~/themes ``` #### Using Compiled Theme to Prepare a New Theme + Once you have a base theme file ready, we can use it to prepare a new theme. To easier understand the process, let's assume that we have downloaded a `light.css` file and will be preparing a new theme called `solar.css`. 1. Rename `light.css` to `solar.css`. -2. Update `solar.css` file according to your needs. Anything in the file can be changed or new rules can be added. +1. Update `solar.css` file according to your needs. Anything in the file can be changed or new rules can be added. In case you are changing colors, remember to update it in the whole file. -3. Mount new `solar.css` file to `dist/assets/themes` directory inside the application container. **Make sure not to override whole directory.** -4. Update `config.json` file inside `dist/config` directory and register the new theme. - - ```json - { - "openstack": { - "wizard_use_default_user": false - }, - "themes": [ - { - "name": "solar", - "displayName": "Solar", - "isDark": true - } - ] - } - ``` +1. Mount new `solar.css` file to `dist/assets/themes` directory inside the application container. **Make sure not to override whole directory.** +1. Update `config.json` file inside `dist/config` directory and register the new theme. + ```json + { + "openstack": { + "wizard_use_default_user": false + }, + "themes": [ + { + "name": "solar", + "displayName": "Solar", + "isDark": true + } + ] + } + ``` That's it. After restarting the application, theme picker in the `Account` view should show your new `Solar` theme. diff --git a/content/kubermatic/v2.28/architecture/feature-stages/_index.en.md b/content/kubermatic/v2.28/architecture/feature-stages/_index.en.md index f897eec92..ece457538 100644 --- a/content/kubermatic/v2.28/architecture/feature-stages/_index.en.md +++ b/content/kubermatic/v2.28/architecture/feature-stages/_index.en.md @@ -15,7 +15,6 @@ weight = 4 - The whole feature can be revoked immediately and without notice - Recommended only for testing and providing feedback - ## Beta / Technical Preview - Targeted users: experienced KKP administrators @@ -27,7 +26,6 @@ weight = 4 - The whole feature can still be revoked, but with prior notice and respecting a deprecation cycle - Recommended for only non-business-critical uses, testing usability, performance, and compatibility in real-world environments - ## General Availability (GA) - Users: All users diff --git a/content/kubermatic/v2.28/architecture/iam-role-based-access-control/_index.en.md b/content/kubermatic/v2.28/architecture/iam-role-based-access-control/_index.en.md index 68cebea75..8719ec556 100644 --- a/content/kubermatic/v2.28/architecture/iam-role-based-access-control/_index.en.md +++ b/content/kubermatic/v2.28/architecture/iam-role-based-access-control/_index.en.md @@ -11,23 +11,26 @@ By default, KKP provides [Dex](#authentication-with-dex) as OIDC provider, but y please refer to the [OIDC provider]({{< ref "../../tutorials-howtos/oidc-provider-configuration" >}}) chapter. 
## Authentication with Dex + [Dex](https://dexidp.io/) is an identity service that uses OIDC to drive authentication for KKP components. It acts as a portal to other identity providers through [connectors](https://dexidp.io/docs/connectors/). This lets Dex defer authentication to these connectors. Multiple connectors may be configured at the same time. Most popular are: -* [GitHub](https://dexidp.io/docs/connectors/github/) -* [Google](https://dexidp.io/docs/connectors/google/) -* [LDAP](https://dexidp.io/docs/connectors/ldap/) -* [Microsoft](https://dexidp.io/docs/connectors/microsoft/) -* [OAuth 2.0](https://dexidp.io/docs/connectors/oauth/) -* [OpenID Connect](https://dexidp.io/docs/connectors/oidc/) -* [SAML2.0](https://dexidp.io/docs/connectors/saml/) + +- [GitHub](https://dexidp.io/docs/connectors/github/) +- [Google](https://dexidp.io/docs/connectors/google/) +- [LDAP](https://dexidp.io/docs/connectors/ldap/) +- [Microsoft](https://dexidp.io/docs/connectors/microsoft/) +- [OAuth 2.0](https://dexidp.io/docs/connectors/oauth/) +- [OpenID Connect](https://dexidp.io/docs/connectors/oidc/) +- [SAML2.0](https://dexidp.io/docs/connectors/saml/) Check out the [Dex documentation](https://dexidp.io/docs/connectors/) for a list of available providers and how to setup their configuration. To configure Dex connectors, edit `.dex.connectors` in the `values.yaml` Example to update or set up Github connector: -``` + +```yaml dex: ingress: [...] @@ -50,17 +53,18 @@ And apply the changes to the cluster: ``` ## Authorization + Authorization is managed at multiple levels to ensure users only have access to authorized resources. KKP uses its own authorization system to control access to various resources within the platform, including projects and clusters. Administrators and project owners define and manage these policies and provide specific access control rules for users and groups. - The Kubernetes Role-Based Access Control (RBAC) system is also used to control access to user cluster level resources, such as namespaces, pods, and services. Please refer to [Cluster Access]({{< ref "../../tutorials-howtos/cluster-access" >}}) to configure RBAC. ### Kubermatic Kubernetes Platform (KKP) Users + There are two kinds of users in KKP: **admin** and **non-admin** users. **Admin** users can manage settings that impact the whole Kubermatic installation and users. For example, they can set default diff --git a/content/kubermatic/v2.28/architecture/known-issues/_index.en.md b/content/kubermatic/v2.28/architecture/known-issues/_index.en.md index 892097498..ec39ce1f7 100644 --- a/content/kubermatic/v2.28/architecture/known-issues/_index.en.md +++ b/content/kubermatic/v2.28/architecture/known-issues/_index.en.md @@ -15,7 +15,6 @@ This page documents the list of known issues and possible work arounds/solutions For oidc authentication to user cluster there is always the same issuer used. This leads to invalidation of refresh tokens when a new authentication happens with the same user because existing refresh tokens for the same user/client pair are invalidated when a new one is requested. - ### Root Cause By default it is only possible to have one refresh token per user/client pair in dex for security reasons. There is an open issue regarding this in the [upstream repository](https://github.com/dexidp/dex/issues/981). The refresh token has by default also no expiration set. This is useful to stay logged in over a longer time because the id_token can be refreshed unless the refresh token is invalidated. 
@@ -26,7 +25,7 @@ One example would be to download a kubeconfig of one cluster and then of another You can either change this in dex configuration by setting `userIDKey` to `jti` in the connector section or you could configure an other oidc provider which supports multiple refresh tokens per user-client pair like keycloak does by default. -#### dex +#### Dex The following yaml snippet is an example how to configure an oidc connector to keep the refresh tokens. @@ -45,17 +44,17 @@ The following yaml snippet is an example how to configure an oidc connector to k userNameKey: email ``` -#### external provider +#### External provider For an explanation how to configure an other oidc provider than dex take a look at [oidc-provider-configuration]({{< ref "../../tutorials-howtos/oidc-provider-configuration" >}}). -### security implications regarding dex solution +### Security implications regarding dex solution For dex this has some implications. With this configuration a token is generated for each user session. The number of objects stored in kubernetes regarding refresh tokens has no limit anymore. The principle that one refresh belongs to one user/client pair is a security consideration which would be ignored in that case. The only way to revoke a refresh token is then to do it via grpc api which is not exposed by default or by manually deleting the related refreshtoken resource in the kubernetes cluster. ## API server Overload Leading to Instability in Seed due to Konnectivity -Issue: https://github.com/kubermatic/kubermatic/issues/13321 +Issue: <https://github.com/kubermatic/kubermatic/issues/13321> Status: Fixed diff --git a/content/kubermatic/v2.28/architecture/monitoring-logging-alerting/master-seed/_index.en.md b/content/kubermatic/v2.28/architecture/monitoring-logging-alerting/master-seed/_index.en.md index f0c3d49bf..e49cdbfeb 100644 --- a/content/kubermatic/v2.28/architecture/monitoring-logging-alerting/master-seed/_index.en.md +++ b/content/kubermatic/v2.28/architecture/monitoring-logging-alerting/master-seed/_index.en.md @@ -32,10 +32,11 @@ When working with Grafana please keep in mind, that **ALL CHANGES** done using t Depending on how user clusters are used, disk usage for Prometheus can vary greatly. As the operator you should however plan for -* 100 MiB used by the seed-level Prometheus for each user cluster -* 50-300 MiB used by the user-level Prometheus, depending on its WAL size. +- 100 MiB used by the seed-level Prometheus for each user cluster +- 50-300 MiB used by the user-level Prometheus, depending on its WAL size. These values can also vary, if you tweak the retention periods. ## Installation + Please follow the [Installation of the Master / Seed MLA Stack Guide]({{< relref "../../../tutorials-howtos/monitoring-logging-alerting/master-seed/installation/" >}}). diff --git a/content/kubermatic/v2.28/architecture/monitoring-logging-alerting/user-cluster/_index.en.md b/content/kubermatic/v2.28/architecture/monitoring-logging-alerting/user-cluster/_index.en.md index 38c5ad986..206380b0a 100644 --- a/content/kubermatic/v2.28/architecture/monitoring-logging-alerting/user-cluster/_index.en.md +++ b/content/kubermatic/v2.28/architecture/monitoring-logging-alerting/user-cluster/_index.en.md @@ -25,11 +25,13 @@ Unlike the [Master / Seed Cluster MLA stack]({{< ref "../master-seed/">}}), it i ![Monitoring architecture diagram](architecture.png) ### User Cluster Components + When User Cluster MLA is enabled in a KKP user cluster, it automatically deploys two components into it - Prometheus and Loki Promtail.
These components are configured to stream (remote write) the logs and metrics into backends running in the Seed Cluster (Cortex for metrics and Loki-Distributed for logs). The connection between the user cluster components and Seed cluster components is secured by HTTPS with mutual TLS certificate authentication. This makes the MLA setup in user clusters very simple and low footprint, as no MLA data is stored in the user clusters and user clusters are not involved when doing data lookups. Data of all user clusters can be accessed from a central place (Grafana UI) in the Seed Cluster. ### Seed Cluster Components + As mentioned above, metrics and logs data from all user clusters are streamed into their Seed Cluster, where they are processed and stored in a long term object store (Minio). Data can be looked up in a multi-tenant Grafana instance which is running in the Seed, and provides each user a view to metrics and logs of all clusters which they have privileges to access in the KKP platform. **MLA Gateway**: @@ -47,4 +49,5 @@ The backend for processing, storing and retrieving metrics data from user Cluste The backend for processing, storing and retrieving logs data from user Cluster Clusters is based on the [Loki](https://grafana.com/docs/loki/latest/) - distributed deployment. It allows horizontal scalability of individual Loki components that can be fine-tuned to fit any use-case. For more details about Loki architecture, please refer to the [Loki Architecture](https://grafana.com/docs/loki/latest/architecture/) documentation. ## Installation + Please follow the [User Cluster MLA Stack Admin Guide]({{< relref "../../../tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/" >}}). diff --git a/content/kubermatic/v2.28/architecture/requirements/cluster-requirements/_index.en.md b/content/kubermatic/v2.28/architecture/requirements/cluster-requirements/_index.en.md index ed1461203..8aa7eb68e 100644 --- a/content/kubermatic/v2.28/architecture/requirements/cluster-requirements/_index.en.md +++ b/content/kubermatic/v2.28/architecture/requirements/cluster-requirements/_index.en.md @@ -6,39 +6,43 @@ weight = 15 +++ ## Master Cluster + The Master Cluster hosts the KKP components and might also act as a seed cluster and host the master components of user clusters (see [Architecture]({{< ref "../../../architecture/">}})). Therefore, it should run in a highly-available setup with at least 3 master nodes and 3 worker nodes. **Minimal Requirements:** -* Six or more machines running one of: - * Ubuntu 20.04+ - * Debian 10 - * RHEL 7 - * Flatcar -* 4 GB or more of RAM per machine (any less will leave little room for your apps) -* 2 CPUs or more + +- Six or more machines running one of: + - Ubuntu 20.04+ + - Debian 10 + - RHEL 7 + - Flatcar +- 4 GB or more of RAM per machine (any less will leave little room for your apps) +- 2 CPUs or more ## User Cluster + The User Cluster is a Kubernetes cluster created and managed by KKP. The exact requirements may depend on the type of workloads that will be running in the user cluster. **Minimal Requirements:** -* One or more machines running one of: - * Ubuntu 20.04+ - * Debian 10 - * RHEL 7 - * Flatcar -* 2 GB or more of RAM per machine (any less will leave little room for your apps) -* 2 CPUs or more -* Full network connectivity between all machines in the cluster (public or private network is fine) -* Unique hostname, MAC address, and product\_uuid for every node. 
See more details in the next [**topic**](#Verify-the-MAC-Address-and-product-uuid-Are-Unique-for-Every-Node). -* Certain ports are open on your machines. See below for more details. -* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. + +- One or more machines running one of: + - Ubuntu 20.04+ + - Debian 10 + - RHEL 7 + - Flatcar +- 2 GB or more of RAM per machine (any less will leave little room for your apps) +- 2 CPUs or more +- Full network connectivity between all machines in the cluster (public or private network is fine) +- Unique hostname, MAC address, and product\_uuid for every node. See more details in the next [**topic**](#Verify-the-MAC-Address-and-product-uuid-Are-Unique-for-Every-Node). +- Certain ports are open on your machines. See below for more details. +- Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. ## Verify Node Uniqueness You will need to verify that MAC address and `product_uuid` are unique on every node. This should usually be the case but might not be, especially for on-premise providers. -* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a` -* The product\_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid` +- You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a` +- The product\_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid` It is very likely that hardware devices will have unique addresses, although some virtual machines may have identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster. If these values are not unique to each node, the installation process may [fail](https://github.com/kubernetes/kubeadm/issues/31). diff --git a/content/kubermatic/v2.28/architecture/requirements/storage/_index.en.md b/content/kubermatic/v2.28/architecture/requirements/storage/_index.en.md index 832624b73..8c8e90ef4 100644 --- a/content/kubermatic/v2.28/architecture/requirements/storage/_index.en.md +++ b/content/kubermatic/v2.28/architecture/requirements/storage/_index.en.md @@ -6,7 +6,7 @@ weight = 15 +++ -Running KKP requires at least one persistent storage layer that can be accessed via a Kubernetes [CSI driver](https://kubernetes-csi.github.io/docs/drivers.html). The Kubermatic Installer attempts to discover pre-existing CSI drivers for known cloud providers to create a suitable _kubermatic-fast_ `StorageClass`. +Running KKP requires at least one persistent storage layer that can be accessed via a Kubernetes [CSI driver](https://kubernetes-csi.github.io/docs/drivers.html). The Kubermatic Installer attempts to discover pre-existing CSI drivers for known cloud providers to create a suitable *kubermatic-fast* `StorageClass`. In particular for setups in private datacenters, setting up a dedicated storage layer might be necessary to reach adequate performance. Make sure to configure and install the corresponding CSI driver (from the list linked above) for your storage solution onto the KKP Seed clusters before installing KKP. 
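If no suitable class is discovered automatically, a `kubermatic-fast` `StorageClass` has to be provided manually before the installation. The following is only a sketch: the provisioner and parameters shown here assume the AWS EBS CSI driver and must be replaced with the values of the CSI driver that is actually installed in the Seed cluster.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubermatic-fast
provisioner: ebs.csi.aws.com        # assumption: AWS EBS CSI driver; use the provisioner of your installed CSI driver
parameters:
  type: gp3                         # driver-specific parameter; adjust for your storage backend
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```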
diff --git a/content/kubermatic/v2.28/architecture/supported-providers/azure/_index.en.md b/content/kubermatic/v2.28/architecture/supported-providers/azure/_index.en.md index e432e279d..9687f2119 100644 --- a/content/kubermatic/v2.28/architecture/supported-providers/azure/_index.en.md +++ b/content/kubermatic/v2.28/architecture/supported-providers/azure/_index.en.md @@ -25,7 +25,7 @@ az account show --query id -o json Create a role that is used by the service account. -``` +```text az role definition create --role-definition '{ "Name": "Kubermatic", "Description": "Manage VM and Networks as well to manage Resource Groups and Tags", @@ -47,7 +47,7 @@ Get your Tenant ID az account show --query tenantId -o json ``` -create a new app with +Create a new app with ```bash az ad sp create-for-rbac --role="Kubermatic" --scopes="/subscriptions/********-****-****-****-************" @@ -73,6 +73,7 @@ Enter provider credentials using the values from step "Prepare Azure Environment - `Subscription ID`: your subscription ID ### Resources cleanup + During the machines cleanup, if KKP's Machine-Controller failed to delete the Cloud Provider instance and the user deleted that instance manually, Machine-Controller won't be able to delete any referenced resources to that machine, such as Public IPs, Disks and NICs. In that case, the user should cleanup those resources manually due to the fact that, Azure won't cleanup diff --git a/content/kubermatic/v2.28/architecture/supported-providers/baremetal/_index.en.md b/content/kubermatic/v2.28/architecture/supported-providers/baremetal/_index.en.md index 71a803bdc..bcf96a864 100644 --- a/content/kubermatic/v2.28/architecture/supported-providers/baremetal/_index.en.md +++ b/content/kubermatic/v2.28/architecture/supported-providers/baremetal/_index.en.md @@ -17,12 +17,13 @@ KKP’s Baremetal provider uses Tinkerbell to automate the setup and management With Tinkerbell, the provisioning process is driven by workflows that ensure each server is configured according to the desired specifications. Whether you are managing servers in a single location or across multiple data centers, Tinkerbell provides a reliable and automated way to manage your physical infrastructure, making it as easy to handle as cloud-based resources. ## Requirement + To successfully use the KKP Baremetal provider with Tinkerbell, ensure the following: -* **Tinkerbell Cluster**: A working Tinkerbell cluster must be in place. -* **Direct Access to Servers**: You must have access to your bare-metal servers, allowing you to provision and manage them. -* **Network Connectivity**: Establish a network connection between the API server of Tinkerbell cluster and the KKP seed cluster. This allows the Kubermatic Machine Controller to communicate with the Tinkerbell stack. -* **Tinkerbell Hardware Objects**: Create Hardware Objects within Tinkerbell that represent each bare-metal server you want to provision as a worker node in your Kubernetes cluster. +- **Tinkerbell Cluster**: A working Tinkerbell cluster must be in place. +- **Direct Access to Servers**: You must have access to your bare-metal servers, allowing you to provision and manage them. +- **Network Connectivity**: Establish a network connection between the API server of Tinkerbell cluster and the KKP seed cluster. This allows the Kubermatic Machine Controller to communicate with the Tinkerbell stack. 
+- **Tinkerbell Hardware Objects**: Create Hardware Objects within Tinkerbell that represent each bare-metal server you want to provision as a worker node in your Kubernetes cluster. ## Usage @@ -53,9 +54,9 @@ In Tinkerbell, Hardware Objects represent your physical bare-metal servers. To s Before proceeding, ensure you gather the following information for each server: -* **Disk Devices**: Specify the available disk devices, including bootable storage. -* **Network Interfaces**: Define the network interfaces available on the server, including MAC addresses and interface names. -* **Network Configuration**: Configure the IP addresses, gateways, and DNS settings for the server's network setup. +- **Disk Devices**: Specify the available disk devices, including bootable storage. +- **Network Interfaces**: Define the network interfaces available on the server, including MAC addresses and interface names. +- **Network Configuration**: Configure the IP addresses, gateways, and DNS settings for the server's network setup. It’s essential to allow PXE booting and workflows for the provisioning process. This is done by ensuring the following settings in the hardware spec object: @@ -68,6 +69,7 @@ netboot: This configuration allows Tinkerbell to initiate network booting and enables iPXE to start the provisioning workflow for your bare-metal server. This is an example for Hardware Object Configuration + ```yaml apiVersion: tinkerbell.org/v1alpha1 kind: Hardware @@ -118,10 +120,10 @@ Once the MachineDeployment is created and reconciled, the provisioning workflow The Machine Controller generates the necessary actions for this workflow, which are then executed on the bare-metal server by the `tink-worker` container. The key actions include: -* **Wiping the Disk Devices**: All existing data on the disk will be erased to prepare for the new OS installation. -* **Installing the Operating System**: The specified OS image (e.g., Ubuntu 20.04 or 22.04) will be installed on the server. -* **Network Configuration**: The server’s network settings will be configured based on the Hardware Object and the defined network settings. -* **Cloud-init Propagation**: The Operating System Manager (OSM) will propagate the cloud-init settings to the node to ensure proper configuration of the OS and related services. +- **Wiping the Disk Devices**: All existing data on the disk will be erased to prepare for the new OS installation. +- **Installing the Operating System**: The specified OS image (e.g., Ubuntu 20.04 or 22.04) will be installed on the server. +- **Network Configuration**: The server’s network settings will be configured based on the Hardware Object and the defined network settings. +- **Cloud-init Propagation**: The Operating System Manager (OSM) will propagate the cloud-init settings to the node to ensure proper configuration of the OS and related services. Once the provisioning workflow is complete, the bare-metal server will be fully operational as a worker node in the Kubernetes cluster. @@ -131,4 +133,4 @@ Currently, the baremetal provider only support Ubuntu as an operating system. Mo ## Future Enhancements -Currently, the Baremetal provider requires users to manually create Hardware Objects in Tinkerbell and manually boot up bare-metal servers for provisioning. However, future improvements aim to automate these steps to make the process smoother and more efficient. 
The goal is to eliminate the need for manual intervention by automatically detecting hardware, creating the necessary objects, and initiating the provisioning process without user input. This will make the Baremetal provider more dynamic and scalable, allowing users to manage their infrastructure with even greater ease and flexibility. \ No newline at end of file +Currently, the Baremetal provider requires users to manually create Hardware Objects in Tinkerbell and manually boot up bare-metal servers for provisioning. However, future improvements aim to automate these steps to make the process smoother and more efficient. The goal is to eliminate the need for manual intervention by automatically detecting hardware, creating the necessary objects, and initiating the provisioning process without user input. This will make the Baremetal provider more dynamic and scalable, allowing users to manage their infrastructure with even greater ease and flexibility. diff --git a/content/kubermatic/v2.28/architecture/supported-providers/edge/_index.en.md b/content/kubermatic/v2.28/architecture/supported-providers/edge/_index.en.md index c37bccdad..4714e537e 100644 --- a/content/kubermatic/v2.28/architecture/supported-providers/edge/_index.en.md +++ b/content/kubermatic/v2.28/architecture/supported-providers/edge/_index.en.md @@ -13,6 +13,7 @@ staging environment for testing before. {{% /notice %}} ## Requirement + To leverage KKP's edge capabilities, you'll need to: * Provide Target Devices: Identify the edge devices you want to function as worker nodes within your Kubermatic user cluster. diff --git a/content/kubermatic/v2.28/architecture/supported-providers/kubevirt/_index.en.md b/content/kubermatic/v2.28/architecture/supported-providers/kubevirt/_index.en.md index d4ad27fac..08b15e1ff 100644 --- a/content/kubermatic/v2.28/architecture/supported-providers/kubevirt/_index.en.md +++ b/content/kubermatic/v2.28/architecture/supported-providers/kubevirt/_index.en.md @@ -14,16 +14,18 @@ weight = 5 ### Requirements A Kubernetes cluster (KubeVirt infrastructure cluster), which consists of nodes that **have a hardware virtualization support** with at least: -* 3 Bare Metal Server -* CPUs: Minimum 8-core for testing; minimum 16-core or more for production -* Memory: Minimum 32 GB for testing; minimum 64 GB or more for production -* Storage: Minimum 100 GB for testing; minimum 500 GB or more for production + +- 3 Bare Metal Server +- CPUs: Minimum 8-core for testing; minimum 16-core or more for production +- Memory: Minimum 32 GB for testing; minimum 64 GB or more for production +- Storage: Minimum 100 GB for testing; minimum 500 GB or more for production Software requirement: -* KubeOne = 1.7 or higher -* KubeOVN = 1.12 or higher or Canal = 3.26 or higher -* KubeVirt = 1.2.2 -* Containerized Data Importer (CDI) = v1.60 + +- KubeOne = 1.7 or higher +- KubeOVN = 1.12 or higher or Canal = 3.26 or higher +- KubeVirt = 1.2.2 +- Containerized Data Importer (CDI) = v1.60 The cluster version must be in the scope of [supported KKP Kubernetes clusters]({{< ref "../../../tutorials-howtos/operating-system-manager/compatibility/#kubernetes-versions" >}}) and it must be in the [KubeVirt Support Matrix](https://github.com/kubevirt/sig-release/blob/main/releases/k8s-support-matrix.md). @@ -37,6 +39,7 @@ Follow [KubeVirt](https://kubevirt.io/user-guide/operations/installation/#instal documentation to find out how to install them. 
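Before installing KubeVirt and CDI, it can save time to confirm that the infrastructure nodes really expose hardware virtualization. A minimal check, assuming a typical Linux node (the `virt-host-validate` tool is optional and ships with the libvirt client packages):

```bash
# Count CPU flags indicating Intel VT-x (vmx) or AMD-V (svm) support;
# a result of 0 means the node does not expose virtualization extensions.
grep -E -c '(vmx|svm)' /proc/cpuinfo

# Optional, more thorough check if the libvirt client tools are installed.
virt-host-validate qemu
```

Nodes without these extensions will generally not be able to run KubeVirt virtual machines in production setups.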
We require the following KubeVirt configuration: + ```yaml apiVersion: kubevirt.io/v1 kind: KubeVirt @@ -85,31 +88,31 @@ Currently, it is not recommended to use local or any topology constrained storag Once you have Kubernetes with all needed components, the last thing is to configure KubeVirt datacenter on seed. We allow to configure: -* `customNetworkPolicies` - Network policies that are deployed on the infrastructure cluster (where VMs run). - * Check [Network Policy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource) to see available options in the spec. - * Also check a [common services connectivity issue](#i-created-a-load-balancer-service-on-a-user-cluster-but-services-outside-cannot-reach-it) that can be solved by a custom network policy. -* `ccmZoneAndRegionEnabled` - Indicates if region and zone labels from the cloud provider should be fetched. This field is enabled by default and should be disabled if the infra kubeconfig that is provided for KKP has no permission to access cluster role resources such as node objects. -* `dnsConfig` and `dnsPolicy` - DNS config and policy which are set up on a guest. Defaults to `ClusterFirst`. - * You should set those fields when you suffer from DNS loop or collision issue. [Refer to this section for more details.](#i-discovered-a-dns-collision-on-my-cluster-why-does-it-happen) -* `images` - Images for Virtual Machines that are selectable from KKP dashboard. - * Set this field according to [supported operating systems]({{< ref "../../compatibility/os-support-matrix/" >}}) to make sure that users can select operating systems for their VMs. -* `infraStorageClasses` - Storage classes that are initialized on user clusters that end users can work with. - * `isDefaultClass` - If true, the created StorageClass in the tenant cluster will be annotated with. - * `labels` - Is a map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. - * `regions` - Represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. - * `volumeBindingMode` - indicates how PersistentVolumeClaims should be provisioned and bound. When unset, VolumeBindingImmediate is used. - * `volumeProvisioner` - The field specifies whether a storage class will be utilized by the infra cluster csi driver where the Containerized Data Importer (CDI) can use to create VM disk images or by the KubeVirt CSI Driver to provision volumes in the user cluster. If not specified, the storage class can be used as a VM disk image or user clusters volumes. - * `infra-csi-driver` - When set in the infraStorageClass, the storage class can be listed in the UI while creating the machine deployments and won't be available in the user cluster. - * `kubevirt-csi-driver` - When set in the infraStorageClass, the storage class won't be listed in the UI and will be available in the user cluster. - * `zones` - Represent a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. -* `namespacedMode(experimental)` - Represents the configuration for enabling the single namespace mode for all user-clusters in the KubeVirt datacenter. -* `vmEvictionStrategy` - Indicates the strategy to follow when a node drain occurs. If not set the default value is External and the VM will be protected by a PDB. 
Currently, we only support two strategies, `External` or `LiveMigrate`. - * `LiveMigrate`: the VirtualMachineInstance will be migrated instead of being shutdown. - * `External`: the VirtualMachineInstance will be protected by a PDB and `vmi.Status.EvacuationNodeName` will be set on eviction. This is mainly useful for machine-controller which needs a way for VMI's to be blocked from eviction, yet inform machine-controller that eviction has been called on the VMI, so it can handle tearing the VMI down. -* `csiDriverOperator` - Contains the KubeVirt CSI Driver Operator configurations, where users can override the default configurations of the csi driver. - * `overwriteRegistry`: overwrite the images registry for the csi driver daemonset that runs in the user cluster. -* `enableDedicatedCPUs` (deprecated) - Represents the configuration for virtual machine cpu assignment by using `domain.cpu` when set to `true` or using `resources.requests` and `resources.limits` when set to `false` which is the default -* `usePodResourcesCPU` - Represents the new way of configuring for cpu assignment virtual machine by using `domain.cpu` when set to `false` which is the default or using `resources.requests` and `resources.limits` when set to `true` +- `customNetworkPolicies` - Network policies that are deployed on the infrastructure cluster (where VMs run). + - Check [Network Policy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource) to see available options in the spec. + - Also check a [common services connectivity issue](#i-created-a-load-balancer-service-on-a-user-cluster-but-services-outside-cannot-reach-it) that can be solved by a custom network policy. +- `ccmZoneAndRegionEnabled` - Indicates if region and zone labels from the cloud provider should be fetched. This field is enabled by default and should be disabled if the infra kubeconfig that is provided for KKP has no permission to access cluster role resources such as node objects. +- `dnsConfig` and `dnsPolicy` - DNS config and policy which are set up on a guest. Defaults to `ClusterFirst`. + - You should set those fields when you suffer from DNS loop or collision issue. [Refer to this section for more details.](#i-discovered-a-dns-collision-on-my-cluster-why-does-it-happen) +- `images` - Images for Virtual Machines that are selectable from KKP dashboard. + - Set this field according to [supported operating systems]({{< ref "../../compatibility/os-support-matrix/" >}}) to make sure that users can select operating systems for their VMs. +- `infraStorageClasses` - Storage classes that are initialized on user clusters that end users can work with. + - `isDefaultClass` - If true, the created StorageClass in the tenant cluster will be annotated with. + - `labels` - Is a map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. + - `regions` - Represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. + - `volumeBindingMode` - indicates how PersistentVolumeClaims should be provisioned and bound. When unset, VolumeBindingImmediate is used. + - `volumeProvisioner` - The field specifies whether a storage class will be utilized by the infra cluster csi driver where the Containerized Data Importer (CDI) can use to create VM disk images or by the KubeVirt CSI Driver to provision volumes in the user cluster. 
If not specified, the storage class can be used as a VM disk image or user clusters volumes. + - `infra-csi-driver` - When set in the infraStorageClass, the storage class can be listed in the UI while creating the machine deployments and won't be available in the user cluster. + - `kubevirt-csi-driver` - When set in the infraStorageClass, the storage class won't be listed in the UI and will be available in the user cluster. + - `zones` - Represent a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. +- `namespacedMode(experimental)` - Represents the configuration for enabling the single namespace mode for all user-clusters in the KubeVirt datacenter. +- `vmEvictionStrategy` - Indicates the strategy to follow when a node drain occurs. If not set the default value is External and the VM will be protected by a PDB. Currently, we only support two strategies, `External` or `LiveMigrate`. + - `LiveMigrate`: the VirtualMachineInstance will be migrated instead of being shutdown. + - `External`: the VirtualMachineInstance will be protected by a PDB and `vmi.Status.EvacuationNodeName` will be set on eviction. This is mainly useful for machine-controller which needs a way for VMI's to be blocked from eviction, yet inform machine-controller that eviction has been called on the VMI, so it can handle tearing the VMI down. +- `csiDriverOperator` - Contains the KubeVirt CSI Driver Operator configurations, where users can override the default configurations of the csi driver. + - `overwriteRegistry`: overwrite the images registry for the csi driver daemonset that runs in the user cluster. +- `enableDedicatedCPUs` (deprecated) - Represents the configuration for virtual machine cpu assignment by using `domain.cpu` when set to `true` or using `resources.requests` and `resources.limits` when set to `false` which is the default +- `usePodResourcesCPU` - Represents the new way of configuring for cpu assignment virtual machine by using `domain.cpu` when set to `false` which is the default or using `resources.requests` and `resources.limits` when set to `true` {{% notice note %}} The `infraStorageClasses` pass names of KubeVirt storage classes that can be used from user clusters. @@ -140,6 +143,7 @@ only inside the cluster. You should use `customNetworkPolicies` to customize the Install [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) on the KubeVirt cluster. Then update `KubeVirt` configuration with the following spec: + ```yaml apiVersion: kubevirt.io/v1 kind: KubeVirt @@ -168,12 +172,14 @@ We provide a Virtual Machine templating functionality over [Instance Types and P You can use our standard Instance Types: -* standard-2 - 2 CPUs, 8Gi RAM -* standard-4 - 4 CPUs, 16Gi RAM -* standard-8 - 8 CPUs, 32Gi RAM + +- standard-2 - 2 CPUs, 8Gi RAM +- standard-4 - 4 CPUs, 16Gi RAM +- standard-8 - 8 CPUs, 32Gi RAM and Preferences (which are optional): -* sockets-advantage - cpu guest topology where number of cpus is equal to number of sockets + +- sockets-advantage - cpu guest topology where number of cpus is equal to number of sockets or you can just simply adjust the amount of CPUs and RAM of our default template according to your needs. @@ -183,6 +189,7 @@ instance types and preferences that users can select later. 
[Read how to add new ### Virtual Machine Scheduling KubeVirt can take advantage of Kubernetes inner features to provide an advanced scheduling mechanism to virtual machines (VMs): + - [Kubernetes topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) - [Kubernetes node affinity/anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) @@ -192,6 +199,7 @@ This allows you to restrict KubeVirt VMs ([see architecture](#architecture)) to {{% notice note %}} Note that topology spread constraints and node affinity presets are applicable to KubeVirt infra nodes. {{% /notice %}} + #### Default Scheduling Behavior Each Virtual Machine you create has default [topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) applied: @@ -202,7 +210,7 @@ topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway ``` -this allows us to spread Virtual Machine equally across a cluster. +This allows us to spread Virtual Machine equally across a cluster. #### Customize Scheduling Behavior @@ -215,6 +223,7 @@ You can do it by expanding *ADVANCED SCHEDULING SETTINGS* on the initial nodes d - `Node Affinity Preset Values` refers to the values of KubeVirt infra node labels. Node Affinity Preset type can be `hard` or `soft` and refers to the same notion of [Pod affinity/anti-affinity types](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#types-of-inter-pod-affinity-and-anti-affinity): + - `hard`: the scheduler can't schedule the VM unless the rule is met. - `soft`: the scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the VM. @@ -281,16 +290,17 @@ parameter of Machine Controller that sets the timeout for workload eviction. Usually it happens when both infrastructure and user clusters points to the same address of NodeLocal DNS Cache servers, even if they have separate server instances running. Let us imagine that: -* On the infrastructure cluster there is a running NodeLocal DNS Cache under 169.254.20.10 address. -* Then we create a new user cluster, start a few Virtual Machines that finally gives a fully functional k8s cluster that runs on another k8s cluster. -* Next we observe that on the user cluster there is another NodeLocal DNS Cache that has the same 169.254.20.10 address. -* Since Virtual Machine can have access to subnets on the infra and user clusters (depends on your network policy rules) having the same address of DNS cache leads to conflict. + +- On the infrastructure cluster there is a running NodeLocal DNS Cache under 169.254.20.10 address. +- Then we create a new user cluster, start a few Virtual Machines that finally gives a fully functional k8s cluster that runs on another k8s cluster. +- Next we observe that on the user cluster there is another NodeLocal DNS Cache that has the same 169.254.20.10 address. +- Since Virtual Machine can have access to subnets on the infra and user clusters (depends on your network policy rules) having the same address of DNS cache leads to conflict. One way to prevent that situation is to set a `dnsPolicy` and `dnsConfig` rules that Virtual Machines do not copy DNS configuration from their pods and points to different addresses. Follow [Configure KKP With KubeVirt](#configure-kkp-with-kubevirt) to learn how set DNS config correctly. 
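A quick way to check whether you are affected is to compare the NodeLocal DNS Cache address on both sides. This sketch assumes the usual `node-local-dns` DaemonSet in `kube-system` on each cluster and placeholder kubeconfig paths; adjust both to your environment:

```bash
# Address configured for NodeLocal DNS Cache on the KubeVirt infrastructure cluster ...
kubectl --kubeconfig=<infra-kubeconfig> -n kube-system get ds node-local-dns -o yaml | grep -A1 localip

# ... and on the user cluster created on top of it.
kubectl --kubeconfig=<user-cluster-kubeconfig> -n kube-system get ds node-local-dns -o yaml | grep -A1 localip
```

If both commands report the same link-local address (for example 169.254.20.10 as above), set `dnsPolicy` and `dnsConfig` in the KubeVirt datacenter as described so that the virtual machines stop copying the conflicting DNS configuration.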
-### I created a load balancer service on a user cluster but services outside cannot reach it. +### I created a load balancer service on a user cluster but services outside cannot reach it In most cases it is due to `cluster-isolation` network policy that is deployed as default on each user cluster. It only allows in-cluster communication. You should adjust network rules to your needs by adding [customNetworkPolicies configuration]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster/" >}})). @@ -326,16 +336,18 @@ Kubermatic Virtualization graduates to GA from KKP 2.22! On the way, we have changed many things that improved our implementation of KubeVirt Cloud Provider. Just to highlight the most important: -* Safe Virtual Machine workload eviction has been implemented. -* Virtual Machine templating is based on InstanceTypes and Preferences. -* KubeVirt CSI controller has been moved to control plane of a user cluster. -* Users can influence scheduling of VMs over topology spread constraints and node affinity presets. -* KubeVirt Cloud Controller Manager has been improved and optimized. -* Cluster admin can define the list of supported OS images and initialized storage classes. + +- Safe Virtual Machine workload eviction has been implemented. +- Virtual Machine templating is based on InstanceTypes and Preferences. +- KubeVirt CSI controller has been moved to control plane of a user cluster. +- Users can influence scheduling of VMs over topology spread constraints and node affinity presets. +- KubeVirt Cloud Controller Manager has been improved and optimized. +- Cluster admin can define the list of supported OS images and initialized storage classes. Additionally, we removed some features that didn't leave technology preview stage, those are: -* Custom Local Disks -* Secondary Disks + +- Custom Local Disks +- Secondary Disks {{% notice warning %}} The official upgrade procedure will not break clusters that already exist, however, **scaling cluster nodes will not lead to expected results**. @@ -358,12 +370,12 @@ Or if you provisioned the cluster over KubeOne please follow [the update procedu Next you can update KubeVirt control plane and Containerized Data Importer by executing: -```shell +```bash export RELEASE= kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml ``` -```shell +```bash export RELEASE= kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${RELEASE}/cdi-operator.yaml ``` diff --git a/content/kubermatic/v2.28/architecture/supported-providers/vmware-cloud-director/_index.en.md b/content/kubermatic/v2.28/architecture/supported-providers/vmware-cloud-director/_index.en.md index b2cf2e8ec..8b9eac150 100644 --- a/content/kubermatic/v2.28/architecture/supported-providers/vmware-cloud-director/_index.en.md +++ b/content/kubermatic/v2.28/architecture/supported-providers/vmware-cloud-director/_index.en.md @@ -10,9 +10,9 @@ weight = 7 Prerequisites for provisioning Kubernetes clusters with the KKP are as follows: 1. An Organizational Virtual Data Center (VDC). -2. `Edge Gateway` is required for connectivity with the internet, network address translation, and network firewall. -3. Organizational Virtual Data Center network is connected to the edge gateway. -4. Ensure that the distributed firewalls are configured in a way that allows traffic flow within and out of the VDC. +1. 
`Edge Gateway` is required for connectivity with the internet, network address translation, and network firewall. +1. Organizational Virtual Data Center network is connected to the edge gateway. +1. Ensure that the distributed firewalls are configured in a way that allows traffic flow within and out of the VDC. Kubermatic Kubernetes Platform (KKP) integration has been tested with `VMware Cloud Director 10.4`. @@ -57,7 +57,7 @@ spec: CSI driver settings can be configured at the cluster level when creating a cluster using UI or API. The following settings are required: 1. Storage Profile: Used for creating persistent volumes. -2. Filesystem: Filesystem to use for named disks. Allowed values are ext4 or xfs. +1. Filesystem: Filesystem to use for named disks. Allowed values are ext4 or xfs. ## Known Limitations diff --git a/content/kubermatic/v2.28/architecture/supported-providers/vsphere/_index.en.md b/content/kubermatic/v2.28/architecture/supported-providers/vsphere/_index.en.md index 9ae4d15cb..8b0fe217d 100644 --- a/content/kubermatic/v2.28/architecture/supported-providers/vsphere/_index.en.md +++ b/content/kubermatic/v2.28/architecture/supported-providers/vsphere/_index.en.md @@ -17,10 +17,9 @@ When creating worker nodes for a user cluster, the user can specify an existing ### Supported Operating Systems -* Ubuntu 20.04 [ova](https://cloud-images.ubuntu.com/releases/20.04/release/ubuntu-20.04-server-cloudimg-amd64.ova) -* Ubuntu 22.04 [ova](https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.ova) -* Flatcar (Stable channel) [ova](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vmware_ova.ova) - +- Ubuntu 20.04 [ova](https://cloud-images.ubuntu.com/releases/20.04/release/ubuntu-20.04-server-cloudimg-amd64.ova) +- Ubuntu 22.04 [ova](https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.ova) +- Flatcar (Stable channel) [ova](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vmware_ova.ova) ### Importing the OVA @@ -55,7 +54,7 @@ are needed to manage VMs, storage, networking and tags. The vSphere provider allows to split permissions into two sets of credentials: 1. Credentials passed to the [vSphere Cloud Controller Manager (CCM) and CSI Storage driver](#cloud-controller-manager-ccm--csi). These credentials are currently inherited into the user cluster and should therefore be individual per user cluster. This type of credentials can be passed when creating a user cluster or setting up a preset. -2. Credentials used for [creating and managing infrastructure](#infrastructure-management) (VMs, tags, networks). This set of credentials is not shared with the user cluster and is kept on the seed cluster. This type of credentials can either be passed in the Seed configuration ([.spec.datacenters.EXAMPLEDC.vpshere.infraManagementUser]({{< ref "../../../references/crds/#datacenterspecvsphere" >}})) for all user clusters created in this datacenter or individually while creating a user cluster. +1. Credentials used for [creating and managing infrastructure](#infrastructure-management) (VMs, tags, networks). This set of credentials is not shared with the user cluster and is kept on the seed cluster. 
This type of credentials can either be passed in the Seed configuration ([.spec.datacenters.EXAMPLEDC.vpshere.infraManagementUser]({{< ref "../../../references/crds/#datacenterspecvsphere" >}})) for all user clusters created in this datacenter or individually while creating a user cluster. If such a split is not desired, one set of credentials used for both use cases can be provided instead. Providing two sets of credentials is optional. @@ -64,6 +63,7 @@ If such a split is not desired, one set of credentials used for both use cases c The vsphere users has to have to following permissions on the correct resources. Note that if a shared set of credentials is used, roles for both use cases need to be assigned to the technical user which will be used for credentials. #### Cloud Controller Manager (CCM) / CSI + **Note:** Below roles were updated based on [vsphere-storage-plugin-roles] for external CCM which is available from kkp v2.18+ and vsphere v7.0.2+ For the Cloud Controller Manager (CCM) and CSI components used to provide cloud provider and storage integration to the user cluster, @@ -71,23 +71,25 @@ a technical user (e.g. `cust-ccm-cluster`) is needed. The user should be assigne {{< tabs name="CCM/CSI User Roles" >}} {{% tab name="k8c-ccm-storage-vmfolder-propagate" %}} + ##### Role `k8c-ccm-storage-vmfolder-propagate` -* Granted at **VM Folder** and **Template Folder**, propagated -* Permissions - * Virtual machine - * Change Configuration - * Add existing disk - * Add new disk - * Add or remove device - * Remove disk - * Folder - * Create folder - * Delete dolder + +- Granted at **VM Folder** and **Template Folder**, propagated +- Permissions + - Virtual machine + - Change Configuration + - Add existing disk + - Add new disk + - Add or remove device + - Remove disk + - Folder + - Create folder + - Delete dolder --- -``` -$ govc role.ls k8c-ccm-storage-vmfolder-propagate +```bash +govc role.ls k8c-ccm-storage-vmfolder-propagate Folder.Create Folder.Delete VirtualMachine.Config.AddExistingDisk @@ -95,50 +97,61 @@ VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.RemoveDisk ``` + {{% /tab %}} {{% tab name="k8c-ccm-storage-datastore-propagate" %}} + ##### Role `k8c-ccm-storage-datastore-propagate` -* Granted at **Datastore**, propagated -* Permissions - * Datastore - * Allocate space - * Low level file operations + +- Granted at **Datastore**, propagated +- Permissions + - Datastore + - Allocate space + - Low level file operations --- -``` -$ govc role.ls k8c-ccm-storage-datastore-propagate +```bash +govc role.ls k8c-ccm-storage-datastore-propagate Datastore.AllocateSpace Datastore.FileManagement ``` + {{% /tab %}} {{% tab name="k8c-ccm-storage-cns" %}} + ##### Role `k8c-ccm-storage-cns` -* Granted at **vcenter** level, not propagated -* Permissions - * CNS - * Searchable + +- Granted at **vcenter** level, not propagated +- Permissions + - CNS + - Searchable + --- -``` -$ govc role.ls k8c-ccm-storage-cns +```bash +govc role.ls k8c-ccm-storage-cns Cns.Searchable ``` + {{% /tab %}} {{% tab name="Read-only (predefined)" %}} + ##### Role `Read-only` (predefined) -* Granted at ..., **not** propagated - * Datacenter - * All hosts where the nodes VMs reside. + +- Granted at ..., **not** propagated + - Datacenter + - All hosts where the nodes VMs reside. --- -``` -$ govc role.ls ReadOnly +```bash +govc role.ls ReadOnly System.Anonymous System.Read System.View ``` + {{% /tab %}} {{< /tabs >}} @@ -148,33 +161,36 @@ For infrastructure (e.g. 
VMs, tags and networking) provisioning actions of KKP i {{< tabs name="Infrastructure Management" >}} {{% tab name="k8c-user-vcenter" %}} + ##### Role `k8c-user-vcenter` -* Granted at **vcenter** level, **not** propagated -* Needed to customize VM during provisioning -* Permissions - * CNS - * Searchable - * Profile-driven storage - * Profile-driven storage view - * VirtualMachine - * Provisioning - * Modify customization specification - * Read customization specifications - * vSphere Tagging - * Assign or Unassign vSphere Tag - * Assign or Unassign vSphere Tag on Object - * Create vSphere Tag - * Create vSphere Tag Category - * Delete vSphere Tag - * Delete vSphere Tag Category - * Edit vSphere Tag - * Edit vSphere Tag Category - * Modify UsedBy Field For Category - * Modify UsedBy Field For Tag + +- Granted at **vcenter** level, **not** propagated +- Needed to customize VM during provisioning +- Permissions + - CNS + - Searchable + - Profile-driven storage + - Profile-driven storage view + - VirtualMachine + - Provisioning + - Modify customization specification + - Read customization specifications + - vSphere Tagging + - Assign or Unassign vSphere Tag + - Assign or Unassign vSphere Tag on Object + - Create vSphere Tag + - Create vSphere Tag Category + - Delete vSphere Tag + - Delete vSphere Tag Category + - Edit vSphere Tag + - Edit vSphere Tag Category + - Modify UsedBy Field For Category + - Modify UsedBy Field For Tag + --- -``` -$ govc role.ls k8c-user-vcenter +```bash +govc role.ls k8c-user-vcenter Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory @@ -193,34 +209,37 @@ System.View VirtualMachine.Provisioning.ModifyCustSpecs VirtualMachine.Provisioning.ReadCustSpecs ``` + {{% /tab %}} {{% tab name="k8c-user-datacenter" %}} + ##### Role `k8c-user-datacenter` -* Granted at **datacenter** level, **not** propagated -* Needed for cloning the template VM (obviously this is not done in a folder at this time) -* Permissions - * Datastore - * Allocate space - * Browse datastore - * Low level file operations - * Remove file - * vApp - * vApp application configuration - * vApp instance configuration - * Virtual Machine - * Change Configuration - * Change CPU count - * Change Memory - * Change Settings - * Edit Inventory - * Create from existing - * vSphere Tagging - * Assign or Unassign vSphere Tag on Object + +- Granted at **datacenter** level, **not** propagated +- Needed for cloning the template VM (obviously this is not done in a folder at this time) +- Permissions + - Datastore + - Allocate space + - Browse datastore + - Low level file operations + - Remove file + - vApp + - vApp application configuration + - vApp instance configuration + - Virtual Machine + - Change Configuration + - Change CPU count + - Change Memory + - Change Settings + - Edit Inventory + - Create from existing + - vSphere Tagging + - Assign or Unassign vSphere Tag on Object --- -``` -$ govc role.ls k8c-user-datacenter +```bash +govc role.ls k8c-user-datacenter Datastore.AllocateSpace Datastore.Browse Datastore.DeleteFile @@ -236,40 +255,44 @@ VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Inventory.CreateFromExisting ``` + {{% /tab %}} {{% tab name="k8c-user-cluster-propagate" %}} -* Role `k8c-user-cluster-propagate` - * Granted at **cluster** level, propagated - * Needed for upload of `cloud-init.iso` (Ubuntu) or defining the Ignition config into Guestinfo (CoreOS) - * Permissions - * AutoDeploy - * Rule - * Create - * Delete - * Edit - * Folder - * 
Create folder - * Host - * Configuration - * Storage partition configuration - * System Management - * Local operations - * Reconfigure virtual machine - * Inventory - * Modify cluster - * Resource - * Assign virtual machine to resource pool - * Migrate powered off virtual machine - * Migrate powered on virtual machine - * vApp - * vApp application configuration - * vApp instance configuration - * vSphere Tagging - * Assign or Unassign vSphere Tag on Object + +##### Role `k8c-user-cluster-propagate` + +- Granted at **cluster** level, propagated +- Needed for upload of `cloud-init.iso` (Ubuntu) or defining the Ignition config into Guestinfo (CoreOS) +- Permissions + - AutoDeploy + - Rule + - Create + - Delete + - Edit + - Folder + - Create folder + - Host + - Configuration + - Storage partition configuration + - System Management + - Local operations + - Reconfigure virtual machine + - Inventory + - Modify cluster + - Resource + - Assign virtual machine to resource pool + - Migrate powered off virtual machine + - Migrate powered on virtual machine + - vApp + - vApp application configuration + - vApp instance configuration + - vSphere Tagging + - Assign or Unassign vSphere Tag on Object + --- -``` -$ govc role.ls k8c-user-cluster-propagate +```bash +govc role.ls k8c-user-cluster-propagate AutoDeploy.Rule.Create AutoDeploy.Rule.Delete AutoDeploy.Rule.Edit @@ -285,19 +308,23 @@ Resource.HotMigrate VApp.ApplicationConfig VApp.InstanceConfig ``` + {{% /tab %}} {{% tab name="k8c-network-attach" %}} + ##### Role `k8c-network-attach` -* Granted for each network that should be used (distributed switch + network) -* Permissions - * Network - * Assign network - * vSphere Tagging - * Assign or Unassign vSphere Tag on Object + +- Granted for each network that should be used (distributed switch + network) +- Permissions + - Network + - Assign network + - vSphere Tagging + - Assign or Unassign vSphere Tag on Object + --- -``` -$ govc role.ls k8c-network-attach +```bash +govc role.ls k8c-network-attach InventoryService.Tagging.ObjectAttachable Network.Assign System.Anonymous @@ -307,27 +334,30 @@ System.View {{% /tab %}} {{% tab name="k8c-user-datastore-propagate" %}} + ##### Role `k8c-user-datastore-propagate` -* Granted at **datastore / datastore cluster** level, propagated -* Also provides permission to create vSphere tags for a dedicated category, which are required by KKP seed controller manager -* Please note below points about tagging. + +- Granted at **datastore / datastore cluster** level, propagated +- Also provides permission to create vSphere tags for a dedicated category, which are required by KKP seed controller manager +- Please note below points about tagging. **Note**: If a category id is assigned to a user cluster, KKP would claim the ownership of any tags it creates. KKP would try to delete tags assigned to the cluster upon cluster deletion. Thus, make sure that the assigned category isn't shared across other lingering resources. **Note**: Tags can be attached to machine deployments regardless if the tags are created via KKP or not. If a tag was not attached to the user cluster, machine controller will only detach it. 
-* Permissions - * Datastore - * Allocate space - * Browse datastore - * Low level file operations - * vSphere Tagging - * Assign or Unassign vSphere Tag on an Object + +- Permissions + - Datastore + - Allocate space + - Browse datastore + - Low level file operations + - vSphere Tagging + - Assign or Unassign vSphere Tag on an Object --- -``` -$ govc role.ls k8c-user-datastore-propagate +```bash +govc role.ls k8c-user-datastore-propagate Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement @@ -336,34 +366,37 @@ System.Anonymous System.Read System.View ``` + {{% /tab %}} {{% tab name="k8c-user-folder-propagate" %}} + ##### Role `k8c-user-folder-propagate` -* Granted at **VM Folder** and **Template Folder** level, propagated -* Needed for managing the node VMs -* Permissions - * Folder - * Create folder - * Delete folder - * Global - * Set custom attribute - * Virtual machine - * Change Configuration - * Edit Inventory - * Guest operations - * Interaction - * Provisioning - * Snapshot management - * vSphere Tagging - * Assign or Unassign vSphere Tag - * Assign or Unassign vSphere Tag on an Object - * Create vSphere Tag - * Delete vSphere Tag + +- Granted at **VM Folder** and **Template Folder** level, propagated +- Needed for managing the node VMs +- Permissions + - Folder + - Create folder + - Delete folder + - Global + - Set custom attribute + - Virtual machine + - Change Configuration + - Edit Inventory + - Guest operations + - Interaction + - Provisioning + - Snapshot management + - vSphere Tagging + - Assign or Unassign vSphere Tag + - Assign or Unassign vSphere Tag on an Object + - Create vSphere Tag + - Delete vSphere Tag --- -``` -$ govc role.ls k8c-user-folder-propagate +```bash +govc role.ls k8c-user-folder-propagate Folder.Create Folder.Delete Global.SetCustomField @@ -459,20 +492,15 @@ VirtualMachine.State.RenameSnapshot VirtualMachine.State.RevertToSnapshot ``` + {{% /tab %}} {{< /tabs >}} - - - - - - The described permissions have been tested with vSphere 8.0.2 and might be different for other vSphere versions. ## Datastores and Datastore Clusters @@ -483,8 +511,8 @@ shared management interface. In KKP *Datastores* are used for two purposes: -* Storing the VMs files for the worker nodes of vSphere user clusters. -* Generating the vSphere cloud provider storage configuration for user clusters. +- Storing the VMs files for the worker nodes of vSphere user clusters. +- Generating the vSphere cloud provider storage configuration for user clusters. In particular to provide the `default-datastore` value, that is the default datastore for dynamic volume provisioning. @@ -494,24 +522,20 @@ specified directly in [vSphere cloud configuration][vsphere-cloud-config]. There are three places where Datastores and Datastore Clusters can be configured in KKP: -* At datacenter level (configured in the [Seed CRD]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}))) +- At datacenter level (configured in the [Seed CRD]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}))) it is possible to specify the default *Datastore* that will be used for user clusters dynamic volume provisioning and workers VMs placement in case no *Datastore* or *Datastore Cluster* is specified at cluster level. 
-* At *Cluster* level it is possible to provide either a *Datastore* or a +- At *Cluster* level it is possible to provide either a *Datastore* or a *Datastore Cluster* respectively with `spec.cloud.vsphere.datastore` and `spec.cloud.vsphere.datastoreCluster` fields. -* It is possible to specify *Datastore* or *Datastore Clusters* in a preset +- It is possible to specify *Datastore* or *Datastore Clusters* in a preset than is later used to create a user cluster from it. These settings can also be configured as part of the "Advanced Settings" step when creating a user cluster from the [KKP dashboard]({{< ref "../../../tutorials-howtos/project-and-cluster-management/#create-cluster" >}}). -[vsphere-cloud-config]: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-BFF39F1D-F70A-4360-ABC9-85BDAFBE8864.html?hWord=N4IghgNiBcIMYQK4GcAuBTATgWgJYBMACAYQGUBJEAXyA -[vsphere-storage-plugin-roles]: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-0AB6E692-AA47-4B6A-8CEA-38B754E16567.html#GUID-043ACF65-9E0B-475C-A507-BBBE2579AA58__GUID-E51466CB-F1EA-4AD7-A541-F22CDC6DE881 - - ## Known Issues ### Volume Detach Bug @@ -520,24 +544,24 @@ After a node is powered-off, the Kubernetes vSphere driver doesn't detach disks Upstream Kubernetes has been working on the issue for a long time now and tracking it under the following tickets: -* -* -* -* -* +- +- +- +- +- ## Internal Kubernetes endpoints unreachable ### Symptoms -* Unable to perform CRUD operations on resources governed by webhooks (e.g. ValidatingWebhookConfiguration, MutatingWebhookConfiguration, etc.). The following error is observed: +- Unable to perform CRUD operations on resources governed by webhooks (e.g. ValidatingWebhookConfiguration, MutatingWebhookConfiguration, etc.). The following error is observed: -```sh +```bash Internal error occurred: failed calling webhook "webhook-name": failed to call webhook: Post "https://webhook-service-name.namespace.svc:443/webhook-endpoint": context deadline exceeded ``` -* Unable to reach internal Kubernetes endpoints from pods/nodes. -* ICMP is working but TCP/UDP is not. +- Unable to reach internal Kubernetes endpoints from pods/nodes. +- ICMP is working but TCP/UDP is not. ### Cause @@ -545,7 +569,7 @@ On recent enough VMware hardware compatibility version (i.e >=15 or maybe >=14), ### Solution -```sh +```bash sudo ethtool -K ens192 tx-udp_tnl-segmentation off sudo ethtool -K ens192 tx-udp_tnl-csum-segmentation off ``` @@ -554,10 +578,13 @@ These flags are related to the hardware segmentation offload done by the vSphere We have two options to configure these flags for KKP installations: -* When configuring the VM template, set these flags as well. -* Create a [custom Operating System Profile]({{< ref "../../../tutorials-howtos/operating-system-manager/usage#custom-operatingsystemprofiles" >}}) and configure the flags there. +- When configuring the VM template, set these flags as well. +- Create a [custom Operating System Profile]({{< ref "../../../tutorials-howtos/operating-system-manager/usage#custom-operatingsystemprofiles" >}}) and configure the flags there. 
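Whichever option you choose, you can verify the effective state of the offload features directly on a worker node. A small sketch, assuming the interface is named `ens192` as in the commands above:

```bash
# Show the current state of the UDP tunnel segmentation offloads;
# after applying the workaround both should be reported as "off".
ethtool -k ens192 | grep tx-udp_tnl
```

Keep in mind that `ethtool -K` changes are not persistent across reboots, which is why baking the flags into the VM template or into a custom OperatingSystemProfile is the more durable option.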
### References -* -* +- +- + +[vsphere-cloud-config]: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-BFF39F1D-F70A-4360-ABC9-85BDAFBE8864.html?hWord=N4IghgNiBcIMYQK4GcAuBTATgWgJYBMACAYQGUBJEAXyA +[vsphere-storage-plugin-roles]: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-0AB6E692-AA47-4B6A-8CEA-38B754E16567.html#GUID-043ACF65-9E0B-475C-A507-BBBE2579AA58__GUID-E51466CB-F1EA-4AD7-A541-F22CDC6DE881 diff --git a/content/kubermatic/v2.28/cheat-sheets/etcd/etcd-launcher/_index.en.md b/content/kubermatic/v2.28/cheat-sheets/etcd/etcd-launcher/_index.en.md index a01669d2a..2102b076b 100644 --- a/content/kubermatic/v2.28/cheat-sheets/etcd/etcd-launcher/_index.en.md +++ b/content/kubermatic/v2.28/cheat-sheets/etcd/etcd-launcher/_index.en.md @@ -12,12 +12,12 @@ API and flexibly control how the user cluster etcd ring is started. - **v2.19.0**: Peer TLS connections have been added to etcd-launcher. - **v2.22.0**: `EtcdLauncher` feature gate is enabled by default in `KubermaticConfiguration`. - ## Comparison to static etcd Prior to v2.15.0, user cluster etcd ring was based on a static StatefulSet with 3 pods running the etcd ring nodes. With `etcd-launcher`, the etcd `StatefulSet` is updated to include: + - An init container that is responsible for copying the etcd-launcher into the main etcd pod. - Additional environment variables used by the etcd-launcher and etcdctl binary for simpler operations. - A liveness probe to improve stability. @@ -58,6 +58,7 @@ spec: If the feature gate was disabled explicitly, etcd Launcher can still be configured for individual user clusters. ### Enabling etcd Launcher + In this mode, the feature is only enabled for a specific user cluster. This can be done by editing the object cluster and enabling the feature gate for `etcdLauncher`: diff --git a/content/kubermatic/v2.28/cheat-sheets/vsphere-cluster-id/_index.en.md b/content/kubermatic/v2.28/cheat-sheets/vsphere-cluster-id/_index.en.md index a1051f238..88a0411ab 100644 --- a/content/kubermatic/v2.28/cheat-sheets/vsphere-cluster-id/_index.en.md +++ b/content/kubermatic/v2.28/cheat-sheets/vsphere-cluster-id/_index.en.md @@ -43,15 +43,16 @@ The following steps should be done in the **seed cluster** for each vSphere user cluster. + First, get all user clusters and filter vSphere user clusters using `grep`: -```shell +```bash kubectl --kubeconfig= get clusters | grep vsphere ``` You should get output similar to the following: -``` +```bash NAME HUMANREADABLENAME OWNER VERSION PROVIDER DATACENTER PHASE PAUSED AGE s8kkpcccfq focused-spence test@kubermatic.com 1.23.8 vsphere your-dc Running false 16h ``` @@ -60,14 +61,14 @@ s8kkpcccfq focused-spence test@kubermatic.com 1.23.8 vspher `s8kkpcccfq`) and inspect the vSphere CSI cloud-config to check value of the `cluster-id` field. -```shell +```bash kubectl --kubeconfig= get configmap -n cluster- cloud-config-csi -o yaml ``` The following excerpt shows the most important part of the output. You need to locate the `cluster-id` field under the `[Global]` group. -``` +```yaml apiVersion: v1 data: config: |+ @@ -102,8 +103,9 @@ The second approach assumes changing `cluster-id` without stopping the CSI driver. This approach is **not documented** by VMware, however, it worked in our environment. In this case, there's no significant downtime. 
Since this approach is not documented by VMware, we **heavily advise** that you: - - follow the first approach - - if you decide to follow this approach, make sure to extensively test it in + + * follow the first approach + * if you decide to follow this approach, make sure to extensively test it in a staging/testing environment before applying it in the production ### Approach 1 (recommended) @@ -141,7 +143,7 @@ user cluster. First, pause affected user clusters by running the following command in the **seed cluster** for **each affected** user cluster: -```shell +```bash clusterPatch='{"spec":{"pause":true,"features":{"vsphereCSIClusterID":true}}}' kubectl --kubeconfig= patch cluster --type=merge -p $clusterPatch ... @@ -151,7 +153,7 @@ kubectl --kubeconfig= patch cluster --type=merge Once done, scale down the vSphere CSI driver deployment in **each affected user cluster**: -```shell +```bash kubectl --kubeconfig= scale deployment -n kube-system vsphere-csi-controller --replicas=0 ... kubectl --kubeconfig= scale deployment -n kube-system vsphere-csi-controller --replicas=0 @@ -190,13 +192,13 @@ config and update the Secret. The following command reads the config stored in the Secret, decodes it and then saves it to a file called `cloud-config-csi`: -```shell +```bash kubectl --kubeconfig= get secret -n kube-system cloud-config-csi -o=jsonpath='{.data.config}' | base64 -d > cloud-config-csi ``` Open the `cloud-config-csi` file in some text editor: -```shell +```bash vi cloud-config-csi ``` @@ -205,7 +207,7 @@ locate the `cluster-id` field under the `[Global]` group, and replace `` with the name of your user cluster (e.g. `s8kkpcccfq`). -``` +```yaml [Global] user = "username" password = "password" @@ -218,13 +220,13 @@ cluster-id = "" Save the file, exit your editor, and then encode the file: -```shell +```bash cat cloud-config-csi | base64 -w0 ``` Copy the encoded output and run the following `kubectl edit` command: -```shell +```bash kubectl --kubeconfig= edit secret -n kube-system cloud-config-csi ``` @@ -268,7 +270,7 @@ the `cluster-id` value to the name of the user cluster. Run the following `kubectl edit` command. Replace `` in the command with the name of user cluster (e.g. `s8kkpcccfq`). -```shell +```bash kubectl --kubeconfig= edit configmap -n cluster- cloud-config-csi ``` @@ -303,7 +305,7 @@ to vSphere to de-register all volumes. cluster. The `vsphereCSIClusterID` feature flag enabled at the beginning ensures that your `cluster-id` changes are persisted once the clusters are unpaused. -```shell +```bash clusterPatch='{"spec":{"pause":false}}' kubectl patch cluster --type=merge -p $clusterPatch ... @@ -351,7 +353,7 @@ Start with patching the Cluster object for **each affected** user clusters to enable the `vsphereCSIClusterID` feature flag. Enabling this feature flag automatically changes the `cluster-id` value to the cluster name. -```shell +```bash clusterPatch='{"spec":{"features":{"vsphereCSIClusterID":true}}}' kubectl patch cluster --type=merge -p $clusterPatch ... @@ -375,7 +377,7 @@ the seed cluster **AND** the `cloud-config-csi` Secret in the user cluster the ConfigMap in the user cluster namespace in seed cluster, and the second commands reads the config from the Secret in the user cluster. 
-```shell +```bash kubectl --kubeconfig= get configmap -n cluster- cloud-config-csi kubectl --kubeconfig= get secret -n kube-system cloud-config-csi -o jsonpath='{.data.config}' | base64 -d ``` @@ -383,7 +385,7 @@ kubectl --kubeconfig= get secret -n kube-system cloud-c Both the Secret and the ConfigMap should have config with `cluster-id` set to the user cluster name (e.g. `s8kkpcccfq`). -``` +```yaml [Global] user = "username" password = "password" @@ -402,7 +404,7 @@ to the next section. Finally, restart the vSphere CSI controller pods in the **each affected user cluster** to put those changes in the effect: -```shell +```bash kubectl --kubeconfig= delete pods -n kube-system -l app=vsphere-csi-controller ... kubectl --kubeconfig= delete pods -n kube-system -l app=vsphere-csi-controller diff --git a/content/kubermatic/v2.28/how-to-contribute/_index.en.md b/content/kubermatic/v2.28/how-to-contribute/_index.en.md index 3136c9988..7f5ba5e0a 100644 --- a/content/kubermatic/v2.28/how-to-contribute/_index.en.md +++ b/content/kubermatic/v2.28/how-to-contribute/_index.en.md @@ -12,34 +12,34 @@ KKP is an open-source project to centrally manage the global automation of thous There are few things to note when contributing to the KKP project, which are highlighted below: -* KKP project is hosted on GitHub; thus, GitHub knowledge is one of the essential pre-requisites -* The KKP documentation is written in markdown (.md) and located in the [docs repository](https://github.com/kubermatic/docs/tree/main/content/kubermatic) -* See [CONTRIBUTING.md](https://github.com/kubermatic/kubermatic/blob/main/CONTRIBUTING.md) for instructions on the developer certificate of origin that we require -* Familiarization with Hugo for building static site locally is suggested for documentation contribution -* Kubernetes knowledge is also recommended -* The KKP documentation is currently available only in English -* We have a simple code of conduct that should be adhered to +- KKP project is hosted on GitHub; thus, GitHub knowledge is one of the essential pre-requisites +- The KKP documentation is written in markdown (.md) and located in the [docs repository](https://github.com/kubermatic/docs/tree/main/content/kubermatic) +- See [CONTRIBUTING.md](https://github.com/kubermatic/kubermatic/blob/main/CONTRIBUTING.md) for instructions on the developer certificate of origin that we require +- Familiarization with Hugo for building static site locally is suggested for documentation contribution +- Kubernetes knowledge is also recommended +- The KKP documentation is currently available only in English +- We have a simple code of conduct that should be adhered to ## Steps in Contributing to KKP -* Please familiarise yourself with our [Code of Conduct](https://github.com/kubermatic/kubermatic/blob/main/CODE_OF_CONDUCT.md) -* Check the [opened issues](https://github.com/kubermatic/kubermatic/issues) on our GitHub repo peradventure there might be anyone that will be of interest -* Fork the repository on GitHub -* Read the [README](https://github.com/kubermatic/kubermatic/blob/main/README.md) for build and test instructions +- Please familiarise yourself with our [Code of Conduct](https://github.com/kubermatic/kubermatic/blob/main/CODE_OF_CONDUCT.md) +- Check the [opened issues](https://github.com/kubermatic/kubermatic/issues) on our GitHub repo peradventure there might be anyone that will be of interest +- Fork the repository on GitHub +- Read the [README](https://github.com/kubermatic/kubermatic/blob/main/README.md) for build and 
test instructions ## Contribution Workflow The below outlines show an example of what a contributor's workflow looks like: -* Fork the repository on GitHub -* Create a topic branch from where you want to base your work (usually main) -* Make commits of logical units. -* Make sure your commit messages are in the proper format -* Push your changes to the topic branch in your fork repository -* Make sure the tests pass and add any new tests as appropriate -* Submit a pull request to the original repository -* Assign a reviewer if you wish and wait for the PR to be reviewed -* If everything works fine, your PR will be merged into the project's main branch +- Fork the repository on GitHub +- Create a topic branch from where you want to base your work (usually main) +- Make commits of logical units. +- Make sure your commit messages are in the proper format +- Push your changes to the topic branch in your fork repository +- Make sure the tests pass and add any new tests as appropriate +- Submit a pull request to the original repository +- Assign a reviewer if you wish and wait for the PR to be reviewed +- If everything works fine, your PR will be merged into the project's main branch Congratulations! You have successfully contributed to the KKP project. diff --git a/content/kubermatic/v2.28/installation/install-kkp-ce/_index.en.md b/content/kubermatic/v2.28/installation/install-kkp-ce/_index.en.md index 18b70c3bc..2e8afcbb6 100644 --- a/content/kubermatic/v2.28/installation/install-kkp-ce/_index.en.md +++ b/content/kubermatic/v2.28/installation/install-kkp-ce/_index.en.md @@ -35,6 +35,7 @@ For this guide you need to have [kubectl](https://kubernetes.io/docs/tasks/tools You should be familiar with core Kubernetes concepts and the YAML file format before proceeding. + In addition, we recommend familiarizing yourself with the resource quota system of your infrastructure provider. It is important to provide enough capacity to let KKP provision infrastructure for your future user clusters, but also to enforce a maximum to protect against overspending. {{< tabs name="resource-quotas" >}} @@ -132,14 +133,14 @@ The release archive hosted on GitHub contains examples for both of the configura The key items to consider while preparing your configuration files are described in the table below. -| Description | YAML Paths and File | -| ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------- | -| The base domain under which KKP shall be accessible (e.g. `kkp.example.com`). | `.spec.ingress.domain` (`kubermatic.yaml`), `.dex.ingress.hosts[0].host` and `dex.ingress.tls[0].hosts[0]` (`values.yaml`); also adjust `.dex.config.staticClients[*].RedirectURIs` (`values.yaml`) according to your domain. | -| The certificate issuer for KKP (KKP requires it since the dashboard and Dex are accessible only via HTTPS); by default cert-manager is used, but you have to reference an issuer that you need to create later on. | `.spec.ingress.certificateIssuer.name` (`kubermatic.yaml`) | -| For proper authentication, shared secrets must be configured between Dex and KKP. Likewise, Dex uses yet another random secret to encrypt cookies stored in the users' browsers. 
| `.dex.config.staticClients[*].secret` (`values.yaml`), `.spec.auth.issuerClientSecret` (`kubermatic.yaml`); this needs to be equal to `.dex.config.staticClients[name=="kubermaticIssuer"].secret` (`values.yaml`), `.spec.auth.issuerCookieKey` and `.spec.auth.serviceAccountKey` (both `kubermatic.yaml`) | -| To authenticate via an external identity provider, you need to set up connectors in Dex. Check out [the Dex documentation](https://dexidp.io/docs/connectors/) for a list of available providers. This is not required, but highly recommended for multi-user installations. | `.dex.config.connectors` (`values.yaml`; commented in example file) | -| The expose strategy which controls how control plane components of a User Cluster are exposed to worker nodes and users. See [the expose strategy documentation]({{< ref "../../tutorials-howtos/networking/expose-strategies/" >}}) for available options. Defaults to `NodePort` strategy, if not set. | `.spec.exposeStrategy` (`kubermatic.yaml`; not included in example file) | -| Telemetry used to track the KKP and k8s cluster usage, uuid field is required and will print an error message when that entry is missing. | `.telemetry.uuid` (`values.yaml`) | +| Description | YAML Paths and File | +| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| The base domain under which KKP shall be accessible (e.g. `kkp.example.com`). | `.spec.ingress.domain` (`kubermatic.yaml`), `.dex.ingress.hosts[0].host` and `dex.ingress.tls[0].hosts[0]` (`values.yaml`); also adjust `.dex.config.staticClients[*].RedirectURIs` (`values.yaml`) according to your domain. | +| The certificate issuer for KKP (KKP requires it since the dashboard and Dex are accessible only via HTTPS); by default cert-manager is used, but you have to reference an issuer that you need to create later on. | `.spec.ingress.certificateIssuer.name` (`kubermatic.yaml`) | +| For proper authentication, shared secrets must be configured between Dex and KKP. Likewise, Dex uses yet another random secret to encrypt cookies stored in the users' browsers. | `.dex.config.staticClients[*].secret` (`values.yaml`), `.spec.auth.issuerClientSecret` (`kubermatic.yaml`); this needs to be equal to `.dex.config.staticClients[name=="kubermaticIssuer"].secret` (`values.yaml`), `.spec.auth.issuerCookieKey` and `.spec.auth.serviceAccountKey` (both `kubermatic.yaml`) | +| To authenticate via an external identity provider, you need to set up connectors in Dex. Check out [the Dex documentation](https://dexidp.io/docs/connectors/) for a list of available providers. This is not required, but highly recommended for multi-user installations. | `.dex.config.connectors` (`values.yaml`; commented in example file) | +| The expose strategy which controls how control plane components of a User Cluster are exposed to worker nodes and users. See [the expose strategy documentation]({{< ref "../../tutorials-howtos/networking/expose-strategies/" >}}) for available options. 
Defaults to `NodePort` strategy, if not set. | `.spec.exposeStrategy` (`kubermatic.yaml`; not included in example file) | +| Telemetry used to track the KKP and k8s cluster usage, uuid field is required and will print an error message when that entry is missing. | `.telemetry.uuid` (`values.yaml`) | There are many more options, but these are essential to get a minimal system up and running. A full reference of all options can be found in the [KubermaticConfiguration Reference]({{< relref "../../references/crds/#kubermaticconfigurationspec" >}}). The secret keys mentioned above can be generated using any password generator or on the shell using diff --git a/content/kubermatic/v2.28/installation/install-kkp-ce/add-seed-cluster/_index.en.md b/content/kubermatic/v2.28/installation/install-kkp-ce/add-seed-cluster/_index.en.md index c0aa6b013..f1677601f 100644 --- a/content/kubermatic/v2.28/installation/install-kkp-ce/add-seed-cluster/_index.en.md +++ b/content/kubermatic/v2.28/installation/install-kkp-ce/add-seed-cluster/_index.en.md @@ -29,9 +29,9 @@ about the cluster relationships. In this chapter, you will find the following KKP-specific terms: -* **Master Cluster** -- A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster. -* **Seed Cluster** -- A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster. -* **User Cluster** -- A Kubernetes cluster created and managed by KKP, hosting applications managed by users. +- **Master Cluster** -- A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster. +- **Seed Cluster** -- A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster. +- **User Cluster** -- A Kubernetes cluster created and managed by KKP, hosting applications managed by users. ## Overview @@ -82,6 +82,7 @@ a separate storage class with a different location/security level. The following {{< tabs name="StorageClass Creation" >}} {{% tab name="AWS" %}} + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -91,8 +92,10 @@ provisioner: kubernetes.io/aws-ebs parameters: type: sc1 ``` + {{% /tab %}} {{% tab name="Azure" %}} + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -103,8 +106,10 @@ parameters: kind: Managed storageaccounttype: Standard_LRS ``` + {{% /tab %}} {{% tab name="GCP" %}} + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -114,8 +119,10 @@ provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd ``` + {{% /tab %}} {{% tab name="vSphere" %}} + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -123,8 +130,10 @@ metadata: name: kubermatic-backup provisioner: csi.vsphere.vmware.com ``` + {{% /tab %}} {{% tab name="Other Providers" %}} + For other providers, please refer to the respective CSI driver documentation. It should guide you through setting up a `StorageClass`. Ensure that the `StorageClass` you create is named `kubermatic-backup`. 
The final resource should look something like this: ```yaml @@ -139,6 +148,7 @@ parameters: parameter1: value1 parameter2: value2 ``` + {{% /tab %}} {{< /tabs >}} @@ -369,7 +379,7 @@ Key considerations for creating your `Seed` resource are: ### Configure Datacenters -Each `Seed` has a map of so-called _Datacenters_ (under `.spec.datacenters`), which define the cloud +Each `Seed` has a map of so-called *Datacenters* (under `.spec.datacenters`), which define the cloud provider locations that User Clusters can be deployed to. Every datacenter name is globally unique in a KKP setup. Users will select from a list of datacenters when creating User Clusters and their clusters will automatically get scheduled to the seed that defines that datacenter. @@ -380,6 +390,7 @@ datacenters: {{< tabs name="Datacenter Examples" >}} {{% tab name="AWS" %}} + ```yaml # Datacenter for AWS 'eu-central-1' region aws-eu-central-1a: @@ -396,8 +407,10 @@ aws-eu-west-1a: aws: region: eu-west-1 ``` + {{% /tab %}} {{% tab name="Azure" %}} + ```yaml # Datacenter for Azure 'westeurope' location azure-westeurope: @@ -407,8 +420,10 @@ azure-westeurope: azure: location: westeurope ``` + {{% /tab %}} {{% tab name="GCP" %}} + ```yaml # Datacenter for GCP 'europe-west3' region # this is configured to use three availability zones and spread cluster resources across them @@ -421,8 +436,10 @@ gce-eu-west-3: regional: true zoneSuffixes: [a,b,c] ``` + {{% /tab %}} {{% tab name="vSphere" %}} + ```yaml # Datacenter for a vSphere setup available under https://vsphere.hamburg.example.com vsphere-hamburg: @@ -438,10 +455,13 @@ vsphere-hamburg: templates: ubuntu: ubuntu-20.04-server-cloudimg-amd64 ``` + {{% /tab %}} {{% tab name="Other Providers" %}} + For additional providers supported by KKP, please check out our [DatacenterSpec CRD documentation]({{< ref "../../../references/crds/#datacenterspec" >}}) for the respective provider you want to use. + {{% /tab %}} {{< /tabs >}} @@ -535,6 +555,7 @@ kubectl apply -f seed-with-secret.yaml #Secret/kubeconfig-kubermatic created. #Seed/kubermatic created. ``` + You can watch the progress by using `kubectl` and `watch` on the master cluster: ```bash @@ -543,7 +564,7 @@ watch kubectl -n kubermatic get seeds #kubermatic 0 Hamburg v2.21.2 v1.24.8 Healthy 5m ``` -Watch the `PHASE` column until it shows "_Healthy_". If it does not after a couple of minutes, you can check +Watch the `PHASE` column until it shows "*Healthy*". If it does not after a couple of minutes, you can check the `kubermatic` namespace on the new seed cluster and verify if there are any Pods showing signs of issues: ```bash diff --git a/content/kubermatic/v2.28/installation/install-kkp-ee/add-seed-cluster/_index.en.md b/content/kubermatic/v2.28/installation/install-kkp-ee/add-seed-cluster/_index.en.md index 86ca0cdcb..8874885ca 100644 --- a/content/kubermatic/v2.28/installation/install-kkp-ee/add-seed-cluster/_index.en.md +++ b/content/kubermatic/v2.28/installation/install-kkp-ee/add-seed-cluster/_index.en.md @@ -18,9 +18,9 @@ Please [contact sales](mailto:sales@kubermatic.com) to receive your credentials. In this chapter, you will find the following KKP-specific terms: -* **Master Cluster** -- A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster. 
-* **Seed Cluster** -- A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster.
-* **User Cluster** -- A Kubernetes cluster created and managed by KKP, hosting applications managed by users.
+- **Master Cluster** -- A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster.
+- **Seed Cluster** -- A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster.
+- **User Cluster** -- A Kubernetes cluster created and managed by KKP, hosting applications managed by users.

It is also recommended to make yourself familiar with our [architecture documentation]({{< ref "../../../architecture/" >}}).
diff --git a/content/kubermatic/v2.28/installation/local-installation/_index.en.md b/content/kubermatic/v2.28/installation/local-installation/_index.en.md
index 122caeb8b..b8463282c 100644
--- a/content/kubermatic/v2.28/installation/local-installation/_index.en.md
+++ b/content/kubermatic/v2.28/installation/local-installation/_index.en.md
@@ -80,7 +80,6 @@ tar -xzvf "kubermatic-${KUBERMATIC_EDITION}-v${VERSION}-darwin-${ARCH}.tar.gz"

You can find more information regarding the download instructions in the [CE installation guide](../install-kkp-ce/#download-the-installer) or [EE installation guide](../install-kkp-ee/#download-the-installer).

**2. Provide the image pull secret (EE)**
-
This step is only required if you are using the enterprise edition installer. Replace `${AUTH_TOKEN}` with the Docker authentication JSON provided by Kubermatic and run the following command:

```bash
@@ -135,8 +134,8 @@ By default, KubeVirt is configured to use hardware virtualization. If this is no

On Linux, KubeVirt uses the inode notify kernel subsystem `inotify` to watch for changes in certain files. Usually you shouldn't need to configure this but in case you can observe the `virt-handler` failing with

-```
-kubectl log -nkubevirt ds/virt-handler
+```bash
+kubectl logs -n kubevirt ds/virt-handler
...
{"component":"virt-handler","level":"fatal","msg":"Failed to create an inotify watcher","pos":"cert-manager.go:105","reason":"too many open files","timestamp":"2023-06-22T09:58:24.284130Z"}
```
diff --git a/content/kubermatic/v2.28/installation/offline-mode/_index.en.md b/content/kubermatic/v2.28/installation/offline-mode/_index.en.md
index 5b182f9c7..fb9f63c7a 100644
--- a/content/kubermatic/v2.28/installation/offline-mode/_index.en.md
+++ b/content/kubermatic/v2.28/installation/offline-mode/_index.en.md
@@ -23,13 +23,13 @@ without Docker.

There are a number of sources for container images used in a KKP setup:

-* The container images used by KKP itself (e.g. `quay.io/kubermatic/kubermatic`)
-* The images used by the various Helm charts used to deploy KKP (nginx, cert-manager, Grafana, ...)
-* The images used for creating a user cluster control plane (the Kubernetes apiserver, scheduler, metrics-server, ...).
-* The images referenced by cluster [Addons]({{< ref "../../architecture/concept/kkp-concepts/addons/" >}}).
-* The images referenced in system [Applications]({{< ref "../../tutorials-howtos/applications/" >}}). +- The images referenced by cluster [Addons]({{< ref "../../architecture/concept/kkp-concepts/addons/" >}}). +- The images referenced in system [Applications]({{< ref "../../tutorials-howtos/applications/" >}}). To make it easier to collect all required images, the `kubermatic-installer mirror-images` utility is provided. It will scan KKP source code and Helm charts included in a KKP release to determine all images that need to be mirrored. @@ -93,7 +93,6 @@ pass `--registry-prefix 'docker.io'` to `kubermatic-installer mirror-images`. ### Addons - Note that by default, `kubermatic-installer mirror-images` will determine the addons container image based on the `KubermaticConfiguration` file, pull it down and then extract the addon manifests from the image, so that it can then scan them for container images to mirror. @@ -117,6 +116,7 @@ you should pass the `--addons-image` flag instead to reference a non-standard ad The `mirrorImages` field in the `KubermaticConfiguration` allows you to specify additional container images to mirror during the `kubermatic-installer mirror-images` command, simplifying air-gapped setups. Example: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: KubermaticConfiguration @@ -130,7 +130,8 @@ spec: ## Mirroring Binaries -The `kubermatic-installer mirror-binaries` command is designed to **mirror and host essential binaries** required by the Operating System Profiles for provisioning user clusters in **offline/airgapped environments**. This includes critical components like: +The `kubermatic-installer mirror-binaries` command is designed to **mirror and host essential binaries** required by the Operating System Profiles for provisioning user clusters in **offline/airgapped environments**. This includes critical components like: + - **Kubernetes binaries**: `kubeadm`, `kubelet`, `kubectl` - **CNI plugins** (e.g., bridge, ipvlan, loopback, macvlan, etc) - **CRI tools** (e.g., `crictl`) @@ -142,7 +143,8 @@ The default output directory (`/usr/share/nginx/html/`) requires root permission ### Key Features -#### Mirrors Original Domain Structure: +#### Mirrors Original Domain Structure + Binaries are stored in the **exact directory hierarchy** as their original domains (e.g., `containernetworking/plugins/releases/v1.5.1/...`). This allows **DNS-based redirection** of domains like `github.com` or `k8s.gcr.io` to your local/offline server, ensuring the OSP fetches binaries from the mirrored paths **without URL reconfiguration** or **Operating System Profile** changes. ### Example Workflow @@ -162,7 +164,7 @@ INFO[0033] ✅ Finished loading images. ### Example of the Directory Structure -``` +```bash . ├── containernetworking # CNI plugins (Container Network Interface) │ └── plugins @@ -248,6 +250,7 @@ kubectl -n kubermatic get seeds ``` Output will be similar to this: + ```bash #NAME AGE #hamburg 143d diff --git a/content/kubermatic/v2.28/installation/single-node-setup/_index.en.md b/content/kubermatic/v2.28/installation/single-node-setup/_index.en.md index aa2daaf66..984cf2424 100644 --- a/content/kubermatic/v2.28/installation/single-node-setup/_index.en.md +++ b/content/kubermatic/v2.28/installation/single-node-setup/_index.en.md @@ -18,7 +18,7 @@ In this **Get Started with KKP** guide, we will be using AWS Cloud as our underl ## Prerequisites 1. [Terraform >v1.0.0](https://www.terraform.io/downloads) -2. [KubeOne](https://github.com/kubermatic/kubeone/releases) +1. 
[KubeOne](https://github.com/kubermatic/kubeone/releases) ## Download the Repository @@ -95,18 +95,18 @@ export KUBECONFIG=$PWD/aws/-kubeconfig ## Validate the KKP Master Setup -* Get the LoadBalancer External IP by following command. +- Get the LoadBalancer External IP by following command. ```bash kubectl get svc -n ingress-nginx ``` -* Update DNS mapping with External IP of the nginx ingress controller service. In case of AWS, the CNAME record mapping for $TODO_DNS with External IP should be created. +- Update DNS mapping with External IP of the nginx ingress controller service. In case of AWS, the CNAME record mapping for $TODO_DNS with External IP should be created. -* Nginx Ingress Controller Load Balancer configuration - Add the node to backend pool manually. +- Nginx Ingress Controller Load Balancer configuration - Add the node to backend pool manually. > **Known Issue**: Should be supported in the future as part of Feature request[#1822](https://github.com/kubermatic/kubeone/issues/1822) -* Verify the Kubermatic resources and certificates +- Verify the Kubermatic resources and certificates ```bash kubectl -n kubermatic get deployments,pods @@ -122,5 +122,6 @@ export KUBECONFIG=$PWD/aws/-kubeconfig Finally, you should be able to login to KKP dashboard! -Login to https://$TODO_DNS/ +Login to + > Use username/password configured as part of Kubermatic configuration. diff --git a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.19-to-2.20/_index.en.md b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.19-to-2.20/_index.en.md index 56de7db27..8e84c6133 100644 --- a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.19-to-2.20/_index.en.md +++ b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.19-to-2.20/_index.en.md @@ -20,13 +20,13 @@ Migrating to KKP 2.20 requires a downtime of all reconciling and includes restar The general migration procedure is as follows: -* Shutdown KKP controllers/dashboard/API. -* Create duplicate of all KKP resources in the new API groups. -* Adjust the owner references in the new resources. -* Remove finalizers and owner references from old objects. -* Delete old objects. -* Deploy new KKP 2.20 Operator. -* The operator will reconcile and restart the remaining KKP controllers, dashboard and API. +- Shutdown KKP controllers/dashboard/API. +- Create duplicate of all KKP resources in the new API groups. +- Adjust the owner references in the new resources. +- Remove finalizers and owner references from old objects. +- Delete old objects. +- Deploy new KKP 2.20 Operator. +- The operator will reconcile and restart the remaining KKP controllers, dashboard and API. {{% notice note %}} Creating clones of, for example, Secrets in a cluster namespace will lead to new resource versions on those cloned Secrets. These new resource versions will affect Deployments like the kube-apiserver once KKP is restarted and reconciles. This will in turn cause all Deployments/StatefulSets to rotate. @@ -52,11 +52,11 @@ tar xzf kubermatic-ce-v2.20.0-linux-amd64.tar.gz Before the migration can begin, a number of preflight checks need to happen first: -* No KKP resource must be marked as deleted. -* The new CRD files must be available on disk. -* All seed clusters must be reachable. -* Deprecated features which were removed in KKP 2.20 must not be used anymore. -* (only before actual migration) No KKP controllers/webhooks must be running. +- No KKP resource must be marked as deleted. +- The new CRD files must be available on disk. 
+- All seed clusters must be reachable. +- Deprecated features which were removed in KKP 2.20 must not be used anymore. +- (only before actual migration) No KKP controllers/webhooks must be running. The first step is to get the kubeconfig file for the KKP **master** cluster. Set the `KUBECONFIG` variable pointing to it: @@ -199,12 +199,12 @@ When you're ready, start the migration: The installer will now -* perform the same preflight checks as the `preflight` command, plus it checks that no KKP controllers are running, -* create a backup of all KKP resources per seed cluster, -* install the new CRDs, -* migrate all KKP resources, -* adjust the owner references and -* optionally remove the old resources if `--remove-old-resources` was given (this can be done manually at any time later on). +- perform the same preflight checks as the `preflight` command, plus it checks that no KKP controllers are running, +- create a backup of all KKP resources per seed cluster, +- install the new CRDs, +- migrate all KKP resources, +- adjust the owner references and +- optionally remove the old resources if `--remove-old-resources` was given (this can be done manually at any time later on). {{% notice note %}} The command is idempotent and can be interrupted and restarted at any time. It will have to go through already migrated resources again, though. diff --git a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.21-to-2.22/_index.en.md b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.21-to-2.22/_index.en.md index 128b20062..67f3e3379 100644 --- a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.21-to-2.22/_index.en.md +++ b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.21-to-2.22/_index.en.md @@ -21,7 +21,7 @@ container runtime in KKP 2.22 is therefore containerd. As such, the upgrade will with Docker as container runtime. It is necessary to migrate **existing clusters and cluster templates** to containerd before proceeding. This can be done either via the Kubermatic Dashboard -or with `kubectl`. On the Dashboard, just edit the cluster or cluster template, change the _Container Runtime_ field to `containerd` and save your changes. +or with `kubectl`. On the Dashboard, just edit the cluster or cluster template, change the *Container Runtime* field to `containerd` and save your changes. ![Change Container Runtime](upgrade-container-runtime.png?classes=shadow,border&height=200 "Change Container Runtime") @@ -68,8 +68,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.22.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. Make sure you have the `values.yaml` you used to deploy KKP 2.21 available and already adjusted for any 2.22 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. 
From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.22.0 @@ -120,8 +120,8 @@ Upgrading seed clusters is no longer necessary in KKP 2.22, unless you are runni You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2023-02-16T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2023-02-14T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2023-02-14T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.24.10","kubermatic":"v2.22.0"}} ``` @@ -183,7 +183,7 @@ If a custom values file is required and is ready for use, `kubermatic-installer` uncomment the command flags that you need (e.g. `--helm-values` if you have a `mlavalues.yaml` to pass and `--mla-include-iap` if you are using IAP for MLA; both flags are optional). -```sh +```bash ./kubermatic-installer deploy usercluster-mla \ # uncomment if you are providing non-standard values # --helm-values mlavalues.yaml \ @@ -194,7 +194,6 @@ using IAP for MLA; both flags are optional). ## Post-Upgrade Considerations - ### KubeVirt Migration KubeVirt cloud provider support graduates to GA in KKP 2.22 and has gained several new features. However, KubeVirt clusters need to be migrated after the KKP 2.22 upgrade. [Instructions are available in KubeVirt provider documentation]({{< ref "../../../architecture/supported-providers/kubevirt#migration-from-kkp-221" >}}). diff --git a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.22-to-2.23/_index.en.md b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.22-to-2.23/_index.en.md index 66e4c4620..f2229b1a3 100644 --- a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.22-to-2.23/_index.en.md +++ b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.22-to-2.23/_index.en.md @@ -32,31 +32,31 @@ The JSON file contains a `format` key. If the output looks like {"version":"1","format":"xl-single","id":"5dc676ac-92f3-4c19-81d0-2304b366293c","xl":{"version":"3","this":"888f699a-2f22-402a-9e49-2e0fc9abd5c5","sets":[["888f699a-2f22-402a-9e49-2e0fc9abd5c5"]],"distributionAlgo":"SIPMOD+PARITY"}} ``` -you're good to go, no migration required. However if you receive +You're good to go, no migration required. 
However if you receive ```json {"version":"1","format":"fs","id":"baa787b5-43b6-4bcb-b1d7-acf46bcc0a05","fs":{"version":"2"}} ``` -you must either +You must either -* migrate according to the [migration guide](https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html), which effectively involves setting up a second MinIO and copying each file over, or -* wipe your MinIO's storage (e.g. by deleting the PVC, see below), or -* pin the MinIO version to the last version that supports `fs`, which is `RELEASE.2022-10-24T18-35-07Z`, using the Helm values file (set `minio.image.tag=RELEASE.2022-10-24T18-35-07Z`). +- migrate according to the [migration guide](https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html), which effectively involves setting up a second MinIO and copying each file over, or +- wipe your MinIO's storage (e.g. by deleting the PVC, see below), or +- pin the MinIO version to the last version that supports `fs`, which is `RELEASE.2022-10-24T18-35-07Z`, using the Helm values file (set `minio.image.tag=RELEASE.2022-10-24T18-35-07Z`). The KKP installer will, when installing the seed dependencies, perform an automated check and will refuse to upgrade if the existing MinIO volume uses the old `fs` driver. If the contents of MinIO is expendable, instead of migrating it's also possible to wipe (**deleting all data**) MinIO's storage entirely. There are several ways to go about this, for example: ```bash -$ kubectl --namespace minio scale deployment/minio --replicas=0 +kubectl --namespace minio scale deployment/minio --replicas=0 #deployment.apps/minio scaled -$ kubectl --namespace minio delete pvc minio-data +kubectl --namespace minio delete pvc minio-data #persistentvolumeclaim "minio-data" deleted # re-install MinIO chart manually -$ helm --namespace minio upgrade minio ./charts/minio --values myhelmvalues.yaml +helm --namespace minio upgrade minio ./charts/minio --values myhelmvalues.yaml #Release "minio" has been upgraded. Happy Helming! #NAME: minio #LAST DEPLOYED: Mon Jul 24 13:40:51 2023 @@ -65,7 +65,7 @@ $ helm --namespace minio upgrade minio ./charts/minio --values myhelmvalues.yaml #REVISION: 2 #TEST SUITE: None -$ kubectl --namespace minio scale deployment/minio --replicas=1 +kubectl --namespace minio scale deployment/minio --replicas=1 #deployment.apps/minio scaled ``` @@ -97,8 +97,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.23.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. Make sure you have the `values.yaml` you used to deploy KKP 2.22 available and already adjusted for any 2.23 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. 
From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.23.0 @@ -152,12 +152,13 @@ A breaking change in the `minio` Helm chart shipped in KKP v2.23.0 has been iden Upgrading seed cluster is not necessary unless User Cluster MLA has been installed. All other KKP components on the seed will be upgraded automatically. + You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2023-02-16T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2023-02-14T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2023-02-14T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.24.10","kubermatic":"v2.23.0"}} ``` diff --git a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.24-to-2.25/_index.en.md b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.24-to-2.25/_index.en.md index 428f82e70..f413cff4f 100644 --- a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.24-to-2.25/_index.en.md +++ b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.24-to-2.25/_index.en.md @@ -59,31 +59,31 @@ The JSON file contains a `format` key. If the output looks like {"version":"1","format":"xl-single","id":"5dc676ac-92f3-4c19-81d0-2304b366293c","xl":{"version":"3","this":"888f699a-2f22-402a-9e49-2e0fc9abd5c5","sets":[["888f699a-2f22-402a-9e49-2e0fc9abd5c5"]],"distributionAlgo":"SIPMOD+PARITY"}} ``` -you're good to go, no migration required. However if you receive +You're good to go, no migration required. However if you receive ```json {"version":"1","format":"fs","id":"baa787b5-43b6-4bcb-b1d7-acf46bcc0a05","fs":{"version":"2"}} ``` -you must either +You must either -* migrate according to the [migration guide](https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html), which effectively involves setting up a second MinIO and copying each file over, or -* wipe your MinIO's storage (e.g. by deleting the PVC, see below), or -* pin the MinIO version to the last version that supports `fs`, which is `RELEASE.2022-10-24T18-35-07Z`, using the Helm values file (set `minio.image.tag=RELEASE.2022-10-24T18-35-07Z`). +- migrate according to the [migration guide](https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html), which effectively involves setting up a second MinIO and copying each file over, or +- wipe your MinIO's storage (e.g. 
by deleting the PVC, see below), or +- pin the MinIO version to the last version that supports `fs`, which is `RELEASE.2022-10-24T18-35-07Z`, using the Helm values file (set `minio.image.tag=RELEASE.2022-10-24T18-35-07Z`). The KKP installer will, when installing the `usercluster-mla` stack, perform an automated check and will refuse to upgrade if the existing MinIO volume uses the old `fs` driver. If the contents of MinIO is expendable, instead of migrating it's also possible to wipe (**deleting all data**) MinIO's storage entirely. There are several ways to go about this, for example: ```bash -$ kubectl --namespace mla scale deployment/minio --replicas=0 +kubectl --namespace mla scale deployment/minio --replicas=0 #deployment.apps/minio scaled -$ kubectl --namespace mla delete pvc minio-data +kubectl --namespace mla delete pvc minio-data #persistentvolumeclaim "minio-data" deleted # re-install MinIO chart manually -$ helm --namespace mla upgrade minio ./charts/minio --values myhelmvalues.yaml +helm --namespace mla upgrade minio ./charts/minio --values myhelmvalues.yaml #Release "minio" has been upgraded. Happy Helming! #NAME: minio #LAST DEPLOYED: Mon Jul 24 13:40:51 2023 @@ -92,7 +92,7 @@ $ helm --namespace mla upgrade minio ./charts/minio --values myhelmvalues.yaml #REVISION: 2 #TEST SUITE: None -$ kubectl --namespace mla scale deployment/minio --replicas=1 +kubectl --namespace mla scale deployment/minio --replicas=1 #deployment.apps/minio scaled ``` @@ -108,8 +108,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.25.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. Make sure you have the `values.yaml` you used to deploy KKP 2.24 available and already adjusted for any 2.25 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.25.0 @@ -160,8 +160,8 @@ Upgrading seed clusters is not necessary, unless you are running the `minio` Hel You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. 
A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2024-03-11T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2024-03-11T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2024-03-11T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.27.11","kubermatic":"v2.25.0"}} ``` diff --git a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.25-to-2.26/_index.en.md b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.25-to-2.26/_index.en.md index 72dde1168..bbfcab911 100644 --- a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.25-to-2.26/_index.en.md +++ b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.25-to-2.26/_index.en.md @@ -26,8 +26,8 @@ Beginning with KKP 2.26, Helm chart versions now use strict semvers without a le KKP 2.26 ships a lot of major version upgrades for the Helm charts, most notably -* Loki & Promtail v2.5 to v2.9.x -* Grafana 9.x to 10.4.x +- Loki & Promtail v2.5 to v2.9.x +- Grafana 9.x to 10.4.x Some of these updates require manual intervention or at least checking whether a given KKP system is affected by upstream changes. Please read the following sections carefully before beginning the upgrade. @@ -41,17 +41,18 @@ Due to labelling changes, and in-place upgrade of Velero is not possible. It's r The switch to the upstream Helm chart requires adjusting the `values.yaml` used to install Velero. Most existing settings have a 1:1 representation in the new chart: -* `velero.podAnnotations` is now `velero.annotations` -* `velero.serverFlags` is now `velero.configuration.*` (each CLI flag is its own field in the YAML file, e.g. `serverFlags:["--log-format=json"]` would become `configuration.logFormat: "json"`) -* `velero.uploaderType` is now `velero.configuration.uploaderType`; note that the default has changed from restic to Kopia, see the next section below for more information. -* `velero.credentials` is now `velero.credentials.*` -* `velero.schedulesPath` is not available anymore, since putting additional files into a Helm chart before installing it is a rather unusual process. Instead, specify the desired schedules directly inside the `values.yaml` in `velero.schedules` -* `velero.backupStorageLocations` is now `velero.configuration.backupStorageLocation` -* `velero.volumeSnapshotLocations` is now `velero.configuration.volumeSnapshotLocation` -* `velero.defaultVolumeSnapshotLocations` is now `velero.configuration.defaultBackupStorageLocation` +- `velero.podAnnotations` is now `velero.annotations` +- `velero.serverFlags` is now `velero.configuration.*` (each CLI flag is its own field in the YAML file, e.g. `serverFlags:["--log-format=json"]` would become `configuration.logFormat: "json"`) +- `velero.uploaderType` is now `velero.configuration.uploaderType`; note that the default has changed from restic to Kopia, see the next section below for more information. 
+- `velero.credentials` is now `velero.credentials.*` +- `velero.schedulesPath` is not available anymore, since putting additional files into a Helm chart before installing it is a rather unusual process. Instead, specify the desired schedules directly inside the `values.yaml` in `velero.schedules` +- `velero.backupStorageLocations` is now `velero.configuration.backupStorageLocation` +- `velero.volumeSnapshotLocations` is now `velero.configuration.volumeSnapshotLocation` +- `velero.defaultVolumeSnapshotLocations` is now `velero.configuration.defaultBackupStorageLocation` {{< tabs name="Velero Helm Chart Upgrades" >}} {{% tab name="old Velero Chart" %}} + ```yaml velero: podAnnotations: @@ -89,9 +90,11 @@ velero: schedulesPath: schedules/* ``` + {{% /tab %}} {{% tab name="new Velero Chart" %}} + ```yaml velero: annotations: @@ -136,6 +139,7 @@ velero: aws_access_key_id=itsme aws_secret_access_key=andthisismypassword ``` + {{% /tab %}} {{< /tabs >}} @@ -155,15 +159,15 @@ If you decide to switch to Kopia and do not need the restic repository anymore, The configuration syntax for cert-manager has changed slightly. -* Breaking: If you have `.featureGates` value set in `values.yaml`, the features defined there will no longer be passed to cert-manager webhook, only to cert-manager controller. Use the `webhook.featureGates` field instead to define features to be enabled on webhook. -* Potentially breaking: Webhook validation of CertificateRequest resources is stricter now: all `KeyUsages` and `ExtendedKeyUsages` must be defined directly in the CertificateRequest resource, the encoded CSR can never contain more usages that defined there. +- Breaking: If you have `.featureGates` value set in `values.yaml`, the features defined there will no longer be passed to cert-manager webhook, only to cert-manager controller. Use the `webhook.featureGates` field instead to define features to be enabled on webhook. +- Potentially breaking: Webhook validation of CertificateRequest resources is stricter now: all `KeyUsages` and `ExtendedKeyUsages` must be defined directly in the CertificateRequest resource, the encoded CSR can never contain more usages that defined there. ### oauth2-proxy (IAP) 7.6 This upgrade includes one breaking change: -* A change to how auth routes are evaluated using the flags `skip-auth-route`/`skip-auth-regex`: the new behaviour uses the regex you specify to evaluate the full path including query parameters. For more details please read the [detailed PR description](https://github.com/oauth2-proxy/oauth2-proxy/issues/2271). -* The environment variable `OAUTH2_PROXY_GOOGLE_GROUP` has been deprecated in favor of `OAUTH2_PROXY_GOOGLE_GROUPS`. Next major release will remove this option. +- A change to how auth routes are evaluated using the flags `skip-auth-route`/`skip-auth-regex`: the new behaviour uses the regex you specify to evaluate the full path including query parameters. For more details please read the [detailed PR description](https://github.com/oauth2-proxy/oauth2-proxy/issues/2271). +- The environment variable `OAUTH2_PROXY_GOOGLE_GROUP` has been deprecated in favor of `OAUTH2_PROXY_GOOGLE_GROUPS`. Next major release will remove this option. 
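For deployments that rely on `skip-auth-route`, it is worth re-checking tightly anchored patterns after this upgrade: because the full path including the query string is now evaluated, a regex such as `^/metrics$` no longer matches a request for `/metrics?format=prometheus`. Below is a minimal, hypothetical sketch of the adjustment; the `args` wrapper is an assumption about how your IAP/oauth2-proxy deployment passes flags, while `--skip-auth-route` itself is a standard oauth2-proxy option.

```yaml
# Hypothetical sketch only – adapt to wherever your chart accepts extra oauth2-proxy flags.
args:
  # oauth2-proxy 7.6 matches the regex against the full request URI including the
  # query string, so anchored patterns need to allow it explicitly:
  - "--skip-auth-route=^/metrics(\\?.*)?$"
```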
### Loki & Promtail 2.9 (Seed MLA) @@ -171,16 +175,16 @@ The Loki upgrade from 2.5 to 2.9 might be the most significant bump in this KKP Before upgrading, review your `values.yaml` for Loki, as a number of syntax changes were made: -* Most importantly, `loki.config` is now a templated string that aggregates many other individual values specified in `loki`, for example `loki.tableManager` gets rendered into `loki.config.table_manager`, and `loki.loki.schemaConfig` gets rendered into `loki.config.schema_config`. To follow these changes, if you have `loki.config` in your `values.yaml`, rename it to `loki.loki`. Ideally you should not need to manually override the templating string in `loki.config` from the upstream chart anymore. Additionally, some values are moved out or renamed slightly: - * `loki.config.schema_config` becomes `loki.loki.schemaConfig` - * `loki.config.table_manager` becomes `loki.tableManager` (sic) - * `loki.config.server` was removed, if you need to specify something, use `loki.loki.server` -* The base volume path for the Loki PVC was changed from `/data/loki` to `/var/loki`. -* Configuration for the default image has changed, there is no `loki.image.repository` field anymore, it's now `loki.image.registry` and `loki.image.repository`. -* `loki.affinity` is now a templated string and enabled by default; if you use multiple Loki replicas, your cluster needs to have multiple nodes to host these pods. -* All fields related to the Loki pod (`loki.tolerations`, `loki.resources`, `loki.nodeSelector` etc.) were moved below `loki.singleBinary`. -* Self-monitoring, Grafana Agent and selftests are disabled by default now, reducing the default resource requirements for the logging stack. -* `loki.singleBinary.persistence.enableStatefulSetAutoDeletePVC` is set to `false` to ensure that when the StatefulSet is deleted, the PVCs will not also be deleted. This allows for easier upgrades in the +- Most importantly, `loki.config` is now a templated string that aggregates many other individual values specified in `loki`, for example `loki.tableManager` gets rendered into `loki.config.table_manager`, and `loki.loki.schemaConfig` gets rendered into `loki.config.schema_config`. To follow these changes, if you have `loki.config` in your `values.yaml`, rename it to `loki.loki`. Ideally you should not need to manually override the templating string in `loki.config` from the upstream chart anymore. Additionally, some values are moved out or renamed slightly: + - `loki.config.schema_config` becomes `loki.loki.schemaConfig` + - `loki.config.table_manager` becomes `loki.tableManager` (sic) + - `loki.config.server` was removed, if you need to specify something, use `loki.loki.server` +- The base volume path for the Loki PVC was changed from `/data/loki` to `/var/loki`. +- Configuration for the default image has changed, there is no `loki.image.repository` field anymore, it's now `loki.image.registry` and `loki.image.repository`. +- `loki.affinity` is now a templated string and enabled by default; if you use multiple Loki replicas, your cluster needs to have multiple nodes to host these pods. +- All fields related to the Loki pod (`loki.tolerations`, `loki.resources`, `loki.nodeSelector` etc.) were moved below `loki.singleBinary`. +- Self-monitoring, Grafana Agent and selftests are disabled by default now, reducing the default resource requirements for the logging stack. 
+- `loki.singleBinary.persistence.enableStatefulSetAutoDeletePVC` is set to `false` to ensure that when the StatefulSet is deleted, the PVCs will not also be deleted. This allows for easier upgrades in the
future, but if you scale down Loki, you would have to manually delete the leftover PVCs.

### Alertmanager 0.27 (Seed MLA)

@@ -205,39 +209,39 @@ Afterwards you can install the new release from the chart.

As is typical for kube-state-metrics, the upgrade is simple, but the devil is in the details. There were many minor changes since v2.8; please review [the changelog](https://github.com/kubernetes/kube-state-metrics/releases) carefully if you built upon metrics provided by kube-state-metrics:

-* The deprecated experimental VerticalPodAutoscaler metrics are no longer supported, and have been removed. It's recommend to use CustomResourceState metrics to gather metrics from custom resources like the Vertical Pod Autoscaler.
-* Label names were regulated to adhere with OTel-Prometheus standards, so existing label names that do not follow the same may be replaced by the ones that do. Please refer to [the PR](https://github.com/kubernetes/kube-state-metrics/pull/2004) for more details.
-* Label and annotation metrics aren't exposed by default anymore to reduce the memory usage of the default configuration of kube-state-metrics. Before this change, they used to only include the name and namespace of the objects which is not relevant to users not opting in these metrics.
+- The deprecated experimental VerticalPodAutoscaler metrics are no longer supported, and have been removed. It's recommended to use CustomResourceState metrics to gather metrics from custom resources like the Vertical Pod Autoscaler.
+- Label names were regulated to adhere to OTel-Prometheus standards, so existing label names that do not follow them may be replaced by the ones that do. Please refer to [the PR](https://github.com/kubernetes/kube-state-metrics/pull/2004) for more details.
+- Label and annotation metrics aren't exposed by default anymore to reduce the memory usage of the default configuration of kube-state-metrics. Before this change, they used to only include the name and namespace of the objects, which is not relevant to users not opting in to these metrics.
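If you do rely on label or annotation metrics, they can be re-enabled explicitly through the kube-state-metrics allow-lists. The flags below are standard kube-state-metrics v2 options, but where you set them (for example as extra container arguments in your chart values) depends on your deployment and is only sketched here; the resource, label and annotation names are illustrative.

```yaml
# Sketch under the assumption that your kube-state-metrics deployment accepts
# extra container arguments; names in the allow-lists are examples only.
args:
  - "--metric-labels-allowlist=pods=[app.kubernetes.io/name,team]"
  - "--metric-annotations-allowlist=deployments=[example.com/owner]"
```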
### node-exporter 1.7 (Seed MLA) This new version comes with a few minor backwards-incompatible changes: -* metrics of offline CPUs in CPU collector were removed -* bcache cache_readaheads_totals metrics were removed -* ntp collector was deprecated -* supervisord collector was deprecated +- metrics of offline CPUs in CPU collector were removed +- bcache cache_readaheads_totals metrics were removed +- ntp collector was deprecated +- supervisord collector was deprecated ### Prometheus 2.51 (Seed MLA) Prometheus had many improvements and some changes to the remote-write functionality that might affect you: -* Remote-write: - * raise default samples per send to 2,000 - * respect `Retry-After` header on 5xx errors - * error `storage.ErrTooOldSample` is now generating HTTP error 400 instead of HTTP error 500 -* Scraping: - * Do experimental timestamp alignment even if tolerance is bigger than 1% of scrape interval +- Remote-write: + - raise default samples per send to 2,000 + - respect `Retry-After` header on 5xx errors + - error `storage.ErrTooOldSample` is now generating HTTP error 400 instead of HTTP error 500 +- Scraping: + - Do experimental timestamp alignment even if tolerance is bigger than 1% of scrape interval ### nginx-ingress-controller 1.10 nginx v1.10 brings quite a few potentially breaking changes: -* does not support chroot image (this will be fixed on a future minor patch release) -* dropped Opentracing and zipkin modules, just Opentelemetry is supported as of this release -* dropped support for PodSecurityPolicy -* dropped support for GeoIP (legacy), only GeoIP2 is supported -* The automatically generated `NetworkPolicy` from nginx 1.9.3 is now disabled by default, refer to https://github.com/kubernetes/ingress-nginx/pull/10238 for more information. +- does not support chroot image (this will be fixed on a future minor patch release) +- dropped Opentracing and zipkin modules, just Opentelemetry is supported as of this release +- dropped support for PodSecurityPolicy +- dropped support for GeoIP (legacy), only GeoIP2 is supported +- The automatically generated `NetworkPolicy` from nginx 1.9.3 is now disabled by default, refer to for more information. ### Dex 2.40 @@ -253,8 +257,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.26.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. Make sure you have the `values.yaml` you used to deploy KKP 2.26 available and already adjusted for any 2.26 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. 
From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.26.0 @@ -305,8 +309,8 @@ Upgrading seed clusters is not necessary, unless you are running the `minio` Hel You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2024-03-11T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2024-03-11T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2024-03-11T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.27.11","kubermatic":"v2.25.0"}} ``` diff --git a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.26-to-2.27/_index.en.md b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.26-to-2.27/_index.en.md index 37b92567c..6ecd399fd 100644 --- a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.26-to-2.27/_index.en.md +++ b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.26-to-2.27/_index.en.md @@ -32,17 +32,20 @@ A regression in KKP v2.26.0 caused the floatingIPPool field in OpenStack cluster If your OpenStack clusters use a floating IP pool other than the default, you may need to manually update Cluster objects after upgrading to v2.27. -* Action Required: - * After the upgrade, check your OpenStack clusters and manually reset the correct floating IP pool if needed. - * Example command to check the floating IP pool - ```sh - kubectl get clusters -o jsonpath="{.items[*].spec.cloud.openstack.floatingIPPool}" - ``` - * If incorrect, manually edit the Cluster object: - ```sh - kubectl edit cluster - ``` +- Action Required: + - After the upgrade, check your OpenStack clusters and manually reset the correct floating IP pool if needed. + - Example command to check the floating IP pool + ```bash + kubectl get clusters -o jsonpath="{.items[*].spec.cloud.openstack.floatingIPPool}" + ``` + + - If incorrect, manually edit the Cluster object: + + ```bash + kubectl edit cluster + ``` + ### Velero Configuration Changes By default, Velero backups and snapshots are turned off. If you were using Velero for etcd backups and/or volume backups, you must explicitly enable them in your values.yaml file. 
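A minimal sketch of what re-enabling them could look like in the Velero section of your `values.yaml`; the key names below follow the upstream Velero Helm chart that KKP switched to with the 2.26 chart migration, so verify them against the chart version shipped with your release.

```yaml
velero:
  # Assumed upstream-chart keys – double-check against your chart version.
  backupsEnabled: true     # creates the BackupStorageLocation again
  snapshotsEnabled: true   # creates the VolumeSnapshotLocation again
```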
@@ -88,19 +91,20 @@ Because the namespace changes, both old and new Dex can temporarily live side-by To begin the migration, create a new `values.yaml` section for Dex (both old and new chart use `dex` as the top-level key in the YAML file) and migrate your existing configuration as follows: -* `dex.replicas` is now `dex.replicaCount` -* `dex.env` is now `dex.envVars` -* `dex.extraVolumes` is now `dex.volumes` -* `dex.extraVolumeMounts` is now `dex.volumeMounts` -* `dex.certIssuer` has been removed, admins must manually set the necessary annotations on the +- `dex.replicas` is now `dex.replicaCount` +- `dex.env` is now `dex.envVars` +- `dex.extraVolumes` is now `dex.volumes` +- `dex.extraVolumeMounts` is now `dex.volumeMounts` +- `dex.certIssuer` has been removed, admins must manually set the necessary annotations on the ingress to integrate with cert-manager. -* `dex.ingress` has changed internally: - * `class` is now `className` (the value "non-existent" is not supported anymore, use the `dex.ingress.enabled` field instead) - * `host` and `path` are gone, instead admins will have to manually define their Ingress configuration - * `scheme` is likewise gone and admins have to configure the `tls` section in the Ingress configuration +- `dex.ingress` has changed internally: + - `class` is now `className` (the value "non-existent" is not supported anymore, use the `dex.ingress.enabled` field instead) + - `host` and `path` are gone, instead admins will have to manually define their Ingress configuration + - `scheme` is likewise gone and admins have to configure the `tls` section in the Ingress configuration {{< tabs name="Dex Helm Chart values" >}} {{% tab name="old oauth Chart" %}} + ```yaml dex: replicas: 2 @@ -129,9 +133,11 @@ dex: name: letsencrypt-prod kind: ClusterIssuer ``` + {{% /tab %}} {{% tab name="new dex Chart" %}} + ```yaml # Tell the KKP installer to install the new dex Chart into the # "dex" namespace, instead of the old oauth Chart. @@ -166,19 +172,20 @@ dex: # above. - "kkp.example.com" ``` + {{% /tab %}} {{< /tabs >}} Additionally, Dex's own configuration is now more clearly separated from how Dex's Kubernetes manifests are configured. The following changes are required: -* In general, Dex's configuration is everything under `dex.config`. -* `dex.config.issuer` has to be set explicitly (the old `oauth` Chart automatically set it), usually to `https:///dex`, e.g. `https://kkp.example.com/dex`. -* `dex.connectors` is now `dex.config.connectors` -* `dex.expiry` is now `dex.config.expiry` -* `dex.frontend` is now `dex.config.frontend` -* `dex.grpc` is now `dex.config.grpc` -* `dex.clients` is now `dex.config.staticClients` -* `dex.staticPasswords` is now `dex.config.staticPasswords` (when using static passwords, you also have to set `dex.config.enablePasswordDB` to `true`) +- In general, Dex's configuration is everything under `dex.config`. +- `dex.config.issuer` has to be set explicitly (the old `oauth` Chart automatically set it), usually to `https:///dex`, e.g. `https://kkp.example.com/dex`. +- `dex.connectors` is now `dex.config.connectors` +- `dex.expiry` is now `dex.config.expiry` +- `dex.frontend` is now `dex.config.frontend` +- `dex.grpc` is now `dex.config.grpc` +- `dex.clients` is now `dex.config.staticClients` +- `dex.staticPasswords` is now `dex.config.staticPasswords` (when using static passwords, you also have to set `dex.config.enablePasswordDB` to `true`) Finally, theming support has changed. 
The old `oauth` Helm chart allowed to inline certain assets, like logos, as base64-encoded blobs into the Helm values. This mechanism is not available in the new `dex` Helm chart and admins have to manually provision the desired theme. KKP's Dex chart will setup a `dex-theme-kkp` ConfigMap, which is mounted into Dex and then overlays files over the default theme that ships with Dex. To customize, create your own ConfigMap/Secret and adjust `dex.volumes`, `dex.volumeMounts` and `dex.config.frontend.theme` / `dex.config.frontend.dir` accordingly. @@ -192,6 +199,7 @@ kubectl rollout restart deploy kubermatic-api -n kubermatic ``` #### Important: Update OIDC Provider URL for Hostname Changes + Before configuring the UI to use the new URL, ensure that the new Dex installation is healthy by checking that the pods are running and the logs show no suspicious errors. ```bash @@ -200,6 +208,7 @@ kubectl get pods -n dex # To check the logs kubectl get logs -n dex deploy/dex ``` + Next, verify the OpenID configuration by running: ```bash @@ -236,16 +245,16 @@ spec: Once you have verified that the new Dex installation is up and running, you can either -* point KKP to the new Dex installation (if its new URL is meant to be permanent) by changing the `tokenIssuer` in the `KubermaticConfiguration`, or -* delete the old `oauth` release (`helm -n oauth delete oauth`) and then re-deploy the new Dex release, but with the same host+path as the old `oauth` chart used, so that no further changes are necessary in downstream components like KKP. This will incur a short downtime, while no Ingress exists for the issuer URL configured in KKP. +- point KKP to the new Dex installation (if its new URL is meant to be permanent) by changing the `tokenIssuer` in the `KubermaticConfiguration`, or +- delete the old `oauth` release (`helm -n oauth delete oauth`) and then re-deploy the new Dex release, but with the same host+path as the old `oauth` chart used, so that no further changes are necessary in downstream components like KKP. This will incur a short downtime, while no Ingress exists for the issuer URL configured in KKP. ### API Changes -* New Prometheus Overrides - * Added `spec.componentsOverride.prometheus` to allow overriding Prometheus replicas and tolerations. +- New Prometheus Overrides + - Added `spec.componentsOverride.prometheus` to allow overriding Prometheus replicas and tolerations. -* Container Image Tagging - * Tagged KKP releases will no longer tag KKP images twice (with the Git tag and the Git hash), but only once with the Git tag. This ensures that existing hash-based container images do not suddenly change when a Git tag is set and the release job is run. Users of tagged KKP releases are not affected by this change. +- Container Image Tagging + - Tagged KKP releases will no longer tag KKP images twice (with the Git tag and the Git hash), but only once with the Git tag. This ensures that existing hash-based container images do not suddenly change when a Git tag is set and the release job is run. Users of tagged KKP releases are not affected by this change. ## Upgrade Procedure @@ -255,8 +264,8 @@ Before starting the upgrade, make sure your KKP Master and Seed clusters are hea Download the latest 2.27.x release archive for the correct edition (`ce` for Community Edition, `ee` for Enterprise Edition) from [the release page](https://github.com/kubermatic/kubermatic/releases) and extract it locally on your computer. 
Make sure you have the `values.yaml` you used to deploy KKP 2.27 available and already adjusted for any 2.27 changes (also see [Pre-Upgrade Considerations](#pre-upgrade-considerations)), as you need to pass it to the installer. The `KubermaticConfiguration` is no longer necessary (unless you are adjusting it), as the KKP operator will use its in-cluster representation. From within the extracted directory, run the installer: -```sh -$ ./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml +```bash +./kubermatic-installer deploy kubermatic-master --helm-values path/to/values.yaml # example output for a successful upgrade INFO[0000] 🚀 Initializing installer… edition="Enterprise Edition" version=v2.27.0 @@ -307,8 +316,8 @@ Upgrading seed clusters is not necessary, unless you are running the `minio` Hel You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh -$ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" +```bash +kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" kubermatic - {"clusters":5,"conditions":{"ClusterInitialized":{"lastHeartbeatTime":"2025-02-20T10:53:34Z","message":"All KKP CRDs have been installed successfully.","reason":"CRDsUpdated","status":"True"},"KubeconfigValid":{"lastHeartbeatTime":"2025-02-20T16:50:09Z","reason":"KubeconfigValid","status":"True"},"ResourcesReconciled":{"lastHeartbeatTime":"2025-02-20T16:50:14Z","reason":"ReconcilingSuccess","status":"True"}},"phase":"Healthy","versions":{"cluster":"v1.29.13","kubermatic":"v2.27.0"}} ``` @@ -320,13 +329,13 @@ Of particular interest to the upgrade process is if the `ResourcesReconciled` co Some functionality of KKP has been deprecated or removed with KKP 2.27. You should review the full [changelog](https://github.com/kubermatic/kubermatic/blob/main/docs/changelogs/CHANGELOG-2.27.md) and adjust any automation or scripts that might be using deprecated fields or features. Below is a list of changes that might affect you: -* The custom `oauth` Helm chart in KKP has been deprecated and will be replaced with a new Helm chart, `dex`, which is based on the [official upstream chart](https://github.com/dexidp/helm-charts/tree/master/charts/dex), in KKP 2.27. +- The custom `oauth` Helm chart in KKP has been deprecated and will be replaced with a new Helm chart, `dex`, which is based on the [official upstream chart](https://github.com/dexidp/helm-charts/tree/master/charts/dex), in KKP 2.27. -* Canal v3.19 and v3.20 addons have been removed. +- Canal v3.19 and v3.20 addons have been removed. -* kubermatic-installer `--docker-binary` flag has been removed from the kubermatic-installer `mirror-images` subcommand. +- kubermatic-installer `--docker-binary` flag has been removed from the kubermatic-installer `mirror-images` subcommand. -* The `K8sgpt` non-operator application has been deprecated and replaced by the `K8sgpt-operator`. The old application will be removed in future releases. +- The `K8sgpt` non-operator application has been deprecated and replaced by the `K8sgpt-operator`. The old application will be removed in future releases. 
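As a quick pre-upgrade check for the removed Canal versions mentioned above, you can list the CNI plugin used by each user cluster — a sketch that assumes the `Cluster` objects expose this under `spec.cniPlugin` (verify with `kubectl explain cluster.spec.cniPlugin`):

```bash
# Run against the seed cluster; shows CNI type and version per user cluster.
kubectl get clusters -o custom-columns=NAME:.metadata.name,CNI:.spec.cniPlugin.type,VERSION:.spec.cniPlugin.version
```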
## Next Steps diff --git a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.27-to-2.28/_index.en.md b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.27-to-2.28/_index.en.md index 0ff69f675..c9a120c28 100644 --- a/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.27-to-2.28/_index.en.md +++ b/content/kubermatic/v2.28/installation/upgrading/upgrade-from-2.27-to-2.28/_index.en.md @@ -111,6 +111,7 @@ kubectl -n monitoring delete statefulset alertmanager Afterwards you can install the new release from the chart using Helm CLI or using your favourite GitOps tool. Finally, cleanup the leftover PVC resources from old helm chart installation. + ```bash kubectl delete pvc -n monitoring -l app=alertmanager ``` @@ -253,9 +254,9 @@ Additionally, Dex's own configuration is now more clearly separated from how Dex Finally, theming support has changed. The old `oauth` Helm chart allowed to inline certain assets, like logos, as base64-encoded blobs into the Helm values. This mechanism is not available in the new `dex` Helm chart and admins have to manually provision the desired theme. KKP's Dex chart will setup a `dex-theme-kkp` ConfigMap, which is mounted into Dex and then overlays files over the default theme that ships with Dex. To customize, create your own ConfigMap/Secret and adjust `dex.volumes`, `dex.volumeMounts` and `dex.config.frontend.theme` / `dex.config.frontend.dir` accordingly. -**Note that you cannot have two Ingress objects with the same host names and paths. So if you install the new Dex in parallel to the old one, you will have to temporarily use a different hostname (e.g. `kkp.example.com/dex` for the old one and `kkp.example.com/dex2` for the new Dex installation).** +**Note** that you cannot have two Ingress objects with the same host names and paths. So if you install the new Dex in parallel to the old one, you will have to temporarily use a different hostname (e.g. `kkp.example.com/dex` for the old one and `kkp.example.com/dex2` for the new Dex installation). -**Restarting Kubermatic API After Dex Migration**: +**Restarting Kubermatic API After Dex Migration:** If you choose to delete the `oauth` chart and immediately switch to the new `dex` chart without using a different hostname, it is recommended to restart the `kubermatic-api` to ensure proper functionality. You can do this by running the following command: ```bash @@ -331,7 +332,7 @@ Upgrading seed clusters is not necessary, unless you are running the `minio` Hel You can follow the upgrade process by either supervising the Pods on master and seed clusters (by simply checking `kubectl get pods -n kubermatic` frequently) or checking status information for the `Seed` objects. A possible command to extract the current status by seed would be: -```sh +```bash $ kubectl get seeds -A -o jsonpath="{range .items[*]}{.metadata.name} - {.status}{'\n'}{end}" # Place holder for output @@ -351,10 +352,10 @@ This retirement affects all customers using the Azure Basic Load Balancer SKU, w If you have Basic Load Balancers deployed within Azure Cloud Services (extended support), these deployments will not be affected by this retirement, and no action is required for these specific instances. 
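To get an overview of which load balancers in your subscription still use the Basic SKU, a query like the following can help (a sketch using the Azure CLI; scope it to the relevant subscription or resource groups as needed):

```bash
# Lists Basic-SKU load balancers with their resource groups.
az network lb list --query "[?sku.name=='Basic'].{name:name, resourceGroup:resourceGroup}" --output table
```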
For more details about this deprecation, please refer to the official Azure announcement: -[https://azure.microsoft.com/en-us/updates?id=azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer](https://azure.microsoft.com/en-us/updates?id=azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer) +<https://azure.microsoft.com/en-us/updates?id=azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer> The Azure team has created an upgrade guideline, including required scripts to automate the migration process. -Please refer to the official documentation for detailed upgrade instructions: [https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-basic-upgrade-guidance#upgrade-using-automated-scripts-recommended](https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-basic-upgrade-guidance#upgrade-using-automated-scripts-recommended) +Please refer to the official documentation for detailed upgrade instructions: <https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-basic-upgrade-guidance#upgrade-using-automated-scripts-recommended> ## Next Steps diff --git a/content/kubermatic/v2.28/references/setup-checklist/_index.en.md b/content/kubermatic/v2.28/references/setup-checklist/_index.en.md index 55e706fbf..8252028a8 100644 --- a/content/kubermatic/v2.28/references/setup-checklist/_index.en.md +++ b/content/kubermatic/v2.28/references/setup-checklist/_index.en.md @@ -66,7 +66,6 @@ A helpful shortcut, we recommend our KubeOne tooling container, which contains a Kubermatic exposes an NGINX server and user clusters API servers via Load Balancers. Therefore, KKP is using the native Kubernetes Service Type `LoadBalancer` implementation. More details about the different expose points can be found in the next chapter, “DHCP/Networking”. - ### **On-Premise/Bring-your-own Load Balancer** If no external load balancer is provided for the setup, we recommend [KubeLB](https://docs.kubermatic.com/kubelb) for Multi-Tenant Load Balancing. @@ -75,7 +74,6 @@ If no external load balancer is provided for the setup, we recommend [KubeLB](ht As frontend IPAM solution and IP announcement, KubeLB could use on-premise non-multi-tenant LB implementations like Cilium or MetalLB in Layer 2 ARP or BGP mode. (Commercial Kubernetes-conformant implementations like [F5 Big IP](https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.0/#) would also work.) KubeLB adds multi-tenancy plus central DNS, certificate and Ingress management. KubeLB provides each Kubernetes cluster with a tenant-separated authentication token, which is used by the so-called [KubeLB CCM](https://docs.kubermatic.com/kubelb/v1.1/installation/tenant-cluster/); for KKP clusters, the CCM is configured automatically. The KubeLB CCM then handles service and node announcements. For setups where multi-tenant automated load balancing is not required, direct [MetalLB](https://metallb.universe.tf/) or [Cilium](https://docs.cilium.io/) setups could be used as well. For the best performance and stability of the platform, we recommend talking to our consultants, who can advise you on the best fit for your environment. - #### **Layer 2 ARP Announcement** If you choose to use Layer 2 ARP Announcements, you require a set of usable IP addresses in the target Layer 2 network segment that are not managed by DHCP (at least 2 for Kubermatic itself + your workload load balancers). For deeper information about how Layer 2 ARP works, take a look at [MetalLB in layer 2 mode](https://metallb.universe.tf/concepts/layer2/). The role of this LB is different from e.g. go-between, which is only used to access the master cluster's API server.
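As a rough illustration of such a Layer 2 setup with MetalLB, a minimal configuration could look like this (pool name and address range are placeholders — use IPs reserved in your network and excluded from DHCP):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kkp-pool                      # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 10.10.50.200-10.10.50.220       # reserved, non-DHCP range in the L2 segment
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kkp-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - kkp-pool
```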
Some references for the settings can be found at: @@ -84,7 +82,6 @@ If you choose to use Layer 2 ARP Announcements, you require a set of usable IP a - [Cilium L2 Announcements](https://docs.cilium.io/en/stable/network/l2-announcements/) - #### **BGP Advertisement (recommended)** It is recommended to use BGP for IP announcement, as BGP handles failovers and Kubernetes node updates much better than L2 announcement. Also, BGP supports dedicated load-balancing hashing algorithms; for more information see [MetalLB in BGP Mode](https://metallb.io/concepts/bgp/). A load balancer in BGP mode advertises each allocated IP to the configured peers with no additional BGP attributes. The peer router(s) will receive one /32 route for each service IP, with the BGP localpref set to zero and no BGP communities. For the different configurations take a look at the reference settings at: @@ -93,7 +90,6 @@ It’s recommend to use BGP for IP announcement as BGP can handle failovers and - [Cilium BGP Control Plane](https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane/) - ### **Public/Private Cloud Load Balancers** For other load balancer scenarios, we strongly recommend using the cloud-environment-specific load balancer that comes with the dedicated cloud CCM. These native cloud LBs can interact dynamically with Kubernetes to provide updates for service type Load Balancer or Ingress objects. For more detail see [Kubernetes - Services, Load Balancing, and Networking](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer). @@ -124,7 +120,6 @@ To access a user cluster via API, a wildcard DNS entry per seed cluster (in your Optional: The alternative expose strategy `LoadBalancer` can be chosen. With it, every control plane gets its own load balancer with an external IP, see [Expose Strategy](https://docs.kubermatic.com/kubermatic/v2.28/tutorials-howtos/networking/expose-strategies/). - ### **Example of DNS Entries for KKP Services** Root DNS-Zone: `*.kubermatic.example.com` @@ -222,8 +217,8 @@ For each Cloud Provider, there will be some requirements and Todo’s (e.g. crea In the following section, you find some examples of setups that do not need to match your use case 100%. Please reach out to your technical contact person at Kubermatic, who could provide you with a tailored technical solution for your use case.
{{< tabs name="Cloud Provider Specifics" >}} - {{% tab name="Azure" %}} + ### **Additional Requirements for Azure** #### **General Requirements** @@ -254,7 +249,6 @@ Azure Account described at [Kubermatic Docs > Supported Provider > Azure](https: - Ensure to share the needed parameter of [Azure - machine.spec.providerConfig.cloudProviderSpec](https://github.com/kubermatic/machine-controller/blob/main/docs/cloud-provider.md#azure) - #### **Integration Option to existing KKP** ##### Option I - Workers only in Azure @@ -269,7 +263,6 @@ Azure Account described at [Kubermatic Docs > Supported Provider > Azure](https: - Application traffic gets exposed at Azure workers (Cloud LBs) - ##### Option II - Additional Seed at Azure + Worker in Azure ![multi seed setup - on-premise and azure](integration-additional-seed-azure.png) @@ -293,8 +286,8 @@ Azure Account described at [Kubermatic Docs > Supported Provider > Azure](https: - Host for Seed provisioning (KubeOne setup) needs to reach the Azure network VMs by SSH {{% /tab %}} - {{% tab name="vSphere" %}} + ### **Cloud Provider vSphere** #### **Access to vSphere API** @@ -305,17 +298,14 @@ For dynamic provisioning of nodes, Kubermatic needs access to the vSphere API en - Alternative for managing via terraform [kubermatic-vsphere-permissions-terraform](https://github.com/kubermatic-labs/kubermatic-vsphere-permissions-terraform) (outdated) - #### **User Cluster / Network separation** The separation and multi-tenancy of KKP and their created user clusters is highly dependent on the provided network and user management of the vSphere Infrastructure. Due to the individuality of such setups, it’s recommended to create a dedicated concept per installation together with Kubermatic engineering team. Please provide at least one separate network CIDR and technical vSphere user for the management components and each expected tenant. - #### **Routable virtual IPs (for metalLB)** To set up Kubermatic behind [MetalLB](https://metallb.universe.tf/), we need a few routable address ranges. This could be sliced into one CIDR. The CIDR should be routed to the target network, but not used for machines. - #### **Master/Seed Cluster(s)** CIDR for @@ -324,12 +314,10 @@ CIDR for - Node-Port-Proxy: 1 IP (if expose strategy NodePort or Tunneling), multiple IPs at expose strategy LoadBalancer (for each cluster one IP) - #### **User Cluster** Depending on the concept of how the application workload gets exposed, IPs need to be reserved for exposing the workload at the user cluster side. As a recommendation, at least one virtual IP need is needed e.g. the [MetalLB user cluster addon](https://docs.kubermatic.com/kubermatic/v2.28/tutorials-howtos/networking/ipam/#metallb-addon-integration) + NGINX ingress. Note: during the provisioning of the user cluster, the IP must be entered for the MetalLB addon or you need to configure a [Multi-Cluster IPAM Pool](https://docs.kubermatic.com/kubermatic/v2.28/tutorials-howtos/networking/ipam/). On manual IP configuration, the user must ensure that there will be no IP conflict. - #### **(if no DHCP) Machine CIDRs** Depending on the target network setup, we need ranges for: @@ -342,7 +330,6 @@ Depending on the target network setup, we need ranges for: To provide a “cloud native” experience to the end user of KKP, we recommend the usage of a DHCP at all layers, otherwise, the management layer (master/seed cluster) could not breathe with the autoscaler. 
- #### **Integration** ##### Option I - Workers only in vSphere Datacenter(s) @@ -355,7 +342,6 @@ To provide a “cloud native” experience to the end user of KKP, we recommend - Application traffic get exposed at vSphere workers by the chosen ingress/load balancing solution - ##### Option II - Additional Seed at vSphere Datacenter(s) - Seed Cluster Kubernetes API endpoint at the dedicated vSphere seed cluster (provisioned by e.g. KubeOne) needs to be reachable @@ -378,8 +364,8 @@ To provide a “cloud native” experience to the end user of KKP, we recommend {{% /tab %}} {{% tab name="OpenStack" %}} -### **Cloud Provider OpenStack** +### **Cloud Provider OpenStack** #### **Access to OpenStack API** @@ -393,7 +379,6 @@ For dynamic provisioning of nodes, Kubermatic needs access to the OpenStack API - User / Password or Application Credential ID / Secret - #### **User Cluster / Network separation** The separation and multi-tenancy of KKP and their created user clusters is dependent on the provided network and project structure. Due to the individuality of such setups, it’s recommended to create a dedicated concept per installation together with Kubermatic engineering team. Please provide at least for the management components and for each expected tenant: @@ -416,7 +401,6 @@ The separation and multi-tenancy of KKP and their created user clusters is depen - OpenStack user or application credentials - #### **Further Information** Additional information about the usage of Open Stack within in Kubernetes you could find at: @@ -439,7 +423,6 @@ Additional information about the usage of Open Stack within in Kubernetes you co - OpenStack Cloud Controller Manager: - #### **Integration** ##### Option I - Workers only in OpenStack Datacenter(s) @@ -452,7 +435,6 @@ Additional information about the usage of Open Stack within in Kubernetes you co - Application traffic get exposed at OpenStack workers by the chosen ingress/load balancing solution - ##### Option II - Additional Seed at vSphere Datacenter(s) - Seed Cluster Kubernetes API endpoint at the dedicated OpenStack seed cluster (provisioned by e.g.[KubeOne](https://docs.kubermatic.com/kubeone/main/tutorials/creating-clusters/)) needs to be reachable diff --git a/content/kubermatic/v2.28/release/_index.en.md b/content/kubermatic/v2.28/release/_index.en.md index afef36f6a..b16962040 100644 --- a/content/kubermatic/v2.28/release/_index.en.md +++ b/content/kubermatic/v2.28/release/_index.en.md @@ -4,7 +4,6 @@ date = 2025-07-22T11:07:15+02:00 weight = 80 +++ - ## Kubermatic Release Process This document provides comprehensive information about the Kubermatic release process, outlining release types, cadence, upgrade paths, and artifact delivery mechanisms. This guide is intended for technical users and system administrators managing Kubermatic deployments. @@ -92,4 +91,4 @@ Effective upgrade management is crucial for Kubermatic users. This section detai * Backward Compatibility Guarantees: * API Versions: Kubermatic generally adheres to the Kubernetes API versioning policy. Stable API versions are guaranteed to be backward compatible. Beta API versions may introduce breaking changes, and alpha API versions offer no compatibility guarantees. Users should be aware of the API versions they are utilizing. * Components: While best efforts are made, some internal component changes in minor releases might not be fully backward compatible. Refer to release notes for specific component compatibility details. 
- * Configuration: Configuration schemas may evolve with minor releases. Tools and documentation will be provided to assist with configuration migration. \ No newline at end of file + * Configuration: Configuration schemas may evolve with minor releases. Tools and documentation will be provided to assist with configuration migration. diff --git a/content/kubermatic/v2.28/tutorials-howtos/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/_index.en.md index f124df7fe..aa11ac6fd 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/_index.en.md @@ -4,7 +4,6 @@ date = 2020-02-10T11:07:15+02:00 weight = 20 +++ - Are you just embarking on your cloud adoption journey but feeling a bit overwhelmed on how to get started? Then you’re in the right place here. The purpose of our tutorials is to show how to accomplish a goal that is larger than a single task. Typically a tutorial page has several sections, each of which has a sequence of steps. In our tutorials, we provide detailed walk-throughs of how to get started with Kubermatic Kubernetes Platform (KKP). diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/_index.en.md index cba9e50a1..67827614b 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/_index.en.md @@ -9,8 +9,8 @@ impact all Kubermatic users directly. Admin rights can be granted from the admin setting the `spec.admin` field of the user object to `true`. ```bash -$ kubectl get user -o=custom-columns=INTERNAL_NAME:.metadata.name,NAME:.spec.name,EMAIL:.spec.email,ADMIN:.spec.admin -$ kubectl edit user ... +kubectl get user -o=custom-columns=INTERNAL_NAME:.metadata.name,NAME:.spec.name,EMAIL:.spec.email,ADMIN:.spec.admin +kubectl edit user ... ``` After logging in to the dashboard as an administrator, you should be able to access the admin panel from the menu up @@ -22,7 +22,7 @@ top. Global settings can also be modified from the command line with kubectl. It can be done by editing the `globalsettings` in `KubermaticSetting` CRD. This resource has the following structure: -``` +```yaml apiVersion: kubermatic.k8c.io/v1 kind: KubermaticSetting metadata: @@ -66,8 +66,8 @@ spec: It can be edited directly from the command line: -``` -$ kubectl edit kubermaticsetting globalsettings +```bash +kubectl edit kubermaticsetting globalsettings ``` **Note:** Custom link `icon` is not required and defaults will be used if field is not specified. `icon` URL can @@ -78,32 +78,39 @@ point to the images inside the container as well, i.e. `/assets/images/icons/cus The below global settings are managed via UI: ### Manage Custom Links + Control the way custom links are displayed in the Kubermatic Dashboard. Choose the place that suits you best, whether it is a sidebar, footer or a help & support panel. Check out the [Custom Links]({{< ref "./custom-links" >}}) section for more details. ### Control Cluster Settings + Control number of initial Machine Deployment replicas, cluster deletion cleanup settings, availability of Kubernetes Dashboard for user clusters and more. Check out the [Cluster Settings]({{< ref "./cluster-settings" >}}) section for more details. 
### Manage Dynamic Datacenters + Use number of filtering options to find and control existing dynamic datacenters or simply create a new one.Check out the [Dynamic Datacenters]({{< ref "./dynamic-datacenters-management" >}}) section for more details. ### Manage Administrators + Manage all Kubermatic Dashboard Administrator in a single place. Decide who should be granted or revoked an administrator privileges. Check out the [Administrators]({{< ref "./administrators" >}}) section for more details. ### Manage Presets + Prepare custom provider presets for a variety of use cases. Control which presets will be visible to the users down to the per-provider level. Check out the [Presets]({{< ref "./presets-management" >}}) section for more details. ### OPA Constraint Templates + Constraint Templates allow you to declare new Constraints. They are intended to work as a schema for Constraint parameters and enforce their behavior. Check out the [OPA Constraint Templates]({{< ref "./opa-constraint-templates" >}}) section for more details. ### Backup Buckets + Through the Backup Buckets settings you can enable and configure the new etcd backups per Seed. Check out the [Etcd Backup Settings]({{< ref "./backup-buckets" >}}) section for more details. diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/admin-announcements/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/admin-announcements/_index.en.md index 4f5e94a6e..8e809cc1a 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/admin-announcements/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/admin-announcements/_index.en.md @@ -11,16 +11,19 @@ The Admin Announcement feature allows administrators to broadcast important upda - ### [User View](#user-view) ## Announcement Page + The Announcement page provides the admin with a centralized location to manage all announcements. From here, you can add, edit, and delete announcements. ![Announcements](images/announcements-page.png "Announcements View") -## Add Announcement +## Add Announcement + The dialog allows admins to add new announcements by customizing the message, setting an expiration date, and activating the announcement. ![Add Announcement](images/announcements-dialog.png "Announcements Add Dialog") ## User View + Users can see the latest unread active announcement across all pages from the announcement banner. ![Announcement Banner](images/announcement-banner.png "Announcements Banner") @@ -32,4 +35,3 @@ Users can also see a list of all active announcements by clicking the "See All" You can also view all active announcements from the Help Panel. ![Help Panel](images/help-panel.png "Help Panel") - diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/administrators/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/administrators/_index.en.md index c41ef641d..fcffbb05d 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/administrators/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/administrators/_index.en.md @@ -12,6 +12,7 @@ Administrators view in the Admin Panel allows adding and deleting administrator - ### [Deleting Administrators](#deleting-administrators) ## Adding Administrators + Administrators can be added after clicking on the plus icon in the top right corner of the Administrators view. 
![Add Administrator](images/admin-add.png?classes=shadow,border&height=200 "Administrator Add Dialog") @@ -20,6 +21,7 @@ Email address is the only field that is available in the dialog. After providing provided user should be able to access and use the Admin Panel. ## Deleting Administrators + Administrator rights can be taken away after clicking on the trash icon that appears after putting mouse over one of the rows with administrators. Please note, that it is impossible to take away the rights from current user. diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/backup-buckets/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/backup-buckets/_index.en.md index a8c1566b0..9c0825764 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/backup-buckets/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/backup-buckets/_index.en.md @@ -8,7 +8,6 @@ Through the Backup Destinations settings you can enable and configure the new et ![Backup Destinations](images/backup-destinations.png?classes=shadow,border "Backup Destinations Settings View") - ### Etcd Backup Settings Setting a Bucket and Endpoint for a Seed turns on the automatic etcd Backups and Restore feature, for that Seed only. For now, @@ -44,13 +43,13 @@ For security reasons, the API/UI does not offer a way to get the current credent To see how to make backups and restore your cluster, check the [Etcd Backup and Restore Tutorial]({{< ref "../../../etcd-backups" >}}). - ### Default Backups Since 2.20, default destinations are required if the automatic etcd backups are configured. A default EtcdBackupConfig is created for all the user clusters in the Seed. It has to be a destination that is present in the backup destination list for that Seed. Example Seed with default destination: + ```yaml ... etcdBackupRestore: @@ -66,6 +65,7 @@ Example Seed with default destination: ``` Default EtcdBackupConfig that is created: + ```yaml ... spec: diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/cluster-settings/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/cluster-settings/_index.en.md index 8334784be..0c5b284e7 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/cluster-settings/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/cluster-settings/_index.en.md @@ -4,7 +4,6 @@ date = 2020-02-10T11:07:15+02:00 weight = 20 +++ - Interface section in the Admin Panel allows user to control various cluster-related settings. They can influence cluster creation, management and cleanup after deletion. @@ -41,7 +40,7 @@ This section controls the default number of initial Machine Deployment replicas. in the cluster creation wizard on the Initial Nodes step and also on the add/edit machine deployment dialog on the cluster details. -#### Cluster Creation Wizard - Initial Nodes Step +### Cluster Creation Wizard - Initial Nodes Step ![Cluster creation wizard initial nodes step](images/wizard-initial-nodes-step.png?classes=shadow,border) @@ -76,6 +75,7 @@ specified criteria will be filtered out and not displayed to the user. Static labels are a list of labels that the admin can add. Users can select from these labels when creating a cluster during the cluster settings step. 
The admin can set these labels as either default or protected: + - Default label: This label is automatically added, but the user can delete it. - Protected label: This label is automatically added and cannot be deleted by the user. diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/dynamic-datacenters-management/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/dynamic-datacenters-management/_index.en.md index 2e392ffc6..85ad21888 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/dynamic-datacenters-management/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/dynamic-datacenters-management/_index.en.md @@ -15,6 +15,7 @@ datacenters. All of these will be described below. - ### [Deleting Datacenters](#del) ## Listing & Filtering Datacenters {#add} + Besides traditional list functionalities the Dynamic Datacenter view provides filtering options. Datacenters can be filtered by: @@ -25,6 +26,7 @@ filtered by: Filters are applied together, that means result datacenters have to match all the filtering criteria. ## Creating & Editing Datacenters {#cre} + Datacenters can be added after clicking on the plus icon in the top right corner of the Dynamic Datacenters view. To edit datacenter Administrator should click on the pencil icon that appears after putting mouse over one of the rows with datacenters. @@ -49,6 +51,7 @@ Fields available in the dialogs: - Provider Configuration - provider configuration in the YAML format. ## Deleting Datacenters {#del} + Datacenters can be deleted after clicking on the trash icon that appears after putting mouse over one of the rows with datacenters. diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/opa-constraint-templates/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/opa-constraint-templates/_index.en.md index 0881a88e4..0a458b30e 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/opa-constraint-templates/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/opa-constraint-templates/_index.en.md @@ -10,6 +10,7 @@ Constraint Templates allow you to declare new Constraints. They are intended to The Constraint Templates view under OPA menu in Admin Panel allows adding, editing and deleting Constraint Templates. ## Adding Constraint Templates + Constraint Templates can be added after clicking on the `+ Add Constraint Template` icon in the top right corner of the view. ![Add Constraint Template](@/images/ui/opa-admin-add-ct.png?classes=shadow,border&height=350px "Constraint Template Add Dialog") @@ -45,9 +46,11 @@ targets: ``` ## Editing Constraint Templates + Constraint Templates can be edited after clicking on the pencil icon that appears when hovering over one of the rows. The form is identical to the one from creation. ## Deleting Constraint Templates + Constraint Templates can be deleted after clicking on the trash icon that appears when hovering over one of the rows. Please note, that the deletion of a Constraint Template will also delete all Constraints that are assigned to it. 
![Delete Constraint Template](@/images/ui/opa-admin-delete-ct.png?classes=shadow,border&height=200 "Constraint Template Delete Dialog") diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/opa-default-constraints/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/opa-default-constraints/_index.en.md index df93e8687..426258ef8 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/opa-default-constraints/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/opa-default-constraints/_index.en.md @@ -17,7 +17,7 @@ To add a new default constraint click on the `+Add Default Constraint` icon on t ![Create Default Constraint](@/images/ui/create-default-constraint-dialog.png?height=300px&classes=shadow,border "Create Default Constraint") -``` +```yaml constraintType: K8sPSPAllowPrivilegeEscalationContainer match: kinds: @@ -60,7 +60,7 @@ In case of no filtering applied Default Constraints are synced to all User Clust for example, Admin wants to apply a policy only on clusters with the provider as `aws` and label selector as `filtered:true` To enable this add the following selectors in the constraint spec for the above use case. -``` +```yaml selector: providers: - aws @@ -90,7 +90,6 @@ Disabled Constraint in the Applied cluster View disabled-default-constraint-cluster-view.png ![Disabled Default Constraint](@/images/ui/disabled-default-constraint-cluster-view.png?classes=shadow,border "Disabled Default Constraint") - Enable the constraint by clicking the same button ![Enable Default Constraint](@/images/ui/disabled-default-constraint.png?classes=shadow,border "Enable Default Constraint") diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/seed-configurations/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/seed-configurations/_index.en.md index 932279bd9..e0d234356 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/seed-configurations/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/admin-panel/seed-configurations/_index.en.md @@ -4,7 +4,6 @@ date = 2023-02-20 weight = 20 +++ - The Seed Configurations page in the Admin Panel allows administrators to see Seeds available in the KKP instance. This page presents Seed's utilization broken down into Providers and Datacenters. {{% notice note %}} @@ -15,7 +14,6 @@ The Seed Configuration section is a readonly view page. This page does not provi ![Seed Configurations](images/seed-confgurations.png?classes=shadow,border "Seed Configurations List View") - ### Seed Details The following page presents Seed's statistics along with tables of utilization broken down respectively per Providers and Datacenters. 
diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/kkp-user/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/kkp-user/_index.en.md index ea7e560ef..f513fc58e 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/kkp-user/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/kkp-user/_index.en.md @@ -10,7 +10,7 @@ When a user authenticates for the first time at the Dashboard, an internal User Example User representation: -``` +```yaml apiVersion: kubermatic.k8c.io/v1 kind: User metadata: @@ -22,13 +22,13 @@ spec: name: Jane Doe ``` -# Initial Admin +## Initial Admin -After the installation of Kubermatic Kubernetes Platform the first account that authenticates at the Dashboard is elected as an admin. +After the installation of Kubermatic Kubernetes Platform, the first account that authenticates at the Dashboard is elected as an admin. The account is then capable of setting admin permissions via the [dashboard]({{< ref "../admin-panel/administrators" >}}) . -# Granting Admin Permission via kubectl +## Granting Admin Permission via kubectl Make sure the account logged in once at the Kubermatic Dashboard. diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/presets/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/presets/_index.en.md index 68ed77f67..f3af80cf3 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/presets/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/presets/_index.en.md @@ -38,9 +38,9 @@ Preset list offers multiple options that allow Administrators to manage Presets. 1. Create a new Preset 1. Manage existing Preset - - Edit Preset (allows showing/hiding specific providers) - - Add a new provider to the Preset - - Edit configure provider + 1. Edit Preset (allows showing/hiding specific providers) + 1. Add a new provider to the Preset + 1. Edit configure provider 1. Show/Hide the Preset. Allows hiding Presets from the users and block new cluster creation based on them. 1. A list of providers configured for the Preset. @@ -72,7 +72,7 @@ All configured providers will be available on this step and only a single provid #### Step 3: Settings -The _Settings_ step will vary depending on the provider selected in the previous step. In our example, we have selected +The *Settings* step will vary depending on the provider selected in the previous step. In our example, we have selected an AWS provider. ![Third step of creating a preset](images/create-preset-third-step.png?height=500px&classes=shadow,border) diff --git a/content/kubermatic/v2.28/tutorials-howtos/administration/user-settings/user-ssh-key-agent/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/administration/user-settings/user-ssh-key-agent/_index.en.md index b37291e35..0c4845023 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/administration/user-settings/user-ssh-key-agent/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/administration/user-settings/user-ssh-key-agent/_index.en.md @@ -6,6 +6,7 @@ weight = 6 +++ ## Overview + The user ssh key agent is responsible for syncing the defined user ssh keys to the worker nodes, when users attach ssh keys to the user clusters. 
When users choose to add a user ssh key to a cluster after it was created the agent will sync those keys to the worker machines by fetching the ssh keys and write them to the `authorized_keys` @@ -27,6 +28,7 @@ During the user cluster creation steps(at the second step), the users have the p is not affected by the agent, whether it was deployed or not. ## Disable user SSH Key Agent feature + To disable the User SSH Key Agent feature completely, enable the following feature flag in the Kubermatic configuration: ```yaml @@ -36,7 +38,9 @@ spec: ``` When this feature flag is enabled, the User SSH Key Agent will be disabled. The SSH Keys page and all SSH key options in the cluster view will also be hidden from the dashboard. + ## Migration + Starting from KKP 2.16.0 on-wards, it was made possible to enable and disable the user ssh key agent during cluster creation. Users can enable the agent in KKP dashboard as it is mentioned above, or by enabling the newly added `enableUserSSHKeyAgent: true` in the cluster spec. For user clusters which were created using KKP 2.15.x and earlier, this has introduced an issue, due to diff --git a/content/kubermatic/v2.28/tutorials-howtos/admission-plugins/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/admission-plugins/_index.en.md index 23b7fb79e..8057d98dc 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/admission-plugins/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/admission-plugins/_index.en.md @@ -13,7 +13,8 @@ The Kubermatic Kubernetes Platform manages the Kubernetes API server by setting list of admission control plugins to be enabled during cluster creation. In the current version, the default ones are: -``` + +```bash NamespaceLifecycle NodeRestriction LimitRanger @@ -39,6 +40,7 @@ They can be selected in the UI wizard. ![Admission Plugin Selection](@/images/ui/admission-plugins.png?height=400px&classes=shadow,border "Admission Plugin Selection") ### PodNodeSelector Configuration + Selecting the `PodNodeSelector` plugin expands an additional view for the plugin-specific configuration. ![PodNodeSelector Admission Plugin Configuration](@/images/ui/admission-plugin-configuration.png?classes=shadow,border "PodNodeSelector Admission Plugin Configuration") diff --git a/content/kubermatic/v2.28/tutorials-howtos/applications/add-remove-application-version/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/applications/add-remove-application-version/_index.en.md index 77dea16be..b9511f6f3 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/applications/add-remove-application-version/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/applications/add-remove-application-version/_index.en.md @@ -31,6 +31,7 @@ spec: url: https://charts.bitnami.com/bitnami version: 9.2.9 ``` + And want to make the new version `9.2.11` available. Then, all you have to do is to add the new version as described below: ```yaml @@ -60,19 +61,23 @@ spec: url: https://charts.bitnami.com/bitnami version: 9.2.11 ``` + Users will now be able to reference this version in their `ApplicationInstallation`. For additional details, see the [update an application guide]({{< ref "../update-application" >}}). 
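For illustration, an `ApplicationInstallation` that picks up the newly added version could look roughly like this (a sketch — the object name and target namespace are placeholders):

```yaml
apiVersion: apps.kubermatic.k8c.io/v1
kind: ApplicationInstallation
metadata:
  name: my-apache
  namespace: default
spec:
  namespace:
    name: apache        # namespace in the user cluster where the release is installed
  applicationRef:
    name: apache
    version: 9.2.11     # the version added to the ApplicationDefinition above
```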
{{% notice warning %}} -Do not replace one version with another, as it will be perceived as a **deletion** by the application installation controller leading to **deletion of all `ApplicationInstallation` using this version.** +Do not replace one version with another, as it will be perceived as a **deletion** by the application installation controller, leading to **deletion of all `ApplicationInstallation` using this version.** For more details, see how to delete a version from an `ApplicationDefinition`. {{% /notice %}} ## How to delete a version from an ApplicationDefinition + Deleting a version from `ApplicationDefinition` will trigger the deletion of all `ApplicationInstallations` that reference this version! It guarantees that only desired versions are installed in user clusters, which is helpful if a version contains a critical security breach. Under normal circumstances, we recommend following the deprecation policy to delete a version. ### Deprecation policy + Our recommended deprecation policy is as follows: + * stop the user from creating or upgrading to the deprecated version. But let them edit the application using a deprecated version (it may be needed for operational purposes). * notify the user running this version that it's deprecated. @@ -82,9 +87,10 @@ Once the deprecation period is over, delete the version from the `ApplicationDef This deprecation policy is an example and may have to be adapted to your organization's needs. {{% /notice %}} -The best way to achieve that is using the [gatekepper / opa integration]({{< ref "../../opa-integration" >}}) to create a `ConstraintTemplate` and two [Default Constraints]({{< ref "../../opa-integration#default-constraints" >}}) (one for each point of the deprecation policy) +The best way to achieve that is using the [Gatekeeper / OPA integration]({{< ref "../../opa-integration" >}}) to create a `ConstraintTemplate` and two [Default Constraints]({{< ref "../../opa-integration#default-constraints" >}}) (one for each point of the deprecation policy) **Example Kubermatic Constraint Template to deprecate a version:** + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: ConstraintTemplate @@ -100,7 +106,7 @@ spec: openAPIV3Schema: properties: allowEdit: - description: allow edit of existing application using deprecated version + description: allow editing of the existing application using the deprecated version type: boolean name: description: The name of the application to depreciate. @@ -125,10 +131,10 @@ spec: msg := sprintf("application `%v` in version `%v` is deprecated. Please upgrade to the next version", [input.parameters.name, input.parameters.version]) } - # reject upgrade to the deprecated version but allow edit application that currently use the deprecated version + # reject upgrade to the deprecated version but allow editing the application that currently uses the deprecated version violation[{"msg": msg, "details": {}}] { is_operation("UPDATE") - # when removing finilizer on applicationInstallation an Update event is sent. + # when removing finalizer on ApplicationInstallation an Update event is sent. not input.review.object.metadata.deletionTimestamp appRef := input.review.object.spec.applicationRef @@ -191,8 +197,8 @@ spec: If users try to create an `ApplicationInstallation` using the deprecation version, they will get the following error message: -``` -$ kubectl create -f app.yaml +```bash +kubectl create -f app.yaml Error from server ([deprecate-app-apache-9-2-9] application `apache` in version `9.2.9` is deprecated. 
Please upgrade to next version): error when creating "app.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [deprecate-app-apache-9-2-9] application `apache` in version `9.2.9` is deprecated. Please upgrade to the next version ``` @@ -223,17 +229,19 @@ spec: selector: labelSelector: {} ``` + This constraint will raise a warning if a user tries to create, edit, or upgrade to the deprecated version: -``` -$ kubectl edit applicationInstallation my-apache +```bash +kubectl edit applicationInstallation my-apache + Warning: [warn-app-apache-9-2-9] application `apache` in version `9.2.9` is deprecated. Please upgrade to the next version applicationinstallation.apps.kubermatic.k8c.io/my-apache edited ``` We can see which applications are using the deprecated version by looking at the constraint status. -``` +```bash status: [...] auditTimestamp: "2023-01-23T14:55:47Z" @@ -242,9 +250,9 @@ status: - enforcementAction: warn kind: ApplicationInstallation message: application `apache` in version `9.2.9` is deprecated. Please upgrade - to next version + to the next version name: my-apache namespace: default ``` -*note: the number of violations on the status is limited to 20. There are more ways to collect violations. Please refer to the official [Gatekeeper audit documentation](https://open-policy-agent.github.io/gatekeeper/website/docs/audit)* +*Note: the number of violations on the status is limited to 20. There are more ways to collect violations. Please refer to the official [Gatekeeper audit documentation](https://open-policy-agent.github.io/gatekeeper/website/docs/audit)* diff --git a/content/kubermatic/v2.28/tutorials-howtos/applications/update-application/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/applications/update-application/_index.en.md index bc672eae7..cb9b10c4a 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/applications/update-application/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/applications/update-application/_index.en.md @@ -8,6 +8,7 @@ This guide targets Cluster Admins and details how to update an Application insta For more details on Applications, please refer to our [Applications Primer]({{< ref "../../../architecture/concept/kkp-concepts/applications/" >}}). ## Update an Application via the UI + Go to the Applications Tab and click on the pen icon to edit the application. ![Applications Tab](@/images/applications/applications-edit-icon.png?classes=shadow,border "Applications edit button") @@ -18,13 +19,13 @@ Then you can update the values and or version using the editor. If you update the application's version, you may have to update the values accordingly. {{% /notice %}} - ![Applications Tab](@/images/applications/applications-edit-values.png?classes=shadow,border "Applications edit values and version") ## Update an Application via GitOps + Use `kubectl` to edit the applicationInstallation CR. -```sh +```bash kubectl -n edit applicationinstallation ``` @@ -32,5 +33,4 @@ kubectl -n edit applicationinstallation If you update the application's version, you may have to update the values accordingly. {{% /notice %}} - Then you can check the progress of your upgrade in `status.conditions`. For more information, please refer to [Application Life Cycle]({{< ref "../../../architecture/concept/kkp-concepts/applications/application-installation#application-life-cycle" >}}). 
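For example, the conditions can be printed directly with `kubectl` (replace the placeholders in angle brackets):

```bash
kubectl -n <namespace> get applicationinstallation <appinstallation-name> \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
```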
diff --git a/content/kubermatic/v2.28/tutorials-howtos/audit-logging/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/audit-logging/_index.en.md index ec5813de6..72e19767c 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/audit-logging/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/audit-logging/_index.en.md @@ -167,7 +167,8 @@ To enable this, you will need to edit your [datacenter definitions in a Seed]({{ Centrally define audit logging for **user-clusters** (via `auditLogging` in the Seed spec). Configure sidecar settings , webhook backends, and policy presets. Enforce datacenter-level controls with `EnforceAuditLogging` (mandatory logging) and `EnforcedAuditWebhookSettings` (override user-cluster webhook configs). -**Example**: +**Example**: + ```yaml spec: auditLogging: diff --git a/content/kubermatic/v2.28/tutorials-howtos/aws-assume-role/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/aws-assume-role/_index.en.md index 2fc98b725..675113d70 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/aws-assume-role/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/aws-assume-role/_index.en.md @@ -14,23 +14,26 @@ Using KKP you are able to use the `AssumeRole` feature to easily deploy user clu ![Running user clusters using an assumed IAM role](aws-assume-role-sequence-diagram.png?width=1000&classes=shadow,border "Running user clusters using an assumed IAM role") ## Benefits - * Privilege escalation + + - Privilege escalation - Get access to someones elses AWS account (e.g. a customer) to run user clusters on their behalf - While not described here, it is also possible to assume a role belonging to the same AWS account to escalate your privileges inside of your account - * Billing: All user cluster resources will be billed to **AWS account B** (the "external" account) - * Control: The owner of **AWS account B** (e.g. the customer) has control over all resources created in his account + - Billing: All user cluster resources will be billed to **AWS account B** (the "external" account) + - Control: The owner of **AWS account B** (e.g. the customer) has control over all resources created in his account ## Prerequisites - * An **AWS account A** that is allowed to assume the **IAM role R** of a second **AWS account B** + + - An **AWS account A** that is allowed to assume the **IAM role R** of a second **AWS account B** - **A** needs to be able to perform the API call `sts:AssumeRole` - You can test assuming the role by running the following AWS CLI command as **AWS account A**: \ `aws sts assume-role --role-arn "arn:aws:iam::YOUR_AWS_ACCOUNT_B_ID:role/YOUR_IAM_ROLE" --role-session-name "test" --external-id "YOUR_EXTERNAL_ID_IF_SET"` - * An **IAM role R** on **AWS account B** + - An **IAM role R** on **AWS account B** - The role should have all necessary permissions to run user clusters (IAM, EC2, Route53) - The role should have a trust relationship configured that allows **A** to assume the role **R**. Please refer to this [AWS article about trust relationships][aws-docs-how-to-trust-policies] for more information - Setting an `External ID` is optional but recommended when configuring the trust relationship. It helps avoiding the [confused deputy problem][aws-docs-confused-deputy] ## Usage + Creating a new cluster using an assumed role is a breeze. During cluster creation choose AWS as your provider and configure the cluster to your liking. 
After entering your AWS access credentials (access key ID and secret access key) choose "Enable Assume Role" (1), enter the ARN of the IAM role you would like to assume in field (2) (IAM role ARN should be in the format `arn:aws:iam::ID_OF_AWS_ACCOUNT_B:role/ROLE_NAME`) and if the IAM role has an optional `External ID` add it in field (3). @@ -39,6 +42,7 @@ After that you can proceed as usual. ![Enabling AWS AssumeRole in the cluster creation wizard](aws-assume-role-wizard.png?classes=shadow,border "Enabling AWS AssumeRole in the cluster creation wizard") ## Notes + Please note that KKP has no way to clean up clusters after a trust relationship has been removed. You should assure that all resources managed by KKP have been shut down before removing access. diff --git a/content/kubermatic/v2.28/tutorials-howtos/ccm-migration/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/ccm-migration/_index.en.md index 2daa8a640..ea517ffa4 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/ccm-migration/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/ccm-migration/_index.en.md @@ -29,6 +29,7 @@ needed a mechanism to allow users to migrate their clusters to the out-of-tree i ### Support and Prerequisites The CCM/CSI migration is supported for the following providers: + * Amazon Web Services (AWS) * OpenStack * [Required OpenStack services and cloudConfig properties for the external CCM][openstack-ccm-reqs] diff --git a/content/kubermatic/v2.28/tutorials-howtos/ccm-migration/via-ui/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/ccm-migration/via-ui/_index.en.md index 888db5382..fec4a5ce7 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/ccm-migration/via-ui/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/ccm-migration/via-ui/_index.en.md @@ -19,6 +19,7 @@ to the Kubernetes core code. Then, the Kubernetes community moved toward the out a plugin mechanism that allows different cloud providers to integrate their platforms with Kubernetes. ## CCM Migration Status + To allow migration from in-tree to out-of-tree CCM for existing cluster, the cluster details view has been extended by a section in the top area, the "External CCM Migration Status", that indicates the status of the CCM migration. @@ -27,10 +28,12 @@ section in the top area, the "External CCM Migration Status", that indicates the The "External CCM Migration Status" can have four different possible values: ### Not Needed + The cluster already uses the external CCM. ![ccm_migration_not_needed](ccm-migration-not-needed.png?height=60px&classes=shadow,border) ### Supported + KKP supports the external CCM for the given cloud provider, therefore the cluster can be migrated. ![ccm_migration_supported](ccm-migration-supported.png?height=130px&classes=shadow,border) @@ -38,14 +41,17 @@ When clicking on this button, a windows pops up to confirm the migration. ![ccm_migration_supported](ccm-migration-confirm.png?height=200px&classes=shadow,border) ### In Progress + External CCM migration has already been enabled for the given cluster, and the migration is in progress. ![ccm_migration_in_progress](ccm-migration-in-progress.png?height=60px&classes=shadow,border) ### Unsupported + KKP does not support yet the external CCM for the given cloud provider. 
![ccm_migration_unsupported](ccm-migration-unsupported.png?height=60px&classes=shadow,border) ## Roll out MachineDeployments + Once the CCM migration has been enabled by clicking on the "Supported" button, the migration procedure will hang in "In progress" status until all the `machineDeployments` will be rolled out. To roll out a `machineDeployment` get into the `machineDeployment` view and click on the circular arrow in the top right. diff --git a/content/kubermatic/v2.28/tutorials-howtos/cluster-access/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/cluster-access/_index.en.md index 8c85039e1..f3cca3cf7 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/cluster-access/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/cluster-access/_index.en.md @@ -8,7 +8,9 @@ weight = 14 This manual explains how to configure Role-Based Access Control (a.k.a RBAC) on user clusters. ## Concepts + You can grant permission to 3 types of subjects: + * `user`: end user identified by their email * `group`: named collection of users * `service account`: a Kubernetes service account that authenticates a process (e.g. Continuous integration) @@ -21,13 +23,13 @@ permission by adding or removing binding. ![list group rbac](@/images/ui/rbac-group-view.png?classes=shadow,border "list group rbac") ![list service account rbac](@/images/ui/rbac-sa-view.png?classes=shadow,border "list service account rbac") - ## Role-Based Access Control Predefined Roles + KKP provides predefined roles and cluster roles to help implement granular permissions for specific resources and simplify access control across the user cluster. All of the default roles and cluster roles are labeled with `component=userClusterRole`. -### Cluster Level +### Cluster Level | Default ClusterRole | Description | |---------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| @@ -38,15 +40,14 @@ with `component=userClusterRole`. ### Namespace Level -| Default Role | Description | -|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| -| namespace-admin | Allows admin access. Allows read/write access to most resources in a namespace. | -| namespace-editor | Allows read/write access to most objects in a namespace. This role allows accessing secrets and running pods as any service account in the namespace| -| namespace-viewer | Allows read-only access to see most objects in a namespace. | - - +| Default Role | Description | +|------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------| +| namespace-admin | Allows admin access. Allows read/write access to most resources in a namespace. | +| namespace-editor | Allows read/write access to most objects in a namespace. This role allows accessing secrets and running pods as any service account in the namespace | +| namespace-viewer | Allows read-only access to see most objects in a namespace. | # Manage User Permissions + You can grant permissions to a group by clicking on `add Bindings`. 
![Grant permission to a user](@/images/ui/rbac-user-binding.png?classes=shadow,border "Grant permission to a user") @@ -55,6 +56,7 @@ The cluster owner is automatically connected to the `cluster-admin` cluster role {{% /notice %}} ## Manage Group Permissions + Group are named collection of users. You can grant permission to a group by clicking on `add Bindings`. ![Grant permission to a group](@/images/ui/rbac-group-binding.png?classes=shadow,border "Grant permission to a Group") @@ -65,15 +67,17 @@ If you want to bind an OIDC group, you must prefix the group's name with `oidc:` The kubernetes API Server automatically adds this prefix to prevent conflicts with other authentication strategies {{% /notice %}} - ## Manage Service Account Permissions + Service accounts are designed to authenticate processes like Continuous integration (a.k.a CI). In this example, we will: + * create a Service account * grant permission to 2 namespaces * download the associated kubeconfig that can be used to deploy workload into these namespaces. ### Create a Service Account + Service accounts are namespaced objects. So you must choose in which namespace you will create it. The namespace where the service account live is not related to the granted permissions. To create a service account, click on `Add Service Account` @@ -82,6 +86,7 @@ To create a service account, click on `Add Service Account` In this example, we create a service account named `ci` into `kube-system` namespace. ## Grant Permissions to Service Account + You can grant permission by clicking on `Add binding` ![Grant permission to service account](@/images/ui/rbac-sa-binding.png?classes=shadow,border "Grant permission to service account") @@ -91,8 +96,8 @@ In this example, we grant the permission `namespace-admin` on the namespace `app You can see and remove binding by unfolding the service account. {{% /notice %}} - ### Download Service Account kubeconfig + Finally, you can download the service account's kubeconfig by clicking on the download icon. ![download service account's kubeconfig](@/images/ui/rbac-sa-download-kc.png?classes=shadow,border "Download service account's kubeconfig") @@ -101,8 +106,10 @@ You can edit service account's permissions at any time. There is no need to down {{% /notice %}} ### Delete a Service Account + You can delete a service account by clicking on the trash icon. Deleting a service account also deletes all associated binding. ## Debugging + The best way to debug authorizing problems is to enable [audit logging]({{< ref "../audit-logging/" >}}) and checks audit logs. For example, check the user belongs to the expected groups (see `.user.groups`) diff --git a/content/kubermatic/v2.28/tutorials-howtos/cluster-backup/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/cluster-backup/_index.en.md index b746656a6..629005d08 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/cluster-backup/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/cluster-backup/_index.en.md @@ -30,7 +30,6 @@ The `ClusterBackupStorageLocation` resource is a simple wrapper for Velero's [Ba When Custer Backup is enabled for a user cluster, KKP will deploy a managed instance of Velero on the user cluster, and propagate the required Velero `BackupStorageLocation` to the user cluster, with a special prefix to avoid collisions with other user clusters that could be using the same storage. 
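For orientation, the propagated object follows Velero's `BackupStorageLocation` API; a rough sketch with placeholder values is shown below (KKP generates the real object, including the per-cluster prefix, for you):

```yaml
# Illustrative only - KKP creates and manages this object on the user cluster.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: example-bsl   # placeholder name
  namespace: velero   # placeholder namespace; the managed install may differ
spec:
  provider: aws
  objectStorage:
    bucket: my-backup-bucket          # placeholder bucket
    prefix: <cluster-specific-prefix>
  config:
    region: eu-central-1              # placeholder region
    s3Url: https://s3.example.com     # placeholder S3-compatible endpoint
```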
- ![Create ClusterBackupStorageLocation](images/create-cbsl.png?classes=shadow,border "Create ClusterBackupStorageLocation") For simplicity, the KKP UI requires the minimal required values to enabled a working Velero Backup Storage Location. If further parameters are needed, they can be added by editing the `ClusterBackupStorageLocation` resources on the Seed cluster: @@ -46,12 +45,13 @@ The KKP control plane nodes need to have access to S3 endpoint defined in the St {{% /notice %}} #### Enabling Backup for User Clusters + Cluster Backup can be used for existing clusters and newly created ones. When creating a new user cluster, you can enable the `Cluster Backup` option in the cluster creation wizard. Once enabled, you will be required to assign the Backup Storage Location that you have created previously. ![Enable Backup for New Clusters](images/enable-backup-new-cluster.png?classes=shadow,border "Enable Backup for New Clusters") - For existing clusters, you can edit the cluster to assign the required Cluster Backup Location: + ![Edit Existing Cluster](images/enable-backup-edit-cluster.png?classes=shadow,border "Edit Existing Cluster") {{% notice note %}} @@ -63,17 +63,18 @@ Currently, KKP support defining a single storage location per cluster. The stora {{% /notice %}} #### Configuring Backups and Schedules + Using the KKP UI, you can configure one-time backups that run as soon as they are defined, or recurring backup schedules that run at specific intervals. Using the user cluster Kubernetes API, you can also create these resources using the Velero command line tool. They should show up automatically on the KKP UI once created. ##### One-time Backups + As with the `CBSL`, KKP UI allows the user to set the minimal required options to create a backup configuration. Since the backup process is started immediately after creation, it's not possible to edit it via the Kubernetes API. If you need to customize your backup further, you should use a Schedule. To configure a new one-time backup, go to the Backups list, select the cluster you would like to create the backup for from the drop-down list, and click `Create Cluster Backup`. ![Create Backup](images/create-backup.png?classes=shadow,border "Create Backup") - You can select the Namespaces that you want to include in this backup configuration from the dropdown list. Note that this list of Namespaces is directly fetched from your cluster, so you need to create the Namespaces before configuring the backup. You can define the backup expiration period, which defaults to **30 days** and you can also choose if you want to backup Persistent Volumes or not. KKP integration uses Velero's [File System Backup](https://velero.io/docs/v1.12/file-system-backup/) to cover the widest range of use cases. Persistent Volumes backup is enabled by default. @@ -83,20 +84,23 @@ Backing up Persistent Volume data to S3 backend can be resource intensive, espec {{% /notice %}} ##### Scheduled Backups + Creating scheduled backups is almost identical to one-time backups. Configuration is available from the "Schedules" submenu, selecting your user cluster and clicking `Create Backup Schedule`. For schedules, you also need to add a cron-style schedule to perform the backups. ![Create Backup Schedule](images/create-schedule.png?classes=shadow,border "Create Backup Schedule") - ##### Downloading Backups + KKP UI provides a convenient button to download backups. 
You can simply go to the "Backups" list, select a user cluster and a specific backup, then click the "Download Backup" button. + {{% notice note %}} The S3 endpoint defined in the Cluster Backup Storage Location must be accessible to the device used to download the backup. {{% /notice %}} #### Performing Restores + To restore a specific backup, go to the Backups page, select your cluster and find the required backup. Click on the `Restore Backup` button. Velero restores backups by creating a `Restore` API resource on the clusters and then reconciles it. @@ -145,6 +149,7 @@ KKP UI will use multipart upload for files larger than the 100MB size and maximu After all files have been uploaded successfully, user can follow the instructions mentioned above for Importing External Backups. ### Security Consideration + KKP administrators and project owners/editors should be carefully plan the backup storage strategy of projects and user clusters. Velero Backup is not currently designed with multi-tenancy in mind. While the upstream project is working on that, it's not there yet. As a result, Velero is expected to be managed by the cluster administrator who has full access to Velero resources as well as the backup storage backend. diff --git a/content/kubermatic/v2.28/tutorials-howtos/dashboard-customization/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/dashboard-customization/_index.en.md index 15c31cb90..0bc29dcc2 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/dashboard-customization/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/dashboard-customization/_index.en.md @@ -9,11 +9,11 @@ This manual explains multiple approaches to add custom themes to the application Here are some quick links to the different chapters: -* [Modifying Available Themes]({{< ref "#modifying-available-themes" >}}) -* [Disabling Theming Functionality]({{< ref "#disabling-theming-functionality" >}}) -* [Possible Customizing Approaches]({{< ref "#possible-customizing-approaches" >}}) - * [Preparing a New Theme With Access to the Sources]({{< ref "#customizing-the-application-sources" >}}) - * [Preparing a New Theme Without Access to the Sources]({{< ref "#customizing-the-application-sources-inside-custom-container" >}}) +- [Modifying Available Themes]({{< ref "#modifying-available-themes" >}}) +- [Disabling Theming Functionality]({{< ref "#disabling-theming-functionality" >}}) +- [Possible Customizing Approaches]({{< ref "#possible-customizing-approaches" >}}) + - [Preparing a New Theme With Access to the Sources]({{< ref "#customizing-the-application-sources" >}}) + - [Preparing a New Theme Without Access to the Sources]({{< ref "#customizing-the-application-sources-inside-custom-container" >}}) ## Modifying Available Themes @@ -28,10 +28,12 @@ In order to disable theming options for all users and enforce using only the def `enforced_theme` property in the application `config.json` file to the name of the theme that should be enforced (i.e. `light`). ## Possible Customizing Approaches + There are two possible approaches of preparing custom themes. They all rely on the same functionality. It all depends on user access to the application code in order to prepare and quickly test the new theme before using it in the official deployment. ### Preparing a New Theme With Access to the Sources + This approach gives user the possibility to reuse already defined code, work with `scss` instead of `css` and quickly test your new theme before uploading it to the official deployment. 
@@ -40,22 +42,22 @@ All available themes can be found inside `src/assets/themes` directory. Follow t - Create a new `scss` theme file inside `src/assets/themes` directory called `custom.scss`. This is only a temporary name that can be changed later. - As a base reuse code from one of the default themes, either `light.scss` or `dark.scss`. - Register a new style in `src/assets/config/config.json` similar to how it's done for `light` and `dark` themes. As the `name` use `custom`. - - `name` - refers to the theme file name stored inside `assets/themes` directory. - - `displayName` - will be used by the theme picker available in the `Account` view to display a new theme. - - `isDark` - defines the icon to be used by the theme picker (sun/moon). + - `name` - refers to the theme file name stored inside `assets/themes` directory. + - `displayName` - will be used by the theme picker available in the `Account` view to display a new theme. + - `isDark` - defines the icon to be used by the theme picker (sun/moon). ```json - { - "openstack": { - "wizard_use_default_user": false - }, - "themes": [ - { - "name": "custom", - "displayName": "Custom", - "isDark": false - } - ] - } + { + "openstack": { + "wizard_use_default_user": false + }, + "themes": [ + { + "name": "custom", + "displayName": "Custom", + "isDark": false + } + ] + } ``` - Run the application using `npm start`, open the `Account` view under `User settings`, select your new theme and update `custom.scss` according to your needs. @@ -79,41 +81,49 @@ All available themes can be found inside `src/assets/themes` directory. Follow t ![Theme picker](@/images/ui/custom-theme.png?classes=shadow,border "Theme picker") ### Preparing a New Theme Without Access to the Sources + In this case the easiest way of preparing a new theme is to download one of the existing themes light/dark. This can be done in a few different ways. We'll describe here two possible ways of downloading enabled themes. #### Download Theme Using the Browser -1. Open KKP UI -2. Open `Developer tools` and navigate to `Sources` tab. -3. There should be a CSS file of a currently selected theme available to be downloaded inside `assts/themes` directory. + +- Open KKP UI +- Open `Developer tools` and navigate to `Sources` tab. +- There should be a CSS file of a currently selected theme available to be downloaded inside `assts/themes` directory. ![Dev tools](@/images/ui/developer-tools.png?classes=shadow,border "Dev tools") #### Download Themes Directly From the KKP Dashboard container + Assuming that you know how to exec into the container and copy resources from/to it, themes can be simply copied over to your machine from the running KKP Dashboard container. They are stored inside the container in `dist/assets/themes` directory. ##### Kubernetes + Assuming that the KKP Dashboard pod name is `kubermatic-dashboard-5b96d7f5df-mkmgh` you can copy themes to your `${HOME}/themes` directory using below command: + ```bash kubectl -n kubermatic cp kubermatic-dashboard-5b96d7f5df-mkmgh:/dist/assets/themes ~/themes ``` ##### Docker + Assuming that the KKP Dashboard container name is `kubermatic-dashboard` you can copy themes to your `${HOME}/themes` directory using below command: + ```bash docker cp kubermatic-dashboard:/dist/assets/themes/. ~/themes ``` #### Using Compiled Theme to Prepare a New Theme + Once you have a base theme file ready, we can use it to prepare a new theme. 
To easier understand the process, let's assume that we have downloaded a `light.css` file and will be preparing a new theme called `solar.css`. -1. Rename `light.css` to `solar.css`. -2. Update `solar.css` file according to your needs. Anything in the file can be changed or new rules can be added. +- Rename `light.css` to `solar.css`. +- Update `solar.css` file according to your needs. Anything in the file can be changed or new rules can be added. In case you are changing colors, remember to update it in the whole file. -3. Mount new `solar.css` file to `dist/assets/themes` directory inside the application container. **Make sure not to override whole directory.** -4. Update `config.json` file inside `dist/config` directory and register the new theme. +- Mount new `solar.css` file to `dist/assets/themes` directory inside the application container. **Make sure not to override whole irectory.** +- Update `config.json` file inside `dist/config` directory and register the new theme. ```json { @@ -132,7 +142,6 @@ assume that we have downloaded a `light.css` file and will be preparing a new th That's it. After restarting the application, theme picker in the `Account` view should show your new `Solar` theme. - ## Setting Custom Postfix for the Page Title You can add a custom postfix to the page title from the UI config in the KubermaticConfiguration CRD on the master cluster. diff --git a/content/kubermatic/v2.28/tutorials-howtos/deploy-your-application/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/deploy-your-application/_index.en.md index 605828f80..dcea62bc4 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/deploy-your-application/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/deploy-your-application/_index.en.md @@ -11,7 +11,8 @@ Log into Kubermatic Kubernetes Platform (KKP), then [create and connect to the c We are using a [hello-world app](https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/tree/master/hello-app) whose image is available at gcr.io/google-samples/node-hello:1.0. First, create a Deployment: -``` + +```yaml apiVersion: apps/v1 kind: Deployment metadata: @@ -38,29 +39,34 @@ spec: ```bash kubectl apply -f load-balancer-example.yaml ``` + To expose the Deployment, create a Service object of type LoadBalancer. + ```bash kubectl expose deployment hello-world --type=LoadBalancer --name=my-service ``` + Now you need to find out the external IP of that service. ```bash kubectl get services my-service ``` + The response on AWS should look like this: -``` +```bash NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service LoadBalancer 10.240.29.100 8080:30574/TCP 19m ``` + If you curl against that external IP: ```bash curl :8080 ``` -you should get this response: +You should get this response: -``` +```bash Hello Kubernetes! ``` diff --git a/content/kubermatic/v2.28/tutorials-howtos/encryption-at-rest/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/encryption-at-rest/_index.en.md index 5e7be95e8..d442f13f6 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/encryption-at-rest/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/encryption-at-rest/_index.en.md @@ -15,7 +15,7 @@ Data will either be encrypted with static encryption keys or via envelope encryp ## Important Notes -- Data is only encrypted _at rest_ and not when requested by users with sufficient RBAC to access it. This means that the output of `kubectl get secret -o yaml` (or similar commands/actions) remains unencrypted and is only base64-encoded. 
Proper RBAC management is mandatory to secure secret data at all stages. +- Data is only encrypted *at rest* and not when requested by users with sufficient RBAC to access it. This means that the output of `kubectl get secret -o yaml` (or similar commands/actions) remains unencrypted and is only base64-encoded. Proper RBAC management is mandatory to secure secret data at all stages. - Due to multiple revisions of data existing in etcd, [etcd backups]({{< ref "../etcd-backups/" >}}) might contain previous revisions of a resource that are unencrypted if the etcd backup is taken less than five minutes after data has been encrypted. Previous revisions are compacted every five minutes by `kube-apiserver`. ## Configuring Encryption at Rest @@ -70,7 +70,7 @@ spec: value: ynCl8otobs5NuHu$3TLghqwFXVpv6N//SE6ZVTimYok= ``` -``` +```yaml # snippet for referencing a secret spec: encryptionConfiguration: @@ -95,8 +95,8 @@ Once configured, encryption at rest can be disabled via setting `spec.encryption Since encryption at rest needs to reconfigure the control plane and re-encrypt existing data in a user cluster, applying changes to the encryption configuration can take a while. Encryption status can be queried via `kubectl`: -```sh -$ kubectl get cluster -o jsonpath="{.status.encryption.phase}" +```bash +kubectl get cluster -o jsonpath="{.status.encryption.phase}" Active ``` @@ -131,7 +131,6 @@ This will configure the contents of `encryption-key-2022-02` as secondary encryp After control plane components have been rotated, switch the position of the two keys in the `keys` array. The given example will look like this: - ```yaml # only a snippet, not valid on its own! spec: diff --git a/content/kubermatic/v2.28/tutorials-howtos/etcd-backups/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/etcd-backups/_index.en.md index 39a7bb79d..2d9fd0e8c 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/etcd-backups/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/etcd-backups/_index.en.md @@ -10,7 +10,7 @@ Through KKP you can set up automatic scheduled etcd backups for your user cluste Firstly, you need to enable and configure at least one backup destination (backup bucket, endpoint and credentials). To see how, check [Etcd Backup Destination Settings]({{< ref "../administration/admin-panel/backup-buckets" >}}). It is recommended to enable [EtcdLauncher]({{< ref "../../cheat-sheets/etcd/etcd-launcher" >}}) on the clusters. -It is _required_ for the restore to work. +It is *required* for the restore to work. ## Etcd Backups @@ -48,11 +48,12 @@ It is also possible to do one-time backups (snapshots). The only change to the Y In **Kubermatic Kubernetes Platform (KKP)**, you can configure backup settings (`backupInterval` and `backupCount`) at two levels: 1. **Global Level** (KubermaticConfiguration): Defines default values for all Seeds. -2. **Seed Level** (Seed CRD): Overrides the global settings for a specific Seed. +1. **Seed Level** (Seed CRD): Overrides the global settings for a specific Seed. The **Seed-level configuration takes precedence** over the global KubermaticConfiguration. This allows fine-grained control over backup behavior for individual Seeds. #### **1. Global Configuration (KubermaticConfiguration)** + The global settings apply to all Seeds unless overridden by a Seed's configuration. These are defined in the `KubermaticConfiguration` CRD under `spec.seedController`: ```yaml @@ -70,6 +71,7 @@ spec: --- #### **2. 
Seed-Level Configuration (Seed CRD)** + Each Seed can override the global settings using the `etcdBackupRestore` field in the `Seed` CRD. These values take precedence over the global configuration: ```yaml @@ -173,7 +175,6 @@ This will create an `EtcdRestore` object for your cluster. You can observe the p ![Etcd Restore List](@/images/ui/etcd-restores-list.png?classes=shadow,border "Etcd Restore List") - #### Starting Restore via kubectl To restore a cluster from am existing backup via `kubectl`, you simply create a restore resource in the cluster namespace: @@ -195,7 +196,6 @@ spec: This needs to reference the backup name from the list of backups (shown above). - ### Restore Progress In the cluster view, you may notice that your cluster is in a `Restoring` state, and you can not interact with it until it is done. diff --git a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/_index.en.md index b98b9e8ce..d6642f6d6 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/_index.en.md @@ -7,8 +7,10 @@ weight = 7 This section describes how to add, create and manage Kubernetes clusters known as external clusters in KKP. You can create a new cluster or import/connect an existing cluster. + - Import: You can import a cluster via credentials. Imported Cluster can be viewed and edited. Supported Providers: + - Azure Kubernetes Service (AKS) - Amazon Elastic Kubernetes Service (EKS) - Google Kubernetes Engine (GKE) @@ -22,6 +24,7 @@ Every cluster update is performed only by the cloud provider client. There is no ## Prerequisites The following requirements must be met to add an external Kubernetes cluster: + - The external Kubernetes cluster must already exist before you begin the import/connect process. Please refer to the cloud provider documentation for instructions. - The external Kubernetes cluster must be accessible using kubectl to get the information needed to add that cluster. - Make sure the cluster kubeconfig or provider credentials have sufficient rights to manage the cluster (get, list, upgrade,get kubeconfig) @@ -66,7 +69,7 @@ KKP allows creating a Kubernetes cluster on AKS/GKE/EKS and import it as an Exte ![External Cluster List](@/images/tutorials/external-clusters/externalcluster-list.png "External Cluster List") -## Delete Cluster: +## Delete Cluster {{% notice info %}} Delete operation is not allowed for imported clusters. @@ -109,12 +112,10 @@ You can `Disconnect` an external cluster by clicking on the disconnect icon next ![Disconnect Dialog](@/images/tutorials/external-clusters/disconnect.png "Disconnect Dialog") - ## Delete Cluster {{% notice info %}} Delete Cluster displays information in case nodes are attached {{% /notice %}} - ![Delete External Cluster](@/images/tutorials/external-clusters/delete-external-cluster-dialog.png "Delete External Cluster") diff --git a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/aks/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/aks/_index.en.md index 7050d8d49..2166f2ed7 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/aks/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/aks/_index.en.md @@ -43,6 +43,7 @@ Validation performed will only check if the credentials have `Read` access. 
![Select AKS cluster](@/images/tutorials/external-clusters/select-aks-cluster.png "Select AKS cluster") ## Create AKS Preset + Admin can create a preset on a KKP cluster using KKP `Admin Panel`. This Preset can then be used to Create/Import an AKS cluster. @@ -62,7 +63,7 @@ This Preset can then be used to Create/Import an AKS cluster. ![Choose AKS Preset](@/images/tutorials/external-clusters/choose-akspreset.png "Choose AKS Preset") -- Enter AKS credentials and Click on `Create` button. +- Enter AKS credentials and click on the `Create` button. !["Enter Credentials](@/images/tutorials/external-clusters/enter-aks-credentials-preset.png "Enter Credentials") @@ -137,7 +138,7 @@ Navigate to the cluster overview, scroll down to machine deployments and click o ![Update AKS Machine Deployment](@/images/tutorials/external-clusters/delete-md.png "Delete AKS Machine Deployment") -## Cluster State: +## Cluster State {{% notice info %}} `Provisioning State` is used to indicate AKS Cluster State diff --git a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/eks/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/eks/_index.en.md index ded93a207..0f0436a65 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/eks/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/eks/_index.en.md @@ -42,6 +42,7 @@ You should see the list of all available clusters in the region specified. Selec ![Select EKS cluster](@/images/tutorials/external-clusters/select-eks-cluster.png "Select EKS cluster") ## Create EKS Preset + Admin can create a preset on a KKP cluster using KKP `Admin Panel`. This Preset can then be used to Create/Import an EKS cluster. @@ -61,7 +62,7 @@ This Preset can then be used to Create/Import an EKS cluster. ![Choose EKS Preset](@/images/tutorials/external-clusters/choose-akspreset.png "Choose EKS Preset") -- Enter EKS credentials and Click on `Create` button. +- Enter EKS credentials and click on the `Create` button. ![Enter Credentials](@/images/tutorials/external-clusters/enter-eks-credentials-preset.png "Enter Credentials") @@ -151,7 +152,7 @@ Example: `~/.aws/credentials` -``` +```bash [default] aws_access_key_id=AKIAIOSFODNN7EXAMPLE aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY @@ -161,7 +162,7 @@ aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Now you can create kubeconfig file automatically using the following command: -``` +```bash aws eks update-kubeconfig --region region-code --name cluster-name ``` diff --git a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/eks/create-eks/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/eks/create-eks/_index.en.md index 143dc5861..83590c1b4 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/eks/create-eks/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/eks/create-eks/_index.en.md @@ -39,11 +39,13 @@ Supported kubernetes versions 1.21.0, 1.22.0, 1.23.0, 1.24.0 currently available ## Configure the Cluster ### Basic Settings + - Name: Provide a unique name for your cluster - Kubernetes Version: Select the Kubernetes version for this cluster. - Cluster Service Role: Select the IAM role to allow the Kubernetes control plane to manage AWS resources on your behalf. This property cannot be changed after the cluster is created.
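If you are unsure whether a candidate IAM role qualifies as the cluster service role, you can inspect its trust policy from the CLI — `my-eks-cluster-role` is a placeholder name:

```bash
aws iam get-role --role-name my-eks-cluster-role \
  --query 'Role.AssumeRolePolicyDocument'
# The trust policy should list "eks.amazonaws.com" as a trusted principal.
```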
### Networking + - VPC: Select a VPC to use for your EKS cluster resources - Subnets: Choose the subnets in your VPC where the control plane may place elastic network interfaces (ENIs) to facilitate communication with your cluster. @@ -62,6 +64,7 @@ Both Subnet and Security Groups list depends on chosen VPC. - Add NodeGroup configurations: ### Basic Settings + - Name: Assign a unique name for this node group. The node group name should begin with letter or digit and can have any of the following characters: the set of Unicode letters, digits, hyphens and underscores. Maximum length of 63. - Kubernetes Version: Cluster Control Plane Version is prefilled. @@ -69,11 +72,14 @@ Both Subnet and Security Groups list depends on chosen VPC. - Disk Size: Select the size of the attached EBS volume for each node. ### Networking + - VPC: VPC of the cluster is pre-filled. - Subnet: Specify the subnets in your VPC where your nodes will run. ### Autoscaling + Node group scaling configuration: + - Desired Size: Set the desired number of nodes that the group should launch with initially. - Max Count: Set the maximum number of nodes that the group can scale out to. - Min Count: Set the minimum number of nodes that the group can scale in to. diff --git a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/gke/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/gke/_index.en.md index d6f2ef55e..2f6a5dd14 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/external-clusters/gke/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/external-clusters/gke/_index.en.md @@ -41,6 +41,7 @@ Validation performed will only check if the credentials have `Read` access. {{% /notice %}} ## Create GKE Preset + Admin can create a preset on a KKP cluster using KKP `Admin Panel`. This Preset can then be used to Create/Import an GKE cluster. @@ -60,7 +61,7 @@ This Preset can then be used to Create/Import an GKE cluster. ![Choose EKS Preset](@/images/tutorials/external-clusters/choose-akspreset.png "Choose GKE Preset") -- Enter GKE credentials and Click on `Create` button. +- Enter GKE credentials and Click on `Create` button. ![Enter Credentials](@/images/tutorials/external-clusters/enter-gke-credentials-preset.png "Enter Credentials") @@ -136,25 +137,24 @@ The KKP platform allows getting kubeconfig file for the GKE cluster. ![Get GKE kubeconfig](@/images/tutorials/external-clusters/gke-kubeconfig.png "Get cluster kubeconfig") - The end-user must be aware that the kubeconfig expires after some short period of time. To mitigate this disadvantage you can extend the kubeconfig for the provider information and use exported JSON with the service account for the authentication. - Add `name: gcp` for the users: -``` +```bash users: - name: gke_kubermatic-dev_europe-central2-a_test user: auth-provider: name: gcp ``` + Provide authentication credentials to your application code by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS. This variable applies only to your current shell session. If you want the variable to apply to future shell sessions, set the variable in your shell startup file, for example in the `~/.bashrc` or `~/.profile` file. 
-``` +```bash export GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH" ``` @@ -162,6 +162,6 @@ Replace `KEY_PATH` with the path of the JSON file that contains your service acc For example: -``` +```bash export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json" ``` diff --git a/content/kubermatic/v2.28/tutorials-howtos/gitops-argocd/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/gitops-argocd/_index.en.md index 60486cad3..578a05efb 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/gitops-argocd/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/gitops-argocd/_index.en.md @@ -9,8 +9,8 @@ linktitle = "GitOps via ArgoCD" This is an Alpha version of Kubermatic management via GitOps which can become the default way to manage KKP in future. This article explains how you can kickstart that journey with ArgoCD right now. But this feature can also change significantly and in backward incompatible ways. **Please use this setup in production at your own risk.** {{% /notice %}} - ## Need of GitOps solution + Kubermatic Kubernetes Platform is a versatile solution to create and manage Kubernetes clusters (user-clusters) a plethora of cloud providers and on-prem virtualizaton platforms. But this flexibility also means that there is a good amount of moving parts. KKP provides various tools to manage user-clusters across various regions and clouds. This is why, if we utilize a GitOps solution to manage KKP and its upgrades, KKP administrators would have better peace of mind. We have now provided [an alpha release of ArgoCD based management of KKP master and seeds](https://github.com/kubermatic/kubermatic/tree/main/charts/gitops/kkp-argocd-apps). @@ -39,6 +39,7 @@ For the demonstration, 1. ArgoCD will be installed in each of the seeds (and master/seed) to manage the respective seed's KKP components A high-level procedure to get ArgoCD to manage the seed would be as follows: + 1. Setup a Kubernetes cluster to be used as master (or master-seed combo) 1. Install ArgoCD Helm chart and KKP ArgoCD Applications Helm chart 1. Install KKP on the master seed using kubermatic-installer @@ -51,38 +52,39 @@ A high-level procedure to get ArgoCD to manage the seed would be as follows: The `Folder and File Structure` section in the [README.md of ArgoCD Apps Component](https://github.com/kubermatic/kubermatic/blob/main/charts/gitops/kkp-argocd-apps/README.md#folder-and-file-structure) explains what files should be present for each seed in what folders and how to customize the behavior of ArgoCD apps installation. -**Note:** Configuring values for all the components of KKP is a humongous task. Also - each KKP installation might like a different directory structure to manage KKP installation. This ArgoCD Apps based approach is an _opinionated attempt_ to provide a standard structure that can be used in most of the KKP installations. If you need different directory structure, refer to [README.md of ArgoCD Apps Component](https://github.com/kubermatic/kubermatic/blob/main/charts/gitops/kkp-argocd-apps/README.md) to understand how you can customize this, if needed. +**Note:** Configuring values for all the components of KKP is a humongous task. Also - each KKP installation might like a different directory structure to manage KKP installation. This ArgoCD Apps based approach is an *opinionated attempt* to provide a standard structure that can be used in most of the KKP installations. 
If you need different directory structure, refer to [README.md of ArgoCD Apps Component](https://github.com/kubermatic/kubermatic/blob/main/charts/gitops/kkp-argocd-apps/README.md) to understand how you can customize this, if needed. ### ArgoCD Apps + We will install ArgoCD on both the clusters and we will install following components on both clusters via ArgoCD. In non-GitOps scenario, some of these components are managed via kubermatic-installer and rest are left to be managed by KKP administrator in master/seed clusters by themselves. With ArgoCD, except for kubermatic-operator, everything else can be managed via ArgoCD. Choice remains with KKP Administrator to include which apps to be managed by ArgoCD. 1. Core KKP components - 1. Dex (in master) - 1. ngix-ingress-controller - 1. cert-manager + 1. Dex (in master) + 1. ngix-ingress-controller + 1. cert-manager 1. Backup components - 1. Velero + 1. Velero 1. Seed monitoring tools - 1. Prometheus - 1. alertmanager - 1. Grafana - 1. kube-state-metrics - 1. node-exporter - 1. blackbox-exporter - 1. Identity aware proxy (IAP) for seed monitoring components + 1. Prometheus + 1. alertmanager + 1. Grafana + 1. kube-state-metrics + 1. node-exporter + 1. blackbox-exporter + 1. Identity aware proxy (IAP) for seed monitoring components 1. Logging components - 1. Promtail - 1. Loki + 1. Promtail + 1. Loki 1. S3-like object storage, like Minio 1. User-cluster MLA components - 1. Minio and Minio Lifecycle Manager - 1. Grafana - 1. Consul - 1. Cortex - 1. Loki - 1. Alertmanager Proxy - 1. IAP for user-mla - 1. secrets - Grafana and Minio secrets + 1. Minio and Minio Lifecycle Manager + 1. Grafana + 1. Consul + 1. Cortex + 1. Loki + 1. Alertmanager Proxy + 1. IAP for user-mla + 1. secrets - Grafana and Minio secrets 1. Seed Settings - Kubermatic configuration, Seed objects, Preset objects and such misc objects needed for Seed configuration 1. Seed Extras - This is a generic ArgoCD app to deploy arbitrary resources not covered by above things and as per needs of KKP Admin. @@ -91,6 +93,7 @@ We will install ArgoCD on both the clusters and we will install following compon > You can find code for this tutorial with sample values in [this git repository](https://github.com/kubermatic-labs/kkp-using-argocd). For ease of installation, a `Makefile` has been provided to just make commands easier to read. Internally, it just depends on Helm, kubectl and kubermatic-installer binaries. But you will need to look at `make` target definitions in `Makefile` to adjust DNS names. While for the demo, provided files would work, you would need to look through each file under `dev` folder and customize the values as per your need. ### Setup two Kubernetes Clusters + > This step install two Kubernetes clusters using KubeOne in AWS. You can skip this step if you already have access to two Kubernetes clusters. Use KubeOne to create 2 clusters in DEV env - master-seed combo (c1) and regular seed (c2). The steps below are generic to any KubeOne installation. a) We create basic VMs in AWS using Terraform and then b) Use KubeOne to bootstrap the control plane on these VMs and then rollout worker node machines. @@ -129,7 +132,8 @@ make k1-apply-seed This same folder structure can be further expanded to add KubeOne installations for additional environments like staging and prod. -### Note about URLs: +### Note about URLs + The [demo codebase](https://github.com/kubermatic-labs/kkp-using-argocd) assumes `argodemo.lab.kubermatic.io` as base URL for KKP. 
The KKP Dashboard is available at this URL. This also means that ArgoCD for master-seed, all tools like Prometheus, Grafana, etc are accessible at `*.argodemo.lab.kubermatic.io` The seed need its own DNS prefix which is configured as `self.seed`. This prefix needs to be configured in Route53 or similar DNS provider in your setup. @@ -138,20 +142,21 @@ Similarly, the demo creates a 2nd seed named `india-seed`. Thus, 2nd seed's Argo These names would come handy to understand the references below to them and customize these values as per your setup. ### Installation of KKP Master-seed combo + 1. Install ArgoCD and all the ArgoCD Apps - ```shell + ```shell cd make deploy-argo-dev-master deploy-argo-apps-dev-master # get ArgoCD admin password via below command kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d - ``` + ``` 1. Create a git tag with right label. The `make` target creates a git tag with a pre-configured name: `dev-kkp-` and pushes it to your git repository. This way, when you want to upgrade KKP version, you just need to update the KKP version at the top of Makefile and run this make target again. - ```shell + ```shell make push-git-tag-dev - ``` + ``` 1. ArgoCD syncs nginx ingress and cert-manager automatically 1. Manually update the DNS records so that ArgoCD is accessible. (In the demo, this step is automated via external-dns app) - ```shell + ```shell # Apply the DNS CNAME record below manually in AWS Route53: # argodemo.lab.kubermatic.io # *.argodemo.lab.kubermatic.io @@ -159,76 +164,85 @@ These names would come handy to understand the references below to them and cust # alertmanager-user.self.seed.argodemo.lab.kubermatic.io # You can get load balancer details from `k get svc -n nginx-ingress-controller nginx-ingress-controller` # After DNS setup, you can access ArgoCD at https://argocd.argodemo.lab.kubermatic.io - ``` + ``` 1. Install KKP EE without Helm charts. If you would want a complete ArgoCD setup with separate seeds, we will need Enterprise Edition of KKP. You can run the demo with master-seed combo. For this, community edition of KKP is sufficient. - ```shell + ```shell make install-kkp-dev - ``` + ``` 1. Add Seed CR for seed called `self` - ```shell + + ```shell make create-long-lived-master-seed-kubeconfig # commit changes to git and push latest changes make push-git-tag-dev - ``` + ``` 1. Wait for all apps to sync in ArgoCD (depending on setup - you can choose to sync all apps manually. In the demo, all apps are configured to sync automatically.) 1. Add Seed DNS record AFTER seed has been added (needed for usercluster creation). Seed is added as part of ArgoCD apps reconciliation above (In the demo, this step is automated via external-dns app) - ```shell + + ```shell # Apply DNS record manually in AWS Route53 # *.self.seed.argodemo.lab.kubermatic.io # Loadbalancer details from `k get svc -n kubermatic nodeport-proxy` - ``` -1. Access KKP dashboard at https://argodemo.lab.kubermatic.io + ``` +1. Access KKP dashboard at 1. Now you can create user-clusters on this master-seed cluster 1. (only for staging Let's Encrypt) We need to provide the staging Let's Encrypt cert so that monitoring IAP components can work. For this, one needs to save the certificate issuer for `https://argodemo.lab.kubermatic.io/dex/` from browser / openssl and insert the certificate in `dev/common/custom-ca-bundle.yaml` for the secret `letsencrypt-staging-ca-cert` under key `ca.crt` in base64 encoded format. 
After saving the file, commit the change to git and re-apply the tag via `make push-git-tag-dev` and sync the ArgoCD App. ### Installation of dedicated KKP seed + > **Note:** You can follow these steps only if you have a KKP EE license. With KKP CE licence, you can only work with one seed (which is master-seed combo above) We follow similar procedure as the master-seed combo but with slightly different commands. We execute most of the commands below, unless noted otherwise, in a 2nd shell where we have exported kubeconfig of dev-seed cluster above. 1. Install ArgoCD and all the ArgoCD Apps - ```shell + ```shell cd make deploy-argo-dev-seed deploy-argo-apps-dev-seed # get ArgoCD admin password via below command kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d - ``` + ``` 1. Add Seed nginx-ingress DNS record (In the demo, this step is automated via external-dns app) - ```shell + + ```shell # Apply below DNS CNAME record manually in AWS Route53 # *.india.argodemo.lab.kubermatic.io # grafana-user.india.seed.argodemo.lab.kubermatic.io # alertmanager-user.india.seed.argodemo.lab.kubermatic.io # You can get load balancer details from `k get svc -n nginx-ingress-controller nginx-ingress-controller` # After DNS setup, you can access the seed ArgoCD at https://argocd.india.argodemo.lab.kubermatic.io - ``` + ``` + 1. Prepare kubeconfig with cluster-admin privileges so that it can be added as secret and then this cluster can be added as Seed in master cluster configuration - ```shell + + ```shell make create-long-lived-seed-kubeconfig # commit changes to git and push latest changes in make push-git-tag-dev - ``` + ``` 1. Sync all apps in ArgoCD by accessing ArgoCD UI and syncing apps manually 1. Add Seed nodeport proxy DNS record - ```shell + + ```shell # Apply DNS record manually in AWS Route53 # *.india.seed.argodemo.lab.kubermatic.io # Loadbalancer details from `k get svc -n kubermatic nodeport-proxy` - ``` + ``` + 1. Now we can create user-clusters on this dedicated seed cluster as well. > NOTE: If you receive timeout errors, you should restart node-local-dns daemonset and/or coredns / cluster-autoscaler deployment to resolve these errors. ----- +--- ## Verification that this entire setup works + 1. Clusters creation on both the seeds (**Note:** If your VPC does not have a NAT Gateway, then ensure that you selected public IP for worker nodes during cluster creation wizard) 1. Access All Monitoring, Logging, Alerting links - available in left nav on any project within KKP. 1. Check Minio and Velero setup 1. Check User-MLA Grafana and see you can access user-cluster metrics and logs. You must remember to enable user-cluster monitoring and logging during creation of user-cluster. 1. KKP upgrade scenario - 1. Change the KKP version in Makefile - 1. Rollout KKP installer target again - 1. Create new git tag and push this new tag - 1. rollout argo-apps again and sync all apps on both seeds. + 1. Change the KKP version in Makefile + 1. Rollout KKP installer target again + 1. Create new git tag and push this new tag + 1. rollout argo-apps again and sync all apps on both seeds. 
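As a recap, the upgrade scenario above can be expressed as a short shell sequence — a rough sketch that assumes the Makefile targets from the demo repository and that the KKP version has already been bumped at the top of the `Makefile`:

```bash
make install-kkp-dev              # re-run the kubermatic-installer with the new KKP version
make push-git-tag-dev             # create and push the new dev-kkp-<version> git tag
make deploy-argo-apps-dev-master  # refresh the ArgoCD apps on the master-seed
make deploy-argo-apps-dev-seed    # refresh the ArgoCD apps on the dedicated seed (EE setups only)
# Finally, sync all apps in both ArgoCD UIs (or let auto-sync pick up the new tag).
```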
diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/_index.en.md index bff6357d2..8048f750f 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/_index.en.md @@ -7,4 +7,4 @@ weight = 7 This section details how various autoscalers have been integrated in KKP. * Cluster Autoscaler (as Addon/Application) [integration](./cluster-autoscaler/) for user clusters -* Vertical Pod Autoscaler [integration](./vertical-pod-autoscaler/) as Feature for user-cluster control plane components \ No newline at end of file +* Vertical Pod Autoscaler [integration](./vertical-pod-autoscaler/) as Feature for user-cluster control plane components diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/cluster-autoscaler/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/cluster-autoscaler/_index.en.md index 1aed162ed..26065470b 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/cluster-autoscaler/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/cluster-autoscaler/_index.en.md @@ -14,8 +14,8 @@ Kubernetes Cluster Autoscaler is a tool that automatically adjusts the size of t The Kubernetes Autoscaler in the KKP User cluster automatically scaled up/down when one of the following conditions is satisfied: -* Some pods failed to run in the cluster due to insufficient resources. -* There are nodes in the cluster that have been underutilised for an extended period (10 minutes by default) and can place their Pods on other existing nodes. +- Some pods failed to run in the cluster due to insufficient resources. +- There are nodes in the cluster that have been underutilised for an extended period (10 minutes by default) and can place their Pods on other existing nodes. ## Installing Kubernetes Autoscaler on User Cluster @@ -31,18 +31,19 @@ It is possible to migrate from cluster autoscaler addon to app. For that it is r ### Installing kubernetes autoscaler as an addon [DEPRECATED] -**Step 1** +#### Step 1 Create a KKP User cluster by selecting your project on the dashboard and click on "Create Cluster". More details can be found on the official [documentation]({{< ref "../../project-and-cluster-management/" >}}) page. -**Step 2** +#### Step 2 When the User cluster is ready, check the pods in the `kube-system` namespace to know if any autoscaler is running. ![KKP Dashboard](../images/kkp-autoscaler-dashboard.png?classes=shadow,border "KKP Dashboard") ```bash -$ kubectl get pods -n kube-system +kubectl get pods -n kube-system + NAME READY STATUS RESTARTS AGE canal-gq9gc 2/2 Running 0 21m canal-tnms8 2/2 Running 0 21m @@ -55,49 +56,41 @@ node-local-dns-4p8jr 1/1 Running 0 21m As shown above, the cluster autoscaler is not part of the running Kubernetes components within the namespace. 
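A quicker check than scanning the whole pod list — assuming the addon ships the autoscaler as a Deployment named `cluster-autoscaler`, as the pod name in Step 4 suggests:

```bash
kubectl -n kube-system get deployment cluster-autoscaler
# "Error from server (NotFound)" confirms the autoscaler is not installed yet.
```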
-**Step 3** +#### Step 3 Add the Autoscaler to the User cluster under the addon section on the dashboard by clicking on the Addons and then `Install Addon.` ![Add Addon](../images/add-autoscaler-addon.png?classes=shadow,border "Add Addon") - Select Cluster Autoscaler: - ![Select Autoscaler](../images/select-autoscaler.png?classes=shadow,border "Select Autoscaler") - Select install: - ![Select Install](../images/install-autoscaler.png?classes=shadow,border "Select Install") - - ![Installation Confirmation](../images/autoscaler-confirmation.png?classes=shadow,border "Installation Confirmation") - -**Step 4** +#### Step 4 Go over to the cluster and check the pods in the `kube-system` namespace using the `kubectl` command. ```bash -$ kubectl get pods -n kube-system -NAME READY STATUS RESTARTS AGE -canal-gq9gc 2/2 Running 0 32m -canal-tnms8 2/2 Running 0 33m -cluster-autoscaler-58c6c755bb-9g6df 1/1 Running 0 39s -coredns-666448b887-s8wv8 1/1 Running 0 36m -coredns-666448b887-vldzz 1/1 Running 0 36m +kubectl get pods -n kube-system + +NAME READY STATUS RESTARTS AGE +canal-gq9gc 2/2 Running 0 32m +canal-tnms8 2/2 Running 0 33m +cluster-autoscaler-58c6c755bb-9g6df 1/1 Running 0 39s +coredns-666448b887-s8wv8 1/1 Running 0 36m +coredns-666448b887-vldzz 1/1 Running 0 36m ``` As shown above, the cluster autoscaler has been provisioned and running. - ## Annotating MachineDeployments for Autoscaling - The Cluster Autoscaler only considers MachineDeployment with valid annotations. The annotations are used to control the minimum and the maximum number of replicas per MachineDeployment. You don't need to apply those annotations to all MachineDeployment objects, but only on MachineDeployments that Cluster Autoscaler should consider. Annotations can be set either using the KKP Dashboard or manually with kubectl. ### KKP Dashboard @@ -120,26 +113,26 @@ cluster.k8s.io/cluster-api-autoscaler-node-group-max-size - the maximum number o You can apply the annotations to MachineDeployments once the User cluster is provisioned and the MachineDeployments are created and running by following the steps below. -**Step 1** +#### Step 1 Run the following kubectl command to check the available MachineDeployments: ```bash -$ kubectl get machinedeployments -n kube-system +kubectl get machinedeployments -n kube-system NAME AGE DELETED REPLICAS AVAILABLEREPLICAS PROVIDER OS VERSION test-cluster-worker-v5drmq 3h56m 2 2 aws ubuntu 1.19.9 test-cluster-worker-pndqd 3h59m 1 1 aws ubuntu 1.19.9 ``` -**Step 2** + Step 2 -The annotation command will be used with one of the MachineDeployments above to annotate the desired MachineDeployments. In this case, the `test-cluster-worker-v5drmq` will be annotated, and the minimum and maximum will be set. + The annotation command will be used with one of the MachineDeployments above to annotate the desired MachineDeployments. In this case, the `test-cluster-worker-v5drmq` will be annotated, and the minimum and maximum will be set. 
### Minimum Annotation ```bash -$ kubectl annotate machinedeployment -n kube-system test-cluster-worker-v5drmq cluster.k8s.io/cluster-api-autoscaler-node-group-min-size="1" +kubectl annotate machinedeployment -n kube-system test-cluster-worker-v5drmq cluster.k8s.io/cluster-api-autoscaler-node-group-min-size="1" machinedeployment.cluster.k8s.io/test-cluster-worker-v5drmq annotated ``` @@ -147,18 +140,18 @@ machinedeployment.cluster.k8s.io/test-cluster-worker-v5drmq annotated ### Maximum Annotation ```bash -$ kubectl annotate machinedeployment -n kube-system test-cluster-worker-v5drmq cluster.k8s.io/cluster-api-autoscaler-node-group-max-size="5" +kubectl annotate machinedeployment -n kube-system test-cluster-worker-v5drmq cluster.k8s.io/cluster-api-autoscaler-node-group-max-size="5" machinedeployment.cluster.k8s.io/test-cluster-worker-v5drmq annotated ``` - -**Step 3** +#### Step 3 Check the MachineDeployment description: ```bash -$ kubectl describe machinedeployments -n kube-system test-cluster-worker-v5drmq +kubectl describe machinedeployments -n kube-system test-cluster-worker-v5drmq + Name: test-cluster-worker-v5drmq Namespace: kube-system Labels: @@ -189,25 +182,21 @@ To edit KKP Autoscaler, click on the three dots in front of the Cluster Autoscal ![Edit Autoscaler](../images/edit-autoscaler.png?classes=shadow,border "Edit Autoscaler") - ## Delete KKP Autoscaler You can delete autoscaler from where you edit it above and select delete. ![Delete Autoscaler](../images/delete-autoscaler.png?classes=shadow,border "Delete Autoscaler") - - Once it has been deleted, you can check the cluster to ensure that the cluster autoscaler has been deleted using the command `kubectl get pods -n kube-system`. - +Once it has been deleted, you can check the cluster to ensure that the cluster autoscaler has been deleted using the command `kubectl get pods -n kube-system`. ## Customize KKP Autoscaler You can customize the cluster autoscaler addon in order to override the cluster autoscaler deployment definition to set or pass the required flag(s) by following the instructions provided [in the Addons document]({{< relref "../../../architecture/concept/kkp-concepts/addons/#custom-addons" >}}). -* [My cluster is below minimum / above maximum number of nodes, but CA did not fix that! Why?](https://github.com/kubernetes/autoscaler/blob/aff50d773e42f95baaae300f27e3b2e9cba1ea1b/cluster-autoscaler/FAQ.md#my-cluster-is-below-minimum--above-maximum-number-of-nodes-but-ca-did-not-fix-that-why) - -* [I'm running cluster with nodes in multiple zones for HA purposes. Is that supported by Cluster Autoscaler?](https://github.com/kubernetes/autoscaler/blob/aff50d773e42f95baaae300f27e3b2e9cba1ea1b/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler) +- [My cluster is below minimum / above maximum number of nodes, but CA did not fix that! Why?](https://github.com/kubernetes/autoscaler/blob/aff50d773e42f95baaae300f27e3b2e9cba1ea1b/cluster-autoscaler/FAQ.md#my-cluster-is-below-minimum--above-maximum-number-of-nodes-but-ca-did-not-fix-that-why) +- [I'm running cluster with nodes in multiple zones for HA purposes. Is that supported by Cluster Autoscaler?](https://github.com/kubernetes/autoscaler/blob/aff50d773e42f95baaae300f27e3b2e9cba1ea1b/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler) ## Summary @@ -215,5 +204,5 @@ That is it! 
You have successfully deployed a Kubernetes Autoscaler on a KKP Cluster. ## Learn More -* Read more on [Kubernetes autoscaler here](https://github.com/kubernetes/autoscaler/blob/main/cluster-autoscaler/FAQ.md#what-is-cluster-autoscaler). -* You can easily provision a Kubernetes User Cluster using [KKP here]({{< relref "../../../tutorials-howtos/project-and-cluster-management/" >}}) +- Read more on [Kubernetes autoscaler here](https://github.com/kubernetes/autoscaler/blob/main/cluster-autoscaler/FAQ.md#what-is-cluster-autoscaler). +- You can easily provision a Kubernetes User Cluster using [KKP here]({{< relref "../../../tutorials-howtos/project-and-cluster-management/" >}}) diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/vertical-pod-autoscaler/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/vertical-pod-autoscaler/_index.en.md index f6d868e6b..84191846c 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/vertical-pod-autoscaler/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-autoscaler/vertical-pod-autoscaler/_index.en.md @@ -7,14 +7,17 @@ weight = 9 This section explains how the Kubernetes Vertical Pod Autoscaler helps in scaling the control plane components for user clusters as the load on the user cluster rises. ## What is a Vertical Pod Autoscaler in Kubernetes? + [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) (VPA) frees users from the necessity of setting up-to-date resource limits and requests for the containers in their pods. When configured, it will set the requests automatically based on usage and thus allow proper scheduling onto nodes so that an appropriate amount of resources is available for each pod. It will also maintain the ratios between limits and requests that were specified in the initial container configuration. Unlike HPA, VPA is not bundled with Kubernetes itself. It must be installed in the cluster separately. ## KKP Components controllable via VPA + KKP natively integrates VPA resource creation, reconciliation and management for all user-cluster control plane components. This allows these components to have optimal resource allocations - which can grow with the cluster's needs. This reduces the administration burden on KKP administrators. Components controlled by VPA are: + 1. apiserver 1. controller-manager 1. etcd @@ -29,6 +32,7 @@ All these components have default resources allocated by KKP. You can either use > Note: If you enable VPA and add `componentsOverride` block as well for a given cluster to specify resources, `componentsOverride` takes precedence. ## How to enable VPA in KKP + To enable VPA-controlled control plane components for user clusters, we just need to turn on a feature flag in the Kubermatic Configuration. ```yaml @@ -40,4 +44,5 @@ spec: ``` This installs the necessary VPA components in the `kube-system` namespace of each seed. It also creates VPA custom resources for each of the control plane components as noted above. ## Customizing VPA installation + You can customize various aspects of VPA deployments themselves (i.e.
admissionController, recommender and updater) via [KKP configuration](../../../tutorials-howtos/kkp-configuration/) diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-configuration/custom-certificates/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-configuration/custom-certificates/_index.en.md index 682c186d5..55ba5aac4 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-configuration/custom-certificates/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-configuration/custom-certificates/_index.en.md @@ -18,7 +18,7 @@ Changes to the CA bundle are automatically reconciled across these locations. If is invalid, no further reconciliation happens, so if the master cluster's CA bundle breaks, seed clusters are not affected. -Do note that the CA bundle configured in KKP is usually _the only_ source of CA certificates +Do note that the CA bundle configured in KKP is usually *the only* source of CA certificates for all of these components, meaning that no certificates are mounted from any of the Seed cluster host systems. @@ -82,8 +82,8 @@ If issuing certificates inside the cluster is not possible, static certificates `Secret` resources. The cluster admin is responsible for renewing and updating them as needed. A TLS secret can be created via `kubectl` when certificate and private key are available: -```sh -$ kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key +```bash +kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key ``` Going forward, it is assumed that proper certificates have already been created and now need to be configured into KKP. @@ -186,7 +186,6 @@ spec: name: ca-bundle ``` - ### KKP The KKP Operator manages a single `Ingress` for the KKP API/dashboard. This by default includes setting up @@ -221,7 +220,6 @@ If the static certificate is signed by a private CA, it is necessary to add that used by KKP. Otherwise, components will not be able to properly communicate with each other. {{% /notice %}} - #### User Cluster KKP automatically synchronizes the relevant CA bundle into each user cluster. The `ConfigMap` diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-configuration/dynamic-kubelet-config/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-configuration/dynamic-kubelet-config/_index.en.md index 511c5fee2..251adbc7d 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-configuration/dynamic-kubelet-config/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-configuration/dynamic-kubelet-config/_index.en.md @@ -12,9 +12,10 @@ Dynamic kubelet configuration is a deprecated feature in Kubernetes. It will no Dynamic kubelet configuration allows for live reconfiguration of some or all nodes' kubelet options. ### See Also -* https://kubernetes.io/blog/2018/07/11/dynamic-kubelet-configuration/ -* https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/ -* https://github.com/kubernetes/enhancements/issues/281 + +* +* +* ### Enabling diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/_index.en.md index b5a024ddf..b3e16cfbe 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/_index.en.md @@ -10,6 +10,7 @@ This section describes how to import and manage KubeOne clusters in KKP. We can import/connect an existing KubeOne cluster. 
Imported Cluster can be viewed and edited. Currently Supported Providers: + - [AWS]({{< ref "./aws" >}}) - [Google Cloud Provider]({{< ref "./gcp" >}}) - [Azure]({{< ref "./azure" >}}) @@ -18,17 +19,17 @@ We can import/connect an existing KubeOne cluster. Imported Cluster can be viewe - [OpenStack]({{< ref "./openstack" >}}) - [vSphere]({{< ref "./vsphere" >}}) - ## Prerequisites The following requirements must be met to import a KubeOne cluster: - - The KubeOne cluster must already exist before we begin the import/connect process. - - KubeOne configuration manifest: YAML manifest file that describes the KubeOne cluster configuration. - If you don't have the manifest of your cluster, it can be generated by running `kubeone config dump -m kubeone.yaml -t tf.json` from your KubeOne terraform directory. - - Private SSH Key used to create the KubeOne cluster: KubeOne connects to instances over SSH to perform any management operation. - - Provider Specific Credentials used to create the cluster. - > For more information on the KubeOne configuration for different environments, checkout the [Creating the Kubernetes Cluster using KubeOne]({{< relref "../../../../kubeone/main/tutorials/creating-clusters/" >}}) documentation. +- The KubeOne cluster must already exist before we begin the import/connect process. +- KubeOne configuration manifest: YAML manifest file that describes the KubeOne cluster configuration. + If you don't have the manifest of your cluster, it can be generated by running `kubeone config dump -m kubeone.yaml -t tf.json` from your KubeOne terraform directory. +- Private SSH Key used to create the KubeOne cluster: KubeOne connects to instances over SSH to perform any management operation. +- Provider Specific Credentials used to create the cluster. + +> For more information on the KubeOne configuration for different environments, checkout the [Creating the Kubernetes Cluster using KubeOne]({{< relref "../../../../kubeone/main/tutorials/creating-clusters/" >}}) documentation. ## Import KubeOne Cluster @@ -102,6 +103,7 @@ We can `Disconnect` a KubeOne cluster by clicking on the disconnect icon next to ![Disconnect Dialog](@/images/tutorials/kubeone-clusters/disconnect-cluster-dialog.png "Disconnect Dialog") ## Troubleshoot + To Troubleshoot a failing imported cluster we can `Pause` cluster by editing the external cluster CR. ```bash diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/digitalocean/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/digitalocean/_index.en.md index ffb8db1b7..fdcb72b62 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/digitalocean/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/digitalocean/_index.en.md @@ -29,7 +29,6 @@ You can add an existing DigitalOcean KubeOne cluster and then manage it using KK - Manually enter the credentials `Token` used to create the KubeOne cluster you are importing. - ![DigitalOcean credentials](@/images/tutorials/kubeone-clusters/digitalocean-credentials-step.png "DigitalOcean credentials") - Review provided settings and click `Import KubeOne Cluster`. 
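Before starting the wizard, it can help to collect the three inputs it asks for (manifest, SSH key, provider token) up front. The directory, file names, output redirection and environment variable below are assumptions for illustration only:

```bash
# Run from your KubeOne Terraform/config directory for the DigitalOcean cluster.
cd ~/kubeone/my-do-cluster

# Regenerate the KubeOne manifest if you no longer have it
# (assuming `kubeone config dump` prints the merged manifest to stdout).
kubeone config dump -m kubeone.yaml -t tf.json > kubeone-manifest.yaml

# The private SSH key used to create the cluster and the DigitalOcean token
# are pasted into the corresponding wizard steps.
ls ~/.ssh/id_rsa
echo "${DIGITALOCEAN_TOKEN}"
```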
diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/hetzner/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/hetzner/_index.en.md index 6d6ccf3d4..9a62e7609 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/hetzner/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/hetzner/_index.en.md @@ -29,7 +29,6 @@ You can add an existing Hetzner KubeOne cluster and then manage it using KKP. - Manually enter the credentials `Token` used to create the KubeOne cluster you are importing. - ![Hetzner credentials](@/images/tutorials/kubeone-clusters/hetzner-credentials-step.png "Hetzner credentials") - Review provided settings and click `Import KubeOne Cluster`. diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/openstack/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/openstack/_index.en.md index 4b20688d2..c08cb6102 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/openstack/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/openstack/_index.en.md @@ -26,7 +26,6 @@ You can add an existing OpenStack KubeOne cluster and then manage it using KKP. - Enter the credentials `AuthURL`, `Username`, `Password`, `Domain`, `Project Name`, `Project ID` and `Region` used to create the KubeOne cluster you are importing. - ![OpenStack credentials](@/images/tutorials/kubeone-clusters/openstack-credentials-step.png "OpenStack credentials") - Review provided settings and click `Import KubeOne Cluster`. diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/vsphere/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/vsphere/_index.en.md index fa4e0fe30..97c9f781a 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/vsphere/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-kubeone-integration/vsphere/_index.en.md @@ -26,7 +26,6 @@ You can add an existing vSphere KubeOne cluster and then manage it using KKP. - Enter the credentials `Username`, `Password`, and `ServerURL` used to create the KubeOne cluster you are importing. - ![vSphere credentials](@/images/tutorials/kubeone-clusters/vsphere-credentials-step.png "vSphere credentials") - Review provided settings and click `Import KubeOne Cluster`. diff --git a/content/kubermatic/v2.28/tutorials-howtos/kkp-os-support/coreos-eos/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kkp-os-support/coreos-eos/_index.en.md index 3733533c6..098c34fc0 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kkp-os-support/coreos-eos/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kkp-os-support/coreos-eos/_index.en.md @@ -13,6 +13,7 @@ Please read this blog post to learn how to migrate your clusters to be able to u You can check the operating system from the KKP dashboard (machine deployments list) or over kubectl commands, for instance: + * `kubectl get machines -nkube-system` * `kubectl get nodes -owide`. @@ -32,6 +33,7 @@ With the new deployment, you can then migrate the containers to the newly create Additionally, it is a good idea to consider a pod disruption budget for each application you want to transfer to other nodes. 
Example PDB resource: + ```yaml apiVersion: policy/v1beta1 kind: PodDisruptionBudget @@ -43,6 +45,7 @@ spec: matchLabels: app: nginx ``` + Find more information [here](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/). ## Do I Have to Rely on the Kubermatic Eviction Mechanism? diff --git a/content/kubermatic/v2.28/tutorials-howtos/kyverno-policies/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/kyverno-policies/_index.en.md index ea6fefe01..484d4457c 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/kyverno-policies/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/kyverno-policies/_index.en.md @@ -25,7 +25,6 @@ You can also enable or disable it after creation from the **Edit Cluster** dialo ![edit cluster](images/enable-kyverno-edit-cluster.png?classes=shadow,border "Edit Cluster") - ## Policy Templates Admin View Admins can manage global policy templates directly from the **Kyverno Policies** page in the **Admin Panel.** @@ -61,4 +60,3 @@ This page displays a list of all applied policies. You can also create a policy ![add policy binding](images/add-policy-binding.png?classes=shadow,border "Add Policy Binding") You can choose a template from the list of all available templates. Note that templates already applied will not be available. - diff --git a/content/kubermatic/v2.28/tutorials-howtos/manage-workers-node/via-ui/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/manage-workers-node/via-ui/_index.en.md index f9cf3cd7e..65283f49d 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/manage-workers-node/via-ui/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/manage-workers-node/via-ui/_index.en.md @@ -5,7 +5,6 @@ date = 2021-04-20T12:16:38+02:00 weight = 15 +++ - ## Find the Edit Setting To add or delete a worker node you can easily edit the machine deployment in your cluster. Navigate to the cluster overview, scroll down and hover over `Machine Deployments` and click on the edit icon next to the deployment you want to edit. diff --git a/content/kubermatic/v2.28/tutorials-howtos/metering/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/metering/_index.en.md index 41429aa91..974156b9b 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/metering/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/metering/_index.en.md @@ -28,11 +28,11 @@ work properly the s3 endpoint needs to be available from the browser. ### Prerequisites -* S3 bucket +- S3 bucket - Any S3-compatible endpoint can be used - The bucket is required to store report csv files - Should be available via browser -* Administrator access to dashboard +- Administrator access to dashboard - Administrator access can be gained by - asking other administrators to follow the instructions for [Adding administrators][adding-administrators] via the dashboard - or by using `kubectl` to give a user admin access. Please refer to the [Admin Panel][admin-panel] @@ -75,7 +75,6 @@ according to your wishes. - When choosing a volume size, please take into consideration that old usage data files will not be deleted automatically - In the end it is possible to create different report schedules. Click on **Create Schedule**, to open the Schedule configuration dialog. 
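If you have not prepared the S3 bucket for the report files yet, a quick way to create and verify it is sketched below using the MinIO client; the alias, endpoint, credentials and bucket name are placeholders:

```bash
# Register the S3-compatible endpoint under a local alias (values are placeholders).
mc alias set kkp-metering https://s3.example.com ACCESS_KEY SECRET_KEY

# Create the bucket that will hold the report CSV files, then verify access.
mc mb kkp-metering/metering-reports
mc ls kkp-metering
```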
diff --git a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/_index.en.md index ceb4bad0a..1d7aa4f4f 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/_index.en.md @@ -9,10 +9,10 @@ This chapter describes the customization of the KKP [Master / Seed Monitoring, L When it comes to monitoring, no approach fits all use cases. It's expected that you will want to adjust things to your needs and this page describes the various places where customizations can be applied. In broad terms, there are four main areas that are discussed: -* customer-cluster Prometheus -* seed-cluster Prometheus -* alertmanager rules -* Grafana dashboards +- customer-cluster Prometheus +- seed-cluster Prometheus +- alertmanager rules +- Grafana dashboards You will want to familiarize yourself with the [Installation of the Master / Seed MLA Stack]({{< relref "../installation/" >}}) before reading any further. @@ -148,12 +148,14 @@ prometheus: Managing the `ruleFiles` is also the way to disable the predefined rules by just removing the applicable item from the list. You can also keep the list completely empty to disable any and all alerts. ### Long-term metrics storage + By default, the seed prometheus is configured to store 1 days worth of metrics. It can be customized via overriding `prometheus.tsdb.retentionTime` field in `values.yaml` used for chart installation. If you would like to store the metrics for longer term, typically other solutions like Thanos are used. Thanos integration is a more involved process. Please read more about [thanos integration]({{< relref "./thanos.md" >}}). ## Alertmanager + Alertmanager configuration can be tweaked via `values.yaml` like so: ```yaml @@ -175,6 +177,7 @@ alertmanager: - channel: '#alerting' send_resolved: true ``` + Please review the [Alertmanager Configuration Guide](https://prometheus.io/docs/alerting/latest/configuration/) for detailed configuration syntax. You can review the [Alerting Runbook]({{< relref "../../../../cheat-sheets/alerting-runbook" >}}) for a reference of alerts that Kubermatic Kubernetes Platform (KKP) monitoring setup can fire, alongside a short description and steps to debug. @@ -183,9 +186,9 @@ You can review the [Alerting Runbook]({{< relref "../../../../cheat-sheets/alert Customizing Grafana entails three different aspects: -* Datasources (like Prometheus, InfluxDB, ...) -* Dashboard providers (telling Grafana where to load dashboards from) -* Dashboards themselves +- Datasources (like Prometheus, InfluxDB, ...) +- Dashboard providers (telling Grafana where to load dashboards from) +- Dashboards themselves In all cases, you have two general approaches: Either take the Grafana Helm chart and place additional files into the existing directory structure or leave the Helm chart as-is and use the `values.yaml` and your own ConfigMaps/Secrets to hold your customizations. This is very similar to how customizing the seed-level Prometheus works, so if you read that chapter, you will feel right at home. 
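As a minimal sketch of the second approach (own ConfigMap referenced from `values.yaml`), the snippet below packages an extra Grafana datasource using the standard Grafana provisioning file format. The namespace, ConfigMap name and datasource URL are assumptions; how the ConfigMap is then wired into the Grafana chart's `values.yaml` is chart-specific and not shown here:

```bash
# Write a Grafana datasource provisioning file (standard Grafana format).
cat > extra-datasource.yaml <<'EOF'
apiVersion: 1
datasources:
  - name: team-prometheus
    type: prometheus
    access: proxy
    url: http://team-prometheus.example.svc.cluster.local:9090
EOF

# Ship it as a ConfigMap so the Grafana chart can mount it (names are illustrative).
kubectl create configmap grafana-extra-datasources \
  --namespace monitoring \
  --from-file=extra-datasource.yaml
```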
diff --git a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/thanos.md b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/thanos.md index aef3c80ea..384246856 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/thanos.md +++ b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/customization/thanos.md @@ -8,12 +8,15 @@ weight = 20 This page explains how we can integrate [Thanos](https://thanos.io/) long term storage of metrics with KKP seed Prometheus ## Pre-requisites + 1. Helm is installed. 1. KKP v2.22.4+ is installed in the cluster. 1. KKP Prometheus chart has been deployed in each seed where you want to store long term metrics ## Integration steps + Below page outlines + 1. Installation of Thanos components in your Kubernetes cluster via Helm chart 1. Customization of KKP Prometheus chart to augment Prometheus pod with Thanos sidecar 1. Customization of KKP Prometheus chart values to monitor and get alerts for Thanos components @@ -21,6 +24,7 @@ Below page outlines ## Install thanos chart You can install the Thanos Helm chart from Bitnami chart repository + ```shell HELM_EXPERIMENTAL_OCI=1 helm upgrade --install thanos \ --namespace monitoring --create-namespace\ @@ -30,6 +34,7 @@ HELM_EXPERIMENTAL_OCI=1 helm upgrade --install thanos \ ``` ### Basic Thanos Customization file + You can configure Thanos to store the metrics in any s3 compatible storage as well as many other popular cloud storage solutions. Below yaml snippet uses Azure Blob storage configuration. You can refer to all [supported object storage configurations](https://thanos.io/tip/thanos/storage.md/#supported-clients). @@ -58,7 +63,6 @@ storegateway: enabled: true ``` - ## Augment prometheus to use Thanos sidecar In order to receive metrics from Prometheus into Thanos, Thanos provides two mechanisms. @@ -142,12 +146,12 @@ prometheus: mountPath: /etc/thanos ``` - ## Add scraping and alerting rules to monitor thanos itself To monitor Thanos effectively, we must scrape the Thanos components and define some Prometheus alerting rules to get notified when Thanos is not working correctly. Below sections outline changes in `prometheus` section of `values.yaml` to enable such scraping and alerting for Thanos components. ### Scraping config + Add below `scraping` configuration to scrape the Thanos sidecar as well as various Thanos components deployed via helm chart. 
```yaml @@ -183,6 +187,7 @@ prometheus: ``` ### Alerting Rules + Add Below configmap and then refer this configMap in KKP Prometheus chart's `values.yaml` customization ```yaml @@ -197,6 +202,7 @@ prometheus: ```` The configmap + ```yaml apiVersion: v1 kind: ConfigMap diff --git a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/health-assessment/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/health-assessment/_index.en.md index 14ad63e25..cb0c2a9fe 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/health-assessment/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/health-assessment/_index.en.md @@ -11,6 +11,7 @@ All screenshots below were taken from live Grafana instance installed with Kuber ## Dashboard categories Dashboards list consists of categories as shown below, each containing Dashboards relevant to the specific area of KKP: + - **Go Applications** - Go metrics for applications running in the cluster. - **Kubermatic** - dashboards provide insight into KKP components (described [below](#monitoring-kubermatic-kubernetes-platform)). - **Kubernetes** - dashboards used for monitoring Kubernetes resources of the seed cluster (described [below](#monitoring-kubernetes)). diff --git a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/installation/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/installation/_index.en.md index d2c006ed2..6818bc4b6 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/installation/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/master-seed/installation/_index.en.md @@ -11,14 +11,14 @@ This chapter describes how to setup the [KKP Master / Seed MLA (Monitoring, Logg The exact requirements for the stack depend highly on the expected cluster load; the following are the minimum viable resources: -* 4 GB RAM -* 2 CPU cores -* 200 GB disk storage +- 4 GB RAM +- 2 CPU cores +- 200 GB disk storage This guide assumes the following tools are available: -* Helm 3.x -* kubectl 1.16+ +- Helm 3.x +- kubectl 1.16+ ## Monitoring, Logging & Alerting Components @@ -43,6 +43,7 @@ Download the [release archive from our GitHub release page](https://github.com/k {{< tabs name="Download the installer" >}} {{% tab name="Linux" %}} + ```bash # For latest version: VERSION=$(curl -w '%{url_effective}' -I -L -s -S https://github.com/kubermatic/kubermatic/releases/latest -o /dev/null | sed -e 's|.*/v||') @@ -51,8 +52,10 @@ VERSION=$(curl -w '%{url_effective}' -I -L -s -S https://github.com/kubermatic/k wget https://github.com/kubermatic/kubermatic/releases/download/v${VERSION}/kubermatic-ce-v${VERSION}-linux-amd64.tar.gz tar -xzvf kubermatic-ce-v${VERSION}-linux-amd64.tar.gz ``` + {{% /tab %}} {{% tab name="MacOS" %}} + ```bash # Determine your macOS processor architecture type # Replace 'amd64' with 'arm64' if using an Apple Silicon (M1) Mac. 
@@ -64,6 +67,7 @@ VERSION=$(curl -w '%{url_effective}' -I -L -s -S https://github.com/kubermatic/k wget "https://github.com/kubermatic/kubermatic/releases/download/v${VERSION}/kubermatic-ce-v${VERSION}-darwin-${ARCH}.tar.gz" tar -xzvf "kubermatic-ce-v${VERSION}-darwin-${ARCH}.tar.gz" ``` + {{% /tab %}} {{< /tabs >}} @@ -71,16 +75,16 @@ tar -xzvf "kubermatic-ce-v${VERSION}-darwin-${ARCH}.tar.gz" As with KKP itself, it's recommended to use a single `values.yaml` to configure all Helm charts. There are a few important options you might want to override for your setup: -* `prometheus.host` is used for the external URL in Prometheus, e.g. `prometheus.kkp.example.com`. -* `alertmanager.host` is used for the external URL in Alertmanager, e.g. `alertmanager.kkp.example.com`. -* `prometheus.storageSize` (default: `100Gi`) controls the volume size for each Prometheus replica; this should be large enough to hold all data as per your retention time (see next option). Long-term storage for Prometheus blocks is provided by Thanos, an optional extension to the Prometheus chart. -* `prometheus.tsdb.retentionTime` (default: `15d`) controls how long metrics are stored in Prometheus before they are deleted. Larger retention times require more disk space. Long-term storage is accomplished by Thanos, so the retention time for Prometheus itself should not be set to extremely large values (like multiple months). -* `prometheus.ruleFiles` is a list of Prometheus alerting rule files to load. Depending on whether or not the target cluster is a master or seed, the `/etc/prometheus/rules/kubermatic-master-*.yaml` entry should be removed in order to not trigger bogus alerts. -* `prometheus.blackboxExporter.enabled` is used to enable integration between Prometheus and Blackbox Exporter, used for monitoring of API endpoints of user clusters created on the seed. `prometheus.blackboxExporter.url` should be adjusted accordingly (default value would be `blackbox-exporter:9115`) -* `grafana.user` and `grafana.password` should be set with custom values if no identity-aware proxy is configured. In this case, `grafana.provisioning.configuration.disable_login_form` should be set to `false` so that a manual login is possible. -* `loki.persistence.size` (default: `10Gi`) controls the volume size for the Loki pods. -* `promtail.scrapeConfigs` controls for which pods the logs are collected. The default configuration should be sufficient for most cases, but adjustment can be made. -* `promtail.tolerations` might need to be extended to deploy a Promtail pod on every node in the cluster. By default, master-node NoSchedule taints are ignored. +- `prometheus.host` is used for the external URL in Prometheus, e.g. `prometheus.kkp.example.com`. +- `alertmanager.host` is used for the external URL in Alertmanager, e.g. `alertmanager.kkp.example.com`. +- `prometheus.storageSize` (default: `100Gi`) controls the volume size for each Prometheus replica; this should be large enough to hold all data as per your retention time (see next option). Long-term storage for Prometheus blocks is provided by Thanos, an optional extension to the Prometheus chart. +- `prometheus.tsdb.retentionTime` (default: `15d`) controls how long metrics are stored in Prometheus before they are deleted. Larger retention times require more disk space. Long-term storage is accomplished by Thanos, so the retention time for Prometheus itself should not be set to extremely large values (like multiple months). 
+- `prometheus.ruleFiles` is a list of Prometheus alerting rule files to load. Depending on whether or not the target cluster is a master or seed, the `/etc/prometheus/rules/kubermatic-master-*.yaml` entry should be removed in order to not trigger bogus alerts. +- `prometheus.blackboxExporter.enabled` is used to enable integration between Prometheus and Blackbox Exporter, used for monitoring of API endpoints of user clusters created on the seed. `prometheus.blackboxExporter.url` should be adjusted accordingly (default value would be `blackbox-exporter:9115`) +- `grafana.user` and `grafana.password` should be set with custom values if no identity-aware proxy is configured. In this case, `grafana.provisioning.configuration.disable_login_form` should be set to `false` so that a manual login is possible. +- `loki.persistence.size` (default: `10Gi`) controls the volume size for the Loki pods. +- `promtail.scrapeConfigs` controls for which pods the logs are collected. The default configuration should be sufficient for most cases, but adjustment can be made. +- `promtail.tolerations` might need to be extended to deploy a Promtail pod on every node in the cluster. By default, master-node NoSchedule taints are ignored. An example `values.yaml` could look like this if all options mentioned above are customized: @@ -124,6 +128,7 @@ With this file prepared, we can now install all required charts: ``` Output will be similar to this: + ```bash INFO[0000] 🚀 Initializing installer… edition="Community Edition" version=X.Y INFO[0000] 🚦 Validating the provided configuration… diff --git a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/_index.en.md index 439e2b5f9..928551dbf 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/admin-guide/_index.en.md @@ -36,16 +36,16 @@ For specific information about estimating the resource usage, please refer to [C Some key parameters to consider are: -* The number of active series -* Sampling rate -* The rate at which series are added and removed -* How compressible the time-series data are +- The number of active series +- Sampling rate +- The rate at which series are added and removed +- How compressible the time-series data are Other parameters which can become important if you have particularly high values: -* Number of different series under one metric name -* Number of labels per series -* Rate and complexity of queries +- Number of different series under one metric name +- Number of labels per series +- Rate and complexity of queries ### Installing MLA Stack in a Seed Cluster @@ -58,6 +58,7 @@ kubermatic-installer deploy usercluster-mla --config --helm-va ``` Additional options that can be used for the installation include: + ```bash --mla-force-secrets (UserCluster MLA) force reinstallation of mla-secrets Helm chart --mla-include-iap (UserCluster MLA) Include Identity-Aware Proxy installation @@ -183,7 +184,9 @@ There are several options in the KKP “Admin Panel” which are related to user - Seed name and the base domain under which KKP is running will be appended to it, e.g. for prefix `grafana` the final URL would be `https://grafana..`. 
### Addons Configuration + KKP provides several addons for user clusters, that can be helpful when the User Cluster Monitoring feature is enabled, namely: + - **node-exporter** addon: exposes hardware and OS metrics of worker nodes to Prometheus, - **kube-state-metrics** addon: exposes cluster-level metrics of Kubernetes API objects (like pods, deployments, etc.) to Prometheus. @@ -196,7 +199,9 @@ addons are part of the KKP default accessible addons, so they should be availabl administrator has changed it. ### Enabling alerts for MLA stack in a Seed + To enable alerts in seed cluster for user cluster MLA stack(cortex and loki) , update the `values.yaml` used for installation of [Master / Seed MLA stack]({{< relref "../../master-seed/installation/" >}}). Add the following line under `prometheus.ruleFiles` label: + ```yaml - /etc/prometheus/rules/usercluster-mla-*.yaml ``` @@ -229,6 +234,7 @@ For larger scales, you will may start with tweaking the following: - Cortex Ingester volume sizes (cortex values.yaml - `cortex.ingester.persistentVolume.size`) - default 10Gi - Loki Ingester replicas (loki values.yaml - `loki-distributed.ingester.replicas`) - default 3 - Loki Ingester Storage as follows: + ```yaml loki-distributed: ingester: @@ -292,11 +298,13 @@ cortex: By default, a MinIO instance will also be deployed as the S3 storage backend for MLA components. It is also possible to use an existing MinIO instance in your cluster or any other S3-compatible services. There are three Helm charts which are related to MinIO in MLA repository: + - [mla-secrets](https://github.com/kubermatic/kubermatic/tree/main/charts/mla/mla-secrets) is used to create and manage MinIO and Grafana credentials Secrets. - [minio](https://github.com/kubermatic/kubermatic/tree/main/charts/mla/minio) is used to deploy MinIO instance in Kubernetes cluster. - [minio-lifecycle-mgr](https://github.com/kubermatic/kubermatic/tree/main/charts/mla/minio-lifecycle-mgr) is used to manage the lifecycle of the stored data, and to take care of data retention. If you want to disable the MinIO installation and use your existing MinIO instance or other S3 services, you need to: + - Disable the Secret creation for MinIO in mla-secrets Helm chart. In the [mla-secrets Helm chart values.yaml](https://github.com/kubermatic/kubermatic/blob/main/charts/mla/mla-secrets/values.yaml#L18), set `mlaSecrets.minio.enabled` to `false`. - Modify the S3 storage settings in `values.yaml` of other MLA components to use the existing MinIO instance or other S3 services: - In [cortex Helm chart values.yaml](https://github.com/kubermatic/kubermatic/blob/main/charts/mla/cortex/values.yaml), change the `cortex.config.ruler_storage.s3`, `cortex.config.alertmanager_storage.s3`, and `cortex.config.blocks_storage.s3` to point to your existing MinIO instance or other S3 service. Modify the `cortex.alertmanager.env`, `cortex.ingester.env`, `cortex.querier.env`, `cortex.ruler.env` and `cortex.storage_gateway.env` to get credentials from your Secret. @@ -304,7 +312,6 @@ If you want to disable the MinIO installation and use your existing MinIO instan - If you still want to use MinIO lifecycle manager to manage data retention for MLA data in your MinIO instance, in [minio-lefecycle-mgr Helm chart values.yaml](https://github.com/kubermatic/kubermatic/blob/main/charts/mla/minio-lifecycle-mgr/values.yaml), set `lifecycleMgr.minio.endpoint` and `lifecycleMgr.minio.secretName` to your MinIO endpoint and Secret. 
- Use `--mla-skip-minio` or `--mla-skip-minio-lifecycle-mgr` flag when you execute `kubermatic-installer deploy usercluster-mla`. If you want to disable MinIO but still use MinIO lifecycle manager to take care of data retention, you can use `--mla-skip-minio` flag. Otherwise, you can use both flags to disable both MinIO and lifecycle manager. Please note that if you are redeploying the stack on existing cluster, you will have to manually uninstall MinIO and/or lifecycle manager. To do that, you can use commands: `helm uninstall --namespace mla minio` and `helm uninstall --namespace mla minio-lifecycle-mgr` accordingly. - ### Managing Grafana Dashboards In the User Cluster MLA Grafana, there are several predefined Grafana dashboards that are automatically available across all Grafana organizations (KKP projects). The KKP administrators have ability to modify the list of these dashboards. @@ -356,25 +363,25 @@ By default, no rate-limiting is applied. Configuring the rate-limiting options w For **metrics**, the following rate-limiting options are supported as part of the `monitoringRateLimits`: -| Option | Direction | Enforced by | Description -| -------------------- | -----------| ----------- | ---------------------------------------------------------------------- -| `ingestionRate` | Write path | Cortex | Ingestion rate limit in samples per second (Cortex `ingestion_rate`). -| `ingestionBurstSize` | Write path | Cortex | Maximum number of series per metric (Cortex `max_series_per_metric`). -| `maxSeriesPerMetric` | Write path | Cortex | Maximum number of series per this user cluster (Cortex `max_series_per_user`). -| `maxSeriesTotal` | Write path | Cortex | Maximum number of series per this user cluster (Cortex `max_series_per_user`). -| `queryRate` | Read path | MLA Gateway | Query request rate limit per second (NGINX `rate` in `r/s`). -| `queryBurstSize` | Read path | MLA Gateway | Query burst size in number of requests (NGINX `burst`). -| `maxSamplesPerQuery` | Read path | Cortex | Maximum number of samples during a query (Cortex `max_samples_per_query`). -| `maxSeriesPerQuery` | Read path | Cortex | Maximum number of timeseries during a query (Cortex `max_series_per_query`). +| Option | Direction | Enforced by | Description | +| -------------------- | -----------| ----------- | --------------------------------------------------------------------------------| +| `ingestionRate` | Write path | Cortex | Ingestion rate limit in samples per second (Cortex `ingestion_rate`). | +| `ingestionBurstSize` | Write path | Cortex | Maximum number of series per metric (Cortex `max_series_per_metric`). | +| `maxSeriesPerMetric` | Write path | Cortex | Maximum number of series per this user cluster (Cortex `max_series_per_user`). | +| `maxSeriesTotal` | Write path | Cortex | Maximum number of series per this user cluster (Cortex `max_series_per_user`). | +| `queryRate` | Read path | MLA Gateway | Query request rate limit per second (NGINX `rate` in `r/s`). | +| `queryBurstSize` | Read path | MLA Gateway | Query burst size in number of requests (NGINX `burst`). | +| `maxSamplesPerQuery` | Read path | Cortex | Maximum number of samples during a query (Cortex `max_samples_per_query`). | +| `maxSeriesPerQuery` | Read path | Cortex | Maximum number of timeseries during a query (Cortex `max_series_per_query`). 
| For **logs**, the following rate-limiting options are supported as part of the `loggingRateLimits`: -| Option | Direction | Enforced by | Description -| -------------------- | -----------| ----------- | ---------------------------------------------------------------------- -| `ingestionRate` | Write path | MLA Gateway | Ingestion rate limit in requests per second (NGINX `rate` in `r/s`). -| `ingestionBurstSize` | Write path | MLA Gateway | Ingestion burst size in number of requests (NGINX `burst`). -| `queryRate` | Read path | MLA Gateway | Query request rate limit per second (NGINX `rate` in `r/s`). -| `queryBurstSize` | Read path | MLA Gateway | Query burst size in number of requests (NGINX `burst`). +| Option | Direction | Enforced by | Description | +| -------------------- | -----------| ----------- | ----------------------------------------------------------------------| +| `ingestionRate` | Write path | MLA Gateway | Ingestion rate limit in requests per second (NGINX `rate` in `r/s`). | +| `ingestionBurstSize` | Write path | MLA Gateway | Ingestion burst size in number of requests (NGINX `burst`). | +| `queryRate` | Read path | MLA Gateway | Query request rate limit per second (NGINX `rate` in `r/s`). | +| `queryBurstSize` | Read path | MLA Gateway | Query burst size in number of requests (NGINX `burst`). | ## Debugging @@ -400,6 +407,7 @@ kubectl get pods -n mla-system ``` Output will be similar to this: + ```bash NAME READY STATUS RESTARTS AGE monitoring-agent-68f7485456-jj7v6 1/1 Running 0 11m @@ -414,6 +422,7 @@ kubectl get pods -n cluster-cxfmstjqkw | grep mla-gateway ``` Output will be similar to this: + ```bash mla-gateway-6dd8c68d67-knmq7 1/1 Running 0 22m ``` @@ -479,6 +488,7 @@ To incorporate the helm-charts upgrade, follow the below steps: ### Upgrade Loki to version 2.4.0 Add the following configuration inside `loki.config` key, under `ingester` label in the Loki's `values.yaml` file: + ```yaml wal: dir: /var/loki/wal @@ -489,11 +499,13 @@ wal: Statefulset `store-gateway` refers to a headless service called `cortex-store-gateway-headless`, however, due to a bug in the upstream helm-chart(v0.5.0), the `cortex-store-gateway-headless` doesn’t exist at all, and headless service is named `cortex-store-gateway`, which is not used by the statefulset. Because `cortex-store-gateway` is not referred at all, we can safely delete it, and do helm upgrade to fix the issue (Refer to this [pull-request](https://github.com/cortexproject/cortex-helm-chart/pull/166) for details). 
Delete the existing `cortex-store-gateway` service by running the below command: + ```bash kubectl delete svc cortex-store-gateway -n mla ``` After doing the above-mentioned steps, MLA stack can be upgraded using the Kubermatic Installer: + ```bash kubermatic-installer deploy usercluster-mla --config --helm-values ``` diff --git a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/setting-up-alertmanager-with-slack-notifications/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/setting-up-alertmanager-with-slack-notifications/_index.en.md index 87d03a661..c5e250dc7 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/setting-up-alertmanager-with-slack-notifications/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/setting-up-alertmanager-with-slack-notifications/_index.en.md @@ -60,6 +60,7 @@ alertmanager_config: | title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}" text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}" ``` + Don’t forget to add the Slack Webhook URL that you have generated in the previous setup to `slack_api_url`, change the slack channel under `slack_configs` to the channel that you are going to use and save it by clicking **Edit** button: diff --git a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/user-guide/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/user-guide/_index.en.md index 9e6b11b34..54c1d9c7f 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/user-guide/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/monitoring-logging-alerting/user-cluster/user-guide/_index.en.md @@ -19,6 +19,7 @@ Users can enable monitoring and logging independently, and also can disable or e ## Enabling MLA Addons in a User Cluster KKP provides several addons for user clusters, that can be helpful when the User Cluster Monitoring feature is enabled, namely: + - **node-exporter** addon: exposes hardware and OS metrics of worker nodes to Prometheus, - **kube-state-metrics** addon: exposes cluster-level metrics of Kubernetes API objects (like pods, deployments, etc.) to Prometheus. 
@@ -54,16 +55,17 @@ The metric endpoints exposed via annotations will be automatically discovered by The following annotations are supported: -| Annotation | Example value | Description -| ------------------------- | ------------- | ------------ -| prometheus.io/scrape | `"true"` | Only scrape pods / service endpoints that have a value of `true` -| prometheus.io/scrape-slow | `"true"` | The same as `prometheus.io/scrape`, but will scrape metrics in longer intervals (5 minutes) -| prometheus.io/path | `/metrics` | Overrides the metrics path, the default is `/metrics` -| prometheus.io/port | `"8080"` | Scrape the pod / service endpoints on the indicated port +| Annotation | Example value | Description | +| ------------------------- | ------------- | --------------------------------------------------------------------------------------------| +| prometheus.io/scrape | `"true"` | Only scrape pods / service endpoints that have a value of `true` | +| prometheus.io/scrape-slow | `"true"` | The same as `prometheus.io/scrape`, but will scrape metrics in longer intervals (5 minutes) | +| prometheus.io/path | `/metrics` | Overrides the metrics path, the default is `/metrics` | +| prometheus.io/port | `"8080"` | Scrape the pod / service endpoints on the indicated port | For more information on exact scraping configuration and annotations, reference the user cluster Grafana Agent configuration in the `monitoring-agent` ConfigMap (`kubectl get configmap monitoring-agent -n mla-system -oyaml`) against the prometheus documentation for [kubernetes_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) and [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config). ### Extending Scrape Config + It is also possible to extend User Cluster Grafana Agent with custom `scrape_config` targets. This can be achieved by adding ConfigMaps with a pre-defined name prefix `monitoring-scraping` in the `mla-system` namespace in the user cluster. For example, a file `example.yaml` which contains customized scrape configs can look like the following: ```yaml @@ -158,17 +160,17 @@ As described on the [User Cluster MLA Stack Architecture]({{< relref "../../../. 
**monitoring-agent**: -| Resource | Requests | Limits -| -------- | -------- | ------ -| CPU | 100m | 1 -| Memory | 256Mi | 4Gi +| Resource | Requests | Limits | +| -------- | -------- | -------| +| CPU | 100m | 1 | +| Memory | 256Mi | 4Gi | **logging-agent**: -| Resource | Requests | Limits -| -------- | -------- | ------ -| CPU | 50m | 200m -| Memory | 64Mi | 128Mi +| Resource | Requests | Limits | +| -------- | -------- | -------| +| CPU | 50m | 200m | +| Memory | 64Mi | 128Mi | Non-default resource requests & limits for user cluster Prometheus and Loki Promtail can be configured via KKP API endpoint for managing clusters (`/api/v2/projects/{project_id}/clusters/{cluster_id}`): diff --git a/content/kubermatic/v2.28/tutorials-howtos/networking/apiserver-policies/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/networking/apiserver-policies/_index.en.md index 7e3c9531f..7b7fc8bf9 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/networking/apiserver-policies/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/networking/apiserver-policies/_index.en.md @@ -58,9 +58,10 @@ This feature is available only for user clusters with the `LoadBalancer` [Expose {{% notice warning %}} When restricting access to the API server, it is important to allow the following IP ranges : -* Worker nodes of the user cluster. -* Worker nodes of the KKP Master cluster. -* Worker nodes of the KKP seed cluster in case you are using separate Master/Seed Clusters. + +- Worker nodes of the user cluster. +- Worker nodes of the KKP Master cluster. +- Worker nodes of the KKP seed cluster in case you are using separate Master/Seed Clusters. Since Kubernetes in version v1.25, it is also needed to add Pod IP range of KKP seed cluster, because of the [change](https://github.com/kubernetes/kubernetes/pull/110289) to kube-proxy. @@ -86,7 +87,8 @@ or in an existing cluster via the "Edit Cluster" dialog: ## Seed-Level API Server IP Ranges Whitelisting -The `defaultAPIServerAllowedIPRanges` field in the Seed specification allows administrators to define a **global set of CIDR ranges** that are **automatically appended** to the allowed IP ranges for all user cluster API servers within that Seed. These ranges act as a security baseline to: +The `defaultAPIServerAllowedIPRanges` field in the Seed specification allows administrators to define a **global set of CIDR ranges** that are **automatically appended** to the allowed IP ranges for all user cluster API servers within that Seed. These ranges act as a security baseline to: + - Ensure KKP components (e.g., seed-manager, dashboard) retain access to cluster APIs - Enforce organizational IP restrictions across all clusters in the Seed - Prevent accidental misconfigurations in cluster-specific settings diff --git a/content/kubermatic/v2.28/tutorials-howtos/networking/cilium-cluster-mesh/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/networking/cilium-cluster-mesh/_index.en.md index b2db3df3f..93a67fb27 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/networking/cilium-cluster-mesh/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/networking/cilium-cluster-mesh/_index.en.md @@ -8,12 +8,14 @@ This guide describes the setup for configuring Cilium Cluster Mesh between 2 KKP running with Cilium CNI. 
## Versions + This guide was made for the following versions of KKP and Cilium: - KKP 2.22.0 - Cilium 1.13.0 ## Prerequisites + Before proceeding, please review that your intended setup meets the [Prerequisites for Cilium Cluster Mesh](https://docs.cilium.io/en/latest/network/clustermesh/clustermesh/#prerequisites). @@ -25,11 +27,13 @@ Especially, keep in mind that nodes in all clusters must have IP connectivity be ## Deployment Steps ### 1. Create 2 KKP User Clusters with non-overlapping pod CIDRs + Create 2 user clusters with Cilium CNI and `ebpf` proxy mode (necessary to have Cluster Mesh working also for cluster-ingress traffic via LoadBalancer or NodePort services). The clusters need to have non-overlapping pod CIDRs, so at least one of them needs to have the `spec.clusterNetwork.pods.cidrBlocks` set to a non-default value (e.g. `172.26.0.0/16`). We will be referring to these clusters as `Cluster 1` and `Cluster 2` in this guide. ### 2. Enable Cluster Mesh in the Cluster 1 + **In Cluster 1**, edit the Cilium ApplicationInstallation values (via UI, or `kubectl edit ApplicationInstallation cilium -n kube-system`), and add the following snippet to it: @@ -50,24 +54,29 @@ clustermesh: ``` ### 3. Retrieve Cluster Mesh data from the Cluster 1 + **In Cluster 1**, retrieve the information necessary for the next steps: Retrieve CA cert & key: -``` + +```bash kubectl get secret cilium-ca -n kube-system -o yaml ``` Retrieve clustermesh-apiserver external IP: -``` + +```bash kubectl get svc clustermesh-apiserver -n kube-system ``` Retrieve clustermesh-apiserver remote certs: -``` + +```bash kubectl get secret clustermesh-apiserver-remote-cert -n kube-system -o yaml ``` ### 4. Enable Cluster Mesh in the Cluster 2 + **In Cluster 2**, the Cilium ApplicationInstallation values, and add the following snippet to it (after replacing the values below the lines with comments with the actual values retrieved in the previous step): @@ -105,19 +114,23 @@ clustermesh: ``` ### 5. Retrieve Cluster Mesh data from the Cluster 2 + **In Cluster 2**, retrieve the information necessary for the next steps: Retrieve clustermesh-apiserver external IP: -```shell + +```bash kubectl get svc clustermesh-apiserver -n kube-system ``` Retrieve clustermesh-apiserver remote certs: -```shell + +```bash kubectl get secret clustermesh-apiserver-remote-cert -n kube-system -o yaml ``` ### 6. Update Cluster Mesh config in the Cluster 1 + **In Cluster 1**, update the Cilium ApplicationInstallation values, and add the following clustermesh config with cluster-2 details into it: ```yaml @@ -151,21 +164,25 @@ clustermesh: ``` ### 7. Allow traffic between worker nodes of different clusters + If any firewalling is in place between the worker nodes in different clusters, the following ports need to be allowed between them: - UDP 8472 (VXLAN) - TCP 4240 (HTTP health checks) ### 8. Check Cluster Mesh status + At this point, check Cilium health status in each cluster with: -```shell + +```bash kubectl exec -it cilium- -n kube-system -- cilium-health status ``` It should show all local and remote cluster's nodes and not show any errors. It may take a few minutes until things settle down since the last configuration. Example output: -``` + +```yaml Nodes: cluster-1/f5m2nzcb4z-worker-p7m58g-7f44796457-wv5fq (localhost): Host connectivity to 10.0.0.2: @@ -184,11 +201,12 @@ Nodes: ``` In case of errors, check again for firewall settings mentioned in the previous point. 
It may also help to manually restart: + - first `clustermesh-apiserver` pods in each cluster, - then `cilium` agent pods in each cluster. - ## Example Cross-Cluster Application Deployment With Failover / Migration + After Cilium Cluster Mesh has been set up, it is possible to use global services across the meshed clusters. In this example, we will deploy a global deployment into 2 clusters, where each cluster will be acting a failover for the other. Normally, all traffic will be handled by backends in the local cluster. Only in case of no local backends, it will be handled by backends running in the other cluster. That will be true for local (pod-to-service) traffic, as well as ingress traffic provided by LoadBalancer services in each cluster. @@ -218,6 +236,7 @@ spec: ``` Now, in each cluster, lets create a service of type LoadBalancer with the necessary annotations: + - `io.cilium/global-service: "true"` - `io.cilium/service-affinity: "local"` @@ -242,12 +261,14 @@ spec: ``` The list of backends for a service can be checked with: -```shell + +```bash kubectl exec -it cilium- -n kube-system -- cilium service list --clustermesh-affinity ``` Example output: -``` + +```bash ID Frontend Service Type Backend 16 10.240.27.208:80 ClusterIP 1 => 172.25.0.160:80 (active) (preferred) 2 => 172.25.0.12:80 (active) (preferred) @@ -258,13 +279,14 @@ ID Frontend Service Type Backend At this point, the service should be available in both clusters, either locally or via assigned external IP of the `nginx-deployment` service. Let's scale the number of nginx replicas in one of the clusters (let's say Cluster 1) to 0: -```shell + +```bash kubectl scale deployment nginx-deployment --replicas=0 ``` The number of backends for the service has been lowered down to 2, and only lists remote backends in the `cilium service list` output: -``` +```bash ID Frontend Service Type Backend 16 10.240.27.208:80 ClusterIP 1 => 172.26.0.31:80 (active) 2 => 172.26.0.196:80 (active) diff --git a/content/kubermatic/v2.28/tutorials-howtos/networking/cni-cluster-network/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/networking/cni-cluster-network/_index.en.md index 816a349a7..7f126aa13 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/networking/cni-cluster-network/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/networking/cni-cluster-network/_index.en.md @@ -74,10 +74,12 @@ When this option is selected, the user cluster will be left without any CNI, and When deploying your own CNI, please make sure you pass proper pods & services CIDRs to your CNI configuration - matching with the KKP user-cluster level configuration in the [Advanced Network Configuration](#advanced-network-configuration). ### Deploying CNI as a System Application + As of Cilium version `1.13.0`, Cilium CNI is deployed as a "System Application" instead of KKP Addon (as it is the case for older Cilium versions and all Canal CNI versions). Apart from internally relying on KKP's [Applications]({{< relref "../../applications" >}}) infrastructure rather than [Addons]({{< relref "../../../architecture/concept/kkp-concepts/addons" >}}) infrastructure, it provides the users with full flexibility of CNI feature usage and configuration. 
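To see what this means in practice, you can inspect the CNI's ApplicationInstallation directly in the user cluster; the `jsonpath` expression below is illustrative and relies on the `spec.values` field described in the sections that follow:

```bash
# List System Applications managed by KKP in the user cluster.
kubectl get ApplicationInstallation -n kube-system

# Print the Helm values currently applied to the Cilium CNI.
kubectl get ApplicationInstallation cilium -n kube-system -o jsonpath='{.spec.values}'
```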
#### Editing the CNI Configuration During Cluster Creation + When creating a new user cluster via KKP UI, it is possible to specify Helm values used to deploy the CNI via the "Edit CNI Values" button at the bottom of the "Advanced Network Configuration" section on the step 2 of the cluster creation wizard: ![Edit CNI Values](images/edit-cni-app-values.png?classes=shadow,border "Edit CNI Values") @@ -88,6 +90,7 @@ Please note that the final Helm values applied in the user cluster will be autom This option is also available when creating cluster templates and the CNI configuration saved in the cluster template is automatically applied to all clusters created from the template. #### Editing the CNI Configuration in Existing Cluster + In an existing cluster, the CNI configuration can be edited in two ways: via KKP UI, or by editing CNI `ApplicationInstallation` in the user cluster. For editing CNI configuration via KKP UI, navigate to the "Applications" tab on the cluster details page, switch the "Show System Applications" toggle, and click on the "Edit Application" button of the CNI. After that a new dialog window with currently applied CNI Helm values will be open and allow their modification. @@ -95,22 +98,28 @@ For editing CNI configuration via KKP UI, navigate to the "Applications" tab on ![Edit CNI Application](images/edit-cni-app.png?classes=shadow,border "Edit CNI Application") The other option is to edit the CNI `ApplicationInstallation` in the user cluster directly, e.g. like this for the Cilium CNI: + ```bash kubectl edit ApplicationInstallation cilium -n kube-system ``` + and edit the configuration in ApplicationInstallation's `spec.values`. This approach can be used e.g. to turn specific CNI features on or off, or modify arbitrary CNI configuration. Please note that some parts of the CNI configuration (e.g. pod CIDR etc.) is managed by KKP, and its change will not be allowed, or may be overwritten upon next reconciliation of the ApplicationInstallation. #### Changing the Default CNI Configuration + The default CNI configuration that will be used to deploy CNI in new KKP user clusters can be defined at two places: + - in a cluster template, if the cluster is being created from a template (which takes precedence over the next option), - in the CNI ApplicationDefinition's `spec.defaultValues` in the KKP master cluster (editable e.g. via `kubectl edit ApplicationDefinition cilium`). #### CNI Helm Chart Source + The Helm charts used to deploy CNI are hosted in a Kubermatic OCI registry (`oci://quay.io/kubermatic/helm-charts`). This registry needs to be accessible from the KKP Seed cluster to allow successful CNI deployment. In setups with restricted Internet connectivity, a different (e.g. private) OCI registry source for the CNI charts can be configured in `KubermaticConfiguration` (`spec.systemApplications.helmRepository` and `spec.systemApplications.helmRegistryConfigFile`). To mirror a Helm chart into a private OCI repository, you can use the helm CLI, e.g.: + ```bash CHART_VERSION=1.13.0 helm pull oci://quay.io/kubermatic/helm-charts/cilium --version ${CHART_VERSION} @@ -118,10 +127,12 @@ helm push cilium-${CHART_VERSION}.tgz oci://// ``` #### Upgrading Cilium CNI to Cilium 1.13.0 / Downgrading + For user clusters originally created with the Cilium CNI version lower than `1.13.0` (which was managed by the Addons mechanism rather than Applications), the migration to the management via Applications infra happens automatically during the CNI version upgrade to `1.13.0`. 
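For reference, a minimal sketch of where the CNI type and version are tracked on the Cluster object (field names assumed from the standard KKP Cluster spec; the upgrade itself is normally triggered via the KKP UI or by editing the Cluster resource in the seed):

```yaml
apiVersion: kubermatic.k8c.io/v1
kind: Cluster
metadata:
  name: <cluster-id>
spec:
  cniPlugin:
    type: cilium
    # bumping the version to 1.13.0 (exact version string as offered by KKP)
    # triggers the upgrade and the migration to the Applications mechanism
    version: "1.13.0"
```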
During the upgrade, if the Hubble Addon was installed in the cluster before, the Addon will be automatically removed, as Hubble is now enabled by default. If there are such clusters in your KKP installation, it is important to preserve the following part of the configuration in the [default configuration](#changing-the-default-cni-configuration) of the ApplicationInstallation: + ```bash hubble: tls: @@ -158,36 +169,45 @@ Some newer Kubernetes versions may not be compatible with already deprecated CNI Again, please note that it is not a good practice to keep the clusters on an old CNI version and try to upgrade as soon as new CNI version is available next time. ## IPv4 / IPv4 + IPv6 (Dual Stack) + This option allows for switching between IPv4-only and IPv4+IPv6 (dual-stack) networking in the user cluster. This feature is described in detail on an individual page: [Dual-Stack Networking]({{< relref "../dual-stack/" >}}). ## Advanced Network Configuration + After Clicking on the "Advanced Networking Configuration" button in the cluster creation wizard, several more network configuration options are shown to the user: ![Cluster Settings - Advanced Network Configuration](images/ui-cluster-networking-advanced.png?classes=shadow,border "Cluster Settings - Network Configuration") ### Proxy Mode + Configures kube-proxy mode for k8s services. Can be set to `ipvs`, `iptables` or `ebpf` (`ebpf` is available only if Cilium CNI is selected and [Konnectivity](#konnectivity) is enabled). Defaults to `ipvs` for Canal CNI clusters and `ebpf` / `iptables` (based on whether Konnectivity is enabled or not) for Cilium CNI clusters. Note that IPVS kube-proxy mode is not recommended with Cilium CNI due to [a known issue]({{< relref "../../../architecture/known-issues/" >}}#2-connectivity-issue-in-pod-to-nodeport-service-in-cilium--ipvs-proxy-mode). ### Pods CIDR + The network range from which POD networks are allocated. Defaults to `[172.25.0.0/16]` (or `[172.26.0.0/16]` for Kubevirt clusters, `[172.25.0.0/16, fd01::/48]` for `IPv4+IPv6` ipFamily). ### Services CIDR + The network range from which service VIPs are allocated. Defaults to `[10.240.16.0/20]` (or `[10.241.0.0/20]` for Kubevirt clusters, `[10.240.16.0/20, fd02::/120]` for `IPv4+IPv6` ipFamily). ### Node CIDR Mask Size + The mask size (prefix length) used to allocate a node-specific pod subnet within the provided Pods CIDR. It has to be larger than the provided Pods CIDR prefix length. ### Allowed IP Range for NodePorts + IP range from which NodePort access to the worker nodes will be allowed. Defaults to `0.0.0.0/0` (allowed from anywhere). This option is available only for some cloud providers that support it. ### Node Local DNS Cache + Enables NodeLocal DNS Cache - caching DNS server running on each worker node in the cluster. ### Konnectivity + Konnectivity provides TCP level proxy for the control plane (seed cluster) to worker nodes (user cluster) communication. It is based on the upstream [apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy/) project and is aimed to be the replacement of the older KKP-specific solution based on OpenVPN and network address translation. Since the old solution was facing several limitations, it has been replaced with Konnectivity and will be removed in future KKP releases. 
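A minimal sketch of how Konnectivity is toggled on the Cluster object (assuming the standard `clusterNetwork` fields; in the cluster creation wizard this corresponds to the Konnectivity checkbox):

```yaml
apiVersion: kubermatic.k8c.io/v1
kind: Cluster
metadata:
  name: <cluster-id>
spec:
  clusterNetwork:
    # use Konnectivity instead of the legacy OpenVPN-based tunnel
    konnectivityEnabled: true
```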
{{% notice warning %}} diff --git a/content/kubermatic/v2.28/tutorials-howtos/networking/cni-migration/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/networking/cni-migration/_index.en.md index 2c9a80c71..6d6400799 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/networking/cni-migration/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/networking/cni-migration/_index.en.md @@ -110,6 +110,7 @@ kubectl delete -f https://docs.projectcalico.org/v3.21/manifests/canal.yaml At this point, your cluster should be running on Cilium CNI. ### Step 5 (Optional) + Consider changing the kube-proxy mode, especially if it was IPVS previously. See [Changing the Kube-Proxy Mode](#changing-the-kube-proxy-mode) for more details. @@ -121,7 +122,6 @@ As the last step, we recommend to perform rolling restart of machine deployments Please verify that everything works normally in the cluster. If there are any problems, you can revert the migration procedure and go back to the previously used CNI type and version as described in the next section. - ## Migrating User Cluster with Cilium CNI to Canal CNI Please follow the same steps as in [Migrating User Cluster with the Canal CNI to the Cilium CNI](#migrating-user-cluster-with-the-canal-cni-to-the-cilium-cni), with the following changes: @@ -138,8 +138,8 @@ helm template cilium cilium/cilium --version 1.11.0 --namespace kube-system | ku - [(Step 5)](#step-5-optional) Restart all already running non-host-networking pods as in the [Step 3](#step-3). We then recommend to perform rolling restart of machine deployments in the cluster as well. - ## Changing the Kube-Proxy Mode + If you migrated your cluster from Canal CNI to Cilium CNI, you may want to change the kube-proxy mode of the cluster. As the `ipvs` kube-proxy mode is not recommended with Cilium CNI due to [a known issue]({{< relref "../../../architecture/known-issues/" >}}#2-connectivity-issue-in-pod-to-nodeport-service-in-cilium--ipvs-proxy-mode), we strongly recommend migrating to `ebpf` or `iptables` proxy mode after Canal -> Cilium migration. @@ -159,6 +159,7 @@ At this point, you are able to change the proxy mode in the Cluster API. Change - or by editing the cluster CR in the Seed Cluster (`kubectl edit cluster `). ### Step 3 + When switching to/from ebpf, wait until all Cilium pods are redeployed (you will notice a restart of all Cilium pods). It can take up to 5 minutes until this happens. diff --git a/content/kubermatic/v2.28/tutorials-howtos/networking/dual-stack/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/networking/dual-stack/_index.en.md index 60b480664..c2f8cdc90 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/networking/dual-stack/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/networking/dual-stack/_index.en.md @@ -34,6 +34,7 @@ KKP supports dual-stack networking for KKP-managed user clusters for the followi Dual-stack [specifics & limitations of individual cloud-providers](#cloud-provider-specifics-and-limitations) are listed below. ## Compatibility Matrix + The following table lists the provider / operating system combinations compatible with dual-stack clusters on KKP: | | Ubuntu | Flatcar | RHEL | Rocky Linux | @@ -47,14 +48,13 @@ The following table lists the provider / operating system combinations compatibl | Openstack | ✓ | ✓ | ✓ | ✓ | | VMware vSphere | ✓ | - | - | - | - **NOTES:** - A hyphen(`-`) denotes that the operating system is available / not tested on the given platform. 
- An asterisk (`*`) denotes a minor issue described in [specifics & limitations of individual cloud-providers](#cloud-provider-specifics-and-limitations). - ## Enabling Dual-Stack Networking for a User Cluster + Dual-stack networking can be enabled for each user-cluster across one of the supported cloud providers. Please refer to [provider-specific documentation](#cloud-provider-specifics-and-limitations) below to see if it is supported globally, or it needs to be enabled on the datacenter level. @@ -62,6 +62,7 @@ or it needs to be enabled on the datacenter level. Dual-stack can be enabled for each supported CNI (both Canal and Cilium). In case of Canal CNI, the minimal supported version is 3.22. ### Enabling Dual-Stack Networking from KKP UI + If dual-stack networking is available for the given provider and datacenter, an option for choosing between `IPv4` and `IPv4 and IPv6 (Dual Stack)` becomes automatically available on the cluster details page in the cluster creation wizard: @@ -85,7 +86,7 @@ without specifying pod / services CIDRs for individual address families, just se `spec.clusterNetwork.ipFamily` to `IPv4+IPv6` and leave `spec.clusterNetwork.pods` and `spec.clusterNetwork.services` empty. They will be defaulted as described on the [CNI & Cluster Network Configuration page]({{< relref "../cni-cluster-network/" >}}#default-cluster-network-configuration). -2. The other option is to specify both IPv4 and IPv6 CIDRs in `spec.clusterNetwork.pods` and `spec.clusterNetwork.services`. +1. The other option is to specify both IPv4 and IPv6 CIDRs in `spec.clusterNetwork.pods` and `spec.clusterNetwork.services`. For example, a valid `clusterNetwork` configuration excerpt may look like: ```yaml @@ -106,9 +107,9 @@ spec: Please note that the order of address families in the `cidrBlocks` is important and KKP right now only supports IPv4 as the primary IP family (meaning that IPv4 address must always be the first in the `cidrBlocks` list). - ## Verifying Dual-Stack Networking in a User Cluster -in order to verify the connectivity in a dual-stack enabled user cluster, please refer to the + +In order to verify the connectivity in a dual-stack enabled user cluster, please refer to the [Validate IPv4/IPv6 dual-stack](https://kubernetes.io/docs/tasks/network/validate-dual-stack/) page in the Kubernetes documentation. Please note the [cloud-provider specifics & limitations](#cloud-provider-specifics-and-limitations) section below, as some features may not be supported on the given cloud-provider. @@ -116,40 +117,49 @@ section below, as some features may not be supported on the given cloud-provider ## Cloud-Provider Specifics and Limitations ### AWS + Dual-stack feature is available automatically for all new user clusters in AWS. Please note however, that the VPC and subnets used to host the worker nodes need to be dual-stack enabled - i.e. must have both IPv4 and IPv6 CIDR assigned. Limitations: + - In the Clusters with control plane version < 1.24, Worker nodes do not have their IPv6 IP addresses published in k8s API (`kubectl describe nodes`), but have them physically applied on their network interfaces (can be seen after SSH-ing to the node). Because of this, pods in the host network namespace do not have IPv6 address assigned. - Dual-Stack services of type `LoadBalancer` are not yet supported by AWS cloud-controller-manager. Only `NodePort` services can be used to expose services outside the cluster via IPv6. 
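For illustration (a sketch using standard Kubernetes Service fields, not an AWS-specific API; the service name, selector and ports are placeholders), a dual-stack `NodePort` service that requests both address families could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                      # hypothetical service name
spec:
  type: NodePort
  ipFamilyPolicy: PreferDualStack   # request IPv4 and IPv6 cluster IPs where available
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app                     # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080
```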
Related issues: - - https://github.com/kubermatic/kubermatic/issues/9899 - - https://github.com/kubernetes/cloud-provider-aws/issues/477 + + - + - Docs: + - [AWS: Subnets for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html) ### Microsoft Azure + Dual-stack feature is available automatically for all new user clusters in Azure. Please note however that the VNet used to host the worker nodes needs to be dual-stack enabled - i.e. must have both IPv4 and IPv6 CIDR assigned. In case that you are not using a pre-created VNet, but leave the VNet creation on KKP, it will automatically create a dual-stack VNet for your dual-stack user clusters. Limitations: + - Dual-Stack services of type `LoadBalancer` are not yet supported by Azure cloud-controller-manager. Only `NodePort` services can be used to expose services outside the cluster via IPv6. Related issues: - - https://github.com/kubernetes-sigs/cloud-provider-azure/issues/814 - - https://github.com/kubernetes-sigs/cloud-provider-azure/issues/1831 + + - + - Docs: + - [Overview of IPv6 for Azure Virtual Network](https://docs.microsoft.com/en-us/azure/virtual-network/ip-services/ipv6-overview) ### BYO / kubeadm + Dual-stack feature is available automatically for all new Bring-Your-Own (kubeadm) user clusters. Before joining a KKP user cluster, the worker node needs to have both IPv4 and IPv6 address assigned. @@ -159,6 +169,7 @@ flag of the kubelet. This can be done as follows: - As instructed by KKP UI, run the `kubeadm token --kubeconfig create --print-join-command` command and use its output in the next step. - Create a yaml file with kubeadm `JoinConfiguration`, e.g. `kubeadm-join-config.yaml` with the content similar to this: + ```yaml apiVersion: kubeadm.k8s.io/v1beta3 kind: JoinConfiguration @@ -174,58 +185,72 @@ nodeRegistration: # change the node-ip below to match your desired IPv4 and IPv6 addresses of the node node-ip: 10.0.6.114,2a05:d014:937:4500:a324:767b:38da:2bff ``` + - Join the node with the provided config file, e.g.: `kubeadm join --config kubeadm-join-config.yaml`. Limitations: + - Services of type `LoadBalancer` don't work out of the box in BYO/kubeadm clusters. You can use additional addon software, such as [MetalLB](https://metallb.universe.tf/) to make them work in your custom kubeadm setup. Docs: + - [Dual-stack support with kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/dual-stack-support/) ### DigitalOcean + Dual-stack feature is available automatically for all new user clusters in DigitalOcean. Limitations: + - Services of type `LoadBalancer` are not yet supported in KKP on DigitalOcean (not even for IPv4-only clusters). - On some operating systems (e.g. Rocky Linux) IPv6 address assignment on the node may take longer time during the node provisioning. In that case, the IPv6 address may not be detected when the kubelet starts, and because of that, worker nodes may not have their IPv6 IP addresses published in k8s API (`kubectl describe nodes`). This can be work-arounded by restarting the kubelet manually / rebooting the node. Related issues: -- https://github.com/kubermatic/kubermatic/issues/8847 + +- ### Equinix Metal + Dual-stack feature is available automatically for all new user clusters in Equinix Metal. Limitations: + - Services of type `LoadBalancer` are not yet supported in KKP on Equinix Metal (not even for IPv4-only clusters). - On some operating systems (e.g. 
Rocky Linux, Flatcar) IPv6 address assignment on the node may take longer time during the node provisioning. In that case, the IPv6 address may not be detected when the kubelet starts, and because of that, worker nodes may not have their IPv6 IP addresses published in k8s API (`kubectl describe nodes`). This can be work-arounded by restarting the kubelet manually / rebooting the node. Related issues: -- https://github.com/kubermatic/kubermatic/issues/10648 -- https://github.com/equinix/cloud-provider-equinix-metal/issues/179 + +- +- ### Google Cloud Platform (GCP) + Dual-stack feature is available automatically for all new user clusters in GCP. Please note however, that the subnet used to host the worker nodes need to be dual-stack enabled - i.e. must have both IPv4 and IPv6 CIDR assigned. Limitations: + - Worker nodes do not have their IPv6 IP addresses published in k8s API (`kubectl describe nodes`), but have them physically applied on their network interfaces (can be seen after SSH-ing to the node). Because of this, pods in the host network namespace do not have IPv6 address assigned. - Dual-Stack services of type `LoadBalancer` are not yet supported by GCP cloud-controller-manager. Only `NodePort` services can be used to expose services outside the cluster via IPv6. Related issues: -- https://github.com/kubermatic/kubermatic/issues/9899 -- https://github.com/kubernetes/cloud-provider-gcp/issues/324 + +- +- Docs: + - [GCP: Create and modify VPC Networks](https://cloud.google.com/vpc/docs/create-modify-vpc-networks) ### Hetzner + Dual-stack feature is available automatically for all new user clusters in Hetzner. Please note that all services of type `LoadBalancer` in Hetzner need to have a @@ -234,14 +259,17 @@ for example `load-balancer.hetzner.cloud/network-zone: "eu-central"` or `load-ba Without one of these annotations, the load-balancer will be stuck in the Pending state. Limitations: + - Due to the [issue with node ExternalIP ordering](https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/305), we recommend using dual-stack clusters on Hetzner only with [Konnectivity]({{< relref "../cni-cluster-network/#konnectivity" >}}) enabled, otherwise errors can be seen when issuing `kubectl logs` / `kubectl exec` / `kubectl cp` commands on the cluster. Related Issues: -- https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/305 + +- ### OpenStack + As IPv6 support in OpenStack highly depends on the datacenter setup, dual-stack feature in KKP is available only in those OpenStack datacenters where it is explicitly enabled in the datacenter config of the KKP (datacenter's `spec.openstack.ipv6Enabled` config flag is set to `true`). @@ -254,29 +282,34 @@ but a default IPv6 subnet pool exists in the datacenter, the default one will be specified and the default IPv6 subnet pool does not exist, the IPv6 subnet will be created with the CIDR `fd00::/64`. Limitations: + - Dual-Stack services of type `LoadBalancer` are not yet supported by the OpenStack cloud-controller-manager. The initial work has been -finished as part of https://github.com/kubernetes/cloud-provider-openstack/pull/1901 and should be released as of +finished as part of and should be released as of Kubernetes version 1.25. 
Related Issues: -- https://github.com/kubernetes/cloud-provider-openstack/issues/1937 + +- Docs: + - [IPv6 in OpenStack](https://docs.openstack.org/neutron/yoga/admin/config-ipv6.html) - [Subnet pools](https://docs.openstack.org/neutron/yoga/admin/config-subnet-pools.html) ### VMware vSphere + As IPv6 support in VMware vSphere highly depends on the datacenter setup, dual-stack feature in KKP is available only in those vSphere datacenters where it is explicitly enabled in the datacenter config of the KKP (datacenter's `spec.vsphere.ipv6Enabled` config flag is set to `true`). Limitations: + - Services of type `LoadBalancer` don't work out of the box in vSphere clusters, as they are not implemented by the vSphere cloud-controller-manager. You can use additional addon software, such as [MetalLB](https://metallb.universe.tf/) to make them work in your environment. - ## Operating System Specifics and Limitations + Although IPv6 is usually enabled by most modern operating systems by default, there can be cases when the particular provider's IPv6 assignment method is not automatically enabled in the given operating system image. Even though we tried to cover most of the cases in the Machine Controller and Operating System Manager code, in some cases @@ -287,6 +320,7 @@ These cases can be still addressed by introducing of custom Operating System Pro specific configuration (see [Operating System Manager]({{< relref "../../operating-system-manager/" >}}) docs). ### RHEL / Rocky Linux + RHEL & Rocky Linux provide an extensive set of IPv6 settings for NetworkManager (see "Table 22. ipv6 setting" in the [NetworkManager ifcfg-rh settings plugin docs](https://developer-old.gnome.org/NetworkManager/unstable/nm-settings-ifcfg-rh.html)). Depending on the IPv6 assignment method used in the datacenter, you may need the proper combination diff --git a/content/kubermatic/v2.28/tutorials-howtos/networking/ipam/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/networking/ipam/_index.en.md index 29f7de106..50c5b1612 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/networking/ipam/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/networking/ipam/_index.en.md @@ -7,17 +7,21 @@ weight = 170 Multi-Cluster IPAM is a feature responsible for automating the allocation of IP address ranges/subnets per user-cluster, based on a predefined configuration ([IPAMPool](#input-resource-ipampool)) per datacenter that defines the pool subnet and the allocation size. The user cluster allocated ranges are available in the [KKP Addon](#kkp-addon-template-integration) `TemplateData`, so it can be used by various Addons running in the user cluster. ## Motivation and Background + Networking applications deployed in KKP user clusters need automated IP Address Management (IPAM) for IP ranges that they use, in a way that prevents address overlaps between multiple user clusters. An example for such an application is MetalLB load-balancer, for which a unique IP range from a larger CIDR range needs to be configured in each user cluster in the same datacenter. The goal is to provide a simple solution that is automated and less prone to human errors. ## Allocation Types + Each IPAM pool in a datacenter should define an allocation type: "range" or "prefix". ### Range + Results in a set of IPs based on an input size. E.g. the first allocation for a range of size **8** in a pool subnet `192.168.1.0/26` would be + ```txt 192.168.1.0-192.168.1.7 ``` @@ -25,21 +29,27 @@ E.g. 
the first allocation for a range of size **8** in a pool subnet `192.168.1. *Note*: There is a minimal allowed pool subnet mask based on the IP version (**20** for IPv4 and **116** for IPv6). So, if you need a large range of IPs, it's recommended to use the "prefix" type. ### Prefix + Results in a subnet of the pool subnet based on an input subnet prefix. Recommended when a large range of IPs is necessary. E.g. the first allocation for a prefix **30** in a pool subnet `192.168.1.0/26` would be + ```txt 192.168.1.0/30 ``` + and the second would be + ```txt 192.168.1.4/30 ``` ## Input Resource (IPAMPool) + KKP exposes a global-scoped Custom Resource Definition (CRD) `IPAMPool` in the seed cluster. The administrators are able to define the `IPAMPool` CR with a specific name with multiple pool CIDRs with predefined allocation ranges tied to specific datacenters. The administrators can also manage the IPAM pools via [API endpoints]({{< relref "../../../references/rest-api-reference/#/ipampool" >}}) (`/api/v2/seeds/{seed_name}/ipampools`). E.g. containing both allocation types for different datacenters: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: IPAMPool @@ -66,6 +76,7 @@ Optionally, you can configure range/prefix exclusions in IPAMPools, in order to For that, you need to extend the IPAM Pool datacenter spec to include a list of subnets CIDR to exclude (`excludePrefixes` for prefix allocation type) or a list of particular IPs or IP ranges to exclude (`excludeRanges` for range allocation type). E.g. from previous example, containing both allocation types exclusions: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: IPAMPool @@ -90,7 +101,9 @@ spec: ``` ### Restrictions + Required `IPAMPool` spec fields: + - `datacenters` list cannot be empty. - `type` for a datacenter is mandatory. - `poolCidr` for a datacenter is mandatory. @@ -98,15 +111,18 @@ Required `IPAMPool` spec fields: - `allocationPrefix` for a datacenter with "prefix" allocation type is mandatory. For the "range" allocation type: + - `allocationRange` should be a positive integer and cannot be greater than the pool subnet possible number of IP addresses. - IPv4 `poolCIDR` should have a prefix (i.e. mask) equal or greater than **20**. - IPv6 `poolCIDR` should have a prefix (i.e. mask) equal or greater than **116**. For the "prefix" allocation type: + - `allocationPrefix` should be between **1** and **32** for IPv4 pool, and between **1** and **128** for IPv6 pool. - `allocationPrefix` should be equal or greater than the pool subnet mask size. ### Modifications + In general, modifications of the `IPAMPool` are not allowed, with the following exceptions: - It is possible to add a new datacenter into the `IPAMPool`. @@ -115,13 +131,14 @@ In general, modifications of the `IPAMPool` are not allowed, with the following If you need to change an already applied `IPAMPool`, you should first delete it and then apply it with the changes. Note that by `IPAMPool` deletion, all user clusters allocations (`IPAMAllocation`) will be deleted as well. - ## Generated Resource (IPAMAllocation) + The IPAM controller in the seed-controller-manager is in charge of the allocation of IP ranges from the defined pools for user clusters. For each user cluster which runs in a datacenter for which an `IPAMPool` is defined, it will automatically allocate a free IP range from the available pool. The persisted allocation is an `IPAMAllocation` CR that will be installed in the seed cluster in the user cluster's namespace. E.g. 
for "prefix" type: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: IPAMAllocation @@ -135,6 +152,7 @@ spec: ``` E.g. for "range" type: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: IPAMAllocation @@ -149,24 +167,30 @@ spec: ``` Note that the ranges of addresses may be disjoint for the "range" type, e.g.: + ```yaml spec: addresses: - "192.168.1.0-192.168.1.7" - "192.168.1.16-192.168.1.23" ``` + The reason for that is to allow for some `IPAMPool` modifications (i.e. increase of the allocation range) in the future. ### Allocations Cleanup + The allocations (i.e. `IPAMAllocation` CRs) for a user cluster are deleted in two occasions: + - Related pool (i.e. `IPAMPool` CR with same name) is deleted. - User cluster itself is deleted. ## KKP Addon Template Integration + The user cluster allocated ranges (i.e. `IPAMAllocation` CRs values) are available in the [Addon template data]({{< relref "../../../architecture/concept/kkp-concepts/addons/" >}}#manifest-templating) (attribute `.Cluster.Network.IPAMAllocations`) to be rendered in the Addons manifests. That allows consumption of the user cluster's IPAM allocations in any KKP [Addon]({{< relref "../../../architecture/concept/kkp-concepts/addons/" >}}). For example, looping over all user cluster IPAM pools allocations in an addon template can be done as follows: + ```yaml ... @@ -185,18 +209,21 @@ For example, looping over all user cluster IPAM pools allocations in an addon te ``` ## MetalLB Addon Integration + KKP provides a [MetalLB](https://metallb.universe.tf/) [accessible addon]({{< relref "../../../architecture/concept/kkp-concepts/addons/#accessible-addons" >}}) integrated with the Multi-Cluster IPAM feature. The addon deploys standard MetalLB manifests into the user cluster. On top of that, if an IPAM allocation from an IPAM pool with a specific name is available for the user-cluster, the addon automatically installs the equivalent MetalLB IP address pool in the user cluster (in the `IPAddressPool` custom resource from the `metallb.io/v1beta1` API). The KKP `IPAMPool` from which the allocations are made need to have the following name: + - `metallb` if a single-stack (either IPv4 or IPv6) IP address pool needs to be created in the user cluster. - `metallb-ipv4` and `metallb-ipv6` if a dual-stack (both IPv4 and IPv6) IP address pool needs to be created in the user cluster. In this case, allocations from both address pools need to exist. The created [`IPAddressPool`](https://metallb.universe.tf/configuration/#defining-the-ips-to-assign-to-the-load-balancer-services) custom resource (from the `metallb.io/v1beta1` API) will have the following name: + - `kkp-managed-pool` in case of a single-stack address pool, - `kkp-managed-pool-dualstack` in case of a dual-stack address pool. diff --git a/content/kubermatic/v2.28/tutorials-howtos/networking/multus/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/networking/multus/_index.en.md index 34fd8c9c5..321684d20 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/networking/multus/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/networking/multus/_index.en.md @@ -7,17 +7,20 @@ weight = 160 The Multus-CNI Addon allows automated installation of [Multus-CNI](https://github.com/k8snetworkplumbingwg/multus-cni) in KKP user clusters. ## About Multus-CNI + Multus-CNI enables attaching multiple network interfaces to pods in Kubernetes. It is not a standard CNI plugin - it acts as a CNI "meta-plugin", a CNI that can call multiple other CNI plugins. 
This implies that clusters still need a primary CNI to function properly. In KKP, Multus can be installed into user clusters with any [supported CNI]({{< relref "../cni-cluster-network/" >}}). Multus addon can be deployed into a user cluster with a working primary CNI at any time. ## Installing the Multus Addon in KKP + Before this addon can be deployed in a KKP user cluster, the KKP installation has to be configured to enable `multus` addon as an [accessible addon]({{< relref "../../../architecture/concept/kkp-concepts/addons/#accessible-addons" >}}). This needs to be done by the KKP installation administrator, once per KKP installation. As an administrator you can use the [AddonConfig](#multus-addonconfig) listed at the end of this page. ## Deploying the Multus Addon in a KKP User Cluster + Once the Multus Addon is installed in KKP, it can be deployed into a user cluster via the KKP UI as shown below: ![Multus Addon](@/images/ui/addon-multus.png?height=400px&classes=shadow,border "Multus Addon") @@ -25,6 +28,7 @@ Once the Multus Addon is installed in KKP, it can be deployed into a user cluste Multus will automatically configure itself with the primary CNI running in the user cluster. If the primary CNI is not yet running at the time of Multus installation, Multus will wait for it for up to 10 minutes. ## Using Multus CNI + When Multus addon is installed, all pods will be still managed by the primary CNI. At this point, it is possible to define additional networks with `NetworkAttachmentDefinition` custom resources. As an example, the following `NetworkAttachmentDefinition` defines a network named `macvlan-net` managed by the [macvlan CNI plugin](https://www.cni.dev/plugins/current/main/macvlan/) (a simple standard CNI plugin usually installed together with the primary CNIs): @@ -97,6 +101,7 @@ $ kubectl exec -it samplepod -- ip address ``` ## Multus AddonConfig + As an KKP administrator, you can use the following AddonConfig for Multus to display Multus logo in the addon list in KKP UI: ```yaml diff --git a/content/kubermatic/v2.28/tutorials-howtos/networking/proxy-whitelisting/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/networking/proxy-whitelisting/_index.en.md index 647ba8494..335a85b49 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/networking/proxy-whitelisting/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/networking/proxy-whitelisting/_index.en.md @@ -225,32 +225,38 @@ projects.registry.vmware.com/vmware-cloud-director/cloud-director-named-disk-csi ``` ### OS Resources -Additional to the kubelet dependencies, the [OperatingSystemManager](https://docs.kubermatic.com/operatingsystemmanager) installs some operating-system-specific packages over cloud-init: +Additional to the kubelet dependencies, the [OperatingSystemManager](https://docs.kubermatic.com/operatingsystemmanager) installs some operating-system-specific packages over cloud-init: ### Flatcar Linux + Init script: [osp-flatcar-cloud-init.yaml](https://github.com/kubermatic/operating-system-manager/blob/main/deploy/osps/default/osp-flatcar-cloud-init.yaml) - no additional targets ### Ubuntu 20.04/22.04/24.04 + Init script: [osp-ubuntu.yaml](https://github.com/kubermatic/operating-system-manager/blob/main/deploy/osps/default/osp-ubuntu.yaml) - default apt repositories - docker apt repository: `download.docker.com/linux/ubuntu` ### Other OS + Other supported operating system details are visible by the dedicated [default 
OperatingSystemProfiles](https://github.com/kubermatic/operating-system-manager/tree/main/deploy/osps/default). # KKP Seed Cluster Setup ## Cloud Provider API Endpoints + KKP interacts with the different cloud provider directly to provision the required infrastructure to manage Kubernetes clusters: ### AWS + API endpoint documentation: [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) KKP interacts in several ways with different cloud providers, e.g.: + - creating EC2 instances - creating security groups - access instance profiles @@ -263,7 +269,9 @@ ec2.eu-central-1.amazonaws.com ``` ### Azure + API endpoint documentation: [Azure API Docs - Request URI](https://docs.microsoft.com/en-us/rest/api/azure/#request-uri) + ```bash # Resource Manager API management.azure.com @@ -275,8 +283,8 @@ login.microsoftonline.com ``` ### vSphere -API Endpoint URL of all targeted vCenters specified in [seed cluster `spec.datacenters.EXAMPLEDC.vsphere.endpoint`]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}), e.g. `vcenter.example.com`. +API Endpoint URL of all targeted vCenters specified in [seed cluster `spec.datacenters.EXAMPLEDC.vsphere.endpoint`]({{< ref "../../../tutorials-howtos/project-and-cluster-management/seed-cluster" >}}), e.g. `vcenter.example.com`. ## KubeOne Seed Cluster Setup @@ -299,7 +307,9 @@ github.com/containernetworking/plugins/releases/download # gobetween (if used, e.g. at vsphere terraform setup) github.com/yyyar/gobetween/releases ``` + **At installer host / bastion server**: + ```bash ## terraform modules registry.terraform.io @@ -313,6 +323,7 @@ quay.io/kubermatic-labs/kubeone-tooling ``` ## cert-manager (if used) + For creating certificates with let's encrypt we need access: ```bash diff --git a/content/kubermatic/v2.28/tutorials-howtos/oidc-provider-configuration/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/oidc-provider-configuration/_index.en.md index 2553a06d5..17c2073ee 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/oidc-provider-configuration/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/oidc-provider-configuration/_index.en.md @@ -107,13 +107,13 @@ spec: "oidc_logout_url": "https://keycloak.kubermatic.test/auth/realms/test/protocol/openid-connect/logout" } ``` + {{% notice note %}} When the user token size exceeds the browser's cookie size limit (e.g., when the user is a member of many groups), the token is split across multiple cookies to ensure proper authentication. External tools outside of KKP (e.g., Kubernetes Dashboard, Grafana, Prometheus) are not supported with multi-cookie tokens. {{% /notice %}} - ### Seed Configuration In some cases a Seed may require an independent OIDC provider. For this reason a `Seed` CRD contains relevant fields under `spec.oidcProviderConfiguration`. Filling those fields results in overwriting a configuration from `KubermaticConfiguration` CRD. The following snippet presents an example of `Seed` CRD configuration: @@ -138,7 +138,5 @@ reconfigure the components accordingly. After a few seconds the new pods should running. {{% notice note %}} -If you are using _Keycloak_ as a custom OIDC provider, make sure that you set the option `Implicit Flow Enabled: On` -on the `kubermatic` and `kubermaticIssuer` clients. Without this option, you won't be properly -redirected to the login page. 
+If you are using *Keycloak* as a custom OIDC provider, make sure that you set the option `Implicit Flow Enabled: On` on the `kubermatic` and `kubermaticIssuer` clients. Without this option, you won't be properly redirected to the login page. {{% /notice %}} diff --git a/content/kubermatic/v2.28/tutorials-howtos/oidc-provider-configuration/share-clusters-via-delegated-oidc-authentication/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/oidc-provider-configuration/share-clusters-via-delegated-oidc-authentication/_index.en.md index f4d044c5f..5dcddc054 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/oidc-provider-configuration/share-clusters-via-delegated-oidc-authentication/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/oidc-provider-configuration/share-clusters-via-delegated-oidc-authentication/_index.en.md @@ -143,4 +143,5 @@ kubectl -n kubermatic apply -f kubermaticconfig.yaml After the operator has reconciled the KKP installation, OIDC auth will become available. ### Grant Permission to an OIDC group + Please take a look at [Cluster Access - Manage Group's permissions]({{< ref "../../cluster-access#manage-group-permissions" >}}) diff --git a/content/kubermatic/v2.28/tutorials-howtos/opa-integration/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/opa-integration/_index.en.md index 734556c78..6e776ff78 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/opa-integration/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/opa-integration/_index.en.md @@ -17,7 +17,6 @@ policy engine. More info about OPA and Gatekeeper can be read from their docs and tutorials, but the general idea is that by using the Constraint Template CRD the users can create rule templates whose parameters are then filled out by the corresponding Constraints. - ## How to activate OPA Integration on your Cluster The integration is specific per user cluster, meaning that it is activated by a flag in the cluster spec. @@ -44,6 +43,7 @@ Constraint Templates are managed by the Kubermatic platform admins. Kubermatic i Kubermatic CT's which designated controllers to reconcile to the seed and to user cluster with activated OPA integration as Gatekeeper CT's. Example of a Kubermatic Constraint Template: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: ConstraintTemplate @@ -94,7 +94,6 @@ Constraints need to be associated with a Constraint Template. To add a new constraint click on the `+ Add Constraint` icon on the right at the bottom of cluster view. A new dialog will appear, where you can specify the name, the constraint template, and the spec: Spec is the only field that needs to be filled with a yaml. - ![Add Constraints Dialog](@/images/ui/opa-add-constraint.png?height=350px&classes=shadow,border "Add Constraints Dialog") `Note: You can now manage Default Constraints from the Admin Panel.` @@ -134,6 +133,7 @@ Kubermatic operator/admin creates a Constraint in the admin panel, it gets propa The following example is regarding `Restricting escalation to root privileges` in Pod Security Policy but implemented as Constraints and Constraint Templates with Gatekeeper. Constraint Templates + ```yaml crd: spec: @@ -169,6 +169,7 @@ selector: ``` Constraint + ```yaml constraintType: K8sPSPAllowPrivilegeEscalationContainer match: @@ -291,6 +292,7 @@ selector: matchLabels: filtered: 'true' ``` + ### Deleting Default Constraint Deleting Default Constraint causes all related Constraints on the user clusters to be deleted as well. 
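To verify the effect in a user cluster (a sketch, assuming `kubectl` access to that cluster), you can list the Gatekeeper constraints that are currently present; Gatekeeper registers all constraint kinds under the `constraints` category:

```bash
# List every Gatekeeper constraint currently installed in the user cluster,
# regardless of its constraint template / kind
kubectl get constraints
```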
@@ -316,6 +318,7 @@ OPA matches these prefixes with the Pods container `image` field and if it match They are cluster-scoped and reside in the KKP Master cluster. Example of a AllowedRegistry: + ```yaml apiVersion: kubermatic.k8c.io/v1 kind: AllowedRegistry @@ -396,7 +399,8 @@ For the existing `allowedregistry` [Default Constraint]({{< ref "#default-constr When a user tries to create a Pod with an image coming from a registry that is not prefixed by one of the AllowedRegistries, they will get a similar error: -``` + +```bash container has an invalid image registry , allowed image registries are ["quay.io"] ``` @@ -419,7 +423,7 @@ You can manage the config in the user cluster view, per user cluster. OPA integration on a user cluster can simply be removed by disabling the OPA Integration flag on the Cluster object. Be advised that this action removes all Constraint Templates, Constraints, and Config related to the cluster. -**Exempting Namespaces** +### Exempting Namespaces `gatekeeper-system` and `kube-system` namespace are by default entirely exempted from Gatekeeper webhook which means they are exempted from the Admission Webhook and Auditing. diff --git a/content/kubermatic/v2.28/tutorials-howtos/opa-integration/via-ui/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/opa-integration/via-ui/_index.en.md index 4a6570398..ccd503541 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/opa-integration/via-ui/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/opa-integration/via-ui/_index.en.md @@ -14,6 +14,7 @@ As an admin, you will find a few options in the `Admin Panel`. You can access th ![Access Admin Panel](@/images/ui/admin-panel.png?classes=shadow,border "Accessing the Admin Panel") In here you can see the `OPA Options` with two checkboxes attached. + - `Enable by Default`: Set the `OPA Integration` checkbox on cluster creation to enabled by default. - `Enforce`: Enable to make users unable to edit the checkbox. @@ -30,13 +31,13 @@ Here you navigate to the OPA menu and then to Default Constraints. ## Cluster Details View The cluster details view is extended by some more information if OPA is enabled. + - `OPA Integration` in the top area is indicating if OPA is enabled or not. - `OPA Gatekeeper Controller` and `OPA Gatekeeper Audit` provide information about the status of those controllers. - `OPA Constraints` and `OPA Gatekeeper Config` are added to the tab menu on the bottom. More details are in the following sections. ![Cluster Details View](@/images/ui/opa-cluster-view.png?classes=shadow,border "Cluster Details View") - ## Activating OPA To create a new cluster with OPA enabled you only have to enable the `OPA Integration` checkbox during the cluster creation process. It is placed in Step 2 `Cluster` and can be enabled by default as mentioned in the [Admin Panel for OPA Options]({{< ref "#admin-panel-for-opa-options" >}}) section. @@ -69,6 +70,7 @@ Spec is the only field that needs to be filled with a yaml. ![Add Constraint Template](@/images/ui/opa-admin-add-ct.png?classes=shadow,border&height=350px "Constraint Template Add Dialog") The following example requires all labels that are described by the constraint to be present: + ```yaml crd: spec: @@ -116,6 +118,7 @@ To add a new constraint click on the `+ Add Constraint` icon on the right. 
A new ![Add Constraints Dialog](@/images/ui/opa-add-constraint.png?classes=shadow,border "Add Constraints Dialog") The following example will make sure that the gatekeeper label is defined on all namespaces, if you are using the `K8sRequiredLabels` constraint template from above: + ```yaml match: kinds: @@ -238,10 +241,8 @@ In Admin View to disable Default Constraints, click on the green button under `O Kubermatic adds a label `disabled: true` to the Disabled Constraint ![Disabled Default Constraint](@/images/ui/default-constraint-default-true.png?height=400px&classes=shadow,border "Disabled Default Constraint") - ![Disabled Default Constraint](@/images/ui/disabled-default-constraint-cluster-view.png?classes=shadow,border "Disabled Default Constraint") - Enable the constraint by clicking the same button ![Enable Default Constraint](@/images/ui/disabled-default-constraint.png?classes=shadow,border "Enable Default Constraint") @@ -287,7 +288,8 @@ Click on this button to create a config. A new dialog will appear, where you can ![Add Gatekeeper Config](@/images/ui/opa-add-config.png?height=350px&classes=shadow,border "Add Gatekeeper Config") The following example will dynamically update what objects are synced: -``` + +```yaml sync: syncOnly: - group: "" diff --git a/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/_index.en.md index c6a8479ef..593556851 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/_index.en.md @@ -44,7 +44,7 @@ Its dedicated controller runs in the **seed** cluster, in user cluster namespace For each cluster there are at least two OSC objects: 1. **Bootstrap**: OSC used for initial configuration of machine and to fetch the provisioning OSC object. -2. **Provisioning**: OSC with the actual cloud-config that provision the worker node. +1. **Provisioning**: OSC with the actual cloud-config that provision the worker node. OSCs are processed by controllers to eventually generate **secrets inside each user cluster**. These secrets are then consumed by worker nodes. 
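For a quick look at these objects (a sketch; the seed namespace follows the usual `cluster-<cluster-id>` naming, and the secret location in the user cluster is an assumption):

```bash
# On the seed cluster: list the OSCs generated for a given user cluster
kubectl get operatingsystemconfigs -n cluster-<cluster-id>

# In the user cluster: the rendered configurations end up as secrets
# (namespace not fixed here; search across namespaces if unsure)
kubectl get secrets --all-namespaces | grep -i osc
```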
diff --git a/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/compatibility/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/compatibility/_index.en.md index c0196d331..4df4a2450 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/compatibility/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/compatibility/_index.en.md @@ -16,25 +16,25 @@ The following operating systems are currently supported by the default Operating ## Operating System -| | Ubuntu | Flatcar | Amazon Linux 2 | RHEL | Rocky Linux | -|---|---|---|---|---|---| -| AWS | ✓ | ✓ | ✓ | ✓ | ✓ | -| Azure | ✓ | ✓ | x | ✓ | ✓ | -| DigitalOcean | ✓ | x | x | x | ✓ | -| Equinix Metal | ✓ | ✓ | x | x | ✓ | -| Google Cloud Platform | ✓ | ✓ | x | x | x | -| Hetzner | ✓ | x | x | x | ✓ | -| KubeVirt | ✓ | ✓ | x | ✓ | ✓ | -| Nutanix | ✓ | x | x | x | x | -| Openstack | ✓ | ✓ | x | ✓ | ✓ | -| VMware Cloud Director | ✓ | ✓ | x | x | x | -| VSphere | ✓ | ✓ | x | ✓ | ✓ | +| | Ubuntu | Flatcar | Amazon Linux 2 | RHEL | Rocky Linux | +| --------------------- | ------ | ------- | -------------- | ---- | ----------- | +| AWS | ✓ | ✓ | ✓ | ✓ | ✓ | +| Azure | ✓ | ✓ | x | ✓ | ✓ | +| DigitalOcean | ✓ | x | x | x | ✓ | +| Equinix Metal | ✓ | ✓ | x | x | ✓ | +| Google Cloud Platform | ✓ | ✓ | x | x | x | +| Hetzner | ✓ | x | x | x | ✓ | +| KubeVirt | ✓ | ✓ | x | ✓ | ✓ | +| Nutanix | ✓ | x | x | x | x | +| Openstack | ✓ | ✓ | x | ✓ | ✓ | +| VMware Cloud Director | ✓ | ✓ | x | x | x | +| VSphere | ✓ | ✓ | x | ✓ | ✓ | ## Kubernetes Versions Currently supported K8S versions are: -* 1.33 -* 1.32 -* 1.31 -* 1.30 +- 1.33 +- 1.32 +- 1.31 +- 1.30 diff --git a/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/usage/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/usage/_index.en.md index 21932998a..f44c34c2b 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/usage/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/operating-system-manager/usage/_index.en.md @@ -45,69 +45,68 @@ To use custom OperatingSystemProfiles, users can do the following: 1. Create their `CustomOperatingSystemProfile` resource on the seed cluster in the `kubermatic` namespace. These resources will be automatically synced to the `kube-system` namespace of the user-clusters. -```yaml -apiVersion: operatingsystemmanager.k8c.io/v1alpha1 -kind: CustomOperatingSystemProfile -metadata: - name: osp-install-curl - namespace: kubermatic -spec: - osName: "ubuntu" - osVersion: "20.04" - version: "v1.0.0" - provisioningUtility: "cloud-init" - supportedCloudProviders: - - name: "aws" - bootstrapConfig: - files: - - path: /opt/bin/bootstrap - permissions: 755 - content: - inline: - encoding: b64 - data: | - #!/bin/bash - - apt update && apt install -y curl jq - - - path: /etc/systemd/system/bootstrap.service - permissions: 644 - content: - inline: - encoding: b64 - data: | - [Install] - WantedBy=multi-user.target - - [Unit] - Requires=network-online.target - After=network-online.target - [Service] - Type=oneshot - RemainAfterExit=true - ExecStart=/opt/bin/bootstrap - - modules: - runcmd: - - systemctl restart bootstrap.service - - provisioningConfig: - files: - - path: /opt/hello-world - permissions: 644 - content: - inline: - encoding: b64 - data: echo "hello world" -``` - -2. Create `OperatingSystemProfile` resources in the `kube-system` namespace of the user cluster, after cluster creation. 
+ ```yaml + apiVersion: operatingsystemmanager.k8c.io/v1alpha1 + kind: CustomOperatingSystemProfile + metadata: + name: osp-install-curl + namespace: kubermatic + spec: + osName: "ubuntu" + osVersion: "20.04" + version: "v1.0.0" + provisioningUtility: "cloud-init" + supportedCloudProviders: + - name: "aws" + bootstrapConfig: + files: + - path: /opt/bin/bootstrap + permissions: 755 + content: + inline: + encoding: b64 + data: | + #!/bin/bash + + apt update && apt install -y curl jq + + - path: /etc/systemd/system/bootstrap.service + permissions: 644 + content: + inline: + encoding: b64 + data: | + [Install] + WantedBy=multi-user.target + + [Unit] + Requires=network-online.target + After=network-online.target + [Service] + Type=oneshot + RemainAfterExit=true + ExecStart=/opt/bin/bootstrap + + modules: + runcmd: + - systemctl restart bootstrap.service + + provisioningConfig: + files: + - path: /opt/hello-world + permissions: 644 + content: + inline: + encoding: b64 + data: echo "hello world" + ``` + +1. Create `OperatingSystemProfile` resources in the `kube-system` namespace of the user cluster, after cluster creation. {{% notice note %}} OSM uses a dedicated resource CustomOperatingSystemProfile in seed cluster. These CustomOperatingSystemProfiles are converted to OperatingSystemProfiles and then propagated to the user clusters. {{% /notice %}} - ## Updating existing OperatingSystemProfiles OSPs are immutable by design and any modifications to an existing OSP requires a version bump in `.spec.version`. Users can create custom OSPs in the seed namespace or in the user cluster and manage them. diff --git a/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/_index.en.md index 67b0291ac..5b3f2ec9c 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/_index.en.md @@ -21,14 +21,12 @@ You can assign key-label pairs to your projects. These will be inherited by the After you click `Save`, the project will be created. If you click on it now, you will see options for adding clusters, managing project members, service accounts and SSH keys. - ### Delete a Project To delete a project, move the cursor over the line with the project name and click the trash bucket icon. ![Delete Project](images/project-delete.png?classes=shadow,border "Delete Project") - ### Add an SSH Key If you want to ssh into the project VMs, you need to provide your SSH public key. SSH keys are tied to a project. During cluster creation you can choose which SSH keys should be added to nodes. To add an SSH key, navigate to `SSH Keys` in the Dashboard and click on `Add SSH Key`: @@ -41,7 +39,6 @@ This will create a pop up. Enter a unique name and paste the complete content of After you click on `Add SSH key`, your key will be created and you can now add it to clusters in the same project. - ## Manage Clusters ### Create Cluster @@ -68,7 +65,6 @@ Disabling the User SSH Key Agent at this point can not be reverted after the clu ![General Cluster Settings](images/wizard-step-2.png?classes=shadow,border "General Cluster Settings") - In the next step of the installer, enter the credentials for the chosen provider. 
A good option is to use [Presets]({{< ref "../administration/presets/" >}}) instead putting in credentials for every cluster creation: ![Provider Credentials](images/wizard-step-3.png?classes=shadow,border "Provider Credentials") @@ -126,7 +122,6 @@ To confirm the deletion, type the name of the cluster into the text box: The cluster will switch into deletion state afterwards, and will be removed from the list when the deletion succeeds. - ## Add a New Machine Deployment To add a new machine deployment navigate to your cluster view and click on the `Add Machine Deployment` button: diff --git a/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/using-kubectl/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/using-kubectl/_index.en.md index 480584fb1..244f91c89 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/using-kubectl/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/using-kubectl/_index.en.md @@ -4,7 +4,6 @@ date = 2019-11-13T12:07:15+02:00 weight = 70 +++ - Using kubectl requires the installation of kubectl on your system as well as downloading of kubeconfig on the cluster UI page. See the [Official kubectl Install Instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for a tutorial on how to install kubectl on your system. Once you have installed it, download the kubeconfig. The below steps will guide you on how to download a kubeconfig. @@ -20,10 +19,8 @@ Users in the groups `Owner` and `Editor` have an admin token in their kubeconfig ![Revoke the token](revoke-token-dialog.png?classes=shadow,border "Revoke the token") - Once you have installed the kubectl and downloaded the kubeconfig, change into the download directory and export it to your environment: - ```bash $ export KUBECONFIG=$PWD/kubeconfig-admin-czmg7r2sxm $ kubectl version diff --git a/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/web-terminal/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/web-terminal/_index.en.md index 66637815f..f9d5b5350 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/web-terminal/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/project-and-cluster-management/web-terminal/_index.en.md @@ -53,7 +53,7 @@ spec: KKP nginx ingress controller is configured with 1 hour proxy timeout to support long-lasting connections of webterminal. In case that you use a different ingress controller in your setup, you may need to extend the timeouts for the `kubermatic` ingress - e.g. 
in case of nginx ingress controller, you can add these annotations to have a 1 hour "read" and "send" timeouts: -``` +```yaml nginx.ingress.kubernetes.io/proxy-read-timeout: "3600" nginx.ingress.kubernetes.io/proxy-send-timeout: "3600" ``` diff --git a/content/kubermatic/v2.28/tutorials-howtos/storage/disable-csi-driver/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/storage/disable-csi-driver/_index.en.md index f49aba51a..83eda571f 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/storage/disable-csi-driver/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/storage/disable-csi-driver/_index.en.md @@ -11,7 +11,7 @@ KKP installs the CSI drivers on user clusters that have the external CCM enabled To disable the CSI driver installation for all user clusters in a data center the admin needs to set ` disableCsiDriver: true` in the data center spec in the seed resource. -``` +```yaml apiVersion: kubermatic.k8c.io/v1 kind: Seed metadata: @@ -38,7 +38,7 @@ This will not impact the clusters which were created prior to enabling this opti To disable the CSI driver installation for a user cluster, admin needs to set `disableCsiDriver: true` in the cluster spec, this is possible only if it is not disabled at the data center. -``` +```yaml apiVersion: kubermatic.k8c.io/v1 kind: Cluster metadata: diff --git a/content/kubermatic/v2.28/tutorials-howtos/telemetry/_index.en.md b/content/kubermatic/v2.28/tutorials-howtos/telemetry/_index.en.md index 6dcbf4360..35ee7dc16 100644 --- a/content/kubermatic/v2.28/tutorials-howtos/telemetry/_index.en.md +++ b/content/kubermatic/v2.28/tutorials-howtos/telemetry/_index.en.md @@ -12,6 +12,7 @@ Telemetry helm chart can be found in the [Kubermatic repository](https://github. ## Installation ### Kubermatic Installer + Telemetry will be enabled by default if you use the Kubermatic installer to deploy KKP. For more information about how to use the Kubermatic installer to deploy KKP, please refer to the [installation guide]({{< relref "../../installation/" >}}). Kubermatic installer will use a `values.yaml` file to configure all Helm charts, including Telemetry. The following is an example of the configuration of the Telemetry tool: @@ -35,22 +36,29 @@ Then you can use the Kubermatic installer to install KKP by using the following ``` After this command finishes, a CronJob will be created in the `telemetry-system` namespace on the master cluster. The CronJob includes the following components: + - Agents, including Kubermatic Agent and Kubernetes Agent. They will collect data based on the predefined report schema. Each agent will collect data as an initContainer and write data to local storage. -- Reporter. It will aggregate data that was collected by Agents from local storage, and send it to the public Telemetry endpoint (https://telemetry.k8c.io) based on the `schedule` you defined in the `values.yaml` (or once per day by default). +- Reporter. It will aggregate data that was collected by Agents from local storage, and send it to the public [Telemetry endpoint](https://telemetry.k8c.io) based on the `schedule` you defined in the `values.yaml` (or once per day by default). 
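To confirm the deployment (a small sketch, assuming `kubectl` access to the master cluster), you can check the CronJob and its most recent runs:

```bash
# The Telemetry CronJob is created in the telemetry-system namespace on the master cluster
kubectl get cronjobs -n telemetry-system

# Inspect the jobs and pods spawned by the most recent schedule
kubectl get jobs,pods -n telemetry-system
```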
### Helm Chart

Telemetry can also be installed using the Helm chart included in the release. Prepare a `values.yaml` as described in the previous section and install the chart on the master cluster with the following command:

```bash
helm --namespace telemetry-system upgrade --atomic --create-namespace --install telemetry /path/to/telemetry/chart --values values.yaml
```

## Disable Telemetry

If you don’t want to send usage data to us to improve our product, or if your KKP installation runs in offline mode without access to the public Telemetry endpoint, you can disable Telemetry by passing the `--disable-telemetry` flag as follows:

```bash
./kubermatic-installer deploy --disable-telemetry --config kubermatic.yaml --helm-values values.yaml
```

## Data that Telemetry Collects

The Telemetry tool collects the following metadata in an anonymous manner using UUIDs; the data schemas can be found in the [Telemetry-Client repository](https://github.com/kubermatic/telemetry-client):

- For Kubermatic usage: [Kubermatic Record](https://github.com/kubermatic/telemetry-client/blob/release/v0.3/pkg/agent/kubermatic/v2/record.go)
- For Kubernetes usage: [Kubernetes Record](https://github.com/kubermatic/telemetry-client/blob/release/v0.3/pkg/agent/kubernetes/v2/record.go)