This repository has been archived by the owner on Mar 7, 2023. It is now read-only.

Merge pull request #70 from giubacc/remove_k8s_nginx_env_enhs
k3s-ansible submodule & removed k8s-vanilla and NGINX support
m-ildefons authored Jun 16, 2022
2 parents e11129e + 696d38e commit 045513b
Showing 30 changed files with 544 additions and 793 deletions.
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "k3s-ansible"]
path = env/playbooks/k3s-ansible
url = https://github.com/k3s-io/k3s-ansible.git
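The `.gitmodules` entry above only declares the new submodule. As an illustrative aside (not part of this commit), an existing checkout would typically fetch it with something like:

```bash
# Sketch, not part of the diff: pull the k3s-ansible submodule declared above
# into env/playbooks/k3s-ansible after fetching this change.
$ git submodule update --init --recursive
```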
2 changes: 1 addition & 1 deletion build-ui/README.md
@@ -26,7 +26,7 @@ Make sure you've installed the following applications:

* Podman or Docker

The build script expect the following directory hierarchy.
The build script expects the following directory hierarchy.

```text
|
2 changes: 1 addition & 1 deletion env/.gitignore
@@ -1,4 +1,4 @@
/s3gw.ctr.tar
/*.ctr.tar
s3gw/*.tmp.yaml
playbooks/join-command
playbooks/admin.conf
25 changes: 9 additions & 16 deletions env/README.k3s.md → env/README.bm.md
@@ -1,9 +1,9 @@
# K3s
# K3s on Bare Metal

This README will guide you through the setup of a K3s cluster on bare metal.
If you are looking for a K3s cluster running on virtual machines,
refer to our [K3s on virtual machines](./README.vm.md).

This README will guide you through the setup of a K3s cluster on your system.
If you are looking for a vanilla K8s cluster running on virtual machines,
refer to our [K8s section](./README.k8s.md).
To install K3s on a virtual machine, see [here](#Install-K3s-on-a-virtual-machine).
# Setup

## Note Before
@@ -33,7 +33,7 @@ system. Additionally, it will deploy Longhorn and the s3gw in the cluster.

```
$ cd ~/git/s3gw-core/env
$ ./setup-k3s.sh
$ ./setup.sh
```
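As a hedged aside (not part of the README being changed), once the setup script finishes one would typically sanity-check the cluster with `kubectl`, assuming the kubeconfig written by K3s is in use and Longhorn sits in its default `longhorn-system` namespace:

```bash
# Sketch under the assumptions above: verify the node and the Longhorn pods.
$ kubectl get nodes                     # node(s) should report Ready
$ kubectl get pods -n longhorn-system   # Longhorn components, default namespace assumed
```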

# Access the Longhorn UI
@@ -66,19 +66,12 @@ Backup Target Credential Secret: `s3gw-secret`
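The collapsed section above references a Backup Target Credential Secret named `s3gw-secret`. As an assumption-laden sketch (the actual instructions are elided here), such a secret is commonly created with Longhorn's documented S3 credential keys; the namespace, endpoint, and credential values below are placeholders:

```bash
# Sketch only: key names follow Longhorn's S3 credential convention;
# namespace, endpoint, and credentials are assumed placeholders.
$ kubectl create secret generic s3gw-secret \
    --namespace longhorn-system \
    --from-literal=AWS_ACCESS_KEY_ID=<access-key> \
    --from-literal=AWS_SECRET_ACCESS_KEY=<secret-key> \
    --from-literal=AWS_ENDPOINTS=https://s3gw.local:30443
```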

# Install K3s on a virtual machine

## Requirements

Make sure you have installed the following applications on your system:

* Vagrant
* libvirt
* Ansible

In order to install k3s on a virtual machine rather than on bare metal, execute:

```
$ cd ~/git/s3gw-core/env
$ ./setup-k3s.sh --vm
$ ./setup.sh --vm
```

Refer to [K8s section](./README.k8s.md) for more configuration options.
Refer to [K3s on virtual machines](./README.vm.md) for requirements and for
more configuration options.
26 changes: 11 additions & 15 deletions env/README.md
@@ -1,18 +1,13 @@
# K3s & K8s environment running s3gw with Longhorn
# K3s environment running s3gw with Longhorn

This is the entrypoint to setup a Kubernetes cluster on your system.
You can either choose to install a lightweight **K3s** cluster or a **vanilla K8s**
cluster running the latest stable Kubernetes version available.
Regardless of the choice, you will get a provisioned cluster set up to work with
`s3gw` and Longhorn.
K3s version can install directly on bare metal or on virtual machine.
K8s version will install on an arbitrary number of virtual machines depending on the
size of the cluster.
This is the entrypoint to set up a Kubernetes cluster running s3gw with Longhorn.
You can choose to install a **K3s** cluster directly on your machine
or on top of virtual machines.

Refer to the appropriate section to proceed with the setup of the environment:
Refer to the appropriate section to proceed with the setup:

* [K3s Setup](./README.k3s.md)
* [K8s Setup](./README.k8s.md)
* [K3s on bare metal](./README.bm.md)
* [K3s on virtual machines](./README.vm.md)

## Ingresses

@@ -21,7 +16,7 @@ allocated on a separate virtual host:

* **Longhorn dashboard**, on: `longhorn.local`
* **s3gw**, on: `s3gw.local` and `s3gw-no-tls.local`
* **s3gw s3 explorer**, on: `s3gw-ui.local`
* **s3gw s3 explorer**, on: `s3gw-ui.local` and `s3gw-ui-no-tls.local`

Host names are exposed with a node port service listening on ports
30443 (https) and 30080 (http).
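As an aside not found in the README, a quick way to confirm which service exposes those node ports (service names and namespaces will vary):

```bash
# Sketch: list services cluster-wide and filter for the ingress node ports.
$ kubectl get svc -A | grep -E '30443|30080'
```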
@@ -32,15 +27,15 @@ When you are running the cluster on a virtual machine,
you can patch the host's `/etc/hosts` file as follows:

```text
10.46.201.101 longhorn.local s3gw.local s3gw-no-tls.local s3gw-ui.local
10.46.201.101 longhorn.local s3gw.local s3gw-no-tls.local s3gw-ui.local s3gw-ui-no-tls.local
```

This makes the host names resolve to the admin node.
Otherwise, when you are running the cluster on bare metal,
you can patch the host's `/etc/hosts` file as follows:

```text
127.0.0.1 longhorn.local s3gw.local s3gw-no-tls.local s3gw-ui.local
127.0.0.1 longhorn.local s3gw.local s3gw-no-tls.local s3gw-ui.local s3gw-ui-no-tls.local
```

Services can now be accessed at:
@@ -50,4 +45,5 @@ https://longhorn.local:30443
https://s3gw.local:30443
http://s3gw-no-tls.local:30080
https://s3gw-ui.local:30443
http://s3gw-ui-no-tls.local:30080
```
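As a hedged aside (not in the README), once `/etc/hosts` is patched the endpoints can be probed from the host; `-k` is used on the assumption that the ingress serves a self-signed certificate:

```bash
# Sketch: confirm name resolution, then probe the ingress endpoints.
$ getent hosts longhorn.local s3gw.local          # names should resolve as patched above
$ curl -kI https://longhorn.local:30443           # expect HTTP response headers
$ curl -kI https://s3gw.local:30443
$ curl -I  http://s3gw-no-tls.local:30080
```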
60 changes: 38 additions & 22 deletions env/README.k8s.md → env/README.vm.md
@@ -1,15 +1,15 @@
# K8s
# K3s on Virtual Machines

Follow this guide if you wish to run an `s3gw` image on the latest stable Kubernetes release.
You will be able to quickly build a cluster installed on a set of virtual machines.
Follow this guide if you wish to run a K3s cluster installed on virtual machines.
You will have a certain degree of choice in terms of customization options.
If you are looking for a more lightweight environment running directly on bare metal,
refer to our [K3s section](./README.k3s.md).
refer to our [K3s on bare metal](./README.bm.md).

## Table of Contents

* [Description](#description)
* [Requirements](#requirements)
* [Supported Vagrant boxes](#supported-vagrant-boxes)
* [Building the environment](#building-the-environment)
* [Destroying the environment](#destroying-the-environment)
* [Accessing the environment](#accessing-the-environment)
@@ -20,14 +20,14 @@ refer to our [K3s section](./README.k3s.md).
## Description

The entire environment build process is automated by a set of Ansible playbooks.
The cluster is created with exactly one `admin` node and
The cluster is created with one `admin` node and
an arbitrary number of `worker` nodes.
A single virtual machine acting as an `admin` node is also possible; in this case, it
will be able to schedule pods as a `worker` node.
Name topology for nodes is the following:
Name topology of nodes is the following:

```text
admin
admin-1
worker-1
worker-2
...
@@ -41,13 +41,32 @@ Make sure you have installed the following applications on your system:
* libvirt
* Ansible

Make sure you have installed the following Ansible modules:

* kubernetes.core
* community.docker.docker_image

You can install them with:

```bash
$ ansible-galaxy collection install kubernetes.core
...
$ ansible-galaxy collection install community.docker
...
```
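As a small aside (not in the README), the installed collections can then be verified with:

```bash
# Sketch: confirm both collections are visible to Ansible.
$ ansible-galaxy collection list | grep -E 'kubernetes\.core|community\.docker'
```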

## Supported Vagrant boxes

* opensuse/Leap-15.3.x86_64
* generic/ubuntu[1604-2004]

## Building the environment

You can build the environment with the `setup-k8s.sh` script.
You can build the environment with the `setup-vm.sh` script.
The simplest form you can use is:

```bash
$ ./setup-k8s.sh build
$ ./setup-vm.sh build
Building environment ...
```

@@ -56,11 +75,10 @@ and one node `worker`.
You can customize the build with the following environment variables:

```text
IMAGE_NAME : The Vagrant box image used in the cluster
BOX_NAME : The Vagrant box image used in the cluster (default: opensuse/Leap-15.3.x86_64)
VM_NET : The virtual machine subnet used in the cluster
VM_NET_LAST_OCTET_START : Vagrant will increment this value when creating vm(s) and assigning an ip
CIDR_NET : The CIDR subnet used by the Calico network plugin
WORKER_COUNT : The number of Kubernetes workers in the cluster
WORKER_COUNT : The number of Kubernetes worker nodes in the cluster
ADMIN_MEM : The RAM amount used by the admin node (Vagrant format)
ADMIN_CPU : The CPU amount used by the admin node (Vagrant format)
ADMIN_DISK : yes/no, when yes a disk will be allocated for the admin node - this will be effective only for mono clusters
@@ -71,27 +89,25 @@ WORKER_DISK : yes/no, when yes a disk will be allocated for the
WORKER_DISK_SIZE : The disk size allocated for a worker node (Vagrant format)
CONTAINER_ENGINE : The host's local container engine used to build the s3gw container (podman/docker)
STOP_AFTER_BOOTSTRAP : yes/no, when yes stop the provisioning just after the bootstrapping phase
START_LOCAL_REGISTRY : yes/no, when yes start a local insecure image registry at admin.local:5000
S3GW_IMAGE : The s3gw's container image used when deploying the application on k8s
K8S_DISTRO : The Kubernetes distribution to install; specify k3s or k8s (k8s default)
INGRESS : The ingress implementation to be used; NGINX or Traefik (NGINX default)
S3GW_IMAGE : The s3gw's container image used when deploying the application on k3s
PROV_USER : The provisioning user used by Ansible (vagrant default)
S3GW_UI_REPO : A GitHub repository to be used when building the s3gw-ui's image
S3GW_UI_VERSION : A S3GW_UI_REPO's branch to be used
SCENARIO : An optional scenario to be loaded in the cluster
K3S_VERSION : The K3s version to be used (default: v1.23.6+k3s1)
```

So, you could start a more specialized build with:

```bash
$ IMAGE_NAME=generic/ubuntu1804 WORKER_COUNT=4 ./setup-k8s.sh build
$ BOX_NAME=generic/ubuntu1804 WORKER_COUNT=4 ./setup-vm.sh build
Building environment ...
```

You can create a mono virtual machine cluster with only the `admin` node with:

```bash
$ WORKER_COUNT=0 ./setup-k8s.sh build
$ WORKER_COUNT=0 ./setup-vm.sh build
Building environment ...
```

@@ -102,7 +118,7 @@ In this case, the node will be able to schedule pods as a `worker` node.
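As an aside not in the README, one can check that a mono-cluster `admin` node is schedulable by confirming it carries no `NoSchedule` taint (node name `admin-1` per the naming scheme above):

```bash
# Sketch: an empty taint list means the admin node can run regular pods.
$ kubectl describe node admin-1 | grep -i taints
```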
You can destroy a previously built environment with:

```bash
$ ./setup-k8s.sh destroy
$ ./setup-vm.sh destroy
Destroying environment ...
```

@@ -115,7 +131,7 @@ to be released by Vagrant.
You can start a previously built environment with:

```bash
$ ./setup-k8s.sh start
$ ./setup-vm.sh start
Starting environment ...
```

@@ -131,14 +147,14 @@ You can connect through `ssh` to all nodes in the cluster.
To connect to the `admin` node run:

```bash
$ ./setup-k8s.sh ssh admin
$ ./setup-vm.sh ssh admin
Connecting to admin ...
```

To connect to a `worker` node run:

```bash
$ ./setup-k8s.sh ssh worker-2
$ ./setup-vm.sh ssh worker-2
Connecting to worker-2 ...
```
