diff --git a/.gitignore b/.gitignore
index 21a422c..eb23a71 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,6 @@
/.idea/
.vagrant
.vscode
+.DS_Store
*.swp
diff --git a/README.md b/README.md
index 3728096..be3ac87 100644
--- a/README.md
+++ b/README.md
@@ -1,265 +1,89 @@
-# s3gw core
+# s3gw
+
-S3-compatible Gateway based on Ceph RGW, using a non-RADOS backend for
+
+An S3-compatible Gateway based on Ceph RGW, using a non-RADOS backend for
standalone usage.
-This project shall provide the required infrastructure to build a container
+This project provides the required infrastructure to build a container
able to run on a kubernetes cluster, providing S3-compatible endpoints to
applications.
-Table of Contents
-=================
-
-* [Roadmap](#roadmap)
-* [How To](#how-to)
- * [Introduction](#introduction)
- * [Requirements](#requirements)
- * [Running](#running)
-* [Building the s3gw container image](#building-the-s3gw-container-image)
- * [Prerequisites](#prerequisites)
- * [Building the radosgw binary](#building-the-radosgw-binary)
- * [Build the s3gw container image](#build-the-s3gw-container-image)
- * [Running the s3gw container](#running-the-s3gw-container)
-* [Building the s3gw-UI image](#building-the-s3gw-ui-image)
-* [Building and running a complete environment](#building-and-running-a-complete-environment)
-* [When things don't work](#when-things-dont-work)
-* [License](#license)
-
-
-
-## Roadmap
-
-We are currently focusing on delivering a proof-of-concept, demonstrating the
-feasibility of the project. We call this deliverable the Minimum Viable Product.
-
-The image below represents what we believe to be our current roadmap. We are on
-the MVP phase right now. Once we achieve this deliverable, and depending on user
-feedback being positive, amongst other factors that may yet be determined, we
-shall proceed to the next phase of development.
-
-![S3GW Roadmap](/assets/images/s3gw-roadmap.jpg)
-
-We also keep track of our items in an Aquarist Labs organization dedicated
-[project](https://github.com/orgs/aquarist-labs/projects/5/views/1).
-
-## How To
-
-### Introduction
-
-Given we are still setting up the project, figuring out requirements, and
-specific details about direction, we are dedicating most of our efforts to
-testing Ceph's RGW as a standalone daemon using a non-RADOS storage backend.
-
-The backend in question is called `dbstore`, backed by a SQLite database, and
-is currently provided by RGW.
-
-In order to ensure we all test from the same point in time, we have a forked
-version of the latest development version of Ceph, which can be found
-[here](https://github.com/aquarist-labs/ceph.git). We are working using the
-[`s3gw` branch](https://github.com/aquarist-labs/ceph/tree/s3gw) as our base of
-reference.
-
-Keep in mind that this development branch will likely closely follow Ceph's
-upstream main development branch, and is bound to change over time. We intend
-to contribute whatever patches we come up with to the original project, thus
-we need to keep up with its ever evolving state.
-
-### Requirements
-
-We are relying on built Ceph sources to test RGW. We don't have a particular
-preference on how one achieves this. Some of us rely on containers to build
-these sources, while others rely on whatever OS they have on their local
-machines to do so. Eventually we intend to standardize how we obtain the
-RGW binary, but that's not in our immediate plans.
+# Table of Contents
-If one is new to Ceph development, the best way to find out how to build
-these sources is to refer to the
-[original documentation](https://docs.ceph.com/en/pacific/install/build-ceph/#id1).
+- [s3gw](#s3gw)
+- [Roadmap](#-roadmap)
+- [Quickstart](#-quickstart)
+ - [Helm chart](#helm-chart)
+ - [Podman](#podman)
+ - [Docker](#docker)
+- [License](#license)
+- [Developing the s3gw](#-developing-the-s3gw)
-Because we are in a fast development effort at the moment, we have chosen to
-apply patches needed to make our endeavour work on our own fork of the Ceph
-repository. This allows us fiddle with the Ceph source while experimenting,
-without polluting the upstream Ceph repository. We do intend to upstream any
-patches that make sense though.
+# 🛣 Roadmap
-That said, we have the `aquarist-labs/ceph` repository as a requirement for
-this project. We can't guarantee that our instructions, or the project as a
-whole, will work flawlessly with the original Ceph project from `ceph/ceph`.
+![Roadmap](/assets/images/s3gw-roadmap.jpg)
-### Running
+The aim is to deliver a Minimum Viable Product (MVP). In a nutshell, an MVP seeks to collect the maximum amount of validated learning about our users in a short time.
-One should be able to get a standalone RGW running following these steps:
+Based on the user feedback we collect, we develop features that add up towards the goal of delivering the MVP. We're working in phases:
-```
-$ cd build/
-$ mkdir -p dev/rgw.foo
-$ bin/radosgw -i foo -d --no-mon-config --debug-rgw 15 \
- --rgw-backend-store dbstore \
- --rgw-data $(pwd)/dev/rgw.foo \
- --run-dir $(pwd)/dev/rgw.foo
-```
-
-Once the daemon is running, and outputting its logs to the terminal, one can
-start issuing commands to the daemon. We rely on `s3cmd`, which can be found
-on [github](https://github.com/s3tools/s3cmd) or obtained through `pip`.
-
-`s3cmd` will require to be configured to talk to RGW. This can be achieved by
-first running `s3cmd -c $(pwd)/.s3cfg --configure`. By default, the configuration
-file would be put under the user's home directory, but for our testing purposes
-it might be better to place it somewhere less intrusive.
-
-During the interactive configuration a few things will be asked, and we
-recommend using these answers unless one's deployment is different, in which
-case these will need to be properly adapted.
-
-```
- Access Key: 0555b35654ad1656d804
- Secret Key: h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
- Default Region: US
- S3 Endpoint: 127.0.0.1:7480
- DNS-style bucket+hostname:port template for accessing a bucket: 127.0.0.1:7480/%(bucket)
- Encryption password: ****
- Path to GPG program: /usr/bin/gpg
- Use HTTPS protocol: False
- HTTP Proxy server name:
- HTTP Proxy server port: 0
-```
+**Phase 1**
+* Use SQLite as the storage backend.
+* Run the s3gw in a container.
+* Support basic S3 operations.
+* Deploy the S3 Gateway on a K8s or K3s cluster.
+* [Design mockups](https://www.figma.com/file/IeozuvvYlrKBs7qm030dyo/S3-Wireframe?node-id=0%3A1) for User management & an S3 explorer.
-Please note that both the `Access Key` and the `Secret Key` need to be copied
-verbatim. Unfortunately, at this time, the `dbstore` backend statically creates
-an initial user using these values.
+-----
-Should the configuration be correct, one will then be able to issue commands
-against the running RGW. E.g., `s3cmd mb s3://foo`, to create a new bucket.
+**Phase 2**
+* Basic S3 explorer UI
+* Helm chart
+* Define & start work on a file-based backend
+* Implement CI/CD
-## Building the s3gw container image
+-----
-This documentation will guide you through the several steps to build the
-`s3gw` container image.
+**Phase 3**
+* File-based backend integration
+* Increased support for S3 functional features
+* S3 explorer UI
+* User management UI
+* TBA
-> **NOTE:** The absolute paths mentioned in this document may be different
-> on your system.
+-----
-### Prerequisites
-Make sure you've installed the following applications:
+**Project board**
-- podman
-- buildah
+We track our progress on this [GitHub project](https://github.com/orgs/aquarist-labs/projects/5/views/1) board.
-Optionally, if you prefer building an `s3gw` container image with Docker you will need:
-
-- docker
-
-The build scripts expect the following directory hierarchy.
-
-```
-|
-|- ceph/
-| |- build/
-| ...
-|
-|- s3gw-core/
- |- build/
- ...
-```
-
-### Building the radosgw binary
-To build the `radosgw` binary, a containerized build environment is used.
-This container can be built by running the following command:
-
-```
-$ cd ~/git/s3gw-core/build
-$ podman build --tag build-radosgw -f ./Dockerfile.build-radosgw
-```
-
-If you experience connection issues while downloading the packages to be
-installed in the build environment, try using the `--net=host`
-command line argument.
-
-After the build environment container image has been build once, the
-`radosgw` binary will be build automatically when the container is
-started. Make sure the path to the Ceph Git repository in the host
-file system is correct, e.g. `../../ceph`, `~/git/ceph`, ...
-
-```
-$ podman run --replace --name build-radosgw -v ../../ceph/:/srv/ceph/ localhost/build-radosgw
-```
-
-By default, the `radosgw` binary file will be build in `Debug` mode. For production
-builds set the environment variable `CMAKE_BUILD_TYPE` to `Release`, `RelWithDebInfo`
-or `MinSizeRel`. Check the [CMAKE_BUILD_TYPE documentation](https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html)
-for more information.
-
-```
-$ podman run --replace --name build-radosgw -e CMAKE_BUILD_TYPE="MinSizeRel" -v ../../ceph/:/srv/ceph/ localhost/build-radosgw
-```
-
-### Build the s3gw container image
-If the Ceph `radosgw` binary is compiled, the container image can be build
-with the following commands:
-
-```
-$ cd ~/git/s3gw-core/build
-$ ./build-container.sh
-```
-
-By default, this will build an `s3gw` image using podman.
-In order to build an `s3gw` image with Docker, you can run:
-
-```
-$ cd ~/git/s3gw-core/build
-$ CONTAINER_ENGINE=docker ./build-container.sh
-```
-
-The container build script expects the `radosgw` binary at the relative
-path `../ceph/build/bin`. This can be customized via the `CEPH_DIR`
-environment variable.
-
-The container image name is `s3gw` by default. This can be customized via
-the environment variable `IMAGE_NAME`.
-
-### Running the s3gw container
-Finally, you can run the `s3gw` container with the following command:
+# 🚀 Quickstart
+## Helm chart
+An easy way to deploy the S3 Gateway on your Kubernetes cluster is via the Helm chart
+we maintain in a dedicated repository: [aquarist-labs/s3gw-charts](https://github.com/aquarist-labs/s3gw-charts).
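+
+A minimal sketch of such a deployment, assuming the chart lives at the root of
+that repository (the chart path, release name and namespace here are
+assumptions; check the charts repository for the authoritative instructions):
+
+```
+git clone https://github.com/aquarist-labs/s3gw-charts.git
+cd s3gw-charts
+# release name "s3gw" and namespace "s3gw-system" are assumptions
+helm install s3gw . --namespace s3gw-system --create-namespace
+```
+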
+## Podman
```
-$ podman run --replace --name=s3gw -it -p 7480:7480 localhost/s3gw
+$ podman run --replace --name=s3gw -it -p 7480:7480 ghcr.io/aquarist-labs/s3gw:latest
```
-or, when using Docker:
-
+## Docker
```
-$ docker run -p 7480:7480 localhost/s3gw
-```
-By default, the container will run with the following arguments:
-
-```text
---rgw-backend-store dbstore
---debug-rgw 1
+docker pull ghcr.io/aquarist-labs/s3gw:latest
```
-You can override them passing different values when starting the container.
-For example if you want to increase `radosgw` logging verbosity, you could run:
+To run the Docker container:
-```shell
-$ podman run -p 7480:7480 localhost/s3gw --rgw-backend-store dbstore --debug-rgw 15
+```
+docker run -p 7480:7480 ghcr.io/aquarist-labs/s3gw:latest
```
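+
+Once the gateway is up, a quick smoke test with `s3cmd` might look like this
+(a sketch; the access and secret keys are the static defaults created by the
+`dbstore` backend, documented in the developing guide):
+
+```
+# create a bucket against the gateway listening on 127.0.0.1:7480
+s3cmd --host=127.0.0.1:7480 --host-bucket="127.0.0.1:7480/%(bucket)" --no-ssl \
+  --access_key=0555b35654ad1656d804 \
+  --secret_key=h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q== \
+  mb s3://test-bucket
+```
+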
-## Building the s3gw-UI image
-
-You can refer to the [build-ui](./build-ui/) section to build the s3gw-UI image.
-
-## Building and running a complete environment
-
-You can refer to the [environment](./env/) section to build a fully provisioned Kubernetes cluster.
-
-## When things don't work
-
-If one finds a problem, as one is bound to at this point in time, we encourage
-everyone to check out our [issues list](https://github.com/aquarist-labs/s3gw-core/issues)
-and either file a new issue if the observed behavior has not been reported
-yet, or to contribute with further details to an existing issue.
+For more information on building and running a container, please read our [guide](./docs/build.md).
-## License
+# License
Licensed under the Apache License, Version 2.0 (the "License");
you may not use licensed files except in compliance with the License.
@@ -275,3 +99,17 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+# 📖 Developing the s3gw
+- [Developing the S3 Gateway](./docs/developing.md)
+- [How to build your own containers](./docs/build.md)
+  * [Building the s3gw container image](./docs/build.md#building-the-s3gw-container-image)
+  * [Building the s3gw-UI image](./docs/build-ui.md)
+- Building a K3s & K8s environment running s3gw with Longhorn:
+  * [K3s Setup](./docs/env-k3s.md)
+  * [K8s Setup](./docs/env-k8s.md)
+- [S3 API compatibility table](./docs/s3-compatibility-table.md)
+- [S3GW Repositories](./docs/s3gw-repos.md)
+- [Contributing](./docs/contributing.md#contributing)
+ * [Reporting an issue](./docs/contributing.md#reporting-an-issue)
+ * [Discussion](./docs/contributing.md#discussion)
diff --git a/assets/images/s3gw-roadmap.jpg b/assets/images/s3gw-roadmap.jpg
index 712b2da..fe8d3ad 100644
Binary files a/assets/images/s3gw-roadmap.jpg and b/assets/images/s3gw-roadmap.jpg differ
diff --git a/build-ui/README.md b/docs/build-ui.md
similarity index 100%
rename from build-ui/README.md
rename to docs/build-ui.md
diff --git a/docs/build.md b/docs/build.md
new file mode 100644
index 0000000..3b4d173
--- /dev/null
+++ b/docs/build.md
@@ -0,0 +1,113 @@
+# 📦 How to build your own containers
+
+## Building the s3gw container image
+
+This guide walks you through the steps needed to build the
+`s3gw` container image.
+
+> **NOTE:** The absolute paths mentioned in this document may be different
+> on your system.
+
+### Prerequisites
+Make sure you've installed the following applications:
+
+- podman
+- buildah
+
+Optionally, if you prefer building an `s3gw` container image with Docker you will need:
+
+- docker
+
+The build scripts expect the following directory hierarchy.
+
+```
+|
+|- ceph/
+| |- build/
+| ...
+|
+|- s3gw-core/
+ |- build/
+ ...
+```
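+
+A sketch of how to arrive at this layout, using the repositories referenced by
+this project (the clone destination `~/git` is just an example):
+
+```
+$ cd ~/git
+$ git clone -b s3gw https://github.com/aquarist-labs/ceph.git
+$ git clone https://github.com/aquarist-labs/s3gw-core.git
+```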
+
+### Building the radosgw binary
+To build the `radosgw` binary, a containerized build environment is used.
+This container can be built by running the following command:
+
+```
+$ cd ~/git/s3gw-core/build
+$ podman build --tag build-radosgw -f ./Dockerfile.build-radosgw
+```
+
+If you experience connection issues while downloading the packages to be
+installed in the build environment, try using the `--net=host`
+command line argument.
+
+After the build environment container image has been built once, the
+`radosgw` binary will be built automatically when the container is
+started. Make sure the path to the Ceph Git repository in the host
+file system is correct, e.g. `../../ceph`, `~/git/ceph`, ...
+
+```
+$ podman run --replace --name build-radosgw -v ../../ceph/:/srv/ceph/ localhost/build-radosgw
+```
+
+By default, the `radosgw` binary will be built in `Debug` mode. For production
+builds set the environment variable `CMAKE_BUILD_TYPE` to `Release`, `RelWithDebInfo`
+or `MinSizeRel`. Check the [CMAKE_BUILD_TYPE documentation](https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html)
+for more information.
+
+```
+$ podman run --replace --name build-radosgw -e CMAKE_BUILD_TYPE="MinSizeRel" -v ../../ceph/:/srv/ceph/ localhost/build-radosgw
+```
+
+### Build the s3gw container image
+Once the Ceph `radosgw` binary is compiled, the container image can be built
+with the following commands:
+
+```
+$ cd ~/git/s3gw-core/build
+$ ./build-container.sh
+```
+
+By default, this will build an `s3gw` image using podman.
+In order to build an `s3gw` image with Docker, you can run:
+
+```
+$ cd ~/git/s3gw-core/build
+$ CONTAINER_ENGINE=docker ./build-container.sh
+```
+
+The container build script expects the `radosgw` binary at the relative
+path `../ceph/build/bin`. This can be customized via the `CEPH_DIR`
+environment variable.
+
+The container image name is `s3gw` by default. This can be customized via
+the environment variable `IMAGE_NAME`.
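+
+For example, a build that picks up the Ceph checkout from a custom location and
+tags the image differently could look like this (paths and names are
+illustrative):
+
+```
+$ cd ~/git/s3gw-core/build
+$ CEPH_DIR=~/git/ceph IMAGE_NAME=my-s3gw ./build-container.sh
+```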
+
+### Running the s3gw container
+Finally, you can run the `s3gw` container with the following command:
+
+```
+$ podman run --replace --name=s3gw -it -p 7480:7480 localhost/s3gw
+```
+
+or, when using Docker:
+
+```
+$ docker run -p 7480:7480 localhost/s3gw
+```
+By default, the container will run with the following arguments:
+
+```text
+--rgw-backend-store dbstore
+--debug-rgw 1
+```
+
+You can override them by passing different values when starting the container.
+For example, to increase `radosgw` logging verbosity, you could run:
+
+```shell
+$ podman run -p 7480:7480 localhost/s3gw --rgw-backend-store dbstore --debug-rgw 15
+```
diff --git a/docs/contributing.md b/docs/contributing.md
new file mode 100644
index 0000000..e9ba929
--- /dev/null
+++ b/docs/contributing.md
@@ -0,0 +1,11 @@
+# Contributing
+## Reporting an issue
+
+If one finds a problem, as one is bound to at this point in time, we encourage
+everyone to check out our [issues list](https://github.com/aquarist-labs/s3gw-core/issues)
+and either file a new issue if the observed behavior has not been reported
+yet, or contribute further details to an existing issue.
+
+## Discussion
+
+You can join us in [Slack](https://join.slack.com/t/aquaristlabs/shared_invite/zt-nphn0jhg-QYKw__It8JPMkUR_sArOug).
diff --git a/docs/developing.md b/docs/developing.md
new file mode 100644
index 0000000..f469f2f
--- /dev/null
+++ b/docs/developing.md
@@ -0,0 +1,89 @@
+# 🖥️ Developing the S3 Gateway
+You can refer to the [build guide](./build.md) to understand how to build the `s3gw` container image.
+
+## Introduction
+
+Given we are still setting up the project, figuring out requirements and
+specific details about direction, we are dedicating most of our efforts to
+testing Ceph's RGW as a standalone daemon using a non-RADOS storage backend.
+
+The backend in question is called `dbstore`, backed by a SQLite database, and is currently provided by RGW.
+
+In order to ensure we all test from the same point in time, we have a forked
+version of the latest development version of Ceph, which can be found
+[here](https://github.com/aquarist-labs/ceph.git). We are working using the
+[`s3gw` branch](https://github.com/aquarist-labs/ceph/tree/s3gw) as our base of
+reference.
+
+Keep in mind that this development branch will likely closely follow Ceph's
+upstream main development branch, and is bound to change over time. We intend
+to contribute whatever patches we come up with to the original project, thus
+we need to keep up with its ever evolving state.
+
+## Requirements
+
+We are relying on built Ceph sources to test RGW. We don't have a particular
+preference on how one achieves this. Some of us rely on containers to build
+these sources, while others rely on whatever OS they have on their local
+machines to do so. Eventually we intend to standardize how we obtain the
+RGW binary, but that's not in our immediate plans.
+
+If one is new to Ceph development, the best way to find out how to build
+these sources is to refer to the
+[original documentation](https://docs.ceph.com/en/pacific/install/build-ceph/#id1).
+
+Because we are in a fast development effort at the moment, we have chosen to
+apply patches needed to make our endeavour work on our own fork of the Ceph
+repository. This allows us to fiddle with the Ceph source while experimenting,
+without polluting the upstream Ceph repository. We do intend to upstream any
+patches that make sense though.
+
+That said, we have the `aquarist-labs/ceph` repository as a requirement for
+this project. We can't guarantee that our instructions, or the project as a
+whole, will work flawlessly with the original Ceph project from `ceph/ceph`.
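+
+A typical way to get these sources in place, using the fork and branch
+mentioned above, might look like this (a sketch; the build itself follows the
+standard Ceph procedure, and the exact build target may vary):
+
+```
+git clone -b s3gw https://github.com/aquarist-labs/ceph.git
+cd ceph
+./install-deps.sh
+./do_cmake.sh
+# "radosgw" as a dedicated build target is an assumption
+cd build && ninja radosgw
+```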
+
+## Running the Gateway
+
+One should be able to get a standalone Gateway running by following these steps:
+
+```
+cd build/
+mkdir -p dev/rgw.foo
+bin/radosgw -i foo -d --no-mon-config --debug-rgw 15 \
+ --rgw-backend-store dbstore \
+ --rgw-data $(pwd)/dev/rgw.foo \
+ --run-dir $(pwd)/dev/rgw.foo
+```
+
+Once the daemon is running and outputting its logs to the terminal, one can
+start issuing commands to it. We rely on `s3cmd`, which can be found
+on [GitHub](https://github.com/s3tools/s3cmd) or obtained through `pip`.
+
+`s3cmd` needs to be configured to talk to RGW. This can be achieved by
+first running `s3cmd -c $(pwd)/.s3cfg --configure`. By default, the
+configuration file would be placed in the user's home directory, but for our
+testing purposes it might be better to place it somewhere less intrusive.
+
+The interactive configuration will ask a few questions. We recommend the
+following answers unless one's deployment is different, in which case they
+will need to be adapted accordingly.
+
+```
+ Access Key: 0555b35654ad1656d804
+ Secret Key: h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
+ Default Region: US
+ S3 Endpoint: 127.0.0.1:7480
+ DNS-style bucket+hostname:port template for accessing a bucket: 127.0.0.1:7480/%(bucket)
+ Encryption password: ****
+ Path to GPG program: /usr/bin/gpg
+ Use HTTPS protocol: False
+ HTTP Proxy server name:
+ HTTP Proxy server port: 0
+```
+
+Please note that both the `Access Key` and the `Secret Key` need to be copied
+verbatim. Unfortunately, at this time, the `dbstore` backend statically creates
+an initial user using these values.
+
+Should the configuration be correct, one will then be able to issue commands
+against the running RGW. E.g., `s3cmd mb s3://foo`, to create a new bucket.
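+
+A short end-to-end session against the running daemon, using the configuration
+file created above, might look like this:
+
+```
+s3cmd -c $(pwd)/.s3cfg mb s3://foo
+echo "hello s3gw" > hello.txt
+s3cmd -c $(pwd)/.s3cfg put hello.txt s3://foo
+s3cmd -c $(pwd)/.s3cfg ls s3://foo
+s3cmd -c $(pwd)/.s3cfg get s3://foo/hello.txt hello-copy.txt
+```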
diff --git a/docs/env-k3s.md b/docs/env-k3s.md
new file mode 100644
index 0000000..7a4e254
--- /dev/null
+++ b/docs/env-k3s.md
@@ -0,0 +1,227 @@
+# K3s on Bare Metal
+
+This guide walks you through the setup of a K3s cluster on bare metal.
+If you are looking for a K3s cluster running on virtual machines,
+refer to the [K3s on virtual machines](#k3s-on-virtual-machines) section below.
+
+## Disabling firewalld
+
+On some host systems, including openSUSE Tumbleweed, one will need to disable
+firewalld to ensure proper functioning of k3s and its pods:
+
+```
+$ sudo systemctl stop firewalld.service
+```
+
+This is something we intend to figure out in the near future.
+
+## From the internet
+
+One can easily set up k3s with s3gw from the internet by running:
+
+```
+$ curl -sfL https://raw.githubusercontent.com/aquarist-labs/s3gw-core/main/k3s/setup.sh | sh -
+```
+
+## From source repository
+
+To install a lightweight Kubernetes cluster for development purposes, run
+the following commands. They will install open-iscsi and K3s on your local
+system. Additionally, they will deploy Longhorn and the s3gw in the cluster.
+
+```
+$ cd ~/git/s3gw-core/env
+$ ./setup.sh
+```
+
+## Access the Longhorn UI
+
+The Longhorn UI can be accessed via the URL `http://longhorn.local`.
+
+## Access the S3 API
+
+The S3 API can be accessed via `http://s3gw.local`.
+
+We provide an [s3cmd](https://github.com/s3tools/s3cmd) configuration file
+to easily communicate with the S3 gateway in the k3s cluster.
+
+```
+$ cd ~/git/s3gw-core/k3s
+$ s3cmd -c ./s3cmd.cfg mb s3://foo
+$ s3cmd -c ./s3cmd.cfg ls s3://
+```
+
+Please adapt the `host_base` and `host_bucket` properties in the `s3cmd.cfg`
+configuration file if your K3s cluster is not accessible via localhost.
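+
+For example, if the cluster is reachable at `10.46.201.101` rather than on
+localhost, the relevant properties would look like this (a sketch, assuming the
+HTTP node port 30080; your port may differ):
+
+```
+host_base = 10.46.201.101:30080
+host_bucket = 10.46.201.101:30080/%(bucket)
+```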
+
+## Configure s3gw as Longhorn backup target
+
+Use the following values in the Longhorn settings page to use the s3gw as
+backup target.
+
+* Backup Target: `s3://@us/`
+* Backup Target Credential Secret: `s3gw-secret`
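+
+If the `s3gw-secret` does not exist yet, it can be created along these lines
+(a sketch; Longhorn reads the standard AWS-style keys, the credentials shown
+are the `dbstore` defaults, and the namespace and endpoint are assumptions to
+adapt to your setup):
+
+```
+$ kubectl create secret generic s3gw-secret \
+    --namespace longhorn-system \
+    --from-literal=AWS_ACCESS_KEY_ID=0555b35654ad1656d804 \
+    --from-literal=AWS_SECRET_ACCESS_KEY=h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q== \
+    --from-literal=AWS_ENDPOINTS=http://s3gw.local:30080
+```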
+
+
+# K3s on Virtual Machines
+
+Follow this guide if you wish to run a K3s cluster installed on virtual machines.
+You will have a certain degree of choice in terms of customization options.
+If you are looking for a more lightweight environment running directly on bare metal,
+refer to the [K3s on bare metal](#k3s-on-bare-metal) section above.
+
+## Table of Contents
+
+* [Description](#description)
+* [Requirements](#requirements)
+* [Supported Vagrant boxes](#supported-vagrant-boxes)
+* [Building the environment](#building-the-environment)
+* [Destroying the environment](#destroying-the-environment)
+* [Accessing the environment](#accessing-the-environment)
+ * [ssh](#ssh)
+
+
+
+## Description
+
+The entire environment build process is automated by a set of Ansible playbooks.
+The cluster is created with one `admin` node and
+an arbitrary number of `worker` nodes.
+A single virtual machine acting as an `admin` node is also possible; in this case, it
+will be able to schedule pods as a `worker` node.
+Nodes are named according to the following scheme:
+
+```text
+admin-1
+worker-1
+worker-2
+...
+```
+
+## Requirements
+
+Make sure you have installed the following applications on your system:
+
+* Vagrant
+* libvirt
+* Ansible
+
+Make sure you have installed the following Ansible modules:
+
+* kubernetes.core
+* community.docker.docker_image
+
+You can install them with:
+
+```bash
+$ ansible-galaxy collection install kubernetes.core
+...
+$ ansible-galaxy collection install community.docker
+...
+```
+
+## Supported Vagrant boxes
+
+* opensuse/Leap-15.3.x86_64
+* generic/ubuntu[1604-2004]
+
+## Building the environment
+
+You can build the environment with the `setup-vm.sh` script.
+The simplest form you can use is:
+
+```bash
+$ ./setup-vm.sh build
+Building environment ...
+```
+
+This will trigger the build of a Kubernetes cluster formed by one `admin` node
+and one `worker` node.
+You can customize the build with the following environment variables:
+
+```text
+BOX_NAME : The Vagrant box image used in the cluster (default: opensuse/Leap-15.3.x86_64)
+VM_NET : The virtual machine subnet used in the cluster
+VM_NET_LAST_OCTET_START : Vagrant will increment this value when creating VMs and assigning IPs
+WORKER_COUNT : The number of Kubernetes nodes in the cluster
+ADMIN_MEM : The RAM amount used by the admin node (Vagrant format)
+ADMIN_CPU : The CPU amount used by the admin node (Vagrant format)
+ADMIN_DISK : yes/no, when yes a disk will be allocated for the admin node - this will be effective only for mono clusters
+ADMIN_DISK_SIZE : The disk size allocated for the admin node (Vagrant format) - this will be effective only for mono clusters
+WORKER_MEM : The RAM amount used by a worker node (Vagrant format)
+WORKER_CPU : The CPU amount used by a worker node (Vagrant format)
+WORKER_DISK : yes/no, when yes a disk will be allocated for the worker node
+WORKER_DISK_SIZE : The disk size allocated for a worker node (Vagrant format)
+CONTAINER_ENGINE : The host's local container engine used to build the s3gw container (podman/docker)
+STOP_AFTER_BOOTSTRAP : yes/no, when yes stop the provisioning just after the bootstrapping phase
+S3GW_IMAGE : The s3gw's container image used when deploying the application on k3s
+PROV_USER : The provisioning user used by Ansible (vagrant default)
+S3GW_UI_REPO : A GitHub repository to be used when building the s3gw-ui's image
+S3GW_UI_VERSION : A S3GW_UI_REPO's branch to be used
+SCENARIO : An optional scenario to be loaded in the cluster
+K3S_VERSION : The K3s version to be used (default: v1.23.6+k3s1)
+```
+
+So, you could start a more specialized build with:
+
+```bash
+$ BOX_NAME=generic/ubuntu1804 WORKER_COUNT=4 ./setup-vm.sh build
+Building environment ...
+```
+
+You can create a single virtual machine cluster, containing only the `admin` node, with:
+
+```bash
+$ WORKER_COUNT=0 ./setup-vm.sh build
+Building environment ...
+```
+
+In this case, the node will be able to schedule pods as a `worker` node.
+
+## Destroying the environment
+
+You can destroy a previously built environment with:
+
+```bash
+$ ./setup-vm.sh destroy
+Destroying environment ...
+```
+
+Be sure to match the `WORKER_COUNT` value with the one you used in the build phase.
+Providing a lower value instead of the actual one will cause some allocated VMs
+not to be released by Vagrant.
+
+## Starting the environment
+
+You can start a previously built environment with:
+
+```bash
+$ ./setup-vm.sh start
+Starting environment ...
+```
+
+Be sure to match the `WORKER_COUNT` value with the one you used in the build phase.
+Providing a lower value instead of the actual one will cause some allocated VMs
+not to start.
+
+## Accessing the environment
+
+### ssh
+
+You can connect through `ssh` to all nodes in the cluster.
+To connect to the `admin` node run:
+
+```bash
+$ ./setup-vm.sh ssh admin
+Connecting to admin ...
+```
+
+To connect to a `worker` node run:
+
+```bash
+$ ./setup-vm.sh ssh worker-2
+Connecting to worker-2 ...
+```
+
+When connecting to a worker node be sure to match the `WORKER_COUNT`
+value with the one you used in the build phase.
diff --git a/docs/env-k8s.md b/docs/env-k8s.md
new file mode 100644
index 0000000..f2e0151
--- /dev/null
+++ b/docs/env-k8s.md
@@ -0,0 +1,146 @@
+# K8s
+
+Follow this guide if you wish to run an `s3gw` image on the latest stable Kubernetes release.
+You will be able to quickly build a cluster installed on a set of virtual machines.
+You will have a certain degree of choice in terms of customization options.
+If you are looking for a more lightweight environment running directly on bare metal,
+refer to our [K3s guide](./env-k3s.md).
+
+## Table of Contents
+
+* [Description](#description)
+* [Requirements](#requirements)
+* [Building the environment](#building-the-environment)
+* [Destroying the environment](#destroying-the-environment)
+* [Accessing the environment](#accessing-the-environment)
+ * [ssh](#ssh)
+
+
+
+## Description
+
+The entire environment build process is automated by a set of Ansible playbooks.
+The cluster is created with exactly one `admin` node and
+an arbitrary number of `worker` nodes.
+A single virtual machine acting as an `admin` node is also possible; in this case, it
+will be able to schedule pods as a `worker` node.
+Nodes are named according to the following scheme:
+
+```text
+admin
+worker-1
+worker-2
+...
+```
+
+## Requirements
+
+Make sure you have installed the following applications on your system:
+
+* Vagrant
+* libvirt
+* Ansible
+
+## Building the environment
+
+You can build the environment with the `setup-k8s.sh` script.
+The simplest form you can use is:
+
+```bash
+$ ./setup-k8s.sh build
+Building environment ...
+```
+
+This will trigger the build of a Kubernetes cluster formed by one `admin` node
+and one `worker` node.
+You can customize the build with the following environment variables:
+
+```text
+IMAGE_NAME : The Vagrant box image used in the cluster
+VM_NET : The virtual machine subnet used in the cluster
+VM_NET_LAST_OCTET_START : Vagrant will increment this value when creating VMs and assigning IPs
+CIDR_NET : The CIDR subnet used by the Calico network plugin
+WORKER_COUNT : The number of Kubernetes workers in the cluster
+ADMIN_MEM : The RAM amount used by the admin node (Vagrant format)
+ADMIN_CPU : The CPU amount used by the admin node (Vagrant format)
+ADMIN_DISK : yes/no, when yes a disk will be allocated for the admin node - this will be effective only for mono clusters
+ADMIN_DISK_SIZE : The disk size allocated for the admin node (Vagrant format) - this will be effective only for mono clusters
+WORKER_MEM : The RAM amount used by a worker node (Vagrant format)
+WORKER_CPU : The CPU amount used by a worker node (Vagrant format)
+WORKER_DISK : yes/no, when yes a disk will be allocated for the worker node
+WORKER_DISK_SIZE : The disk size allocated for a worker node (Vagrant format)
+CONTAINER_ENGINE : The host's local container engine used to build the s3gw container (podman/docker)
+STOP_AFTER_BOOTSTRAP : yes/no, when yes stop the provisioning just after the bootstrapping phase
+START_LOCAL_REGISTRY : yes/no, when yes start a local insecure image registry at admin.local:5000
+S3GW_IMAGE : The s3gw's container image used when deploying the application on k8s
+K8S_DISTRO : The Kubernetes distribution to install; specify k3s or k8s (k8s default)
+INGRESS : The ingress implementation to be used; NGINX or Traefik (NGINX default)
+PROV_USER : The provisioning user used by Ansible (vagrant default)
+S3GW_UI_REPO : A GitHub repository to be used when building the s3gw-ui's image
+S3GW_UI_VERSION : A S3GW_UI_REPO's branch to be used
+SCENARIO : An optional scenario to be loaded in the cluster
+```
+
+So, you could start a more specialized build with:
+
+```bash
+$ IMAGE_NAME=generic/ubuntu1804 WORKER_COUNT=4 ./setup-k8s.sh build
+Building environment ...
+```
+
+You can create a single virtual machine cluster, containing only the `admin` node, with:
+
+```bash
+$ WORKER_COUNT=0 ./setup-k8s.sh build
+Building environment ...
+```
+
+In this case, the node will be able to schedule pods as a `worker` node.
+
+## Destroying the environment
+
+You can destroy a previously built environment with:
+
+```bash
+$ ./setup-k8s.sh destroy
+Destroying environment ...
+```
+
+Be sure to match the `WORKER_COUNT` value with the one you used in the build phase.
+Providing a lower value instead of the actual one will cause some allocated VMs
+not to be released by Vagrant.
+
+## Starting the environment
+
+You can start a previously built environment with:
+
+```bash
+$ ./setup-k8s.sh start
+Starting environment ...
+```
+
+Be sure to match the `WORKER_COUNT` value with the one you used in the build phase.
+Providing a lower value instead of the actual one will cause some allocated VMs
+not to start.
+
+## Accessing the environment
+
+### ssh
+
+You can connect through `ssh` to all nodes in the cluster.
+To connect to the `admin` node run:
+
+```bash
+$ ./setup-k8s.sh ssh admin
+Connecting to admin ...
+```
+
+To connect to a `worker` node run:
+
+```bash
+$ ./setup-k8s.sh ssh worker-2
+Connecting to worker-2 ...
+```
+
+When connecting to a worker node be sure to match the `WORKER_COUNT`
+value with the one you used in the build phase.
diff --git a/docs/s3-compatibility-table.md b/docs/s3-compatibility-table.md
new file mode 100644
index 0000000..4e3d2ad
--- /dev/null
+++ b/docs/s3-compatibility-table.md
@@ -0,0 +1,29 @@
+# S3 API compatibility
+The following table describes the current support status of Amazon S3 functional features (✅ = supported, empty = not yet supported):
+
+| Feature | Status |
+|---------------------------|---------|
+| List Buckets | ✅ |
+| Delete Bucket | ✅ |
+| Create Bucket | ✅ |
+| Put Object | ✅ |
+| Delete Object | ✅ |
+| Get Object | ✅ |
+| Bucket Lifecycle | |
+| Bucket Replication | |
+| Policy (Buckets, Objects) | |
+| Bucket Website | |
+| Bucket ACLs (Get, Put) | |
+| Bucket Location | |
+| Bucket Notification | |
+| Bucket Object Versions | |
+| Get Bucket Info (HEAD) | |
+| Bucket Request Payment | |
+| Object ACLs (Get, Put) | |
+| Get Object Info (HEAD) | |
+| POST Object | |
+| Copy Object | |
+| Multipart Uploads | |
+| Object Tagging | |
+| Bucket Tagging | |
+| Storage Class | |
diff --git a/docs/s3gw-repos.md b/docs/s3gw-repos.md
new file mode 100644
index 0000000..829021b
--- /dev/null
+++ b/docs/s3gw-repos.md
@@ -0,0 +1,7 @@
+# S3GW Repositories
+The S3 Gateway project is spread across the following repositories:
+
+* https://github.com/aquarist-labs/s3gw-core
+* https://github.com/aquarist-labs/ceph
+* https://github.com/aquarist-labs/s3gw-charts
+* https://github.com/aquarist-labs/s3gw-status
diff --git a/env/README.md b/env/README.md
deleted file mode 100644
index f1cb20b..0000000
--- a/env/README.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# K3s environment running s3gw with Longhorn
-
-This is the entrypoint to setup a Kubernetes cluster running s3gw with Longhorn.
-You can choose to install a **K3s** cluster directly on your machine
-or on top of virtual machines.
-
-Refer to the appropriate section to proceed with the setup:
-
-* [K3s on bare metal](./README.bm.md)
-* [K3s on virtual machines](./README.vm.md)
-
-## Ingresses
-
-Services are exposed with an Kubernetes ingress; each service category is
-allocated on a separate virtual host:
-
-* **Longhorn dashboard**, on: `longhorn.local`
-* **s3gw**, on: `s3gw.local` and `s3gw-no-tls.local`
-* **s3gw s3 explorer**, on: `s3gw-ui.local` and `s3gw-ui-no-tls.local`
-
-Host names are exposed with a node port service listening on ports
-30443 (https) and 30080 (http).
-You are required to resolve these names with the external ip of one
-of the nodes of the cluster.
-
-When you are running the cluster on a virtual machine,
-you can patch host's `/etc/hosts` file as follow:
-
-```text
-10.46.201.101 longhorn.local s3gw.local s3gw-no-tls.local s3gw-ui.local s3gw-ui-no-tls.local
-```
-
-This makes host names resolving with the admin node.
-Otherwise, when you are running the cluster on bare metal,
-you can patch host's `/etc/hosts` file as follow:
-
-```text
-127.0.0.1 longhorn.local s3gw.local s3gw-no-tls.local s3gw-ui.local s3gw-ui-no-tls.local
-```
-
-Services can now be accessed at:
-
-```text
-https://longhorn.local:30443
-https://s3gw.local:30443
-http://s3gw-no-tls.local:30080
-https://s3gw-ui.local:30443
-http://s3gw-ui-no-tls.local:30080
-```