This repository has been archived by the owner on Feb 27, 2023. It is now read-only.

Merge pull request #45 from Bradamant3/doc-edits
edit readmes, more for organization and clarity
stevesloka authored Apr 20, 2018
2 parents 091704c + c131080 commit bdd5e2d
Showing 4 changed files with 97 additions and 100 deletions.
18 changes: 9 additions & 9 deletions README.md
Original file line number Diff line number Diff line change
@@ -4,15 +4,15 @@ Maintainers: [Heptio](https://github.com/heptio)

## Overview

Heptio Gimbal is a layer-7 load balancing platform built on Kubernetes, Envoy, and Contour. It provides a scalable, multi-team, and API-driven ingress tier capable of routing internet traffic to multiple upstream Kubernetes clusters and traditional infrastructure technologies including OpenStack.
Heptio Gimbal is a layer-7 load balancing platform built on Kubernetes, the [Envoy proxy](https://www.envoyproxy.io/), and Heptio's Kubernetes Ingress controller, [Contour](https://heptio.github.io/contour/). It provides a scalable, multi-team, and API-driven ingress tier capable of routing Internet traffic to multiple upstream Kubernetes clusters and to traditional infrastructure technologies such as OpenStack.

Gimbal was developed out of a joint effort between Heptio and Yahoo Japan Corporation's subsidiary, Actapio, to modernize Yahoo Japan’s infrastructure with Kubernetes without impacting legacy investments in OpenStack.
Gimbal was developed out of a joint effort between Heptio and Yahoo Japan Corporation's subsidiary, Actapio, to modernize Yahoo Japan’s infrastructure with Kubernetes, without affecting legacy investments in OpenStack.

At launch, Gimbal can discover services from Kubernetes and OpenStack clusters, but we expect to support additional platforms in the future.

### Common Use Cases

* Organizations with multiple Kubernetes clusters that need a way to manage ingress traffic across them
* Organizations with multiple Kubernetes clusters that need a way to manage ingress traffic across clusters
* Organizations with Kubernetes and OpenStack infrastructure that need a consistent load balancing tier
* Organizations that want to enable their development teams to safely self-manage their routing configuration
* Organizations with bare metal or on-premises infrastructure that want cloud-like load balancing capabilities
@@ -21,33 +21,33 @@ At launch, Gimbal can discover services from Kubernetes and OpenStack clusters,

## Prerequisites

Gimbal is tested with Kubernetes clusters running version 1.9 and later but should work with any cluster starting at version 1.7.
Gimbal is tested with Kubernetes clusters running version 1.9 and later but should work with any cluster running version 1.7 or later.

Gimbal's service discovery is currently tested with Kubernetes 1.7+ and OpenStack Mitaka.

## Get started

Deployment of Gimbal is outlined in the [deployment section](deployment/README.md) and also includes quick start applications.
Deployment of Gimbal is outlined in the [deployment section](deployment/README.md), which includes quick start applications.

## Documentation

Documentation on all the Gimbal components can be found on the [docs page](docs/README.md).
Documentation for all the Gimbal components can be found in the [docs directory](docs/README.md).


## Known Limitations

* Upstream Kubernetes Pods and OpenStack VMs must be routable from the Gimbal load balancing cluster
* No support for Kubernetes clusters with Overlay networks
* No support for Kubernetes clusters with overlay networks
* We are looking for feedback on community requirements to design a solution. One potential option is to use one GRE tunnel per upstream cluster. [Feedback welcome here](https://github.com/heptio/gimbal/issues/39)!
* Kubernetes Ingress API is limited and insecure
* The Kubernetes Ingress API is limited and insecure
* Only one backend per route
* Anyone can modify route rules for a domain
* More complex load balancing features like weighting and strategy are not supported
* Gimbal & Contour will solve this with a [new IngressRoute CRD](https://github.com/heptio/contour/blob/master/design/ingressroute-design.md)
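
The single-backend limitation above is visible in the Ingress spec itself: each path rule names exactly one backend Service, so traffic cannot be split or weighted across services. A minimal, hypothetical example (names and namespace are illustrative, using the `extensions/v1beta1` API current at the time):

```yaml
# Hypothetical Ingress: each path rule points at exactly one backend,
# so weighting or splitting traffic across services is not expressible.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  namespace: team-a
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app      # the one and only backend for this route
          servicePort: 80
```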

## Troubleshooting

If you encounter any problems that the documentation does not address, please [file an issue](https://github.com/heptio/gimbal/issues) or talk to us on the Kubernetes Slack team channel `#gimbal`.
If you encounter any problems that the documentation does not address, please [file an issue](https://github.com/heptio/gimbal/issues) or talk to us on the Kubernetes Slack team channel #gimbal.

## Contributing

124 changes: 60 additions & 64 deletions deployment/README.md
@@ -1,46 +1,42 @@
# Deployment
<!-- TOC -->

- [Deployment](#deployment)
    - [Setup / Requirements](#setup--requirements)
    - [Contour](#contour)
    - [Discoverers](#discoverers)
        - [Kubernetes](#kubernetes)
        - [Openstack](#openstack)
    - [Prometheus](#prometheus)
        - [Quick start](#quick-start)
        - [Access the Prometheus Web UI](#access-the-prometheus-web-ui)
        - [Access the Alert Manager Web UI](#access-the-alert-manager-web-ui)
    - [Grafana](#grafana)
        - [Quick Start](#quick-start)
        - [Accessing Grafana UI](#accessing-grafana-ui)
        - [Configure Grafana](#configure-grafana)
            - [Configure Datasource](#configure-datasource)
            - [Dashboards](#dashboards)
                - [Add Sample Kubernetes Dashboard](#add-sample-kubernetes-dashboard)
    - [Validation](#validation)
        - [Discovery Cluster](#discovery-cluster)
        - [Gimbal Cluster](#gimbal-cluster)
- [Prerequisites](#prerequisites)
- [Deploy Contour](#deploy-contour)
- [Deploy Discoverers](#discoverers)
    - [Kubernetes](#kubernetes)
    - [Openstack](#openstack)
- [Deploy Prometheus](#deploy-prometheus)
    - [Quick start](#quick-start)
    - [Access the Prometheus web UI](#access-the-prometheus-web-ui)
    - [Access the Alertmanager web UI](#access-the-alertmanager-web-ui)
- [Deploy Grafana](#deploy-grafana)
    - [Quick start](#quick-start)
    - [Access Grafana UI](#access-grafana-ui)
    - [Configure Grafana](#configure-grafana)
        - [Configure datasource](#configure-datasource)
        - [Dashboards](#dashboards)
            - [Add Sample Kubernetes Dashboard](#add-sample-kubernetes-dashboard)
- [Validation](#validation)
    - [Discovery cluster](#discovery-cluster)
    - [Gimbal cluster](#gimbal-cluster)

<!-- /TOC -->
# Deployment

Following are instructions to get all the components up and running.
## Prerequisites

## Setup / Requirements
- A copy of this repository. Download, or clone:

A copy of this repo locally which is easily accomplished by cloning or downloading a copy:
```sh
$ git clone [email protected]:heptio/gimbal.git
```

```sh
$ git clone [email protected]:heptio/gimbal.git
```
- A single Kubernetes cluster to deploy Gimbal
- Kubernetes or Openstack clusters with flat networking. That is, each Pod has a routable IP address on the network.

Additionally, this guide will assume a single Kubernetes cluster to deploy Gimbal, as well as Kubernetes or Openstack clusters with `flat` networking, meaning, pods get route-able IP address on the network.
## Deploy Contour

## Contour

Contour is the system which handles the Ingress traffic. It utilizes Envoy which is an L7 proxy and communication bus designed for large modern service oriented architectures.

Envoy is the data component of Contour and handles the network traffic, Contour drives the configuration of Envoy based upon the Kubernetes cluster configuration.
For information about Contour, see [the Gimbal architecture doc](../docs/gimbal-architecture.md).

```sh
# Navigate to deployment directory
Expand All @@ -50,11 +46,11 @@ $ cd deployment
$ kubectl create -f contour/
```

_NOTE: The current configuration exposes the Envoy Admin UI so that Prometheus can scrape for metrics!_
**NOTE**: The current configuration exposes the Envoy Admin UI so that Prometheus can scrape for metrics.
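
For context, scraping that admin endpoint looks roughly like the following Prometheus job; the job name, label selector, and discovery settings here are assumptions for illustration, not the shipped configuration:

```yaml
# Hypothetical scrape job: collect Envoy metrics from its admin port.
scrape_configs:
  - job_name: envoy
    metrics_path: /stats/prometheus   # Envoy's Prometheus-format stats endpoint
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only Pods labeled app=envoy (assumed label).
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: envoy
        action: keep
```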

## Discoverers
## Deploy Discoverers

Service discovery is enabled via the Discoverers which have both Kubernetes and Openstack implementations.
Service discovery is enabled with the Discoverers, which have both Kubernetes and Openstack implementations.

```sh
# Create gimbal-discoverer namespace
@@ -65,7 +61,7 @@ kubectl create -f gimbal-discoverer/01-common.yaml

The Kubernetes Discoverer is responsible for looking at all services and endpoints in a Kubernetes cluster and synchronizing them to the host cluster.

[Credentials](../docs/kubernetes-discoverer.md#credentials) to the remote cluster are required to be created as a secret.
[Credentials](../docs/kubernetes-discoverer.md#credentials) to the remote cluster must be created as a Secret.

```sh
# Kubernetes secret
@@ -77,13 +73,13 @@ $ kubectl -n gimbal-discovery create secret generic remote-discover-kubecfg \
$ kubectl apply -f gimbal-discoverer/02-kubernetes-discoverer.yaml
```

Technical details on how the Kubernetes Discoverer works can be found in the [docs section](../docs/kubernetes-discoverer.md).
For more information, see [the Kubernetes Discoverer doc](../docs/kubernetes-discoverer.md).

### Openstack

The Openstack Discoverer is responsible for looking at all LBaaS load balancers and their members in an Openstack cluster and synchronizing them to the host cluster.

[Credentials](../docs/openstack-discoverer.md#credentials) to the remote cluster are required to be created as a secret.
[Credentials](../docs/openstack-discoverer.md#credentials) to the remote cluster must be created as a secret.

```sh
# Openstack secret
@@ -99,11 +95,11 @@ $ kubectl -n gimbal-discovery create secret generic remote-discover-openstack \
$ kubectl apply -f gimbal-discoverer/02-openstack-discoverer.yaml
```

Technical details on how the Openstack Discoverer works can be found in the [docs section](../docs/openstack-discoverer.md).
For more information, see [the OpenStack Discoverer doc](../docs/openstack-discoverer.md).

## Prometheus

This directory contains a sample development deployment of Prometheus and Alert Manager using temporary storage (e.g. emptyDir space).
Sample development deployment of Prometheus and Alertmanager using temporary storage.
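
Here "temporary storage" means `emptyDir` volumes: metric history lives only as long as the Pod does and is lost on rescheduling. A sketch of the relevant piece of such a Pod spec (illustrative, not the exact manifest shipped here):

```yaml
# Sketch: Prometheus data on an emptyDir volume; wiped if the Pod moves.
volumes:
  - name: prometheus-data
    emptyDir: {}
containers:
  - name: prometheus
    volumeMounts:
      - name: prometheus-data
        mountPath: /data
```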

### Quick start

@@ -117,15 +113,15 @@ $ cd kube-state-metrics
$ kubectl apply -f kubernetes/
```

### Access the Prometheus Web UI
### Access the Prometheus web UI

```sh
$ kubectl -n gimbal-monitoring port-forward $(kubectl -n gimbal-monitoring get pods -l app=prometheus -l component=server -o jsonpath='{.items[0].metadata.name}') 9090:9090
```

then go to [http://localhost:9090](http://localhost:9090) in your browser

### Access the Alert Manager Web UI
### Access the Alertmanager web UI

```sh
$ kubectl -n gimbal-monitoring port-forward $(kubectl -n gimbal-monitoring get pods -l app=prometheus -l component=alertmanager -o jsonpath='{.items[0].metadata.name}') 9093:9093
@@ -137,7 +133,7 @@ then go to [http://localhost:9093](http://localhost:9093) in your browser

Sample development deployment of Grafana using temporary storage.

### Quick Start
### Quick start

```sh
# Deploy
@@ -149,56 +145,56 @@ $ kubectl create secret generic grafana -n gimbal-monitoring \
--from-literal=grafana-admin-user=admin
```

### Accessing Grafana UI
### Access the Grafana UI

```sh
$ kubectl port-forward $(kubectl get pods -l app=grafana -n gimbal-monitoring -o jsonpath='{.items[0].metadata.name}') 3000 -n gimbal-monitoring
```

Access Grafana at http://localhost:3000 in your browser. Use `admin` as the username.
then go to [http://localhost:3000](http://localhost:3000) in your browser, with `admin` as the username.

### Configure Grafana

Grafana requires some configuration after it's deployed, use the following steps to configure a datasource and import a dashboard to validate the connection.
Grafana requires some configuration after it's deployed. These steps configure a datasource and import a dashboard to validate the connection.

#### Configure Datasource
#### Configure datasource

1. From main Grafana page, click on `Add Datasource`
2. For `Name` enter `prometheus`
3. Choose `Prometheus` under `Type` selector
4. In the next section, enter `http://prometheus:9090` for the `URL`
5. Click `Save & Test`
6. Look for the message box in green stating `Data source is working`
1. On the main Grafana page, click **Add Datasource**
2. For **Name** enter _prometheus_
3. In `Type` selector, choose _Prometheus_
4. For the URL, enter `http://prometheus:9090`
5. Click **Save & Test**
6. Look for the message box in green stating _Data source is working_
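
As an alternative to clicking through the UI, Grafana 5 and later can provision the same datasource from a file at startup. A sketch, under the assumption that the deployment is adapted to mount provisioning config:

```yaml
# Hypothetical file mounted at /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: prometheus
    type: prometheus
    url: http://prometheus:9090
    access: proxy
    isDefault: true
```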

#### Dashboards

Dashboards for Envoy and the Discovery components are included as part of the sample Grafana deployment.

##### Add Sample Kubernetes Dashboard

Add sample dashboard to validate data source is collecting data:
Add sample dashboard to validate that the data source is collecting data:

1. From main page, click on `plus` icon and choose `Import dashboard`
2. Enter `1621` in the first box
3. Under the `prometheus` section choose the data source created in previous section
4. Click `Import`
1. On the main page, click the plus icon and choose **Import dashboard**
2. Enter _1621_ in the first box
3. In the **prometheus** section, choose the datasource that you just created
4. Click **Import**

## Validation

Once the components are deployed, the deployment can be verified with the following steps:
Now you can verify the deployment:

### Discovery Cluster
### Discovery cluster

This example utilizes a Kubernetes cluster as the discovered cluster which was configured [previously](#kubernetes). We will deploy a few sample applications into the `default` namespace:
This example deploys a sample application into the default namespace of [the discovered Kubernetes cluster that you created](#kubernetes).

```sh
# Deploy sample apps
$ kubectl apply -f example-workload/deployment.yaml
```

### Gimbal Cluster
### Gimbal cluster

These commands should be run on the Gimbal cluster to verify it's components:
Run the following commands on the Gimbal cluster to verify its components:

```sh
# Verify Discoverer Components
27 changes: 14 additions & 13 deletions docs/README.md
@@ -1,25 +1,16 @@
# Documentation

Heptio Gimbal is a layer-7 load balancing platform built on Kubernetes, Envoy, and Contour. It provides a scalable, multi-team, and API-driven ingress tier capable of routing internet traffic to multiple upstream Kubernetes clusters and traditional infrastructure technologies including OpenStack.
See the root-level README for an introduction, and the [deployment directory](../deployment/README.md) to get started with setting up and deploying Gimbal.

## Data Flow

![Data Flow](images/data-flow.png)

## Architecture Overview

![Arch Overview](images/overview.png)

You can read more about the [Gimbal Architecture](gimbal-architecture.md).
Here you can dig into the details of how Gimbal works, and explore more advanced topics for operators and users.

## Overview Guides

The following guides will describe how components of Gimbal function and interact with other systems:
These guides describe how the components of Gimbal function and how they interact with other systems:

- [Kubernetes Discoverer](kubernetes-discoverer.md)
- [Openstack Discoverer](openstack-discoverer.md)

Guides on how to setup / deploy Gimbal can be found in the [deployment guide](../deployment/README.md).

## Operator Topics

- [Manage Backend Clusters and Discovery](manage-backends.md)
@@ -31,3 +22,13 @@ Guides on how to setup / deploy Gimbal can be found in the [deployment guide](..

- [Route Specification](route.md)
- [Dashboards / Monitoring / Alerting](monitoring.md)

## Data Flow

![Data Flow](images/data-flow.png)

## Architecture Overview

![Arch Overview](images/overview.png)

More about the [Gimbal Architecture](gimbal-architecture.md).