Grafana Prototyping Environment
This repository contains a tool named grape
that allows you to quickly create a local Grafana visualization
development environment using docker containers for the Grafana
server and the PostgreSQL database, which relieves you from having
to install and configure Grafana or PostgreSQL yourself.
NOTE: if you are interested in how to run a Grafana server and Postgres database in containers and connect them manually without a python wrapper, please see this gist.
It is more than a simple wrapper around the docker build
and docker run
commands because it provides additional services. For example, it can also
be used to import external environments, modify them, and then export them
back, which enables isolated development.
In addition, a set of samples is provided that demonstrate how to use the tool for common operations.
The primary audience for this tool is folks who want to play around with grafana without having to set it up. As long as you have docker you are good to go.
I use it to experiment on production environments by using the import capability to duplicate the system of interest locally (on my laptop), with the caveat that you must know the passwords for the data sources because Grafana filters them out of its REST responses.
I also use it to test out new ideas. Typically I spin up an environment, populate the database with modeling data, and start playing with visualization ideas.
Here is a screenshot from demo02:
See demo01 and demo02 for additional Grafana screenshots and database access details using psql.
This tool has been tested on macOS 10.15.6, Ubuntu 18.04, and Windows 10.
To use this you must have:
- docker (https://docs.docker.com/get-docker/)
  - The docker group must be available on linux.
- bash
  - It is only needed for the samples.
  - If you are running on windows you will need WSL2 or a linux VM.
- python-3.8 or later
  - This is because the log module uses the logger stacklevel argument.
- pipenv
- A recent version of git
- gnu make
  - If you are running on windows you will need WSL2 or a linux VM.
- unzip
  - If you are running on windows you will need WSL2 or a linux VM.
- sudo
  - On linux, the program must be able to rm the database volume mount.
Do these steps to get started:
$ git clone https://github.com/eSentire/grape.git
$ cd grape
$ make
This can take a while to run.
To see all of the available make targets: make help.
On windows you may have to run something like dos2unix to convert the bash shell scripts in
samples/*/run.sh, or use WSL2 or a linux VM to run the demos.
This will enable commands like make demo01
to work.
Program help is available via the help command:
$ pipenv run grape help
To get help about the available make commands type: make help.
To create the infrastructure:
$ pipenv run grape create -v -g 4600 -n example
This will create two docker containers: examplegr,
which is the Grafana server, and examplepg,
which is the PostgreSQL server, and then
connects them so that the PostgreSQL container database becomes a datasource
in the Grafana container.
If the docker containers were previously killed because of something
like a system crash, grape create
will restart them in the same
state. The grafana dashboards and postgresql database contents will
not be lost. Beware that the grape delete
operation will destroy the
state data.
It will also create and map the local example/pg/mnt/pgdata
directory
to the database container to save database results and
example/gr/mnt/grdata
to the grafana container to save the grafana
dashboard data. This is done so that if the container is restarted for
any reason the postgresql database and grafana dashboards are not
lost. And, finally, it will connect the database as a source in the
grafana server.
This is what the persistent storage looks like on the host:
$ tree -L 3 example
example
├── gr
│ ├── mnt
│ │ └── grdata
│ └── start.sh
└── pg
├── mnt
│ └── pgdata
└── start.sh
6 directories, 2 files
This is what the persistent storage looks like from inside the containers:
$ docker exec -it examplepg ls -l /mnt
total 0
drwxr-xr-x 5 506 dialout 160 Aug 27 17:04 grdata
drwx------ 26 506 dialout 832 Aug 27 17:04 pgdata
$ docker exec -it examplegr ls -l /mnt
total 0
drwxr-xr-x 5 506 dialout 160 Aug 27 17:04 grdata
drwx------ 26 506 dialout 832 Aug 27 17:04 pgdata
It also creates the database container start script in
example/pg/start.sh
and the grafana container start script in
example/gr/start.sh. These scripts contain the raw docker
commands to start the database and grafana containers with all
existing data if either container is killed. Once started the
containers may take up to 30 seconds to initialize.
This is what the automatically generated scripts look like on the host:
$ tree -L 2 example
example
├── gr
│ ├── mnt
│ └── start.sh
└── pg
├── mnt
└── start.sh
4 directories, 2 files
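Those start scripts are essentially canned docker run invocations that remount the persistent mnt directory. A rough sketch of how such a command line might be assembled (the image name, port mapping, and flags here are illustrative assumptions, not the exact command grape generates):

```python
# Sketch: assemble a docker run command that restarts a container with
# its persistent /mnt volume mount. Names and ports are illustrative.
def build_start_command(name, image, host_port, container_port, mnt_dir):
    return [
        "docker", "run", "-d",
        "--name", name,
        "-p", f"{host_port}:{container_port}",
        "-v", f"{mnt_dir}:/mnt",  # data survives container restarts
        image,
    ]

cmd = build_start_command("examplepg", "postgres", 4601, 5432, "example/pg/mnt")
print(" ".join(cmd))
```

Because the mnt directory is remounted, a restarted container picks up the database and dashboard state exactly where it left off.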
You can now access the empty dashboard at http://localhost:4600 in your
browser. The username is admin and the password is admin.
At this point you can create grafana visualizations using datasources, folders and dashboards as well as database tables.
The delete operation deletes all artifacts created by the create operation.
$ pipenv run grape delete -v -n example
It will remove the containers and the local database storage.
You can access the database using psql
interactively like this:
$ docker exec -it examplepg psql -d postgres -U postgres
You can also run batch commands by taking care to make the SQL file visible to the container:
$ edit x.sql
$ cp x.sql example/pg/mnt/ # makes it visible as /mnt/x.sql in the container
$ docker exec -it examplepg psql -d postgres -U postgres -1 -f /mnt/x.sql
The save operation captures the specified model in a zip file. It is what you use to capture changes.
$ pipenv run grape save -v -n example -g 4600 -f /mnt/save.zip
The load operation updates the model from a saved state (zip file).
$ pipenv run grape load -v -n example -g 4600 -f /mnt/save.zip
The import operation captures an external grafana environment for the purposes of experimenting or working locally.
It imports the datasources without passwords because Grafana never exports passwords, which means they have to be updated manually after the import operation completes, or by specifying the passwords in the associated conf file. It does not import databases.
The import operation creates a zip file that can be used by a load operation. It requires a conf file that is specified by the -x option.
Here is the sequence of operations that define an import flow:
- create the external conf file
- import the external grafana
- load it into the local model
Below are the actual commands for downloading an external
Grafana service into the local foo model.
$ rm -rf foo.zip
$ edit import.yaml # to set the external access parameters.
$ time pipenv run grape del -v -g 4800 -n foo
$ time pipenv run grape import -v -x import.yaml -f foo.zip
$ time pipenv run grape load -v -f foo.zip -n foo -g 4800
Once the above steps are complete, you will be able to access the local version at http://localhost:4800.
This is what a sample external conf file looks like:
# Data to access an external grafana server.
# If any fields are not present, then the user
# will be prompted for them.
url: 'https://official.grafana.server'
username: 'bigbob'
password: 'supersecret'
# The passwords for each database can optionally
# be specified. If they are not specified, then
# they must be entered manually because grafana
# does not export them.
databases:
  - name: 'PostgreSQL'
    password: 'donttellanyone'
  - name: 'InfluxDB'
    password: 'topsecret!'
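Because Grafana strips secrets from its REST responses, the import has to splice the conf file passwords back into the imported datasources. A minimal sketch of that merge, assuming each datasource is a plain dict with a name and a secureJsonData section (the secureJsonData field follows Grafana's datasource API, but the merge itself is an illustration, not grape's actual code):

```python
def apply_passwords(datasources, conf_databases):
    """Fill in datasource passwords from the conf file entries."""
    passwords = {d["name"]: d["password"] for d in conf_databases}
    for ds in datasources:
        if ds["name"] in passwords:
            # Grafana keeps datasource secrets under secureJsonData.
            ds.setdefault("secureJsonData", {})["password"] = passwords[ds["name"]]
    return datasources

conf = [{"name": "PostgreSQL", "password": "donttellanyone"}]
sources = [{"name": "PostgreSQL"}, {"name": "other"}]
apply_passwords(sources, conf)
```

Datasources with no matching conf entry are left untouched, which is why those passwords must then be entered manually after the load.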
The export operation is the inverse of the import operation. You use it to update an external grafana server.
The sequence of operations to perform an export is:
- create the external conf file
- save the local grafana service to a zip file
- export to the external service.
Here are the commands:
$ edit export.yaml
$ time pipenv run grape save -v -n foo -g 4800 -f foo.zip
$ time pipenv run grape export -v -x export.yaml -f foo.zip
This is what a sample external conf file looks like:
# Data to access an external grafana server.
# If any fields are not present, then the user
# will be prompted for them.
url: 'https://official.grafana.server'
username: 'bigbob'
password: 'supersecret'
# The passwords for each database can optionally
# be specified. If they are not specified, then
# they must be entered manually because grafana
# does not export them.
databases:
  - name: 'PostgreSQL'
    password: 'donttellanyone'
  - name: 'InfluxDB'
    password: 'topsecret!'
Generate a status report of all docker containers associated with grape.
$ grape status -v
INFO 2021-01-11 16:36:14,134 status.py:187 - status
Name Type Version Status Started Elapsed Id Image Created Port
grape_test1gr gr 0.4.3 running 2021-01-12T00:34:48.618074371Z 00:01:25 f85986a629 sha256:9ad3ce931a 2021-01-12T00:34:48.259793436Z 4700
grape_test1pg pg 0.4.3 running 2021-01-12T00:34:49.13129136Z 00:01:25 28e4372c1b sha256:0b0b68fee3 2021-01-12T00:34:48.673554391Z 4701
grape_test2gr gr 0.4.3 running 2021-01-12T00:35:11.212822109Z 00:01:02 632389a8d8 sha256:9ad3ce931a 2021-01-12T00:35:10.865475598Z 4710
grape_test2pg pg 0.4.3 running 2021-01-12T00:35:11.692884845Z 00:01:02 65b9b2186b sha256:0b0b68fee3 2021-01-12T00:35:11.256244428Z 4711
jbhgr gr 0.4.3 running 2021-01-11T16:47:24.427558155Z 07:48:49 5368be647f sha256:9ad3ce931a 2021-01-11T16:47:24.099948853Z 4640
jbhpg pg 0.4.3 running 2021-01-11T16:47:24.907664979Z 07:48:49 7cf96782d7 sha256:0b0b68fee3 2021-01-11T16:47:24.459861004Z 4641
INFO 2021-01-11 16:36:14,221 status.py:222 - done
Generate a tree view of a grape Grafana server's datasources, folders and dashboards.
$ pipenv run grape tree -g 4640
jbhgr:4640
├─ datasources
│ └─ jbhpg:1:postgres
└─ folders
├─ JBH:1
│ └─ dashboards
│ ├─ Northstar Dashboard Mock:id=5:uid=lC0QCuaMz:panels=33
│ └─ OKR Initiatives Health:id=6:uid=peAwjuaMk:panels=6
└─ Northstar:2
└─ dashboards
├─ Jenkins Build Health Details:id=4:uid=ir0QjX-Mz:panels=9
└─ Jenkins Build Health:id=3:uid=6Q0QCuaGk:panels=70
This section describes the tools in the local tools
directory. They are not integrated into grape
at this time because they don't fit the grape idiom,
but that is a completely subjective decision
which can be revisited at any time.
This is a standalone tool to read a CSV data file with a header row and convert it to SQL instructions to create and populate a table generically.
It is generic because it figures out the field types by analyzing the data.
There are a number of options for specifying the output, how to convert certain values and what SQL types to use for integers, floats, dates and strings.
It is useful for adding CSV data to your dashboards.
See the help (-h) for more detailed information.
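The core idea, inferring SQL column types by inspecting the data, can be sketched like this (the real tool has many more options; the type names and fallback order here are illustrative):

```python
import csv
import io

def infer_type(values):
    """Pick an SQL type by trying progressively looser parses."""
    try:
        for v in values:
            int(v)
        return "INTEGER"
    except ValueError:
        pass
    try:
        for v in values:
            float(v)
        return "FLOAT"
    except ValueError:
        return "TEXT"

def csv_to_create(table, text):
    """Read CSV text with a header row and emit a CREATE TABLE statement."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    types = [infer_type([r[i] for r in data]) for i in range(len(header))]
    fields = ", ".join(f"{h} {t}" for h, t in zip(header, types))
    return f"CREATE TABLE {table} ({fields});"

sql = csv_to_create("metrics", "host,load\nweb1,0.5\nweb2,1.2\n")
print(sql)  # CREATE TABLE metrics (host TEXT, load FLOAT);
```

The INSERT statements follow the same pattern: quote values for TEXT columns and emit numeric columns bare.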
There is script called tools/runpga.sh
that will create a pgAdmin
container for you.
For demo01
you would run it like this:
$ tools/runpga.sh demo01pg
When it completes it prints out the information necessary to log into pgAdmin and connect to the database.
There is a script called tools/upload-json-dashboard.sh
that will upload
a JSON dashboard to a Grafana server from the command line.
The upload is limited to servers with simple authentication based on a
username and password unless you override it using -x and -n.
The local dashboard JSON file is created by exporting the dashboard from the Grafana UI with the "Export for sharing externally" checkbox checked.
This script is useful for transferring a single dashboard from one server to another.
Although the same function can be accomplished in the UI, this script allows updates to be automated from the command line.
This script requires that "curl" is installed.
See the script help (-h) for more information and examples.
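Under the hood, uploading a dashboard means POSTing it to Grafana's /api/dashboards/db endpoint wrapped in a small envelope. A sketch of building that payload (the script itself handles the curl invocation and authentication; the example dashboard content is made up):

```python
import json

def upload_payload(dashboard, overwrite=True):
    """Wrap an exported dashboard for POST /api/dashboards/db.

    Grafana assigns a fresh id when "id" is null, which is what you
    want when transferring a dashboard to a different server.
    """
    body = dict(dashboard)
    body["id"] = None  # let the target server pick the id
    return json.dumps({"dashboard": body, "overwrite": overwrite})

payload = upload_payload({"title": "Jenkins Build Health", "panels": []})
```

With overwrite set, an existing dashboard with the same uid is replaced instead of causing a name conflict error.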
There are different samples of how to use this system
in the samples directory tree. Each sample is in its
own directory with a README.md
that describes what
it does and how to use it.
This demo is very basic. It shows, in detail, how to create a simple model with grafana and a database. There is more information in the demo README.
Here is how to run it.
$ make demo01
Note that you must have bash installed for it to work.
The demo01 dashboard looks like this.
Although it looks really simple, this demo shows an automatically generated dashboard that connects to the automatically generated database and displays its contents.
Click here for more information.
This demo is more realistic. It shows how to create a model from a publicly available dataset. There is more information in the demo README.
Here is how to run it.
$ make demo02
The demo02 dashboard looks like this.
Click here for more information.
This demo shows how to create a dashboard from local data. There is more information in the README.
Here is how to run it.
$ make demo03
The demo03 dashboard looks like this.
Click here for more information.
This demo shows how to create mock data in a panel which allows you to prototype dashboards without touching the database. It is a very powerful idiom.
Here is how to run it.
$ make demo04
The demo04 dashboard looks like this.
Click here for more information.
This section contains miscellaneous information.
Grafana was chosen because it is a commonly used open source resource for querying, visualizing, alerting on, and exploring metrics you are interested in. Many organizations use it for understanding metrics related to business and engineering operations.
Postgres was chosen for this project because it a popular database that supports relational data, time series data and document (NoSQL) data cleanly which supports experimenting with different data storage models. Other possibilities, like InfluxDB, tend to be more focused in a specific data storage model like time series data.
For folks that like to use GUI interfaces to databases like pgAdmin
or DBeaver
, it is trivial
to create such connections.
If you have pgAdmin
installed on the host you can connect to the database as
host localhost:4401
for demo01 or localhost:4411
for demo02.
If you would prefer to run pgAdmin
from a local container, you need to get
the gateway IP address of the pgAdmin
container and use that to connect to
port 4401 or 4411. The gateway is a proxy for the host localhost. Here
is how to obtain that for pgAdmin:
$ docker run -d -e [email protected] -e PGADMIN_DEFAULT_PASSWORD=password -h pgadmin4 --name pgadmin4 -p 4450:80 dpage/pgadmin4
$ docker inspect pgadmin4 | jq '.[]|.NetworkSettings.Networks.bridge.Gateway'
"172.17.0.1"
For the above example, the database host would be 172.17.0.1:4401
for demo01 or 172.17.0.1:4411 for demo02 when referenced from the
pgAdmin docker container created above: http://localhost:4450.
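The jq expression above just digs the bridge gateway address out of the inspect JSON. The same extraction in Python, for scripting (the sample string mirrors the relevant slice of docker inspect output):

```python
import json

def bridge_gateway(inspect_output):
    """Extract the bridge network gateway from `docker inspect` JSON output."""
    data = json.loads(inspect_output)
    # docker inspect returns a list with one entry per container.
    return data[0]["NetworkSettings"]["Networks"]["bridge"]["Gateway"]

sample = '[{"NetworkSettings": {"Networks": {"bridge": {"Gateway": "172.17.0.1"}}}}]'
print(bridge_gateway(sample))  # 172.17.0.1
```

In practice you would feed it the output of `docker inspect pgadmin4` captured via subprocess.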
- Many thanks to Deron Ferguson for helping me track down and debug problems on windows 10.
- Many thanks to Rob Rodrigues for helping me track down and debug problems on linux as well as his work to add the integration work flow.