edge configures and manages:
- PostgreSQL: The world's most advanced open source database
- ZITADEL: Centralized identity provider (OIDC)
- MinIO / SeaweedFS: S3-compatible object storage
- NATS: Message streaming platform
- envoy: Cloud-native high-performance edge/middle/service proxy
- edgeflare/pgo: PostgREST-compatible API and Debezium-compatible CDC
for a unified backend similar to Firebase, Supabase, Pocketbase, etc., with scaling capabilities.
- A single binary (embeds official component binaries): planned
- Docker Compose or Kubernetes resources: follow this README
- Via a Kubernetes CRD: Project
edge is at a very early stage. Interested in experimenting or contributing? See CONTRIBUTING.md.
git clone git@github.com:edgeflare/edge.git && cd edge
- determine a root domain (hostname), e.g. example.org. If a globally routable domain isn't available, use the https://sslip.io resolver, which resolves a domain name to the IP address embedded in it; that's what this demo setup does
When containers that depend on zitadel (the centralized IdP) fail, restart them once zitadel is healthy.
export EDGE_DOMAIN_ROOT=192-168-0-121.sslip.io # resolves to 192.168.0.121 (gateway/envoy IP). use a LAN or otherwise accessible IP/hostname
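A quick way to verify the name resolves to the intended IP (dig is just one option; any resolver tool works):
dig +short "${EDGE_DOMAIN_ROOT}"   # should print 192.168.0.121 for the example above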
- generate envoy/config.yaml and pgo/config.yaml
sed "s/EDGE_DOMAIN_ROOT/${EDGE_DOMAIN_ROOT}/g" internal/stack/envoy/config.template.yaml > internal/stack/envoy/config.yaml
sed "s/EDGE_DOMAIN_ROOT/${EDGE_DOMAIN_ROOT}/g" internal/stack/pgo/config.template.yaml > internal/stack/pgo/config.yaml
- ensure the zitadel container can write the admin service account key, which edge uses to configure zitadel
mkdir -p __zitadel
chmod -R a+rw __zitadel
- ensure ./tls.key and ./tls.crt exist. Use something like
openssl req -x509 -newkey rsa:4096 -keyout tls.key -out tls.crt -days 365 -nodes \
-subj "/CN=iam.example.local" \
-addext "subjectAltName=DNS:*.example.local,DNS:*.${EDGE_DOMAIN_ROOT}"
# for envoy container to access keypair
chmod 666 tls.crt
chmod 666 tls.key
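To double-check the generated certificate covers the wildcard for your root domain:
openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"   # should list DNS:*.${EDGE_DOMAIN_ROOT}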
envoy needs a TLS config for the end-to-end HTTP/2 (even when not terminating TLS) that the zitadel management API requires. The zitadel API misbehaves with self-signed certificates; with publicly trusted certificates, enable TLS by updating the env vars in ZITADEL.
- start containers
docker compose up -d
Check zitadel health with curl http://iam.${EDGE_DOMAIN_ROOT}/debug/healthz
or docker exec -it edge_edge_1 /edge healthz
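Since the other containers depend on zitadel (see the note above), a small poll-then-restart loop saves manual retries; a sketch, assuming the compose container names used in this demo:
until curl -fsS "http://iam.${EDGE_DOMAIN_ROOT}/debug/healthz" > /dev/null; do
  echo "waiting for zitadel..."; sleep 5
done
docker restart edge_edge_1 edge_pgo-rest_1   # restart whichever containers crashed while zitadel was starting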
Use the centralized IdP for authorization in Postgres via pgo rest (PostgREST-compatible API), as well as in minio-s3, NATS, etc.
edge so far creates the OIDC clients on ZITADEL; a bit of work is still needed to configure the consumers of the client secrets.
The idea is to use edge to serve config for each component, much like the envoy control plane already embedded in edge, from which envoy pulls config dynamically.
For now, visit the ZITADEL UI at http://iam.${EDGE_DOMAIN_ROOT}, log in (see docker-compose.yaml), and regenerate client secrets for the oauth2-proxy and minio clients in the edge project. Then
- update internal/stack/pgo/config.yaml with the values
- update the relevant env vars in the minio container
and restart: docker compose down && docker compose up -d
Create a table in app-db for the REST and pipeline demo. See the pgo repo for more examples.
PGUSER=postgres PGPASSWORD=postgrespw PGHOST=localhost PGDATABASE=main PGPORT=5432 psql
CREATE SCHEMA IF NOT EXISTS iam;
CREATE TABLE IF NOT EXISTS iam.users (
id TEXT DEFAULT gen_random_uuid()::TEXT PRIMARY KEY
);
-- wide-open for demo. use GRANT and RLS for granular ACL
GRANT USAGE ON SCHEMA iam to anon;
GRANT ALL ON iam.users to anon;
docker restart edge_pgo-rest_1
to reload the schema cache if it misbehaves.
Now we can GET, POST, PATCH, DELETE on the users table in iam schema like:
curl http://api.${EDGE_DOMAIN_ROOT}/iam/users
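Writes work the same way; for example (the id filter assumes the PostgREST-style eq. operator that pgo rest is compatible with):
curl -X POST "http://api.${EDGE_DOMAIN_ROOT}/iam/users" \
  -H "Content-Type: application/json" \
  -d '{"id": "demo-user-1"}'
curl -X DELETE "http://api.${EDGE_DOMAIN_ROOT}/iam/users?id=eq.demo-user-1"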
The demo pgo-pipeline container syncs users from auth-db (in projections.users14 table) to app-db (in iam.users)
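The GRANTs above are wide open on purpose. When tightening access later, row-level security can scope rows to the authenticated subject; a minimal sketch, assuming pgo rest exposes JWT claims via the request.jwt.claims setting the way PostgREST does (skip it while running the anonymous curl demo above, since it hides rows from anon):
PGUSER=postgres PGPASSWORD=postgrespw PGHOST=localhost PGDATABASE=main PGPORT=5432 psql <<'SQL'
ALTER TABLE iam.users ENABLE ROW LEVEL SECURITY;
-- each request can only see/modify the row whose id matches the JWT "sub" claim
CREATE POLICY users_own_row ON iam.users
  USING (id = current_setting('request.jwt.claims', true)::json->>'sub');
SQL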
ensure the minio MINIO_IDENTITY_OPENID_CLIENT_ID and MINIO_IDENTITY_OPENID_CLIENT_SECRET env vars are set with appropriate values. The console UI is at http://minio.${EDGE_DOMAIN_ROOT}.
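A quick way to sanity-check the issuer that minio points at is the standard OIDC discovery document served by zitadel:
curl -s "http://iam.${EDGE_DOMAIN_ROOT}/.well-known/openid-configuration" | head -c 500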
If you already have a live k8s cluster, great: just copy-paste-enter. For development and lightweight prod, k3s is a great option. See example/cluster for setup.
kubectl apply -f example/k8s/00-secrets.yaml
# Database: PostgreSQL
helm upgrade --install example-postgres oci://registry-1.docker.io/bitnamicharts/postgresql -f example/k8s/01-postgres.values.yaml
kubectl apply -f example/k8s/01-postgres.tcproute.yaml
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/instance=example-postgres --timeout=-1s
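# optional: confirm the database is reachable before moving on.
# service/secret names below assume the Bitnami chart defaults for this release; adjust if 01-postgres.values.yaml overrides them
export PGPASSWORD=$(kubectl get secret example-postgres-postgresql -o jsonpath='{.data.postgres-password}' | base64 -d)
kubectl port-forward svc/example-postgres-postgresql 5432:5432 &
sleep 2 && PGHOST=localhost PGPORT=5432 PGUSER=postgres psql -c 'SELECT version();'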
# AuthN / AuthZ: ZITADEL
helm upgrade --install example-zitadel oci://registry-1.docker.io/edgeflare/zitadel -f example/k8s/02-zitadel.values.yaml
kubectl apply -f example/k8s/02-zitadel.httproute.yaml
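# as with postgres, wait for the zitadel pods (label assumes the Helm release name above) before pulling the admin key
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/instance=example-zitadel --timeout=-1s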
kubectl get secrets zitadel-admin-sa -o jsonpath='{.data.zitadel-admin-sa\.json}' | base64 -d > __zitadel-machinekey/zitadel-admin-sa.json
export ZITADEL_ADMIN_PW=$(kubectl get secrets example-zitadel-firstinstance -o jsonpath='{.data.ZITADEL_FIRSTINSTANCE_ORG_HUMAN_PASSWORD}' | base64 -d)
Configure zitadel as in the docker-compose setup. Then apply something like https://raw.githubusercontent.com/edgeflare/pgo/refs/heads/main/k8s.yaml
kubectl delete -f example/k8s/00-secrets.yaml -f example/k8s/01-postgres.tcproute.yaml -f example/k8s/02-zitadel.httproute.yaml -f example/k8s/03-postgrest.yaml
helm uninstall example-zitadel
helm uninstall example-postgres
kubectl delete cm zitadel-config-yaml
kubectl delete secret zitadel-admin-sa
kubectl delete jobs.batch example-zitadel-init example-zitadel-setup
kubectl delete $(kubectl get pvc -l app.kubernetes.io/instance=example-postgres -o name)