Commit b698150

fix: envoy config for http/2 traffic

Signed-off-by: hmoazzem <[email protected]>
1 parent 6d241a1

17 files changed: +760 −380 lines

README.md (+42 −26)
````diff
@@ -29,50 +29,63 @@ git clone [email protected]:edgeflare/edge.git && cd edge
 
 ### [docker-compose.yaml](./docker-compose.yaml)
 
-Adjust the docker-compose.yaml
+1. determine a root domain (hostname), eg `example.org`. if such a globally routable domain isn't available,
+utilize the https://sslip.io resolver, which returns the IP address embedded in the domain name. that's what this demo setup does
 
-- Free up port 80 from envoy for zitadel's initial configuration via its management API, which requires end-to-end HTTP/2 support.
-  We still need to get envoy (in docker) to proxy HTTP/2 traffic. On k8s everything works fine.
+> when containers dependent on zitadel (it being the centralized IdP) fail, try restarting them once zitadel is healthy
 
-```yaml
-envoy:
-  ports:
-  - 9901:9901
-  # - 80:10080 # or use eg 10080:10080
+```sh
+export EDGE_DOMAIN_ROOT=192-168-0-121.sslip.io # resolves to 192.168.0.121 (gateway/envoy IP). use a LAN or otherwise accessible IP/hostname
 ```
 
-- Expose ZITADEL on port 80, by uncommenting
+2. generate `envoy/config.yaml` and `pgo/config.yaml`
+
+```sh
+sed "s/EDGE_DOMAIN_ROOT/${EDGE_DOMAIN_ROOT}/g" internal/stack/envoy/config.template.yaml > internal/stack/envoy/config.yaml
 
-```yaml
-  ports:
-  - 80:8080
+sed "s/EDGE_DOMAIN_ROOT/${EDGE_DOMAIN_ROOT}/g" internal/stack/pgo/config.template.yaml > internal/stack/pgo/config.yaml
 ```
 
+3. ensure the zitadel container can write the admin service-account key, which edge uses to configure zitadel
+
 ```sh
-docker compose up -d
+mkdir -p __zitadel
+chmod -R a+rw __zitadel
 ```
 
-#### Use the centralized IdP for authorization in Postgres via `pgo rest` (PostgREST API)
-
-Configure ZITADEL. Adjust the domain in env vars, and in `internal/stack/envoy/config.yaml`
+4. ensure ./tls.key and ./tls.crt exist. use something like
 
 ```sh
-export ZITADEL_HOSTNAME=iam.192-168-0-121.sslip.io
-export ZITADEL_ISSUER=http://$ZITADEL_HOSTNAME
-export ZITADEL_API=$ZITADEL_HOSTNAME:80
-export ZITADEL_KEY_PATH=__zitadel-machinekey/zitadel-admin-sa.json
-export ZITADEL_JWK_URL=http://$ZITADEL_HOSTNAME/oauth/v2/keys
+openssl req -x509 -newkey rsa:4096 -keyout tls.key -out tls.crt -days 365 -nodes \
+  -subj "/CN=iam.example.local" \
+  -addext "subjectAltName=DNS:*.example.local,DNS:*.${EDGE_DOMAIN_ROOT}"
+
+# for the envoy container to access the keypair
+chmod 666 tls.crt
+chmod 666 tls.key
 ```
 
+envoy needs TLS config for the end-to-end (even non-TLS) HTTP/2 required by the zitadel management API; the zitadel API misbehaves with self-signed certificates.
+For publicly trusted certificates, enable TLS by updating the corresponding env vars in ZITADEL.
+
+5. start the containers
 ```sh
-go run ./internal/stack/configure/...
+docker compose up -d
 ```
 
-The above go code creates, among others, an OIDC client which pgo uses for authN/authZ. Any OIDC-compliant Identity Provider (eg Keycloak, Auth0) can be used; pgo just needs the client credentials.
+check zitadel health with `curl http://iam.${EDGE_DOMAIN_ROOT}/debug/healthz` or `docker exec -it edge_edge_1 /edge healthz`
+
+#### Use the centralized IdP for authorization in Postgres via `pgo rest` (PostgREST API) as well as minio-s3, NATS etc
 
-Once ZITADEL is configured, revert the ports (use 80 for envoy), and `docker compose down && docker compose up -d`
+edge so far creates the OIDC clients on ZITADEL; a bit of work is still needed for configuring the consumers of the client secrets.
+The idea is to use `edge` to serve config for each component, much like the envoy control plane already embedded in edge, from which envoy pulls config dynamically.
 
-Visit the ZITADEL UI (eg at http://iam.192-168-0-121.sslip.io), login (see docker-compose.yaml) and regenerate the client-secret for the oauth2-proxy client in the edge project. Then update `internal/stack/pgo/config.yaml` with the values. Again, `docker compose down && docker compose up -d`
+for now, visit the ZITADEL UI at http://iam.${EDGE_DOMAIN_ROOT}, login (see docker-compose.yaml) and regenerate the client-secrets for the oauth2-proxy and minio clients in the edge project. then
+
+- update `internal/stack/pgo/config.yaml` with the values
+- update the relevant env vars in the minio container
+
+and `docker compose down && docker compose up -d`
 
 #### `pgo rest`: PostgREST-compatible REST API
 
````
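The sslip.io scheme used for `EDGE_DOMAIN_ROOT` is plain string manipulation: dots in the IP become dashes, and sslip.io resolves the embedded address. A minimal sketch (the IP is an example, not a requirement of the setup):

```shell
# map a gateway IP to an sslip.io hostname: dots become dashes
IP=192.168.0.121
EDGE_DOMAIN_ROOT="$(printf '%s' "$IP" | tr '.' '-').sslip.io"
echo "$EDGE_DOMAIN_ROOT"       # 192-168-0-121.sslip.io
echo "iam.$EDGE_DOMAIN_ROOT"   # subdomains resolve to the same embedded IP
```

Any prefix (iam., api., minio.) resolves to the same address, which is why a single exported variable is enough for every virtual host in the stack.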
````diff
@@ -98,13 +111,16 @@ GRANT ALL ON iam.users to anon;
 Now we can GET, POST, PATCH, DELETE on the users table in the iam schema like:
 
 ```sh
-curl http://api.127-0-0-1.sslip.io/iam/users
+curl http://api.${EDGE_DOMAIN_ROOT}/iam/users
 ```
 
 ##### `pgo pipeline`: Debezium-compatible CDC for realtime-event/replication etc
 
 The demo pgo-pipeline container syncs users from auth-db (in the projections.users14 table) to app-db (in iam.users)
 
+#### minio-s3
+ensure minio's MINIO_IDENTITY_OPENID_CLIENT_ID and MINIO_IDENTITY_OPENID_CLIENT_SECRET are set with appropriate values. the console UI is at http://minio.${EDGE_DOMAIN_ROOT}.
+
 ### Kubernetes
 If you already have a live k8s cluster, great: just copy-paste-enter.
 For development and lightweight prod, [k3s](https://github.com/k3s-io/k3s) seems a great option.
````
docker-compose.yaml (+72 −54)
```diff
@@ -1,8 +1,8 @@
 version: '3.8'
 
 services:
-  auth-db:
-    image: docker.io/bitnami/postgresql:17
+  db-auth:
+    image: docker.io/bitnami/postgresql:17.3.0
     environment:
       POSTGRES_HOST_AUTH_METHOD: md5
       POSTGRES_USER: postgres
@@ -12,7 +12,7 @@ services:
     ports:
     - 5431:5432
     volumes:
-    - auth-db:/bitnami/postgresql
+    - db-auth:/bitnami/postgresql
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U postgres"]
       interval: 5s
@@ -25,7 +25,7 @@ services:
     image: ghcr.io/zitadel/zitadel:latest
     command: 'start-from-init --masterkey "MasterkeyNeedsToHave32Characters" --tlsMode disabled'
     environment:
-      ZITADEL_DATABASE_POSTGRES_HOST: auth-db
+      ZITADEL_DATABASE_POSTGRES_HOST: db-auth
       ZITADEL_DATABASE_POSTGRES_PORT: 5432
       ZITADEL_DATABASE_POSTGRES_DATABASE: main
       ZITADEL_DATABASE_POSTGRES_USER_USERNAME: zitadel
@@ -34,7 +34,7 @@ services:
       ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME: postgres
       ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD: postgrespw
       ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE: disable
-      ZITADEL_EXTERNALDOMAIN: iam.192-168-0-121.sslip.io # resolves to 192.168.0.121 (gateway/envoy IP). use LAN or accesible IP/hostname
+      ZITADEL_EXTERNALDOMAIN: iam.${EDGE_DOMAIN_ROOT}
       ZITADEL_EXTERNALPORT: 80
       ZITADEL_PORT: 8080
       ZITADEL_EXTERNALSECURE: false
```
```diff
@@ -43,20 +43,43 @@ services:
       ZITADEL_FIRSTINSTANCE_ORG_HUMAN_EMAIL_ADDRESS: [email protected]
       ZITADEL_FIRSTINSTANCE_ORG_HUMAN_PASSWORDCHANGEREQUIRED: false
       # machine user (service-account)
-      ZITADEL_FIRSTINSTANCE_MACHINEKEYPATH: /machinekey/zitadel-admin-sa.json
-      ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME: zitadel-admin-sa
+      ZITADEL_FIRSTINSTANCE_MACHINEKEYPATH: /machinekey/admin-sa.json
+      ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME: admin-sa
       ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_NAME: Admin
       ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINEKEY_TYPE: 1
     depends_on:
-      auth-db:
+      db-auth:
         condition: service_healthy
-    # ports:
-    # - 80:8080
+    ports:
+    - 8080:8080
     volumes:
-    - $PWD/__zitadel-machinekey:/machinekey:rw,Z
+    - $PWD/__zitadel:/machinekey:rw,Z
 
-  app-db:
-    image: docker.io/bitnami/postgresql:17
+  edge:
+    user: "${UID:-1000}"
+    build:
+      context: "."
+      dockerfile: "./internal/stack/Containerfile"
+    entrypoint: sh -c "id && ls -la /workspace/zitadel/admin-sa.json && while [ ! -f /workspace/zitadel/admin-sa.json ]; do sleep 1; done; sleep 2; /edge serve"
+    ports:
+    # - 18000:18000 # xds-server
+    - 8081:8081 # http-admin
+    environment:
+      EDGE_DOMAIN_ROOT: ${EDGE_DOMAIN_ROOT}
+      EDGE_IAM_ZITADEL_MACHINEKEYPATH: "/workspace/zitadel/admin-sa.json"
+    healthcheck:
+      test: [CMD, /edge, healthz]
+      interval: 5s
+      timeout: 5s
+      retries: 3
+      start_period: 30s
+      start_interval: 5s
+    volumes:
+    - $PWD/__zitadel:/workspace/zitadel:rw,Z,U
+    restart: on-failure
+
+  db-app:
+    image: docker.io/bitnami/postgresql:17.3.0
     environment:
       POSTGRES_HOST_AUTH_METHOD: md5
       POSTGRES_USER: postgres
```
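The edge service's entrypoint above blocks until zitadel has written the machine key before starting `/edge serve`. The pattern is a plain poll-for-file loop; a self-contained sketch using a temporary path instead of /workspace/zitadel/admin-sa.json:

```shell
# poll until a file appears, then proceed (same shape as the edge entrypoint)
key=/tmp/demo-admin-sa.json
rm -f "$key"
( sleep 1; echo '{"keyId":"demo"}' > "$key" ) &   # stand-in for zitadel writing the key
while [ ! -f "$key" ]; do sleep 1; done
sleep 2   # small grace period so the writer has finished
cat "$key"
```

`depends_on` with `condition: service_healthy` orders container startup, but the key file appears only after zitadel's first-instance init completes, hence the extra loop.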
```diff
@@ -66,112 +89,107 @@ services:
     ports:
     - 5432:5432
     volumes:
-    - app-db:/bitnami/postgresql
+    - db-app:/bitnami/postgresql
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U postgres"]
       interval: 5s
       timeout: 5s
       retries: 5
 
-  envoy-controlplane:
-    build:
-      context: "."
-      dockerfile: "./internal/stack/envoy/Containerfile"
-    entrypoint: /envoy-controlplane
-    ports:
-    - 18000:18000 # xds-server
-    environment:
-    - "DEBUG=true"
-
   envoy:
+    user: "${UID:-1000}"
     image: docker.io/envoyproxy/envoy:contrib-v1.33-latest
-    # command: [envoy, --config-path, /etc/bootstrap.yaml, --base-id, 1] # when used with envoy-controlplane for dynamic config
-    command: [envoy, --config-path, /etc/config.yaml, --base-id, 1]
+    # command: [envoy, --config-path, /etc/envoy/bootstrap.yaml, --base-id, "1"] # when used with envoy-controlplane for dynamic config
+    command: [envoy, --config-path, /etc/envoy/config.yaml, --base-id, "1", --disable-hot-restart]
     privileged: true
     ports:
-    - 9901:9901 # admin
+    # - 9901:9901 # admin
     - 80:10080 # http-proxy
+    - 443:10443 # https-proxy
     volumes:
-    # - $PWD/internal/stack/envoy/bootstrap.yaml:/etc/bootstrap.yaml:rw,Z # when used with envoy-controlplane
-    - $PWD/internal/stack/envoy/config.yaml:/etc/config.yaml:rw,Z # hard-coded config
-    depends_on:
-    - envoy-controlplane
+    # - $PWD/internal/stack/envoy/bootstrap.yaml:/etc/envoy/bootstrap.yaml:rw,Z # when used with envoy-controlplane
+    - $PWD/internal/stack/envoy/config.yaml:/etc/envoy/config.yaml:rw,Z # hard-coded config
+    - $PWD/tls.crt:/etc/envoy/tls.crt:rw,Z
+    - $PWD/tls.key:/etc/envoy/tls.key:rw,Z
 
   pgo-rest:
     image: ghcr.io/edgeflare/pgo
     command: [rest, --config, /rest/config.yaml]
     ports:
-    - 8080:8080
+    - 8082:8080
     volumes:
     - $PWD/internal/stack/pgo/config.yaml:/rest/config.yaml:rw,Z
     depends_on:
-      app-db:
+      db-app:
         condition: service_healthy
-      # init-app-db:
-      #   condition: service_completed_successfully # errors
+      edge:
+        condition: service_healthy
+    restart: on-failure
 
   pgo-pipeline:
     image: ghcr.io/edgeflare/pgo
     command: [pipeline, --config, /pipeline/config.yaml]
     volumes:
     - $PWD/internal/stack/pgo/config.yaml:/pipeline/config.yaml:rw,Z
     depends_on:
-      auth-db:
+      db-auth:
        condition: service_healthy
-      app-db:
+      db-app:
        condition: service_healthy
 
-  minio:
+  s3-minio:
     image: quay.io/minio/minio
     command: [server, --console-address, ":9001"]
     environment:
       MINIO_ROOT_USER: minioadmin
       MINIO_ROOT_PASSWORD: minio-secret-key-change-me
       MINIO_VOLUMES: /mnt/data
-      MINIO_BROWSER_REDIRECT_URL: http://minio.127-0-0-1.sslip.io
+      MINIO_BROWSER_REDIRECT_URL: http://minio.${EDGE_DOMAIN_ROOT}
       # OIDC
-      MINIO_IDENTITY_OPENID_CLIENT_ID: "311219429557993516"
-      MINIO_IDENTITY_OPENID_CLIENT_SECRET: "PdcOM6b3h2pcdAVc3es83PY62EVLATiMjQrela1IYChAhTbkr1RX5MNqCvMMLauw"
+      MINIO_IDENTITY_OPENID_CLIENT_ID: ${EDGE_S3_MINIO_IDENTITY_OPENID_CLIENT_ID} # manually obtain
+      MINIO_IDENTITY_OPENID_CLIENT_SECRET: ${EDGE_S3_MINIO_IDENTITY_OPENID_CLIENT_SECRET} # from zitadel UI
       MINIO_IDENTITY_OPENID_DISPLAY_NAME: "Login with SSO"
-      MINIO_IDENTITY_OPENID_CONFIG_URL: http://iam.192-168-0-121.sslip.io/.well-known/openid-configuration
+      MINIO_IDENTITY_OPENID_CONFIG_URL: http://iam.${EDGE_DOMAIN_ROOT}/.well-known/openid-configuration
       MINIO_IDENTITY_OPENID_CLAIM_NAME: policy_minio
       MINIO_IDENTITY_OPENID_REDIRECT_URI_DYNAMIC: on
       MINIO_IDENTITY_OPENID_CLAIM_USERINFO: on
       MINIO_IDENTITY_OPENID_COMMENT: "OIDC Identity Provider"
       # notify postgres
       MINIO_NOTIFY_POSTGRES_ENABLE: on
-      MINIO_NOTIFY_POSTGRES_CONNECTION_STRING: "host=app-db port=5432 user=postgres password=postgrespw dbname=main sslmode=prefer"
+      MINIO_NOTIFY_POSTGRES_CONNECTION_STRING: "host=db-app port=5432 user=postgres password=postgrespw dbname=main sslmode=prefer"
       MINIO_NOTIFY_POSTGRES_FORMAT: namespace
       MINIO_NOTIFY_POSTGRES_ID: minioevents
       MINIO_NOTIFY_POSTGRES_TABLE: minioevents
     volumes:
-    - minio:/mnt/data
+    - s3-minio:/mnt/data
     ports:
     - 9000:9000
     - 9001:9001
     depends_on:
-      app-db:
+      db-app:
         condition: service_healthy
-      iam-zitadel:
+      # iam-zitadel:
+      #   condition: service_healthy
+      edge:
         condition: service_healthy
     # should also wait for initdb, zitadel client creation
 
-  init-app-db:
-    image: docker.io/bitnami/postgresql:17
+  init-db-app:
+    image: docker.io/bitnami/postgresql:17.3.0
     environment:
-      PGHOST: app-db
+      PGHOST: db-app
       PGUSER: postgres
       PGPASSWORD: postgrespw
       PGDATABASE: main
     depends_on:
-      app-db:
+      db-app:
         condition: service_healthy
     entrypoint:
     - /bin/bash
     - -c
     - |
      echo "Waiting for PostgreSQL to be ready..."
-      until PGPASSWORD=postgrespw psql -h app-db -U postgres -c '\q'; do
+      until PGPASSWORD=postgrespw psql -h db-app -U postgres -c '\q'; do
        echo "PostgreSQL is unavailable - sleeping"
        sleep 2
      done
```
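The envoy service now mounts ./tls.crt and ./tls.key from the repo root. After generating them with the README's openssl command, the SANs can be sanity-checked locally (a sketch assuming openssl 1.1.1+ is installed; the domain is an example):

```shell
# generate a throwaway keypair like the README's step 4, then inspect its SANs
openssl req -x509 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -days 1 -nodes \
  -subj "/CN=iam.example.local" \
  -addext "subjectAltName=DNS:*.example.local,DNS:*.192-168-0-121.sslip.io" 2>/dev/null
openssl x509 -in /tmp/tls.crt -noout -ext subjectAltName
```

If a virtual host (iam., api., minio.) isn't covered by a wildcard SAN, envoy will serve a mismatched cert and HTTP/2 clients will refuse the connection.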
```diff
@@ -222,6 +240,6 @@ services:
     restart: on-failure
 
 volumes:
-  app-db:
-  auth-db:
-  minio:
+  db-app:
+  db-auth:
+  s3-minio:
```