
Commit 8b2be3a

fix: envoy config for http/2 traffic
Signed-off-by: hmoazzem <[email protected]>
1 parent 6d241a1 commit 8b2be3a

15 files changed: +658 -262 lines

README.md

+36-27
@@ -29,50 +29,56 @@ git clone [email protected]:edgeflare/edge.git && cd edge
 
 ### [docker-compose.yaml](./docker-compose.yaml)
 
-Adjust the docker-compose.yaml
+- determine a root domain (hostname), eg `example.org`. If a globally routable domain isn't available,
+  use something like https://sslip.io, which resolves to the IP address embedded in the domain name; that's what this demo setup does
 
-- Free up port 80 from envoy for zitadel's initial configuration via its management API which requires end-to-end HTTP/2 support.
-  We still need to get envoy (in docker) to proxy HTTP/2 traffic. On k8s everything works fine.
+> when containers that depend on zitadel (it being the centralized IdP) fail, try restarting them once zitadel is healthy
 
-```yaml
-envoy:
-  ports:
-  - 9901:9901
-  # - 80:10080 # or use eg 10080:10080
+```sh
+EDGE_DOMAIN_ROOT=192-168-0-10.sslip.io # resolves to 192.168.0.10 (gateway/envoy IP). use a LAN or otherwise accessible IP/hostname
+ZITADEL_EXTERNALDOMAIN=iam.192-168-0-10.sslip.io
+MINIO_BROWSER_REDIRECT_URL=http://minio.192-168-0-10.sslip.io
 ```
 
-- Expose ZITADEL on port 80, by uncommenting
+Similarly, adjust `internal/stack/envoy/config.yaml` and `internal/stack/pgo/config.yaml`
 
-```yaml
-  ports:
-  - 80:8080
-```
+- ensure the zitadel container can write the admin service-account key, which edge uses to configure zitadel
 
 ```sh
-docker compose up -d
+mkdir -p __zitadel
+chmod -R a+rw __zitadel
 ```
 
-#### Use the centralized IdP for authorization in Postgres via `pgo rest` (PostgREST API)
-
-Configure ZITADEL. Adjust the domain in env vars, and in `internal/stack/envoy/config.yaml`
+- ensure ./tls.key and ./tls.crt exist. Use something like
 
 ```sh
-export ZITADEL_HOSTNAME=iam.192-168-0-121.sslip.io
-export ZITADEL_ISSUER=http://$ZITADEL_HOSTNAME
-export ZITADEL_API=$ZITADEL_HOSTNAME:80
-export ZITADEL_KEY_PATH=__zitadel-machinekey/zitadel-admin-sa.json
-export ZITADEL_JWK_URL=http://$ZITADEL_HOSTNAME/oauth/v2/keys
+openssl req -x509 -newkey rsa:4096 -keyout tls.key -out tls.crt -days 365 -nodes \
+  -subj "/CN=iam.example.local" \
+  -addext "subjectAltName=DNS:*.example.local,DNS:*.192-168-0-10.sslip.io"
+
+# for the envoy container to access the keypair
+chmod 666 tls.crt
+chmod 666 tls.key
 ```
 
+This configures envoy for the end-to-end HTTP/2 that the zitadel management API requires; the zitadel API misbehaves with self-signed certificates.
+With publicly trusted certificates, enable TLS by updating the corresponding env vars in ZITADEL.
+
 ```sh
-go run ./internal/stack/configure/...
+docker compose up -d
 ```
 
-The above go code creates, among others, an OIDC client which pgo uses for authN/authZ. Any OIDC compliant Identity Provider (eg , Keycloak, Auth0) can be used; pgo just needs the client credentials.
+Check zitadel health with `curl http://iam.192-168-0-10.sslip.io/debug/healthz` or `docker exec -it edge_edge_1 /edge healthz`
+
+#### Use the centralized IdP for authorization in Postgres via `pgo rest` (PostgREST API) as well as minio-s3, NATS etc.
 
-Once ZITADEL is configured, revert the ports (use 80 for envoy), and `docker compose down && docker compose up -d`
+edge so far creates the clients; a bit more work is needed to configure the consumers of the client secrets.
+For now, visit the ZITADEL UI (eg at http://iam.192-168-0-10.sslip.io), log in (see docker-compose.yaml) and regenerate the client secrets for the oauth2-proxy and minio clients in the edge project. Then
 
-Visit ZITADEL UI (eg at http://iam.192-168-0-121.sslip.io), login (see docker-compose.yaml) and regenerate client-secret for oauth2-proxy client in edge project. Then update `internal/stack/pgo/config.yaml` with the values. Again, `docker compose down && docker compose up -d`
+- update `internal/stack/pgo/config.yaml` with the values
+- update the relevant env vars in the minio container
+
+And `docker compose down && docker compose up -d`
 
 #### `pgo rest`: PostgREST-compatible REST API
 
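A quick sanity check of the pieces configured above — a sketch assuming the demo domain `192-168-0-10.sslip.io`; substitute your own IP/hostname:

```sh
# sslip.io embeds the IP in the name: any *.192-168-0-10.sslip.io resolves to 192.168.0.10
dig +short iam.192-168-0-10.sslip.io

# confirm the self-signed cert carries the wildcard SANs passed to openssl above (OpenSSL 1.1.1+)
openssl x509 -in tls.crt -noout -subject -ext subjectAltName

# zitadel's OIDC discovery document, once the stack is up
curl -s http://iam.192-168-0-10.sslip.io/.well-known/openid-configuration
```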
@@ -98,13 +104,16 @@ GRANT ALL ON iam.users to anon;
 Now we can GET, POST, PATCH, DELETE on the users table in the iam schema like:
 
 ```sh
-curl http://api.127-0-0-1.sslip.io/iam/users
+curl http://api.192-168-0-10.sslip.io/iam/users
 ```
 
 ##### `pgo pipeline`: Debezium-compatible CDC for realtime events/replication etc
 
 The demo pgo-pipeline container syncs users from auth-db (in the projections.users14 table) to app-db (in iam.users)
 
+#### minio-s3
+ensure the minio MINIO_IDENTITY_OPENID_CLIENT_ID and MINIO_IDENTITY_OPENID_CLIENT_SECRET env vars are set to appropriate values. The console UI is at http://minio.192-168-0-10.sslip.io.
+
 ### Kubernetes
 If you already have a live k8s cluster, great: just copy-paste-enter.
 For development and lightweight prod, [k3s](https://github.com/k3s-io/k3s) seems a great option.
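Beyond the GET above, the other verbs follow PostgREST conventions — a sketch against iam.users; the `id` column and its values are illustrative, not taken from the actual schema:

```sh
# create a row (column name is hypothetical)
curl -X POST http://api.192-168-0-10.sslip.io/iam/users \
  -H "Content-Type: application/json" \
  -d '{"id": "u1"}'

# update rows matching a PostgREST filter, then delete them
curl -X PATCH 'http://api.192-168-0-10.sslip.io/iam/users?id=eq.u1' \
  -H "Content-Type: application/json" -d '{"id": "u2"}'
curl -X DELETE 'http://api.192-168-0-10.sslip.io/iam/users?id=eq.u2'
```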

docker-compose.yaml

+73-53
@@ -1,8 +1,8 @@
 version: '3.8'
 
 services:
-  auth-db:
-    image: docker.io/bitnami/postgresql:17
+  db-auth:
+    image: docker.io/bitnami/postgresql:17.3.0
     environment:
       POSTGRES_HOST_AUTH_METHOD: md5
       POSTGRES_USER: postgres
@@ -12,7 +12,7 @@ services:
     ports:
     - 5431:5432
     volumes:
-    - auth-db:/bitnami/postgresql
+    - db-auth:/bitnami/postgresql
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U postgres"]
       interval: 5s
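To poke db-auth from the host (published on 5431 above) — a sketch assuming psql is installed locally and the admin password `postgrespw` used elsewhere in this compose file:

```sh
# connect through the published port and confirm the server is answering
PGPASSWORD=postgrespw psql -h 127.0.0.1 -p 5431 -U postgres -c 'select version();'
```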
@@ -25,7 +25,7 @@ services:
     image: ghcr.io/zitadel/zitadel:latest
     command: 'start-from-init --masterkey "MasterkeyNeedsToHave32Characters" --tlsMode disabled'
     environment:
-      ZITADEL_DATABASE_POSTGRES_HOST: auth-db
+      ZITADEL_DATABASE_POSTGRES_HOST: db-auth
       ZITADEL_DATABASE_POSTGRES_PORT: 5432
       ZITADEL_DATABASE_POSTGRES_DATABASE: main
       ZITADEL_DATABASE_POSTGRES_USER_USERNAME: zitadel
@@ -34,7 +34,7 @@ services:
       ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME: postgres
       ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD: postgrespw
       ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE: disable
-      ZITADEL_EXTERNALDOMAIN: iam.192-168-0-121.sslip.io # resolves to 192.168.0.121 (gateway/envoy IP). use LAN or accesible IP/hostname
+      ZITADEL_EXTERNALDOMAIN: iam.192-168-0-10.sslip.io
       ZITADEL_EXTERNALPORT: 80
       ZITADEL_PORT: 8080
       ZITADEL_EXTERNALSECURE: false
@@ -43,20 +43,43 @@ services:
       ZITADEL_FIRSTINSTANCE_ORG_HUMAN_EMAIL_ADDRESS: [email protected]
       ZITADEL_FIRSTINSTANCE_ORG_HUMAN_PASSWORDCHANGEREQUIRED: false
       # machine user (service-account)
-      ZITADEL_FIRSTINSTANCE_MACHINEKEYPATH: /machinekey/zitadel-admin-sa.json
-      ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME: zitadel-admin-sa
+      ZITADEL_FIRSTINSTANCE_MACHINEKEYPATH: /machinekey/admin-sa.json
+      ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME: admin-sa
       ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_NAME: Admin
       ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINEKEY_TYPE: 1
     depends_on:
-      auth-db:
+      db-auth:
         condition: service_healthy
-    # ports:
-    # - 80:8080
+    ports:
+    - 8080:8080
     volumes:
-    - $PWD/__zitadel-machinekey:/machinekey:rw,Z
+    - $PWD/__zitadel:/machinekey:rw,Z
 
-  app-db:
-    image: docker.io/bitnami/postgresql:17
+  edge:
+    user: "${UID:-1000}"
+    build:
+      context: "."
+      dockerfile: "./internal/stack/Containerfile"
+    entrypoint: sh -c "id && ls -la /workspace/zitadel/admin-sa.json && while [ ! -f /workspace/zitadel/admin-sa.json ]; do sleep 1; done; sleep 2; /edge serve"
+    ports:
+    # - 18000:18000 # xds-server
+    - 8081:8081 # http-admin
+    environment:
+      EDGE_DOMAIN_ROOT: 192-168-0-10.sslip.io
+      EDGE_IAM_ZITADEL_MACHINEKEYPATH: "/workspace/zitadel/admin-sa.json"
+    healthcheck:
+      test: [CMD, /edge, healthz]
+      interval: 5s
+      timeout: 5s
+      retries: 3
+      start_period: 30s
+      start_interval: 5s
+    volumes:
+    - $PWD/__zitadel:/workspace/zitadel:rw,Z,U
+    restart: on-failure
+
+  db-app:
+    image: docker.io/bitnami/postgresql:17.3.0
     environment:
       POSTGRES_HOST_AUTH_METHOD: md5
       POSTGRES_USER: postgres
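The edge entrypoint above blocks until zitadel writes the machine key into the shared `__zitadel` volume; to watch that handoff by hand (the container name follows the README's `edge_edge_1` example and may differ with your compose project name):

```sh
# the file zitadel writes and edge waits for
ls -l __zitadel/admin-sa.json

# the same healthcheck the compose file declares, run manually
docker exec -it edge_edge_1 /edge healthz
```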
@@ -66,112 +89,109 @@ services:
     ports:
     - 5432:5432
     volumes:
-    - app-db:/bitnami/postgresql
+    - db-app:/bitnami/postgresql
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U postgres"]
       interval: 5s
       timeout: 5s
       retries: 5
 
-  envoy-controlplane:
-    build:
-      context: "."
-      dockerfile: "./internal/stack/envoy/Containerfile"
-    entrypoint: /envoy-controlplane
-    ports:
-    - 18000:18000 # xds-server
-    environment:
-    - "DEBUG=true"
-
   envoy:
+    user: "${UID:-1000}"
     image: docker.io/envoyproxy/envoy:contrib-v1.33-latest
-    # command: [envoy, --config-path, /etc/bootstrap.yaml, --base-id, 1] # when used with envoy-controlplane for dynamic config
-    command: [envoy, --config-path, /etc/config.yaml, --base-id, 1]
+    # command: [envoy, --config-path, /etc/envoy/bootstrap.yaml, --base-id, "1"] # when used with envoy-controlplane for dynamic config
+    command: [envoy, --config-path, /etc/envoy/config.yaml, --base-id, "1", --disable-hot-restart]
     privileged: true
     ports:
-    - 9901:9901 # admin
+    # - 9901:9901 # admin
     - 80:10080 # http-proxy
+    - 443:10443 # https-proxy
     volumes:
-    # - $PWD/internal/stack/envoy/bootstrap.yaml:/etc/bootstrap.yaml:rw,Z # when used with envoy-controlplane
-    - $PWD/internal/stack/envoy/config.yaml:/etc/config.yaml:rw,Z # hard-coded config
-    depends_on:
-    - envoy-controlplane
+    # - $PWD/internal/stack/envoy/bootstrap.yaml:/etc/envoy/bootstrap.yaml:rw,Z # when used with envoy-controlplane
+    - $PWD/internal/stack/envoy/config.yaml:/etc/envoy/config.yaml:rw,Z # hard-coded config
+    - $PWD/tls.crt:/etc/envoy/tls.crt:rw,Z
+    - $PWD/tls.key:/etc/envoy/tls.key:rw,Z
 
   pgo-rest:
     image: ghcr.io/edgeflare/pgo
     command: [rest, --config, /rest/config.yaml]
     ports:
-    - 8080:8080
+    - 8082:8080
     volumes:
     - $PWD/internal/stack/pgo/config.yaml:/rest/config.yaml:rw,Z
     depends_on:
-      app-db:
+      db-app:
         condition: service_healthy
-    # init-app-db:
+      edge:
+        condition: service_healthy
+    # init-db-app:
     #   condition: service_completed_successfully # errors
+    restart: on-failure
 
   pgo-pipeline:
     image: ghcr.io/edgeflare/pgo
     command: [pipeline, --config, /pipeline/config.yaml]
     volumes:
     - $PWD/internal/stack/pgo/config.yaml:/pipeline/config.yaml:rw,Z
     depends_on:
-      auth-db:
+      db-auth:
         condition: service_healthy
-      app-db:
+      db-app:
         condition: service_healthy
 
-  minio:
+  s3-minio:
     image: quay.io/minio/minio
     command: [server, --console-address, ":9001"]
     environment:
       MINIO_ROOT_USER: minioadmin
       MINIO_ROOT_PASSWORD: minio-secret-key-change-me
       MINIO_VOLUMES: /mnt/data
-      MINIO_BROWSER_REDIRECT_URL: http://minio.127-0-0-1.sslip.io
+      MINIO_BROWSER_REDIRECT_URL: http://minio.192-168-0-10.sslip.io
       # OIDC
-      MINIO_IDENTITY_OPENID_CLIENT_ID: "311219429557993516"
-      MINIO_IDENTITY_OPENID_CLIENT_SECRET: "PdcOM6b3h2pcdAVc3es83PY62EVLATiMjQrela1IYChAhTbkr1RX5MNqCvMMLauw"
+      MINIO_IDENTITY_OPENID_CLIENT_ID: "311583275548213255"
+      MINIO_IDENTITY_OPENID_CLIENT_SECRET: "dHUy2L30NrHlCwlp3ILShp7tlnnt6zbbvToUt07ZZt2VLut3uFVRVHSyeBuxntxO"
       MINIO_IDENTITY_OPENID_DISPLAY_NAME: "Login with SSO"
-      MINIO_IDENTITY_OPENID_CONFIG_URL: http://iam.192-168-0-121.sslip.io/.well-known/openid-configuration
+      MINIO_IDENTITY_OPENID_CONFIG_URL: http://iam.192-168-0-10.sslip.io/.well-known/openid-configuration
       MINIO_IDENTITY_OPENID_CLAIM_NAME: policy_minio
       MINIO_IDENTITY_OPENID_REDIRECT_URI_DYNAMIC: on
       MINIO_IDENTITY_OPENID_CLAIM_USERINFO: on
       MINIO_IDENTITY_OPENID_COMMENT: "OIDC Identity Provider"
       # notify postgres
       MINIO_NOTIFY_POSTGRES_ENABLE: on
-      MINIO_NOTIFY_POSTGRES_CONNECTION_STRING: "host=app-db port=5432 user=postgres password=postgrespw dbname=main sslmode=prefer"
+      MINIO_NOTIFY_POSTGRES_CONNECTION_STRING: "host=db-app port=5432 user=postgres password=postgrespw dbname=main sslmode=prefer"
       MINIO_NOTIFY_POSTGRES_FORMAT: namespace
       MINIO_NOTIFY_POSTGRES_ID: minioevents
       MINIO_NOTIFY_POSTGRES_TABLE: minioevents
     volumes:
-    - minio:/mnt/data
+    - s3-minio:/mnt/data
     ports:
     - 9000:9000
     - 9001:9001
     depends_on:
-      app-db:
+      db-app:
         condition: service_healthy
-      iam-zitadel:
+      # iam-zitadel:
+      #   condition: service_healthy
+      edge:
         condition: service_healthy
       # should also wait for initdb, zitadel client creation
 
-  init-app-db:
-    image: docker.io/bitnami/postgresql:17
+  init-db-app:
+    image: docker.io/bitnami/postgresql:17.3.0
     environment:
-      PGHOST: app-db
+      PGHOST: db-app
       PGUSER: postgres
       PGPASSWORD: postgrespw
       PGDATABASE: main
     depends_on:
-      app-db:
+      db-app:
         condition: service_healthy
     entrypoint:
     - /bin/bash
     - -c
     - |
       echo "Waiting for PostgreSQL to be ready..."
-      until PGPASSWORD=postgrespw psql -h app-db -U postgres -c '\q'; do
+      until PGPASSWORD=postgrespw psql -h db-app -U postgres -c '\q'; do
         echo "PostgreSQL is unavailable - sleeping"
         sleep 2
       done
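Since the commit's point is end-to-end HTTP/2 through envoy (now terminating TLS on 443 with the mounted keypair), a quick check — `-k` because the cert is self-signed, domain as assumed earlier:

```sh
# print the negotiated HTTP version; expect "2"
curl -sk --http2 -o /dev/null -w '%{http_version}\n' https://iam.192-168-0-10.sslip.io/debug/healthz
```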
@@ -222,6 +242,6 @@ services:
     restart: on-failure
 
 volumes:
-  app-db:
-  auth-db:
-  minio:
+  db-app:
+  db-auth:
+  s3-minio:
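Several services above run as `user: "${UID:-1000}"`; bash defines UID but doesn't export it by default, so compose may not see it and will fall back to 1000 — exporting it first is a safe habit (a sketch; adjust for your shell):

```sh
export UID          # make the shell's UID visible to docker compose
docker compose up -d
```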
