Commit 6111755

doc: tmp use static envoy config in docker
Signed-off-by: hmoazzem <[email protected]>
1 parent 68cef4d commit 6111755

4 files changed, +178 -69 lines


README.md

+22 -30
@@ -18,30 +18,25 @@ Edge launches and configures these components to work together as a unified back
 
 Edge can run as:
 - A single binary (embeds official component binaries)
-- [Docker compose](./docker-compose.yaml)
+- [Docker compose](./docker-compose.yaml) (follow this README)
 - Kubernetes resources (follow this README)
 - Via a Kubernetes CRD named [Project](./example/project.yaml)
 
-This project is in the ideation stage. Edge configures/manages the four underlying tools to create a cohesive system.
+This project is in the ideation stage. Edge configures/manages the underlying tools to create a cohesive system.
 
 Interested in experimenting or contributing? See [CONTRIBUTING.md](./CONTRIBUTING.md).
 
 ```sh
 git clone [email protected]:edgeflare/edge.git && cd edge
 ```
 
-This uses iam.example.local and api.example.local domains. Ensure they point to the Gateway IP (envoyproxy) eg by adding an entry to `/etc/hosts` like
-
-```sh
-127.0.0.1 api.example.local iam.example.local
-```
-
 ### [docker-compose.yaml](./docker-compose.yaml)
 
 Adjust the docker-compose.yaml
 
 - Free up port 80 from envoy for zitadel's initial configuration via management API which requires end-to-end HTTP/2 support.
-envoyproxy config in docker doesn't support (our xds-server incomplete) HTTP/2 yet; on [k8s](https://raw.githubusercontent.com/edgeflare/pgo/refs/heads/main/k8s.yaml) everything works fine.
+envoyproxy config in docker doesn't support HTTP/2 yet (our xds-server is still incomplete).
+On [k8s](https://raw.githubusercontent.com/edgeflare/pgo/refs/heads/main/k8s.yaml) everything works fine.
 
 ```yaml
 envoy:
@@ -66,10 +61,10 @@ docker compose up -d
 Any OIDC compliant Identity Provider (eg ZITADEL, Keycloak, Auth0) can be used.
 
 ```sh
-export ZITADEL_ISSUER=http://iam.example.local
-export ZITADEL_API=iam.example.local:80
+export ZITADEL_ISSUER=http://iam.127-0-0-1.sslip.io
+export ZITADEL_API=iam.127-0-0-1.sslip.io:80
 export ZITADEL_KEY_PATH=__zitadel-machinekey/zitadel-admin-sa.json
-export ZITADEL_JWK_URL=http://iam.example.local/oauth/v2/keys
+export ZITADEL_JWK_URL=http://iam.127-0-0-1.sslip.io/oauth/v2/keys
 ```
 
 Configure components eg create OIDC clients in ZITADEL etc
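
For readers following along, the exported endpoints can be probed before running the configure helper in the next hunk. A minimal sketch, assuming zitadel is reachable on port 80 while envoy is moved aside as described above; the discovery and JWKS paths are standard OIDC/ZITADEL paths, not something this commit adds:

```sh
# OIDC discovery document and signing keys at the exported endpoints
curl -s "$ZITADEL_ISSUER/.well-known/openid-configuration" | head -c 400; echo
curl -s "$ZITADEL_JWK_URL" | head -c 400; echo
```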
@@ -78,26 +73,16 @@ Configure components eg create OIDC clients in ZITADEL etc
 go run ./internal/util/configure/...
 ```
 
-Once done, revert the ports (use 80 for envoy), and `docker compose restart`
+After configuring, revert the ports (use 80 for envoy), and `docker compose down && docker compose up -d`
 
-#### pgo rest
 
-Visit http://iam.example.local, login and regenerate client-secret for oauth2-proxy client in edge project. Then adjust `internal/util/pgo/config.yaml`
-
-> `pgo rest` container fails because of proxy issues. It can be run locally
-
-```sh
-go install github.com/edgeflare/pgo@latest # or download from release page
-```
-##### PostgREST-compatible REST API
 
-```sh
-pgo rest --config internal/util/pgo/config.yaml --rest.pg_conn_string "host=localhost port=5432 user=pgo password=pgopw dbname=main sslmode=prefer"
-```
+Visit the ZITADEL UI (eg at iam.192-168-0-121.sslip.io), log in (see docker-compose.yaml) and regenerate the client-secret for the oauth2-proxy client in the edge project.
+Then update `internal/util/pgo/config.yaml` with the values. Again, `docker compose down && docker compose up -d`
 
-###### realtime/replication eg sync users from auth-db to app-db
+#### `pgo rest`: PostgREST-compatible REST API
 
-Create table in sink-db. See pgo repo for more examples
+Create a table in app-db for the REST and pipeline demo. See the pgo repo for more examples
 
 ```sh
 PGUSER=postgres PGPASSWORD=postgrespw PGHOST=localhost PGDATABASE=main PGPORT=5432 psql
@@ -109,14 +94,22 @@ CREATE SCHEMA IF NOT EXISTS iam;
 CREATE TABLE IF NOT EXISTS iam.users (
   id TEXT DEFAULT gen_random_uuid()::TEXT PRIMARY KEY
 );
+
+-- wide-open for demo. use GRANT and RLS for granular ACL
+GRANT USAGE ON SCHEMA iam to anon;
+GRANT ALL ON iam.users to anon;
 ```
 
-Start pipeline
+`docker restart edge_pgo-rest_1` to reload the schema cache if it misbehaves. Now we can eg
 
 ```sh
-pgo pipeline --config internal/util/pgo/config.yaml
+curl localhost:8080/iam/users # via envoyproxy api.127-0-0-1.sslip.io isn't functional yet
 ```
 
+##### `pgo pipeline`: Debezium-compatible CDC for realtime-event/replication etc
+
+The demo pgo-pipeline container syncs users from auth-db (the projections.users14 table) to app-db (iam.users)
+
 ### Kubernetes
 If you already have a live k8s cluster, great just copy-paste-enter.
 For development and lightweight prod, [k3s](https://github.com/k3s-io/k3s) seems a great option.
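
The `-- wide-open for demo` comment above already points at the tighter setup. A hedged SQL sketch of what that could look like, run through the same psql invocation as the hunk above; the read-only policy and the exact grants are illustrative assumptions, not part of this commit:

```sh
PGUSER=postgres PGPASSWORD=postgrespw PGHOST=localhost PGDATABASE=main PGPORT=5432 psql <<'SQL'
-- tighter alternative to the wide-open demo grants: RLS with a read-only policy for anon
ALTER TABLE iam.users ENABLE ROW LEVEL SECURITY;
REVOKE ALL ON iam.users FROM anon;
GRANT SELECT ON iam.users TO anon;
CREATE POLICY users_read_only ON iam.users FOR SELECT TO anon USING (true);
SQL
```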
@@ -143,7 +136,6 @@ export ZITADEL_ADMIN_PW=$(kubectl get secrets example-zitadel-firstinstance -o j
 
 Configure zitadel like in docker-compose. Then apply something like `https://raw.githubusercontent.com/edgeflare/pgo/refs/heads/main/k8s.yaml`
 
-
 ## Cleanup
 
 ```sh
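
Since the README now leans on sslip.io instead of /etc/hosts entries, a quick smoke test of the compose path (a sketch under the assumptions that the stack is up and the ports/hostnames are the ones shown in this diff):

```sh
# sslip.io names embed the IP they resolve to, so no /etc/hosts entry is needed
getent hosts iam.127-0-0-1.sslip.io api.127-0-0-1.sslip.io

# pgo rest on its published port; the api.* virtual host via envoy is noted above as not functional yet
curl -i http://localhost:8080/iam/users
```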

docker-compose.yaml

+47 -34
@@ -34,7 +34,7 @@ services:
       ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME: postgres
       ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD: postgrespw
       ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE: disable
-      ZITADEL_EXTERNALDOMAIN: iam.example.local
+      ZITADEL_EXTERNALDOMAIN: iam.192-168-0-121.sslip.io # resolves to 192.168.0.121 (gateway/envoy IP). use a LAN or otherwise accessible IP/hostname
       ZITADEL_EXTERNALPORT: 80
       ZITADEL_PORT: 8080
       ZITADEL_EXTERNALSECURE: false
@@ -73,6 +73,52 @@ services:
       timeout: 5s
       retries: 5
 
+  envoy-controlplane:
+    build:
+      context: "."
+      dockerfile: "./internal/util/envoy/Containerfile"
+    entrypoint: /envoy-controlplane
+    ports:
+    - 18000:18000 # xds-server
+    environment:
+    - "DEBUG=true"
+
+  envoy:
+    image: docker.io/envoyproxy/envoy:contrib-v1.33-latest
+    # command: [envoy, --config-path, /etc/bootstrap.yaml, --base-id, 1] # when used with envoy-controlplane for dynamic config
+    command: [envoy, --config-path, /etc/config.yaml, --base-id, 1]
+    privileged: true
+    ports:
+    - 9901:9901 # admin
+    - 80:10080 # http-proxy
+    volumes:
+    # - $PWD/internal/util/envoy/bootstrap.yaml:/etc/bootstrap.yaml:rw,Z # when used with envoy-controlplane
+    - $PWD/internal/util/envoy/config.yaml:/etc/config.yaml:rw,Z # hard-coded config
+    depends_on:
+    - envoy-controlplane
+
+  pgo-rest:
+    image: ghcr.io/edgeflare/pgo
+    command: [rest, --config, /rest/config.yaml]
+    ports:
+    - 8080:8080
+    volumes:
+    - $PWD/internal/util/pgo/config.yaml:/rest/config.yaml:rw,Z
+    depends_on:
+      app-db:
+        condition: service_healthy
+
+  pgo-pipeline:
+    image: ghcr.io/edgeflare/pgo
+    command: [pipeline, --config, /pipeline/config.yaml]
+    volumes:
+    - $PWD/internal/util/pgo/config.yaml:/pipeline/config.yaml:rw,Z
+    depends_on:
+      auth-db:
+        condition: service_healthy
+      app-db:
+        condition: service_healthy
+
   init-app-db:
     image: docker.io/bitnami/postgresql:17
     environment:
@@ -125,39 +171,6 @@ services:
       "
     restart: on-failure
 
-  envoy-controlplane:
-    build:
-      context: "."
-      dockerfile: "./internal/util/envoy/Containerfile"
-    entrypoint: /envoy-controlplane
-    ports:
-    - 18000:18000 # xds-server
-    environment:
-    - "DEBUG=true"
-
-  envoy:
-    image: docker.io/envoyproxy/envoy:contrib-v1.33-latest
-    command: [envoy, --config-path, /etc/bootstrap.yaml, --base-id, 1]
-    privileged: true
-    ports:
-    - 9901:9901 # admin
-    - 80:10080 # http-proxy
-    volumes:
-    - $PWD/internal/util/envoy/bootstrap.yaml:/etc/bootstrap.yaml:rw,Z
-    depends_on:
-    - envoy-controlplane
-
-  pgo-rest:
-    image: ghcr.io/edgeflare/pgo
-    command: [rest, --config, /rest/config.yaml]
-    ports:
-    - 8080:8080
-    volumes:
-    - $PWD/internal/util/pgo/config.yaml:/rest/config.yaml:rw,Z
-    depends_on:
-      app-db:
-        condition: service_healthy
-
 volumes:
   app-db:
   auth-db:
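
With the services above pinned to the static config, envoy can be exercised from the host roughly like this (a sketch; it assumes the published ports 80/9901 from the compose file and the virtual-host names from internal/util/envoy/config.yaml below):

```sh
docker compose up -d envoy pgo-rest

# admin API on the published 9901 port
curl -s http://localhost:9901/listeners
curl -s http://localhost:9901/clusters | grep -E 'iam_zitadel|pgo_rest'

# routing is Host-header based, so a virtual host can be hit explicitly
curl -i -H 'Host: api.127-0-0-1.sslip.io' http://localhost/iam/users
```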

internal/util/envoy/config.yaml

+104
@@ -0,0 +1,104 @@
+admin:
+  address:
+    socket_address:
+      address: 0.0.0.0
+      port_value: 9901
+
+static_resources:
+  listeners:
+  - name: listener_0
+    address:
+      socket_address:
+        address: 0.0.0.0
+        port_value: 10080
+    filter_chains:
+    - filters:
+      - name: envoy.filters.network.http_connection_manager
+        typed_config:
+          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
+          codec_type: auto
+          stat_prefix: ingress_http
+          route_config:
+            name: multi_service_route
+            virtual_hosts:
+            - name: iam_service
+              domains: ["iam.127-0-0-1.sslip.io", "iam.192-168-0-121.sslip.io", "iam.example.local", "zitadel.example.local"]
+              routes:
+              - match: { prefix: "/" }
+                route:
+                  cluster: iam_zitadel_service
+                  timeout: 0s
+                  max_stream_duration:
+                    grpc_timeout_header_max: 0s
+              cors:
+                allow_origin_string_match:
+                - prefix: "*"
+                allow_methods: GET, PUT, DELETE, POST, OPTIONS
+                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
+                max_age: "1728000"
+                expose_headers: custom-header-1,grpc-status,grpc-message
+            - name: pgo_service
+              domains: ["api.127-0-0-1.sslip.io", "api.example.local", "pgo.example.local"]
+              routes:
+              - match: { prefix: "/" }
+                route:
+                  cluster: pgo_rest_service
+                  timeout: 0s
+                  max_stream_duration:
+                    grpc_timeout_header_max: 0s
+              cors:
+                allow_origin_string_match:
+                - prefix: "*"
+                allow_methods: GET, PUT, DELETE, POST, OPTIONS
+                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
+                max_age: "1728000"
+                expose_headers: custom-header-1,grpc-status,grpc-message
+          http_filters:
+          - name: envoy.filters.http.grpc_web
+            typed_config:
+              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
+          - name: envoy.filters.http.cors
+            typed_config:
+              "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
+          - name: envoy.filters.http.router
+            typed_config:
+              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
+
+  clusters:
+  - name: iam_zitadel_service
+    connect_timeout: 0.25s
+    type: LOGICAL_DNS
+    typed_extension_protocol_options:
+      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
+        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
+        explicit_http_config:
+          http2_protocol_options: {}
+    lb_policy: round_robin
+    load_assignment:
+      cluster_name: iam_zitadel_cluster
+      endpoints:
+      - lb_endpoints:
+        - endpoint:
+            address:
+              socket_address:
+                address: iam-zitadel
+                port_value: 8080
+
+  - name: pgo_rest_service
+    connect_timeout: 0.25s
+    type: LOGICAL_DNS
+    typed_extension_protocol_options:
+      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
+        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
+        explicit_http_config:
+          http2_protocol_options: {}
+    lb_policy: round_robin
+    load_assignment:
+      cluster_name: pgo_rest_cluster
+      endpoints:
+      - lb_endpoints:
+        - endpoint:
+            address:
+              socket_address:
+                address: pgo-rest
+                port_value: 8080
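
Because this static file temporarily replaces the xds-driven bootstrap, it can be validated without starting the proxy (a sketch using the same image tag as the compose file; `--mode validate` parses the config and exits):

```sh
docker run --rm \
  -v $PWD/internal/util/envoy/config.yaml:/etc/config.yaml:ro \
  docker.io/envoyproxy/envoy:contrib-v1.33-latest \
  envoy --mode validate --config-path /etc/config.yaml
```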

internal/util/pgo/config.yaml

+5 -5
@@ -1,10 +1,10 @@
 rest:
   listenAddr: ":8080"
   pg:
-    # connString: "host=app-db port=5432 user=pgo password=pgopw dbname=main sslmode=prefer" # container
-    connString: "host=localhost port=5432 user=pgo password=pgopw dbname=main sslmode=prefer"
+    connString: "host=app-db port=5432 user=pgo password=pgopw dbname=main sslmode=prefer" # container
+    # connString: "host=localhost port=5432 user=pgo password=pgopw dbname=main sslmode=prefer"
   oidc:
-    issuer: http://iam.example.local
+    issuer: http://iam.192-168-0-121.sslip.io
     clientID: 311065325191888901
     clientSecret: WCHwhcHqOFj1igPCh8MvTdidnKMUcUiJV40fnuekKNmY3tdyS6CtIWfRrBjbG24w
     roleClaimKey: .policy.pgrole
@@ -16,13 +16,13 @@ pipeline:
   - name: auth-db
     connector: postgres
     config:
-      connString: "host=localhost port=5431 user=postgres password=postgrespw dbname=main sslmode=prefer replication=database"
+      connString: "host=auth-db port=5432 user=postgres password=postgrespw dbname=main sslmode=prefer replication=database"
       replication:
         tables: ["projections.users14"]
   - name: app-db
     connector: postgres
     config:
-      connString: "host=localhost port=5432 user=postgres password=postgrespw dbname=main sslmode=prefer"
+      connString: "host=app-db port=5432 user=postgres password=postgrespw dbname=main sslmode=prefer"
   - name: debug # logs CDC events to stdout
     connector: debug
 pipelines:
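
With both connStrings now pointing at the compose services, the auth-db to app-db sync can be spot-checked end to end (a sketch; the container name follows the edge_<service>_1 pattern used earlier in the README and may differ with your compose version):

```sh
# create a user in the ZITADEL UI, then confirm it shows up in app-db
PGUSER=postgres PGPASSWORD=postgrespw PGHOST=localhost PGPORT=5432 PGDATABASE=main \
  psql -c "SELECT id FROM iam.users LIMIT 5;"

# the debug connector logs CDC events to stdout
docker logs --tail 20 edge_pgo-pipeline_1
```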
