Including an internal-frontend service with auth enabled #571
Conversation
When the authorization config's authorizer is set to a non-default value, this will automatically include the "internal-frontend" service. Without this, when you enable JWT-based authorization, the worker service will fail repeatedly with an authorization failure. This pretty much follows the instructions in the release notes for 1.20: https://github.com/temporalio/temporal/releases/tag/v1.20.0
NOTE: I have not tested this with mTLS.
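For context, a rough sketch of the kind of values that would trigger this behaviour. The server.config.authorization path is the one this chart's templates read; the inner field names follow Temporal's server authorization config and are illustrative only, so adjust them to your identity provider:
server:
  config:
    authorization:
      # Setting a non-default authorizer/claimMapper here is what enables the
      # internal-frontend handling described above (values are illustrative).
      jwtKeyProvider:
        keySourceURIs:
          - "https://example.com/.well-known/jwks.json"
        refreshInterval: 1m
      permissionsClaimName: permissions
      authorizer: default
      claimMapper: default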
|
I have a suggestion: why decide based on the authorization settings? Could we simplify it with settings like

server:
  frontend:
    enabled: false
    # other config for frontend component
  internalFrontend:
    enabled: true
    # other config for internal-frontend component

and do the implementation based on those settings? Thank you
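If it helps, here is a minimal sketch of how a template could gate the internal-frontend pieces on that flag. The internalFrontend.enabled key is the one proposed above, not something the chart currently exposes:
{{- if $.Values.server.internalFrontend.enabled }}
# Render the internal-frontend config (and its Deployment/Service) only when
# the flag is set; otherwise keep the current behaviour.
internal-frontend:
  rpc:
    grpcPort: {{ $.Values.server.internalFrontend.service.port }}
    membershipPort: {{ $.Values.server.internalFrontend.service.membershipPort }}
    bindOnIP: "0.0.0.0"
{{- end }}
|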
Yeah, I like that approach better. I would still need to know when to remove the publicClient stuff; I can probably add another config value for that. I'll see about putting those changes in, ideally in the next few days. |
I think the publicClient would be required when the internal frontend is not enabled.
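For illustration, that condition could look something like this in the configmap template, assuming the internalFrontend.enabled flag suggested above:
{{- if not $.Values.server.internalFrontend.enabled }}
# publicClient is only needed when the worker goes through the public frontend.
publicClient:
  hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $.Values.server.frontend.service.port }}"
{{- end }}
|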
Hi,

Change 1
I had to add the following so that the authorization config is only rendered for the frontend service:
{{- if eq $service "frontend" }}
{{- with $server.config.authorization }}
authorization:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}

Change 2
The repo uses camelCase keys, so I used internalFrontend in values.yaml and added a helper to map it to the internal-frontend service name:
{{- define "serviceName" -}}
{{- $service := index . 0 -}}
{{- if eq $service "internalFrontend" }}
{{- print "internal-frontend" }}
{{- else }}
{{- print $service }}
{{- end }}
{{- end -}}

Change 3
Usage for service-configmap.yaml:
{{- range $originalService := (list "frontend" "internalFrontend" "history" "matching" "worker") }}
{{ $serviceValues := index $.Values.server $originalService }}
{{ $service := include "serviceName" (list $originalService) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ include "temporal.fullname" $ }}-config-{{ $service }}"

Change 4
Later in service-configmap.yaml I added the missing internal-frontend section (the condition is required) and a condition for publicClient:
{{- if ne $service "frontend"}}
internal-frontend:
rpc:
grpcPort: {{ $server.internalFrontend.service.port }}
httpPort: {{ $server.internalFrontend.service.httpPort }}
membershipPort: {{ $server.internalFrontend.service.membershipPort }}
bindOnIP: "0.0.0.0"
{{- end }}
# ...
{{- if eq $service "frontend"}}
publicClient:
hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $server.frontend.service.port }}"
{{- end }}

Change 5
In values.yaml:
internalFrontend:
service:
annotations: {}
type: ClusterIP
port: 7236
membershipPort: 6936
httpPort: 7246
ingress:
enabled: false
annotations: {}
hosts:
- "/"
tls: []
metrics:
annotations:
enabled: true
serviceMonitor: {}
prometheus: {}
podAnnotations: {}
podLabels: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
additionalEnv: []
containerSecurityContext: {}
topologySpreadConstraints: []
podDisruptionBudget: {}

Change 6
I changed the first part of server-service.yaml to loop over both frontend and internalFrontend:
{{- if $.Values.server.enabled }}
{{- range $originalService := (list "frontend" "internalFrontend") }}
{{ $serviceValues := index $.Values.server $originalService }}
{{ $service := include "serviceName" (list $originalService) }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "temporal.componentname" (list $ $service) }}
labels:
{{- include "temporal.resourceLabels" (list $ $originalService "") | nindent 4 }}
{{- if $serviceValues.service.annotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" $serviceValues.service.annotations "context" $) | nindent 4 }}
{{- end }}
spec:
type: {{ $serviceValues.service.type }}
ports:
- port: {{ $serviceValues.service.port }}
targetPort: rpc
protocol: TCP
name: grpc-rpc
{{- if hasKey $serviceValues.service "nodePort" }}
nodePort: {{ $serviceValues.service.nodePort }}
{{- end }}
- port: {{ $serviceValues.service.httpPort }}
targetPort: http
protocol: TCP
name: http
# TODO: Allow customizing the node HTTP port
selector:
app.kubernetes.io/name: {{ include "temporal.name" $ }}
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/component: {{ $service }}
---
{{- end }}

Change 7
To run admin tool commands inside the pod, I have to add an Authorization header:
temporal operator namespace create --namespace <namespace> --grpc-meta=Authorization='Bearer <token_from_ui>'
|
I had a similar issue after enabling the authorizer, and enabled internal-frontend by adding an extra service endpoint rather than an entirely new pod. I did it that way because I found that the frontend was throwing errors on startup when I had specified it as a separate pod.

For the deployment, pass SERVICES="frontend:internal-frontend" to the frontend container env:
{{- if and (eq $service "frontend") ($.Values.server.frontend.internal.enabled) }}
- name: SERVICES
value: "{{ $service }}:internal-frontend"
{{- else }}
- name: SERVICES
value: {{ $service }}
{{- end }}

Plus the extra port:
ports:
...
{{- if and (eq $service "frontend") ($.Values.server.frontend.internal.enabled) }}
- name: rpc-internal
containerPort: {{ $serviceValues.internal.service.port }}
protocol: TCP
{{- end }}

For the configmap, internal-frontend looks the same as you have:
{{- if $server.frontend.internal.enabled }}
internal-frontend:
rpc:
grpcPort: {{ $server.frontend.internal.service.port }}
membershipPort: {{ $server.frontend.internal.service.membershipPort }}
bindOnIP: "0.0.0.0"
{{- end }}

But publicClient is only rendered when the internal frontend is disabled:
{{- if $server.frontend.internal.enabled}}
{{- else }}
publicClient:
hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $server.frontend.service.port }}"
{{- end }}

My values file addition looks like this, with the same defaults as the compose setup this is all based on. Given that this setup re-uses the same frontend deployment for both internal and "external" traffic, I think it is intuitive to nest the values under frontend:
frontend:
...
# Enables internal-frontend so that the builtin worker can access the frontend while
# bypassing authorizer and claim mapper.
# Equivalent to env.USE_INTERNAL_FRONTEND in docker compose config
internal:
enabled: false
service:
# Evaluated as template
annotations: {}
type: ClusterIP
port: 7236
membershipPort: 6936

EDIT: I have also confirmed the above setup works with internode TLS. I haven't checked TLS for client connections, but suspect it would work the same way.
|
There appears to be another PR to accomplish this here as well: #602 |
Yeah, I just ran it locally and it seems to meet my needs nicely. I'll close this PR in favor of #602. |
NOTE: I have not tested this with mTLS.
What was changed
This adds an internal-frontend portion to the config map when a non-default authorizer is enabled, and removes the publicClient section if it's present.
It starts the internal-frontend deployment using the same parameters as the normal frontend, but with slightly different ports (7236, 6936).
This pretty much follows the instructions in the release notes for 1.20: https://github.com/temporalio/temporal/releases/tag/v1.20.0
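For illustration, this is roughly the rendered server config the change aims to produce when a non-default authorizer is set (ports taken from the description above; the exact rendered output may differ):
services:
  # ... frontend/history/matching/worker sections as before ...
  internal-frontend:
    rpc:
      grpcPort: 7236
      membershipPort: 6936
      bindOnIP: "0.0.0.0"
# The publicClient section is omitted, so the worker dials the internal
# frontend and bypasses the authorizer and claim mapper.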
Why?
When one enables authorization using the recent JWT authorization support, the worker pod fails repeatedly with an authorization failure because it is not providing a JWT token. By following the above instructions, we bring up a special "internal frontend" that the worker uses for internal communications instead, bypassing this requirement and allowing it to start up properly.
Closes issue [Feature Request] Ability to specify internal frontend in Helm chart #560
How was this tested:
Added the requisite configuration, and ensured that the Temporal worker starts successfully.
Added a section to the readme.