Including an internal-frontend service with auth enabled #571

Conversation

dleblanc

NOTE: I have not tested this with mTLS.

What was changed

This adds an internal-frontend portion to the config map when a non-default authorizer is enabled, and removes the publicClient section if it's present.

It starts the internal-frontend deployment using the same parameters as the normal frontend, but slightly different ports (7236, 6936).

This pretty much follows the instructions in the release notes for 1.20: https://github.com/temporalio/temporal/releases/tag/v1.20.0
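
For reference, the resulting static config section looks roughly like this (a sketch based on those release notes and the ports above; the chart's actual rendered output may differ):

services:
  # ... existing frontend, history, matching, worker entries ...
  internal-frontend:
    rpc:
      grpcPort: 7236
      membershipPort: 6936
      bindOnIP: "0.0.0.0"

# publicClient is removed entirely; per the release notes, its absence is
# what tells the worker role to route internal calls via internal-frontend.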

Why?

When one enables authorization using the recent JWT authorization support, the worker pod fails repeatedly with an authorization failure because it does not provide a JWT token. By following the above instructions, we bring up a special "internal frontend" that the worker uses for internal communications instead, bypassing this requirement and allowing the worker to start up properly.

  1. Closes issue [Feature Request] Ability to specify internal frontend in Helm chart #560

  2. How was this tested:

Added the requisite configuration and ensured that the Temporal worker starts successfully.

  3. Any docs updates needed?

Added a section to the readme.

When the authorization config's authorizer is set to a non-default value, this will
automatically include the "internal-frontend" service. Without it, enabling
JWT-based authorization causes the worker service to fail repeatedly with an
authorization failure.

@dleblanc dleblanc requested a review from a team as a code owner September 30, 2024 23:18
@adamko147

adamko147 commented Oct 15, 2024

I have a suggestion: why decide based on authorization settings? Could we simplify it with settings like

server:
    frontend:
        enabled: false
        # other config for frontend component
    internalFrontend:
        enabled: true
        # other config for internal-frontend component

and do the implementation based on {{ .Values.server.frontend.enabled }} and {{ .Values.server.internalFrontend.enabled }} instead of indirect logic based on the authorization settings? In my opinion, that would be a more flexible solution that fits more use cases. For example, in our use case we have a custom frontend build with an authorizer that is independent of the server.config.authorization value, so your solution would not work for us without tweaking the Helm chart.
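
Concretely, the gating could look something like this (an illustrative sketch only, not the chart's actual templates; internalFrontend here is the proposed values key above):

{{- range $name := (list "frontend" "internalFrontend") }}
{{- $cfg := index $.Values.server $name }}
{{- if $cfg.enabled }}
# ... render the Deployment/Service/ConfigMap pieces for {{ $name }} here ...
{{- end }}
{{- end }}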

Thank you

@dleblanc
Author

I have a suggestion: why decide based on authorization settings? Could we simplify it with settings like

server:
    frontend:
        enabled: false
        # other config for frontend component
    internalFrontend:
        enabled: true
        # other config for internal-frontend component

Yeah, I like that approach better. I would still need to know when to remove the publicClient stuff, I can probably add another config value for that.

I'll see about putting those changes in, ideally in the next few days.

@adamko147

Yeah, I like that approach better. I would still need to know when to remove the publicClient stuff, I can probably add another config value for that.

I think publicClient would be required when frontend.enabled == true, assuming that if you're deploying the built-in frontend (not your own), you want it to be accessible by clients.
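
In template terms that would tie publicClient to the frontend flag rather than to the authorizer, roughly like this (a sketch reusing the chart's existing temporal.componentname helper):

{{- if $.Values.server.frontend.enabled }}
publicClient:
  hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $.Values.server.frontend.service.port }}"
{{- end }}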

@TheGeniesis

TheGeniesis commented Oct 16, 2024

Hi,
You read my mind - I have a similar scenario: I want to use service principals and Azure Entra ID to handle user and app auth.
Since you already created an MR, I want to highlight some changes I made to get everything working:

The source version was 0.44.0.
I made my changes in an internal project to check that everything works before raising an MR against the original repo.

Change 1

I had to add an if block in server-configmap.yaml. Only the frontend is used externally, so only this service should have OAuth enabled (otherwise everything fails).

      {{- if eq $service "frontend" }}
      {{- with $server.config.authorization }}
      authorization:
        {{- toYaml . | nindent 10 }}
      {{- end }}
      {{- end }}

Change 2

The repo uses camelCase, so I used internalFrontend in values.yaml and added this to _helpers.tpl:

{{- define "serviceName" -}}
    {{- $service := index . 0 -}}
    {{- if eq $service "internalFrontend" }}
        {{- print "internal-frontend" }}
    {{- else }}
        {{- print $service }}
    {{- end }}
{{- end -}}

Change 3

Usage in server-configmap.yaml:

{{- range $originalService := (list "frontend" "internalFrontend" "history" "matching" "worker") }}
{{ $serviceValues := index $.Values.server $originalService }}
{{ $service := include "serviceName" (list $originalService) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ include "temporal.fullname" $ }}-config-{{ $service }}"

Notes:
  • serviceName might not be the best name here
  • internalFrontend should be added according to the condition in the comment above
  • I added internalFrontend to all places where frontend was used (adding a loop where needed)

Change 4

Later in server-configmap.yaml I added the missing section (condition required) and a condition for publicClient:

      {{- if ne $service "frontend"}}
      internal-frontend:
        rpc:
            grpcPort: {{ $server.internalFrontend.service.port }}
            httpPort: {{ $server.internalFrontend.service.httpPort }}
            membershipPort: {{ $server.internalFrontend.service.membershipPort }}
            bindOnIP: "0.0.0.0"
      {{- end }}
# ...
   {{- if eq $service "frontend"}}
   publicClient:
     hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $server.frontend.service.port }}"
   {{- end }}

Change 5

Values.yaml

  internalFrontend:
    service:
      annotations: {}
      type: ClusterIP
      port: 7236
      membershipPort: 6936
      httpPort: 7246
    ingress:
      enabled: false
      annotations: {}
      hosts:
        - "/"
      tls: []
    metrics:
      annotations:
        enabled: true
      serviceMonitor: {}
      prometheus: {}
    podAnnotations: {}
    podLabels: {}
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    additionalEnv: []
    containerSecurityContext: {}
    topologySpreadConstraints: []
    podDisruptionBudget: {}

Change 6

I changed the first part of server-service.yaml to loop over frontend and internalFrontend:

{{- if $.Values.server.enabled }}
{{- range $originalService := (list "frontend" "internalFrontend") }}
{{ $serviceValues := index $.Values.server $originalService }}
{{ $service := include "serviceName" (list $originalService) }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "temporal.componentname" (list $ $service) }}
  labels:
    {{- include "temporal.resourceLabels" (list $ $originalService "") | nindent 4 }}
  {{- if $serviceValues.service.annotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" $serviceValues.service.annotations "context" $) | nindent 4 }}
  {{- end }}
spec:
  type: {{ $serviceValues.service.type }}
  ports:
    - port: {{ $serviceValues.service.port }}
      targetPort: rpc
      protocol: TCP
      name: grpc-rpc
      {{- if hasKey $serviceValues.service "nodePort" }}
      nodePort: {{ $serviceValues.service.nodePort }}
      {{- end }}
    - port: {{ $serviceValues.service.httpPort }}
      targetPort: http
      protocol: TCP
      name: http
      # TODO: Allow customizing the node HTTP port
  selector:
    app.kubernetes.io/name: {{ include "temporal.name" $ }}
    app.kubernetes.io/instance: {{ $.Release.Name }}
    app.kubernetes.io/component: {{ $service }}
---
{{- end }}

Change 7

To run admintool commands inside the pod, I had to add an Authorization header:

temporal operator namespace create --namespace <namespace> --grpc-meta=Authorization='Bearer <token_from_ui>'
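
If you need to run several commands, keeping the token in a shell variable saves some copy and paste (same placeholders as above):

TOKEN='<token_from_ui>'
temporal operator namespace create --namespace <namespace> --grpc-meta=Authorization="Bearer ${TOKEN}"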

@qs-synth
Contributor

qs-synth commented Oct 25, 2024

I had a similar issue after enabling the authorizer, and I enabled internal-frontend by adding an extra service endpoint rather than an entirely new pod. I did it that way because I found that the frontend was throwing errors on startup when I specified publicClient alongside services.internal-frontend; the two configurations are mutually exclusive. The way I worked around it only takes a few changes.

For the deployment, pass env SERVICES="frontend:internal-frontend" to frontend container env

          {{- if and (eq $service "frontend") ($.Values.server.frontend.internal.enabled) }}
            - name: SERVICES
              value: "{{ $service }}:internal-frontend"
          {{- else }}
            - name: SERVICES
              value: {{ $service }}
          {{- end }}

Plus the extra port

          ports:
          ...
          {{- if and (eq $service "frontend") ($.Values.server.frontend.internal.enabled) }}
            - name: rpc-internal
              containerPort: {{ $serviceValues.internal.service.port }}
              protocol: TCP
          {{- end }}

For the configmap, internal-frontend looks the same as you have

      {{- if $server.frontend.internal.enabled }}
      internal-frontend:
        rpc:
          grpcPort: {{ $server.frontend.internal.service.port }}
          membershipPort: {{ $server.frontend.internal.service.membershipPort }}
          bindOnIP: "0.0.0.0"
      {{- end }}

But publicClient has to be removed when internal is used; this prevents the frontend from throwing startup errors.

    {{- if not $server.frontend.internal.enabled }}
    publicClient:
      hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $server.frontend.service.port }}"
    {{- end }}

My values file addition looks like this. It uses the same defaults as the compose setup this is all based on. Given that this setup reuses the same frontend deployment for both internal and "external" traffic, I think it is intuitive to nest the values under frontend.

  frontend:
    ...
    # Enables internal-frontend so that the builtin worker can access the frontend while
    # bypassing authorizer and claim mapper.
    # Equivalent to env.USE_INTERNAL_FRONTEND in docker compose config
    internal:
      enabled: false
      service:
        # Evaluated as template
        annotations: {}
        type: ClusterIP
        port: 7236
        membershipPort: 6936

EDIT: I have also confirmed that the above setup works with internode TLS. I haven't checked TLS for client connections, but I suspect it would work the same way.

@dleblanc
Author

There appears to be another PR to accomplish this here as well: #602

@adamko147

There appears to be another PR to accomplish this here as well: #602

I like #602; it's the kind of implementation I had in mind. @dleblanc, would #602 cover your use case as well? If yes, I would prefer to give it a "push" :)

Thanks

@dleblanc
Author

There appears to be another PR to accomplish this here as well: #602

I like #602; it's the kind of implementation I had in mind. @dleblanc, would #602 cover your use case as well? If yes, I would prefer to give it a "push" :)

Thanks

Yeah, I just ran it locally and it seems to meet my needs nicely. I'll close this PR in favor of #602.

@dleblanc dleblanc closed this Oct 28, 2024