
Feature Request: Enable Native TLS Support for Cloud Run Deployments #365

Open
gbhall opened this issue Jan 26, 2025 · 5 comments
Labels
good first issue (Good for newcomers), help wanted (Extra attention is needed)

Comments


gbhall commented Jan 26, 2025

Description

Currently, deploying the application to Google Cloud Run necessitates providing TLS key and certificate files as arguments during deployment. However, Cloud Run manages HTTPS termination externally, forwarding decrypted HTTP traffic to the container. This dual TLS handling results in conflicts and errors such as:

"Client sent an HTTP request to an HTTPS server."

http: TLS handshake error from <IP>: EOF

Additionally, omitting TLS arguments prevents the application from starting, as it relies on these configurations for secure operations.

Error loading public key: open : no such file or directory

Proposed Solution

Introduce a configuration option or modification within the application to conditionally handle TLS based on the deployment environment or the presence of specific environment variables. This would enable the application to:

1. Detect Deployment Context:
   - If deployed on Cloud Run, disable internal TLS handling and serve plain HTTP traffic.
   - If deployed elsewhere (e.g., on-premises or on another cloud platform), enable TLS as required.

2. Configuration Flexibility:
   - Use environment variables to toggle TLS functionality, allowing for dynamic configuration without altering the deployment command.

3. Enhanced Compatibility:
   - Ensure that the application aligns with Cloud Run's architecture, leveraging its managed HTTPS termination without internal conflicts.
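A minimal sketch of what such a toggle could look like in Go (the language the proxy is written in). The DISABLE_TLS variable name and the tlsEnabled helper are hypothetical illustrations, not part of the existing code base; the idea is that an explicit, loudly named opt-out keeps accidental plain-HTTP deployments unlikely:

```go
package main

import (
	"fmt"
	"os"
)

// tlsEnabled decides whether the proxy should terminate TLS itself.
// DISABLE_TLS is a hypothetical environment variable, not an existing flag.
func tlsEnabled(getenv func(string) string, certPath, keyPath string) bool {
	if getenv("DISABLE_TLS") == "1" {
		// The platform (e.g. Cloud Run) terminates HTTPS externally,
		// so the container serves plain HTTP.
		return false
	}
	// Otherwise serve TLS when both files are configured; a real
	// implementation would refuse to start if they are missing.
	return certPath != "" && keyPath != ""
}

func main() {
	fmt.Println("terminate TLS locally:",
		tlsEnabled(os.Getenv, "/config/tls-cert/tls-cert.pem", "/config/tls-key/tls-key.pem"))
}
```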

Current Deployment Command

gcloud run deploy tesla-proxy \
  --image=tesla/vehicle-command:latest \
  --platform=managed \
  --region=us-central1 \
  --allow-unauthenticated \
  --port=8080 \
  --set-secrets="/config/tls-key/tls-key.pem=TESLA_TLS_KEY:latest" \
  --set-secrets="/config/tls-cert/tls-cert.pem=TESLA_TLS_CERT:latest" \
  --set-secrets="/config/fleet-key/fleet-key.pem=TESLA_PRIVATE_KEY:latest" \
  --args="-tls-key","/config/tls-key/tls-key.pem","-cert","/config/tls-cert/tls-cert.pem","-key-file","/config/fleet-key/fleet-key.pem","-host","0.0.0.0","-port","8080","-verbose"
sethterashima (Collaborator) commented Jan 28, 2025

Our concern with similar proposals has been that if we make it easy to deploy the proxy without TLS, then people who don't know what they're doing will end up deploying it without TLS while exposed to the public Internet. We've had problems with third-party apps being misconfigured in this way in the past.

I'm open to suggestions for accomplishing what you're after, but it needs friction. Friction is load-bearing here; it discourages an enthusiastic but non-technical DIYer from blindly adding flags or running commands in a terminal. For example, even though disabling TLS breaks server authentication rather than client authentication, that breakage deters most users from setting up insecure deployments because of the red flags that get raised when server authentication fails.

One alternative: Use this repository as a package instead of using tesla-proxy directly. Tesla proxy is a thin wrapper around the proxy package, and you could write another thin wrapper that does what you want without needing to fork.

gbhall (Author) commented Jan 28, 2025

Okay, thank you, Seth. That's perfectly reasonable. Let me try and think of a solution that will make it feasible to deploy to certain platforms but still maintain enough friction to not allow people to irresponsibly deploy butchered, insecure instances.

gbhall (Author) commented Jan 28, 2025

@sethterashima I managed to get it to run successfully in Cloud Run with the following commit: gbhall@813d0e8

I'd still like to incorporate this into the official repo. I understand this commit removes the friction you're describing; if you're happy, I can add some of it back. That said, this implementation works perfectly with Cloud Run, which is an ideal platform for deployment.

@lotharbach

I consider the following a workaround and still wish for a proper config switch; running a second proxy is probably the maximum "friction" possible.

@gbhall I am able to run the unmodified tesla-http-proxy on Cloud Run with an nginx sidecar, similar to what's described at https://cloud.google.com/run/docs/internet-proxy-nginx-sidecar:

Run tesla-http-proxy with (self-signed) HTTPS on -port 4443, and have nginx listen on HTTP port 8080, proxying everything with proxy_pass https://127.0.0.1:4443;

It is of course not quite right to proxy once again and go back to HTTPS, and running nginx wastes some resources. But it works, at my current scale of one vehicle that doesn't matter, and it's easier than maintaining a fork or thin wrapper. I'm still exploring whether this is a good "free" deployment option.

service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    run.googleapis.com/ingress: all
    run.googleapis.com/ingress-status: all
    run.googleapis.com/urls: '["https://tesla-proxy-123myid456.europe-west3.run.app"]'
    # Required to use Cloud Run multi-containers (preview feature)
    run.googleapis.com/launch-stage: BETA
  labels:
    cloud.googleapis.com/location: europe-west3
    run.googleapis.com/satisfiesPzs: 'true'
  name: tesla-proxy
  namespace: '123myid456'
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: '100'
        run.googleapis.com/client-name: gcloud
        run.googleapis.com/client-version: 507.0.0
        run.googleapis.com/startup-cpu-boost: 'true'
        # Defines container startup order within multi-container service.
        # https://cloud.google.com/run/docs/configuring/containers#container-ordering
        run.googleapis.com/container-dependencies: "{nginx: [tesla-proxy]}"
      labels:
        client.knative.dev/nonce: pewaucnsov
        run.googleapis.com/startupProbeType: Default
    spec:
      containerConcurrency: 80
      containers:
      # A) Serving ingress container "nginx" listening at PORT 8080
      # Main entrypoint of multi-container service.
      # https://cloud.google.com/run/docs/container-contract#port
      - image: nginx
        name: nginx
        ports:
          - name: http1
            containerPort: 8080
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
        volumeMounts:
          - name: nginx-conf-secret
            readOnly: true
            mountPath: /etc/nginx/conf.d/
        startupProbe:
          timeoutSeconds: 240
          periodSeconds: 240
          failureThreshold: 1
          tcpSocket:
            port: 8080
      - name: tesla-proxy
        args:
        - -tls-key
        - /config/tls-key/tls-key.pem
        - -cert
        - /config/tls-cert/tls-cert.pem
        - -key-file
        - /config/fleet-key/fleet-key.pem
        - -host
        - 0.0.0.0
        - -port
        - '4443'
        - -verbose
        image: tesla/vehicle-command:0.3.0
        resources:
          limits:
            cpu: 1000m
            memory: 512Mi
        startupProbe:
          failureThreshold: 1
          periodSeconds: 240
          tcpSocket:
            port: 4443
          timeoutSeconds: 240
        volumeMounts:
        - mountPath: /config/tls-key
          name: TESLA_TLS_KEY-gob-vuw-kux
        - mountPath: /config/tls-cert
          name: TESLA_TLS_CERT-xor-boz-mab
        - mountPath: /config/fleet-key
          name: TESLA_PRIVATE_KEY-sab-zan-riy
      serviceAccountName: [email protected]
      timeoutSeconds: 300
      volumes:
      - name: TESLA_TLS_KEY-gob-vuw-kux
        secret:
          items:
          - key: latest
            path: tls-key.pem
          secretName: TESLA_TLS_KEY
      - name: TESLA_TLS_CERT-xor-boz-mab
        secret:
          items:
          - key: latest
            path: tls-cert.pem
          secretName: TESLA_TLS_CERT
      - name: TESLA_PRIVATE_KEY-sab-zan-riy
        secret:
          items:
          - key: latest
            path: fleet-key.pem
          secretName: TESLA_PRIVATE_KEY
      - name: nginx-conf-secret
        secret:
          secretName: nginx_config
          items:
            - key: latest
              path: default.conf
  traffic:
  - latestRevision: true
    percent: 100
nginx.conf
server {
    # Listen on port 8080
    listen 8080;
    # Catch-all server name
    server_name _;
    # Enables gzip compression to make our app faster
    gzip on;

    location / {
        proxy_pass   https://127.0.0.1:4443;
    }
}

@thefireblade thefireblade added help wanted Extra attention is needed good first issue Good for newcomers labels Feb 21, 2025
@netdata-be

What about using the X-Forwarded-Proto header to decide when to serve the request over plain HTTP? It would address the concern about non-technical people running the proxy without TLS.

If the header is set by the reverse proxy / TLS-offloading layer, that should be enough, right?
