
DB Credentials incorrect when within main pod #56

Open
huang-jy opened this issue Jan 24, 2025 · 10 comments

Comments

@huang-jy
Contributor

huang-jy commented Jan 24, 2025

After fixing the startup issue, I've now tried to create the test user using the example provided in the docs, tweaked a bit to represent my local setup:

$ kubectl exec -n pixelfed -it pixelfed-bd8cb75b9-f6jcj -- /bin/bash -c "php artisan user:create --name=myname --username=myusername [email protected] --password=password --is_admin=false --confirm_email=true"
Creating a new user...

   Illuminate\Database\QueryException 

  SQLSTATE[08006] [7] connection to server at "postgresql" (10.108.150.21), port 5432 failed: FATAL:  password authentication failed for user "postgres" (Connection: pgsql, SQL: insert into "users" ("username", "name", "email", "password", "is_admin", "email_verified_at", "updated_at", "created_at") values (myusername, myname, [email protected], $2y$10$.nOUNAJHdVaCbMuGfiX3JuyZH/rGOtII3Lfrr6UZDL4hH/xIwKeE6, 0, 2025-01-24 19:00:34, 2025-01-24 19:00:34, 2025-01-24 19:00:34) returning "id")

  at vendor/laravel/framework/src/Illuminate/Database/Connection.php:825
    821▕                     $this->getName(), $query, $this->prepareBindings($bindings), $e
    822▕                 );
    823▕             }
    824▕ 
  ➜ 825▕             throw new QueryException(
    826▕                 $this->getName(), $query, $this->prepareBindings($bindings), $e
    827▕             );
    828▕         }
    829▕     }

      +19 vendor frames 

  20  app/Console/Commands/UserCreate.php:57
      Illuminate\Database\Eloquent\Model::save()
      +12 vendor frames 

  33  artisan:35
      Illuminate\Foundation\Console\Kernel::handle()

command terminated with exit code 1

It appears to be able to connect to the database, but not able to authenticate with it.
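
For what it's worth, one way to compare what the app pod thinks the credentials are against the database itself would be something like the following (the postgresql-0 pod name and the DB_* environment variable names are assumptions about how the chart wires things up, so adjust as needed):

kubectl exec -n pixelfed -it pixelfed-bd8cb75b9-f6jcj -- /bin/bash -c "env | grep -E '^DB_'"
kubectl exec -n pixelfed -it postgresql-0 -- psql -U postgres -d pixelfed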

I suspect this may also be the reason why I see this on the homepage:

[screenshot of the homepage attached]

@jessebot
Collaborator

Hmmm, I'm not sure why this is happening.

  1. Can you confirm which version of the chart you are using?

  2. Can you provide your values.yaml (after anonymizing any sensitive data)?

@huang-jy
Contributor Author

I used both my forked repo (after I resynced it) and the one from your repo.

Let me check my values.yaml against the base values.yaml.

@jessebot
Collaborator

Are you completely wiping in between test installs? Also did you clean up any persistent volumes after your installs?

@huang-jy
Contributor Author

100%. I explicitly go in and delete all PVCs, and then the namespace as well, when I'm uninstalling.
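
Roughly, the teardown looks like this (release name assumed to be pixelfed):

helm uninstall pixelfed -n pixelfed
kubectl delete pvc --all -n pixelfed
kubectl delete namespace pixelfed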

Also, there seems to be a discrepancy between the values returned when doing

helm inspect values pixelfed-jessebot/pixelfed >pixelfed-values.yaml

and the values.yaml in the repo.

For the most part, it's just comments

But these blocks:

persistence:
  # -- enable persistence for the pixelfed pod
  enabled: false
  # -- storage class name
  storageClassName: ""
  # -- size of the persistent volume claim to create. Ignored if persistence.existingClaim is set
  storage: 2Gi
  # -- accessMode
  accessModes:
    - ReadWriteOnce
  # -- using an existing PVC instead of creating one with this chart
  existingClaim: ""

mariadb:
  # -- enable mariadb subchart - currently experimental for this chart
  # read more about the values: https://github.com/bitnami/charts/tree/main/bitnami/mariadb
  enabled: false
  auth:
    # -- Name for a custom database to create
    database: "pixelfed"
    # -- Name for a custom user to create
    username: "pixelfed"
    # -- Password for the root user. Ignored if existing secret is provided.
    rootPassword: "newRootPassword123"
    # -- Password for the new user. Ignored if existing secret is provided
    password: "newUserPassword123"
    # -- MariaDB replication user password. Ignored if existing secret is provided
    replicationPassword: "newReplicationPassword123"
    # -- Use existing secret for password details (auth.rootPassword,
    # auth.password, auth.replicationPassword will be ignored and picked up
    # from this secret). The secret has to contain the keys mariadb-root-password,
    # mariadb-replication-password and mariadb-password
    existingSecret: new-password-secret

are in the repo values.yaml, but not in the chart values extracted by helm inspect.

@huang-jy
Contributor Author


Here's my values.yaml:

image:
  registry: ghcr.io
  # -- you can see the source [ghcr.io/mattlqx/docker-pixelfed](https://ghcr.io/mattlqx/docker-pixelfed)
  repository: mattlqx/docker-pixelfed
  # -- This sets the pull policy for images.
  pullPolicy: IfNotPresent
  # -- Overrides the image tag whose default is the chart appVersion
  # (v0.12.4-nginx is currently broken due to migration errors with postgresql,
  # so please either pin a sha tag or use dev-nginx as the tag)
  # tag: "7d1d62c8552683225456c2a552ba8ca36afb24b32f706e425310de5bf84aeab1"
  tag: dev-nginx

# This block is for setting up the ingress; more information can be found here:
# https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
  # -- enable deploying an Ingress resource (network traffic from outside the cluster)
  enabled: true
  # -- ingress class name, e.g. nginx
  className: "nginx"
  # annotations to apply to the Ingress resource
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
  hosts:
    - host: pixelfed.mydomain.com
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: pixelfed-tls
      hosts:
        - pixelfed.mydomain.com

postgresql:

  volumePermissions:
    # -- If you get "mkdir: cannot create directory ‘/bitnami/postgresql/data’: Permission denied"
    # error, set these (This often happens on setups like minikube)
    enabled: true

pixelfed:
  # -- timezone for docker container
  timezone: "europe/london"

  # app specific settings
  app:
    # -- change this to the domain of your pixelfed instance
    url: "https://pixelfed.mydomain.com"
    # -- change this to the language code of your pixelfed instance
    locale: "en"
    # -- The domain of your server, without https://
    domain: "pixelfed.mydomain.com"

  # -- Enable open registration for new accounts
  open_registration: false

  # Mail Configuration (Post-Installer)
  mail:
    # -- options: "smtp" (default), "sendmail", "mailgun", "mandrill", "ses"
    # "sparkpost", "log", "array"
    driver: smtp
    # -- mail server hostname
    host: smtp.mailgun.org
    # -- mail server port
    port: 465
    # -- mail server username
    username: "[email protected]"
    # -- mail server password
    password: "REDACTED"
    # -- mail server encryption type
    encryption: "tls"
    # -- address to use for sending emails
    from_address: "[email protected]"
    # -- name to use for sending emails
    from_name: "Pixelfed"

Obviously mydomain.com isn't actually my domain; with the real domain in place, the certificate is issued correctly.

@huang-jy
Contributor Author

huang-jy commented Jan 27, 2025

As a test, I completely tore down and rebuilt my minikube cluster between tests, and both the dev-nginx and the sha-based image tags fail. What I've noticed is that on both images I do not see the migrations being run, even if I set pixelfed.db.apply_new_migrations_automatically to true. So does that potentially mean the credentials aren't being set on the DB?

Is there a way to trigger the migrations manually?
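
(In a plain Laravel app I'd expect something like php artisan migrate --force to do it, e.g.

kubectl exec -n pixelfed -it pixelfed-bd8cb75b9-f6jcj -- /bin/bash -c "php artisan migrate --force"

but I'm not sure whether this chart/image expects migrations to be triggered differently.)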

@jessebot
Collaborator

Are you setting the database credentials anywhere? You should have something like this:

postgresql:
  auth:
    user: "user"
    postgresPassword: "newPostgresPassword123"
    password: "newUserPassword123"
    replicationPassword: "newReplicationPassword123"

Source for the subchart where the above parameters are from:
https://github.com/bitnami/charts/blob/main/bitnami/postgresql/README.md#parameters
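
For example, these could go in your values.yaml, or be passed at install time with --set (the release and namespace names below are just placeholders):

helm upgrade --install pixelfed pixelfed-jessebot/pixelfed -n pixelfed \
  --set postgresql.auth.postgresPassword=newPostgresPassword123 \
  --set postgresql.auth.password=newUserPassword123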

> Is there a way to trigger the migrations manually?

I'm not sure actually! I would need to check upstream 🤔

> Also, there seems to be a discrepancy between the values returned when doing...

That is strange. Can you tell me which version of the chart you have locally? Have you tried helm repo update?
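
For example (assuming the repo alias pixelfed-jessebot from your earlier command):

helm repo update
helm search repo pixelfed-jessebot/pixelfed --versions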

@huang-jy
Contributor Author

> Are you setting the database credentials anywhere? You should have something like this:

No, I'm using the base values as a starting point. And in this block:

postgresql:
  # -- enable the bundled [postgresql sub chart from Bitnami](https://github.com/bitnami/charts/blob/main/bitnami/postgresql/README.md#parameters).
  # Must set to true if externalDatabase.enabled=false
  enabled: true
  fullnameOverride: "postgresql"
  global:
    storageClass: ""
  volumePermissions:
    # -- If you get "mkdir: cannot create directory ‘/bitnami/postgresql/data’: Permission denied"
    # error, set these (This often happens on setups like minikube)
    enabled: false

You aren't setting credentials, so the base subchart values apply, which I assume are these:

https://github.com/bitnami/charts/blob/c1289ad737b591de32984fa074fbfe938459319f/bitnami/postgresql/values.yaml#L140-L175

Are we expected to provide PG credentials when installing the chart?

> Also, there seems to be a discrepancy between the values returned when doing...
>
> That is strange. Can you tell me which version of the chart you have locally? Have you tried helm repo update?

I'm using my fork of your repo so it should be up to date. They now seem mysteriously in sync (bar a blank newline), so you can ignore this bit.

@huang-jy
Contributor Author

Okay, I think I might know what is going on.

When the chart is deployed, it creates the pixelfed and PostgreSQL pods. However, the PostgreSQL pod takes longer to start up than the pixelfed pod, so when the pixelfed container tries to communicate with the DB it fails, and it looks like it doesn't retry.

If I restart the deployment rollout I see the migrations.
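
That is, just something like the following, assuming the deployment is named pixelfed:

kubectl rollout restart deployment/pixelfed -n pixelfed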

On some charts, such as the Bitnami Mastodon chart, they have an init container that waits for other services (such as Elasticsearch) to come up; see here:

https://github.com/bitnami/charts/blob/c1289ad737b591de32984fa074fbfe938459319f/bitnami/mastodon/templates/web/deployment.yaml#L119-L129

We should probably have something similar here, since PostgreSQL is essential for Pixelfed to run. Happy to take a whack at it if you want.

@huang-jy
Contributor Author

Looks like we can do this by simply setting:

      extraInitContainers:
        - name: init-wait-for-postgres
          image: busybox
          command:
            [
              "sh",
              "-c",
              "until nc -zv -w5 postgresql 5432; do echo waiting for postgresql; sleep 2; done",
            ]

Of course, this doesn't account for installations where the DB is external, so that needs to be taken into consideration.
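
If the wait were baked into the chart's deployment template instead of passed in via values, a rough sketch could pick the host and port depending on which backend is enabled (the externalDatabase.host and externalDatabase.port value names are assumptions; I haven't checked what this chart actually exposes for external databases):

{{- if .Values.postgresql.enabled }}
      initContainers:
        - name: init-wait-for-postgres
          image: busybox
          command:
            ["sh", "-c", "until nc -zv -w5 postgresql 5432; do echo waiting for postgresql; sleep 2; done"]
{{- else if .Values.externalDatabase.enabled }}
      initContainers:
        - name: init-wait-for-db
          image: busybox
          command:
            ["sh", "-c", "until nc -zv -w5 {{ .Values.externalDatabase.host }} {{ .Values.externalDatabase.port }}; do echo waiting for the database; sleep 2; done"]
{{- end }}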
