DB Credentials incorrect when within main pod #56
Hmmm, I'm not sure why this is happening.

I used both my forked repo (after I resynced it) and the one from your repo. Let me check the chart's values.yaml against the base values.yaml.

Are you completely wiping everything in between test installs? Also, did you clean up any persistent volumes after your installs?
100%, I explicitly go in and delete all PVCs and then the namespace as well when I'm uninstalling. Also, there seems to be a discrepancy between the values returned by `helm inspect` and the values.yaml in the repo. For the most part it's just comments, but these blocks:

pixelfed-chart/charts/pixelfed/values.yaml, lines 309 to 321 in d5899ba
pixelfed-chart/charts/pixelfed/values.yaml, lines 617 to 636 in d5899ba

are in the repo values.yaml, but not in the chart values extracted by `helm inspect`.
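One way to check this kind of discrepancy is to dump the packaged chart's values and diff them against the checkout. A rough sketch; the repo alias `pixelfed/pixelfed` and the local path are assumptions, not from this thread:

```shell
# Refresh the local chart index, dump the packaged chart's default values,
# and diff them against the copy in the git checkout.
helm repo update
helm show values pixelfed/pixelfed > /tmp/chart-values.yaml
diff /tmp/chart-values.yaml charts/pixelfed/values.yaml
```

`helm show values` is the current name for `helm inspect values`; any difference in the diff output tells you whether the published chart is behind the repo.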
Here's my values.yaml:

```yaml
image:
  registry: ghcr.io
  # -- you can see the source [ghcr.io/mattlqx/docker-pixelfed](https://ghcr.io/mattlqx/docker-pixelfed)
  repository: mattlqx/docker-pixelfed
  # -- This sets the pull policy for images.
  pullPolicy: IfNotPresent
  # -- Overrides the image tag whose default is the chart appVersion
  # (v0.12.4-nginx is currently broken due to migration errors with postgresql,
  # so please either pin a sha tag or use dev-nginx as the tag)
  # tag: "7d1d62c8552683225456c2a552ba8ca36afb24b32f706e425310de5bf84aeab1"
  tag: dev-nginx

# This block is for setting up the ingress; more information can be found here:
# https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
  # -- enable deploying an Ingress resource - network traffic from outside the cluster
  enabled: true
  # -- ingress class name, e.g. nginx
  className: "nginx"
  # annotations to apply to the Ingress resource
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
  hosts:
    - host: pixelfed.mydomain.com
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: pixelfed-tls
      hosts:
        - pixelfed.mydomain.com

postgresql:
  volumePermissions:
    # -- If you get "mkdir: cannot create directory ‘/bitnami/postgresql/data’: Permission denied"
    # errors, set this (this often happens on setups like minikube)
    enabled: true

pixelfed:
  # -- timezone for the docker container
  timezone: "europe/london"
  # app specific settings
  app:
    # -- change this to the domain of your pixelfed instance
    url: "https://pixelfed.mydomain.com"
    # -- change this to the language code of your pixelfed instance
    locale: "en"
    # -- The domain of your server, without https://
    domain: "pixelfed.mydomain.com"
    # -- Enable open registration for new accounts
    open_registration: false
  # Mail Configuration (Post-Installer)
  mail:
    # -- options: "smtp" (default), "sendmail", "mailgun", "mandrill", "ses",
    # "sparkpost", "log", "array"
    driver: smtp
    # -- mail server hostname
    host: smtp.mailgun.org
    # -- mail server port
    port: 465
    # -- mail server username
    username: "[email protected]"
    # -- mail server password
    password: "REDACTED"
    # -- mail server encryption type
    encryption: "tls"
    # -- address to use for sending emails
    from_address: "[email protected]"
    # -- name to use for sending emails
    from_name: "Pixelfed"
```

Obviously.
As a test, I completely tore down and rebuilt my minikube cluster between tests, and both the …

Is there a way to trigger the migrations manually?
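Pixelfed is a Laravel application, so its migrations can normally be run by hand with artisan inside the app container. A sketch; the deployment name `pixelfed` is an assumption, not confirmed by this chart:

```shell
# Run the Laravel database migrations manually inside the running app pod.
# "deploy/pixelfed" is an assumed deployment name -- check `kubectl get deploy`.
kubectl exec -it deploy/pixelfed -- php artisan migrate --force
```

`--force` is needed because artisan refuses to migrate non-interactively in production without it.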
Are you setting the database credentials anywhere? You should have something like this:

```yaml
postgresql:
  auth:
    user: "user"
    postgresPassword: "newPostgresPassword123"
    password: "newUserPassword123"
    replicationPassword: "newReplicationPassword123"
```

Source for the subchart where the above parameters are from:
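If no credentials are set, the Bitnami subchart generates them and stores them in a Kubernetes Secret, so you can inspect what the database actually expects. A sketch; the secret name assumes a release called `pixelfed` and the usual `<release>-postgresql` naming:

```shell
# Decode the generated application-user password from the subchart's Secret.
# "pixelfed-postgresql" is an assumed secret name -- check `kubectl get secrets`.
kubectl get secret pixelfed-postgresql \
  -o jsonpath='{.data.password}' | base64 -d
```

Comparing this against what the pixelfed pod is configured with would confirm or rule out a credentials mismatch.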
I'm not sure actually! I would need to check upstream 🤔
That is strange. Can you tell me which version of the chart you have locally? Have you tried …
No, I'm using the base values as a starting point. And in this block:

pixelfed-chart/charts/pixelfed/values.yaml, lines 282 to 293 in 814653d

you aren't setting credentials, so the base subchart values apply, which I assume is: …

Are we expected to provide PG credentials when installing the chart?

I'm using my fork of your repo, so it should be up to date. They now seem mysteriously to be in sync (bar a blank newline), so you can ignore this bit.
Okay, I think I might know what is going on. When the chart is deployed, it creates the pixelfed and PostgreSQL pods. However, the PostgreSQL pod takes longer to start up than the pixelfed pod, so when the pixelfed container tries to communicate with the DB it fails -- and it looks like it doesn't retry. If I restart the deployment rollout, I see the migrations run. Some charts, such as the Bitnami Mastodon chart, use an init container to wait for other services to come up, such as Elasticsearch; see here:

We should probably have something similar here, since PostgreSQL is essential for pixelfed to run. Happy to take a whack at it if you want.
Looks like we can do this by simply setting:

```yaml
extraInitContainers:
  - name: init-wait-for-postgres
    image: busybox
    command:
      [
        "sh",
        "-c",
        "until nc -zv -w5 postgresql 5432; do echo waiting for postgresql; sleep 2; done",
      ]
```

Of course, this doesn't account for installations where the DB is external, so that needs to be taken into consideration.
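For the external-DB case, the wait container could be gated on the bundled subchart actually being deployed. A rough Helm-template sketch; the `postgresql.enabled` value and the `<release>-postgresql` service name are assumptions about this chart's layout, not confirmed by the thread:

```yaml
# deployment template fragment (sketch): only wait for the bundled DB
# when the PostgreSQL subchart is enabled in values.yaml
{{- if .Values.postgresql.enabled }}
initContainers:
  - name: init-wait-for-postgres
    image: busybox
    command:
      - sh
      - -c
      - until nc -zv -w5 {{ .Release.Name }}-postgresql 5432; do echo waiting for postgresql; sleep 2; done
{{- end }}
```

Installations pointing at an external database would then skip the wait entirely, or could supply their own check via `extraInitContainers`.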
After fixing the startup issue, I've now tried to create the test user using the example provided in the docs, tweaked a bit to match my local setup:

It appears to be able to connect to the database, but not to authenticate with it.
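One way to separate "can reach the DB" from "can authenticate" is to attempt a login from a throwaway client pod. A sketch; the host, user, and database names are assumptions about the release and should be adjusted:

```shell
# Spin up a temporary psql client pod and attempt an authenticated connection.
# Host/user/db names below are assumed -- adjust for your release.
kubectl run psql-test --rm -it --restart=Never --image=postgres:16 -- \
  psql -h pixelfed-postgresql -U pixelfed -d pixelfed
```

If you get a password prompt followed by "password authentication failed", the service is reachable and the problem really is the credentials.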
I suspect this may also be the reason why I see this on the homepage: