Error while using helm to deploy Langfuse and Postgres #41
@VincentMagn To me this looks like a postgres authentication issue. The service can reach the database, but it cannot authenticate correctly. Could you share the custom.yaml you've created, with any sensitive values redacted? That would help us reproduce the issue and speed up the resolution.
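As an aside, a minimal sketch for testing that hypothesis directly, assuming the default pod naming of the bundled postgresql chart and a placeholder for the password from the values override:

```bash
# Assumptions: the bundled postgresql pod follows the default
# <release>-postgresql-0 naming, and the placeholder password is whatever
# your values override sets. A successful "SELECT 1" means the database
# accepts the credentials, which isolates the auth question.
kubectl exec -it langfuse-postgresql-0 -- \
  env PGPASSWORD='<your-postgres-password>' psql -U postgres -d postgres -c 'SELECT 1;'
```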
Yes, I think so too. Here it is:

```yaml
langfuse:
  nextauth:
    url: http://localhost:3000
    secret: fakeSecret
  salt: fakeSalt
  telemetryEnabled: true
service:
  type: ClusterIP
ingress:
  enabled: false
postgresql:
  deploy: true
  auth:
    password: anotherFakePwd
```
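For anyone debugging a setup like this, a hedged way to check what the chart actually stored for these credentials (the secret name is assumed from the usual <release>-postgresql convention, and key names vary by chart version):

```bash
# List the keys in the generated secret first, then decode the relevant one.
kubectl get secret langfuse-postgresql -o jsonpath='{.data}'; echo
kubectl get secret langfuse-postgresql -o jsonpath='{.data.postgres-password}' | base64 -d; echo
```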
@VincentMagn Where does the
@Steffen911, I have already tried to override the
@VincentMagn Could you test whether the fix on this branch fixes it for you: #42?
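For reference, one hedged way to test an unreleased fix branch locally; the branch name is a placeholder, and the chart path matches the charts/langfuse layout referenced later in this thread:

```bash
# <fix-branch> is a placeholder for the branch behind the PR.
git clone https://github.com/langfuse/langfuse-k8s.git && cd langfuse-k8s
git checkout <fix-branch>
helm dependency update charts/langfuse   # pulls in the postgresql subchart
helm install langfuse ./charts/langfuse -f /path/to/custom.yaml
```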
@Steffen911 Sorry for the delay. I tried it, but it still doesn't work. I deployed it using both my custom values and your default ones, but both failed. Additionally, I'm uncertain about the line

I also tried with the modification above, but it still didn't work.

Also: when I edit the username in the values yaml and replace it with another one, I get this error. The line with `Role does not exist` is unexpected; I don't know if it may help to debug. This also has the effect of creating additional data in langfuse-postgresql. So with a user not named postgresql, it's not only the data
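A small sketch that can help narrow down a `Role does not exist` message, under the same assumed pod naming as above:

```bash
# Assumptions: default pod name; substitute the superuser password your
# values actually produce. \du lists the roles that exist, which shows
# whether the custom user from the values file was ever created.
kubectl exec -it langfuse-postgresql-0 -- \
  env PGPASSWORD='<superuser-password>' psql -U postgres -d postgres -c '\du'
```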
Hey @VincentMagn,

So as steps to address your issues, I'd recommend:

Starting from the issue that you describe, I was able to install the chart from my branch with a values.yaml overwrite that looked like this:

```yaml
langfuse:
  nextauth:
    url: http://localhost:3000
    secret: fakeSecret
  salt: fakeSalt
  telemetryEnabled: true
service:
  type: ClusterIP
ingress:
  enabled: false
postgresql:
  deploy: true
  auth:
    username: anotherFakeUser
    password: anotherFakePwd
    database: test
```

If you confirm that this works for you, I'll go ahead and merge my PR for a new release.
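If it helps, a hedged way to confirm the release came up cleanly after the install; the web deployment name is assumed from the chart's deployment-web.yaml template:

```bash
kubectl get pods                            # all pods should reach Running
kubectl logs deploy/langfuse-web --tail=50  # migrations should log as applied
```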
@Steffen911 sorry, but it still does not work. I did the uninstallation process, then stashed the git repo to remove all my modifications. Then, I made a custom_values.yaml with exactly what you gave me in the previous comment. Finally, I ran the cmd

Now the connection works, but while running the migration, some errors occur:

Also, each time pods are created, the first 2 or 3 langfuse pods crash because the pod for postgresql is not up yet. I do not know if this also happens for you.
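For completeness, the output of an earlier crashed container instance can usually be recovered like this (the pod name is a placeholder; list the pods first):

```bash
kubectl get pods                            # find the web pod's full name
kubectl logs <langfuse-web-pod> --previous  # output of the crashed run
```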
Hey @VincentMagn,

Regarding the migrations: can you connect to postgres and share the information in the `_prisma_migrations` table?

Regarding the restarts: yes, the same thing happens in my case, and from our side that is expected. The container should automatically recover as soon as it's able to connect to Postgres.
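A minimal sketch of pulling that table, assuming the pod, user, and database names from the values file above:

```bash
# The column names are Prisma's standard _prisma_migrations schema;
# "logs" holds the error output of a failed migration.
kubectl exec -it langfuse-postgresql-0 -- \
  env PGPASSWORD=anotherFakePwd psql -U anotherFakeUser -d test -c \
  'SELECT migration_name, started_at, finished_at, logs FROM _prisma_migrations ORDER BY started_at;'
```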
@Steffen911, good to know that the restarts are expected. For the logs in the `_prisma_migrations` table:
@VincentMagn Initially you wrote that

i.e. it failed at a different migration step. If you reset the postgresql instance in between runs and delete the PV, do you see issues happening at different stages?

I assume this might happen if the web container exceeds the default liveness and readiness probe thresholds on Kubernetes, which would get it killed. With a default period of 10s and a default failure threshold of 3, the container has about 30s before it receives an interrupt. In my tests on my machine that is usually sufficient, but it might not be in your case, as I see a more-than-20s range around your migration start times. Could you adjust charts/langfuse/templates/deployment-web.yaml and add

```yaml
livenessProbe:
  initialDelaySeconds: 30
  httpGet:
    path: {{ .Values.langfuse.next.healthcheckBasePath | default "" | trimSuffix "/" }}/api/public/health
    port: http
```

This change gives the pod up to 30s to run the migrations before the probes start, i.e. a total of 60s to complete the migrations. See https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes for more details on those. If those settings work for you, I'll make sure to have them configurable via the values.yaml.
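For the reset between runs mentioned above, a hedged sequence; the PVC name is assumed from the usual data-<release>-postgresql-0 convention, so verify it first:

```bash
helm uninstall langfuse                        # remove the release
kubectl get pvc                                # confirm the PVC name
kubectl delete pvc data-langfuse-postgresql-0  # drop the persisted data
```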
Hi @Steffen911, good news: it works well. Thanks a lot!
@VincentMagn Thank you for the confirmation. I'll go ahead and create a separate ticket to track the liveness/readiness/startupProbe configurability and will close this one.
Hi, I'm currently trying to deploy Langfuse with Postgres on a K8s cluster.
Following the installation instructions, I ran:

```bash
helm repo add langfuse https://langfuse.github.io/langfuse-k8s
helm repo update
```
Then, I made my local file with all the values to override, as explained in the "Deploy a Postgres server at the same time" chapter.
When I run:

```bash
helm install langfuse langfuse/langfuse -f custom.yaml
```

where custom.yaml is the file into which I copied the "Deploy a Postgres server at the same time" config, the error is:
Do you have any idea what the problem might be?