Cassandra helm chart scaling ring down #32130
Currently, manual action is required to remove a Cassandra node from the cluster. Since the chart does not know when a node is being permanently scaled down, it cannot automatically run `nodetool decommission`.

Since you have replication, Cassandra has mechanisms to detect that a node has been lost for too long and repair from the existing replicas, but if you would like to do it gracefully you need to log into the pod and run the decommission command manually. After running it, the readiness probe (`nodetool status`) will start to fail on that node, but the liveness probe (`nodetool info`) will continue to work, so the pod will stop receiving connections but will keep running until it is scaled down.

I hope this answers your question.
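For reference, here is a minimal sketch of that manual flow, assuming the StatefulSet is named `cassandra` and the highest-ordinal pod `cassandra-3` is the one being removed (adjust pod, namespace, and release names to your deployment):

```bash
# Gracefully remove the highest-ordinal node from the ring.
# This streams its data to the remaining replicas and may take a while.
kubectl exec -it cassandra-3 -- nodetool decommission

# Verify from any remaining node that the decommissioned node has left the ring.
kubectl exec -it cassandra-0 -- nodetool status

# Scale the StatefulSet down so the decommissioned pod is removed.
# With the chart, the equivalent is a helm upgrade with --set replicaCount=3.
kubectl scale statefulset cassandra --replicas=3
```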
Forgot to mention: if you see the pod go into 'Error' status after you decommission it, that is expected. The decommission command stops the Cassandra process, and subsequent restarts will fail with an error indicating that the node was decommissioned.
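You can confirm this with standard kubectl commands (pod name `cassandra-3` is an assumption here):

```bash
# The decommissioned pod may show Error/CrashLoopBackOff; this is expected.
kubectl get pod cassandra-3

# The previous container's logs should show the startup failure caused
# by the node having been decommissioned.
kubectl logs cassandra-3 --previous
```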
This is expected: it means the Cassandra process won't start again because it was decommissioned. If you wanted to restore the pod again, keep the following in mind.

Important: PVCs are persisted after scaling down a StatefulSet (even after removing it). If the StatefulSet is later scaled back up, the new pod with the same ordinal will reattach the old PVC, so delete that PVC first if you want the node to join as a fresh one.
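A sketch of that cleanup before scaling back up (the label selector and PVC name below are assumptions based on typical Bitnami chart defaults; check your actual resources first):

```bash
# List the claims left behind by the StatefulSet.
kubectl get pvc -l app.kubernetes.io/name=cassandra

# Delete the claim of the removed ordinal so a future scale-up
# bootstraps a fresh node instead of reattaching the old data volume.
kubectl delete pvc data-cassandra-3
```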
Hey @migruiz4, that makes sense. And, like you say, if we decide to scale up again it's better to make sure the PVC is deleted for the decommissioned node so a new one is created during the scaling process. I think this answers my question. Thanks, hope it helps others.
Name and Version
bitnami/cassandra 12.1.1 (image bitnami/cassandra:4.1.3-debian-11-r84)
What is the problem this feature will solve?
More a question than a feature request.
If I want to scale down a ring, will the helm chart decommission nodes and ensure the data is properly transferred to the remaining nodes?
I have a ring with 4 'nodes' and a keyspace with replication factor 3. I want to scale it back down to 3 'nodes' but am worried about data loss.
What is the feature you are proposing to solve the problem?
More documentation than anything.