I’m running MaxScale as a Kubernetes pod, configured with ConfigMap, Secret, and Service resources. However, the MaxScale pod keeps entering a CrashLoopBackOff state.
Here’s the command I used to apply the configuration:
kubectl apply -f maxscale.yaml
Below is the configuration from maxscale.yaml:
# maxscale.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: maxscale-config
data:
  maxscale.cnf: |
    ########################
    ## Server list
    ########################
    [server1]
    type = server
    address = mariadb-sts-0.mariadb-service.kaizen.svc.cluster.local
    port = 3306
    protocol = MariaDBBackend
    serv_weight = 1

    [server2]
    type = server
    address = mariadb-sts-1.mariadb-service.kaizen.svc.cluster.local
    port = 3306
    protocol = MariaDBBackend
    serv_weight = 1

    [server3]
    type = server
    address = mariadb-sts-2.mariadb-service.kaizen.svc.cluster.local
    port = 3306
    protocol = MariaDBBackend
    serv_weight = 1

    #########################
    ## MaxScale configuration
    #########################
    [maxscale]
    threads = auto
    log_augmentation = 1
    ms_timestamp = 1
    syslog = 1

    #########################
    # Monitor for the servers
    #########################
    [MariaDB-Monitor]
    type = monitor
    module = mariadbmon
    servers = server1,server2,server3
    user = root
    password = secret
    auto_failover = true
    auto_rejoin = true
    enforce_read_only_slaves = 1

    #########################
    ## Service definitions for read/write splitting and read-only services.
    #########################
    [Read-Write-Service]
    type = service
    router = readwritesplit
    servers = server1,server2,server3
    user = root
    password = secret
    max_slave_connections = 100%
    max_sescmd_history = 1500
    causal_reads = true
    causal_reads_timeout = 10
    transaction_replay = true
    transaction_replay_max_size = 1Mi
    delayed_retry = true
    master_reconnection = true
    master_failure_mode = fail_on_write
    max_slave_replication_lag = 3

    [Read-Only-Service]
    type = service
    router = readconnroute
    servers = server1,server2,server3
    router_options = slave
    user = root
    password = secret

    ##########################
    ## Listener definitions for the service
    ## Listeners represent the ports the service will listen on.
    ##########################
    [Read-Write-Listener]
    type = listener
    service = Read-Write-Service
    protocol = MariaDBClient
    port = 4008

    [Read-Only-Listener]
    type = listener
    service = Read-Only-Service
    protocol = MariaDBClient
    port = 4006
---
apiVersion: v1
kind: Secret
metadata:
  name: maxscale-secret
type: Opaque
data:
  ROOT_USER: cm9vdA==          # "root" in base64
  ROOT_PASSWORD: c2VjcmV0      # "secret" in base64
  MAXSCALE_USER: cm9vdA==      # "root" in base64
  MAXSCALE_PASSWORD: c2VjcmV0  # "secret" in base64
---
apiVersion: v1
kind: Pod
metadata:
  name: maxscale
  labels:
    app: maxscale
spec:
  containers:
    - name: maxscale
      image: mariadb/maxscale:latest
      ports:
        - containerPort: 4006
        - containerPort: 4008
      volumeMounts:
        - name: maxscale-config
          mountPath: /etc/maxscale.cnf
          subPath: maxscale.cnf
      env:
        - name: ROOT_USER
          valueFrom:
            secretKeyRef:
              name: maxscale-secret
              key: ROOT_USER
        - name: ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: maxscale-secret
              key: ROOT_PASSWORD
        - name: MAXSCALE_USER
          valueFrom:
            secretKeyRef:
              name: maxscale-secret
              key: MAXSCALE_USER
        - name: MAXSCALE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: maxscale-secret
              key: MAXSCALE_PASSWORD
  volumes:
    - name: maxscale-config
      configMap:
        name: maxscale-config
---
apiVersion: v1
kind: Service
metadata:
  name: maxscale-service
spec:
  type: ClusterIP
  ports:
    - port: 4006
      targetPort: 4006
      name: ro-listener
    - port: 4008
      targetPort: 4008
      name: rw-listener
  selector:
    app: maxscale
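As a side note, to rule out an encoding mistake, I decoded the base64 values from the Secret above to confirm they match the `user`/`password` entries in maxscale.cnf:

```shell
# Decode the Secret values copied from maxscale-secret above.
echo 'cm9vdA==' | base64 -d && echo    # prints: root
echo 'c2VjcmV0' | base64 -d && echo    # prints: secret
```

Both decode to the expected credentials, so the Secret itself looks correct to me.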
When I check the status of my pods with:
kubectl get all -n kaizen
I see the following output, indicating the MaxScale pod is failing:
NAME READY STATUS RESTARTS AGE
pod/mariadb-sts-0 1/1 Running 1 (78m ago) 95m
pod/mariadb-sts-1 1/1 Running 1 (78m ago) 131m
pod/mariadb-sts-2 1/1 Running 1 (78m ago) 130m
pod/maxscale 0/1 CrashLoopBackOff 1 (5s ago) 11s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mariadb-service ClusterIP None <none> 3306/TCP 132m
service/maxscale-service ClusterIP 10.105.33.116 <none> 4006/TCP,4008/TCP 11s
NAME READY AGE
statefulset.apps/mariadb-sts 3/3 132m
When I check the logs for the MaxScale pod, I see the following error message:
kubectl -n kaizen logs maxscale
Starting...
MaxScale PID = 18
'check system' not defined in control file, failed to add automatic configuration (service name maxscale is used already) -- please add 'check system <name>' manually
maxscale:2: Service name conflict, maxscale already defined '/var/log/monit.log'
Can anyone help me understand why the MaxScale pod is crashing and how to resolve this service name conflict? Any guidance would be greatly appreciated!
Thanks!