Knative operator is active while knative-serving installation failed #278
Comments
Thank you for reporting your feedback to us! The internal ticket has been created: https://warthogs.atlassian.net/browse/KF-6794.
Discussed with the team: we should evaluate what the expected behaviour of the upstream controller is when it logs a `Version check failed` EMERGENCY error (see the logs below).
I looked into the behaviour of the upstream images. The following is what happens with the upstream image:
```
kubectl get pods -n knative-operator knative-operator-7b5d6b545f-4hk5c
NAME                                READY   STATUS             RESTARTS      AGE
knative-operator-7b5d6b545f-4hk5c   0/1     CrashLoopBackOff   6 (94s ago)   7m28s
```

And the logs are:

```
kubectl logs -n knative-operator knative-operator-7b5d6b545f-4hk5c
2025/01/30 10:45:48 Registering 2 clients
2025/01/30 10:45:48 Registering 3 informer factories
2025/01/30 10:45:48 Registering 3 informers
2025/01/30 10:45:48 Registering 2 controllers
{"severity":"INFO","timestamp":"2025-01-30T10:45:48.884580007Z","logger":"knative-operator","caller":"profiling/server.go:65","message":"Profiling enabled: false","commit":"4959713-dirty","knative.dev/pod":"knative-operator-7b5d6b545f-4hk5c"}
{"severity":"EMERGENCY","timestamp":"2025-01-30T10:45:48.885154491Z","logger":"knative-operator","caller":"sharedmain/main.go:390","message":"Version check failed","commit":"4959713-dirty","knative.dev/pod":"knative-operator-7b5d6b545f-4hk5c","error":"kubernetes version \"1.25.16\" is not compatible, need at least \"1.30.0-0\" (this can be overridden with the env var \"KUBERNETES_MIN_VERSION\")","stacktrace":"knative.dev/pkg/injection/sharedmain.CheckK8sClientMinimumVersionOrDie\n\tknative.dev/[email protected]/injection/sharedmain/main.go:390\nknative.dev/pkg/injection/sharedmain.MainWithConfig\n\tknative.dev/[email protected]/injection/sharedmain/main.go:255\nknative.dev/pkg/injection/sharedmain.MainWithContext\n\tknative.dev/[email protected]/injection/sharedmain/main.go:209\nknative.dev/pkg/injection/sharedmain.Main\n\tknative.dev/[email protected]/injection/sharedmain/main.go:140\nmain.main\n\tknative.dev/operator/cmd/operator/main.go:26\nruntime.main\n\truntime/proc.go:272"}
```
We can reproduce the issue with the following:

```sh
sudo snap install microk8s --classic --channel=1.25/stable
sudo microk8s kubectl config view --raw > $HOME/.kube/config
kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.17.0/operator.yaml
```
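After applying the manifest, the crash loop shows up within a few minutes. One way to watch for it (a sketch; the pod hash suffix will differ from the one above):

```sh
# Watch the operator pod cycle into CrashLoopBackOff, then pull its logs.
kubectl get pods -n knative-operator -w
kubectl logs -n knative-operator deploy/knative-operator
```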
In the charm, I see that the process fails and Pebble never starts it again, most probably because it exited in less than 1 second.
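For context, Pebble treats a service that exits within its initial health window (about one second) as having failed to start, so the usual on-failure auto-restart and backoff never kick in. A hypothetical layer sketch illustrating the knobs involved (the service name, command path, and values are assumptions, not the charm's actual layer):

```yaml
# Hypothetical Pebble layer. on-failure and backoff-* only govern services
# that stay up past the initial window; a sub-second exit is reported as a
# failed start instead of triggering these restarts.
services:
  knative-operator:
    override: replace
    command: /ko-app/operator   # assumed binary path
    startup: enabled
    on-failure: restart
    backoff-delay: 500ms
    backoff-factor: 2.0
    backoff-limit: 30s
```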
Bug Description
The knative-serving charm is in active state while the logs indicate there is an unsupported version of K8s in the cluster. This might be a bad UX design, because nothing is deployed and yet the charm does not complain about it; a quick check is sketched below.
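One way to observe the mismatch (a sketch; the application and namespace names are the usual defaults for a Charmed Kubeflow deployment and may differ):

```sh
# The charm reports active...
juju status knative-serving
# ...while no serving workloads were actually deployed.
kubectl get pods -n knative-serving
```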
To Reproduce
Deploy the Kubeflow 1.9 charm in an EKS cluster with K8s version 1.25.
Environment
Kubeflow 1.9 charm, EKS cluster, K8s version 1.25.
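The server version can be confirmed with (a generic check, nothing cluster-specific assumed):

```sh
# Prints client and server versions; the server line should read v1.25.x here.
kubectl version
```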
Relevant Log Output
Additional Context
No response