feat(minikube): add minikube delete and debug commands #40
base: main
Conversation
Updated base and module-controller configurations to use Deployment instead of Pod, improving scalability and management. Adjusted container images and environment variables for consistency. BREAKING CHANGE: the Deployment structure has been modified; ensure compatibility with existing configurations.
Introduce commands to delete the module-controller deployment from minikube and build a debug version using minikube. Update the debug Dockerfile to set the correct ENTRYPOINT. Also, update the module-controller test YAML to use Deployment instead of Pod for better management.
Walkthrough
Converted several example Pod manifests to Deployments and updated images, labels, and affinity/anti-affinity rules; added six Minikube-focused Makefile targets for build/deploy/debug/port-forward/restart/delete; threaded a podKey (BizModelVersion) through BizOperationResponse and updated StartBiz/StopBiz signatures and payloads in the HTTP and MQTT tunnel implementations.
Sequence Diagram(s)
sequenceDiagram
participant Dev as Developer
participant Make as Makefile
participant MK as Minikube
participant K8s as Kubernetes Cluster
Dev->>Make: make minikube-delete
Make->>K8s: kubectl delete deployment module-controller
Make->>K8s: wait for pods labeled app=module-controller
Dev->>Make: make minikube-build
Make->>MK: remove old debug image
Make->>MK: build debug.Dockerfile -> module-controller-v2:latest
Dev->>Make: make minikube-deploy
Make->>K8s: kubectl apply -f module-controller.yaml
Make->>K8s: wait deployment available
Dev->>Make: make minikube-debug
Make->>K8s: kubectl exec -- delve in module-controller pod
Dev->>Make: make minikube-port-forward
Make->>K8s: kubectl port-forward svc/deployment 2345:2345
Dev->>Make: make minikube-restart
Make->>Make: runs build -> deploy -> debug
sequenceDiagram
participant Caller as Controller
participant Tunnel as Http/MqttTunnel
participant Ark as ArkService
participant Client as Consumer
Caller->>Tunnel: StartBiz(node, podKey, container)
Tunnel->>Tunnel: set bizModel.BizModelVersion = podKey
Tunnel->>Ark: send InstallBizRequest{BizModel, PodKey}
Ark-->>Tunnel: response
Tunnel->>Client: OnOneBizDataArrived(BizOperationResponse{..., BizModelVersion: podKey})
Caller->>Tunnel: StopBiz(node, podKey, container)
Tunnel->>Tunnel: set bizModel.BizModelVersion = podKey
Tunnel->>Ark: send UninstallBizRequest{BizModel, PodKey}
Ark-->>Tunnel: response
Tunnel->>Client: OnOneBizDataArrived(BizOperationResponse{..., BizModelVersion: podKey})
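For orientation, here is a minimal Go sketch of the tunnel API shape the second diagram describes; the interface name and the elided fields are assumptions for illustration, not the PR's literal code.

```go
// Sketch only: the updated signatures thread a podKey through Start/Stop and
// surface it again on the operation response as BizModelVersion.
package sketch

import corev1 "k8s.io/api/core/v1"

// Tunnel is an assumed interface name grouping the two updated methods.
type Tunnel interface {
	StartBiz(nodeName, podKey string, container *corev1.Container) error
	StopBiz(nodeName, podKey string, container *corev1.Container) error
}

// BizOperationResponse is abbreviated here; per the walkthrough, BizModelVersion
// now carries the podKey so callbacks can route the result back to its pod.
type BizOperationResponse struct {
	Command         string // e.g. model.CommandInstallBiz
	BizModelVersion string // podKey propagated from StartBiz/StopBiz
	// ...remaining fields elided
}
```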
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~45 minutes
Rationale: multiple heterogeneous changes — manifest conversions (multiple files, straightforward), Makefile additions (low risk), and non-trivial API/data-model changes in tunnels + model (signature changes, payload shape, and propagation) that require careful review and cross-file consistency checks.
Actionable comments posted: 3
🔭 Outside diff range comments (1)
debug.Dockerfile (1)
35-38: ARGs missing in the second stage break the Delve build on multi-arch setups
TARGETOS/TARGETARCH are referenced here, but not re-declared after the new FROM layer. Docker does not carry ARG values across stages unless the ARG is re-stated, so both environment variables expand to empty strings, making go build fail on any build that relies on them (GOARCH becomes "").

 FROM golang:1.22.8
+ARG TARGETOS
+ARG TARGETARCH
 WORKDIR /
🧹 Nitpick comments (10)
debug.Dockerfile (1)
45-45: 👍 Clearer ENTRYPOINT; add -- and keep the exec form for PID 1 signals
Using Delve directly is great for debugging. Consider adding -- before the target binary so Delve does not parse its flags, and keep the exec form for signal propagation:

-ENTRYPOINT ["dlv", "--listen=:2345", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "./module_controller"]
+ENTRYPOINT ["dlv", "--listen=:2345", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "--", "./module_controller"]

example/quick-start/module.yaml (1)
27-50: Add basic pod security context to silence CKV_K8S_20 / 23

 affinity:
   nodeAffinity:
@@
 tolerations:
   - key: "schedule.koupleless.io/node-env"
     operator: "Equal"
     value: "dev"
     effect: "NoExecute"
+securityContext:
+  runAsNonRoot: true
+  allowPrivilegeEscalation: false
+  capabilities:
+    drop: ["ALL"]

example/quick-start/base.yaml (2)
24-26: Missing trailing newline violates YAML-lint rule
Add an EOL to keep linters green.

-  value: {YOUR_MODULE_CONTROLLER_IP_AND_PORT} # 127.0.0.1:7777
+  value: {YOUR_MODULE_CONTROLLER_IP_AND_PORT} # 127.0.0.1:7777
+

15-23: Consider dropping privileges for the base container as well
The same securityContext block recommended for module.yaml can be added here to avoid running as root by default.

Makefile (2)
41-44: minikube image build only exists on minikube ≥ 1.26
Older installations will fail. Either document the version requirement or fall back to a classic docker build && minikube image load sequence.

 minikube-debug: fmt vet minikube-delete
-	minikube image rm $(IMG_DEBUG)
-	minikube image build -f debug.Dockerfile -t $(IMG_DEBUG) .
+	@minikube image rm $(IMG_DEBUG) || true
+	@if minikube image build --help >/dev/null 2>&1; then \
+		minikube image build -f debug.Dockerfile -t $(IMG_DEBUG) . ;\
+	else \
+		docker build -f debug.Dockerfile -t $(IMG_DEBUG) . && minikube image load $(IMG_DEBUG) ;\
+	fi
 	kubectl apply -f example/quick-start/module-controller-test.yaml
37-38: kubectl delete deployments.apps/... skips the namespace
If the deployment lives outside default, the delete will silently do nothing. Add -n $(NAMESPACE) or parametrize it.

example/quick-start/module-controller-test.yaml (3)
20-29: Expose Delve only inside the cluster or behind an auth proxy
Port 2345 is unauthenticated; exposing it on every replica is risky. If you need remote debugging, restrict it with a NetworkPolicy, or set address=127.0.0.1 and use kubectl port-forward.
15-24: No resources.requests defined
Without requests the scheduler may over-commit and evict the pod. Set requests matching the limits or lower, e.g.:

 resources:
   requests:
     cpu: "250m"
     memory: "200Mi"
   limits:
     cpu: "1000m"
     memory: "400Mi"
1-31: Add pod security context to satisfy Checkov findings
Same minimal block as suggested for the other manifests:

 securityContext:
   runAsNonRoot: true
   allowPrivilegeEscalation: false
   capabilities:
     drop: ["ALL"]

example/quick-start/module-controller.yaml (1)
29-29: Add trailing newline
YAML-lint reports a missing newline at EOF; tiny, but it keeps tooling quiet.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
Makefile (2 hunks)
debug.Dockerfile (1 hunks)
example/quick-start/base.yaml (1 hunks)
example/quick-start/module-controller-test.yaml (1 hunks)
example/quick-start/module-controller.yaml (1 hunks)
example/quick-start/module.yaml (3 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
example/quick-start/module-controller-test.yaml
[MEDIUM] 1-31: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-31: Minimize the admission of root containers
(CKV_K8S_23)
example/quick-start/module.yaml
[MEDIUM] 1-58: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-58: Minimize the admission of root containers
(CKV_K8S_23)
example/quick-start/module-controller.yaml
[MEDIUM] 1-29: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-29: Minimize the admission of root containers
(CKV_K8S_23)
example/quick-start/base.yaml
[MEDIUM] 1-26: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-26: Minimize the admission of root containers
(CKV_K8S_23)
🪛 YAMLlint (1.37.1)
example/quick-start/module-controller.yaml
[error] 29-29: no new line character at the end of file
(new-line-at-end-of-file)
example/quick-start/base.yaml
[error] 26-26: no new line character at the end of file
(new-line-at-end-of-file)
🔇 Additional comments (2)
example/quick-start/module.yaml (1)
31-37: Placeholder in required nodeAffinity will prevent scheduling
TO_BE_IMPLEMENTED is unlikely to match any node and will block the pod. If this is intentional, add a clarifying comment; otherwise supply the real base name or drop the entire selector until it is known.

example/quick-start/module-controller.yaml (1)
18-19: Pin image by digest for immutability
A moving v2.1.4 tag can unexpectedly change the bits in production. Consider switching to a digest to guarantee reproducibility, e.g.:

 image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2@sha256:<digest>
      - name: biz1-web-single-host # this name must same with the biz name defined in the jar
        image: https://koupleless-dosc.oss-cn-hongkong.aliyuncs.com/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
        env:
The image field is not a valid OCI image reference
Kubernetes expects an image reference like registry/namespace/repo:tag. A raw HTTPS URL pointing at a .jar will fail to pull.
- image: https://koupleless-dosc.oss-cn-hongkong.aliyuncs.com/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
+ image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/biz1-web-single-host:0.0.1-SNAPSHOT
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
      - name: biz1-web-single-host # this name must same with the biz name defined in the jar
        image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/biz1-web-single-host:0.0.1-SNAPSHOT
        env:
🤖 Prompt for AI Agents
In example/quick-start/module.yaml around lines 22 to 24, the image field is
incorrectly set to a raw HTTPS URL pointing to a .jar file, which is not a valid
OCI image reference for Kubernetes. Replace the image value with a proper OCI
image reference format like registry/namespace/repo:tag that points to a
container image stored in a container registry.
            limits:
              cpu: "1000m"
              memory: "400Mi"
🛠️ Refactor suggestion
Specify resources.requests to avoid CPU / memory throttling surprises
Only limits are declared; requests default to 0, which can lead to poor scheduling and unexpected throttling. Define sane requests matching expected baseline consumption.

 resources:
+  requests:
+    cpu: 200m
+    memory: 200Mi
   limits:
     cpu: "1000m"
     memory: "400Mi"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
          resources:
            requests:
              cpu: 200m
              memory: 200Mi
            limits:
              cpu: "1000m"
              memory: "400Mi"
🤖 Prompt for AI Agents
In example/quick-start/module-controller.yaml around lines 20 to 23, the
resource specification only includes limits for CPU and memory but lacks
requests, which can cause scheduling issues and throttling. Add a
resources.requests section with appropriate CPU and memory values that reflect
the expected baseline usage to ensure stable resource allocation and avoid
surprises.
        - name: module-controller
          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # pre-built release image
          imagePullPolicy: Always
          resources:
            limits:
              cpu: "1000m"
              memory: "400Mi"
Add securityContext to harden the container
Checkov flags (CKV_K8S_20, CKV_K8S_23) point out that the container may run as root and allow privilege escalation. Explicitly disabling those behaviours is a baseline hardening step.

 - name: module-controller
   image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # pre-built release image
   imagePullPolicy: Always
+  securityContext:
+    allowPrivilegeEscalation: false
+    runAsNonRoot: true
+    capabilities:
+      drop:
+        - "ALL"
   resources:
     limits:
       cpu: "1000m"
       memory: "400Mi"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
      containers:
        - name: module-controller
          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # pre-built release image
          imagePullPolicy: Always
          securityContext:
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            capabilities:
              drop:
                - "ALL"
          resources:
            limits:
              cpu: "1000m"
              memory: "400Mi"
🤖 Prompt for AI Agents
In example/quick-start/module-controller.yaml around lines 16 to 23, the
container lacks a securityContext to prevent running as root and privilege
escalation. Add a securityContext section under the container spec that sets
runAsNonRoot to true, runAsUser to a non-root UID (e.g., 1000), and disables
privilege escalation by setting allowPrivilegeEscalation to false. This will
harden the container by enforcing non-root execution and preventing privilege
escalation.
Actionable comments posted: 3
🧹 Nitpick comments (1)
Makefile (1)
45-48: Hard-coded 90 s rollout timeout
Some laptops (or slow registries) will exceed 90 s, causing false failures. Consider making this configurable:

-	kubectl wait --for=condition=available --timeout=90s deployments/module-controller
+	kubectl wait --for=condition=available --timeout=${TIMEOUT-180s} deployment/module-controller
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
Makefile (2 hunks)
🔇 Additional comments (2)
Makefile (2)
40-43: Inconsistent image name & non-portable Minikube sub-command
The tag module-controller-v2:latest differs from the underscore-based tag used in the regular build target, which may leave your manifest referencing a non-existent image. Also, minikube image build/rm is only available in recent Minikube releases; older versions require minikube image load after a normal docker build. Please verify both the image name and the expected Minikube version.
58-59: minikube-restart blocks the Make run
Because it chains into the interactive minikube-debug, the make process never returns, which hampers scripting/CI usage. Consider splitting out the interactive step or providing a non-interactive variant.
.PHONY: minikube-delete
minikube-delete: ## Delete module-controller deployment from minikube
	kubectl delete deployments.apps/module-controller || true
	kubectl wait --for=delete pod -l app=module-controller --timeout=90s
🛠️ Refactor suggestion
kubectl wait will error when no pods remain
If kubectl delete finds nothing, the subsequent kubectl wait --for=delete aborts with NotFound, breaking the target. Swallow the error or use --ignore-not-found.
-kubectl wait --for=delete pod -l app=module-controller --timeout=90s
+kubectl wait --for=delete pod -l app=module-controller --timeout=90s || true
🤖 Prompt for AI Agents
In the Makefile at lines 35 to 38, the kubectl wait command errors if no pods
remain after deletion, causing the target to fail. Modify the kubectl wait
command to include the --ignore-not-found flag to prevent errors when no pods
are found, ensuring the target completes successfully even if no pods exist.
.PHONY: minikube-port-forward
minikube-port-forward:
	kubectl port-forward deployments/module-controller 2345:2345
🛠️ Refactor suggestion
Same plural-resource issue for port-forward
-kubectl port-forward deployments/module-controller 2345:2345
+kubectl port-forward deployment/module-controller 2345:2345
🤖 Prompt for AI Agents
In the Makefile at lines 54 to 56, the kubectl port-forward command uses the
plural resource name "deployments/module-controller". Change it to the singular
form "deployment/module-controller" to consistently specify the resource type
for port-forwarding.
.PHONY: minikube-debug
minikube-debug:
	kubectl exec deployments/module-controller -it -- dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./module_controller
🛠️ Refactor suggestion
Resource name should be singular for exec
kubectl exec deployments/…
is not consistently accepted across kubectl versions. Use the singular resource alias to avoid “error: unable to upgrade connection” issues.
-kubectl exec deployments/module-controller -it -- dlv ...
+kubectl exec deployment/module-controller -it -- dlv ...
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
.PHONY: minikube-debug
minikube-debug:
	kubectl exec deployment/module-controller -it -- dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./module_controller
🤖 Prompt for AI Agents
In the Makefile at lines 50 to 52, the kubectl exec command uses the plural
resource name "deployments/module-controller," which can cause compatibility
issues across kubectl versions. Change "deployments/module-controller" to the
singular form "deployment/module-controller" to ensure consistent behavior and
avoid connection errors.
Actionable comments posted: 3
🧹 Nitpick comments (1)
example/quick-start/module-controller-test.yaml (1)
25-29: Expose readiness / liveness probes for quicker fault detection
The container opens ports 7777 and 2345, but Kubernetes has no way to know when the service is ready or unhealthy. Adding probes cuts mean-time-to-recovery and enables rolling updates:

 livenessProbe:
   httpGet:
     path: /healthz
     port: httptunnel
   initialDelaySeconds: 10
   periodSeconds: 20
 readinessProbe:
   httpGet:
     path: /ready
     port: httptunnel
   initialDelaySeconds: 5
   periodSeconds: 10

If HTTP isn't available, switch to tcpSocket or exec.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
example/quick-start/module-controller-test.yaml (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
example/quick-start/module-controller-test.yaml
[MEDIUM] 1-32: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-32: Minimize the admission of root containers
(CKV_K8S_23)
        - name: module-controller
Add securityContext to drop root privileges and disable privilege escalation
By default the container will run as UID 0 with allowPrivilegeEscalation = true, which violates hardening guidance and triggered the CKV_K8S_20 / CKV_K8S_23 findings. Add an explicit securityContext:

 - name: module-controller
+  securityContext:
+    runAsNonRoot: true
+    allowPrivilegeEscalation: false
+    capabilities:
+      drop: [ "ALL" ]
This single change neutralises both Checkov findings and narrows the blast-radius if the process is compromised.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
      containers:
        - name: module-controller
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: [ "ALL" ]
🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 16 to 17, the
container lacks a securityContext that drops root privileges and disables
privilege escalation, causing security findings CKV_K8S_20 and CKV_K8S_23. Add a
securityContext block under the container definition that sets runAsNonRoot to
true, runAsUser to a non-root UID (e.g., 1000), and allowPrivilegeEscalation to
false to comply with security best practices and fix the findings.
            limits:
              cpu: "1000m"
              memory: "400Mi"
🛠️ Refactor suggestion
Add resource requests alongside limits for reliable scheduling
Only limits are specified; without requests the scheduler will assume 0 CPU / 0 Mi, leading to noisy-neighbor issues and unpredictable QoS.
 resources:
-  limits:
-    cpu: "1000m"
-    memory: "400Mi"
+  requests:
+    cpu: "200m"
+    memory: "200Mi"
+  limits:
+    cpu: "1000m"
+    memory: "400Mi"
Tune the numbers to your actual baseline consumption.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
          resources:
            requests:
              cpu: "200m"
              memory: "200Mi"
            limits:
              cpu: "1000m"
              memory: "400Mi"
🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 21 to 24,
resource limits are specified but resource requests are missing. Add a
resources.requests section alongside the existing limits with appropriate cpu
and memory values that reflect the baseline consumption to ensure reliable
scheduling and avoid noisy-neighbor issues.
          imagePullPolicy: Never
          # imagePullPolicy: Always # switch to Always when using an image from a remote registry
💡 Verification agent
❓ Verification inconclusive
Pin the image tag and reconsider imagePullPolicy: Never
Using :latest
makes rollbacks and reproducible builds impossible. Moreover, imagePullPolicy: Never
only works on clusters where the image already exists (e.g. Minikube’s local daemon); it silently breaks in CI or remote clusters.
Consider:
- image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:latest
- imagePullPolicy: Never
+ image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:v2.1.4 # or a digest
+ imagePullPolicy: IfNotPresent
Aligning the tag with an immutable version (or SHA digest) ensures the exact binary you tested is the one you run everywhere.
Pin the image to an immutable tag and adjust imagePullPolicy
The current setup uses :latest
and imagePullPolicy: Never
, which:
- Prevents reproducible deployments (you can’t roll back to a known state).
- Silently fails in CI or remote clusters where the image isn’t pre-pulled.
File: example/quick-start/module-controller-test.yaml Lines: 18–20
- image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:latest
- imagePullPolicy: Never
+ image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:v2.1.4 # pin to a specific release or use a digest
+ imagePullPolicy: IfNotPresent
Aligning on a fixed tag (or SHA digest) guarantees you deploy exactly what you’ve tested, and using IfNotPresent
will pull the image if it’s missing without breaking on remote clusters.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:v2.1.4 # pin to a specific release or use a digest
          imagePullPolicy: IfNotPresent
          # imagePullPolicy: Always # switch to Always when using an image from a remote registry
🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 18 to 20, the
image tag is set to :latest and imagePullPolicy is Never, which causes
non-reproducible deployments and failures in CI or remote clusters. Fix this by
replacing :latest with a fixed, immutable tag or SHA digest for the image, and
change imagePullPolicy to IfNotPresent to allow pulling the image if missing
without breaking deployments.
Actionable comments posted: 3
🧹 Nitpick comments (3)
common/model/model.go (2)
34-38: Decouple common model from HTTP tunnel ark_service
Typing Response as ark_service.ArkResponse creates an undesirable dependency from common/model to a tunnel-specific package. Prefer hosting ArkResponse in a neutral package (e.g., common/ark or common/model) or using a type from arkctl so common stays tunnel-agnostic.
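A rough sketch of that decoupling, for concreteness; the field names below are placeholders, not the actual ark_service.ArkResponse definition.

```go
// Sketch only: host a tunnel-agnostic response type next to BizOperationResponse
// so common/model no longer imports the HTTP tunnel's ark_service package.
// Field names are illustrative placeholders.
package model

type ArkResponse struct {
	Code    string `json:"code"`
	Message string `json:"message"`
	// ...remaining fields copied over from ark_service.ArkResponse
}

type BizOperationResponse struct {
	Command         string      `json:"command"`
	BizModelVersion string      `json:"bizModelVersion"` // carries the podKey
	Response        ArkResponse `json:"response"`        // local type instead of ark_service.ArkResponse
	// ...other fields elided
}
```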
37-37: Clarify BizModelVersion doc to reflect it carries podKey
The comment says "Version of the business model," but this is now used to carry a podKey. Clarify to avoid confusion for future contributors. Apply this small change:

- BizModelVersion string `json:"bizModelVersion"` // Version of the business model
+ BizModelVersion string `json:"bizModelVersion"` // Pod-level key (podKey) used for scoping operations

module_tunnels/koupleless_http_tunnel/http_tunnel.go (1)
337-348: Confirm Ark HTTP API accepts BizModelVersion for pod scoping
This path encodes podKey via BizModel.BizModelVersion. Verify the Ark HTTP service interprets BizModelVersion as the podKey (or alternatively, extend the HTTP request model with an explicit PodKey to match MQTT for consistency).
Would you like me to open a follow-up to standardize payload shape (explicit PodKey on both HTTP/MQTT) to avoid overloading BizModelVersion semantics?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
common/model/model.go (1 hunks)
module_tunnels/koupleless_http_tunnel/http_tunnel.go (4 hunks)
module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go (4)
  common/utils/utils.go (2): TranslateCoreV1ContainerToBizModel (58-64), FormatArkletCommandTopic (36-38)
  common/zaplogger/logger.go (1): FromContext (19-26)
  module_tunnels/koupleless_http_tunnel/ark_service/model.go (2): InstallBizRequest (6-8), UninstallBizRequest (11-13)
  common/model/consts.go (1): CommandInstallBiz (49-49)
common/model/model.go (1)
  module_tunnels/koupleless_http_tunnel/ark_service/model.go (1): ArkResponse (16-31)
module_tunnels/koupleless_http_tunnel/http_tunnel.go (2)
  common/model/model.go (1): BizOperationResponse (33-39)
  common/model/consts.go (2): CommandInstallBiz (49-49), CommandUnInstallBiz (51-51)
🔇 Additional comments (4)
common/model/model.go (1)
34-38: BizModelVersion threaded into responses — looks good
Including BizModelVersion in BizOperationResponse aligns with the new podKey propagation and keeps callbacks informed.
module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go (1)
8-11: Imports adjustment is appropriate
New imports (strings, sync, time) are used below and required. No issues.
module_tunnels/koupleless_http_tunnel/http_tunnel.go (2)
327-348: Good: podKey propagated via BizModelVersion and included in the response
Updating the signature and setting bizModel.BizModelVersion allows the HTTP path to carry podKey through to Ark, and returning BizModelVersion in BizOperationResponse is consistent.
362-381: StopBiz mirrors StartBiz changes — consistent propagation
StopBiz mirrors the StartBiz handling for podKey via BizModelVersion and in the response. Looks consistent.
 func (mqttTunnel *MqttTunnel) StartBiz(nodeName, podKey string, container *corev1.Container) error {
 	bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
 	zlogger := zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion)
 	zlogger.Info("InstallModule")
-	installBizRequestBytes, _ := json.Marshal(bizModel)
+	installBizRequestBytes, _ := json.Marshal(ark_service.InstallBizRequest{
+		BizModel: bizModel,
+		PodKey:   podKey,
+	})
🛠️ Refactor suggestion
Handle marshal errors and include podKey in logs
Ignoring json.Marshal errors risks publishing invalid payloads; also add podKey to the log context for traceability.
Apply:
 func (mqttTunnel *MqttTunnel) StartBiz(nodeName, podKey string, container *corev1.Container) error {
 	bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
-	zlogger := zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion)
+	zlogger := zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion, "podKey", podKey)
 	zlogger.Info("InstallModule")
-	installBizRequestBytes, _ := json.Marshal(ark_service.InstallBizRequest{
-		BizModel: bizModel,
-		PodKey:   podKey,
-	})
+	installReq := ark_service.InstallBizRequest{
+		BizModel: bizModel,
+		PodKey:   podKey,
+	}
+	installBizRequestBytes, err := json.Marshal(installReq)
+	if err != nil {
+		zlogger.Error(err, "failed to marshal InstallBizRequest")
+		return err
+	}
 	nodeID := utils2.ExtractNodeIDFromNodeName(nodeName)
 	return mqttTunnel.mqttClient.Pub(utils.FormatArkletCommandTopic(mqttTunnel.env, nodeID, model.CommandInstallBiz), mqtt.Qos0, installBizRequestBytes)
 }
Additionally, consider using QoS 1 for install/uninstall commands to improve delivery reliability if the broker/base requires at-least-once semantics.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (mqttTunnel *MqttTunnel) StartBiz(nodeName, podKey string, container *corev1.Container) error {
	bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
	zlogger := zaplogger.
		FromContext(mqttTunnel.ctx).
		WithValues(
			"bizName", bizModel.BizName,
			"bizVersion", bizModel.BizVersion,
			"podKey", podKey,
		)
	zlogger.Info("InstallModule")
	installReq := ark_service.InstallBizRequest{
		BizModel: bizModel,
		PodKey:   podKey,
	}
	installBizRequestBytes, err := json.Marshal(installReq)
	if err != nil {
		zlogger.Error(err, "failed to marshal InstallBizRequest")
		return err
	}
	nodeID := utils2.ExtractNodeIDFromNodeName(nodeName)
	return mqttTunnel.mqttClient.Pub(
		utils.FormatArkletCommandTopic(mqttTunnel.env, nodeID, model.CommandInstallBiz),
		mqtt.Qos0,
		installBizRequestBytes,
	)
}
🤖 Prompt for AI Agents
In module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go around lines 434 to 441,
the code ignores the json.Marshal error and doesn't include podKey in the log
context; update the code to check the error returned by json.Marshal for the
InstallBizRequest, log or return the error (do not proceed to publish on marshal
failure), and add "podKey" to the zaplogger.WithValues call so logs include the
pod identifier; also when publishing install/uninstall commands consider using
QoS 1 (at-least-once) instead of 0 to improve delivery reliability where broker
semantics require it.
💡 Verification agent
🧩 Analysis chain
Verify model definitions include PodKey in Install/Uninstall requests
The code sets PodKey on ark_service.InstallBizRequest/UninstallBizRequest. Ensure those structs declare a PodKey field; otherwise this won’t compile or the field will be dropped during marshal.
Run:
Expected: Both request types contain a line like PodKey string `json:"podKey,omitempty"`.
🏁 Script executed:
#!/bin/bash
# Verify InstallBizRequest/UninstallBizRequest and PodKey presence
rg -n -C2 -P '^\s*type\s+InstallBizRequest\b' module_tunnels | sed -n '1,200p'
rg -n -C2 -P '^\s*type\s+UninstallBizRequest\b' module_tunnels | sed -n '1,200p'
rg -n -C2 '\bPodKey\b' module_tunnels
# Also check for BizModelVersion usage across repo (sanity)
rg -n -C2 '\bBizModelVersion\b'
Length of output: 5147
Add PodKey to the InstallBizRequest/UninstallBizRequest models
The ark_service.InstallBizRequest and ark_service.UninstallBizRequest types (in module_tunnels/koupleless_http_tunnel/ark_service/model.go) don't declare a PodKey field, but mqtt_tunnel.go is initializing one; this will fail to compile. Please update both structs to include:

 type InstallBizRequest struct {
 	ark.BizModel `json:",inline"`
+	PodKey string `json:"podKey,omitempty"`
 }

 type UninstallBizRequest struct {
 	ark.BizModel `json:",inline"`
+	PodKey string `json:"podKey,omitempty"`
 }

• File: module_tunnels/koupleless_http_tunnel/ark_service/model.go – add PodKey string `json:"podKey,omitempty"` under the inlined ark.BizModel in both request structs.
• After adding these fields, re-run go build to confirm compilation.
🤖 Prompt for AI Agents
In module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go around lines 434-441,
mqtt_tunnel initializes InstallBizRequest.PodKey but the InstallBizRequest and
UninstallBizRequest structs lack that field; open
module_tunnels/koupleless_http_tunnel/ark_service/model.go and add PodKey string
with the json tag `podKey,omitempty` (PodKey string `json:"podKey,omitempty"`)
immediately under the inlined ark.BizModel in both InstallBizRequest and
UninstallBizRequest, save, then run go build to confirm the compilation
succeeds.
 func (mqttTunnel *MqttTunnel) StopBiz(nodeName, podKey string, container *corev1.Container) error {
 	bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
-	unInstallBizRequestBytes, _ := json.Marshal(bizModel)
+	unInstallBizRequestBytes, _ := json.Marshal(ark_service.UninstallBizRequest{
+		BizModel: bizModel,
+		PodKey:   podKey,
+	})
🛠️ Refactor suggestion
Also handle marshal errors for uninstall payload
Same reasoning as StartBiz; don’t drop marshal errors.
 func (mqttTunnel *MqttTunnel) StopBiz(nodeName, podKey string, container *corev1.Container) error {
 	bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
-	unInstallBizRequestBytes, _ := json.Marshal(ark_service.UninstallBizRequest{
-		BizModel: bizModel,
-		PodKey:   podKey,
-	})
+	uninstallReq := ark_service.UninstallBizRequest{
+		BizModel: bizModel,
+		PodKey:   podKey,
+	}
+	unInstallBizRequestBytes, err := json.Marshal(uninstallReq)
+	if err != nil {
+		zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion, "podKey", podKey).
+			Error(err, "failed to marshal UninstallBizRequest")
+		return err
+	}
 	zlogger := zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion)
 	zlogger.Info("UninstallModule")
 	nodeID := utils2.ExtractNodeIDFromNodeName(nodeName)
 	return mqttTunnel.mqttClient.Pub(utils.FormatArkletCommandTopic(mqttTunnel.env, nodeID, model.CommandUnInstallBiz), mqtt.Qos0, unInstallBizRequestBytes)
 }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (mqttTunnel *MqttTunnel) StopBiz(nodeName, podKey string, container *corev1.Container) error {
	bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
	uninstallReq := ark_service.UninstallBizRequest{
		BizModel: bizModel,
		PodKey:   podKey,
	}
	unInstallBizRequestBytes, err := json.Marshal(uninstallReq)
	if err != nil {
		zaplogger.FromContext(mqttTunnel.ctx).
			WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion, "podKey", podKey).
			Error(err, "failed to marshal UninstallBizRequest")
		return err
	}
	zlogger := zaplogger.FromContext(mqttTunnel.ctx).
		WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion)
	zlogger.Info("UninstallModule")
	nodeID := utils2.ExtractNodeIDFromNodeName(nodeName)
	return mqttTunnel.mqttClient.Pub(
		utils.FormatArkletCommandTopic(mqttTunnel.env, nodeID, model.CommandUnInstallBiz),
		mqtt.Qos0,
		unInstallBizRequestBytes,
	)
}
🤖 Prompt for AI Agents
module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go lines 446-451: the
json.Marshal call for the UninstallBizRequest currently ignores the error; check
and handle the marshal error instead of discarding it — capture the error from
json.Marshal, return or wrap it (and optionally log) when non-nil so callers
receive the failure, and only proceed to send the uninstall payload when marshal
succeeds.