
feat(minikube): add minikube delete and debug commands #40


Open · wants to merge 5 commits into base: main

Conversation

@Aiden2014 Aiden2014 commented Jul 10, 2025

Summary by CodeRabbit

  • New Features

    • New Makefile targets to streamline Minikube workflows: clean, build, deploy, debug, port-forward, and a composite restart for in-cluster debugging.
  • Improvements

    • Example manifests converted from Pods to Deployments, updated images, named ports, replicas, selectors and clearer env placeholders.
    • Install/uninstall operations now propagate a PodKey (exposed in operation responses) for finer per-pod scoping.

Updated base and module-controller configurations to use Deployment
instead of Pod, enhancing scalability and management. Adjusted
container images and environment variables for consistency.

BREAKING CHANGE: The Deployment structure has been modified; ensure compatibility
with existing configurations.

Introduce commands to delete the module-controller deployment from
minikube and build a debug version using minikube. Update the
debug Dockerfile to set the correct ENTRYPOINT.

Also, update the module-controller test YAML to use Deployment
instead of Pod for better management.

coderabbitai bot commented Jul 10, 2025

Walkthrough

Converted several example Pod manifests to Deployments and updated images, labels, and affinity/anti-affinity rules; added six Minikube-focused Makefile targets for build/deploy/debug/port-forward/restart/delete; and threaded a podKey (BizModelVersion) through BizOperationResponse, updating the StartBiz/StopBiz signatures and payloads in the HTTP and MQTT tunnel implementations.

Changes

Cohort / File(s): Change Summary

  • Makefile targets (Makefile): Added new phony targets: minikube-delete, minikube-build, minikube-deploy, minikube-debug, minikube-port-forward, and composite minikube-restart to support Minikube workflows (delete/build/deploy/debug/port-forward/restart).
  • Quick-start manifests (example/quick-start/base.yaml, example/quick-start/module-controller.yaml, example/quick-start/module-controller-test.yaml, example/quick-start/module.yaml): Converted Pod manifests to apps/v1 Deployments (added replicas, selector, template), updated image tags and imagePullPolicy, adjusted labels and container ports, modified node affinity and added pod anti-affinity, and renamed deployment/labels in module.yaml.
  • Model type (common/model/model.go): Added exported field BizModelVersion string to BizOperationResponse and adjusted field order to include the new field.
  • HTTP tunnel (module_tunnels/koupleless_http_tunnel/http_tunnel.go): Updated StartBiz and StopBiz signatures to accept podKey string instead of an unused param; assign podKey to bizModel.BizModelVersion and include it in the emitted BizOperationResponse.
  • MQTT tunnel (module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go): Updated imports and changed StartBiz/StopBiz signatures to accept podKey string; include PodKey in marshaled InstallBizRequest/UninstallBizRequest payloads sent to the Ark service.
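For orientation, here is a minimal Go sketch of the model and signature changes summarized above. It is illustrative only: the Tunnel type and the Command/BizName/BizVersion fields are placeholders invented for the sketch, not the repository's actual definitions; only the BizModelVersion field and the (nodeName, podKey, container) parameter shape come from this PR.

package sketch

import corev1 "k8s.io/api/core/v1"

// BizOperationResponse gains a BizModelVersion field that carries the podKey.
// Fields other than BizModelVersion are placeholders, not the real struct.
type BizOperationResponse struct {
	Command         string `json:"command"`
	BizName         string `json:"bizName"`
	BizVersion      string `json:"bizVersion"`
	BizModelVersion string `json:"bizModelVersion"` // per this PR: the pod key
}

// Tunnel stands in for the HTTP/MQTT tunnel implementations.
type Tunnel struct{}

// StartBiz and StopBiz now receive the podKey and propagate it in the response.
func (t *Tunnel) StartBiz(nodeName, podKey string, container *corev1.Container) error {
	resp := BizOperationResponse{
		Command:         "installBiz", // placeholder value
		BizName:         container.Name,
		BizModelVersion: podKey, // podKey travels with the operation response
	}
	_ = resp // the real code hands this to the OnOneBizDataArrived callback
	return nil
}

func (t *Tunnel) StopBiz(nodeName, podKey string, container *corev1.Container) error {
	_ = BizOperationResponse{
		Command:         "uninstallBiz", // placeholder value
		BizName:         container.Name,
		BizModelVersion: podKey,
	}
	return nil
}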

Sequence Diagram(s)

sequenceDiagram
    participant Dev as Developer
    participant Make as Makefile
    participant MK as Minikube
    participant K8s as Kubernetes Cluster

    Dev->>Make: make minikube-delete
    Make->>K8s: kubectl delete deployment module-controller
    Make->>K8s: wait for pods labeled app=module-controller

    Dev->>Make: make minikube-build
    Make->>MK: remove old debug image
    Make->>MK: build debug.Dockerfile -> module-controller-v2:latest

    Dev->>Make: make minikube-deploy
    Make->>K8s: kubectl apply -f module-controller.yaml
    Make->>K8s: wait deployment available

    Dev->>Make: make minikube-debug
    Make->>K8s: kubectl exec -- delve in module-controller pod

    Dev->>Make: make minikube-port-forward
    Make->>K8s: kubectl port-forward deployment/module-controller 2345:2345

    Dev->>Make: make minikube-restart
    Make->>Make: runs build -> deploy -> debug
sequenceDiagram
    participant Caller as Controller
    participant Tunnel as Http/MqttTunnel
    participant Ark as ArkService
    participant Client as Consumer

    Caller->>Tunnel: StartBiz(node, podKey, container)
    Tunnel->>Tunnel: set bizModel.BizModelVersion = podKey
    Tunnel->>Ark: send InstallBizRequest{BizModel, PodKey}
    Ark-->>Tunnel: response
    Tunnel->>Client: OnOneBizDataArrived(BizOperationResponse{..., BizModelVersion: podKey})

    Caller->>Tunnel: StopBiz(node, podKey, container)
    Tunnel->>Tunnel: set bizModel.BizModelVersion = podKey
    Tunnel->>Ark: send UninstallBizRequest{BizModel, PodKey}
    Ark-->>Tunnel: response
    Tunnel->>Client: OnOneBizDataArrived(BizOperationResponse{..., BizModelVersion: podKey})

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Rationale: multiple heterogeneous changes — manifest conversions (multiple files, straightforward), Makefile additions (low risk), and non-trivial API/data-model changes in tunnels + model (signature changes, payload shape, and propagation) that require careful review and cross-file consistency checks.

Possibly related PRs

Suggested reviewers

  • lvjing2
  • gaosaroma

Poem

"I’m a rabbit in the dev-time glen,
Hopping pods to Deployments then—
Images bumped, labels set just so,
Minikube helpers make dev flow.
PodKey travels where it must go —
Debug, rebuild, and restart — hop, go!"



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (1)
debug.Dockerfile (1)

35-38: ARGs missing in the second stage break Delve build on multi-arch setups

TARGETOS / TARGETARCH are referenced here, but not re-declared after the new FROM layer.
Docker does not carry ARG values across stages unless the ARG is re-stated, so both
environment variables expand to empty strings, making go build fail on any build that
relies on them (GOARCH becomes "").

 FROM golang:1.22.8
+ARG TARGETOS
+ARG TARGETARCH
 WORKDIR /
🧹 Nitpick comments (10)
debug.Dockerfile (1)

45-45: 👍 Clearer ENTRYPOINT; add "--" and keep the exec form for PID 1 signals

Using Delve directly is great for debugging. Add "--" before the target binary so Delve
does not try to parse the binary's arguments as its own flags, and keep the exec form so
signals reach PID 1:

-ENTRYPOINT ["dlv", "--listen=:2345", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "./module_controller"]
+ENTRYPOINT ["dlv", "--listen=:2345", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "--", "./module_controller"]
example/quick-start/module.yaml (1)

27-50: Add basic pod security context to silence CKV_K8S_20 / 23

       affinity:
         nodeAffinity:
@@
       tolerations:
         - key: "schedule.koupleless.io/node-env"
           operator: "Equal"
           value: "dev"
           effect: "NoExecute"
+      securityContext:
+        runAsNonRoot: true
+        allowPrivilegeEscalation: false
+        capabilities:
+          drop: ["ALL"]
example/quick-start/base.yaml (2)

24-26: Missing trailing newline violates YAML-lint rule

Add an EOL to keep linters green.

-              value: {YOUR_MODULE_CONTROLLER_IP_AND_PORT} # 127.0.0.1:7777
+              value: {YOUR_MODULE_CONTROLLER_IP_AND_PORT} # 127.0.0.1:7777
+

15-23: Consider dropping privileges for the base container as well

Same securityContext block recommended for module.yaml can be added here to
avoid running as root by default.

Makefile (2)

41-44: minikube image build only exists on minikube ≥ 1.26

Older installations will fail. Either document the version requirement or fall
back to a classic docker build && minikube image load sequence.

minikube-debug: fmt vet minikube-delete
-	minikube image rm $(IMG_DEBUG)
-	minikube image build -f debug.Dockerfile -t $(IMG_DEBUG) .
+	@minikube image rm $(IMG_DEBUG) || true
+	@if minikube image build --help >/dev/null 2>&1; then \
+	  minikube image build -f debug.Dockerfile -t $(IMG_DEBUG) . ;\
+	else \
+	  docker build -f debug.Dockerfile -t $(IMG_DEBUG) . && minikube image load $(IMG_DEBUG) ;\
+	fi
 	kubectl apply -f example/quick-start/module-controller-test.yaml

37-38: kubectl delete deployments.apps/... skips namespace

If the deployment lives outside default, the delete will silently do nothing.
Add -n $(NAMESPACE) or parametrize it.

example/quick-start/module-controller-test.yaml (3)

20-29: Expose Delve only inside the cluster or behind an auth proxy

Port 2345 is unauthenticated; exposing it on every replica is risky.
If you need remote debugging, restrict access with a NetworkPolicy, or bind Delve to
localhost (for example --listen=127.0.0.1:2345) and reach it via kubectl port-forward.


15-24: No resources.requests defined

When only limits are declared, Kubernetes defaults the requests to the limits, so the
scheduler reserves the full limit for the pod. Set explicit requests that reflect baseline
usage (at or below the limits), e.g.:

resources:
  requests:
    cpu: "250m"
    memory: "200Mi"
  limits:
    cpu: "1000m"
    memory: "400Mi"

1-31: Add pod security context to satisfy Checkov findings

Same minimal block as suggested for other manifests:

securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
example/quick-start/module-controller.yaml (1)

29-29: Add trailing newline

YAML-lint reports a missing newline at EOF—tiny but keeps tooling quiet.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 801cda1 and 2010235.

📒 Files selected for processing (6)
  • Makefile (2 hunks)
  • debug.Dockerfile (1 hunks)
  • example/quick-start/base.yaml (1 hunks)
  • example/quick-start/module-controller-test.yaml (1 hunks)
  • example/quick-start/module-controller.yaml (1 hunks)
  • example/quick-start/module.yaml (3 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
example/quick-start/module-controller-test.yaml

[MEDIUM] 1-31: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[MEDIUM] 1-31: Minimize the admission of root containers

(CKV_K8S_23)

example/quick-start/module.yaml

[MEDIUM] 1-58: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[MEDIUM] 1-58: Minimize the admission of root containers

(CKV_K8S_23)

example/quick-start/module-controller.yaml

[MEDIUM] 1-29: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[MEDIUM] 1-29: Minimize the admission of root containers

(CKV_K8S_23)

example/quick-start/base.yaml

[MEDIUM] 1-26: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[MEDIUM] 1-26: Minimize the admission of root containers

(CKV_K8S_23)

🪛 YAMLlint (1.37.1)
example/quick-start/module-controller.yaml

[error] 29-29: no new line character at the end of file

(new-line-at-end-of-file)

example/quick-start/base.yaml

[error] 26-26: no new line character at the end of file

(new-line-at-end-of-file)

🔇 Additional comments (2)
example/quick-start/module.yaml (1)

31-37: Placeholder in required nodeAffinity will prevent scheduling

TO_BE_IMPLEMENTED is unlikely to match any node and will block the pod.
If this is intentional, add a comment clarifying; otherwise supply the real
base name or drop the entire selector until it is known.

example/quick-start/module-controller.yaml (1)

18-19: Pin image by digest for immutability

A moving v2.1.4 tag can unexpectedly change the bits in production. Consider switching to a digest to guarantee reproducibility, e.g.:

image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2@sha256:<digest>

Comment on lines +22 to 24
- name: biz1-web-single-host # this name must same with the biz name defined in the jar
image: https://koupleless-dosc.oss-cn-hongkong.aliyuncs.com/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
env:

⚠️ Potential issue

image field is not a valid OCI image reference

Kubernetes expects an image reference like registry/namespace/repo:tag.
A raw HTTPS URL pointing at a .jar will fail to pull.

-          image: https://koupleless-dosc.oss-cn-hongkong.aliyuncs.com/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
+          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/biz1-web-single-host:0.0.1-SNAPSHOT
🤖 Prompt for AI Agents
In example/quick-start/module.yaml around lines 22 to 24, the image field is
incorrectly set to a raw HTTPS URL pointing to a .jar file, which is not a valid
OCI image reference for Kubernetes. Replace the image value with a proper OCI
image reference format like registry/namespace/repo:tag that points to a
container image stored in a container registry.

Comment on lines +20 to +23
limits:
cpu: "1000m"
memory: "400Mi"

🛠️ Refactor suggestion

Specify resources.requests for predictable scheduling

Only limits are declared; Kubernetes then defaults the requests to the limits, so the scheduler reserves the full limit for every replica. Define explicit requests matching expected baseline consumption to keep scheduling predictable and the QoS class tunable.

           resources:
+            requests:
+              cpu: 200m
+              memory: 200Mi
             limits:
               cpu: "1000m"
               memory: "400Mi"
🤖 Prompt for AI Agents
In example/quick-start/module-controller.yaml around lines 20 to 23, the
resource specification only includes limits for CPU and memory but lacks
requests, which can cause scheduling issues and throttling. Add a
resources.requests section with appropriate CPU and memory values that reflect
the expected baseline usage to ensure stable resource allocation and avoid
surprises.

Comment on lines +16 to +23
- name: module-controller
image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # 已经打包好的镜像
imagePullPolicy: Always
resources:
limits:
cpu: "1000m"
memory: "400Mi"

⚠️ Potential issue

Add securityContext to harden the container

Checkov flags (CKV_K8S_20, CKV_K8S_23) point out that the container may run as root and allow privilege escalation. Explicitly disabling those behaviours is a baseline hardening step.

         - name: module-controller
           image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # 已经打包好的镜像
           imagePullPolicy: Always
+          securityContext:
+            allowPrivilegeEscalation: false
+            runAsNonRoot: true
+            capabilities:
+              drop:
+                - "ALL"
           resources:
             limits:
               cpu: "1000m"
               memory: "400Mi"
🤖 Prompt for AI Agents
In example/quick-start/module-controller.yaml around lines 16 to 23, the
container lacks a securityContext to prevent running as root and privilege
escalation. Add a securityContext section under the container spec that sets
runAsNonRoot to true, runAsUser to a non-root UID (e.g., 1000), and disables
privilege escalation by setting allowPrivilegeEscalation to false. This will
harden the container by enforcing non-root execution and preventing privilege
escalation.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
Makefile (1)

45-48: Hard-coded 90 s rollout timeout
Some laptops (or slow registries) will exceed 90 s, causing false failures. Consider making this configurable:

-kubectl wait --for=condition=available --timeout=90s deployments/module-controller
+kubectl wait --for=condition=available --timeout=$${TIMEOUT:-180s} deployment/module-controller
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cb471ed and 95a955f.

📒 Files selected for processing (1)
  • Makefile (2 hunks)
🔇 Additional comments (2)
Makefile (2)

40-43: Inconsistent image name & non-portable Minikube sub-command
The tag module-controller-v2:latest differs from the underscore-based tag used in the regular build target, which may leave your manifest referencing a non-existent image.
Also, minikube image build/rm is only available in recent Minikube releases—older versions require minikube image load after a normal docker build.

Please verify both the image name and the expected Minikube version.


58-59: minikube-restart blocks the Make run
Because it chains into the interactive minikube-debug, the make process never returns, which hampers scripting/CI usage.
Consider splitting the interactive step or provide a non-interactive variant.

Comment on lines +35 to +38
.PHONY: minikube-delete
minikube-delete: ## Delete module-controller deployment from minikube
kubectl delete deployments.apps/module-controller || true
kubectl wait --for=delete pod -l app=module-controller --timeout=90s

🛠️ Refactor suggestion

kubectl wait will error when no pods remain
If kubectl delete finds nothing, the subsequent kubectl wait --for=delete aborts with NotFound, breaking the target. Swallow the error or use --ignore-not-found.

-kubectl wait --for=delete pod -l app=module-controller --timeout=90s
+kubectl wait --for=delete pod -l app=module-controller --timeout=90s || true
🤖 Prompt for AI Agents
In the Makefile at lines 35 to 38, the kubectl wait command errors if no pods
remain after deletion, causing the target to fail. Modify the kubectl wait
command to include the --ignore-not-found flag to prevent errors when no pods
are found, ensuring the target completes successfully even if no pods exist.

Comment on lines +54 to +56
.PHONY: minikube-port-forward
minikube-port-forward:
kubectl port-forward deployments/module-controller 2345:2345

🛠️ Refactor suggestion

Same plural-resource issue for port-forward

-kubectl port-forward deployments/module-controller 2345:2345
+kubectl port-forward deployment/module-controller 2345:2345
🤖 Prompt for AI Agents
In the Makefile at lines 54 to 56, the kubectl port-forward command uses the plural
resource name "deployments/module-controller". Change it to the singular form
"deployment/module-controller" to correctly specify the resource type for
port-forwarding.

Comment on lines +50 to +52
.PHONY: minikube-debug
minikube-debug:
kubectl exec deployments/module-controller -it -- dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./module_controller

🛠️ Refactor suggestion

Resource name should be singular for exec
kubectl exec deployments/… is not consistently accepted across kubectl versions. Use the singular resource alias to avoid “error: unable to upgrade connection” issues.

-kubectl exec deployments/module-controller -it -- dlv ...
+kubectl exec deployment/module-controller -it -- dlv ...
🤖 Prompt for AI Agents
In the Makefile at lines 50 to 52, the kubectl exec command uses the plural
resource name "deployments/module-controller," which can cause compatibility
issues across kubectl versions. Change "deployments/module-controller" to the
singular form "deployment/module-controller" to ensure consistent behavior and
avoid connection errors.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
example/quick-start/module-controller-test.yaml (1)

25-29: Add readiness / liveness probes for quicker fault detection

The container opens ports 7777 and 2345, but Kubernetes has no way to know when the service is ready or unhealthy. Adding probes cuts mean time to recovery and makes rolling updates safer:

livenessProbe:
  httpGet:
    path: /healthz
    port: httptunnel
  initialDelaySeconds: 10
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /ready
    port: httptunnel
  initialDelaySeconds: 5
  periodSeconds: 10

If HTTP isn’t available, switch to tcpSocket or exec.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 95a955f and ea98ef8.

📒 Files selected for processing (1)
  • example/quick-start/module-controller-test.yaml (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
example/quick-start/module-controller-test.yaml

[MEDIUM] 1-32: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[MEDIUM] 1-32: Minimize the admission of root containers

(CKV_K8S_23)

Comment on lines +16 to +17
- name: module-controller

⚠️ Potential issue

Add securityContext to drop root privileges and disable privilege escalation

Unless the image sets a non-root USER, the container runs as UID 0, and allowPrivilegeEscalation defaults to true; both violate hardening guidance and triggered the CKV_K8S_20 / CKV_K8S_23 findings.
Add an explicit securityContext:

         - name: module-controller
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            capabilities:
+              drop: [ "ALL" ]

This single change neutralises both Checkov findings and narrows the blast-radius if the process is compromised.

🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 16 to 17, the
container lacks a securityContext that drops root privileges and disables
privilege escalation, causing security findings CKV_K8S_20 and CKV_K8S_23. Add a
securityContext block under the container definition that sets runAsNonRoot to
true, runAsUser to a non-root UID (e.g., 1000), and allowPrivilegeEscalation to
false to comply with security best practices and fix the findings.

Comment on lines +21 to +24
limits:
cpu: "1000m"
memory: "400Mi"

🛠️ Refactor suggestion

Add resource requests alongside limits for reliable scheduling

Only limits are specified; Kubernetes then defaults the requests to the limits, so every replica reserves its full limit and the QoS class cannot be tuned. Declare explicit requests below the limits to reflect baseline usage and avoid over-reserving node capacity.

           resources:
-            limits:
-              cpu: "1000m"
-              memory: "400Mi"
+            requests:
+              cpu: "200m"
+              memory: "200Mi"
+            limits:
+              cpu: "1000m"
+              memory: "400Mi"

Tune the numbers to your actual baseline consumption.

🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 21 to 24,
resource limits are specified but resource requests are missing. Add a
resources.requests section alongside the existing limits with appropriate cpu
and memory values that reflect the baseline consumption to ensure reliable
scheduling and avoid noisy-neighbor issues.

Comment on lines +18 to +20
imagePullPolicy: Never
# imagePullPolicy: Always # 如果使用远程仓库的镜像,改为 Always

💡 Verification agent

Pin the image to an immutable tag and adjust imagePullPolicy

The current setup uses :latest and imagePullPolicy: Never, which:

  • Prevents reproducible deployments (you can’t roll back to a known state).
  • Fails (ErrImageNeverPull) in CI or remote clusters where the image isn't already present on the node.

File: example/quick-start/module-controller-test.yaml Lines: 18–20

-          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:latest
-          imagePullPolicy: Never
+          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:v2.1.4   # pin to a specific release or use a digest
+          imagePullPolicy: IfNotPresent

Aligning on a fixed tag (or SHA digest) guarantees you deploy exactly what you’ve tested, and using IfNotPresent will pull the image if it’s missing without breaking on remote clusters.

🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 18 to 20, the
image tag is set to :latest and imagePullPolicy is Never, which causes
non-reproducible deployments and failures in CI or remote clusters. Fix this by
replacing :latest with a fixed, immutable tag or SHA digest for the image, and
change imagePullPolicy to IfNotPresent to allow pulling the image if missing
without breaking deployments.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
common/model/model.go (2)

34-38: Decouple common model from HTTP tunnel ark_service

Typing Response as ark_service.ArkResponse creates an undesirable dependency from common/model to a tunnel-specific package. Prefer hosting ArkResponse in a neutral package (e.g., common/ark or common/model) or using a type from arkctl so common stays tunnel-agnostic.
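A hypothetical sketch of that decoupling, with invented field names (Code, Message) standing in for whatever ArkResponse actually carries; this is not the current code, only an illustration of hosting the response type in common/model:

package model

// OperationResult is a tunnel-agnostic stand-in for ark_service.ArkResponse;
// its fields are invented for illustration.
type OperationResult struct {
	Code    string `json:"code"`
	Message string `json:"message"`
}

// BizOperationResponse can then be declared without importing any
// tunnel-specific package.
type BizOperationResponse struct {
	BizModelVersion string          `json:"bizModelVersion"`
	Response        OperationResult `json:"response"`
}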


37-37: Clarify BizModelVersion doc to reflect it carries podKey

The comment says “Version of the business model,” but this is now used to carry a podKey. Clarify to avoid confusion for future contributors.

Apply this small change:

-    BizModelVersion string                                       `json:"bizModelVersion"` // Version of the business model
+    BizModelVersion string                                       `json:"bizModelVersion"` // Pod-level key (podKey) used for scoping operations
module_tunnels/koupleless_http_tunnel/http_tunnel.go (1)

337-348: Confirm Ark HTTP API accepts BizModelVersion for pod scoping

This path encodes podKey via BizModel.BizModelVersion. Verify the Ark HTTP service interprets BizModelVersion as the podKey (or alternatively, extend the HTTP request model with an explicit PodKey to match MQTT for consistency).

Would you like me to open a follow-up to standardize payload shape (explicit PodKey on both HTTP/MQTT) to avoid overloading BizModelVersion semantics?
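If the payloads were standardized, the HTTP request model could carry the pod key explicitly, mirroring the MQTT side. A rough sketch only: BizModelStub stands in for the real ark.BizModel, and the struct shape and json tags are assumptions rather than the current code.

package ark_service_sketch

// BizModelStub stands in for the real ark.BizModel embedded in the request.
type BizModelStub struct {
	BizName    string `json:"bizName"`
	BizVersion string `json:"bizVersion"`
}

// InstallBizRequest carries the pod key explicitly instead of overloading
// BizModel.BizModelVersion; UninstallBizRequest would mirror this shape.
type InstallBizRequest struct {
	BizModelStub
	PodKey string `json:"podKey,omitempty"`
}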

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ea98ef8 and 4f3d74a.

📒 Files selected for processing (3)
  • common/model/model.go (1 hunks)
  • module_tunnels/koupleless_http_tunnel/http_tunnel.go (4 hunks)
  • module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go (4)
common/utils/utils.go (2)
  • TranslateCoreV1ContainerToBizModel (58-64)
  • FormatArkletCommandTopic (36-38)
common/zaplogger/logger.go (1)
  • FromContext (19-26)
module_tunnels/koupleless_http_tunnel/ark_service/model.go (2)
  • InstallBizRequest (6-8)
  • UninstallBizRequest (11-13)
common/model/consts.go (1)
  • CommandInstallBiz (49-49)
common/model/model.go (1)
module_tunnels/koupleless_http_tunnel/ark_service/model.go (1)
  • ArkResponse (16-31)
module_tunnels/koupleless_http_tunnel/http_tunnel.go (2)
common/model/model.go (1)
  • BizOperationResponse (33-39)
common/model/consts.go (2)
  • CommandInstallBiz (49-49)
  • CommandUnInstallBiz (51-51)
🔇 Additional comments (4)
common/model/model.go (1)

34-38: BizModelVersion threaded into responses — looks good

Including BizModelVersion in BizOperationResponse aligns with the new podKey propagation and keeps callbacks informed.

module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go (1)

8-11: Imports adjustment is appropriate

New imports (strings, sync, time) are used below and required. No issues.

module_tunnels/koupleless_http_tunnel/http_tunnel.go (2)

327-348: Good: podKey propagated via BizModelVersion and included in response

Updating the signature and setting bizModel.BizModelVersion lets the HTTP path carry the podKey through to Ark, and returning BizModelVersion in BizOperationResponse keeps the response consistent.


362-381: StopBiz mirrors StartBiz changes — consistent propagation

StopBiz mirrors the StartBiz handling for podKey via BizModelVersion and in the response. Looks consistent.

Comment on lines 434 to 441
func (mqttTunnel *MqttTunnel) StartBiz(nodeName, podKey string, container *corev1.Container) error {
bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
zlogger := zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion)
zlogger.Info("InstallModule")
installBizRequestBytes, _ := json.Marshal(bizModel)
installBizRequestBytes, _ := json.Marshal(ark_service.InstallBizRequest{
BizModel: bizModel,
PodKey: podKey,
})

🛠️ Refactor suggestion

Handle marshal errors and include podKey in logs

Ignoring json.Marshal errors risks publishing invalid payloads; also add podKey to the log context for traceability.

Apply:

 func (mqttTunnel *MqttTunnel) StartBiz(nodeName, podKey string, container *corev1.Container) error {
   bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
-  zlogger := zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion)
+  zlogger := zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion, "podKey", podKey)
   zlogger.Info("InstallModule")
-  installBizRequestBytes, _ := json.Marshal(ark_service.InstallBizRequest{
-    BizModel: bizModel,
-    PodKey:   podKey,
-  })
+  installReq := ark_service.InstallBizRequest{
+    BizModel: bizModel,
+    PodKey:   podKey,
+  }
+  installBizRequestBytes, err := json.Marshal(installReq)
+  if err != nil {
+    zlogger.Error(err, "failed to marshal InstallBizRequest")
+    return err
+  }
   nodeID := utils2.ExtractNodeIDFromNodeName(nodeName)
   return mqttTunnel.mqttClient.Pub(utils.FormatArkletCommandTopic(mqttTunnel.env, nodeID, model.CommandInstallBiz), mqtt.Qos0, installBizRequestBytes)
 }

Additionally, consider using QoS 1 for install/uninstall commands to improve delivery reliability if the broker/base requires at-least-once semantics.

🤖 Prompt for AI Agents
In module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go around lines 434 to 441,
the code ignores the json.Marshal error and doesn't include podKey in the log
context; update the code to check the error returned by json.Marshal for the
InstallBizRequest, log or return the error (do not proceed to publish on marshal
failure), and add "podKey" to the zaplogger.WithValues call so logs include the
pod identifier; also when publishing install/uninstall commands consider using
QoS 1 (at-least-once) instead of 0 to improve delivery reliability where broker
semantics require it.

💡 Verification agent

🧩 Analysis chain

Verify model definitions include PodKey in Install/Uninstall requests

The code sets PodKey on ark_service.InstallBizRequest/UninstallBizRequest. Ensure those structs declare a PodKey field; otherwise this won’t compile or the field will be dropped during marshal.

Run:

Expected: both request types contain a field like PodKey string `json:"podKey,omitempty"`.


🏁 Script executed:

#!/bin/bash
# Verify InstallBizRequest/UninstallBizRequest and PodKey presence
rg -n -C2 -P '^\s*type\s+InstallBizRequest\b' module_tunnels | sed -n '1,200p'
rg -n -C2 -P '^\s*type\s+UninstallBizRequest\b' module_tunnels | sed -n '1,200p'
rg -n -C2 '\bPodKey\b' module_tunnels

# Also check for BizModelVersion usage across repo (sanity)
rg -n -C2 '\bBizModelVersion\b'

Length of output: 5147


Add PodKey to InstallBizRequest/UninstallBizRequest models

The ark_service.InstallBizRequest and ark_service.UninstallBizRequest types (in module_tunnels/koupleless_http_tunnel/ark_service/model.go) don’t declare a PodKey field, but mqtt_tunnel.go is initializing one—this will fail to compile. Please update both structs to include:

 type InstallBizRequest struct {
     ark.BizModel `json:",inline"`
+    PodKey string `json:"podKey,omitempty"`
 }

 type UninstallBizRequest struct {
     ark.BizModel `json:",inline"`
+    PodKey string `json:"podKey,omitempty"`
 }

  • File: module_tunnels/koupleless_http_tunnel/ark_service/model.go
    Add PodKey string `json:"podKey,omitempty"` under the inlined ark.BizModel in both request structs.
  • After adding these fields, re-run go build to confirm compilation.

🤖 Prompt for AI Agents
In module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go around lines 434-441,
mqtt_tunnel initializes InstallBizRequest.PodKey but the InstallBizRequest and
UninstallBizRequest structs lack that field; open
module_tunnels/koupleless_http_tunnel/ark_service/model.go and add PodKey string
with the json tag `podKey,omitempty` (PodKey string `json:"podKey,omitempty"`)
immediately under the inlined ark.BizModel in both InstallBizRequest and
UninstallBizRequest, save, then run go build to confirm the compilation
succeeds.

Comment on lines 446 to 451
func (mqttTunnel *MqttTunnel) StopBiz(nodeName, podKey string, container *corev1.Container) error {
bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
unInstallBizRequestBytes, _ := json.Marshal(bizModel)
unInstallBizRequestBytes, _ := json.Marshal(ark_service.UninstallBizRequest{
BizModel: bizModel,
PodKey: podKey,
})

🛠️ Refactor suggestion

Also handle marshal errors for uninstall payload

Same reasoning as StartBiz; don’t drop marshal errors.

 func (mqttTunnel *MqttTunnel) StopBiz(nodeName, podKey string, container *corev1.Container) error {
   bizModel := utils.TranslateCoreV1ContainerToBizModel(container)
-  unInstallBizRequestBytes, _ := json.Marshal(ark_service.UninstallBizRequest{
-    BizModel: bizModel,
-    PodKey:   podKey,
-  })
+  uninstallReq := ark_service.UninstallBizRequest{
+    BizModel: bizModel,
+    PodKey:   podKey,
+  }
+  unInstallBizRequestBytes, err := json.Marshal(uninstallReq)
+  if err != nil {
+    zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion, "podKey", podKey).
+      Error(err, "failed to marshal UninstallBizRequest")
+    return err
+  }
   zlogger := zaplogger.FromContext(mqttTunnel.ctx).WithValues("bizName", bizModel.BizName, "bizVersion", bizModel.BizVersion)
   zlogger.Info("UninstallModule")
   nodeID := utils2.ExtractNodeIDFromNodeName(nodeName)
   return mqttTunnel.mqttClient.Pub(utils.FormatArkletCommandTopic(mqttTunnel.env, nodeID, model.CommandUnInstallBiz), mqtt.Qos0, unInstallBizRequestBytes)
 }
🤖 Prompt for AI Agents
module_tunnels/koupleless_mqtt_tunnel/mqtt_tunnel.go lines 446-451: the
json.Marshal call for the UninstallBizRequest currently ignores the error; check
and handle the marshal error instead of discarding it — capture the error from
json.Marshal, return or wrap it (and optionally log) when non-nil so callers
receive the failure, and only proceed to send the uninstall payload when marshal
succeeds.
