27 changes: 27 additions & 0 deletions Makefile
@@ -32,6 +32,32 @@ buildx: fmt vet
build: fmt vet
	docker build -t serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module_controller:latest .

+.PHONY: minikube-delete
+minikube-delete: ## Delete module-controller deployment from minikube
+	kubectl delete deployments.apps/module-controller || true
+	kubectl wait --for=delete pod -l app=module-controller --timeout=90s
Comment on lines +35 to +38
🛠️ Refactor suggestion

kubectl wait will error when no pods remain
If kubectl delete finds nothing, the subsequent kubectl wait --for=delete aborts with NotFound, breaking the target. Swallow the error or use --ignore-not-found.

-kubectl wait --for=delete pod -l app=module-controller --timeout=90s
+kubectl wait --for=delete pod -l app=module-controller --timeout=90s || true
🤖 Prompt for AI Agents
In the Makefile at lines 35 to 38, the kubectl wait command errors if no pods
remain after deletion, causing the target to fail. Modify the kubectl wait
command to include the --ignore-not-found flag to prevent errors when no pods
are found, ensuring the target completes successfully even if no pods exist.


+.PHONY: minikube-build
+minikube-build: fmt vet minikube-delete ## Build debug version using minikube
+	minikube image rm serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:latest
+	minikube image build -f debug.Dockerfile -t serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:latest .
+
+.PHONY: minikube-deploy
+minikube-deploy: ## Deploy module-controller to minikube
+	kubectl apply -f example/quick-start/module-controller-test.yaml
+	kubectl wait --for=condition=available --timeout=90s deployments/module-controller
+
+.PHONY: minikube-debug
+minikube-debug:
+	kubectl exec deployments/module-controller -it -- dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./module_controller
Comment on lines +50 to +52
🛠️ Refactor suggestion

Resource name should be singular for exec
kubectl exec deployments/… is not consistently accepted across kubectl versions. Use the singular resource alias to avoid “error: unable to upgrade connection” issues.

-kubectl exec deployments/module-controller -it -- dlv ...
+kubectl exec deployment/module-controller -it -- dlv ...
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
.PHONY: minikube-debug
minikube-debug:
	kubectl exec deployments/module-controller -it -- dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./module_controller
.PHONY: minikube-debug
minikube-debug:
	kubectl exec deployment/module-controller -it -- dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./module_controller
🤖 Prompt for AI Agents
In the Makefile at lines 50 to 52, the kubectl exec command uses the plural
resource name "deployments/module-controller," which can cause compatibility
issues across kubectl versions. Change "deployments/module-controller" to the
singular form "deployment/module-controller" to ensure consistent behavior and
avoid connection errors.
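The singular/plural mixup is easy to guard against in tooling. A hypothetical helper (the alias table below is illustrative; it is not kubectl's actual resolution logic):

```python
# Map common spellings of the Deployment resource to one canonical form,
# so generated commands always use the singular alias.
ALIASES = {
    "deployments": "deployment",
    "deploy": "deployment",
    "deployment": "deployment",
}

def normalize(ref: str) -> str:
    """Normalize a `kind/name` reference to the singular resource kind."""
    kind, _, name = ref.partition("/")
    return f"{ALIASES.get(kind, kind)}/{name}"

print(normalize("deployments/module-controller"))  # deployment/module-controller
```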


+.PHONY: minikube-port-forward
+minikube-port-forward:
+	kubectl port-forward deployments/module-controller 2345:2345
Comment on lines +54 to +56
🛠️ Refactor suggestion

Same plural-resource issue for port-forward

-kubectl port-forward deployments/module-controller 2345:2345
+kubectl port-forward deployment/module-controller 2345:2345
🤖 Prompt for AI Agents
In the Makefile at lines 54 to 56, the kubectl port-forward command uses the plural
resource name "deployments/module-controller". Change it to the singular form
"deployment/module-controller" to keep resource references consistent and avoid
version-dependent behavior.


+.PHONY: minikube-restart
+minikube-restart: minikube-build minikube-deploy minikube-debug
+
##@ Deployment

ifndef ignore-not-found
@@ -74,3 +100,4 @@ $(CONTROLLER_GEN): $(LOCALBIN)
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
	test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
+
37 changes: 22 additions & 15 deletions example/quick-start/base.yaml
@@ -1,19 +1,26 @@
-apiVersion: v1
-kind: Pod
+apiVersion: apps/v1
+kind: Deployment
metadata:
  name: base
  labels:
    app: base
spec:
-  containers:
-    - name: base
-      image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/base-web:1.3.3 # pre-built image; source at https://github.com/koupleless/samples/blob/main/springboot-samples/web/tomcat/Dockerfile
-      imagePullPolicy: Always
-      ports:
-        - name: base
-          containerPort: 8080
-        - name: arklet
-          containerPort: 1238
-      env:
-        - name: MODULE_CONTROLLER_ADDRESS
-          value: { YOUR_MODULE_CONTROLLER_IP }
+  replicas: 1
+  selector:
+    matchLabels:
+      app: base
+  template:
+    metadata:
+      labels:
+        app: base
+    spec:
+      containers:
+        - name: base
+          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/base-web:1.4.0 # pre-built image; built from https://github.com/koupleless/samples/blob/main/springboot-samples/web/tomcat/Dockerfile
+          imagePullPolicy: Always
+          ports:
+            - name: base
+              containerPort: 8080
+            - name: arklet
+              containerPort: 1238
+          env:
+            - name: MODULE_CONTROLLER_ADDRESS # in the base-web koupleless runtime this is `koupleless.arklet.http.heartbeat.endpoint`
+              value: {YOUR_MODULE_CONTROLLER_IP_AND_PORT} # 127.0.0.1:7777
50 changes: 29 additions & 21 deletions example/quick-start/module-controller-test.yaml
@@ -1,24 +1,32 @@
-apiVersion: v1
-kind: Pod
+apiVersion: apps/v1
+kind: Deployment
metadata:
  name: module-controller
  labels:
    app: module-controller
spec:
-  serviceAccountName: virtual-kubelet # the Service Account configured in the previous step
-  containers:
-    - name: module-controller
-      image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:v2.1.2 # pre-built image
-      imagePullPolicy: Always
-      resources:
-        limits:
-          cpu: "1000m"
-          memory: "400Mi"
-      ports:
-        - name: httptunnel
-          containerPort: 7777
-        - name: debug
-          containerPort: 2345
-      env:
-        - name: ENABLE_HTTP_TUNNEL
-          value: "true"
+  replicas: 1
+  selector:
+    matchLabels:
+      app: module-controller
+  template:
+    metadata:
+      labels:
+        app: module-controller
+    spec:
+      serviceAccountName: virtual-kubelet # the Service Account configured in the previous step
+      containers:
+        - name: module-controller
Comment on lines +16 to +17
⚠️ Potential issue

Add securityContext to drop root privileges and disable privilege escalation

By default the container will run as UID 0 with allowPrivilegeEscalation = true, which violates hardening guidance and triggered the CKV_K8S_20 / CKV_K8S_23 findings.
Add an explicit securityContext:

         - name: module-controller
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            capabilities:
+              drop: [ "ALL" ]

This single change neutralises both Checkov findings and narrows the blast radius if the process is compromised.

📝 Committable suggestion


Suggested change
      containers:
        - name: module-controller
      containers:
        - name: module-controller
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: [ "ALL" ]
🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 16 to 17, the
container lacks a securityContext that drops root privileges and disables
privilege escalation, causing security findings CKV_K8S_20 and CKV_K8S_23. Add a
securityContext block under the container definition that sets runAsNonRoot to
true, runAsUser to a non-root UID (e.g., 1000), and allowPrivilegeEscalation to
false to comply with security best practices and fix the findings.
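The two findings can be checked mechanically on a parsed manifest. A minimal, illustrative lint (not part of the PR; field names follow the Kubernetes pod spec):

```python
# Flag the two Checkov findings discussed above on a container dict,
# e.g. one parsed from the manifest with a YAML library.
def security_findings(container: dict) -> list[str]:
    sc = container.get("securityContext", {})
    findings = []
    if not sc.get("runAsNonRoot", False):
        findings.append("CKV_K8S_23: container may run as root")
    if sc.get("allowPrivilegeEscalation", True):
        findings.append("CKV_K8S_20: privilege escalation not disabled")
    return findings

bare = {"name": "module-controller"}
hardened = {
    "name": "module-controller",
    "securityContext": {
        "runAsNonRoot": True,
        "allowPrivilegeEscalation": False,
        "capabilities": {"drop": ["ALL"]},
    },
}
print(len(security_findings(bare)))      # 2
print(len(security_findings(hardened)))  # 0
```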

+          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:latest # image you built yourself
+          imagePullPolicy: Never
+          # imagePullPolicy: Always # switch to Always when using an image from a remote registry
Comment on lines +18 to +20
💡 Verification agent

❓ Verification inconclusive

Pin the image to an immutable tag and reconsider imagePullPolicy: Never

Using :latest makes rollbacks and reproducible builds impossible, and imagePullPolicy: Never only works on clusters where the image already exists (e.g. Minikube's local daemon); it silently breaks in CI or on remote clusters.

File: example/quick-start/module-controller-test.yaml Lines: 18–20

-          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:latest
-          imagePullPolicy: Never
+          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:v2.1.4   # pin to a specific release or use a digest
+          imagePullPolicy: IfNotPresent

Pinning to an immutable tag (or SHA digest) guarantees you deploy exactly the binary you tested, and IfNotPresent pulls the image when it is missing without breaking on remote clusters.

📝 Committable suggestion


Suggested change
          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:latest # image you built yourself
          imagePullPolicy: Never
          # imagePullPolicy: Always # switch to Always when using an image from a remote registry
          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/module-controller-v2:v2.1.4 # pin to a specific release or use a digest
          imagePullPolicy: IfNotPresent
          # imagePullPolicy: Always # switch to Always when using an image from a remote registry
🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 18 to 20, the
image tag is set to :latest and imagePullPolicy is Never, which causes
non-reproducible deployments and failures in CI or remote clusters. Fix this by
replacing :latest with a fixed, immutable tag or SHA digest for the image, and
change imagePullPolicy to IfNotPresent to allow pulling the image if missing
without breaking deployments.
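Both issues are detectable from the manifest alone. A sketch (the image-reference parsing is deliberately simplified and is not a full registry grammar):

```python
# Warn about the two issues raised above: a mutable `:latest` tag and
# `imagePullPolicy: Never` on an image that may not be preloaded.
# Digest references (`@sha256:`) count as pinned.
def image_warnings(image: str, pull_policy: str) -> list[str]:
    warnings = []
    if "@sha256:" not in image:
        last = image.rsplit("/", 1)[-1]
        tag = image.rsplit(":", 1)[-1] if ":" in last else "latest"
        if tag == "latest":
            warnings.append("unpinned tag: rollbacks are not reproducible")
    if pull_policy == "Never":
        warnings.append("Never: breaks on clusters without a preloaded image")
    return warnings

print(len(image_warnings("registry.example.com/app:latest", "Never")))        # 2
print(len(image_warnings("registry.example.com/app:v2.1.4", "IfNotPresent"))) # 0
```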

+          resources:
+            limits:
+              cpu: "1000m"
+              memory: "400Mi"
Comment on lines +21 to +24
🛠️ Refactor suggestion

Add resource requests alongside limits for reliable scheduling

Only limits are specified; without requests the scheduler will assume 0 CPU / 0 Mi, leading to noisy-neighbor issues and unpredictable QoS.

           resources:
-            limits:
-              cpu: "1000m"
-              memory: "400Mi"
+            requests:
+              cpu: "200m"
+              memory: "200Mi"
+            limits:
+              cpu: "1000m"
+              memory: "400Mi"

Tune the numbers to your actual baseline consumption.

📝 Committable suggestion


Suggested change
          resources:
            limits:
              cpu: "1000m"
              memory: "400Mi"
          resources:
            requests:
              cpu: "200m"
              memory: "200Mi"
            limits:
              cpu: "1000m"
              memory: "400Mi"
🤖 Prompt for AI Agents
In example/quick-start/module-controller-test.yaml around lines 21 to 24,
resource limits are specified but resource requests are missing. Add a
resources.requests section alongside the existing limits with appropriate cpu
and memory values that reflect the baseline consumption to ensure reliable
scheduling and avoid noisy-neighbor issues.
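The scheduling concern can also be verified mechanically. A sketch with heavily simplified quantity parsing (not Kubernetes' real resource.Quantity logic):

```python
# Parse the quantity strings used in these manifests and verify that
# requests exist and do not exceed limits.
UNITS = {"m": 0.001, "Mi": 1024**2, "Gi": 1024**3}

def parse_quantity(q: str) -> float:
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

def requests_ok(resources: dict) -> bool:
    requests, limits = resources.get("requests"), resources.get("limits", {})
    if not requests:
        return False  # scheduler would assume 0 CPU / 0 Mi
    return all(parse_quantity(requests[k]) <= parse_quantity(limits[k])
               for k in requests if k in limits)

suggested = {"requests": {"cpu": "200m", "memory": "200Mi"},
             "limits": {"cpu": "1000m", "memory": "400Mi"}}
print(requests_ok({"limits": {"cpu": "1000m"}}))  # False
print(requests_ok(suggested))                     # True
```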

+          ports:
+            - name: httptunnel
+              containerPort: 7777
+            - name: debug
+              containerPort: 2345
+          env:
+            - name: ENABLE_HTTP_TUNNEL
+              value: "true"
45 changes: 26 additions & 19 deletions example/quick-start/module-controller.yaml
@@ -1,22 +1,29 @@
-apiVersion: v1
-kind: Pod
+apiVersion: apps/v1
+kind: Deployment
metadata:
  name: module-controller
  labels:
    app: module-controller
spec:
-  serviceAccountName: virtual-kubelet # the Service Account configured in the previous step
-  containers:
-    - name: module-controller
-      image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.2 # pre-built image
-      imagePullPolicy: Always
-      resources:
-        limits:
-          cpu: "1000m"
-          memory: "400Mi"
-      ports:
-        - name: httptunnel
-          containerPort: 7777
-      env:
-        - name: ENABLE_HTTP_TUNNEL
-          value: "true"
+  replicas: 1
+  selector:
+    matchLabels:
+      app: module-controller
+  template:
+    metadata:
+      labels:
+        app: module-controller
+    spec:
+      serviceAccountName: virtual-kubelet # the Service Account configured in the previous step
+      containers:
+        - name: module-controller
+          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # pre-built image
+          imagePullPolicy: Always
+          resources:
+            limits:
+              cpu: "1000m"
+              memory: "400Mi"
Comment on lines +20 to +23
🛠️ Refactor suggestion

Specify resources.requests to avoid CPU / memory throttling surprises

Only limits are declared; requests default to 0, which can lead to poor scheduling and unexpected throttling. Define sane requests matching expected baseline consumption.

           resources:
+            requests:
+              cpu: 200m
+              memory: 200Mi
             limits:
               cpu: "1000m"
               memory: "400Mi"
📝 Committable suggestion


Suggested change
          resources:
            limits:
              cpu: "1000m"
              memory: "400Mi"
          resources:
            requests:
              cpu: 200m
              memory: 200Mi
            limits:
              cpu: "1000m"
              memory: "400Mi"
🤖 Prompt for AI Agents
In example/quick-start/module-controller.yaml around lines 20 to 23, the
resource specification only includes limits for CPU and memory but lacks
requests, which can cause scheduling issues and throttling. Add a
resources.requests section with appropriate CPU and memory values that reflect
the expected baseline usage to ensure stable resource allocation and avoid
surprises.

Comment on lines +16 to +23
⚠️ Potential issue

Add securityContext to harden the container

Checkov flags (CKV_K8S_20, CKV_K8S_23) point out that the container may run as root and allow privilege escalation. Explicitly disabling those behaviours is a baseline hardening step.

         - name: module-controller
           image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # pre-built image
           imagePullPolicy: Always
+          securityContext:
+            allowPrivilegeEscalation: false
+            runAsNonRoot: true
+            capabilities:
+              drop:
+                - "ALL"
           resources:
             limits:
               cpu: "1000m"
               memory: "400Mi"
📝 Committable suggestion


Suggested change
containers:
- name: module-controller
image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # 已经打包好的镜像
imagePullPolicy: Always
resources:
limits:
cpu: "1000m"
memory: "400Mi"
containers:
- name: module-controller
image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.4 # 已经打包好的镜像
imagePullPolicy: Always
securityContext:
allowPrivilegeEscalation: false
runAsNonRoot: true
capabilities:
drop:
- "ALL"
resources:
limits:
cpu: "1000m"
memory: "400Mi"
🤖 Prompt for AI Agents
In example/quick-start/module-controller.yaml around lines 16 to 23, the
container lacks a securityContext to prevent running as root and privilege
escalation. Add a securityContext section under the container spec that sets
runAsNonRoot to true, runAsUser to a non-root UID (e.g., 1000), and disables
privilege escalation by setting allowPrivilegeEscalation to false. This will
harden the container by enforcing non-root execution and preventing privilege
escalation.

+          ports:
+            - name: httptunnel
+              containerPort: 7777
+          env:
+            - name: ENABLE_HTTP_TUNNEL
+              value: "true"
31 changes: 20 additions & 11 deletions example/quick-start/module.yaml
@@ -1,23 +1,26 @@
apiVersion: apps/v1
kind: Deployment
metadata:
-  name: biz1
+  name: biz1-web-single-host
  labels:
    virtual-kubelet.koupleless.io/component: module-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
-      module: biz1
+      virtual-kubelet.koupleless.io/component: module
+      module.koupleless.io/name: biz1-web-single-host
+      module.koupleless.io/version: 0.0.1-SNAPSHOT
  template:
    metadata:
      labels:
-        module: biz1
+        virtual-kubelet.koupleless.io/component: module
+        module.koupleless.io/name: biz1-web-single-host
+        module.koupleless.io/version: 0.0.1-SNAPSHOT
    spec:
      containers:
-        - name: biz1
-          image: https://serverless-opensource.oss-cn-shanghai.aliyuncs.com/module-packages/stable/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
+        - name: biz1-web-single-host # this name must match the biz name defined in the jar
+          image: https://koupleless-dosc.oss-cn-hongkong.aliyuncs.com/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
          env:
Comment on lines +22 to 24
⚠️ Potential issue

image field is not a valid OCI image reference

Kubernetes expects an image reference like registry/namespace/repo:tag.
A raw HTTPS URL pointing at a .jar will fail to pull.

-          image: https://koupleless-dosc.oss-cn-hongkong.aliyuncs.com/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
+          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/biz1-web-single-host:0.0.1-SNAPSHOT
📝 Committable suggestion


Suggested change
        - name: biz1-web-single-host # this name must match the biz name defined in the jar
          image: https://koupleless-dosc.oss-cn-hongkong.aliyuncs.com/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
          env:
        - name: biz1-web-single-host # this name must match the biz name defined in the jar
          image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/biz1-web-single-host:0.0.1-SNAPSHOT
          env:
🤖 Prompt for AI Agents
In example/quick-start/module.yaml around lines 22 to 24, the image field is
incorrectly set to a raw HTTPS URL pointing to a .jar file, which is not a valid
OCI image reference for Kubernetes. Replace the image value with a proper OCI
image reference format like registry/namespace/repo:tag that points to a
container image stored in a container registry.
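The distinction the comment draws can be approximated with a quick validity check (the regex below is a simplification of the OCI distribution reference grammar, not the authoritative one):

```python
import re

# Rough shape of an OCI reference: registry[:port]/path[:tag|@sha256:digest].
OCI_REF = re.compile(
    r"^[a-z0-9.\-]+(:[0-9]+)?(/[a-z0-9._\-]+)+(:[\w.\-]+|@sha256:[0-9a-f]{64})?$"
)

def is_oci_reference(image: str) -> bool:
    """Reject raw HTTP(S) URLs and anything that doesn't look like registry/repo:tag."""
    if image.startswith(("http://", "https://")):
        return False
    return bool(OCI_REF.match(image))

print(is_oci_reference(
    "https://koupleless-dosc.oss-cn-hongkong.aliyuncs.com/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar"))  # False
print(is_oci_reference(
    "serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/biz1-web-single-host:0.0.1-SNAPSHOT"))   # True
```

Note that the Koupleless virtual kubelet may interpret jar URLs specially; the check above only reflects what a standard kubelet can pull.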

            - name: BIZ_VERSION
              value: 0.0.1-SNAPSHOT
@@ -26,18 +29,24 @@ spec:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  # these labels are generated on the vnode by the base, see https://github.com/koupleless/runtime/blob/main/arklet-core/src/main/java/com/alipay/sofa/koupleless/arklet/core/hook/base/BaseMetadataHookImpl.java
                  # you can define your own labels by implementing your own BaseMetadataHookImpl
                  - key: base.koupleless.io/name
                    operator: In
                    values:
                      - base
                  - key: base.koupleless.io/version
                    operator: In
                    values:
                      - 1.0.0
                      - TO_BE_IMPLEMENTED
                  - key: base.koupleless.io/cluster-name
                    operator: In
                    values:
                      - default
      podAntiAffinity: # core setting that spreads module pods across nodes
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                virtual-kubelet.koupleless.io/component: module
                module.koupleless.io/name: biz1-web-single-host
                module.koupleless.io/version: 0.0.1-SNAPSHOT
            topologyKey: topology.kubernetes.io/zone
      tolerations:
        - key: "schedule.koupleless.io/virtual-node"
          operator: "Equal"
@@ -46,4 +55,4 @@ spec:
        - key: "schedule.koupleless.io/node-env"
          operator: "Equal"
          value: "dev"
          effect: "NoExecute"