Merge main into stable
mprahl committed Jan 16, 2025
2 parents 5387a14 + f5c8cfe commit 04edf2b
Showing 21 changed files with 312 additions and 153 deletions.
8 changes: 6 additions & 2 deletions .github/actions/kind/action.yml
@@ -35,10 +35,14 @@ runs:
EOF'
- name: Setup KinD cluster
uses: helm/kind-action@v1.8.0
uses: helm/kind-action@v1
with:
cluster_name: cluster
version: v0.17.0
# The kind version to use
version: v0.25.0
# The Docker image for the cluster nodes - https://hub.docker.com/r/kindest/node/
node_image: kindest/node:v1.30.6@sha256:b6d08db72079ba5ae1f4a88a09025c0a904af3b52387643c285442afb05ab994
# The path to the kind config file
config: ${{ env.KIND_CONFIG_FILE }}

- name: Print cluster info
6 changes: 3 additions & 3 deletions .github/scripts/python_package_upload/Dockerfile
@@ -4,12 +4,12 @@ FROM docker.io/python:3.9
WORKDIR /app

# Copy the script into the container
COPY package_upload.sh /app/package_upload.sh
COPY package_download.sh /app/package_download.sh

# Make sure the script is executable
RUN chmod +x /app/package_upload.sh
RUN chmod +x /app/package_download.sh

# Store the files in a folder
VOLUME /app/packages

ENTRYPOINT ["/app/package_upload.sh"]
ENTRYPOINT ["/app/package_download.sh"]
155 changes: 155 additions & 0 deletions .github/scripts/tests/README.md
@@ -0,0 +1,155 @@
# Set up the local environment

All the following commands must be executed in a single terminal instance.

## Increase inotify Limits
To prevent file monitoring issues in development environments (e.g., IDEs or file sync tools), increase inotify limits:
```bash
sudo sysctl fs.inotify.max_user_instances=2280
sudo sysctl fs.inotify.max_user_watches=1255360
```
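To persist these limits across reboots, one option (an assumption; the repo does not prescribe this) is a sysctl drop-in such as `/etc/sysctl.d/99-inotify.conf`:

```
fs.inotify.max_user_instances=2280
fs.inotify.max_user_watches=1255360
```

followed by `sudo sysctl --system` to reload.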
## Prerequisites
* [kind](https://kind.sigs.k8s.io/)

## Create kind cluster
```bash
cat <<EOF | kind create cluster --name=kubeflow --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
image: kindest/node:v1.30.6@sha256:b6d08db72079ba5ae1f4a88a09025c0a904af3b52387643c285442afb05ab994
kubeadmConfigPatches:
- |
kind: ClusterConfiguration
apiServer:
extraArgs:
"service-account-issuer": "kubernetes.default.svc"
"service-account-signing-key-file": "/etc/kubernetes/pki/sa.key"
EOF
```

## kubeconfig
Instead of replacing your default kubeconfig, write the kind cluster's kubeconfig to a separate file and point `KUBECONFIG` at it:
```bash
kind get kubeconfig --name kubeflow > /tmp/kubeflow-config
export KUBECONFIG=/tmp/kubeflow-config
```
## Docker
To bypass registry pull rate limits when downloading images, log in with your credentials:
```bash
docker login -u='...' -p='...' quay.io
```

Upload the secret to the cluster (created from your Docker config):
```bash
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
```

## Test environment variables
Replace `/path/to` so that it points at your `data-science-pipelines-operator` checkout:
```bash
export GIT_WORKSPACE=/path/to/data-science-pipelines-operator
```
The image registry is required because you are running the test locally:
it will build the operator image and push it to your repository.

Replace `username` with your Quay.io username:
```bash
export REGISTRY_ADDRESS=quay.io/username
```

## Run the test
```bash
sh .github/scripts/tests/tests.sh --kind
```

# Debug a test using GoLand
Let's say you wish to debug the `Should create a Pipeline Run` test.
First, right-click inside the method body and select
`Run 'TestIntegrationTestSuite'`. The run will fail because some required parameters are missing.
Edit the run configuration for `TestIntegrationTestSuite/TestPipelineSuccessfulRun/Should_create_a_Pipeline_Run in github.com/opendatahub-io/data-science-pipelines-operator/tests`
and add the following program arguments:
```
-k8sApiServerHost=https://127.0.0.1:39873
-kubeconfig=/tmp/kubeflow-config
-DSPANamespace=test-dspa
-DSPAPath=/path/to/data-science-pipelines-operator/tests/resources/dspa-lite.yaml
```
## How to retrieve the parameters above
* `k8sApiServerHost`: inspect the kubeconfig and retrieve the server URL from there
* `kubeconfig`: the path where you stored the output of `kind get kubeconfig`
* `DSPANamespace`: the namespace where the DSPA under test is deployed (e.g. `test-dspa`)
* `DSPAPath`: the full path to the DSPA YAML used by the test

The test name (`Should create a Pipeline Run`), `DSPANamespace`, and `DSPAPath` depend on the test scenario.

If you wish to keep the resources, add `-skipCleanup=True` in the config above.
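The `k8sApiServerHost` value can be pulled straight from the kubeconfig. A minimal sketch, using a throwaway file standing in for `/tmp/kubeflow-config`:

```shell
# Stand-in kubeconfig for illustration; with a real cluster use /tmp/kubeflow-config instead
KCFG=$(mktemp)
cat > "$KCFG" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:39873
  name: kind-kubeflow
EOF

# Extract the server URL -- this is the value to pass as -k8sApiServerHost
SERVER=$(grep -m1 'server:' "$KCFG" | awk '{print $2}')
echo "$SERVER"
```

With a real cluster, `kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'` achieves the same (assuming `kubectl` is on the PATH and `KUBECONFIG` is exported).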

## Rerun the test
If you wish to rerun the test, you first need to delete the DSPA:
```bash
$ kubectl delete datasciencepipelinesapplications test-dspa -n test-dspa
datasciencepipelinesapplication.datasciencepipelinesapplications.opendatahub.io "test-dspa" deleted
```

# `tests.sh` details
This Bash script sets up and tests environments for the Data Science Pipelines Operator (DSPO)
on Kubernetes (K8s) or on *OpenShift with RHOAI deployed*. It deploys dependencies,
configures namespaces, builds and deploys images, and executes integration tests.

## **Features**
1. **Environment Variables Declaration**:
The script requires and verifies environment variables such as `GIT_WORKSPACE`, `REGISTRY_ADDRESS`, and `K8SAPISERVERHOST`. These variables define the workspace, registry for container images, and K8s API server address.

2. **Deployment Functions**:
Functions like `deploy_dspo`, `deploy_minio`, and `deploy_mariadb` handle deploying necessary components (e.g., MinIO, MariaDB, PyPI server) to the cluster.

3. **Namespace Configuration**:
Functions like `create_opendatahub_namespace` and `create_dspa_namespace` create and configure Kubernetes namespaces required for DSPO and other dependencies.

4. **Integration Testing**:
The script provides commands to run integration tests for DSPO and its external connections using `run_tests` and `run_tests_dspa_external_connections`.

5. **Cleanup and Resource Removal**:
Includes options like `--clean-infra` to remove namespaces and resources before running tests.

6. **Conditional Execution**:
Supports setting up and testing environments for different targets:
- `kind` (local Kubernetes clusters)
- `rhoai` (Red Hat OpenShift AI)

7. **Customizable Parameters**:
Allows passing values for paths, namespaces, and K8s API server via command-line arguments.
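The required-variable verification described in item 1 can be sketched as follows (the helper name and exact variable list are assumptions; see `tests.sh` for the real check):

```shell
# Hypothetical sketch of a required-variable check; not the literal tests.sh code
check_required() {
  for var in "$@"; do
    eval "val=\${$var:-}"          # indirect lookup of the variable named in $var
    if [ -z "$val" ]; then
      echo "$var variable not defined." >&2
      return 1
    fi
  done
}

GIT_WORKSPACE=/tmp/data-science-pipelines-operator
REGISTRY_ADDRESS=quay.io/example
check_required GIT_WORKSPACE REGISTRY_ADDRESS && echo "all required variables set"
```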

## **Usage**
```bash
./tests.sh [OPTIONS]
```

### **Options**
- `--kind`
Targets local `kind` cluster.
- `--rhoai`
Targets RHOAI.
- `--clean-infra`
Cleans existing resources before running tests.
- `--k8s-api-server-host <HOST>`
Specifies the Kubernetes API server host.
- `--dspa-namespace <NAMESPACE>`
Custom namespace for DSPA deployment.
- `--dspa-path <PATH>`
Path to DSPA resource YAML.
- `--endpoint-type <TYPE>`
Specifies endpoint type (`service` or `route`).
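A simplified sketch of how such flags are typically parsed (the variable names here are assumptions, not the script's actual internals):

```shell
# Simulate the example invocation; a real script would parse "$@" directly
set -- --kind --clean-infra --k8s-api-server-host "https://localhost:6443"

TARGET="" CLEAN_INFRA=false K8SAPISERVERHOST="" DSPA_NAMESPACE="" ENDPOINT_TYPE="service"
while [ "$#" -gt 0 ]; do
  case "$1" in
    --kind) TARGET="kind" ;;
    --rhoai) TARGET="rhoai" ;;
    --clean-infra) CLEAN_INFRA=true ;;
    --k8s-api-server-host) K8SAPISERVERHOST="$2"; shift ;;
    --dspa-namespace) DSPA_NAMESPACE="$2"; shift ;;
    --dspa-path) DSPA_PATH="$2"; shift ;;
    --endpoint-type) ENDPOINT_TYPE="$2"; shift ;;
    *) echo "Unknown option: $1" >&2; exit 1 ;;
  esac
  shift
done
echo "target=$TARGET clean=$CLEAN_INFRA host=$K8SAPISERVERHOST"
```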

### **Example**
To deploy and test DSPA on a local `kind` cluster:
```bash
./tests.sh --kind --clean-infra --k8s-api-server-host "https://localhost:6443"
```

To deploy DSPA on RHOAI:
```bash
./tests.sh --rhoai --dspa-namespace "custom-namespace"
```
58 changes: 58 additions & 0 deletions .github/scripts/tests/collect_logs.sh
@@ -0,0 +1,58 @@
#!/usr/bin/env bash

set -e

DSPA_NS=""
DSPO_NS=""

while [[ "$#" -gt 0 ]]; do
case $1 in
--dspa-ns) DSPA_NS="$2"; shift ;;
--dspo-ns) DSPO_NS="$2"; shift ;;
*) echo "Unknown parameter passed: $1"; exit 1 ;;
esac
shift
done

if [[ -z "$DSPA_NS" || -z "$DSPO_NS" ]]; then
echo "Both --dspa-ns and --dspo-ns parameters are required."
exit 1
fi

function check_namespace {
if ! kubectl get namespace "$1" &>/dev/null; then
echo "Namespace '$1' does not exist."
exit 1
fi
}

function display_pod_info {
local NAMESPACE=$1
local POD_NAMES

POD_NAMES=$(kubectl -n "${NAMESPACE}" get pods -o custom-columns=":metadata.name")

if [[ -z "${POD_NAMES}" ]]; then
echo "No pods found in namespace '${NAMESPACE}'."
return
fi

for POD_NAME in ${POD_NAMES}; do
echo "===== Pod: ${POD_NAME} in ${NAMESPACE} ====="

echo "----- EVENTS -----"
kubectl describe pod "${POD_NAME}" -n "${NAMESPACE}" | grep -A 100 Events || echo "No events found for pod ${POD_NAME}."

echo "----- LOGS -----"
kubectl logs "${POD_NAME}" -n "${NAMESPACE}" || echo "No logs found for pod ${POD_NAME}."

echo "==========================="
echo ""
done
}

check_namespace "$DSPA_NS"
check_namespace "$DSPO_NS"

display_pod_info "$DSPA_NS"
display_pod_info "$DSPO_NS"
5 changes: 4 additions & 1 deletion .github/scripts/tests/tests.sh
@@ -31,7 +31,10 @@ ENDPOINT_TYPE="service"

get_dspo_image() {
if [ "$REGISTRY_ADDRESS" = "" ]; then
echo "REGISTRY_ADDRESS variable not defined." && exit 1
# this function is called via `IMG=$(get_dspo_image)`, which captures the standard output of get_dspo_image
set -x
echo "REGISTRY_ADDRESS variable not defined."
exit 1
fi
local image="${REGISTRY_ADDRESS}/data-science-pipelines-operator"
echo $image
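The comment in the hunk above notes that `get_dspo_image` is consumed via command substitution, so anything written to stdout becomes the image name. A standalone sketch of that pattern (sending the error to stderr is one way to keep the captured value clean; the real script opts for `set -x` instead):

```shell
# Sketch only -- mirrors the shape of get_dspo_image, not its exact code
get_dspo_image() {
  if [ -z "$REGISTRY_ADDRESS" ]; then
    echo "REGISTRY_ADDRESS variable not defined." >&2  # stderr: not captured by $(...)
    return 1
  fi
  echo "${REGISTRY_ADDRESS}/data-science-pipelines-operator"
}

REGISTRY_ADDRESS=quay.io/example
IMG=$(get_dspo_image)
echo "$IMG"
```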
10 changes: 10 additions & 0 deletions .github/workflows/kind-integration.yml
@@ -11,6 +11,7 @@ on:
- config/**
- tests/**
- .github/resources/**
- .github/actions/**
- '.github/workflows/kind-integration.yml'
- '.github/scripts/tests/tests.sh'
- Makefile
@@ -47,6 +48,15 @@ jobs:
uses: ./.github/actions/kind

- name: Run test
id: test
working-directory: ${{ github.workspace }}/.github/scripts/tests
run: |
sh tests.sh --kind
continue-on-error: true

- name: Collect events and logs
if: steps.test.outcome != 'success'
working-directory: ${{ github.workspace }}/.github/scripts/tests
run: |
./collect_logs.sh --dspa-ns test-dspa --dspo-ns opendatahub
exit 1
4 changes: 2 additions & 2 deletions OWNERS
@@ -3,17 +3,17 @@ approvers:
- DharmitD
- dsp-developers
- gmfrasca
- gregsheremeta
- HumairAK
- rimolive
- mprahl
reviewers:
- DharmitD
- gmfrasca
- gregsheremeta
- hbelmiro
- HumairAK
- rimolive
- VaniHaripriya
- mprahl
emeritus_approvers:
- accorvin
- harshad16
18 changes: 1 addition & 17 deletions README.md
@@ -504,23 +504,7 @@ oc delete project ${ODH_NS}

## Run tests

Simply clone the repository and execute `make test`.

To run it without `make` you can run the following:

```bash
tmpFolder=$(mktemp -d)
go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
export KUBEBUILDER_ASSETS=$(${GOPATH}/bin/setup-envtest use 1.25.0 --bin-dir ${tmpFolder}/bin -p path)
go test ./... -coverprofile cover.out
# once $KUBEBUILDER_ASSETS is set you can also run the full test suite successfully by running:
pre-commit run --all-files
```

You can find a more permanent location to install `setup-envtest` into on your local filesystem and export
`KUBEBUILDER_ASSETS` into your `.bashrc` or equivalent. By doing this you can always run `pre-commit run --all-files`
without having to repeat these steps.
See [`.github/scripts/tests/README.md`](https://github.com/opendatahub-io/data-science-pipelines-operator/blob/main/.github/scripts/tests/README.md)

## Metrics

4 changes: 4 additions & 0 deletions config/component_metadata.yaml
@@ -0,0 +1,4 @@
releases:
- name: Kubeflow Pipelines
version: 2.2.0
repoUrl: https://github.com/kubeflow/pipelines
17 changes: 16 additions & 1 deletion controllers/database.go
@@ -47,6 +47,22 @@ var mariadbTemplates = []string{
"mariadb/default/tls-config.yaml.tmpl",
}

// tLSClientConfig creates and returns a TLS client configuration that includes
// a set of custom CA certificates for secure communication. It reads CA
// certificates from the environment variable `SSL_CERT_FILE` if it is set,
// and appends any additional certificates passed as input.
//
// Parameters:
//
// pems [][]byte: PEM-encoded certificates to be appended to the
// root certificate pool.
//
// Returns:
//
// *cryptoTls.Config: A TLS configuration with the certificates set to the updated
// certificate pool.
// error: An error if there is a failure in parsing any of the provided PEM
// certificates, or nil if successful.
func tLSClientConfig(pems [][]byte) (*cryptoTls.Config, error) {
rootCertPool := x509.NewCertPool()

@@ -120,7 +136,6 @@ var ConnectAndQueryDatabase = func(
// don't set anything
case "true":
var err error
// if pemCerts is empty, that is OK, we still add OS certs to the tls config
tlsConfig, err = tLSClientConfig(pemCerts)
if err != nil {
log.Info(fmt.Sprintf("Encountered error when processing custom ca bundle, Error: %v", err))
11 changes: 6 additions & 5 deletions docs/release/RELEASE.md
@@ -38,15 +38,16 @@ Steps on how to release `x.y+1`

1. Ensure `compatibility.yaml` is up to date, and generate a new `compatibility.md`
- Use [release-tools] to accomplish this
2. Cut branch `vx.y+1` from `main/master`
2. If the changes include a code rebase from KFP repo, ensure `config/component_metadata.yaml` is updated with the respective KFP version
3. Cut branch `vx.y+1` from `main/master`
- Do this for DSPO and DSP repos
3. Build images. Use the [build-tags] workflow, specifying the branches from above
4. Retrieve the sha images from the resulting workflow (check quay.io for the digests)
4. Build images. Use the [build-tags] workflow, specifying the branches from above
5. Retrieve the sha images from the resulting workflow (check quay.io for the digests)
- Using [release-tools] generate a `params.env` and submit a new pr to `vx.y+1` branch
- For images pulled from a registry, ensure the latest images are up to date
5. Perform any tests on the branch, confirm stability
6. Perform any tests on the branch, confirm stability
- If issues are found, they should be corrected in `main/master` and be cherry-picked into this branch.
6. Create a tag release (using the branches from above) for `x.y+1.0` in DSPO and DSP (e.g. `v1.3.0`)
7. Create a tag release (using the branches from above) for `x.y+1.0` in DSPO and DSP (e.g. `v1.3.0`)

## PATCH Releases

2 changes: 1 addition & 1 deletion docs/release/compatibility.md
@@ -6,7 +6,7 @@ Each row outlines the versions for individual subcomponents and images that are

| dsp | kfp | argo | ml-metadata | envoy | ocp-pipelines | oauth-proxy | mariadb-103 | ubi-minimal | ubi-micro | openshift |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2.8 | 2.2.0 | 3.4.17 | 1.14.0 | 1.22.11 | N/A | v4.10 | 1 | N/A | N/A | 4.15,4.16,4.17 |
| 2.8 | 2.2.0 | 3.4.17 | 1.14.0 | 1.22.11 | N/A | v4.14 | 1 | N/A | N/A | 4.15,4.16,4.17 |
| 2.7 | 2.2.0 | 3.4.17 | 1.14.0 | 1.22.11 | N/A | v4.10 | 1 | N/A | N/A | 4.15,4.16,4.17 |
| 2.6 | 2.0.5 | 3.3.10 | 1.14.0 | 1.22.11 | N/A | v4.10 | 1 | N/A | N/A | 4.14,4.15,4.16 |
| 2.5 | 2.0.5 | 3.3.10 | 1.14.0 | 1.22.11 | N/A | v4.10 | 1 | N/A | N/A | 4.14,4.15,4.16 |
