**docs/on-premises/2.md** (+9 -5)
@@ -9,9 +9,11 @@
 
 ## **i. Configuring the IBM Technology Zone reservation**
 
-1. The foundation for the on-premises environment utilizes the `OpenShift Cluster (VMware on IBM Cloud) - UPI - Public` template from the collection of <a href="https://techzone.ibm.com/collection/tech-zone-certified-base-images/journey-vmware-on-ibm-cloud-environments" target="_blank">IBM Technology Zone (ITZ) Certified Base Images</a>.
+The foundation for the on-premises environment utilizes the `OpenShift Cluster (VMware on IBM Cloud) - UPI - Public` template from the collection of <a href="https://techzone.ibm.com/collection/tech-zone-certified-base-images/journey-vmware-on-ibm-cloud-environments" target="_blank">IBM Technology Zone (ITZ) Certified Base Images</a>.
 
-**Click** the link below to request a reservation directly from ITZ:
+---
+
+1. **Click** the link below to request a reservation directly from ITZ:
-5. Once the cluster has been successfully deployed, you will receive an email with the header: `Reservation Ready on IBM Technology Zone`.
+Once the cluster has been successfully deployed, you will receive an email with the header: `Reservation Ready on IBM Technology Zone`.
+
+---
 
-Confirm that the ITZ email states **Status Update: Ready**^[A]^. Follow the link provided in the email, or open the **<a href="https://techzone.ibm.com/my/reservations" target="_blank">My Reservations</a>** tab on ITZ to access your reservation.
+5. Confirm that the ITZ email states **Status Update: Ready**^[A]^. Follow the link provided in the email, or open the **<a href="https://techzone.ibm.com/my/reservations" target="_blank">My Reservations</a>** tab on ITZ to access your reservation.
 
 ---
 
@@ -79,7 +83,7 @@
 
 ---
 
-8. After logging into the OCP dashboard, copy the **URL** of the page (from your web browser) and record it in a notepad as *OCP dashboard URL*. This will be referenced in subsequent modules.
+8. After logging into the OCP dashboard, copy the **URL** of the page (from your web browser) and record it in a notepad as **OCP Dashboard URL**.
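If you prefer working at a shell, the same record-keeping can be done with a scratch file. The filename and URL below are hypothetical placeholders, not values from a real reservation:

``` shell
# Hypothetical: append values needed by later modules to a scratch notes file.
echo "OCP Dashboard URL: https://console-openshift-console.apps.example.ocp.techzone.ibm.com" >> itz-notes.txt

# Confirm the value was recorded.
grep "OCP Dashboard URL" itz-notes.txt
```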
**docs/on-premises/3.md** (+26 -12)
@@ -5,7 +5,11 @@
 
 ## **i. Connect to the bastion host**
 
-1. Before configuring the bastion host node, you will need to retrieve its connectivity and access control details. Return to the **<a href="https://techzone.ibm.com/my/reservations" target="_blank">My Reservations</a>** page on IBM Technology Zone (ITZ) and open the dashboard for the OpenShift Container Platform (OCP) cluster.
+Before configuring the bastion host node, you will need to retrieve its connectivity and access control details.
+
+---
+
+1. Return to the **<a href="https://techzone.ibm.com/my/reservations" target="_blank">My Reservations</a>** page on IBM Technology Zone (ITZ) and open the dashboard for the OpenShift Container Platform (OCP) cluster.
 
 ---
 
@@ -39,14 +43,14 @@
 
 ## **ii. OpenShift command line interface (oc)**
 
-4. Next, install the OpenShift Command Line Interface (CLI), designated `oc`, to programmatically perform work with the bastion node.
+Next, install the OpenShift Command Line Interface (CLI), designated `oc`, to programmatically perform work with the bastion node.
 
-- Retrieve the *OCP dashboard URL* (recorded in Step 8 of the previous module).
+---
 
-- Obtain the *OpenShift base domain* by extracting the portion of the URL that matches the position highlighted in the sample URL below. Extract the portion after `.apps.` until the end of the URL. The characters in *your* OCP dashboard URL will differ.
+4. **Retrieve** the *OCP dashboard URL* (recorded in Step 8 of the previous module). Obtain the *OpenShift base domain* by extracting the portion of the URL after `.apps.` through the end of the URL, as highlighted in the sample below. The characters in *your* OCP dashboard URL will differ.
 
 - In this example, the value of the *OpenShift base domain* is `678a250b79141644e78804e0.ocp.techzone.ibm.com`
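The extraction described above can be sketched in shell. The sample URL (and its `console-openshift-console` hostname) is an illustrative placeholder; your own value will differ:

``` shell
# Hypothetical sample URL — the characters in your OCP dashboard URL will differ.
OCP_DASHBOARD_URL="https://console-openshift-console.apps.678a250b79141644e78804e0.ocp.techzone.ibm.com"

# The OpenShift base domain is everything after ".apps." in the URL.
OCP_BASE_DOMAIN="${OCP_DASHBOARD_URL#*.apps.}"
echo "${OCP_BASE_DOMAIN}"   # 678a250b79141644e78804e0.ocp.techzone.ibm.com
```

The `${var#pattern}` expansion removes the shortest matching prefix, so the same one-liner works for any URL containing a single `.apps.` segment.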
@@ -82,7 +86,11 @@
 
 ## **iii. Podman install**
 
-7. The IBM Cloud Pak for Data (CP4D) installer requires containers, and as such Podman must be set up on the bastion host ahead of time. Using the connected Terminal console, execute the following instruction to install Podman:
+The IBM Cloud Pak for Data (CP4D) installer requires containers, and as such Podman must be set up on the bastion host ahead of time.
+
+---
+
+7. Using the connected Terminal console, **execute** the following instruction to install Podman:
 
 ``` shell
 sudo yum install -y podman
@@ -94,9 +102,11 @@
 
 ## **iv. Environment variables**
 
-8. Next, you must set the environment variables needed for the installation of CP4D on the cluster. The list is quite extensive, so rather than set these one at a time it's recommended that you first compile them into a single file on the bastion host. Afterwards, you can set all the variables automatically from that single file.
+Next, you must set the environment variables needed for the installation of CP4D on the cluster. The list is quite extensive, so rather than set these one at a time it's recommended that you first compile them into a single file on the bastion host. Afterwards, you can set all the variables automatically from that single file.
+
+---
 
-Below is a code block containing all of the necessary CP4D environment variables. Copy the contents of the entire block to your clipboard and paste into a notepad.
+8. Below is a code block containing all of the necessary CP4D environment variables. **Copy** the contents of the entire block to your clipboard and **paste** into a notepad.
 
 </br>
 **CP4D Environment Variables**
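The compile-then-source pattern described above can be sketched as follows. The file name and the two variables shown are illustrative stand-ins, not the full CP4D list:

``` shell
# Hypothetical sketch: write the variables to a single file on the bastion
# host, then load them all in one step. Names and values are placeholders.
cat <<'EOF' > cp4d_vars.sh
export VERSION=5.1.0
export PROJECT_CPD_INST_OPERANDS=cpd-instance
EOF

# Set every variable in the file at once.
source ./cp4d_vars.sh
echo "${VERSION}"
```

Because the variables are exported from one file, re-running `source ./cp4d_vars.sh` after a session timeout restores the full environment in a single step.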
@@ -209,7 +219,11 @@
 
 ## **v. Cloud Pak for Data command line interface (cpd-cli)**
 
-11. Now that the environment variables have been set, the next step towards installing CP4D is preparing the command line interface (`cpd-cli`). First, execute the following command in the Terminal console so that subsequent actions are taken with elevated permissions:
+Now that the environment variables have been set, the next step towards installing CP4D is preparing the command line interface (`cpd-cli`).
+
+---
+
+11. **Execute** the following command in the Terminal console so that subsequent actions are taken with elevated permissions:
 
 ``` shell
 sudo bash
@@ -262,8 +276,8 @@
 
 At this stage the bastion host node has been fully configured ahead of installing the necessary software, which will be covered in the subsequent modules.
 
-??? warning "SESSION TIMEOUTS AND LOGGING BACK IN"
-    Be aware that SSH connections made over Terminal will time out after a long period of inactivity or due to a connection error. If you need to log back into the bastion terminal, follow the procedure below. Replace the `<...>` placeholders with values specific to *your* environment.
+??? note "TROUBLESHOOTING: LOGGING IN AND SESSION TIMEOUTS"
+    Be aware that SSH connections made over Terminal will time out after a long period of inactivity or due to a connection error. If you need to log back into the bastion terminal, follow the procedure below. Replace the `<BASTION_PWD>` placeholder with the password specific to *your* environment.
**docs/on-premises/4.md** (+48 -12)
@@ -5,16 +5,11 @@
 
 ## **i. Change the process IDs limit**
 
-1. With a newly installed cluster, a *KubeletConfig* will need to be manually created before the cluster's process IDs limit can be modified. This file will define the `podPidsLimit` and `maxPods` variables for the environment.
+With a newly installed cluster, a *KubeletConfig* will need to be manually created before the cluster's process IDs limit can be modified. This file will define the `podPidsLimit` and `maxPods` variables for the environment.
 
-!!! note "KUBELETCONFIG TEST"
-    You can test whether a *KubeletConfig* file exists on the system by executing the following command:
+---
 
-    ``` shell
-    oc get kubeletconfig
-    ```
-
-Copy the contents of the following code block and then **execute** it within your Terminal console to generate a new *KubeletConfig* file:
+1. Copy the contents of the following code block and then **execute** it within your Terminal console to generate a new *KubeletConfig* file:
 
 ```shell
 oc apply -f - <<EOF
@@ -34,6 +29,13 @@
 EOF
 ```
 
+!!! note "KUBELETCONFIG TEST"
+    You can test whether a *KubeletConfig* file exists on the system by executing the following command:
+
+    ``` shell
+    oc get kubeletconfig
+    ```
+
 ---
 
 2. Use the CP4D command line (`cpd-cli`) to log into OCP by **executing** the following code:
@@ -51,10 +53,11 @@
 
 ## **ii. Update cluster global pull secret**
 
-3. Use `cpd-cli` to manage the creation or updating of the global image pull *secret* via the `add-icr-cred-to-global-pull-secret` command.
+Use `cpd-cli` to manage the creation or updating of the global image pull *secret* via the `add-icr-cred-to-global-pull-secret` command.
 
-</br>
-**Execute** the following command within the Terminal console:
+---
+
+3. **Execute** the following command within the Terminal console:
-In the following module, you will install the necessary prerequisite software required to deploy IBM Cloud Pak for Data and IBM watsonx Code Assistant.
+In the following module, you will install the necessary prerequisite software required to deploy IBM Cloud Pak for Data and IBM watsonx Code Assistant.
+
+??? note "TROUBLESHOOTING: LOGGING IN AND SESSION TIMEOUTS"
+    Be aware that SSH connections made over Terminal will time out after a long period of inactivity or due to a connection error. If you need to log back into the bastion terminal, follow the procedure below. Replace the `<BASTION_PWD>` placeholder with the password specific to *your* environment.
**docs/on-premises/5.md** (+51 -13)
@@ -1,13 +1,13 @@
 # **Install Prerequisite Software**</br>*On-Premises Installation and Deployment*
 
-## **i. Install the Red Hat OpenShift cert-manager**
-
-Following the release of **IBM Software Hub**, previous methods for installing IBM Cloud Pak for Data (CP4D) that relied on IBM Cert Manager are no longer required. IBM Software Hub is now the recommended path and will be the method adhered to in the following hands-on instructions.
-
-!!! note ""
+!!! quote ""
     Review the <a href="https://www.ibm.com/docs/en/software-hub/5.1.x?topic=cluster-installing-cert-manager-operator" target="_blank">**latest documentation on IBM Software Hub**</a> to determine the appropriate version needed for your client opportunity and OpenShift cluster version.
+    </br>
+    The following module <a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift#cert-manager-operator-install" target="_blank">**follows the documentation**</a> for installing IBM Software Hub's `cert-manager-operator.v.13.0` for OpenShift Container Platform (OCP) v4.16.
 
-The following module <a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift#cert-manager-operator-install" target="_blank">**follows the documentation**</a> for installing IBM Software Hub's `cert-manager-operator.v.13.0` for OpenShift Container Platform (OCP) v4.16.
+## **i. Install the Red Hat OpenShift cert-manager**
+
+After the release of **IBM Software Hub**, previous methods for installing IBM Cloud Pak for Data (CP4D) that relied on IBM Cert Manager are no longer required. IBM Software Hub is now the recommended path and will be the method adhered to in the following section.
 
 ---
@@ -39,6 +39,7 @@ The following module <a href="https://docs.redhat.com/en/documentation/openshift
 
 6. You can track the progress of the Operator installation by drilling down into **Operators** > **Installed Operators** from the left-hand side of the OCP Dashboard. Scroll down until you locate the `openshift-cert-manager-operator` entry in the table.
 
+</br>
 Monitor the progress by observing changes to the fourth column of the table. **Wait** until the Operator shows a status of `Upgrade Available` — then click the hyperlinked text to navigate to the *Subscription Details* panel.
 
 ---
@@ -90,9 +91,6 @@
 </br>
 Although GPUs will not be available to deploy or interact with for the *On-Premises Installation and Deployment* L4 training modules, participants will still be able to practice and learn the skills needed to prepare a cluster for GPUs.
 
-!!! note ""
-    The following section is based on a selection of the complete IBM Documentation available for <a href="https://www.ibm.com/docs/en/software-hub/5.1.x?topic=software-installing-operators-services-that-require-gpus" target="_blank">**Installing operators for services that require GPUs**</a>.
-
 This section will cover all of the necessary configuration and setup required to make GPUs available to an IBM watsonx Code Assistant service — shy of actually getting to use the GPUs with the on-premises deployment. Participants will still be able to interact with GPU-powered instances in the latter IBM Cloud (SaaS) modules of the L4 curriculum.
 
 Services such as IBM watsonx Code Assistant (on-premises), which require access to GPUs, need to install several Operators on the OpenShift cluster to support the management of NVIDIA software components. Those components, in turn, are needed to provision the GPUs for access by the cluster.
@@ -102,12 +100,17 @@ IBM watsonx Code Assistant requires that the following Operators be installed:
 - *NVIDIA GPU Operator*: within the `nvidia-gpu-operator` namespace
 - *Red Hat OpenShift AI*
 
-The following section will provide the instructions necessary to replicate this procedure. **Participants are welcome to practice this with the L4 environment provided** — just be aware that no physical GPU hardware will be available or connected at the conclusion of these steps.
-
 ---
 
 ## **iii. Install the Node Feature Discovery Operator**
 
+!!! note ""
+    The following section is based on a selection of the complete IBM Documentation available for <a href="https://www.ibm.com/docs/en/software-hub/5.1.x?topic=software-installing-operators-services-that-require-gpus" target="_blank">**Installing operators for services that require GPUs**</a>.
+
+The following section will provide the instructions necessary to replicate this procedure. **Participants are welcome to practice this with the L4 environment provided** — just be aware that no physical GPU hardware will be available or connected at the conclusion of these steps.
+
+---
+
 11. First, you will use the Terminal console to programmatically create a namespace, `openshift-nfd`, for the Node Feature Discovery (NFD) Operator. The following instruction set will create a namespace **Custom Resource** (CR) that defines the `openshift-nfd` namespace and then saves the YAML file to `nfd-namespace.yaml`.
 
 Copy and paste the following code block into the Terminal console, then hit ++return++ to execute:
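As a hedged sketch of what such a namespace CR looks like — the module's own code block is authoritative; the fields here simply follow the `openshift-nfd` name given above:

``` shell
# Write a minimal namespace Custom Resource to nfd-namespace.yaml.
cat <<'EOF' > nfd-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nfd
EOF

# Inspect the generated file; on a live cluster it would then be applied
# with: oc apply -f nfd-namespace.yaml
cat nfd-namespace.yaml
```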
@@ -284,6 +287,8 @@
 !!! note ""
     The following section is based on a selection of the complete NVIDIA Corporation documentation for <a href="https://docs.nvidia.com/datacenter/cloud-native/openshift/24.9.1/install-gpu-ocp.html" target="_blank">**Installing the NVIDIA GPU Operator on OpenShift**</a>.
 
+---
+
 17. **Create** the `nvidia-gpu-operator` namespace by executing the following code block within a Terminal console:
 
 ``` shell
@@ -378,7 +383,7 @@
 
 ---
 
-!!! warning "NOTE TO THE AUTHOR"
+!!! tip "NOTE TO THE AUTHOR"
     **THIS WILL BE THE SECTION YOU WILL NEED TO FILL OUT WITH A DEDICATED CLUSTER WITH GPU ACCESS, PURELY FOR DEMONSTRATION PURPOSES. REFER TO 00:21:00 TIMESTAMP IN THE RECORDING.**
 
 ---
@@ -574,4 +579,37 @@ IBM watsonx Code Assistant (on-premises) requires installation and configuration
 
 ## **vi. Next steps**
 
-At this stage, all of the necessary prerequisites have been installed and you are ready to begin installation of an IBM Software Hub instance on the OCP cluster.
+At this stage, all of the necessary prerequisites have been installed and you are ready to begin installation of an IBM Software Hub instance on the OCP cluster.
+
+??? note "TROUBLESHOOTING: LOGGING IN AND SESSION TIMEOUTS"
+    Be aware that SSH connections made over Terminal will time out after a long period of inactivity or due to a connection error. If you need to log back into the bastion terminal, follow the procedure below. Replace the `<BASTION_PWD>` placeholder with the password specific to *your* environment.