Commit a6e5237
Author: bienko
Committed: March 14
1 parent 6341da2

File tree

9 files changed: +335 -70 lines changed


docs/on-premises/2.md

+9 -5
@@ -9,9 +9,11 @@
 
 ## **i. Configuring the IBM Technology Zone reservation**
 
-1. The foundation for the on-premises environment utilizes the `OpenShift Cluster (VMware on IBM Cloud) - UPI - Public` template from the collection of <a href="https://techzone.ibm.com/collection/tech-zone-certified-base-images/journey-vmware-on-ibm-cloud-environments" target="_blank">IBM Technolgy Zone (ITZ) Certified Base Images</a>.
+The foundation for the on-premises environment utilizes the `OpenShift Cluster (VMware on IBM Cloud) - UPI - Public` template from the collection of <a href="https://techzone.ibm.com/collection/tech-zone-certified-base-images/journey-vmware-on-ibm-cloud-environments" target="_blank">IBM Technology Zone (ITZ) Certified Base Images</a>.
 
-**Click** the link below to request a reservation directly from ITZ:
+---
+
+1. **Click** the link below to request a reservation directly from ITZ:
 
 !!! warning ""
     **URL:** <a href="https://techzone.ibm.com/my/reservations/create/63a3a25a3a4689001740dbb3" target="_blank">https://techzone.ibm.com/my/reservations/create/63a3a25a3a4689001740dbb3</a>
@@ -55,9 +57,11 @@
 
 ## **ii. Accessing the cluster**
 
-5. Once the cluster has been successfully deployed, you will receive an email with the header: `Reservation Ready on IBM Technology Zone`.
+Once the cluster has been successfully deployed, you will receive an email with the header: `Reservation Ready on IBM Technology Zone`.
+
+---
 
-Confirm that the ITZ email states that **Status Update: Ready**^[A]^. Follow the link provided in the email, or access the **<a href="https://techzone.ibm.com/my/reservations" target="_blank">My Reservations</a>** tab on ITZ to access your reservation.
+5. Confirm that the ITZ email states **Status Update: Ready**^[A]^. Follow the link provided in the email, or open the **<a href="https://techzone.ibm.com/my/reservations" target="_blank">My Reservations</a>** tab on ITZ to access your reservation.
 
 ---

@@ -79,7 +83,7 @@
 
 ---
 
-8. After logging into the OCP dashboard, copy the **URL** of the page (from your web browser) and record that to a notepad as *OCP dashboard URL*. This will be referenced in subsequent modules.
+8. After logging into the OCP dashboard, copy the **URL** of the page (from your web browser) and record it in a notepad as **OCP Dashboard URL**.
 
 ---

docs/on-premises/3.md

+26 -12
@@ -5,7 +5,11 @@
 
 ## **i. Connect to the bastion host**
 
-1. Before configuring the bastion host node, you will need to retrieve its connectivity and access control details. Return to the **<a href="https://techzone.ibm.com/my/reservations" target="_blank">My Reservations</a>** page on the IBM Technology Zone (ITZ) and open the dashboard for the OpenShift Container Platform (OCP) cluster.
+Before configuring the bastion host node, you will need to retrieve its connectivity and access control details.
+
+---
+
+1. Return to the **<a href="https://techzone.ibm.com/my/reservations" target="_blank">My Reservations</a>** page on IBM Technology Zone (ITZ) and open the dashboard for the OpenShift Container Platform (OCP) cluster.
 
 ---

@@ -39,14 +43,14 @@
 
 ## **ii. OpenShift command line interface (oc)**
 
-4. Next, install the OpenShift Command Line Interface (CLI), designated `oc`, to programmatically perform work with the bastion node.
+Next, install the OpenShift Command Line Interface (CLI), designated `oc`, to work programmatically with the bastion node.
 
-- Retrieve the *OCP dashboard URL* (recorded in Step 8 of the previous module).
+---
 
-- Obtain the *OpenShift base domain* by extracting the portion of the URL that matches the position highlighted in the sample URL below. Extract the portion after `.apps.` until the end of the URL. The characters in *your* OCP dashboard URL will differ.
+4. **Retrieve** the *OCP dashboard URL* (recorded in Step 8 of the previous module). Obtain the *OpenShift base domain* by extracting the portion of the URL after `.apps.` through to the end, as highlighted in the sample below. The characters in *your* OCP dashboard URL will differ.
 
-!!! note ""
-https://console-openshift-console.apps.**678a250b79141644e78804e0.ocp.techzone.ibm.com**
+    !!! note ""
+        https://console-openshift-console.apps.**678a250b79141644e78804e0.ocp.techzone.ibm.com**
 
 - In this example, the value of the *OpenShift base domain* is `678a250b79141644e78804e0.ocp.techzone.ibm.com`
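The `.apps.` extraction described in step 4 can be sketched in the bastion's shell; the URL below is the sample value from the note, not a real cluster (a minimal sketch, assuming POSIX parameter expansion):

``` shell
# Sample dashboard URL from the note above; substitute your own value.
OCP_DASHBOARD_URL="https://console-openshift-console.apps.678a250b79141644e78804e0.ocp.techzone.ibm.com"

# Strip everything up to and including ".apps." to obtain the base domain.
OPENSHIFT_BASE_DOMAIN="${OCP_DASHBOARD_URL#*.apps.}"
echo "${OPENSHIFT_BASE_DOMAIN}"
```

The `${VAR#pattern}` expansion removes the shortest leading match, leaving `678a250b79141644e78804e0.ocp.techzone.ibm.com`.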

@@ -82,7 +86,11 @@
 
 ## **iii. Podman install**
 
-7. IBM Cloud Pak for Data (CP4D)'s installer requires containers and as such Podman must be set up on the bastion host ahead of time. Using the connected Terminal console, execute the following instruction to install Podman:
+The IBM Cloud Pak for Data (CP4D) installer requires containers, so Podman must be set up on the bastion host ahead of time.
+
+---
+
+7. Using the connected Terminal console, **execute** the following instruction to install Podman:
 
 ``` shell
 sudo yum install -y podman
@@ -94,9 +102,11 @@
 
 ## **iv. Environment variables**
 
-8. Next, you must set the environment variables needed for installation of CP4D on the cluster. The list is quite extensive and long, so rather than set these one at a time it's recommended that you first compile them into a single file on the bastion host. Afterwards you can set all the variables automatically using the single file.
+Next, you must set the environment variables needed for installation of CP4D on the cluster. The list is extensive, so rather than set the variables one at a time, it is recommended that you first compile them into a single file on the bastion host. Afterwards, you can set all of the variables at once from that single file.
+
+---
+
+8. Below is a code block containing all of the necessary CP4D environment variables. **Copy** the contents of the entire block to your clipboard and **paste** them into a notepad.
 
 </br>
 **CP4D Environment Variables**
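The compile-then-source pattern that step 8 recommends can be sketched as follows; the file holds only one placeholder entry here, whereas the real `cpd_vars.sh` holds the module's full list (only `IBM_ENTITLEMENT_KEY` is a name the later steps actually reference):

``` shell
# Sketch only: write the variables to a single file on the bastion host.
cat > cpd_vars.sh << 'EOF'
export IBM_ENTITLEMENT_KEY='<your-entitlement-key>'
EOF

# Set every variable in the current shell in one step.
source cpd_vars.sh
```

Sourcing the file exports all of its variables into the current session, which is why the troubleshooting steps later rerun `source cpd_vars.sh` after a reconnect.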
@@ -209,7 +219,11 @@
 
 ## **v. Cloud Pak for Data command line interface (cpd-cli)**
 
-11. Now that the environment variables have been set, the next step towards installing CP4D is preparing the command line interface (`cpd-cli`). First, execute the following command in the Terminal console so that subsequent actions taken are done with elevated permissions:
+Now that the environment variables have been set, the next step towards installing CP4D is preparing the command line interface (`cpd-cli`).
+
+---
+
+11. **Execute** the following command in the Terminal console so that subsequent actions are performed with elevated permissions:
 
 ``` shell
 sudo bash
@@ -262,8 +276,8 @@
 
 At this stage the bastion host node has been fully configured ahead of installing the necessary software, which will be covered in the subsequent modules.
 
-??? warning "SESSION TIMEOUTS AND LOGGING BACK IN"
-    Be aware that SSH connections made over Terminal will time out after a long period of inactivity or due to a connection error. If you need to log back into the bastion terminal, follow the procedure below. Replace the `<...>` placeholders with values specific to *your* environment.
+??? note "TROUBLESHOOTING: LOGGING IN AND SESSION TIMEOUTS"
+    Be aware that SSH connections made over Terminal will time out after a long period of inactivity or due to a connection error. If you need to log back into the bastion terminal, follow the procedure below. Replace the `<BASTION_PWD>` placeholder with the password specific to *your* environment.
 
     1. Log back into the bastion node:
docs/on-premises/4.md

+48 -12
@@ -5,16 +5,11 @@
 
 ## **i. Change the process IDs limit**
 
-1. With a newly installed cluster, a *KubeletConfig* will need to be manually created before the cluster's process IDs can be modified. This file will define the `podPidsLimit` and `maxPods` variables for the environment.
+With a newly installed cluster, a *KubeletConfig* will need to be manually created before the cluster's process IDs can be modified. This file will define the `podPidsLimit` and `maxPods` variables for the environment.
 
-!!! note "KUBELETCONFIG TEST"
-    You can test whether a *KubeletConfig* file exists on the system by executing the following command:
+---
 
-    ``` shell
-    oc get kubeletconfig
-    ```
-
-Copy the contents of the following code block and then **execute** within your Terminal console to generate a new *KubeletConfig* file:
+1. Copy the contents of the following code block and then **execute** it within your Terminal console to generate a new *KubeletConfig* file:
 
 ``` shell
 oc apply -f - << EOF
@@ -34,6 +29,13 @@
 EOF
 ```
 
+!!! note "KUBELETCONFIG TEST"
+    You can test whether a *KubeletConfig* file exists on the system by executing the following command:
+
+    ``` shell
+    oc get kubeletconfig
+    ```
+
 ---
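The heredoc applied in step 1 feeds `oc` a *KubeletConfig* custom resource. A minimal sketch of its likely shape follows; the resource name, pool-selector label, and numeric limits are assumptions for illustration, not the module's exact values:

``` yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpd-pids-limit        # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    podPidsLimit: 16384       # assumed value
    maxPods: 500              # assumed value
```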
 2. Use the CP4D command line (`cpd-cli`) to log into OCP by **executing** the following code:
@@ -51,10 +53,11 @@
 
 ## **ii. Update cluster global pull secret**
 
-3. Use `cpd-cli` to manage the creation or updating of the global image pull *secret* via the `add-icr-cred-to-global-pull-secret` command.
+Use `cpd-cli` to manage the creation or updating of the global image pull *secret* via the `add-icr-cred-to-global-pull-secret` command.
 
-</br>
-**Execute** the following command within the Terminal console:
+---
+
+3. **Execute** the following command within the Terminal console:
 
 ``` shell
 cpd-cli manage add-icr-cred-to-global-pull-secret --entitled_registry_key=${IBM_ENTITLEMENT_KEY}
@@ -86,4 +89,37 @@
 
 ## **iii. Next steps**
 
-In the following module, you will install the necessary prerequisite software required to deploy IBM Cloud Pak for Data and IBM watsonx Code Assistant.
+In the following module, you will install the prerequisite software required to deploy IBM Cloud Pak for Data and IBM watsonx Code Assistant.
+
+??? note "TROUBLESHOOTING: LOGGING IN AND SESSION TIMEOUTS"
+    Be aware that SSH connections made over Terminal will time out after a long period of inactivity or due to a connection error. If you need to log back into the bastion terminal, follow the procedure below. Replace the `<BASTION_PWD>` placeholder with the password specific to *your* environment.
+
+    1. Log back into the bastion node:
+
+        ``` shell
+        ssh [email protected] -p 40222 <BASTION_PWD>
+        ```
+
+    2. Engage the `sudo` (privileged access) session:
+
+        ``` shell
+        sudo bash
+        ```
+
+    3. Source the environment variables stored in `cpd_vars.sh`:
+
+        ``` shell
+        source cpd_vars.sh
+        ```
+
+    4. Log back into OpenShift:
+
+        ``` shell
+        ${OC_LOGIN}
+        ```
+
+    5. Log back into `cpd-cli`:
+
+        ``` shell
+        ${CPDM_OC_LOGIN}
+        ```

docs/on-premises/5.md

+51 -13
@@ -1,13 +1,13 @@
 # **Install Prerequisite Software**</br>*On-Premises Installation and Deployment*
 
-## **i. Install the Red Hat OpenShift cert-manager**
-
-Following the release of **IBM Software Hub**, previous methods for installing IBM Cloud Pak for Data (CP4D) that relied on IBM Cert Manager are no longer required. IBM Software Hub is now the recommended path and will be the method adhered to with the following hands-on instructions.
-
-!!! note ""
+!!! quote ""
     Review the <a href="https://www.ibm.com/docs/en/software-hub/5.1.x?topic=cluster-installing-cert-manager-operator" target="_blank">**latest documentation on IBM Software Hub**</a> to determine the appropriate version needed for your client opportunity and OpenShift cluster version.
+    </br>
+    The following module <a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift#cert-manager-operator-install" target="_blank">**follows the documentation**</a> for installing IBM Software Hub's `cert-manager-operator.v.13.0` for OpenShift Container Platform (OCP) v4.16.
 
-The following module <a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift#cert-manager-operator-install" target="_blank">**follows the documentation**</a> for installing IBM Software Hub's `cert-manager-operator.v.13.0` for OpenShift Container Platform (OCP) v4.16.
+## **i. Install the Red Hat OpenShift cert-manager**
+
+After the release of **IBM Software Hub**, previous methods for installing IBM Cloud Pak for Data (CP4D) that relied on IBM Cert Manager are no longer required. IBM Software Hub is now the recommended path and is the method followed in the section below.
 
 ---

@@ -39,6 +39,7 @@ The following module <a href="https://docs.redhat.com/en/documentation/openshift
 
 6. You can track the progress of the Operator installation by drilling down into **Operators** > **Installed Operators** from the left-hand side of the OCP Dashboard. Scroll down until you locate the `openshift-cert-manager-operator` entry in the table.
 
+    </br>
     Monitor the progress by observing changes to the fourth column of the table. **Wait** until the Operator shows a status of `Upgrade Available` — then click the hyperlinked text to navigate to the *Subscription Details* panel.
 
 ---
@@ -90,9 +91,6 @@ The following module <a href="https://docs.redhat.com/en/documentation/openshift
 </br>
 Although GPUs will not be available to deploy or interact with for the *On-Premises Installation and Deployment* L4 training modules, participants will still be able to practice and learn the skills needed to prepare a cluster for GPUs.
 
-!!! note ""
-    The following section is based off a selection of the complete IBM Documentation available for <a href="https://www.ibm.com/docs/en/software-hub/5.1.x?topic=software-installing-operators-services-that-require-gpus" target="_blank">**Installing operators for services that require GPUs**</a>.
-
 This section will cover all of the necessary configuration and setup required to make GPUs available to an IBM watsonx Code Assistant service — shy of actually getting to use the GPUs with the on-premises deployment. Participants will still be able to interact with GPU-powered instances in the latter IBM Cloud (SaaS) modules of the L4 curriculum.
 
 Services such as IBM watsonx Code Assistant (on-premises), which require access to GPUs, need to install several Operators on the OpenShift cluster to support the management of NVIDIA software components. Those components, in turn, are needed to provision the GPUs for access by the cluster.
@@ -102,12 +100,17 @@ IBM watsonx Code Assistant requires that the following Operators be installed:
 - *NVIDIA GPU Operator*: within the `nvidia-gpu-operator` namespace
 - *Red Hat OpenShift AI*
 
-The following section will provide the instructions necessary to replicate this procedure. **Participants are welcome to practice this with the L4 environment provided** — just be aware that no physical GPU hardware will be available or connected at the conclusion of these steps.
-
 ---
 
 ## **iii. Install the Node Feature Discovery Operator**
 
+!!! note ""
+    The following section is based on a selection of the complete IBM Documentation available for <a href="https://www.ibm.com/docs/en/software-hub/5.1.x?topic=software-installing-operators-services-that-require-gpus" target="_blank">**Installing operators for services that require GPUs**</a>.
+
+The following section will provide the instructions necessary to replicate this procedure. **Participants are welcome to practice this with the L4 environment provided** — just be aware that no physical GPU hardware will be available or connected at the conclusion of these steps.
+
+---
+
 11. First, you will use the Terminal console to programmatically create the namespace `openshift-nfd` for the Node Feature Discovery (NFD) Operator. The following instruction set will create a namespace **Custom Resource** (CR) that defines the `openshift-nfd` namespace and then saves the YAML file to `nfd-namespace.yaml`.
 
     Copy and paste the following code block into the Terminal console, then hit ++return++ to execute:
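The pattern step 11 describes — writing a minimal Namespace CR to `nfd-namespace.yaml` — can be sketched as follows (illustrative only; the module's exact block may differ):

``` shell
# Save a minimal Namespace CR for the NFD Operator to nfd-namespace.yaml.
cat > nfd-namespace.yaml << 'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nfd
EOF

# The module then creates the namespace from this file, along the lines of:
# oc create -f nfd-namespace.yaml
```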
@@ -284,6 +287,8 @@
 !!! note ""
     The following section is based on a selection of the complete NVIDIA Corporation documentation for <a href="https://docs.nvidia.com/datacenter/cloud-native/openshift/24.9.1/install-gpu-ocp.html" target="_blank">**Installing the NVIDIA GPU Operator on OpenShift**</a>.
 
+---
+
 17. **Create** the `nvidia-gpu-operator` namespace by executing the following code block within a Terminal console:
 
 ``` shell
@@ -378,7 +383,7 @@
 
 ---
 
-!!! warning "NOTE TO THE AUTHOR"
+!!! tip "NOTE TO THE AUTHOR"
     **THIS WILL BE THE SECTION YOU WILL NEED TO FILL OUT WITH A DEDICATED CLUSTER WITH GPU ACCESS, PURELY FOR DEMONSTRATION PURPOSES. REFER TO 00:21:00 TIMESTAMP IN THE RECORDING.**
 
 ---
@@ -574,4 +579,37 @@ IBM watsonx Code Assistant (on-premises) requires installation and configuration
 
 ## **vi. Next steps**
 
-At this stage, all of the necessary prerequisites have been installed and you are ready to begin installation of an IBM Software Hub instance on the OCP cluster.
+At this stage, all of the necessary prerequisites have been installed and you are ready to begin installation of an IBM Software Hub instance on the OCP cluster.
+
+??? note "TROUBLESHOOTING: LOGGING IN AND SESSION TIMEOUTS"
+    Be aware that SSH connections made over Terminal will time out after a long period of inactivity or due to a connection error. If you need to log back into the bastion terminal, follow the procedure below. Replace the `<BASTION_PWD>` placeholder with the password specific to *your* environment.
+
+    1. Log back into the bastion node:
+
+        ``` shell
+        ssh [email protected] -p 40222 <BASTION_PWD>
+        ```
+
+    2. Engage the `sudo` (privileged access) session:
+
+        ``` shell
+        sudo bash
+        ```
+
+    3. Source the environment variables stored in `cpd_vars.sh`:
+
+        ``` shell
+        source cpd_vars.sh
+        ```
+
+    4. Log back into OpenShift:
+
+        ``` shell
+        ${OC_LOGIN}
+        ```
+
+    5. Log back into `cpd-cli`:
+
+        ``` shell
+        ${CPDM_OC_LOGIN}
+        ```
