Copied from DOC 360
SherinDaher-Runai committed Jan 2, 2025
1 parent 232b31d commit 72ac5de
Showing 2 changed files with 11 additions and 12 deletions.
7 changes: 3 additions & 4 deletions docs/platform-admin/workloads/assets/compute.md
@@ -42,7 +42,7 @@ Click one of the values in the Workload(s) column to view the list of workloads
| Column | Description |
| :---- | :---- |
| Workload | The workload that uses the compute resource |
- | Type | (Workspace/Training/Inference) |
+ | Type | Workspace/Training/Inference |
| Status | Represents the workload lifecycle. See the full list of [workload status](../overviews/managing-workloads.md#workload-status). |

### Customizing the table view
@@ -63,7 +63,7 @@ To add a new compute resource:
5. Enter a **name** for the compute resource. The name must be unique.
6. Optional: Provide a **description** of the essence of the compute resource
7. Set the resource types needed within a single node
- (The Run:ai scheduler tries to match a single node that complies with the compute resource for each of the workload’s pods)
+ (the Run:ai scheduler tries to match a single node that complies with the compute resource for each of the workload’s pods)
* **GPU**
* **GPU devices per pod**
The number of devices (physical GPUs) per pod
@@ -79,8 +79,7 @@ To add a new compute resource:
* Select the memory request format
* **% (of device) -** Fraction of a GPU device’s memory
* **MB (memory size) -** An explicit GPU memory unit
- * **GB (memory size) -** An explicit GPU memory unit
- * **Multi-instance GPU (MIG)** - MIG profile (Deprecated)
+ * **GB (memory size) -** An explicit GPU memory unit
* Set the memory **Request -** The minimum amount of GPU memory that is provisioned per device. This means that any pod of a running workload that uses this compute resource, receives this amount of GPU memory for each device(s) the pod utilizes
* Optional: Set the memory **Limit** - The maximum amount of GPU memory that is provisioned per device. This means that any pod of a running workload that uses this compute resource, receives **at most** this amount of GPU memory for each device(s) the pod utilizes.
To set a Limit, first enable the limit toggle. The limit value must be equal to or higher than the request.
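The request/limit rule above (three unit formats, limit optional, and the limit must be at least the request) can be sketched as a small validator. This is a hypothetical illustration, not the Run:ai API; the function names, the 80 GB default device size, and the unit table are assumptions for the example.

```python
# Hypothetical sketch (not Run:ai code): validating the GPU memory
# request/limit fields described above. Units follow the documented
# formats: "%" (fraction of a device), "MB", or "GB".

UNIT_BYTES = {"MB": 10**6, "GB": 10**9}

def to_bytes(value: float, unit: str, device_memory_bytes: int) -> int:
    """Normalize a request or limit to bytes so formats can be compared."""
    if unit == "%":
        if not 0 < value <= 100:
            raise ValueError("percentage must be in (0, 100]")
        return int(device_memory_bytes * value / 100)
    return int(value * UNIT_BYTES[unit])

def validate(request, limit, device_memory_bytes=80 * 10**9):
    """Enforce the rule above: the limit must be >= the request."""
    req = to_bytes(*request, device_memory_bytes)
    if limit is None:  # the limit is optional (toggle disabled)
        return req, None
    lim = to_bytes(*limit, device_memory_bytes)
    if lim < req:
        raise ValueError("limit must be equal to or higher than the request")
    return req, lim

# e.g. request 50% of an assumed 80 GB device, limit 60 GB
validate((50, "%"), (60, "GB"))  # → (40000000000, 60000000000)
```

Normalizing every format to bytes before comparing is what makes a mixed pair such as a percentage request with a GB limit checkable.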
16 changes: 8 additions & 8 deletions docs/platform-admin/workloads/assets/environments.md
@@ -22,26 +22,26 @@ The Environments table consists of the following columns:
| Column | Description |
| :---- | :---- |
| Environment | The name of the environment |
- | Description | A description of the essence of the environment |
+ | Description | A description of the environment |
| Scope | The [scope](overview.md#asset-scope) of this environment within the organizational tree. Click the name of the scope to view the organizational tree diagram |
| Image | The application or service to be run by the workload |
- | Workload Architecture | This can be either standard for running workloads on a single node or distributed for running distributed workloads on a multiple nodes |
+ | Workload Architecture | This can be either standard for running workloads on a single node or distributed for running distributed workloads on multiple nodes |
| Tool(s) | The tools and connection types the environment exposes |
| Workload(s) | The list of existing workloads that use the environment |
- | Workload types | The workload types that can use the environment |
+ | Workload types | The workload types that can use the environment (Workspace/ Training / Inference) |
| Template(s) | The list of workload templates that use this environment |
- | Created by | The user who created the environment. By default Run:ai UI comes with [preinstalled environments](#tools-associated-with-the-environment) created by Run:ai |
+ | Created by | The user who created the environment. By default, the Run:ai UI comes with [preinstalled environments created by Run:ai](#environments-created-by-runai) |
| Creation time | The timestamp of when the environment was created |
| Last updated | The timestamp of when the environment was last updated |
- | Cluster | The cluster that the environment is associated with |
+ | Cluster | The cluster with which the environment is associated |

### Tools associated with the environment

Click one of the values in the tools column to view the list of tools and their connection type.

| Column | Description |
| :---- | :---- |
- | Tool name | The name of the tool or application AI practitioner can set up within the environment. See [Integrations](./integrations/integration-overview.md) | TBD: link
+ | Tool name | The name of the tool or application the AI practitioner can set up within the environment. |
| Connection type | The method by which you can access and interact with the running workload. It's essentially the "doorway" through which you can reach and use the tools the workload provides (e.g., node port, external URL) |
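The two connection types named in the table resolve to an access address in different ways. A minimal sketch, assuming hypothetical field names (`node_ip`, `node_port`, `external_url`) rather than anything from the Run:ai API:

```python
# Hypothetical illustration (not Run:ai code): how a connection type
# typically translates into the address used to reach a tool.

def access_url(connection_type: str, *, node_ip: str = "", node_port: int = 0,
               external_url: str = "") -> str:
    if connection_type == "node port":
        # reach the tool through a port opened on the node itself
        return f"http://{node_ip}:{node_port}"
    if connection_type == "external URL":
        # reach the tool through a routable URL exposed by the cluster
        return external_url
    raise ValueError(f"unknown connection type: {connection_type}")

access_url("node port", node_ip="10.0.0.5", node_port=30080)
# → "http://10.0.0.5:30080"
```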

### Workloads associated with the environment
Expand All @@ -52,7 +52,7 @@ Click one of the values in the Workload(s) column to view the list of workloads
| :---- | :---- |
| Workload | The workload that uses the environment |
| Type | The workload type (Workspace/Training/Inference) |
- | Status | Represents the workload lifecycle. see the full list of [workload status](../../../Researcher/workloads/overviews/managing-workloads.md#workload-status) |
+ | Status | Represents the workload lifecycle. See the full list of [workload status](../../../Researcher/workloads/overviews/managing-workloads.md#workload-status) |

### Customizing the table view

@@ -64,7 +64,7 @@ Click one of the values in the Workload(s) column to view the list of workloads

## Environments created by Run:ai

- When installing Run:ai, you automatically get the environment created by Run:ai to ease up the onboarding process and support different use cases out of the box.
+ When installing Run:ai, you automatically get the environments created by Run:ai to ease the onboarding process and support different use cases out of the box.
These environments are created at the [scope](./overview.md#asset-scope) of the account.

| Environment | Image |