Bugfix: update old references of 25.02 to 25.06 (#2151)
Closes #2150 

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/nv-morpheus/Morpheus/blob/main/docs/source/developer_guide/contributing.md).
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

Authors:
  - Will Killian (https://github.com/willkill07)

Approvers:
  - David Gardner (https://github.com/dagardner-nv)
  - https://github.com/hsin-c

URL: #2151
willkill07 authored Jan 29, 2025
1 parent ef393ae commit eeb24b1
Showing 37 changed files with 75 additions and 75 deletions.
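The diffs below are mechanical version bumps from `25.02` to `25.06`. As a quick sanity check after a change like this, a repository-wide search can confirm no stale references remain — a minimal sketch, where the excluded path is only an illustrative assumption (files such as a changelog may legitimately mention older releases):

```bash
# List any remaining 25.02 references; the excluded path is an assumption,
# since release-history files may intentionally mention older versions.
git grep -n "25\.02" -- ':!CHANGELOG.md'
```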
2 changes: 1 addition & 1 deletion conda/environments/all_cuda-125_arch-aarch64.yaml
@@ -68,7 +68,7 @@ dependencies:
- libwebp=1.3.2
- libzlib >=1.3.1,<2
- mlflow>=2.10.0,<2.18
-- mrc=25.02
+- mrc=25.06
- myst-parser=0.18.1
- nbsphinx
- networkx=2.8.8
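The same one-line `mrc` pin bump appears in each of the conda environment files in this commit. An existing environment only picks it up after being re-solved against the updated file — a sketch mirroring the `conda env update` invocation shown in the `docs/source/basics/overview.rst` hunk later in this commit (the `morpheus` environment name and solver flag follow that example):

```bash
# Re-solve an existing environment so the updated mrc=25.06 pin takes effect
# (environment name and --solver flag follow the overview.rst example in this commit).
conda env update --solver=libmamba -n morpheus \
  --file conda/environments/all_cuda-125_arch-$(arch).yaml
```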
2 changes: 1 addition & 1 deletion conda/environments/all_cuda-125_arch-x86_64.yaml
@@ -69,7 +69,7 @@ dependencies:
- libwebp=1.3.2
- libzlib >=1.3.1,<2
- mlflow>=2.10.0,<2.18
-- mrc=25.02
+- mrc=25.06
- myst-parser=0.18.1
- nbsphinx
- networkx=2.8.8
2 changes: 1 addition & 1 deletion conda/environments/dev_cuda-125_arch-aarch64.yaml
@@ -57,7 +57,7 @@ dependencies:
- libwebp=1.3.2
- libzlib >=1.3.1,<2
- mlflow>=2.10.0,<2.18
-- mrc=25.02
+- mrc=25.06
- myst-parser=0.18.1
- nbsphinx
- networkx=2.8.8
2 changes: 1 addition & 1 deletion conda/environments/dev_cuda-125_arch-x86_64.yaml
@@ -58,7 +58,7 @@ dependencies:
- libwebp=1.3.2
- libzlib >=1.3.1,<2
- mlflow>=2.10.0,<2.18
-- mrc=25.02
+- mrc=25.06
- myst-parser=0.18.1
- nbsphinx
- networkx=2.8.8
2 changes: 1 addition & 1 deletion conda/environments/examples_cuda-125_arch-aarch64.yaml
@@ -31,7 +31,7 @@ dependencies:
- kfp
- libwebp=1.3.2
- mlflow>=2.10.0,<2.18
-- mrc=25.02
+- mrc=25.06
- networkx=2.8.8
- nodejs=18.*
- numexpr
2 changes: 1 addition & 1 deletion conda/environments/examples_cuda-125_arch-x86_64.yaml
@@ -31,7 +31,7 @@ dependencies:
- kfp
- libwebp=1.3.2
- mlflow>=2.10.0,<2.18
-- mrc=25.02
+- mrc=25.06
- networkx=2.8.8
- newspaper3k==0.2.8
- nodejs=18.*
2 changes: 1 addition & 1 deletion conda/environments/runtime_cuda-125_arch-aarch64.yaml
@@ -27,7 +27,7 @@ dependencies:
- grpcio-status
- libwebp=1.3.2
- mlflow>=2.10.0,<2.18
-- mrc=25.02
+- mrc=25.06
- networkx=2.8.8
- numpydoc=1.5
- pip
2 changes: 1 addition & 1 deletion conda/environments/runtime_cuda-125_arch-x86_64.yaml
@@ -27,7 +27,7 @@ dependencies:
- grpcio-status
- libwebp=1.3.2
- mlflow>=2.10.0,<2.18
-- mrc=25.02
+- mrc=25.06
- networkx=2.8.8
- numpydoc=1.5
- pip
2 changes: 1 addition & 1 deletion docs/source/basics/building_a_pipeline.md
@@ -207,7 +207,7 @@ This example shows an NLP Pipeline which uses several stages available in Morphe
#### Launching Triton
Run the following to launch Triton and load the `sid-minibert` model:
```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model sid-minibert-onnx
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model sid-minibert-onnx
```

#### Launching Kafka
2 changes: 1 addition & 1 deletion docs/source/basics/overview.rst
@@ -114,7 +114,7 @@ The ONNX to TensorRT (TRT) conversion utility requires additional packages, whic
conda env update --solver=libmamba -n morpheus --file conda/environments/model-utils_cuda-125_arch-$(arch).yaml
```

-Example usage of the ONNX to TRT conversion utility can be found in `models/README.md <https://github.com/nv-morpheus/Morpheus/blob/branch-25.02/models/README.md#generating-trt-models-from-onnx>`_.
+Example usage of the ONNX to TRT conversion utility can be found in `models/README.md <https://github.com/nv-morpheus/Morpheus/blob/branch-25.06/models/README.md#generating-trt-models-from-onnx>`_.

AutoComplete
------------
6 changes: 3 additions & 3 deletions docs/source/cloud_deployment_guide.md
@@ -103,7 +103,7 @@ The Helm chart (`morpheus-ai-engine`) that offers the auxiliary components requi
Follow the below steps to install Morpheus AI Engine:

```bash
-helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-ai-engine-25.02.tgz --username='$oauthtoken' --password=$API_KEY --untar
+helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-ai-engine-25.06.tgz --username='$oauthtoken' --password=$API_KEY --untar
```
```bash
helm install --set ngc.apiKey="$API_KEY" \
@@ -145,7 +145,7 @@ replicaset.apps/zookeeper-87f9f4dd 1 1 1 54s
Run the following command to pull the Morpheus SDK Client (referred to as Helm chart `morpheus-sdk-client`) on to your instance:

```bash
-helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-sdk-client-25.02.tgz --username='$oauthtoken' --password=$API_KEY --untar
+helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-sdk-client-25.06.tgz --username='$oauthtoken' --password=$API_KEY --untar
```

#### Morpheus SDK Client in Sleep Mode
@@ -183,7 +183,7 @@ kubectl -n $NAMESPACE exec sdk-cli-helper -- cp -RL /workspace/models /common
The Morpheus MLflow Helm chart offers MLflow server with Triton plugin to deploy, update, and remove models from the Morpheus AI Engine. The MLflow server UI can be accessed using NodePort `30500`. Follow the below steps to install the Morpheus MLflow:

```bash
-helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-mlflow-25.02.tgz --username='$oauthtoken' --password=$API_KEY --untar
+helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-mlflow-25.06.tgz --username='$oauthtoken' --password=$API_KEY --untar
```
```bash
helm install --set ngc.apiKey="$API_KEY" \
@@ -235,7 +235,7 @@ We will launch a Triton Docker container with:

```shell
docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 \
tritonserver --model-repository=/models/triton-model-repo \
--exit-on-error=false \
--log-info=true \
@@ -23,10 +23,10 @@ Every account, user, service, and machine has a digital fingerprint that represe
To construct this digital fingerprint, we will be training unsupervised behavioral models at various granularities, including a generic model for all users in the organization along with fine-grained models for each user to monitor their behavior. These models are continuously updated and retrained over time​, and alerts are triggered when deviations from normality occur for any user​.

## Running the DFP Example
-Instructions for building and running the DFP example are available in the [`examples/digital_fingerprinting/production/README.md`](https://github.com/nv-morpheus/Morpheus/blob/branch-25.02/examples/digital_fingerprinting/production/README.md) guide in the Morpheus repository.
+Instructions for building and running the DFP example are available in the [`examples/digital_fingerprinting/production/README.md`](https://github.com/nv-morpheus/Morpheus/blob/branch-25.06/examples/digital_fingerprinting/production/README.md) guide in the Morpheus repository.

## Training Sources
-The data we will want to use for the training and inference will be any sensitive system that the user interacts with, such as VPN, authentication and cloud services. The digital fingerprinting example ([`examples/digital_fingerprinting/production/README.md`](https://github.com/nv-morpheus/Morpheus/blob/branch-25.02/examples/digital_fingerprinting/production/README.md)) included in Morpheus ingests logs from [Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-sign-ins), and [Duo Authentication](https://duo.com/docs/adminapi).
+The data we will want to use for the training and inference will be any sensitive system that the user interacts with, such as VPN, authentication and cloud services. The digital fingerprinting example ([`examples/digital_fingerprinting/production/README.md`](https://github.com/nv-morpheus/Morpheus/blob/branch-25.06/examples/digital_fingerprinting/production/README.md)) included in Morpheus ingests logs from [Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-sign-ins), and [Duo Authentication](https://duo.com/docs/adminapi).

The location of these logs could be either local to the machine running Morpheus, a shared file system like NFS, or on a remote store such as [Amazon S3](https://aws.amazon.com/s3/).

2 changes: 1 addition & 1 deletion docs/source/examples.md
@@ -40,7 +40,7 @@ Morpheus supports multiple environments, each environment is intended to support

In addition to this many of the examples utilize the Morpheus Triton Models container which can be obtained by running the following command:
```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06
```

The following are the supported environments:
20 changes: 10 additions & 10 deletions docs/source/getting_started.md
Original file line number Diff line number Diff line change
Expand Up @@ -42,26 +42,26 @@ More advanced users, or those who are interested in using the latest pre-release
### Pull the Morpheus Image
1. Go to [https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/containers/morpheus/tags](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/containers/morpheus/tags)
1. Choose a version
-1. Download the selected version, for example for `25.02`:
+1. Download the selected version, for example for `25.06`:
```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus:25.02-runtime
+docker pull nvcr.io/nvidia/morpheus/morpheus:25.06-runtime
```
1. Optional: Many of the examples require NVIDIA Triton Inference Server to be running with the included models. To download the Morpheus Triton Server Models container, ensure that the version number matches that of the Morpheus container you downloaded in the previous step, then run:
```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06
```

> **Note about Morpheus versions:**
>
-> Morpheus uses Calendar Versioning ([CalVer](https://calver.org/)). For each Morpheus release there will be an image tagged in the form of `YY.MM-runtime` this tag will always refer to the latest point release for that version. In addition to this there will also be at least one point release version tagged in the form of `vYY.MM.00-runtime` this will be the initial point release for that version (ex. `v25.02.00-runtime`). In the event of a major bug, we may release additional point releases (ex. `v25.02.01-runtime`, `v25.02.02-runtime` etc...), and the `YY.MM-runtime` tag will be updated to reference that point release.
+> Morpheus uses Calendar Versioning ([CalVer](https://calver.org/)). For each Morpheus release there will be an image tagged in the form of `YY.MM-runtime` this tag will always refer to the latest point release for that version. In addition to this there will also be at least one point release version tagged in the form of `vYY.MM.00-runtime` this will be the initial point release for that version (ex. `v25.06.00-runtime`). In the event of a major bug, we may release additional point releases (ex. `v25.06.01-runtime`, `v25.06.02-runtime` etc...), and the `YY.MM-runtime` tag will be updated to reference that point release.
>
> Users who want to ensure they are running with the latest bug fixes should use a release image tag (`YY.MM-runtime`). Users who need to deploy a specific version into production should use a point release image tag (`vYY.MM.00-runtime`).
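A hedged illustration of the two tag forms the note describes, assuming a `v25.06.00-runtime` point release is published as stated:

```bash
# Floating release tag: always resolves to the latest 25.06 point release
docker pull nvcr.io/nvidia/morpheus/morpheus:25.06-runtime

# Pinned point-release tag (form described in the note), for reproducible production deployments
docker pull nvcr.io/nvidia/morpheus/morpheus:v25.06.00-runtime
```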

### Starting the Morpheus Container
1. Ensure that [The NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installation) is installed.
1. Start the container downloaded from the previous section:
```bash
-docker run --rm -ti --runtime=nvidia --gpus=all --net=host -v /var/run/docker.sock:/var/run/docker.sock nvcr.io/nvidia/morpheus/morpheus:25.02-runtime bash
+docker run --rm -ti --runtime=nvidia --gpus=all --net=host -v /var/run/docker.sock:/var/run/docker.sock nvcr.io/nvidia/morpheus/morpheus:25.06-runtime bash
```

Note about some of the flags above:
@@ -147,17 +147,17 @@ To run the built "release" container, use the following:
./docker/run_container_release.sh
```

-The `./docker/run_container_release.sh` script accepts the same `DOCKER_IMAGE_NAME`, and `DOCKER_IMAGE_TAG` environment variables that the `./docker/build_container_release.sh` script does. For example, to run version `v25.02.00` use the following:
+The `./docker/run_container_release.sh` script accepts the same `DOCKER_IMAGE_NAME`, and `DOCKER_IMAGE_TAG` environment variables that the `./docker/build_container_release.sh` script does. For example, to run version `v25.06.00` use the following:

```bash
-DOCKER_IMAGE_TAG="v25.02.00-runtime" ./docker/run_container_release.sh
+DOCKER_IMAGE_TAG="v25.06.00-runtime" ./docker/run_container_release.sh
```

## Acquiring the Morpheus Models Container

Many of the validation tests and example workflows require a Triton server to function. For simplicity, Morpheus provides a pre-built models container, which contains both the Triton and Morpheus models. Users implementing a release version of Morpheus can download the corresponding Triton models container from NGC with the following command:
```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06
```

Users working with an unreleased development version of Morpheus can build the Triton models container from the Morpheus repository. To build the Triton models container, run the following command from the root of the Morpheus repository:
@@ -170,7 +170,7 @@ models/docker/build_container.sh
In a new terminal, use the following command to launch a Docker container for Triton loading all of the included pre-trained models:
```bash
docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 \
tritonserver --model-repository=/models/triton-model-repo \
--exit-on-error=false \
--log-info=true \
@@ -183,7 +183,7 @@ This will launch Triton using the default network ports (8000 for HTTP, 8001 for
Note: The above command is useful for testing out Morpheus, however it does load several models into GPU memory, which at the time of this writing consumes roughly 2GB of GPU memory. Production users should consider only loading the specific models they plan on using with the `--model-control-mode=explicit` and `--load-model` flags. For example, to launch Triton only loading the `abp-nvsmi-xgb` model:
```bash
docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 \
tritonserver --model-repository=/models/triton-model-repo \
--exit-on-error=false \
--log-info=true \
4 changes: 2 additions & 2 deletions examples/abp_nvsmi_detection/README.md
@@ -89,12 +89,12 @@ This example utilizes the Triton Inference Server to perform inference.

Pull the Docker image for Triton:
```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06
```

Run the following to launch Triton and load the `abp-nvsmi-xgb` XGBoost model:
```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-nvsmi-xgb
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-nvsmi-xgb
```

This will launch Triton and only load the `abp-nvsmi-xgb` model. This model has been configured with a max batch size of 32768, and to use dynamic batching for increased performance.
4 changes: 2 additions & 2 deletions examples/abp_pcap_detection/README.md
@@ -30,13 +30,13 @@ To run this example, an instance of Triton Inference Server and a sample dataset

### Triton Inference Server
```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06
```

##### Deploy Triton Inference Server
Run the following to launch Triton and load the `abp-pcap-xgb` model:
```bash
-docker run --rm --gpus=all -p 8000:8000 -p 8001:8001 -p 8002:8002 --name tritonserver nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-pcap-xgb
+docker run --rm --gpus=all -p 8000:8000 -p 8001:8001 -p 8002:8002 --name tritonserver nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-pcap-xgb
```

##### Verify Model Deployment
2 changes: 1 addition & 1 deletion examples/developer_guide/3_simple_cpp_stage/CMakeLists.txt
@@ -25,7 +25,7 @@ mark_as_advanced(MORPHEUS_CACHE_DIR)
list(PREPEND CMAKE_PREFIX_PATH "$ENV{CONDA_PREFIX}")

project(3_simple_cpp_stage
-  VERSION 25.02.00
+  VERSION 25.06.00
LANGUAGES C CXX
)

@@ -26,7 +26,7 @@ list(PREPEND CMAKE_PREFIX_PATH "$ENV{CONDA_PREFIX}")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")

project(4_rabbitmq_cpp_stage
-  VERSION 25.02.00
+  VERSION 25.06.00
LANGUAGES C CXX
)

@@ -11,7 +11,7 @@ channels:
dependencies:
- boto3=1.35
- kfp
-- morpheus-dfp=25.02
+- morpheus-dfp=25.06
- nodejs=18.*
- papermill=2.4.0
- pip
@@ -11,7 +11,7 @@ channels:
dependencies:
- boto3=1.35
- kfp
-- morpheus-dfp=25.02
+- morpheus-dfp=25.06
- nodejs=18.*
- papermill=2.4.0
- pip
2 changes: 1 addition & 1 deletion examples/doca/vdb_realtime/README.md
@@ -49,7 +49,7 @@ To serve the embedding model, we will use Triton:
cd ${MORPHEUS_ROOT}

# Launch Triton
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model all-MiniLM-L6-v2
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model all-MiniLM-L6-v2
```

## Populate the Milvus database
6 changes: 3 additions & 3 deletions examples/llm/vdb_upload/README.md
@@ -148,12 +148,12 @@ milvus-server --data .tmp/milvusdb

- Pull the Docker image for Triton:
```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06
```

- Run the following to launch Triton and load the `all-MiniLM-L6-v2` model:
```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model all-MiniLM-L6-v2
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model all-MiniLM-L6-v2
```

This will launch Triton and only load the `all-MiniLM-L6-v2` model. Once Triton has loaded the model, the following
@@ -287,7 +287,7 @@ using `sentence-transformers/paraphrase-multilingual-mpnet-base-v2` as an exampl
- Reload the docker container, specifying that we also need to load paraphrase-multilingual-mpnet-base-v2
```bash
docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 tritonserver \
--model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model \
all-MiniLM-L6-v2 --load-model sentence-transformers/paraphrase-multilingual-mpnet-base-v2
```
4 changes: 2 additions & 2 deletions examples/log_parsing/README.md
@@ -34,14 +34,14 @@ Pull the Morpheus Triton models Docker image from NGC.
Example:

```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06
```

##### Start Triton Inference Server Container
From the Morpheus repo root directory, run the following to launch Triton and load the `log-parsing-onnx` model:

```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model log-parsing-onnx
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.06 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model log-parsing-onnx
```

##### Verify Model Deployment