From eb3fbb9664d7ab4d3a976cce6ad748e2d52c3705 Mon Sep 17 00:00:00 2001
From: Stewart Blacklock
Date: Wed, 25 Jan 2023 12:09:00 -0800
Subject: [PATCH 1/4] Archiving Notice

---
 README.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/README.md b/README.md
index 2a4a9d02..9a11e5e5 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,8 @@
+# DISCONTINUATION OF PROJECT #
+This project will no longer be maintained by Intel.
+This project has been identified as having known security escapes.
+Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.
+Intel no longer accepts patches to this project.

From 5ff93649cd4fc45230a22bfa6a91b6e88b031b34 Mon Sep 17 00:00:00 2001
From: Robert Dower
Date: Wed, 25 Jan 2023 13:13:11 -0800
Subject: [PATCH 2/4] Update README.md

---
 README.md | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/README.md b/README.md
index 9a11e5e5..2a4a9d02 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,3 @@
-# DISCONTINUATION OF PROJECT #
-This project will no longer be maintained by Intel.
-This project has been identified as having known security escapes.
-Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.
-Intel no longer accepts patches to this project.

From abfe7551a21d6908c63515c0a9c80ee3b2d9afed Mon Sep 17 00:00:00 2001
From: Vishnudas Thaniel S
Date: Sun, 9 Jul 2023 01:35:13 +0530
Subject: [PATCH 3/4] Update README.md

Added EOL Notice
---
 README.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/README.md b/README.md
index 2a4a9d02..f96df178 100644
--- a/README.md
+++ b/README.md
@@ -211,6 +211,13 @@ Note: ONNX Runtime is not required to run the MoE layer. It is integrated in sta
+**OpenVINO™ integration with Torch-ORT will no longer be supported as of OpenVINO™ 2023.0 release.**
+
+If you are looking to deploy your PyTorch models on Intel based devices, you have a few options.
+If you prefer the native PyTorch framework APIs, consider using the Intel Extension for PyTorch (IPEX). Another option is to utilize [OpenVINO Model Conversion API](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html),which enables the automatic importation and conversion of standard PyTorch models during runtime. It is not necessary to convert your PyTorch models offline now.
+
+
+
 ONNX Runtime for PyTorch supports PyTorch model inference using ONNX Runtime and Intel® OpenVINO™. It is available via the torch-ort-infer python package. This package enables OpenVINO™ Execution Provider for ONNX Runtime by default for accelerating inference on various Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units - referred to as VPU.

From 02d6dd8b5c9407d5e9f0aacb76dd837e22e1523b Mon Sep 17 00:00:00 2001
From: Vishnudas Thaniel S
Date: Sun, 9 Jul 2023 01:37:47 +0530
Subject: [PATCH 4/4] Update README.md

updated EOL notice text
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index f96df178..a1005fa4 100644
--- a/README.md
+++ b/README.md
@@ -211,12 +211,12 @@ Note: ONNX Runtime is not required to run the MoE layer. It is integrated in sta
-**OpenVINO™ integration with Torch-ORT will no longer be supported as of OpenVINO™ 2023.0 release.**
+**EOL NOTICE : OpenVINO™ integration with Torch-ORT will no longer be supported as of OpenVINO™ 2023.0 release.**
 
 If you are looking to deploy your PyTorch models on Intel based devices, you have a few options.
 If you prefer the native PyTorch framework APIs, consider using the Intel Extension for PyTorch (IPEX). Another option is to utilize [OpenVINO Model Conversion API](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html),which enables the automatic importation and conversion of standard PyTorch models during runtime. It is not necessary to convert your PyTorch models offline now.
 
-
+**END OF EOL NOTICE**
 
 ONNX Runtime for PyTorch supports PyTorch model inference using ONNX Runtime and Intel® OpenVINO™.