Branch to update eol notice #176

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open · wants to merge 5 commits into main
README.md: 7 additions & 0 deletions

@@ -211,6 +211,13 @@ Note: ONNX Runtime is not required to run the MoE layer. It is integrated in sta

</div>

**EOL NOTICE: OpenVINO™ integration with Torch-ORT will no longer be supported as of the OpenVINO™ 2023.0 release.**

If you are looking to deploy your PyTorch models on Intel-based devices, you have a few options.
If you prefer the native PyTorch framework APIs, consider using the Intel Extension for PyTorch (IPEX). Another option is to use the [OpenVINO Model Conversion API](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html), which automatically imports and converts standard PyTorch models at runtime, so it is no longer necessary to convert your PyTorch models offline.

**END OF EOL NOTICE**
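For context, the runtime-conversion flow mentioned in the notice might look roughly like the sketch below. It assumes OpenVINO 2023.0's `openvino.tools.mo.convert_model` entry point and uses a torchvision ResNet-50 as a stand-in model; both are illustrative choices, not part of this repository.

```python
import torch
import torchvision
from openvino.tools.mo import convert_model
from openvino.runtime import Core

# Load a standard PyTorch model (ResNet-50 here, purely for illustration).
model = torchvision.models.resnet50(weights="DEFAULT").eval()

# convert_model accepts the torch.nn.Module directly; example_input lets
# the converter trace input shapes and dtypes. No offline export step.
ov_model = convert_model(model, example_input=torch.randn(1, 3, 224, 224))

# Compile for a target device and run inference through the OpenVINO runtime.
core = Core()
compiled = core.compile_model(ov_model, "CPU")
result = compiled([torch.randn(1, 3, 224, 224).numpy()])
```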

ONNX Runtime for PyTorch supports PyTorch model inference using ONNX Runtime and Intel® OpenVINO™.

It is available via the torch-ort-infer Python package. This package enables the OpenVINO™ Execution Provider for ONNX Runtime by default, accelerating inference on various Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units (VPUs).
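As a rough usage sketch: the `ORTInferenceModule` and `OpenVINOProviderOptions` names below follow the torch-ort-infer documentation, while the model and input are illustrative.

```python
import torch
import torchvision
from torch_ort import ORTInferenceModule, OpenVINOProviderOptions

model = torchvision.models.resnet50(weights="DEFAULT").eval()

# Wrap the model; inference is dispatched to ONNX Runtime with the
# OpenVINO Execution Provider (CPU, FP32 by default).
model = ORTInferenceModule(model)

# Optionally target a different device/precision, e.g. an integrated GPU:
# provider_options = OpenVINOProviderOptions(backend="GPU", precision="FP16")
# model = ORTInferenceModule(model, provider_options=provider_options)

with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))
```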