
Commit

[Backend] Support onnxruntime DirectML inference. (#1304)
* Fix links in readme

* Fix links in readme

* Update PPOCRv2/v3 examples

* Update auto compression configs

* Add new quantization support for paddleclas model

* Update quantized Yolov6s model download link

* Improve PPOCR comments

* Add English doc for quantization

* Fix PPOCR rec model bug

* Add new paddleseg quantization support

* Add new paddleseg quantization support

* Add new paddleseg quantization support

* Add new paddleseg quantization support

* Add Ascend model list

* Add ascend model list

* Add ascend model list

* Add ascend model list

* Add ascend model list

* Add ascend model list

* Add ascend model list

* Support DirectML in onnxruntime

* Support onnxruntime DirectML

* Support onnxruntime DirectML

* Support onnxruntime DirectML

* Support OnnxRuntime DirectML

* Support OnnxRuntime DirectML

* Support OnnxRuntime DirectML

* Support OnnxRuntime DirectML

* Support OnnxRuntime DirectML

* Support OnnxRuntime DirectML

* Support OnnxRuntime DirectML

* Support OnnxRuntime DirectML

* Remove DirectML vision model example

* Improve OnnxRuntime DirectML

* Improve OnnxRuntime DirectML

* Fix opencv cmake on Windows

* recheck codestyle
yunyaoXYY authored Feb 17, 2023
1 parent efa4656 commit c38b7d4
Showing 22 changed files with 393 additions and 60 deletions.
1 change: 1 addition & 0 deletions CMakeLists.txt
@@ -70,6 +70,7 @@ option(ENABLE_CVCUDA "Whether to enable NVIDIA CV-CUDA to boost image preprocess
option(ENABLE_ENCRYPTION "Whether to enable ENCRYPTION." OFF)
option(ENABLE_BENCHMARK "Whether to enable Benchmark mode." OFF)
option(WITH_ASCEND "Whether to compile for Huawei Ascend deploy." OFF)
option(WITH_DIRECTML "Whether to compile for onnxruntime DirectML deploy." OFF)
option(WITH_TIMVX "Whether to compile for TIMVX deploy." OFF)
option(WITH_KUNLUNXIN "Whether to compile for KunlunXin XPU deploy." OFF)
option(WITH_TESTING "Whether to compile with unittest." OFF)
3 changes: 0 additions & 3 deletions cmake/check.cmake
@@ -12,9 +12,6 @@ if(WIN32)
if(ENABLE_POROS_BACKEND)
message(FATAL_ERROR "-DENABLE_POROS_BACKEND=ON doesn't support on non 64-bit system now.")
endif()
if(ENABLE_VISION)
message(FATAL_ERROR "-DENABLE_VISION=ON doesn't support on non 64-bit system now.")
endif()
endif()
endif()

10 changes: 8 additions & 2 deletions cmake/onnxruntime.cmake
@@ -44,14 +44,20 @@ set(CMAKE_BUILD_RPATH "${CMAKE_BUILD_RPATH}" "${ONNXRUNTIME_LIB_DIR}")
set(ONNXRUNTIME_VERSION "1.12.0")
set(ONNXRUNTIME_URL_PREFIX "https://bj.bcebos.com/paddle2onnx/libs/")

if(WIN32)
if(WIN32)
if(WITH_GPU)
set(ONNXRUNTIME_FILENAME "onnxruntime-win-x64-gpu-${ONNXRUNTIME_VERSION}.zip")
elseif(WITH_DIRECTML)
set(ONNXRUNTIME_FILENAME "onnxruntime-directml-win-x64.zip")
else()
set(ONNXRUNTIME_FILENAME "onnxruntime-win-x64-${ONNXRUNTIME_VERSION}.zip")
endif()
if(NOT CMAKE_CL_64)
set(ONNXRUNTIME_FILENAME "onnxruntime-win-x86-${ONNXRUNTIME_VERSION}.zip")
if(WITH_DIRECTML)
set(ONNXRUNTIME_FILENAME "onnxruntime-directml-win-x86.zip")
else()
set(ONNXRUNTIME_FILENAME "onnxruntime-win-x86-${ONNXRUNTIME_VERSION}.zip")
endif()
endif()
elseif(APPLE)
if(CURRENT_OSX_ARCH MATCHES "arm64")
12 changes: 11 additions & 1 deletion cmake/opencv.cmake
@@ -15,7 +15,11 @@
set(COMPRESSED_SUFFIX ".tgz")

if(WIN32)
set(OPENCV_FILENAME "opencv-win-x64-3.4.16")
if(NOT CMAKE_CL_64)
set(OPENCV_FILENAME "opencv-win-x86-3.4.16")
else()
set(OPENCV_FILENAME "opencv-win-x64-3.4.16")
endif()
set(COMPRESSED_SUFFIX ".zip")
elseif(APPLE)
if(CURRENT_OSX_ARCH MATCHES "arm64")
@@ -51,6 +55,12 @@ endif()
set(OPENCV_INSTALL_DIR ${THIRD_PARTY_PATH}/install/)
if(ANDROID)
set(OPENCV_URL_PREFIX "https://bj.bcebos.com/fastdeploy/third_libs")
elseif(WIN32)
if(NOT CMAKE_CL_64)
set(OPENCV_URL_PREFIX "https://bj.bcebos.com/fastdeploy/third_libs")
else()
set(OPENCV_URL_PREFIX "https://bj.bcebos.com/paddle2onnx/libs")
endif()
else() # TODO: use fastdeploy/third_libs instead.
set(OPENCV_URL_PREFIX "https://bj.bcebos.com/paddle2onnx/libs")
endif()
1 change: 1 addition & 0 deletions cmake/summary.cmake
@@ -43,6 +43,7 @@ function(fastdeploy_summary)
message(STATUS " WITH_GPU : ${WITH_GPU}")
message(STATUS " WITH_TESTING : ${WITH_TESTING}")
message(STATUS " WITH_ASCEND : ${WITH_ASCEND}")
message(STATUS " WITH_DIRECTML : ${WITH_DIRECTML}")
message(STATUS " WITH_TIMVX : ${WITH_TIMVX}")
message(STATUS " WITH_KUNLUNXIN : ${WITH_KUNLUNXIN}")
message(STATUS " WITH_CAPI : ${WITH_CAPI}")
59 changes: 59 additions & 0 deletions docs/cn/build_and_install/directml.md
@@ -0,0 +1,59 @@
[English](../../en/build_and_install/directml.md) | Simplified Chinese

# How to Build the DirectML Deployment Library
Direct Machine Learning (DirectML) is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows.
Currently, FastDeploy's ONNX Runtime backend has DirectML integrated, allowing users to deploy models on AMD/Intel/Nvidia/Qualcomm GPUs that support DirectX 12.

More details:
- [ONNX Runtime DirectML Execution Provider](https://onnxruntime.ai/docs/execution-providers/DirectML-ExecutionProvider.html)

# DirectML Requirements
- Build requirements: Visual Studio 2017 toolchain or newer.
- Operating system: Windows 10, version 1903, or newer. (DirectML is part of the operating system and does not need to be installed separately.)
- Hardware requirements: a GPU that supports DirectX 12, e.g., AMD GCN 1st generation or newer / Intel Haswell HD integrated graphics or newer / Nvidia Kepler architecture or newer / Qualcomm Adreno 600 or newer.

# Compiling the DirectML Deployment Library
DirectML is integrated through the ONNX Runtime backend, so the ONNX Runtime build option must be enabled to use DirectML. FastDeploy's DirectML support also covers both x64 and x86 (Win32) builds.


For the x64 build, find `x64 Native Tools Command Prompt for VS 2019` in the Windows menu, open it, and execute the following commands
```bat
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 ^
-DWITH_DIRECTML=ON ^
-DENABLE_ORT_BACKEND=ON ^
-DENABLE_VISION=ON ^
-DCMAKE_INSTALL_PREFIX="D:\Paddle\compiled_fastdeploy"
msbuild fastdeploy.sln /m /p:Configuration=Release /p:Platform=x64
msbuild INSTALL.vcxproj /m /p:Configuration=Release /p:Platform=x64
```
After the build completes, the C++ inference library is generated in the directory specified by `CMAKE_INSTALL_PREFIX`.
If you use the CMake GUI, refer to [Building with CMake GUI + Visual Studio 2019 IDE on Windows](../faq/build_on_win_with_gui.md)


For the x86 (Win32) build, find `x86 Native Tools Command Prompt for VS 2019` in the Windows menu, open it, and execute the following commands
```bat
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A Win32 ^
-DWITH_DIRECTML=ON ^
-DENABLE_ORT_BACKEND=ON ^
-DENABLE_VISION=ON ^
-DCMAKE_INSTALL_PREFIX="D:\Paddle\compiled_fastdeploy"
msbuild fastdeploy.sln /m /p:Configuration=Release /p:Platform=Win32
msbuild INSTALL.vcxproj /m /p:Configuration=Release /p:Platform=Win32
```
After the build completes, the C++ inference library is generated in the directory specified by `CMAKE_INSTALL_PREFIX`.
If you use the CMake GUI, refer to [Building with CMake GUI + Visual Studio 2019 IDE on Windows](../faq/build_on_win_with_gui.md)

# Using the DirectML Library
The DirectML build of the library is used in the same way as FastDeploy's other Windows builds; see the following links.
- [Ways to use the FastDeploy C++ library on Windows](../faq/use_sdk_on_windows_build.md)
- [Using the FastDeploy C++ SDK on Windows](../faq/use_sdk_on_windows.md)
57 changes: 57 additions & 0 deletions docs/en/build_and_install/directml.md
@@ -0,0 +1,57 @@
English | [中文](../../cn/build_and_install/directml.md)

# How to Build DirectML Deployment Environment
Direct Machine Learning (DirectML) is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows systems.
Currently, FastDeploy's ONNX Runtime backend has DirectML integrated, allowing users to deploy models on AMD/Intel/Nvidia/Qualcomm GPUs with DirectX 12 support.

More details:
- [ONNX Runtime DirectML Execution Provider](https://onnxruntime.ai/docs/execution-providers/DirectML-ExecutionProvider.html)

# DirectML requirements
- Compilation requirements: Visual Studio 2017 toolchain and above.
- Operating system: Windows 10, version 1903, and newer. (DirectML is part of the operating system and does not need to be installed separately)
- Hardware requirements: DirectX 12 supported graphics cards, e.g., AMD GCN 1st generation and above/ Intel Haswell HD integrated graphics and above/ Nvidia Kepler architecture and above/ Qualcomm Adreno 600 and above.

# How to Build and Install DirectML C++ SDK
DirectML is integrated through the ONNX Runtime backend, so the ONNX Runtime build option must be enabled to use DirectML. FastDeploy's DirectML support also covers both x64 and x86 (Win32) builds.

For the x64 build, find `x64 Native Tools Command Prompt for VS 2019` in the Windows menu, open it, and execute the following commands
```bat
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 ^
-DWITH_DIRECTML=ON ^
-DENABLE_ORT_BACKEND=ON ^
-DENABLE_VISION=ON ^
-DCMAKE_INSTALL_PREFIX="D:\Paddle\compiled_fastdeploy"
msbuild fastdeploy.sln /m /p:Configuration=Release /p:Platform=x64
msbuild INSTALL.vcxproj /m /p:Configuration=Release /p:Platform=x64
```
Once compiled, the C++ inference library is generated in the directory specified by `CMAKE_INSTALL_PREFIX`.
If you use the CMake GUI, please refer to [How to Compile with CMakeGUI + Visual Studio 2019 IDE on Windows](../faq/build_on_win_with_gui.md)


For the x86 (Win32) build, find `x86 Native Tools Command Prompt for VS 2019` in the Windows menu, open it, and execute the following commands
```bat
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A Win32 ^
-DWITH_DIRECTML=ON ^
-DENABLE_ORT_BACKEND=ON ^
-DENABLE_VISION=ON ^
-DCMAKE_INSTALL_PREFIX="D:\Paddle\compiled_fastdeploy"
msbuild fastdeploy.sln /m /p:Configuration=Release /p:Platform=Win32
msbuild INSTALL.vcxproj /m /p:Configuration=Release /p:Platform=Win32
```
Once compiled, the C++ inference library is generated in the directory specified by `CMAKE_INSTALL_PREFIX`.
If you use the CMake GUI, please refer to [How to Compile with CMakeGUI + Visual Studio 2019 IDE on Windows](../faq/build_on_win_with_gui.md)

# How to Use the Compiled DirectML SDK
The compiled DirectML library is used in the same way as FastDeploy's other Windows builds; see the following link, and the short runtime sketch after it.
- [Using the FastDeploy C++ SDK on Windows Platform](../faq/use_sdk_on_windows.md)
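
For orientation, here is a minimal sketch of the DirectML-specific runtime setup, condensed from the `examples/runtime/cpp/infer_paddle_dml.cc` example added in this commit; the model paths are placeholders and input preparation is omitted.
```cpp
// Minimal sketch (not a full demo): select the ONNX Runtime backend and enable
// DirectML on the RuntimeOption, as in the runtime example added in this commit.
#include <iostream>

#include "fastdeploy/runtime.h"

int main() {
  fastdeploy::RuntimeOption option;
  // Placeholder model files; any Paddle inference model works here.
  option.SetModelPath("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams",
                      fastdeploy::ModelFormat::PADDLE);
  option.UseOrtBackend();  // DirectML runs through the ONNX Runtime backend
  option.UseDirectML();    // switch the execution provider to DirectML

  fastdeploy::Runtime runtime;
  if (!runtime.Init(option)) {
    std::cerr << "Failed to init FastDeploy runtime with DirectML." << std::endl;
    return -1;
  }
  std::cout << "DirectML runtime initialized." << std::endl;
  return 0;
}
```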
77 changes: 77 additions & 0 deletions examples/runtime/cpp/infer_paddle_dml.cc
@@ -0,0 +1,77 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/runtime.h"

namespace fd = fastdeploy;

int main(int argc, char* argv[]) {
// create option
fd::RuntimeOption runtime_option;

// model and param files
std::string model_file = "mobilenetv2/inference.pdmodel";
std::string params_file = "mobilenetv2/inference.pdiparams";

// read model from disk.
// runtime_option.SetModelPath(model_file, params_file,
// fd::ModelFormat::PADDLE);

// read model from buffer
std::string model_buffer, params_buffer;
fd::ReadBinaryFromFile(model_file, &model_buffer);
fd::ReadBinaryFromFile(params_file, &params_buffer);
runtime_option.SetModelBuffer(model_buffer, params_buffer,
fd::ModelFormat::PADDLE);

// setup other option
runtime_option.SetCpuThreadNum(12);
// use ONNX Runtime DirectML
runtime_option.UseOrtBackend();
runtime_option.UseDirectML();

// init runtime
std::unique_ptr<fd::Runtime> runtime =
std::unique_ptr<fd::Runtime>(new fd::Runtime());
if (!runtime->Init(runtime_option)) {
std::cerr << "--- Init FastDeploy Runitme Failed! "
<< "\n--- Model: " << model_file << std::endl;
return -1;
} else {
std::cout << "--- Init FastDeploy Runitme Done! "
<< "\n--- Model: " << model_file << std::endl;
}
// init input tensor shape
fd::TensorInfo info = runtime->GetInputInfo(0);
info.shape = {1, 3, 224, 224};

std::vector<fd::FDTensor> input_tensors(1);
std::vector<fd::FDTensor> output_tensors(1);

std::vector<float> inputs_data;
inputs_data.resize(1 * 3 * 224 * 224);
for (size_t i = 0; i < inputs_data.size(); ++i) {
inputs_data[i] = std::rand() % 1000 / 1000.0f;
}
input_tensors[0].SetExternalData({1, 3, 224, 224}, fd::FDDataType::FP32,
inputs_data.data());

// get input name
input_tensors[0].name = info.name;

runtime->Infer(input_tensors, &output_tensors);

output_tensors[0].PrintInfo();
return 0;
}
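
This diff does not include a build script for the example above. A minimal CMakeLists.txt sketch for compiling it against an installed FastDeploy SDK might look like the following; the `FastDeploy.cmake` include and the `FASTDEPLOY_INCS`/`FASTDEPLOY_LIBS` variables follow the pattern used by other FastDeploy C++ examples and should be treated as assumptions here.
```cmake
# Sketch only: assumes the SDK was built with -DWITH_DIRECTML=ON -DENABLE_ORT_BACKEND=ON
# and installed to the directory passed as FASTDEPLOY_INSTALL_DIR.
PROJECT(infer_paddle_dml C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.10)

option(FASTDEPLOY_INSTALL_DIR "Path of the compiled FastDeploy SDK.")

# Pull in the SDK's exported include paths and libraries.
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_paddle_dml ${PROJECT_SOURCE_DIR}/infer_paddle_dml.cc)
target_link_libraries(infer_paddle_dml ${FASTDEPLOY_LIBS})
```
Configure it from the same Native Tools prompt used to build the SDK, pointing `-DFASTDEPLOY_INSTALL_DIR` at the SDK's install directory.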
16 changes: 9 additions & 7 deletions examples/vision/classification/paddleclas/cpp/README.md
@@ -1,7 +1,7 @@
English | [简体中文](README_CN.md)
# PaddleClas C++ Deployment Example

This directory provides examples that `infer.cc` fast finishes the deployment of PaddleClas models on CPU/GPU and GPU accelerated by TensorRT.
This directory provides examples that `infer.cc` fast finishes the deployment of PaddleClas models on CPU/GPU and GPU accelerated by TensorRT.

Before deployment, two steps require confirmation.

@@ -13,13 +13,13 @@ Taking ResNet50_vd inference on Linux as an example, the compilation test can be
```bash
mkdir build
cd build
# Download FastDeploy precompiled library. Users can choose your appropriate version in the`FastDeploy Precompiled Library` mentioned above
# Download FastDeploy precompiled library. Users can choose your appropriate version in the`FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download ResNet50_vd model file and test images
# Download ResNet50_vd model file and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
tar -xvf ResNet50_vd_infer.tgz
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
@@ -35,12 +35,14 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 3
# KunlunXin XPU inference
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 4
# Ascend inference
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 5
```

The above command works for Linux or MacOS. Refer to
The above command works for Linux or MacOS. Refer to
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md) for SDK use-pattern in Windows

## PaddleClas C++ Interface
## PaddleClas C++ Interface

### PaddleClas Class

@@ -57,8 +59,8 @@ PaddleClas model loading and initialization, where model_file and params_file ar
**Parameter**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default. (use the default configuration)
> * **model_format**(ModelFormat): Model format. Paddle format by default
9 changes: 5 additions & 4 deletions examples/vision/classification/paddleclas/cpp/infer.cc
@@ -96,7 +96,8 @@ void IpuInfer(const std::string& model_dir, const std::string& image_file) {
std::cout << res.Str() << std::endl;
}

void KunlunXinInfer(const std::string& model_dir, const std::string& image_file) {
void KunlunXinInfer(const std::string& model_dir,
const std::string& image_file) {
auto model_file = model_dir + sep + "inference.pdmodel";
auto params_file = model_dir + sep + "inference.pdiparams";
auto config_file = model_dir + sep + "inference_cls.yaml";
@@ -152,7 +153,7 @@ void AscendInfer(const std::string& model_dir, const std::string& image_file) {
auto model_file = model_dir + sep + "inference.pdmodel";
auto params_file = model_dir + sep + "inference.pdiparams";
auto config_file = model_dir + sep + "inference_cls.yaml";

auto option = fastdeploy::RuntimeOption();
option.UseAscend();

@@ -172,14 +173,14 @@ void AscendInfer(const std::string& model_dir, const std::string& image_file) {
std::cout << res.Str() << std::endl;
}


int main(int argc, char* argv[]) {
if (argc < 4) {
std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
"e.g ./infer_demo ./ResNet50_vd ./test.jpeg 0"
<< std::endl;
std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
"with gpu; 2: run with gpu and use tensorrt backend; 3: run with ipu; 4: run with kunlunxin."
"with gpu; 2: run with gpu and use tensorrt backend; 3: run "
"with ipu; 4: run with kunlunxin."
<< std::endl;
return -1;
}
examples/vision/segmentation/paddleseg/cpu-gpu/cpp/infer.cc: file mode changed 100644 → 100755 (no content changes)
6 changes: 5 additions & 1 deletion fastdeploy/core/config.h.in
@@ -41,6 +41,10 @@
#cmakedefine WITH_GPU
#endif

#ifndef WITH_DIRECTML
#cmakedefine WITH_DIRECTML
#endif

#ifndef ENABLE_TRT_BACKEND
#cmakedefine ENABLE_TRT_BACKEND
#endif
@@ -59,4 +63,4 @@

#ifndef ENABLE_BENCHMARK
#cmakedefine ENABLE_BENCHMARK
#endif
#endif
