[Other] Change all XPU to KunlunXin (#973)
* [FlyCV] Bump up FlyCV -> official release 1.0.0

* XPU to KunlunXin

* update

* update model link

* update doc

* update device

* update code

* useless code

Co-authored-by: DefTruth <[email protected]>
3 people authored Dec 27, 2022
1 parent 6078bd9 commit 45865c8
Showing 111 changed files with 370 additions and 369 deletions.
6 changes: 3 additions & 3 deletions CMakeLists.txt
@@ -66,7 +66,7 @@ option(ENABLE_TEXT "Whether to enable text models usage." OFF)
option(ENABLE_FLYCV "Whether to enable flycv to boost image preprocess." OFF)
option(WITH_ASCEND "Whether to compile for Huawei Ascend deploy." OFF)
option(WITH_TIMVX "Whether to compile for TIMVX deploy." OFF)
-option(WITH_XPU "Whether to compile for KunlunXin XPU deploy." OFF)
+option(WITH_KUNLUNXIN "Whether to compile for KunlunXin XPU deploy." OFF)
option(WITH_TESTING "Whether to compile with unittest." OFF)
############################# Options for Android cross compiling #########################
option(WITH_OPENCV_STATIC "Use OpenCV static lib for Android." OFF)
@@ -148,12 +148,12 @@ if (WITH_ASCEND)
include(${PROJECT_SOURCE_DIR}/cmake/ascend.cmake)
endif()

-if (WITH_XPU)
+if (WITH_KUNLUNXIN)
if(NOT ENABLE_LITE_BACKEND)
set(ENABLE_LITE_BACKEND ON)
endif()
if(NOT CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "x86_64")
-message(FATAL_ERROR "XPU is only supported on Linux x64 platform")
+message(FATAL_ERROR "KunlunXin XPU is only supported on Linux x64 platform")
endif()
if(NOT PADDLELITE_URL)
set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux-x64-xpu-20221215.tgz")
4 changes: 2 additions & 2 deletions FastDeploy.cmake.in
@@ -27,7 +27,7 @@ set(OPENCV_DIRECTORY "@OPENCV_DIRECTORY@")
set(ORT_DIRECTORY "@ORT_DIRECTORY@")
set(OPENVINO_DIRECTORY "@OPENVINO_DIRECTORY@")
set(RKNN2_TARGET_SOC "@RKNN2_TARGET_SOC@")
-set(WITH_XPU @WITH_XPU@)
+set(WITH_KUNLUNXIN @WITH_KUNLUNXIN@)

set(FASTDEPLOY_LIBS "")
set(FASTDEPLOY_INCS "")
@@ -246,7 +246,7 @@ if(ENABLE_PADDLE_FRONTEND)
list(APPEND FASTDEPLOY_LIBS ${PADDLE2ONNX_LIB})
endif()

-if(WITH_XPU)
+if(WITH_KUNLUNXIN)
list(APPEND FASTDEPLOY_LIBS -lpthread -lrt -ldl)
endif()

2 changes: 1 addition & 1 deletion cmake/summary.cmake
@@ -39,7 +39,7 @@ function(fastdeploy_summary)
message(STATUS " ENABLE_OPENVINO_BACKEND : ${ENABLE_OPENVINO_BACKEND}")
message(STATUS " WITH_ASCEND : ${WITH_ASCEND}")
message(STATUS " WITH_TIMVX : ${WITH_TIMVX}")
-message(STATUS " WITH_XPU : ${WITH_XPU}")
+message(STATUS " WITH_KUNLUNXIN : ${WITH_KUNLUNXIN}")
if(ENABLE_ORT_BACKEND)
message(STATUS " ONNXRuntime version : ${ONNXRUNTIME_VERSION}")
endif()
2 changes: 1 addition & 1 deletion docs/README.md
@@ -8,7 +8,7 @@
- [Build and Install FastDeploy Library on GPU Platform](en/build_and_install/gpu.md)
- [Build and Install FastDeploy Library on CPU Platform](en/build_and_install/cpu.md)
- [Build and Install FastDeploy Library on IPU Platform](en/build_and_install/ipu.md)
-- [Build and Install FastDeploy Library on KunlunXin XPU Platform](en/build_and_install/xpu.md)
+- [Build and Install FastDeploy Library on KunlunXin XPU Platform](en/build_and_install/kunlunxin.md)
- [Build and Install on RV1126 Platform](en/build_and_install/rv1126.md)
- [Build and Install on RK3588 and RK356X Platform](en/build_and_install/rknpu2.md)
- [Build and Install on A311D Platform](en/build_and_install/a311d.md)
2 changes: 1 addition & 1 deletion docs/README_CN.md
@@ -8,7 +8,7 @@
- [GPU部署环境编译安装](cn/build_and_install/gpu.md)
- [CPU部署环境编译安装](cn/build_and_install/cpu.md)
- [IPU部署环境编译安装](cn/build_and_install/ipu.md)
-- [昆仑芯XPU部署环境编译安装](cn/build_and_install/xpu.md)
+- [昆仑芯XPU部署环境编译安装](cn/build_and_install/kunlunxin.md)
- [瑞芯微RV1126部署环境编译安装](cn/build_and_install/rv1126.md)
- [瑞芯微RK3588部署环境编译安装](cn/build_and_install/rknpu2.md)
- [晶晨A311D部署环境编译安装](cn/build_and_install/a311d.md)
2 changes: 1 addition & 1 deletion docs/README_EN.md
@@ -8,7 +8,7 @@
- [Build and Install FastDeploy Library on GPU Platform](en/build_and_install/gpu.md)
- [Build and Install FastDeploy Library on CPU Platform](en/build_and_install/cpu.md)
- [Build and Install FastDeploy Library on IPU Platform](en/build_and_install/ipu.md)
-- [Build and Install FastDeploy Library on KunlunXin XPU Platform](en/build_and_install/xpu.md)
+- [Build and Install FastDeploy Library on KunlunXin XPU Platform](en/build_and_install/kunlunxin.md)
- [Build and Install on RV1126 Platform](en/build_and_install/rv1126.md)
- [Build and Install on RK3588 Platform](en/build_and_install/rknpu2.md)
- [Build and Install on A311D Platform](en/build_and_install/a311d.md)
4 changes: 2 additions & 2 deletions docs/cn/build_and_install/README.md
@@ -14,7 +14,7 @@
- [瑞芯微RV1126部署环境](rv1126.md)
- [瑞芯微RK3588部署环境](rknpu2.md)
- [晶晨A311D部署环境](a311d.md)
-- [昆仑芯XPU部署环境](xpu.md)
+- [昆仑芯XPU部署环境](kunlunxin.md)
- [华为昇腾部署环境](huawei_ascend.md)


@@ -27,7 +27,7 @@
| ENABLE_LITE_BACKEND | 默认OFF,是否编译集成Paddle Lite后端(编译Android库时需要设置为ON) |
| ENABLE_RKNPU2_BACKEND | 默认OFF,是否编译集成RKNPU2后端(RK3588/RK3568/RK3566上推荐打开) |
| WITH_ASCEND | 默认OFF,当在华为昇腾NPU上部署时, 需要设置为ON |
-| WITH_XPU | 默认OFF,当在昆仑芯XPU上部署时,需设置为ON |
+| WITH_KUNLUNXIN | 默认OFF,当在昆仑芯XPU上部署时,需设置为ON |
| WITH_TIMVX | 默认OFF,需要在RV1126/RV1109/A311D上部署时,需设置为ON |
| ENABLE_TRT_BACKEND | 默认OFF,是否编译集成TensorRT后端(GPU上推荐打开) |
| ENABLE_OPENVINO_BACKEND | 默认OFF,是否编译集成OpenVINO后端(CPU上推荐打开) |
docs/cn/build_and_install/{xpu.md → kunlunxin.md}
@@ -1,4 +1,4 @@
-[English](../../en/build_and_install/xpu.md) | 简体中文
+[English](../../en/build_and_install/kunlunxin.md) | 简体中文

# 昆仑芯 XPU 部署环境编译安装

@@ -10,7 +10,7 @@ FastDeploy 基于 Paddle Lite 后端支持在昆仑芯 XPU 上进行部署推理
相关编译选项说明如下:
|编译选项|默认值|说明|备注|
|:---|:---|:---|:---|
-| WITH_XPU| OFF | 需要在XPU上部署时需要设置为ON | - |
+| WITH_KUNLUNXIN| OFF | 需要在昆仑芯XPU上部署时需要设置为ON | - |
| ENABLE_ORT_BACKEND | OFF | 是否编译集成ONNX Runtime后端 | - |
| ENABLE_PADDLE_BACKEND | OFF | 是否编译集成Paddle Inference后端 | - |
| ENABLE_OPENVINO_BACKEND | OFF | 是否编译集成OpenVINO后端 | - |
@@ -41,11 +41,11 @@ cd FastDeploy
mkdir build && cd build

# CMake configuration with KunlunXin xpu toolchain
-cmake -DWITH_XPU=ON \
+cmake -DWITH_KUNLUNXIN=ON \
-DWITH_GPU=OFF \ # 不编译 GPU
-DENABLE_ORT_BACKEND=ON \ # 可选择开启 ORT 后端
-DENABLE_PADDLE_BACKEND=ON \ # 可选择开启 Paddle 后端
--DCMAKE_INSTALL_PREFIX=fastdeploy-xpu \
+-DCMAKE_INSTALL_PREFIX=fastdeploy-kunlunxin \
-DENABLE_VISION=ON \ # 是否编译集成视觉模型的部署模块,可选择开启
-DOPENCV_DIRECTORY=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
..
@@ -54,14 +54,14 @@ cmake -DWITH_XPU=ON \
make -j8
make install
```
-编译完成之后,会生成 fastdeploy-xpu 目录,表示基于 Paddle Lite 的 FastDeploy 库编译完成。
+编译完成之后,会生成 fastdeploy-kunlunxin 目录,表示基于 Paddle Lite 的 FastDeploy 库编译完成。

## Python 编译
编译命令如下:
```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/python
-export WITH_XPU=ON
+export WITH_KUNLUNXIN=ON
export WITH_GPU=OFF
export ENABLE_ORT_BACKEND=ON
export ENABLE_PADDLE_BACKEND=ON
4 changes: 2 additions & 2 deletions docs/en/build_and_install/README.md
@@ -15,7 +15,7 @@ English | [中文](../../cn/build_and_install/README.md)
- [Build and Install on RV1126 Platform](rv1126.md)
- [Build and Install on RK3588 Platform](rknpu2.md)
- [Build and Install on A311D Platform](a311d.md)
-- [Build and Install on KunlunXin XPU Platform](xpu.md)
+- [Build and Install on KunlunXin XPU Platform](kunlunxin.md)


## Build options
@@ -29,7 +29,7 @@ English | [中文](../../cn/build_and_install/README.md)
| ENABLE_VISION | Default OFF,whether to enable vision models deployment module |
| ENABLE_TEXT | Default OFF,whether to enable text models deployment module |
| WITH_GPU | Default OFF, if build on GPU, this needs to be ON |
-| WITH_XPU | Default OFF,if deploy on KunlunXin XPU,this needs to be ON |
+| WITH_KUNLUNXIN | Default OFF,if deploy on KunlunXin XPU,this needs to be ON |
| WITH_TIMVX | Default OFF,if deploy on RV1126/RV1109/A311D,this needs to be ON |
| WITH_ASCEND | Default OFF,if deploy on Huawei Ascend,this needs to be ON |
| CUDA_DIRECTORY | Default /usr/local/cuda, if build on GPU, this defines the path of CUDA(>=11.2) |
docs/en/build_and_install/{xpu.md → kunlunxin.md}
@@ -1,4 +1,4 @@
-English | [中文](../../cn/build_and_install/xpu.md)
+English | [中文](../../cn/build_and_install/kunlunxin.md)

# How to Build KunlunXin XPU Deployment Environment

@@ -10,7 +10,7 @@ The relevant compilation options are described as follows:
|Compile Options|Default Values|Description|Remarks|
|:---|:---|:---|:---|
| ENABLE_LITE_BACKEND | OFF | It needs to be set to ON when compiling the RK library| - |
-| WITH_XPU | OFF | It needs to be set to ON when compiling the KunlunXin XPU library| - |
+| WITH_KUNLUNXIN | OFF | It needs to be set to ON when compiling the KunlunXin XPU library| - |
| ENABLE_ORT_BACKEND | OFF | whether to intergrate ONNX Runtime backend | - |
| ENABLE_PADDLE_BACKEND | OFF | whether to intergrate Paddle Inference backend | - |
| ENABLE_OPENVINO_BACKEND | OFF | whether to intergrate OpenVINO backend | - |
@@ -44,11 +44,11 @@ cd FastDeploy
mkdir build && cd build

# CMake configuration with KunlunXin xpu toolchain
-cmake -DWITH_XPU=ON \
+cmake -DWITH_KUNLUNXIN=ON \
-DWITH_GPU=OFF \
-DENABLE_ORT_BACKEND=ON \
-DENABLE_PADDLE_BACKEND=ON \
--DCMAKE_INSTALL_PREFIX=fastdeploy-xpu \
+-DCMAKE_INSTALL_PREFIX=fastdeploy-kunlunxin \
-DENABLE_VISION=ON \
-DOPENCV_DIRECTORY=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
..
@@ -57,14 +57,14 @@ cmake -DWITH_XPU=ON \
make -j8
make install
```
-After the compilation is complete, the fastdeploy-xpu directory will be generated, indicating that the Padddle Lite based FastDeploy library has been compiled.
+After the compilation is complete, the fastdeploy-kunlunxin directory will be generated, indicating that the Padddle Lite based FastDeploy library has been compiled.

## Python compile
The compilation command is as follows:
```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/python
-export WITH_XPU=ON
+export WITH_KUNLUNXIN=ON
export WITH_GPU=OFF
export ENABLE_ORT_BACKEND=ON
export ENABLE_PADDLE_BACKEND=ON
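As a quick sanity check after following the renamed build docs, the sketch below constructs a runtime option targeting KunlunXin from Python. It is a minimal, illustrative snippet: it assumes the wheel produced with `export WITH_KUNLUNXIN=ON` installs as `fastdeploy` and exposes the `use_kunlunxin()` and `use_paddle_lite_backend()` calls that the later hunks in this commit rename and rely on.

```python
# Minimal post-build smoke test (a sketch, not part of this commit).
# Assumes a wheel built with WITH_KUNLUNXIN=ON is installed as `fastdeploy`.
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_kunlunxin()            # renamed from use_xpu(); no arguments selects the default device
option.use_paddle_lite_backend()  # KunlunXin deployment runs on the Paddle Lite backend
print("KunlunXin runtime option created:", option)
```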
4 changes: 2 additions & 2 deletions examples/multimodal/stable_diffusion/README.md
@@ -41,7 +41,7 @@ python infer.py --model_dir stable-diffusion-v1-4/ --scheduler "pndm" --backend
python infer.py --model_dir stable-diffusion-v1-5/ --scheduler "euler_ancestral" --backend paddle
# 在昆仑芯XPU上推理
-python infer.py --model_dir stable-diffusion-v1-5/ --scheduler "euler_ancestral" --backend paddle-xpu
+python infer.py --model_dir stable-diffusion-v1-5/ --scheduler "euler_ancestral" --backend paddle-kunlunxin
```

#### 参数说明
@@ -52,7 +52,7 @@ python infer.py --model_dir stable-diffusion-v1-5/ --scheduler "euler_ancestral"
|----------|--------------|
| --model_dir | 导出后模型的目录。 |
| --model_format | 模型格式。默认为`'paddle'`,可选列表:`['paddle', 'onnx']`|
-| --backend | 推理引擎后端。默认为`paddle`,可选列表:`['onnx_runtime', 'paddle', 'paddle-xpu']`,当模型格式为`onnx`时,可选列表为`['onnx_runtime']`|
+| --backend | 推理引擎后端。默认为`paddle`,可选列表:`['onnx_runtime', 'paddle', 'paddle-kunlunxin']`,当模型格式为`onnx`时,可选列表为`['onnx_runtime']`|
| --scheduler | StableDiffusion 模型的scheduler。默认为`'pndm'`。可选列表:`['pndm', 'euler_ancestral']`,StableDiffusio模型对应的scheduler可参考[ppdiffuser模型列表](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/textual_inversion)|
| --unet_model_prefix | UNet模型前缀。默认为`unet`|
| --vae_model_prefix | VAE模型前缀。默认为`vae_decoder`|
14 changes: 7 additions & 7 deletions examples/multimodal/stable_diffusion/infer.py
@@ -69,7 +69,7 @@ def parse_arguments():
type=str,
default='paddle',
# Note(zhoushunjie): Will support 'tensorrt', 'paddle-tensorrt' soon.
-choices=['onnx_runtime', 'paddle', 'paddle-xpu'],
+choices=['onnx_runtime', 'paddle', 'paddle-kunlunxin'],
help="The inference runtime backend of unet model and text encoder model."
)
parser.add_argument(
@@ -175,9 +175,9 @@ def create_trt_runtime(model_dir,
return fd.Runtime(option)


-def create_xpu_runtime(model_dir, model_prefix, device_id=0):
+def create_kunlunxin_runtime(model_dir, model_prefix, device_id=0):
option = fd.RuntimeOption()
-option.use_xpu(
+option.use_kunlunxin(
device_id,
l3_workspace_size=(64 * 1024 * 1024 - 4 * 1024),
locked=False,
@@ -306,18 +306,18 @@ def get_scheduler(args):
dynamic_shape=unet_dynamic_shape,
device_id=args.device_id)
print(f"Spend {time.time() - start : .2f} s to load unet model.")
-elif args.backend == "paddle-xpu":
+elif args.backend == "paddle-kunlunxin":
print("=== build text_encoder_runtime")
-text_encoder_runtime = create_xpu_runtime(
+text_encoder_runtime = create_kunlunxin_runtime(
args.model_dir,
args.text_encoder_model_prefix,
device_id=args.device_id)
print("=== build vae_decoder_runtime")
-vae_decoder_runtime = create_xpu_runtime(
+vae_decoder_runtime = create_kunlunxin_runtime(
args.model_dir, args.vae_model_prefix, device_id=args.device_id)
print("=== build unet_runtime")
start = time.time()
-unet_runtime = create_xpu_runtime(
+unet_runtime = create_kunlunxin_runtime(
args.model_dir, args.unet_model_prefix, device_id=args.device_id)
print(f"Spend {time.time() - start : .2f} s to load unet model.")
pipe = StableDiffusionFastDeployPipeline(
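Taken out of the diff context, the renamed runtime factory looks roughly like the sketch below. The `use_kunlunxin()` arguments are copied from the hunk above; the model filename layout and the Paddle Lite backend call are assumptions made to keep the snippet self-contained, not details confirmed by this commit.

```python
# Illustrative sketch of create_kunlunxin_runtime() after the rename (assumptions noted inline).
import os
import fastdeploy as fd


def create_kunlunxin_runtime(model_dir, model_prefix, device_id=0):
    option = fd.RuntimeOption()
    option.use_kunlunxin(                 # renamed from use_xpu() in this commit
        device_id,
        l3_workspace_size=(64 * 1024 * 1024 - 4 * 1024),
        locked=False)
    option.use_paddle_lite_backend()      # assumption: Lite backend, as in the other renamed examples
    # Assumption: each sub-model lives under <model_dir>/<model_prefix>/inference.pdmodel|pdiparams.
    option.set_model_path(
        os.path.join(model_dir, model_prefix, "inference.pdmodel"),
        os.path.join(model_dir, model_prefix, "inference.pdiparams"))
    return fd.Runtime(option)


# Hypothetical usage, mirroring how infer.py builds its sub-model runtimes:
# unet_runtime = create_kunlunxin_runtime(args.model_dir, args.unet_model_prefix, device_id=args.device_id)
```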
4 changes: 2 additions & 2 deletions examples/text/ernie-3.0/cpp/README.md
@@ -35,8 +35,8 @@ tar xvfz ernie-3.0-medium-zh-afqmc.tgz
# GPU Inference
./seq_cls_infer_demo --device gpu --model_dir ernie-3.0-medium-zh-afqmc

-# XPU 推理
-./seq_cls_infer_demo --device xpu --model_dir ernie-3.0-medium-zh-afqmc
+# KunlunXin XPU 推理
+./seq_cls_infer_demo --device kunlunxin --model_dir ernie-3.0-medium-zh-afqmc
```
The result returned after running is as follows:
```bash
6 changes: 3 additions & 3 deletions examples/text/ernie-3.0/cpp/seq_cls_infer.cc
@@ -32,7 +32,7 @@ const char sep = '/';
DEFINE_string(model_dir, "", "Directory of the inference model.");
DEFINE_string(vocab_path, "", "Path of the vocab file.");
DEFINE_string(device, "cpu",
"Type of inference device, support 'cpu', 'xpu' or 'gpu'.");
"Type of inference device, support 'cpu', 'kunlunxin' or 'gpu'.");
DEFINE_string(backend, "onnx_runtime",
"The inference runtime backend, support: ['onnx_runtime', "
"'paddle', 'openvino', 'tensorrt', 'paddle_tensorrt']");
@@ -61,8 +61,8 @@ bool CreateRuntimeOption(fastdeploy::RuntimeOption* option) {
<< ", param_path = " << param_path << std::endl;
option->SetModelPath(model_path, param_path);

-if (FLAGS_device == "xpu") {
-option->UseXpu();
+if (FLAGS_device == "kunlunxin") {
+option->UseKunlunXin();
return true;
} else if (FLAGS_device == "gpu") {
option->UseGpu();
4 changes: 2 additions & 2 deletions examples/text/ernie-3.0/python/README.md
@@ -40,8 +40,8 @@ python seq_cls_infer.py --device cpu --model_dir ernie-3.0-medium-zh-afqmc
# GPU Inference
python seq_cls_infer.py --device gpu --model_dir ernie-3.0-medium-zh-afqmc

-# XPU Inference
-python seq_cls_infer.py --device xpu --model_dir ernie-3.0-medium-zh-afqmc
+# KunlunXin XPU Inference
+python seq_cls_infer.py --device kunlunxin --model_dir ernie-3.0-medium-zh-afqmc

```
The result returned after running is as follows:
8 changes: 4 additions & 4 deletions examples/text/ernie-3.0/python/seq_cls_infer.py
@@ -35,8 +35,8 @@ def parse_arguments():
"--device",
type=str,
default='cpu',
-choices=['gpu', 'cpu', 'xpu'],
-help="Type of inference device, support 'cpu', 'xpu' or 'gpu'.")
+choices=['gpu', 'cpu', 'kunlunxin'],
+help="Type of inference device, support 'cpu', 'kunlunxin' or 'gpu'.")
parser.add_argument(
"--backend",
type=str,
@@ -94,8 +94,8 @@ def create_fd_runtime(self, args):
model_path = os.path.join(args.model_dir, "infer.pdmodel")
params_path = os.path.join(args.model_dir, "infer.pdiparams")
option.set_model_path(model_path, params_path)
-if args.device == 'xpu':
-option.use_xpu()
+if args.device == 'kunlunxin':
+option.use_kunlunxin()
option.use_paddle_lite_backend()
return fd.Runtime(option)
if args.device == 'cpu':
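Condensed from the hunks above, the KunlunXin path of the ERNIE demo's runtime construction now reads as in the sketch below. Only the branch touched by this commit is kept, and the CPU fallback is a simplifying assumption rather than a copy of the full script.

```python
# Trimmed sketch of the renamed device branch in create_fd_runtime() (not the full script).
import os
import fastdeploy as fd


def create_fd_runtime(model_dir, device="kunlunxin"):
    option = fd.RuntimeOption()
    option.set_model_path(
        os.path.join(model_dir, "infer.pdmodel"),
        os.path.join(model_dir, "infer.pdiparams"))
    if device == "kunlunxin":
        option.use_kunlunxin()            # renamed from use_xpu() in this commit
        option.use_paddle_lite_backend()  # KunlunXin inference uses the Paddle Lite backend
        return fd.Runtime(option)
    option.use_cpu()                      # assumption: simple CPU fallback for the sketch
    return fd.Runtime(option)
```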
8 changes: 4 additions & 4 deletions examples/vision/classification/paddleclas/cpp/infer.cc
@@ -96,13 +96,13 @@ void IpuInfer(const std::string& model_dir, const std::string& image_file) {
std::cout << res.Str() << std::endl;
}

-void XpuInfer(const std::string& model_dir, const std::string& image_file) {
+void KunlunXinInfer(const std::string& model_dir, const std::string& image_file) {
auto model_file = model_dir + sep + "inference.pdmodel";
auto params_file = model_dir + sep + "inference.pdiparams";
auto config_file = model_dir + sep + "inference_cls.yaml";

auto option = fastdeploy::RuntimeOption();
-option.UseXpu();
+option.UseKunlunXin();
auto model = fastdeploy::vision::classification::PaddleClasModel(
model_file, params_file, config_file, option);
if (!model.Initialized()) {
@@ -179,7 +179,7 @@ int main(int argc, char* argv[]) {
"e.g ./infer_demo ./ResNet50_vd ./test.jpeg 0"
<< std::endl;
std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
"with gpu; 2: run with gpu and use tensorrt backend; 3: run with ipu; 4: run with xpu."
"with gpu; 2: run with gpu and use tensorrt backend; 3: run with ipu; 4: run with kunlunxin."
<< std::endl;
return -1;
}
@@ -193,7 +193,7 @@ int main(int argc, char* argv[]) {
} else if (std::atoi(argv[3]) == 3) {
IpuInfer(argv[1], argv[2]);
} else if (std::atoi(argv[3]) == 4) {
-XpuInfer(argv[1], argv[2]);
+KunlunXinInfer(argv[1], argv[2]);
} else if (std::atoi(argv[3]) == 5) {
AscendInfer(argv[1], argv[2]);
}
4 changes: 2 additions & 2 deletions examples/vision/classification/paddleclas/python/README.md
@@ -25,8 +25,8 @@ python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg -
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device gpu --use_trt True --topk 1
# IPU推理(注意:IPU推理首次运行会有序列化模型的操作,有一定耗时,需要耐心等待)
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ipu --topk 1
-# XPU推理
-python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device xpu --topk 1
+# 昆仑芯XPU推理
+python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device kunlunxin --topk 1
# 华为昇腾NPU推理
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ascend --topk 1
```
6 changes: 3 additions & 3 deletions examples/vision/classification/paddleclas/python/infer.py
@@ -17,7 +17,7 @@ def parse_arguments():
"--device",
type=str,
default='cpu',
help="Type of inference device, support 'cpu' or 'gpu' or 'ipu' or 'xpu' or 'ascend' ."
help="Type of inference device, support 'cpu' or 'gpu' or 'ipu' or 'kunlunxin' or 'ascend' ."
)
parser.add_argument(
"--use_trt",
@@ -36,8 +36,8 @@ def build_option(args):
if args.device.lower() == "ipu":
option.use_ipu()

-if args.device.lower() == "xpu":
-option.use_xpu()
+if args.device.lower() == "kunlunxin":
+option.use_kunlunxin()

if args.device.lower() == "ascend":
option.use_ascend()
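The PaddleClas demo follows the same pattern; a condensed sketch of its device dispatch after the rename is below. The GPU/TensorRT plumbing of the full build_option() is omitted, and passing the option to PaddleClasModel is shown only as a hypothetical usage note.

```python
# Condensed sketch of the device dispatch in the PaddleClas demo after the rename.
import fastdeploy as fd


def build_option(device: str) -> fd.RuntimeOption:
    option = fd.RuntimeOption()
    device = device.lower()
    if device == "gpu":
        option.use_gpu()
    elif device == "ipu":
        option.use_ipu()
    elif device == "kunlunxin":
        option.use_kunlunxin()   # formerly option.use_xpu()
    elif device == "ascend":
        option.use_ascend()
    return option


# Hypothetical usage:
# option = build_option("kunlunxin")
# model = fd.vision.classification.PaddleClasModel(
#     "inference.pdmodel", "inference.pdiparams", "inference_cls.yaml",
#     runtime_option=option)
```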