[PaddlePaddle Hackathon4 No.184] Add PaddleDetection Models Deployment Rust Examples (#1717)

* [PaddlePaddle Hackathon4 No.186] Add PaddleDetection Models Deployment Go Examples

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>

* Fix YOLOv8 Deployment Go Example

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>

* [Hackathon4 No.184] Add PaddleDetection Models Deployment Rust Examples

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>

* Add main and cargo files in examples

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>

---------

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Author: wanziyu
Date: 2023-04-03 11:19:28 +08:00
Committed by: GitHub
Parent: 8772e3d86b
Commit: 95c977c638
18 changed files with 911 additions and 0 deletions
@@ -0,0 +1,11 @@
[package]
name = "infer"
version = "0.1.0"
edition = "2021"
[dependencies]
libc = "0.2"
clap = "2.32.0"
[build-dependencies]
bindgen = "0.53.1"
@@ -0,0 +1,54 @@
English | [简体中文](README_CN.md)
# PaddleDetection Rust Deployment Example
This directory provides `main.rs` and `build.rs`, which use `bindgen` to call the FastDeploy C API and quickly deploy the PaddleDetection model PPYOLOE on CPU/GPU.
Before deployment, confirm the following three steps:
- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 3. Install [Rust](https://www.rust-lang.org/tools/install) via Rustup
Taking inference on Linux as an example, the compilation test can be completed by running the commands below in this directory. A FastDeploy version newer than 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Use Rust and bindgen to deploy the PPYOLOE model
Download the FastDeploy precompiled library. You can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Download the PPYOLOE model files and the test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
```
In `build.rs`, set `cargo:rustc-link-search` to the FastDeploy dynamic library path (the library is located in the `/lib` directory of the precompiled package), set `cargo:rustc-link-lib` to the FastDeploy dynamic library `fastdeploy`, and set `headers_dir` to the FastDeploy C API include directory.
```rust
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
```
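When `cargo build` runs, `bindgen` writes the generated FFI declarations to `bindings.rs` under `OUT_DIR`, and `main.rs` then pulls them in as a module; this is the exact pattern the example itself uses:
```rust
// Include the FastDeploy bindings that build.rs generated into OUT_DIR.
pub mod fd {
    include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
}
```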
Use the following command to add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Use `cargo` to compile the Rust project.
```bash
cargo build
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
cargo run -- --model ./ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device 0
# GPU inference
cargo run -- --model ./ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device 1
```
The visualized detection result is saved as the local image `vis_result_ppyoloe.jpg`.
@@ -0,0 +1,53 @@
[English](README.md) | 简体中文
# PaddleDetection Rust Deployment Example
This directory provides `main.rs` and `build.rs`, which use Rust's `bindgen` crate to call the FastDeploy C API and quickly deploy the PaddleDetection model PPYOLOE on CPU/GPU.
Before deployment, confirm the following three steps:
- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code suited to your development environment; refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 3. Install [Rust](https://www.rust-lang.org/tools/install) via Rustup for your development environment
Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. A FastDeploy version newer than 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Deploy the PPYOLOE model for inference with Rust and bindgen
In the current directory, download the FastDeploy precompiled library; you can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Download the PPYOLOE model files and the test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
```
In `build.rs`, set the `cargo:rustc-link-search` argument to the FastDeploy dynamic library path (the library lives in the `/lib` directory of the precompiled package), set the `cargo:rustc-link-lib` argument to the FastDeploy dynamic library `fastdeploy`, and set the `headers_dir` variable to the FastDeploy C API include directory.
```rust
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
```
Add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Rust project with Cargo.
```bash
cargo build
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
cargo run -- --model ./ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device 0
# GPU inference
cargo run -- --model ./ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device 1
```
The visualized detection result is saved as the local image `vis_result_ppyoloe.jpg`.
@@ -0,0 +1,26 @@
extern crate bindgen;
use std::env;
use std::path::PathBuf;
use std::fs::canonicalize;
fn main() {
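// Link against the FastDeploy shared library shipped in the extracted package's lib/ directory.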
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
println!("cargo:rerun-if-changed=wrapper.h");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
let headers_dir_canonical = canonicalize(headers_dir).unwrap();
let include_path = headers_dir_canonical.to_str().unwrap();
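// Generate Rust FFI declarations for everything reachable from wrapper.h.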
let bindings = bindgen::Builder::default()
.header("wrapper.h")
.clang_arg(format!("-I{include_path}"))
.parse_callbacks(Box::new(bindgen::CargoCallbacks))
.generate()
.expect("Unable to generate bindings");
let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
bindings
.write_to_file(out_path.join("bindings.rs"))
.expect("Couldn't write bindings!");
}
@@ -0,0 +1,182 @@
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
#![allow(clashing_extern_declarations)]
#![allow(temporary_cstring_as_ptr)]
extern crate libc;
use libc::{c_char, c_void, malloc, free};
use std::ffi::CString;
use clap::{App, Arg};
pub mod fd {
include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
}
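// C's snprintf is declared by hand; the example uses it to format the
// model/params/config paths into malloc'd C string buffers.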
#[link(name = "fastdeploy")]
extern "C" {
pub fn snprintf(s: *mut libc::c_char, n: usize, format: *const libc::c_char, _: ...) -> libc::c_int;
}
#[cfg(target_os = "windows")]
const sep: char = '\\';
#[cfg(not(target_os = "windows"))]
const sep: char = '/';
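// FD_C_Bool is a plain C integer; any non-zero value counts as true.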
fn FDBooleanToRust(b: fd::FD_C_Bool) -> bool {
let cFalse: fd::FD_C_Bool = 0;
if b != cFalse {
return true;
}
return false;
}
fn CpuInfer(model_dir: *const c_char, image_file: *const c_char) {
unsafe {
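// Allocate C string buffers and format "<model_dir><sep><file>" paths into them.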
let model_file = malloc(100) as *mut c_char;
let params_file = malloc(100) as *mut c_char;
let config_file = malloc(100) as *mut c_char;
let max_size: usize = 99;
let fmt = CString::new("%s%c%s").unwrap();
let model_path = CString::new("model.pdmodel").unwrap();
let params_path = CString::new("model.pdiparams").unwrap();
let config_path = CString::new("infer_cfg.yml").unwrap();
snprintf(model_file, max_size, fmt.as_ptr(), model_dir, sep, model_path.as_ptr());
snprintf(params_file, max_size, fmt.as_ptr(), model_dir, sep, params_path.as_ptr());
snprintf(config_file, max_size, fmt.as_ptr(), model_dir, sep, config_path.as_ptr());
let option = fd::FD_C_CreateRuntimeOptionWrapper();
fd::FD_C_RuntimeOptionWrapperUseCpu(option);
let model: *mut fd::FD_C_PPYOLOEWrapper = fd::FD_C_CreatePPYOLOEWrapper(
model_file, params_file, config_file, option, fd::FD_C_ModelFormat_PADDLE as i32);
if !FDBooleanToRust(fd::FD_C_PPYOLOEWrapperInitialized(model)) {
print!("Failed to initialize.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyPPYOLOEWrapper(model);
return;
}
let image = fd::FD_C_Imread(image_file);
let result = fd::FD_C_CreateDetectionResult();
if !FDBooleanToRust(fd::FD_C_PPYOLOEWrapperPredict(model, image, result)) {
print!("Failed to predict.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyPPYOLOEWrapper(model);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyDetectionResult(result);
return;
}
let vis_im = fd::FD_C_VisDetection(image, result, 0.5, 1, 0.5);
let vis_im_path = CString::new("vis_result_ppyoloe.jpg").unwrap();
fd::FD_C_Imwrite(vis_im_path.as_ptr(), vis_im);
print!("Visualized result saved in ./vis_result_ppyoloe.jpg\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyPPYOLOEWrapper(model);
fd::FD_C_DestroyDetectionResult(result);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyMat(vis_im);
// Free the malloc'd path buffers.
free(model_file as *mut c_void);
free(params_file as *mut c_void);
free(config_file as *mut c_void);
}
}
fn GpuInfer(model_dir: *const c_char, image_file: *const c_char) {
unsafe {
let model_file = malloc(100) as *mut c_char;
let params_file = malloc(100) as *mut c_char;
let config_file = malloc(100) as *mut c_char;
let max_size: usize = 99;
let fmt = CString::new("%s%c%s").unwrap();
let model_path = CString::new("model.pdmodel").unwrap();
let params_path = CString::new("model.pdiparams").unwrap();
let config_path = CString::new("infer_cfg.yml").unwrap();
snprintf(model_file, max_size, fmt.as_ptr(), model_dir, sep, model_path.as_ptr());
snprintf(params_file, max_size, fmt.as_ptr(), model_dir, sep, params_path.as_ptr());
snprintf(config_file, max_size, fmt.as_ptr(), model_dir, sep, config_path.as_ptr());
let option = fd::FD_C_CreateRuntimeOptionWrapper();
fd::FD_C_RuntimeOptionWrapperUseGpu(option, 0);
let model: *mut fd::FD_C_PPYOLOEWrapper = fd::FD_C_CreatePPYOLOEWrapper(
model_file, params_file, config_file, option, fd::FD_C_ModelFormat_PADDLE as i32);
if !FDBooleanToRust(fd::FD_C_PPYOLOEWrapperInitialized(model)) {
print!("Failed to initialize.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyPPYOLOEWrapper(model);
return;
}
let image = fd::FD_C_Imread(image_file);
let result = fd::FD_C_CreateDetectionResult();
if !FDBooleanToRust(fd::FD_C_PPYOLOEWrapperPredict(model, image, result)) {
print!("Failed to predict.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyPPYOLOEWrapper(model);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyDetectionResult(result);
return;
}
let vis_im = fd::FD_C_VisDetection(image, result, 0.5, 1, 0.5);
let vis_im_path = CString::new("vis_result_ppyoloe.jpg").unwrap();
fd::FD_C_Imwrite(vis_im_path.as_ptr(), vis_im);
print!("Visualized result saved in ./vis_result_ppyoloe.jpg\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyPPYOLOEWrapper(model);
fd::FD_C_DestroyDetectionResult(result);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyMat(vis_im);
// Free the malloc'd path buffers.
free(model_file as *mut c_void);
free(params_file as *mut c_void);
free(config_file as *mut c_void);
}
}
fn main(){
let matches = App::new("infer command")
.version("0.1")
.about("Infer Run Options")
.arg(Arg::with_name("model")
.long("model")
.help("paddle detection model to use")
.takes_value(true)
.required(true))
.arg(Arg::with_name("image")
.long("image")
.help("image to predict")
.takes_value(true)
.required(true))
.arg(Arg::with_name("device")
.long("device")
.help("The data type of run_option is int, 0: run with cpu; 1: run with gpu")
.takes_value(true)
.required(true))
.get_matches();
let model_dir = matches.value_of("model").unwrap();
let image_file = matches.value_of("image").unwrap();
let device_type = matches.value_of("device").unwrap();
if model_dir != "" && image_file != "" {
if device_type == "0" {
CpuInfer(CString::new(model_dir).unwrap().as_ptr(), CString::new(image_file).unwrap().as_ptr());
} else if device_type == "1" {
GpuInfer(CString::new(model_dir).unwrap().as_ptr(), CString::new(image_file).unwrap().as_ptr());
}
} else {
print!("Usage: cargo run -- --model path/to/model_dir --image path/to/image --device run_option\n");
print!("e.g. cargo run -- --model ./ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device 0\n");
}
}
@@ -0,0 +1 @@
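// Single entry header handed to bindgen; it exposes the FastDeploy C vision API.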
#include <fastdeploy_capi/vision.h>
@@ -0,0 +1,11 @@
[package]
name = "infer"
version = "0.1.0"
edition = "2021"
[dependencies]
libc = "0.2"
clap = "2.32.0"
[build-dependencies]
bindgen = "0.53.1"
@@ -0,0 +1,53 @@
English | [简体中文](README_CN.md)
# PaddleDetection Rust Deployment Example
This directory provides `main.rs` and `build.rs`, which use `bindgen` to call the FastDeploy C API and quickly deploy the PaddleDetection model YOLOv5 on CPU/GPU.
Before deployment, confirm the following three steps:
- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 3. Install [Rust](https://www.rust-lang.org/tools/install) via Rustup
Taking inference on Linux as an example, the compilation test can be completed by running the commands below in this directory. A FastDeploy version newer than 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Use Rust and bindgen to deploy the YOLOv5 model
Download the FastDeploy precompiled library. You can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Download the YOLOv5 ONNX model file and the test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
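For reference, the downloaded ONNX file is later handed to the FastDeploy C API through the generated bindings; a condensed excerpt from this example's `main.rs`:
```rust
// Create a runtime option and build the YOLOv5 model from the ONNX file.
let option = fd::FD_C_CreateRuntimeOptionWrapper();
fd::FD_C_RuntimeOptionWrapperUseCpu(option);
let model = fd::FD_C_CreateYOLOv5Wrapper(
    model_file, CString::new("").unwrap().as_ptr(), option,
    fd::FD_C_ModelFormat_ONNX as i32);
```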
In `build.rs`, set `cargo:rustc-link-search` to the FastDeploy dynamic library path (the library is located in the `/lib` directory of the precompiled package), set `cargo:rustc-link-lib` to the FastDeploy dynamic library `fastdeploy`, and set `headers_dir` to the FastDeploy C API include directory.
```rust
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
```
Use the following command to add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Use `cargo` to compile the Rust project.
```bash
cargo build
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
cargo run -- --model yolov5s.onnx --image 000000014439.jpg --device 0
# GPU inference
cargo run -- --model yolov5s.onnx --image 000000014439.jpg --device 1
```
The visualized detection result is saved as the local image `vis_result_yolov5.jpg`.
@@ -0,0 +1,52 @@
[English](README.md) | 简体中文
# PaddleDetection Rust Deployment Example
This directory provides `main.rs` and `build.rs`, which use Rust's `bindgen` crate to call the FastDeploy C API and quickly deploy the PaddleDetection model YOLOv5 on CPU/GPU.
Before deployment, confirm the following three steps:
- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code suited to your development environment; refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 3. Install [Rust](https://www.rust-lang.org/tools/install) via Rustup for your development environment
Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. A FastDeploy version newer than 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Deploy the YOLOv5 model for inference with Rust and bindgen
In the current directory, download the FastDeploy precompiled library; you can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Download the officially converted YOLOv5 ONNX model file and the test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
In `build.rs`, set the `cargo:rustc-link-search` argument to the FastDeploy dynamic library path (the library lives in the `/lib` directory of the precompiled package), set the `cargo:rustc-link-lib` argument to the FastDeploy dynamic library `fastdeploy`, and set the `headers_dir` variable to the FastDeploy C API include directory.
```rust
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
```
Add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Rust project with Cargo.
```bash
cargo build
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
cargo run -- --model yolov5s.onnx --image 000000014439.jpg --device 0
# GPU inference
cargo run -- --model yolov5s.onnx --image 000000014439.jpg --device 1
```
The visualized detection result is saved as the local image `vis_result_yolov5.jpg`.
@@ -0,0 +1,26 @@
extern crate bindgen;
use std::env;
use std::path::PathBuf;
use std::fs::canonicalize;
fn main() {
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
println!("cargo:rerun-if-changed=wrapper.h");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
let headers_dir_canonical = canonicalize(headers_dir).unwrap();
let include_path = headers_dir_canonical.to_str().unwrap();
let bindings = bindgen::Builder::default()
.header("wrapper.h")
.clang_arg(format!("-I{include_path}"))
.parse_callbacks(Box::new(bindgen::CargoCallbacks))
.generate()
.expect("Unable to generate bindings");
let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
bindings
.write_to_file(out_path.join("bindings.rs"))
.expect("Couldn't write bindings!");
}
@@ -0,0 +1,149 @@
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
#![allow(clashing_extern_declarations)]
#![allow(temporary_cstring_as_ptr)]
extern crate libc;
use libc::c_char;
use std::ffi::CString;
use clap::{App, Arg};
pub mod fd {
include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
}
#[link(name = "fastdeploy")]
extern "C" {
pub fn snprintf(s: *const libc::c_char, n: usize, format: *const libc::c_char, _: ...) -> libc::c_int;
}
fn FDBooleanToRust(b: fd::FD_C_Bool) -> bool {
let cFalse: fd::FD_C_Bool = 0;
if b != cFalse {
return true;
}
return false;
}
fn CpuInfer(model_file: *const c_char, image_file: *const c_char) {
unsafe {
let option = fd::FD_C_CreateRuntimeOptionWrapper();
fd::FD_C_RuntimeOptionWrapperUseCpu(option);
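// ONNX models bundle their weights, so the params-file argument is an empty string.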
let model: *mut fd::FD_C_YOLOv5Wrapper = fd::FD_C_CreateYOLOv5Wrapper(
model_file, CString::new("").unwrap().as_ptr(), option, fd::FD_C_ModelFormat_ONNX as i32);
if !FDBooleanToRust(fd::FD_C_YOLOv5WrapperInitialized(model)) {
print!("Failed to initialize.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv5Wrapper(model);
return;
}
let image = fd::FD_C_Imread(image_file);
let result = fd::FD_C_CreateDetectionResult();
if !FDBooleanToRust(fd::FD_C_YOLOv5WrapperPredict(model, image, result)) {
print!("Failed to predict.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv5Wrapper(model);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyDetectionResult(result);
return;
}
let vis_im = fd::FD_C_VisDetection(image, result, 0.5, 1, 0.5);
let vis_im_path = CString::new("vis_result_yolov5.jpg").unwrap();
fd::FD_C_Imwrite(vis_im_path.as_ptr(), vis_im);
print!("Visualized result saved in ./vis_result_yolov5.jpg\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv5Wrapper(model);
fd::FD_C_DestroyDetectionResult(result);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyMat(vis_im);
}
}
fn GpuInfer(model_file: *const c_char, image_file: *const c_char) {
unsafe {
let option = fd::FD_C_CreateRuntimeOptionWrapper();
fd::FD_C_RuntimeOptionWrapperUseGpu(option, 0);
let model: *mut fd::FD_C_YOLOv5Wrapper = fd::FD_C_CreateYOLOv5Wrapper(
model_file, CString::new("").unwrap().as_ptr(), option, fd::FD_C_ModelFormat_ONNX as i32);
if !FDBooleanToRust(fd::FD_C_YOLOv5WrapperInitialized(model)) {
print!("Failed to initialize.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv5Wrapper(model);
return;
}
let image = fd::FD_C_Imread(image_file);
let result = fd::FD_C_CreateDetectionResult();
if !FDBooleanToRust(fd::FD_C_YOLOv5WrapperPredict(model, image, result)) {
print!("Failed to predict.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv5Wrapper(model);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyDetectionResult(result);
return;
}
let vis_im = fd::FD_C_VisDetection(image, result, 0.5, 1, 0.5);
let vis_im_path = CString::new("vis_result_yolov5.jpg").unwrap();
fd::FD_C_Imwrite(vis_im_path.as_ptr(), vis_im);
print!("Visualized result saved in ./vis_result_yolov5.jpg\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv5Wrapper(model);
fd::FD_C_DestroyDetectionResult(result);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyMat(vis_im);
}
}
fn main(){
let matches = App::new("infer command")
.version("0.1")
.about("Infer Run Options")
.arg(Arg::with_name("model")
.long("model")
.help("paddle detection model to use")
.takes_value(true)
.required(true))
.arg(Arg::with_name("image")
.long("image")
.help("image to predict")
.takes_value(true)
.required(true))
.arg(Arg::with_name("device")
.long("device")
.help("The data type of run_option is int, 0: run with cpu; 1: run with gpu")
.takes_value(true)
.required(true))
.get_matches();
let model_file = matches.value_of("model").unwrap();
let image_file = matches.value_of("image").unwrap();
let device_type = matches.value_of("device").unwrap();
if model_file != "" && image_file != "" {
if device_type == "0" {
CpuInfer(CString::new(model_file).unwrap().as_ptr(), CString::new(image_file).unwrap().as_ptr());
} else if device_type == "1" {
GpuInfer(CString::new(model_file).unwrap().as_ptr(), CString::new(image_file).unwrap().as_ptr());
}
} else {
print!("Usage: cargo run -- --model path/to/model --image path/to/image --device run_option\n");
print!("e.g. cargo run -- --model yolov5s.onnx --image 000000014439.jpg --device 0\n");
}
}
@@ -0,0 +1 @@
#include <fastdeploy_capi/vision.h>
@@ -0,0 +1,11 @@
[package]
name = "infer"
version = "0.1.0"
edition = "2021"
[dependencies]
libc = "0.2"
clap = "2.32.0"
[build-dependencies]
bindgen = "0.53.1"
@@ -0,0 +1,53 @@
English | [简体中文](README_CN.md)
# PaddleDetection Rust Deployment Example
This directory provides `main.rs` and `build.rs`, which use `bindgen` to call the FastDeploy C API and quickly deploy the PaddleDetection model YOLOv8 on CPU/GPU.
Before deployment, confirm the following three steps:
- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 3. Install [Rust](https://www.rust-lang.org/tools/install) via Rustup
Taking inference on Linux as an example, the compilation test can be completed by running the commands below in this directory. A FastDeploy version newer than 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Use Rust and bindgen to deploy the YOLOv8 model
Download the FastDeploy precompiled library. You can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Download the YOLOv8 ONNX model file and the test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
In `build.rs`, set `cargo:rustc-link-search` to the FastDeploy dynamic library path (the library is located in the `/lib` directory of the precompiled package), set `cargo:rustc-link-lib` to the FastDeploy dynamic library `fastdeploy`, and set `headers_dir` to the FastDeploy C API include directory.
```rust
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
```
Use the following command to add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Use `cargo` to compile the Rust project.
```bash
cargo build
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
cargo run -- --model yolov8s.onnx --image 000000014439.jpg --device 0
# GPU inference
cargo run -- --model yolov8s.onnx --image 000000014439.jpg --device 1
```
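The `--device` flag simply selects between the two inference paths in `main.rs`; a condensed excerpt:
```rust
// 0 runs the CPU path, 1 runs the GPU path; both share the same predict/visualize flow.
if device_type == "0" {
    CpuInfer(CString::new(model_file).unwrap().as_ptr(), CString::new(image_file).unwrap().as_ptr());
} else if device_type == "1" {
    GpuInfer(CString::new(model_file).unwrap().as_ptr(), CString::new(image_file).unwrap().as_ptr());
}
```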
The visualized detection result is saved as the local image `vis_result_yolov8.jpg`.
@@ -0,0 +1,52 @@
[English](README.md) | 简体中文
# PaddleDetection Rust Deployment Example
This directory provides `main.rs` and `build.rs`, which use Rust's `bindgen` crate to call the FastDeploy C API and quickly deploy the PaddleDetection model YOLOv8 on CPU/GPU.
Before deployment, confirm the following three steps:
- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code suited to your development environment; refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 3. Install [Rust](https://www.rust-lang.org/tools/install) via Rustup for your development environment
Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. A FastDeploy version newer than 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Deploy the YOLOv8 model for inference with Rust and bindgen
In the current directory, download the FastDeploy precompiled library; you can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Download the officially converted YOLOv8 ONNX model file and the test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
In `build.rs`, set the `cargo:rustc-link-search` argument to the FastDeploy dynamic library path (the library lives in the `/lib` directory of the precompiled package), set the `cargo:rustc-link-lib` argument to the FastDeploy dynamic library `fastdeploy`, and set the `headers_dir` variable to the FastDeploy C API include directory.
```rust
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
```
Add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Rust project with Cargo.
```bash
cargo build
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
cargo run -- --model yolov8s.onnx --image 000000014439.jpg --device 0
# GPU inference
cargo run -- --model yolov8s.onnx --image 000000014439.jpg --device 1
```
The visualized detection result is saved as the local image `vis_result_yolov8.jpg`.
@@ -0,0 +1,26 @@
extern crate bindgen;
use std::env;
use std::path::PathBuf;
use std::fs::canonicalize;
fn main() {
println!("cargo:rustc-link-search=./fastdeploy-linux-x64-0.0.0/lib");
println!("cargo:rustc-link-lib=fastdeploy");
println!("cargo:rerun-if-changed=wrapper.h");
let headers_dir = PathBuf::from("./fastdeploy-linux-x64-0.0.0/include");
let headers_dir_canonical = canonicalize(headers_dir).unwrap();
let include_path = headers_dir_canonical.to_str().unwrap();
let bindings = bindgen::Builder::default()
.header("wrapper.h")
.clang_arg(format!("-I{include_path}"))
.parse_callbacks(Box::new(bindgen::CargoCallbacks))
.generate()
.expect("Unable to generate bindings");
let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
bindings
.write_to_file(out_path.join("bindings.rs"))
.expect("Couldn't write bindings!");
}
@@ -0,0 +1,149 @@
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
#![allow(clashing_extern_declarations)]
#![allow(temporary_cstring_as_ptr)]
extern crate libc;
use libc::c_char;
use std::ffi::CString;
use clap::{App, Arg};
pub mod fd {
include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
}
#[link(name = "fastdeploy")]
extern "C" {
pub fn snprintf(s: *const libc::c_char, n: usize, format: *const libc::c_char, _: ...) -> libc::c_int;
}
fn FDBooleanToRust(b: fd::FD_C_Bool) -> bool {
let cFalse: fd::FD_C_Bool = 0;
if b != cFalse {
return true;
}
return false;
}
fn CpuInfer(model_file: *const c_char, image_file: *const c_char) {
unsafe {
let option = fd::FD_C_CreateRuntimeOptionWrapper();
fd::FD_C_RuntimeOptionWrapperUseCpu(option);
let model: *mut fd::FD_C_YOLOv8Wrapper = fd::FD_C_CreateYOLOv8Wrapper(
model_file, CString::new("").unwrap().as_ptr(), option, fd::FD_C_ModelFormat_ONNX as i32);
if !FDBooleanToRust(fd::FD_C_YOLOv8WrapperInitialized(model)) {
print!("Failed to initialize.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv8Wrapper(model);
return;
}
let image = fd::FD_C_Imread(image_file);
let result = fd::FD_C_CreateDetectionResult();
if !FDBooleanToRust(fd::FD_C_YOLOv8WrapperPredict(model, image, result)) {
print!("Failed to predict.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv8Wrapper(model);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyDetectionResult(result);
return;
}
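// Draw the detections; the trailing arguments are assumed to be score threshold (0.5), line size (1) and font size (0.5).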
let vis_im = fd::FD_C_VisDetection(image, result, 0.5, 1, 0.5);
let vis_im_path = CString::new("vis_result_yolov8.jpg").unwrap();
fd::FD_C_Imwrite(vis_im_path.as_ptr(), vis_im);
print!("Visualized result saved in ./vis_result_yolov8.jpg\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv8Wrapper(model);
fd::FD_C_DestroyDetectionResult(result);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyMat(vis_im);
}
}
fn GpuInfer(model_file: *const c_char, image_file: *const c_char) {
unsafe {
let option = fd::FD_C_CreateRuntimeOptionWrapper();
fd::FD_C_RuntimeOptionWrapperUseGpu(option, 0);
let model: *mut fd::FD_C_YOLOv8Wrapper = fd::FD_C_CreateYOLOv8Wrapper(
model_file, CString::new("").unwrap().as_ptr(), option, fd::FD_C_ModelFormat_ONNX as i32);
if !FDBooleanToRust(fd::FD_C_YOLOv8WrapperInitialized(model)) {
print!("Failed to initialize.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv8Wrapper(model);
return;
}
let image = fd::FD_C_Imread(image_file);
let result = fd::FD_C_CreateDetectionResult();
if !FDBooleanToRust(fd::FD_C_YOLOv8WrapperPredict(model, image, result)) {
print!("Failed to predict.\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv8Wrapper(model);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyDetectionResult(result);
return;
}
let vis_im = fd::FD_C_VisDetection(image, result, 0.5, 1, 0.5);
let vis_im_path = CString::new("vis_result_yolov8.jpg").unwrap();
fd::FD_C_Imwrite(vis_im_path.as_ptr(), vis_im);
print!("Visualized result saved in ./vis_result_yolov8.jpg\n");
fd::FD_C_DestroyRuntimeOptionWrapper(option);
fd::FD_C_DestroyYOLOv8Wrapper(model);
fd::FD_C_DestroyDetectionResult(result);
fd::FD_C_DestroyMat(image);
fd::FD_C_DestroyMat(vis_im);
}
}
fn main(){
let matches = App::new("infer command")
.version("0.1")
.about("Infer Run Options")
.arg(Arg::with_name("model")
.long("model")
.help("paddle detection model to use")
.takes_value(true)
.required(true))
.arg(Arg::with_name("image")
.long("image")
.help("image to predict")
.takes_value(true)
.required(true))
.arg(Arg::with_name("device")
.long("device")
.help("The data type of run_option is int, 0: run with cpu; 1: run with gpu")
.takes_value(true)
.required(true))
.get_matches();
let model_file = matches.value_of("model").unwrap();
let image_file = matches.value_of("image").unwrap();
let device_type = matches.value_of("device").unwrap();
if model_file != "" && image_file != "" {
if device_type == "0" {
CpuInfer(CString::new(model_file).unwrap().as_ptr(), CString::new(image_file).unwrap().as_ptr());
} else if device_type == "1" {
GpuInfer(CString::new(model_file).unwrap().as_ptr(), CString::new(image_file).unwrap().as_ptr());
}
} else {
print!("Usage: cargo run -- --model path/to/model --image path/to/image --device run_option\n");
print!("e.g. cargo run -- --model yolov8s.onnx --image 000000014439.jpg --device 0\n");
}
}
@@ -0,0 +1 @@
#include <fastdeploy_capi/vision.h>