[PaddlePaddle Hackathon4 No.186] Add PaddleDetection Models Deployment Go Examples (#1648)

* [PaddlePaddle Hackathon4 No.186] Add PaddleDetection Models Deployment Go Examples

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>

* Fix YOLOv8 Deployment Go Example

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>

---------

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
This commit is contained in:
wanziyu
2023-03-28 20:30:03 +08:00
committed by GitHub
parent b15df7a8ee
commit b1d2903b93
9 changed files with 808 additions and 0 deletions
@@ -0,0 +1,57 @@
English | [简体中文](README_CN.md)
# PaddleDetection Golang Deployment Example
This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and quickly deploy PaddleDetection models, including PPYOLOE, on CPU/GPU.
Before deployment, confirm the following two steps
- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, run the commands below in this directory to complete the compilation test. FastDeploy later than version 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Use Golang and CGO to deploy PPYOLOE model
Download the FastDeploy precompiled library. Choose an appropriate version from the `FastDeploy Precompiled Library` link above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Copy the FastDeploy C API headers from the precompiled library to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```
Download the PPYOLOE model file and a test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
```
In `infer.go`, set the `cgo CFLAGS: -I` parameter to the FastDeploy C API directory and the `cgo LDFLAGS: -L` parameter to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```
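Note that these two lines are not shell commands: they are cgo directives that live in the comment block directly above `import "C"` in `infer.go`, with no blank line between the comment and the import. The relevant (abridged) fragment of `infer.go`:

```go
package main

// #cgo CFLAGS: -I./fastdeploy_capi
// #cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
// #include <fastdeploy_capi/vision.h>
import "C"
```

Adjust the `-I`/`-L` paths here if the precompiled library was extracted somewhere else.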
Use the following command to add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Go file `infer.go`.
```bash
go build infer.go
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 0
# GPU inference
./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 1
```
The visualized detection result is saved to the local image `vis_result.jpg`.
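The `-device` flag is a plain integer switch: `infer.go` takes its CPU path for `0` and its GPU path for `1`. A stand-alone sketch of that dispatch, with the FastDeploy calls replaced by a returned label and other values collapsed to CPU for brevity:

```go
package main

import (
	"flag"
	"fmt"
)

// pickBackend mirrors the dispatch in infer.go: 0 selects CPU, 1 selects GPU.
// (infer.go itself simply does nothing for other values; here they fall back to CPU.)
func pickBackend(args []string) string {
	fs := flag.NewFlagSet("infer", flag.ContinueOnError)
	device := fs.Int("device", 0, "0: run with cpu; 1: run with gpu")
	if err := fs.Parse(args); err != nil {
		return "invalid"
	}
	switch *device {
	case 1:
		return "GPU"
	default:
		return "CPU"
	}
}

func main() {
	fmt.Println(pickBackend([]string{"-device", "0"}))
	fmt.Println(pickBackend([]string{"-device", "1"}))
}
```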
@@ -0,0 +1,56 @@
[English](README.md) | 简体中文
# PaddleDetection Golang Deployment Example
This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and quickly deploy the PaddleDetection model PPYOLOE on CPU/GPU.
Before deployment, confirm the following two steps
- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, run the commands below in this directory to complete the compilation test. FastDeploy later than version 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Deploy the PPYOLOE model with Golang and CGO
In the current directory, download the FastDeploy precompiled library. Choose an appropriate version from the `FastDeploy Precompiled Library` link above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Copy the FastDeploy C API headers to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```
Download the PPYOLOE model file and a test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
```
In `infer.go`, set the `cgo CFLAGS: -I` parameter to the C API directory and the `cgo LDFLAGS: -L` parameter to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```
Add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Go file `infer.go`.
```bash
go build infer.go
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 0
# GPU inference
./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 1
```
The visualized detection result is saved to the local image `vis_result.jpg`.
@@ -0,0 +1,185 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
// #cgo CFLAGS: -I./fastdeploy_capi
// #cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
// #include <fastdeploy_capi/vision.h>
// #include <stdio.h>
// #include <stdbool.h>
// #include <stdlib.h>
/*
#include <stdio.h>
#ifdef WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif
char* GetModelFilePath(char* model_dir, char* model_file, int max_size){
snprintf(model_file, max_size, "%s%c%s", model_dir, sep, "model.pdmodel");
return model_file;
}
char* GetParametersFilePath(char* model_dir, char* params_file, int max_size){
snprintf(params_file, max_size, "%s%c%s", model_dir, sep, "model.pdiparams");
return params_file;
}
char* GetConfigFilePath(char* model_dir, char* config_file, int max_size){
snprintf(config_file, max_size, "%s%c%s", model_dir, sep, "infer_cfg.yml");
return config_file;
}
*/
import "C"
import (
"flag"
"fmt"
"unsafe"
)
func FDBooleanToGo(b C.FD_C_Bool) bool {
var cFalse C.FD_C_Bool
if b != cFalse {
return true
}
return false
}
func CpuInfer(modelDir *C.char, imageFile *C.char) {
var modelFile = (*C.char)(C.malloc(C.size_t(100)))
var paramsFile = (*C.char)(C.malloc(C.size_t(100)))
var configFile = (*C.char)(C.malloc(C.size_t(100)))
var maxSize = 99
modelFile = C.GetModelFilePath(modelDir, modelFile, C.int(maxSize))
paramsFile = C.GetParametersFilePath(modelDir, paramsFile, C.int(maxSize))
configFile = C.GetConfigFilePath(modelDir, configFile, C.int(maxSize))
var option *C.FD_C_RuntimeOptionWrapper = C.FD_C_CreateRuntimeOptionWrapper()
C.FD_C_RuntimeOptionWrapperUseCpu(option)
var model *C.FD_C_PPYOLOEWrapper = C.FD_C_CreatePPYOLOEWrapper(
modelFile, paramsFile, configFile, option, C.FD_C_ModelFormat_PADDLE)
if !FDBooleanToGo(C.FD_C_PPYOLOEWrapperInitialized(model)) {
fmt.Printf("Failed to initialize.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyPPYOLOEWrapper(model)
return
}
var image C.FD_C_Mat = C.FD_C_Imread(imageFile)
var result *C.FD_C_DetectionResult = C.FD_C_CreateDetectionResult()
if !FDBooleanToGo(C.FD_C_PPYOLOEWrapperPredict(model, image, result)) {
fmt.Printf("Failed to predict.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyPPYOLOEWrapper(model)
C.FD_C_DestroyMat(image)
C.free(unsafe.Pointer(result))
return
}
var visImage C.FD_C_Mat = C.FD_C_VisDetection(image, result, 0.5, 1, 0.5)
C.FD_C_Imwrite(C.CString("vis_result.jpg"), visImage)
fmt.Printf("Visualized result saved in ./vis_result.jpg\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyPPYOLOEWrapper(model)
C.FD_C_DestroyDetectionResult(result)
C.FD_C_DestroyMat(image)
C.FD_C_DestroyMat(visImage)
}
func GpuInfer(modelDir *C.char, imageFile *C.char) {
var modelFile = (*C.char)(C.malloc(C.size_t(100)))
var paramsFile = (*C.char)(C.malloc(C.size_t(100)))
var configFile = (*C.char)(C.malloc(C.size_t(100)))
var maxSize = 99
modelFile = C.GetModelFilePath(modelDir, modelFile, C.int(maxSize))
paramsFile = C.GetParametersFilePath(modelDir, paramsFile, C.int(maxSize))
configFile = C.GetConfigFilePath(modelDir, configFile, C.int(maxSize))
var option *C.FD_C_RuntimeOptionWrapper = C.FD_C_CreateRuntimeOptionWrapper()
C.FD_C_RuntimeOptionWrapperUseGpu(option, 0)
var model *C.FD_C_PPYOLOEWrapper = C.FD_C_CreatePPYOLOEWrapper(
modelFile, paramsFile, configFile, option, C.FD_C_ModelFormat_PADDLE)
if !FDBooleanToGo(C.FD_C_PPYOLOEWrapperInitialized(model)) {
fmt.Printf("Failed to initialize.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyPPYOLOEWrapper(model)
return
}
var image C.FD_C_Mat = C.FD_C_Imread(imageFile)
var result *C.FD_C_DetectionResult = C.FD_C_CreateDetectionResult()
if !FDBooleanToGo(C.FD_C_PPYOLOEWrapperPredict(model, image, result)) {
fmt.Printf("Failed to predict.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyPPYOLOEWrapper(model)
C.FD_C_DestroyMat(image)
C.free(unsafe.Pointer(result))
return
}
var visImage C.FD_C_Mat = C.FD_C_VisDetection(image, result, 0.5, 1, 0.5)
C.FD_C_Imwrite(C.CString("vis_result.jpg"), visImage)
fmt.Printf("Visualized result saved in ./vis_result.jpg\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyPPYOLOEWrapper(model)
C.FD_C_DestroyDetectionResult(result)
C.FD_C_DestroyMat(image)
C.FD_C_DestroyMat(visImage)
}
var (
modelDir string
imageFile string
deviceType int
)
func init() {
flag.StringVar(&modelDir, "model", "", "path of the PaddleDetection model directory")
flag.StringVar(&imageFile, "image", "", "path of the image to predict")
flag.IntVar(&deviceType, "device", 0, "device to run inference on, 0: CPU; 1: GPU")
}
func main() {
flag.Parse()
if modelDir != "" && imageFile != "" {
if deviceType == 0 {
CpuInfer(C.CString(modelDir), C.CString(imageFile))
} else if deviceType == 1 {
GpuInfer(C.CString(modelDir), C.CString(imageFile))
}
} else {
fmt.Printf("Usage: ./infer -model path/to/model_dir -image path/to/image -device device_id\n")
fmt.Printf("e.g. ./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 0\n")
}
}
@@ -0,0 +1,56 @@
English | [简体中文](README_CN.md)
# YOLOv5 Golang Deployment Example
This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and deploy the YOLOv5 model on CPU/GPU.
Before deployment, confirm the following two steps
- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, run the commands below in this directory to complete the compilation test. FastDeploy later than version 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Use Golang and CGO to deploy YOLOv5 model
Download the FastDeploy precompiled library. Choose an appropriate version from the `FastDeploy Precompiled Library` link above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Copy the FastDeploy C API headers from the precompiled library to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```
Download the YOLOv5 ONNX model file and a test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
In `infer.go`, set the `cgo CFLAGS: -I` parameter to the FastDeploy C API directory and the `cgo LDFLAGS: -L` parameter to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```
Use the following command to add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Go file `infer.go`.
```bash
go build infer.go
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model yolov5s.onnx -image 000000014439.jpg -device 0
# GPU inference
./infer -model yolov5s.onnx -image 000000014439.jpg -device 1
```
The visualized detection result is saved to the local image `vis_result.jpg`.
@@ -0,0 +1,55 @@
[English](README.md) | 简体中文
# YOLOv5 Golang Deployment Example
This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and quickly deploy the YOLOv5 model on CPU/GPU.
Before deployment, confirm the following two steps
- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, run the commands below in this directory to complete the compilation test. FastDeploy later than version 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Deploy the YOLOv5 model with Golang and CGO
In the current directory, download the FastDeploy precompiled library. Choose an appropriate version from the `FastDeploy Precompiled Library` link above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Copy the FastDeploy C API headers to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```
Download the officially converted YOLOv5 ONNX model file and a test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
In `infer.go`, set the `cgo CFLAGS: -I` parameter to the C API directory and the `cgo LDFLAGS: -L` parameter to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```
Add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Go file `infer.go`.
```bash
go build infer.go
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model yolov5s.onnx -image 000000014439.jpg -device 0
# GPU inference
./infer -model yolov5s.onnx -image 000000014439.jpg -device 1
```
The visualized detection result is saved to the local image `vis_result.jpg`.
@@ -0,0 +1,144 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
// #cgo CFLAGS: -I./fastdeploy_capi
// #cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
// #include <fastdeploy_capi/vision.h>
// #include <stdio.h>
// #include <stdbool.h>
// #include <stdlib.h>
import "C"
import (
"flag"
"fmt"
"unsafe"
)
func FDBooleanToGo(b C.FD_C_Bool) bool {
var cFalse C.FD_C_Bool
if b != cFalse {
return true
}
return false
}
func CpuInfer(modelFile *C.char, imageFile *C.char) {
var option *C.FD_C_RuntimeOptionWrapper = C.FD_C_CreateRuntimeOptionWrapper()
C.FD_C_RuntimeOptionWrapperUseCpu(option)
var model *C.FD_C_YOLOv5Wrapper = C.FD_C_CreateYOLOv5Wrapper(
modelFile, C.CString(""), option, C.FD_C_ModelFormat_ONNX)
if !FDBooleanToGo(C.FD_C_YOLOv5WrapperInitialized(model)) {
fmt.Printf("Failed to initialize.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv5Wrapper(model)
return
}
var image C.FD_C_Mat = C.FD_C_Imread(imageFile)
var result *C.FD_C_DetectionResult = C.FD_C_CreateDetectionResult()
if !FDBooleanToGo(C.FD_C_YOLOv5WrapperPredict(model, image, result)) {
fmt.Printf("Failed to predict.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv5Wrapper(model)
C.FD_C_DestroyMat(image)
C.free(unsafe.Pointer(result))
return
}
var visImage C.FD_C_Mat = C.FD_C_VisDetection(image, result, 0.5, 1, 0.5)
C.FD_C_Imwrite(C.CString("vis_result.jpg"), visImage)
fmt.Printf("Visualized result saved in ./vis_result.jpg\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv5Wrapper(model)
C.FD_C_DestroyDetectionResult(result)
C.FD_C_DestroyMat(image)
C.FD_C_DestroyMat(visImage)
}
func GpuInfer(modelFile *C.char, imageFile *C.char) {
var option *C.FD_C_RuntimeOptionWrapper = C.FD_C_CreateRuntimeOptionWrapper()
C.FD_C_RuntimeOptionWrapperUseGpu(option, 0)
var model *C.FD_C_YOLOv5Wrapper = C.FD_C_CreateYOLOv5Wrapper(
modelFile, C.CString(""), option, C.FD_C_ModelFormat_ONNX)
if !FDBooleanToGo(C.FD_C_YOLOv5WrapperInitialized(model)) {
fmt.Printf("Failed to initialize.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv5Wrapper(model)
return
}
var image C.FD_C_Mat = C.FD_C_Imread(imageFile)
var result *C.FD_C_DetectionResult = C.FD_C_CreateDetectionResult()
if !FDBooleanToGo(C.FD_C_YOLOv5WrapperPredict(model, image, result)) {
fmt.Printf("Failed to predict.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv5Wrapper(model)
C.FD_C_DestroyMat(image)
C.free(unsafe.Pointer(result))
return
}
var visImage C.FD_C_Mat = C.FD_C_VisDetection(image, result, 0.5, 1, 0.5)
C.FD_C_Imwrite(C.CString("vis_result.jpg"), visImage)
fmt.Printf("Visualized result saved in ./vis_result.jpg\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv5Wrapper(model)
C.FD_C_DestroyDetectionResult(result)
C.FD_C_DestroyMat(image)
C.FD_C_DestroyMat(visImage)
}
var (
modelFile string
imageFile string
deviceType int
)
func init() {
flag.StringVar(&modelFile, "model", "", "path of the YOLOv5 ONNX model file to use")
flag.StringVar(&imageFile, "image", "", "path of the image to predict")
flag.IntVar(&deviceType, "device", 0, "device to run inference on, 0: CPU; 1: GPU")
}
func main() {
flag.Parse()
if modelFile != "" && imageFile != "" {
if deviceType == 0 {
CpuInfer(C.CString(modelFile), C.CString(imageFile))
} else if deviceType == 1 {
GpuInfer(C.CString(modelFile), C.CString(imageFile))
}
} else {
fmt.Printf("Usage: ./infer -model path/to/model_file -image path/to/image -device device_id\n")
fmt.Printf("e.g. ./infer -model yolov5s.onnx -image 000000014439.jpg -device 0\n")
}
}
@@ -0,0 +1,56 @@
English | [简体中文](README_CN.md)
# YOLOv8 Golang Deployment Example
This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and deploy the YOLOv8 model on CPU/GPU.
Before deployment, confirm the following two steps
- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, run the commands below in this directory to complete the compilation test. FastDeploy later than version 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Use Golang and CGO to deploy YOLOv8 model
Download the FastDeploy precompiled library. Choose an appropriate version from the `FastDeploy Precompiled Library` link above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Copy the FastDeploy C API headers from the precompiled library to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```
Download the YOLOv8 ONNX model file and a test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
In `infer.go`, set the `cgo CFLAGS: -I` parameter to the FastDeploy C API directory and the `cgo LDFLAGS: -L` parameter to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```
Use the following command to add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Go file `infer.go`.
```bash
go build infer.go
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model yolov8s.onnx -image 000000014439.jpg -device 0
# GPU inference
./infer -model yolov8s.onnx -image 000000014439.jpg -device 1
```
The visualized detection result is saved to the local image `vis_result.jpg`.
@@ -0,0 +1,55 @@
[English](README.md) | 简体中文
# YOLOv8 Golang Deployment Example
This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and quickly deploy the YOLOv8 model on CPU/GPU.
Before deployment, confirm the following two steps
- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, run the commands below in this directory to complete the compilation test. FastDeploy later than version 1.0.4 (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.
### Deploy the YOLOv8 model with Golang and CGO
In the current directory, download the FastDeploy precompiled library. Choose an appropriate version from the `FastDeploy Precompiled Library` link above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```
Copy the FastDeploy C API headers to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```
Download the officially converted YOLOv8 ONNX model file and a test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
In `infer.go`, set the `cgo CFLAGS: -I` parameter to the C API directory and the `cgo LDFLAGS: -L` parameter to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```
Add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```
Compile the Go file `infer.go`.
```bash
go build infer.go
```
After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model yolov8s.onnx -image 000000014439.jpg -device 0
# GPU inference
./infer -model yolov8s.onnx -image 000000014439.jpg -device 1
```
The visualized detection result is saved to the local image `vis_result.jpg`.
@@ -0,0 +1,144 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
// #cgo CFLAGS: -I./fastdeploy_capi
// #cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
// #include <fastdeploy_capi/vision.h>
// #include <stdio.h>
// #include <stdbool.h>
// #include <stdlib.h>
import "C"
import (
"flag"
"fmt"
"unsafe"
)
func FDBooleanToGo(b C.FD_C_Bool) bool {
var cFalse C.FD_C_Bool
if b != cFalse {
return true
}
return false
}
func CpuInfer(modelFile *C.char, imageFile *C.char) {
var option *C.FD_C_RuntimeOptionWrapper = C.FD_C_CreateRuntimeOptionWrapper()
C.FD_C_RuntimeOptionWrapperUseCpu(option)
var model *C.FD_C_YOLOv8Wrapper = C.FD_C_CreateYOLOv8Wrapper(
modelFile, C.CString(""), option, C.FD_C_ModelFormat_ONNX)
if !FDBooleanToGo(C.FD_C_YOLOv8WrapperInitialized(model)) {
fmt.Printf("Failed to initialize.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv8Wrapper(model)
return
}
var image C.FD_C_Mat = C.FD_C_Imread(imageFile)
var result *C.FD_C_DetectionResult = C.FD_C_CreateDetectionResult()
if !FDBooleanToGo(C.FD_C_YOLOv8WrapperPredict(model, image, result)) {
fmt.Printf("Failed to predict.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv8Wrapper(model)
C.FD_C_DestroyMat(image)
C.free(unsafe.Pointer(result))
return
}
var visImage C.FD_C_Mat = C.FD_C_VisDetection(image, result, 0.5, 1, 0.5)
C.FD_C_Imwrite(C.CString("vis_result.jpg"), visImage)
fmt.Printf("Visualized result saved in ./vis_result.jpg\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv8Wrapper(model)
C.FD_C_DestroyDetectionResult(result)
C.FD_C_DestroyMat(image)
C.FD_C_DestroyMat(visImage)
}
func GpuInfer(modelFile *C.char, imageFile *C.char) {
var option *C.FD_C_RuntimeOptionWrapper = C.FD_C_CreateRuntimeOptionWrapper()
C.FD_C_RuntimeOptionWrapperUseGpu(option, 0)
var model *C.FD_C_YOLOv8Wrapper = C.FD_C_CreateYOLOv8Wrapper(
modelFile, C.CString(""), option, C.FD_C_ModelFormat_ONNX)
if !FDBooleanToGo(C.FD_C_YOLOv8WrapperInitialized(model)) {
fmt.Printf("Failed to initialize.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv8Wrapper(model)
return
}
var image C.FD_C_Mat = C.FD_C_Imread(imageFile)
var result *C.FD_C_DetectionResult = C.FD_C_CreateDetectionResult()
if !FDBooleanToGo(C.FD_C_YOLOv8WrapperPredict(model, image, result)) {
fmt.Printf("Failed to predict.\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv8Wrapper(model)
C.FD_C_DestroyMat(image)
C.free(unsafe.Pointer(result))
return
}
var visImage C.FD_C_Mat = C.FD_C_VisDetection(image, result, 0.5, 1, 0.5)
C.FD_C_Imwrite(C.CString("vis_result.jpg"), visImage)
fmt.Printf("Visualized result saved in ./vis_result.jpg\n")
C.FD_C_DestroyRuntimeOptionWrapper(option)
C.FD_C_DestroyYOLOv8Wrapper(model)
C.FD_C_DestroyDetectionResult(result)
C.FD_C_DestroyMat(image)
C.FD_C_DestroyMat(visImage)
}
var (
modelFile string
imageFile string
deviceType int
)
func init() {
flag.StringVar(&modelFile, "model", "", "path of the YOLOv8 ONNX model file to use")
flag.StringVar(&imageFile, "image", "", "path of the image to predict")
flag.IntVar(&deviceType, "device", 0, "device to run inference on, 0: CPU; 1: GPU")
}
func main() {
flag.Parse()
if modelFile != "" && imageFile != "" {
if deviceType == 0 {
CpuInfer(C.CString(modelFile), C.CString(imageFile))
} else if deviceType == 1 {
GpuInfer(C.CString(modelFile), C.CString(imageFile))
}
} else {
fmt.Printf("Usage: ./infer -model path/to/model_file -image path/to/image -device device_id\n")
fmt.Printf("e.g. ./infer -model yolov8s.onnx -image 000000014439.jpg -device 0\n")
}
}