[Model] Support Paddle3D PETR v2 model (#1863)

* Support PETR v2

* make petrv2 precision equal with the origin repo

* delete extra func

* modify review problem

* delete visualize

* Update README_CN.md

* Update README.md

* Update README_CN.md

* fix build problem

* delete external variable and function

---------

Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Commit e3b285c762 (parent c8ff8b63e8) by CoolCola, committed by GitHub on 2023-05-19 10:45:36 +08:00.
20 changed files with 1181 additions and 0 deletions
@@ -0,0 +1,63 @@
English | [简体中文](README_CN.md)
# Petr Python Deployment Example
Before deployment, confirm the following two steps:
- 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. The FastDeploy Python whl package is installed; refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides `infer.py` as an example to quickly complete the deployment of Petr on CPU/GPU. Execute the following script to complete the deployment
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/paddle3d/petr/python
wget https://bj.bcebos.com/fastdeploy/models/petr.tar.gz
tar -xf petr.tar.gz
wget https://bj.bcebos.com/fastdeploy/models/petr_test.png
# CPU inference
python infer.py --model petr --image petr_test.png --device cpu
# GPU inference
python infer.py --model petr --image petr_test.png --device gpu
```
## Petr Python interface
```python
fastdeploy.vision.perception.Petr(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
Petr model loading and initialization.
**Parameters**
> * **model_file**(str): model file path
> * **params_file**(str): parameters file path
> * **config_file**(str): configuration file path
> * **runtime_option**(RuntimeOption): backend inference configuration; defaults to None, i.e. the default configuration is used
> * **model_format**(ModelFormat): model format; defaults to Paddle format
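Putting the constructor parameters together, a minimal sketch builds the three file paths from the `petr` model package downloaded in the quick-start above (the directory name `petr` comes from unpacking `petr.tar.gz`):

```python
import os

# Directory unpacked from petr.tar.gz in the quick-start above
model_dir = "petr"

# The three required file paths for the Petr constructor
model_file = os.path.join(model_dir, "petrv2_inference.pdmodel")
params_file = os.path.join(model_dir, "petrv2_inference.pdiparams")
config_file = os.path.join(model_dir, "infer_cfg.yml")

# With FastDeploy installed, the model would then be created as:
# import fastdeploy as fd
# model = fd.vision.perception.Petr(
#     model_file, params_file, config_file,
#     runtime_option=fd.RuntimeOption())
```

Passing an explicit `RuntimeOption` (e.g. one configured with `use_gpu(0)`) selects the inference device, as shown in `infer.py`.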
### predict function
> ```python
> Petr.predict(image_data)
> ```
>
> Model prediction interface: takes one image as input and directly returns the detection result.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): input data; note that it must be in HWC, BGR format
> **Return**
>
> > Returns a `fastdeploy.vision.PerceptionResult` structure; see the document [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of its fields
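To make the expected input layout concrete, here is a small sketch using a dummy array (the height and width are arbitrary example values; `cv2.imread` already returns images in this HWC, BGR layout):

```python
import numpy as np

# A dummy input in the layout predict() expects: HWC, i.e.
# (height, width, channels), with channels ordered B, G, R
height, width = 320, 800  # arbitrary example size
image_data = np.zeros((height, width, 3), dtype=np.uint8)

# In HWC layout the channel axis comes last
channels = image_data.shape[2]
```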
## Other documents
- [Petr Model Introduction](..)
- [Petr C++ deployment](../cpp)
- [Description of model prediction results](../../../../../docs/api/vision_results/)
- [How to switch model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
@@ -0,0 +1,65 @@
[English](README.md) | 简体中文
# Petr Python 部署示例
在部署前,需确认以下两个步骤
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl 包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
本目录下提供 `infer.py` 快速完成 Petr 在 CPU/GPU上部署的示例。执行如下脚本即可完成
```bash
# 下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/paddle3d/petr/python
wget https://bj.bcebos.com/fastdeploy/models/petr.tar.gz
tar -xf petr.tar.gz
wget https://bj.bcebos.com/fastdeploy/models/petr_test.png
# CPU推理
python infer.py --model petr --image petr_test.png --device cpu
# GPU推理
python infer.py --model petr --image petr_test.png --device gpu
```
## Petr Python接口
```python
fastdeploy.vision.perception.Petr(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
Petr模型加载和初始化。
**参数**
> * **model_file**(str): 模型文件路径
> * **params_file**(str): 参数文件路径
> * **config_file**(str): 配置文件路径
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
### predict 函数
> ```python
> Petr.predict(image_data)
> ```
>
> 模型预测接口,输入图像直接输出检测结果。
>
> **参数**
>
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC、BGR格式
> **返回**
>
> > 返回`fastdeploy.vision.PerceptionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
## 其它文档
- [Petr 模型介绍](..)
- [Petr C++部署](../cpp)
- [模型预测结果说明](../../../../../docs/api/vision_results/)
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
@@ -0,0 +1,45 @@
import os

import cv2
import fastdeploy as fd


def parse_arguments():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model", required=True, help="Path of petr paddle model.")
    parser.add_argument(
        "--image", required=True, help="Path of test image file.")
    parser.add_argument(
        "--device",
        type=str,
        default='cpu',
        help="Type of inference device, support 'cpu' or 'gpu'.")
    return parser.parse_args()


def build_option(args):
    option = fd.RuntimeOption()
    if args.device.lower() == "gpu":
        option.use_gpu(0)
    if args.device.lower() == "cpu":
        option.use_cpu()
    return option


args = parse_arguments()

model_file = os.path.join(args.model, "petrv2_inference.pdmodel")
params_file = os.path.join(args.model, "petrv2_inference.pdiparams")
config_file = os.path.join(args.model, "infer_cfg.yml")

# Configure the runtime and load the model
runtime_option = build_option(args)
model = fd.vision.perception.Petr(
    model_file, params_file, config_file, runtime_option=runtime_option)

# Run prediction on the image and print the result
im = cv2.imread(args.image)
result = model.predict(im)
print(result)
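The argument handling in `infer.py` can be exercised without FastDeploy installed; a standalone sketch mirroring `parse_arguments`, fed the same flags as the GPU command in the README:

```python
import argparse

# Same flags as infer.py defines
parser = argparse.ArgumentParser()
parser.add_argument("--model", required=True, help="Path of petr paddle model.")
parser.add_argument("--image", required=True, help="Path of test image file.")
parser.add_argument(
    "--device", type=str, default="cpu",
    help="Type of inference device, support 'cpu' or 'gpu'.")

# Parse the GPU invocation from the README quick-start
args = parser.parse_args(
    ["--model", "petr", "--image", "petr_test.png", "--device", "gpu"])
```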