diff --git a/examples/vision/segmentation/paddleseg/README.md b/examples/vision/segmentation/paddleseg/README.md
index c240e41b74..32d08a4f6a 100644
--- a/examples/vision/segmentation/paddleseg/README.md
+++ b/examples/vision/segmentation/paddleseg/README.md
@@ -1,32 +1,139 @@
 # PaddleSeg高性能全场景模型部署方案—FastDeploy
-## FastDeploy介绍
+## 目录
+- [FastDeploy介绍](#FastDeploy介绍)
+- [语义分割模型部署](#语义分割模型部署)
+- [Matting模型部署](#Matting模型部署)
+- [常见问题](#常见问题)
-[FastDeploy](https://github.com/PaddlePaddle/FastDeploy)是一款全场景、易用灵活、极致高效的AI推理部署工具,使用FastDeploy可以简单高效的在10+款硬件上对PaddleSeg模型进行快速部署
+## 1. FastDeploy介绍
+
-## 支持如下的硬件部署
+**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)**是一款**全场景**、**易用灵活**、**极致高效**的AI推理部署工具,支持**云边端**部署。使用FastDeploy可以简单高效地在X86 CPU、NVIDIA GPU、飞腾CPU、ARM CPU、Intel GPU、昆仑、昇腾、瑞芯微、晶晨、算能等10+款硬件上对PaddleSeg模型进行快速部署,并且支持Paddle Inference、Paddle Lite、TensorRT、OpenVINO、ONNXRuntime、RKNPU2、SOPHGO等多种推理后端。
-| 硬件支持列表 | | | |
-|:----- | :-- | :-- | :-- |
-| [NVIDIA GPU](cpu-gpu) | [X86 CPU](cpu-gpu)| [飞腾CPU](cpu-gpu) | [ARM CPU](cpu-gpu) |
-| [Intel GPU(独立显卡/集成显卡)](cpu-gpu) | [昆仑](kunlun) | [昇腾](ascend) | [瑞芯微](rockchip) |
-| [晶晨](amlogic) | [算能](sophgo) |
+
+
-## 更多部署方式
+
-- [Android ARM CPU部署](android)
-- [服务化Serving部署](serving)
-- [web部署](web)
-- [模型自动化压缩工具](quantize)
+## 2. 语义分割模型部署
+
+### 2.1 硬件支持列表
-## 常见问题
+|硬件类型|该硬件是否支持|使用指南|Python|C++|
+|:---:|:---:|:---:|:---:|:---:|
+|X86 CPU|✅|[链接](semantic_segmentation/cpu-gpu)|✅|✅|
+|NVIDIA GPU|✅|[链接](semantic_segmentation/cpu-gpu)|✅|✅|
+|飞腾CPU|✅|[链接](semantic_segmentation/cpu-gpu)|✅|✅|
+|ARM CPU|✅|[链接](semantic_segmentation/cpu-gpu)|✅|✅|
+|Intel GPU(集成显卡)|✅|[链接](semantic_segmentation/cpu-gpu)|✅|✅|
+|Intel GPU(独立显卡)|✅|[链接](semantic_segmentation/cpu-gpu)|✅|✅|
+|昆仑|✅|[链接](semantic_segmentation/kunlun)|✅|✅|
+|昇腾|✅|[链接](semantic_segmentation/ascend)|✅|✅|
+|瑞芯微|✅|[链接](semantic_segmentation/rockchip)|✅|✅|
+|晶晨|✅|[链接](semantic_segmentation/amlogic)|--|✅|
+|算能|✅|[链接](semantic_segmentation/sophgo)|✅|✅|
-遇到问题可查看常见问题集合文档或搜索FastDeploy issues,链接如下:
+### 2.2 详细使用文档
+- X86 CPU
+  - [部署模型准备](semantic_segmentation/cpu-gpu)
+  - [Python部署示例](semantic_segmentation/cpu-gpu/python/)
+  - [C++部署示例](semantic_segmentation/cpu-gpu/cpp/)
+- NVIDIA GPU
+  - [部署模型准备](semantic_segmentation/cpu-gpu)
+  - [Python部署示例](semantic_segmentation/cpu-gpu/python/)
+  - [C++部署示例](semantic_segmentation/cpu-gpu/cpp/)
+- 飞腾CPU
+  - [部署模型准备](semantic_segmentation/cpu-gpu)
+  - [Python部署示例](semantic_segmentation/cpu-gpu/python/)
+  - [C++部署示例](semantic_segmentation/cpu-gpu/cpp/)
+- ARM CPU
+  - [部署模型准备](semantic_segmentation/cpu-gpu)
+  - [Python部署示例](semantic_segmentation/cpu-gpu/python/)
+  - [C++部署示例](semantic_segmentation/cpu-gpu/cpp/)
+- Intel GPU
+  - [部署模型准备](semantic_segmentation/cpu-gpu)
+  - [Python部署示例](semantic_segmentation/cpu-gpu/python/)
+  - [C++部署示例](semantic_segmentation/cpu-gpu/cpp/)
+- 昆仑 XPU
+  - [部署模型准备](semantic_segmentation/kunlun)
+  - [Python部署示例](semantic_segmentation/kunlun/python/)
+  - [C++部署示例](semantic_segmentation/kunlun/cpp/)
+- 昇腾 Ascend
+  - [部署模型准备](semantic_segmentation/ascend)
+  - [Python部署示例](semantic_segmentation/ascend/python/)
+  - [C++部署示例](semantic_segmentation/ascend/cpp/)
+- 瑞芯微 Rockchip
+  - [部署模型准备](semantic_segmentation/rockchip/)
+  - [Python部署示例](semantic_segmentation/rockchip/rknpu2/)
+  - [C++部署示例](semantic_segmentation/rockchip/rknpu2/)
+- 晶晨 Amlogic
+  - [部署模型准备](semantic_segmentation/amlogic/a311d/)
+  - [C++部署示例](semantic_segmentation/amlogic/a311d/cpp/)
+- 算能 Sophgo
+  - [部署模型准备](semantic_segmentation/sophgo/)
+  - [Python部署示例](semantic_segmentation/sophgo/python/)
+  - [C++部署示例](semantic_segmentation/sophgo/cpp/)
-[常见问题集合](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
+### 2.3 更多部署方式
-[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
+- [Android ARM CPU部署](semantic_segmentation/android)
+- [服务化Serving部署](semantic_segmentation/serving)
+- [web部署](semantic_segmentation/web)
+- [模型自动化压缩工具](semantic_segmentation/quantize)
-若以上方式都无法解决问题,欢迎给FastDeploy提交新的[issue](https://github.com/PaddlePaddle/FastDeploy/issues)
+## 3. Matting模型部署
+
+
+### 3.1 硬件支持列表
+
+|硬件类型|该硬件是否支持|使用指南|Python|C++|
+|:---:|:---:|:---:|:---:|:---:|
+|X86 CPU|✅|[链接](matting/cpu-gpu)|✅|✅|
+|NVIDIA GPU|✅|[链接](matting/cpu-gpu)|✅|✅|
+|飞腾CPU|✅|[链接](matting/cpu-gpu)|✅|✅|
+|ARM CPU|✅|[链接](matting/cpu-gpu)|✅|✅|
+|Intel GPU(集成显卡)|✅|[链接](matting/cpu-gpu)|✅|✅|
+|Intel GPU(独立显卡)|✅|[链接](matting/cpu-gpu)|✅|✅|
+|昆仑|✅|[链接](matting/kunlun)|✅|✅|
+|昇腾|✅|[链接](matting/ascend)|✅|✅|
+
+### 3.2 详细使用文档
+- X86 CPU
+  - [部署模型准备](matting/cpu-gpu)
+  - [Python部署示例](matting/cpu-gpu/python/)
+  - [C++部署示例](matting/cpu-gpu/cpp/)
+- NVIDIA GPU
+  - [部署模型准备](matting/cpu-gpu)
+  - [Python部署示例](matting/cpu-gpu/python/)
+  - [C++部署示例](matting/cpu-gpu/cpp/)
+- 飞腾CPU
+  - [部署模型准备](matting/cpu-gpu)
+  - [Python部署示例](matting/cpu-gpu/python/)
+  - [C++部署示例](matting/cpu-gpu/cpp/)
+- ARM CPU
+  - [部署模型准备](matting/cpu-gpu)
+  - [Python部署示例](matting/cpu-gpu/python/)
+  - [C++部署示例](matting/cpu-gpu/cpp/)
+- Intel GPU
+  - [部署模型准备](matting/cpu-gpu)
+  - [Python部署示例](matting/cpu-gpu/python/)
+  - [C++部署示例](matting/cpu-gpu/cpp/)
+- 昆仑 XPU
+  - [部署模型准备](matting/kunlun)
+  - [Python部署示例](matting/kunlun/README.md)
+  - [C++部署示例](matting/kunlun/README.md)
+- 昇腾 Ascend
+  - [部署模型准备](matting/ascend)
+  - [Python部署示例](matting/ascend/README.md)
+  - [C++部署示例](matting/ascend/README.md)
+
+## 4. 常见问题
+
+
+遇到问题可查看常见问题集合,搜索FastDeploy issues,或给FastDeploy提交[issue](https://github.com/PaddlePaddle/FastDeploy/issues):
+
+[常见问题集合](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
+[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
diff --git a/examples/vision/segmentation/paddleseg/android/README.md b/examples/vision/segmentation/paddleseg/android/README.md
deleted file mode 100644
index ad363015bb..0000000000
--- a/examples/vision/segmentation/paddleseg/android/README.md
+++ /dev/null
@@ -1,177 +0,0 @@
-English | [简体中文](README_CN.md)
-# PaddleSeg Android Demo for Image Segmentation
-
-For real-time portrait segmentation on Android, this demo has good ease of use and openness. You can run your own training model in the demo.
-
-## Environment Preparations
-
-1. Install the Android Studio tool locally, for details see [Android Stuido official website](https://developer.android.com/studio).
-2. Get an Android phone and turn on USB debugging mode. How to turn on: ` Phone Settings -> Find Developer Options -> Turn on Developer Options and USB Debug Mode`.
-
-## Deployment Steps
-
-1. Image Segmentation PaddleSeg Demo is located in `fastdeploy/examples/vision/segmentation/paddleseg/android` directory.
-2. Please use Android Studio to open paddleseg/android project.
-3. Connect your phone to your computer, turn on USB debugging and file transfer mode, and connect your own mobile device on Android Studio (your phone needs to be enabled to allow software installation from USB).
-

-image -

-
-> **Notes:**
->> If you encounter an NDK configuration error during importing, compiling or running the program, please open ` File > Project Structure > SDK Location` and change `Andriod SDK location` to your locally configured SDK path.
-
-4. Click the Run button to automatically compile the APP and install it to your phone. (The process will automatically download the pre-compiled FastDeploy Android library and model files, internet connection required.)
-The success interface is as follows. Figure 1: Install APP on phone; Figure 2: The opening interface, it will automatically recognize the person in the picture and draw the mask; Figure 3: APP setting options, click setting in the upper right corner, and you can set different options.
-
-| APP icon | APP effect | APP setting options
- | --- | --- | --- |
- | image | image | image |
-
-
-## PaddleSegModel Java API Introduction
-- Model initialization API: Model initialization API contains two methods, you can initialize directly through the constructor, or call init function at the appropriate program node. PaddleSegModel initialization parameters are described as follows:
-  - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel.
-  - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams.
-  - configFile: String, preprocessing configuration file of model inference, e.g. deploy.yml.
-  - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option will be used.
-
-```java
-// Constructor w/o label file
-public PaddleSegModel(); // An empty constructor, which can be initialised by calling init function later.
-public PaddleSegModel(String modelFile, String paramsFile, String configFile);
-public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
-// Call init manually w/o label file
-public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
-```
-- Model prediction API: Model prediction API includes direct prediction API and API with visualization function. Direct prediction means that no image is saved and no result is rendered to Bitmap, but only the inference result is predicted. Prediction and visualization means to predict the result and visualize it, and save the visualized image to the specified path, and render the result to Bitmap (currently supports Bitmap in format ARGB8888), which can be displayed in camera later.
-```java
-// Directly predict: do not save images or render result to Bitmap.
-public SegmentationResult predict(Bitmap ARGB8888Bitmap);
-// Predict and visualize: predict the result and visualize it, and save the visualized image to the specified path, and render the result to Bitmap.
-public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
-public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // Only rendering images without saving.
-// Modify result, but not return it. Concerning performance, you can use the following interface with CxxBuffer in SegmentationResult.
-public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result);
-public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
-public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
-```
-- Set vertical or horizontal mode: For PP-HumanSeg series model, you should call this method to set the vertical mode to true.
-```java
-public void setVerticalScreenFlag(boolean flag);
-```
-- Model resource release API: Calling function release() API can release model resources, and true means successful release, false means failure. Calling function initialized() can determine whether the model is initialized successfully, and true means successful initialization, false means failure.
-```java
-public boolean release(); // Release native resources.
-public boolean initialized(); // Check if initialization is successful.
-```
-
-- Runtime Option Setting
-```java
-public void enableLiteFp16(); // Enable fp16 precision inference
-public void disableLiteFP16(); // Disable fp16 precision inference
-public void setCpuThreadNum(int threadNum); // Set number of threads.
-public void setLitePowerMode(LitePowerMode mode); // Set power mode.
-public void setLitePowerMode(String modeStr); // Set power mode by string.
-```
-
-- Segmentation Result
-```java
-public class SegmentationResult {
-  public int[] mLabelMap; // The predicted label map, each pixel position corresponds to a label HxW.
-  public float[] mScoreMap; // The predicted score map, each pixel position corresponds to a score HxW.
-  public long[] mShape; // The real shape(H,W) of label map.
-  public boolean mContainScoreMap = false; // Whether score map is included.
-  // You can choose to use CxxBuffer directly instead of copying it to JAVA layer through JNI.
-  // This method can improve performance to some extent.
-  public void setCxxBufferFlag(boolean flag); // Set whether the mode is CxxBuffer.
-  public boolean releaseCxxBuffer(); // Release CxxBuffer manually!!!
-  public boolean initialized(); // Check if the result is valid.
-}
-```
-Other reference: C++/Python corresponding SegmentationResult description: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md).
-
-- Model calling example 1: Using constructor and the default RuntimeOption:
-```java
-import java.nio.ByteBuffer;
-import android.graphics.Bitmap;
-import android.opengl.GLES20;
-
-import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
-import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
-
-// Initialise model.
-PaddleSegModel model = new PaddleSegModel(
-  "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel",
-  "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams",
-  "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml");
-
-// If the camera is in portrait mode, the PP-HumanSeg series needs to change the mark.
-model.setVerticalScreenFlag(true);
-
-// Read Bitmaps: The following is the pseudo code of reading the Bitmap.
-ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
-GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
-Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
-ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
-
-// Model inference.
-SegmentationResult result = new SegmentationResult();
-result.setCxxBufferFlag(true);
-
-model.predict(ARGB8888ImageBitmap, result);
-
-// Release CxxBuffer.
-result.releaseCxxBuffer();
-
-// Or return SegmentationResult directly.
-SegmentationResult result = model.predict(ARGB8888ImageBitmap);
-
-// Release model resources.
-model.release();
-```
-
-- Model calling example 2: Call init function manually at the appropriate program node and customize RuntimeOption.
-```java
-// import id.
-import com.baidu.paddle.fastdeploy.RuntimeOption;
-import com.baidu.paddle.fastdeploy.LitePowerMode;
-import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
-import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
-// Create empty model.
-PaddleSegModel model = new PaddleSegModel();
-// Model path.
-String modelFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel";
-String paramFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams";
-String configFile = "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml";
-// Specify RuntimeOption.
-RuntimeOption option = new RuntimeOption();
-option.setCpuThreadNum(2);
-option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
-option.enableLiteFp16();
-// If the camera is in portrait mode, the PP-HumanSeg series needs to change the mark.
-model.setVerticalScreenFlag(true);
-// Initialise with the init function.
-model.init(modelFile, paramFile, configFile, option);
-// Read Bitmap, predict model, release resources, id.
-```
-For details, please refer to [SegmentationMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java).
-
-## Replace FastDeploy SDK and model
- Steps to replace the FastDeploy prediction libraries and model are very simple. The location of the prediction library is `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` indicates the version of the prediction library you are currently using. The location of the model is, `app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`.
-- Replace FastDeploy Android SDK: Download or compile the latest FastDeploy Android SDK, unzip it and put it in the `app/libs` directory. For details please refer to:
-  - [Use FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)
-
-- Steps for replacing the PaddleSeg model.
-  - Put your PaddleSeg model in `app/src/main/assets/models`;
-  - Modify the model path in `app/src/main/res/values/strings.xml`, such as:
-```xml
-
-models/human_pp_humansegv1_lite_192x192_inference_model
-```
-
-## Other Documenets
-If you are interested in more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, you can refer to the following:
-- [Use FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)
-- [Use FastDeploy C++ SDK on Android](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/python/README.md b/examples/vision/segmentation/paddleseg/cpu-gpu/python/README.md
deleted file mode 100644
index 12d7c7eb1c..0000000000
--- a/examples/vision/segmentation/paddleseg/cpu-gpu/python/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-[English](README.md) | 简体中文
-# PaddleSeg Python部署示例
-本目录下提供`infer.py`快速完成PP-LiteSeg在CPU/GPU,以及GPU上通过Paddle-TensorRT加速部署的示例。执行如下脚本即可完成
-
-## 部署环境准备
-
-在部署前,需确认软硬件环境,同时下载预编译python wheel 包,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)
-
-【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../ppmatting)
-
-```bash
-#下载部署示例代码
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/cpu-gpu/python
-
-# 下载Unet模型文件和测试图片
-wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
-tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
-wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-
-# CPU推理
-python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
-# GPU推理
-python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
-# GPU上使用Paddle-TensorRT推理 (注意:Paddle-TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
-python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
-```
-
-运行完成可视化结果如下图所示
-
- -
-
-## 快速链接
-- [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
-- [FastDeploy部署PaddleSeg模型概览](..)
-- [PaddleSeg C++部署](../cpp)
-
-## 常见问题
-- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
-- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
-- [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
-- [编译CPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
-- [编译GPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
-- [编译Jetson部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
diff --git a/examples/vision/segmentation/paddleseg/matting/README.md b/examples/vision/segmentation/paddleseg/matting/README.md
new file mode 100644
index 0000000000..1c9871f6f8
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/matting/README.md
@@ -0,0 +1,54 @@
+# PaddleSeg Matting模型高性能全场景部署方案-FastDeploy
+
+## 1. FastDeploy介绍
+**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)**是一款**全场景**、**易用灵活**、**极致高效**的AI推理部署工具,支持**云边端**部署。使用FastDeploy可以简单高效地在X86 CPU、NVIDIA GPU、飞腾CPU、ARM CPU、Intel GPU、昆仑、昇腾、瑞芯微、晶晨、算能等10+款硬件上对PaddleSeg Matting模型进行快速部署,并且支持Paddle Inference、Paddle Lite、TensorRT、OpenVINO、ONNXRuntime、RKNPU2、SOPHGO等多种推理后端。
+
+## 2. 硬件支持列表
+
+|硬件类型|该硬件是否支持|使用指南|Python|C++|
+|:---:|:---:|:---:|:---:|:---:|
+|X86 CPU|✅|[链接](cpu-gpu)|✅|✅|
+|NVIDIA GPU|✅|[链接](cpu-gpu)|✅|✅|
+|飞腾CPU|✅|[链接](cpu-gpu)|✅|✅|
+|ARM CPU|✅|[链接](cpu-gpu)|✅|✅|
+|Intel GPU(集成显卡)|✅|[链接](cpu-gpu)|✅|✅|
+|Intel GPU(独立显卡)|✅|[链接](cpu-gpu)|✅|✅|
+|昆仑|✅|[链接](kunlun)|✅|✅|
+|昇腾|✅|[链接](ascend)|✅|✅|
+
+## 3. 详细使用文档
+- X86 CPU
+  - [部署模型准备](cpu-gpu)
+  - [Python部署示例](cpu-gpu/python/)
+  - [C++部署示例](cpu-gpu/cpp/)
+- NVIDIA GPU
+  - [部署模型准备](cpu-gpu)
+  - [Python部署示例](cpu-gpu/python/)
+  - [C++部署示例](cpu-gpu/cpp/)
+- 飞腾CPU
+  - [部署模型准备](cpu-gpu)
+  - [Python部署示例](cpu-gpu/python/)
+  - [C++部署示例](cpu-gpu/cpp/)
+- ARM CPU
+  - [部署模型准备](cpu-gpu)
+  - [Python部署示例](cpu-gpu/python/)
+  - [C++部署示例](cpu-gpu/cpp/)
+- Intel GPU
+  - [部署模型准备](cpu-gpu)
+  - [Python部署示例](cpu-gpu/python/)
+  - [C++部署示例](cpu-gpu/cpp/)
+- 昆仑 XPU
+  - [部署模型准备](kunlun)
+  - [Python部署示例](kunlun/README.md)
+  - [C++部署示例](kunlun/README.md)
+- 昇腾 Ascend
+  - [部署模型准备](ascend)
+  - [Python部署示例](ascend/README.md)
+  - [C++部署示例](ascend/README.md)
+
+## 4. 常见问题
+
+遇到问题可查看常见问题集合,搜索FastDeploy issues,或给FastDeploy提交[issue](https://github.com/PaddlePaddle/FastDeploy/issues):
+
+[常见问题集合](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
+[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
diff --git a/examples/vision/segmentation/paddleseg/matting/ascend/README.md b/examples/vision/segmentation/paddleseg/matting/ascend/README.md
new file mode 100644
index 0000000000..f5c209fb3c
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/matting/ascend/README.md
@@ -0,0 +1,31 @@
+# PaddleSeg Matting模型高性能全场景部署方案-FastDeploy
+
+## 1. 说明
+PaddleSeg支持利用FastDeploy在华为昇腾硬件上快速部署Matting模型
+
+## 2. 使用预导出的模型列表
+为了方便开发者的测试,下面提供了PP-Matting导出的各系列模型,开发者可直接下载使用。其中精度指标来源于PP-Matting中对各模型的介绍(未提供精度数据),详情请参考PP-Matting中的说明。**注意**:`deploy.yaml`文件记录导出模型的`input_shape`以及预处理信息,若不满足要求,用户可重新导出相关模型。
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:---------------------------------------------------------------- |:----- |:----- | :------ |
+| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - |
+| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - |
+| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - |
+| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - |
+| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - |
+| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - |
+
+## 3. 自行导出PaddleSeg部署模型
+### 3.1 模型版本
+
+支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop) 高于2.6版本的Matting模型,目前FastDeploy中测试过模型如下:
+- [PP-Matting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+- [PP-HumanMatting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+- [ModNet系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+
+### 3.2 模型导出
+PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting),**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
+
+## 4. 详细的部署示例
+- [Python部署](../cpu-gpu/python)
+- [C++部署](../cpu-gpu/cpp)
diff --git a/examples/vision/segmentation/ppmatting/cpu-gpu/README.md b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/README.md
similarity index 51%
rename from examples/vision/segmentation/ppmatting/cpu-gpu/README.md
rename to examples/vision/segmentation/paddleseg/matting/cpu-gpu/README.md
index e590ac42c2..0767101346 100644
--- a/examples/vision/segmentation/ppmatting/cpu-gpu/README.md
+++ b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/README.md
@@ -1,32 +1,10 @@
 # PaddleSeg Matting模型高性能全场景部署方案-FastDeploy
-PaddleSeg通过[FastDeploy](https://github.com/PaddlePaddle/FastDeploy)支持在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)、昆仑芯、华为昇腾硬件上部署Matting模型
+## 1. 说明
+PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)、昆仑芯、华为昇腾硬件上快速部署Matting模型
-## 模型版本说明
-
-- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
->> **注意**:支持PaddleSeg高于2.6版本的Matting模型
-
-目前FastDeploy支持如下模型的部署
-
-- [PP-Matting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
-- [PP-HumanMatting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
-- [ModNet系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
-
-
-## 准备PaddleSeg部署模型
-在部署前,需要先将Matting模型导出成部署模型,导出步骤参考文档[导出模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
-
-**注意**
-- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
-
-## 预导出的推理模型
-
-为了方便开发者的测试,下面提供了PP-Matting导出的各系列模型,开发者可直接下载使用。
-
-其中精度指标来源于PP-Matting中对各模型的介绍(未提供精度数据),详情各参考PP-Matting中的说明。
-
->> **注意**`deploy.yaml`文件记录导出模型的`input_shape`以及预处理信息,若不满足要求,用户可重新导出相关模型
+## 2. 使用预导出的模型列表
+为了方便开发者的测试,下面提供了PP-Matting导出的各系列模型,开发者可直接下载使用。其中精度指标来源于PP-Matting中对各模型的介绍(未提供精度数据),详情请参考PP-Matting中的说明。**注意**:`deploy.yaml`文件记录导出模型的`input_shape`以及预处理信息,若不满足要求,用户可重新导出相关模型。
 
 | 模型 | 参数大小 | 精度 | 备注 |
 |:---------------------------------------------------------------- |:----- |:----- | :------ |
@@ -37,7 +15,17 @@ PaddleSeg通过[FastDeploy](https://github.com/PaddlePaddle/FastDeploy)支持在
 | [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - |
 | [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - |
 
-## 详细部署文档
+## 3. 自行导出PaddleSeg部署模型
+### 3.1 模型版本
+支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop) 高于2.6版本的Matting模型,目前FastDeploy中测试过模型如下:
+- [PP-Matting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+- [PP-HumanMatting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+- [ModNet系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+
+### 3.2 模型导出
+PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting),**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
+
+## 4. 详细的部署示例
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/segmentation/paddleseg/ascend/cpp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp/CMakeLists.txt
similarity index 75%
rename from examples/vision/segmentation/paddleseg/ascend/cpp/CMakeLists.txt
rename to examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp/CMakeLists.txt
index 93540a7e83..776d832f91 100644
--- a/examples/vision/segmentation/paddleseg/ascend/cpp/CMakeLists.txt
+++ b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp/CMakeLists.txt
@@ -1,14 +1,11 @@
 PROJECT(infer_demo C CXX)
 CMAKE_MINIMUM_REQUIRED (VERSION 3.10)
 
-# 指定下载解压后的fastdeploy库路径
 option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
 
 include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
 
-# 添加FastDeploy依赖头文件
 include_directories(${FASTDEPLOY_INCS})
 
 add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
-# 添加FastDeploy库依赖
 target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/segmentation/ppmatting/cpu-gpu/cpp/README.md b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp/README.md
similarity index 66%
rename from examples/vision/segmentation/ppmatting/cpu-gpu/cpp/README.md
rename to examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp/README.md
index a60aa3ffa2..21be155edb 100644
--- a/examples/vision/segmentation/ppmatting/cpu-gpu/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp/README.md
@@ -1,20 +1,36 @@
 [English](README.md) | 简体中文
-# PP-Matting C++部署示例
+# PP-Matting CPU-GPU C++部署示例
 本目录下提供`infer.cc`快速完成PP-Matting在CPU/GPU、昆仑芯、华为昇腾以及GPU上通过Paddle-TensorRT加速部署的示例。
-在部署前,需确认软硬件环境,同时下载预编译部署库,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install)
+## 1. 说明
+PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)、昆仑芯、华为昇腾硬件上快速部署Matting模型
->> **注意** 只有CPU、GPU提供预编译库,华为昇腾以及昆仑芯需要参考以上文档自行编译部署环境
+## 2. 部署环境准备
+在部署前,需确认软硬件环境,同时下载预编译部署库,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install),**注意** 只有CPU、GPU提供预编译库,华为昇腾以及昆仑芯需要参考以上文档自行编译部署环境。
+## 3. 部署模型准备
+在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md)。
+
+## 4. 运行部署示例
 以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.0以上(x.x.x>=1.0.0)
 ```bash
-mkdir build
-cd build
 # 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
 wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
 tar xvf fastdeploy-linux-x64-x.x.x.tgz
+
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp
+# # 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/matting/cpu-gpu/cpp
+
+# 编译部署示例
+mkdir build && cd build
 cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
 make -j
@@ -24,7 +40,6 @@ tar -xvf PP-Matting-512.tgz
 wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
 wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
-
 # CPU推理
 ./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
 # GPU推理
@@ -34,7 +49,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
 # 昆仑芯XPU推理
 ./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
 ```
->> ***注意** 以上示例未提供华为昇腾的示例,在编译好昇腾部署环境后,只需改造一行代码,将示例文件中KunlunXinInfer方法的`option.UseKunlunXin()`为`option.UseAscend()`就可以完成在华为昇腾上的推理部署
+**注意** 以上示例未提供华为昇腾的示例,在编译好昇腾部署环境后,只需改造一行代码,将示例文件中KunlunXinInfer方法的`option.UseKunlunXin()`改为`option.UseAscend()`就可以完成在华为昇腾上的推理部署
 
 运行完成可视化结果如下图所示
@@ -45,14 +60,14 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
 以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [如何在Windows中使用FastDeploy C++ SDK](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)
 
-## 快速链接
+## 5. 更多指南
 - [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
 - [FastDeploy部署PaddleSeg模型概览](../../)
 - [Python部署](../python)
 
-## 常见问题
+## 6. 常见问题
 - [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
 - [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
 - [编译CPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
diff --git a/examples/vision/segmentation/ppmatting/cpu-gpu/cpp/infer.cc b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp/infer.cc
similarity index 100%
rename from examples/vision/segmentation/ppmatting/cpu-gpu/cpp/infer.cc
rename to examples/vision/segmentation/paddleseg/matting/cpu-gpu/cpp/infer.cc
diff --git a/examples/vision/segmentation/ppmatting/cpu-gpu/python/README.md b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/python/README.md
similarity index 65%
rename from examples/vision/segmentation/ppmatting/cpu-gpu/python/README.md
rename to examples/vision/segmentation/paddleseg/matting/cpu-gpu/python/README.md
index 2adb2458ac..cb844e045d 100644
--- a/examples/vision/segmentation/ppmatting/cpu-gpu/python/README.md
+++ b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/python/README.md
@@ -1,24 +1,35 @@
 [English](README.md) | 简体中文
-# PP-Matting Python部署示例
+# PP-Matting CPU-GPU Python部署示例
 本目录下提供`infer.py`快速完成PP-Matting在CPU/GPU、昆仑芯、华为昇腾,以及GPU上通过Paddle-TensorRT加速部署的示例。执行如下脚本即可完成
-## 部署环境准备
+## 1. 说明
+PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)、昆仑芯、华为昇腾硬件上快速部署Matting模型
-在部署前,需确认软硬件环境,同时下载预编译python wheel 包,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install)
->> **注意** 只有CPU、GPU提供预编译库,华为昇腾以及昆仑芯需要参考以上文档自行编译部署环境
+## 2. 部署环境准备
+在部署前,需确认软硬件环境,同时下载预编译部署库,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install),**注意** 只有CPU、GPU提供预编译库,华为昇腾以及昆仑芯需要参考以上文档自行编译部署环境。
+## 3. 部署模型准备
+在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md)。
+
+## 4. 运行部署示例
 ```bash
-#下载部署示例代码
+# 下载部署示例代码
 git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/matting/ppmatting/python
+cd FastDeploy/examples/vision/segmentation/paddleseg/matting/cpu-gpu/python
+# # 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/matting/cpu-gpu/python
 
 # 下载PP-Matting模型文件和测试图片
 wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
 tar -xvf PP-Matting-512.tgz
 wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
 wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
+
 # CPU推理
 python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
 # GPU推理
@@ -28,7 +39,7 @@ python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bg
 # 昆仑芯XPU推理
 python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
 ```
->> ***注意** 以上示例未提供华为昇腾的示例,在编译好昇腾部署环境后,只需改造一行代码,将示例文件中的`option.use_kunlunxin()`为`option.use_ascend()`就可以完成在华为昇腾上的推理部署
+**注意** 以上示例未提供华为昇腾的示例,在编译好昇腾部署环境后,只需改造一行代码,将示例文件中的`option.use_kunlunxin()`改为`option.use_ascend()`就可以完成在华为昇腾上的推理部署
 
 运行完成可视化结果如下图所示
@@ -38,12 +49,12 @@ python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bg
-## 快速链接 +## 5. 更多指南 - [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html) - [FastDeploy部署PaddleSeg模型概览](..) - [PaddleSeg C++部署](../cpp) -## 常见问题 +## 6. 常见问题 - [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md) - [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md) - [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md) diff --git a/examples/vision/segmentation/ppmatting/cpu-gpu/python/infer.py b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/python/infer.py similarity index 96% rename from examples/vision/segmentation/ppmatting/cpu-gpu/python/infer.py rename to examples/vision/segmentation/paddleseg/matting/cpu-gpu/python/infer.py index a6150ff0e2..148cd0c969 100755 --- a/examples/vision/segmentation/ppmatting/cpu-gpu/python/infer.py +++ b/examples/vision/segmentation/paddleseg/matting/cpu-gpu/python/infer.py @@ -51,7 +51,7 @@ def build_option(args): args = parse_arguments() -# 配置runtime,加载模型 +# setup runtime runtime_option = build_option(args) model_file = os.path.join(args.model, "model.pdmodel") params_file = os.path.join(args.model, "model.pdiparams") @@ -59,12 +59,13 @@ config_file = os.path.join(args.model, "deploy.yaml") model = fd.vision.matting.PPMatting( model_file, params_file, config_file, runtime_option=runtime_option) -# 预测图片抠图结果 +# predict im = cv2.imread(args.image) bg = cv2.imread(args.bg) result = model.predict(im) print(result) -# 可视化结果 + +# visualize vis_im = fd.vision.vis_matting(im, result) vis_im_with_bg = fd.vision.swap_background(im, bg, result) cv2.imwrite("visualized_result_fg.png", vis_im) diff --git a/examples/vision/segmentation/paddleseg/matting/kunlun/README.md b/examples/vision/segmentation/paddleseg/matting/kunlun/README.md new file mode 100644 index 
0000000000..f5c209fb3c
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/matting/kunlun/README.md
@@ -0,0 +1,31 @@
+# PaddleSeg Matting模型高性能全场景部署方案-FastDeploy
+
+## 1. 说明
+PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上快速部署Matting模型
+
+## 2. 使用预导出的模型列表
+为了方便开发者的测试,下面提供了PP-Matting导出的各系列模型,开发者可直接下载使用。其中精度指标来源于PP-Matting中对各模型的介绍(未提供精度数据),详情请参考PP-Matting中的说明。**注意**:`deploy.yaml`文件记录了导出模型的`input_shape`以及预处理信息,若不满足要求,用户可重新导出相关模型。
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:---------------------------------------------------------------- |:----- |:----- | :------ |
+| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - | - |
+| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - | - |
+| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - | - |
+| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - | - |
+| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - | - |
+| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - | - |
+
+## 3. 自行导出PaddleSeg部署模型
+### 3.1 模型版本
+
+支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)高于2.6版本的Matting模型,目前FastDeploy中测试过的模型如下:
+- [PP-Matting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+- [PP-HumanMatting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+- [ModNet系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
+
+### 3.2 模型导出
+PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)。**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
+
+## 4. 
详细的部署示例 +- [Python部署](../cpu-gpu/python) +- [C++部署](../cpu-gpu/cpp) diff --git a/examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md b/examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md deleted file mode 100644 index e03960f09a..0000000000 --- a/examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md +++ /dev/null @@ -1,45 +0,0 @@ -[English](README.md) | 简体中文 -# PaddleSeg在瑞芯微NPU上通过FastDeploy部署模型 - -## PaddleSeg支持部署的瑞芯微的芯片型号 -支持如下芯片的部署 -- Rockchip RV1109 -- Rockchip RV1126 -- Rockchip RK1808 - ->> **注意**:需要注意的是,芯原(verisilicon)作为 IP 设计厂商,本身并不提供实体SoC产品,而是授权其 IP 给芯片厂商,如:晶晨(Amlogic),瑞芯微(Rockchip)等。因此本文是适用于被芯原授权了 NPU IP 的芯片产品。只要芯片产品没有大副修改芯原的底层库,则该芯片就可以使用本文档作为 Paddle Lite 推理部署的参考和教程。在本文中,晶晨 SoC 中的 NPU 和 瑞芯微 SoC 中的 NPU 统称为芯原 NPU。 -瑞芯微 RV1126 是一款编解码芯片,专门面相人工智能的机器视觉领域。 - -本示例基于RV1126来介绍如何使用FastDeploy部署PaddleSeg模型 - -PaddleSeg支持通过FastDeploy在RV1126上基于Paddle-Lite部署相关Segmentation模型 - -## 瑞芯微 RV1126支持的PaddleSeg模型 - -- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) ->> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型 - -目前瑞芯微 RV1126 的 NPU 支持的量化模型如下: -- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) - -## 预导出的量化推理模型 -为了方便开发者的测试,下面提供了PaddleSeg导出的部分量化后的推理模型,开发者可直接下载使用。 - -| 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) | -|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- | -| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% | -**注意** -- PaddleSeg量化模型包含`model.pdmodel`、`model.pdiparams`、`deploy.yaml`和`subgraph.txt`四个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息,subgraph.txt是为了异构计算而存储的配置文件 -- 若以上列表中无满足要求的模型,可参考下方教程自行导出适配A311D的模型 - -## PaddleSeg动态图模型导出为RV1126支持的INT8模型 -模型导出分为以下两步 -1. PaddleSeg训练的动态图模型导出为推理静态图模型,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) -瑞芯微RV1126仅支持INT8 -2. 
将推理模型量化压缩为INT8模型,FastDeploy模型量化的方法及一键自动化压缩工具可以参考[模型量化](../../../quantize/README.md) - -## 详细部署文档 - -目前,瑞芯微 RV1126 上只支持C++的部署。 - -- [C++部署](cpp) diff --git a/examples/vision/segmentation/paddleseg/semantic_segmentation/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/README.md new file mode 100644 index 0000000000..aadf674789 --- /dev/null +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/README.md @@ -0,0 +1,75 @@ +# PaddleSeg语义分割模型高性能全场景部署方案-FastDeploy + +## 1. FastDeploy介绍 +**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)**是一款**全场景**、**易用灵活**、**极致高效**的AI推理部署工具,支持**云边端**部署。使用FastDeploy可以简单高效的在X86 CPU、NVIDIA GPU、飞腾CPU、ARM CPU、Intel GPU、昆仑、昇腾、瑞芯微、晶晨、算能等10+款硬件上对PaddleSeg语义分割模型进行快速部署,并且支持Paddle Inference、Paddle Lite、TensorRT、OpenVINO、ONNXRuntime、RKNPU2、SOPHGO等多种推理后端。 + +## 2. 硬件支持列表 + +|硬件类型|该硬件是否支持|使用指南|Python|C++| +|:---:|:---:|:---:|:---:|:---:| +|X86 CPU|✅|[链接](cpu-gpu)|✅|✅| +|NVIDIA GPU|✅|[链接](cpu-gpu)|✅|✅| +|飞腾CPU|✅|[链接](cpu-gpu)|✅|✅| +|ARM CPU|✅|[链接](cpu-gpu)|✅|✅| +|Intel GPU(集成显卡)|✅|[链接](cpu-gpu)|✅|✅| +|Intel GPU(独立显卡)|✅|[链接](cpu-gpu)|✅|✅| +|昆仑|✅|[链接](kunlun)|✅|✅| +|昇腾|✅|[链接](ascend)|✅|✅| +|瑞芯微|✅|[链接](rockchip)|✅|✅| +|晶晨|✅|[链接](amlogic)|--|✅| +|算能|✅|[链接](sophgo)|✅|✅| + +## 3. 
详细使用文档 +- X86 CPU + - [部署模型准备](cpu-gpu) + - [Python部署示例](cpu-gpu/python/) + - [C++部署示例](cpu-gpu/cpp/) +- NVIDIA GPU + - [部署模型准备](cpu-gpu) + - [Python部署示例](cpu-gpu/python/) + - [C++部署示例](cpu-gpu/cpp/) +- 飞腾CPU + - [部署模型准备](cpu-gpu) + - [Python部署示例](cpu-gpu/python/) + - [C++部署示例](cpu-gpu/cpp/) +- ARM CPU + - [部署模型准备](cpu-gpu) + - [Python部署示例](cpu-gpu/python/) + - [C++部署示例](cpu-gpu/cpp/) +- Intel GPU + - [部署模型准备](cpu-gpu) + - [Python部署示例](cpu-gpu/python/) + - [C++部署示例](cpu-gpu/cpp/) +- 昆仑 XPU + - [部署模型准备](kunlun) + - [Python部署示例](kunlun/python/) + - [C++部署示例](kunlun/cpp/) +- 昇腾 Ascend + - [部署模型准备](ascend) + - [Python部署示例](ascend/python/) + - [C++部署示例](ascend/cpp/) +- 瑞芯微 Rockchip + - [部署模型准备](rockchip/) + - [Python部署示例](rockchip/rknpu2/) + - [C++部署示例](rockchip/rknpu2/) +- 晶晨 Amlogic + - [部署模型准备](amlogic/a311d/) + - [C++部署示例](amlogic/a311d/cpp/) +- 算能 Sophgo + - [部署模型准备](sophgo/) + - [Python部署示例](sophgo/python/) + - [C++部署示例](sophgo/cpp/) + +## 4. 更多部署方式 + +- [Android ARM CPU部署](android) +- [服务化Serving部署](serving) +- [web部署](web) +- [模型自动化压缩工具](quantize) + +## 5. 
常见问题
+
+遇到问题可查看常见问题集合,搜索FastDeploy issues,或给FastDeploy提交[issue](https://github.com/PaddlePaddle/FastDeploy/issues):
+
+[常见问题集合](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
+[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
diff --git a/examples/vision/segmentation/paddleseg/amlogic/a311d/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/README.md
similarity index 58%
rename from examples/vision/segmentation/paddleseg/amlogic/a311d/README.md
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/README.md
index 80a85d8daf..d58830ee2c 100644
--- a/examples/vision/segmentation/paddleseg/amlogic/a311d/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/README.md
@@ -1,30 +1,17 @@
 [English](README.md) | 简体中文
-# PaddleSeg在晶晨NPU上通过FastDeploy部署模型
+# PaddleSeg 语义分割模型在晶晨NPU上的部署方案-FastDeploy

-## PaddleSeg支持部署的晶晨芯片型号
-支持如下芯片的部署
+## 1. 说明
+
+晶晨A311D是一款先进的AI应用处理器。PaddleSeg支持通过FastDeploy在A311D上基于Paddle-Lite部署相关Segmentation模型。**注意**:芯原(verisilicon)作为 IP 设计厂商,本身并不提供实体SoC产品,而是授权其 IP 给芯片厂商,如:晶晨(Amlogic),瑞芯微(Rockchip)等。因此本文适用于被芯原授权了 NPU IP 的芯片产品。只要芯片产品没有大幅修改芯原的底层库,则该芯片就可以使用本文档作为 Paddle Lite 推理部署的参考和教程。在本文中,晶晨 SoC 中的 NPU 和 瑞芯微 SoC 中的 NPU 统称为芯原 NPU。目前支持如下芯片的部署:
 - Amlogic A311D
 - Amlogic C308X
 - Amlogic S905D3
-本示例基于晶晨A311D来介绍如何使用FastDeploy部署PaddleSeg模型
-
-晶晨A311D是一款先进的AI应用处理器。PaddleSeg支持通过FastDeploy在A311D上基于Paddle-Lite部署相关Segmentation模型
-
->> **注意**:需要注意的是,芯原(verisilicon)作为 IP 设计厂商,本身并不提供实体SoC产品,而是授权其 IP 给芯片厂商,如:晶晨(Amlogic),瑞芯微(Rockchip)等。因此本文是适用于被芯原授权了 NPU IP 的芯片产品。只要芯片产品没有大副修改芯原的底层库,则该芯片就可以使用本文档作为 Paddle Lite 推理部署的参考和教程。在本文中,晶晨 SoC 中的 NPU 和 瑞芯微 SoC 中的 NPU 统称为芯原 NPU。
-
-## 晶晨A311D支持的PaddleSeg模型
-
-- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
->> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
-
-目前晶晨A311D所支持的PaddleSeg模型如下:
- 
[PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
-
-## 预导出的量化推理模型
-为了方便开发者的测试,下面提供了PaddleSeg导出的部分量化后的推理模型,开发者可直接下载使用。
+本示例基于晶晨A311D来介绍如何使用FastDeploy部署PaddleSeg模型。
+## 2. 使用预导出的模型列表
 | 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
 |:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
 | [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
@@ -32,13 +19,19 @@
 - PaddleSeg量化模型包含`model.pdmodel`、`model.pdiparams`、`deploy.yaml`和`subgraph.txt`四个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息,subgraph.txt是为了异构计算而存储的配置文件
 - 若以上列表中无满足要求的模型,可参考下方教程自行导出适配A311D的模型

-## PaddleSeg动态图模型导出为A311D支持的INT8模型
+## 3. 自行导出晶晨A311D支持的PaddleSeg模型
+
+### 3.1 模型版本
+支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)高于2.6版本的Segmentation模型,目前FastDeploy测试过、可在晶晨A311D上成功部署的模型如下:
+- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
+
+### 3.2 PaddleSeg动态图模型导出为A311D支持的INT8模型
 模型导出分为以下两步
 1. PaddleSeg训练的动态图模型导出为推理静态图模型,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
 晶晨A311D仅支持INT8
 2. 将推理模型量化压缩为INT8模型,FastDeploy模型量化的方法及一键自动化压缩工具可以参考[模型量化](../../../quantize/README.md)

-## 详细部署文档
+## 4. 
详细部署示例 目前,A311D上只支持C++的部署。 diff --git a/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/CMakeLists.txt similarity index 89% rename from examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/CMakeLists.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/CMakeLists.txt index 64b7a64661..af493f6b67 100755 --- a/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/CMakeLists.txt +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/CMakeLists.txt @@ -1,17 +1,14 @@ PROJECT(infer_demo C CXX) CMAKE_MINIMUM_REQUIRED (VERSION 3.10) -# 指定下载解压后的fastdeploy库路径 option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.") include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake) -# 添加FastDeploy依赖头文件 include_directories(${FASTDEPLOY_INCS}) include_directories(${FastDeploy_INCLUDE_DIRS}) add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc) -# 添加FastDeploy库依赖 target_link_libraries(infer_demo ${FASTDEPLOY_LIBS}) set(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/build/install) diff --git a/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/README.md similarity index 67% rename from examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/README.md index 4240c7e1f9..29c578677a 100644 --- a/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/README.md @@ -1,28 +1,35 @@ [English](README.md) | 简体中文 -# PP-LiteSeg 量化模型 C++ 部署示例 +# PaddleSeg TIMVX A311D C++ 部署示例 本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-LiteSeg 量化模型在晶晨 A311D 上的部署推理加速。 -## 部署准备 -### FastDeploy 交叉编译环境准备 
-软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)
+## 1. 部署环境准备
+### 1.1 FastDeploy 交叉编译环境准备
+软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 晶晨 A311D 编译文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

-### 模型准备
-1. 用户可以直接使用由[FastDeploy 提供的量化模型](../README.md#晶晨a311d支持的paddleseg模型)进行部署。
-2. 若FastDeploy没有提供满足要求的量化模型,用户可以参考[PaddleSeg动态图模型导出为A311D支持的INT8模型](../README.md#paddleseg动态图模型导出为a311d支持的int8模型)自行导出或训练量化模型
+## 2. 部署模型准备
+1. 用户可以直接使用由[FastDeploy 提供的量化模型](../README.md)进行部署。
+2. 若FastDeploy没有提供满足要求的量化模型,用户可以参考[PaddleSeg动态图模型导出为A311D支持的INT8模型](../README.md)自行导出或训练量化模型
 3. 若上述导出或训练的模型出现精度下降或者报错,则需要使用异构计算,使得模型算子部分跑在A311D的ARM CPU上进行调试以及精度验证,其中异构计算所需的文件是subgraph.txt。具体关于异构计算可参考:[异构计算](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md)。

-## 在 A311D 上部署量化后的 PP-LiteSeg 分割模型
+## 3. 在 A311D 上部署量化后的 PP-LiteSeg 分割模型
 请按照以下步骤完成在 A311D 上部署 PP-LiteSeg 量化模型:
 1. 将编译后的库拷贝到当前目录,可使用如下命令:
 ```bash
-cp -r FastDeploy/build/fastdeploy-timvx/ path/to/paddleseg/amlogic/a311d/cpp
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp
+# # 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cp -r FastDeploy/build/fastdeploy-timvx/ PaddleSeg/deploy/fastdeploy/semantic_segmentation/amlogic/a311d/cpp
 ```
 2. 在当前路径下载部署所需的模型和示例图片:
 ```bash
-cd path/to/paddleseg/amlogic/a311d/cpp
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp
 mkdir models && mkdir images
 wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
 tar -xvf ppliteseg.tar.gz
 cp -r ppliteseg models
 wget https://bj.bcebos.com/paddlehub/fastdeploy/cityscapes_demo.png
 cp -r cityscapes_demo.png images
@@ -33,7 +40,7 @@ 3. 
编译部署示例,可使用如下命令:
 ```bash
-cd path/to/paddleseg/amlogic/a311d/cpp
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp
 mkdir build && cd build
 cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
 make -j8
@@ -54,6 +61,6 @@ bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID

-## 快速链接
+## 4. 更多指南
 - [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
 - [FastDeploy部署PaddleSeg模型概览](../../)
diff --git a/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/infer.cc b/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/infer.cc
similarity index 100%
rename from examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/infer.cc
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/infer.cc
diff --git a/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/run_with_adb.sh b/examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/run_with_adb.sh
similarity index 100%
rename from examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/run_with_adb.sh
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/amlogic/a311d/cpp/run_with_adb.sh
diff --git a/examples/vision/segmentation/paddleseg/android/.gitignore b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/.gitignore
similarity index 100%
rename from examples/vision/segmentation/paddleseg/android/.gitignore
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/.gitignore
diff --git a/examples/vision/segmentation/paddleseg/android/README_CN.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/README.md
similarity index 100%
rename from examples/vision/segmentation/paddleseg/android/README_CN.md
rename to 
examples/vision/segmentation/paddleseg/semantic_segmentation/android/README.md diff --git a/examples/vision/segmentation/paddleseg/android/app/build.gradle b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/build.gradle similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/build.gradle rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/build.gradle diff --git a/examples/vision/segmentation/paddleseg/android/app/proguard-rules.pro b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/proguard-rules.pro similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/proguard-rules.pro rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/proguard-rules.pro diff --git a/examples/vision/segmentation/paddleseg/android/app/src/androidTest/java/com/baidu/paddle/fastdeploy/ExampleInstrumentedTest.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/androidTest/java/com/baidu/paddle/fastdeploy/ExampleInstrumentedTest.java similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/androidTest/java/com/baidu/paddle/fastdeploy/ExampleInstrumentedTest.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/androidTest/java/com/baidu/paddle/fastdeploy/ExampleInstrumentedTest.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/AndroidManifest.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/AndroidManifest.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/AndroidManifest.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/AndroidManifest.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/coco_label_list.txt 
b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/coco_label_list.txt similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/coco_label_list.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/coco_label_list.txt diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/en_dict.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/en_dict.txt similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/en_dict.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/en_dict.txt diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/imagenet1k_label_list.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/imagenet1k_label_list.txt similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/imagenet1k_label_list.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/imagenet1k_label_list.txt diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/pascalvoc_label_list b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/pascalvoc_label_list similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/pascalvoc_label_list rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/pascalvoc_label_list diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/ppocr_keys_v1.txt 
b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/ppocr_keys_v1.txt similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/assets/labels/ppocr_keys_v1.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/assets/labels/ppocr_keys_v1.txt diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationSettingsActivity.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationSettingsActivity.java similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationSettingsActivity.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationSettingsActivity.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/Utils.java 
b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/Utils.java similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/Utils.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/Utils.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/layout/ActionBarLayout.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/layout/ActionBarLayout.java similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/layout/ActionBarLayout.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/layout/ActionBarLayout.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/AppCompatPreferenceActivity.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/AppCompatPreferenceActivity.java similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/AppCompatPreferenceActivity.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/AppCompatPreferenceActivity.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/CameraSurfaceView.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/CameraSurfaceView.java similarity 
index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/CameraSurfaceView.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/CameraSurfaceView.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/ResultListView.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/ResultListView.java similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/ResultListView.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/ResultListView.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/adapter/BaseResultAdapter.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/adapter/BaseResultAdapter.java similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/adapter/BaseResultAdapter.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/adapter/BaseResultAdapter.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/model/BaseResultModel.java b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/model/BaseResultModel.java similarity index 100% rename from 
examples/vision/segmentation/paddleseg/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/model/BaseResultModel.java rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/java/com/baidu/paddle/fastdeploy/app/ui/view/model/BaseResultModel.java diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/action_button_layer.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/action_button_layer.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/action_button_layer.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/action_button_layer.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/album_btn.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/album_btn.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/album_btn.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/album_btn.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/ic_launcher_foreground.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/ic_launcher_foreground.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/ic_launcher_foreground.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/ic_launcher_foreground.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/realtime_start_btn.xml 
b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/realtime_start_btn.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/realtime_start_btn.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/realtime_start_btn.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/realtime_stop_btn.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/realtime_stop_btn.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/realtime_stop_btn.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/realtime_stop_btn.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/result_page_border_section_bk.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/result_page_border_section_bk.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/result_page_border_section_bk.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/result_page_border_section_bk.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/round_corner_btn.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/round_corner_btn.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/round_corner_btn.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/round_corner_btn.xml diff --git 
a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/seekbar_progress_realtime.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/seekbar_progress_realtime.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/seekbar_progress_realtime.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/seekbar_progress_realtime.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/seekbar_progress_result.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/seekbar_progress_result.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/seekbar_progress_result.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/seekbar_progress_result.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/seekbar_thumb.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/seekbar_thumb.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/seekbar_thumb.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/seekbar_thumb.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/seekbar_thumb_shape.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/seekbar_thumb_shape.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/seekbar_thumb_shape.xml rename to 
examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/seekbar_thumb_shape.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/switch_side_btn.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/switch_side_btn.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/switch_side_btn.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/switch_side_btn.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/take_picture_btn.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/take_picture_btn.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-v24/take_picture_btn.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-v24/take_picture_btn.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/album.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/album.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/album.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/album.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/album_pressed.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/album_pressed.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/album_pressed.png rename to 
examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/album_pressed.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/back_btn.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/back_btn.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/back_btn.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/back_btn.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/more_menu.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/more_menu.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/more_menu.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/more_menu.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/realtime_start.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/realtime_start.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/realtime_start.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/realtime_start.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/realtime_start_pressed.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/realtime_start_pressed.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/realtime_start_pressed.png rename to 
examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/realtime_start_pressed.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/realtime_stop.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/realtime_stop.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/realtime_stop.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/realtime_stop.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/realtime_stop_pressed.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/realtime_stop_pressed.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/realtime_stop_pressed.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/realtime_stop_pressed.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/scan_icon.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/scan_icon.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/scan_icon.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/scan_icon.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/seekbar_handle.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/seekbar_handle.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/seekbar_handle.png rename to 
examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/seekbar_handle.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/seekbar_progress_dotted.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/seekbar_progress_dotted.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/seekbar_progress_dotted.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/seekbar_progress_dotted.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/seekbar_thumb_invisible.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/seekbar_thumb_invisible.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/seekbar_thumb_invisible.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/seekbar_thumb_invisible.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/switch_side.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/switch_side.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/switch_side.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/switch_side.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/switch_side_pressed.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/switch_side_pressed.png similarity index 100% rename from 
examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/switch_side_pressed.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/switch_side_pressed.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/take_picture.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/take_picture.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/take_picture.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/take_picture.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/take_picture_pressed.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/take_picture_pressed.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xhdpi/take_picture_pressed.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xhdpi/take_picture_pressed.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xxhdpi-v4/btn_switch_default.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xxhdpi-v4/btn_switch_default.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xxhdpi-v4/btn_switch_default.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xxhdpi-v4/btn_switch_default.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xxhdpi-v4/btn_switch_pressed.png 
b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xxhdpi-v4/btn_switch_pressed.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable-xxhdpi-v4/btn_switch_pressed.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable-xxhdpi-v4/btn_switch_pressed.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_settings.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_settings.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_settings.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_settings.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_settings_default.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_settings_default.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_settings_default.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_settings_default.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_settings_pressed.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_settings_pressed.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_settings_pressed.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_settings_pressed.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_shutter.xml 
b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_shutter.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_shutter.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_shutter.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_shutter_default.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_shutter_default.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_shutter_default.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_shutter_default.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_shutter_pressed.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_shutter_pressed.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_shutter_pressed.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_shutter_pressed.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_switch.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_switch.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/btn_switch.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/btn_switch.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/ic_launcher_background.xml 
b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/ic_launcher_background.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/drawable/ic_launcher_background.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/drawable/ic_launcher_background.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/layout/segmentation_activity_main.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/layout/segmentation_activity_main.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/layout/segmentation_activity_main.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/layout/segmentation_activity_main.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/layout/segmentation_camera_page.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/layout/segmentation_camera_page.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/layout/segmentation_camera_page.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/layout/segmentation_camera_page.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/layout/segmentation_result_page.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/layout/segmentation_result_page.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/layout/segmentation_result_page.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/layout/segmentation_result_page.xml diff --git 
a/examples/vision/segmentation/paddleseg/android/app/src/main/res/layout/segmentation_result_page_item.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/layout/segmentation_result_page_item.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/layout/segmentation_result_page_item.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/layout/segmentation_result_page_item.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-hdpi/ic_launcher.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-hdpi/ic_launcher.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-hdpi/ic_launcher.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-hdpi/ic_launcher.png 
diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-hdpi/ic_launcher_round.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-hdpi/ic_launcher_round.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-hdpi/ic_launcher_round.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-hdpi/ic_launcher_round.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-mdpi/ic_launcher.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-mdpi/ic_launcher.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-mdpi/ic_launcher.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-mdpi/ic_launcher.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-mdpi/ic_launcher_round.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-mdpi/ic_launcher_round.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-mdpi/ic_launcher_round.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-mdpi/ic_launcher_round.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png diff --git 
a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png diff --git 
a/examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/values/arrays.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/arrays.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/values/arrays.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/arrays.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/values/colors.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/colors.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/values/colors.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/colors.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/values/dimens.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/dimens.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/values/dimens.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/dimens.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/values/strings.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/strings.xml 
similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/values/strings.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/strings.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/values/styles.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/styles.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/values/styles.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/styles.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/values/values.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/values.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/values/values.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/values/values.xml diff --git a/examples/vision/segmentation/paddleseg/android/app/src/main/res/xml/segmentation_setting.xml b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/xml/segmentation_setting.xml similarity index 100% rename from examples/vision/segmentation/paddleseg/android/app/src/main/res/xml/segmentation_setting.xml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/app/src/main/res/xml/segmentation_setting.xml diff --git a/examples/vision/segmentation/paddleseg/android/build.gradle b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/build.gradle similarity index 100% rename from examples/vision/segmentation/paddleseg/android/build.gradle rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/build.gradle diff --git a/examples/vision/segmentation/paddleseg/android/gradle.properties 
b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradle.properties similarity index 100% rename from examples/vision/segmentation/paddleseg/android/gradle.properties rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradle.properties diff --git a/examples/vision/segmentation/paddleseg/android/gradle/wrapper/gradle-wrapper.jar b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradle/wrapper/gradle-wrapper.jar similarity index 100% rename from examples/vision/segmentation/paddleseg/android/gradle/wrapper/gradle-wrapper.jar rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradle/wrapper/gradle-wrapper.jar diff --git a/examples/vision/segmentation/paddleseg/android/gradle/wrapper/gradle-wrapper.properties b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradle/wrapper/gradle-wrapper.properties similarity index 100% rename from examples/vision/segmentation/paddleseg/android/gradle/wrapper/gradle-wrapper.properties rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradle/wrapper/gradle-wrapper.properties diff --git a/examples/vision/segmentation/paddleseg/android/gradlew b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradlew similarity index 100% rename from examples/vision/segmentation/paddleseg/android/gradlew rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradlew diff --git a/examples/vision/segmentation/paddleseg/android/gradlew.bat b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradlew.bat similarity index 100% rename from examples/vision/segmentation/paddleseg/android/gradlew.bat rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/gradlew.bat diff --git a/examples/vision/segmentation/paddleseg/android/local.properties b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/local.properties 
similarity index 100% rename from examples/vision/segmentation/paddleseg/android/local.properties rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/local.properties diff --git a/examples/vision/segmentation/paddleseg/android/settings.gradle b/examples/vision/segmentation/paddleseg/semantic_segmentation/android/settings.gradle similarity index 100% rename from examples/vision/segmentation/paddleseg/android/settings.gradle rename to examples/vision/segmentation/paddleseg/semantic_segmentation/android/settings.gradle diff --git a/examples/vision/segmentation/paddleseg/ascend/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/README.md similarity index 76% rename from examples/vision/segmentation/paddleseg/ascend/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/README.md index 05f4d83485..ea9af4f210 100644 --- a/examples/vision/segmentation/paddleseg/ascend/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/README.md @@ -1,37 +1,11 @@ [English](README.md) | 简体中文 -# PaddleSeg利用FastDeploy在华为昇腾上部署模型 +# PaddleSeg 语义分割模型在华为昇腾上部署方案-FastDeploy +## 1. 
说明 PaddleSeg支持通过FastDeploy在华为昇腾上部署Segmentation相关模型 -## 支持的PaddleSeg模型 - -- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) ->> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型 - -目前FastDeploy支持如下模型的部署 - -- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) -- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) -- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) -- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) -- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) - ->>**注意** 若需要在华为昇腾上部署**PP-Matting**、**PP-HumanMatting**请从[Matting模型部署](../../ppmatting/)下载对应模型,部署过程与此文档一致 - -## 准备PaddleSeg部署模型 -PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) - -**注意** -- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 - -## 预导出的推理模型 - -为了方便开发者的测试,下面提供了PaddleSeg导出的部分推理模型模型 -- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none` -- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax` - -开发者可直接下载使用。 +## 2. 
使用预导出的模型列表 | 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) | |:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- | @@ -45,7 +19,25 @@ PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.co | [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% | | [SegFormer_B0-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-with-argmax.tgz) \| [SegFormer_B0-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-without-argmax.tgz) | 15MB | 1024x1024 | 76.73% | 77.16% | - | -## 详细部署文档 +补充说明: +- 文件名标记了`without-argmax`的模型,导出方式为:**不指定**`--input_shape`,**指定**`--output_op none` +- 文件名标记了`with-argmax`的模型导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax` +## 3. 
自行导出PaddleSeg部署模型 +### 3.1 模型版本 +支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)高于2.6版本的Segmentation模型,如果部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../matting/)。目前FastDeploy测试过成功部署的模型: +- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) +- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) +- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) +- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) +- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) + +### 3.2 模型导出 +PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 + +### 3.3 导出须知 +请参考[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)中`output_op`参数的说明,获取您部署所需的模型,比如是否带`argmax`或`softmax`算子 + +## 4. 
详细的部署示例

- [Python部署](python)
- [C++部署](cpp)
diff --git a/examples/vision/segmentation/ppmatting/cpu-gpu/cpp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp/CMakeLists.txt
similarity index 75%
rename from examples/vision/segmentation/ppmatting/cpu-gpu/cpp/CMakeLists.txt
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp/CMakeLists.txt
index 93540a7e83..776d832f91 100644
--- a/examples/vision/segmentation/ppmatting/cpu-gpu/cpp/CMakeLists.txt
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp/CMakeLists.txt
@@ -1,14 +1,11 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

-# 指定下载解压后的fastdeploy库路径
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

-# 添加FastDeploy依赖头文件
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
-# 添加FastDeploy库依赖
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/segmentation/paddleseg/ascend/cpp/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp/README.md
similarity index 57%
rename from examples/vision/segmentation/paddleseg/ascend/cpp/README.md
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp/README.md
index cfa01c6635..79f60d7fb4 100644
--- a/examples/vision/segmentation/paddleseg/ascend/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp/README.md
@@ -1,16 +1,25 @@
[English](README.md) | 简体中文
-# PaddleSeg C++部署示例
+# PaddleSeg Ascend NPU C++部署示例

本目录下提供`infer.cc`快速完成PP-LiteSeg在华为昇腾上部署的示例。

-## 华为昇腾NPU编译FastDeploy环境准备
+## 1. 部署环境准备
在部署前,需自行编译基于华为昇腾NPU的预测库,参考文档[华为昇腾NPU部署环境编译](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

->>**注意** **PP-Matting**、**PP-HumanMatting**的模型,请从[Matting模型部署](../../../ppmatting/)下载

+## 2. 
部署模型准备
+在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md),如果你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)。

+## 3. 运行部署示例
+以Linux上推理为例,在本目录执行如下命令即可完成编译测试。

```bash
-#下载部署示例代码
-cd path/to/paddleseg/ascend/cpp
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/ascend/cpp

mkdir build
cd build
@@ -32,7 +41,7 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

-## 快速链接
+## 4. 更多指南
- [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [FastDeploy部署PaddleSeg模型概览](../../)
- [Python部署](../python)
diff --git a/examples/vision/segmentation/paddleseg/ascend/cpp/infer.cc b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp/infer.cc
similarity index 100%
rename from examples/vision/segmentation/paddleseg/ascend/cpp/infer.cc
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/cpp/infer.cc
diff --git a/examples/vision/segmentation/paddleseg/ascend/python/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/python/README.md
similarity index 59%
rename from examples/vision/segmentation/paddleseg/ascend/python/README.md
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/python/README.md
index ee11ea7b99..2ca85d385c 100644
--- a/examples/vision/segmentation/paddleseg/ascend/python/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/python/README.md
@@ -1,17 +1,24 @@
[English](README.md) | 简体中文
-# PaddleSeg Python部署示例
+# PaddleSeg Ascend NPU Python部署示例
本目录下提供`infer.py`快速完成PP-LiteSeg在华为昇腾上部署的示例。

-## 华为昇腾NPU编译FastDeploy wheel包环境准备
+## 1. 部署环境准备
在部署前,需自行编译基于华为昇腾NPU的FastDeploy python wheel包并安装,参考文档[华为昇腾NPU部署环境编译](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

->>**注意** **PP-Matting**、**PP-HumanMatting**的模型,请从[Matting模型部署](../../../ppmatting)下载
-
+## 2. 部署模型准备
+在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md),如果你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)。

+## 3. 运行部署示例
```bash
-#下载部署示例代码
-cd path/to/paddleseg/ascend/cpp
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/python
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/ascend/python

# 下载PP-LiteSeg模型文件和测试图片
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
@@ -27,10 +34,10 @@ python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --ima

-## 快速链接
+## 4. 更多指南
- [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
- [FastDeploy部署PaddleSeg模型概览](..)
- [PaddleSeg C++部署](../cpp)
-## 常见问题
+## 5. 
常见问题 - [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md) diff --git a/examples/vision/segmentation/paddleseg/ascend/python/infer.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/python/infer.py similarity index 92% rename from examples/vision/segmentation/paddleseg/ascend/python/infer.py rename to examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/python/infer.py index 180f30e808..b699fd7899 100755 --- a/examples/vision/segmentation/paddleseg/ascend/python/infer.py +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/ascend/python/infer.py @@ -17,18 +17,18 @@ def parse_arguments(): runtime_option = fd.RuntimeOption() runtime_option.use_ascend() -# 配置runtime,加载模型 +# setup runtime model_file = os.path.join(args.model, "model.pdmodel") params_file = os.path.join(args.model, "model.pdiparams") config_file = os.path.join(args.model, "deploy.yaml") model = fd.vision.segmentation.PaddleSegModel( model_file, params_file, config_file, runtime_option=runtime_option) -# 预测图片分割结果 +# predict im = cv2.imread(args.image) result = model.predict(im) print(result) -# 可视化结果 +# visualize vis_im = fd.vision.vis_segmentation(im, result, weight=0.5) cv2.imwrite("vis_img.png", vis_im) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/README.md similarity index 76% rename from examples/vision/segmentation/paddleseg/cpu-gpu/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/README.md index b126e9ddb0..849bc5eda7 100644 --- a/examples/vision/segmentation/paddleseg/cpu-gpu/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/README.md @@ -1,38 +1,11 @@ [English](README.md) | 简体中文 -# PaddleSeg模型高性能全场景部署方案-FastDeploy +# PaddleSeg 语义分割模型高性能全场景部署方案-FastDeploy -PaddleSeg支持利用FastDeploy在NVIDIA 
GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上部署Segmentation模型 +## 1. 说明 +PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上快速部署Segmentation模型 -## 模型版本说明 - -- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) ->> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型 - -目前FastDeploy支持如下模型的部署 - -- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md) -- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) -- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) -- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) -- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) -- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) - ->>**注意** 如部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../ppmatting) - -## 准备PaddleSeg部署模型 -PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) - -**注意** -- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 - -## 预导出的推理模型 - -为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型 -- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none` -- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax` - -开发者可直接下载使用。 +## 2. 
使用预导出的模型列表 | 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) | |:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- | @@ -47,7 +20,26 @@ PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.co | [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% | | [SegFormer_B0-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-with-argmax.tgz) \| [SegFormer_B0-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-without-argmax.tgz) | 15MB | 1024x1024 | 76.73% | 77.16% | - | -## 详细部署文档 +补充说明: +- 文件名标记了`without-argmax`的模型,导出方式为:**不指定**`--input_shape`,**指定**`--output_op none` +- 文件名标记了`with-argmax`的模型导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax` +## 3. 
自行导出PaddleSeg部署模型 +### 3.1 模型版本 +支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)高于2.6版本的Segmentation模型,如果部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../matting/)。目前FastDeploy测试过成功部署的模型: +- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md) +- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) +- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) +- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) +- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) +- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) + +### 3.2 模型导出 +PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 + +### 3.3 导出须知 +请参考[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)中`output_op`参数的说明,获取您部署所需的模型,比如是否带`argmax`或`softmax`算子 + +## 4. 
详细的部署示例 - [Python部署](python) - [C++部署](cpp) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/c/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/CMakeLists.txt similarity index 100% rename from examples/vision/segmentation/paddleseg/cpu-gpu/c/CMakeLists.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/CMakeLists.txt diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/c/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/README.md similarity index 85% rename from examples/vision/segmentation/paddleseg/cpu-gpu/c/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/README.md index e991593572..5fc49ac528 100755 --- a/examples/vision/segmentation/paddleseg/cpu-gpu/c/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/README.md @@ -5,8 +5,8 @@ This directory provides `infer.c` to finish the deployment of PaddleSeg on CPU/G Before deployment, two steps require confirmation -- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md) -- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md) +- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/build_and_install/download_prebuilt_libraries.md) +- 2. Download the precompiled deployment library and samples code according to your development environment. 
Refer to [FastDeploy Precompiled Library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/build_and_install/download_prebuilt_libraries.md) Taking inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model. @@ -32,7 +32,7 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png ``` The above command works for Linux or MacOS. For SDK in Windows, refer to: -- [How to use FastDeploy C++ SDK in Windows](../../../../../../docs/en/faq/use_sdk_on_windows.md) +- [How to use FastDeploy C++ SDK in Windows](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/faq/use_sdk_on_windows.md) The visualized result after running is as follows @@ -154,7 +154,7 @@ FD_C_Bool FD_C_PaddleSegWrapperPredict( > **Params** > * **fd_c_ppseg_wrapper**(FD_C_PaddleSegWrapper*): Pointer to manipulate PaddleSeg object. > * **img**(FD_C_Mat): pointer to cv::Mat object, which can be obained by FD_C_Imread interface -> * **result**(FD_C_SegmentationResult*): Segmentation prediction results, Refer to [Vision Model Prediction Results](../../../../../../docs/api/vision_results/) for SegmentationResult +> * **result**(FD_C_SegmentationResult*): Segmentation prediction results, Refer to [Vision Model Prediction Results](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/) for SegmentationResult #### Result @@ -180,5 +180,5 @@ FD_C_Mat FD_C_VisSegmentation(FD_C_Mat im, - [PPSegmentation Model Description](../../) - [PaddleSeg Python Deployment](../python) -- [Model Prediction Results](../../../../../../docs/api/vision_results/) -- [How to switch the model inference backend engine](../../../../../../docs/cn/faq/how_to_change_backend.md) +- [Model Prediction Results](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/) +- [How to switch the model inference 
backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/faq/how_to_change_backend.md) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/c/README_CN.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/README_CN.md similarity index 75% rename from examples/vision/segmentation/paddleseg/cpu-gpu/c/README_CN.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/README_CN.md index e33b2c44fb..5be1bcc302 100644 --- a/examples/vision/segmentation/paddleseg/cpu-gpu/c/README_CN.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/README_CN.md @@ -1,13 +1,18 @@ [English](README.md) | 简体中文 -# PaddleSeg C部署示例 +# PaddleSeg CPU-GPU C部署示例 本目录下提供`infer.c`来调用C API快速完成PaddleSeg模型在CPU/GPU上部署的示例。 -在部署前,需确认以下两个步骤 +## 1. 说明 +PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上快速部署Segmentation模型。 -- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md) -- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md) +## 2. 部署环境准备 +在部署前,需确认软硬件环境,同时下载预编译部署库,参考[FastDeploy安装文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)安装FastDeploy预编译库。 +## 3. 部署模型准备 +在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md),如果你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)。 + +## 4. 
运行部署示例 以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.4以上(x.x.x>=1.0.4) ```bash @@ -31,20 +36,19 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png ./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 1 ``` -以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考: -- [如何在Windows中使用FastDeploy C++ SDK](../../../../../../docs/cn/faq/use_sdk_on_windows.md) - -如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境: -- [如何使用华为昇腾NPU部署](../../../../../../docs/cn/faq/use_sdk_on_ascend.md) - 运行完成可视化结果如下图所示
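+解压上文下载的模型包后,其中除`model.pdmodel`、`model.pdiparams`外还有一个`deploy.yaml`,FastDeploy即从该文件中读取推理所需的预处理信息。其内容大致形如下例(以下字段与取值仅为示意,请以实际导出的文件为准):
+
+```yaml
+Deploy:
+  model: model.pdmodel
+  params: model.pdiparams
+  output_op: none
+  transforms:
+  - type: Normalize
+```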
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)

如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
+- [如何使用华为昇腾NPU部署](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_ascend.md)
+
+## 5. PaddleSeg C API接口

### 配置

@@ -155,7 +159,7 @@ FD_C_Bool FD_C_PaddleSegWrapperPredict(
> **参数**
> * **fd_c_ppseg_wrapper**(FD_C_PaddleSegWrapper*): 指向PaddleSeg模型的指针
> * **img**(FD_C_Mat): 输入图像的指针,指向cv::Mat对象,可以调用FD_C_Imread读取图像获取
-> * **result**FD_C_SegmentationResult*): Segmentation检测结果,SegmentationResult说明参考[视觉模型预测结果](../../../../../../docs/api/vision_results/)
+> * **result**(FD_C_SegmentationResult*): Segmentation检测结果,SegmentationResult说明参考[视觉模型预测结果](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/)

#### Predict结果

@@ -177,9 +181,9 @@ FD_C_Mat FD_C_VisSegmentation(FD_C_Mat im,

> * **vis_im**(FD_C_Mat): 指向可视化图像的指针


-## 其它文档
+## 6. 
常见问题 - [PPSegmentation 系列模型介绍](../../) - [PaddleSeg Python部署](../python) -- [模型预测结果说明](../../../../../../docs/api/vision_results/) -- [如何切换模型推理后端引擎](../../../../../../docs/cn/faq/how_to_change_backend.md) +- [模型预测结果说明](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/) +- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/c/infer.c b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/infer.c similarity index 100% rename from examples/vision/segmentation/paddleseg/cpu-gpu/c/infer.c rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/c/infer.c diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/cpp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/CMakeLists.txt similarity index 75% rename from examples/vision/segmentation/paddleseg/cpu-gpu/cpp/CMakeLists.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/CMakeLists.txt index 93540a7e83..776d832f91 100644 --- a/examples/vision/segmentation/paddleseg/cpu-gpu/cpp/CMakeLists.txt +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/CMakeLists.txt @@ -1,14 +1,11 @@ PROJECT(infer_demo C CXX) CMAKE_MINIMUM_REQUIRED (VERSION 3.10) -# 指定下载解压后的fastdeploy库路径 option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.") include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake) -# 添加FastDeploy依赖头文件 include_directories(${FASTDEPLOY_INCS}) add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc) -# 添加FastDeploy库依赖 target_link_libraries(infer_demo ${FASTDEPLOY_LIBS}) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/cpp/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/README.md similarity index 55% rename from examples/vision/segmentation/paddleseg/cpu-gpu/cpp/README.md rename to 
examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/README.md
index 2c1f54e9a1..7287fb5f8d 100644
--- a/examples/vision/segmentation/paddleseg/cpu-gpu/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/README.md
@@ -1,25 +1,36 @@
[English](README.md) | 简体中文
-# PaddleSeg C++部署示例
+# PaddleSeg CPU-GPU C++部署示例
本目录下提供`infer.cc`快速完成PP-LiteSeg在CPU/GPU,以及GPU上通过Paddle-TensorRT加速部署的示例。

-## 部署环境准备
+## 1. 说明
+PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上快速部署Segmentation模型。

-在部署前,需确认软硬件环境,同时下载预编译部署库,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)
+## 2. 部署环境准备
+在部署前,需确认软硬件环境,同时下载预编译部署库,参考[FastDeploy安装文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)安装FastDeploy预编译库。

->> **注意** 如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../ppmatting)
+## 3. 部署模型准备
+在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md),如果你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)。

+## 4. 运行部署示例
以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.0以上(x.x.x>=1.0.0)

-```bash
-#下载部署示例代码
-cd path/to/paddleseg/cpp-gpu/cpp
-
-mkdir build
-cd build
+```bash
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
+
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/cpu-gpu/cpp
+
+# 编译部署示例
+mkdir build && cd build
cmake .. 
-DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
@@ -28,7 +39,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_wi
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

-
+# 运行部署示例
# CPU推理
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU推理
@@ -42,16 +53,15 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

-> **注意:**
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)
+- 注意,以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考文档: [如何在Windows中使用FastDeploy C++ SDK](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)
+- 关于如何通过FastDeploy使用更多不同的推理后端,以及如何使用不同的硬件,请参考文档:[如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)

-## 快速链接
+## 5. 更多指南
- [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [FastDeploy部署PaddleSeg模型概览](../../)
- [Python部署](../python)

-## 常见问题
+## 6. 
常见问题 - [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md) - [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md) - [编译CPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/cpp/infer.cc b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/infer.cc old mode 100755 new mode 100644 similarity index 98% rename from examples/vision/segmentation/paddleseg/cpu-gpu/cpp/infer.cc rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/infer.cc index eefeebeba7..9d05ac4380 --- a/examples/vision/segmentation/paddleseg/cpu-gpu/cpp/infer.cc +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/cpp/infer.cc @@ -92,7 +92,7 @@ void TrtInfer(const std::string& model_dir, const std::string& image_file) { option.SetTrtInputShape("x", {1, 3, 256, 256}, {1, 3, 1024, 1024}, {1, 3, 2048, 2048}); - auto model = fastdeploy::vision::segmentation::PaddleSegModel( + auto model = fastdeploy::vision::segmentation::PaddleSegModel( model_file, params_file, config_file, option); if (!model.Initialized()) { diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/csharp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/CMakeLists.txt similarity index 100% rename from examples/vision/segmentation/paddleseg/cpu-gpu/csharp/CMakeLists.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/CMakeLists.txt diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/csharp/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/README.md similarity index 80% rename from examples/vision/segmentation/paddleseg/cpu-gpu/csharp/README.md rename to 
examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/README.md index db4aeee4a4..3c674ed385 100755 --- a/examples/vision/segmentation/paddleseg/cpu-gpu/csharp/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/README.md @@ -5,8 +5,8 @@ This directory provides `infer.cs` to finish the deployment of PaddleSeg on CPU/ Before deployment, two steps require confirmation -- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md) -- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md) +- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/build_and_install/download_prebuilt_libraries.md) +- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/build_and_install/download_prebuilt_libraries.md) Please follow below instructions to compile and test in Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model. @@ -35,7 +35,7 @@ msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64 ``` For more information about how to use FastDeploy SDK to compile a project with Visual Studio 2019. Please refer to -- [Using the FastDeploy C++ SDK on Windows Platform](../../../../../../docs/en/faq/use_sdk_on_windows.md) +- [Using the FastDeploy C++ SDK on Windows Platform](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/faq/use_sdk_on_windows.md) ## 4. 
Execute compiled program @@ -93,12 +93,12 @@ fastdeploy.SegmentationResult Predict(OpenCvSharp.Mat im) >> > **Return** > ->> * **result**: Segmentation prediction results, refer to [Vision Model Prediction Results](../../../../../../docs/api/vision_results/) for SegmentationResult +>> * **result**: Segmentation prediction results, refer to [Vision Model Prediction Results](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/) for SegmentationResult ## Other Documents - [PPSegmentation Model Description](../../) - [PaddleSeg Python Deployment](../python) -- [Model Prediction Results](../../../../../../docs/api/vision_results/) -- [How to switch the model inference backend engine](../../../../../../docs/cn/faq/how_to_change_backend.md) +- [Model Prediction Results](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/) +- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/csharp/README_CN.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/README_CN.md similarity index 64% rename from examples/vision/segmentation/paddleseg/cpu-gpu/csharp/README_CN.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/README_CN.md index ecdfd667f5..6a4eacd952 100644 --- a/examples/vision/segmentation/paddleseg/cpu-gpu/csharp/README_CN.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/README_CN.md @@ -1,26 +1,31 @@ [English](README.md) | 简体中文 -# PaddleSeg C#部署示例 +# PaddleSeg CPU-GPU C#部署示例 本目录下提供`infer.cs`来调用C# API快速完成PaddleSeg模型在CPU/GPU上部署的示例。 -在部署前,需确认以下两个步骤 +## 1. 说明 +PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上快速部署Segmentation模型。 + +## 2. 
部署环境准备
+在部署前,需确认软硬件环境,同时下载预编译部署库,参考[FastDeploy安装文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)安装FastDeploy预编译库。
+
+## 3. 部署模型准备
+在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md),如果你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)。

-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

在本目录执行如下命令即可在Windows完成编译测试,支持此模型需保证FastDeploy版本1.0.4以上(x.x.x>=1.0.4)

-## 1. 下载C#包管理程序nuget客户端
+## 4. 下载C#包管理程序nuget客户端
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe
下载完成后将该程序添加到环境变量**PATH**中

-## 2. 下载模型文件和测试图片
+## 5. 下载模型文件和测试图片
> https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz # (下载后解压缩)
> https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

-## 3. 编译示例代码
+## 6. 编译示例代码

本文档编译的示例代码可在解压的库中找到,编译工具依赖VS 2019的安装,**Windows打开x64 Native Tools Command Prompt for VS 2019命令工具**,通过如下命令开始编译

@@ -35,10 +40,10 @@ msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```
关于使用Visual Studio 2019创建sln工程,或者CMake工程等方式编译的更详细信息,可参考如下文档
-- [在 Windows 使用 FastDeploy C++ SDK](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
-- [FastDeploy C++库在Windows上的多种使用方式](../../../../../../docs/cn/faq/use_sdk_on_windows_build.md)
+- [在 Windows 使用 FastDeploy C++ SDK](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)
+- [FastDeploy C++库在Windows上的多种使用方式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows_build.md)

-## 4. 运行可执行程序
+## 7. 
运行可执行程序 注意Windows上运行时,需要将FastDeploy依赖的库拷贝至可执行程序所在目录, 或者配置环境变量。FastDeploy提供了工具帮助我们快速将所有依赖库拷贝至可执行程序所在目录,通过如下命令将所有依赖的dll文件拷贝至可执行程序所在的目录(可能生成的可执行文件在Release下还有一层目录,这里假设生成的可执行文件在Release处) ```shell @@ -56,7 +61,7 @@ infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.pn infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 1 ``` -## PaddleSeg C#接口 +## 8. PaddleSeg C#接口 ### 模型 @@ -93,10 +98,11 @@ fastdeploy.SegmentationResult Predict(OpenCvSharp.Mat im) >> > **返回值** > ->> * **result**: Segmentation检测结果,SegmentationResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/) +>> * **result**: Segmentation检测结果,SegmentationResult说明参考[视觉模型预测结果](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/) +## 9. 常见问题 -- [模型介绍](../../) -- [Python部署](../python) -- [视觉模型预测结果](../../../../../../docs/api/vision_results/) -- [如何切换模型推理后端引擎](../../../../../../docs/cn/faq/how_to_change_backend.md) +- [PPSegmentation 系列模型介绍](../../) +- [PaddleSeg Python部署](../python) +- [模型预测结果说明](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/) +- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/csharp/infer.cs b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/infer.cs similarity index 100% rename from examples/vision/segmentation/paddleseg/cpu-gpu/csharp/infer.cs rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/csharp/infer.cs diff --git a/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/python/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/python/README.md new file mode 100644 index 0000000000..16177f69e4 --- /dev/null +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/python/README.md @@ -0,0 +1,70 @@ +[English](README.md) | 简体中文 +# PaddleSeg 
CPU-GPU Python部署示例
+本目录下提供`infer.py`快速完成PP-LiteSeg在CPU/GPU,以及GPU上通过Paddle-TensorRT加速部署的示例。执行如下脚本即可完成部署
+
+## 1. 说明
+PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上快速部署Segmentation模型。
+
+## 2. 部署环境准备
+在部署前,需确认软硬件环境,同时下载预编译部署库,参考[FastDeploy安装文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)安装FastDeploy预编译库。
+
+## 3. 部署模型准备
+在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md),如果你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)。
+
+## 4. 运行部署示例
+```bash
+# 安装FastDeploy python包(详细文档请参考`部署环境准备`)
+pip install fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
+conda config --add channels conda-forge && conda install cudatoolkit=11.2 cudnn=8.2
+
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/semantic_segmentation/cpu-gpu/python
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/cpu-gpu/python
+
+# 下载PP-LiteSeg模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
+tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+
+# 运行部署示例
+# CPU推理
+python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
+# GPU推理
+python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
+# GPU上使用Paddle-TensorRT推理 (注意:Paddle-TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
+```
+
+运行完成可视化结果如下图所示
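+补充示意:上述命令中的`--device`与`--use_trt`共同决定`infer.py`内`build_option`选择的推理后端。下面用一段不依赖fastdeploy的纯Python示意这一分支逻辑,其中`use_gpu`、`use_trt_backend`等名称仅为示意,实际方法名以FastDeploy API文档为准:
+
+```python
+# 纯Python示意:模拟 infer.py 中 build_option 根据命令行参数选择后端的分支关系。
+# 真实代码中这些调用作用在 fastdeploy.RuntimeOption 对象上,此处仅记录调用名。
+
+def build_option_calls(device: str, use_trt: bool) -> list:
+    """返回推理选项上会依次触发的配置调用(示意)。"""
+    calls = []
+    if device == "gpu":
+        calls.append("use_gpu")  # GPU 推理
+        if use_trt:
+            # GPU 上启用 Paddle-TensorRT;首次运行会序列化模型,耗时较长
+            calls.append("use_trt_backend")
+    else:
+        calls.append("use_cpu")  # 默认 CPU 推理(x86/ARM 等)
+    return calls
+
+print(build_option_calls("gpu", True))   # ['use_gpu', 'use_trt_backend']
+print(build_option_calls("cpu", False))  # ['use_cpu']
+```
+
+可以看到,`--use_trt`只在`--device gpu`时生效,这与下文参数表的说明一致。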
+ +
+ +## 5. 部署示例选项说明 + +|参数|含义|默认值 +|---|---|---| +|--model|指定模型文件夹所在的路径|None| +|--image|指定测试图片所在的路径|None| +|--device|指定即将运行的硬件类型,支持的值为`[cpu, gpu]`,当设置为cpu时,可运行在x86 cpu/arm cpu等cpu上|cpu| +|--use_trt|是否使用trt,该项只在device为gpu时有效|False| + +关于如何通过FastDeploy使用更多不同的推理后端,以及如何使用不同的硬件,请参考文档:[如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md) + +## 6. 更多指南 +- [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html) +- [FastDeploy部署PaddleSeg模型概览](..) +- [PaddleSeg C++部署](../cpp) + +## 7. 常见问题 +- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md) +- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md) +- [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md) +- [编译CPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md) +- [编译GPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md) +- [编译Jetson部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md) diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/python/infer.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/python/infer.py similarity index 95% rename from examples/vision/segmentation/paddleseg/cpu-gpu/python/infer.py rename to examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/python/infer.py index d90f6eb4c5..930347595b 100755 --- a/examples/vision/segmentation/paddleseg/cpu-gpu/python/infer.py +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/cpu-gpu/python/infer.py @@ -43,7 +43,7 @@ def build_option(args): args = parse_arguments() -# 配置runtime,加载模型 +# settting for runtime runtime_option = build_option(args) model_file = 
os.path.join(args.model, "model.pdmodel") params_file = os.path.join(args.model, "model.pdiparams") @@ -51,11 +51,11 @@ config_file = os.path.join(args.model, "deploy.yaml") model = fd.vision.segmentation.PaddleSegModel( model_file, params_file, config_file, runtime_option=runtime_option) -# 预测图片分割结果 +# predict im = cv2.imread(args.image) result = model.predict(im) print(result) -# 可视化结果 +# visualize vis_im = fd.vision.vis_segmentation(im, result, weight=0.5) cv2.imwrite("vis_img.png", vis_im) diff --git a/examples/vision/segmentation/paddleseg/kunlun/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/README.md similarity index 78% rename from examples/vision/segmentation/paddleseg/kunlun/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/README.md index 9b6adfadd6..187bd91c03 100644 --- a/examples/vision/segmentation/paddleseg/kunlun/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/README.md @@ -1,8 +1,10 @@ [English](README.md) | 简体中文 -# PaddleSeg利用FastDeploy在昆仑芯上部署模型 +# PaddleSeg 语义分割模型在昆仑芯上部署方案-FastDeploy + +## 1. 
说明 +PaddleSeg支持利用FastDeploy在昆仑芯片上部署Segmentation模型。 -## PaddleSeg支持部署的昆仑芯的芯片型号 支持如下芯片的部署 - 昆仑 818-100(推理芯片) - 昆仑 818-300(训练芯片) @@ -11,38 +13,7 @@ - K100/K200 昆仑 AI 加速卡 - R200 昆仑芯 AI 加速卡 - -PaddleSeg支持利用FastDeploy在昆仑芯片上部署Segmentation模型 - -## 模型版本说明 - -- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) ->> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型 - -目前FastDeploy支持如下模型的部署 - -- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md) -- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) -- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) -- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) -- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) -- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) - ->>**注意** 若需要在华为昇腾上部署**PP-Matting**、**PP-HumanMatting**请从[Matting模型部署](../../../ppmating/)下载对应模型,部署过程与此文档一致 - -## 准备PaddleSeg部署模型 -PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) - -**注意** -- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 - -## 预导出的推理模型 - -为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型 -- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none` -- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax` - -开发者可直接下载使用。 +## 2. 
使用预导出的模型列表 | 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) | |:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- | @@ -57,7 +28,26 @@ PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.co | [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% | | [SegFormer_B0-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-with-argmax.tgz) \| [SegFormer_B0-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-without-argmax.tgz) | 15MB | 1024x1024 | 76.73% | 77.16% | - | -## 详细部署文档 +补充说明: +- 文件名标记了`without-argmax`的模型,导出方式为:**不指定**`--input_shape`,**指定**`--output_op none` +- 文件名标记了`with-argmax`的模型导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax` +## 3. 
自行导出PaddleSeg部署模型 +### 3.1 模型版本 +支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)高于2.6版本的Segmentation模型,如果部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../matting/)。目前FastDeploy测试过成功部署的模型: +- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md) +- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) +- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) +- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) +- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) +- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) + +### 3.2 模型导出 +PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 + +### 3.3 导出须知 +请参考[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)中`output_op`参数的说明,获取您部署所需的模型,比如是否带`argmax`或`softmax`算子 + +## 4. 
详细的部署示例
- [Python部署](python)
- [C++部署](cpp)
diff --git a/examples/vision/segmentation/paddleseg/kunlun/cpp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/cpp/CMakeLists.txt
similarity index 75%
rename from examples/vision/segmentation/paddleseg/kunlun/cpp/CMakeLists.txt
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/cpp/CMakeLists.txt
index 93540a7e83..776d832f91 100644
--- a/examples/vision/segmentation/paddleseg/kunlun/cpp/CMakeLists.txt
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/cpp/CMakeLists.txt
@@ -1,14 +1,11 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

-# 指定下载解压后的fastdeploy库路径
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

-# 添加FastDeploy依赖头文件
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
-# 添加FastDeploy库依赖
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/segmentation/paddleseg/kunlun/cpp/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/cpp/README.md
similarity index 50%
rename from examples/vision/segmentation/paddleseg/kunlun/cpp/README.md
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/cpp/README.md
index c5b20ec998..2592fc91e0 100644
--- a/examples/vision/segmentation/paddleseg/kunlun/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/cpp/README.md
@@ -1,21 +1,30 @@
[English](README.md) | 简体中文
-# PaddleSeg C++部署示例
+# PaddleSeg XPU C++部署示例

-本目录下提供`infer.cc`快速完成PP-LiteSeg在华为昇腾上部署的示例。
+本目录下提供`infer.cc`快速完成PP-LiteSeg在昆仑芯 XPU 上部署的示例。

-## 昆仑芯XPU编译FastDeploy环境准备
+## 1. 
部署环境准备 在部署前,需自行编译基于昆仑芯XPU的预测库,参考文档[昆仑芯XPU部署环境编译安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装) ->>**注意** **PP-Matting**、**PP-HumanMatting**的模型,请从[Matting模型部署](../../../../matting)下载 +## 2. 部署模型准备 +在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md),如果你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)。 +## 3. 运行部署示例 +以Linux上推理为例,在本目录执行如下命令即可完成编译测试。 ```bash -#下载部署示例代码 -cd path/to/paddleseg/ascend/cpp +# 下载部署示例代码 +git clone https://github.com/PaddlePaddle/FastDeploy.git +cd FastDeploy/examples/vision/segmentation/semantic_segmentation/kunlun/cpp +# 如果您希望从PaddleSeg下载示例代码,请运行 +# git clone https://github.com/PaddlePaddle/PaddleSeg.git +# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支 +# # git checkout develop +# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/kunlun/cpp mkdir build cd build # 使用编译完成的FastDeploy库编译infer_demo -cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-ascend +cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-kunlunxin make -j # 下载PP-LiteSeg模型文件和测试图片 @@ -32,8 +41,7 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png -## 快速链接 -how_to_change_backend.md) +## 4. 
更多指南
- [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [FastDeploy部署PaddleSeg模型概览](../../)
- [Python部署](../python)
diff --git a/examples/vision/segmentation/paddleseg/kunlun/cpp/infer.cc b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/cpp/infer.cc
similarity index 100%
rename from examples/vision/segmentation/paddleseg/kunlun/cpp/infer.cc
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/cpp/infer.cc
diff --git a/examples/vision/segmentation/paddleseg/kunlun/python/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/python/README.md
similarity index 55%
rename from examples/vision/segmentation/paddleseg/kunlun/python/README.md
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/python/README.md
index aee36bffe1..cd8618206f 100644
--- a/examples/vision/segmentation/paddleseg/kunlun/python/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/python/README.md
@@ -1,18 +1,25 @@
[English](README.md) | 简体中文
-# PaddleSeg Python部署示例
+# PaddleSeg XPU Python部署示例

-本目录下提供`infer.py`快速完成PP-LiteSeg在华为昇腾上部署的示例。
+本目录下提供`infer.py`快速完成PP-LiteSeg在昆仑芯 XPU上部署的示例。

-## 昆仑XPU编译FastDeploy wheel包环境准备
+## 1. 部署环境准备
在部署前,需自行编译基于昆仑XPU的FastDeploy python wheel包并安装,参考文档[昆仑芯XPU部署环境](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

->>**注意** **PP-Matting**、**PP-HumanMatting**的模型,请从[Matting模型部署](../../../ppmatting)下载
-
+## 2. 部署模型准备
+在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md),如果你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)。
+## 3. 
运行部署示例 ```bash -#下载部署示例代码 -cd path/to/paddleseg/ascend/cpp +# 下载部署示例代码 +git clone https://github.com/PaddlePaddle/FastDeploy.git +cd FastDeploy/examples/vision/segmentation/semantic_segmentation/kunlun/python +# 如果您希望从PaddleSeg下载示例代码,请运行 +# git clone https://github.com/PaddlePaddle/PaddleSeg.git +# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支 +# # git checkout develop +# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/kunlun/python # 下载PP-LiteSeg模型文件和测试图片 wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz @@ -28,10 +35,10 @@ python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --ima -## 快速链接 +## 4. 更多指南 - [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html) - [FastDeploy部署PaddleSeg模型概览](..) - [PaddleSeg C++部署](../cpp) -## 常见问题 +## 5. 常见问题 - [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md) diff --git a/examples/vision/segmentation/paddleseg/kunlun/python/infer.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/python/infer.py similarity index 92% rename from examples/vision/segmentation/paddleseg/kunlun/python/infer.py rename to examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/python/infer.py index bfbde415f6..f80e1cb6f1 100755 --- a/examples/vision/segmentation/paddleseg/kunlun/python/infer.py +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/kunlun/python/infer.py @@ -17,18 +17,18 @@ def parse_arguments(): runtime_option = fd.RuntimeOption() runtime_option.use_kunlunxin() -# 配置runtime,加载模型 +# setup runtime model_file = os.path.join(args.model, "model.pdmodel") params_file = os.path.join(args.model, "model.pdiparams") config_file = os.path.join(args.model, "deploy.yaml") model = fd.vision.segmentation.PaddleSegModel( model_file, params_file, config_file, 
runtime_option=runtime_option)

-# 预测图片分割结果
+# predict
im = cv2.imread(args.image)
result = model.predict(im)
print(result)

-# 可视化结果
+# visualize
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_img.png", vis_im)
diff --git a/examples/vision/segmentation/paddleseg/quantize/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/quantize/README.md
similarity index 53%
rename from examples/vision/segmentation/paddleseg/quantize/README.md
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/quantize/README.md
index 067f34cd6d..f84e201651 100644
--- a/examples/vision/segmentation/paddleseg/quantize/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/quantize/README.md
@@ -1,14 +1,17 @@
-[English](README.md) | 简体中文
-# PaddleSeg 量化模型部署
+[English](README.md) | 简体中文
+
+# PaddleSeg 量化模型部署-FastDeploy
+
FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.

-## FastDeploy一键模型自动化压缩工具
-FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
-详细教程请见: [一键模型自动化压缩工具](https://github.com/PaddlePaddle/FastDeploy/tree/develop/tools/common_tools/auto_compression)
->> **注意**: 推理量化后的分类模型仍然需要FP32模型文件夹下的deploy.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。
+## 1. FastDeploy一键模型自动化压缩工具
+
+FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
+详细教程请见: [一键模型自动化压缩工具](https://github.com/PaddlePaddle/FastDeploy/tree/develop/tools/common_tools/auto_compression)。**注意**: 推理量化后的分割模型仍然需要FP32模型文件夹下的deploy.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。
+
+## 2. 量化完成的PaddleSeg模型

-## 量化完成的PaddleSeg模型
用户也可以直接下载下表中的量化模型进行部署.(点击模型名字即可下载)

| 模型 | 量化方式 |
@@ -17,11 +20,20 @@
量化后模型的Benchmark比较,请参考[量化模型 Benchmark](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/quantize.md)

-## 支持部署量化模型的硬件
+## 3. 
支持部署量化模型的硬件 + FastDeploy 量化模型部署的过程大致都与FP32模型类似,只是模型量化与非量化的区别,如果硬件在量化模型部署过程有特殊处理,也会在文档中特别标明,因此量化模型部署可以参考如下硬件的链接 -| 硬件支持列表 | | | | -|:----- | :-- | :-- | :-- | -| [NVIDIA GPU](../cpu-gpu) | [X86 CPU](../cpu-gpu)| [飞腾CPU](../cpu-gpu) | [ARM CPU](../cpu-gpu) | -| [Intel GPU(独立显卡/集成显卡)](../cpu-gpu) | [昆仑](../kunlun) | [昇腾](../ascend) | [瑞芯微](../rockchip) | -| [晶晨](../amlogic) | [算能](../sophgo) | +|硬件类型|该硬件是否支持|使用指南|Python|C++| +|:---:|:---:|:---:|:---:|:---:| +|X86 CPU|✅|[链接](cpu-gpu)|✅|✅| +|NVIDIA GPU|✅|[链接](cpu-gpu)|✅|✅| +|飞腾CPU|✅|[链接](cpu-gpu)|✅|✅| +|ARM CPU|✅|[链接](cpu-gpu)|✅|✅| +|Intel GPU(集成显卡)|✅|[链接](cpu-gpu)|✅|✅| +|Intel GPU(独立显卡)|✅|[链接](cpu-gpu)|✅|✅| +|昆仑|✅|[链接](kunlun)|✅|✅| +|昇腾|✅|[链接](ascend)|✅|✅| +|瑞芯微|✅|[链接](rockchip)|✅|✅| +|晶晨|✅|[链接](amlogic)|--|✅| +|算能|✅|[链接](sophgo)|✅|✅| diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/README.md similarity index 78% rename from examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/README.md index a536630e30..84f97f101a 100644 --- a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/README.md @@ -1,7 +1,8 @@ [English](README.md) | 简体中文 -# PaddleSeg利用FastDeploy基于RKNPU2部署Segmentation模型 +# PaddleSeg 语义分割模型RKNPU2部署方案-FastDeploy +## 1. 
说明
RKNPU2 提供了一个高性能接口来访问 Rockchip NPU,支持如下硬件的部署
- RK3566/RK3568
- RK3588/RK3588S
@@ -9,31 +10,7 @@

本示例基于 RK3588 来介绍如何使用 FastDeploy 部署 PaddleSeg 模型

-## 模型版本说明
-
-- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
->> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
-
-目前FastDeploy使用RKNPU2推理PaddleSeg支持如下模型的部署:
-- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md)
-- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
-- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md)
-- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md)
-- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
-
-## 准备PaddleSeg部署模型
-PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
-
-**注意**
-- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
-
-## 下载预训练模型
-
-为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型
-- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none`
-- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax`
-
-开发者可直接下载使用。
+## 2. 使用预导出的模型列表

| 模型 | 参数文件大小 | 输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:----------------|:-------|:---------|:-------|:------------|:---------------|
@@ -47,19 +24,35 @@ PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.co
| [FCN-HRNet-W18-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |

-## 准备PaddleSeg部署模型以及转换模型
+
+## 3. 
自行导出PaddleSeg部署模型 + +### 3.1 模型版本 +支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)高于2.6版本的Segmentation模型。目前FastDeploy测试过成功部署的模型: +- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md) +- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) +- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) +- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) +- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) +- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) + +### 3.2 模型导出 +PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 + +### 3.3 导出须知 +请参考[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)中`output_op`参数的说明,获取您部署所需的模型,比如是否带`argmax`或`softmax`算子 + +### 3.4 转换为RKNN模型 RKNPU部署模型前需要将Paddle模型转换成RKNN模型,具体步骤如下: * PaddleSeg训练模型导出为推理模型,请参考[PaddleSeg模型导出说明](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),也可以使用上表中的FastDeploy的预导出模型 * Paddle模型转换为ONNX模型,请参考[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) * ONNX模型转换RKNN模型的过程,请参考[转换文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/export.md)进行转换。 -上述步骤可参考以下具体示例 - -## 模型转换示例 +上述步骤可参考以下具体示例,模型转换示例: * [PP-HumanSeg](./pp_humanseg.md) -## 详细部署文档 +## 4. 
详细的部署示例 - [RKNN总体部署教程](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md) - [C++部署](cpp) - [Python部署](python) diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/CMakeLists.txt similarity index 75% rename from examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/CMakeLists.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/CMakeLists.txt index b723e4691e..a46b11f813 100644 --- a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/CMakeLists.txt +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/CMakeLists.txt @@ -3,10 +3,9 @@ project(infer_demo) set(CMAKE_CXX_STANDARD 14) -# 指定下载解压后的fastdeploy库路径 option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.") include(${FASTDEPLOY_INSTALL_DIR}/FastDeployConfig.cmake) include_directories(${FastDeploy_INCLUDE_DIRS}) add_executable(infer_demo infer.cc) -target_link_libraries(infer_demo ${FastDeploy_LIBS}) \ No newline at end of file +target_link_libraries(infer_demo ${FastDeploy_LIBS}) diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/README.md similarity index 65% rename from examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/README.md index 6bb08d5020..a35952df49 100644 --- a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/README.md @@ -1,8 +1,9 @@ [English](README.md) | 简体中文 -# PaddleSeg C++部署示例 +# PaddleSeg RKNPU2 C++部署示例 本目录下用于展示PaddleSeg系列模型在RKNPU2上的部署,以下的部署过程以PPHumanSeg为例子。 +## 1. 部署环境准备 在部署前,需确认以下两个步骤: 1. 
软硬件环境满足要求
@@ -10,17 +11,23 @@

以上步骤请参考[RK2代NPU部署库编译](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)实现

-## 转换模型
+## 2. 部署模型准备
模型转换代码请参考[模型转换文档](../README.md)

-## 编译SDK
-
-请参考[RK2代NPU部署库编译](../../../../../../../docs/cn/faq/rknpu2/build.md)编译SDK.
-
-### 编译example
+## 3. 运行部署示例
```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/semantic_segmentation/rockchip/rknpu2/cpp
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/rockchip/rknpu2/cpp
+
+# 编译部署示例
mkdir build && cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j8
@@ -28,10 +35,11 @@ make -j8

wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip -qo images.zip

+# 运行部署示例
./infer_demo model/Portrait_PP_HumanSegV2_Lite_256x144_infer/ images/portrait_heng.jpg
```

-## 注意事项
+## 4. 更多指南
RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用`DisableNormalizeAndPermute`(C++)或`disable_normalize_and_permute`(Python),在预处理阶段禁用归一化以及数据格式的转换。

- [模型介绍](../../)
diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/infer.cc b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/infer.cc
similarity index 89%
rename from examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/infer.cc
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/infer.cc
index 42b38ba30f..fd1c131bf2 100644
--- a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/cpp/infer.cc
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/cpp/infer.cc
@@ -13,10 +13,12 @@
// limitations under the License.
#include #include + #include "fastdeploy/vision.h" void ONNXInfer(const std::string& model_dir, const std::string& image_file) { - std::string model_file = model_dir + "/Portrait_PP_HumanSegV2_Lite_256x144_infer.onnx"; + std::string model_file = + model_dir + "/Portrait_PP_HumanSegV2_Lite_256x144_infer.onnx"; std::string params_file; std::string config_file = model_dir + "/deploy.yaml"; auto option = fastdeploy::RuntimeOption(); @@ -43,13 +45,12 @@ void ONNXInfer(const std::string& model_dir, const std::string& image_file) { tc.PrintInfo("PPSeg in ONNX"); cv::imwrite("infer_onnx.jpg", vis_im); - std::cout - << "Visualized result saved in ./infer_onnx.jpg" - << std::endl; + std::cout << "Visualized result saved in ./infer_onnx.jpg" << std::endl; } void RKNPU2Infer(const std::string& model_dir, const std::string& image_file) { - std::string model_file = model_dir + "/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn"; + std::string model_file = + model_dir + "/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn"; std::string params_file; std::string config_file = model_dir + "/deploy.yaml"; auto option = fastdeploy::RuntimeOption(); @@ -78,9 +79,7 @@ void RKNPU2Infer(const std::string& model_dir, const std::string& image_file) { tc.PrintInfo("PPSeg in RKNPU2"); cv::imwrite("infer_rknn.jpg", vis_im); - std::cout - << "Visualized result saved in ./infer_rknn.jpg" - << std::endl; + std::cout << "Visualized result saved in ./infer_rknn.jpg" << std::endl; } int main(int argc, char* argv[]) { @@ -99,4 +98,3 @@ int main(int argc, char* argv[]) { } return 0; } - diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/pp_humanseg.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/pp_humanseg.md similarity index 100% rename from examples/vision/segmentation/paddleseg/rockchip/rknpu2/pp_humanseg.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/pp_humanseg.md diff --git 
a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/pp_humanseg_EN.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/pp_humanseg_EN.md similarity index 100% rename from examples/vision/segmentation/paddleseg/rockchip/rknpu2/pp_humanseg_EN.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/pp_humanseg_EN.md diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/python/README.md similarity index 60% rename from examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/python/README.md index 288409e196..539893e4a4 100644 --- a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/python/README.md @@ -1,38 +1,49 @@ [English](README.md) | 简体中文 -# PaddleSeg Python部署示例 +# PaddleSeg RKNPU2 Python部署示例 +本目录下用于展示PaddleSeg系列模型在RKNPU2上的部署,以下的部署过程以PPHumanSeg为例子。 + +## 1. 部署环境准备 在部署前,需确认以下步骤 - 1. 软硬件环境满足要求,RKNPU2环境部署等参考[FastDeploy环境要求](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md) -【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../../../matting/) +## 2. 部署模型准备 + +模型转换代码请参考[模型转换文档](../README.md) + +## 3. 
运行部署示例
本目录下提供`infer.py`快速完成PP-HumanSeg在RKNPU上部署的示例。执行如下脚本即可完成部署

```bash
# 下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python
+cd FastDeploy/examples/vision/segmentation/semantic_segmentation/rockchip/rknpu2/python
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/rockchip/rknpu2/python

# 下载图片
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip images.zip

-# 推理
+# 运行部署示例
python3 infer.py --model_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn \
                 --config_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml \
                 --image images/portrait_heng.jpg
```

-
-## 注意事项
+## 4. 注意事项
RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用`DisableNormalizeAndPermute`(C++)或`disable_normalize_and_permute`(Python),在预处理阶段禁用归一化以及数据格式的转换。

-## 快速链接
+## 5. 更多指南
- [FastDeploy部署PaddleSeg模型概览](..)
-- [PaddleSeg C++部署](../cpp)
-- [转换PaddleSeg模型至RKNN模型文档](../README.md#准备paddleseg部署模型以及转换模型)
+- [PaddleSeg RKNPU2 C++部署](../cpp)
+- [转换PaddleSeg模型至RKNN模型文档](../README.md)

-## 常见问题
+## 6. 
常见问题
- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/infer.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/python/infer.py
similarity index 95%
rename from examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/infer.py
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/python/infer.py
index 193a6dfb9b..4634085f6f 100644
--- a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/infer.py
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rknpu2/python/infer.py
@@ -37,7 +37,7 @@ def build_option(args):

args = parse_arguments()

-# 配置runtime,加载模型
+# setup runtime
runtime_option = build_option(args)
model_file = args.model_file
params_file = ""
@@ -52,11 +52,11 @@ model = fd.vision.segmentation.PaddleSegModel(
model.preprocessor.disable_normalize()
model.preprocessor.disable_permute()

-# 预测图片分割结果
+# predict
im = cv2.imread(args.image)
result = model.predict(im)
print(result)

-# 可视化结果
+# visualize
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_img.png", vis_im)
diff --git a/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/README.md
new file mode 100644
index 0000000000..ca790216d5
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/README.md
@@ -0,0 +1,39 @@
+[English](README.md) | 简体中文
+
+# PaddleSeg 语义分割模型瑞芯微NPU部署方案-FastDeploy
+
+## 1. 说明
+本示例基于RV1126来介绍如何使用FastDeploy部署PaddleSeg模型,支持如下芯片的部署:
+- Rockchip RV1109
+- Rockchip RV1126
+- Rockchip RK1808
+
+## 2. 
预导出的量化推理模型
+为了方便开发者的测试,下面提供了PaddleSeg导出的部分量化后的推理模型,开发者可直接下载使用。
+
+| 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
+|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
+| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
+**注意**
+- PaddleSeg量化模型包含`model.pdmodel`、`model.pdiparams`、`deploy.yaml`和`subgraph.txt`四个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息,subgraph.txt是为了异构计算而存储的配置文件
+
+## 3. 自行导出RV1126支持的INT8模型
+
+### 3.1 模型版本
+支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)高于2.6版本的Segmentation模型。目前FastDeploy测试过成功部署的模型:
+- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
+
+### 3.2 模型导出
+PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
+
+### 3.3 导出须知
+请参考[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)中`output_op`参数的说明,获取您部署所需的模型,比如是否带`argmax`或`softmax`算子
+
+### 3.4 转换为RV1126支持的INT8模型
+瑞芯微RV1126仅支持INT8,将推理模型量化压缩为INT8模型,FastDeploy模型量化的方法及一键自动化压缩工具可以参考[模型量化](../../../quantize/README.md)
+
+## 4. 
详细的部署示例 + +目前,瑞芯微 RV1126 上只支持C++的部署。 + +- [C++部署](cpp) diff --git a/examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/CMakeLists.txt similarity index 89% rename from examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/CMakeLists.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/CMakeLists.txt index 64b7a64661..af493f6b67 100755 --- a/examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/CMakeLists.txt +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/CMakeLists.txt @@ -1,17 +1,14 @@ PROJECT(infer_demo C CXX) CMAKE_MINIMUM_REQUIRED (VERSION 3.10) -# 指定下载解压后的fastdeploy库路径 option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.") include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake) -# 添加FastDeploy依赖头文件 include_directories(${FASTDEPLOY_INCS}) include_directories(${FastDeploy_INCLUDE_DIRS}) add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc) -# 添加FastDeploy库依赖 target_link_libraries(infer_demo ${FASTDEPLOY_LIBS}) set(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/build/install) diff --git a/examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/README.md similarity index 76% rename from examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/README.md index 0b47d04b9f..c57f400812 100644 --- a/examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/README.md @@ -1,24 +1,31 @@ [English](README.md) | 简体中文 -# PP-LiteSeg 量化模型 C++ 部署示例 +# PaddleSeg 量化模型 RV1126 C++ 部署示例 本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-LiteSeg 量化模型在 RV1126 上的部署推理加速。 -## 部署准备 -### FastDeploy 交叉编译环境准备 +## 1. 
部署环境准备
+### 1.1 FastDeploy 交叉编译环境准备
软硬件环境满足要求,以及交叉编译环境的准备,请参考:[瑞芯微RV1126部署环境](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

-### 模型准备
-1. 用户可以直接使用由[FastDeploy 提供的量化模型](../README.md#瑞芯微-rv1126-支持的paddleseg模型)进行部署。
-2. 若FastDeploy没有提供满足要求的量化模型,用户可以参考[PaddleSeg动态图模型导出为RV1126支持的INT8模型](../README.md#paddleseg动态图模型导出为rv1126支持的int8模型)自行导出或训练量化模型
+## 2. 部署模型准备
+1. 用户可以直接使用由[FastDeploy 提供的量化模型](../README.md)进行部署。
+2. 若FastDeploy没有提供满足要求的量化模型,用户可以参考[PaddleSeg动态图模型导出为RV1126支持的INT8模型](../README.md)自行导出或训练量化模型
3. 若上述导出或训练的模型出现精度下降或者报错,则需要使用异构计算,使得模型算子部分跑在RV1126的ARM CPU上进行调试以及精度验证,其中异构计算所需的文件是subgraph.txt。具体关于异构计算可参考:[异构计算](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md)。

-## 在 RV1126 上部署量化后的 PP-LiteSeg 分割模型
+## 3. 运行部署示例
请按照以下步骤完成在 RV1126 上部署 PP-LiteSeg 量化模型:
1. 交叉编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/a311d.md#基于-paddle-lite-的-fastdeploy-交叉编译库编译)

2. 将编译后的库拷贝到当前目录,可使用如下命令:
```bash
-cp -r FastDeploy/build/fastdeploy-timvx/ path/to/paddleseg/rockchip/rv1126/cpp
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cp -r FastDeploy/build/fastdeploy-timvx/ PaddleSeg/deploy/fastdeploy/semantic_segmentation/rockchip/rv1126/cpp
```

3. 在当前路径下载部署所需的模型和示例图片:
@@ -54,6 +61,6 @@ bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID

-## 快速链接
+## 4. 
更多指南 - [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html) - [FastDeploy部署PaddleSeg模型概览](../../) diff --git a/examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/infer.cc b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/infer.cc similarity index 100% rename from examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/infer.cc rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/infer.cc diff --git a/examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/run_with_adb.sh b/examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/run_with_adb.sh similarity index 100% rename from examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/run_with_adb.sh rename to examples/vision/segmentation/paddleseg/semantic_segmentation/rockchip/rv1126/cpp/run_with_adb.sh diff --git a/examples/vision/segmentation/paddleseg/serving/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/README.md similarity index 81% rename from examples/vision/segmentation/paddleseg/serving/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/README.md index 00074183d2..d7cde2af26 100644 --- a/examples/vision/segmentation/paddleseg/serving/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/README.md @@ -1,39 +1,11 @@ [English](README.md) | 简体中文 # PaddleSeg 使用 FastDeploy 服务化部署 Segmentation 模型 -## FastDeploy 服务化部署介绍 +## 1. 
FastDeploy 服务化部署介绍 在线推理作为企业或个人线上部署模型的最后一环,是工业界必不可少的环节,其中最重要的就是服务化推理框架。FastDeploy 目前提供两种服务化部署方式:simple_serving和fastdeploy_serving - simple_serving:适用于只需要通过http等调用AI推理任务,没有高并发需求的场景。simple_serving基于Flask框架具有简单高效的特点,可以快速验证线上部署模型的可行性 - fastdeploy_serving:适用于高并发、高吞吐量请求的场景。基于Triton Inference Server框架,是一套可用于实际生产的完备且性能卓越的服务化部署框架 -## 模型版本说明 - -- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) ->> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型 - -目前FastDeploy支持如下模型的部署 - -- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md) -- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) -- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) -- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) -- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) -- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) - ->>**注意** 如部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../ppmatting) - -## 准备PaddleSeg部署模型 -PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) - -**注意** -- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 - -## 预导出的推理模型 - -为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型 -- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none` -- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax` - -开发者可直接下载使用。 +## 2. 
使用预导出的模型列表 | 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) | |:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- | @@ -48,7 +20,27 @@ PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.co | [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% | | [SegFormer_B0-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-with-argmax.tgz) \| [SegFormer_B0-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-without-argmax.tgz) | 15MB | 1024x1024 | 76.73% | 77.16% | - | -## 详细部署文档 +补充说明: +- 文件名标记了`without-argmax`的模型,导出方式为:**不指定**`--input_shape`,**指定**`--output_op none` +- 文件名标记了`with-argmax`的模型导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax` + +## 3. 
自行导出PaddleSeg部署模型 +### 3.1 模型版本 +支持[PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)高于2.6版本的Segmentation模型,如果部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../matting/)。目前FastDeploy测试过成功部署的模型: +- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md) +- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) +- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md) +- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md) +- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md) +- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md) + +### 3.2 模型导出 +PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),**注意**:PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息 + +### 3.3 导出须知 +请参考[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)中`output_op`参数的说明,获取您部署所需的模型,比如是否带`argmax`或`softmax`算子 + +## 4. 
详细的部署示例 - [fastdeploy serving](fastdeploy_serving) - [simple serving](simple_serving) diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer/deploy.yaml b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer/deploy.yaml similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer/deploy.yaml rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer/deploy.yaml diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/README.md similarity index 59% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/README.md index e48c27c5d2..02b83ca825 100644 --- a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/README.md @@ -1,31 +1,40 @@ English | [简体中文](README_CN.md) # PaddleSeg Serving Deployment Demo -The PaddleSeg serving deployment Demo is built with FastDeploy Serving. FastDeploy Serving is a service-oriented deployment framework suitable for high-concurrency and high-throughput requests encapsulated based on the Triton Inference Server framework. It is a complete and high-performance service-oriented deployment framework that can be used in actual production. 
If you don’t need high-concurrency and high-throughput scenarios, and just want to quickly test the feasibility of online deployment of the model, please refer to [fastdeploy_serving](../simple_serving/) +The PaddleSeg serving deployment Demo is built with FastDeploy Serving. FastDeploy Serving is a service-oriented deployment framework suitable for high-concurrency and high-throughput requests encapsulated based on the Triton Inference Server framework. It is a complete and high-performance service-oriented deployment framework that can be used in actual production. If you don’t need high-concurrency and high-throughput scenarios, and just want to quickly test the feasibility of online deployment of the model, please refer to [simple_serving](../simple_serving/) -## Environment +## 1. Environment Before serving deployment, it is necessary to confirm the hardware and software environment requirements of the service image and the image pull command, please refer to [FastDeploy service deployment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README.md) -## Launch Serving +## 2. 
Launch Serving
```bash
# Download demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/serving
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving
+# If you want to download the demo code from PaddleSeg repo, please run
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # Note: If the current branch cannot find the following fastdeploy test code, switch to the develop branch
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/serving/fastdeploy_serving

-#Download PP_LiteSeg model file
+# Download PP_LiteSeg model file
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz

-# Move the model files to models/infer/1
-mv yolov5s.onnx models/infer/1/
+# Move the model files to models/runtime/1
+mv PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer/model.pdmodel models/runtime/1/
+mv PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer/model.pdiparams models/runtime/1/

# Pull fastdeploy image, x.y.z is FastDeploy version, example 1.0.2.
-docker pull paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
+# GPU image
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
+# CPU image
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-cpu-only-21.10

# Run the docker. 
The docker name is fd_serving, and the current directory is mounted as the docker's /serving directory -nvidia-docker run -it --net=host --name fd_serving -v `pwd`/:/serving paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash +nvidia-docker run -it --net=host --name fd_serving -v `pwd`/:/serving registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash # Start the service (Without setting the CUDA_VISIBLE_DEVICES environment variable, it will have scheduling privileges for all GPU cards) CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models --backend-config=python,shm-default-byte-size=10485760 @@ -33,22 +42,22 @@ CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models --bac Output the following contents if serving is launched -``` +```bash ...... I0928 04:51:15.784517 206 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001 I0928 04:51:15.785177 206 http_server.cc:2815] Started HTTPService at 0.0.0.0:8000 I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002 ``` -## Client Requests +## 3. Client Requests Execute the following command in the physical machine to send a grpc request and output the result -``` -#Download test images +```bash +# Download test images wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png -#Installing client-side dependencies +# Installing client-side dependencies python3 -m pip install tritonclient\[all\] # Send requests @@ -57,12 +66,19 @@ python3 paddleseg_grpc_client.py When the request is sent successfully, the results are returned in json format and printed out: +```bash +tm: name: "INPUT" +datatype: "UINT8" +shape: -1 +shape: -1 +shape: -1 +shape: 3 + +output_name: SEG_RESULT +Only print the first 20 labels in label_map of SEG_RESULT +{'label_map': [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], 'score_map': [], 'shape': [1024, 2048], 'contain_score_map': False} ``` -``` +## 4. 
Modify Configs

-## Modify Configs
-
-
-
-The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](../../../../../../serving/docs/EN/model_configuration-en.md) to modify the configs in `models/runtime/config.pbtxt`.
+The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/docs/EN/model_configuration-en.md) to modify the configs in `models/runtime/config.pbtxt`.
diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README_CN.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/README_CN.md
similarity index 80%
rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README_CN.md
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/README_CN.md
index a07a773e58..609841bca9 100644
--- a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README_CN.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/README_CN.md
@@ -1,20 +1,25 @@
[English](README.md) | 简体中文
# PaddleSeg 服务化部署示例
-PaddleSeg 服务化部署示例是利用FastDeploy Serving搭建的服务化部署示例。FastDeploy Serving是基于Triton Inference Server框架封装的适用于高并发、高吞吐量请求的服务化部署框架,是一套可用于实际生产的完备且性能卓越的服务化部署框架。如没有高并发,高吞吐场景的需求,只想快速检验模型线上部署的可行性,请参考[fastdeploy_serving](../simple_serving/)
+PaddleSeg 服务化部署示例是利用FastDeploy Serving搭建的服务化部署示例。FastDeploy Serving是基于Triton Inference Server框架封装的适用于高并发、高吞吐量请求的服务化部署框架,是一套可用于实际生产的完备且性能卓越的服务化部署框架。如没有高并发,高吞吐场景的需求,只想快速检验模型线上部署的可行性,请参考[simple_serving](../simple_serving/)

-## 部署环境准备
+## 1. 部署环境准备
在服务化部署前,需确认服务化镜像的软硬件环境要求和镜像拉取命令,请参考[FastDeploy服务化部署](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README_CN.md)

-## 启动服务
+## 2. 
启动服务
```bash
-#下载部署示例代码
+# 下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/serving/fastdeploy_serving

-#下载PP-LiteSeg模型文件
+# 下载PP-LiteSeg模型文件
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz

@@ -34,10 +39,10 @@ nvidia-docker run -it --net=host --name fd_serving -v `pwd`/:/serving registry.b
# 启动服务(不设置CUDA_VISIBLE_DEVICES环境变量,会拥有所有GPU卡的调度权限)
CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models --backend-config=python,shm-default-byte-size=10485760
```
->> **注意**: 当出现"Address already in use", 请使用`--grpc-port`指定端口号来启动服务,同时更改paddleseg_grpc_client.py中的请求端口号
+**注意**: 当出现"Address already in use", 请使用`--grpc-port`指定端口号来启动服务,同时更改paddleseg_grpc_client.py中的请求端口号

服务启动成功后, 会有以下输出:
-```
+```bash
......
I0928 04:51:15.784517 206 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
I0928 04:51:15.785177 206 http_server.cc:2815] Started HTTPService at 0.0.0.0:8000
I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
@@ -45,14 +50,14 @@ I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0
```

-## 客户端请求
+## 3. 
客户端请求 在物理机器中执行以下命令,发送grpc请求并输出结果 -``` -#下载测试图片 +```bash +# 下载测试图片 wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png -#安装客户端依赖 +# 安装客户端依赖 python3 -m pip install tritonclient[all] # 发送请求 @@ -60,7 +65,7 @@ python3 paddleseg_grpc_client.py ``` 发送请求成功后,会返回json格式的检测结果并打印输出: -``` +```bash tm: name: "INPUT" datatype: "UINT8" shape: -1 @@ -73,14 +78,14 @@ Only print the first 20 labels in label_map of SEG_RESULT {'label_map': [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], 'score_map': [], 'shape': [1024, 2048], 'contain_score_map': False} ``` -## 配置修改 +## 4. 配置修改 当前默认配置在CPU上运行ONNXRuntime引擎, 如果要在GPU或其他推理引擎上运行。 需要修改`models/runtime/config.pbtxt`中配置,详情请参考[配置文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/docs/zh_CN/model_configuration.md) -## 更多部署方式 +## 5. 更多指南 - [使用 VisualDL 进行 Serving 可视化部署](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/docs/zh_CN/vdl_management.md) -## 常见问题 +## 6. 常见问题 - [如何编写客户端 HTTP/GRPC 请求](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/docs/zh_CN/client.md) - [如何编译服务化部署镜像](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/docs/zh_CN/compile.md) - [服务化部署原理及动态Batch介绍](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/docs/zh_CN/demo.md) diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/paddleseg/1/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/paddleseg/1/README.md similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/paddleseg/1/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/paddleseg/1/README.md diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/paddleseg/config.pbtxt b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/paddleseg/config.pbtxt 
similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/paddleseg/config.pbtxt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/paddleseg/config.pbtxt diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/postprocess/1/model.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/postprocess/1/model.py similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/postprocess/1/model.py rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/postprocess/1/model.py diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/postprocess/config.pbtxt b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/postprocess/config.pbtxt similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/postprocess/config.pbtxt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/postprocess/config.pbtxt diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/preprocess/1/model.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/preprocess/1/model.py similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/preprocess/1/model.py rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/preprocess/1/model.py diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/preprocess/config.pbtxt b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/preprocess/config.pbtxt similarity index 100% rename from 
examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/preprocess/config.pbtxt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/preprocess/config.pbtxt diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/runtime/1/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/runtime/1/README.md similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/runtime/1/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/runtime/1/README.md diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/runtime/config.pbtxt b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/runtime/config.pbtxt similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/models/runtime/config.pbtxt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/models/runtime/config.pbtxt diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/paddleseg_grpc_client.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/paddleseg_grpc_client.py similarity index 100% rename from examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/paddleseg_grpc_client.py rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/fastdeploy_serving/paddleseg_grpc_client.py diff --git a/examples/vision/segmentation/paddleseg/serving/simple_serving/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/README.md similarity index 73% rename from examples/vision/segmentation/paddleseg/serving/simple_serving/README.md rename to 
examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/README.md
index ea8223ecc4..9697b1f9a0 100644
--- a/examples/vision/segmentation/paddleseg/serving/simple_serving/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/README.md
@@ -4,15 +4,20 @@ English | [简体中文](README_CN.md)
PaddleSeg Python Simple serving is an example of serving deployment built by FastDeploy based on the Flask framework that can quickly verify the feasibility of online model deployment. It completes AI inference tasks based on http requests, and is suitable for simple scenarios without concurrent inference task. For high concurrency and high throughput scenarios, please refer to [fastdeploy_serving](../fastdeploy_serving/)

-## Environment
+## 1. Environment
- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/build_and_install#install-prebuilt-fastdeploy)

-Server:
+## 2. Launch Serving
```bash
# Download demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving
+# If you want to download the demo code from PaddleSeg repo, please run
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # Note: If the current branch cannot find the following fastdeploy test code, switch to the develop branch
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/serving/simple_serving

# Download PP_LiteSeg model
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
@@ -23,12 +28,8 @@ tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
fastdeploy simple_serving --app server:app
```

-Client:
+## 3. 
Client Requests ```bash -# Download demo code -git clone https://github.com/PaddlePaddle/FastDeploy.git -cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving - # Download test image wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png diff --git a/examples/vision/segmentation/paddleseg/serving/simple_serving/README_CN.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/README_CN.md similarity index 68% rename from examples/vision/segmentation/paddleseg/serving/simple_serving/README_CN.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/README_CN.md index afc1325dcb..a45c12429d 100644 --- a/examples/vision/segmentation/paddleseg/serving/simple_serving/README_CN.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/README_CN.md @@ -4,18 +4,23 @@ PaddleSeg Python轻量服务化部署是FastDeploy基于Flask框架搭建的可快速验证线上模型部署可行性的服务化部署示例,基于http请求完成AI推理任务,适用于无并发推理的简单场景,如有高并发,高吞吐场景的需求请参考[fastdeploy_serving](../fastdeploy_serving/) -## 部署环境准备 +## 1. 部署环境准备 在部署前,需确认软硬件环境,同时下载预编译python wheel 包,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装) -服务端: +## 2. 
启动服务
```bash
# 下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving
+cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving
+# 如果您希望从PaddleSeg下载示例代码,请运行
+# git clone https://github.com/PaddlePaddle/PaddleSeg.git
+# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
+# # git checkout develop
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/serving/simple_serving

# 下载PP-LiteSeg模型文件
-wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz 
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz

# 启动服务,可修改server.py中的配置项来指定硬件、后端等
@@ -23,7 +28,7 @@ tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
fastdeploy simple_serving --app server:app
```

-客户端:
+## 3. 客户端请求
```bash
# 下载测试图片
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
diff --git a/examples/vision/segmentation/paddleseg/serving/simple_serving/client.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/client.py
similarity index 100%
rename from examples/vision/segmentation/paddleseg/serving/simple_serving/client.py
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/client.py
diff --git a/examples/vision/segmentation/paddleseg/serving/simple_serving/server.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/server.py
similarity index 100%
rename from examples/vision/segmentation/paddleseg/serving/simple_serving/server.py
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving/server.py
diff --git a/examples/vision/segmentation/paddleseg/sophgo/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/README.md
similarity index 
89% rename from examples/vision/segmentation/paddleseg/sophgo/README.md
rename to examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/README.md
index 67f6364866..edcd06a99b 100644
--- a/examples/vision/segmentation/paddleseg/sophgo/README.md
+++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/README.md
@@ -1,21 +1,13 @@
[English](README.md) | 简体中文
# PaddleSeg在算能(Sophgo)硬件上通过FastDeploy部署模型
-## PaddleSeg支持部署的Sophgo的芯片型号
-支持如下芯片的部署
+## 1. 说明
+PaddleSeg支持如下型号的Sophgo芯片的部署:
- Sophgo 1684X

PaddleSeg支持通过FastDeploy在算能TPU上部署相关Segmentation模型

-## 算能硬件支持的PaddleSeg模型
-
-- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
->> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
-
-目前算能TPU支持的模型如下:
-- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
-
-## 预导出的推理模型
+## 2. 预导出的推理模型

为了方便开发者的测试,下面提供了PaddleSeg导出的部分推理模型,开发者可直接下载使用。

@@ -25,18 +17,23 @@ PaddleSeg训练模型导出为推理模型,请参考其文档说明[模型导
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |

-## 将PaddleSeg推理模型转换为bmodel模型步骤
+## 3. 
自行导出算能硬件支持的PaddleSeg模型 +### 3.1 模型版本 +支持PaddleSeg 2.6以上版本的Segmentation模型,目前FastDeploy在算能TPU上测试通过的模型如下: +- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md) + +### 3.2 将PaddleSeg推理模型转换为bmodel模型步骤 SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型,具体步骤如下: - 下载Paddle模型[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) - Paddle模型转换为ONNX模型,请参考[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) - ONNX模型转换bmodel模型的过程,请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir) -## bmode模型转换example +### 3.3 bmodel模型转换示例 下面以[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)为例,教大家如何转换Paddle模型到SOPHGO-TPU支持的bmodel模型 -### 下载PP-LiteSeg-B(STDC2)-cityscapes-without-argmax模型,并转换为ONNX模型 +- 下载PP-LiteSeg-B(STDC2)-cityscapes-without-argmax模型,并转换为ONNX模型 ```shell # 下载Paddle2ONNX仓库 git clone https://github.com/PaddlePaddle/Paddle2ONNX @@ -63,7 +60,7 @@ paddle2onnx --model_dir pp_liteseg_fix \ --enable_dev_version True ``` -### 导出bmodel模型 +- 导出bmodel模型 以转换BM1684x的bmodel模型为例子,我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。 #### 1. 安装 @@ -113,6 +110,6 @@ model_deploy.py \ ``` 最终获得可以在BM1684x上能够运行的bmodel模型pp_liteseg_1684x_f32.bmodel。如果需要进一步对模型进行加速,可以将ONNX模型转换为INT8 bmodel,具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。 -## 快速链接 -- [Cpp部署](./cpp) +## 4. 
详细的部署示例 +- [C++部署](./cpp) - [Python部署](./python) diff --git a/examples/vision/segmentation/paddleseg/sophgo/cpp/CMakeLists.txt b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp/CMakeLists.txt similarity index 100% rename from examples/vision/segmentation/paddleseg/sophgo/cpp/CMakeLists.txt rename to examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp/CMakeLists.txt diff --git a/examples/vision/segmentation/paddleseg/sophgo/cpp/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp/README.md similarity index 56% rename from examples/vision/segmentation/paddleseg/sophgo/cpp/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp/README.md index 8711d6a683..71cf469059 100644 --- a/examples/vision/segmentation/paddleseg/sophgo/cpp/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp/README.md @@ -1,12 +1,15 @@ [English](README.md) | 简体中文 -# PaddleSeg C++部署示例 +# PaddleSeg 算能 C++ 部署示例 本目录下提供`infer.cc`快速完成PP-LiteSeg在SOPHGO BM1684x板子上加速部署的示例。 -## 算能硬件编译FastDeploy环境准备 +## 1. 部署环境准备 在部署前,需自行编译基于算能硬件的预测库,参考文档[算能硬件部署环境](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#算能硬件部署环境) -## 生成基本目录文件 +## 2. 部署模型准备 +在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md)。 + +## 3. 生成基本目录文件 该例程由以下几个部分组成 ```text @@ -18,24 +21,36 @@ └── model # 存放模型文件的文件夹 ``` -## 编译 +## 4. 
运行部署示例 -### 编译FastDeploy +### 4.1 编译FastDeploy 请参考[SOPHGO部署库编译](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/sophgo.md)编译SDK,编译完成后,将在build目录下生成fastdeploy-sophgo目录。拷贝fastdeploy-sophgo至当前目录 -### 拷贝模型文件,以及配置文件至model文件夹 -将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md#将paddleseg推理模型转换为bmodel模型步骤) +### 4.2 下载部署示例代码 +```bash +# 下载部署示例代码 +git clone https://github.com/PaddlePaddle/FastDeploy.git +cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp +# # 如果您希望从PaddleSeg下载示例代码,请运行 +# git clone https://github.com/PaddlePaddle/PaddleSeg.git +# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支 +# # git checkout develop +# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/sophgo/cpp +``` + +### 4.3 拷贝模型文件,以及配置文件至model文件夹 +将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md) 将转换后的SOPHGO bmodel模型文件拷贝至model文件夹中 -### 准备测试图片至image文件夹 +### 4.4 准备测试图片至images文件夹 ```bash wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png cp cityscapes_demo.png ./images ``` -### 编译example +### 4.5 编译example ```bash cd build @@ -43,14 +58,14 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-sophgo make ``` -## 运行例程 +### 4.6 运行例程 ```bash ./infer_demo model images/cityscapes_demo.png ``` -## 快速链接 +## 5. 
更多指南 - [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html) - [FastDeploy部署PaddleSeg模型概览](../../) - [Python部署](../python) -- [模型转换](../README.md#将paddleseg推理模型转换为bmodel模型步骤) +- [模型转换](../README.md) diff --git a/examples/vision/segmentation/paddleseg/sophgo/cpp/infer.cc b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp/infer.cc similarity index 91% rename from examples/vision/segmentation/paddleseg/sophgo/cpp/infer.cc rename to examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp/infer.cc index 934ab648c9..77066a7ac7 100644 --- a/examples/vision/segmentation/paddleseg/sophgo/cpp/infer.cc +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/cpp/infer.cc @@ -13,6 +13,7 @@ // limitations under the License. #include #include + #include "fastdeploy/vision.h" void SophgoInfer(const std::string& model_dir, const std::string& image_file) { @@ -29,13 +30,13 @@ void SophgoInfer(const std::string& model_dir, const std::string& image_file) { std::cerr << "Failed to initialize." 
<< std::endl; return; } - //model.GetPreprocessor().DisableNormalizeAndPermute(); + // model.GetPreprocessor().DisableNormalizeAndPermute(); fastdeploy::TimeCounter tc; tc.Start(); auto im_org = cv::imread(image_file); - //the input of bmodel should be fixed + // the input of bmodel should be fixed int new_width = 512; int new_height = 512; cv::Mat im; @@ -51,9 +52,7 @@ void SophgoInfer(const std::string& model_dir, const std::string& image_file) { tc.PrintInfo("PPSeg in Sophgo"); cv::imwrite("infer_sophgo.jpg", vis_im); - std::cout - << "Visualized result saved in ./infer_sophgo.jpg" - << std::endl; + std::cout << "Visualized result saved in ./infer_sophgo.jpg" << std::endl; } int main(int argc, char* argv[]) { @@ -68,4 +67,3 @@ int main(int argc, char* argv[]) { SophgoInfer(argv[1], argv[2]); return 0; } - diff --git a/examples/vision/segmentation/paddleseg/sophgo/python/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/python/README.md similarity index 55% rename from examples/vision/segmentation/paddleseg/sophgo/python/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/python/README.md index 0649b01292..5de41d8c0f 100644 --- a/examples/vision/segmentation/paddleseg/sophgo/python/README.md +++ b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/python/README.md @@ -1,22 +1,31 @@ [English](README.md) | 简体中文 -# PaddleSeg Python部署示例 +# PaddleSeg 算能 Python部署示例 -## 算能硬件编译FastDeploy wheel包环境准备 +## 1. 部署环境准备 在部署前,需自行编译基于算能硬件的FastDeploy python wheel包并安装,参考文档[算能硬件部署环境](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#算能硬件部署环境) 本目录下提供`infer.py`快速完成 pp_liteseg 在SOPHGO TPU上部署的示例。执行如下脚本即可完成 +## 2. 部署模型准备 +在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleSeg部署模型](../README.md)。 + +## 3. 
运行部署示例 ```bash # 下载部署示例代码 git clone https://github.com/PaddlePaddle/FastDeploy.git -cd path/to/paddleseg/sophgo/python +cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/python +# # 如果您希望从PaddleSeg下载示例代码,请运行 +# git clone https://github.com/PaddlePaddle/PaddleSeg.git +# # 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支 +# # git checkout develop +# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/sophgo/python # 下载图片 wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png # PaddleSeg模型转换为bmodel模型 -将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md#将paddleseg推理模型转换为bmodel模型步骤) +# 将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md) # 推理 python3 infer.py --model_file ./bmodel/pp_liteseg_1684x_f32.bmodel --config_file ./bmodel/deploy.yaml --image cityscapes_demo.png @@ -25,9 +34,9 @@ python3 infer.py --model_file ./bmodel/pp_liteseg_1684x_f32.bmodel --config_file 运行结果保存在sophgo_img.png中 ``` -## 快速链接 -- [pp_liteseg C++部署](../cpp) -- [转换 pp_liteseg SOPHGO模型文档](../README.md#导出bmodel模型) +## 4. 更多指南 +- [PP-LiteSeg SOPHGO C++部署](../cpp) +- [转换 PP-LiteSeg SOPHGO模型文档](../README.md) -## 常见问题 +## 5. 
常见问题 - [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md) diff --git a/examples/vision/segmentation/paddleseg/sophgo/python/infer.py b/examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/python/infer.py similarity index 100% rename from examples/vision/segmentation/paddleseg/sophgo/python/infer.py rename to examples/vision/segmentation/paddleseg/semantic_segmentation/sophgo/python/infer.py diff --git a/examples/vision/segmentation/paddleseg/web/README.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/web/README.md similarity index 100% rename from examples/vision/segmentation/paddleseg/web/README.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/web/README.md diff --git a/examples/vision/segmentation/paddleseg/web/README_CN.md b/examples/vision/segmentation/paddleseg/semantic_segmentation/web/README_CN.md similarity index 100% rename from examples/vision/segmentation/paddleseg/web/README_CN.md rename to examples/vision/segmentation/paddleseg/semantic_segmentation/web/README_CN.md diff --git a/examples/vision/segmentation/ppmatting/README.md b/examples/vision/segmentation/ppmatting/README.md deleted file mode 100644 index b3dd9cc800..0000000000 --- a/examples/vision/segmentation/ppmatting/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# PaddleSeg高性能全场景模型部署方案—FastDeploy - -## FastDeploy介绍 - -[FastDeploy](https://github.com/PaddlePaddle/FastDeploy)是一款全场景、易用灵活、极致高效的AI推理部署工具,使用FastDeploy可以简单高效的在10+款硬件上对PaddleSeg Matting模型进行快速部署 - -## 支持如下的硬件部署 - -| 硬件支持列表 | | | | -|:----- | :-- | :-- | :-- | -| [NVIDIA GPU](cpu-gpu) | [X86 CPU](cpu-gpu)| [飞腾CPU](cpu-gpu) | [ARM CPU](cpu-gpu) | -| [Intel GPU(独立显卡/集成显卡)](cpu-gpu) | [昆仑](cpu-gpu) | [昇腾](cpu-gpu) | - -## 常见问题 - -遇到问题可查看常见问题集合文档或搜索FastDeploy issues,链接如下: - -[常见问题集合](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq) - -[FastDeploy 
issues](https://github.com/PaddlePaddle/FastDeploy/issues) - -若以上方式都无法解决问题,欢迎给FastDeploy提交新的[issue](https://github.com/PaddlePaddle/FastDeploy/issues) diff --git a/examples/vision/segmentation/ppmatting/ascend/README.md b/examples/vision/segmentation/ppmatting/ascend/README.md deleted file mode 120000 index 3ed44e1300..0000000000 --- a/examples/vision/segmentation/ppmatting/ascend/README.md +++ /dev/null @@ -1 +0,0 @@ -../cpu-gpu/README.md \ No newline at end of file diff --git a/examples/vision/segmentation/ppmatting/kunlun/README.md b/examples/vision/segmentation/ppmatting/kunlun/README.md deleted file mode 120000 index 3ed44e1300..0000000000 --- a/examples/vision/segmentation/ppmatting/kunlun/README.md +++ /dev/null @@ -1 +0,0 @@ -../cpu-gpu/README.md \ No newline at end of file
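上文 simple serving 的"客户端请求"一节通过 client.py 向服务端发送图片。下面给出一个仅依赖 Python 标准库的简化示意,展示此类客户端请求的大致构造方式(其中的端点 URL 与 JSON 字段名均为假设,并非 client.py 的实际实现):

```python
# 假设性示意:读取图片、做 base64 编码,并构造 JSON POST 请求。
# 端点 URL 与 "data"/"image" 字段名为示例假设,请以 client.py 实际实现为准。
import base64
import json
import urllib.request


def build_request(image_path, url="http://127.0.0.1:8000/fd/ppliteseg"):
    # 读取图片文件并做 base64 编码
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    # 将编码后的图片封装为 JSON 负载
    payload = json.dumps({"data": {"image": encoded}}).encode("utf-8")
    # 构造 POST 请求;真正发送时可调用 urllib.request.urlopen(req)
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
```

发送请求后,服务端返回的分割结果同样以 JSON 形式解析即可。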