diff --git a/docs/README.md b/docs/README.md deleted file mode 120000 index b1dd06cd5c..0000000000 --- a/docs/README.md +++ /dev/null @@ -1 +0,0 @@ -README_EN.md diff --git a/docs/README.md b/docs/README.md new file mode 100755 index 0000000000..b2990d5e23 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,47 @@ +[简体中文](README_CN.md)| English + +# Tutorials + +## Install + +- [Install FastDeploy Prebuilt Libraries](en/build_and_install/download_prebuilt_libraries.md) +- [Build and Install FastDeploy Library on GPU Platform](en/build_and_install/gpu.md) +- [Build and Install FastDeploy Library on CPU Platform](en/build_and_install/cpu.md) +- [Build and Install FastDeploy Library on IPU Platform](en/build_and_install/ipu.md) +- [Build and Install FastDeploy Library on KunlunXin XPU Platform](en/build_and_install/xpu.md) +- [Build and Install on RV1126 Platform](en/build_and_install/rv1126.md) +- [Build and Install on RK3588 Platform](en/build_and_install/rknpu2.md) +- [Build and Install on A311D Platform](en/build_and_install/a311d.md) +- [Build and Install FastDeploy Library on Nvidia Jetson Platform](en/build_and_install/jetson.md) +- [Build and Install FastDeploy Library on Android Platform](en/build_and_install/android.md) +- [Build and Install FastDeploy Serving Deployment Image](../serving/docs/EN/compile-en.md) + +## A Quick Start - Demos + +- [Python Deployment Demo](en/quick_start/models/python.md) +- [C++ Deployment Demo](en/quick_start/models/cpp.md) +- [A Quick Start on Runtime Python](en/quick_start/runtime/python.md) +- [A Quick Start on Runtime C++](en/quick_start/runtime/cpp.md) + +## API + +- [Python API](https://baidu-paddle.github.io/fastdeploy-api/python/html/) +- [C++ API](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/) +- [Android Java API](../java/android) + +## Performance Optimization + +- [Quantization Acceleration](en/quantize.md) + +## Frequent Q&As + +- [1. How to Change Inference Backends](en/faq/how_to_change_backend.md) +- [2. 
How to Use FastDeploy C++ SDK on Windows Platform](en/faq/use_sdk_on_windows.md) +- [3. How to Use FastDeploy C++ SDK on Android Platform](en/faq/use_cpp_sdk_on_android.md) +- [4. Tricks of TensorRT](en/faq/tensorrt_tricks.md) +- [5. How to Develop a New Model](en/faq/develop_a_new_model.md) + +## More FastDeploy Deployment Module + +- [Deployment AI Model as a Service](../serving) +- [Benchmark Testing](../benchmark) diff --git a/docs/README_CN.md b/docs/README_CN.md index b11b7576cf..2a798904fd 100755 --- a/docs/README_CN.md +++ b/docs/README_CN.md @@ -1,4 +1,4 @@ -[English](README_EN.md) | 简体中文 +[English](README.md) | 简体中文 # 使用文档 diff --git a/docs/README_EN.md b/docs/README_EN.md deleted file mode 100755 index b2990d5e23..0000000000 --- a/docs/README_EN.md +++ /dev/null @@ -1,47 +0,0 @@ -[简体中文](README_CN.md)| English - -# Tutorials - -## Install - -- [Install FastDeploy Prebuilt Libraries](en/build_and_install/download_prebuilt_libraries.md) -- [Build and Install FastDeploy Library on GPU Platform](en/build_and_install/gpu.md) -- [Build and Install FastDeploy Library on CPU Platform](en/build_and_install/cpu.md) -- [Build and Install FastDeploy Library on IPU Platform](en/build_and_install/ipu.md) -- [Build and Install FastDeploy Library on KunlunXin XPU Platform](en/build_and_install/xpu.md) -- [Build and Install on RV1126 Platform](en/build_and_install/rv1126.md) -- [Build and Install on RK3588 Platform](en/build_and_install/rknpu2.md) -- [Build and Install on A311D Platform](en/build_and_install/a311d.md) -- [Build and Install FastDeploy Library on Nvidia Jetson Platform](en/build_and_install/jetson.md) -- [Build and Install FastDeploy Library on Android Platform](en/build_and_install/android.md) -- [Build and Install FastDeploy Serving Deployment Image](../serving/docs/EN/compile-en.md) - -## A Quick Start - Demos - -- [Python Deployment Demo](en/quick_start/models/python.md) -- [C++ Deployment Demo](en/quick_start/models/cpp.md) -- [A Quick Start on Runtime 
Python](en/quick_start/runtime/python.md) -- [A Quick Start on Runtime C++](en/quick_start/runtime/cpp.md) - -## API - -- [Python API](https://baidu-paddle.github.io/fastdeploy-api/python/html/) -- [C++ API](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/) -- [Android Java API](../java/android) - -## Performance Optimization - -- [Quantization Acceleration](en/quantize.md) - -## Frequent Q&As - -- [1. How to Change Inference Backends](en/faq/how_to_change_backend.md) -- [2. How to Use FastDeploy C++ SDK on Windows Platform](en/faq/use_sdk_on_windows.md) -- [3. How to Use FastDeploy C++ SDK on Android Platform](en/faq/use_cpp_sdk_on_android.md) -- [4. Tricks of TensorRT](en/faq/tensorrt_tricks.md) -- [5. How to Develop a New Model](en/faq/develop_a_new_model.md) - -## More FastDeploy Deployment Module - -- [Deployment AI Model as a Service](../serving) -- [Benchmark Testing](../benchmark) diff --git a/docs/api/vision_results/README_EN.md b/docs/api/vision_results/README.md similarity index 100% rename from docs/api/vision_results/README_EN.md rename to docs/api/vision_results/README.md diff --git a/docs/api/vision_results/README_CN.md b/docs/api/vision_results/README_CN.md index 580e7b0c5f..94efce21e6 100755 --- a/docs/api/vision_results/README_CN.md +++ b/docs/api/vision_results/README_CN.md @@ -1,4 +1,4 @@ -[English](README_EN.md)| 简体中文 +[English](README.md)| 简体中文 # 视觉模型预测结果说明 FastDeploy根据视觉模型的任务类型,定义了不同的结构体(`fastdeploy/vision/common/result.h`)来表达模型预测结果,具体如下表所示 diff --git a/docs/api/vision_results/matting_result_EN.md b/docs/api/vision_results/matting_result_EN.md index 45a953e18d..e9765fc7ba 100644 --- a/docs/api/vision_results/matting_result_EN.md +++ b/docs/api/vision_results/matting_result_EN.md @@ -1,6 +1,6 @@ English | [中文](matting_result.md) -# MattingResult keying results +# Matting Result The MattingResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the predicted value of alpha transparency predicted and the 
predicted foreground, etc. diff --git a/docs/en/faq/develop_a_new_model.md b/docs/en/faq/develop_a_new_model.md index 14a3e2f977..6a1d5c81d5 100644 --- a/docs/en/faq/develop_a_new_model.md +++ b/docs/en/faq/develop_a_new_model.md @@ -5,17 +5,18 @@ English | [中文](../../cn/faq/develop_a_new_model.md) | Step | Description | Create or modify the files | |:-----------:|:--------------------------------------------------------------------------------:|:-----------------------------------------:| -| [1](#step2) | Add a model implementation to the corresponding task module in FastDeploy/vision | resnet.h、resnet.cc、vision.h | -| [2](#step4) | Python interface binding via pybind | resnet_pybind.cc、classification_pybind.cc | -| [3](#step5) | Use Python to call Interface | resnet.py、\_\_init\_\_.py | +| [1](#step2) | Add a model implementation to the corresponding task module in FastDeploy/vision | resnet.h, resnet.cc, vision.h | +| [2](#step4) | Python interface binding via pybind | resnet_pybind.cc, classification_pybind.cc | +| [3](#step5) | Use Python to call Interface | resnet.py, \_\_init\_\_.py | After completing the above 3 steps, an external model is integrated. If you want to contribute your code to FastDeploy, it is very kind of you to add test code, instructions (Readme), and code annotations for the added model in the [test](#test). -## Model Integration +## Model Integration + +### Prepare the models -### Prepare the models Before integrating external models, it is important to convert the trained models (.pt, .pdparams, etc.) to the model formats (.onnx, .pdmodel) that FastDeploy supports for deployment. Most open source repositories provide model conversion scripts for developers. As torchvision does not provide conversion scripts, developers can write conversion scripts manually. In this demo, we convert `torchvison.models.resnet50` to `resnet50.onnx` with the following code for your reference. 
@@ -40,7 +41,7 @@ torch.onnx.export(model, Running the above script will generate a`resnet50.onnx` file. -### C++ +### C++ * Create`resnet.h` file * Create a path @@ -64,7 +65,7 @@ class FASTDEPLOY_DECL ResNet : public FastDeployModel { * Create a path * FastDeploy/fastdeploy/vision/classification/contrib/resnet.cc (FastDeploy/C++ code/vision/task name/external model name/model name.cc) * Create content - * Implement the specific logic of the functions declared in `resnet.h` to `resnet.cc`, where `PreProcess` and `PostProcess` need to refer to the official source library for pre- and post-processing logic reproduction. The specific logic of each ResNet function is as follows. For more detailed code, please refer to [resnet.cc](https:// github.com/PaddlePaddle/FastDeploy/pull/347/files#diff-d229d702de28345253a53f2a5839fd2c638f3d32fffa6a7d04d23db9da13a871). + * Implement the specific logic of the functions declared in `resnet.h` to `resnet.cc`, where `PreProcess` and `PostProcess` need to refer to the official source library for pre- and post-processing logic reproduction. The specific logic of each ResNet function is as follows. For more detailed code, please refer to [resnet.cc](https://github.com/PaddlePaddle/FastDeploy/pull/347/files#diff-d229d702de28345253a53f2a5839fd2c638f3d32fffa6a7d04d23db9da13a871). ```C++ ResNet::ResNet(...) { @@ -93,7 +94,7 @@ bool ResNet::Predict(cv::Mat* im, ClassifyResult* result, int topk) { return true; } ``` - + * Add new model file to`vision.h` * modify location * FastDeploy/fastdeploy/vision.h @@ -105,7 +106,7 @@ bool ResNet::Predict(cv::Mat* im, ClassifyResult* result, int topk) { #endif ``` -### Pybind +### Pybind * Create Pybind file @@ -146,7 +147,7 @@ bool ResNet::Predict(cv::Mat* im, ClassifyResult* result, int topk) { } ``` -### Python +### Python * Create`resnet.py`file * Create path @@ -167,7 +168,7 @@ class ResNet(FastDeployModel): def size(self, wh): ... 
``` - + * Import ResNet classes * modify path * FastDeploy/python/fastdeploy/vision/classification/\_\_init\_\_.py (FastDeploy/Python code/fastdeploy/vision model/task name/\_\_init\_\_.py) @@ -177,7 +178,7 @@ class ResNet(FastDeployModel): from .contrib.resnet import ResNet ``` -## Test +## Test ### Compile @@ -229,7 +230,7 @@ pip install fastdeploy_gpu_python-Version number-cpxx-cpxxm-system architecture. ``` * C++ - * Write CmakeLists、C++ code and README.md . Please refer to[cpp/](https://github.com/PaddlePaddle/FastDeploy/pull/347/files#diff-afcbe607b796509581f89e38b84190717f1eeda2df0419a2ac9034197ead5f96) + * Write CmakeLists、C++ code and README.md . Please refer to [cpp/](https://github.com/PaddlePaddle/FastDeploy/pull/347/files#diff-afcbe607b796509581f89e38b84190717f1eeda2df0419a2ac9034197ead5f96) * Compile infer.cc * Path:FastDeploy/examples/vision/classification/resnet/cpp/ @@ -240,7 +241,7 @@ make ``` * Python - * Please refer to[python/](https://github.com/PaddlePaddle/FastDeploy/pull/347/files#diff-5a0d6be8c603a8b81454ac14c17fb93555288d9adf92bbe40454449309700135) for Python code and Readme.md + * Please refer to [python/](https://github.com/PaddlePaddle/FastDeploy/pull/347/files#diff-5a0d6be8c603a8b81454ac14c17fb93555288d9adf92bbe40454449309700135) for Python code and Readme.md ### Annotate the Code @@ -249,7 +250,7 @@ make To make the code clear for understanding, developers can annotate the newly-added code. - C++ code - Developers need to add annotations for functions and variables in the resnet.h file, there are three annotating methods as follows, please refer to [resnet.h](https://github.com/PaddlePaddle/FastDeploy/pull/347/files#diff- 69128489e918f305c208476ba793d8167e77de2aa7cadf5dcbac30da448bd28e) for more details. 
+ Developers need to add annotations for functions and variables in the resnet.h file, there are three annotating methods as follows, please refer to [resnet.h](https://github.com/PaddlePaddle/FastDeploy/pull/347/files#diff-69128489e918f305c208476ba793d8167e77de2aa7cadf5dcbac30da448bd28e) for more details. ```C++ /** \brief Predict for the input "im", the result will be saved in "result". diff --git a/docs/en/faq/use_sdk_on_windows.md b/docs/en/faq/use_sdk_on_windows.md index e4ae109ff0..d830388d25 100644 --- a/docs/en/faq/use_sdk_on_windows.md +++ b/docs/en/faq/use_sdk_on_windows.md @@ -92,7 +92,7 @@ In particular, for the configuration method of the dependency library required b ### 3.2 SDK usage method 2: Visual Studio 2019 creates sln project using C++ SDK -This section is for non-CMake users and describes how to create a sln project in Visual Studio 2019 to use FastDeploy C++ SDK. CMake users please read the next section directly. In addition, this section is a special thanks to "Awake to the Southern Sky" for his tutorial on FastDeploy: [How to deploy PaddleDetection target detection model on Windows using FastDeploy C++].(https://www.bilibili.com/read/cv18807232) +This section is for non-CMake users and describes how to create a sln project in Visual Studio 2019 to use FastDeploy C++ SDK. CMake users please read the next section directly. In addition, this section is a special thanks to "Awake to the Southern Sky" for his tutorial on FastDeploy: [How to deploy PaddleDetection target detection model on Windows using FastDeploy C++](https://www.bilibili.com/read/cv18807232).
@@ -192,7 +192,7 @@ Compile successfully, you can see the exe saved in: D:\qiuyanjun\fastdeploy_test\infer_ppyoloe\x64\Release\infer_ppyoloe.exe ``` -(2)Execute the executable file and get the inference result. First you need to copy all the dlls to the directory where the exe is located. At the same time, you also need to download and extract the pyoloe model files and test images, and then copy them to the directory where the exe is located. Special note, the exe needs to run when the dependency library configuration method, please refer to the section: [various methods to configure the exe to run the required dependency library](#CommandLineDeps) +(2)Execute the executable file and get the inference result. First you need to copy all the dlls to the directory where the exe is located. At the same time, you also need to download and extract the pyoloe model files and test images, and then copy them to the directory where the exe is located. Special note, the exe needs to run when the dependency library configuration method, please refer to the section: [various methods to configure the exe to run the required dependency library](#CommandLineDeps).  @@ -331,7 +331,7 @@ Open the saved image to view the visualization results at:
-Special note, the exe needs to run when the dependency library configuration method, please refer to the section: [a variety of methods to configure the exe to run the required dependency library](#CommandLineDeps)
+Special note: for instructions on configuring the dependency libraries the exe needs at runtime, please refer to the section: [multiple methods to configure the required dependencies for the exe runtime](#CommandLineDeps).
## 4. Multiple methods to Configure the Required Dependencies for the Exe Runtime
diff --git a/java/android/README.md b/java/android/README.md
index d3328f344d..1c557fca3e 100644
--- a/java/android/README.md
+++ b/java/android/README.md
@@ -1,42 +1,44 @@
-# FastDeploy Android AAR 包使用文档
-FastDeploy Android SDK 目前支持图像分类、目标检测、OCR文字识别、语义分割和人脸检测等任务,对更多的AI任务支持将会陆续添加进来。以下为各个任务对应的API文档,在Android下使用FastDeploy中集成的模型,只需以下几个步骤:
-- 模型初始化
-- 调用`predict`接口
-- 可视化验证(可选)
+English | [简体中文](README_CN.md)
-|图像分类|目标检测|OCR文字识别|人像分割|人脸检测|
-|:---:|:---:|:---:|:---:|:---:|
+# FastDeploy Android AAR Package
+Currently the FastDeploy Android SDK supports image classification, target detection, OCR text recognition, semantic segmentation and face detection, and support for more AI tasks will be added over time. The API documents for each task are listed below. To use the models integrated in FastDeploy on Android, you only need the following steps:
+- Model initialization
+- Calling the `predict` interface
+- Visualization validation (optional)
+
+|Image Classification|Target Detection|OCR Text Recognition|Portrait Segmentation|Face Detection|
+|:---:|:---:|:---:|:---:|:---:|
||||||
-## 内容目录
+## Content
-- [下载及配置SDK](#SDK)
-- [图像分类API](#Classification)
-- [目标检测API](#Detection)
-- [语义分割API](#Segmentation)
-- [OCR文字识别API](#OCR)
-- [人脸检测API](#FaceDetection)
-- [识别结果说明](#VisionResults)
-- [RuntimeOption说明](#RuntimeOption)
-- [可视化接口API](#Visualize)
-- [模型使用示例](#Demo)
-- [App示例工程使用方式](#App)
+- [Download and Configure SDK](#SDK)
+- [Image Classification API](#Classification)
+- [Target Detection API](#Detection)
+- [Semantic Segmentation API](#Segmentation)
+- [OCR Text Recognition API](#OCR)
+- [Face Detection API](#FaceDetection)
+- [Identification Result Description](#VisionResults)
+- [Runtime Option Description](#RuntimeOption)
+- [Visualization Interface](#Visualize)
+- [Examples of How to Use Models](#Demo)
+- [How to Use the App Sample Project](#App)
-## 下载及配置SDK
+## Download and Configure SDK
-### 下载 FastDeploy Android SDK
-Release版本(Java SDK 目前仅支持Android,当前版本为 1.0.0)
+### Download FastDeploy Android SDK
+The release version is as follows (the Java SDK currently supports Android only, and the current version is 1.0.0):
-| 平台 | 文件 | 说明 |
+| Platform | File | Description |
| :--- | :--- | :---- |
-| Android Java SDK | [fastdeploy-android-sdk-1.0.0.aar](https://bj.bcebos.com/fastdeploy/release/android/fastdeploy-android-sdk-1.0.0.aar) | NDK 20 编译产出, minSdkVersion 15,targetSdkVersion 28 |
+| Android Java SDK | [fastdeploy-android-sdk-1.0.0.aar](https://bj.bcebos.com/fastdeploy/release/android/fastdeploy-android-sdk-1.0.0.aar) | Built with NDK 20; minSdkVersion 15, targetSdkVersion 28 |
-更多预编译库信息,请参考: [download_prebuilt_libraries.md](../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+For more information about the prebuilt libraries, please refer to [download_prebuilt_libraries.md](../../docs/cn/build_and_install/download_prebuilt_libraries.md).
-### 配置 FastDeploy Android SDK
+### Configure FastDeploy Android SDK
-首先,将fastdeploy-android-sdk-xxx.aar拷贝到您Android工程的libs目录下,其中`xxx`表示您所下载的SDK的版本号。
+First, copy fastdeploy-android-sdk-xxx.aar into the libs directory of your Android project, where `xxx` is the version number of the SDK you downloaded.
```shell
├── build.gradle
├── libs
@@ -45,7 +47,7 @@ Release版本(Java SDK 目前仅支持Android,当前版本为 1.0.0)
└── src
```
-然后,在您的Android工程中的build.gradble引入FastDeploy SDK,如下:
+Then, add the FastDeploy SDK to the build.gradle of your Android project:
```java
dependencies {
implementation fileTree(include: ['*.aar'], dir: 'libs')
@@ -54,349 +56,349 @@ dependencies {
}
```
-## 图像分类API
+## Image Classification API
-### PaddleClasModel Java API 说明
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PaddleClasModel初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
- - labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 imagenet1k_label_list.txt,每一行包含一个label
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
+### PaddleClasModel Java API Introduction
+- Model initialization API: the model can be initialized in two ways: directly through a constructor, or by calling the init function at an appropriate point in your program. The PaddleClasModel initialization parameters are described as follows:
+ - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel.
+ - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams.
+ - configFile: String, preprocessing configuration file of model inference, e.g. infer_cfg.yml.
+ - labelFile: String, optional, path to the label file, for visualization, e.g. imagenet1k_label_list.txt, in which each line contains a label.
+ - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option will be used.
```java
-// 构造函数: constructor w/o label file
-public PaddleClasModel(); // 空构造函数,之后可以调用init初始化
+// Constructor w/o label file
+public PaddleClasModel(); // An empty constructor; the model can be initialized by calling the init function later.
public PaddleClasModel(String modelFile, String paramsFile, String configFile);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
-// 手动调用init初始化: call init manually w/o label file
+// Call init manually w/o label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model prediction API: the prediction API comes in two forms: direct prediction and prediction with visualization. Direct prediction only runs inference and returns the result, without saving any image or rendering anything to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result to a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Directly predict: do not save images or render result to Bitmap.
public ClassifyResult predict(Bitmap ARGB8888Bitmap);
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+// Predict and visualize: save the visualized image to the specified path and render the result to a Bitmap.
public ClassifyResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold);
-public ClassifyResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // 只渲染 不保存图片
+public ClassifyResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // Only render the result to the Bitmap; do not save the image.
```
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: call release() to free the model's resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources.
+public boolean initialized(); // Check if initialization is successful.
```
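
The constructor-or-init lifecycle above (initialize, check with initialized(), predict, then release()) is the contract every FastDeploy Android model class follows. A minimal sketch of that contract, in Python for illustration only (the `ToyModel` class below is hypothetical, not part of the SDK):

```python
class ToyModel:
    """Hypothetical class mirroring the FastDeploy Android model lifecycle."""

    def __init__(self, model_file=None, params_file=None, config_file=None):
        self._initialized = False
        # Method one: constructor-style initialization when all files are given.
        if model_file and params_file and config_file:
            self.init(model_file, params_file, config_file)

    def init(self, model_file, params_file, config_file):
        # Method two: manual initialization, mirroring `public boolean init(...)`.
        self._initialized = all([model_file, params_file, config_file])
        return self._initialized

    def initialized(self):
        # Mirrors `public boolean initialized()`.
        return self._initialized

    def predict(self, image):
        # Inference is only valid on an initialized model.
        if not self._initialized:
            raise RuntimeError("model not initialized")
        return {"label": "demo", "score": 0.99}

    def release(self):
        # Mirrors `public boolean release()`: true on success, false otherwise.
        if not self._initialized:
            return False
        self._initialized = False
        return True

# Empty constructor followed by a manual init call (method two).
m = ToyModel()
assert not m.initialized()
assert m.init("model.pdmodel", "model.pdiparams", "infer_cfg.yml")
result = m.predict("some_bitmap")
assert m.release() and not m.initialized()
```

Either path leaves the object in the same state, which is why the SDK offers both: the empty constructor lets you defer the (potentially slow) initialization to a suitable point in your app's startup.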
-## 目标检测API
+## Target Detection API
-### PicoDet Java API 说明
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PicoDet初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
- - labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 coco_label_list.txt,每一行包含一个label
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
-
+### PicoDet Java API Introduction
+- Model initialization API: the model can be initialized in two ways: directly through a constructor, or by calling the init function at an appropriate point in your program. The PicoDet initialization parameters are described as follows:
+ - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel.
+ - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams.
+ - configFile: String, preprocessing configuration file of model inference, e.g. infer_cfg.yml.
+ - labelFile: String, optional, path to the label file, for visualization, e.g. coco_label_list.txt, in which each line contains a label.
+ - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option will be used.
+
```java
-// 构造函数: constructor w/o label file
-public PicoDet(); // 空构造函数,之后可以调用init初始化
+// Constructor w/o label file.
+public PicoDet(); // An empty constructor; the model can be initialized by calling the init function later.
public PicoDet(String modelFile, String paramsFile, String configFile);
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile);
public PicoDet(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
-// 手动调用init初始化: call init manually w/o label file
+// Call init manually w/o label file.
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model prediction API: the prediction API comes in two forms: direct prediction and prediction with visualization. Direct prediction only runs inference and returns the result, without saving any image or rendering anything to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result to a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Directly predict: do not save images or render result to Bitmap.
public DetectionResult predict(Bitmap ARGB8888Bitmap);
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+// Predict and visualize: save the visualized image to the specified path and render the result to a Bitmap.
public DetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold);
-public DetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // 只渲染 不保存图片
+public DetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // Only render the result to the Bitmap; do not save the image.
```
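
The scoreThreshold parameter in the visualization overloads above controls which detections get drawn. Its effect can be sketched as a simple confidence filter; the detection data and `filter_by_score` helper below are hypothetical stand-ins, not SDK code:

```python
def filter_by_score(detections, score_threshold):
    """Keep only detections whose confidence reaches the threshold,
    as scoreThreshold does before results are rendered."""
    return [d for d in detections if d["score"] >= score_threshold]

# Hypothetical raw detections as a model might return them.
detections = [
    {"label": "person", "score": 0.92},
    {"label": "dog",    "score": 0.45},
    {"label": "car",    "score": 0.78},
]

kept = filter_by_score(detections, 0.5)
assert [d["label"] for d in kept] == ["person", "car"]
```

Raising the threshold trades recall for cleaner visualizations: low-confidence boxes are simply never drawn.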
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: call release() to free the model's resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources.
+public boolean initialized(); // Check if initialization is successful.
```
-## OCR文字识别API
+## OCR Text Recognition API
-### PP-OCRv2 & PP-OCRv3 Java API 说明
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。 PP-OCR初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 ppocr_keys_v1.txt,每一行包含一个label
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
-与其他模型不同的是,PP-OCRv2 和 PP-OCRv3 包含 DBDetector、Classifier和Recognizer等基础模型,以及PPOCRv2和PPOCRv3等pipeline类型。
+### PP-OCRv2 & PP-OCRv3 Java API Introduction
+- Model initialization API: the model can be initialized in two ways: directly through a constructor, or by calling the init function at an appropriate point in your program. The PP-OCR initialization parameters are described as follows:
+ - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel.
+ - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams.
+ - labelFile: String, optional, path to the label file, for visualization, e.g. ppocr_keys_v1.txt, in which each line contains a label.
+ - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option will be used.
+Unlike other models, PP-OCRv2 and PP-OCRv3 contain base models such as DBDetector, Classifier and Recognizer, and pipeline types such as PPOCRv2 and PPOCRv3.
```java
-// 构造函数: constructor w/o label file
+// Constructor w/o label file
public DBDetector(String modelFile, String paramsFile);
public DBDetector(String modelFile, String paramsFile, RuntimeOption option);
public Classifier(String modelFile, String paramsFile);
public Classifier(String modelFile, String paramsFile, RuntimeOption option);
public Recognizer(String modelFile, String paramsFile, String labelPath);
public Recognizer(String modelFile, String paramsFile, String labelPath, RuntimeOption option);
-public PPOCRv2(); // 空构造函数,之后可以调用init初始化
+public PPOCRv2(); // An empty constructor; the model can be initialized by calling the init function later.
// Constructor w/o classifier
public PPOCRv2(DBDetector detModel, Recognizer recModel);
public PPOCRv2(DBDetector detModel, Classifier clsModel, Recognizer recModel);
-public PPOCRv3(); // 空构造函数,之后可以调用init初始化
+public PPOCRv3(); // An empty constructor; the model can be initialized by calling the init function later.
// Constructor w/o classifier
public PPOCRv3(DBDetector detModel, Recognizer recModel);
public PPOCRv3(DBDetector detModel, Classifier clsModel, Recognizer recModel);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model prediction API: the prediction API comes in two forms: direct prediction and prediction with visualization. Direct prediction only runs inference and returns the result, without saving any image or rendering anything to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result to a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Directly predict: do not save images or render result to Bitmap.
public OCRResult predict(Bitmap ARGB8888Bitmap);
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+// Predict and visualize: save the visualized image to the specified path and render the result to a Bitmap.
public OCRResult predict(Bitmap ARGB8888Bitmap, String savedImagePath);
-public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // 只渲染 不保存图片
+public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // Only render the result to the Bitmap; do not save the image.
```
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: call release() to free the model's resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources.
+public boolean initialized(); // Check if initialization is successful.
```
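
As noted above, PP-OCRv2/v3 are pipelines composed from a DBDetector, an optional Classifier, and a Recognizer, which is why there are constructors both with and without the classifier. The composition (the detector finds text boxes, the optional classifier corrects orientation, the recognizer reads each box) can be sketched with hypothetical stand-in stages; none of the names below are SDK code:

```python
class OCRPipeline:
    """Hypothetical sketch of the PP-OCR composition. The classifier stage is
    optional, mirroring PPOCRv3(detModel, recModel) vs
    PPOCRv3(detModel, clsModel, recModel)."""

    def __init__(self, detector, recognizer, classifier=None):
        self.detector = detector
        self.classifier = classifier
        self.recognizer = recognizer

    def predict(self, image):
        boxes = self.detector(image)            # find text regions
        if self.classifier is not None:
            boxes = [self.classifier(b) for b in boxes]  # fix orientation
        return [self.recognizer(b) for b in boxes]       # read each region

# Stand-in stages: detect two boxes, tag orientation, "recognize" text.
detect = lambda img: ["box1", "box2"]
classify = lambda box: box + ":upright"
recognize = lambda box: "text(" + box + ")"

with_cls = OCRPipeline(detect, recognize, classifier=classify)
without_cls = OCRPipeline(detect, recognize)
assert with_cls.predict("img") == ["text(box1:upright)", "text(box2:upright)"]
assert without_cls.predict("img") == ["text(box1)", "text(box2)"]
```

Skipping the classifier saves one model's worth of memory and latency, at the cost of misreading rotated text, which is the trade-off the two constructor forms expose.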
-## 语义分割API
+## Semantic Segmentation API
-### PaddleSegModel Java API 说明
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PaddleSegModel初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
+### PaddleSegModel Java API Introduction
+- Model initialization API: the model can be initialized in two ways: either directly through the constructor, or by calling the init function at an appropriate point in the program. The PaddleSegModel initialization parameters are described below:
+ - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel.
+ - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams.
+ - configFile: String, preprocessing configuration file of model inference, e.g. infer_cfg.yml.
+ - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option will be used.
```java
-// 构造函数: constructor w/o label file
-public PaddleSegModel(); // 空构造函数,之后可以调用init初始化
+// Constructor w/o label file
+public PaddleSegModel(); // An empty constructor; call the init function later to initialize.
public PaddleSegModel(String modelFile, String paramsFile, String configFile);
public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
-// 手动调用init初始化: call init manually w/o label file
+// Call init manually w/o label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model prediction API: the prediction API comes in two forms, direct prediction and prediction with visualization. Direct prediction runs inference only: no image is saved and no result is rendered to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result to a Bitmap (currently ARGB8888-format Bitmaps are supported), which can later be displayed in the camera view.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Direct prediction: does not save the image or render the result to a Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap);
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+// Predict and visualize: run inference, save the visualized image to the specified path, and render the result to the Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
-public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // 只渲染 不保存图片
-// 修改result,而非返回result,关注性能的用户可以将以下接口与SegmentationResult的CxxBuffer一起使用
+public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // Render only; do not save the image.
+// Modify result in place instead of returning it. For performance-sensitive use, combine the following interfaces with the CxxBuffer in SegmentationResult.
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
```
-- 设置竖屏或横屏模式: 对于 PP-HumanSeg系列模型,必须要调用该方法设置竖屏模式为true.
+- Set vertical or horizontal screen mode: for the PP-HumanSeg series models, this method must be called to set the vertical screen mode to true.
```java
public void setVerticalScreenFlag(boolean flag);
```
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: call the release() API to free the model's resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; true means success, false means failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources.
+public boolean initialized(); // Check if initialization is successful.
```
-## 人脸检测API
+## Face Detection API
-### SCRFD Java API 说明
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PaddleSegModel初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
+### SCRFD Java API Introduction
+- Model initialization API: the model can be initialized in two ways: either directly through the constructor, or by calling the init function at an appropriate point in the program. The SCRFD initialization parameters are described below:
+ - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel.
+ - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams.
+ - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option will be used.
```java
-// 构造函数: constructor w/o label file
-public SCRFD(); // 空构造函数,之后可以调用init初始化
+// Constructor w/o label file.
+public SCRFD(); // An empty constructor; call the init function later to initialize.
public SCRFD(String modelFile, String paramsFile);
public SCRFD(String modelFile, String paramsFile, RuntimeOption option);
-// 手动调用init初始化: call init manually w/o label file
+// Call init manually w/o label file.
public boolean init(String modelFile, String paramsFile, RuntimeOption option);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model prediction API: the prediction API comes in two forms, direct prediction and prediction with visualization. Direct prediction runs inference only: no image is saved and no result is rendered to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result to a Bitmap (currently ARGB8888-format Bitmaps are supported), which can later be displayed in the camera view.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Direct prediction: does not save the image or render the result to a Bitmap.
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap);
-public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold); // 设置置信度阈值和NMS阈值
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold); // Set the confidence threshold and NMS IoU threshold.
+// Predict and visualize: run inference, save the visualized image to the specified path, and render the result to the Bitmap.
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float confThreshold, float nmsIouThreshold);
-public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // 只渲染 不保存图片
+public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // Render only; do not save the image.
```
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: call the release() API to free the model's resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; true means success, false means failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources.
+public boolean initialized(); // Check if initialization is successful.
```
-### YOLOv5Face Java API 说明
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PaddleSegModel初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
+### YOLOv5Face Java API Introduction
+- Model initialization API: the model can be initialized in two ways: either directly through the constructor, or by calling the init function at an appropriate point in the program. The YOLOv5Face initialization parameters are described below:
+ - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel.
+ - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams.
+ - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option will be used.
```java
-// 构造函数: constructor w/o label file
-public YOLOv5Face(); // 空构造函数,之后可以调用init初始化
+// Constructor w/o label file.
+public YOLOv5Face(); // An empty constructor; call the init function later to initialize.
public YOLOv5Face(String modelFile, String paramsFile);
public YOLOv5Face(String modelFile, String paramsFile, RuntimeOption option);
-// 手动调用init初始化: call init manually w/o label file
+// Call init manually w/o label file.
public boolean init(String modelFile, String paramsFile, RuntimeOption option);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model prediction API: the prediction API comes in two forms, direct prediction and prediction with visualization. Direct prediction runs inference only: no image is saved and no result is rendered to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result to a Bitmap (currently ARGB8888-format Bitmaps are supported), which can later be displayed in the camera view.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Direct prediction: does not save the image or render the result to a Bitmap.
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap);
-public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold); // 设置置信度阈值和NMS阈值
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold); // Set the confidence threshold and NMS IoU threshold.
+// Predict and visualize: run inference, save the visualized image to the specified path, and render the result to the Bitmap.
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float confThreshold, float nmsIouThreshold);
-public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // 只渲染 不保存图片
+public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // Render only; do not save the image.
```
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: call the release() API to free the model's resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; true means success, false means failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources.
+public boolean initialized(); // Check if initialization is successful.
```
-## 识别结果说明
+## Result Descriptions
-- 图像分类ClassifyResult说明
+- Image classification ClassifyResult description
```java
public class ClassifyResult {
- public float[] mScores; // [n] 每个类别的得分(概率)
- public int[] mLabelIds; // [n] 分类ID 具体的类别类型
- public boolean initialized(); // 检测结果是否有效
+ public float[] mScores; // [n] Score (probability) for each class.
+ public int[] mLabelIds; // [n] Class ID for each prediction.
+ public boolean initialized(); // Check whether the result is valid.
}
```
-其他参考:C++/Python对应的ClassifyResult说明: [api/vision_results/classification_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/classification_result.md)
+See also: the corresponding C++/Python ClassifyResult description: [api/vision_results/classification_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/classification_result.md)
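Since mScores and mLabelIds are parallel arrays, picking the top-1 prediction is a simple argmax. A minimal sketch with mock data (TopClass, argmax, and the arrays below are illustrative, not part of the FastDeploy SDK):

```java
// Hypothetical sketch: pick the top-1 class from ClassifyResult-style
// parallel arrays (mScores / mLabelIds). Mock data only.
class TopClass {
    // Returns the index of the highest score.
    public static int argmax(float[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        float[] mScores = {0.10f, 0.75f, 0.15f}; // [n] score per class
        int[] mLabelIds = {281, 285, 287};       // [n] class IDs
        int top = argmax(mScores);
        System.out.println("top-1 label=" + mLabelIds[top] + " score=" + mScores[top]);
    }
}
```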
-- 目标检测DetectionResult说明
+- Object detection DetectionResult description
```java
public class DetectionResult {
- public float[][] mBoxes; // [n,4] 检测框 (x1,y1,x2,y2)
- public float[] mScores; // [n] 每个检测框得分(置信度,概率值)
- public int[] mLabelIds; // [n] 分类ID
- public boolean initialized(); // 检测结果是否有效
+ public float[][] mBoxes; // [n,4] Detection boxes (x1,y1,x2,y2).
+ public float[] mScores; // [n] Score (confidence/probability) for each box.
+ public int[] mLabelIds; // [n] Class ID for each box.
+ public boolean initialized(); // Check whether the result is valid.
}
```
-其他参考:C++/Python对应的DetectionResult说明: [api/vision_results/detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/detection_result.md)
+See also: the corresponding C++/Python DetectionResult description: [api/vision_results/detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/detection_result.md)
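The parallel arrays in DetectionResult make score filtering straightforward: keep only the indices whose score exceeds a threshold, then read the matching rows of mBoxes. A minimal sketch with mock data (FilterBoxes is an illustrative helper, not an SDK class):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: filter DetectionResult-style parallel arrays
// (mBoxes / mScores) by a score threshold. Mock data only.
class FilterBoxes {
    // Returns the indices of boxes whose score is above the threshold.
    public static List<Integer> keepAbove(float[] scores, float threshold) {
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < scores.length; i++) {
            if (scores[i] > threshold) kept.add(i);
        }
        return kept;
    }

    public static void main(String[] args) {
        float[][] mBoxes = {{10, 10, 50, 50}, {20, 30, 80, 90}};
        float[] mScores = {0.30f, 0.85f};
        for (int i : keepAbove(mScores, 0.5f)) {
            float[] b = mBoxes[i];
            System.out.println("box (" + b[0] + "," + b[1] + "," + b[2] + "," + b[3] + ")");
        }
    }
}
```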
-- OCR文字识别OCRResult说明
+- OCR text recognition OCRResult description
```java
public class OCRResult {
- public int[][] mBoxes; // [n,8] 表示单张图片检测出来的所有目标框坐标 每个框以8个int数值依次表示框的4个坐标点,顺序为左下,右下,右上,左上
- public String[] mText; // [n] 表示多个文本框内被识别出来的文本内容
- public float[] mRecScores; // [n] 表示文本框内识别出来的文本的置信度
- public float[] mClsScores; // [n] 表示文本框的分类结果的置信度
- public int[] mClsLabels; // [n] 表示文本框的方向分类类别
- public boolean initialized(); // 检测结果是否有效
+ public int[][] mBoxes; // [n,8] Coordinates of all boxes detected in a single image. Each box is 8 ints encoding its 4 corner points, in the order lower-left, lower-right, upper-right, upper-left.
+ public String[] mText; // [n] Text recognized in each box.
+ public float[] mRecScores; // [n] Confidence of the text recognized in each box.
+ public float[] mClsScores; // [n] Confidence of the classification result for each box.
+ public int[] mClsLabels; // [n] Orientation class of each box.
+ public boolean initialized(); // Check whether the result is valid.
}
```
-其他参考:C++/Python对应的OCRResult说明: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
+See also: the corresponding C++/Python OCRResult description: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
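Each row of mBoxes packs the 4 corner points into 8 ints, in the documented order lower-left, lower-right, upper-right, upper-left. A minimal sketch unpacking one box into (x, y) pairs (OcrBox and its mock box are illustrative, not part of the SDK):

```java
// Hypothetical sketch: unpack one OCRResult-style box (8 ints = 4 corner
// points, ordered lower-left, lower-right, upper-right, upper-left).
class OcrBox {
    // Returns the 4 corners as {x, y} pairs in the documented order.
    public static int[][] corners(int[] box8) {
        int[][] pts = new int[4][2];
        for (int p = 0; p < 4; p++) {
            pts[p][0] = box8[2 * p];     // x of point p
            pts[p][1] = box8[2 * p + 1]; // y of point p
        }
        return pts;
    }

    public static void main(String[] args) {
        int[] box = {12, 90, 200, 90, 200, 40, 12, 40};
        int[][] pts = corners(box);
        System.out.println("lower-left = (" + pts[0][0] + "," + pts[0][1] + ")");
    }
}
```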
-- 语义分割SegmentationResult结果说明
+- Semantic segmentation SegmentationResult description
```java
public class SegmentationResult {
- public int[] mLabelMap; // 预测到的label map 每个像素位置对应一个label HxW
- public float[] mScoreMap; // 预测到的得分 map 每个像素位置对应一个score HxW
- public long[] mShape; // label map实际的shape (H,W)
- public boolean mContainScoreMap = false; // 是否包含 score map
- // 用户可以选择直接使用CxxBuffer,而非通过JNI拷贝到Java层,
- // 该方式可以一定程度上提升性能
- public void setCxxBufferFlag(boolean flag); // 设置是否为CxxBuffer模式
- public boolean releaseCxxBuffer(); // 手动释放CxxBuffer!!!
- public boolean initialized(); // 检测结果是否有效
+ public int[] mLabelMap; // The predicted label map; each pixel corresponds to one label (HxW).
+ public float[] mScoreMap; // The predicted score map; each pixel corresponds to one score (HxW).
+ public long[] mShape; // The actual shape (H,W) of the label map.
+ public boolean mContainScoreMap = false; // Whether a score map is included.
+ // You can choose to use the CxxBuffer directly instead of copying it to the
+ // Java layer through JNI; this can improve performance to some extent.
+ public void setCxxBufferFlag(boolean flag); // Enable or disable CxxBuffer mode.
+ public boolean releaseCxxBuffer(); // Release the CxxBuffer manually!
+ public boolean initialized(); // Check whether the result is valid.
}
```
-其他参考:C++/Python对应的SegmentationResult说明: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
+See also: the corresponding C++/Python SegmentationResult description: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
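mLabelMap is a flattened, row-major HxW array whose dimensions are given by mShape. A minimal sketch showing how to read the label at a given pixel (LabelMap and its mock data are illustrative, not part of the SDK):

```java
// Hypothetical sketch: read the label at pixel (row, col) from a
// SegmentationResult-style flattened label map with shape {H, W}.
class LabelMap {
    public static int labelAt(int[] labelMap, long[] shape, int row, int col) {
        int w = (int) shape[1];         // shape = {H, W}
        return labelMap[row * w + col]; // row-major indexing
    }

    public static void main(String[] args) {
        long[] mShape = {2, 3};         // H=2, W=3
        int[] mLabelMap = {0, 0, 1,
                           1, 1, 0};    // mock 2x3 label map
        System.out.println(labelAt(mLabelMap, mShape, 1, 2));
    }
}
```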
-- 人脸检测FaceDetectionResult结果说明
+- Face detection FaceDetectionResult description
```java
public class FaceDetectionResult {
- public float[][] mBoxes; // [n,4] 检测框 (x1,y1,x2,y2)
- public float[] mScores; // [n] 每个检测框得分(置信度,概率值)
- public float[][] mLandmarks; // [nx?,2] 每个检测到的人脸对应关键点
- int mLandmarksPerFace = 0; // 每个人脸对应的关键点个数
- public boolean initialized(); // 检测结果是否有效
+ public float[][] mBoxes; // [n,4] Detection boxes (x1,y1,x2,y2).
+ public float[] mScores; // [n] Score (confidence/probability) for each box.
+ public float[][] mLandmarks; // [nx?,2] Keypoints of each detected face.
+ int mLandmarksPerFace = 0; // Number of keypoints per face.
+ public boolean initialized(); // Check whether the result is valid.
}
```
-其他参考:C++/Python对应的FaceDetectionResult说明: [api/vision_results/face_detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/face_detection_result.md)
+See also: the corresponding C++/Python FaceDetectionResult description: [api/vision_results/face_detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/face_detection_result.md)
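mLandmarks stores the keypoints of all faces back to back, so face i owns rows i*mLandmarksPerFace through (i+1)*mLandmarksPerFace - 1. A minimal sketch slicing out one face's keypoints (FaceLandmarks and its mock data are illustrative, not part of the SDK):

```java
import java.util.Arrays;

// Hypothetical sketch: slice the keypoints of face i out of a flattened
// FaceDetectionResult-style mLandmarks array ([n*k, 2], k = mLandmarksPerFace).
class FaceLandmarks {
    public static float[][] forFace(float[][] landmarks, int landmarksPerFace, int faceIdx) {
        int start = faceIdx * landmarksPerFace;
        return Arrays.copyOfRange(landmarks, start, start + landmarksPerFace);
    }

    public static void main(String[] args) {
        int mLandmarksPerFace = 2;                // 2 mock keypoints per face
        float[][] mLandmarks = {{1, 2}, {3, 4},   // face 0
                                {5, 6}, {7, 8}};  // face 1
        float[][] face1 = forFace(mLandmarks, mLandmarksPerFace, 1);
        System.out.println(face1[0][0] + "," + face1[0][1]);
    }
}
```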
-## RuntimeOption说明
+## Runtime Option Description
-- RuntimeOption设置说明
+- RuntimeOption setting description
```java
public class RuntimeOption {
- public void enableLiteFp16(); // 开启fp16精度推理
- public void disableLiteFP16(); // 关闭fp16精度推理
- public void enableLiteInt8(); // 开启int8精度推理,针对量化模型
- public void disableLiteInt8(); // 关闭int8精度推理
- public void setCpuThreadNum(int threadNum); // 设置线程数
- public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
- public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
+ public void enableLiteFp16(); // Enable fp16 precision inference.
+ public void disableLiteFP16(); // Disable fp16 precision inference.
+ public void enableLiteInt8(); // Enable int8 precision inference, for quantized models.
+ public void disableLiteInt8(); // Disable int8 precision inference.
+ public void setCpuThreadNum(int threadNum); // Set the number of CPU threads.
+ public void setLitePowerMode(LitePowerMode mode); // Set the power mode.
+ public void setLitePowerMode(String modeStr); // Set the power mode by string.
}
```
-## 可视化接口
+## Visualization Interface
-FastDeploy Android SDK同时提供一些可视化接口,可用于快速验证推理结果。以下接口均把结果result渲染在输入的Bitmap上。具体的可视化API接口如下:
+FastDeploy Android SDK also provides visualization interfaces that can be used to quickly verify inference results. The interfaces below all render the result onto the input Bitmap. The specific visualization APIs are as follows:
```java
public class Visualize {
- // 默认参数接口
+ // Default parameter interface.
public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result);
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result);
public static boolean visFaceDetection(Bitmap ARGB8888Bitmap, FaceDetectionResult result);
public static boolean visOcr(Bitmap ARGB8888Bitmap, OCRResult result);
public static boolean visSegmentation(Bitmap ARGB8888Bitmap, SegmentationResult result);
- // 有可设置参数的可视化接口
- // visDetection: 可设置阈值(大于该阈值的框进行绘制)、框线大小、字体大小、类别labels等
+ // Visualization interfaces with configurable parameters.
+ // visDetection: you can set the score threshold (boxes above the threshold are drawn), box line size, font size, class labels, etc.
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, float scoreThreshold);
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, float scoreThreshold, int lineSize, float fontSize);
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, String[] labels);
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, String[] labels, float scoreThreshold, int lineSize, float fontSize);
- // visClassification: 可设置阈值(大于该阈值的框进行绘制)、字体大小、类别labels等
+ // visClassification: you can set the score threshold (results above the threshold are drawn), font size, labels, etc.
public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, float scoreThreshold,float fontSize);
public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, String[] labels);
public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, String[] labels, float scoreThreshold,float fontSize);
- // visSegmentation: weight背景权重
+ // visSegmentation: weight is the background blending weight.
public static boolean visSegmentation(Bitmap ARGB8888Bitmap, SegmentationResult result, float weight);
- // visFaceDetection: 线大小、字体大小等
+ // visFaceDetection: line size, font size, etc.
public static boolean visFaceDetection(Bitmap ARGB8888Bitmap, FaceDetectionResult result, int lineSize, float fontSize);
}
```
-对应的可视化类型为:
+The corresponding visualization class can be imported as:
```java
import com.baidu.paddle.fastdeploy.vision.Visualize;
```
-## 模型使用示例
+## Examples of How to Use Models
-- 模型调用示例1:使用构造函数以及默认的RuntimeOption
+- Example 1: use the constructor and the default RuntimeOption.
```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
@@ -405,90 +407,92 @@ import android.opengl.GLES20;
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
-// 初始化模型
+// Initialize model.
PicoDet model = new PicoDet("picodet_s_320_coco_lcnet/model.pdmodel",
"picodet_s_320_coco_lcnet/model.pdiparams",
"picodet_s_320_coco_lcnet/infer_cfg.yml");
-// 模型推理
+// Model inference.
DetectionResult result = model.predict(ARGB8888ImageBitmap);
-// 释放模型资源
+// Release model resources.
model.release();
```
-- 模型调用示例2: 在合适的程序节点,手动调用init,并自定义RuntimeOption
+- Example 2: call init manually at an appropriate point in the program and customize the RuntimeOption.
```java
-// import 同上 ...
+// Imports are the same as above ...
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
-// 新建空模型
+// Create a new empty model.
PicoDet model = new PicoDet();
-// 模型路径
+// Model path.
String modelFile = "picodet_s_320_coco_lcnet/model.pdmodel";
String paramFile = "picodet_s_320_coco_lcnet/model.pdiparams";
String configFile = "picodet_s_320_coco_lcnet/infer_cfg.yml";
-// 指定RuntimeOption
+// Set RuntimeOption.
RuntimeOption option = new RuntimeOption();
option.setCpuThreadNum(2);
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
option.enableLiteFp16();
-// 使用init函数初始化
+// Initialize with the init function.
model.init(modelFile, paramFile, configFile, option);
-// Bitmap读取、模型预测、资源释放 同上 ...
+// Bitmap reading, model prediction, and resource release are the same as above ...
```
-## App示例工程使用方式
+## How to Use the App Sample Project
-FastDeploy在java/android/app目录下提供了一些示例工程,以下将介绍示例工程的使用方式。由于java/android目录下同时还包含JNI工程,因此想要使用示例工程的用户还需要配置NDK,如果您只关心Java API的使用,并且不想配置NDK,可以直接跳转到以下详细的案例链接。
+FastDeploy provides several sample projects in the java/android/app directory; this section describes how to use them. Since the java/android directory also contains the JNI project, users who want to build the sample projects also need to configure the NDK. If you only care about the Java API and don't want to configure the NDK, skip to the detailed example links below.
-- [图像分类App示例工程](../../examples/vision/classification/paddleclas/android)
-- [目标检测App示例工程](../../examples/vision/detection/paddledetection/android)
-- [OCR文字识别App示例工程](../../examples/vision/ocr/PP-OCRv2/android)
-- [人像分割App示例工程](../../examples/vision/segmentation/paddleseg/android)
-- [人脸检测App示例工程](../../examples/vision/facedet/scrfd/android)
+- [App sample project of image classification](../../examples/vision/classification/paddleclas/android)
+- [App sample project of object detection](../../examples/vision/detection/paddledetection/android)
+- [App sample project of OCR text recognition](../../examples/vision/ocr/PP-OCRv2/android)
+- [App sample project of portrait segmentation](../../examples/vision/segmentation/paddleseg/android)
+- [App sample project of face detection](../../examples/vision/facedet/scrfd/android)
-### 环境准备
+### Environment Preparation
-1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
-2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
+1. Install Android Studio in your local environment; see the [Android Studio official website](https://developer.android.com/studio) for detailed installation instructions.
+2. Prepare an Android phone and enable USB debugging. To enable it: `Phone Settings -> Find Developer Options -> Turn on Developer Options and USB Debug Mode`.
-**注意**:如果您的 Android Studio 尚未配置 NDK ,请根据 Android Studio 用户指南中的[安装及配置 NDK 和 CMake ](https://developer.android.com/studio/projects/install-ndk)内容,预先配置好 NDK 。您可以选择最新的 NDK 版本,或者使用 FastDeploy Android 预测库版本一样的 NDK
-### 部署步骤
+**Note**: If your Android Studio does not yet have an NDK configured, please configure one first following [Install and configure the NDK and CMake](https://developer.android.com/studio/projects/install-ndk) in the Android Studio User Guide. You can choose the latest NDK version, or use the same NDK version as the FastDeploy Android prediction library.
-1. App示例工程位于 `fastdeploy/java/android/app` 目录
-2. 用 Android Studio 打开 `fastdeploy/java/android` 工程,注意是`java/android`目录
-3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
+### Deployment Steps
+
+1. The App sample project is located in directory `fastdeploy/java/android/app`.
+2. Open `fastdeploy/java/android` project by Android Studio, please note that the directory is `java/android`.
+3. Connect your phone to the computer, turn on USB debugging and file transfer mode, and connect your device in Android Studio (the phone must allow app installation over USB).
+