[Doc]Add English version of documents in examples (#1070)
English | [简体中文](README_CN.md)
# PaddleClas Quantized Model Deployment
FastDeploy supports deploying quantized models and provides a one-click tool for automatic model compression.
Users can quantize models themselves with this tool before deployment, or directly download and deploy the quantized models provided by FastDeploy.
## FastDeploy one-click auto-compression tool
FastDeploy provides a one-click auto-compression tool that allows users to quantize a model by simply providing a configuration file.
Refer to [one-click auto-compression tool](../../../../../tools/common_tools/auto_compression/) for details.
Note: the quantized classification model still requires the inference_cls.yaml file from the FP32 model folder. A model folder produced by your own quantization does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.
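The copy step above can be sketched in shell. The folder names are hypothetical placeholders, and the mkdir/touch lines only simulate the two model folders so the snippet is self-contained; substitute your actual FP32 and quantized model folders:

```shell
# Hypothetical folder names -- replace with your real model folders.
mkdir -p ResNet50_vd_infer resnet50_vd_quant      # stand-ins for the FP32 and quantized folders
touch ResNet50_vd_infer/inference_cls.yaml        # the yaml shipped with the FP32 model

# The actual step: copy the yaml into the quantized model folder.
cp ResNet50_vd_infer/inference_cls.yaml resnet50_vd_quant/
```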
## Download the quantized PaddleClas model
Users can also directly download the quantized models in the table below.
Benchmark table description:
- Runtime latency: the model's inference latency on each runtime, including CPU->GPU data copy, GPU inference, and GPU->CPU data copy time. It does not include the model's pre- and post-processing time.
- End2End latency: the model's latency in a real inference scenario, including pre- and post-processing.
- All reported latencies are averages over 1000 inference runs, in milliseconds.
- INT8 + FP16: FP16 inference is enabled on the runtime while inferring the INT8 quantized model.
- INT8 + FP16 + PM: Pinned Memory is enabled in addition to FP16 while inferring the INT8 quantized model, which speeds up the GPU->CPU data copy.
- Maximum speedup ratio: the FP32 latency divided by the fastest (lowest) INT8 inference latency.
- Quantization strategy: quantization-distillation training on a small amount of unlabeled data, with accuracy verified on the full validation set. The reported INT8 accuracy is therefore not the highest achievable INT8 accuracy.
- The CPU is an Intel(R) Xeon(R) Gold 6271C, with the number of CPU threads fixed to 1 in all tests. The GPU is a Tesla T4 with TensorRT version 8.4.15.
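As a sanity check on the notes above, the maximum speedup ratio can be reproduced from the latency columns of the tables below (pure illustration; `max_speedup` is not a FastDeploy API):

```python
def max_speedup(fp32_latency_ms, int8_latencies_ms):
    """FP32 latency divided by the fastest available INT8 latency (None = unsupported)."""
    fastest = min(t for t in int8_latencies_ms if t is not None)
    return round(fp32_latency_ms / fastest, 2)

# ResNet50_vd on TensorRT/GPU (Runtime table): FP32 3.55 ms; INT8 variants 0.99/0.98/1.06 ms
print(max_speedup(3.55, [0.99, 0.98, 1.06]))   # 3.62, matching the table
```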
### Runtime Benchmark
| Model |Inference Backend |Deployment Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8+FP16+PM Runtime Latency | Maximum Speedup Ratio | FP32 Top1 | INT8 Top1 | Quantization Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | TensorRT | GPU | 3.55 | 0.99|0.98|1.06 | 3.62 | 79.12 | 79.06 | Offline |
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle-TensorRT | GPU | 3.46 |None |0.87|1.03 | 3.98 | 79.12 | 79.06 | Offline |
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | ONNX Runtime | CPU | 76.14 | 35.43 |None|None | 2.15 | 79.12 | 78.87| Offline|
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle Inference | CPU | 76.21 | 24.01 |None|None | 3.17 | 79.12 | 78.55 | Offline|
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | TensorRT | GPU | 0.91 | 0.43 |0.49 | 0.54 | 2.12 |77.89 | 76.86 | Offline |
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle-TensorRT | GPU | 0.88| None| 0.49|0.51 | 1.80 |77.89 | 76.86 | Offline |
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | ONNX Runtime | CPU | 30.53 | 9.59|None|None | 3.18 |77.89 | 75.09 |Offline |
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle Inference | CPU | 12.29 | 4.68 | None|None|2.62 |77.89 | 71.36 |Offline |
### End2End Benchmark
| Model |Inference Backend |Deployment Hardware | FP32 End2End Latency | INT8 End2End Latency | INT8 + FP16 End2End Latency | INT8+FP16+PM End2End Latency | Maximum Speedup Ratio | FP32 Top1 | INT8 Top1 | Quantization Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | TensorRT | GPU | 4.92| 2.28|2.24|2.23 | 2.21 | 79.12 | 79.06 | Offline |
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle-TensorRT | GPU | 4.48|None |2.09|2.10 | 2.14 | 79.12 | 79.06 | Offline |
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | ONNX Runtime | CPU | 77.43 | 41.90 |None|None | 1.85 | 79.12 | 78.87| Offline|
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle Inference | CPU | 80.60 | 27.75 |None|None | 2.90 | 79.12 | 78.55 | Offline|
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | TensorRT | GPU | 2.19 | 1.48|1.57| 1.57 | 1.48 |77.89 | 76.86 | Offline |
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle-TensorRT | GPU | 2.04| None| 1.47|1.45 | 1.41 |77.89 | 76.86 | Offline |
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | ONNX Runtime | CPU | 34.02 | 12.97|None|None | 2.62 |77.89 | 75.09 |Offline |
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle Inference | CPU | 16.31 | 7.42 | None|None| 2.20 |77.89 | 71.36 |Offline |
## Detailed Deployment Tutorials
- [Python Deployment](python)
- [C++ Deployment](cpp)