add api docs

This commit is contained in:
jiangjiajun
2022-10-11 06:01:18 +00:00
parent c4625a2cb6
commit 964aaf051c
8 changed files with 50 additions and 32 deletions
+21
@@ -0,0 +1,21 @@
# Generating the API Documentation
## Requirements
- Ubuntu
- python >= 3.6
## Installing Prerequisites
```
sudo apt-get install doxygen
pip install -r python/requirements.txt
```
## Generating the Documentation
```
bash build.sh
```
After the script finishes, open `index.html` in the current directory with a browser to view the API documentation.
+9
@@ -0,0 +1,9 @@
#!/bin/bash
set -e  # stop on the first failed step

CURRENT_DIR=${PWD}
# C++ API docs via Doxygen
cd cpp
doxygen
# Python API docs via Sphinx
cd ../python
make html
cd ${CURRENT_DIR}
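Before running the build script, it can help to confirm the tools it shells out to are actually on `PATH`. A hypothetical pre-flight check (not part of this commit) could look like:

```python
import shutil

# Hypothetical helper: report whether each tool the docs build
# requires is available on PATH.
def check_tool(name):
    return "found" if shutil.which(name) else "missing"

for tool in ("doxygen", "make"):
    print(tool, check_tool(tool))
```

If anything prints `missing`, install it first (e.g. `sudo apt-get install doxygen`) rather than letting the build fail midway.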
+3 -2
@@ -791,6 +791,7 @@ WARN_LOGFILE =
# Note: If this tag is empty the current directory is searched.
INPUT = ../../../fastdeploy/
INPUT += ./main_page.md
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@@ -982,7 +983,7 @@ FILTER_SOURCE_PATTERNS =
# (index.html). This can be useful if you have a project on for instance GitHub
# and want to reuse the introduction page also for the doxygen output.
USE_MDFILE_AS_MAINPAGE = main_page.md
USE_MDFILE_AS_MAINPAGE = ./main_page.md
#---------------------------------------------------------------------------
# Configuration options related to source browsing
@@ -1666,7 +1667,7 @@ EXTRA_SEARCH_MAPPINGS =
# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
# The default value is: YES.
GENERATE_LATEX = YES
GENERATE_LATEX = NO
# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
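Summarizing the Doxyfile hunks above, the effective overrides this commit makes amount to the following fragment (a sketch; all other tags keep their defaults):

```
INPUT                  = ../../../fastdeploy/
INPUT                 += ./main_page.md
USE_MDFILE_AS_MAINPAGE = ./main_page.md
GENERATE_LATEX         = NO
```

Pulling `main_page.md` into `INPUT` is what lets `USE_MDFILE_AS_MAINPAGE` promote it to the generated `index.html`, and disabling LaTeX output keeps the build HTML-only.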
+13
@@ -0,0 +1,13 @@
<h1 id="fastdeploy-api-documents">FastDeploy API Documents</h1>
<ul>
<li><p><a href="./python/_build/html">Python API</a></p>
</li>
<li><p><a href="./cpp/docs/html">C++ API</a></p>
</li>
</ul>
<h2 id="more-information">More Information</h2>
<ul>
<li>GitHub: <a href="https://github.com/PaddlePaddle/FastDeploy">https://github.com/PaddlePaddle/FastDeploy</a> </li>
<li>Email: fastdeploy@baidu.com</li>
<li><a href="https://join.slack.com/t/fastdeployworkspace/shared_invite/zt-1hhvpb279-iw2pNPwrDaMBQ5OQhO3Siw">Click here to join FastDeploy Slack</a></li>
</ul>
+2 -1
@@ -35,7 +35,8 @@ extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.mathjax',
'sphinx_markdown_tables',
'sphinx.ext.viewcode'
'sphinx.ext.viewcode',
'sphinx.ext.githubpages',
]
autoclass_content = 'both'
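In context, the relevant portion of the Sphinx `conf.py` after this change would read as follows (a sketch containing only the lines touched here; the rest of the file is unchanged):

```python
# Sphinx extensions used for the Python API docs. The trailing comma after
# viewcode fixes the missing separator, and githubpages emits a .nojekyll
# file so GitHub Pages serves the _static/ directory correctly.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.mathjax',
    'sphinx_markdown_tables',
    'sphinx.ext.viewcode',
    'sphinx.ext.githubpages',
]

# Merge the class docstring and __init__ docstring into one description.
autoclass_content = 'both'
```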
+2 -2
@@ -1,8 +1,8 @@
# Python Deployment
# PPYOLOE Python Deployment
Make sure FastDeploy is installed in your development environment. Refer to [FastDeploy Installation](../../build_and_install/) to install a prebuilt package, or build and install from source as needed.
This document uses the PaddleDetection object detection model PicoDet as an example to demonstrate inference on CPU
This document uses the PaddleDetection object detection model PPYOLOE as an example to demonstrate inference on CPU
## 1. Get the Model and Test Image
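The renamed document demonstrates CPU inference with PPYOLOE. A minimal sketch of that flow, assuming the FastDeploy Python wheel is installed and an exported PPYOLOE model has been downloaded, might look like the following (all file and directory names are illustrative):

```python
import cv2
import fastdeploy as fd

# Illustrative paths: an exported PPYOLOE model directory and a test image.
model_dir = "ppyoloe_crn_l_300e_coco"

# Run inference on CPU.
option = fd.RuntimeOption()
option.use_cpu()

model = fd.vision.detection.PPYOLOE(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/infer_cfg.yml",
    runtime_option=option,
)

im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)
```

The class and option names follow FastDeploy's public Python API; swapping `use_cpu()` for a GPU or backend-specific option changes the execution device without touching the rest of the code.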
-27
@@ -1,27 +0,0 @@
# FastDeploy C++ API Summary
## Runtime
FastDeploy Runtime can be used as an inference engine. With the same code, Paddle/ONNX models can be deployed on different devices through different backends.
The currently supported backends are listed below:
| Backend | Hardware | Support Model Format | Platform |
| :------ | :------- | :------------------- | :------- |
| Paddle Inference | CPU/Nvidia GPU | Paddle | Windows(x64)/Linux(x64) |
| ONNX Runtime | CPU/Nvidia GPU | Paddle/ONNX | Windows(x64)/Linux(x64/aarch64)/Mac(x86/arm64) |
| TensorRT | Nvidia GPU | Paddle/ONNX | Windows(x64)/Linux(x64)/Jetson |
| OpenVINO | CPU | Paddle/ONNX | Windows(x64)/Linux(x64)/Mac(x86) |
### Example code
- [Python examples](./)
- [C++ examples](./)
### Related APIs
- [RuntimeOption](./structfastdeploy_1_1RuntimeOption.html)
- [Runtime](./structfastdeploy_1_1Runtime.html)
## Vision Models
| Task | Model | API | Example |
| :---- | :---- | :---- | :----- |
| object detection | PaddleDetection/PPYOLOE | [fastdeploy::vision::detection::PPYOLOE](./classfastdeploy_1_1vision_1_1detection_1_1PPYOLOE.html) | [C++](./)/[Python](./) |