[Other] Remove useless comments in PaddleSeg quantize example. (#735)

* Improve OCR Readme

* Add Initialize function to PP-OCR

* Make all the model links come from PaddleOCR

* Improve OCR readme

* Add Readme for vision results

* Add check for label file in postprocess of Rec model

* Add comments to create API docs

* Improve OCR comments

* Rename OCR and add comments

* Make sure previous python example works

* Fix Rec model bug

* Add SetTrtMaxBatchSize function for TensorRT

* Add SetTrtMaxBatchSize Pybind

* Add set_trt_max_batch_size python function

* Set TRT dynamic shape in PPOCR examples

* Fix PPOCRv2 python example

* Fix PPOCR dynamic input shape bug

* Remove useless code

* Fix PPOCR bug

* Remove useless comments in PaddleSeg example

Co-authored-by: Jason <jiangjiajun@baidu.com>
This commit is contained in:
yunyaoXYY
2022-11-29 10:53:30 +08:00
committed by GitHub
parent ef28af7807
commit cfb0a983ea
2 changed files with 1 additions and 25 deletions
@@ -43,28 +43,6 @@ void InitAndInfer(const std::string& model_dir, const std::string& image_file,
 }
 
-// int main(int argc, char* argv[]) {
-//   if (argc < 3) {
-//     std::cout
-//         << "Usage: infer_demo path/to/model_dir path/to/image run_option, "
-//            "e.g ./infer_model ./ppseg_model_dir ./test.jpeg 0"
-//         << std::endl;
-//     std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
-//                  "with gpu; 2: run with gpu and use tensorrt backend."
-//               << std::endl;
-//     return -1;
-//   }
-//   fastdeploy::RuntimeOption option;
-//   option.UseCpu();
-//   option.UsePaddleInferBackend();
-//   std::cout<<"Xyy-debug, enable Paddle Backend==!";
-//   std::string model_dir = argv[1];
-//   std::string test_image = argv[2];
-//   InitAndInfer(model_dir, test_image, option);
-//   return 0;
-// }
 
 int main(int argc, char* argv[]) {
   if (argc < 4) {
@@ -86,11 +64,9 @@ int main(int argc, char* argv[]) {
   if (flag == 0) {
     option.UseCpu();
     option.UseOrtBackend();
-    std::cout<<"Use ORT!"<<std::endl;
   } else if (flag == 1) {
     option.UseCpu();
     option.UsePaddleInferBackend();
-    std::cout<<"Use PP!"<<std::endl;
   }
 
   std::string model_dir = argv[1];