luukunn
3651113ee5
[DataProcessor] Remove ENABLE_V1_DATA_PROCESSOR ( #7052 )
* remove ENABLE_V1_DATA_PROCESSOR
* fix unit test
* fix unit test
2026-04-01 09:53:41 +08:00
qwes5s5
daa95244f7
abort requests ( #6992 )
2026-03-31 11:02:26 +08:00
luukunn
e6804ba97d
[Optimization] Streaming requests return complete special tokens. ( #6998 )
* return special token
* add completions
* update
* fix
* add prompt_token_ids& completion_token_ids=None,
* fix unit test
2026-03-26 09:49:43 +08:00
luukunn
c3d8db85c4
[Optimization] Update ZMQ server ( #6735 )
* add batch zmq send response
* update
* Revert "update"
This reverts commit 0234a25b47.
* update
* remove lock
* fix unit test
* add unit test
* add unit test
* pre commit
* add unit test
* fix unit test
* add unit test
* fix worker>1
* update zmq_worker_pid
* fix unit test
* fix unit test
* fix unit test
* add unit test
* fix unit test
* fix first token time
* fix logprobs
* add unit test
* op
* remove debug log
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
2026-03-19 21:53:16 +08:00
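The batched ZMQ send in this commit amortizes per-message overhead: instead of one socket send per token response, the sender flushes everything already queued as a single frame. A minimal sketch of that batching idea, assuming a hypothetical `drain_and_batch` helper and response-dict layout (not FastDeploy's actual API):

```python
import json
import queue

def drain_and_batch(q, max_batch=32):
    """Drain up to max_batch pending responses without blocking.

    Everything already waiting in the queue leaves in one batch, so the
    ZMQ socket sees a single send instead of one send per response.
    """
    batch = []
    while len(batch) < max_batch:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
    return batch

# Usage: three queued responses leave as one serialized frame.
q = queue.Queue()
for i in range(3):
    q.put({"request_id": f"req-{i}", "token": i})
frame = json.dumps(drain_and_batch(q)).encode()
print(len(json.loads(frame)))  # 3
```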
qwes5s5
375b5b7b21
[Feature] Log Format Normalization and Trace Log Optimization ( #6370 )
* log refactor
* log refactor 2
* log refactor 3
2026-03-03 11:31:45 +08:00
luukunn
6b968a76f1
[Optimization] update data_processor & add tool parser plugins ( #6096 )
* update data_processor
* fix unit test
* fix unit test
* add unit test
* add tool parser plugins
* fix tool call
* fix tool call
* fix tool call
* fix unit test
* fix unit test
* add unit test
* fix unit test
* fix unit test
* fix unit test
2026-01-22 17:17:32 +08:00
kxz2002
6e416c62dd
[Optimization] The pre- and post-processing pipelines no longer perform dict conversion ( #5494 )
* to_request_for_infer initial commit
* refactor to from_chat_completion_request
* preprocess use request initial commit
* bugfix
* refactor processors to use request
* bug fix
* refactor Request from_generic_request
* post process initial commit
* bugfix
* postprocess second commit
* bugfix
* serving_embedding initial commit
* serving_reward initial commit
* bugfix
* replace function name
* async_llm initial commit
* offline initial commit and fix bug
* bugfix
* fix async_llm
* remove add speculate_metrics into data
* fix logprobs bug
* fix echo bug
* fix bug
* fix reasoning_max_tokens
* bugfix
* bugfix and modify unittest
* bugfix and modify unit test
* bugfix
* bugfix
* bugfix
* modify unittest
* fix error when reasoning_content is none for text_processor
* remove some unnecessary logic
* revert removed logic
* implement add and set methods for RequestOutput and refactor code
* modify unit test
* modify unit test
* union process_request and process_request_obj
* remove a unit test
* union process_response and process_response_obj
* support qwen3_vl_processor
* modify unittest and remove comments
* fix prompt_logprobs
* fix codestyle
* add v1
* v1
* fix unit test
* fix unit test
* fix pre-commit
* fix
* add process request
* add process request
* fix
* fix
* fix unit test
* fix unit test
* fix unit test
* fix unit test
* fix unit test
* remove file
* add unit test
* add unit test
* add unit test
* fix unit test
* fix unit test
* fix
* fix
---------
Co-authored-by: Jiaxin Sui <95567040+plusNew001@users.noreply.github.com >
Co-authored-by: luukunn <981429396@qq.com >
Co-authored-by: luukunn <83932082+luukunn@users.noreply.github.com >
Co-authored-by: Zhang Yulong <35552275+ZhangYulongg@users.noreply.github.com >
2026-01-22 00:50:52 +08:00
qwes5s5
b2a2e11551
[Feature] Stop inference for the corresponding request in the online service after the client disconnects. ( #5320 )
* request disconnect
* request disconnect
* fix bug
* fix bug--amend
---------
Co-authored-by: root <root@yq01-sys-rpm26xc1knu.yq01.baidu.com >
2026-01-16 11:46:13 +08:00
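The disconnect-abort flow in this commit can be sketched as follows. All three interfaces here are assumptions, not FastDeploy's actual ones: `stream` yields tokens, `is_disconnected` is an async poll (in a FastAPI/Starlette server this would be `request.is_disconnected()`), and `abort` tells the engine to stop scheduling the request:

```python
import asyncio

async def generate_with_abort(stream, is_disconnected, abort):
    """Forward tokens until the client goes away, then abort the request
    so the engine can free its KV cache / scheduler slot immediately."""
    async for token in stream:
        if await is_disconnected():
            await abort()
            return
        yield token

# Demo with fakes: the client "disconnects" after the second token.
async def demo():
    async def stream():
        for t in ["a", "b", "c", "d"]:
            yield t

    state = {"polls": 0, "aborted": False}

    async def is_disconnected():
        state["polls"] += 1
        return state["polls"] > 2

    async def abort():
        state["aborted"] = True

    out = [t async for t in generate_with_abort(stream(), is_disconnected, abort)]
    return out, state["aborted"]
```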
qwes5s5
b3ca7f041a
[BugFix] Fix redundant prompt_logprobs in the second chunk of streaming response when return_token_ids is enabled for v1/completions and fix trace file name ( #5829 )
* fix prompt logprobs bug
* fix trace file name
---------
Co-authored-by: qwes5s5 <root@yq01-sys-rpm26xc1knu.yq01.baidu.com >
2026-01-06 14:11:43 +08:00
Copilot
7d5282e158
[APIServer][Feature] Add configurable worker health check timeout via FD_WORKER_ALIVE_TIMEOUT ( #5865 )
* Initial plan
* Add configurable FD_WORKER_ALIVE_TIMEOUT environment variable
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
* Add test for FD_WORKER_ALIVE_TIMEOUT environment variable
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
* Update docs/zh/usage/environment_variables.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update docs/usage/environment_variables.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Improve test coverage to validate integration with check_health calls
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
* Remove test_worker_alive_timeout.py per reviewer feedback
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com >
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
2026-01-05 09:47:12 +08:00
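Only the variable name `FD_WORKER_ALIVE_TIMEOUT` comes from this commit; the default value, fallback behavior, and helper below are illustrative assumptions about how such a knob is typically read:

```python
import os

DEFAULT_WORKER_ALIVE_TIMEOUT = 10.0  # assumed default, in seconds

def worker_alive_timeout():
    """Read the worker health-check timeout from FD_WORKER_ALIVE_TIMEOUT,
    falling back to the default on a missing or invalid value."""
    raw = os.getenv("FD_WORKER_ALIVE_TIMEOUT")
    if raw is None:
        return DEFAULT_WORKER_ALIVE_TIMEOUT
    try:
        timeout = float(raw)
    except ValueError:
        return DEFAULT_WORKER_ALIVE_TIMEOUT
    return timeout if timeout > 0 else DEFAULT_WORKER_ALIVE_TIMEOUT

os.environ["FD_WORKER_ALIVE_TIMEOUT"] = "30"
print(worker_alive_timeout())  # 30.0
```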
kxz2002
cad2932990
[BugFix] Fix process_response_dict to support async in serving_completion ( #5758 )
* support process_response_dict async initial commit
* fix bug
* add unit test
* optimize
2025-12-26 17:40:58 +08:00
xiaolei373
a30b4da260
[Feature] Tracing: Fine-Grained Tracing for Request Latency Part1 ( #5458 )
2025-12-16 16:36:09 +08:00
GoldPancake
909059c60a
[Feature] Support for request-level speculative decoding metrics monitoring. ( #5518 )
* support spec metrics monitor per request
* fix bug
* remove debug log
* fix ut bugs
2025-12-12 12:22:18 +08:00
qwes5s5
d79438bb86
add detoken switch ( #5463 )
2025-12-10 21:44:02 +08:00
Juncai
80efe98f8d
[PD Disaggregation] Add timestamp for analyzing splitwise deployment ( #5317 )
* Add timestamp for analyzing splitwise deployment
* up
* up
* up
* up
* up
* up
* fix format
* fix
2025-12-08 10:08:44 +08:00
qwes5s5
117980dd4e
[LogProbs] Enable prompt logprobs output and modify the data transmission method for the online interface. ( #5089 )
* add prompt logprobs
* Merge prompt_logprobs_tensors and prompt_logprobs
* fix param check
* trigger ci
* fix unit test
* fix logprobs bug
2025-12-02 13:49:51 +08:00
kxz2002
97189079b9
[BugFix] unify max_tokens ( #4968 )
* unify max tokens
* modify and add unit test
* modify and add unit test
* modify and add unit tests
---------
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com >
2025-11-18 20:01:33 +08:00
qwes5s5
36216e62f0
[Log] Add trace log and add loggingInstrumentor tool ( #4692 )
* add trace logger and trace print
* trigger ci
* fix unittest
* translate notes and add copyright
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com >
2025-11-17 11:08:57 +08:00
Juncai
08ca0f6aea
[Feature] [PD] add simple router and refine splitwise deployment ( #4709 )
* add simple router and refine splitwise deployment
* fix
2025-11-06 14:56:02 +08:00
SunLei
2a9ed72533
feat: add support for API usage with multimodal models ( #4548 )
* feat: add support for API usage with multimodal models
* completion_tokens contains num_image_tokens
* remove test_request.py
* fix: paddle.device.is_compiled_with_cuda()
* fix test_unstream_without_logprobs
2025-10-28 20:23:46 +08:00
kxz2002
327fa4c255
[DataProcessor] add reasoning_tokens into usage info ( #4520 )
* add reasoning_tokens into usage info initial commit
* add unit tests
* modify unit test
* modify and add unit tests
* fix unit test
* move stream usage to processor
* modify processor
* modify test_logprobs
* modify test_logprobs.py
* modify stream reasoning tokens accumulation
* fix unit test
2025-10-25 16:57:58 +08:00
SunLei
ee915220bd
[Speculative Decoding] Add draft_logprobs Support for Speculative Decode MTP ( #4467 )
* feat: add draft_logprobs for Speculative Decode MTP
* feat: add draft_logprobs for Speculative Decode MTP
* feat: add draft_logprobs for Speculative Decode MTP
* fix: postprocess for speculative decode
* test: test_speculative_decoding_use_logprobs
* fix: test_completion_echo
* fix test_max_streaming_tokens
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
2025-10-21 14:57:50 +08:00
kxz2002
b5b993e48e
[Feature] support n parameter ( #4273 )
* support n parameter
* pre-commit check
* pre-commit check
* restore format_and_add_data
* update n_param
* bug fix index - str to int
* bug fix del child_task
* bug fix metrics
* add debug info
* add debug info2
* remove debug info
* change connecting symbol to '-'
* bugfix change connecting symbol
* bugfix change connecting symbol2
* unit tests fix
* unit test fix2
* unittest add param n=2
* n param add unit tests and adapt to echo
* pre-commit fix
* resolve review
* adjust stop reason
* add unittest for _create_chat_completion_choice
* modify unittest
* solve conflict
* solve conflict
* resolve conflict
---------
Co-authored-by: LiqinruiG <37392159+LiqinruiG@users.noreply.github.com >
Co-authored-by: gaoziyuan <m13689897706@163.com >
2025-10-17 20:51:59 +08:00
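The "support n parameter" work above, with its '-' connecting symbol for child tasks, suggests a fan-out like the following. The `fan_out` helper and the request-dict layout are illustrative assumptions, not FastDeploy's Request schema; only the id scheme `parent-i` reflects the bullets above:

```python
import copy

def fan_out(request, n):
    """Expand one request with n>1 into n independent child requests.

    Each child gets its own id (parent id + '-' + index) and generates
    a single completion; results are later regrouped under the parent.
    """
    children = []
    for i in range(n):
        child = copy.deepcopy(request)  # deep copy: children must not share state
        child["request_id"] = f"{request['request_id']}-{i}"
        child["n"] = 1
        children.append(child)
    return children

reqs = fan_out({"request_id": "chatcmpl-42", "prompt": "hi", "n": 2}, 2)
print([r["request_id"] for r in reqs])  # ['chatcmpl-42-0', 'chatcmpl-42-1']
```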
LiqinruiG
4251ac5e95
[Fix] remove text_after_process & raw_prediction ( #4421 )
* remove text_after_process & raw_prediction
* remove text_after_process & raw_prediction
2025-10-16 19:00:18 +08:00
ltd0924
28d1b6cd97
[BugFix] fix multinode bugs ( #4377 )
* [BugFix] fix multinode bugs
* Update test_config.py
* Update test_config.py
* Update test_config.py
---------
Co-authored-by: ltd0924 <luotingdan@baidu.com >
2025-10-15 11:43:39 +08:00
ltd0924
83720da79f
[Feature] support clear data ( #3601 )
* [Feature] support clear data
* update
* fix
* fix
* fix
* fix
* fix
* fix
* fix
2025-09-23 10:20:02 +08:00
xiaolei373
ddf5606263
Bugfix test exception ( #4171 )
* feat(log):add_request_and_response_log
* modify default error type
2025-09-19 11:48:49 +08:00
xiaolei373
98447beb4d
Add param valid log ( #4113 )
* feat(log):add_request_and_response_log
* [bugfix] add param valid log
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
2025-09-18 10:39:24 +08:00
xiaolei373
9ac539471d
[format] Valid para format error info ( #4035 )
* feat(log):add_request_and_response_log
* Align error messages with OpenAI
2025-09-12 19:05:17 +08:00
zhuzixuan
a47976e82d
[Echo] Support more types of prompt echo ( #4022 )
* wenxin-tools-700 When the prompt type is list[int] or list[list[int]], it needs to support echoing after decoding.
---------
Co-authored-by: luukunn <83932082+luukunn@users.noreply.github.com >
2025-09-11 19:34:44 +08:00
zhuzixuan
83bd55100b
[Optimize] Error messages about Model api. ( #3839 )
* add v1/models interface related
* add model parameters
* default model verification
* unit test
* check model err_msg
* unit test
* type annotation
* model parameter in response
* modify document description
* modify document description
* unit test
* verification
* verification update
* model_name
* pre-commit
* update test case
* update test case
* Update tests/entrypoints/openai/test_serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update tests/entrypoints/openai/test_serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update tests/entrypoints/openai/test_serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update tests/entrypoints/openai/test_serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update fastdeploy/entrypoints/openai/serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Improve error messages.
---------
Co-authored-by: yangzichao01 <yangzichao01@baidu.com >
Co-authored-by: Yzc216 <101054010+Yzc216@users.noreply.github.com >
Co-authored-by: LiqinruiG <37392159+LiqinruiG@users.noreply.github.com >
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
2025-09-08 15:52:26 +08:00
ltd0924
0c45e225d3
mv connection_manager init ( #3901 )
Co-authored-by: Yuanle Liu <yuanlehome@163.com >
2025-09-05 21:11:48 +08:00
SunLei
29628de6a7
Support for async processor added. ( #3869 )
* Support for async processor added.
* remove yappi code
---------
Co-authored-by: Yuanle Liu <yuanlehome@163.com >
2025-09-04 19:58:53 +08:00
luukunn
fc598d4c5a
add reasoning parser plugin ( #3811 )
* add reasoning parser plugin
* fix finish reason
2025-09-03 18:31:27 +08:00
ltd0924
bf0cf5167a
[BugFix] fix max streaming tokens invalid ( #3789 )
2025-09-02 13:57:32 +08:00
李泳桦
88297240e7
[feat] completion api supports passing input token ids in either prompt or prompt_token_ids ( #3311 )
* [feat] completion api supports passing input token ids in either `prompt` or `prompt_token_ids`
* [fix] update comment
* [fix] fix type error
* [test] add a unittest file for serving api test
* [test] try to fix ci error
* [chore] rename test function names
* [test] try to fix ci error
* [test] try to fix ci error
* [test] add tests for qwen
2025-08-29 14:19:42 +08:00
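The either/or contract this commit adds to the completion API (input as raw text or as pre-tokenized ids, but not both) can be sketched as below. `resolve_prompt` and the `tokenize` callback are hypothetical names, not FastDeploy's actual API:

```python
def resolve_prompt(prompt=None, prompt_token_ids=None, tokenize=None):
    """Accept input either as raw text or as pre-tokenized ids.

    Exactly one of `prompt` / `prompt_token_ids` must be given; ids
    pass through untouched, text goes through the tokenizer.
    """
    if (prompt is None) == (prompt_token_ids is None):
        raise ValueError("provide exactly one of prompt / prompt_token_ids")
    if prompt_token_ids is not None:
        return list(prompt_token_ids)
    return tokenize(prompt)

# Usage with a toy tokenizer: ids pass through, text is tokenized.
toy_tokenize = lambda s: [ord(c) for c in s]
print(resolve_prompt(prompt_token_ids=[1, 2, 3]))          # [1, 2, 3]
print(resolve_prompt(prompt="hi", tokenize=toy_tokenize))  # [104, 105]
```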
gaoziyuan
82e64b13e1
[NewFeature] Support dp multi api server && fix some bugs in mixed ep && merge develop ( #3598 )
* [Feature] update ep
* fix ci
* fix ci
* fix ci
* fix ci
* fix ci
* fix ci
* fix ci
* fix queue ports idx
* fix ci
* fix ci
* fix ci
* fix ci
* fix ci
* fix ci
* fix ci
* fix ci
* Update engine.py
* fix ci
* fix some bug in mixed ep
* add server fix and op fix
* rm some log
* fix code style
* ltd fix
* fix
* fix
* fix some bug
* fix bug
* fix bug
* fix style
* Update config.py
* Update splitwise_connector.py
* Update cache_messager.py
* Update __init__.py
* merge and fix
* Update engine.py
* Update common_engine.py
* Update run_ci_xpu.sh
* Update ernie_processor.py
* Update ernie_processor.py
---------
Co-authored-by: ltd0924 <ltd0924@sina.com >
Co-authored-by: ltd0924 <32387785+ltd0924@users.noreply.github.com >
2025-08-26 19:59:02 +08:00
SunLei
2f28f40d90
fix: replace list * n initialization with list comprehension to avoid shared references ( #3618 )
2025-08-26 17:53:31 +08:00
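The bug fixed here is the classic Python shared-reference pitfall: `[x] * n` with a mutable `x` repeats one reference n times, so mutating one slot mutates them all. A minimal illustration:

```python
# `[[]] * 3` repeats one reference: mutating slot 0 mutates every slot.
shared = [[]] * 3
shared[0].append("token")
print(shared)  # [['token'], ['token'], ['token']]

# A list comprehension builds three independent lists instead.
independent = [[] for _ in range(3)]
independent[0].append("token")
print(independent)  # [['token'], [], []]
```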
ltd0924
66c5addce4
[Bugfix] fix api server control signal bugs ( #3531 )
* Update serving_chat.py
* Update serving_completion.py
* Update serving_completion.py
2025-08-25 21:13:04 +08:00
李泳桦
8bea4b1e25
[fix] fix output tokens count in streaming completion api ( #3507 )
2025-08-21 18:19:13 +08:00
luukunn
371fb3f853
[Feature] add tool parser ( #3483 )
* add tool parser
* add x1 enable_thinking
* restart ci
* fix vl reasoning parser
* modify call style
* modify call style
* add offline enablethinking
* fix completion
* fix
* fix unit test
* fix unit test
* fix unit test
* fix vl reasoning parser
* fix vl reasoning parser
2025-08-21 17:25:44 +08:00
Yzc216
466cbb5a99
[Feature] Models api ( #3073 )
* add v1/models interface related
* add model parameters
* default model verification
* unit test
* check model err_msg
* unit test
* type annotation
* model parameter in response
* modify document description
* modify document description
* unit test
* verification
* verification update
* model_name
* pre-commit
* update test case
* update test case
* Update tests/entrypoints/openai/test_serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update tests/entrypoints/openai/test_serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update tests/entrypoints/openai/test_serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update tests/entrypoints/openai/test_serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update fastdeploy/entrypoints/openai/serving_models.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
---------
Co-authored-by: LiqinruiG <37392159+LiqinruiG@users.noreply.github.com >
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
2025-08-21 17:02:56 +08:00
ltd0924
51f68ae593
[Feature] add dealer manager to reuse the connection ( #3471 )
* [BugFix] fix control signal release failed
* [BugFix] fix control signal release failed
* update
* update
* update
* [Feature] add dealer manager to reuse the connection
* fix
* fix
* fix
* fix
* fix
* fix
* Create test_dealer_connection_manager.py
* Delete test/entrypoints/openai directory
* Update test_dealer_connection_manager.py
* Update test_dealer_connection_manager.py
2025-08-21 13:11:13 +08:00
memoryCoderC
31f639f10b
[Feature] add prompt_tokens and completion_tokens ( #3504 )
2025-08-21 10:23:27 +08:00
kevin
67298cf4c0
add error traceback info ( #3419 )
* add error traceback info
* update error msg
* update code
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
2025-08-19 19:32:04 +08:00
ltd0924
bca8905b40
[BugFix] fix control signal release failed ( #3390 )
* [BugFix] fix control signal release failed
* [BugFix] fix control signal release failed
* update
* update
* update
2025-08-19 13:51:38 +08:00
zhuzixuan
c95b3395e9
[BugFix] Support echo in the completion API ( #3245 )
* wenxin-tools-511, fix the issue that v1/completion could not echo.
* Support echo for multiple prompts
* Support streaming echo for multiple prompts
* Add unit tests for echo support in the completion API
* pre-commit
* Remove redundant test files
* Fix the unit test method for completion echo support
* Add unit test files
* Add unit tests
* unittest
* Add unit tests
* Fix unit tests
* Remove unnecessary asserts.
* Resubmit
* Update test methods
* ut
* Verify whether the unit test approach is correct
* Verify whether the unit test approach is correct
* Verify whether the unit test approach is correct 3
* Optimize unit test code, narrowing the test scope.
* Optimize unit test code 2, narrowing the test scope.
* Optimize unit test code 3, narrowing the test scope.
* support 'echo' in chat/completion.
* update
* update
* update
* update
* update
* update
* Add unit tests about token ids
* update
* Fix index error
* Fix index error
2025-08-19 10:41:51 +08:00
xiaolei373
d4f610e4cd
feat(log):add_request_and_response_log ( #3373 )
Deploy GitHub Pages / deploy (push) Has been cancelled
2025-08-13 23:27:41 +08:00
luukunn
eda83ca672
add Tool Parser ( #3272 )
* add tool-parser
* add tool-parser
* add tool parser
* add tool parser
* fix
* add offline
* add offline
* fix
* parsers:tool&reasoning
* Rename tool parser
* update
* fix reasoning-parser
* add requirements
* fix finish reason
* fix
* fix reasoning-parser
* fix
* fix
* fix
* fix
* fix
---------
Co-authored-by: zhuzixuan <zhuzixuan@baidu.com >
2025-08-13 01:06:55 +08:00
memoryCoderC
2d1a4cacdf
Completion add raw_prediction/text_after_process ( #3356 )
2025-08-12 23:06:45 +08:00