Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2026-04-23 00:17:25 +08:00
[Optimization] The pre- and post-processing pipelines no longer perform dict conversion (#5494)
* to_request_for_infer initial commit
* refact to from_chat_completion_request
* preprocess use request initial commit
* bugfix
* processors refact to using request
* bug fix
* refact Request from_generic_request
* post process initial commit
* bugfix
* postprocess second commit
* bugfix
* serving_embedding initial commit
* serving_reward initial commit
* bugfix
* replace function name
* async_llm initial commit
* offline initial commit and fix bug
* bugfix
* fix async_llm
* remove add speculate_metrics into data
* fix logprobs bug
* fix echo bug
* fix bug
* fix reasoning_max_tokens
* bugfix
* bugfix and modify unittest
* bugfix and modify unit test
* bugfix
* bugfix
* bugfix
* modify unittest
* fix error when reasoning_content is none for text_processor
* remove some unnecessary logic
* revert removed logic
* implement add and set method for RequestOutput and refact code
* modify unit test
* modify unit test
* union process_request and process_request_obj
* remove a unit test
* union process_response and process_response_obj
* support qwen3_vl_processor
* modify unittest and remove comments
* fix prompt_logprobs
* fix codestyle
* add v1
* v1
* fix unit test
* fix unit test
* fix pre-commit
* fix
* add process request
* add process request
* fix
* fix
* fix unit test
* fix unit test
* fix unit test
* fix unit test
* fix unit test
* remove file
* add unit test
* add unit test
* add unit test
* fix unit test
* fix unit test
* fix
* fix

Co-authored-by: Jiaxin Sui <95567040+plusNew001@users.noreply.github.com>
Co-authored-by: luukunn <981429396@qq.com>
Co-authored-by: luukunn <83932082+luukunn@users.noreply.github.com>
Co-authored-by: Zhang Yulong <35552275+ZhangYulongg@users.noreply.github.com>
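The theme of the squashed commits above is replacing dict round-trips with a single `Request` object that the pre- and post-processing stages mutate directly. A minimal sketch of that idea, with illustrative names only (`Request`, `from_generic_request`, and `preprocess` here are simplified stand-ins, not FastDeploy's actual API):

```python
# Hypothetical sketch: convert the generic dict to a Request object once,
# at the boundary, then let every pipeline stage work on the object itself
# instead of converting to and from a dict per stage.
from dataclasses import dataclass, field


@dataclass
class Request:
    prompt: str = ""
    prompt_token_ids: list = field(default_factory=list)

    @classmethod
    def from_generic_request(cls, d: dict) -> "Request":
        # The only dict conversion in the whole pipeline.
        return cls(prompt=d.get("prompt", ""))


def preprocess(req: Request) -> Request:
    # Mutate the request in place; no intermediate dict is created.
    req.prompt_token_ids = [ord(c) for c in req.prompt]
    return req


req = preprocess(Request.from_generic_request({"prompt": "hi"}))
print(req.prompt_token_ids)  # [104, 105]
```

The diff below reflects this change in the test doubles: the mock formerly populated `text_after_process` on a dict, and now populates `prompt_tokens` on the object passed through the pipeline.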
@@ -82,9 +82,9 @@ class TestLodChatTemplate(unittest.IsolatedAsyncioTestCase):
         ):
             return prompt_token_ids

-        async def mock_format_and_add_data(current_req_dict):
-            current_req_dict["text_after_process"] = "你好"
-            return current_req_dict
+        async def mock_format_and_add_data(current_req_obj):
+            current_req_obj["prompt_tokens"] = "你好"
+            return current_req_obj

         self.chat_completion_handler.chat_completion_full_generator = mock_chat_completion_full_generator
         self.chat_completion_handler.engine_client.format_and_add_data = mock_format_and_add_data
@@ -92,6 +92,8 @@ class TestLodChatTemplate(unittest.IsolatedAsyncioTestCase):
+        self.chat_completion_handler.engine_client.semaphore.acquire = AsyncMock(return_value=None)
+        self.chat_completion_handler.engine_client.semaphore.status = MagicMock(return_value="mock_status")
         chat_completiom = await self.chat_completion_handler.create_chat_completion(request)
         print("1" * 50)
         print(chat_completiom)
         self.assertEqual(self.input_chat_template, chat_completiom["chat_template"])

     async def test_serving_chat_cus(self):
@@ -110,9 +112,9 @@ class TestLodChatTemplate(unittest.IsolatedAsyncioTestCase):
         ):
             return prompt_token_ids

-        async def mock_format_and_add_data(current_req_dict):
-            current_req_dict["text_after_process"] = "你好"
-            return current_req_dict
+        async def mock_format_and_add_data(current_req_obj):
+            current_req_obj["prompt_tokens"] = "你好"
+            return current_req_obj

         self.chat_completion_handler.chat_completion_full_generator = mock_chat_completion_full_generator
         self.chat_completion_handler.engine_client.format_and_add_data = mock_format_and_add_data
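The test replaces the engine client's async `semaphore.acquire` with an `AsyncMock` so that `create_chat_completion` can await it without a real semaphore. A self-contained sketch of that mocking pattern (the `EngineClient` shell here is a stand-in, not FastDeploy's class; the `AsyncMock`/`MagicMock` usage mirrors the diff above):

```python
# Sketch of the pattern in the test above: AsyncMock makes the mocked
# attribute awaitable, while MagicMock covers the plain synchronous call.
import asyncio
from unittest.mock import AsyncMock, MagicMock


class EngineClient:
    """Hypothetical stand-in for the real engine client."""


client = EngineClient()
client.semaphore = MagicMock()
client.semaphore.acquire = AsyncMock(return_value=None)
client.semaphore.status = MagicMock(return_value="mock_status")


async def main():
    # Works under `await` because acquire is an AsyncMock.
    await client.semaphore.acquire()
    return client.semaphore.status()


print(asyncio.run(main()))  # mock_status
```

Using a plain `MagicMock` for `acquire` would fail here, since awaiting a `MagicMock` raises a `TypeError`; `AsyncMock` returns a coroutine when called, which is what the handler awaits.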