[Cherry-Pick][OP][Feature] Unify the limit_thinking_content_length CUDA operator; support response length limits and injected sequences (#6506)

* Initial plan

* feat: migrate core PR6493 changes to release 2.4

Co-authored-by: yuanlehome <23653004+yuanlehome@users.noreply.github.com>

* fix ci

* fix ci

* fix ci

* fix ci

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: yuanlehome <23653004+yuanlehome@users.noreply.github.com>
Author: Yuanle Liu
Date: 2026-02-26 10:02:01 +08:00
Committed by: GitHub
Parent: 2bd6263f82
Commit: 2b79d971f1
27 changed files with 721 additions and 1660 deletions
@@ -71,6 +71,7 @@ class SamplingParams:
can complete the sequence.
max_tokens: Maximum number of tokens to generate per output sequence.
reasoning_max_tokens: Maximum number of tokens to generate for reasoning per output sequence.
response_max_tokens: Maximum number of tokens to generate for response per output sequence.
min_tokens: Minimum number of tokens to generate per output sequence
before EOS or stop_token_ids can be generated
logprobs: Number of log probabilities to return per output token.
@@ -97,6 +98,7 @@ class SamplingParams:
stop_seqs_len: Optional[int] = None
max_tokens: Optional[int] = None
reasoning_max_tokens: Optional[int] = None
response_max_tokens: Optional[int] = None
min_tokens: int = 1
logprobs: Optional[int] = None
prompt_logprobs: Optional[int] = None
@@ -135,6 +137,7 @@ class SamplingParams:
stop_token_ids=None,
max_tokens=None,
reasoning_max_tokens=None,
response_max_tokens=None,
min_tokens=1,
logprobs=None,
prompt_logprobs=None,
@@ -159,6 +162,7 @@ class SamplingParams:
stop_token_ids=stop_token_ids,
max_tokens=max_tokens if max_tokens is not None else 8192,
reasoning_max_tokens=reasoning_max_tokens,
response_max_tokens=response_max_tokens,
min_tokens=min_tokens,
logprobs=logprobs,
prompt_logprobs=prompt_logprobs,
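To illustrate how the new field behaves, here is a minimal standalone sketch of just the `SamplingParams` fields this diff touches. It is a hypothetical mirror for demonstration, not the real class (which lives in the project's sampling-params module); `response_max_tokens` caps response-phase tokens alongside the existing `reasoning_max_tokens` cap, and `from_optional` reproduces the diff's fallback of `max_tokens` to 8192 when unset.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SamplingParams:
    """Hypothetical standalone mirror of the fields touched by this diff."""

    max_tokens: Optional[int] = None            # total generation budget
    reasoning_max_tokens: Optional[int] = None  # cap on thinking-phase tokens
    response_max_tokens: Optional[int] = None   # new: cap on response-phase tokens
    min_tokens: int = 1

    @classmethod
    def from_optional(
        cls,
        max_tokens=None,
        reasoning_max_tokens=None,
        response_max_tokens=None,
        min_tokens=1,
    ):
        # Mirrors the diff's default: fall back to 8192 when max_tokens is unset.
        return cls(
            max_tokens=max_tokens if max_tokens is not None else 8192,
            reasoning_max_tokens=reasoning_max_tokens,
            response_max_tokens=response_max_tokens,
            min_tokens=min_tokens,
        )


params = SamplingParams.from_optional(reasoning_max_tokens=1024, response_max_tokens=512)
print(params.max_tokens, params.reasoning_max_tokens, params.response_max_tokens)
# → 8192 1024 512
```

Note that `response_max_tokens` defaults to `None`, so existing requests that only set `reasoning_max_tokens` are unaffected; the response cap is applied only when the caller opts in.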