Commit Graph

5086 Commits

Author SHA1 Message Date
freeliuzc 7a6c28781b [Speculative Decoding] Optimize attn_mask_offset and fix mtp bug (#7005)
* optimize attn_mask_offset and optimize mtp usage

* delete useless branch

* fix kernel format

* fix kernel runner
2026-03-25 01:52:06 -07:00
YuBaoku aee293be0f [CI] Optimize: add vl swap_test and remove useless code (#7000) 2026-03-25 11:33:56 +08:00
YuBaoku 4e8d503e3c Revert "add deepep precision test (#6984)" (#7004)
This reverts commit 522d12c25a.
2026-03-25 10:50:40 +08:00
chen c92e277cf1 [RL] RoPE without fmad opt (#6901)
* with env FD_ENABLE_RL=1, use fmul_rn(a*b) in rope
2026-03-24 21:19:53 +08:00
Zhang Yulong 6f5aa883f7 [benchmark] update benchmark tools (#6991)
* [benchmark] update benchmark tools

* [benchmark] update benchmark tools
2026-03-24 20:56:27 +08:00
周周周 522d12c25a add deepep precision test (#6984) 2026-03-24 19:51:33 +08:00
zhupengyang 5780345646 [XPU] fix speculate_verify (#6985) 2026-03-24 18:55:09 +08:00
SUN Dong 6cff780fdb [RL] Support moe_topk_select using Paddle native operators; add fused stack-transpose-quant for BlockWiseFP8 MoE weight quantization and a swiglu-fp8-quant op for DeepGemmFusedMoE training alignment (#6850)
* [RL] Add fused stack-transpose-quant for BlockWiseFP8 MoE weight quantization

* update

* update

* update

* support custom topk in DeepGemmFusedMoeMethod apply_tp

* apply_ep_prefill support moe_topk_select

* update

* add ut

* add ut

* add ut

* modify doc

* fix env and docs

* add ut

---------

Co-authored-by: zhanghonggeng <zhanghonggeng@baidu.com>
2026-03-24 11:12:39 +08:00
Nyakku Shigure 8b6bbb3504 [Optimization] Use a separate driver when using Triton with Paddle (#6897)
---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-24 10:56:00 +08:00
freeliuzc e87ce4b8cd [Speculative Decoding] refactor MTP and optimize spec-decoding postprocess (#6973)
* support new mtp

* refactor(speculate_decoding and mtp): optimize mtp structure logic. Update spec-branch status-process

* fix cuda-graph for spec-decoding

* fix xpu mtp and fix some notes

* fix unittest and optimize notes

* fix model status update in eos-branch
2026-03-24 10:19:01 +08:00
bukejiyu c62f6b4ea5 [Others] Fix PD reorder for MTP (#6792)
* fix pd reorder in mtp

* add ut

* update

* fix mtp
2026-03-23 21:10:22 +08:00
YuBaoku 1b276e62d4 [CI] Upgrade GitHub Actions for Node 24 compatibility (#6975)
* [CI] Upgrade GitHub Actions for Node 24 compatibility
2026-03-23 20:38:22 +08:00
Ding defaffd5fb 【Hackathon 10th Spring No.45】Support compiling FastDeploy on T4/V100 hardware -part (#6488)
* fix(custom_ops): gate unsupported ops for sm70/sm75 build

* fix(custom_ops): gate deepgemm exports to sm75+ only

* [BugFix][OP] deduplicate CUDA sources to avoid moe_deepgemm multiple definition

* revert two custom_ops files to 352f922f9
2026-03-23 19:16:23 +08:00
xiaoxiaohehe001 c1f7991aec [BugFix] add worker_process no grad (#6971) 2026-03-23 02:10:56 -07:00
wikilsh 5e469fc901 [RL][BugFix][Optimization] Support loading chunked part files and fix model path format in IPC snapshot strategy (#6852)
* [RL] Support loading chunked part files in IPC snapshot strategy

## Motivation

When using IPC snapshot for elastic recovery in RL training, loading a single large pdparams file causes a significant memory spike. This PR refactors `_update_ipc_snapshot` to support loading chunked part files to avoid the memory spike.

## Modifications

Refactored `_update_ipc_snapshot` in `fastdeploy/rl/dynamic_weight_manager.py` with a three-level loading priority:

1. **Chunked part files** (`model_state.tpR{id}.part{N}.pdparams`): Load multiple smaller shards sequentially, freeing memory between each chunk via `gc.collect()` to avoid memory spike.
2. **Single full file** (`model_state.tpR{id}.pdparams`): Legacy single-file loading path (preserved for backward compatibility).
3. **Shared fallback directory** (`/shared_ipc_meta/...`): Oldest legacy fallback path (preserved for backward compatibility).

Also fixed the rank ID in the file name pattern from hardcoded `tp0` to dynamic `paddle.distributed.get_rank()`.

## Checklist

- [ ] Add at least a tag in the PR title.
- [ ] Format your code, run `pre-commit` before commit.
- [ ] Add unit tests. Please write the reason in this PR if no unit tests.
- [ ] Provide accuracy results.
- [ ] If the current PR is submitting to the `release` branch, make sure the PR has been submitted to the `develop` branch, then cherry-pick it to the `release` branch with the `[Cherry-Pick]` PR tag.

Co-Authored-By: lishuaihui <lishuaihui@baidu.com>

* [RL][BugFix] Fix ambiguous model path format and add legacy fallback in IPC snapshot

## Motivation
The previous snapshot file naming `model_state.tp{rank}{id}` concatenated
rank and id without a separator, causing ambiguity (e.g., rank=1, id=234
and rank=12, id=34 both produce `tp1234`). Additionally, after the naming
format is updated, existing checkpoints saved in the old format would fail
to load during elastic recovery, causing unnecessary failures.

## Modifications
- Add dot separator between rank and id in snapshot file name:
  `model_state.tp{rank}{id}` → `model_state.tp{rank}.{id}`
- Add Priority 3 legacy fallback to load old-format files
  (`model_state.tp0{id}.pdparams`) for backward compatibility during
  rolling upgrades
- Update docstring and error message to reflect the new 4-level priority
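The collision described in the Motivation above can be reproduced directly; a minimal sketch using the file-name formats quoted in the commit message:

```python
def snapshot_name_old(rank, step_id):
    # Old format: no separator between rank and id -> ambiguous.
    return f"model_state.tp{rank}{step_id}.pdparams"


def snapshot_name_new(rank, step_id):
    # New format: dot separator makes the name unambiguous.
    return f"model_state.tp{rank}.{step_id}.pdparams"


# rank=1/id=234 and rank=12/id=34 collide under the old scheme:
assert snapshot_name_old(1, 234) == snapshot_name_old(12, 34)  # both "model_state.tp1234.pdparams"
assert snapshot_name_new(1, 234) != snapshot_name_new(12, 34)
```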

Co-Authored-By: lishuaihui <lishuaihui@baidu.com>

* [RL][Test] Add unit tests for DynamicWeightManager._update_ipc_snapshot

Cover all 4 loading priority branches (chunked part files, single full
pdparams, legacy format, shared directory fallback) with mock-based
tests to verify correct behavior without filesystem or GPU dependencies.

Co-Authored-By: lishuaihui <lishuaihui@baidu.com>

* [RL][Test] Remove unused import 'call' in test_update_ipc_snapshot.py

Co-Authored-By: lishuaihui <lishuaihui@baidu.com>

* Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

* [RL] Fix snapshot part index to match filename numbering

Parse part index from filename (e.g. .part0.) instead of using
enumerate index, so that logs and src_type stay consistent with
the actual file naming convention.
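Filename-based index parsing of this kind can be sketched as below; the helper name and exact regex are assumptions for illustration, not FastDeploy's code:

```python
import re


def part_index(filename):
    """Recover the part index from a shard filename, e.g.
    "model_state.tp0.7.part3.pdparams" -> 3, so logs and src_type use
    the number in the name rather than an enumerate() position."""
    m = re.search(r"\.part(\d+)\.", filename)
    if m is None:
        raise ValueError(f"no part index in {filename!r}")
    return int(m.group(1))
```

This stays correct even when `glob` returns shards out of order, e.g. `part10` sorting before `part2` lexicographically.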

Co-Authored-By: wikilsh <wiki_hui@qq.com>

---------

Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-03-23 16:17:41 +08:00
jc bb881c2c0a [PD Disaggregation] pd + cache_storage support vl model (#6906)
* pd + cache_storage support vl model

* support vl model

* fix test
2026-03-23 15:35:20 +08:00
周周周 5416da8c6e remove assert (#6970)
Co-authored-by: liuruian <liuruian@baidu.com>
2026-03-23 14:22:03 +08:00
jackyYang6 634d23a38a [Bugfix] Align thinking_budget behavior with ERNIE reasoning flow (#6934)
* [Bugfix] Align thinking_budget behavior with ERNIE reasoning flow

* [Docs] Fix thinking_budget markdown formatting

* [Test] Align ernie thinking budget test with process_request_dict
2026-03-23 14:15:55 +08:00
sunxin 7a78001be2 fix execute_model_normal in empty run (#6968) 2026-03-23 14:07:46 +08:00
luukunn 33e79f922a [Optimization] Optimize CPU utilization (#6950)
* Optimize CPU utilization
2026-03-22 23:02:39 +08:00
YuBaoku fdd12ff5ba [CI] Fix: incorrect downstream job execution when only build_gpu/xpu is skipped (#6958)
* [CI] Fix: incorrect downstream job execution when only build_gpu/xpu is skipped

* [CI] Fix: avoid skipping required jobs by moving skip logic to steps

* [CI] Fix: Invalid secret, github-token is not defined
2026-03-22 17:00:18 +08:00
YuBaoku 0b4c1cba9b [CI] Change 21b ep4 to tp1_dp4 in 4_cards_tests (#6745)
* [CI] Change 21b ep4 to tp1_dp4 in 4_cards_tests
2026-03-20 20:42:23 +08:00
YuBaoku 030820db4c [CI] Optimize CI: refine check-bypass/cancel logic and fix nightly task (#6939)
* [CI] Optimize CI: add check-bypass for workflow skip control

* fix ci_image_build and publish_job

* [CI] Optimize CI: add check-bypass and cancel

* [CI] update to PFCCLab/ci-bypass@v2
2026-03-20 19:34:45 +08:00
jackyYang6 00eb12f656 [BugFix][Models] Unify PaddleFormers fused QKV TP loading and stabilize fallback TP path (#6555)
* [BugFix][Models] avoid custom all-reduce in PaddleFormers fallback TP path and tighten TP-aware layout matching

* [BugFix][Models] unify PaddleFormers fused QKV TP loading and align fallback tests
2026-03-20 16:37:58 +08:00
SunLei 32b6900d01 fix code type (#6951) 2026-03-20 16:14:12 +08:00
AIbin bf7e2424d0 [Optimization][Feature] Support multiple batches of DSK-DSA (#6930)
* support DSA_MUTI_BATCH

* update test topk

* update dsk-dsa
2026-03-20 15:59:22 +08:00
周周周 1c38da2118 Make seq_lens_this_time/decoder/encoder equal shape (#6942) 2026-03-20 15:31:52 +08:00
Zhang Yulong 2b10ebc1f1 [benchmark] Refactor debug logging and payload handling (#6949)
* Refactor debug logging and payload handling

* Update backend_request_func.py
2026-03-20 15:04:10 +08:00
Zhang Yulong 3a4e139f65 [Benchmark] fix multi turn (#6948) 2026-03-20 13:22:30 +08:00
cloudforge1 aca733b95c [CI]【Hackathon 10th Spring No.32】load_weight_utils unit test (#6740)
* 【Hackathon 10th Spring No.32】Unit test for load_weight_utils.py

* [CI]【Hackathon 10th Spring No.32】rewrite load_weight_utils unit test

* [CI]【Hackathon 10th Spring No.32】improve load_weight_utils coverage to 83%

- Add test_load_ep_checkpoint_basic: exercises EP checkpoint loading with minimal fixture
- Add test_composite_ep_branch: covers EP path in load_composite_checkpoint
- Add test_get_weight_iterator_unordered: covers unordered sharded safetensors path

* [CI]【Hackathon 10th Spring No.32】align load_weight_utils test with gold standard (tmp_path, split tests)

* [CI]【Hackathon 10th Spring No.32】add coverage tests for load_weight_utils

- Add test_is_layers_grouped: test layers_are_grouped() with grouped, interleaved, and no-layer keys
- Add test_save_model_bf16_cache: exercise save_model decorator with is_checkpoint_bf16=True
- Add test_composite_checkpoint_ep: test load_composite_checkpoint use_ep=True branch
- Add test_composite_checkpoint_rank_mismatch: test tp_size != rank_dirs ValueError
- Add test_composite_checkpoint_kv_quant: test float8_e4m3fn kv_cache path
- Add __main__ block for direct execution

* [CI]【Hackathon 10th Spring No.32】raise load_weight_utils test delta

* [CI]【Hackathon 10th Spring No.32】cover TP sequence-parallel MoE load branches

* test: add load_reordered_experts, pre-sharded, and empty-state tests

---------

Co-authored-by: cloudforge1 <cloudforge1@users.noreply.github.com>
2026-03-20 13:14:30 +08:00
xjkmfa 3b203994e2 [Benchmark] Update Qwen3 vl 32k yaml (#6946) 2026-03-20 11:48:53 +08:00
xjkmfa a81116ad90 [Benchmark] Update Qwen3 vl dense yaml (#6945) 2026-03-20 11:26:47 +08:00
sunxin d77edf8fc9 opt wfp8afp8 triton moe (#6938) 2026-03-20 11:07:25 +08:00
mouxin 96b0ecea6b [Feature] Update Counter Release (#6943) 2026-03-20 10:51:37 +08:00
luukunn f4a79d4c00 [Optimization] Unified data processing for online and offline (#6891)
* remove process_request

* fix chat

* fix unit test

* remove process response

* fix unit test

* fix offline decode

* Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

* fix sampling_params

---------

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-03-19 21:56:09 +08:00
luukunn c3d8db85c4 [Optimization] Update ZMQ server (#6735)
* add batch zmq send response

* update

* Revert "update"

This reverts commit 0234a25b47.

* update

* remove lock

* fix unit test

* add unit test

* add unit test

* pre commit

* add unit test

* fix unit test

* add unit test

* fix worker>1

* update zmq_worker_pid

* fix unit test

* fix unit test

* fix unit test

* add unit test

* fix unit test

* fix first token time

* fix logprobs

* add unit test

* op

* remove debug log

---------

Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>
2026-03-19 21:53:16 +08:00
cloudforge1 9148562ed0 [CI]【Hackathon 10th Spring No.35】Add unit tests for resource_manager (#6734)
* [CI]【Hackathon 10th Spring No.35】Add unit tests for resource_manager

* [CI]【Hackathon 10th Spring No.35】Add unit tests for resource_manager

* [CI]【Hackathon 10th Spring No.35】add __main__ block

---------

Co-authored-by: cloudforge1 <cloudforge1@users.noreply.github.com>
Co-authored-by: CSWYF3634076 <wangyafeng@baidu.com>
2026-03-19 17:45:21 +08:00
YuBaoku 7141db0e01 [CI] Optimize CI: update nightly test_image build workflow (#6937) 2026-03-19 17:39:01 +08:00
周周周 b1c800b64b remove load_up_proj_weight_first (#6932) 2026-03-19 17:21:34 +08:00
sunxin 33e01f22a8 [Feature][Sampling] Extend top-k_top-p sampling to all backends and unify greedy decoding with top_k=1 (#6894)
* update sampling

* fix

* fix

* fix mtp

* fix test
2026-03-19 01:43:10 -07:00
YuBaoku 2b84a4276e [CI] Optimize CI: add timeout and cancel on PR close (#6933) 2026-03-19 15:54:30 +08:00
JYChen f95d8ca7df [RL] support qkrmsnorm use proxy-norm (#6862)
* support qkrmsnorm use paddle.nn.functional.rms_norm

* remove flags in fd
2026-03-18 23:27:26 -07:00
周周周 1a05744c4e nvfp4.py support ep (#6920) 2026-03-19 14:07:46 +08:00
周周周 c184a7cb69 remove source in weight_loader in moe.py (#6892) 2026-03-19 13:31:43 +08:00
Nyakku Shigure dd93f8ffb4 [Optimization] Skip compat guard when torch is not installed (#6913) 2026-03-19 11:29:27 +08:00
AIbin 4794a28f3d opt glm5 model (#6916) 2026-03-19 11:13:33 +08:00
jc dd55cda3c8 [CI] Add test for pd and cache storage (#6876)
* Add test for pd and cache storage

* up

* up

* fix bug

* fix bug

* up docker image

* up
2026-03-19 10:38:27 +08:00
gongweibao fb6c56dfd5 [BugFix][DataProcessor] Force top_k=1 for greedy decoding when temperature=0 (#6748)
* [BugFix] Force top_k=1 for greedy decoding when temperature=0

When temperature is set to 0 (greedy decoding), only setting temperature
to a small epsilon is insufficient — the sampling kernel may still pick
non-top-1 tokens. Explicitly set top_k=1 in all processors to guarantee
argmax behavior.

Additionally, add argmax fast-path in top_k_top_p_sampling() under
FD_DETERMINISTIC_MODE to handle non-rejection sampling backends that
ignore top_k parameter.
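The failure mode described above can be shown with a toy sampler (plain NumPy, not FastDeploy's kernel; all names here are illustrative): clamping temperature to an epsilon still leaves a nonzero chance of a non-top-1 token, so `top_k` is forced to 1 and handled by an argmax fast path.

```python
import numpy as np


def sample(logits, temperature, top_k, rng):
    """Toy sampler illustrating the fix: temperature=0 must mean argmax."""
    if temperature == 0:
        top_k = 1  # guarantee greedy decoding
        temperature = 1e-6  # epsilon only keeps the softmax finite
    if top_k == 1:
        return int(np.argmax(logits))  # argmax fast path, no sampling
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # keep only the top_k tokens, renormalize, then sample
    top = np.argsort(probs)[-top_k:]
    p = probs[top] / probs[top].sum()
    return int(rng.choice(top, p=p))
```

With `temperature=0` this always returns the argmax token; without the `top_k = 1` override, the epsilon-temperature path would still draw from a (sharply peaked but nonzero) distribution.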

* Extract greedy decoding from FD_DETERMINISTIC_MODE guard

top_k=1 → argmax is a correctness optimization, not deterministic-specific.
Remove the FD_DETERMINISTIC_MODE guard so all-greedy fast-path and
mixed-batch override work unconditionally.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update test_torch_model.py

---------

Co-authored-by: gongweibao <gognweibao@baidu.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>
2026-03-18 17:36:43 +08:00
AIbin 9b117aafac support glm-moe-dsa model (#6863) 2026-03-18 17:21:55 +08:00
YuBaoku 07543685ec [CI] Isolate cache and ccache for CUDA 13.0 build 2026-03-18 11:41:46 +08:00