lizhenyun01
2be8656c29
[BugFix] fix mtp split kv attention ( #5920 )
* [BugFix] fix mtp split kv attention
* clean code
* clean code
2026-01-07 04:07:31 -08:00
kevin
a76e8ae40c
[Feature] support rdma pd dy-c8 ( #5788 )
* add rdma pd dy-c8
* update code
2026-01-07 14:55:25 +08:00
周周周
f15df1ec89
Revert cuda check ( #5915 )
* commit
* commit
2026-01-07 14:40:18 +08:00
yangjianfengo1
59523b27de
opt w4afp8 ( #5853 )
2026-01-07 12:22:35 +08:00
周周周
83ae59431e
[BugFix] fix BatchMLAWithPagedKVCacheKernel O_tmp ( #5895 )
2026-01-06 15:39:06 +08:00
Yuanle Liu
5e729bc2ba
[OPs] ep_moe_expert_dispatch.cu dispatch num_experts_per_rank 5 ( #5890 )
2026-01-06 10:39:35 +08:00
周周周
ab553b3b8b
revert cuda_check ( #5883 )
2026-01-05 20:51:31 +08:00
lizexu123
1d3ae7c024
[BugFix] fix w4afp8 tp=8 ( #5868 )
* fix w4afp8 tp=8
* fix
2026-01-05 18:59:02 +08:00
chen
ac39c0f887
support fa3 qwen-vl rope ( #5869 )
2026-01-05 15:29:34 +08:00
sunxin
adb91dcacc
[BugFix] Fix wint4 ep issue caused by empty run ( #5870 )
2026-01-05 14:24:37 +08:00
周周周
e3957a5ebc
[Others] remove template NUM_EXPERTS_PER_RANK in permute_x_fp8_kernel ( #5620 )
2026-01-04 11:21:15 +08:00
Sunny-bot1
598d292a69
w4afp8 fix quant ( #5830 )
2025-12-30 21:16:13 +08:00
Yonghua Li
a8d3e3ba12
[BugFix] fix shm opened but not closed in set_data_ipc ( #5826 )
2025-12-29 23:35:07 +08:00
CSWYF3634076
9286403570
[Models] Add Qwen3-VL Model Support ( #5763 )
* support v1 loader
* remove useless code
* remove useless
* [Model] support Qwen3VL images success
* [Model] support Qwen3VL rope_3d
* [Model] support Qwen3VL remove log
* [Model] support Qwen3VL RL
* [Model] support Qwen3VL tp
* [Model] support Qwen3VL video
* [Model] support Qwen3VL fix ernievl
* [Model] support Qwen3VL fix get_image_boundaries.cc array out of bounds
* [Model] support Qwen3VL fix multi card
* [Model] support Qwen3VL file close
* [Model] support Qwen3VL fix ce
* [Model] support Qwen3VL fix unittest
* [Model] support Qwen3VL add unittest
---------
Co-authored-by: Ayakouji <yuhongh@qq.com >
2025-12-29 17:39:33 +08:00
周周周
a3f0696e35
[BugFix] fix compile error in sm89 ( #5809 )
2025-12-29 16:55:52 +08:00
Longzhi Wang
11329ee35e
[Model] support mode config for expert_dispatch ( #5748 )
2025-12-29 13:37:20 +08:00
Ryan
09229d8953
change count_tokens_per_expert_func declaration: Tensor -> vector<Tensor> ( #5794 )
2025-12-26 19:02:28 +08:00
Ryan
724045c426
add some op infershape&dtype ( #5762 )
2025-12-26 16:17:39 +08:00
周周周
03363cab4c
make flash_mask attention pybind ( #5783 )
2025-12-26 14:31:35 +08:00
kevin
5538dda3c8
[Feature] pd support dy-c8 ipc ( #5750 )
* pd support dy-c8 ipc
* update code
* support v0
* update code
2025-12-25 21:22:34 +08:00
freeliuzc
9018ccf74e
[Speculative Decoding] Fix attn_mask_offset for multi-step MTP in mixed and PD-split modes ( #5738 )
* fix attn_mask_offset in mtp with multi-step and pd-split-mode
* fix xpu operator register
* update pmtp multi-step mtp strategy in pd-split-mode
* add note
* fix xpu operator register
2025-12-25 01:54:59 -08:00
Juncai
412867fd99
[Feature] Support KV Cache Storage ( #5571 )
* Support Mooncake Store
* up
* up
* add op
* fix conflict
* fix error
* up for comments
* avoid thread lock
* up
* fix unittest
* fix unittest
* remove debug info
* consider tp_size > 1
* add default rdma_nics
* add utils
* up
* fix error
---------
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com >
2025-12-25 16:30:35 +08:00
chen
c7ab32d154
check ( #5736 )
2025-12-24 16:49:20 +08:00
周周周
922a73ddd6
[Others] clean code ( #5691 )
2025-12-24 11:28:47 +08:00
lizexu123
6d323769dd
fix w4afp8 ( #5634 )
2025-12-22 13:39:41 +08:00
chen
a32cb54d0b
[BugFix] Fix custom_all_reduce overflow ( #5662 )
* check
* check
* code style
2025-12-19 18:24:21 +08:00
yzwu
ac013803f3
[Iluvatar] Support V1_KVCACHE_SCHEDULER and paddleocr-vl rope mode ( #5555 )
2025-12-18 02:14:25 -08:00
Yuanle Liu
cdc0004894
Revert "[Feature] add ue8m0 for per_token_quant_fp8 ( #5563 )" ( #5611 )
This reverts commit 73e1d6aa90.
2025-12-17 13:59:06 +08:00
Yuanle Liu
867803ae10
[BugFix] fix speculate_limit_thinking_content_length ( #5590 )
* fix speculate_limit_thinking_content_length
* update
2025-12-16 04:31:45 -08:00
chen
27ef3610b5
support glm fa3 ( #5586 )
2025-12-16 19:33:27 +08:00
fxyfxy777
73e1d6aa90
[Feature] add ue8m0 for per_token_quant_fp8 ( #5563 )
* ue8m0
* add default arg
---------
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com >
2025-12-16 18:40:12 +08:00
Echo-Nie
50100f98d7
[Feature] Support fusedmoe on Blackwell ( #5325 )
* update sm100
* fix
* fix style
2025-12-16 11:58:50 +08:00
freeliuzc
532f9ba227
[BugFix][Speculative Decoding] (Spent many days to solve) Fix write qknorm cache bug in speculative decoding ( #5491 )
* [liuzichang spent 10 days] fix write qknorm cache bug
* fix 'fix cachekv bug'
2025-12-15 18:27:11 +08:00
chen
a389bb7c5c
[Feature][Optimization] Qwen Support Dynamic block_wise_fp8 cache ( #5486 )
2025-12-12 17:10:17 +08:00
Juncai
d67388a479
[PD Disaggregation] Distinguish the pipelines for sending kv signal in different prefill ( #5514 )
* Distinguish the pipelines for sending kv signal in different prefill
* up
2025-12-12 14:05:36 +08:00
Neil Zhu
4403a21d4b
[Metax] refactor cutlass moe and optimize flash attention ( #5361 )
* [Metax] refactor moe and flash attention backend
---------
Co-authored-by: zhangchenyi_dl <16219492+zhangchenyidl@user.noreply.gitee.com >
2025-12-10 17:15:17 +08:00
Copilot
e38709b499
[BugFix] Fix limit_thinking early return logic in CUDA kernels ( #5471 )
* Initial plan
* [BugFix] Fix limit_thinking bug - change AND to OR in condition checks
Co-authored-by: yuanlehome <23653004+yuanlehome@users.noreply.github.com >
* Update Chinese comments to reflect OR logic instead of AND
Co-authored-by: yuanlehome <23653004+yuanlehome@users.noreply.github.com >
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com >
Co-authored-by: yuanlehome <23653004+yuanlehome@users.noreply.github.com >
2025-12-10 11:03:19 +08:00
lzy
99f607eef5
[Others] Maintain the mtp branch temporarily. ( #5446 )
2025-12-09 19:17:53 +08:00
lizexu123
95eab9f9ee
[Feature] support stop_token_ids ( #5399 )
* support stop_token_ids
* fix
* delete chinese
* support both
* delete print
2025-12-09 17:49:12 +08:00
xiaozude
df67379bc3
[Metax] modify wrapSize to WARP_SIZE ( #5442 )
2025-12-09 01:44:02 -08:00
周周周
31410415db
FA3 support qwen3 ( #5441 )
2025-12-09 16:16:16 +08:00
K11OntheBoat
8d99bac532
Remove CUDA ERROR 9 of inputs of get_padding_offset kernel ( #5440 )
Co-authored-by: K11OntheBoat <ruianmaidanglao@163.com >
2025-12-09 14:17:30 +08:00
周周周
2aea8a3a60
[Others] Remove useless code ( #5404 )
2025-12-08 13:59:46 +08:00
GoldPancake
8545b705ed
fix top_p_candidates ( #5400 )
Co-authored-by: freeliuzc <lzc842650834@gmail.com >
2025-12-05 20:01:05 +08:00
Yonghua Li
f4119d51b4
[PD Disaggregation] support DP via v1 router and decouple DP and EP ( #5197 )
* [fix] support DP via v1 router and decouple DP and EP
* [fix] fix scripts
* [fix] reset model path
* [fix] dp use get_output_ep, fix router port type, update scripts
* [merge] merge with latest code
* [chore] remove some debug log
* [fix] fix code style check
* [fix] fix test_multi_api_server for log_dir name
* [chore] reduce logs
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
2025-12-04 15:38:43 +08:00
周周周
a36d60aa18
[FIX BUG] fix bug in TP in permute_x_fp8_kernel ( #5350 )
* commit
* commit
* commit
* commit
* commit
* commit
2025-12-03 05:17:37 -08:00
Sunny-bot1
d5a9b75b4e
fix cutlass ep ( #5337 )
2025-12-03 14:06:01 +08:00
lzy
c71a44c7e5
supports mtp split_kv_attn ( #5343 )
2025-12-03 12:40:16 +08:00
Sunny-bot1
3629db4129
[Quantization] Support w4afp8 MoE dynamic quantization ( #5282 )
* support dynamic activation quant for w4afp8
* support dynamic w4afp8
* add test
* fix
* fix
---------
Co-authored-by: zhoutianzi666 <17801055074@163.com >
2025-12-02 18:56:16 +08:00
周周周
fb7f951612
[UNITEST] add test ( #5305 )
2025-12-02 17:59:01 +08:00