kevin
7707be8384
[Feature][KVCache] Implement Cache Manager V1 with GPU + CPU Cache Support (1/n) ( #7097 )
...
* [Feature][KVCache] Support cache manager v1 architecture
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com >
* Update cache manager and related modules
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com >
* chore: update cache_manager and related modules
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com >
* fix: add node to evictable set in complete_swap_to_device
When a node transitions from SWAP_TO_DEVICE to DEVICE via
complete_swap_to_device, it was not being added to the
_evictable_device set. This caused nodes with ref_count=0 to
become "orphaned" - not appearing in any evictable set despite
having cache_status=DEVICE.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com >
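The bookkeeping described above, as a minimal self-contained sketch; the `CacheStatus` values, the `Node` fields, and the `_evictable_device` container are illustrative stand-ins for the actual radix tree types, not the real implementation.
```python
from dataclasses import dataclass
from enum import Enum, auto


class CacheStatus(Enum):
    SWAP_TO_DEVICE = auto()
    DEVICE = auto()


@dataclass(eq=False)
class Node:
    ref_count: int = 0
    cache_status: CacheStatus = CacheStatus.SWAP_TO_DEVICE


class RadixTreeSketch:
    def __init__(self) -> None:
        # DEVICE-resident nodes with ref_count == 0 that may be evicted.
        self._evictable_device = set()

    def complete_swap_to_device(self, node: Node) -> None:
        # The node's data has finished the host-to-device copy.
        node.cache_status = CacheStatus.DEVICE
        # The fix: an unreferenced node must re-enter the evictable set,
        # otherwise it is "orphaned" (DEVICE status, but never evictable).
        if node.ref_count == 0:
            self._evictable_device.add(node)
```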
* feat: update cache manager v1 and related modules
- Add new cache_manager.py with cache management functionality
- Add radix_tree.py for prefix caching
- Update block_pool.py and metadata.py
- Update request.py and resource_manager_v1.py for scheduling
- Update gpu_model_runner.py for GPU model execution
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com >
* feat(cache): add cache controller v1 implementation
- Add CacheController class for cache management
- Update config.py with cache related configurations
- Refactor gpu_model_runner.py for improved cache handling
* feat(cache_manager): update cache manager v1
* fix(cache_manager): fix block_ids selection for swap_cache H2D/D2H directions and clean up ForwardMeta
## Motivation
Fix swap_cache_optimized.cu using the wrong src/dst block_ids in the H2D direction,
and remove the deprecated cache_controller field from ForwardMeta.
## Modifications
- fix: swap_cache_optimized.cu now selects src/dst block_ids according to the D2H template parameter,
fixing the swapped src/dst bug in the H2D direction (applies to both SwapCachePerLayerImpl and SwapCacheAllLayersBatchImpl)
- refactor: cache_manager/v1/__init__.py imports LayerSwapTimeoutError from
cache_utils instead of cache_controller (its actual home)
- refactor: remove the deprecated cache_controller field from ForwardMeta
- refactor: remove the corresponding cache_controller assignment from gpu_model_runner.py
- test: add unit tests in tests/cache_manager/v1/test_swap_cache_ops.py
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* feat(cache_manager): refactor cache manager v1 and optimize swap ops
## Motivation
Refactor and optimize cache manager v1, streamlining the code structure and improving maintainability.
## Modifications
- Refactor transfer_manager.py, significantly simplifying its logic
- Optimize the swap_cache_optimized.cu GPU kernel implementation
- Adjust cache_manager.py and cache_controller.py logic, fixing the missing free_device_blocks method
- Update block_pool.py, cache_utils.py, metadata.py, and radix_tree.py
- Simplify the related call sites in gpu_model_runner.py, forward_meta.py, and attention.py
- Update the corresponding unit tests (test_cache_controller, test_swap_cache_ops, test_transfer_manager)
- Adjust the related configuration options in config.py
* [KVCache][MTP] support MTP KV Cache initialization and multimodal hash under cache_manager_v1
## Motivation
Under the enable_cache_manager_v1 path, the MTP (speculative decode) KV Cache should be
managed by CacheController so it can reuse the swap/transfer capabilities; this also fixes block
hashes not carrying multimodal extra_keys in multimodal scenarios.
## Modifications
- `cache_controller.py`
- Add `initialize_mtp_kv_cache`: initialize the MTP KV Cache through CacheController and
register it in cache_kvs_map so that transfer_manager automatically covers the MTP layers
- `initialize_host_cache` now counts the extra MTP cache layers in num_layers, so the
host cache also allocates enough space for MTP
- Rename `_free_gpu_cache` to `free_gpu_cache` (callable externally)
- `cache_utils.py`
- Add `get_block_hash_extra_keys`: extract the multimodal hash information within a single block,
matching PrefixCacheManager's multimodal extra_keys logic
- `get_request_block_hasher` passes extra_keys when calling hash_block_tokens,
fixing inaccurate prefix cache hit rates in multimodal scenarios
- `spec_decode/mtp.py`
- `update_mtp_block_num` gains a `skip_cache_init` parameter to avoid re-initializing the
MTP KV Cache on the v1 cache manager path
- `gpu_model_runner.py`
- `initialize_kv_cache` (v1) path: after the main model cache is initialized, call
`cache_controller.initialize_mtp_kv_cache` to create the MTP cache
- `clear_cache` / `wakeup` / `reset` paths: respect the `enable_cache_manager_v1`
flag and skip the redundant proposer.initialize_kv_cache call
## Usage or Command
```bash
# Launch an inference service with MTP + cache_manager_v1 (example)
bash run.sh
```
* fix(cache_manager): multi-GPU fix, mm hash boundary fix, and remove batch ops
1. Fix CuPy stream/event creation for multi-GPU: wrap all stream operations
with cp.cuda.Device(device_id) context to ensure streams/events are bound
to the correct device, preventing cross-device errors in multi-GPU setups.
2. Remove cudaSetDevice from SwapCacheAllLayers (handled by cupy context now).
3. Remove swap_cache_all_layers_batch op: simplified the implementation by
removing the batch upload variant; all-layer transfers now use the standard
swap_cache_all_layers with cupy device context.
4. Fix mm hash boundary comparison in get_block_hash_extra_keys: change
strict less-than (<) to less-than-or-equal (<=) so that multimodal items
ending exactly at block start are correctly excluded.
5. Extract config fields to KVCacheBase: model_config, cache_config,
quant_config, parallel_config are now set in the base class __init__ to
avoid duplication in CacheController and CacheManager subclasses.
6. Translate metadata.py docstrings from Chinese to English for broader
contributor accessibility.
7. Add test_cache_utils.py: comprehensive unit tests for
get_block_hash_extra_keys covering all boundary and overlap scenarios.
8. Expand test suite: test_request.py cache fields tests, test_radix_tree.py
backup candidate tests, test_transfer_manager.py and test_cache_manager.py
multi-GPU and concurrent operation tests.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
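A minimal sketch of the CuPy device-context pattern from point 1 above, using only public CuPy APIs; the class and attribute names are illustrative, not the actual transfer-manager code.
```python
import cupy as cp


class SwapStreams:
    """Illustrative: keep streams/events bound to the GPU they belong to."""

    def __init__(self, device_id: int) -> None:
        self.device_id = device_id
        # Create the stream and event inside the target device's context so
        # they are bound to that GPU, not to whichever device happens to be
        # current on this thread.
        with cp.cuda.Device(self.device_id):
            self.swap_stream = cp.cuda.Stream(non_blocking=True)
            self.done_event = cp.cuda.Event()

    def record_done(self) -> None:
        # Every later stream/event operation is wrapped the same way, which
        # is what prevents cross-device errors in multi-GPU setups.
        with cp.cuda.Device(self.device_id):
            self.done_event.record(self.swap_stream)
```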
* [BugFix][KVCache] fix List import and move write_policy normalization to CacheManager
## Motivation
Fix two issues:
1. `List` was not imported in `fastdeploy/engine/request.py`, causing a pre-commit F821 error
2. The `write_policy` normalization (`write_through` → `write_through_selective`) does not belong in `FDConfig`; move it into `CacheManager.__init__` so it only affects the internals of Cache Manager V1
## Modifications
- `fastdeploy/engine/request.py`: add `List` to the `typing` imports and drop the duplicated `CacheSwapMetadata` TYPE_CHECKING import, fixing F821/F811
- `fastdeploy/config.py`: remove the `write_policy` normalization logic
- `fastdeploy/cache_manager/v1/cache_manager.py`: move the normalization logic into `CacheManager.__init__` (sketched below)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
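A minimal sketch of where the normalization now lives, assuming a simple `cache_config.write_policy` string attribute; the real `CacheManager.__init__` does considerably more.
```python
class CacheManagerSketch:
    def __init__(self, cache_config) -> None:
        # Normalization moved out of FDConfig: only Cache Manager V1
        # interprets "write_through" as "write_through_selective".
        policy = getattr(cache_config, "write_policy", None)
        if policy == "write_through":
            policy = "write_through_selective"
        self.write_policy = policy
```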
* [BugFix][KVCache] fix pre-commit code style issues
## Motivation
Fix the failing pre-commit code style checks in CI.
## Modifications
- `fastdeploy/engine/common_engine.py`: black formatting
- `fastdeploy/worker/worker_process.py`: black formatting + isort fixes
- `fastdeploy/cache_manager/v1/storage/__init__.py`: isort fixes
- `fastdeploy/worker/gpu_worker.py`: isort fixes
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* [Feature][KVCache] update cache_manager_v1 modules
## Motivation
Update the Cache Manager V1 modules: complete the copyright headers and improve module structure and maintainability.
## Modifications
- `fastdeploy/cache_manager/v1/` modules: add copyright headers and tidy up the code structure
- `fastdeploy/config.py`: configuration updates
- `fastdeploy/engine/sched/resource_manager_v1.py`: scheduling-related updates
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* [Feature][KVCache] add BatchRequest.from_tasks and refactor worker task parsing
## Motivation
Consolidate the duplicated task-parsing logic from worker_process into BatchRequest, reducing redundancy and improving maintainability.
## Modifications
- `fastdeploy/engine/request.py`: add the `BatchRequest.from_tasks()` classmethod, which classifies task_queue items into inference requests and control requests in one place (see the sketch below)
- `fastdeploy/worker/worker_process.py`: replace the inline parsing logic with `BatchRequest.from_tasks()` and fix the duplicated control_reqs handling block
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
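A minimal sketch of the consolidation, assuming each task exposes some way to tell control requests from inference requests; the `is_control` discriminator and the field names here are hypothetical.
```python
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class BatchRequestSketch:
    infer_reqs: List[Any] = field(default_factory=list)
    control_reqs: List[Any] = field(default_factory=list)

    @classmethod
    def from_tasks(cls, tasks: List[Any]) -> "BatchRequestSketch":
        """Classify raw task_queue items in one place instead of inline
        parsing inside worker_process."""
        batch = cls()
        for task in tasks:
            # Hypothetical discriminator; the real classification rule
            # lives in fastdeploy/engine/request.py.
            if getattr(task, "is_control", False):
                batch.control_reqs.append(task)
            else:
                batch.infer_reqs.append(task)
        return batch
```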
* [Feature][KVCache] add NUMA affinity for host cache and skip swap cache tests
## Motivation
Improve the NUMA affinity of host cache memory allocation to reduce cross-NUMA access latency,
and skip the swap cache ops tests (not supported in the current environment).
## Modifications
- `fastdeploy/cache_manager/v1/cache_controller.py`:
- Add `_get_numa_node_for_gpu()`, which looks up the NUMA node of a GPU via nvidia-smi or sysfs (see the sketch below)
- Add `_bind_to_closest_numa_node()`, which binds the current thread to the NUMA node closest to the GPU
- Call the NUMA binding in `initialize_host_cache()` to improve H2D transfer performance
- `tests/cache_manager/v1/test_swap_cache_ops.py`: skip all test classes (`TestSwapCacheAllLayersCorrectness`, `TestSwapCacheAllLayersPerformance`, `TestSwapCacheRandomBlockIndices`)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
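A best-effort sketch of the nvidia-smi / sysfs lookup described above. The command-line flags and the `/sys/bus/pci/devices/<bus_id>/numa_node` path are standard NVIDIA/Linux conventions; error handling and the return convention are simplified assumptions.
```python
import subprocess
from pathlib import Path


def get_numa_node_for_gpu(device_id: int) -> int:
    """Best-effort lookup of the NUMA node closest to a GPU (illustrative).

    Returns -1 when the platform does not expose the information.
    """
    try:
        # PCI bus id such as "00000000:3B:00.0"; sysfs uses the short,
        # lower-case form "0000:3b:00.0".
        bus_id = subprocess.check_output(
            ["nvidia-smi", f"--id={device_id}",
             "--query-gpu=pci.bus_id", "--format=csv,noheader"],
            text=True,
        ).strip().lower()
        numa_file = Path("/sys/bus/pci/devices") / bus_id[-12:] / "numa_node"
        return int(numa_file.read_text().strip())
    except (OSError, subprocess.SubprocessError, ValueError):
        return -1
```
Binding the current thread to that node (the `_bind_to_closest_numa_node()` step) would then typically go through libnuma bindings or `os.sched_setaffinity` with the node's CPU list; the mechanism actually used is not shown here.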
* [BugFix][KVCache] fix unittest failures for cache_manager_v1
Three unit tests failed due to interface changes or mocking issues and needed fixing.
- tests/distributed/chunked_moe.py: `setup_model_runner` uses `__new__` to skip `__init__`; add `enable_cache_manager_v1 = False` to fix the `AttributeError`
- tests/engine/test_resource_manager.py: `PrefixCacheManager` is imported locally, so the `patch` path is changed to its definition site `fastdeploy.cache_manager.prefix_cache_manager.PrefixCacheManager`
- tests/v1/test_resource_manager_v1.py: the fourth parameter of `_trigger_preempt` changed from `list` to `BatchRequest`; update the test arguments and assertions
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* [BugFix][KVCache] remove debug logging code
## Modifications
- fastdeploy/engine/request.py: remove the debugging logger and the debug logging in prompt_hashes
- fastdeploy/worker/worker_process.py: remove the debugging import and print statements in __main__
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* [BugFix][KVCache] fix cupy device id caching and pickle for _match_result
## Motivation
Fix two bugs:
1. `transfer_manager.py` calls `cp.cuda.runtime.getDevice()` on every use, which is fragile; the device ID should be cached as an instance variable at initialization so that later operations always use a consistent device ID.
2. `__getstate__` in `request.py` did not skip `_match_result`; this field holds a BlockNode tree with parent/child circular references, so pickling raises a `RecursionError`. Also add `__setstate__` so the fields are restored to safe defaults after unpickling.
## Modifications
- `transfer_manager.py`: call `cp.cuda.runtime.getDevice()` at initialization, cache it in `self._cupy_device_id`, and use the cached value in subsequent `with cp.cuda.Device(...)` contexts and log messages.
- `request.py`:
- `__getstate__` adds `_match_result` to the skip set `_SKIP_KEYS`, so the circular references no longer break pickling (see the sketch below).
- Add `__setstate__`, restoring `_block_hasher` and `_match_result` to `None` after unpickling.
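A minimal sketch of the `__getstate__` / `__setstate__` pattern this fix describes; the rest of the Request class is omitted.
```python
class RequestSketch:
    """Illustrative fragment of the pickle-safety pattern."""

    # Fields that must not be pickled: _match_result holds a BlockNode tree
    # with parent/child cycles, and _block_hasher is not serializable either.
    _SKIP_KEYS = {"_block_hasher", "_match_result"}

    def __getstate__(self):
        # Drop the problematic fields so pickling never walks the cyclic tree.
        return {k: v for k, v in self.__dict__.items() if k not in self._SKIP_KEYS}

    def __setstate__(self, state):
        self.__dict__.update(state)
        # Restore the skipped fields to safe defaults after unpickling.
        self._block_hasher = None
        self._match_result = None
```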
* fix(test): fix unit test errors for _trigger_preempt and wakeup with MTP
- Use BatchRequest instead of list in test_trigger_preempt_records_tasks
- Add missing enable_cache_manager_v1 attr in TestSleepWakeupBehavior._make_runner
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* [BugFix][KVCache] fix gpu_free_block_list returning wrong block IDs
## Motivation
The compatibility property `gpu_free_block_list` mistakenly used `list(range(N))`,
passing the return value of `available_blocks()` to `range()` as an integer,
so it returned the fake list `[0, 1, ..., N-1]` instead of the real free block IDs.
## Modifications
- `cache_manager/v1/cache_manager.py`: change `list(range(self._device_pool.available_blocks()))` to `list(self._device_pool.available_blocks())`
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* [BugFix][KVCache] fix TypeError caused by gpu_free_block_list returning an int
## Motivation
The `gpu_free_block_list` property calls `BlockPool.available_blocks()`, which
returns an int (the number of free blocks); wrapping an int in `list()` raises
`TypeError: 'int' object is not iterable`.
## Modifications
Change `list(self._device_pool.available_blocks())` to
`list(self._device_pool._free_blocks)`, returning the list of free block indices directly (see the sketch below).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
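Taken together, these two fixes converge on exposing the pool's free-block IDs directly. A small illustration of why the first two variants were wrong, using a hypothetical `BlockPoolSketch` in place of the real `BlockPool`:
```python
class BlockPoolSketch:
    """Illustrative: available_blocks() returns a count, _free_blocks the IDs."""

    def __init__(self, free_block_ids):
        self._free_blocks = list(free_block_ids)

    def available_blocks(self) -> int:
        return len(self._free_blocks)


pool = BlockPoolSketch([7, 12, 42])

# First (wrong) version: treats the count as a range, yielding fake IDs.
assert list(range(pool.available_blocks())) == [0, 1, 2]

# Second (wrong) version: list() over an int raises TypeError.
# list(pool.available_blocks())  # TypeError: 'int' object is not iterable

# Final version: expose the actual free block IDs.
assert list(pool._free_blocks) == [7, 12, 42]
```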
* [KVCache][CacheManager] adapt pause/sleep/free_cache operations to the V1 CacheManager
## Motivation
The V1 CacheManager introduces a new reset_cache() interface, so the pause and sleep operations need to adapt,
and free_cache needs to support an optional clear_storage parameter.
## Modifications
- cache_controller.py: free_cache gains a clear_storage parameter (default False);
_clear_storage() is only called when clear_storage=True, avoiding unnecessary storage clearing
- common_engine.py: in the pause and sleep operations, use cache_manager.reset_cache() instead of
the old reset() and pause_transfer logic when ENABLE_V1_KVCACHE_MANAGER is set
- gpu_model_runner.py: on sleep, only clear the MTP cache when the V1 cache manager is not enabled
## Usage or Command
# Launch the service (V1 CacheManager)
python -m fastdeploy.entrypoints.openai.api_server \
--enable-v1-kvcache-manager \
...
* [BugFix][KVCache] fix missing enable_cache_manager_v1 in test mocks and remove unused select_blocks_for_backup
- Remove unused `select_blocks_for_backup` method from radix_tree.py
- Fix `match_prefix` default param `skip_storage=True` and log order in cache_manager.py
- Sync test_gpu_model_runner.py with upstream/develop (add TestInsertTasksV1SplitwiseSuffix)
- Add `enable_cache_manager_v1=False` to all mock runners to fix AttributeError in CI
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* [BugFix][KVCache] simplify _free_blocks in ResourceManagerV1 for non-v1 path
Remove redundant prefix_caching branch in else path; always call
recycle_gpu_blocks with full block_tables for non-cache-manager-v1 case.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com >
* [KVCache][Optimization][BugFix] fix and optimize block_pool, cache_manager, transfer_manager, request
## Motivation
Fix several code-quality issues in cache_manager v1, improving performance and removing latent type-inconsistency bugs.
## Modifications
1. **block_pool.py**: `BlockPool.allocate` replaces the per-block pop loop with a slice plus a bulk `set.update`, removing the Python loop overhead, O(n) → O(k) via C-level batch operations (see the sketch below)
2. **cache_manager.py**: when prefix caching is disabled, `match_prefix` writes an empty `MatchResult()` before the early return, so callers no longer crash dereferencing `_match_result=None`
3. **transfer_manager.py**: `_build_device_layer_indices` also resets the four layer-index lists when `_cache_kvs_map` is empty, preventing stale tensors from being used by the swap kernels
4. **request.py**: `BatchRequest.append_swap_metadata` / `append_evict_metadata` construct `CacheSwapMetadata` with `src_type`/`dst_type` as `CacheLevel` enums instead of strings, matching the declared field types; add the `CacheLevel` import; fix the return type annotation of the `match_result` property to `Optional[MatchResult]`
5. **resource_manager_v1.py**: downgrade the `_allocate_gpu_blocks` log from `INFO` to `DEBUG`, removing log noise on the hot scheduling path
6. **tests/engine/test_request.py**: update the `src_type`/`dst_type` assertions to `CacheLevel` enum values and add the `CacheLevel` import
## Usage or Command
Unit tests:
```bash
source .venv/py310/bin/activate
cd baidu/FastDeploy
python -m pytest tests/cache_manager/v1/test_cache_manager.py -v
python -m pytest tests/cache_manager/v1/test_transfer_manager.py -v
python -m pytest tests/engine/test_request.py -v
```
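A minimal sketch of the batched allocation in point 1, assuming the free list is consumed from its tail and in-use blocks are tracked in a set; the real `BlockPool` carries more metadata. The `num_blocks == 0` guard anticipates the follow-up fix below.
```python
from typing import List, Set


class BlockPoolSketch:
    def __init__(self, num_blocks: int) -> None:
        self._free_blocks: List[int] = list(range(num_blocks))
        self._used_blocks: Set[int] = set()

    def allocate(self, num_blocks: int) -> List[int]:
        # Early return: a tail slice of size 0 would otherwise grab the
        # whole list (see the -0 trap demonstrated in the next commit).
        if num_blocks == 0:
            return []
        if num_blocks > len(self._free_blocks):
            raise ValueError("not enough free blocks")
        # One tail slice plus one bulk set.update instead of popping and
        # inserting block by block in a Python loop.
        allocated = self._free_blocks[-num_blocks:]
        del self._free_blocks[-num_blocks:]
        self._used_blocks.update(allocated)
        return allocated
```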
* [BugFix][KVCache] Fix BlockPool.allocate returning all blocks when num_blocks=0
## Motivation
When `allocate(num_blocks=0)` is called, Python's negative-index trap causes a serious bug:
`-0 == 0`, so `self._free_blocks[-0:]` is equivalent to `self._free_blocks[0:]`,
which returns and clears the entire free block list instead of returning an empty list.
## Modifications
Add an early check for `num_blocks == 0` in `BlockPool.allocate` that returns `[]` directly,
avoiding Python's negative-index trap (demonstrated below).
## Usage or Command
```bash
# Run the related unit tests to verify the fix
python -m pytest tests/cache_manager/v1/test_cache_manager.py -vv -s
```
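A small demonstration of the trap this commit guards against; the variable names are illustrative.
```python
free_blocks = [3, 8, 21]

# Python's "negative zero" trap: -0 == 0, so a tail slice of size zero
# degenerates into a copy of the whole list.
assert free_blocks[-0:] == [3, 8, 21]

# With the early check, a zero-block request returns an empty list instead.
num_blocks = 0
allocated = [] if num_blocks == 0 else free_blocks[-num_blocks:]
assert allocated == []
```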
* [KVCache][Test] add unit tests for cache_manager v1 modules
## Motivation
Complete the unit test coverage of the cache_manager/v1 modules so that the core methods have solid test protection.
## Modifications
Add or extend the following test files; all 326 cases pass:
- tests/cache_manager/v1/test_block_pool.py (new)
covers BlockPool.get_metadata/set_metadata/resize and DeviceBlockPool/HostBlockPool
- tests/cache_manager/v1/test_metadata.py (new)
covers BlockNode, RadixTreeStats, MatchResult, CacheSwapMetadata, AsyncTaskHandler
- tests/cache_manager/v1/test_cache_utils.py (extended)
adds hash_block_tokens, get_request_block_hasher, LayerDoneCounter time tracking, and internal helper methods
- tests/cache_manager/v1/test_radix_tree.py (extended)
adds the dedicated TestCompleteSwapToDevice test class (6 cases)
- tests/cache_manager/v1/test_cache_manager.py (extended)
adds offload_to_host, load_from_host, the pending-backup series, and prepare_prefetch_metadata
- tests/cache_manager/v1/test_transfer_manager.py (extended)
adds the _swap_single_layer validation path, sync_input/output_stream, and record_input_stream_event
## Usage or Command
```bash
# Run all newly added unit tests
source .venv/py310/bin/activate
python -m pytest tests/cache_manager/v1/test_block_pool.py \
tests/cache_manager/v1/test_metadata.py \
tests/cache_manager/v1/test_cache_utils.py \
tests/cache_manager/v1/test_radix_tree.py \
tests/cache_manager/v1/test_cache_manager.py \
tests/cache_manager/v1/test_transfer_manager.py -v
# Expected result: 326 passed
```
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com >
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
2026-04-21 14:39:00 +08:00
Bingoo
6b891da02b
[Optimization] enable trtllm_all_reduce fusion kernel in glm model ( #6660 )
...
* enable trtllm_all_reduce fusion kernel in glm model
* fix conflict
* format update
* fix a bug
* modify test
* modify test
* support empty tensor and modify test
* fix test_linear config issues
* modify test name
* add edge test case
* modify format
* fix conflict
* modify default max token num in trtllm_allreduce_fusion
* add max token num branch for trtllm_allreduce_fusion
* fix format
* fix rmsnorm config issue
* modify 2025 to 2026
* using compat guard
* Lazily import flashinfer.comm and fix test config issue
* fix test issues
* add flashinfer cache dir clean machine
* fix some issues
2026-04-16 14:10:19 +08:00
jc
e53f5184ac
PD deployment support without router ( #7412 )
2026-04-15 20:13:07 +08:00
AIbin
8eebbcaf15
[BugFix][Scheduler]Fix FD_DISABLE_CHUNKED_PREFILL max_num_batched_tokens limit ( #7407 )
...
* fix FD_DISABLE_CHUNKED_PREFILL max_num_batched_tokens=max_model_len
* fix FD_DISABLE_CHUNKED_PREFILL max_num_batched_tokens=max_model_len
2026-04-15 15:55:11 +08:00
bukejiyu
14d46181b8
[Loader] add multi-thread model loading ( #6877 )
...
* multi-thread-loader
* fix ut
2026-04-09 23:40:15 -07:00
GoldPancake
c1fb3112f8
[FDConfig] Support CLI args for quantization params and add cudagraph validation ( #7281 )
...
* refactor quant cli param
2026-04-10 14:13:42 +08:00
sunxin
ae2f9f4d22
[BugFix] Enable moe_gate_fp32 using FD_ENABLE_RL ( #7130 )
...
* rl gate fp32
* clean
2026-04-06 21:07:38 -07:00
sunxin
c29e86fc9d
[Feature] Support mtp overlap schedule ( #7001 )
2026-04-01 14:24:26 +08:00
freeliuzc
4fd877ed43
[Speculative Decoding] Support mtp expert-parallel and support different modality deploy ( #7018 )
...
* support mtp ep and support different modality
* fix default arg
2026-03-26 13:52:16 +08:00
jc
04fde3b227
[PD Disaggregation] Prefill and decode support cache storage ( #6768 )
...
* Prefill and decode support cache storage
* up
* up
* update docs and refine mooncake store
* up
2026-03-16 14:44:49 +08:00
RichardWooSJTU
9f0778f991
[Feature] Support EP prefill with num_worst_tokens ( #6574 )
...
* support num worst tokens
* support num worst tokens
* fix build error
* support num worst tokens: fix errors
* support num worst tokens: fix field
* support num worst tokens: delete requirements
* replace permute and depermute op by pure cuda
* replace permute and depermute op by pure cuda
* fix ci
* fix op
* fix nan
* fix code style
---------
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com >
2026-03-11 17:09:07 +08:00
sunxin
812657beee
fix pd overlap ( #6753 )
2026-03-10 20:29:54 +08:00
sunxin
28f7727a3d
[Feature] Set overlap schedule as default ( #6668 )
...
* overlap default
2026-03-09 22:34:54 +08:00
yzwu
3345641f4e
[Iluvatar][CI] fix the dim error of seq_lens_encoder and seq_lens_decoder ( #6637 )
2026-03-04 14:00:40 +08:00
sunxin
53aaac69da
[Optimization] Enable BF16 gate computation for GLM and Qwen ( #6457 )
...
* gate bf16
* add gate-fp32
* fix
* update baseline
* update
* update
* fix
2026-02-26 21:08:46 -08:00
sunxin
9b0a82cfa9
[Model Runner] Support overlap schedule ( #6259 )
2026-02-04 10:49:44 +08:00
Moonchild1227
39dc4b0c2e
[Feature] [KVCache] support file_store kv cache backend ( #6188 )
...
* fix(examples): comment out stop.sh to avoid error when script is missing
* feat: add file_store support for cache manager
* [fix] fix multi gpu transfer
* [fix] fix global kvcache transfer
* [Feature] [KVCache] support file_store kv cache backend
* chore: update FileStore according to PR comments
* fix: remove comments
* fix: add swap_cache_layout for file store
* fix: remove rank key
* fix: Switch KV cache storage to pure file mode
* Temporarily disable support for Tensor types
* fix: remove args --kvcache_file_path & add envs FILE_BACKEND_STORAGE_DIR
* fix: Simplify cache_transfer_manager.py
* fix: fix syntax bug
* fix: Simplify file_store.py
* fix: Use the key directly as the filename
* fix: Simplify set()
* fix: Simplify cache_transfer_manager.py & file_store.py
* fix: Only support load to cpu buffer
* feat: add FileStore backend for cache transfer
* fix: guard zmq import
2026-02-03 14:37:58 +08:00
CSWYF3634076
08c411518f
[Loader] support dummy load weight ( #6169 )
...
* [Loader] support dummy load weight
* [Loader] support dummy load weight v2
* [Loader] support dummy load weight unittest
* [Loader] support dummy load weight unittest v2
* [Loader] support dummy load weight v3 docs and fp8
2026-01-26 13:58:53 +08:00
wangyifei
b7c5daa316
[RL] add pause, update_weights, resume interface for async RL ( #6052 )
...
* support dynamic run_control_request through zmq from apiserver to common_engine
* support pause/resume/is_paused/update_weights in apiserver->common_engine by common run_control_method
* change /is_paused from HTTP POST method to GET method
* add pause, resume, is_paused implementation
* support engine <==> worker communication(request&response)
* support sync weights through RDMA from checkpoint_transfer
* support specified version, rsync_config in update_weights rpc call
* add pause, update_weights, resume interface for async RL
* bug fix: update_weights support using default arguments
* fix typo
* typo fix
* typo fix
* typo fix
* add unittest for control request/response, localscheduler.get_inflight_requests, resource_manager_v1.preempted_all
* add "rsync" to LoadConfig.load_strategy Literal type hints
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* typo fix
* typo fix
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* check version/rsync params
* add error log when version.txt not exists
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* raise a specific ValueError when parameter checks fail
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* tp barrier after run_control_method
* encode 'engine_worker_queue_port' to unique name of worker2engine fmq queue
* typo fix
* typo fix
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
2026-01-23 10:18:07 +08:00
Yonghua Li
8d27a523e7
[Feature] [KVCache] support attention_store kv cache backend ( #5823 )
...
* [feat] support attention_store kv cache backend
* [fix] fix codestyle
* [chore] optimize log
* [fix] fix write storage task
* [fix] fix read storage
* [fix] fix code conflict after merge develop
* [fix] fix cache bytes and read task token ids
* [chore] add model for cache transfer manager
* [chore] add some log
* [chore] remove launched_cache_manager_signal
* [fix] fix write_back_storage_task match_block_num condition
* [fix] fix swap_cost_time
* [ci] fix ci
* Update fastdeploy/engine/sched/resource_manager_v1.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update fastdeploy/cache_manager/cache_transfer_manager.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Update fastdeploy/cache_manager/transfer_factory/mooncake_store/attention_store.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
2026-01-22 21:01:23 +08:00
jackyYang6
988e0bc338
[Feature] Add PaddleFormers fallback backend ( #5999 )
...
* feat(paddleformers): add dense text model fallback backend
* docs(paddleformers): add user guide and fix code review issues
* add fallback unit test
* precommit format
* fix pre-commit
* fix: address code review feedback
* docs: add PaddleFormers backend documentation (EN) and simplify installation
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com >
2026-01-19 21:50:50 +08:00
Jingfeng Wu
7d44009f39
[FDConfig] transfer metrics_port ( #6056 )
...
* transfer metrics_port
* transfer metrics_port
2026-01-19 19:58:57 +08:00
ming1753
7c56041272
[BugFix] fix PaddleOCR-VL illegal memory ( #6042 )
2026-01-14 20:07:43 -08:00
chenjian
6da06abc17
[Feature] Enable output caching by default ( #5987 )
...
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
2026-01-13 19:34:21 +08:00
Yonghua Li
60ee72f682
[BugFix] [MultiAPIServer] fix rdma script and port check for multi api server ( #5935 )
...
* [fix] fix rdma script and add more error log for multi api server
* [fix] log
* [fix] fix test_multi_api_server
* [fix] fix multi api server port check
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
2026-01-12 10:38:52 +08:00
MingkunZhang
cb09b52e66
[Metax] fix shape error & garbled output when reasoning over large images or videos ( #5965 )
...
Co-authored-by: root <root@lt-wks-10-0-180-15.pub.metax-tech.com >
2026-01-09 13:41:45 +08:00
MingkunZhang
f732d7d2ad
[Metax] adapt prefix caching & cpu swap ( #5844 )
...
Co-authored-by: root <root@lt-wks-10-0-180-15.pub.metax-tech.com >
2025-12-31 17:02:48 +08:00
ddchenhao66
9e45ef7ca9
[XPU]MAX_BSZ aligns gpu settings and disable prefix cache in OCR VL ( #5831 )
2025-12-31 09:49:12 +08:00
Juncai
412867fd99
[Feature] Support KV Cache Storage ( #5571 )
...
* Support Mooncake Store
* up
* up
* add op
* fix conflict
* fix error
* up for comments
* avoid thread lock
* up
* fix unittest
* fix unittest
* remove debug info
* consider tp_size > 1
* add default rdma_nics
* add utils
* up
* fix error
---------
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com >
2025-12-25 16:30:35 +08:00
chenjian
b90a922f98
[Bug fix] Set enable_cache_output as false by default ( #5751 )
2025-12-24 21:37:24 +08:00
GoldPancake
23d488c488
[Feature] Entropy calculation support ( #5692 )
...
* support entropy
* fix bug
---------
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com >
2025-12-23 21:19:47 +08:00
Yonghua Li
4f830aa505
[RL] provide options for whether shutdown comm group after weights cleared ( #5663 )
...
* [rl] provide options for whether shutdown comm group after weights cleared
* [fix] fix args hardcode
* [fix] change args type
* [fix] add worker process args
2025-12-19 07:06:48 -08:00
fmiao2372
a8fce47195
[Intel HPU] enable kv cache scheduler v1 for hpu ( #5648 )
...
* [Intel HPU] enable kv cache scheduler v1 for hpu
* fix copilot comments
2025-12-19 12:03:39 +08:00
yzwu
ac013803f3
[Iluvatar] Support V1_KVCACHE_SCHEDULER and paddleocr-vl rope mode ( #5555 )
2025-12-18 02:14:25 -08:00
Yonghua Li
0c8c6369ed
[Feature] [PD Disaggregation] simplify configuration for pd-disaggregated deployment, and refactor post-init and usage for all ports ( #5415 )
...
* [feat] simplify configuration for pd-disaggregated deployment, and refactor post-init and usage for all ports
* [fix] fix some bugs
* [fix] fix rdma port for cache manager/messager
* [fix] temporarily cancel port availability check to see if it can pass ci test
* [feat] simplify args for multi api server
* [fix] fix dp
* [fix] fix port for xpu
* [fix] add tests for ports post processing & fix ci
* [test] fix test_multi_api_server
* [fix] fix rdma_comm_ports args for multi_api_server
* [fix] fix test_common_engine
* [fix] fix test_cache_transfer_manager
* [chore] automatically setting FD_ENABLE_MULTI_API_SERVER
* [fix] avoid api server from creating engine_args twice
* [fix] fix test_run_batch
* [fix] fix test_metrics
* [fix] fix splitwise connector init
* [test] add test_rdma_transfer and test_expert_service
* [fix] fix code syntax
* [fix] fix test_rdma_transfer and build wheel with rdma script
2025-12-17 15:50:42 +08:00
kevin
954a145d57
[Optimization] support mm prefill batch ( #5313 )
...
* support mm prefill batch
* update code
* update code
* update code
* update code
* fix encoder cache bug
* update code
* update code
* fix bug
* fix paddle ocr bug
* fix xpu bug
* update code
2025-12-11 22:21:14 +08:00
RAM
b2908b8e82
[New][RL] Support Rollout Routing Replay ( #5405 )
...
* [RL] Support Rollout Routing Replay
* add routing indices cache
* fix config bug and moe forward bug
* R3 Support GLM
* support eb4.5
* fix merge bug
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* add routing replay ci
* support glm topk
* support other top_k
* fix ci bug
* pre-commit
* only support chatcmpl
* Revert "Revert "[RL] Support Rollout Routing Replay (#5321 )" (#5402 )"
This reverts commit c45e064f3d .
* Fix XPU and NPU bug
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
Co-authored-by: Yuanle Liu <yuanlehome@163.com >
2025-12-05 22:06:26 +08:00
Jiang-Jia-Jun
c45e064f3d
Revert "[RL] Support Rollout Routing Replay ( #5321 )" ( #5402 )
...
This reverts commit 96d2d4877b .
2025-12-05 20:19:39 +08:00
RAM
96d2d4877b
[RL] Support Rollout Routing Replay ( #5321 )
...
* [RL] Support Rollout Routing Replay
* add routing indices cache
* fix config bug and moe forward bug
* R3 Support GLM
* support eb4.5
* fix merge bug
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
* add routing replay ci
* support glm topk
* support other top_k
* fix ci bug
* pre-commit
* only support chatcmpl
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
Co-authored-by: Yuanle Liu <yuanlehome@163.com >
2025-12-05 20:01:33 +08:00
kevin
c9d7f9e7c3
[BugFix] fix async download bug ( #5349 )
...
* fix async download bug
* update log
* Revert "update log"
This reverts commit 5816e602f4 .
* update code
* fix mtp bug
2025-12-05 18:59:12 +08:00
chenjian
3878a99b69
[Feature] Support caching kv cache for output tokens ( #4535 )
...
* [Feature] Support caching kv cache for output tokens
* fix bug
* fix ci bug
* improve coverage
* enable output caching by default
* fix ci
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com >
2025-12-04 20:53:08 +08:00
ddchenhao66
4e8096bd0d
[XPU] xpu support mm prefix cache ( #5356 )
...
Co-authored-by: ddchenhao66 <dhaochen163.com>
2025-12-03 19:07:34 +08:00
qw86972190
6048ea37bd
[XPU]add enable_logprob ( #5279 )
...
* [XPU]Update document
* [XPU]Update documentation
* [XPU]add enable_logprob
* Fix code style issues
* “doc”
* “docs”
* “doc”
* Fix code style via pre-commit
---------
Co-authored-by: root <root@gajl-bbc-onlinec-com-1498354.gajl.baidu.com >
2025-12-02 15:32:28 +08:00
Longzhi Wang
add524d80c
[Feature] support chunked moe ( #4575 )
...
* [Feature] support chunked moe
* update
* update
* fix and add test
* update
* fix conflict and modity test
* fix fused_moe
* fix fused_moe
* fix docstring
* fix
* fix typo
* fix test
* fix
* fix
* fix test
* fix test
2025-12-01 15:17:18 +08:00
Daci
f25ee3a26f
[Feature] enable guided decoding ENABLE_V1_KVCACHE_SCHEDULER = 1 ( #5140 )
...
* enable guided decoding ENABLE_V1_KVCACHE_SCHEDULER = 1
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com >
2025-11-26 10:22:35 +08:00
kevin
8e4e3ff510
[Feature] support eplb in api_server ( #4782 )
...
* support eplb in api_server
* update code
* add eplb test case
* update eplb
* support tp+dp eplb
* update test cese
* update code
* update code
* fix bug
* update copilot review
* update test case name
2025-11-24 20:22:29 +08:00
xiaozude
d5bd64336a
[Metax] support ENABLE_V1_KVCACHE_SCHEDULER ( #5163 )
2025-11-24 19:19:49 +08:00
Yuanle Liu
5bcf79d780
[BugFix] fix num of rdma_comm_ports check ( #5168 )
...
* fix num of rdma_comm_ports check
* update
* update
* update
2025-11-21 18:31:14 +08:00
kevin
7454480e07
[Feature] support bos download retry ( #5137 )
...
* support bos download retry
* update code
* update code
2025-11-21 10:18:32 +08:00
Yonghua Li
43097a512a
[BugFix] [PD Disaggregation] fix v1 scheduler prefill node profile run & ipc transfer protocol ( #5132 )
...
* [fix] fix v1 scheduler profile run for append attention in prefill node
* [fix] skip send_signal if kv signal not inited for gpu and xpu
* [fix] extend fix to flash_attn & mla_attn
* [fix] fix v1 pd run in ipc transfer protocol
* [ci] add test for v1 pd profile run using ipc transfer protocol
* [style] fix code style check
* [style] fix code style again
* [fix] fix profile run
* [update] remove --num-gpu-blocks-override in example script
* [chore] rename forward_meta is_profiling to is_dummy_or_profile_run
2025-11-20 21:39:22 +08:00