Mirror of https://github.com/PaddlePaddle/FastDeploy.git (synced 2026-04-23 00:17:25 +08:00)

Commit 7707be8384
* [Feature][KVCache] Support cache manager v1 architecture

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update cache manager and related modules

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: update cache_manager and related modules

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: add node to evictable set in complete_swap_to_device

When a node transitions from SWAP_TO_DEVICE to DEVICE via complete_swap_to_device, it was not being added to the _evictable_device set. This caused nodes with ref_count=0 to become "orphaned": they appeared in no evictable set despite having cache_status=DEVICE.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: update cache manager v1 and related modules

- Add new cache_manager.py with cache management functionality
- Add radix_tree.py for prefix caching
- Update block_pool.py and metadata.py
- Update request.py and resource_manager_v1.py for scheduling
- Update gpu_model_runner.py for GPU model execution

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(cache): add cache controller v1 implementation

- Add CacheController class for cache management
- Update config.py with cache-related configurations
- Refactor gpu_model_runner.py for improved cache handling

* feat(cache_manager): update cache manager v1

* fix(cache_manager): fix H2D/D2H block_ids selection in swap_cache and clean up ForwardMeta

## Motivation
Fix the wrong src/dst block_ids used for the H2D direction in swap_cache_optimized.cu, and remove the deprecated cache_controller field from ForwardMeta.

## Modifications
- fix: swap_cache_optimized.cu now selects src/dst block_ids according to the D2H template parameter, fixing the swapped src/dst bug in the H2D direction (both SwapCachePerLayerImpl and SwapCacheAllLayersBatchImpl)
- refactor: cache_manager/v1/__init__.py imports LayerSwapTimeoutError from cache_utils instead of cache_controller (its actual source)
- refactor: remove the deprecated cache_controller field from ForwardMeta
- refactor: remove the corresponding cache_controller assignment from gpu_model_runner.py
- test: add unit tests in tests/cache_manager/v1/test_swap_cache_ops.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(cache_manager): refactor cache manager v1 and optimize swap ops

## Motivation
Refactor and streamline cache manager v1 to simplify the code structure and improve maintainability.

## Modifications
- Refactor transfer_manager.py, greatly simplifying its logic
- Optimize the swap_cache_optimized.cu GPU operator implementation
- Adjust the cache_manager.py and cache_controller.py logic and fix the missing free_device_blocks method
- Update block_pool.py, cache_utils.py, metadata.py, radix_tree.py
- Trim the related calls in gpu_model_runner.py, forward_meta.py, attention.py
- Update the corresponding unit tests (test_cache_controller, test_swap_cache_ops, test_transfer_manager)
- Adjust the related configuration items in config.py

* [KVCache][MTP] Support MTP KV Cache initialization and multimodal hashing under cache_manager_v1

## Motivation
On the enable_cache_manager_v1 path, the MTP (speculative decode) KV Cache should be managed by the CacheController so it can reuse the swap/transfer capabilities. Also fix block hashes not carrying multimodal extra_keys in multimodal scenarios.

## Modifications
- `cache_controller.py`
  - Add `initialize_mtp_kv_cache`: initialize the MTP KV Cache through the CacheController and register it in cache_kvs_map so the transfer_manager automatically covers the MTP layers
  - `initialize_host_cache` now counts the extra MTP cache layers in num_layers, so the host cache also reserves enough space for MTP
  - Rename `_free_gpu_cache` to `free_gpu_cache` (now externally callable)
- `cache_utils.py`
  - Add `get_block_hash_extra_keys`: extract the multimodal hash information within a single block, matching PrefixCacheManager's multimodal extra_keys logic
  - `get_request_block_hasher` now passes extra_keys to hash_block_tokens, fixing inaccurate prefix-cache hit rates in multimodal scenarios
- `spec_decode/mtp.py`
  - `update_mtp_block_num` gains a `skip_cache_init` parameter to avoid re-initializing the MTP KV Cache on the v1 cache manager path
- `gpu_model_runner.py`
  - `initialize_kv_cache` (v1) path: after the main model cache is initialized, call `cache_controller.initialize_mtp_kv_cache` to create the MTP cache
  - `clear_cache` / `wakeup` / `reset` paths: respect the `enable_cache_manager_v1` flag and skip the redundant proposer.initialize_kv_cache calls

## Usage or Command
```bash
# Start an inference service with MTP + cache_manager_v1 (example)
bash run.sh
```
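The multimodal-hash part of the commit above is the subtle piece, so here is a minimal, self-contained sketch of the idea. The item layout (offset, length, content hash) and both signatures are assumptions for illustration only; the real `get_block_hash_extra_keys` and `hash_block_tokens` live in `cache_manager/v1/cache_utils.py` and may differ.

```python
import hashlib
from typing import List, Optional, Tuple

# Hypothetical multimodal item layout: (token_offset, token_length, content_hash).
MMItem = Tuple[int, int, str]


def get_block_hash_extra_keys(mm_items: List[MMItem], block_start: int, block_end: int) -> Optional[Tuple[str, ...]]:
    """Collect content hashes of multimodal items overlapping [block_start, block_end)."""
    extra_keys = []
    for offset, length, content_hash in mm_items:
        if offset + length <= block_start:  # item ends at or before the block start
            continue
        if offset >= block_end:  # item starts after the block ends
            continue
        extra_keys.append(content_hash)
    return tuple(extra_keys) if extra_keys else None


def hash_block_tokens(parent_hash: str, token_ids: List[int], extra_keys: Optional[Tuple[str, ...]] = None) -> str:
    """Chain-hash one block: parent block hash + token ids + multimodal extra keys."""
    payload = repr((parent_hash, tuple(token_ids), extra_keys)).encode()
    return hashlib.sha256(payload).hexdigest()


# Two blocks with identical token ids but different image content must hash differently,
# otherwise the prefix cache would report false hits.
image_a = [(0, 4, "hash-of-image-a")]
image_b = [(0, 4, "hash-of-image-b")]
tokens = [101, 102, 103, 104]
assert hash_block_tokens("", tokens, get_block_hash_extra_keys(image_a, 0, 4)) != hash_block_tokens(
    "", tokens, get_block_hash_extra_keys(image_b, 0, 4)
)
```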
* fix(cache_manager): multi-GPU fix, mm hash boundary fix, and remove batch ops

1. Fix CuPy stream/event creation for multi-GPU: wrap all stream operations in a cp.cuda.Device(device_id) context so streams/events are bound to the correct device, preventing cross-device errors in multi-GPU setups.
2. Remove cudaSetDevice from SwapCacheAllLayers (now handled by the CuPy device context).
3. Remove the swap_cache_all_layers_batch op: the batch upload variant is dropped; all-layer transfers now use the standard swap_cache_all_layers with the CuPy device context.
4. Fix the mm hash boundary comparison in get_block_hash_extra_keys: change strict less-than (<) to less-than-or-equal (<=) so that multimodal items ending exactly at the block start are correctly excluded.
5. Extract config fields to KVCacheBase: model_config, cache_config, quant_config, parallel_config are now set in the base class __init__ to avoid duplication in the CacheController and CacheManager subclasses.
6. Translate metadata.py docstrings from Chinese to English for broader contributor accessibility.
7. Add test_cache_utils.py: comprehensive unit tests for get_block_hash_extra_keys covering all boundary and overlap scenarios.
8. Expand the test suite: cache-field tests in test_request.py, backup-candidate tests in test_radix_tree.py, and multi-GPU and concurrent-operation tests in test_transfer_manager.py and test_cache_manager.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
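Item 1 above is the multi-GPU part. A minimal sketch of the device-binding pattern it describes, using only CuPy's public stream/event API; the class and method names are illustrative and not the repo's actual transfer manager.

```python
import cupy as cp


class SwapStreamContext:
    """Illustrative only: create and use CUDA streams/events on an explicit device."""

    def __init__(self, device_id: int):
        self.device_id = device_id
        # Create the stream and event while the target device is current, so they
        # are bound to that GPU rather than to whichever device happens to be active.
        with cp.cuda.Device(self.device_id):
            self.stream = cp.cuda.Stream(non_blocking=True)
            self.done_event = cp.cuda.Event()

    def run(self, kernel_fn):
        # Wrap every stream operation in the same device context; mixing devices
        # here is what caused the cross-device errors in multi-GPU setups.
        with cp.cuda.Device(self.device_id):
            with self.stream:
                kernel_fn()
            self.done_event.record(self.stream)

    def wait(self):
        with cp.cuda.Device(self.device_id):
            self.done_event.synchronize()
```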
* [BugFix][KVCache] fix List import and move write_policy normalization to CacheManager

## Motivation
Fix two issues:
1. `List` is not imported in `fastdeploy/engine/request.py`, causing a pre-commit F821 error.
2. The `write_policy` normalization (`write_through` → `write_through_selective`) does not belong in `FDConfig`; move it into `CacheManager.__init__` so it only affects Cache Manager V1's internal logic.

## Modifications
- `fastdeploy/engine/request.py`: add `List` to the `typing` imports and drop the duplicated `CacheSwapMetadata` TYPE_CHECKING import, fixing F821/F811
- `fastdeploy/config.py`: remove the `write_policy` normalization logic
- `fastdeploy/cache_manager/v1/cache_manager.py`: move the normalization into `CacheManager.__init__`

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [BugFix][KVCache] fix pre-commit code style issues

## Motivation
Fix the CI pre-commit code style check failures.

## Modifications
- `fastdeploy/engine/common_engine.py`: black formatting
- `fastdeploy/worker/worker_process.py`: black formatting + isort fixes
- `fastdeploy/cache_manager/v1/storage/__init__.py`: isort fixes
- `fastdeploy/worker/gpu_worker.py`: isort fixes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [Feature][KVCache] update cache_manager_v1 modules

## Motivation
Update the Cache Manager V1 modules: complete the copyright headers and improve module structure and maintainability.

## Modifications
- `fastdeploy/cache_manager/v1/` modules: add copyright headers and tidy the code structure
- `fastdeploy/config.py`: configuration updates
- `fastdeploy/engine/sched/resource_manager_v1.py`: scheduling-related updates

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [Feature][KVCache] add BatchRequest.from_tasks and refactor worker task parsing

## Motivation
Consolidate the duplicated task-parsing logic in worker_process into BatchRequest to reduce redundancy and improve maintainability.

## Modifications
- `fastdeploy/engine/request.py`: add the `BatchRequest.from_tasks()` classmethod, which uniformly splits task_queue entries into inference requests and control requests
- `fastdeploy/worker/worker_process.py`: use `BatchRequest.from_tasks()` instead of the inline parsing logic and fix the duplicated control_reqs handling block

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [Feature][KVCache] add NUMA affinity for host cache and skip swap cache tests

## Motivation
Improve the NUMA affinity of host cache memory allocation to reduce cross-NUMA access latency, and skip the swap cache ops tests (not supported in the current environment).

## Modifications
- `fastdeploy/cache_manager/v1/cache_controller.py`:
  - Add `_get_numa_node_for_gpu()`: look up the NUMA node of a GPU via nvidia-smi or sysfs
  - Add `_bind_to_closest_numa_node()`: bind the current thread to the NUMA node closest to the GPU
  - Call the NUMA binding in `initialize_host_cache()` to improve H2D transfer performance
- `tests/cache_manager/v1/test_swap_cache_ops.py`: skip all test classes (`TestSwapCacheAllLayersCorrectness`, `TestSwapCacheAllLayersPerformance`, `TestSwapCacheRandomBlockIndices`)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [BugFix][KVCache] fix unittest failures for cache_manager_v1

Three unit tests failed because of interface changes or mocking issues:

- tests/distributed/chunked_moe.py: `setup_model_runner` uses `__new__` to skip `__init__`; add `enable_cache_manager_v1 = False` to fix the `AttributeError`
- tests/engine/test_resource_manager.py: `PrefixCacheManager` is imported locally, so the `patch` path is changed to its definition site `fastdeploy.cache_manager.prefix_cache_manager.PrefixCacheManager`
- tests/v1/test_resource_manager_v1.py: the fourth argument of `_trigger_preempt` changed from `list` to `BatchRequest`; update the test arguments and assertions

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [BugFix][KVCache] remove debug logging code

## Modifications
- fastdeploy/engine/request.py: remove the debug logger and the debug logging in prompt_hashes
- fastdeploy/worker/worker_process.py: remove the debug imports and print statements in `__main__`

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [BugFix][KVCache] fix cupy device id caching and pickle for _match_result

## Motivation
Fix two bugs:
1. transfer_manager.py calls `cp.cuda.runtime.getDevice()` on every operation, which is fragile; cache it as an instance attribute at initialization so all later operations use a consistent device ID.
2. `__getstate__` in `request.py` does not skip `_match_result`, whose BlockNode tree contains parent/child circular references that trigger a `RecursionError` during pickling; also add `__setstate__` so the fields are restored to safe defaults after unpickling.

## Modifications
- `transfer_manager.py`: call `cp.cuda.runtime.getDevice()` at initialization and cache it in `self._cupy_device_id`; later `with cp.cuda.Device(...)` blocks and log messages use the cached value.
- `request.py`:
  - Add `_match_result` to the `_SKIP_KEYS` skip set in `__getstate__` to avoid pickling failures caused by the circular references.
  - Add `__setstate__`, which resets `_block_hasher` and `_match_result` to `None` after unpickling.
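A minimal, self-contained sketch of the pickling guard described in the last entry, assuming a much-simplified Request and a `_SKIP_KEYS`-style class attribute; the real fields and defaults live in `fastdeploy/engine/request.py`.

```python
import pickle


class Request:
    # Fields that must not be pickled: _match_result holds BlockNode objects with
    # parent/child cycles, and _block_hasher holds an unpicklable closure.
    _SKIP_KEYS = {"_block_hasher", "_match_result"}

    def __init__(self, request_id: str):
        self.request_id = request_id
        self._block_hasher = lambda tokens: hash(tuple(tokens))
        self._match_result = None  # would reference the radix tree in real code

    def __getstate__(self):
        # Drop the skip keys so circular references never reach the pickler.
        return {k: v for k, v in self.__dict__.items() if k not in self._SKIP_KEYS}

    def __setstate__(self, state):
        self.__dict__.update(state)
        # Restore the skipped fields to safe defaults after unpickling.
        self._block_hasher = None
        self._match_result = None


req = Request("req-0")
restored = pickle.loads(pickle.dumps(req))
assert restored.request_id == "req-0" and restored._match_result is None
```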
* fix(test): fix unit test errors for _trigger_preempt and wakeup with MTP

- Use BatchRequest instead of list in test_trigger_preempt_records_tasks
- Add the missing enable_cache_manager_v1 attr in TestSleepWakeupBehavior._make_runner

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [BugFix][KVCache] fix gpu_free_block_list returning wrong block IDs

## Motivation
The compatibility property `gpu_free_block_list` mistakenly used `list(range(N))`, passing the return value of `available_blocks()` to `range()` as an integer, so it returned a fake list `[0, 1, ..., N-1]` instead of the real free block IDs.

## Modifications
- `cache_manager/v1/cache_manager.py`: change `list(range(self._device_pool.available_blocks()))` to `list(self._device_pool.available_blocks())`

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [BugFix][KVCache] fix gpu_free_block_list returning an int and raising TypeError

## Motivation
The gpu_free_block_list property calls BlockPool.available_blocks(), which returns an int (the number of free blocks); wrapping an int with list() raises TypeError: 'int' object is not iterable.

## Modifications
Change list(self._device_pool.available_blocks()) to list(self._device_pool._free_blocks), returning the list of free block indices directly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [KVCache][CacheManager] adapt pause/sleep/free_cache operations to the V1 CacheManager

## Motivation
The V1 CacheManager introduces a new reset_cache() interface; the pause and sleep operations need to adapt to it, and free_cache needs an optional clear_storage parameter.

## Modifications
- cache_controller.py: free_cache gains a clear_storage parameter (default False); _clear_storage() is only called when clear_storage=True, avoiding unnecessary storage clearing
- common_engine.py: in the pause and sleep operations, use cache_manager.reset_cache() instead of the old reset() and pause_transfer logic when ENABLE_V1_KVCACHE_MANAGER is set
- gpu_model_runner.py: during sleep, only clear the MTP cache when the V1 cache manager is not in use

## Usage or Command
```bash
# Start the service (V1 CacheManager)
python -m fastdeploy.entrypoints.openai.api_server \
    --enable-v1-kvcache-manager \
    ...
```

* [BugFix][KVCache] fix missing enable_cache_manager_v1 in test mocks and remove unused select_blocks_for_backup

- Remove the unused `select_blocks_for_backup` method from radix_tree.py
- Fix the `match_prefix` default param `skip_storage=True` and the log order in cache_manager.py
- Sync test_gpu_model_runner.py with upstream/develop (add TestInsertTasksV1SplitwiseSuffix)
- Add `enable_cache_manager_v1=False` to all mock runners to fix the AttributeError in CI

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [BugFix][KVCache] simplify _free_blocks in ResourceManagerV1 for non-v1 path

Remove the redundant prefix_caching branch in the else path; always call recycle_gpu_blocks with the full block_tables in the non-cache-manager-v1 case.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [KVCache][Optimization][BugFix] fix and optimize block_pool, cache_manager, transfer_manager, request

## Motivation
Fix several code quality issues in cache_manager v1, improve performance, and eliminate a latent type-inconsistency bug.

## Modifications
1. **block_pool.py**: `BlockPool.allocate` replaces the per-block pop loop with a slice plus a batched set.update, removing the Python loop overhead, O(n) → O(k) (batched C-level operations); a sketch of the resulting allocate path follows the next entry
2. **cache_manager.py**: when prefix caching is disabled, `match_prefix` writes an empty `MatchResult()` before the early return, so callers no longer crash dereferencing `_match_result=None`
3. **transfer_manager.py**: `_build_device_layer_indices` also resets the four layer-index lists when `_cache_kvs_map` is empty, preventing stale tensors from being used by the swap operators
4. **request.py**: `BatchRequest.append_swap_metadata` / `append_evict_metadata` construct `CacheSwapMetadata` with `src_type`/`dst_type` as `CacheLevel` enums instead of strings, matching the declared field types; add the `CacheLevel` import; correct the `match_result` property return annotation to `Optional[MatchResult]`
5. **resource_manager_v1.py**: downgrade the `_allocate_gpu_blocks` log from `INFO` to `DEBUG`, removing log noise on the hot scheduling path
6. **tests/engine/test_request.py**: update the `src_type`/`dst_type` assertions to `CacheLevel` enum values and add the `CacheLevel` import

## Usage or Command
Unit tests:
```bash
source .venv/py310/bin/activate
cd baidu/FastDeploy
python -m pytest tests/cache_manager/v1/test_cache_manager.py -v
python -m pytest tests/cache_manager/v1/test_transfer_manager.py -v
python -m pytest tests/engine/test_request.py -v
```

* [BugFix][KVCache] Fix BlockPool.allocate returning all blocks when num_blocks=0

## Motivation
When `allocate(num_blocks=0)` is called, Python's negative-index trap causes a serious error: `-0 == 0`, so `self._free_blocks[-0:]` is equivalent to `self._free_blocks[0:]`, which returns and clears the entire free-block list instead of returning an empty list.

## Modifications
Add an early check for `num_blocks == 0` in `BlockPool.allocate` that returns `[]` directly, avoiding the negative-index trap.

## Usage or Command
```bash
# Run the related unit tests to verify the fix
python -m pytest tests/cache_manager/v1/test_cache_manager.py -vv -s
```
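A minimal sketch of the allocate path after the last two entries (the batched slice allocation and the `num_blocks == 0` guard), assuming a much-simplified BlockPool; the real class in `cache_manager/v1/block_pool.py` carries more bookkeeping.

```python
from typing import List


class BlockPool:
    """Illustrative pool: free block IDs are handed out from the tail of a list."""

    def __init__(self, num_blocks: int):
        self._free_blocks: List[int] = list(range(num_blocks))
        self._allocated = set()

    def available_blocks(self) -> int:
        return len(self._free_blocks)

    def allocate(self, num_blocks: int) -> List[int]:
        # Guard first: without it, self._free_blocks[-0:] would be the WHOLE list,
        # because -0 == 0 in Python.
        if num_blocks == 0:
            return []
        if num_blocks > len(self._free_blocks):
            raise ValueError("not enough free blocks")
        # Slice + batched set.update instead of a per-block pop() loop.
        allocated = self._free_blocks[-num_blocks:]
        del self._free_blocks[-num_blocks:]
        self._allocated.update(allocated)
        return allocated


pool = BlockPool(4)
assert pool.allocate(0) == []       # no longer drains the pool
assert len(pool.allocate(2)) == 2
assert pool.available_blocks() == 2
```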
* [KVCache][Test] add unit tests for cache_manager v1 modules

## Motivation
Complete the unit-test coverage of the cache_manager/v1 modules so the core methods are fully protected by tests.

## Modifications
Add or extend the following test files; all 326 cases pass:

- tests/cache_manager/v1/test_block_pool.py (new): covers BlockPool.get_metadata/set_metadata/resize, DeviceBlockPool/HostBlockPool
- tests/cache_manager/v1/test_metadata.py (new): covers BlockNode, RadixTreeStats, MatchResult, CacheSwapMetadata, AsyncTaskHandler
- tests/cache_manager/v1/test_cache_utils.py (extended): adds hash_block_tokens, get_request_block_hasher, LayerDoneCounter time tracking, and internal helper methods
- tests/cache_manager/v1/test_radix_tree.py (extended): adds a dedicated TestCompleteSwapToDevice test class (6 cases)
- tests/cache_manager/v1/test_cache_manager.py (extended): adds offload_to_host, load_from_host, the pending-backup series, prepare_prefetch_metadata
- tests/cache_manager/v1/test_transfer_manager.py (extended): adds _swap_single_layer validation paths, sync_input/output_stream, record_input_stream_event

## Usage or Command
```bash
# Run all newly added tests
source .venv/py310/bin/activate
python -m pytest tests/cache_manager/v1/test_block_pool.py \
    tests/cache_manager/v1/test_metadata.py \
    tests/cache_manager/v1/test_cache_utils.py \
    tests/cache_manager/v1/test_radix_tree.py \
    tests/cache_manager/v1/test_cache_manager.py \
    tests/cache_manager/v1/test_transfer_manager.py -v
# Expected result: 326 passed
```

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>
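As a rough illustration of the kind of case the test_cache_utils.py additions above cover, here is a self-contained property test over a simplified stand-in for `hash_block_tokens`; the real helper and its signature in `cache_manager/v1/cache_utils.py` may differ.

```python
import hashlib
import unittest


def hash_block_tokens(parent_hash: str, token_ids: tuple, extra_keys=None) -> str:
    """Simplified stand-in: chain the parent hash with this block's tokens and extra keys."""
    payload = repr((parent_hash, token_ids, extra_keys)).encode()
    return hashlib.sha256(payload).hexdigest()


class TestBlockHashChain(unittest.TestCase):
    def test_same_prefix_produces_same_hashes(self):
        self.assertEqual(hash_block_tokens("", (1, 2, 3, 4)), hash_block_tokens("", (1, 2, 3, 4)))

    def test_diverging_block_changes_hash(self):
        root = hash_block_tokens("", (1, 2, 3, 4))
        self.assertNotEqual(
            hash_block_tokens(root, (5, 6, 7, 8)),
            hash_block_tokens(root, (5, 6, 7, 9)),
        )

    def test_extra_keys_change_hash(self):
        # Multimodal extra keys must affect the hash, otherwise prefix-cache hits
        # would be reported for blocks backed by different image content.
        self.assertNotEqual(
            hash_block_tokens("", (1, 2, 3, 4), extra_keys=("img-a",)),
            hash_block_tokens("", (1, 2, 3, 4), extra_keys=("img-b",)),
        )


if __name__ == "__main__":
    unittest.main()
```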
946 lines
43 KiB
Python
"""
|
|
# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
|
|
#
|
|
# Licensed under the Apache License, Version 2.0 (the "License"
|
|
# you may not use this file except in compliance with the License.
|
|
# You may obtain a copy of the License at
|
|
#
|
|
# http://www.apache.org/licenses/LICENSE-2.0
|
|
#
|
|
# Unless required by applicable law or agreed to in writing, software
|
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
# See the License for the specific language governing permissions and
|
|
# limitations under the License.
|
|
"""
|
|
|
|
from __future__ import annotations
|
|
|
|
import copy
|
|
import json
|
|
import multiprocessing
|
|
import os
|
|
import re
|
|
import signal
|
|
import subprocess
|
|
import sys
|
|
import threading
|
|
import time
|
|
import traceback
|
|
import uuid
|
|
import weakref
|
|
from dataclasses import asdict
|
|
|
|
import numpy as np
|
|
import paddle
|
|
from tqdm import tqdm
|
|
|
|
import fastdeploy.metrics.trace as tracing
|
|
from fastdeploy.engine.args_utils import EngineArgs
|
|
from fastdeploy.engine.common_engine import (
|
|
EngineService,
|
|
_format_worker_launch_failure_message,
|
|
)
|
|
from fastdeploy.engine.expert_service import start_data_parallel_service
|
|
from fastdeploy.engine.request import Request
|
|
from fastdeploy.inter_communicator import EngineWorkerQueue, IPCSignal
|
|
from fastdeploy.logger.request_logger import (
|
|
RequestLogLevel,
|
|
log_request,
|
|
log_request_error,
|
|
)
|
|
from fastdeploy.metrics.metrics import main_process_metrics
|
|
from fastdeploy.platforms import current_platform
|
|
from fastdeploy.utils import EngineError, console_logger, envs, llm_logger
|
|
|
|
|
|
class LLMEngine:
    """
    Engine class responsible for managing the Large Language Model (LLM) operations.

    Attributes:
        cfg (Config): Configuration object containing all the parameters.
        cached_generated_tokens (queue.Queue): Queue to store generated tokens.
        scheduler (LocalScheduler or GlobalScheduler): Scheduler for dispatching tasks.
        input_processor (InputPreprocessor): Preprocessor for input data.
        resource_manager (ResourceManager): Manager for resource allocation.
        token_processor (TokenProcessor): Processor for token generation.
        engine_worker_queue (EngineWorkerQueue): Queue for communication between engine and workers.
        is_started (bool): Flag indicating if the engine has started.
        do_profile (int): Flag indicating if profiling is enabled.
    """

    @classmethod
    def from_engine_args(cls, engine_args: EngineArgs):
        """
        Creates an LLM engine from the provided engine arguments.

        Args:
            engine_args (EngineArgs): Engine arguments object.

        Returns:
            LLMEngine: Instance of the LLMEngine class.
        """
        # Create the engine configs.
        config = engine_args.create_engine_config()
        # Create the LLMEngine.
        return cls(cfg=config)

    def __init__(self, cfg):
        """
        Initializes the LLMEngine with the provided configuration.

        Args:
            cfg (Config): Config object containing all the configuration parameters.
        """
        self.cfg = cfg
        self.cfg.print()
        self.running = True
        self.is_started = False

        self.engine = EngineService(cfg)

        if self.cfg.cache_config.num_gpu_blocks_override is None:
            self.do_profile = 1
        else:
            self.do_profile = 0
        self._finalizer = weakref.finalize(self, self._exit_sub_services)

        main_process_metrics.set_cache_config_info(obj=self.cfg.cache_config)

        tracing.trace_set_thread_info("engine")

    def start(self, api_server_pid=None):
        """
        Initializes the engine and starts its sub-services.
        If `api_server_pid` is defined, a thread is launched
        to keep fetching requests from the zmq_server.

        NOTE: To clarify the launch order of the components of the LLM engine:
        1. First, launch the splitwise scheduler (if necessary) and expert services (if necessary).
        2. Then, launch the common engine, which includes background threads that insert tasks and receive outputs.
        3. Most importantly, launch the workers and cache services. Their launch order is listed as follows.

        | Profile | Mixed | PrefixCache | Cache -> Worker | Worker -> Cache |
        |---------|-------|-------------|-----------------|-----------------|
        |    1    |   1   |      1      |        0        |        1        |
        |    1    |   1   |      0      |        0        |        0        |
        |    1    |   0   |      1      |        0        |        1        |
        |    1    |   0   |      0      |        0        |        1        |
        |    0    |   1   |      1      |        0        |        1        |
        |    0    |   1   |      0      |        0        |        0        |
        |    0    |   0   |      1      |        1        |        0        |
        |    0    |   0   |      0      |        1        |        0        |

        4. Finally, inform the user that the engine has started successfully.
        """
        assert not self.is_started, "The engine is already started."
        start_time = time.time()

        self.api_server_pid = api_server_pid
        self.ipc_signal_suffix = self.cfg.parallel_config.engine_worker_queue_port[0]
        self._init_worker_signals()

        self.launch_components()

        self.engine.start()
        self.engine.create_data_processor()
        self.data_processor = self.engine.data_processor

        # If the block number is specified and the model is not deployed in mixed mode, start the cache manager first
        if not self.do_profile and self.cfg.scheduler_config.splitwise_role != "mixed":
            if not current_platform.is_intel_hpu():
                device_ids = self.cfg.parallel_config.device_ids.split(",")
                self.cache_manager_processes = self.engine.start_cache_service(device_ids, self.ipc_signal_suffix)

        # Start workers
        self.worker_proc = self._start_worker_service()
        console_logger.info("Waiting for worker processes to be ready...")
        time.sleep(5)
        self.worker_init_status = dict()

        result_container = {}

        def check_worker_initialize_status_func(res: dict):
            res["worker_is_alive"] = True
            if not self.check_worker_initialize_status():
                console_logger.error(_format_worker_launch_failure_message(envs.FD_LOG_DIR))
                res["worker_is_alive"] = False

        self.check_worker_initialize_status_func_thread = threading.Thread(
            target=check_worker_initialize_status_func, args=(result_container,), daemon=True
        )
        self.check_worker_initialize_status_func_thread.start()

        # Wait for model loading
        while self.loaded_model_signal.value[0] == 0:
            # Make sure the worker process is alive
            if not self.check_worker_initialize_status_func_thread.is_alive():
                return False
            time.sleep(1)

        # If the block number is not specified, let workers do profiling to determine the block number,
        # and then start the cache manager
        if self.do_profile:
            if not self._stop_profile():
                return False
        elif self.cfg.scheduler_config.splitwise_role == "mixed" and self.cfg.cache_config.enable_prefix_caching:
            if not current_platform.is_intel_hpu() and not envs.ENABLE_V1_KVCACHE_MANAGER:
                device_ids = self.cfg.parallel_config.device_ids.split(",")
                self.cache_manager_processes = self.engine.start_cache_service(device_ids, self.ipc_signal_suffix)

        if envs.FD_ENABLE_INTERNAL_ADAPTER:
            assert (
                envs.FD_ZMQ_RECV_REQUEST_SERVER_PORTS is not None or envs.FD_ZMQ_RECV_REQUEST_SERVER_PORT is not None
            ), "Please set FD_ZMQ_RECV_REQUEST_SERVER_PORTS or FD_ZMQ_RECV_REQUEST_SERVER_PORT when enabling internal adapter."
            assert (
                envs.FD_ZMQ_SEND_RESPONSE_SERVER_PORTS is not None or envs.FD_ZMQ_SEND_RESPONSE_SERVER_PORT is not None
            ), "Please set FD_ZMQ_SEND_RESPONSE_SERVER_PORTS or FD_ZMQ_SEND_RESPONSE_SERVER_PORT when enabling internal adapter."
            if envs.FD_ZMQ_RECV_REQUEST_SERVER_PORTS is not None:
                envs.FD_ZMQ_RECV_REQUEST_SERVER_PORT = envs.FD_ZMQ_RECV_REQUEST_SERVER_PORTS.split(",")[0]
            if envs.FD_ZMQ_SEND_RESPONSE_SERVER_PORTS is not None:
                envs.FD_ZMQ_SEND_RESPONSE_SERVER_PORT = envs.FD_ZMQ_SEND_RESPONSE_SERVER_PORTS.split(",")[0]
            llm_logger.info(
                f"envs.FD_ZMQ_RECV_REQUEST_SERVER_PORT:{envs.FD_ZMQ_RECV_REQUEST_SERVER_PORT},envs.FD_ZMQ_SEND_RESPONSE_SERVER_PORT:{envs.FD_ZMQ_SEND_RESPONSE_SERVER_PORT}"
            )

        if api_server_pid is not None:
            llm_logger.info(f"Start zmq server, api_server_pid: {api_server_pid}")
            self.engine.start_zmq_service(api_server_pid)

        # Worker launched
        self.check_worker_initialize_status_func_thread.join()
        if not result_container["worker_is_alive"]:
            console_logger.error(_format_worker_launch_failure_message(envs.FD_LOG_DIR))
            return False

        console_logger.info(f"Worker processes are launched in {time.time() - start_time} seconds.")

        # Print the block numbers & max running requests to the console
        if envs.ENABLE_V1_KVCACHE_SCHEDULER:
            block_size = self.cfg.cache_config.block_size
            num_gpu_blocks = self.cfg.cache_config.num_gpu_blocks_override or self.cfg.cache_config.total_block_num
            num_cpu_blocks = self.cfg.cache_config.num_cpu_blocks
            max_running_requests = min(
                (num_gpu_blocks + num_cpu_blocks) * block_size // self.cfg.model_config.max_model_len,
                self.cfg.scheduler_config.max_num_seqs,
            )
            console_logger.info(
                f"Detected {num_gpu_blocks} gpu blocks and {num_cpu_blocks} cpu blocks in cache (block size: {block_size})."
            )
            console_logger.info(
                f"FastDeploy will be serving {max_running_requests} running requests "
                f"if each sequence reaches its maximum length: {self.cfg.model_config.max_model_len}"
            )

        return True

    def _get_generated_result(self):
        """
        Get results from the scheduler; this function is called by generate(),
        which is only used in offline inference.
        """
        return self.engine.scheduler.get_results()

    # _insert_task_to_worker moved to CommonEngine

    def _has_guided_input(self, request):
        """
        Check if the request has any guided input.
        """
        return any(
            x is not None
            for x in (
                request.guided_json,
                request.guided_regex,
                request.guided_choice,
                request.structural_tag,
                request.guided_grammar,
                request.guided_json_object,
            )
        )

    def add_requests(self, task, sampling_params=None, **kwargs):
        """
        Add a new request to the queue.

        Args:
            task: A dictionary representing the request.
            sampling_params: A dictionary representing the sampling parameters.

        Returns:
            None
        """
        # TODO: validate input/output lengths

        if sampling_params is not None:
            if sampling_params.temperature is not None and abs(sampling_params.temperature) < 1e-06:
                sampling_params.temperature = 1e-06
            task.update({k: v for k, v in asdict(sampling_params).items() if v is not None})

        # Prepare chat_template_kwargs before calling process_request_dict
        chat_template_kwargs = kwargs.get("chat_template_kwargs") or {}
        chat_template_kwargs["chat_template"] = kwargs.get("chat_template")
        task["chat_template_kwargs"] = chat_template_kwargs

        # Use dict to call process_request_dict
        task = self.engine.data_processor.process_request_dict(task, self.cfg.model_config.max_model_len)

        # Create the Request struct after processing
        request = Request.from_dict(task)
        request.metrics.scheduler_recv_req_time = time.time()
        log_request(RequestLogLevel.CONTENT, message="Receive request {request}", request=request)
        request.metrics.preprocess_start_time = time.time()

        request.prompt_token_ids_len = len(request.prompt_token_ids)
        request.need_prefill_tokens = request.prompt_token_ids_len
        input_ids_len = request.prompt_token_ids_len
        request.set(
            "max_tokens",
            min(
                self.cfg.model_config.max_model_len - input_ids_len,
                request.get("max_tokens"),
            ),
        )
        min_tokens = request.get("min_tokens")
        if input_ids_len + min_tokens >= self.cfg.model_config.max_model_len:
            error_msg = (
                f"Input text is too long, length of prompt token({input_ids_len}) "
                f"+ min_dec_len ({min_tokens}) >= max_model_len "
            )
            log_request_error(
                message="request[{request_id}] error: {error}",
                request_id=request.get("request_id"),
                error=error_msg,
            )
            raise EngineError(error_msg, error_code=400)

        if input_ids_len > self.cfg.model_config.max_model_len:
            error_msg = f"Length of input token({input_ids_len}) exceeds the limit max_model_len({self.cfg.model_config.max_model_len})."
            log_request_error(
                message="request[{request_id}] error: {error}",
                request_id=request.get("request_id"),
                error=error_msg,
            )
            raise EngineError(error_msg, error_code=400)

        if request.get("stop_seqs_len") is not None:
            stop_seqs_len = request.get("stop_seqs_len")
            max_stop_seqs_num = envs.FD_MAX_STOP_SEQS_NUM
            if len(stop_seqs_len) > max_stop_seqs_num:
                error_msg = (
                    f"Length of stop ({stop_seqs_len}) exceeds the limit max_stop_seqs_num({max_stop_seqs_num})."
                    "Please reduce the number of stop sequences or set a larger max_stop_seqs_num by `FD_MAX_STOP_SEQS_NUM`"
                )
                log_request_error(
                    message="request[{request_id}] error: {error}",
                    request_id=request.get("request_id"),
                    error=error_msg,
                )
                raise EngineError(error_msg, error_code=400)
            stop_seqs_max_len = envs.FD_STOP_SEQS_MAX_LEN
            for single_stop_seq_len in stop_seqs_len:
                if single_stop_seq_len > stop_seqs_max_len:
                    error_msg = (
                        f"Length of stop_seqs({single_stop_seq_len}) exceeds the limit stop_seqs_max_len({stop_seqs_max_len})."
                        "Please reduce the length of stop sequences or set a larger stop_seqs_max_len by `FD_STOP_SEQS_MAX_LEN`"
                    )
                    log_request_error(
                        message="request[{request_id}] error: {error}",
                        request_id=request.get("request_id"),
                        error=error_msg,
                    )
                    raise EngineError(error_msg, error_code=400)

        if self._has_guided_input(request):
            err_msg = None
            if self.guided_decoding_checker is None:
                err_msg = (
                    "guided_backend is None, use --guided-decoding-backend to specify the backend at server startup."
                )
            else:
                request, err_msg = self.guided_decoding_checker.schema_format(request)

            if err_msg is not None:
                log_request_error(
                    message="request[{request_id}] error: {error}",
                    request_id=request.get("request_id"),
                    error=err_msg,
                )
                raise EngineError(err_msg, error_code=400)

        request.metrics.preprocess_end_time = time.time()
        request.metrics.scheduler_recv_req_time = time.time()
        self.engine.scheduler.put_requests([request])
        log_request(
            RequestLogLevel.STAGES,
            message="Cache task with request_id ({request_id})",
            request_id=request.get("request_id"),
        )
        log_request(RequestLogLevel.FULL, message="cache task: {request}", request=request)

    def _worker_processes_ready(self):
        """
        Judge if all worker processes are ready.
        """
        if np.sum(self.worker_ready_signal.value) == self.cfg.worker_num_per_node:
            return True
        return False

    def _init_worker_signals(self):
        """
        Initialize shared memory to indicate engine status
        """
        # worker_ready_signal: used to detect whether the worker processes have finished starting up
        worker_ready_signal_data = np.zeros(shape=[self.cfg.worker_num_per_node], dtype=np.int32)
        self.worker_ready_signal = IPCSignal(
            name="worker_ready_signal",
            array=worker_ready_signal_data,
            dtype=np.int32,
            suffix=self.ipc_signal_suffix,
            create=True,
        )

        # launched_cache_manager_signal: used to detect whether the engine has launched the cache_manager
        if self.cfg.cache_config.enable_prefix_caching or self.cfg.scheduler_config.splitwise_role != "mixed":
            launched_cache_manager_signal_data = np.zeros([1], dtype=np.int32)
            self.launched_cache_manager_signal = IPCSignal(
                name="launched_cache_manager_signal",
                array=launched_cache_manager_signal_data,
                dtype=np.int32,
                suffix=self.ipc_signal_suffix,
                create=True,
            )

        # launched_expert_service_signal: used to sense whether each expert_service started successfully
        if self.cfg.parallel_config.data_parallel_size > 1 and not envs.FD_ENABLE_MULTI_API_SERVER:
            launched_expert_service_signal_data = np.zeros(
                shape=[self.cfg.parallel_config.data_parallel_size // self.cfg.nnode], dtype=np.int32
            )
            self.launched_expert_service_signal = IPCSignal(
                name="launched_expert_service_signal",
                array=launched_expert_service_signal_data,
                dtype=np.int32,
                suffix=self.ipc_signal_suffix,
                create=True,
            )

        # loaded_model_signal: used to detect whether each worker has completed model loading
        loaded_model_signal_data = np.zeros([1], dtype=np.int32)
        self.loaded_model_signal = IPCSignal(
            name="loaded_model_signal",
            array=loaded_model_signal_data,
            dtype=np.int32,
            suffix=self.ipc_signal_suffix,
            create=True,
        )

        if self.do_profile:
            if paddle.is_compiled_with_custom_device("iluvatar_gpu"):
                get_profile_block_num = np.zeros([self.cfg.worker_num_per_node], dtype=np.int32)
            else:
                get_profile_block_num = np.zeros([1], dtype=np.int32)
            self.get_profile_block_num_signal = IPCSignal(
                name="get_profile_block_num",
                array=get_profile_block_num,
                dtype=np.int32,
                suffix=self.ipc_signal_suffix,
                create=True,
            )

    def _exit_sub_services(self):
        """
        exit sub services
        """
        self.running = False
        llm_logger.info("Engine shut down, exiting sub services...")

        if hasattr(self, "cache_manager_processes"):
            if hasattr(self.engine.resource_manager.cache_manager, "shm_cache_task_flag_broadcast"):
                self.engine.resource_manager.cache_manager.shm_cache_task_flag_broadcast.clear()
            if hasattr(self.engine.resource_manager.cache_manager, "cache_ready_signal"):
                self.engine.resource_manager.cache_manager.cache_ready_signal.clear()
            for p in self.cache_manager_processes:
                llm_logger.info(f"Killing cache manager process {p.pid}")
                try:
                    pgid = os.getpgid(p.pid)
                    os.killpg(pgid, signal.SIGTERM)
                except Exception as e:
                    console_logger.error(
                        f"Error killing cache manager process {p.pid}: {e}, {str(traceback.format_exc())}"
                    )
        self.worker_ready_signal.clear()
        self.loaded_model_signal.clear()

        if hasattr(self, "get_profile_block_num_signal"):
            self.get_profile_block_num_signal.clear()

        if hasattr(self, "worker_proc") and self.worker_proc is not None:
            try:
                pgid = os.getpgid(self.worker_proc.pid)
                os.killpg(pgid, signal.SIGTERM)
            except Exception as e:
                console_logger.error(f"Error exiting sub services: {e}, {str(traceback.format_exc())}")

        if hasattr(self, "zmq_server") and self.zmq_server is not None:
            self.zmq_server.close()

        if hasattr(self, "dp_processed"):
            for p in self.dp_processed:
                console_logger.info(f"Waiting for worker {p.pid} to exit")
                p.join()
            for p in self.dp_engine_worker_queue_server:
                p.cleanup()

    def _setting_environ_variables(self):
        """
        Configure environment variables for the worker launch command.
        """
        variables = {
            "ENABLE_FASTDEPLOY_LOAD_MODEL_CONCURRENCY": 0,
            "LOAD_STATE_DICT_THREAD_NUM": len(self.cfg.parallel_config.device_ids.split(",")),
            "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION": "python",
            "NCCL_ALGO": "Ring",
            "FLAGS_max_partition_size": int(os.getenv("FLAGS_max_partition_size", 1024)),
            "OMP_NUM_THREADS": 3,
            "FD_ENABLE_PDL": envs.FD_ENABLE_PDL,
        }
        # environment variables needed by Dy2St
        variables.update(
            {
                "SOT_LOG_LEVEL": os.getenv("SOT_LOG_LEVEL", default="0"),
                "SOT_UNSAFE_CACHE_FASTPATH": os.getenv("SOT_UNSAFE_CACHE_FASTPATH", default="1"),
                "SOT_ENABLE_0_SIZE_FALLBACK": os.getenv("SOT_ENABLE_0_SIZE_FALLBACK", default="0"),
                "SOT_SPECIALIZED_DIM_NUMBERS": os.getenv("SOT_SPECIALIZED_DIM_NUMBERS", default="no"),
                "SOT_ENABLE_COMPILE_TIME_LIMIT": os.getenv("SOT_ENABLE_COMPILE_TIME_LIMIT", default="0"),
                "FLAGS_specialize_device_in_dy2st": os.getenv("FLAGS_specialize_device_in_dy2st", default="1"),
                "FLAGS_enable_async_fast_gc": os.getenv("FLAGS_enable_async_fast_gc", default="0"),
                "FLAGS_pir_interpreter_record_stream_for_gc_cache": os.getenv(
                    "FLAGS_pir_interpreter_record_stream_for_gc_cache", default="1"
                ),
                "FLAGS_parameters_persistent_mode_in_dy2st": os.getenv(
                    "FLAGS_parameters_persistent_mode_in_dy2st", default="1"
                ),
            }
        )

        if self.cfg.scheduler_config.splitwise_role != "mixed":
            if envs.ENABLE_V1_KVCACHE_SCHEDULER:
                variables["FLAGS_use_pd_disaggregation_per_chunk"] = 1
            else:
                variables["FLAGS_use_pd_disaggregation"] = 1
            # TODO: load environment variables dynamically
            if self.cfg.scheduler_config.splitwise_role == "prefill":
                variables["FLAGS_fmt_write_cache_completed_signal"] = 1

        command_prefix = ""
        for k, v in variables.items():
            command_prefix += f"{k}={v} "
        return command_prefix

    def _start_worker_service(self):
        """
        Start the GPU worker service.
        """
        console_logger.debug("Start worker process...")
        log_dir = os.getenv("FD_LOG_DIR", default="log")
        command_prefix = self._setting_environ_variables()
        current_file_path = os.path.abspath(__file__)
        current_dir_path = os.path.split(current_file_path)[0]
        # TODO
        uncache_worker_stdout = "" if os.getenv("UNCACHE_WORKER_STDOUT", "0") == "1" else "-u"
        pd_cmd = f"{command_prefix} {sys.executable} {uncache_worker_stdout} -m paddle.distributed.launch"
        pd_cmd = pd_cmd + f" --log_dir {log_dir}"

        worker_path = "../worker/worker_process.py"
        py_script = os.path.join(current_dir_path, worker_path)

        ori_vocab_size = (
            len(self.engine.data_processor.tokenizer.sp_model)
            if hasattr(self.engine.data_processor.tokenizer, "sp_model")
            else len(self.engine.data_processor.tokenizer.vocab)
        )

        think_start_id = self.data_processor.tokenizer.get_vocab().get("<think>", -1)
        if think_start_id >= 0:
            llm_logger.info(f"Get think_start_id {think_start_id} from vocab.")
        else:
            llm_logger.info("No <think> token found in vocabulary, the model cannot do reasoning.")
        think_end_id = self.data_processor.tokenizer.get_vocab().get("</think>", -1)
        if think_end_id >= 0:
            llm_logger.info(f"Get think_end_id {think_end_id} from vocab.")
        else:
            llm_logger.info("No </think> token found in vocabulary, the model cannot do reasoning.")
        image_patch_id = self.data_processor.tokenizer.get_vocab().get("<|IMAGE_PLACEHOLDER|>", -1)
        line_break_id = self.data_processor.tokenizer.get_vocab().get("\n", -1)
        if line_break_id < 0:
            line_break_ids = self.data_processor.tokenizer.encode("\n", add_special_tokens=False)
            if isinstance(line_break_ids, dict):
                line_break_ids = line_break_ids.get("input_ids")
            elif hasattr(line_break_ids, "input_ids"):
                line_break_ids = line_break_ids.input_ids
            if line_break_ids:
                if isinstance(line_break_ids, (list, tuple)):
                    first = line_break_ids[0]
                    if isinstance(first, (list, tuple)):
                        line_break_id = int(first[0]) if first else -1
                    else:
                        line_break_id = int(first)
                else:
                    line_break_id = int(line_break_ids)
        if line_break_id >= 0:
            llm_logger.info(f"Get line_break_id {line_break_id} from tokenizer.")
        try:
            think_truncate_prompt_ids = self.data_processor.tokenizer.convert_tokens_to_ids(
                self.data_processor.tokenizer.tokenize(self.data_processor.tokenizer.think_truncate_prompt)
            )
        except Exception:
            think_truncate_prompt_ids = self.data_processor.tokenizer.convert_tokens_to_ids(
                self.data_processor.tokenizer.tokenize(envs.FD_LIMIT_THINKING_CONTENT_TRUNCATE_STR)
            )
        llm_logger.info(f"Get think_truncate_prompt_ids {think_truncate_prompt_ids} from tokenizer.")

        ports = ",".join(map(str, self.cfg.parallel_config.engine_worker_queue_port))
        ips = None
        if self.cfg.ips is not None:
            ips = ",".join(self.cfg.ips)
        arguments = (
            f" --devices {self.cfg.parallel_config.device_ids} {py_script}"
            f" --max_num_seqs {self.cfg.scheduler_config.max_num_seqs} --max_model_len {self.cfg.model_config.max_model_len}"
            f" --gpu_memory_utilization {self.cfg.cache_config.gpu_memory_utilization}"
            f" --model {self.cfg.model_config.model!s}"
            f" --device_ids {self.cfg.parallel_config.device_ids}"
            f" --tensor_parallel_size {self.cfg.parallel_config.tensor_parallel_size}"
            f" --engine_worker_queue_port {ports}"
            f" --pod_ip {self.cfg.master_ip}"
            f" --block_size {self.cfg.cache_config.block_size}"
            f" --enc_dec_block_num {self.cfg.cache_config.enc_dec_block_num}"
            f" --eos_tokens_lens {self.engine.data_processor.eos_token_id_len}"
            f" --pad_token_id {self.engine.data_processor.pad_token_id}"
            f" --engine_pid {self.cfg.parallel_config.engine_worker_queue_port[0]}"
            f" --max_num_batched_tokens {self.cfg.scheduler_config.max_num_batched_tokens}"
            f" --splitwise_role {self.cfg.scheduler_config.splitwise_role}"
            f" --kv_cache_ratio {self.cfg.cache_config.kv_cache_ratio}"
            f" --expert_parallel_size {self.cfg.parallel_config.expert_parallel_size}"
            f" --chunked_moe_size {self.cfg.parallel_config.chunked_moe_size}"
            f" --data_parallel_size {self.cfg.parallel_config.data_parallel_size}"
            f" --quantization '{json.dumps(self.cfg.model_config.quantization)}'"
            f" --ori_vocab_size {ori_vocab_size}"
            f" --think_start_id {think_start_id}"
            f" --think_end_id {think_end_id}"
            f" --image_patch_id {image_patch_id}"
            f" --line_break_id {line_break_id}"
            f" --think_truncate_prompt_ids '{json.dumps(think_truncate_prompt_ids)}'"
            f" --speculative_config '{self.cfg.speculative_config.to_json_string()}'"
            f" --graph_optimization_config '{self.cfg.graph_opt_config.to_json_string()}'"
            f" --guided_decoding_backend {self.cfg.structured_outputs_config.guided_decoding_backend}"
            f" --load_strategy {self.cfg.load_config.load_strategy}"
            f" --rsync_config '{json.dumps(self.cfg.load_config.rsync_config)}'"
            f" --early_stop_config '{self.cfg.early_stop_config.to_json_string()}'"
            f" --reasoning_parser {self.cfg.structured_outputs_config.reasoning_parser}"
            f" --load_choices {self.cfg.load_config.load_choices}"
            f" --model_loader_extra_config '{json.dumps(self.cfg.load_config.model_loader_extra_config)}'"
            f" --plas_attention_config '{self.cfg.plas_attention_config.to_json_string()}'"
            f" --ips {ips}"
            f" --max_encoder_cache {self.cfg.cache_config.max_encoder_cache}"
            f" --cache-transfer-protocol {self.cfg.cache_config.cache_transfer_protocol}"
            f" --runner {self.cfg.model_config.runner}"
            f" --convert {self.cfg.model_config.convert}"
            f" --override-pooler-config {self.cfg.model_config.override_pooler_config}"
            f" --logprobs_mode {self.cfg.model_config.logprobs_mode}"
            f" --max_logprobs {self.cfg.model_config.max_logprobs}"
            f" --eplb_config '{self.cfg.eplb_config.to_json_string()}'"
            f" --routing_replay_config '{self.cfg.routing_replay_config.to_json_string()}'"
            f" --model-impl {self.cfg.model_config.model_impl}"
            f" --num_cpu_blocks {self.cfg.cache_config.num_cpu_blocks}"
            f" --deploy_modality {self.cfg.deploy_modality.value}"
        )
        if self.cfg.structured_outputs_config.logits_processors is not None:
            arguments += f" --logits-processors {' '.join(self.cfg.structured_outputs_config.logits_processors)}"
        if self.engine.mm_max_tokens_per_item is not None:
            arguments += f" --mm_max_tokens_per_item '{json.dumps(self.engine.mm_max_tokens_per_item)}'"

        # TODO (iluvatar): remove after paddle fix launch error
        if current_platform.is_iluvatar() and "CUDA_VISIBLE_DEVICES" in os.environ:
            arguments = arguments.replace(f"--devices {self.cfg.parallel_config.device_ids}", "")

        worker_store_true_flag = {
            "enable_expert_parallel": self.cfg.parallel_config.enable_expert_parallel,
            "enable_chunked_moe": self.cfg.parallel_config.enable_chunked_moe,
            "enable_prefix_caching": self.cfg.cache_config.enable_prefix_caching,
            "enable_chunked_prefill": self.cfg.cache_config.enable_chunked_prefill,
            "do_profile": self.do_profile,
            "dynamic_load_weight": self.cfg.load_config.dynamic_load_weight,
            "disable_any_whitespace": self.cfg.structured_outputs_config.disable_any_whitespace,
            "disable_custom_all_reduce": self.cfg.parallel_config.disable_custom_all_reduce,
            "use_internode_ll_two_stage": self.cfg.parallel_config.use_internode_ll_two_stage,
            "disable_sequence_parallel_moe": self.cfg.parallel_config.disable_sequence_parallel_moe,
            "enable_logprob": self.cfg.model_config.enable_logprob,
            "lm_head_fp32": self.cfg.model_config.lm_head_fp32,
            "moe_gate_fp32": self.cfg.model_config.moe_gate_fp32,
            "shutdown_comm_group_if_worker_idle": self.cfg.parallel_config.shutdown_comm_group_if_worker_idle,
            "enable_entropy": self.cfg.model_config.enable_entropy,
            "ep_prefill_use_worst_num_tokens": self.cfg.parallel_config.ep_prefill_use_worst_num_tokens,
            "enable_overlap_schedule": self.cfg.scheduler_config.enable_overlap_schedule,
            "enable_flashinfer_allreduce_fusion": self.cfg.parallel_config.enable_flashinfer_allreduce_fusion,
        }
        for worker_flag, value in worker_store_true_flag.items():
            if value:
                arguments = arguments + f" --{worker_flag}"

        worker_default_none_flag = {
            "num_gpu_blocks_override": self.cfg.cache_config.num_gpu_blocks_override,
            "kvcache_storage_backend": self.cfg.cache_config.kvcache_storage_backend,
        }
        for worker_flag, value in worker_default_none_flag.items():
            if value:
                arguments = arguments + f" --{worker_flag} {value}"

        if self.cfg.nnode > 1:
            pd_cmd = pd_cmd + f" --ips {ips} --nnodes {len(self.cfg.ips)}"
        pd_cmd = pd_cmd + arguments + f" 2>{log_dir}/launch_worker.log"
        llm_logger.info(f"Launch worker service command: {pd_cmd}")
        p = subprocess.Popen(
            pd_cmd,
            stdout=subprocess.PIPE,
            shell=True,
            preexec_fn=os.setsid,
        )
        return p

    def _format_and_add_data(self, prompts: dict):
        if "request_id" not in prompts:
            prompts["request_id"] = str(uuid.uuid4())
        query_list = []

        if "context" in prompts:
            for item in prompts["context"]:
                if item["role"] == "system":
                    prompts["system"] = item["utterance"]
                elif item["role"] in ["user", "assistant"]:
                    query_list.append(item["utterance"])
            prompts["prompt"] = query_list

        if "max_tokens" not in prompts:
            prompts["max_tokens"] = self.cfg.model_config.max_model_len

        self.add_requests(prompts)
        return prompts["request_id"]

    def generate(self, prompts, stream):
        """
        Generates a response based on the given prompt using the model.

        Args:
            prompts (dict): The prompt to use for generating the response.
            stream (bool): Whether to stream the output or wait until completion.

        Yields:
            dict: The generated response.
        """
        log_request(RequestLogLevel.CONTENT, message="Starting generation for prompt: {prompts}", prompts=prompts)
        try:
            req_id = self._format_and_add_data(prompts)
        except Exception as e:
            log_request_error(
                message="request[{request_id}] error while adding request: {error}, {traceback}",
                request_id=prompts.get("request_id"),
                error=str(e),
                traceback=traceback.format_exc(),
            )
            raise EngineError(str(e), error_code=400)

        # Get the result of the current request
        for result in self._get_generated_tokens(req_id):
            is_end = result.finished
            if stream and not is_end:
                output = self.engine.data_processor.process_response_dict(
                    result.to_dict(), stream=False, include_stop_str_in_output=False
                )
                if output is None:
                    continue
                yield output

            # Exit the loop if the termination condition is met
            if is_end:
                output = self.engine.data_processor.process_response_dict(
                    result.to_dict(), stream=False, include_stop_str_in_output=False, direct_decode=not stream
                )
                log_request(RequestLogLevel.FULL, message="Generate result: {output}", output=output)
                if not stream:
                    yield output
                else:
                    output["outputs"]["text"] = ""
                    output["outputs"]["reasoning_content"] = ""
                    yield output

        self.engine.check_and_free_block_tables()

    def _stop_profile(self):
        """
        Stop profiling of the model server and reset variables.
        """
        self.do_profile = 0
        while self.get_profile_block_num_signal.value[0] == 0:
            if hasattr(self, "worker_proc") and self.worker_proc is not None:
                if self.worker_proc.poll() is not None:
                    console_logger.error(_format_worker_launch_failure_message(envs.FD_LOG_DIR))
                    return False
            time.sleep(1)
        num_gpu_blocks = self.get_profile_block_num_signal.value[0]
        self.cfg.cache_config.reset(num_gpu_blocks)
        self.engine.resource_manager.reset_cache_config(self.cfg.cache_config)
        if self.cfg.cache_config.enable_prefix_caching or self.cfg.scheduler_config.splitwise_role != "mixed":
            if not current_platform.is_intel_hpu() and not envs.ENABLE_V1_KVCACHE_MANAGER:
                device_ids = self.cfg.parallel_config.device_ids.split(",")
                self.cache_manager_processes = self.engine.start_cache_service(device_ids, self.ipc_signal_suffix)
        return True

    def check_health(self, time_interval_threashold=30):
        """
        Check the health of the model server by checking whether all workers are alive.
        """
        if self.engine.worker_healthy_live_signal.value[0]:
            elapsed_time = time.time() - self.engine.worker_healthy_live_signal.value[0]
            if elapsed_time > time_interval_threashold:
                return False, "Worker Service Not Healthy"

        return True, ""

    def launch_components(self):
        if self.cfg.scheduler_config.splitwise_role != "mixed":
            self.splitwise_receive_thread = threading.Thread(
                target=self.engine.split_connector.start_receiver, args=()
            )
            self.splitwise_receive_thread.daemon = True
            self.splitwise_receive_thread.start()

        role = self.cfg.scheduler_config.splitwise_role
        host_ip = self.cfg.host_ip
        if self.cfg.scheduler_config.name == "splitwise":
            self.engine.scheduler.start(role, host_ip, self.cfg.register_info)
        elif self.cfg.scheduler_config.name == "dp":
            self.engine.scheduler.start(
                self.cfg.node_rank * self.cfg.worker_num_per_node % self.cfg.worker_num_per_node,
            )

        if not envs.FD_ENABLE_MULTI_API_SERVER:
            if self.cfg.parallel_config.data_parallel_size > 1:
                self.launched_expert_service_signal.value[0] = 1
                self.dp_processed = []
                self.dp_engine_worker_queue_server = []
                for i in range(
                    1,
                    self.cfg.parallel_config.data_parallel_size // self.cfg.nnode,
                ):
                    if not envs.FD_ENGINE_TASK_QUEUE_WITH_SHM:
                        address = (
                            self.cfg.master_ip,
                            int(self.cfg.parallel_config.engine_worker_queue_port[i]),
                        )
                    else:
                        address = f"/dev/shm/fd_task_queue_{self.cfg.parallel_config.engine_worker_queue_port[i]}.sock"

                    llm_logger.info(f"dp start queue service {address}")
                    self.dp_engine_worker_queue_server.append(
                        EngineWorkerQueue(
                            address=address,
                            is_server=True,
                            num_client=self.cfg.parallel_config.tensor_parallel_size,
                            local_data_parallel_size=self.cfg.parallel_config.data_parallel_size,
                        )
                    )
                    ctx = multiprocessing.get_context("fork")
                    cfg = copy.deepcopy(self.cfg)
                    self.dp_processed.append(
                        ctx.Process(
                            target=start_data_parallel_service,
                            args=(
                                cfg,
                                i,
                                None,
                            ),
                        )
                    )
                    llm_logger.info(
                        f"Engine is initialized successfully with tensor_parallel_size {self.cfg.parallel_config.tensor_parallel_size}"
                        + f" for data parallel id {i}"
                    )
                    self.dp_processed[-1].start()

                for i in range(
                    1,
                    self.cfg.parallel_config.data_parallel_size // self.cfg.nnode,
                ):
                    while self.launched_expert_service_signal.value[i] == 0:
                        time.sleep(0.1)

    def check_worker_initialize_status(self):
        """
        Check the initialization status of workers via their stdout logging.
        """

        def detect_thread():
            for line in self.worker_proc.stdout:
                line = line.decode("utf-8", errors="ignore")
                if self.worker_init_status.get("finished", False):
                    break
                if match := re.search(
                    r"Loading (?:safetensors )?checkpoint shards:\s*(\d+)",
                    line,
                ):
                    self.worker_init_status["weight_loadding"] = int(match.group(1)) * 1.0 / 100
                elif (match := re.search(r"Start load layer (\d+)", line)) or (
                    match := re.search(r"set state for layer (\d+)", line)
                ):
                    progress = int(match.group(1)) * 1.0 / self.cfg.model_config.num_hidden_layers
                    self.worker_init_status["layer_loadding"] = progress
                    if self.worker_init_status["layer_loadding"] == self.cfg.model_config.num_hidden_layers - 1:
                        self.worker_init_status["finished"] = True

        self.checking_worker_status_thread = threading.Thread(target=detect_thread, daemon=True)
        self.checking_worker_status_thread.start()

        # Display weight loading progress
        with tqdm(total=100, desc="Loading Weights") as pbar:
            progress = 0
            while progress < 100:
                progress = int(self.worker_init_status.get("weight_loadding", 0) * 100)
                if self.worker_init_status.get("layer_loadding", 0) > 0 or self._worker_processes_ready():
                    progress = 100
                pbar.update(progress - pbar.n)
                pbar.refresh()
                time.sleep(0.5)
                if self.worker_proc.poll() is not None:
                    return False

        # Display layer loading progress
        with tqdm(total=100, desc="Loading Layers") as pbar:
            progress = 0
            while progress < 100:
                progress = int(self.worker_init_status.get("layer_loadding", 0) * 100)
                if self._worker_processes_ready():
                    progress = 100
                pbar.update(progress - pbar.n)
                pbar.refresh()
                time.sleep(0.5)
                if self.worker_proc.poll() is not None:
                    return False

        self.worker_init_status["finished"] = True
        try:
            self.checking_worker_status_thread.join(timeout=1)
        except Exception:
            pass
        return True