Mirror of https://github.com/PaddlePaddle/FastDeploy.git (synced 2026-04-23 00:17:25 +08:00)
Support MXFP4 for GPT-OSS (#5435)
* support mxfp4 in gpt-oss
* support mxfp4 in gpt-oss
* add scope for flashinfer
* remove torch code
* update envs.FD_MXFP4_BACKEND
* update process_weights_after_loading
* update env name
* support tp in gpt-oss, add e2e test
* add flashinfer-python-paddle in requirements
* fix import error
* add test
* add test
* add test
* add test
@@ -54,6 +54,8 @@ environment_variables: dict[str, Callable[[], Any]] = {
     "FD_SAMPLING_CLASS": lambda: os.getenv("FD_SAMPLING_CLASS", "base"),
     # Set moe backend. "cutlass", "marlin" and "triton" can be set currently.
     "FD_MOE_BACKEND": lambda: os.getenv("FD_MOE_BACKEND", "cutlass"),
+    # Set mxfp4 backend. "flashinfer" can be set currently.
+    "FD_MOE_MXFP4_BACKEND": lambda: os.getenv("FD_MOE_MXFP4_BACKEND", "flashinfer"),
     # Whether to use Machete for wint4 dense gemm.
     "FD_USE_MACHETE": lambda: os.getenv("FD_USE_MACHETE", "1"),
     # Set whether to disable recompute the request when the KV cache is full.
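The diff adds a new entry to the `environment_variables` dict, where each key maps to a lambda that reads the environment lazily, so lookups reflect the current `os.environ` rather than a value captured at import time. A minimal, self-contained sketch of that pattern (the `get_env` helper and the standalone module are assumptions for illustration, not FastDeploy's actual code):

```python
import os
from typing import Any, Callable

# Same lazy-lambda pattern as the diffed dict; only two entries shown here.
environment_variables: dict[str, Callable[[], Any]] = {
    # Set moe backend. "cutlass", "marlin" and "triton" can be set currently.
    "FD_MOE_BACKEND": lambda: os.getenv("FD_MOE_BACKEND", "cutlass"),
    # Set mxfp4 backend. "flashinfer" can be set currently.
    "FD_MOE_MXFP4_BACKEND": lambda: os.getenv("FD_MOE_MXFP4_BACKEND", "flashinfer"),
}


def get_env(name: str) -> Any:
    """Resolve a config value at call time (hypothetical helper).

    Calling the stored lambda means later changes to os.environ
    are picked up, unlike reading os.getenv once at import.
    """
    return environment_variables[name]()


if __name__ == "__main__":
    print(get_env("FD_MOE_MXFP4_BACKEND"))
```

With no override set, `get_env("FD_MOE_MXFP4_BACKEND")` returns the default `"flashinfer"`; exporting `FD_MOE_MXFP4_BACKEND` before (or even after) import changes the result, which is the point of deferring the `os.getenv` call inside a lambda.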