[Feature] block sparse attention (#3668)

* Support sparse attention

* fix bug

* code style

* fix moba attn get kv shape

* Fix A100 compilation

* codestyle

* code style

* code style

* code style

* fix conflict

* Add unit tests

* code style

* Add eblite load time

* fix bug

* for ci

* for ci

* for ci

* for ci

* Support MLP block size 128

* Add unit tests for small kernels

* Fix MLP unit test

* Move environment variables into the config

* fix rollout config

* Fix GPU memory usage

* add test server

* add test server

* Fix MLP: use full attention for the last layer
Authored by yangjianfengo1 on 2025-08-29 19:46:30 +08:00, committed via GitHub
parent ccd52b5596
commit 3754a9906d
31 changed files with 6553 additions and 10 deletions
@@ -20,6 +20,7 @@ from .block_multihead_attn_backend import BlockAttentionBackend
from .flash_attn_backend import FlashAttentionBackend
from .iluvatar_attn_backend import IluvatarAttnBackend
from .mla_attention_backend import MLAAttentionBackend
from .moba_attention_backend import MobaAttentionBackend
from .native_paddle_backend import PaddleNativeAttnBackend
from .xpu_attn_backend import XPUAttentionBackend
@@ -34,4 +35,5 @@ __all__ = [
"IluvatarAttnBackend",
"BlockAttentionBackend",
"Attention",
"MobaAttentionBackend",
]
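The diff above only registers `MobaAttentionBackend` alongside the other attention backends; the PR's actual kernels are not shown here. As a rough illustration of the MoBA-style block sparse attention idea (each query attends only to a few selected key/value blocks rather than the full sequence), here is a minimal NumPy sketch. The function name, block gating via mean-pooled keys, and all parameters are hypothetical, not FastDeploy's real API:

```python
import numpy as np

def moba_style_block_sparse_attention(q, k, v, block_size=4, top_k=2):
    """Hypothetical single-head sketch of MoBA-style block sparse attention.

    Keys/values are split into fixed-size blocks; each query attends only
    to the top_k blocks whose mean-pooled key vector scores highest
    against it, instead of attending over the full sequence.
    """
    seq_len, d = k.shape
    n_blocks = seq_len // block_size
    # One "gate" vector per block: mean-pool the keys inside the block.
    block_means = k[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(axis=1)
    out = np.zeros_like(q)
    for i, qi in enumerate(q):
        # Score blocks against this query and keep only the top_k.
        gate = block_means @ qi
        keep = np.argsort(gate)[-top_k:]
        idx = np.concatenate(
            [np.arange(b * block_size, (b + 1) * block_size) for b in keep]
        )
        # Standard scaled softmax attention, restricted to the kept blocks.
        scores = (k[idx] @ qi) / np.sqrt(d)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[i] = w @ v[idx]
    return out
```

This mirrors why the commit "use full attention for the last layer" matters: a sparse backend like this skips most key/value blocks, so layers that need global context can be routed to a dense backend instead.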