* enable trtllm_all_reduce fusion kernel in glm model
* fix conflict
* format update
* fix a bug
* modify test
* modify test
* support empty tensor and modify test
* fix test_linear config issues
* modify test name
* add edge test case
* modify format
* fix conflict
* modify default max token num in trtllm_allreduce_fusion
* add max token num branch for trtllm_allreduce_fusion
* fix format
* fix rmsnorm config issue
* modify 2025 to 2026
* use compat guard
* Lazily import flashinfer.comm and fix test config issue
* fix test issues
* add flashinfer cache dir cleanup mechanism
* fix some issues
* support mxfp4 in gpt-oss
* support mxfp4 in gpt-oss
* add scope for flashinfer
* remove torch code
* update envs.FD_MXFP4_BACKEND
* update process_weights_after_loading
* update env name
* support TP in gpt-oss, add e2e test
* add flashinfer-python-paddle in requirements
* fix import error
* add test
* add test
* add test
* add test
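The "Lazily import flashinfer.comm" change above can be sketched with a generic lazy-import helper: the module is only imported on first use and cached afterward, so environments without flashinfer can still import the surrounding code. This is a minimal illustration of the pattern, not the PR's actual implementation; the helper name and call site are hypothetical.

```python
import importlib

# Cache of already-imported modules, keyed by dotted module name.
_lazy_modules = {}

def lazy_import(name):
    """Import a module on first use and cache it; later calls reuse the cache.

    Deferring the import means a missing optional dependency (e.g. flashinfer)
    only fails at the call site that actually needs it, not at module load time.
    """
    if name not in _lazy_modules:
        _lazy_modules[name] = importlib.import_module(name)
    return _lazy_modules[name]

# Hypothetical call site: comm = lazy_import("flashinfer.comm")
```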