DeepEP
DeepEP: an efficient expert-parallel communication library
...Because an MoE layer routes each input token to a small set of experts, and under expert parallelism those experts live on different GPUs, the all-to-all communication needed to move tokens between devices can become a bottleneck. DeepEP addresses this with optimized GPU kernels and efficient logic for dispatching tokens to experts and combining their outputs. The library also supports low-precision transfers (such as FP8) to reduce memory and bandwidth usage during communication. DeepEP is aimed at large-scale training and inference systems that use expert parallelism to scale model capacity without replicating the entire network on every device.
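To make the dispatch/combine pattern concrete, here is a minimal plain-PyTorch sketch of the communication an expert-parallel MoE layer performs; it uses `torch.distributed.all_to_all_single` rather than DeepEP's optimized kernels, and the function and parameter names (`dispatch_and_combine`, `expert_rank`) are illustrative assumptions, not DeepEP's API.

```python
# Illustrative dispatch/combine pattern for expert parallelism (NOT DeepEP's API).
# Run with: torchrun --nproc_per_node=<num_gpus> this_file.py
import torch
import torch.distributed as dist


def dispatch_and_combine(tokens: torch.Tensor, expert_rank: torch.Tensor,
                         group=None) -> torch.Tensor:
    """Send each token to the rank hosting its expert, apply a placeholder
    expert, and return the outputs to the originating rank."""
    world_size = dist.get_world_size(group)

    # Sort tokens by destination rank so each rank's slice is contiguous.
    order = torch.argsort(expert_rank)
    tokens_sorted = tokens[order]

    # Exchange per-rank token counts (the "dispatch layout").
    send_counts = torch.bincount(expert_rank, minlength=world_size)
    recv_counts = torch.empty_like(send_counts)
    dist.all_to_all_single(recv_counts, send_counts, group=group)
    send_splits, recv_splits = send_counts.tolist(), recv_counts.tolist()

    # Dispatch: move token activations to the ranks that host their experts.
    received = tokens.new_empty((sum(recv_splits), tokens.shape[1]))
    dist.all_to_all_single(received, tokens_sorted,
                           output_split_sizes=recv_splits,
                           input_split_sizes=send_splits, group=group)

    # Placeholder "expert" computation on the receiving rank.
    expert_out = received * 2.0

    # Combine: send expert outputs back to each token's original rank.
    combined_sorted = tokens.new_empty((sum(send_splits), tokens.shape[1]))
    dist.all_to_all_single(combined_sorted, expert_out,
                           output_split_sizes=send_splits,
                           input_split_sizes=recv_splits, group=group)

    # Undo the earlier sort so outputs line up with the input tokens.
    combined = torch.empty_like(combined_sorted)
    combined[order] = combined_sorted
    return combined


if __name__ == "__main__":
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    device = torch.cuda.current_device()
    toks = torch.randn(16, 8, device=device)
    dest = torch.randint(0, dist.get_world_size(), (16,), device=device)
    out = dispatch_and_combine(toks, dest)
    dist.destroy_process_group()
```

In a real MoE layer the destination rank would come from the router's expert assignment (for example, expert id divided by experts per rank), and the token payloads could be cast to a lower-precision format such as FP8 before the dispatch step to reduce bandwidth, which is the kind of transfer DeepEP's kernels are built to accelerate.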