Directory                        | Latest commit                                                             | Date
-------------------------------- | ------------------------------------------------------------------------- | --------------------------
adagrad                          | CPUAdam fp16 and bf16 support (#5409)                                     | 2024-05-20 12:50:20 +00:00
adam                             | CPUAdam fp16 and bf16 support (#5409)                                     | 2024-05-20 12:50:20 +00:00
aio                              | AIO CPU Locked Tensor (#6592)                                             | 2024-10-09 21:07:31 +00:00
cpu                              | [CPU] Allow deepspeed.comm.inference_all_reduce in torch.compile graph (#5604) | 2024-07-15 22:24:11 +00:00
deepspeed4science/evoformer_attn | Update clang-format version from 16 to 18. (#5839)                        | 2024-08-06 09:14:21 -07:00
fp_quantizer                     | wrap include cuda_bf16.h with ifdef BF16_AVAILABLE (#6520)                | 2024-09-10 16:08:50 +00:00
gds                              | AIO CPU Locked Tensor (#6592)                                             | 2024-10-09 21:07:31 +00:00
includes                         | Update clang-format version from 16 to 18. (#5839)                        | 2024-08-06 09:14:21 -07:00
lamb                             | Switch from HIP_PLATFORM_HCC to HIP_PLATFORM_AMD (#4539)                  | 2023-10-19 21:01:48 +00:00
lion                             | CPUAdam fp16 and bf16 support (#5409)                                     | 2024-05-20 12:50:20 +00:00
quantization                     | Fixed the Windows build. (#5596)                                          | 2024-05-31 22:11:10 +00:00
random_ltd                       | Rocm warp size fix (#5402)                                                | 2024-05-17 20:35:58 +00:00
sparse_attention                 | Update DeepSpeed copyright license to Apache 2.0 (#3111)                  | 2023-03-30 17:14:38 -07:00
spatial                          | Switch from HIP_PLATFORM_HCC to HIP_PLATFORM_AMD (#4539)                  | 2023-10-19 21:01:48 +00:00
transformer                      | Rearrange inference OPS and stop using builder.load (#5490)               | 2024-10-09 01:22:28 +00:00
utils                            | Update DeepSpeed copyright license to Apache 2.0 (#3111)                  | 2023-03-30 17:14:38 -07:00
xpu                              | [XPU] Support DeepNVMe new code structure (#6532)                         | 2024-09-26 20:39:59 +00:00