deepspeed by davila7/claude-code-templates
npx skills add https://github.com/davila7/claude-code-templates --skill deepspeed
Pattern 3: MoS, standing for Mixture-of-Students, is a staged distillation-based technique for compressing large MoE models. MoS further reduces the model size by 12.5%, leading to up to a 3.7x model size reduction when combined with PR-MoE over the standard MoE. The reduced model size helps lower latency and cost during inference. To train an MoS model, one needs to specify a few additional parameters. We will use PR-MoE as an example:
--mos
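The "staged" part of the distillation above can be sketched in plain Python: a hypothetical loss schedule that blends the knowledge-distillation loss with the language-modeling loss early in training and then drops the KD term. The step threshold and mixing weight below are illustrative assumptions, not DeepSpeed defaults.

```python
def mos_loss(step, lm_loss, kd_loss, kd_stop_step=400_000, alpha=0.5):
    """Staged distillation: blend KD and LM losses early, LM-only afterwards.

    kd_stop_step and alpha are illustrative values, not DeepSpeed defaults.
    """
    if step < kd_stop_step:
        return (1 - alpha) * lm_loss + alpha * kd_loss
    return lm_loss

print(mos_loss(100_000, lm_loss=2.0, kd_loss=4.0))  # → 3.0, KD still active
print(mos_loss(500_000, lm_loss=2.0, kd_loss=4.0))  # → 2.0, pure LM loss
```

Stopping distillation partway through and finishing on the plain language-modeling loss is what distinguishes this staged approach from distilling for the whole run.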
Pattern 4: Learning Rate Range Test Contents Learning Rate Range Test (LRRT) Prerequisites LRRT Parameters Required Model Configuration Changes PyTorch Example: Tuning for Large Batch Sizes

This tutorial shows how to perform a learning rate range test in PyTorch.

Learning Rate Range Test (LRRT): The learning rate range test (LRRT) is a method for discovering the largest learning rate values that can be used to train a model without divergence. Data scientists are often interested in this information because large learning rates lead to faster model convergence than small learning rates. Moreover, large learning rates are crucial in learning rate schedules such as CLR and 1Cycle, which are used to train effectively with large batch sizes. DeepSpeed provides LRRT for model training in the PyTorch framework.

Prerequisites: To use DeepSpeed's LRRT, you must satisfy two conditions: integrate DeepSpeed into your training script using the Getting Started guide, and add the parameters that configure LRRT to your model's parameters. The LRRT parameters are defined below.

LRRT Parameters: LRRT works by linearly increasing the learning rate by a predefined amount, at predefined intervals. Thus, LRRT is a form of learning rate schedule, because it defines how and when the learning rate should change during model training. To configure LRRT, you will need to set these parameters:

lr_range_test_min_lr: the initial learning rate for training (float)
lr_range_test_step_size: the interval for scaling up the learning rate, defined in training steps (integer)
lr_range_test_step_rate: the scaling factor for increasing the learning rate (float)
lr_range_test_staircase: if true, the learning rate changes every lr_range_test_step_size training steps; otherwise the learning rate changes at every training step (boolean)

Required Model Configuration Changes: We will illustrate the required model configuration changes with an example LRRT schedule that: starts training with an initial learning rate of 0.0001, uses a scaling rate of 5, uses a scaling interval of 200 training steps, and scales the learning rate at every training step, i.e., does not use staircase.

PyTorch: For PyTorch models, LRRT is implemented as a learning rate scheduler, a feature available in PyTorch versions 1.0.1 and newer. Thus, you can add a "scheduler" entry of type "LRRangeTest" to your model configuration as follows:

"scheduler": {
  "type": "LRRangeTest",
  "params": {
    "lr_range_test_min_lr": 0.0001,
    "lr_range_test_step_size": 200,
    "lr_range_test_step_rate": 5,
    "lr_range_test_staircase": false
  }
}

Example: Tuning for Large Batch Sizes: We illustrate how LRRT can benefit data scientists with an experience from scaling an internal production model from one GPU (batch size 512) to four GPUs (batch size 2048) so that it converges efficiently. Our goal was to train the model with the larger batch size to match the performance of the smaller batch size using the same amount of data samples. The challenge here is the well-known problem of slow convergence with large batch size training. Our approach was to use a 1Cycle schedule in DeepSpeed to tackle this problem, and to use LRRT to configure that schedule.

In the plots below, we illustrate using LRRT to discover the maximum learning rates for effective training with batch size 2048. The plot on the left shows the impact of large learning rates on validation loss over the first 9000 batches of training. The plot on the right shows the learning rate values during the same period of training. Using grid search, we discovered that the best fixed learning rate for batch size 2048 is 0.0002. The blue line (lr=0.0002) represents training with this fixed learning rate. We compare two LRRT schedules against this fixed learning rate. The orange (lr_range_test_step_rate=5) and gray (lr_range_test_step_rate=50) lines represent training with similar LRRT schedules that differ only in their lr_range_test_step_rate values. Although the LRRT schedules start from the same base learning rate, the gray line's learning rate grows about 10 times faster than the orange line's. Moreover, within the presented data points, the learning rates of the LRRT schedules have already surpassed that of the blue line. We subsequently refer to the gray line as the "fast growing" and the orange line as the "slow growing" LRRT schedule.

We make the following observations from this small example. Larger learning rates clearly benefit model performance, up to some point. The fast growing LRRT schedule achieves a validation loss of 0.46 after 3000 batches, a score the fixed learning rate does not reach even after 9000 batches. The slow growing LRRT does not match that score until after 6000 batches, but it maintains a growing performance advantage over the fixed learning rate. There is an upper bound on learning rate values that are useful for training the model. The fast growing LRRT schedule hits this boundary quickly and diverges, and the slow growing LRRT later diverges for the same reason. LRRT helped us discover these boundaries quickly, using less than 2% of the training data. These boundaries are useful information for constructing learning rate schedules.

These observations from LRRT helped us configure the learning rate boundaries and cycle span of a 1Cycle schedule that solves the problem, as shown below.

"OneCycle": {
  "cycle_min_lr": 0.002,
  "cycle_max_lr": 0.005,
  "cycle_first_step_size": 2000,
  "cycle_second_step_size": 2000,
  ...
}

In our experience, these are the four most critical parameters of a 1Cycle schedule. We chose the slower LRRT schedule (lr_range_test_step_rate=5) to set cycle_min_lr because it achieved the best loss, while the faster schedule diverged fairly quickly. We set cycle_max_lr to 0.005 even though the plot shows performance still improving at slightly higher learning rates, because we observed that waiting until the maximum learning rate can leave the model at the point of divergence with no chance of recovery. Since it takes 8000 batches for the learning rate to reach 0.005, we set cycle_first_step_size and cycle_second_step_size to 2000, the number of steps it takes four GPUs to process 8000 batches. We hope this brief example sparks your imagination for applying LRRT to your own unique tuning challenges. Updated: November 5, 2025
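The growth rule described above can be written down concretely. Below is a minimal sketch, assuming the linear form lr = min_lr * (1 + step_rate * interval), where the interval is either continuous (step / step_size) or staircase (step // step_size); check DeepSpeed's scheduler source for the authoritative expression.

```python
def lrrt_lr(step, min_lr=0.0001, step_size=200, step_rate=5.0, staircase=False):
    """Sketch of an LRRT schedule: the learning rate grows linearly from min_lr.

    Assumed form lr = min_lr * (1 + step_rate * interval); see
    deepspeed.runtime.lr_schedules for the actual implementation.
    """
    interval = (step // step_size) if staircase else (step / step_size)
    return min_lr * (1 + step_rate * interval)

print(lrrt_lr(0))                    # starts at min_lr
print(lrrt_lr(200))                  # one interval later: 6x min_lr
print(lrrt_lr(199, staircase=True))  # staircase holds lr constant within an interval
```

With the example configuration above (min_lr=0.0001, step_size=200, step_rate=5), the learning rate reaches six times the initial value after the first 200 steps, which is the rapid linear ramp the plots in this section describe.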
Pattern 5: Training Overview and Features Contents Overview Distributed, Effective, and Efficient Training with Ease Speed Memory Efficiency Scalability Communication Efficiency Data Efficiency Supporting Long Sequence Length Fast Convergence for Effectiveness Good Usability Features Distributed Training with Mixed Precision Mixed Precision Training Single-GPU, Multi-GPU, and Multi-Node Training Pipeline Parallelism Model Parallelism Support for Custom Model Parallelism Integration with Megatron-LM The Zero Redundancy Optimizer Optimizer State and Gradient Partitioning Activation Partitioning Constant Buffer Optimization (CBO) Contiguous Memory Optimization (CMO) ZeRO-Offload Additional Memory and Bandwidth Optimizations Smart Gradient Accumulation Communication Overlapping Training Features Simplified Training API Activation Checkpointing API Gradient Clipping Automatic Loss Scaling with Mixed Precision Training Optimizers 1-bit Adam, 0/1 Adam and 1-bit LAMB optimizers with up to 26x less communication Fused Adam optimizer and arbitrary torch.optim.Optimizer CPU-Adam: high-performance vectorized implementation of Adam Memory-bandwidth-optimized FP16 optimizer Large batch training with LAMB optimizer Memory-efficient training with ZeRO optimizer Training-agnostic checkpointing Advanced Parameter Search Learning Rate Range Test 1Cycle Learning Rate Schedule Simplified Data Loader Data Efficiency Curriculum Learning Performance Analysis and Debugging Wall-Clock Breakdown Timing Activation Checkpoint Functions Flops Profiler Autotuning Monitor Communication Logging Sparse Attention Mixture of Experts (MoE)

Overview: Training advanced deep learning models is challenging. Beyond model design, model scientists also need to set up state-of-the-art training techniques such as distributed training, mixed precision, gradient accumulation, and checkpointing. Yet still, scientists may not achieve the desired system performance and convergence rate. Large model sizes are even more challenging: a large model easily runs out of memory with pure data parallelism, and it is difficult to use model parallelism. DeepSpeed addresses these challenges to accelerate model development and training.

Distributed, Effective, and Efficient Training with Ease: The DeepSpeed API is a lightweight wrapper on PyTorch. This means that you can use everything you love in PyTorch without learning a new platform. In addition, DeepSpeed manages all of the boilerplate state-of-the-art training techniques, such as distributed training, mixed precision, gradient accumulation, and checkpointing, so that you can focus on model development. Most importantly, you can leverage DeepSpeed's distinctive efficiency and effectiveness benefits to boost speed and scale with just a few lines of code changes to your PyTorch models.

Speed: DeepSpeed achieves high performance and fast convergence through a combination of efficiency optimizations on compute/communication/memory/IO and effectiveness optimizations on advanced hyperparameter tuning and optimizers. For example, DeepSpeed trains BERT-large to parity in 44 minutes using 1024 V100 GPUs (64 DGX-2 boxes) and in 2.4 hours using 256 GPUs (16 DGX-2 boxes).

BERT-large training times:

| Devices | Source | Training time |
| --- | --- | --- |
| 1024 V100 GPUs | DeepSpeed | 44 min |
| 256 V100 GPUs | DeepSpeed | 2.4 hr |
| 64 V100 GPUs | DeepSpeed | 8.68 hr |
| 16 V100 GPUs | DeepSpeed | 33.22 hr |

BERT code and tutorials will be available soon. DeepSpeed trains GPT-2 (1.5 billion parameters) 3.75x faster than the state-of-the-art NVIDIA Megatron on Azure GPUs. Read more: GPT tutorial.

Memory Efficiency: DeepSpeed provides memory-efficient data parallelism and enables training models without model parallelism. For example, DeepSpeed can train models with up to 13 billion parameters on a single GPU. In comparison, existing frameworks (e.g., PyTorch's Distributed Data Parallel) run out of memory with 1.4-billion-parameter models. DeepSpeed reduces the training memory footprint through a novel solution called the Zero Redundancy Optimizer (ZeRO). Unlike basic data parallelism, where memory states are replicated across data-parallel processes, ZeRO partitions model states and gradients to save significant memory. Furthermore, it also reduces activation memory and fragmented memory. The current implementation (ZeRO-2) reduces memory by up to 8x relative to the state of the art. You can read more about ZeRO in our paper and in our blog posts on ZeRO-1 and ZeRO-2. With this impressive memory reduction, early adopters of DeepSpeed have already produced a language model (LM) with over 17 billion parameters, called Turing-NLG, establishing a new SOTA in the LM category. For model scientists with limited GPU resources, ZeRO-Offload leverages both CPU and GPU memory to train large models. Using a machine with a single GPU, our users can run models of up to 13 billion parameters without running out of memory, 10x bigger than the existing approaches, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the window for many deep learning practitioners to explore bigger and better models.

Scalability: DeepSpeed supports efficient data parallelism, model parallelism, pipeline parallelism, and their combinations, which we call 3D parallelism. DeepSpeed's 3D parallelism provides the system support to run models with trillions of parameters; read more in our press release and tutorial. DeepSpeed can run large models more efficiently, up to 10x faster for models of various sizes spanning 1.5B to hundreds of billions of parameters. More specifically, the data parallelism powered by ZeRO is complementary to and can be combined with different types of model parallelism. It allows DeepSpeed to fit models using lower degrees of model parallelism and larger batch sizes, offering significant performance gains compared to using model parallelism alone. Read more: the ZeRO paper and the GPT tutorial. The figure depicts the system-throughput improvement of DeepSpeed (combining ZeRO-powered data parallelism with the model parallelism of NVIDIA Megatron-LM) over Megatron-LM alone.

Communication Efficiency: DeepSpeed's pipeline parallelism reduces communication during distributed training
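The ZeRO features described above are enabled through DeepSpeed's JSON configuration file. The following is a minimal sketch for ZeRO-2 with optimizer offload; the field names follow DeepSpeed's config schema, but the values are illustrative and should be tuned per workload:

```json
{
  "train_batch_size": 32,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" }
  }
}
```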
Comprehensive assistance with deepspeed development, generated from official documentation.
This skill should be triggered when:
Pattern 1: DeepNVMe Contents Requirements Creating DeepNVMe Handles Using DeepNVMe Handles Blocking File Write Non-Blocking File Write Parallel File Write Pinned Tensors Putting it together Acknowledgements Appendix Advanced Handle Creation Performance Tuning DeepNVMe APIs General I/O APIs GDS-specific APIs Handle Settings APIs

This tutorial will show how to use DeepNVMe for data transfers between persistent storage and tensors residing in host or device memory. DeepNVMe improves the performance and efficiency of I/O operations in Deep Learning applications through powerful optimizations built on Non-Volatile Memory Express (NVMe) Solid State Drives (SSDs), Linux Asynchronous I/O (libaio), and NVIDIA Magnum IO™ GPUDirect® Storage (GDS).

Requirements: Ensure your environment is properly configured to use DeepNVMe. First, you need to install DeepSpeed version >= 0.15.0. Next, ensure that the DeepNVMe operators are available in the DeepSpeed installation. The async_io operator is required for any DeepNVMe functionality, while the gds operator is required only for GDS functionality. You can confirm the availability of each operator by inspecting the output of ds_report and checking that the compatible status is [OKAY]. Below is a snippet of ds_report output confirming the availability of both the async_io and gds operators. If the async_io operator is unavailable, you will need to install the appropriate libaio library binaries for your Linux flavor. For example, Ubuntu users will need to run apt install libaio-dev. In general, you should carefully inspect ds_report output for helpful tips such as the following:

[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
To enable the gds operator, you will need to install NVIDIA GDS by consulting the appropriate guide for bare-metal systems or Azure VMs (coming soon).

Creating DeepNVMe Handles: DeepNVMe functionality can be accessed through two abstractions: aio_handle and gds_handle. The aio_handle is usable on both host and device tensors, while gds_handle works only on CUDA tensors, but is more efficient. The first step in using DeepNVMe is to create the desired handle. aio_handle requires the async_io operator, while gds_handle requires both the async_io and gds operators. The following snippets illustrate aio_handle and gds_handle creation respectively.

### Create aio_handle
from deepspeed.ops.op_builder import AsyncIOBuilder
aio_handle = AsyncIOBuilder().load().aio_handle()

### Create gds_handle
from deepspeed.ops.op_builder import GDSBuilder
gds_handle = GDSBuilder().load().gds_handle()

For simplicity, the above examples illustrate handle creation using default parameters. We expect handles created with default parameters to provide good performance in most environments. However, see Advanced Handle Creation below for tuning options.

Using DeepNVMe Handles: aio_handle and gds_handle provide identical APIs for storing tensors to files or loading tensors from files. A common feature of these APIs is that they take a tensor and a file path as arguments for the desired I/O operation. For best performance, pinned device or host tensors should be used for I/O operations (see Pinned Tensors below for details). For brevity, this tutorial will use aio_handle for illustration, but keep in mind that gds_handle works similarly. You can see the available APIs in a Python shell via tab completion on an aio_handle object. This is illustrated using tab completion of h..

>python
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from deepspeed.ops.op_builder import AsyncIOBuilder
>>> h = AsyncIOBuilder().load().aio_handle()
>>> h.
h.async_pread( h.free_cpu_locked_tensor( h.get_overlap_events( h.get_single_submit( h.new_cpu_locked_tensor( h.pwrite( h.sync_pread( h.wait( h.async_pwrite( h.get_block_size( h.get_queue_depth( h.get_intra_op_parallelism( h.pread( h.read( h.sync_pwrite( h.write(

The APIs of interest for performing I/O operations are those named with pread and pwrite substrings. For brevity, we will focus on the file write APIs, namely sync_pwrite, async_pwrite, and pwrite. We will discuss only sync_pwrite and async_pwrite below because they are specializations of pwrite.

Blocking File Write: sync_pwrite provides the standard blocking semantics of Python file writes. The example below illustrates using sync_pwrite to store a 1GB CUDA tensor to a local NVMe file.

>>> import os
>>> os.path.isfile('/local_nvme/test_1GB.pt')
False
>>> import torch
>>> t = torch.empty(1024**3, dtype=torch.uint8).cuda()
>>> from deepspeed.ops.op_builder import AsyncIOBuilder
>>> h = AsyncIOBuilder().load().aio_handle()
>>> h.sync_pwrite(t, '/local_nvme/test_1GB.pt')
>>> os.path.isfile('/local_nvme/test_1GB.pt')
True
>>> os.path.getsize('/local_nvme/test_1GB.pt')
1073741824

Non-Blocking File Write: An important DeepNVMe optimization is the non-blocking I/O semantics, which enables Python threads to overlap computations with I/O operations. async_pwrite provides the non-blocking semantics for file writes. The Python thread can later use wait() to synchronize with the I/O operation. async_pwrite can also be used to submit multiple back-to-back non-blocking I/O operations, which can then later be blocked on using a single wait(). The example below illustrates using async_pwrite to store a 1GB CUDA tensor to a local NVMe file.
>>> import os
>>> os.path.isfile('/local_nvme/test_1GB.pt')
False
>>> import torch
>>> t = torch.empty(1024**3, dtype=torch.uint8).cuda()
>>> from deepspeed.ops.op_builder import AsyncIOBuilder
>>> h = AsyncIOBuilder().load().aio_handle()
>>> h.async_pwrite(t, '/local_nvme/test_1GB.pt')
>>> h.wait()
1
>>> os.path.isfile('/local_nvme/test_1GB.pt')
True
>>> os.path.getsize('/local_nvme/test_1GB.pt')
1073741824

Warning for non-blocking I/O operations: To avoid data races and corruption, .wait() must be carefully used to serialize the writing of source tensors and the reading of destination tensors. For example, the following update of t during a non-blocking file write is unsafe and could corrupt /local_nvme/test_1GB.pt.

>>> t = torch.empty(1024**3, dtype=torch.uint8).cuda()
>>> from deepspeed.ops.op_builder import AsyncIOBuilder
>>> h = AsyncIOBuilder().load().aio_handle()
>>> h.async_pwrite(t, '/local_nvme/test_1GB.pt')
>>> t += 1 # <--- Data race; avoid by preceding with h.wait()

Similar safety problems apply to reading the destination tensor of a non-blocking file read without .wait() synchronization.

Parallel File Write: An important DeepNVMe optimization is the ability to parallelize individual I/O operations. This optimization is enabled by specifying the desired parallelism degree when constructing a DeepNVMe handle. Subsequent I/O operations with that handle are automatically parallelized over the requested number of host or device threads, as appropriate. I/O parallelism is composable with either the blocking or non-blocking I/O APIs. The example below illustrates 4-way parallelism of a file write using async_pwrite. Note the use of the intra_op_parallelism argument to specify the desired parallelism degree at handle creation.
>>> import os
>>> os.path.isfile('/local_nvme/test_1GB.pt')
False
>>> import torch
>>> t = torch.empty(1024**3, dtype=torch.uint8).cuda()
>>> from deepspeed.ops.op_builder import AsyncIOBuilder
>>> h = AsyncIOBuilder().load().aio_handle(intra_op_parallelism=4)
>>> h.async_pwrite(t, '/local_nvme/test_1GB.pt')
>>> h.wait()
1
>>> os.path.isfile('/local_nvme/test_1GB.pt')
True
>>> os.path.getsize('/local_nvme/test_1GB.pt')
1073741824

Pinned Tensors: A key part of DeepNVMe optimizations is using direct memory access (DMA) for I/O operations, which requires that the host or device tensor be pinned. To pin host tensors, you can use mechanisms provided by PyTorch or DeepSpeed Accelerators. The following example illustrates writing a pinned CPU tensor to a local NVMe file.

>>> import os
>>> os.path.isfile('/local_nvme/test_1GB.pt')
False
>>> import torch
>>> t = torch.empty(1024**3, dtype=torch.uint8).pin_memory()
>>> from deepspeed.ops.op_builder import AsyncIOBuilder
>>> h = AsyncIOBuilder().load().aio_handle()
>>> h.async_pwrite(t, '/local_nvme/test_1GB.pt')
>>> h.wait()
1
>>> os.path.isfile('/local_nvme/test_1GB.pt')
True
>>> os.path.getsize('/local_nvme/test_1GB.pt')
1073741824

On the other hand, gds_handle provides new_pinned_device_tensor() and pin_device_tensor() functions for pinning CUDA tensors. The following example illustrates writing a pinned CUDA tensor to a local NVMe file.

>>> import os
>>> os.path.isfile('/local_nvme/test_1GB.pt')
False
>>> import torch
>>> t = torch.empty(1024**3, dtype=torch.uint8).cuda()
>>> from deepspeed.ops.op_builder import GDSBuilder
>>> h = GDSBuilder().load().gds_handle()
>>> h.pin_device_tensor(t)
>>> h.async_pwrite(t, '/local_nvme/test_1GB.pt')
>>> h.wait()
1
>>> os.path.isfile('/local_nvme/test_1GB.pt')
True
>>> os.path.getsize('/local_nvme/test_1GB.pt')
1073741824
>>> h.unpin_device_tensor(t)

Putting it together: We hope that the above material helps you to get started with DeepNVMe.
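The non-blocking and parallel write patterns above can be mimicked in plain Python, which may help build intuition. The sketch below is not DeepSpeed code: it fakes intra_op_parallelism by splitting a buffer across threads that call os.pwrite at disjoint offsets, and fakes async_pwrite/wait() with a future that is resolved later.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def parallel_pwrite(path, buf, parallelism=4):
    """Write buf to path by splitting it into chunks written concurrently,
    loosely mimicking what intra_op_parallelism does inside DeepNVMe."""
    chunk = (len(buf) + parallelism - 1) // parallelism
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        with ThreadPoolExecutor(parallelism) as pool:
            futures = [pool.submit(os.pwrite, fd, buf[off:off + chunk], off)
                       for off in range(0, len(buf), chunk)]
            return sum(f.result() for f in futures)  # total bytes written
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "test.bin")
data = bytes(range(256)) * 4096          # 1 MiB stand-in for the 1 GB tensor
with ThreadPoolExecutor(1) as submitter:
    future = submitter.submit(parallel_pwrite, path, data)  # like async_pwrite
    # ... overlapped computation could run here; mutating `data` would race ...
    written = future.result()                               # like h.wait()
assert written == len(data) == os.path.getsize(path)
```

Note how the same data-race caveat applies here: mutating data between submit and result() is unsafe, which is exactly why the warning above insists on wait() before touching a source tensor.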
You can also use the following links to see DeepNVMe usage in real-world Deep Learning applications: the parameter swapper in ZeRO-Inference and ZeRO-Infinity, the optimizer swapper in ZeRO-Infinity, the gradient swapper in ZeRO-Infinity, and simple file read and write operations.

Acknowledgements: This tutorial has been significantly improved by feedback from Guanhua Wang, Masahiro Tanaka, and Stas Bekman.

Appendix

Advanced Handle Creation: Achieving peak I/O performance with DeepNVMe requires careful configuration of handle creation. In particular, the parameters of the aio_handle and gds_handle constructors are performance-critical because they determine how efficiently DeepNVMe interacts with the underlying storage subsystem (i.e., libaio, GDS, PCIe, and SSD). For convenience, we make it possible to create handles using default parameter values, which will provide decent performance in most scenarios. However, squeezing out every bit of available performance in your environment will likely require tuning the constructor parameters, namely block_size, queue_depth, single_submit, overlap_events, and intra_op_parallelism. The aio_handle constructor parameters and default values are illustrated below:

>>> from deepspeed.ops.op_builder import AsyncIOBuilder
>>> help(AsyncIOBuilder().load().aio_handle())
Help on aio_handle in module async_io object:

class aio_handle(pybind11_builtins.pybind11_object)
 |  Method resolution order:
 |      aio_handle
 |      pybind11_builtins.pybind11_object
 |      builtins.object
 |
 |  Methods defined here:
 |
 |  __init__(...)
 |      __init__(self: async_io.aio_handle, block_size: int = 1048576, queue_depth: int = 128, single_submit: bool = False, overlap_events: bool = False, intra_op_parallelism: int = 1) -> None
 |
 |      AIO handle constructor

Performance Tuning: As discussed earlier, achieving peak DeepNVMe performance for a target workload or environment requires using optimally configured aio_handle or gds_handle handles.
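Conceptually, finding such a configuration is a sweep over the constructor-parameter space, scoring each candidate by measured bandwidth. Below is a minimal plain-Python sketch of that idea with a stand-in scoring function; a real tuner times actual reads and writes through a handle built from each candidate.

```python
from itertools import product

# Candidate values for the performance-critical constructor parameters.
space = {
    "block_size": [128 * 1024, 1024 * 1024],
    "queue_depth": [32, 128],
    "single_submit": [False, True],
    "overlap_events": [False, True],
    "intra_op_parallelism": [1, 8],
}

def measure_gbps(cfg):
    # Stand-in score so the sketch runs anywhere; replace with a timed
    # read/write through aio_handle(**cfg) on the target NVMe mount.
    return cfg["queue_depth"] / 128 + cfg["intra_op_parallelism"] / 8

candidates = (dict(zip(space, values)) for values in product(*space.values()))
best = max(candidates, key=measure_gbps)
print(best["queue_depth"], best["intra_op_parallelism"])
```

With the stand-in score, the sweep simply favors the largest queue depth and parallelism; with real I/O timings, the winner depends on the SSD and PCIe topology, which is why the tuned result below differs from the constructor defaults.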
For configuration convenience, we provide a utility called ds_nvme_tune to automate the discovery of optimal DeepNVMe configurations. ds_nvme_tune automatically explores a user-specified or default configuration space and recommends the option that provides the best read and write performance. Below is an example usage of ds_nvme_tune to tune aio_handle data transfers between GPU memory and a local NVMe SSD mounted on /local_nvme. This example used the default configuration space of ds_nvme_tune for tuning.

$ ds_nvme_tune --nvme_dir /local_nvme --gpu
Running DeepNVMe performance tuning on ['/local_nvme/']
Best performance (GB/sec): read = 3.69, write = 3.18
{
  "aio": {
    "single_submit": "false",
    "overlap_events": "true",
    "intra_op_parallelism": 8,
    "queue_depth": 32,
    "block_size": 1048576
  }
}

The above tuning was executed on a Lambda workstation equipped with two NVIDIA A6000-48GB GPUs, 252GB of DRAM, and a CS3040 NVMe 2TB SSD with peak read and write speeds of 5.6 GB/s and 4.3 GB/s respectively. The tuning required about four and a half minutes. Based on the results, one can expect to achieve read and write transfer speeds of 3.69 GB/sec and 3.18 GB/sec respectively by using an aio_handle configured as below.

>>> from deepspeed.ops.op_builder import AsyncIOBuilder
>>> h = AsyncIOBuilder().load().aio_handle(block_size=1048576, queue_depth=32, single_submit=False, overlap_events=True, intra_op_parallelism=8)

The full command line options of ds_nvme_tune can be obtained via the normal -h or --help.

usage: ds_nvme_tune [-h] --nvme_dir NVME_DIR [NVME_DIR ...] [--sweep_config SWEEP_CONFIG] [--no_read] [--no_write] [--io_size IO_SIZE] [--gpu] [--gds] [--flush_page_cache] [--log_dir LOG_DIR] [--loops LOOPS] [--verbose]

options:
  -h, --help            show this help message and exit
  --nvme_dir NVME_DIR [NVME_DIR ...]
                        Directory in which to perform I/O tests. A writeable directory on a NVMe device.
  --sweep_config SWEEP_CONFIG
                        Performance sweep configuration json file.
  --no_read             Disable read performance measurements.
  --no_write            Disable write performance measurements.
  --io_size IO_SIZE     Number of I/O bytes to read/write for performance measurements.
  --gpu                 Test tensor transfers between GPU device and NVME device.
  --gds                 Run the sweep over NVIDIA GPUDirectStorage operator.
  --flush_page_cache    Page cache will not be flushed and reported read speeds may be higher than actual. Requires sudo access.
  --log_dir LOG_DIR     Output directory for performance log files. Default is ./_aio_bench_logs
  --loops LOOPS         Count of operation repetitions
  --verbose             Print debugging information.

DeepNVMe APIs: For convenience, we provide a listing and brief descriptions of the DeepNVMe APIs.

General I/O APIs: The following functions are used for I/O operations with both aio_handle and gds_handle.

| Function | Description |
| --- | --- |
| async_pread | Non-blocking file read into tensor |
| sync_pread | Blocking file read into tensor |
| pread | File read with blocking and non-blocking options |
| async_pwrite | Non-blocking file write from tensor |
| sync_pwrite | Blocking file write from tensor |
| pwrite | File write with blocking and non-blocking options |
| wait | Wait for non-blocking I/O operations to complete |

GDS-specific APIs: The following functions are available only for gds_handle.

| Function | Description |
| --- | --- |
| new_pinned_device_tensor | Allocate and pin a device tensor |
| free_pinned_device_tensor | Unpin and free a device tensor |
| pin_device_tensor | Pin a device tensor |
| unpin_device_tensor | Unpin a device tensor |

Handle Settings APIs: The following APIs can be used to probe handle configuration.

| Function | Description |
| --- | --- |
| get_queue_depth | Return queue depth setting |
| get_single_submit | Return whether single_submit is enabled |
| get_intra_op_parallelism | Return I/O parallelism degree |
| get_block_size | Return I/O block size setting |
| get_overlap_events | Return whether overlap_events is enabled |

Updated: November 5, 2025
Pattern 2: Mixture of Experts for NLG models Contents 1. Installation 2. Training NLG+MoE models 2.1. Changes to the model 2.2. Pre-training the Standard MoE model 2.3. Pre-training the PR-MoE model 2.4. Training MoS with reduced model size

In this tutorial, we introduce how to apply DeepSpeed Mixture of Experts (MoE) to NLG models, which reduces the training cost by 5 times and reduces the MoE model size by 3 times (details in our Blog). We use GPT-3-like models in the Megatron-LM framework as the example. Before reading this tutorial, we recommend first reading the tutorials about Mixture of Experts and Megatron-LM GPT pre-training.

1. Installation: You will need to install DeepSpeed v0.6.0 or higher to use the MoE feature. The MoE examples for NLG models are in the Megatron-DeepSpeed repo under the MoE folder.

2. Training NLG+MoE models

2.1. Changes to the model: To apply MoE to a GPT-style model, we made several changes in the Megatron framework, mostly in megatron/model/, where we add the MoE layers into the model.

2.2. Pre-training the Standard MoE model: We provide example training scripts under examples_deepspeed/MoE, which we used to perform the experiments in our Blog. There are a few new hyperparameters for the standard MoE model:

--num-experts: the number of experts per MoE layer. In our experiments we set it to 128. A larger number of experts tends to provide better convergence, but with diminishing returns.
--moe-expert-parallel-size: degree of MoE expert parallelism. In other words, there will be num-experts/moe-expert-parallel-size experts on each GPU. Thus --moe-expert-parallel-size should be no more than both the number of GPUs and --num-experts.
--moe-loss-coeff: scaling coefficient for adding the MoE loss to the model loss. In our experiments we find that 0.01 is a good setting.
--moe-train-capacity-factor, --moe-eval-capacity-factor, --moe-min-capacity: these configs determine how many tokens a single expert can handle.
Larger numbers could lead to better convergence, but would also lead to slower training since the load would be more unbalanced on different experts. --disable-moe-token-dropping: this will completely remove the limitation of how many tokens can a single expert handle. For the same reason as above, we only recommend using this during inference/eval. 2.3. Pre-training the PR-MoE model PR-MoE is a new designed MoE models, standing for Pyramid-Residual-MoE, which improves the parameter efficiency up to 3x as compared to standard MoE. Please see our Blog for more details. We provide example training scripts under examples_deepspeed/MoE. There are a few different hyperparameters for PR-MoE model compared to standard MoE: --num-experts: Instead of providing a single number, to enable Pyramid-MoE, you need to provide a list, whose length is the same as the number of MoE layers. We suggest to use more experts in the latter stage (close to output) of the model. --mlp-type: chosen from [standard, residual]. When it is residual, Residual-MoE is enabled. In addition to the new hyperparameters above for standard MoE and PR-MoE, for NLG+MoE models we found that it’s helpful to lower the learning rate and increase the learning rate decay duration compared to the base dense model. Details of our tuning can be found in the example training scripts. Regarding training data, we are not able to release our internal data but any public data for Megatron-LM pre-training can be directly used to train MoE models (with the caveat that it might not provide the exact same model quality as in our experiments). For example, we evaluated The Pile dataset (pile.eleuther.ai, github.com/EleutherAI/the-pile) for both dense and MoE models. Table 1 below shows that this public data provides similar evaluation results as our internal data. 
| Model | LAMBADA (completion prediction) | PIQA (commonsense reasoning) | BoolQ (reading comprehension) | RACE-h (reading comprehension) | TriviaQA (question answering) | WebQs (question answering) |
|---|---|---|---|---|---|---|
| Dense NLG: 350M, internal data | 0.5203 | 0.6931 | 0.5364 | 0.3177 | 0.0321 | 0.0157 |
| Dense NLG: 350M, public Pile | 0.5106 | 0.6589 | 0.5933 | 0.3196 | 0.0257 | 0.0064 |
| Standard MoE NLG: 350M+MoE-128, internal data | 0.6270 | 0.7459 | 0.6046 | 0.3560 | 0.1658 | 0.0517 |
| Standard MoE NLG: 350M+MoE-128, public Pile | 0.6128 | 0.7323 | 0.6040 | 0.3349 | 0.1111 | 0.0335 |
| PR-MoE NLG: 350M+MoE-128, internal data | 0.6365 | 0.7399 | 0.5988 | 0.3569 | 0.1630 | 0.0473 |
| PR-MoE + MoS NLG: 350M+MoE-128, internal data | 0.6346 | 0.7334 | 0.5807 | 0.3483 | 0.1369 | 0.0522 |

Table 1: Zero-shot evaluation results (last six columns) for different dense and MoE NLG models. All zero-shot evaluation results use the accuracy metric.
2.4. Training MoS with reduced model size
MoS, standing for Mixture-of-Students, is a staged distillation-based technique for compressing large MoE models. MoS further reduces the model size by 12.5%, leading to up to 3.7x model size reduction when combined with PR-MoE over the standard MoE. The reduced model size helps reduce latency and cost during inference. To train an MoS model, one needs to specify a few additional parameters. We will use PR-MoE as an example:
--mos: this enables Mixture-of-Students via knowledge distillation.
--load-teacher: this specifies the path to the teacher model checkpoint. It is a mandatory argument for using MoS, and the teacher model checkpoint can be obtained by training either a standard MoE or the PR-MoE.
--num-layers-teacher, --hidden-size-teacher, --num-experts-teacher: in addition to the teacher model checkpoint path, we also need to specify the teacher model architecture, such as its number of layers, hidden dimension size, and number of experts per MoE layer.
In the case of PR-MoE, we also need to provide a list of experts for the teacher model, where we remove a few expert layers from the teacher model. In addition to the new parameters above, we observe that using the teacher PR-MoE during the entire training process may adversely impact the final student model accuracy. In our experiments, we use a staged distillation method: we stop distillation early in the training process (e.g., after 400K steps) and optimize only against the standard language modeling loss for the rest of training. We provide example training scripts under examples_deepspeed/MoE. Details of our parameter settings can be found in the example training scripts. The performance results of MoS can be seen in our blog post and our paper.
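The expert-parallelism sizing rule described above (num-experts/moe-expert-parallel-size experts per GPU, with the constraint that the expert-parallel degree not exceed the GPU count or the expert count) can be sketched as a quick sanity check. The helper below is hypothetical, not part of Megatron-DeepSpeed:

```python
def experts_per_gpu(num_experts: int, expert_parallel_size: int, num_gpus: int) -> int:
    # --moe-expert-parallel-size must not exceed the number of GPUs or --num-experts.
    assert expert_parallel_size <= num_gpus, "expert parallelism exceeds GPU count"
    assert expert_parallel_size <= num_experts, "expert parallelism exceeds expert count"
    assert num_experts % expert_parallel_size == 0, "experts must divide evenly"
    return num_experts // expert_parallel_size

# 128 experts sharded over 64-way expert parallelism on 64 GPUs: 2 experts per GPU.
per_gpu = experts_per_gpu(num_experts=128, expert_parallel_size=64, num_gpus=64)
```

Running such a check before launching a job catches the common misconfiguration where the expert-parallel degree is larger than the world size.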
Pattern 4: Learning Rate Range Test
Contents: Learning Rate Range Test (LRRT); Prerequisites; LRRT Parameters; Required Model Configuration Changes; PyTorch; Example: Tuning for Large Batch Sizes
This tutorial shows how to use DeepSpeed to perform Learning Rate Range Tests in PyTorch.
Learning Rate Range Test (LRRT)
The learning rate range test (LRRT) is a method for discovering the largest learning rate values that can be used to train a model without divergence. Data scientists are often interested in this information because large learning rates lead to faster model convergence than small learning rates. Moreover, large learning rates are crucial in learning rate schedules such as CLR and 1Cycle, which are used to train effectively with large batch sizes. DeepSpeed provides LRRT for model training in PyTorch frameworks.
Prerequisites
To use DeepSpeed's LRRT, you must satisfy the following two conditions:
Integrate DeepSpeed into your training script using the Getting Started guide.
Add the parameters to configure LRRT to the parameters of your model. The LRRT parameters are defined below.
LRRT Parameters
LRRT works by linearly increasing the learning rate by a predefined amount, at predefined intervals. Thus, LRRT is a form of learning rate schedule because it defines how and when the learning rate should change during model training.
To configure LRRT, you will need to set these parameters:
lr_range_test_min_lr: the initial learning rate for training (float)
lr_range_test_step_size: the interval for scaling up the learning rate, defined in training steps (integer)
lr_range_test_step_rate: the scaling factor for increasing the learning rate (float)
lr_range_test_staircase: if true, the learning rate is changed every lr_range_test_step_size training steps; otherwise the learning rate is changed at every training step (boolean)
Required Model Configuration Changes
We will illustrate the required model configuration changes with an example LRRT schedule that:
Starts training with an initial learning rate of 0.0001
Uses a scaling rate of 5
Uses a scaling interval of 200 training steps
Scales the learning rate at every training step, i.e., does not use staircase
PyTorch
For PyTorch models, LRRT is implemented as a learning rate scheduler, a feature that is available in PyTorch versions 1.0.1 and newer. Thus, you can add a "scheduler" entry of type "LRRangeTest" to your model configuration as illustrated below:
"scheduler": {
  "type": "LRRangeTest",
  "params": {
    "lr_range_test_min_lr": 0.0001,
    "lr_range_test_step_size": 200,
    "lr_range_test_step_rate": 5,
    "lr_range_test_staircase": false
  }
}
Example: Tuning for Large Batch Sizes
We illustrate how LRRT can benefit data scientists with a snippet of our experience tuning an internal production model to converge efficiently at larger batch sizes, as we scaled from one GPU (batch size 512) to four GPUs (batch size 2048). Our goal was to train the model with the larger batch size to match the performance of the smaller batch size using the same amount of data samples. The challenge here is the well-known problem of slow convergence of large-batch training. Our approach was to use a 1Cycle schedule in DeepSpeed to tackle this problem, and we used LRRT to configure the schedule.
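As a sketch of how this schedule evolves, the function below assumes LRRT grows the learning rate as lr = min_lr * (1 + step_rate * interval), where interval is step/step_size in continuous mode and floor(step/step_size) with staircase enabled. This is a plausible reading of the four parameters above, not a guaranteed reproduction of DeepSpeed's scheduler; consult the DeepSpeed scheduler source for the exact rule.

```python
import math

def lrrt_lr(step, min_lr=0.0001, step_size=200, step_rate=5, staircase=False):
    # Number of scaling intervals elapsed at this training step.
    interval = math.floor(step / step_size) if staircase else step / step_size
    return min_lr * (1 + step_rate * interval)

start_lr = lrrt_lr(0)                      # initial learning rate: min_lr
after_200 = lrrt_lr(200)                   # one interval later: min_lr * (1 + 5)
stair_250 = lrrt_lr(250, staircase=True)   # staircase holds this value until step 400
```

With the example configuration (min_lr=0.0001, step_rate=5, step_size=200), the rate grows by 5x of the base rate per interval, which matches the "linearly increasing by a predefined amount at predefined intervals" description.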
In the plots below, we illustrate using LRRT to discover the maximum learning rates for effective training with batch size 2048. The plot on the left shows the impact of large learning rates on validation loss over the first 9000 batches of training. The plot on the right shows the learning rate values during the same period of training. Using grid search, we discovered that the best fixed learning rate for batch size 2048 is 0.0002. The blue line (lr=0.0002) represents training with this fixed learning rate, and we compare the two LRRT schedules against it. The orange (lr_range_test_step_rate=5) and gray (lr_range_test_step_rate=50) lines represent training with similar LRRT schedules that differ only in their lr_range_test_step_rate values. Although the LRRT schedules start from the same base learning rate, the gray line's learning rate grows about 10 times faster than the orange line's. Also, the learning rates of the LRRT schedules grow larger than that of the blue line within the presented data points. We subsequently refer to the gray line as the "fast growing" and the orange line as the "slow growing" LRRT schedule. We make the following observations from this small example. Larger learning rates clearly benefit model performance, up to some point. The fast growing LRRT schedule achieves a validation loss of 0.46 after 3000 batches, which the fixed learning rate does not achieve in 9000 batches. The slow growing LRRT does not match that score until after 6000 batches; however, it maintains an increasing performance advantage over the fixed learning rate. There is an upper bound on the learning rate values that are useful for training the model. The fast growing LRRT schedule hits this boundary quickly and diverges, and the slow growing LRRT will later diverge for the same reason. LRRT helped us discover these boundaries quickly, using less than 2% of the training data.
These boundaries are useful information for constructing learning rate schedules. These observations from LRRT helped us configure the learning rate boundaries and the cycle span for a 1Cycle schedule that solves the problem, as shown below.
"OneCycle": {
  "cycle_min_lr": 0.002,
  "cycle_max_lr": 0.005,
  "cycle_first_step_size": 2000,
  "cycle_second_step_size": 2000,
  ...
}
In our experience these are the four most critical parameters of 1Cycle schedules. We chose the slower LRRT schedule (lr_range_test_step_rate=5) to set cycle_min_lr because it achieves the best loss and the faster schedule diverges fairly quickly. We set cycle_max_lr to 0.005 even though the plot shows that performance was still improving at a slightly higher learning rate, because we observed that waiting until the maximum learning rate could leave the model at the point of divergence and impossible to recover. Since it takes 8000 batches for the learning rate to reach 0.005, we set cycle_first_step_size and cycle_second_step_size to 2000, which is the number of steps it takes four GPUs to process 8000 batches. We hope this brief example sparks your imagination on using LRRT for your own unique tuning challenges.
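The "OneCycle" configuration above can be approximated by a simple triangular trajectory: a linear ramp from cycle_min_lr to cycle_max_lr over cycle_first_step_size steps, then a linear ramp back down over cycle_second_step_size steps. This sketch is an illustrative simplification (DeepSpeed's OneCycle schedule has additional parameters, such as decay phases, not modeled here):

```python
def one_cycle_lr(step, cycle_min_lr=0.002, cycle_max_lr=0.005,
                 cycle_first_step_size=2000, cycle_second_step_size=2000):
    # Linear ramp up over the first phase, then linear ramp down over the second.
    span = cycle_max_lr - cycle_min_lr
    if step <= cycle_first_step_size:
        return cycle_min_lr + span * step / cycle_first_step_size
    step -= cycle_first_step_size
    if step <= cycle_second_step_size:
        return cycle_max_lr - span * step / cycle_second_step_size
    return cycle_min_lr

lr_start = one_cycle_lr(0)      # 0.002 at the start of the cycle
lr_peak = one_cycle_lr(2000)    # 0.005 at the end of the ramp-up
lr_end = one_cycle_lr(4000)     # back down to 0.002 after the ramp-down
```

Note how the boundaries discovered by LRRT map directly onto cycle_min_lr and cycle_max_lr, while the step sizes set how long the schedule spends climbing toward, and retreating from, the divergence boundary.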
Pattern 5: Training Overview and Features Contents Overview Distributed, Effective, and Efficient Training with Ease Speed Memory efficiency Scalability Communication efficiency Data efficiency Supporting long sequence length Fast convergence for effectiveness Good Usability Features Distributed Training with Mixed Precision Mixed Precision Training Single-GPU, Multi-GPU, and Multi-Node Training Pipeline Parallelism Model Parallelism Support for Custom Model Parallelism Integration with Megatron-LM The Zero Redundancy Optimizer Optimizer State and Gradient Partitioning Activation Partitioning Constant Buffer Optimization (CBO) Contiguous Memory Optimization (CMO) ZeRO-Offload Additional Memory and Bandwidth Optimizations Smart Gradient Accumulation Communication Overlapping Training Features Simplified training API Activation Checkpointing API Gradient Clipping Automatic loss scaling with mixed precision Training Optimizers 1-bit Adam, 0/1 Adam and 1-bit LAMB optimizers with up to 26x less communication Fused Adam optimizer and arbitrary torch.optim.Optimizer CPU-Adam: High-Performance vectorized implementation of Adam Memory bandwidth optimized FP16 Optimizer Large Batch Training with LAMB Optimizer Memory-Efficient Training with ZeRO Optimizer Training Agnostic Checkpointing Advanced parameter search Learning Rate Range Test 1Cycle Learning Rate Schedule Simplified Data Loader Data Efficiency Curriculum Learning Performance Analysis and Debugging Wall Clock Breakdown Timing Activation Checkpoint Functions Flops Profiler Autotuning Monitor Communication Logging Sparse Attention Mixture of Experts (MoE) Overview Training advanced deep learning models is challenging. Beyond model design, model scientists also need to set up the state-of-the-art training techniques such as distributed training, mixed precision, gradient accumulation, and checkpointing. Yet still, scientists may not achieve the desired system performance and convergence rate. 
Large model sizes are even more challenging: a large model easily runs out of memory with pure data parallelism, and it is difficult to use model parallelism. DeepSpeed addresses these challenges to accelerate model development and training.
Distributed, Effective, and Efficient Training with Ease
The DeepSpeed API is a lightweight wrapper on PyTorch. This means that you can use everything you love in PyTorch without learning a new platform. In addition, DeepSpeed manages all of the boilerplate state-of-the-art training techniques, such as distributed training, mixed precision, gradient accumulation, and checkpoints, so that you can focus on your model development. Most importantly, you can leverage the distinctive efficiency and effectiveness benefits of DeepSpeed to boost speed and scale with just a few lines of code changes to your PyTorch models.
Speed
DeepSpeed achieves high performance and fast convergence through a combination of efficiency optimizations on compute/communication/memory/IO and effectiveness optimizations on advanced hyperparameter tuning and optimizers. For example, DeepSpeed trains BERT-large to parity in 44 minutes using 1024 V100 GPUs (64 DGX-2 boxes) and in 2.4 hours using 256 GPUs (16 DGX-2 boxes).

BERT-large Training Times
| Devices | Source | Training Time |
|---|---|---|
| 1024 V100 GPUs | DeepSpeed | 44 min |
| 256 V100 GPUs | DeepSpeed | 2.4 hr |
| 64 V100 GPUs | DeepSpeed | 8.68 hr |
| 16 V100 GPUs | DeepSpeed | 33.22 hr |

BERT code and tutorials will be available soon. DeepSpeed trains GPT2 (1.5 billion parameters) 3.75x faster than the state of the art, NVIDIA Megatron, on Azure GPUs. Read more: GPT tutorial.
Memory efficiency
DeepSpeed provides memory-efficient data parallelism and enables training models without model parallelism. For example, DeepSpeed can train models with up to 13 billion parameters on a single GPU. In comparison, existing frameworks (e.g., PyTorch's Distributed Data Parallel) run out of memory with 1.4 billion parameter models.
DeepSpeed reduces the training memory footprint through a novel solution called the Zero Redundancy Optimizer (ZeRO). Unlike basic data parallelism, where memory states are replicated across data-parallel processes, ZeRO partitions model states and gradients to save significant memory. Furthermore, it also reduces activation memory and fragmented memory. The current implementation (ZeRO-2) reduces memory by up to 8x relative to the state of the art. You can read more about ZeRO in our paper, and in our blog posts related to ZeRO-1 and ZeRO-2. With this impressive memory reduction, early adopters of DeepSpeed have already produced a language model (LM) with over 17B parameters called Turing-NLG, establishing a new SOTA in the LM category. For model scientists with limited GPU resources, ZeRO-Offload leverages both CPU and GPU memory for training large models. Using a machine with a single GPU, our users can run models of up to 13 billion parameters without running out of memory, 10x bigger than the existing approaches, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the window for many deep learning practitioners to explore bigger and better models.
Scalability
DeepSpeed supports efficient data parallelism, model parallelism, pipeline parallelism, and their combinations, which we call 3D parallelism. 3D parallelism of DeepSpeed provides system support to run models with trillions of parameters; read more in our press release and tutorial. DeepSpeed can run large models more efficiently, up to 10x faster, for model sizes spanning 1.5B to hundreds of billions of parameters. More specifically, the data parallelism powered by ZeRO is complementary and can be combined with different types of model parallelism. It allows DeepSpeed to fit models using a lower degree of model parallelism and a higher batch size, offering significant performance gains compared to using model parallelism alone.
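The "up to 8x" reduction described above can be reproduced with rough mixed-precision-Adam accounting. This is an illustrative model, not DeepSpeed's exact bookkeeping: assume roughly 2 bytes of fp16 weights, 2 bytes of fp16 gradients, and 12 bytes of fp32 optimizer states (master weights, momentum, variance) per parameter, with ZeRO-2 partitioning gradients and optimizer states across N data-parallel processes.

```python
def bytes_per_param_dp():
    # Plain data parallelism replicates everything on every GPU:
    # fp16 params (2) + fp16 grads (2) + Adam states (12).
    return 2 + 2 + 12

def bytes_per_param_zero2(n):
    # ZeRO-2 keeps full fp16 params but partitions gradients and
    # optimizer states across n data-parallel processes.
    return 2 + 2 / n + 12 / n

n = 64
reduction = bytes_per_param_dp() / bytes_per_param_zero2(n)  # approaches 8x as n grows
```

With 64 data-parallel processes the per-parameter footprint drops from 16 bytes to about 2.2 bytes, and in the limit of large N the ratio tends to 16/2 = 8, matching the stated figure.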
Read more: ZeRO paper, and GPT tutorial. The figure depicts system throughput improvements of DeepSpeed (combining ZeRO-powered data parallelism with model parallelism of NVIDIA Megatron-LM) over using Megatron-LM alone.
Communication efficiency
Pipeline parallelism of DeepSpeed reduces communication volume during distributed training, which allows users to train multi-billion-parameter models 2-7x faster on clusters with limited network bandwidth. 1-bit Adam, 0/1 Adam and 1-bit LAMB reduce communication volume by up to 26x while achieving similar convergence efficiency to Adam, allowing for scaling to different types of GPU clusters and networks. See the 1-bit Adam blog post, 1-bit Adam tutorial, 0/1 Adam tutorial, and 1-bit LAMB tutorial.
Data efficiency
The DeepSpeed Data Efficiency Library provides efficient data sampling via curriculum learning and efficient data routing via random layerwise token dropping. The composed solution enables up to 2x data and 2x time saving during GPT-3/BERT pretraining and GPT/ViT finetuning, or can further improve model quality under the same data/time. See more in the tutorial.
Supporting long sequence length
DeepSpeed offers sparse attention kernels, an instrumental technology to support long sequences of model inputs, whether for text, image, or sound. Compared with classic dense Transformers, it powers an order-of-magnitude longer input sequence and obtains up to 6x faster execution with comparable accuracy. It also outperforms state-of-the-art sparse implementations with 1.5-3x faster execution. Furthermore, our sparse kernels support efficient execution of flexible sparse formats and empower users to innovate on their custom sparse structures. Read more here.
Fast convergence for effectiveness
DeepSpeed supports advanced hyperparameter tuning and large batch size optimizers such as LAMB. These improve the effectiveness of model training and reduce the number of samples required to converge to the desired accuracy. Read more: Tuning tutorial.
Good Usability
Only a few lines of code changes are needed to enable a PyTorch model to use DeepSpeed and ZeRO. Compared to current model parallelism libraries, DeepSpeed does not require a code redesign or model refactoring. It also does not put limitations on model dimensions (such as number of attention heads, hidden sizes, and others), batch size, or any other training parameters. For models of up to 13 billion parameters, you can use ZeRO-powered data parallelism conveniently without requiring model parallelism, while in contrast, standard data parallelism will run out of memory for models with more than 1.4 billion parameters. In addition, DeepSpeed conveniently supports flexible combination of ZeRO-powered data parallelism with custom model parallelisms, such as the tensor slicing of NVIDIA's Megatron-LM.
Features
Below we provide a brief feature list; see our detailed feature overview for descriptions and usage.
Distributed Training with Mixed Precision: 16-bit mixed precision; Single-GPU/Multi-GPU/Multi-Node
Model Parallelism: Support for Custom Model Parallelism; Integration with Megatron-LM
Pipeline Parallelism: 3D Parallelism
The Zero Redundancy Optimizer: Optimizer State and Gradient Partitioning; Activation Partitioning; Constant Buffer Optimization; Contiguous Memory Optimization
ZeRO-Offload: Leverage both CPU/GPU memory for model training; Support 10B model training on a single GPU
Ultra-fast dense transformer kernels
Sparse attention: Memory- and compute-efficient sparse kernels; Support for 10x longer sequences than dense; Flexible support for different sparse structures
1-bit Adam, 0/1 Adam and 1-bit LAMB: Custom communication collectives; Up to 26x communication volume saving
Additional Memory and Bandwidth Optimizations: Smart Gradient Accumulation; Communication/Computation Overlap
Training Features: Simplified training API; Gradient Clipping; Automatic loss scaling with mixed precision
Training Optimizers: Fused Adam optimizer and arbitrary torch.optim.Optimizer; Memory bandwidth optimized FP16 Optimizer; Large Batch Training with LAMB Optimizer; Memory-efficient Training with ZeRO Optimizer; CPU-Adam
Training Agnostic Checkpointing
Advanced Parameter Search: Learning Rate Range Test; 1Cycle Learning Rate Schedule
Simplified Data Loader
Data Efficiency: Efficient data sampling via curriculum learning and efficient data routing via random layerwise token dropping; Up to 2x data and 2x time saving during GPT-3/BERT pretraining and GPT/ViT finetuning, or further improved model quality under the same data/time
Curriculum Learning: A curriculum learning-based data pipeline that presents easier or simpler examples earlier during training; Stable and 3.3x faster GPT-2 pre-training with 8x/4x larger batch size/learning rate while maintaining token-wise convergence speed; Complementary to many other DeepSpeed features. Note that the Data Efficiency Library above provides more general curriculum learning support; this legacy curriculum learning feature is still supported, but we recommend using the Data Efficiency Library.
Progressive Layer Dropping: Efficient and robust compressed training; Up to 2.5x convergence speedup for pre-training
Performance Analysis and Debugging
Mixture of Experts (MoE)
Distributed Training with Mixed Precision
Mixed Precision Training
Enable 16-bit (FP16) training by adding the fp16 section to the deepspeed_config JSON:
"fp16": {
  "enabled": true,
  "loss_scale": 0,
  "loss_scale_window": 1000,
  "hysteresis": 2,
  "consecutive_hysteresis": false,
  "min_loss_scale": 1
}
Single-GPU, Multi-GPU, and Multi-Node Training
Easily switch between single-GPU, single-node multi-GPU, or multi-node multi-GPU execution by specifying resources with a hostfile.
deepspeed --hostfile=<hostfile> \
  <client_entry.py> \
  --deepspeed --deepspeed_config ds_config.json
The script <client_entry.py> will execute on the resources specified in <hostfile>.
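For reference, a hostfile lists the reachable machines and the number of GPU slots on each; per the DeepSpeed launcher documentation, each line takes the form `hostname slots=N`. The hostnames below are placeholders; for example, two nodes with four GPUs each might look like:

```
worker-1 slots=4
worker-2 slots=4
```

Omitting --hostfile falls back to running on the local machine.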
Pipeline Parallelism
DeepSpeed provides pipeline parallelism for memory- and communication-efficient training. DeepSpeed supports a hybrid combination of data, model, and pipeline parallelism and has scaled to over one trillion parameters using 3D parallelism. Pipeline parallelism can also improve communication efficiency and has accelerated training by up to 7x on low-bandwidth clusters.
Model Parallelism
Support for Custom Model Parallelism
DeepSpeed supports all forms of model parallelism, including tensor slicing based approaches such as Megatron-LM. It does so by only requiring the model parallelism framework to provide a model parallelism unit (mpu) that implements a few bookkeeping functionalities: mpu.get_model_parallel_rank(), mpu.get_model_parallel_group(), mpu.get_model_parallel_world_size(), mpu.get_data_parallel_rank(), mpu.get_data_parallel_group(), mpu.get_data_parallel_world_size().
Integration with Megatron-LM
DeepSpeed is fully compatible with Megatron. Please see the Megatron-LM tutorial for details.
The Zero Redundancy Optimizer
The Zero Redundancy Optimizer (ZeRO) is at the heart of DeepSpeed and enables large model training at a scale that is simply not possible with model parallelism alone. When enabled, ZeRO allows training models with over 13 billion parameters without any model parallelism, and up to 200 billion parameter models with model parallelism on current generation hardware. For more details see the ZeRO paper and the GPT tutorial on integration with DeepSpeed.
Optimizer State and Gradient Partitioning
Optimizer State and Gradient Partitioning in ZeRO reduces the memory consumption of the model states (optimizer states, gradients and parameters) by 8x compared to standard data parallelism, by partitioning these states across data-parallel processes instead of replicating them.
Activation Partitioning
Activation Partitioning is a memory optimization in ZeRO that can reduce the memory consumed by activations during model parallel training (MP).
In MP, certain activations may be required by all MP processes, resulting in replication of activations across MP GPUs. Activation Partitioning stores these activations in a partitioned state once they have been used for computation in the forward propagation. These activations are allgathered right before they are needed again during the backward propagation. By storing activations in a partitioned state, ZeRO in DeepSpeed can reduce the activation memory footprint in proportion to the MP degree.
Constant Buffer Optimization (CBO)
CBO enables high network and memory throughput while restricting memory usage to a constant size. For memory- and network-bound operations such as normalization or allreduce collectives, performance depends on the size of the operand. Simply fusing all operands into a single large operand can achieve great throughput at the expense of unnecessary memory overhead. CBO in DeepSpeed fuses smaller operands into a buffer of approximately a pre-defined size, large enough to achieve great performance without the unnecessary memory overhead.
Contiguous Memory Optimization (CMO)
CMO reduces memory fragmentation during training, preventing out-of-memory errors due to lack of contiguous memory. Memory fragmentation is a result of interleaving between short-lived and long-lived memory objects. During the forward propagation, activation checkpoints are long lived but the recomputed activations are short lived. Similarly, during the backward computation, the activation gradients are short lived while the parameter gradients are long lived. CMO transfers activation checkpoints and parameter gradients to contiguous buffers, preventing memory fragmentation.
ZeRO-Offload
ZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently using minimal GPU resources, by exploiting computational and memory resources on both GPUs and their host CPUs.
It allows training up to 13-billion-parameter models on a single NVIDIA V100 GPU, 10x larger than the state of the art, while retaining a high training throughput of over 30 teraflops per GPU. For more details see the ZeRO-Offload release blog and the tutorial on integration with DeepSpeed.
Additional Memory and Bandwidth Optimizations
Smart Gradient Accumulation
Gradient accumulation allows running larger batch sizes with limited memory by breaking an effective batch into several sequential micro-batches and averaging the parameter gradients across these micro-batches. Furthermore, instead of averaging the gradients of each micro-batch across all GPUs, the gradients are averaged locally during each step of the sequence, and a single allreduce is done at the end of the sequence to produce the averaged gradients for the effective batch across all GPUs. This strategy significantly reduces communication compared to averaging globally for each micro-batch, especially when the number of micro-batches per effective batch is large.
Communication Overlapping
During back propagation, DeepSpeed can overlap the communication required for averaging parameter gradients that have already been computed with the ongoing gradient computation. This computation-communication overlap allows DeepSpeed to achieve higher throughput even at modest batch sizes.
Training Features
Simplified training API
The DeepSpeed core API consists of just a handful of methods: initialization (initialize); training (backward and step); argument parsing (add_config_arguments); checkpointing (load_checkpoint and save_checkpoint). DeepSpeed supports most of the features described in this document via these APIs, along with a deepspeed_config JSON file for enabling and disabling the features. Please see the core API doc for more details.
Activation Checkpointing API
DeepSpeed's Activation Checkpointing API supports activation checkpoint partitioning, CPU checkpointing, and contiguous memory optimizations, while also allowing layerwise profiling. Please see the core API doc for more details.
Gradient Clipping
{ "gradient_clipping": 1.0 }
DeepSpeed handles gradient clipping under the hood based on the max gradient norm specified by the user. Please see the core API doc for more details.
Automatic loss scaling with mixed precision
DeepSpeed internally handles loss scaling for mixed precision training. The parameters for loss scaling can be specified in the deepspeed_config JSON file. Please see the core API doc for more details.
Training Optimizers
1-bit Adam, 0/1 Adam and 1-bit LAMB optimizers with up to 26x less communication
DeepSpeed has three communication-efficient optimizers: 1-bit Adam, 0/1 Adam, and 1-bit LAMB. They offer the same convergence as Adam/LAMB and incur up to 26x less communication, which enables up to 6.6x higher throughput for BERT-Large pretraining and up to 2.7x higher throughput for SQuAD fine-tuning on bandwidth-limited clusters. For more details on usage and performance, please refer to the 1-bit Adam tutorial, 1-bit Adam blog post, 0/1 Adam tutorial and 1-bit LAMB tutorial. For technical details, please refer to the 1-bit Adam paper, 0/1 Adam paper and 1-bit LAMB paper.
Fused Adam optimizer and arbitrary torch.optim.Optimizer
With DeepSpeed, the user can choose to use a high-performance implementation of Adam from NVIDIA, or any training optimizer that extends the torch.optim.Optimizer class.
CPU-Adam: High-Performance vectorized implementation of Adam
We introduce an efficient implementation of the Adam optimizer on CPU that improves the parameter-update performance by nearly an order of magnitude. We use AVX SIMD instructions on the Intel x86 architecture for the CPU-Adam implementation. We support both the AVX-512 and AVX-2 instruction sets.
DeepSpeed uses AVX-2 by default, which can be switched to AVX-512 by setting the build flag DS_BUILD_AVX512 to 1 when installing DeepSpeed. Using AVX-512, we observe 5.1x to 6.5x speedups for model sizes between 1 and 10 billion parameters, relative to torch-adam.
Memory bandwidth optimized FP16 Optimizer
Mixed precision training is handled by the DeepSpeed FP16 Optimizer. This optimizer not only handles FP16 training but is also highly efficient. The performance of the weight update is primarily dominated by memory bandwidth, and the achieved memory bandwidth depends on the size of the input operands. The FP16 Optimizer is designed to maximize the achievable memory bandwidth by merging all the parameters of the model into a single large buffer and applying the weight updates in a single kernel.
Large Batch Training with LAMB Optimizer
DeepSpeed makes it easy to train with large batch sizes by enabling the LAMB Optimizer. For more details on LAMB, see the LAMB paper.
Memory-Efficient Training with ZeRO Optimizer
DeepSpeed can train models with up to 13 billion parameters without model parallelism, and models with up to 200 billion parameters with 16-way model parallelism. This leap in model size is possible through the memory efficiency achieved via the ZeRO Optimizer. For more details see the ZeRO paper.
Training Agnostic Checkpointing
DeepSpeed can simplify checkpointing for you regardless of whether you are using data parallel training, model parallel training, mixed-precision training, a mix of these three, or the ZeRO optimizer to enable larger model sizes. Please see the Getting Started guide and the core API doc for more details.
Advanced parameter search
DeepSpeed supports multiple Learning Rate Schedules to enable faster convergence for large batch scaling.
Learning Rate Range Test
Please refer to the Learning Rate Range Test tutorial.
1Cycle Learning Rate Schedule

Please refer to the 1Cycle Learning Rate Schedule tutorial.

Simplified Data Loader

DeepSpeed abstracts data parallelism and model parallelism away from the user when it comes to data loading. Users simply provide a PyTorch dataset, and the DeepSpeed data loader automatically handles batch creation appropriately.

Data Efficiency

Please refer to the Data Efficiency tutorial.

Curriculum Learning

Please refer to the Curriculum Learning tutorial. Note that the Data Efficiency library above provides more general curriculum learning support. This legacy curriculum learning feature is still supported, but we recommend using the Data Efficiency library.

Performance Analysis and Debugging

DeepSpeed provides a set of tools for performance analysis and debugging.

Wall Clock Breakdown

DeepSpeed provides a detailed breakdown of the time spent in different parts of training. This can be enabled by setting the following in the deepspeed_config file:

{ "wall_clock_breakdown": true }

Timing Activation Checkpoint Functions

When activation checkpointing is enabled, profiling the forward and backward time of each checkpoint function can be enabled in the deepspeed_config file:

{ "activation_checkpointing": { "profile": true } }

Flops Profiler

The DeepSpeed flops profiler measures the time, flops, and parameters of a PyTorch model and shows which modules or layers are the bottleneck. When used with the DeepSpeed runtime, the flops profiler can be configured in the deepspeed_config file as follows:

{ "flops_profiler": { "enabled": true, "profile_step": 1, "module_depth": -1, "top_modules": 3, "detailed": true } }

The flops profiler can also be used as a standalone package. Please refer to the Flops Profiler tutorial for more details.

Autotuning

The DeepSpeed Autotuner uses model information, system information, and heuristics to efficiently tune ZeRO stage, micro batch size, and other ZeRO configurations.
Using the autotuning feature requires no code change from DeepSpeed users. While "autotuning": {"enabled": true} is the minimum required to enable autotuning, users can define other parameters to configure the autotuning process. The major parameters and their default values in the autotuning configuration are shown below. Please refer to the Autotuning tutorial for more details.

{ "autotuning": { "enabled": true, "results_dir": null, "exps_dir": null, "overwrite": false, "metric": "throughput", "num_nodes": null, "num_gpus": null, "start_profile_step": 3, "end_profile_step": 5, "fast": true, "num_tuning_micro_batch_sizes": 3, "tuner_type": "model_based", "tuner_early_stopping": 5, "tuner_num_trials": 50, "arg_mappings": null } }

Monitor

The DeepSpeed Monitor logs live training metrics to one or more monitoring backends, including PyTorch’s TensorBoard, WandB, or simply CSV files. The Monitor can be configured with one or more backends in the deepspeed_config file as follows:

{ "tensorboard": { "enabled": true, "output_path": "output/ds_logs/", "job_name": "train_bert" }, "wandb": { "enabled": true, "team": "my_team", "group": "my_group", "project": "my_project" }, "csv_monitor": { "enabled": true, "output_path": "output/ds_logs/", "job_name": "train_bert" } }

The Monitor can also be used from client code to log custom metrics. Please refer to the Monitor tutorial for more details.

Communication Logging

DeepSpeed provides logging of all communication operations launched within deepspeed.comm. The communication logger can be configured in the deepspeed_config file as follows:

{ "comms_logger": { "enabled": true, "verbose": false, "prof_all": true, "debug": false } }

Client code can then print a summary with a call to deepspeed.comm.log_summary(). For more details and example usage, see the Communication Logging tutorial.
Sparse Attention

DeepSpeed offers sparse attention to support long sequences. Please refer to the Sparse Attention tutorial.

--deepspeed_sparse_attention

"sparse_attention": { "mode": "fixed", "block": 16, "different_layout_per_head": true, "num_local_blocks": 4, "num_global_blocks": 1, "attention": "bidirectional", "horizontal_global_attention": false, "num_different_global_patterns": 4 }

Mixture of Experts (MoE)

To learn more about training Mixture of Experts (MoE) models with DeepSpeed, see our tutorial for more details.
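To make the block-layout fields in the sparse_attention configuration above concrete, here is a simplified, pure-Python sketch of how a block-sparse attention layout could be built from num_local_blocks and num_global_blocks. This mirrors the idea of a fixed local-plus-global pattern, not DeepSpeed's exact layout code; for simplicity, the global blocks here are taken to be the first blocks of the sequence:

```python
def block_sparse_layout(num_blocks, num_local_blocks, num_global_blocks):
    """Boolean layout: layout[q][k] is True if query block q attends to key block k.

    Each query block attends to every block in its own local window
    (windows of num_local_blocks consecutive blocks) plus a few global
    blocks (simplified here to the first blocks of the sequence).
    """
    layout = [[False] * num_blocks for _ in range(num_blocks)]
    for q in range(num_blocks):
        window = q // num_local_blocks
        # Local window: consecutive blocks around this query block.
        for k in range(window * num_local_blocks,
                       min((window + 1) * num_local_blocks, num_blocks)):
            layout[q][k] = True
        # Global blocks visible to every query block.
        for k in range(min(num_global_blocks, num_blocks)):
            layout[q][k] = True
    return layout

# 8 blocks of 16 tokens each -> sequence length 128, but the layout is
# only 8 x 8 block pairs instead of a dense 128 x 128 attention mask.
layout = block_sparse_layout(8, num_local_blocks=4, num_global_blocks=1)
density = sum(map(sum, layout)) / 64  # fraction of block pairs actually computed
```

The attention cost then scales with the number of True entries times block*block, rather than with the full sequence length squared.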
Flops Profiler

Contents: Overview; Flops Measurement; Multi-GPU, Multi-node, Data Parallelism, and Model Parallelism; Usage (Usage With the DeepSpeed Runtime, Example: Megatron-LM, Usage Outside the DeepSpeed Runtime, In Model Inference, Example: AlexNet, Example: Bert, In Model Training Workflow, Example Training Workflow).

In this tutorial, we introduce the DeepSpeed Flops Profiler and provide examples of its usage.

Overview

Effective use of hardware resources is critical to good performance, but performance inefficiencies in existing implementations for large-scale model training and inference are often hard to spot and attribute to specific module components. The DeepSpeed Flops Profiler helps users easily measure both the model training/inference speed (latency, throughput) and efficiency (floating-point operations per second, i.e., FLOPS) of a model and its submodules, with an eye towards eliminating inefficiencies in existing implementations.
Below is an example output for BERT-Large(NVIDIA) on an A100 GPU with batch size 80: -------------------------- DeepSpeed Flops Profiler -------------------------- Profile Summary at step 10: Notations: data parallel size (dp_size), model parallel size(mp_size), number of parameters (params), number of multiply-accumulate operations(MACs), number of floating-point operations (flops), floating-point operations per second (FLOPS), fwd latency (forward propagation latency), bwd latency (backward propagation latency), step (weights update latency), iter latency (sum of fwd, bwd and step latency) world size: 1 data parallel size: 1 model parallel size: 1 batch size per GPU: 80 params per gpu: 336.23 M params of model = params per GPU * mp_size: 336.23 M fwd MACs per GPU: 3139.93 G fwd flops per GPU: 6279.86 G fwd flops of model = fwd flops per GPU * mp_size: 6279.86 G fwd latency: 76.67 ms bwd latency: 108.02 ms fwd FLOPS per GPU = fwd flops per GPU / fwd latency: 81.9 TFLOPS bwd FLOPS per GPU = 2 * fwd flops per GPU / bwd latency: 116.27 TFLOPS fwd+bwd FLOPS per GPU = 3 * fwd flops per GPU / (fwd+bwd latency): 102.0 TFLOPS step latency: 34.09 us iter latency: 184.73 ms samples/second: 433.07 ----------------------------- Aggregated Profile per GPU ----------------------------- Top modules in terms of params, MACs or fwd latency at different model depths: depth 0: params - {'BertForPreTrainingPreLN': '336.23 M'} MACs - {'BertForPreTrainingPreLN': '3139.93 GMACs'} fwd latency - {'BertForPreTrainingPreLN': '76.39 ms'} depth 1: params - {'BertModel': '335.15 M', 'BertPreTrainingHeads': '32.34 M'} MACs - {'BertModel': '3092.96 GMACs', 'BertPreTrainingHeads': '46.97 GMACs'} fwd latency - {'BertModel': '34.29 ms', 'BertPreTrainingHeads': '3.23 ms'} depth 2: params - {'BertEncoder': '302.31 M', 'BertLMPredictionHead': '32.34 M'} MACs - {'BertEncoder': '3092.88 GMACs', 'BertLMPredictionHead': '46.97 GMACs'} fwd latency - {'BertEncoder': '33.45 ms', 'BertLMPredictionHead': '2.61 
ms'}
depth 3: params - {'ModuleList': '302.31 M', 'Embedding': '31.79 M', 'Linear': '31.26 M'} MACs - {'ModuleList': '3092.88 GMACs', 'Linear': '36.23 GMACs'} fwd latency - {'ModuleList': '33.11 ms', 'BertPredictionHeadTransform': '1.83 ms'}
depth 4: params - {'BertLayer': '302.31 M', 'LinearActivation': '1.05 M'} MACs - {'BertLayer': '3092.88 GMACs', 'LinearActivation': '10.74 GMACs'} fwd latency - {'BertLayer': '33.11 ms', 'LinearActivation': '1.43 ms'}
depth 5: params - {'BertAttention': '100.76 M', 'BertIntermediate': '100.76 M'} MACs - {'BertAttention': '1031.3 GMACs', 'BertIntermediate': '1030.79 GMACs'} fwd latency - {'BertAttention': '19.83 ms', 'BertOutput': '4.38 ms'}
depth 6: params - {'LinearActivation': '100.76 M', 'Linear': '100.69 M'} MACs - {'LinearActivation': '1030.79 GMACs', 'Linear': '1030.79 GMACs'} fwd latency - {'BertSelfAttention': '16.29 ms', 'LinearActivation': '3.48 ms'}
------------------------------ Detailed Profile per GPU ------------------------------ Each module profile is listed after its name in the following order: params, percentage of total params, MACs, percentage of total MACs, fwd latency, percentage of total fwd latency, fwd FLOPS BertForPreTrainingPreLN( 336.23 M, 100.00% Params, 3139.93 GMACs, 100.00% MACs, 76.39 ms, 100.00% latency, 82.21 TFLOPS, (bert): BertModel( 335.15 M, 99.68% Params, 3092.96 GMACs, 98.50% MACs, 34.29 ms, 44.89% latency, 180.4 TFLOPS, (embeddings): BertEmbeddings(...) (encoder): BertEncoder( 302.31 M, 89.91% Params, 3092.88 GMACs, 98.50% MACs, 33.45 ms, 43.79% latency, 184.93 TFLOPS, (FinalLayerNorm): FusedLayerNorm(...) 
(layer): ModuleList( 302.31 M, 89.91% Params, 3092.88 GMACs, 98.50% MACs, 33.11 ms, 43.35% latency, 186.8 TFLOPS, (0): BertLayer( 12.6 M, 3.75% Params, 128.87 GMACs, 4.10% MACs, 1.29 ms, 1.69% latency, 199.49 TFLOPS, (attention): BertAttention( 4.2 M, 1.25% Params, 42.97 GMACs, 1.37% MACs, 833.75 us, 1.09% latency, 103.08 TFLOPS, (self): BertSelfAttention( 3.15 M, 0.94% Params, 32.23 GMACs, 1.03% MACs, 699.04 us, 0.92% latency, 92.22 TFLOPS, (query): Linear(1.05 M, 0.31% Params, 10.74 GMACs, 0.34% MACs, 182.39 us, 0.24% latency, 117.74 TFLOPS,...) (key): Linear(1.05 M, 0.31% Params, 10.74 GMACs, 0.34% MACs, 57.22 us, 0.07% latency, 375.3 TFLOPS,...) (value): Linear(1.05 M, 0.31% Params, 10.74 GMACs, 0.34% MACs, 53.17 us, 0.07% latency, 403.91 TFLOPS,...) (dropout): Dropout(...) (softmax): Softmax(...) ) (output): BertSelfOutput( 1.05 M, 0.31% Params, 10.74 GMACs, 0.34% MACs, 114.68 us, 0.15% latency, 187.26 TFLOPS, (dense): Linear(1.05 M, 0.31% Params, 10.74 GMACs, 0.34% MACs, 64.13 us, 0.08% latency, 334.84 TFLOPS, ...) (dropout): Dropout(...) ) ) (PreAttentionLayerNorm): FusedLayerNorm(...) (PostAttentionLayerNorm): FusedLayerNorm(...) (intermediate): BertIntermediate( 4.2 M, 1.25% Params, 42.95 GMACs, 1.37% MACs, 186.68 us, 0.24% latency, 460.14 TFLOPS, (dense_act): LinearActivation(4.2 M, 1.25% Params, 42.95 GMACs, 1.37% MACs, 175.0 us, 0.23% latency, 490.86 TFLOPS,...) ) (output): BertOutput( 4.2 M, 1.25% Params, 42.95 GMACs, 1.37% MACs, 116.83 us, 0.15% latency, 735.28 TFLOPS, (dense): Linear(4.2 M, 1.25% Params, 42.95 GMACs, 1.37% MACs, 65.57 us, 0.09% latency, 1310.14 TFLOPS,...) (dropout): Dropout(...) ) ) ... (23): BertLayer(...) ) ) (pooler): BertPooler(...) ) (cls): BertPreTrainingHeads(...) ) ------------------------------------------------------------------------------ In the summary profile, the DeepSpeed Flops Profiler outputs the number of parameters, floating-point operations (flops), FLOPS, latency, and throughput in samples/second of the model. 
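The derived FLOPS numbers in the summary are simple functions of the measured forward flops and latencies (backward flops are estimated as 2x forward, so forward plus backward totals 3x forward flops). As a sanity check, the BERT-Large figures above can be reproduced with a few lines of Python:

```python
# Reproduce the derived FLOPS numbers from the BERT-Large summary above.
fwd_flops = 6279.86e9    # fwd flops per GPU
fwd_latency = 76.67e-3   # seconds
bwd_latency = 108.02e-3  # seconds

fwd_flops_per_s = fwd_flops / fwd_latency                          # ~81.9 TFLOPS
bwd_flops_per_s = 2 * fwd_flops / bwd_latency                      # ~116.3 TFLOPS
fwd_bwd_flops_per_s = 3 * fwd_flops / (fwd_latency + bwd_latency)  # ~102.0 TFLOPS

print(round(fwd_flops_per_s / 1e12, 1))      # 81.9
print(round(fwd_bwd_flops_per_s / 1e12, 1))  # 102.0
```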
This profile shows how much of a performance gap (compared to peak hardware performance) the current model execution has, and helps users tune the training or inference setup (e.g., hyperparameters, data parallelism, model parallelism, system configurations, etc.) for better performance. The DeepSpeed Flops Profiler also measures significant modules at different model depths (aggregated profile) and module-specific profiles within the model architecture (detailed profile). Using these profiles, DeepSpeed users can understand how each layer or submodule contributes to the overall model complexity and performance, and can then adjust or refactor the model design accordingly. For example, using the profiler, DeepSpeed users can quantitatively tell whether stacking smaller layers is lighter or more performant than using bigger ones. The aggregated and detailed profiles also allow users to quickly identify bottleneck modules. In the BERT-Large example above, using the DeepSpeed Flops Profiler, we find that BertLayer is the most significant layer and contains quite a few dropout, softmax, and layer-norm modules along with linear modules. These modules are not heavy in flops, yet they trigger many GPU kernel invocations and create excessive read/write requests to memory. The pattern shown in the detailed profile suggests this is a perfect match for kernel fusion, and we developed fused transformer kernels to reduce data movement (see DeepSpeedBert). After applying our optimizations, we see a 25% improvement in FLOPS per GPU and overall training samples/second in the DeepSpeed Flops Profiler output. The DeepSpeed Flops Profiler can be used with the DeepSpeed runtime without any user code change, or it can be used independently from DeepSpeed as a standalone package. When using DeepSpeed for model training, the profiler can be enabled in the DeepSpeed configuration file. As a standalone package, the profiler API can be used in both training and inference code.
The DeepSpeed profiler is still under active development and includes just the initial features. Stay connected for more exciting features to be added soon.

Flops Measurement

Similar to existing flops calculation tools or methods, the DeepSpeed Flops Profiler measures the flops of the forward pass of a module, and the flops of the backward pass are estimated as twice those of the forward pass. Unlike the PyTorch profiler, which calculates the flops of PyTorch operators, the DeepSpeed Flops Profiler measures the flops within the modules of a model and provides more insight to users about model execution. The flops estimation is partly inspired by ptflops, with the major difference being that the DeepSpeed Flops Profiler not only supports flops computation directly at the module level, but can also capture torch.nn.functional calls invoked within a module to estimate the flops. Thus the DeepSpeed Flops Profiler supports customized modules in the model, e.g., ParallelTransformerLayer, ParallelSelfAttention, RowParallelLinear, etc. in Megatron-LM. This is in contrast to ptflops, which requires users to write customized flops calculation functions for each customized module.

Multi-GPU, Multi-node, Data Parallelism, and Model Parallelism

The DeepSpeed Flops Profiler outputs the per-GPU profile as well as the world size, data parallel size, and model parallel size. For models running on multiple GPUs or nodes, only changes to the model parallelism (e.g., --model-parallel-size in Megatron-LM) affect the number of flops and parameters profiled, i.e., model_parallel_size * flops = total_flops and model_parallel_size * parameters = total_parameters. The data parallel size or world size (related to the number of GPUs or nodes) does not affect the per-GPU profile.

Usage

The DeepSpeed Flops Profiler can be used with the DeepSpeed runtime or as a standalone package.
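The model-parallel scaling relation described in the Multi-GPU section above amounts to a one-line bookkeeping rule; the numbers below are hypothetical, chosen only to illustrate it:

```python
# With model parallelism, each GPU holds 1/mp_size of the model, so the
# profiler's per-GPU flops and parameters scale back up by mp_size.
# Data parallelism replicates the model, so dp_size does not change the
# per-GPU profile at all.

def model_totals(per_gpu_flops, per_gpu_params, mp_size):
    return per_gpu_flops * mp_size, per_gpu_params * mp_size

# Hypothetical per-GPU profile under 4-way model parallelism:
total_flops, total_params = model_totals(per_gpu_flops=1.5e12,
                                         per_gpu_params=2.5e9,
                                         mp_size=4)
# total_flops == 6.0e12, total_params == 1.0e10
```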