DDP error (single GPU runs fine): ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1)
The error occurs when training with DDP, but the same code runs without error on a single GPU.
The error log is as follows:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 1: 4 5 6 7
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
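
The last line of the log is worth acting on: setting TORCH_DISTRIBUTED_DEBUG makes PyTorch report which parameters received no gradient, by name rather than by index. A minimal sketch of enabling it (the variable just needs to be set early, before the process group and DDP model are created):

import os
# "INFO" or "DETAIL" both work; DETAIL prints per-parameter information.
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
# ... then initialize the process group and wrap the model as usual.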
At first I suspected a bug in DDP, but reading the error carefully, the key sentence is:
This error indicates that your module has parameters that were not used in producing loss.
In other words, some parameters did not participate in producing the loss: they were defined in __init__ but never used in forward. To make tuning easier, I had defined several candidate network modules in the __init__ function, so that switching architectures only required changing which module forward calls. On a single GPU this runs without error, but DDP has to all-reduce gradients across GPUs, so to guard against mistakes it enforces the rule that every registered parameter must receive a gradient. The check can be relaxed with the find_unused_parameters=True argument mentioned in the error message, but it is better not to.
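
A minimal sketch of this failure mode, using a single-process "gloo" group for illustration (the names Net, branch_a, and branch_b are hypothetical; the same pattern applies under torchrun with multiple ranks):

import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = nn.Linear(8, 8)  # the module actually used
        self.branch_b = nn.Linear(8, 8)  # candidate module, never called in forward

    def forward(self, x):
        return self.branch_a(x)          # branch_b's parameters never receive grads

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = DDP(Net())  # find_unused_parameters defaults to False
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(2):  # the RuntimeError surfaces on the second iteration
    opt.zero_grad()
    loss = model(torch.randn(4, 8)).sum()
    loss.backward()
    opt.step()
dist.destroy_process_group()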
Solution: comment out (or remove) the modules defined in __init__ that forward does not use.
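
Continuing the sketch above, the fix is simply not to register the unused candidates. If you really need to keep them switchable, the find_unused_parameters=True escape hatch works, at the cost of extra per-iteration overhead:

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = nn.Linear(8, 8)
        # self.branch_b = nn.Linear(8, 8)  # unused candidate: commented out

    def forward(self, x):
        return self.branch_a(x)

# Alternative workaround (works, but not recommended by this post):
# model = DDP(Net(), find_unused_parameters=True)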