YOLOX multi-GPU training

In yolox/core/trainer.py, wrap the self.tblogger.add_scalar statements in an if self.rank == 0: block so that only rank 0 writes TensorBoard events, and call synchronize() once the block is done:
        if self.rank == 0:
            self.tblogger.add_scalar('lr', lr, self.epoch)
            self.tblogger.add_scalar('loss', loss, self.epoch)
        synchronize()  # barrier: the other ranks wait here until rank 0 has finished logging
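
For context, here is a minimal standalone sketch of the same pattern (the log_epoch function and the 'tb_logs' directory are made up for illustration; launch it with torch.distributed.launch or torchrun just like the training command below). As far as I can tell, yolox.utils.synchronize() is essentially dist.barrier() plus some availability checks, so the behaviour is the same:

    import torch.distributed as dist
    from torch.utils.tensorboard import SummaryWriter

    def log_epoch(rank, writer, lr, loss, epoch):
        # Only rank 0 owns a SummaryWriter, so a single event file is produced.
        if rank == 0:
            writer.add_scalar('lr', lr, epoch)
            writer.add_scalar('loss', loss, epoch)
        # Barrier: the remaining ranks wait here instead of running ahead of rank 0.
        dist.barrier()

    if __name__ == '__main__':
        dist.init_process_group(backend='nccl')  # use 'gloo' when testing on CPU
        rank = dist.get_rank()
        writer = SummaryWriter('tb_logs') if rank == 0 else None
        log_epoch(rank, writer, lr=0.01, loss=1.23, epoch=0)
        if writer is not None:
            writer.close()
        dist.destroy_process_group()
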
# Command for training with 4 GPUs
#python -m torch.distributed.launch --nproc_per_node=4 tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -b 32
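
Two notes on the launch command, based on my reading of the upstream tools/train.py (double-check against your YOLOX version): -b is the total batch size across all GPUs (32 here, i.e. 8 images per GPU), and train.py can also spawn the worker processes itself via the -d flag, so a roughly equivalent invocation would be:
#python tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 4 -b 32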
