PyTorch Learning Notes (3): Converting a .pth Model to ONNX
Principle:
Python for Training
C++ for Inference
Exporting a PyTorch model:
torch.save(): decide what to save when checkpointing; besides the model weights, other variables can be stored as well, as in these examples (a small sketch of the pattern follows this list):
- https://github.com/alibaba/cascade-stereo/blob/master/CasMVSNet/train.py#L98-L102
- https://github.com/yihuacheng/GazeTR/blob/master/trainer/total.py#L136-L143
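A minimal sketch of that checkpoint pattern, assuming a small stand-in network and optimizer; the names and the file path are placeholders, not taken from the linked repositories:

import torch
import torch.nn as nn

# stand-in model and optimizer; in practice these are the real training objects
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epoch = 1

# save more than just the weights: the epoch and optimizer state allow resuming training
checkpoint = {
    "epoch": epoch,
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
}
torch.save(checkpoint, "checkpoint_{:0>2}.pth".format(epoch))

# restoring later
ckpt = torch.load("checkpoint_01.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])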
PyTorch saves the weights of a neural network in a .pth file; the model weights in a .pth file are stored in dictionary form (a state_dict), but the .pth file contains no information about the network structure. To obtain a file that is complete with both structure and weights, the model needs to be exported to a .onnx file via the Open Neural Network Exchange (ONNX) framework.
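A minimal export sketch, assuming a small stand-in network, a dummy 1x3x224x224 input, and placeholder file names (model.pth, model.onnx); substitute the real trained model and its input shape:

import torch
import torch.nn as nn

# stand-in network; replace with the trained model whose .pth weights you have
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
# model.load_state_dict(torch.load("model.pth", map_location="cpu"))  # load the weight dictionary here
model.eval()

# a dummy input fixes the input shape of the traced graph
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    export_params=True,   # embed the trained weights in the .onnx file
    opset_version=11,     # ONNX operator set version
    input_names=["input"],
    output_names=["output"],
)

The resulting .onnx file contains both the graph structure and the weights, so it can be opened in Netron or run with ONNX Runtime for inference.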
Model parameter count
model = FPN()
num_params = sum(p.numel() for p in model.parameters())
print("num of params: {:.2f}k".format(num_params/1000.0))
# torch.numel() returns the number of elements in a tensor
Print the model
model = FPN()
num_params = sum(p.numel() for p in model.parameters())
print("num of params: {:.2f}k".format(num_params/1000.0))
print("===========================")
#for p in model.parameters():
#    print(p.name)  # note: Parameter.name is usually None; use model.named_parameters() for names
print(model)
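Related to the dictionary format mentioned above, a small sketch (stand-in network, placeholder file name model.pth) of how parameter names and the keys of a saved weight dictionary can be inspected:

import torch
import torch.nn as nn

# stand-in network; in these notes the real model would be FPN()
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))

# parameter names and shapes straight from the model
for name, p in model.named_parameters():
    print(name, tuple(p.shape))

# the keys of a weights-only .pth file: a plain dict of tensors
# (a state_dict also contains buffers such as BatchNorm running statistics)
torch.save(model.state_dict(), "model.pth")
state = torch.load("model.pth", map_location="cpu")
for key, tensor in state.items():
    print(key, tuple(tensor.shape))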
References
- Documentation: https://pytorch.org/docs/stable/onnx.html
- Tutorial: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
- Code (Python): https://github.com/pytorch/tutorials/blob/master/advanced_source/super_resolution_with_onnxruntime.py
- ONNX Runtime C++ API: https://github.com/leimao/ONNX-Runtime-Inference
- PyTorch Saving/Loading Models: https://pytorch.org/tutorials/beginner/saving_loading_models.html
- Netron issue: Not able to visualize PyTorch model