Windows C++ ONNX deployment
Reference: https://blog.csdn.net/weixin_45493537/article/details/123829142
1, Download ONNX Runtime from https://onnxruntime.ai/
2, First create a folder (the name can be anything, e.g. C:\Users\27795\Desktop\cc) and copy the nupkg file into it (microsoft.ml.onnxruntime.1.7.0.nupkg); the nupkg package will be unpacked via VS2019. Then create a new console application; the project name can be anything.
3, Select Tools -> NuGet Package Manager -> Package Manager Console and enter the following command: Install-Package Microsoft.ML.OnnxRuntime -Source C:\Users\27795\Desktop\cc
4, A Microsoft.ML.OnnxRuntime.1.7.0 folder is generated under the project folder.
5, Copy Microsoft.ML.OnnxRuntime.1.7.0 directly into the environment directory.
6, Include directories: D:\environment\Microsoft.ML.OnnxRuntime.1.15.1\build\native\include
Library directories: D:\environment\Microsoft.ML.OnnxRuntime.1.15.1\runtimes\win-x64\native
Linker -> Input -> Additional Dependencies: onnxruntime.lib (located under E:\Microsoft.ML.OnnxRuntime.1.5.0\runtimes\win-x64\native\)
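The three project settings above can also be written directly into the project's .vcxproj file. A sketch of the corresponding MSBuild entries, assuming the unpack location from the steps above (adjust the paths to your actual install directory):

```
<ItemDefinitionGroup>
  <ClCompile>
    <!-- Step 6: include directories -->
    <AdditionalIncludeDirectories>D:\environment\Microsoft.ML.OnnxRuntime.1.15.1\build\native\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
  </ClCompile>
  <Link>
    <!-- Step 6: library directories and the import library -->
    <AdditionalLibraryDirectories>D:\environment\Microsoft.ML.OnnxRuntime.1.15.1\runtimes\win-x64\native;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
    <AdditionalDependencies>onnxruntime.lib;%(AdditionalDependencies)</AdditionalDependencies>
  </Link>
</ItemDefinitionGroup>
```

Editing the .vcxproj and setting the values through the Visual Studio property pages are equivalent; the GUI simply writes these same elements.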
7, onnxruntime.dll also needs to be placed next to the executable in the Debug or Release output folder (I did not end up needing this step myself).
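Instead of copying the DLL by hand, a post-build event can do it automatically on every build. A sketch using the Visual Studio $(OutDir) macro (Project Properties -> Build Events -> Post-Build Event; the source path is the one assumed in the steps above):

```
xcopy /y "D:\environment\Microsoft.ML.OnnxRuntime.1.15.1\runtimes\win-x64\native\onnxruntime.dll" "$(OutDir)"
```

This keeps Debug and Release both working, since $(OutDir) resolves to whichever configuration is being built.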
测试代码:
#include <iostream>
#include <assert.h>
#include <vector>
#include <string>
#include <onnxruntime_cxx_api.h>

int main(int argc, char* argv[]) {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "test");
    Ort::SessionOptions session_options;
    session_options.SetIntraOpNumThreads(1);
    session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_BASIC);

    // On Windows the model path must be a wide string
    const wchar_t* model_path = L"D:/02python/01code/yolo/ultralytics-main/2023025best.onnx";
    Ort::Session session(env, model_path, session_options);

    // Print model input/output layer info (node names, types, shape etc.)
    Ort::AllocatorWithDefaultOptions allocator;
    size_t num_input_nodes = session.GetInputCount();
    // GetInputNameAllocated/GetOutputNameAllocated return an
    // Ort::AllocatedStringPtr (a smart pointer), so call .get()
    // to obtain the raw C string before streaming it
    std::cout << session.GetInputNameAllocated(0, allocator).get() << std::endl;
    std::cout << session.GetOutputNameAllocated(0, allocator).get() << std::endl;

    system("pause");
    return 0;
}