TVM Installation Tutorial, plus LLVM Basics: Building, Installing, and Using LLVM
TVM build guide:
https://blog.csdn.net/u010420283/article/details/134635586
LLVM build guide:
https://www.cnblogs.com/robotech/p/16370415.html
Mind the version pairing:
TVM 0.13.0 with LLVM 13.0.0 (verified locally; a version-check snippet follows the install references below)
TVM downloads: https://tvm.apache.org/download
LLVM downloads: https://releases.llvm.org/download.html
Installation reference docs:
https://blog.csdn.net/gasolinesky/article/details/130091169
and the TVM Chinese-language install guide: https://tvm.hyper.ai/docs/install/from_source/
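Once TVM is built, a quick sanity check from Python confirms that the interpreter picks up the intended build and that it was compiled against LLVM. This is a minimal sketch: tvm.support.libinfo() reports build-time cmake settings, but whether the LLVM_VERSION key is present depends on how your TVM was configured.

import tvm

print(tvm.__version__)               # e.g. 0.13.0
info = tvm.support.libinfo()         # dict of build-time cmake flags
print(info.get("USE_LLVM"))          # anything other than "OFF" means LLVM is enabled
print(info.get("LLVM_VERSION"))      # e.g. 13.0.0 (key may be absent on some builds)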
Installing TVM on Windows:
https://blog.csdn.net/wsp_1138886114/article/details/135123205
https://blog.csdn.net/weixin_50836014/article/details/127512029 // includes the LLVM source download link: https://codeload.github.com/llvm/llvm-project/zip/refs/tags/llvmorg-13.0.0
If the build reports syntax errors, reinstall Visual Studio.
A known-working test example (an even simpler smoke test is sketched right after the link):
https://blog.csdn.net/qq_32460819/article/details/109244214
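As a quick end-to-end check of the LLVM backend, the classic vector-add example can be compiled and run on CPU. This is a sketch against the te API as it existed around TVM 0.8-0.13; the size n and the names A, B, C are arbitrary:

import numpy as np
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

s = te.create_schedule(C.op)                # default (unoptimized) schedule
f = tvm.build(s, [A, B, C], target="llvm")  # fails here if LLVM is missing

dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
f(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
print("vector add OK")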
Common errors and fixes:
https://blog.csdn.net/JerryLiu1998/article/details/108407804
[Learning Deep Learning Compilers from Scratch] Part 5: an introduction to TVM Relay and Passes
https://blog.csdn.net/just_sort/article/details/116355215
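As a companion to that article, here is a minimal sketch of running stock Relay passes over an IRModule. The tiny add/multiply function is invented purely for illustration:

import tvm
from tvm import relay

x = relay.var("x", shape=(1, 3), dtype="float32")
y = relay.multiply(relay.add(x, relay.const(1.0)), relay.const(2.0))
mod = tvm.IRModule.from_expr(relay.Function([x], y))

# chain two standard passes; FoldConstant pre-computes any subgraph
# that depends only on constants
seq = tvm.transform.Sequential([
    relay.transform.InferType(),
    relay.transform.FoldConstant(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)  # the transformed IR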
Building a custom neural network with Relay
Using the Relay interface we define a small Conv+BN+ReLU network to show how the API is meant to be used. The TVM version here is 0.8.0.dev. The code is as follows:
# coding=utf-8
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# construct BatchNorm
def batch_norm(data, gamma=None, beta=None, moving_mean=None, moving_var=None, **kwargs):
    name = kwargs.get("name")
    kwargs.pop("name")
    if not gamma:
        gamma = relay.var(name + "_gamma")
    if not beta:
        beta = relay.var(name + "_beta")
    if not moving_mean:
        moving_mean = relay.var(name + "_moving_mean")
    if not moving_var:
        moving_var = relay.var(name + "_moving_var")
    return relay.nn.batch_norm(data,
                               gamma=gamma,
                               beta=beta,
                               moving_mean=moving_mean,
                               moving_var=moving_var,
                               **kwargs)[0]

# construct Conv
def conv2d(data, weight=None, **kwargs):
    name = kwargs.get("name")
    kwargs.pop("name")
    if not weight:
        weight = relay.var(name + "_weight")
    return relay.nn.conv2d(data, weight, **kwargs)

# construct the Conv+BN+ReLU simplenet
def simplenet(data, name, channels, kernel_size=(3, 3), strides=(1, 1),
              padding=(1, 1), epsilon=1e-5):
    conv = conv2d(
        data=data,
        channels=channels,
        kernel_size=kernel_size,
        strides=strides,
        padding=padding,
        data_layout='NCHW',
        name=name + '_conv')
    bn = batch_norm(data=conv, epsilon=epsilon, name=name + '_bn')
    act = relay.nn.relu(data=bn)
    return act

data_shape = (1, 3, 224, 224)
kernel_shape = (32, 3, 3, 3)
dtype = "float32"

data = relay.var("data", shape=data_shape, dtype=dtype)
act = simplenet(data, "graph", 32, strides=(2, 2))
func = relay.Function(relay.analysis.free_vars(act), act)
print(func)

np_data = np.random.uniform(-1, 1, (1, 3, 224, 224))

params = {
    "graph_conv_weight": tvm.nd.array(np.random.uniform(-1, 1, (32, 3, 3, 3)).astype(dtype)),
    "graph_bn_gamma": tvm.nd.array(np.random.uniform(-1, 1, (32,)).astype(dtype)),
    "graph_bn_beta": tvm.nd.array(np.random.uniform(-1, 1, (32,)).astype(dtype)),
    "graph_bn_moving_mean": tvm.nd.array(np.random.uniform(-1, 1, (32,)).astype(dtype)),
    "graph_bn_moving_var": tvm.nd.array(np.random.uniform(-1, 1, (32,)).astype(dtype)),
}

# compile the Relay function to an LLVM (CPU) module
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(func, "llvm", params=params)

dev = tvm.cpu(0)
m = graph_executor.GraphModule(lib["default"](dev))
# set inputs
m.set_input("data", tvm.nd.array(np_data.astype(dtype)))
# execute
m.run()
# get outputs
tvm_output = m.get_output(0)
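A quick follow-up check (my addition, not part of the original snippet): with a 3x3 kernel, stride 2, and padding 1 on a 224x224 input, the output tensor should come out as (1, 32, 112, 112):

out = tvm_output.numpy()   # on older TVM releases this is .asnumpy()
print(out.shape)           # expected: (1, 32, 112, 112)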