TVM Performance Evaluation and Analysis (6)

 

 

Figure 1.  The development workflow: write code on a development PC, compile, deploy to the device, test, then modify the code again to see whether it runs faster.

 

 

Figure 2.  The Android app takes a shared library as input and runs the compiled functions on the mobile phone.
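The pattern in Figure 2 — a runtime that loads a compiled shared library and calls its exported functions — can be illustrated with Python's standard `ctypes` module. This is a conceptual sketch only, using the system math library as a stand-in for a TVM-compiled `.so`; it is not the actual TVM Android runtime API.

```python
import ctypes
import ctypes.util

# Locate and load a shared library, the way a deployment runtime
# would load a compiled model artifact (.so) shipped to the device.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path or "libm.so.6")  # fall back to the common Linux soname

# Declare the signature of the exported symbol before calling it.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

result = libm.cos(0.0)
print(result)  # 1.0
```

The key point is that the app itself contains no model-specific code: everything model-specific lives in the shared library, so updating the model only means shipping a new `.so`.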

 

 

 Figure 3.  Build TVM functions and NDArrays on a remote device. The ability to cross-compile to different platforms makes it easy to develop on one platform and test on another.
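The remote-execution idea in Figure 3 — build a function on one machine, call it on another — follows the classic RPC pattern. Below is a minimal self-contained sketch of that pattern using Python's standard `xmlrpc` modules over localhost; TVM's own RPC protocol is different, so treat the server/function names here as illustrative only.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    # A function "built" on the remote device side.
    return a + b

# "Device" side: serve the function on an OS-assigned port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Development PC" side: connect and invoke the function remotely.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
server.shutdown()
print(result)  # 5
```

In TVM the same split lets you compile kernels on a workstation, upload them through an RPC session, and run them on the phone or board without touching the device build environment.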

 

 

Figure 4.  The instructions to build for your Android device. Once the APK is built, sign it using apps/android_rpc/dev_tools and install it on the phone.

 

 

Figure 5.  With NNVM compiler support in the TVM stack, descriptions from deep learning frameworks can now be compiled directly to bare-metal code that runs on AMD GPUs.

 

 

Figure 6.  The generic workflow with the ROCm backend.

 

 

Figure 7.  Using the ONNX library to load an ONNX model into a protocol buffer object.

 

 

Figure 8.  An end-to-end compilation pipeline from front-end deep learning frameworks to bare-metal hardware.

 

Figure 9.  Typical workflow of NNVM Compiler

 

 

 Figure 10.  Separation of Optimization and Deployment

 

 

 Figure 11.  Time Cost of Inference on K80

 

 

Figure 12.  The cost of inference on a Raspberry Pi
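The measurements behind Figures 11 and 12 follow the usual "repeat N runs, average, keep the best repeat" methodology that TVM exposes through a module's `time_evaluator`. A minimal stand-alone harness with the same number/repeat semantics can be sketched in pure Python (the workload here is a dummy loop, not a real model):

```python
import time

def benchmark(fn, number=10, repeat=3):
    """Time fn like a number/repeat evaluator: average over `number`
    calls, take the best of `repeat` trials, return milliseconds."""
    per_call = []
    for _ in range(repeat):
        start = time.perf_counter()
        for _ in range(number):
            fn()
        per_call.append((time.perf_counter() - start) / number)
    return min(per_call) * 1e3

# Dummy workload standing in for one inference pass.
workload = lambda: sum(i * i for i in range(10_000))
ms = benchmark(workload)
print(f"{ms:.3f} ms per run")
```

Taking the best of several repeats filters out OS scheduling noise, which matters especially on small boards like the Raspberry Pi where background activity can skew a single trial.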

posted @ 吴建明wujianming