TVM Performance Evaluation Analysis (7)

Figure 1. Performance Improvement

Figure 2. Depthwise convolution

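The depthwise convolution of Figure 2 can be written directly as a tensor expression. Below is a minimal sketch using TVM's te API; the shapes, the 3x3 kernel, and the valid-padding output size are assumptions for illustration. The defining property is that each output channel reduces only over its own input channel.

```python
import tvm
from tvm import te

batch, channels, height, width = 1, 32, 56, 56   # assumed shapes
kernel = 3

Input = te.placeholder((batch, channels, height, width), name="Input")
Filter = te.placeholder((channels, 1, kernel, kernel), name="Filter")

di = te.reduce_axis((0, kernel), name="di")
dj = te.reduce_axis((0, kernel), name="dj")

# Depthwise: output channel c reads only input channel c (no cross-channel sum).
Output = te.compute(
    (batch, channels, height - kernel + 1, width - kernel + 1),
    lambda b, c, i, j: te.sum(
        Input[b, c, i + di, j + dj] * Filter[c, 0, di, dj], axis=[di, dj]
    ),
    name="DepthwiseConv",
)

s = te.create_schedule(Output.op)
print(tvm.lower(s, [Input, Filter, Output], simple_mode=True))
```
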
Figure 3. Data Fusion

Figure 4. Data Fusion (2)

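Figures 3 and 4 show adjacent operators being fused so that intermediate results never round-trip through memory. Below is a minimal sketch of that idea with TVM te schedules, assuming a toy add + ReLU pair rather than the operators in the figures: compute_inline folds the add into the ReLU loop, so no temporary buffer is materialized.

```python
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")

C = te.compute((n,), lambda i: A[i] + B[i], name="C")                                 # add
D = te.compute((n,), lambda i: te.max(C[i], tvm.tir.const(0, "float32")), name="D")   # relu

s = te.create_schedule(D.op)
s[C].compute_inline()   # fuse the add into the relu loop body

# The lowered IR contains a single loop and no buffer for C.
print(tvm.lower(s, [A, B, D], simple_mode=True))
```
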
Figure 5. Shared memory can be seen as a cache on the GPU: it is on-chip and much faster than global memory.

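In TVM schedules this on-chip cache is exposed as the "shared" storage scope. The sketch below (a row-sum kernel with arbitrary split factors, all assumptions) stages the input through shared memory with cache_read; a real kernel would additionally bind the copy loops to threads for cooperative fetching.

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
k = te.reduce_axis((0, n), name="k")
B = te.compute((n,), lambda i: te.sum(A[i, k], axis=k), name="B")

s = te.create_schedule(B.op)
AS = s.cache_read(A, "shared", [B])    # stage slices of A in shared memory

bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

ko, ki = s[B].split(k, factor=32)
s[AS].compute_at(s[B], ko)             # load one 32-wide slice per reduction step

print(tvm.lower(s, [A, B], simple_mode=True))
```
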
Figure 6. Shared memory banks are organized such that successive addresses are assigned to successive banks.

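A small worked example of that mapping, in plain Python, assuming the usual 32 banks and 4-byte words of NVIDIA GPUs:

```python
NUM_BANKS = 32   # assumption: 32 banks, as on most NVIDIA GPUs
WORD_BYTES = 4   # assumption: 4-byte bank width

def bank_of(byte_address):
    """Successive 4-byte words map to successive banks, wrapping at 32."""
    return (byte_address // WORD_BYTES) % NUM_BANKS

# Word addresses 0, 4, 8, ... land in banks 0, 1, 2, ..., 31, 0, 1, ...
print([bank_of(4 * w) for w in range(36)])

# If 32 threads of a warp read row elements tile[row][threadIdx.x], each thread
# hits a different bank; reading the column tile[threadIdx.x][0] instead would
# send every thread to bank 0 and serialize the accesses.
```
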
Figure 7. Consecutive threads access consecutive memory addresses, thus avoiding bank conflicts.

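In a TVM schedule this pattern falls out of binding threadIdx.x to the innermost, fastest-varying axis, so thread t touches element t of the current tile. A minimal sketch with an assumed elementwise kernel and an arbitrary split factor:

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.compute((n, n), lambda i, j: A[i, j] * 2.0, name="B")

s = te.create_schedule(B.op)
i, j = s[B].op.axis
jo, ji = s[B].split(j, factor=32)                 # 32 threads = one warp
s[B].bind(i, te.thread_axis("blockIdx.y"))
s[B].bind(jo, te.thread_axis("blockIdx.x"))
s[B].bind(ji, te.thread_axis("threadIdx.x"))      # innermost axis -> consecutive addresses

print(tvm.lower(s, [A, B], simple_mode=True))
```
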
Figure 8. Computational Graph

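A computational graph like the one in Figure 8 can be built through TVM's Relay IR. The sketch below assumes a toy conv + ReLU graph with made-up shapes and simply prints the resulting module.

```python
import tvm
from tvm import relay

# Graph inputs (shapes are assumptions for illustration).
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")

# Two nodes of the graph: convolution followed by ReLU.
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1))
act = relay.nn.relu(conv)

net = relay.Function(relay.analysis.free_vars(act), act)
mod = tvm.IRModule.from_expr(net)
print(mod)   # prints the computational graph in Relay text form
```
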
Figure 9. Sublinear memory optimization allows users to train a 1000-layer ImageNet ResNet on a single GPU.

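The mechanism behind Figure 9 is recomputation (gradient checkpointing): keep roughly every sqrt(N)-th activation and recompute the rest during the backward pass, trading one extra forward pass for sublinear activation memory. The numbers below are purely illustrative back-of-the-envelope figures, not measurements.

```python
import math

def activation_memory(num_layers, bytes_per_activation):
    """Compare storing every activation with sqrt(N) checkpointing."""
    store_all = num_layers * bytes_per_activation                            # O(N)
    segment = int(math.ceil(math.sqrt(num_layers)))
    # Keep num_layers/segment checkpoints, re-materialize one segment at a time.
    checkpointed = (segment + num_layers // segment) * bytes_per_activation  # O(sqrt(N))
    return store_all, checkpointed

store_all, ckpt = activation_memory(num_layers=1000, bytes_per_activation=8 << 20)
print(f"store everything   : {store_all / 2**30:.1f} GiB")
print(f"sqrt(N) checkpoints: {ckpt / 2**30:.2f} GiB (plus one extra forward pass)")
```
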
Figure 10. We build a low-level representation based on index formulas, with additional support for recurrence computation.

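Both halves of Figure 10 can be illustrated with TVM's tensor expression API: te.compute defines an output by an index formula, and te.scan expresses a recurrence. The transpose and cumulative sum below are assumed examples, closely following the public te.scan tutorial.

```python
import tvm
from tvm import te

m, n = te.var("m"), te.var("n")

# Index formula: each output element is an expression of its indices.
A = te.placeholder((m, n), name="A")
B = te.compute((n, m), lambda i, j: A[j, i], name="B")   # transpose by index formula

# Recurrence: cumulative sum over the first axis via te.scan.
X = te.placeholder((m, n), name="X")
state = te.placeholder((m, n), name="state")
init = te.compute((1, n), lambda _, j: X[0, j], name="init")
update = te.compute((m, n), lambda t, j: state[t - 1, j] + X[t, j], name="update")
cumsum = te.scan(init, update, state, inputs=[X], name="cumsum")

s = te.create_schedule(cumsum.op)
print(tvm.lower(s, [X, cumsum], simple_mode=True))
```
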
Figure 11. The algorithms described in TVM are then processed in a scheduling phase to apply transformations that are tailored to the target hardware back-end.

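Figure 11's separation of algorithm and schedule looks like this in code: the matrix-multiply definition stays fixed while tiling, vectorization, and parallelization are applied for a CPU back-end. The split factors and the llvm target are assumptions (an LLVM-enabled TVM build is needed); for a GPU one would instead bind axes to blocks and threads.

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

# Scheduling phase: transform the loops without touching the algorithm above.
s = te.create_schedule(C.op)
io, jo, ii, ji = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=32, y_factor=32)
ko, ki = s[C].split(k, factor=4)
s[C].reorder(io, jo, ko, ii, ki, ji)
s[C].vectorize(ji)        # SIMD on the innermost axis
s[C].parallel(io)         # thread-level parallelism on the outer axis

print(tvm.lower(s, [A, B, C], simple_mode=True))
func = tvm.build(s, [A, B, C], target="llvm")   # compile for the CPU back-end
```
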
Figure 12. Multi-language and Platform Support

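Figure 12 can be made concrete by compiling the same Python-side definition for several back-ends. The target strings below are examples; the ARM and CUDA lines assume the corresponding cross toolchain and CUDA support were enabled when TVM was built.

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

# x86 CPU
s = te.create_schedule(B.op)
cpu_mod = tvm.build(s, [A, B], target="llvm")

# Raspberry Pi class ARM CPU, cross-compiled on the host
arm_mod = tvm.build(s, [A, B], target="llvm -mtriple=armv7l-linux-gnueabihf")

# CUDA GPU (needs a thread binding before building)
s_gpu = te.create_schedule(B.op)
bx, tx = s_gpu[B].split(B.op.axis[0], factor=64)
s_gpu[B].bind(bx, te.thread_axis("blockIdx.x"))
s_gpu[B].bind(tx, te.thread_axis("threadIdx.x"))
gpu_mod = tvm.build(s_gpu, [A, B], target="cuda")
```
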
Figure 13. Remote Deployment and Execution

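Figure 13 corresponds to TVM's RPC workflow: export a compiled module on the host, upload it to a device running an RPC server, then load and time it remotely. The sketch follows the public cross-compilation/RPC tutorial; the address and port are hypothetical, and the plain llvm target stands in for a device-specific one such as the ARM triple above.

```python
import numpy as np
import tvm
from tvm import te, rpc
from tvm.contrib import utils

# Build a trivial kernel for the device (plain "llvm" here so the sketch can
# also run against a local RPC server; a real Pi would use the ARM triple).
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
func = tvm.build(s, [A, B], target="llvm", name="add_one")

temp = utils.tempdir()
path = temp.relpath("add_one.tar")
func.export_library(path)

# Connect to the RPC server on the device, upload, load, and run remotely.
remote = rpc.connect("192.168.1.42", 9090)     # hypothetical address and port
remote.upload(path)
rfunc = remote.load_module("add_one.tar")

dev = remote.cpu(0)
a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
rfunc(a, b)

# Time the kernel on the device itself, as in the benchmarks below.
evaluator = rfunc.time_evaluator(rfunc.entry_name, dev, number=10)
print("remote mean time: %g s" % evaluator(a, b).mean)
```
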
Table 1. Raspberry Pi results

Figure 14. GPU Results
