An Introduction to Some TensorRT Optimization Techniques

 Figure 1. A quantizable AveragePool layer (in blue) is fused with a DQ layer and a Q layer. All three layers are replaced by a quantized AveragePool layer (in green).
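The fusion in Figure 1 works because average pooling commutes with the surrounding DQ and Q operators: the pooling can be carried out directly on INT8 data and requantized once at the end. A minimal numpy sketch of this equivalence (the scales, tensor shape, and the 2x2 pooling window are made up for illustration):

```python
import numpy as np

def quantize(x, scale):
    # Q: float -> INT8 with a per-tensor scale
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    # DQ: INT8 -> float
    return q.astype(np.float32) * scale

x = np.random.randn(1, 8, 4, 4).astype(np.float32)
s_in, s_out = 0.05, 0.04                 # made-up Q/DQ scales

q_in = quantize(x, s_in)

# Unfused reference: DQ -> float AveragePool (2x2) -> Q
ref_float = dequantize(q_in, s_in).reshape(1, 8, 2, 2, 2, 2).mean(axis=(3, 5))
ref = quantize(ref_float, s_out)

# Fused "quantized AveragePool": pool in the integer domain, requantize once
acc = q_in.astype(np.int32).reshape(1, 8, 2, 2, 2, 2).sum(axis=(3, 5))
fused = np.clip(np.round(acc * (s_in / (4 * s_out))), -128, 127).astype(np.int8)

print(np.abs(ref.astype(np.int32) - fused.astype(np.int32)).max())  # 0 (up to rounding)
```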

 

 Figure 2. An illustration depicting a DQ forward-propagation and Q backward-propagation.
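A common way to read Figure 2: during quantization-aware training, a Q/DQ pair acts as "fake quantization". The forward pass quantizes and immediately dequantizes, while the backward pass uses the straight-through estimator, passing the gradient through unchanged except where the input fell outside the representable range. A minimal PyTorch sketch of this idea (this is not TensorRT code; the scale is a fixed, made-up constant):

```python
import torch

class FakeQuant(torch.autograd.Function):
    """Q followed by DQ in the forward pass; straight-through estimator backward."""

    @staticmethod
    def forward(ctx, x, scale):
        q = torch.clamp(torch.round(x / scale), -128, 127)
        ctx.save_for_backward(x, torch.tensor(scale))
        return q * scale                       # dequantized ("fake-quantized") output

    @staticmethod
    def backward(ctx, grad_out):
        x, scale = ctx.saved_tensors
        # Pass the gradient through unchanged inside the quantization range,
        # zero it where the input was clipped.
        in_range = (x >= -128 * scale) & (x <= 127 * scale)
        return grad_out * in_range.to(grad_out.dtype), None

x = torch.randn(4, requires_grad=True)
y = FakeQuant.apply(x, 0.1)
y.sum().backward()
print(x.grad)
```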

 

 Figure 3. Two examples of how TensorRT fuses convolutional layers. On the left, only the inputs are quantized. On the right, both inputs and output are quantized.
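The difference between the two graphs in Figure 3 is where requantization happens. When only the inputs carry Q/DQ pairs, the fused convolution accumulates in INT32 and emits floating-point; when the output Q is also fused, the convolution requantizes and emits INT8 directly. A rough numpy sketch of the arithmetic for a single output element (per-tensor scales, no bias, all values made up):

```python
import numpy as np

s_x, s_w, s_y = 0.02, 0.01, 0.05    # activation, weight, and output scales (made up)
x_q = np.random.randint(-128, 128, size=64, dtype=np.int32)   # INT8 values, stored widened
w_q = np.random.randint(-128, 128, size=64, dtype=np.int32)   # INT8 weights, stored widened

acc = np.dot(x_q, w_q)              # INT32 accumulation inside the fused convolution

# Left of Figure 3: only the inputs are quantized -> dequantize and output float
y_float = acc * (s_x * s_w)

# Right of Figure 3: the output Q is also fused -> requantize and output INT8
y_int8 = np.clip(np.round(acc * (s_x * s_w / s_y)), -128, 127).astype(np.int8)

print(y_float, y_int8)
```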

 

 Figure 4. Example of a linear operation followed by an activation function.
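Figure 4 shows the pattern that pointwise/epilogue fusion targets: the activation is applied in the epilogue of the linear operation rather than in a separate kernel, so the intermediate result never makes a round trip to memory. In plain numpy the (unfused) pattern is simply:

```python
import numpy as np

W = np.random.randn(16, 32).astype(np.float32)
b = np.random.randn(16).astype(np.float32)
x = np.random.randn(32).astype(np.float32)

# Unfused: a GEMM kernel followed by a separate ReLU kernel.
# A fused kernel evaluates the same expression in a single pass over the output.
y = np.maximum(x @ W.T + b, 0.0)
```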

 

 Figure 5. Batch normalization is fused with convolution and ReLU while keeping the same execution order as defined in the pre-fusion network. There is no need to simulate BN-folding in the training network.
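Because batch normalization at inference time is an affine transform of the convolution output, it can be folded into the convolution's weights and bias, and the ReLU then fuses as the epilogue; this is why BN-folding does not need to be simulated in the training network. A small numpy sketch of the folding (shapes and values are illustrative):

```python
import numpy as np

cout, cin, k = 8, 3, 3
W = np.random.randn(cout, cin, k, k).astype(np.float32)
b = np.random.randn(cout).astype(np.float32)

# Learned BN parameters and running statistics (per output channel)
gamma = np.random.rand(cout).astype(np.float32)
beta = np.random.randn(cout).astype(np.float32)
mean = np.random.randn(cout).astype(np.float32)
var = np.random.rand(cout).astype(np.float32)
eps = 1e-5

# Fold BN into the convolution: y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
scale = gamma / np.sqrt(var + eps)
W_folded = W * scale[:, None, None, None]
b_folded = (b - mean) * scale + beta

# At runtime the fused layer computes relu(conv(x, W_folded) + b_folded),
# matching conv -> BN -> ReLU from the pre-fusion network.
```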

 

 Figure 6. The precision of xf1 is floating-point, so the output of the fused convolution is limited to floating-point, and the trailing Q-layer cannot be fused with the convolution.

 

 Figure 7. When xf1 is quantized to INT8, the output of the fused convolution is also INT8, and the trailing Q-layer is fused with the convolution.
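Figures 6 and 7 make the same point from opposite sides: the trailing Q can be fused into the convolution only if every consumer of the convolution output accepts INT8. A hypothetical sketch of the two graph variants using ONNX QuantizeLinear/DequantizeLinear nodes (tensor and node names are made up, and this is not a complete, checkable model):

```python
from onnx import helper

# Figure 6: the conv output y_f also feeds a float consumer (xf1), so the
# QuantizeLinear must stay a separate node and the conv has to emit float.
nodes_fig6 = [
    helper.make_node("Conv", ["x_dq", "w_dq"], ["y_f"]),
    helper.make_node("QuantizeLinear", ["y_f", "s_y", "zp_y"], ["y_q"]),
    helper.make_node("Relu", ["y_f"], ["xf1"]),   # hypothetical float consumer
]

# Figure 7: the xf1 branch is reached through its own DequantizeLinear, so the
# only consumer of y_f is the Q, and TensorRT can fuse it into the convolution.
nodes_fig7 = [
    helper.make_node("Conv", ["x_dq", "w_dq"], ["y_f"]),
    helper.make_node("QuantizeLinear", ["y_f", "s_y", "zp_y"], ["y_q"]),
    helper.make_node("DequantizeLinear", ["y_q", "s_y", "zp_y"], ["xf1_in"]),
    helper.make_node("Relu", ["xf1_in"], ["xf1"]),
]
```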

 

 Figure 8. An example of quantizing a quantizable-operator. An element-wise addition operator is fused with the input DQ operators and the output Q operator.
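For a quantizable element-wise addition, the two input DQ operators and the output Q operator collapse into a single INT8 kernel: each input is rescaled from its own scale to the output scale and the sum is rounded back to INT8. A numpy sketch with made-up scales:

```python
import numpy as np

s_a, s_b, s_y = 0.03, 0.06, 0.08        # input and output Q/DQ scales (made up)
a_q = np.random.randint(-128, 128, size=10, dtype=np.int32)
b_q = np.random.randint(-128, 128, size=10, dtype=np.int32)

# Unfused reference: DQ(a) + DQ(b) in float, then Q with the output scale
ref = np.clip(np.round((a_q * s_a + b_q * s_b) / s_y), -128, 127).astype(np.int8)

# Fused quantized add: rescale each INT8 input to the output scale, round once
fused = np.clip(np.round(a_q * (s_a / s_y) + b_q * (s_b / s_y)), -128, 127).astype(np.int8)

print(np.array_equal(ref, fused))       # identical up to float rounding
```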

 

 Figure 9. An example of suboptimal quantization fusions: contrast the suboptimal fusion in A and the optimal fusion in B. The extra pair of Q/DQ operators (highlighted with a glowing-green border) forces the separation of the convolution operator from the element-wise addition operator.
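The takeaway from Figure 9 is that the placement of Q/DQ pairs in the training graph decides which fusions TensorRT can perform: an extra Q/DQ pair between the convolution and the element-wise addition (case A) pushes the addition out of the convolution kernel, while leaving that path unquantized (case B) lets the addition fuse into the convolution's epilogue. A hypothetical residual-block sketch in PyTorch, with fake_quant standing in for an explicit Q/DQ pair (the helper and all scales are made up):

```python
import torch
import torch.nn.functional as F

def fake_quant(x, scale):
    # Stands in for an explicit Q/DQ pair in the training graph (made-up helper)
    return torch.clamp(torch.round(x / scale), -128, 127) * scale

def residual_block_optimal(x, w, s_x=0.05, s_w=0.01):
    # Case B (optimal): only the convolution inputs are quantized; the element-wise
    # add consumes the convolution output directly and can fuse into the conv kernel.
    y = F.conv2d(fake_quant(x, s_x), fake_quant(w, s_w), padding=1)
    return F.relu(y + x)

def residual_block_suboptimal(x, w, s_x=0.05, s_w=0.01, s_y=0.1):
    # Case A (suboptimal): the extra Q/DQ pair between the conv and the add forces
    # TensorRT to keep them as separate kernels.
    y = F.conv2d(fake_quant(x, s_x), fake_quant(w, s_w), padding=1)
    return F.relu(fake_quant(y, s_y) + x)
```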

 

Figure 10. An example in which the scales of Q1 and Q2 are compared for equality; if they are equal, they are allowed to propagate backward. If the engine is refitted with new values for Q1 and Q2 such that Q1 != Q2, an exception aborts the refitting process.

 

Reference:

 

 https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html

posted @ 2021-12-13 05:48  吴建明wujianming