a new idea
RD performance:
Combine the MLCC model from image recompression with the attention mechanism in the autoencoder.
The Devil Is in the Details: Window-based Attention for Image Compression (the full attention mechanism could be swapped for a ViT)
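A minimal PyTorch sketch of the window-based attention idea, not the paper's exact module (window size, channel count, and head count here are assumptions): self-attention is computed only inside non-overlapping windows of the feature map, which avoids the quadratic cost of full attention.

import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Self-attention restricted to non-overlapping windows of the feature map."""
    def __init__(self, dim=192, window=8, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W), H and W divisible by window
        b, c, h, w = x.shape
        ws = self.window
        # partition into (B * num_windows, ws*ws, C) token sequences
        t = x.reshape(b, c, h // ws, ws, w // ws, ws)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        out, _ = self.attn(t, t, t)             # attention only inside each window
        # undo the partition back to (B, C, H, W)
        out = out.reshape(b, h // ws, w // ws, ws, ws, c)
        out = out.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return x + out                          # residual connection

y = WindowAttention()(torch.randn(1, 192, 32, 48))
print(y.shape)                                  # torch.Size([1, 192, 32, 48])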
Joint Global and Local Hierarchical Priors for Learned Image Compression (CNN + Transformer)
Do pruning with importance maps:
Learning Convolutional Networks for Content-weighted Image Compression
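A minimal sketch of the importance-map idea under simplified assumptions of my own (the function name and shapes are made up for illustration): a learned per-pixel importance value in [0, 1] decides how many latent channels stay active at that location, so complex regions receive more bits than smooth ones.

import torch

def importance_mask(imp, channels):
    """imp: (B, 1, H, W) in [0, 1]  ->  binary mask of shape (B, channels, H, W)."""
    idx = torch.arange(channels, device=imp.device).view(1, channels, 1, 1)
    keep = (imp * channels).ceil()              # number of channels kept at each pixel
    return (idx < keep).float()

latent = torch.randn(1, 32, 4, 4)
imp = torch.rand(1, 1, 4, 4)                    # would come from a learned sub-network
masked = latent * importance_mask(imp, 32)      # smooth regions keep fewer active channels
print(masked.shape)                             # torch.Size([1, 32, 4, 4])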
Decoding speed:
《Channel-wise Autoregressive Entropy Models For Learned Image Compression》
Proposes channel-wise context decoding.
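A rough sketch of the channel-conditioning idea, assuming a simplified setup (four equal slices and a single 1x1 conv as the parameter network are simplifications of mine): the Gaussian parameters of slice i are predicted from the hyperprior features plus all previously decoded slices, so every spatial position can be processed in parallel.

import torch
import torch.nn as nn

class ChannelContext(nn.Module):
    """Predict entropy parameters of each channel slice from the hyperprior + earlier slices."""
    def __init__(self, latent_ch=192, num_slices=4, hyper_ch=192):
        super().__init__()
        self.num_slices = num_slices
        step = latent_ch // num_slices
        self.param_nets = nn.ModuleList(
            nn.Conv2d(hyper_ch + i * step, 2 * step, kernel_size=1)
            for i in range(num_slices)
        )

    def forward(self, y, hyper):                # y: (B, latent_ch, H, W), hyper: (B, hyper_ch, H, W)
        slices = y.chunk(self.num_slices, dim=1)
        decoded, params = [], []
        for i, net in enumerate(self.param_nets):
            ctx = torch.cat([hyper] + decoded, dim=1)       # hyperprior + already-decoded slices
            mean, scale = net(ctx).chunk(2, dim=1)
            y_hat = torch.round(slices[i] - mean) + mean    # quantize the residual around the mean
            decoded.append(y_hat)
            params.append((mean, scale))
        return torch.cat(decoded, dim=1), params

model = ChannelContext()
y_hat, _ = model(torch.randn(2, 192, 16, 16), torch.randn(2, 192, 16, 16))
print(y_hat.shape)                              # torch.Size([2, 192, 16, 16])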
《Checkerboard context model for efficient learned image compression》
Proposes the checkerboard context model.
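A minimal helper (mine, not the paper's code) showing the checkerboard split: anchor positions are coded first from the hyperprior alone, then the non-anchor positions are coded with a context convolution over the already-decoded anchors, so decoding takes two parallel passes instead of a pixel-by-pixel scan.

import torch

def checkerboard_masks(h, w):
    """Return the anchor mask and its complement for an H x W latent grid."""
    grid = torch.arange(h).view(h, 1) + torch.arange(w).view(1, w)
    anchor = (grid % 2 == 0).float()            # one colour of the checkerboard
    return anchor, 1.0 - anchor

anchor, non_anchor = checkerboard_masks(4, 4)
print(anchor)
# pass 1: code latent * anchor with hyperprior-only parameters (fully parallel)
# pass 2: code latent * non_anchor with parameters from a conv over the decoded anchors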
《ELIC: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding》
Proposes unevenly grouped channel context decoding and combines it with the checkerboard model.
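A minimal sketch of the uneven grouping; the group sizes 16/16/32/64/192 over 320 latent channels are my reading of the commonly cited ELIC configuration, so treat them as an assumption: early groups are small and cheap to condition on, later groups are large, and each group can additionally be coded with the two checkerboard passes sketched above.

import torch

GROUPS = [16, 16, 32, 64, 192]                  # uneven slice sizes over 320 latent channels

def split_uneven(y):
    """y: (B, 320, H, W) -> list of channel groups, coded one after another."""
    return list(torch.split(y, GROUPS, dim=1))

groups = split_uneven(torch.randn(1, 320, 8, 8))
print([g.shape[1] for g in groups])             # [16, 16, 32, 64, 192]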
《High-Efficiency Lossy Image Coding Through Adaptive Neighborhood Information Aggregation》
Introduces an attention mechanism in the transform stage for feature extraction, and uses a multi-stage context model in the decoding stage (learnable channel grouping + an improved checkerboard model).
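A small sketch of a multi-stage decode schedule combining the two ideas above; this is my own illustration, not this paper's implementation (which adds learnable grouping and an improved checkerboard): the outer loop walks the channel groups in order, the inner loop runs the two checkerboard passes inside each group.

import torch

def decode_schedule(y, groups, anchor):
    """Show which latent positions become available at each (group, checkerboard-pass) step."""
    decoded = torch.zeros_like(y)
    start = 0
    for g in groups:                            # stage 1: walk the channel groups in order
        sl = slice(start, start + g)
        for mask in (anchor, 1.0 - anchor):     # stage 2: two checkerboard passes per group
            # a real codec would predict entropy parameters from `decoded` here
            decoded[:, sl] += y[:, sl] * mask
        start += g
    return decoded

y = torch.randn(1, 320, 8, 8)
grid = torch.arange(8).view(8, 1) + torch.arange(8).view(1, 8)
out = decode_schedule(y, [16, 16, 32, 64, 192], (grid % 2 == 0).float())
print(torch.allclose(out, y))                   # True: every position is coded exactly once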
QARV: Quantization-Aware ResNet VAE for Lossy Image Compression
A model that shares its encoding/decoding modules; a multi-level (hierarchical) image compression approach.
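A heavily simplified sketch of the multi-level idea only, not QARV's actual architecture (channel counts, the per-level latent shape, and the block design are assumptions): several latent levels refine a shared decoder feature coarse-to-fine before the final reconstruction.

import torch
import torch.nn as nn

class TinyHierarchicalDecoder(nn.Module):
    """Coarse-to-fine refinement of a shared feature map, driven by one latent per level."""
    def __init__(self, levels=3, ch=64, z_ch=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Conv2d(ch + z_ch, ch, kernel_size=3, padding=1) for _ in range(levels)
        )
        self.to_image = nn.Conv2d(ch, 3, kernel_size=3, padding=1)

    def forward(self, feat, latents):           # latents: one (B, z_ch, H, W) code per level
        for block, z in zip(self.blocks, latents):
            feat = feat + block(torch.cat([feat, z], dim=1))   # each level refines the shared feature
        return self.to_image(feat)

dec = TinyHierarchicalDecoder()
img = dec(torch.zeros(1, 64, 16, 16), [torch.randn(1, 4, 16, 16) for _ in range(3)])
print(img.shape)                                # torch.Size([1, 3, 16, 16])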