In this paper, we propose a novel approach to improving the efficiency of deep neural networks (DNNs) in image processing tasks. We introduce two main techniques: adaptive scale interpolation and switchable endpoint mode. The former mitigates the effect of outliers by adjusting the interpolation scale according to the level of sparsity, while the latter provides a flexible way to balance accuracy and compression by switching between minimum and maximum endpoints.
To demonstrate the effectiveness of these techniques, we applied them to several image processing tasks, including classification, semantic segmentation, super-resolution, and others. Our experiments show that the proposed methods significantly reduce the bitrate while maintaining model accuracy, and for some tasks our approach outperforms existing state-of-the-art methods in both efficiency and accuracy.
To better understand these techniques, let’s break them down:
Adaptive Scale Interpolation
Imagine you have a toy box filled with blocks of different sizes. Each block represents a small portion of an image that has been processed by a DNN. Now imagine you want to take a picture of the entire toy box, but you only have a small camera that can capture one block at a time. The camera will capture only the biggest blocks (i.e., the most important parts of the image) and ignore the smaller ones (i.e., the less important parts). This is similar to how DNNs prioritize information when processing images.
In a traditional scheme, each block of values is encoded with a fixed interpolation scale, regardless of the block's content. This can be inefficient, especially for sparse blocks (i.e., blocks with few non-zero elements), where a single outlier stretches the scale and crowds the remaining values into only a few quantization levels. To address this issue, we propose adaptive scale interpolation, which adjusts the interpolation scale based on the level of sparsity in each block. By doing so, we can reduce the bitrate of the DNN without sacrificing accuracy.
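To make this concrete, the sketch below quantizes one block of values with a per-block interpolation scale that is tightened when the block is sparse, so a lone outlier does not dominate the quantization range. It is only a minimal illustration of the idea, not the exact algorithm from the paper: the zero-based sparsity measure, the percentile clipping rule, the sparsity_threshold, and the 16-level grid are all assumptions made for the example.

```python
import numpy as np

def quantize_block(block, num_levels=16, sparsity_threshold=0.5):
    """Quantize one block with an interpolation scale adapted to its sparsity.

    Illustrative sketch only: the threshold, percentile rule, and level count
    are assumptions, not the paper's exact parameters.
    """
    sparsity = np.mean(block == 0.0)           # fraction of zero elements
    lo, hi = float(block.min()), float(block.max())

    # For highly sparse blocks, clip the range so a single large outlier
    # does not stretch the scale and crowd all other values into one level.
    if sparsity > sparsity_threshold:
        bound = float(np.percentile(np.abs(block), 99))
        lo, hi = max(lo, -bound), min(hi, bound)

    scale = (hi - lo) / (num_levels - 1) if hi > lo else 1.0
    codes = np.clip(np.round((block - lo) / scale), 0, num_levels - 1)
    return codes.astype(np.uint8), lo, scale

def dequantize_block(codes, lo, scale):
    """Reconstruct approximate values by interpolating from the stored endpoint."""
    return lo + codes.astype(np.float32) * scale
```

In practice the blocks would be small tiles of a weight tensor or feature map, encoded and decoded block by block, with the clipping rule chosen to trade reconstruction error against bitrate.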
Switchable Endpoint Mode
Now imagine your toy box is filled with blocks of varying sizes. Some are very big, while others are very small. When building a tower with these blocks, you may want to place the biggest blocks at the bottom and gradually decrease their size as you go up; that way, the tower is more stable and less likely to collapse.
In a traditional scheme, every block is encoded with the same fixed choice of endpoints (i.e., the minimum and maximum values of the block). However, this can be wasteful for sparse content. To address this issue, we propose switchable endpoint mode, which lets each block use either the maximum or the minimum endpoint, depending on its level of sparsity. By doing so, we can reduce the bitrate of the DNN without sacrificing accuracy.
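As a rough sketch of this idea, the following example stores a single endpoint per block and switches between anchoring the quantization grid at the block minimum (so values equal to the minimum, typically the many zeros in a sparse block, are represented exactly) or at the block maximum (so the largest values are represented exactly). The sparsity-based switching rule, the threshold, and the level count are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def encode_block_switchable(block, num_levels=16, sparsity_threshold=0.5):
    """Encode one block with a switchable endpoint (min or max) plus a scale.

    Minimal sketch under assumed parameters; the real selection rule may differ.
    """
    sparsity = np.mean(block == 0.0)
    lo, hi = float(block.min()), float(block.max())
    scale = (hi - lo) / (num_levels - 1) if hi > lo else 1.0

    if sparsity > sparsity_threshold:
        # Sparse block: anchor the grid at the minimum endpoint so values equal
        # to the minimum (often the many zeros) land exactly on level 0.
        mode, endpoint = "min", lo
        codes = np.round((block - lo) / scale)
    else:
        # Dense block: anchor the grid at the maximum endpoint instead, so
        # values equal to the maximum land exactly on level 0.
        mode, endpoint = "max", hi
        codes = np.round((hi - block) / scale)

    return mode, endpoint, scale, np.clip(codes, 0, num_levels - 1).astype(np.uint8)

def decode_block_switchable(mode, endpoint, scale, codes):
    """Interpolate away from the stored endpoint to reconstruct the block."""
    step = codes.astype(np.float32) * scale
    return endpoint + step if mode == "min" else endpoint - step
```

Selecting the endpoint per block in this way keeps the per-block storage the same (one endpoint, one scale, and the codes) while letting the encoder adapt to the block's content, which is the flexibility the switchable mode is intended to provide.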
In summary, our proposed techniques improve the efficiency of deep neural networks in image processing tasks by adaptively adjusting the interpolation scale and by switching between endpoints on a per-block basis. Together, these techniques mitigate the effect of outliers and preserve model accuracy while reducing the bitrate.