CompressGAN: A Generative Adversarial Network-Based Framework for Learned Image Compression
Abstract
With the proliferation of high-resolution digital content, the need for effective image compression has grown considerably. Traditional codecs such as JPEG and PNG struggle to balance compression rate against image quality, particularly at low bitrates. We present CompressGAN, a novel learned image compression framework based on Generative Adversarial Networks (GANs). The proposed architecture pairs an encoder-decoder network with a discriminator, allowing it to learn compact latent representations while preserving image quality. Unlike prior methods, CompressGAN uses adversarial training to identify perceptually important features, producing visually convincing reconstructions even at high compression ratios. The encoder maps the image data to a compact latent code, which is then quantised and entropy-coded for efficient storage. The decoder reconstructs the image from this compressed representation, guided by both a pixel-wise reconstruction loss and an adversarial loss from the discriminator. To improve realism and reduce artefacts, we add a perceptual loss computed from a pretrained feature-extractor network. Experiments on standard image datasets show that CompressGAN outperforms traditional codecs and several recent learned compression models in terms of PSNR, SSIM, and perceptual quality metrics. Moreover, our approach preserves semantic consistency and visual detail better than the baselines, especially at low bitrates. CompressGAN is trainable end-to-end and can be adapted to different compression rates by varying the quantisation levels or latent-space dimensionality. The framework is therefore well suited to storage- and bandwidth-constrained applications such as mobile imaging, surveillance systems, and online content delivery.
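The pipeline described above can be sketched minimally: a latent code is uniformly quantised before entropy coding, and the decoder is trained against a weighted combination of pixel-wise, adversarial, and perceptual losses. The quantiser design and the loss weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def quantize(latent, levels=16):
    """Uniformly quantise a latent tensor to `levels` discrete values in [-1, 1].
    (Hypothetical quantiser; the paper does not specify its exact scheme.)"""
    latent = np.clip(latent, -1.0, 1.0)
    step = 2.0 / (levels - 1)
    return np.round((latent + 1.0) / step) * step - 1.0

def total_loss(l_rec, l_adv, l_perc, w_rec=1.0, w_adv=0.1, w_perc=0.01):
    """Weighted sum of the pixel-wise reconstruction loss, the adversarial
    loss from the discriminator, and the perceptual loss from a pretrained
    feature extractor. The weights here are illustrative, not from the paper."""
    return w_rec * l_rec + w_adv * l_adv + w_perc * l_perc
```

Increasing `levels` reduces quantisation error at the cost of a higher bitrate, which is one way the abstract's "different compression rates" could be realised.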