
Title: Unsupervised image transformation for long wave infrared and visual image matching using two channel convolutional autoencoder network

Authors: Kavitha Kuppala; Sandhya Banda; S. Sagar Imambi

Addresses: Department of Computer Science and Engineering, K.L. University, Guntur, Andhra Pradesh, India; Department of Computer Science and Engineering, Maturi Venkata Subba Rao Engineering College, Hyderabad, Telangana, India; Department of Computer Science and Engineering, K.L. University, Guntur, Andhra Pradesh, India

Abstract: Pixel-level matching of multi-spectral images is an important precursor to a wide range of applications. Finding similarity between visual and thermal image regions requires an efficient feature representation that can address the inherently dissimilar characteristics of acquisition by the respective sensors. The lack of sufficient benchmark datasets of corresponding visual and LWIR images hinders the training of supervised learning approaches such as CNNs. To address both the nonlinear variations between the modalities and the unavailability of large training data, we propose a novel two-channel, non-weight-sharing convolutional autoencoder architecture that computes similarity using encodings of the image regions. One channel generates an efficient representation of the visible image patch, while the second transforms an infrared patch to the corresponding visual region using the encoded representation. Results are reported by computing patch similarity using representations generated from various encoder architectures, evaluated on two datasets.
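The architecture described in the abstract can be sketched in code. Below is a minimal sketch, assuming PyTorch, 64x64 single-channel patches, and illustrative layer sizes (none of which are specified in the abstract): two convolutional autoencoders with separate, non-shared weights, one reconstructing the visible patch and one transforming the LWIR patch toward its visible counterpart. The cosine similarity over flattened encodings is an assumption for illustration; the paper evaluates measures such as SSIM, MSE, PSNR, and EMD.

```python
# A minimal sketch (not the authors' released code) of a two-channel,
# non-weight-sharing convolutional autoencoder as described in the
# abstract. Patch size, channel widths, and the similarity score on
# encodings are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Downsampling encoder stage: 3x3 conv + ReLU + stride-2 pooling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

def deconv_block(in_ch, out_ch):
    # Upsampling decoder stage: transposed conv + ReLU.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
        nn.ReLU(inplace=True),
    )

class ChannelCAE(nn.Module):
    """One channel: a plain convolutional autoencoder mapping a
    1x64x64 patch to a latent code and back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, 32),    # 64 -> 32
            conv_block(32, 64),   # 32 -> 16
            conv_block(64, 128),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            deconv_block(128, 64),                               # 8 -> 16
            deconv_block(64, 32),                                # 16 -> 32
            nn.ConvTranspose2d(32, 1, kernel_size=2, stride=2),  # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class TwoChannelCAE(nn.Module):
    """Two channels with separate (non-shared) weights: one reconstructs
    the visible patch, the other transforms an LWIR patch toward its
    corresponding visible patch."""
    def __init__(self):
        super().__init__()
        self.visible = ChannelCAE()    # visible -> visible (representation)
        self.transform = ChannelCAE()  # LWIR -> visible (transformation)

    def forward(self, vis_patch, lwir_patch):
        vis_recon, z_vis = self.visible(vis_patch)
        vis_from_lwir, z_ir = self.transform(lwir_patch)
        return vis_recon, vis_from_lwir, z_vis, z_ir

def encoding_similarity(z_vis, z_ir):
    # Illustrative patch-similarity score on the flattened encodings.
    return F.cosine_similarity(z_vis.flatten(1), z_ir.flatten(1), dim=1)

# Unsupervised training needs only co-registered patch pairs, no labels:
# both channels are driven by a reconstruction loss against the visible patch.
model = TwoChannelCAE()
vis = torch.rand(4, 1, 64, 64)
lwir = torch.rand(4, 1, 64, 64)
vis_recon, vis_from_lwir, z_vis, z_ir = model(vis, lwir)
loss = F.mse_loss(vis_recon, vis) + F.mse_loss(vis_from_lwir, vis)
print(loss.item(), encoding_similarity(z_vis, z_ir))
```

Because the two channels never share weights, each can specialise in its own modality, which is what lets the second channel learn an LWIR-to-visible transformation rather than a shared embedding.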

Keywords: convolutional autoencoder; CAE; multi-spectral image matching; transformation network; two-channel Siamese architecture; structural similarity measure; SSIM; KAIST dataset; mean squared error; MSE; peak signal-to-noise ratio; PSNR; Earth mover's distance; EMD.
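The keywords name four patch-similarity measures used in the evaluation. The following is a small illustrative sketch, not the paper's evaluation script, computing MSE, PSNR, SSIM, and a one-dimensional Earth mover's distance between a transformed LWIR patch and its target visible patch; the scikit-image/SciPy calls and the synthetic patches are assumptions.

```python
# Illustrative computation of the four measures listed in the keywords,
# applied to float patches in [0, 1]. Patch data here is synthetic.
import numpy as np
from scipy.stats import wasserstein_distance
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)

def patch_scores(target, predicted):
    """Return MSE, PSNR, SSIM, and a 1-D Earth mover's distance between
    the empirical grey-level distributions of two patches."""
    return {
        "MSE": mean_squared_error(target, predicted),
        "PSNR": peak_signal_noise_ratio(target, predicted, data_range=1.0),
        "SSIM": structural_similarity(target, predicted, data_range=1.0),
        "EMD": wasserstein_distance(target.ravel(), predicted.ravel()),
    }

target = np.random.rand(64, 64)                 # stand-in visible patch
noisy = target + 0.05 * np.random.randn(64, 64)  # stand-in transformed patch
print(patch_scores(target, np.clip(noisy, 0.0, 1.0)))
```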

DOI: 10.1504/IJCVR.2024.135132

International Journal of Computational Vision and Robotics, 2024 Vol.14 No.1, pp.63 - 83

Received: 23 Sep 2021
Accepted: 24 Jun 2022

Published online: 01 Dec 2023
