Title: Data augmentation using fast converging CIELAB-GAN for efficient deep learning dataset generation

Authors: Amin Fadaeddini; Babak Majidi; Alireza Souri; Mohammad Eshghi

Addresses:
1. Department of Computer Engineering, Khatam University, Tehran, Iran
2. Department of Computer Engineering, Khatam University, Tehran, Iran; Advanced Disaster, Emergency and Rapid Response Simulation (ADERSIM) Artificial Intelligence Group, Faculty of Liberal Arts and Professional Studies, York University, Toronto, Canada
3. Department of Computer Engineering, Khatam University, Tehran, Iran; Department of Computer Engineering, Haliç University, Istanbul, Turkey
4. Computer Engineering Department, Shahid Beheshti University, Tehran, Iran

Abstract: Commercial deep learning applications require large training datasets with many samples from different classes. Generative adversarial networks (GANs) can create new data samples for training these machine learning models. However, the low speed of training GANs in image and multimedia applications is a major constraint. Therefore, this paper proposes a fast-converging GAN, called CIELAB-GAN, for synthesising new data samples for image data augmentation. The CIELAB-GAN simplifies the training process of GANs by transforming the images into the CIELAB colour space, which requires fewer parameters. The CIELAB-GAN then translates the generated grey-scale images into colourised samples using an autoencoder. The experimental results show that the CIELAB-GAN has approximately 20% lower computational complexity than state-of-the-art GAN models and can be trained substantially faster. The proposed CIELAB-GAN can be used to generate new image samples for various deep learning applications.
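The abstract outlines a pipeline in which the GAN operates on the lightness channel of CIELAB images and an autoencoder restores the chroma channels. The following is a minimal sketch of that pipeline, not the authors' implementation: the conversion helper, the layer sizes, and the `ColourisationAutoencoder` architecture are illustrative assumptions, with only the RGB-to-CIELAB split and L-to-ab colourisation step taken from the abstract.

```python
# Minimal sketch (assumed, not the paper's code) of the CIELAB-based pipeline:
# convert RGB images to CIELAB, hand the single lightness (L) channel to the
# GAN, and use a small autoencoder to predict the a/b chroma channels so that
# generated grey-scale samples can be colourised.
import numpy as np
import torch
import torch.nn as nn
from skimage import color


def rgb_to_lab_channels(rgb_image: np.ndarray):
    """Split an RGB image (H, W, 3, values in [0, 1]) into L and ab planes."""
    lab = color.rgb2lab(rgb_image)      # L in [0, 100], a/b roughly in [-128, 127]
    L = lab[..., :1] / 100.0            # normalised lightness, shape (H, W, 1)
    ab = lab[..., 1:] / 128.0           # normalised chroma, shape (H, W, 2)
    return L, ab


class ColourisationAutoencoder(nn.Module):
    """Toy convolutional autoencoder: lightness channel in, a/b channels out."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, L):                     # L: (N, 1, H, W) in [0, 1]
        return self.decoder(self.encoder(L))  # ab: (N, 2, H, W) in [-1, 1]


if __name__ == "__main__":
    rgb = np.random.rand(64, 64, 3)           # stand-in for a training image
    L, ab = rgb_to_lab_channels(rgb)
    model = ColourisationAutoencoder()
    L_batch = torch.from_numpy(L).float().permute(2, 0, 1).unsqueeze(0)
    ab_pred = model(L_batch)                  # predicted chroma planes
    lab_pred = np.concatenate(
        [L * 100.0, ab_pred[0].permute(1, 2, 0).detach().numpy() * 128.0],
        axis=-1)
    rgb_pred = color.lab2rgb(lab_pred)        # colourised reconstruction
    print(rgb_pred.shape)
```

In this setup the GAN only has to model the one-channel lightness distribution, which is where the reduction in parameters and the faster convergence claimed in the abstract would come from; the colourisation stage is trained separately on real L/ab pairs.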

Keywords: generative adversarial networks; GAN; deep learning; machine learning; data augmentation; image processing.

DOI: 10.1504/IJCSE.2023.132152

International Journal of Computational Science and Engineering, 2023 Vol.26 No.4, pp.459 - 469

Received: 08 Jan 2022
Received in revised form: 17 Apr 2022
Accepted: 10 May 2022

Published online: 12 Jul 2023
