Forthcoming Articles

International Journal of Cloud Computing

International Journal of Cloud Computing (IJCC)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are also listed here. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

International Journal of Cloud Computing (8 papers in press)

Regular Issues

  • Neural Network Optimization Combining Feature Filtering and Cross Entropy in Software Defined Network Security   Order a copy of this article
    by Lu Liu 
    Abstract: Software defined networks (SDN) are an emerging network architecture with high flexibility and programmability. However, the centralised control plane of SDN makes it vulnerable to abnormal traffic attacks, while traditional detection methods face challenges such as feature redundancy and data imbalance. To improve the stability and security of SDN, this study proposes a lightweight federated learning-based SDN anomaly detection model that combines a feature filtering module with cross-entropy loss optimisation. The results showed that the loss values of all three models converged after five iterations. The uncompressed federated learning model converged worst, while the two models trained for 20 and 15 rounds converged almost identically; after training, the loss values of all three models settled around 1.0. The proposed abnormal traffic detection model reduced the loss value to around 1.0 during training, maintained recall and accuracy at around 0.99, and maintained precision at around 0.98. The model can effectively identify attack behaviours in the network, raise the level of security protection, and protect user privacy during network use.
    Keywords: Software defined network; Deep learning; Cross entropy; Feature selection; Abnormal traffic.
    DOI: 10.1504/IJCC.2025.10072390
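The abstract does not give the exact loss formulation, but the class-weighted cross-entropy commonly used to counter data imbalance in traffic detection can be sketched as follows (the weights and probabilities here are illustrative, not from the paper):

```python
import math

def weighted_cross_entropy(probs, labels, class_weights):
    """Class-weighted cross-entropy over a batch: up-weighting the rare
    (attack) class counters the data imbalance the abstract mentions."""
    total = 0.0
    for p, y in zip(probs, labels):
        # p is the predicted probability vector, y the true class index
        total += -class_weights[y] * math.log(max(p[y], 1e-12))
    return total / len(labels)

# Two classes: 0 = normal traffic (weight 1.0), 1 = attack (weight 5.0)
loss = weighted_cross_entropy(
    probs=[[0.9, 0.1], [0.2, 0.8]],
    labels=[0, 1],
    class_weights=[1.0, 5.0],
)
```

Misclassifying attack samples now costs five times as much as misclassifying normal ones, pushing the optimiser toward higher recall on the minority class.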
     
  • Cloud Tourism Scene Image Processing Technology Based on K-means and Image Brightness Enhancement Algorithm   Order a copy of this article
    by Xiaomei Sun 
    Abstract: To improve segmentation accuracy and visual quality in cloud tourism images, this study proposes an enhanced framework combining a refined K-means algorithm and a DCGAN-based brightness enhancement network. K-means is improved using Canny edge detection for clearer boundaries, maximum contour suppression to avoid misclassification in bright areas, and weighted cluster updates for better texture handling. Simultaneously, a Convolutional Block Attention Module is added to the DCGAN generator to emphasise critical spatial and channel features. Experiments on COCO and Cityscapes datasets yield segmentation accuracies of 98.53% and 98.04%, with PSNR reaching 33.4 dB and SSIM at 0.93, confirming the method's effectiveness.
    Keywords: K-means; DCGAN; Image processing; Cloud tourism; Image segmentation; CBAM.
    DOI: 10.1504/IJCC.2025.10072836
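The baseline the paper refines is ordinary K-means over pixel values; a minimal 1-D sketch is below. The paper's refinements (Canny-guided boundaries, maximum contour suppression, weighted cluster updates) would modify the assignment and update steps shown here:

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Plain 1-D K-means over pixel intensities: assign each value to the
    nearest centre, then recompute each centre as its cluster mean."""
    rng = random.Random(seed)
    centres = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centres[c]))
            clusters[idx].append(v)
        # Keep the old centre if a cluster ends up empty
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Two well-separated intensity groups -> centres converge near 10 and 200
pixels = [8, 9, 10, 11, 12, 198, 199, 200, 201, 202]
centres = kmeans_1d(pixels, k=2)
```

Segmentation then amounts to relabelling each pixel with its nearest centre; edge guidance, as in the paper, sharpens where those labels are allowed to change.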
     
  • An Image Semantic Understanding Model based on Double-Layer LSTM with Information Gain   Order a copy of this article
    by Chen Li 
    Abstract: In the era of big data, efficient semantic parsing of multi-modal data is crucial for intelligent service systems. However, existing image semantic understanding methods face issues such as cross-modal semantic gaps and insufficient modelling of long-range dependencies. To address these challenges, this paper proposes a novel hybrid network architecture that combines convolutional neural networks, recursive auto-encoders, and a dual-layer long short-term memory (LSTM) network guided by information gain. The proposed model achieves a highest semantic description score of 0.168 and improves type agnostic accuracy and type aware accuracy to 0.932 and 0.901, respectively, outperforming three baseline methods. Compared to the original model, it increases accuracy by 0.016 and 0.010. This architecture effectively bridges cross-modal gaps and enhances feature selection and long-term dependency modelling. The model demonstrates strong potential for deployment in cloud services, semantic web platforms, and virtualised infrastructures to support fault detection, resource optimisation, and intelligent quality management.
    Keywords: Information gain; LSTM; CNN; Image; Semantics; RAE.
    DOI: 10.1504/IJCC.2025.10073180
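The information-gain criterion that guides the LSTM is the standard entropy-reduction measure from feature selection; a self-contained sketch (not the paper's exact implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """Information gain of a discrete feature: parent entropy minus the
    weighted entropy of the label subsets induced by each feature value."""
    n = len(labels)
    subsets = {}
    for f, y in zip(feature_values, labels):
        subsets.setdefault(f, []).append(y)
    remainder = sum(len(s) / n * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

# A feature that perfectly separates the labels has gain == H(labels) == 1 bit
gain = information_gain([0, 0, 1, 1], ["a", "a", "b", "b"])
```

Features (or, in the paper's setting, semantic cues) with higher gain carry more discriminative information and are favoured when deciding what the dual-layer LSTM attends to.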
     
  • Scalable and adaptable hybrid LSTM model with multi-algorithm optimisation for load balancing and task scheduling in dynamic cloud computing environments   Order a copy of this article
    by Mubarak Idris, Mustapha Aminu Bagiwa, Muhammad Abdulkarim, Nurudeen Jibrin, Mardiyya Lawal Bagiwa 
    Abstract: Cloud computing delivers scalable, flexible resources, but dynamic workloads challenge efficient resource management, especially in load balancing and task scheduling. Addressing these challenges is vital for optimal performance, cost efficiency, and meeting growing application demands. This study proposes the MultiOpt_LSTM model, a hybrid approach that integrates long short-term memory (LSTM) networks with multi-algorithm optimisation techniques, including binary particle swarm optimisation (BPSO), genetic algorithm (GA), and simulated annealing (SA). The goal is to optimise resource allocation, reduce response times, and ensure balanced workload distribution across virtual machines. The proposed model is evaluated using both real-world and simulated cloud environments, comparing its performance with state-of-the-art techniques such as ANN-BPSO and heuristic-FSA. Key performance indicators like response time, resource utilisation, and degree of imbalance are used to measure efficiency. Results show that the MultiOpt_LSTM model outperforms competing methods, achieving near-zero imbalance at higher task volumes and demonstrating superior resource utilisation and reduced response times. For example, at 3,000 tasks, the model maintains a balanced distribution, outperforming traditional methods like IBPSO-LBS by a significant margin. While the simulation results are promising, future work will focus on real-world implementations to assess the model's scalability and adaptability in diverse cloud environments.
    Keywords: cloud computing; load balancing; task scheduling; hybrid LSTM model; optimisation algorithms; resource utilisation; response time; degree of imbalance.
    DOI: 10.1504/IJCC.2025.10071475
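The abstract's "degree of imbalance" indicator is not defined there; a common formulation in the load-balancing literature, used here as an assumed sketch, is the spread of per-VM completion times relative to their average:

```python
def degree_of_imbalance(vm_loads):
    """Degree of imbalance (DI): (T_max - T_min) / T_avg over per-VM
    completion times; 0 means a perfectly even workload distribution,
    which is what 'near-zero imbalance at higher task volumes' targets."""
    t_max, t_min = max(vm_loads), min(vm_loads)
    t_avg = sum(vm_loads) / len(vm_loads)
    return (t_max - t_min) / t_avg

balanced = degree_of_imbalance([100, 100, 100])  # all VMs finish together
skewed = degree_of_imbalance([50, 100, 150])     # uneven assignment
```

A scheduler such as MultiOpt_LSTM would aim to drive this value toward zero as the task count grows.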
     
  • GuCA-KFDCN: gull cruise attack optimised hybrid kernel filter enabled deep learning model for attack detection and mitigation in cloud computing environment   Order a copy of this article
    by Yogesh B. Sanap, Pushpalata G. Aher 
    Abstract: In a cloud computing environment, resources are provided as services over the internet, eliminating the need for significant upfront capital expenditure. However, distributed denial of service (DDoS) attacks create a considerable threat to this availability, making detection a critical aspect. These attacks can disrupt access, undermining the trust and reliability of cloud services. The conventional approaches employed for DDoS attack detection pose significant challenges regarding overfitting, computational complexity, and limited generalisability. To mitigate these challenges, this research offers a gull cruise attack optimised hybrid kernel filter enabled deep convolutional neural network (GuCA-KFDCN) model. The hybrid kernel filter integrates three different kernel functions, which effectively capture complex attack patterns. Furthermore, the gull cruise attack optimisation (GuCAO) algorithm refines the model by optimising its parameters, ensuring robust performance. In addition, the GuCAO algorithm effectively chooses optimal key values for oversampling, which improves detection performance. The experimental outcomes show the efficacy of the proposed model in terms of sensitivity of 95.29%, accuracy of 96.84%, and specificity of 97.74% at a training percentage of 80.
    Keywords: deep convolutional neural network; cloud computing; gull cruise attack optimisation; GuCAO; distributed denial of service attack; hybrid kernel filter.
    DOI: 10.1504/IJCC.2025.10072473
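The abstract says the hybrid filter integrates three kernel functions but does not name them; a typical choice (assumed here) is a weighted combination of linear, polynomial, and RBF kernels, which stays a valid kernel because a convex combination of kernels is itself a kernel:

```python
import math

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def polynomial(x, y, degree=2, c=1.0):
    return (linear(x, y) + c) ** degree

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def hybrid_kernel(x, y, weights=(0.3, 0.3, 0.4)):
    """Weighted sum of three kernels: the linear term captures simple
    separable structure, the polynomial and RBF terms capture the more
    complex, non-linear attack patterns the abstract refers to."""
    w1, w2, w3 = weights
    return w1 * linear(x, y) + w2 * polynomial(x, y) + w3 * rbf(x, y)

k_same = hybrid_kernel([1.0, 0.0], [1.0, 0.0])
```

In the paper's setting, the GuCAO metaheuristic would tune parameters such as these weights, `degree`, and `gamma` rather than fixing them by hand.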
     
  • Optimised elliptic curve cryptography for data security in cloud computing utilising the CSLEHO algorithm   Order a copy of this article
    by Najimoddin Khairoddin Shaikh, Rahat Afreen Khan 
    Abstract: Ensuring security and privacy has become one of the most challenging and critical responsibilities for cloud data in recent years. To enhance security, it is vital to protect sensitive data from unauthorised access. As a result, traditional studies on cloud privacy preservation have developed various decryption, key generation, and encryption techniques. However, challenges such as high computational complexity and security concerns persist. One public key encryption method, elliptic curve cryptography (ECC), utilises elliptic curve theory to generate more efficient cryptographic keys for cloud applications. This paper proposes an optimised ECC model for data security in the cloud. In this model, key generation is optimised through an innovative strategy, introducing a new algorithm called the combined sea lion and elephant herding optimisation (CSLEHO) model. Furthermore, the results of the proposed approach are compared to those of existing models.
    Keywords: elliptic curve cryptography; cloud computing; data security; optimal key generation; optimisation; CSLEHO.
    DOI: 10.1504/IJCC.2025.10072309
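The core ECC operation the paper's key generation builds on is scalar multiplication Q = d·G via double-and-add. The sketch below runs on a tiny textbook curve (y² = x³ + 2x + 2 mod 17, generator (5, 1)), deliberately insecure and only for illustration; the CSLEHO optimisation of key parameters is not shown:

```python
def inv_mod(a, p):
    return pow(a, p - 2, p)  # Fermat inverse, valid since p is prime

def ec_add(P, Q, a, p):
    """Point addition on y^2 = x^3 + ax + b over F_p (None = infinity)."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        s = (3 * P[0] * P[0] + a) * inv_mod(2 * P[1], p) % p
    else:
        s = (Q[1] - P[1]) * inv_mod(Q[0] - P[0], p) % p
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P, a, p):
    """Double-and-add: computes k * P, the heart of ECC key generation."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Toy curve y^2 = x^3 + 2x + 2 mod 17, generator G = (5, 1)
a, p, G = 2, 17, (5, 1)
d_a, d_b = 3, 7                       # two private keys
Q_a = scalar_mult(d_a, G, a, p)       # public keys Q = d * G
Q_b = scalar_mult(d_b, G, a, p)
shared_a = scalar_mult(d_a, Q_b, a, p)  # ECDH-style: both sides derive
shared_b = scalar_mult(d_b, Q_a, a, p)  # the same shared point
```

Real deployments use standardised curves with ~256-bit primes; an optimiser like CSLEHO would operate on key selection within such a group, not on the curve arithmetic itself.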
     
  • BERA-CLOUD: resource allocation in cloud computing using a bald eagle optimised spiking neural network   Order a copy of this article
    by Nikhil Kumar Marriwala, Sunita Panda, Priya Dasarwar, Pooja Singh, C. Gnana Kousalya 
    Abstract: In this paper, a novel bald eagle optimised resource allocation scheme for the cloud is developed for effective resource management and dynamic task scheduling in cloud networks. Initially, the user provides a task list that needs to be scheduled and prioritised. The task list is fed into a spiking neural network to identify patterns and prioritise tasks based on their importance. The prioritised tasks are then used for dynamic prediction, which involves forecasting future states or requirements. This also allows the system to adapt to change, since it can estimate future resource needs or modifications in tasks. The predicted tasks are then scheduled by applying the bald eagle search algorithm, which balances exploration and exploitation to find the best solutions within the identified search space. The optimisation process interacts with cloud storage to retrieve and store data.
    Keywords: internet of things; IoT; cloud networks; bald eagle search algorithm; spiking neural network; SNN; virtual machines; cloud computing.
    DOI: 10.1504/IJCC.2025.10073235
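The abstract does not specify the spiking neuron model; a common building block for spiking neural networks, shown here as an assumed sketch, is the leaky integrate-and-fire (LIF) neuron, whose spike timing can encode how urgently a task pattern demands attention:

```python
def lif_simulate(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays by
    `leak` each step, accumulates the input current, and emits a spike
    (resetting to 0) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Steady sub-threshold input accumulates until the neuron fires periodically
train = lif_simulate([0.4] * 6)
```

In a scheduler, stronger or more sustained task features would drive earlier and denser spike trains, giving the network a natural priority signal.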
     
  • A novel dual-purpose metaheuristic based MCDM model with an optimal task scheduling algorithm for cloud computing   Order a copy of this article
    by Malatesh Kamatar, Bindhu P. Madhavi 
    Abstract: The research addresses key issues in cloud computing: how to schedule tasks and select the right virtual machines given the growing demand for virtual resources. An innovative task scheduling model is therefore proposed by integrating multi-criteria decision making (MCDM) with the shortest job first (SJF) technique. A VIKOR-based MCDM approach prioritises tasks by their queuing, task and resource importance, thereby cutting down waiting time. When priorities are equal, the SJF algorithm selects shorter tasks to ensure no conflicts occur. An original metaheuristic algorithm, chaotic artificial bee colony with quantum (CABCQ), is proposed to select the best VMs by considering the time needed to execute, transfer and process tasks. The model improves cloud task assignment and VM usage, boosting performance in a Python CloudSim environment in terms of makespan, latency, user priority, energy consumption, computation time and cost.
    Keywords: cloud computing; task scheduling; multi-criteria decision making; MCDM; shortest job first; SJF; chaotic artificial bee colony with quantum algorithm; CABCQ.
    DOI: 10.1504/IJCC.2025.10073236
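The VIKOR-then-SJF ordering rule described in the abstract reduces to a two-level sort once each task has an MCDM priority score; a minimal sketch (the priority values here stand in for precomputed VIKOR scores, which are not derived in this listing):

```python
def schedule(tasks):
    """Order tasks by MCDM-style priority (lower value = more urgent);
    ties are broken shortest-job-first by execution length, mirroring
    the abstract's rule for equal-priority tasks."""
    return sorted(tasks, key=lambda t: (t["priority"], t["length"]))

tasks = [
    {"id": "t1", "priority": 2, "length": 40},
    {"id": "t2", "priority": 1, "length": 30},
    {"id": "t3", "priority": 1, "length": 10},
]
order = [t["id"] for t in schedule(tasks)]
```

Here t2 and t3 share the top priority, so SJF runs the shorter t3 first; VM selection via CABCQ would then assign each scheduled task to a machine.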