Forthcoming and Online First Articles


International Journal of Cloud Computing (IJCC)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked 'Order a copy of this article' are available for purchase - follow the link to send an email request to purchase.

Online First articles are published online here before they appear in a journal issue. They are fully citable, complete with a DOI, and can be read and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with the Open Access icon are Online First articles. They are freely available and openly accessible to all, without any restriction except those stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

International Journal of Cloud Computing (9 papers in press)

Regular Issues

  • Scalable and Adaptable Hybrid LSTM Model with Multi-Algorithm Optimisation for Load Balancing and Task Scheduling in Dynamic Cloud Computing Environments   Order a copy of this article
    by Mubarak Idris, Mustapha Aminu Bagiwa, Muhammad Abdulkarim, Nurudeen Jibrin, Mardiyya Lawal Bagiwa 
    Abstract: Cloud computing delivers scalable, flexible resources, but dynamic workloads challenge efficient resource management, especially in load balancing and task scheduling. Addressing these challenges is vital for optimal performance, cost efficiency, and meeting growing application demands. This study proposes the MultiOpt_LSTM model, a hybrid approach that integrates long short-term memory (LSTM) networks with multi-algorithm optimisation techniques, including binary particle swarm optimisation (BPSO), genetic algorithm (GA), and simulated annealing (SA). The goal is to optimise resource allocation, reduce response times, and ensure balanced workload distribution across virtual machines. The proposed model is evaluated using both real-world and simulated cloud environments, comparing its performance with state-of-the-art techniques such as ANN-BPSO and heuristic-FSA. Key performance indicators such as response time, resource utilisation, and degree of imbalance are used to measure efficiency. Results show that the MultiOpt_LSTM model outperforms competing methods, achieving near-zero imbalance at higher task volumes and demonstrating superior resource utilisation and reduced response times. For example, at 3,000 tasks, the model maintains a balanced distribution, outperforming traditional methods like IBPSO-LBS by a significant margin. While the simulation results are promising, future work will focus on real-world implementations to assess the model's scalability and adaptability in diverse cloud environments.
    Keywords: Cloud computing; load balancing; task scheduling; hybrid LSTM model; optimization algorithms; resource utilization; response time; degree of imbalance.
    DOI: 10.1504/IJCC.2025.10071475
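
A note on the 'degree of imbalance' indicator cited above: the article does not give its formula here, but a common formulation in the load-balancing literature is DI = (Tmax - Tmin) / Tavg over per-VM execution times. The sketch below is a minimal illustration under that assumption, not code from the paper.

```python
# Illustrative sketch only: a common "degree of imbalance" (DI) formulation
# from the load-balancing literature, NOT code from the article above.
# DI = (T_max - T_min) / T_avg over per-VM total execution times.

def degree_of_imbalance(vm_times):
    """vm_times: list of total execution times, one entry per virtual machine."""
    t_max, t_min = max(vm_times), min(vm_times)
    t_avg = sum(vm_times) / len(vm_times)
    return (t_max - t_min) / t_avg

# Example: a near-balanced schedule yields a DI close to zero.
print(degree_of_imbalance([102.0, 98.5, 100.3, 99.2]))   # ~0.035
print(degree_of_imbalance([150.0, 60.0, 95.0, 95.0]))    # 0.9, heavily imbalanced
```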
     
  • Optimized Elliptic Curve Cryptography for Data Security in Cloud Computing utilising the CSLEHO algorithm   Order a copy of this article
    by Najimoddin Khairoddin Shaikh, Rahat Afreen Khan 
    Abstract: The suitability of pit lake closure option was assessed for ten open-pit quarry mines in Lugoba and Msata, Tanzania. The study integrated hydrological, geochemical and geotechnical assessment. Hydrological assessment addressed current and future water quality for pit lakes. Geochemical characterisation established hazardous elements and potential for acid generation. Geotechnical analysis established prior and current stability of pit walls. Hydrological assessment revealed good water quality, with pit lakes attaining equilibrium in 14 to 126 years, at depths of 18 to 40 metres. Geochemical assessment showed albite, quartz, and calcite as the dominant minerals. Aluminium and iron mobilisation was considered negligible due to absence of acidic conditions. Geotechnical assessment revealed high stability of pit walls before and after formation of pit lakes. The study proved that pit lake closure is a viable and sustainable option for quarry mine sites. This transforms former quarries into water resources for irrigation, livestock and other secondary uses, while mitigating post-mining environmental risks in nearby communities.
    Keywords: Elliptic Curve Cryptography; Cloud Computing; Data Security; Optimal Key Generation; Optimization; CSLEHO.
    DOI: 10.1504/IJCC.2025.10072309
     
  • Neural Network Optimization Combining Feature Filtering and Cross Entropy in Software Defined Network Security   Order a copy of this article
    by Lu Liu 
    Abstract: Software defined networks (SDN) are an emerging network architecture offering high flexibility and programmability. However, the centralised control plane of SDN makes it vulnerable to abnormal traffic attacks, while traditional detection methods face challenges such as feature redundancy and data imbalance. To improve the stability and security of SDN, this study proposes a lightweight federated learning-based SDN anomaly detection model that combines a feature filtering module with a cross-entropy loss function optimisation. The results showed that the loss values of all three models converged after five iterations. The federated learning model without compression converged worst, while the two models trained 20 and 15 times converged almost identically; after training, the loss values of all three models settled at around 1.0. The proposed abnormal traffic detection model reduced the training loss to around 1.0 while maintaining recall and accuracy at around 0.99 and precision at around 0.98, so it can effectively identify attack behaviours in the network, improve the level of security protection, and protect users' privacy during network use.
    Keywords: Software defined network; Deep learning; Cross entropy; Feature selection; Abnormal traffic.
    DOI: 10.1504/IJCC.2025.10072390
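
The entry above describes a lightweight federated learning set-up for SDN anomaly detection. As a generic illustration of the federated-averaging idea only (the authors' model, feature filter, and compression scheme are not reproduced), the sketch below averages hypothetical client parameter vectors weighted by local sample counts.

```python
# Minimal federated-averaging (FedAvg-style) sketch -- an illustration of the
# general idea referenced in the abstract, not the authors' implementation.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average per-client parameter vectors, weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    stacked = np.stack(client_weights)            # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Example with three hypothetical SDN switch "clients".
clients = [np.array([0.2, -1.0, 0.5]),
           np.array([0.4, -0.8, 0.3]),
           np.array([0.1, -1.2, 0.7])]
global_params = federated_average(clients, client_sizes=[1000, 400, 600])
print(global_params)
```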
     
  • GuCA-KFDCN: Gull Cruise Attack Optimised Hybrid Kernel Filter Enabled Deep Learning Model For Attack Detection And Mitigation In Cloud Computing Environment   Order a copy of this article
    by Yogesh B. Sanap, Pushpalata G. Aher 
    Abstract: In a cloud computing environment, resources are provided as services over the internet, eliminating the need for significant upfront capital expenditure. However, Distributed Denial of Service (DDoS) attacks pose a considerable threat to this availability, making detection a critical aspect. These attacks can disrupt access, undermining the trust and reliability of cloud services. The conventional approaches employed for DDoS attack detection face significant challenges regarding overfitting, computational complexity, and limited generalisability. To mitigate these challenges, this research offers a Gull Cruise Attack optimised Hybrid Kernel Filter enabled Deep Convolutional Neural Network (GuCA-KFDCN) model. The utilisation of hybrid kernel filters integrates three different kernel functions, which effectively capture complex attack patterns. Furthermore, the Gull Cruise Attack Optimisation (GuCAO) algorithm refines the performance of the model by optimising its parameters, ensuring robust performance. In addition, the GuCAO algorithm effectively chooses optimal key values for oversampling, which improves detection performance. The experimental outcomes show the efficacy of the proposed model in terms of a sensitivity of 95.29%, an accuracy of 96.84%, and a specificity of 97.74% at a training percentage of 80.
    Keywords: Deep Convolutional Neural Network; Cloud computing; Gull cruise attack optimization; Distributed Denial of Service attack; Hybrid Kernel Filter.
    DOI: 10.1504/IJCC.2025.10072473
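
The abstract states that the hybrid kernel filter integrates three kernel functions but does not say which. As a hedged sketch, the code below combines linear, polynomial, and RBF kernels as a weighted sum, one common way of building such a hybrid; the kernel choice, weights, and parameters are assumptions.

```python
# Hedged sketch of a "hybrid kernel" as a weighted sum of three standard kernels
# (linear, polynomial, RBF). The article does not state which kernels or weights
# it uses; everything below is an assumption for illustration only.
import numpy as np

def hybrid_kernel(x, y, w=(0.4, 0.3, 0.3), degree=2, gamma=0.5):
    linear = np.dot(x, y)
    poly = (np.dot(x, y) + 1.0) ** degree
    rbf = np.exp(-gamma * np.sum((x - y) ** 2))
    return w[0] * linear + w[1] * poly + w[2] * rbf

x = np.array([0.1, 0.7, 0.2])
y = np.array([0.3, 0.5, 0.9])
print(hybrid_kernel(x, y))
```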
     
  • Adaptive online task scheduling algorithm for resource regulation on heterogeneous platforms   Order a copy of this article
    by Yongqing Liu, Fan Yang, Fuqiang Tian, Jun Mou, Bo Hu, Peiyang Wu 
    Abstract: As computing technology advances, resource regulation on heterogeneous platforms has emerged as a key research area for future computing environments. In cloud task scheduling, studies focus on intelligent agent models and performance indicators that balance user experience and cost-effectiveness. Research into deep reinforcement learning and deep deterministic policy gradient (DDPG) algorithms has been conducted, incorporating heterogeneous resource regulation to address the varied needs of different data centres. Key task characteristics include length, average instruction length, and average CPU utilisation, with significant standard deviations. During training, a Poisson distribution parameter with a lambda value of 1 was used, leading to convergence in the loss curve. Although the DDPG algorithm had a slightly higher virtual machine usage cost and an instruction response time of 306.5, it provided notable economic benefits, demonstrating improved management and utilisation of computing resources.
    Keywords: heterogeneous resource regulation; cloud task scheduling; deep reinforcement learning; data centre heterogeneity; computational resource management.
    DOI: 10.1504/IJCC.2025.10070909
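
The abstract notes that training used Poisson-distributed task arrivals with a lambda of 1. A minimal sketch of generating such a synthetic workload is shown below; the task attributes and their ranges are placeholders, not the paper's trace.

```python
# Sketch of a synthetic workload with Poisson task arrivals (lambda = 1 per
# time step), as mentioned in the abstract. Task attributes are placeholders.
import numpy as np

rng = np.random.default_rng(seed=42)

def generate_workload(steps=10, lam=1.0):
    workload = []
    for t in range(steps):
        n_tasks = rng.poisson(lam)          # number of tasks arriving at step t
        for _ in range(n_tasks):
            workload.append({
                "arrival_step": t,
                "length_mi": float(rng.uniform(1e3, 1e5)),   # instructions (hypothetical range)
                "cpu_demand": float(rng.uniform(0.1, 1.0)),  # fraction of one core
            })
    return workload

print(len(generate_workload()), "tasks generated")
```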
     
  • Geo-distributed multi-cloud data centre storage tiering and selection with zero-suppressed binary decision diagrams   Order a copy of this article
    by Brian Lim, Miguel Saavedra, Renzo Tan, Kazushi Ikeda, William Yu 
    Abstract: The exponential growth of data in recent years prompted cloud providers to introduce diverse geo-distributed storage solutions for various needs. The vast amount of storage options, however, presents organisations with a challenge in determining the ideal data placement configuration. The study introduces a novel optimisation algorithm utilising the zero-suppressed binary decision diagram to select the optimal data centre, storage tiers, and cloud provider. The algorithm takes on a holistic approach that considers cost, latency, and high availability, applicable to both geo-distributed on-premise environments and public cloud providers. Furthermore, the proposed methodology leverages the recursive structure of the zero-suppressed binary decision diagram, allowing for the enumeration and ranking of all valid configurations based on total cost. Overall, the study offers flexibility for organisations in addressing specific priorities for cloud storage solutions by providing alternative near-optimal configurations.
    Keywords: cloud provider; data centre; discrete optimisation; storage solution; storage tier; zero-suppressed binary decision diagram; ZDD.
    DOI: 10.1504/IJCC.2025.10071085
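
The entry above enumerates and ranks placement configurations with a zero-suppressed binary decision diagram. The sketch below illustrates only the underlying selection problem, using a naive brute-force enumeration over (provider, region, tier) options filtered by latency and availability and ranked by cost; the providers, prices, and constraints are invented, and the ZDD machinery itself is not reproduced.

```python
# Naive illustration of the selection problem: enumerate (provider, region, tier)
# options, keep those meeting latency/availability constraints, rank by cost.
# The article's method builds a zero-suppressed BDD instead of brute force;
# all providers, numbers, and constraints below are invented for illustration.
from itertools import product

providers = {"cloudA": 0.05, "cloudB": 0.04}               # $/GB-month base price (hypothetical)
regions = {"eu-west": 20, "ap-south": 120}                  # latency to users in ms (hypothetical)
tiers = {"hot": (1.0, 0.9999), "cool": (0.5, 0.999)}        # (price multiplier, availability)

def feasible_configs(max_latency_ms=100, min_availability=0.999):
    configs = []
    for (prov, base), (region, lat), (tier, (mult, avail)) in product(
            providers.items(), regions.items(), tiers.items()):
        if lat <= max_latency_ms and avail >= min_availability:
            configs.append({"provider": prov, "region": region, "tier": tier,
                            "cost_per_gb": base * mult, "latency_ms": lat})
    return sorted(configs, key=lambda c: c["cost_per_gb"])  # cheapest first

for cfg in feasible_configs():
    print(cfg)
```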
     
  • Application of clustering algorithm and cloud computing in IoT data mining   Order a copy of this article
    by Xu Wu 
    Abstract: To improve the accuracy of data mining in internet of things (IoT) systems and shorten mining time, this study first uses the MapReduce cloud computing programming model to optimise the density-based spatial clustering of applications with noise (DBSCAN) algorithm. It then improves current data mining technology based on the optimised algorithm and applies the improved technique to IoT data, raising the efficiency of data mining in the internet of things. The improved algorithm is applied to an IoT monitoring system, showing excellent performance in extracting data features and eliminating noise with a 100% removal rate. The system identifies abnormal data in just 0.9 ms with 100% accuracy. These results demonstrate that the enhanced data mining technique significantly improves mining efficiency, laying a foundation for better service quality and commercial value in IoT applications.
    Keywords: density-based spatial clustering of applications with noise; DBSCAN; MapReduce cloud computing programming model; internet of things; IoT; data mining; internet of things monitoring system.
    DOI: 10.1504/IJCC.2025.10071464
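
The abstract builds on the DBSCAN clustering algorithm to separate noise from normal IoT readings. A minimal, generic sketch using scikit-learn's DBSCAN follows; it is unrelated to the paper's MapReduce-based implementation, and the sensor data are synthetic.

```python
# Minimal DBSCAN sketch: cluster synthetic IoT sensor readings and treat points
# labelled -1 (noise) as anomalies. This is a generic scikit-learn illustration,
# not the MapReduce-based implementation described in the abstract.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
normal = rng.normal(loc=[22.0, 45.0], scale=0.5, size=(200, 2))   # temp (C), humidity (%)
outliers = np.array([[35.0, 10.0], [5.0, 90.0]])                   # injected anomalies
readings = np.vstack([normal, outliers])

labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(readings)
anomalies = readings[labels == -1]
print(f"{len(anomalies)} readings flagged as noise/anomalous")
```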
     
  • Design of IoT data security storage and allocation model based on cloud and mist integration algorithm   Order a copy of this article
    by Keqing Guan, Xianli Kong 
    Abstract: As internet and communication technology evolve, the industrial internet of things (IIoT) has rapidly developed. However, existing IIoT systems struggle to ensure the timely and secure transmission of user data. This study introduces a cloud-fog hybrid network architecture and establishes a latency and data security model for individual users, employing an improved ant colony algorithm to minimise latency under security constraints. For multi-user scenarios, software-defined networks enhance the architecture, and a refined allocation model is developed. Experiments indicate that, at 500 iterations, the root mean square errors (RMSE) of the compared algorithms were 0.51, 0.43, 0.28, and 0.14, respectively. With five users and a data volume of 50 MB, the observed latencies were 24, 22, 18, and 14 seconds, respectively. These findings demonstrate that the proposed method effectively secures data storage and reduces latency in IIoT environments.
    Keywords: industrial internet of things; IIoT; fog computing; cloud computing; data security; time delay; root mean square errors; RMSE.
    DOI: 10.1504/IJCC.2025.10071459
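
The abstract reports an improved ant colony algorithm for minimising latency under security constraints. To illustrate only the textbook building block it starts from, the sketch below computes the standard pheromone/heuristic transition probabilities; the paper's specific improvements and security model are not reproduced.

```python
# Sketch of the standard ant colony transition rule: probability of choosing
# candidate j is proportional to pheromone^alpha * heuristic^beta. This only
# illustrates the textbook building block; the paper's "improved" variant and
# its security constraints are not reproduced here.
import numpy as np

def transition_probabilities(pheromone, latency, alpha=1.0, beta=2.0):
    """Heuristic desirability is 1/latency, so lower-latency nodes are preferred."""
    heuristic = 1.0 / np.asarray(latency, dtype=float)
    scores = np.asarray(pheromone, dtype=float) ** alpha * heuristic ** beta
    return scores / scores.sum()

# Example: three candidate fog/cloud nodes with latencies of 10, 20, and 40 ms.
print(transition_probabilities(pheromone=[1.0, 1.0, 1.0], latency=[10, 20, 40]))
```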
     
  • Modified sorted prioritisation-based task scheduling in cloud computing   Order a copy of this article
    by J. Magelin Mary, D.I. George Amalarethinam 
    Abstract: Cloud computing is commonly used to provide internet-based, pay-per-use, self-service access to scalable, on-demand computing resources. In this model, task scheduling is a major challenge for cost efficiency and resource usage, since it determines how computing activities are arranged to limit expense and resource consumption. This paper introduces modified sorted prioritisation-based task scheduling (MSPTS), a new scheduling method that improves resource allocation and efficiency. MSPTS sorts jobs and resources by priority and properties, then chooses the best resource for each job based on resource wait time, task processing time, and task priority, optimising task execution and resource allocation and enhancing system performance. The method was compared with other scheduling algorithms using CloudSim, a popular cloud simulation toolkit, to evaluate its efficacy. Experiments show that MSPTS greatly outperforms standard scheduling algorithms, improving makespan, cost efficiency, and resource use. These results suggest that MSPTS is a better cloud computing task scheduling solution, improving performance and resource management.
    Keywords: cloud computing; pay-per-use; task scheduling; task priority; resource utilisation; makespan and cost; CloudSim tools; cloud environment.
    DOI: 10.1504/IJCC.2025.10071549
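
The abstract describes MSPTS as sorting jobs by priority and picking, for each job, the resource with the most favourable wait and processing time. The sketch below is a rough greedy policy in that spirit; the paper's exact sorting keys and cost model are not given here, so the details are assumptions.

```python
# Rough sketch of a priority-sorted greedy assignment in the spirit described by
# the abstract: sort tasks by priority, then give each task to the resource with
# the smallest (current wait time + estimated processing time). The paper's exact
# sorting keys and cost model are not specified here, so details are assumptions.

def schedule(tasks, resources):
    """tasks: list of dicts with 'name', 'length', 'priority' (higher = sooner).
    resources: list of dicts with 'name', 'mips'; wait times start at zero."""
    wait = {r["name"]: 0.0 for r in resources}
    plan = []
    for task in sorted(tasks, key=lambda t: t["priority"], reverse=True):
        best = min(resources, key=lambda r: wait[r["name"]] + task["length"] / r["mips"])
        runtime = task["length"] / best["mips"]
        plan.append((task["name"], best["name"], wait[best["name"]] + runtime))
        wait[best["name"]] += runtime
    return plan  # (task, resource, completion time) triples

tasks = [{"name": "t1", "length": 4000, "priority": 2},
         {"name": "t2", "length": 1000, "priority": 5},
         {"name": "t3", "length": 2000, "priority": 1}]
resources = [{"name": "vm1", "mips": 1000}, {"name": "vm2", "mips": 500}]
print(schedule(tasks, resources))
```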