Forthcoming and Online First Articles

International Journal of Computational Science and Engineering (IJCSE)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restrictions except those stated in their respective CC licences.

Register for our alerting service, which notifies you by email when new issues are published online.

International Journal of Computational Science and Engineering (34 papers in press)

Regular Issues

  • Self-supervised learning with split batch repetition strategy for long-tail recognition   Order a copy of this article
    by Zhangze Liao, Liyan Ma, Xiangfeng Luo, Shaorong Xie 
    Abstract: Deep neural networks perform poorly on balanced test sets when the training data follow a long-tailed distribution. Existing works improve long-tail recognition performance by changing the training strategy, expanding the data, or optimising the model structure. However, they tend to use supervised approaches when training the model representations, which makes it difficult for the model to learn the features of the tail classes. In this paper, we use self-supervised representation learning (SSRL) to enhance the model's representations and design a three-branch network to merge SSRL with decoupled learning. Each branch adopts a different learning goal to enable the model to learn balanced image features from long-tailed data. In addition, we propose a split batch repetition strategy for long-tailed datasets to further improve the model. Our experiments on the Imbalance CIFAR-10, Imbalance CIFAR-100, and ImageNet-LT datasets outperform existing similar methods, and ablation experiments show that our method performs better on more imbalanced datasets. All experiments demonstrate the effectiveness of incorporating self-supervised representation learning and the split batch repetition strategy.
    Keywords: long-tail recognition; self-supervised learning; decoupled learning; image classification; deep learning; neural network; computer vision.

  • Research on assessment of hybrid teaching mode in colleges stems from deep learning algorithm   Order a copy of this article
    by Jinghui Xiu, Yingnan Ye 
    Abstract: The blended learning model combines traditional classroom instruction with online learning and has shown significant impact in higher education. Analysis of its effectiveness reveals a decrease in the root-mean-square deviation and the smallest mean squared error, indicating optimal training results. The network intrusion detection model has the lowest mean absolute error compared with other models. The SecRPSO-SVM model has the smallest average absolute percentage error. This innovative teaching model promotes personalised and autonomous learning, cooperative learning, and interactive communication. The use of deep learning algorithms provides new methods for educational assessment and personalised learning, positively impacting the future development of higher education.
    Keywords: deep learning; mixed teaching; evaluation model; SecRPSO-SVM; principal component analysis.
    DOI: 10.1504/IJCSE.2023.10059160
     
  • An investigation of CNN-LSTM music recognition algorithm in ethnic vocal technique singing   Order a copy of this article
    by Fang Dong 
    Abstract: An HPSS separation algorithm considering time and frequency features is proposed to address poor performance in music style recognition and classification. A CNN network structure is designed, and the influence of different parameters in the network structure on the recognition rate is studied. A deep hash learning method is proposed to address the weak feature expression ability and high feature dimensionality of existing CNNs; it is combined with LSTM networks to integrate temporal information. The results show that, compared with other models such as GRU+LSTM, the double-layer LSTM model used in the study achieved higher recognition rates, exceeding 75%. This indicates that combining feature learning with hash encoding learning can achieve higher accuracy. The model is therefore well suited to music style recognition, which helps music information retrieval and improves the classification accuracy of music recognition.
    Keywords: music recognition; ethnic vocal music; LSTM; CNN; hash layer.
    DOI: 10.1504/IJCSE.2023.10059161
     
  • Cost-sensitive budget adaptive label thresholding algorithms for large-scale online multi-label classification   Order a copy of this article
    by Rui Ding, Tingting Zhai 
    Abstract: Kernel-based methods have proven effective in addressing nonlinear online multi-label classification tasks. However, their scalability is hampered by the curse of kernelisation when handling large-scale tasks. Additionally, the class-imbalance of multi-label data can significantly impact their performance. To mitigate both challenges, we propose two cost-sensitive budget adaptive label thresholding algorithms. Firstly, we introduce a cost-sensitive strategy to assign varying costs to the misclassification of different labels, building upon the first-order adaptive label thresholding algorithm. Furthermore, we present two merging budget maintenance strategies: 1) a global strategy where all predictive models share one support vector pool and undergo simultaneous budgeting; 2) a separate strategy that utilises two independent support vector pools.
    Keywords: kernel-based methods; large-scale online multi-label classification; curse of kernelisation; class-imbalance; cost-sensitive; budget maintenance.
    DOI: 10.1504/IJCSE.2023.10060217
     
  • Alliance: a makespan delay and bill payment cost saving-based resource orchestration framework for the blockchain and hybrid cloud enhanced 6G serverless computing network   Order a copy of this article
    by Mahfuzul H. Chowdhury 
    Abstract: Serverless computing technology, with its function-as-a-service and backend-as-a-service platforms, provides users with on-demand service, high elasticity, automatic scaling, service-provider-based server/operating-system management, and no idle capacity charges. Owing to limited resources, traditional research on public/private cloud-based serverless computing cannot meet users' IoT application requirements, and current works are limited to a single job type and objective. There is a lack of appropriate resource orchestration schemes for low-latency, bill-payment, and energy-cost-based serverless computing job execution networks with hybrid cloud and blockchain. To overcome these issues, this paper introduces a low-latency, energy-cost, and bill-payment-based scheduling scheme for multiple job types, together with a resource orchestration, network, and mathematical model, for the blockchain and hybrid cloud-enhanced serverless computing network. The experimental results show that the proposed Alliance scheme achieves up to a 70% makespan delay improvement and a 30.23% bill payment gain over the baseline scheme.
    Keywords: serverless computing; job scheduling; worker selection; resource orchestration; blockchain; cloud computing; 6G; job execution time; user bill payment.
    DOI: 10.1504/IJCSE.2023.10060388
     
  • An improved continuous and discrete Harris Hawks optimiser applied to feature selection for image steganalysis   Order a copy of this article
    by Ankita Gupta, Rita Chhikara, Prabha Sharma 
    Abstract: To attack advanced steganography, high-dimensional rich feature sets such as the 34,671-dimensional spatial rich model (SRM) are extracted for image steganalysis. To address the curse of dimensionality, researchers have utilised feature selection techniques to develop efficient steganalysers. In this study, the Harris hawks optimiser (HHO) is combined with particle swarm optimisation (PSO) and differential evolution (DE) to increase the exploitation and exploration capabilities of HHO, respectively. This hybridised HHO, called DEHHPSO, gives good results on continuous optimisation problems as well as on feature selection problems. Initially, the Fisher filter method is used to discard irrelevant features, and the remaining features are passed to the proposed DEHHPSO feature selection method. The combined approach removes more than 94% of the SRM feature set with improved detection accuracy compared with state-of-the-art methods. The classification performance using the selected features is also superior to that of several deep learning networks for steganalysis.
    Keywords: steganalysis; steganography; spatial rich model; SRM; Harris Hawks optimiser; HHO; particle swarm optimisation; PSO; differential evolution.
    DOI: 10.1504/IJCSE.2023.10060447
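    The Fisher filter step mentioned in the abstract above ranks each feature by its between-class scatter relative to its within-class scatter, then discards the low-scoring ones. A minimal sketch of that generic filter (our illustration, not the authors' implementation):

```python
# Fisher-score filter: rank features by between-class vs. within-class
# variance and keep the top-k. Illustrative sketch only.
from statistics import mean, pvariance

def fisher_scores(X, y):
    """X: list of feature vectors; y: class labels (e.g. cover=0, stego=1)."""
    classes = sorted(set(y))
    scores = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        mu = mean(col)
        num = den = 0.0
        for c in classes:
            vals = [x[j] for x, label in zip(X, y) if label == c]
            num += len(vals) * (mean(vals) - mu) ** 2   # between-class scatter
            den += len(vals) * pvariance(vals)          # within-class scatter
        scores.append(num / den if den else 0.0)
    return scores

def select_top_k(X, y, k):
    """Indices of the k highest-scoring features."""
    s = fisher_scores(X, y)
    return sorted(range(len(s)), key=lambda j: -s[j])[:k]
```

    On a toy input such as `select_top_k([[1, 5], [2, 5], [9, 5], [10, 5]], [0, 0, 1, 1], 1)`, the filter keeps feature 0, whose class means differ, and drops the constant feature 1.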
     
  • A novel DenseNet-based architecture for liver and liver tumour segmentation   Order a copy of this article
    by Deepak Jayaprakash Doggalli, B. S. Sunil Kumar 
    Abstract: The segmentation of the liver and its lesions plays an important role in the clinical interpretation and therapeutic planning of hepatic disorders. Analysing volumetric computed tomography scans manually is time-consuming and imprecise. In this work, we propose an automatic segmentation technique based on DenseNet, in which each layer receives feature maps from all preceding layers and uses summation instead of concatenation to combine features. The architecture uses the U-Net model as its basis, and all blocks in each tier of the U-Net architecture are replaced with DenseNet blocks. Utilising DenseNets improves the gradient flow throughout the network, allowing each layer to recognise more diverse and low-complexity features. Other U-Net-based hybrid models use two-dimensional filters that might overlook contextual information. In contrast, the proposed model accepts as input a stack of five adjacent slices and returns a segmentation map for the middle slice. Consequently, this method is highly parameter-efficient and enables enhanced feature extraction. On the LiTS dataset, the experimental findings reveal a liver segmentation accuracy of 94.9% Dice score and a tumour segmentation accuracy of 79.2% Dice score. On the LiTS competition website, the results indicate accuracy comparable to that of the most effective techniques.
    Keywords: liver lesion segmentation; U-Net; DenseNet; computed tomography images; convolutional neural networks; Kaiming initialisation; leaky Relu activation function.
    DOI: 10.1504/IJCSE.2023.10060448
     
  • Fake content detection on benchmark dataset using various deep learning models   Order a copy of this article
    by Chetana Thaokar, Jitendra Kumar Rout, Himansu Das, Minakhi Rout 
    Abstract: The widespread use and development of social media have offered a medium for the rapid propagation of fake content among the masses. Fake content frequently misguides individuals and leads to erroneous social judgements, and individuals and society have been harmed by the dissemination of low-quality news content on social media. In this paper, we work on a benchmark dataset of news content and propose an approach comprising basic natural language processing techniques with different deep learning models for categorising content as real or fake. The deep learning models employed are LSTM, bi-LSTM, and LSTM and bi-LSTM with an attention mechanism. We compare the outcomes of one-hot word embedding and the pre-trained GloVe technique. On the benchmark LIAR dataset, LSTM achieved a better accuracy of 67.2%, while bi-LSTM with GloVe word embedding reached an accuracy of 67%. An accuracy of 98.22% is achieved using bi-LSTM and 97.98% using LSTM on the Real-Fake dataset. Fake news can be a menace to society; if it is detected early, harmony can be maintained and individuals can avoid being misled.
    Keywords: fake news; word embedding; global vectors; GloVe; LIAR dataset; deep learning models; bidirectional encoder representations from transformer; BERT.
    DOI: 10.1504/IJCSE.2023.10060449
     
  • Performance assessment of multi-unit web and database servers distributed system   Order a copy of this article
    by Muhammad Salihu Isa, Jinbiao Wu, Ibrahim Yusuf, U.A. Ali, Tijjani W. Ali, Abubakar Sadiq Abdulkadir 
    Abstract: The present paper focuses on evaluating the performance of a complex multi-unit computer networking system. Understanding the performance of such systems is crucial for ensuring efficient operation and identifying areas for improvement. The networking system being analysed consists of four interconnected subsystems, an architecture with subsystems connected in series-parallel that is commonly found in real-world computer networks. The paper considers two types of system failure: degraded failure and total failure. Understanding failure scenarios and their consequences helps in designing resilient systems and developing effective fault-tolerant mechanisms. The study utilises supplementary variable techniques, Laplace transforms, copulas, and general distributions to analyse and model the system's behaviour. The findings can inform the design and optimisation of similar complex networking systems, leading to improved performance, fault tolerance, and overall system reliability. The insights gained may also aid decision-making related to network architecture, load-balancing strategies, and system resilience.
    Keywords: performance; web server; database server; multi-unit; k-out-of-n: G policy and availability.
    DOI: 10.1504/IJCSE.2023.10060450
     
  • Sparse landmarks for facial action unit detection using vision transformer and perceiver   Order a copy of this article
    by Duygu Cakir, Gorkem Yilmaz, Nafiz Arica 
    Abstract: The ability to accurately detect facial expressions, represented by facial action units (AUs), holds significant implications across diverse fields such as mental health diagnosis, security, and human-computer interaction. Although earlier approaches have made progress, the burgeoning complexity of facial actions demands more nuanced, computationally efficient techniques. This study pioneers the integration of sparse learning with vision transformer (ViT) and perceiver networks, focusing on the most active and descriptive landmarks for AU detection across both controlled (DISFA, BP4D) and in-the-wild (EmotioNet) datasets. Our novel approach, employing active landmark patches instead of the whole face, not only attains state-of-the-art performance but also uncovers insights into the differing attention mechanisms of ViT and perceiver. This fusion of techniques marks a significant advancement in facial analysis, potentially reshaping strategies in noise reduction and patch optimisation, setting a robust foundation for future research in the domain.
    Keywords: action unit detection; sparse learning; vision transformer; perceiver.
    DOI: 10.1504/IJCSE.2023.10060451
     
  • Enhancing e-commerce product recommendations through statistical settings and product-specific insights   Order a copy of this article
    by Onur Dogan 
    Abstract: In the e-commerce industry, effectively guiding customers to select desired products poses a significant challenge, necessitating the use of technology and data-driven solutions. To address the extensive range of product varieties and enhance product recommendations, this study improves upon the conventional association rule mining (ARM) approach by incorporating statistical settings. By examining sales transactions, the study assesses the statistical significance of correlations, taking into account specific product details such as product name, discount rates, and the number of favourites. The findings offer valuable insights with managerial implications. For instance, the study recommends that if a customer adds products with a high discount rate to their basket, the company should suggest products with a lower discount rate. Furthermore, the traditional rules are augmented by incorporating product features. Specifically, when the total number of favourites is below 7,500 and the discount rate is less than 75%, the similarity ratio of the recommended products should be below 0.50. These enhancements contribute significantly to the field, providing actionable recommendations for e-commerce companies to optimise their product recommendation strategies.
    Keywords: association rules; basket analysis; statistical tests; e-commerce.
    DOI: 10.1504/IJCSE.2023.10060975
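    The statistical-settings idea in the entry above, testing whether a rule's correlation is significant rather than accidental, can be made concrete with the standard rule measures plus a chi-square test on the rule's 2x2 contingency table. A generic sketch of that arithmetic (not the paper's exact procedure; the sample basket data are invented):

```python
# Support, confidence, lift, and a chi-square statistic for one rule A -> B.
# Generic association-rule arithmetic; data and thresholds are illustrative.

def rule_stats(transactions, a, b):
    n = len(transactions)
    n_a = sum(1 for t in transactions if a in t)
    n_b = sum(1 for t in transactions if b in t)
    n_ab = sum(1 for t in transactions if a in t and b in t)
    support = n_ab / n
    confidence = n_ab / n_a
    lift = confidence / (n_b / n)
    # chi-square over the 2x2 table: A present/absent vs. B present/absent
    chi2 = 0.0
    for count_a, has_a in ((n_a, True), (n - n_a, False)):
        for count_b, has_b in ((n_b, True), (n - n_b, False)):
            observed = sum(1 for t in transactions
                           if (a in t) == has_a and (b in t) == has_b)
            expected = count_a * count_b / n
            chi2 += (observed - expected) ** 2 / expected
    return support, confidence, lift, chi2
```

    With lift above 1 and chi2 beyond the 3.84 critical value (one degree of freedom, 5% level), a rule would pass the kind of significance screen the abstract describes; otherwise it is filtered out despite a high raw confidence.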
     
  • ORNAIC-TDNSI optimal RetinaNet with artificial immune classification for text detection on natural scene images   Order a copy of this article
    by Sharfuddin Waseem Mohammed, Brindha Murugan 
    Abstract: Text detection and recognition in natural scene images is helpful in many industrial, surveillance, and security applications. Text detection in natural scenes is a vital but challenging issue owing to differences in line orientation, text fonts, and size. This study introduces an optimal RetinaNet with artificial immune classification for text detection on natural scene images (ORNAIC-TDNSI). The ORNAIC-TDNSI model encompasses two major processes, namely textual region detection and text recognition from the detected regions. At the initial stage, the RetinaNet object detector is applied to detect textual regions in natural scene images. To enhance the detection efficiency of the RetinaNet model, the group teaching optimisation algorithm (GTOA) is utilised. Next, an artificial immune classification (AIC) model is applied for accurate text recognition. The experimental validation of the ORNAIC-TDNSI model is performed on the ICDAR-2015, ICDAR-2017, and Total-Text datasets. The comparison study reports that the ORNAIC-TDNSI model outperforms the other DL models.
    Keywords: natural scene images; text recognition; deep learning; RetinaNet; artificial immune classification; AIC.
    DOI: 10.1504/IJCSE.2023.10061549
     
  • Value chain for smart grid data: a brief review   Order a copy of this article
    by Feng Chen, Huan Xu, Jigang Zhang, Guiyu Li 
    Abstract: Smart grids are now crucial infrastructures in many countries, building two-way communication between customers and utility enterprises. Since power and energy are associated with human activities, smart grid data are extremely valuable and are already being used in some areas. As a novel asset, the value of smart grid data needs quantitative measurement for evaluation and pricing. To achieve this, it is essential to analyse the overall process of value creation, which can help calculate costs and discover potential applications. The process can be effectively revealed by building a data value chain for smart grid data, which illustrates the data flow and clarifies the data sources, analytics, utilisation, and monetisation. This article provides a three-step data value chain for smart grid data and expounds on each step. It also reviews various methods and discusses some challenges of working with smart grid data.
    Keywords: smart grid; data value chain; data collection; data analysis; data monetisation.
    DOI: 10.1504/IJCSE.2024.10061602
     
  • Integrating rich event-level and schema-level information for script event prediction   Order a copy of this article
    by Wei Qin, Xiangfeng Luo, Hao Wang 
    Abstract: A script consists of a series of structured event sequences extracted from texts. Given historical scripts, script event prediction aims to predict the subsequent event. The critical aspect of script event prediction is how to effectively represent events, which plays an important role in making accurate predictions. Most existing methods describe events through verbs and a few core independent arguments (i.e., subject, object, and indirect object), which lack the capability to deal with complex and sparse event data. In this paper, we propose a hierarchical event prediction (HEP) model, which integrates information from both the event level and the schema level. At the event level, HEP enriches the existing event representation with extra arguments (i.e., time, place) and modifiers, which provide in-depth event information. At the schema level, it induces the sparse events into a conceptual schema, which improves the model's generalisation ability to make more reasonable predictions. To more effectively integrate these two...
    Keywords: script event prediction; SEP; event-level; schema-level; contrast learning; bimodal cross attention; BCA.
    DOI: 10.1504/IJCSE.2024.10061887
     
  • An efficient stitching algorithm for aerial images with low-overlap   Order a copy of this article
    by Qingshan Tang, Huang Jiang, Sijie Li 
    Abstract: In fields such as military reconnaissance, the overlap between images captured by UAVs is limited to 15%-30%. To obtain larger-perspective panoramic images from low-overlap aerial images, this study proposes an efficient image stitching algorithm. Specifically, the algorithm utilises the oriented FAST and rotated BRIEF (ORB) and grid-based motion statistics (GMS) algorithms for feature matching. Next, image alignment is achieved by calculating location-dependent homographies based on a grid. For seamless integration, the algorithm combines optimal seam blending and gradient-domain fusion techniques. Experimental results demonstrate that the proposed algorithm outperforms the scale-invariant feature transform (SIFT) and affine scale-invariant feature transform (ASIFT) algorithms in terms of feature point matching accuracy. Moreover, comparisons with other algorithms, such as as-projective-as-possible (APAP), adaptive as-natural-as-possible (AANAP), and single-perspective-warps (SPW), show that the proposed algorithm obtains high-quality stitched images while effectively solving the problems of slow stitching speed, poor alignment, and ghosting.
    Keywords: aerial image; image stitching; motion grid statistics; optimal stitching; gradient fusion.
    DOI: 10.1504/IJCSE.2023.10061946
     
  • A quantum evolutionary algorithm inspired by manta ray foraging optimisation   Order a copy of this article
    by Shikha Gupta, Naveen Kumar 
    Abstract: The manta ray foraging optimisation (MRFO) algorithm is a relatively recent bio-inspired technique that has been tested on optimisation problems and proven effective in several ways, such as better accuracy, enhanced performance, and lower computational cost. Quantum-motivated computing aims to improve our ability to solve complex combinatorial optimisation problems. The present work proposes a novel continuous-space optimisation approach inspired by quantum computing and the MRFO algorithm. The performance of the proposed algorithm is examined vis-à-vis the standard MRFO algorithm in optimising the value of 20 benchmark functions. While both algorithms compete well in finding the best fitness values, the proposed approach shows better convergence for 16 of the 20 functions. The Wilcoxon signed-ranks test is used to evaluate the significance of the improved convergence. Results show that introducing the quantum computing mechanism is effective in improving the convergence of the MRFO algorithm.
    Keywords: angle-coded; bio-inspired; qubits encoding; Bloch coordinates; meta-heuristic approach.
    DOI: 10.1504/IJCSE.2024.10062570
     
  • TSALSHADE: improved LSHADE algorithm with tangent search   Order a copy of this article
    by Abdesslem Layeb 
    Abstract: The DE algorithm is among the most successful algorithms for numerical optimisation. However, like other metaheuristics, DE suffers from several weaknesses, such as weak exploration and local-minimum stagnation. Moreover, most DE variants, including the most efficient ones like the LSHADE variants, struggle in the presence of hard composition functions whose global optima are difficult to reach. On the other hand, the tangent search algorithm (TSA) has shown an effective capacity to deal with hard optimisation functions thanks to its tangent flight operator, which offers a good way to escape the local optima of hard test functions while preserving good exploration capacity. In this scope, a hybrid TSA and LSHADE algorithm called TSALSHADE is proposed. The main advantage of the new algorithm is its capacity to deal with hard composite functions. An experimental study on the latest CEC 2022 benchmark functions shows that TSALSHADE provides very promising and competitive results.
    Keywords: differential evolution; LSHADE; tangent search algorithm; optimisation.
    DOI: 10.1504/IJCSE.2024.10062887
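    The DE machinery that LSHADE and TSALSHADE build on is compact: each candidate is perturbed by the scaled difference of two other candidates, then binomially crossed with its parent, and the trial replaces the parent only if it is no worse. A bare-bones DE/rand/1/bin sketch of that baseline (standard textbook DE, not TSALSHADE itself):

```python
# Classic DE/rand/1/bin minimising a toy sphere function. TSALSHADE layers
# success-history parameter adaptation, population-size reduction, and
# tangent-flight moves on top of machinery like this.
import random

def de_rand_1_bin(f, dim, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [
                min(hi, max(lo, pop[r1][j] + F * (pop[r2][j] - pop[r3][j])))
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
```

    On the 5-dimensional sphere function, `de_rand_1_bin(sphere, 5, (-5, 5))` drives the best fitness close to zero within the 200-generation budget; composition functions of the kind CEC 2022 uses are precisely where this plain scheme stalls and the hybrid additions pay off.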
     
  • Multi-label classification and fuzzy similarity-based expert identification techniques for software bug assignment   Order a copy of this article
    by Rama Ranjan Panda, Naresh Kumar Nagwani 
    Abstract: In software development, bug fixing is a time-consuming and labour-intensive process. A bug can occur due to multiple failures in software, and it may require multiple developers to fix it. Machine learning approaches belong to discriminative learning, in which a developer is assigned to a software bug with an agreed level of opinion from the assigner. However, instances of software bugs are textual and fuzzy, and hence cannot be classified with clear-cut outcomes. Furthermore, the bug assigner faces difficulties when a bug belongs to multiple categories. This has motivated the authors to devise two fuzzy-system-based automatic software bug assignment techniques, namely the fuzzy bug assignment technique for software developers and unique term relationships (FDUR) and the fuzzy bug assignment technique for software developers and category relationships (FDCR). To measure and compare the performance of both techniques with other techniques, experiments are carried out on benchmark software repositories.
    Keywords: bug assignment; expert finding; decision making; fuzzy logic; mining bug repositories; machine learning; fuzzy similarity.
    DOI: 10.1504/IJCSE.2024.10063031
     
  • A predictive model based on the LSTM technique for the maintenance of railway track system   Order a copy of this article
    by Sharad Nigam, Divya Kumar 
    Abstract: Maintenance is essential to sustaining the operation of transportation systems. Railways are a major mode of transportation; defects and failures in trackside equipment and in the track itself may cause major loss of human life. An effective maintenance technique is therefore needed, one that protects passengers' lives and maximises the utilisation of railway track equipment by intervening just before failure. There are various types of track defect to be inspected, but this paper deals only with surface defects, cross level, and DIP. By examining track condition and estimating the remaining useful life (RUL), the railway industry can schedule maintenance of railway tracks. In this paper, we conduct a comparative analysis of various machine learning algorithms, comparing their performance in estimating the future failure point of railway tracks, to assess the reliability of the LSTM technique. The dataset is taken from the trusted source RAS Track Geometry Analytics (2015).
    Keywords: predictive maintenance; K-nearest neighbour; KNN; machine learning; long short-term memory; LSTM; railway track; support vector machine; SVM.
    DOI: 10.1504/IJCSE.2024.10063100
     
  • DMRFO-CD: a discrete manta ray foraging inspired optimisation algorithm for community detection in networks   Order a copy of this article
    by Priyanka Gupta, Shikha Gupta, Naveen Kumar 
    Abstract: Evolutionary algorithms are meta-heuristic approaches that have effectively addressed complex optimisation problems, and the problem of detecting communities in networks using evolutionary algorithms has received substantial attention from researchers. Manta ray foraging optimisation (MRFO), a recently proposed real-valued evolutionary algorithm, has demonstrated superior performance on challenging engineering optimisation problems. The present work adapts the MRFO algorithm to the discrete-valued community detection problem, maximising network modularity, a measure of the density of connections within a community. Experiments on synthetic and real-world benchmark networks show that the proposed approach successfully detects community structures with high modularity. Normalised mutual information (NMI) was computed to determine the quality of the detected communities, and high NMI values (> 0.75) were obtained for most of the networks. However, since the proposed method maximises modularity at the cost of closeness to the original community structure, one-third of the networks exhibited NMI values below 0.5.
    Keywords: evolutionary algorithms; network modularity; normalised mutual information; metaheuristic; swarm-based; partitioning.
    DOI: 10.1504/IJCSE.2024.10063171
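    Network modularity, the objective maximised in the entry above, compares each community's internal edge count against its expectation under random rewiring: for an undirected graph with m edges, Q is the sum over communities of e_c/m − (d_c/2m)², where e_c counts intra-community edges and d_c the total degree of the community's nodes. A small sketch computing Q (a generic illustration, not the DMRFO-CD code):

```python
# Newman modularity Q for an undirected graph given as an edge list.
# Q = sum over communities c of (e_c / m - (d_c / (2m))^2).
from collections import Counter

def modularity(edges, community):
    """edges: list of (u, v) pairs; community: dict node -> community id."""
    m = len(edges)
    degree = Counter()
    intra = Counter()  # intra-community edge counts e_c
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    q = 0.0
    for c in set(community.values()):
        d_c = sum(deg for node, deg in degree.items() if community[node] == c)
        q += intra[c] / m - (d_c / (2 * m)) ** 2
    return q
```

    For two triangles joined by a single bridge edge, the natural two-community split gives Q = 6/7 − 1/2 ≈ 0.357, while lumping everything into one community gives Q = 0; an optimiser searching over partitions prefers the former.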
     
  • A food safety traceability system based on trusted double chain   Order a copy of this article
    by Haoran Chen, Jiafan Wang, Hongwei Tao, Yinghui Hu, Yanan Du 
    Abstract: In recent years, global food safety issues have been increasingly prevalent, posing threats to people’s health and lives. Traditional food traceability systems face significant challenges due to centralised storage, data silos, and the potential for information tampering. This article proposes an improved RAFT consensus algorithm for the first time, applied to trace the production process of food based on private chains. Subsequently, an enhanced PBFT consensus algorithm is introduced for tracing the distribution process of food based on consortium chains. Finally, a reliable dual-chain food quality and safety traceability system based on the Ethereum blockchain platform is presented. This system effectively addresses data reliability issues on the chain by introducing a reliability measurement evaluation module. Moreover, the application of aggregated signature technology enhances the performance of the traceability system, ensuring the authenticity, reliability, and tamper resistance of traceability information. This innovation not only strengthens the privacy protection and data security of food quality and safety tracking but also helps maintain the commercial interests of all parties involved.
    Keywords: food traceability; blockchain; cross-agency traceability system.
    DOI: 10.1504/IJCSE.2024.10063226
     
  • Robust link-assessment-based approach for detection and isolation of blackhole attacker in resource constraint internet of things   Order a copy of this article
    by Himanshu Patel, Devesh Jinwala 
    Abstract: Blackhole is one of the crucial packet-dropping attacks that can be launched on the Routing Protocol for Low-power and Lossy Networks (RPL). In this paper, we propose an improved specification-based approach that integrates a trust value, derived from a tangible measure of the packet-forwarding behaviour of nodes in the network, to detect and isolate blackhole attackers. To our knowledge, this is the first attempt to integrate packet-forwarding trust with a specification-based approach. In the proposed approach, lightweight specification data are maintained at each device and shared periodically with the resource-rich edge device, where the resource-intensive computations are carried out. The approach uses a robust data collaboration mechanism that ensures detection-data delivery between nodes and edge devices, even in the presence of 10% attacker nodes. Our simulations on the Cooja simulator show that, in an environment with 20% data loss, the proposed approach yields a true positive rate (TPR) of more than 80%.
    Keywords: internet of things; RPL; blackhole attack; trust.
    DOI: 10.1504/IJCSE.2024.10063354
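The "tangible value" of packet-forwarding behaviour described above can be pictured as a forwarded/received ratio compared against a detection threshold. The function names and the 0.5 cut-off below are illustrative assumptions, not the paper's exact metric:

```python
def forwarding_trust(forwarded, received):
    """Trust value for a node: the fraction of packets it received for
    relaying that it actually forwarded. (Illustrative only; the
    paper's exact trust computation may differ.)"""
    if received == 0:
        return 1.0  # no evidence yet: treat the node as trustworthy
    return forwarded / received

def is_suspected_blackhole(forwarded, received, threshold=0.5):
    """Flag a node whose forwarding trust drops below the threshold;
    the 0.5 cut-off is an assumed example value."""
    return forwarding_trust(forwarded, received) < threshold

print(is_suspected_blackhole(3, 100))   # True: drops nearly everything
print(is_suspected_blackhole(95, 100))  # False: normal forwarding
```

In the proposed scheme such per-node statistics would be collected as lightweight specification data and evaluated at the resource-rich edge device.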
     
  • Finite element analysis of the liver subjected to non-invasive indirect mechanical loading   Order a copy of this article
    by Samar Shaabeth, Amina Kadhem, Hassanain Ali Lafta 
    Abstract: Injuries from non-invasive blunt abdominal trauma account for 75% of traumas with no visible bleeding. As a step towards a faster and more reliable diagnostic tool, karate kicks, punches, and pinpoint loading were simulated on the front, back, right, and left sides of the abdomen using 2D segmented computed tomography images of 31- and 50-year-old females and a 45-year-old male. The densities and mechanical properties of the organs were applied, and mechanical analysis was performed over 0.5 s, 1 s, 1.5 s, and 2 s loading times. The results, consistent with previous literature, indicated the regions of the liver, spleen, pancreas, kidney, and colon affected by the trauma.
    Keywords: liver; blunt abdominal trauma; BAT; simulation; finite element; indirect loading; non-invasive.
    DOI: 10.1504/IJCSE.2024.10063606
     
  • Black widow optimisation with deep learning-based feature fusion model for remote sensing image analysis   Order a copy of this article
    by Vaishnavee Rathod, Dipti Rana, Rupa Mehta 
    Abstract: Recently, achieving accurate remote sensing image (RSI) classification has been a primary goal in deep learning, given its extensive applications, including urban planning and disaster management. The performance of existing convolutional neural network (CNN)-based strategies is primarily influenced by their parameter settings, necessitating automated hyperparameter tuning through metaheuristic methods. The proposed BWODLF-RSI technique integrates black widow optimisation with a deep learning feature fusion model for enhanced RSI analysis. A preliminary processing step enhances RSI quality through noise reduction with a Gaussian filter (GF), contrast enhancement via contrast-limited adaptive histogram equalisation (CLAHE), and data augmentation to prevent overfitting. Inception v3 and DenseNet201 are then employed to extract and fuse potent features. A critical aspect of this strategy is the use of black widow optimisation to fine-tune the kernel extreme learning machine (KELM) model, attaining a notable RSI classification accuracy of 94.05%. When tested on the UCM and AID datasets, the BWODLF-RSI approach demonstrated superior feature selection and RSI analysis performance.
    Keywords: remote sensing; image classification; deep learning; pre-processing; feature fusion.
    DOI: 10.1504/IJCSE.2024.10063761
     
  • Behaviour recognition system of underground drilling operators based on MA-STGCN   Order a copy of this article
    by Cai Meng, Wang Xichao, Baojiang Li, Haiyan Wang, Xiangqing Dong, Chen Guochu 
    Abstract: In the intricate drilling-rod operations of mining, precise behaviour recognition is paramount for operational safety. Addressing the challenges of target detection and posture feature extraction, this study proposes a method that integrates attention mechanisms with a spatio-temporal graph convolutional network. An efficient channel attention mechanism is introduced during target detection, allocating weights to each channel to adapt accurately to diverse features. Multi-head attention modules are incorporated into posture feature extraction, effectively capturing critical behavioural information, and behaviour classification is achieved through the softmax function. Experimental results demonstrate an accuracy of 95.3% and a recall of 91.6% on a custom mining dataset; on the NTU-RGB+D public dataset, the method significantly improves accuracy and recognition speed. This research provides an innovative approach to behaviour recognition in complex environments, ensuring precise identification of various behaviours in real-world scenarios, safeguarding worker safety, and holding crucial implications for applying behaviour recognition technology in industrial fields.
    Keywords: drilling operation; attention mechanism; spatio-temporal graph convolution; behaviour recognition; pose estimation.
    DOI: 10.1504/IJCSE.2024.10064092
     
  • Secure forensic image analysis by optimised iterative model with random consensus approaches   Order a copy of this article
    by S.B. Gurumurthy, Ajit Danti 
    Abstract: Future measurements, software, and scalability testing related to cloud performance are required for forensic image scalability (FIS) optimisation and advancement. In any blockchain framework, image quality must be evaluated quantitatively using an advanced iterative reconstruction model and a consensus mechanism, since this directly affects the security and usability of the framework; without such advances, conventional media security and forensic techniques risk becoming ineffective. This work addresses these problems by presenting a fast and efficient forgery detection system built on optimal security, feature extraction, and pre-processing. A random sample consensus (RSC) method is proposed for the analysis of FIS, and the iterative reconstruction model (IRM) is employed to make the architecture as strong and secure as possible. Channel processing is first applied as a form of database image pre-processing, and the enhanced chicken swarm optimisation (ECSO) algorithm is used to tune the scaling settings to balance invisibility and power. The RSC threshold setting reduces the number of excluded matches as well as the root mean square error (RMSE). Improvements in scalability and image reconstruction demonstrate the utility of the proposed technique. Simulation results on multiple retinal image datasets show that the proposed method improves matching accuracy by 10.56% and rate of progress by 30% on average compared with the baseline RSC-IRM strategy.
    Keywords: image reconstruction; scalability; optimisation; image security; forensic image.
    DOI: 10.1504/IJCSE.2024.10064093
     
  • Unlocking the potential of deepfake generation and detection with a hybrid approach   Order a copy of this article
    by Shourya Chambial, Tanisha Pandey, Rishabh Budhia, Balakrushna Tripathy, Anurag Tripathy 
    Abstract: With the use of numerous software programs and cutting-edge AI technologies, a large number of fake videos and photos are produced today, and the manipulation is rarely obvious. Such videos can be exploited in a variety of unethical ways to frighten, fight, or threaten individuals, so people should be wary of the techniques used to produce fake videos. Deepfake is the name of an AI-based method for creating synthetic versions of human images. With the rise of deepfake technology, detecting altered images and videos has become a crucial task for ensuring authenticity and preventing misinformation. This research paper proposes a hybrid model of Inception-ResNetV2 and Xception for deepfake detection. The hybrid model, trained on a dataset of fake and real videos and images, achieved a classification accuracy of over 96.75%.
    Keywords: fake video; convolutional neural network; CNN; RNN.
    DOI: 10.1504/IJCSE.2024.10064094
     
  • Optimised ICP algorithm based on simulated-annealing strategy   Order a copy of this article
    by Wei Huang, Hui Wang, Xinghong Ling 
    Abstract: How to process point cloud data is a research hotspot, and point cloud registration directly affects synthesis results. The iterative closest point (ICP) algorithm is a common registration method; however, it is sensitive to the initial distribution of the point clouds and often falls into a local-optimum trap. To address this problem, an optimised ICP algorithm based on a simulated annealing strategy is proposed, which divides the registration process into filtering, coarse registration, and precise registration. In the filtering stage, denoising and downsampling reduce the data size and improve the subsequent iteration rate; coarse registration then produces point clouds with a closer initial distribution. Finally, in the precise registration stage, the simulated annealing strategy prevents the local-optimum trap. Experiments show that the method achieves higher registration accuracy and contributes to the generation of more accurate and complete models in 3D data reconstruction.
    Keywords: iterative closest point; simulated annealing; point cloud registration; normal distributions transform; filtering.
    DOI: 10.1504/IJCSE.2023.10064119
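The simulated annealing strategy in the precise-registration stage rests on the Metropolis acceptance rule: worse moves are accepted with probability exp(-Δ/T), which shrinks as the temperature cools, letting early iterations escape local optima. A generic, self-contained sketch of that rule (on a toy 1D cost, not actual point-cloud alignment) could be:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.95,
                        iters=5000, seed=0):
    """Generic simulated-annealing minimiser using the Metropolis rule:
    improvements are always accepted; worsening moves are accepted with
    probability exp(-delta/T), which decays as T cools."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random perturbation
        delta = cost(cand) - fx
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fx + delta
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                          # geometric cooling schedule
    return best

best = simulated_annealing(lambda x: (x - 3) ** 2, x0=0.0)
# converges near the minimum at x = 3
```

In the real algorithm the cost would be the point-to-point registration error and the perturbation a small rigid transform; the quadratic here only illustrates the acceptance schedule.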
     
  • Development of a sorting system for mango fruit varieties using convolutional neural network   Order a copy of this article
    by Adejumobi Philip Oluwaseun, John A. Ojo, Adejumobi Israel Oluwamayowa, Adebisi Oluwadare Adepeju, Ayanlade Oladayo Samson 
    Abstract: Mango is a tropical fruit with numerous varieties. These varieties intermix during harvest and post-harvest procedures, causing complications and an inability to accurately identify specific varieties at the retail stage, and the accuracy of existing sorting techniques does not transfer well to real-world scenarios. This research introduces an enhanced sorting system for mango fruits to address these challenges. A comprehensive database was built by photographing six distinct mango varieties prevalent in South-West Nigeria with a digital camera. The captured images underwent quality enhancement through histogram equalisation and noise reduction via median filtering. A convolutional neural network framework was used to create a model named AdeNet for feature extraction and classification within the system. The experimental results achieved 99.0% accuracy and an F1-score of 97.6%, outperforming existing mango sorting techniques. This work will enhance the efficiency of mango industries.
    Keywords: artificial intelligence; convolutional neural network; deep learning; mango fruit; automatic sorting system.
    DOI: 10.1504/IJCSE.2024.10064199
     
  • Privacy-preserving SQL queries on cross-organisation databases   Order a copy of this article
    by Ye Han, Xiaojie Guo, Tong Li, Xiaotao Liu 
    Abstract: In recent years, much industrial interest has been paid to SQL queries over a joint database contributed by several mutually distrustful companies or organisations. However, privacy regulations and commercial interests prevent these entities from trivially sharing their local databases with each other. To enable such SQL queries nonetheless, privacy-preserving technologies must be applied. In this work, we outline a provably secure multi-party computation (MPC) framework for privacy-preserving SQL queries in industrial applications such as medical research in hospitals, financial oversight, and business cooperation. In particular, this framework is secure against any semi-honest adversary, a popular threat model in real-life systems. The framework also models a common efficiency optimisation of SQL query plans at the cost of mild leakage.
    Keywords: SQL queries; secure multi-party computation; privacy.
    DOI: 10.1504/IJCSE.2024.10064247
     
  • A transfer learning approach for adverse drug reactions detection in bio-medical domain based on knowledge graph   Order a copy of this article
    by Monika Yadav, Prachi Ahlawat, Vijendra Singh 
    Abstract: Adverse drug reactions (ADRs) are among the leading causes of mortality, imposing severe health risks and a significant financial burden on patients. Consequently, timely prediction of a drug’s possible ADRs has become an essential concern in the clinical domain. However, it is challenging to recognise the adverse reactions of all drugs using existing ADR data sources. Recently, semantic-rich knowledge bases and machine learning techniques have shown high accuracy in predicting ADRs in advance. This paper introduces a new framework, knowledge graph slot-filling clinical bidirectional encoder representations from transformers (KG-SF Clinical BERT), which takes knowledge graph triples as text sequences. It applies transformer-based multi-task learning with slot filling for ADR classification and is fine-tuned on the biomedical domain to detect ADRs. KG-SF Clinical BERT brings a remarkable performance gain, with an AUC of 0.88 on the DrugBank and SIDER datasets and an AUC of 0.99 on the PubMed dataset.
    Keywords: adverse drug reactions; clinical BERT; knowledge graph; KG; transfer learning.
    DOI: 10.1504/IJCSE.2024.10064638
     
  • Modified glowworm swarm optimisation-based cluster head selection and enhanced energy-efficient clustering protocol for IoT-WSN   Order a copy of this article
    by T. Kanimozhi, S. Belina V.J. Sara 
    Abstract: The maintenance cost of flat-based wireless sensor network-internet of things (WSN-IoT) is high. Clustering is recommended to reduce message overhead, manage congestion, and simplify topology repairs. A clustering protocol enhances energy efficiency, prolongs network lifespan by grouping nodes into clusters, and reduces transmission distance to the base station (BS). Depending on parameters like quality of service (QoS), energy consumption, and network load, a clustering technique organises nodes into clusters. Each cluster is led by one or more cluster heads (CH) that collect and transmit data to the BS directly or through intermediary nodes. To enhance WSN-based IoT longevity, this study presents an enhanced energy-efficient clustering protocol (EEECP). It establishes an optimal number of clusters, utilises the modified fuzzy C means (MFCM) algorithm to stabilise and reduce sensor node energy consumption, and introduces the modified glowworm swarm optimisation (MGSO) algorithm for CH selection. MGSO incorporates a dynamic threshold for balanced CH longevity within clusters. Performance is evaluated using metrics including first node dies (FND), last node dies (LND), half node dies (HND), weighted first node dies (WFND), energy usage, and network lifetime compared to existing protocols.
    Keywords: modified glowworm swarm optimisation; modified fuzzy C-means algorithm; cluster head; quality of service; enhanced energy-efficient clustering protocol.
    DOI: 10.1504/IJCSE.2024.10064839
     
  • Optimisation of quantum circuits using cost effective quantum gates   Order a copy of this article
    by Swathi Mummadi, Bhawana Rudra 
    Abstract: The importance of reversible operations has increased with the emergence of new technologies, as they are crucial for developing energy-efficient and cost-efficient circuits. The efficiency of a quantum circuit is measured in terms of quantum cost and quantum depth. In this paper, we propose an optimisation algorithm for reversible gates such as the Peres and Toffoli gates, and for the entanglement purification method. Peres and Toffoli gates play an important role in quantum circuit implementation, and entanglement purification is key to applications such as quantum teleportation, secure communication, and quantum key distribution. The proposed algorithm improves quantum cost and quantum depth by 20% compared with existing approaches.
    Keywords: reversible computation; quantum computation; quantum csx and sxdg gates; entanglement purification; reversible logic gates.
    DOI: 10.1504/IJCSE.2024.10065245
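Reversibility is what distinguishes the Peres and Toffoli gates: each permutes the 2³ basis states, so no information is lost. On classical basis states this is easy to check (the textbook gate definitions below are standard, not the paper's optimised constructions):

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips the target c iff both controls are 1."""
    return a, b, c ^ (a & b)

def peres(a, b, c):
    """Peres gate, commonly defined as (A, A XOR B, (A AND B) XOR C)."""
    return a, a ^ b, (a & b) ^ c

bits = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# reversibility: each gate maps the 8 basis states to 8 distinct outputs
print(len({toffoli(*x) for x in bits}))  # 8
print(len({peres(*x) for x in bits}))    # 8
print(toffoli(*toffoli(1, 1, 0)))        # (1, 1, 0): Toffoli is self-inverse
```

Quantum cost and depth are then counted over the elementary gates a synthesis tool decomposes these into, which is the quantity the proposed algorithm optimises.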
     
  • WBDPR: a way for big data provenance relationship   Order a copy of this article
    by Zhiwen Zheng, Ying Song, Yunmei Shi, Bo Wang 
    Abstract: With the increasing complexity of data generation relationships, existing provenance frameworks face challenges such as high resource consumption, redundant storage, and slow query times. This paper proposes WBDPR (a way for big data provenance relationship), a solution for efficient data provenance in the Hadoop scenario. WBDPR addresses these issues by supporting asynchronous provenance log integration and by introducing a provenance storage mode and query algorithm based on PROV-DAG (provenance directed acyclic graph). Experimental results demonstrate that WBDPR reduces memory occupation by 56% and index disk storage by 75%; additionally, it improves query performance by 80% on 64% of leaf and intermediate nodes. Compared with the RAMP, Newt, and Atlas systems, WBDPR achieves up to a 5.1% reduction in tracing time. WBDPR is a fault-tolerant technique that records the provenance of data and its computation process, ensuring the integrity and reliability of the data.
    Keywords: data provenance; provenance graph; provenance model; data storage; Hadoop.
    DOI: 10.1504/IJCSE.2024.10065396
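A provenance query over a PROV-DAG is, at its core, a backward reachability traversal from a derived artifact to the inputs it was computed from. A schematic sketch (WBDPR's actual storage mode records far richer process metadata than this toy edge map):

```python
def lineage(derived_from, artifact):
    """Backward trace over a provenance DAG: derived_from maps each
    artifact to the artifacts it was computed from. Returns every
    ancestor that contributed to `artifact`."""
    seen, stack = set(), [artifact]
    while stack:
        node = stack.pop()
        for parent in derived_from.get(node, ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# a report aggregated from two raw Hadoop inputs
dag = {"report": ["aggregate"], "aggregate": ["raw_a", "raw_b"]}
print(sorted(lineage(dag, "report")))  # ['aggregate', 'raw_a', 'raw_b']
```

The storage and indexing choices the paper evaluates (memory occupation, index disk size, per-node query time) determine how cheaply such traversals can be answered at scale.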