Explore our journals

Browse journals by subject

Research picks

  • Research in the International Journal of Computational Science and Engineering describes a new approach to spotting messages hidden in digital images. The work contributes to the field of steganalysis, which plays a key role in cybersecurity and digital forensics.

    Steganography involves embedding data within common media, such as words hidden among the bits and bytes of a digital image. The image looks no different when displayed on a screen, but someone who knows there is a hidden message can extract or display it. Given the vast number of digital images that now exist, a number that grows at a remarkable rate every day, it is difficult to see how such hidden information might be found by a third party, such as law enforcement. In a sense it is security by obscurity, but it is a powerful technique nevertheless. There are legitimate uses of steganography, of course, but there are perhaps more nefarious ones, and so effective detection is important for law enforcement and security.

    Ankita Gupta, Rita Chhikara, and Prabha Sharma of The NorthCap University in Gurugram, India, have introduced a new approach that improves detection accuracy while addressing the computational challenges associated with processing the requisite large amounts of data.

    Steganalysis involves identifying whether an image contains hidden data. Usually, the spatial rich model (SRM) is employed to detect such hidden messages. It analyses the image to identify the tiny changes in its statistical fingerprint that the addition of hidden data would leave behind. However, SRM is complex, generates a very large number of features, and can overwhelm detection algorithms, reducing their effectiveness. This issue is often referred to as the "curse of dimensionality."

    The team has turned to a hybrid optimisation algorithm called DEHHPSO, which combines three algorithms: the Harris Hawks Optimiser (HHO), Particle Swarm Optimisation (PSO), and Differential Evolution (DE). Each of these algorithms was inspired by natural processes. For example, the HHO algorithm simulates the hunting behaviour of Harris hawks and balances exploration of the environment with targeting optimal solutions. The team explains that by combining HHO, PSO, and DE, they can work through complex feature sets much more quickly than is possible with any single current algorithm, however sophisticated.

    The hybrid approach reduces computational demand by eliminating more than 94% of the features that would otherwise have to be processed. The stripped-back feature set can then be classified with a support vector machine (SVM). The team says this method works better than other meta-heuristic approaches (essentially guided trial-and-error methods) and even better than several deep learning methods, which are usually used to solve more complex problems than steganalysis.
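
    The core of such a wrapper-style approach is easy to sketch: a candidate subset of features is encoded as a binary mask and scored by training a classifier on just those features. Below is a minimal illustration in Python using scikit-learn and synthetic data; it is not the authors' DEHHPSO implementation, only the fitness-evaluation idea that any such metaheuristic would optimise, and the helper name subset_score is hypothetical.

      # Minimal sketch of wrapper-style feature selection for steganalysis-like data.
      # NOT the authors' DEHHPSO code: it only shows how a candidate feature subset
      # (a binary mask) can be scored by an SVM, i.e. the fitness a metaheuristic optimises.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 500))      # 200 images x 500 rich-model-style features (synthetic)
      y = rng.integers(0, 2, size=200)     # 0 = cover image, 1 = stego image (synthetic labels)

      def subset_score(mask):
          """Fitness of a feature subset: cross-validated SVM accuracy minus a size penalty."""
          if mask.sum() == 0:
              return 0.0
          acc = cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()
          return acc - 0.01 * mask.mean()  # reward accuracy, discourage large subsets

      # A metaheuristic (HHO, PSO, DE or a hybrid) would evolve many such masks;
      # here we simply score one random mask that keeps roughly 6% of the features.
      mask = rng.random(X.shape[1]) < 0.06
      print(f"{mask.sum()} features kept, fitness = {subset_score(mask):.3f}")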

    Gupta, A., Chhikara, R. and Sharma, P. (2024) 'An improved continuous and discrete Harris Hawks optimiser applied to feature selection for image steganalysis', Int. J. Computational Science and Engineering, Vol. 27, No. 5, pp.515–535.
    DOI: 10.1504/IJCSE.2024.141339

  • Cloud computing has become an important part of information technology ventures. It offers a flexible and cost-effective alternative to conventional desktop and local computer infrastructures for storage, processing, and other activities. The biggest advantage to startup companies is that while conventional systems require significant upfront investment in hardware and software, cloud computing gives them the power and capacity on a "pay-as-you-go" basis. This model not only reduces initial capital expenditures at a time when a company may need to invest elsewhere but also allows businesses to scale their resources based on demand without extensive, repeated, and costly physical upgrades.

    A study in the International Journal of Business Information Systems has highlighted the role of fuzzy logic in evaluating the cost benefits of migrating to cloud computing. Fuzzy logic, a method for dealing with uncertainty and imprecision, offers a more flexible approach compared to traditional binary logic. Fuzzy logic recognises the shades of grey inherent in most business decisions rather than seeing things in black and white.

    The team, Aveek Basu and Sraboni Dutta of the Birla Institute of Technology in Jharkhand, and Sanchita Ghosh of the Salt Lake City Electronics Complex, Kolkata, India, explains that conventional cost-benefit analyses often fall short when assessing cloud migration due to the inherent unpredictability in factors such as data duplication, workload fluctuations, and capital expenditures. Fuzzy logic, on the other hand, addresses these challenges by allowing decisions to be made that take into account the uncertainties of the real world.

    The team applied fuzzy logic to evaluate three factors associated with the adoption of cloud computing platforms: first, the probability of data duplication; second, capital expenditure; and third, workload variation. By incorporating these factors into the analysis, the team obtained a comprehensive view of the potential benefits and drawbacks of cloud computing from the perspective of a startup company. The approach offers a more adaptable assessment than traditional models.
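
    To make the idea concrete, the sketch below shows, in plain Python, how fuzzy membership values and a single fuzzy rule could combine the three factors into a migration-benefit score. The membership functions, input values, and rule are illustrative assumptions, not the model used in the paper.

      # Toy fuzzy evaluation of cloud-migration benefit from three inputs.
      # The membership functions and the single rule below are illustrative
      # assumptions, not the inference system described in the paper.
      def trimf(x, a, b, c):
          """Triangular membership function evaluated at point x."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      # Inputs on a 0-1 scale (hypothetical values for a startup).
      duplication = 0.3     # probability of data duplication
      capex       = 0.7     # relative capital expenditure of staying on-premises
      workload    = 0.8     # workload variation

      # Fuzzify: how strongly each input counts as "high".
      high_dup   = trimf(duplication, 0.4, 1.0, 1.6)
      high_capex = trimf(capex,       0.4, 1.0, 1.6)
      high_var   = trimf(workload,    0.4, 1.0, 1.6)

      # One fuzzy rule: IF capex is high AND workload variation is high
      # AND duplication is NOT high THEN migration benefit is high.
      benefit = min(high_capex, high_var, 1.0 - high_dup)
      print(f"Fuzzy 'migrate to cloud' benefit: {benefit:.2f}")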

    One of the key findings is that cloud computing leads to a huge reduction in the complexity and costs associated with managing business software and the requisite hardware as well as the endless upgrades and IT support often needed. Cloud service providers manage all of that on behalf of their clients, allowing the business to focus instead on its primary operations rather than IT.

    Basu, A., Ghosh, S. and Dutta, S. (2024) 'Analysing the cloud efficacy by fuzzy logic', Int. J. Business Information Systems, Vol. 46, No. 4, pp.460–490.
    DOI: 10.1504/IJBIS.2024.141318

  • Research in the International Journal of Global Energy Issues has looked at the volatility of rare earth metals traded on the London Stock Exchange. The work used an advanced statistical model known as GJR-GARCH(1,1) to follow and predict market turbulence. The model was found to be the best fit for predicting rare earth price volatility and offers important insights into the stability of these crucial resources.

    Auguste Mpacko Priso of Paris-Saclay University, France, and the Open Knowledge Higher Institute (OKHI), Cameroon, together with an OKHI colleague, explains that the rare earths are a group of 17 metals* with unique and useful chemical properties. They are essential to high-tech products and industry, particularly electric vehicle batteries and renewable energy infrastructure. They are also used in other electronic components, lasers, glass, magnetic materials, and as components of catalysts for a range of industrial processes. As the global transition to reduced-carbon and even zero-carbon technologies moves forward, there is an urgent need to understand the pricing of rare earth metals, as they are such an important part of the technology we need for that environmentally friendly future.

    The team compared the volatility of rare earth prices with that of other metals and stocks. Volatility, or the degree of price fluctuation, was found to be persistent in rare earths, meaning that prices tend to fluctuate continually over time rather than settling quickly at a stable point. For investors and manufacturers dependent on these metals, such constant volatility poses a substantial economic risk. As such, forecasting price changes might help mitigate that risk, leading to greater stability and allowing investors to work in this area with more confidence in the returns they hope to see.
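
    For readers who want to experiment, a GJR-GARCH(1,1) model of this kind can be fitted with the Python arch package. The sketch below uses a synthetic, heavy-tailed return series purely for illustration, whereas the paper works with real rare earth price data.

      # Sketch: fit a GJR-GARCH(1,1) model to a synthetic daily return series and
      # forecast volatility with the `arch` package. The data are random, for
      # illustration only; the paper fits real rare-earth price series.
      import numpy as np
      from arch import arch_model

      rng = np.random.default_rng(1)
      returns = rng.standard_t(df=5, size=1000)   # synthetic percent returns, heavy-tailed

      # o=1 adds the asymmetric (GJR) term to a standard GARCH(1,1).
      model = arch_model(returns, mean="Constant", vol="GARCH", p=1, o=1, q=1, dist="t")
      result = model.fit(disp="off")
      print(result.summary())

      # Five-day-ahead forecast of conditional variance.
      forecast = result.forecast(horizon=5)
      print(forecast.variance.iloc[-1])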

    Other models used in stock price prediction failed to model the volatility of the rare earth metals well, suggesting that this market has distinctive characteristics that affect prices differently from other, more familiar commodities. Given that the demand for and use of rare earth metals are set to surge, there is a need to understand their price volatility and to take this into account in green investments and development. It is worth noting that there is a major political component in this volatility, given that China and other nations with vast reserves of rare earth metal ores do not necessarily share the political views or purposes of the nations demanding these resources.

    Mpacko Priso, A. and Doumbia, S. (2024) 'Price and volatility of rare earths', Int. J. Global Energy Issues, Vol. 46, No. 5, pp.436–453.
    DOI: 10.1504/IJGEI.2024.140736

    *Rare earth metals: cerium, dysprosium, erbium, europium, gadolinium, holmium, lanthanum (sometimes considered a transition metal), lutetium, neodymium, praseodymium, promethium, samarium, scandium, terbium, thulium, ytterbium, yttrium

  • Container ports are important hubs in the global trade network. They have seen enormous growth in their roles over recent years and operational demands are always changing, especially as more sophisticated logistics systems emerge. A study in the International Journal of Shipping and Transport Logistics sheds new light on how changes in this sector are affecting port efficiency, with a focus on the different types of container activities.

    Fernando González-Laxe of the University Institute of Maritime Studies, A Coruña University, and Xose Luis Fernández and Pablo Coto-Millán of the Universidad de Cantabria, Santander, Spain, explain that container ports handle cargo packed in standardized shipping containers, the big metal boxes with which many people are familiar, commonly transported en masse on vast sea-going vessels, unloaded port-side, and loaded onto trains and road transporters for their onward journey. The increasing size of the ships used to transport these containers, some of which can carry up to 25,000 TEUs (twenty-foot equivalent units), means there is increasing pressure on ports to expand their capacity. As such, there is much ongoing effort to automate processes and optimize port operations so that the big container ports remain viable and competitive.

    The team used Data Envelopment Analysis (DEA) to evaluate the efficiency of container ports by comparing the inputs and outputs of their operations. They focused on ten major Spanish container ports – among them the major ports of Algeciras, Barcelona, and Valencia – in order to understand how various types of container activity – import/export, transshipment, and cabotage (coastal shipping) – influence port performance.
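
    The basic DEA calculation can be illustrated with a small linear programme: each port's efficiency is the smallest factor by which its inputs could be scaled while a weighted mix of peer ports still matches its outputs. The sketch below uses SciPy and made-up data for four hypothetical ports, not the study's real inputs, outputs, or model specification.

      # Sketch of an input-oriented CCR DEA model solved as a linear programme with
      # SciPy. Ports, inputs and outputs below are synthetic placeholders.
      import numpy as np
      from scipy.optimize import linprog

      # Rows = ports (DMUs); columns = inputs (e.g. cranes, berth length) and outputs (TEUs handled).
      X = np.array([[12, 900], [8, 600], [20, 1500], [15, 1100]], dtype=float)   # inputs
      Y = np.array([[400], [250], [700], [380]], dtype=float)                    # outputs

      def dea_efficiency(o):
          """Efficiency of port o: minimise theta such that a weighted peer mix
          uses no more than theta * inputs of port o and produces at least its outputs."""
          n, m, s = X.shape[0], X.shape[1], Y.shape[1]
          c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lambda_1..lambda_n]
          A_ub, b_ub = [], []
          for i in range(m):                           # sum_j lambda_j * x_ij <= theta * x_io
              A_ub.append(np.r_[-X[o, i], X[:, i]])
              b_ub.append(0.0)
          for r in range(s):                           # sum_j lambda_j * y_rj >= y_ro
              A_ub.append(np.r_[0.0, -Y[:, r]])
              b_ub.append(-Y[o, r])
          res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, None)] * (n + 1))
          return res.x[0]

      for o in range(X.shape[0]):
          print(f"Port {o}: efficiency = {dea_efficiency(o):.3f}")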

    One of the key findings from the study is the relationship between port efficiency and the types of container activities handled. The team found that there is an inverted U-shape relationship: ports that balanced transshipment (transferring containers between ships at intermediate points) with import/export activities tended to perform better than those that specialized in only one type of activity. This suggests that a diversified approach to container activities may enhance port efficiency.

    The work suggests that by adopting a balanced approach to their activities, container ports could boost efficiency and reinforce their role in the global supply chain.

    González-Laxe, F., Fernández, X.L. and Coto-Millán, P. (2024) 'Transhipment: when movement matters in port efficiency', Int. J. Shipping and Transport Logistics, Vol. 18, No. 4, pp.383–402.
    DOI: 10.1504/IJSTL.2024.140429

  • Dr Dolittle eat your heart out! Researchers writing in the International Journal of Engineering Systems Modelling and Simulation demonstrate how a trained algorithm can identify the trumpeting calls of elephants, distinguishing them from human voices and other animal sounds in the environment. The work could improve safety for villagers and help farmers protect their crops and homesteads from wild elephants in India.

    T. Thomas Leonid of the KCG College of Technology and R. Jayaparvathy of the SSN College of Engineering in Chennai, India, explain how conflicts between people and elephants are becoming increasingly common, especially in areas where human activity has encroached on natural elephant habitats. This is particularly true where agriculture meets forested land. These conflicts are not just an environmental concern; they pose a threat to human life and livelihoods.

    In India, wild elephants are responsible for more human fatalities than large predators are. Their presence also leads to the destruction of crops and infrastructure, which creates a heavy financial burden on rural communities. Of course, the elephants are not to blame; they are wild animals doing their best to survive. The root causes lie in habitat destruction due to human activities such as mining, dam construction, and increasing encroachment into forests for resources like firewood and water.

    As such, finding effective solutions to mitigate human-elephant encounters is becoming increasingly urgent. The team suggests that a way to reduce the number of tragic and costly outcomes would be to put in place an early-warning system. Such a system would recognise elephant behaviour from their vocalisations and allow farmers and others to avoid the elephants or perhaps even safely divert an incoming herd before it becomes a serious and damaging hazard.

    The researchers compared several machine learning models to determine which one best detects and classifies elephant sounds. The models tested included Support Vector Machines (SVM), K-nearest Neighbours (KNN), Naive Bayes, and Convolutional Neural Networks (CNN). They trained each of these algorithms on a dataset of 450 animal sound samples from five different species. One of the key steps in the process is feature extraction, which involves identifying distinctive characteristics within the audio signals, such as frequency, amplitude, and the temporal structure of the sounds. These features are then used to train the machine learning models to recognise elephant calls.
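
    A typical feature-extraction step of this kind can be sketched with the librosa library: an audio clip is converted to MFCCs (mel-frequency cepstral coefficients) and summarised as a fixed-length vector for a classifier. The synthetic low-frequency tone below stands in for a real recording, and the exact features used by the authors may differ.

      # Sketch of the feature-extraction step: turning an audio clip into MFCC
      # features that a classifier (SVM, KNN, naive Bayes or a CNN) can learn from.
      # A synthetic low-frequency tone stands in for a real elephant recording.
      import numpy as np
      import librosa

      sr = 22050
      t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
      clip = 0.5 * np.sin(2 * np.pi * 25 * t)                 # stand-in "rumble" at 25 Hz

      mfcc = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=20)   # 20 coefficients per frame
      features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # fixed-length summary
      print(features.shape)                                   # one 40-value vector per clip

      # With one such vector per labelled clip, training any of the compared models
      # is standard, e.g. sklearn.svm.SVC().fit(X, labels) on the stacked vectors.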

    The most accurate was the Convolutional Neural Network (CNN), a deep learning model that automatically learns complex features from raw data. CNNs are particularly well-suited for this type of task due to their ability to recognise intricate patterns in sound data. The CNN achieved a high accuracy of 84 percent, far better than the other models. This might be improved, but it is sufficiently accurate to form the basis of a reliable, automated system to detect elephants on the march that might be heading towards homes and farms.

    Leonid, T.T. and Jayaparvathy, R. (2024) 'Elephant sound classification using machine learning algorithms for mitigation strategy', Int. J. Engineering Systems Modelling and Simulation, Vol. 15, No. 5, pp.248–252.
    DOI: 10.1504/IJESMS.2024.140803

  • Research in the International Journal of Biometrics introduces a method to improve the accuracy and speed of dynamic emotion recognition using a convolutional neural network (CNN) to analyse faces. The work, undertaken by Lanbo Xu of Northeastern University in Shenyang, China, could have applications in mental health, human-computer interaction, security, and other areas.

    Facial expressions are a major part of non-verbal communication, providing clues about an individual's emotional state. Until now, emotion recognition systems have used static images, which means they cannot capture the changing nature of emotions as they play out over a person's face during a conversation, interview or other interaction. Xu's work addresses this by focusing on video sequences. The system can track changing facial expressions over a series of video frames and then offer a detailed analysis of how a person's emotions unfold in real time.

    However, prior to analysis, the system applies an algorithm, the "chaotic frog leap algorithm", to sharpen key facial features. The algorithm mimics the foraging behaviour of frogs to find optimal parameters in the digital images. The CNN, trained on a dataset of human expressions, is the most important part of the approach, allowing Xu to process visual data by recognizing patterns in new images that match those learned from the training data. By analysing several frames from video footage, the system can capture movements of the mouth, eyes, and eyebrows, which are often subtle but important indicators of emotional changes.
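
    As an illustration of the frame-based CNN idea (not Xu's actual architecture), the PyTorch sketch below classifies individual face crops and averages the per-frame predictions over a short sequence; the layer sizes, 48x48 input resolution, and seven emotion classes are assumptions made for this example.

      # Minimal frame-based emotion CNN in PyTorch; an illustrative stand-in,
      # not the network described in the paper.
      import torch
      import torch.nn as nn

      class EmotionCNN(nn.Module):
          def __init__(self, n_classes=7):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24x24 -> 12x12
              )
              self.classifier = nn.Linear(32 * 12 * 12, n_classes)

          def forward(self, x):                       # x: (batch, 1, 48, 48) grayscale face crops
              h = self.features(x)
              return self.classifier(h.flatten(1))

      model = EmotionCNN()
      frames = torch.randn(8, 1, 48, 48)              # 8 consecutive frames from one clip
      logits = model(frames)
      video_prediction = logits.softmax(dim=1).mean(dim=0).argmax()   # aggregate over frames
      print(video_prediction)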

    Xu reports an accuracy of up to 99 percent, with the system providing an output in a fraction of a second. Such precision and speed are ideal for real-time use in various areas where detecting emotion might be useful without the need for subjective assessment by another person or team. Its potential applications lie in improving user experiences with computer interactions where the computer can respond appropriately to the user's emotional state, such as frustration, anger, or boredom.

    The system might be useful in screening people for emotional disorders without initial human intervention. It could also be used to enhance security systems, allowing access to resources only to those in a particular emotional state and barring entry to an angry or upset person, perhaps. The same system could even be used to identify driver fatigue on transport systems or even in one's own vehicle. The entertainment and marketing sectors might also see applications where understanding emotional responses could improve content development, delivery, and consumer engagement.

    Xu, L. (2024) 'Dynamic emotion recognition of human face based on convolutional neural network', Int. J. Biometrics, Vol. 16, No. 5, pp.533–551.
    DOI: 10.1504/IJBM.2024.140785

  • As computer network security threats continue to grow in complexity, the need for more advanced security systems is obvious. Indeed, traditional methods of intrusion detection have struggled to keep pace with the changes and so researchers are looking to explore alternatives. A study in the International Journal of Computational Systems Engineering suggests that the integration of data augmentation and ensemble learning methods could be used to improve the accuracy of intrusion detection systems.

    Xiaoli Zhou of the School of Information Engineering at Sichuan Top IT Vocational Institute in Chengdu, China, has focused on a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP). This is an advanced version of the standard machine learning model and can create realistic data through a process of competition between two neural networks. Conventional GANs often suffer from unstable training and mode collapse, where the model fails to generate diverse data. The WGAN-GP variant mitigates these issues by incorporating a gradient penalty which, according to the research, helps to stabilize the training process and improve the quality of the generated data. The generated data can then be used effectively to simulate network traffic for intrusion detection with a view to blocking hacking attempts.
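
    The gradient penalty itself is a short piece of code. The PyTorch sketch below shows the standard WGAN-GP penalty term, which nudges the critic's gradient norm towards 1 on points interpolated between real and generated samples; the critic network and the data are placeholders rather than Zhou's implementation.

      # Standard WGAN-GP gradient penalty term (PyTorch sketch). The critic and
      # the real/fake batches are placeholders; `real` and `fake` are assumed to
      # be 2-D tensors of traffic feature vectors, shape (batch, features).
      import torch

      def gradient_penalty(critic, real, fake):
          alpha = torch.rand(real.size(0), 1, device=real.device)
          interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
          scores = critic(interp)
          grads = torch.autograd.grad(outputs=scores.sum(), inputs=interp,
                                      create_graph=True)[0]
          return ((grads.norm(2, dim=1) - 1) ** 2).mean()

      # Usage inside the critic's loss (10.0 is the usual penalty weight):
      # loss_critic = fake_scores.mean() - real_scores.mean() + 10.0 * gradient_penalty(critic, real, fake)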

    There is the potential to enhance the WGAN-GP data quality still further by combining it with a stacking learning module. Stacking is an ensemble learning technique that involves training multiple models and then combining their outputs using a meta-classifier. In Zhou's work, the stacking module integrates the predictions from several WGAN-GP-based models so that network traffic can be classified as normal or intrusive.
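
    In scikit-learn terms, stacking looks like the sketch below: several base classifiers are trained on the (augmented) traffic features and a meta-classifier combines their outputs into a final normal-or-intrusion decision. The choice of base models here is an assumption for illustration, not the combination used in the paper.

      # Stacking ensemble sketch with scikit-learn; the base models are
      # illustrative choices, not those used in Zhou's study.
      from sklearn.ensemble import StackingClassifier, RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC

      stack = StackingClassifier(
          estimators=[("rf", RandomForestClassifier()),
                      ("svm", SVC(probability=True))],
          final_estimator=LogisticRegression(),   # meta-classifier combining base predictions
      )
      # stack.fit(X_augmented, y)                 # X_augmented: real + WGAN-GP-generated samples
      # predictions = stack.predict(X_test)       # 0 = normal traffic, 1 = intrusion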

    The approach was tested against well-established data augmentation methods, including the Synthetic Minority Over-sampling Technique (SMOTE), Adaptive Synthetic Sampling (ADASYN), and a simple version of WGAN. The results showed that the WGAN-GP-based model had an accuracy rate of almost 90%, better than the scores for the other techniques tested. The model can thus distinguish between legitimate and potentially harmful network activity effectively. Optimisation might improve the accuracy further and allow the system to be used to protect governments, corporations, individuals, and others at risk from network security threats.
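
    For comparison, the SMOTE baseline mentioned above is available in the imbalanced-learn package; the short sketch below balances a synthetic, attack-poor dataset by interpolating new minority-class samples. The data are made up purely for illustration.

      # SMOTE baseline augmentation (imbalanced-learn) on synthetic traffic features:
      # minority-class (attack) samples are synthesised by interpolation.
      import numpy as np
      from imblearn.over_sampling import SMOTE

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 20))
      y = np.r_[np.zeros(950, dtype=int), np.ones(50, dtype=int)]   # 950 normal, 50 attacks

      X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
      print(np.bincount(y), "->", np.bincount(y_res))               # classes balanced after SMOTE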

    Zhou, X. (2024) 'Research on network intrusion detection model that integrates WGAN-GP algorithm and stacking learning module', Int. J. Computational Systems Engineering, Vol. 8, No. 6, pp.1–10.
    DOI: 10.1504/IJCSYSE.2024.140760

  • Science-based university spin-offs, especially in the biotech sector, play an important role in transforming cutting-edge academic science into marketable technological products. However, such start-ups face many challenges that can be very different from those encountered by conventional startups. Research in the International Journal of Technology Management has looked at the complexities and potential of such spin-offs and sheds new light on the role played by the academic scientists involved in the process and how launch timing can make all the difference.

    Andrew Park of the University of Victoria, Canada, and colleagues explain that unlike typical start-ups, which might bring a product to market relatively quickly, new biotechnology companies often require long periods of financial investment and lengthy development, testing, and regulatory processes for their products. This is particularly true in drug development, where the path from the laboratory bench to the marketplace can span a decade or more, not least because of the need for extensive clinical trials and the completion of regulatory requirements. As such, there is often a greater need to plan strategically and to use resources effectively even before the spin-off company is officially launched.

    Many laboratory scientists make the leap from bench to business, some with much greater success than others. The successful scientist-entrepreneurs bring with them their research acumen and intellectual property, but also various intangible assets that can make or break a spin-off company. Among those intangibles might be research publications and patents, networks of contacts and collaborators, and access to funding opportunities that might be unavailable to companies with no direct academic links.

    The paper's case studies of three biotechnology spin-offs within the British Columbia innovation ecosystem suggest that the value of intangible assets is usually only realised when strong entrepreneurial capabilities are available to the start-up company. These capabilities are not just about business acumen but also about understanding how to align the technology with market needs, protect intellectual property effectively, and mentor the founding team towards successful biotech commercialization. Critically, the timing of a company launch can correlate strongly with success or failure, the researchers found.

    Park, A., Goudarzi, A., Yaghmaie, P., Thomas, V.J. and Maine, E. (2024) 'The role of pre-formation intangible assets in the endowment of science-based university spin-offs', Int. J. Technology Management, Vol. 96, No. 4, pp.230–260.
    DOI: 10.1504/IJTM.2024.140712

  • A multi-centre research team writing in the International Journal of Metadata, Semantics and Ontologies discusses how they hope to fill a significant gap in the documentation and sharing of research data by focusing on "contextual metadata." The researchers explain that traditionally, research metadata has usually been about research outputs, such as publications or datasets. The new stance considers the detailed information about the research process, such as how the data was generated, the techniques used, and the specific conditions under which the research was conducted.

    The project considered six research domains across the life sciences, social sciences, and the humanities. Semi-structured interviews and a literature review allowed the team to unravel how researchers in each domain manage this kind of contextual metadata. They found that although a considerable amount of such metadata is available, it is often implicit and scattered across various documentation fields. This fragmentation makes it difficult to identify and use the information effectively.

    The team thus suggests that there is a need for a standardized framework for contextual metadata that could be used across all disciplines. Such a framework would support future work to look at the replicability and reproducibility of research, which are important in scientific integrity and validation. Replicability refers to the ability to duplicate a study's results under the same conditions, while reproducibility involves obtaining consistent results using the same datasets and methods.
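
    To give a flavour of what such contextual metadata might look like in practice, the sketch below expresses one hypothetical record as a plain Python dictionary; the field names are illustrative assumptions, not the framework proposed in the paper.

      # Hypothetical contextual-metadata record for a single dataset.
      # Field names are invented for illustration only.
      import json

      record = {
          "dataset": "survey-wave-3",
          "generated_by": {"instrument": "online questionnaire", "software": "LimeSurvey"},
          "methods": ["stratified sampling", "weighting by region"],
          "conditions": {"collection_period": "2021-03 to 2021-06", "consent": "informed, written"},
          "provenance": ["raw export", "anonymisation", "recoding of open answers"],
      }
      print(json.dumps(record, indent=2))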

    Additionally, a standardized approach to contextual metadata could reduce research waste and even help reduce research misconduct by providing a clearer and more consistent way to document research processes. However, there remain many challenges because of the diverse nature of research practices across different disciplines. Differences in funding models, regulatory requirements, and methods mean that a universal framework might not be directly applicable to all fields. As such, the team has proposed a generic framework that recognizes the need for domain-specific adaptations.

    Ohmann, C., Panagiotopoulou, M., Canham, S., Holub, P., Majcen, K., Saunders, G., Fratelli, M., Tang, J., Gribbon, P., Karki, R., Kleemola, M., Moilanen, K., Broeder, D., Daelemans, W. and Fivez, P. (2023) 'Proposal for a framework of contextual metadata in selected research infrastructures of the life sciences and the social sciences & humanities', Int. J. Metadata Semantics and Ontologies, Vol. 16, No. 4, pp.261–277.
    DOI: 10.1504/IJMSO.2023.140695

  • The COVID-19 pandemic not only gave us a global health crisis but also an infodemic, a term coined by the World Health Organization (WHO) to describe the overwhelming flood of information – both accurate and misleading – that inundated media channels. This information complicated the public understanding and response to the pandemic as people struggled to separate fact from fiction.

    Researchers writing in the International Journal of Advanced Media and Communication suggest that a lot of attention has been paid to tracking and mitigating the spread of misinformation, but there has been less focus on the characteristics of the messages and sources that allow information to spread. This gap in the research literature has implications for how we might develop better strategies to counteract misinformation, particularly in times of crisis.

    Ezgi Akar of the University of Wisconsin, USA, looked at social media updates, "tweets" as they were once referred to on the Twitter microblogging platform, which has since been rebranded as "X". At the time of the pandemic, Twitter had famously risen to become a powerful tool that could shape public discourse, and it played an important role in the dissemination of information and social interaction and, unfortunately, in the spread of misinformation.

    The research hoped to reveal how the content of a given update and the credibility of its source might contribute to its spread, or reach, across the social media platform, and beyond. The aim would be to see what factors might then be influenced to reduce the spread of false information, often referred to as fake news in the vernacular of the time.

    Akar's model used three main theoretical frameworks: the Undeutsch hypothesis, which examines the credibility of statements; the four-factor theory, which looks at the various aspects that influence how believable a message is; and source credibility theory, which explores how the perceived reliability of a source affects the dissemination of information. Akar then used the model to analyse a dataset of tweets, both true and false, to look for patterns.

    The findings of the study reveal that while the content of an update – such as the use of extreme sentiments, external links, and media such as photos and videos – affects the likelihood of the update being "liked" or shared ("retweeted"), the credibility of the source has more effect on how widely the information spreads. This suggests that users will engage more with content from seemingly credible sources, even if the content itself is not particularly compelling.

    An additional finding was that updates in all capital letters were more likely to be shared if they provided true information. Usually, messages written in all capital letters are perceived as aggressive, akin to shouting, or naïve, but "all caps" in an important and urgent message seems to override typical user behaviour in certain situations.

    Akar, E. (2024) 'Unmasking an infodemic: what characteristics are fuelling misinformation on social media?', Int. J. Advanced Media and Communication, Vol. 8, No. 1, pp.53–76.
    DOI: 10.1504/IJAMC.2024.140646

News

Prof. Rongbo Zhu appointed as new Editor in Chief of International Journal of Radio Frequency Identification Technology and Applications

Prof. Rongbo Zhu from Huazhong Agricultural University in China has been appointed to take over editorship of the International Journal of Radio Frequency Identification Technology and Applications.

Associate Prof. Debiao Meng appointed as new Editor in Chief of International Journal of Ocean Systems Management

Associate Prof. Debiao Meng from the University of Electronic Science and Technology of China has been appointed to take over editorship of the International Journal of Ocean Systems Management.

Prof. Yixiang Chen appointed as new Editor in Chief of International Journal of Big Data Intelligence

Prof. Yixiang Chen from East China Normal University has been appointed to take over editorship of the International Journal of Big Data Intelligence.

International Journal of Computational Systems Engineering is now an open access-only journal 

Inderscience's Editorial Office has announced that the International Journal of Computational Systems Engineering is now an Open Access-only journal. All accepted articles submitted from 15 August 2024 onwards will be Open Access and will require an article processing charge of US$1600. Authors who submitted articles prior to 15 August 2024 will still have a choice of publishing as a standard or an Open Access article. You can find more information on Open Access here.

Dr. Luigi Aldieri appointed as new Editor in Chief of International Journal of Governance and Financial Intermediation

Dr. Luigi Aldieri from the University of Salerno in Italy has been appointed to take over editorship of the International Journal of Governance and Financial Intermediation.