International Journal of Computer Network and Information Security (IJCNIS)

ISSN: 2074-9090 (Print)

ISSN: 2074-9104 (Online)

DOI: https://doi.org/10.5815/ijcnis

Website: https://www.mecs-press.org/ijcnis

Published By: MECS Press

Frequency: 6 issues per year

Number(s) Available: 136


IJCNIS is committed to bridging the theory and practice of computer network and information security. From innovative ideas to specific algorithms and full system implementations, IJCNIS publishes original, peer-reviewed, high-quality articles in the areas of computer network and information security. IJCNIS is a well-indexed scholarly journal and indispensable reading and reference for people working at the cutting edge of computer networks, information security, and their applications.

 

IJCNIS has been abstracted or indexed by several world-class databases: Scopus, SCImago, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.

Latest Issue
Most Viewed
Most Downloaded

IJCNIS Vol. 17, No. 2, Apr. 2025

REGULAR PAPERS

Design of Congestion Prediction Model on Network Towers using Rough Set Theory for High Speed Low Latency Communication

By D. Priyanka, Y.K. Sundara Krishna

DOI: https://doi.org/10.5815/ijcnis.2025.02.01, Pub. Date: 8 Apr. 2025

Wireless data communication and the variety of wirelessly connected devices have increased dramatically in the past few years. Millimeter wave (mmWave) technology can serve the primary objectives of 5G networks, which include high data throughput and low latency. However, mmWave signals lack substantial diffraction and are consequently more susceptible to obstruction by physical objects in the environment, which can disrupt communication links and cause congestion. Wireless data transmission suffers from blockages and path loss, which cause high latency, reduce transmission speed, and degrade quality of service. To overcome these limitations, a congestion prediction model based on the behaviour of network towers is designed using Rough Set Theory with a hypertuned SVM for low-latency, high-speed data transmission. Data from the different towers is first collected and compiled into a dataset, with Super MICE used as the technique to impute missing values. Rough Set Theory then clusters the data into equivalence classes based on the behaviour of 5G, 4G and 3G wireless networks. A hypertuned SVM with the Gazelle optimization algorithm is applied to predict the congestion level by accurately selecting the hyperparameters. The proposed approach is evaluated with standard performance metrics and contrasted with existing techniques: it attains an informedness of 91%, an Adjusted Rand Index of 0.83, and a Jaccard index of 0.737, while accuracy, precision, sensitivity, error, F1-score, and NPV reach 93%, 92%, 94%, 7%, 92%, and 90%, respectively. According to this evaluation, the proposed model outperforms the existing methods.
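The equivalence-class partitioning that Rough Set Theory performs can be illustrated with a short sketch: two records are indiscernible, and so fall in the same class, when they agree on every chosen attribute. The attribute names and tower records below are hypothetical stand-ins, not the paper's dataset.

```python
from collections import defaultdict

def equivalence_classes(records, attributes):
    """Partition records into rough-set equivalence classes:
    two records are indiscernible if they agree on every
    attribute in `attributes`."""
    classes = defaultdict(list)
    for rec in records:
        key = tuple(rec[a] for a in attributes)
        classes[key].append(rec)
    return dict(classes)

# Hypothetical tower observations (network generation, load band)
towers = [
    {"id": 1, "gen": "5G", "load": "high"},
    {"id": 2, "gen": "4G", "load": "low"},
    {"id": 3, "gen": "5G", "load": "high"},
]
parts = equivalence_classes(towers, ["gen", "load"])
# Towers 1 and 3 are indiscernible on (gen, load)
```

Each resulting class can then be treated as one cluster of tower behaviour before the prediction stage.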

[...] Read more.
∆DHT-Zip: A Delta-difference Hybrid Tree Coding Scheme for End-to-end Packet Compression Framework in Network-on-Chips

By T. Pullaiah, K. Manjunathachari, B. L. Malleswari

DOI: https://doi.org/10.5815/ijcnis.2025.02.02, Pub. Date: 8 Apr. 2025

Thanks to their high transistor counts, Multi-Processor Systems-on-Chip (MPSoCs) deliver more performance than uniprocessor systems. The Network-on-Chip (NoC) in an MPSoC provides scalable connectivity compared to traditional bus-based interconnects. Still, the NoC significantly impacts MPSoC design, as it increases power consumption and network latency. One solution to this problem is packet compression, which minimizes data redundancy within NoC packets and reduces the overall power consumption of the network by shrinking packet size. Although packet compression improves NoC performance, the latency and overhead of the compressor and decompressor demand more memory access time, so the problem calls for a simple, lightweight compression method such as delta compression. Consequently, this research proposes a new delta-difference Hybrid Tree coding scheme (∆DHT-Zip) to de/compress data packets in the NoC framework. In this approach, Delta encoding, Huffman encoding and DNA (deoxyribonucleic acid) tree coding are hybridized to perform packet de/compression. Moreover, Run Length Encoding (RLE) is used to compress the metadata produced by both the encoding and decoding processes. The proposed ∆DHT-Zip method yields decreased packet loss and significant power savings. The simulation results show that ∆DHT-Zip minimizes packet latency and outperforms existing data compression approaches with a mean Compression Ratio (CR) of 1.2%, which is 79.06% greater than the existing FlitZip algorithm.
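Two of the building blocks named above, delta encoding and RLE, can be sketched as follows. This is a minimal illustration only: the Huffman and DNA-tree stages of ∆DHT-Zip are omitted, and the sample payload is hypothetical.

```python
def delta_encode(values):
    """Keep the first value, then successive differences;
    small deltas compress better in later stages."""
    out = [values[0]]
    for prev, cur in zip(values, values[1:]):
        out.append(cur - prev)
    return out

def delta_decode(deltas):
    """Invert delta encoding by cumulative summation."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

def rle_encode(symbols):
    """Run Length Encoding: collapse runs into (symbol, count) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(s, n) for s, n in runs]

payload = [100, 101, 101, 103, 103, 103]   # hypothetical flit values
deltas = delta_encode(payload)             # [100, 1, 0, 2, 0, 0]
```

Note how the delta stream is dominated by small repeated values, which is exactly what RLE and entropy coders exploit.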

[...] Read more.
GMM-based Imbalanced Fractional Whale Particle Filter for Multiple Object Tracking in Surveillance Videos

By Avinash Ratre

DOI: https://doi.org/10.5815/ijcnis.2025.02.03, Pub. Date: 8 Apr. 2025

An imbalanced surveillance video dataset consists of majority and minority classes, normal and anomalous instances, in a nonlinear, non-Gaussian framework. Under a standard particle filter, the normal and anomalous instances give rise to majority and minority samples, or particles, associated with high- and low-probability regions. The minority particles are at high risk of being suppressed by the majority particles, as the proposal probability density function (pdf) favours the highly probable regions of the input data space and so remains a biased distribution. The standard particle filter-based tracker suffers from sample degeneration and sample impoverishment because the biased proposal pdf ignores the minority particles, and the difficulty of designing a correct proposal pdf prevents particle filter-based tracking in imbalanced video data. Existing methods do not address the imbalanced nature of particle filter-based tracking. To alleviate this problem and its tracking challenges, this paper proposes a novel fractional whale particle filter (FWPF) that fuses a fractional calculus-based whale optimization algorithm (FWOA) and the standard particle filter under weighted-sum rule fusion. Integrating the FWPF with an iterative Gaussian mixture model (GMM) with unbiased sample variance and sample mean allows the proposal pdf to adapt to the imbalanced video data. The adaptive proposal pdf makes the FWPF a minimum-variance unbiased estimator for effectively detecting and tracking multiple objects in imbalanced video data. Retaining the first four terms of the fractional calculus makes the FWOA a local and global search operator with an inherent memory property; the fractional calculus in the FWOA oversamples minority particles so that they are diversified with multiple imputations, eliminating data distortion with low bias and low variance.
The proposed FWPF also presents a novel imbalance evaluation metric, tracking distance correlation, for imbalanced tracking over the UCSD surveillance video data, and shows greater efficacy in mitigating the effects of the imbalanced nature of video data than other existing methods. The proposed method also outshines existing methods in precision and accuracy when tracking multiple objects, and its consistently near-zero tracking distance correlation provides efficient imbalance reduction through bias-variance correction compared to the existing methods.

[...] Read more.
Agile Methodology for Identifying Original and Fake Printed Documents based on Secret Raster Formation

By Mariia Nazarkevych, Victoria Vysotska, Vasyl Lytvyn, Yuriy Ushenko, Dmytro Uhryn, Zhengbing Hu

DOI: https://doi.org/10.5815/ijcnis.2025.02.04, Pub. Date: 8 Apr. 2025

A method for identifying original and counterfeit prints has been developed. Security elements are printed using an offset printing method, which we call original printing; in parallel, bitmap security elements are printed on copiers, a process we call fake printing. These types of rasterisation were developed so that the difference between an original print and a fake print is visible to the naked eye. A method of detecting fake printing has also been developed, based on measuring changes in the raster dot percentage, dot gain, trapping, optical density, ∆Lab, and tonality. Protection of the printed document is created when the image is transformed by amplitude-modulated rasterisation based on the mathematical apparatus of Ateb-functions. During rasterisation, we create thin graphic elements of different shapes, calculated according to the developed methods. The size of a single raster dot depends on the chosen rasterisation method and the tonal gradation value of each corresponding pixel in the image. During rasterisation, a raster structure is formed in which the value of each raster element is related to the tonal gradation through the Ateb-function, as well as to changes in the angle, lines and shapes of the curves of a single raster element. We demonstrate raster image printing on various paper samples that are in wide use today.

[...] Read more.
An Efficient IoT Based Intrusion Detection System Using Optimization Kernel Extreme Learning Machine

By Laiby Thomas, Anoop B. K.

DOI: https://doi.org/10.5815/ijcnis.2025.02.05, Pub. Date: 8 Apr. 2025

The Internet of Things (IoT) is an ever-expanding network that links objects to the web so that they can communicate with one another using standardized protocols. Recently, IoT networks have been extensively used in advanced applications such as smart factories, smart homes, smart grids, and smart cities. They can be used in conjunction with artificial intelligence (AI) and machine learning to facilitate a data collection procedure that is both simpler and more dynamic. Along with the services provided by IoT applications, various security issues arise. IoT devices are mainly accessed through untrusted networks such as the Internet, which leaves them unprotected against a wide range of malicious attacks. The detection performance of current IDSs is hindered by false alarms, low detection rates, unbalanced datasets, and slow response times. This study proposes a new intrusion detection system (IDS) for the IoT that utilizes the Chaotic Improved Black Widow Optimization Kernel Extreme Learning Machine (CIBWO-KELM) algorithm to address these problems. First, the dataset is pre-processed using min-max normalization and by converting string values and IP addresses into numerical values. The highest-performing feature set is selected through the information gain method (IGM), and finally intrusion detection is performed by the CIBWO-KELM algorithm. Python is the tool used for testing, while the BoT-IoT dataset is used for simulation analysis. The suggested model achieves an accuracy of 99.7% on the BoT-IoT dataset. In addition, the results of the studies demonstrate that the proposed model outperforms other current techniques.
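The pre-processing steps described above, min-max normalization and mapping IP addresses to numbers, can be sketched in a few lines; the sample values are hypothetical, not drawn from BoT-IoT.

```python
def min_max_normalize(column):
    """Scale a numeric feature column into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

def ip_to_int(ip):
    """Map a dotted-quad IPv4 address to a single integer,
    as in the string-to-numeric pre-processing step."""
    a, b, c, d = (int(p) for p in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

pkts = [2, 10, 6]                 # hypothetical packet counts
norm = min_max_normalize(pkts)    # [0.0, 1.0, 0.5]
addr = ip_to_int("192.168.0.1")
```

Normalizing every feature into a common range keeps large-valued fields (like byte counts) from dominating the learning stage.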

[...] Read more.
Traffic Adaptive Small Cell Planning in Heterogeneous Networks

By Kuna Venkateswararao, Tejas M. Modi, Pravati Swain, Srinivasa Rao Bendi

DOI: https://doi.org/10.5815/ijcnis.2025.02.06, Pub. Date: 8 Apr. 2025

The small cell is a key enabler for massive connectivity and higher data rates in future generations of cellular communication systems. Two challenges in heterogeneous networks (HetNets) are effective resource utilization and the deployment of an optimal number of small base stations (SBSs) under dynamic mobile traffic patterns. In this paper, we design a traffic adaptive small cell planning (TASCP) scheme to minimize the deployment of SBSs and enhance network energy efficiency without compromising the quality of service (QoS) of user equipments (UEs). The proposed TASCP consists of two phases: small cell formation (SCF) and small cell optimization (SCO). SCF creates the initial association between the UEs and SBSs. SCO then switches the modes (active/sleep) of SBSs according to the dynamic traffic load: changing an SBS from active mode to sleep mode is based on the traffic load shared cooperatively by neighboring SBSs. The proposed TASCP method is compared with state-of-the-art algorithms, namely the Self-organized SBS Deployment Strategy (SSDS) and the UE Association and SBS On/Off (USOF) algorithm. Network performance is measured in terms of network energy efficiency, throughput, convergence time, and the number of active small base stations. The performance of the proposed TASCP increases significantly compared to the state-of-the-art algorithms.

[...] Read more.
Efficient Cloud Computing Security Using Hybrid Optimized AES-IQCP-ABE Cryptography Algorithm

By Jayaprakash Jayachandran, Dahlia Sam, Kanya Nataraj

DOI: https://doi.org/10.5815/ijcnis.2025.02.07, Pub. Date: 8 Apr. 2025

Data management has been revolutionized by cloud computing technologies, which have lowered the barriers of expensive infrastructure and storage limits for users. The advantages of the cloud have enabled significant cloud adoption in major businesses. However, the privacy of cloud-based data remains the most crucial problem for data owners due to various security risks. Many researchers have proposed methods to maintain the confidentiality of data, including attribute-based encryption (ABE); still, security issues continue to dog the cloud. To protect data privacy, a new encryption model, Advanced Encryption Standard - Improved Quantum Ciphertext Policy Attribute-based Encryption (AES-IQCP-ABE), is introduced in the present research. The suggested method encrypts the data twice: first, the data and its attributes are encrypted using ABE; second, the encrypted data is encrypted with the AES technique before being delivered to authorized users. A dynamic chaotic map function is used in the proposed approach to protect user attributes throughout key initialization, data encryption, and data decryption. The inputs used for encryption in the proposed research are both unstructured and structured large-scale medical data. In terms of computational memory and the time for cloud data encryption and decryption, the proposed model outperforms previous ABE-based encryption and decryption algorithms.

[...] Read more.
Channel Aware Power Allocation and Diversity Gain Selection for MIMO NOMA System

By Suprith P. G., Mohammed Riyaz Ahmed, Mithileysh Sathiyanarayanan

DOI: https://doi.org/10.5815/ijcnis.2025.02.08, Pub. Date: 8 Apr. 2025

In recent communications systems, multiple-input multiple-output (MIMO), orthogonal frequency division multiplexing (OFDM) and Non-Orthogonal Multiple Access (NOMA) are major sub-system techniques of 5G wireless communications for optimizing latency and Bit Error Rate (BER) and improving throughput. The proposed design manages resource allocation among these techniques to meet the requirements using NOMA and MIMO. Iterative waterfilling-based power allocation (PA) in MIMO and NOMA improves Quality of Service (QoS), and a NOMA cell-free massive MIMO system is investigated considering the effects of linear and individual channel estimation. The proposed system also optimizes the user pairing approach for grouped users to maximize the downlink rate per user, so that PA remains acceptable at the cost of complexity. Finally, experimental results over different noisy channels demonstrate that BER and latency are minimized without performance degradation compared to the existing PA schemes. The design is validated for a single user and for 2, 4 and 8 users under different noisy channels, and also for uplink transmission under the same channels using iterative waterfilling-based PA in MIMO and NOMA. Based on the obtained simulation results, BER is improved by 8%, while SNR, throughput and PAPR are optimally improved by 5.5%, 7% and 6%, respectively.
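The waterfilling-based power allocation mentioned above can be illustrated with the classic iterative scheme for parallel channels: pour a total power budget so that stronger channels receive more power. This is a generic sketch under assumed channel gains, not the paper's exact MIMO-NOMA formulation.

```python
def waterfill(gains, total_power, iters=60):
    """Water-filling power allocation: p_i = max(0, mu - 1/g_i),
    with the water level mu found by bisection so that the
    allocated powers sum to the total budget."""
    inv = [1.0 / g for g in gains]          # inverse channel gains
    lo, hi = 0.0, max(inv) + total_power    # bracket for the water level
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - n) for n in inv)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - n) for n in inv]

# Hypothetical gains: strong, medium, and very weak sub-channels
p = waterfill([1.0, 0.5, 0.1], total_power=10.0)
# The weakest channel may receive no power at all
```

With these gains the water level settles near 6.5, so the weakest channel (1/g = 10) stays above the water and is allocated zero power.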

[...] Read more.
Machine Learning-based Intrusion Detection Technique for IoT: Simulation with Cooja

By Ali H. Farea, Kerem Kucuk

DOI: https://doi.org/10.5815/ijcnis.2024.01.01, Pub. Date: 8 Feb. 2024

The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features that we now depend on, with less effort and faster real-time response. However, it is still vulnerable to various threats and attacks due to the obstacles of its heterogeneous ecosystem, adaptive protocols, and self-configuration. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate the proposed dataset, which reflects the 6LoWPAN features in the IoT. Six scenarios have been implemented for the analyzed attacks: three are free of malicious nodes, and the others include malicious nodes. The benign scenarios serve as a benchmark for the malicious scenarios, allowing comparison, extraction, and exploration of the features affected by attackers. These features are used as input criteria to train and test our proposed hybrid Intrusion Detection and Prevention System (IDPS) to detect and prevent 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested, with improved accuracy, on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class settings. On our proposed dataset, before the accuracy improvement the ANN classifier achieved 95.65% and 99.95% in the 4-class and 2-class modes, respectively, while after the accuracy optimization it reached 99.84% and 99.97%, respectively. For the Edge IIoT dataset, the ANN classifier achieved 95.14% and 99.86% in the 15-class and 2-class modes before the improvement, and 97.64% and 99.94% after. The decision tree-based models also yield lightweight models, owing to their lower computational complexity, and are thus appropriate for edge computing deployment, whereas the other ML models are heavyweight, require more computation, and are more appropriately deployed in cloud or fog computing in IoT networks.

[...] Read more.
D2D Communication Using Distributive Deep Learning with Coot Bird Optimization Algorithm

By Nethravathi H. M., Akhila S., Vinayakumar Ravi

DOI: https://doi.org/10.5815/ijcnis.2023.05.01, Pub. Date: 8 Oct. 2023

D2D (device-to-device) communication has a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication have several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a Coot Bird Optimization algorithm has been proposed. In this work, D2D communication is combined with the Coot Bird Optimization algorithm to enhance the performance of distributed deep learning. Reducing the interference of the eNB with the use of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group, and each device works independently, reducing the training time of the devices. The model provides independent resource allocation with an optimized power value and the lowest Bit Error Rate for D2D communication while sustaining the quality of service. The model is trained and tested successfully and is found to work for power allocation with an accuracy of 99.34%, giving a best fitness of 80%, a worst fitness value of 46%, a mean value of 6.76 and a standard deviation of 0.55, showing better performance compared to the existing works.

[...] Read more.
Classification of HHO-based Machine Learning Techniques for Clone Attack Detection in WSN

By Ramesh Vatambeti, Vijay Kumar Damera, Karthikeyan H., Manohar M., Sharon Roji Priya C., M. S. Mekala

DOI: https://doi.org/10.5815/ijcnis.2023.06.01, Pub. Date: 8 Dec. 2023

Thanks to recent technological advancements, low-cost sensors with processing and communication capabilities are now feasible. A Wireless Sensor Network (WSN), for example, is a network whose nodes are mobile computers that exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to clone-node or replication attacks because of their limited processing power, memory, and battery life, and their lack of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID, giving the attacker internal control of the network and allowing them to mimic the behaviour of genuine nodes. This is why researchers are so intent on developing better clone attack detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks, effectively enough to prevent cloning attacks from happening in the first place. A low-cost identity verification process identifies clones both at specific locations and across the network. Using an Optimized Extreme Learning Machine (OELM), with the ELM kernels tuned by the Horse Herd Metaheuristic Optimization Algorithm (HHO), this technique safeguards the network from node identity replicas, from which the most reliable transmission path may be selected. The procedure is intended for retrieving data from a network node. The simulation results demonstrate the performance analysis of several factors, including sensitivity, specificity, recall, and detection rate.
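The core intuition behind location-based clone detection, the same node ID being claimed at two implausibly distant positions, can be sketched as follows. This illustrates only the identity-replica idea, not the paper's OELM/HHO classifier, and the claims are hypothetical.

```python
from math import dist

def detect_clones(location_claims, radio_range=1.0):
    """Flag node IDs claimed at positions farther apart than a
    plausible single-node displacement: the basic test used in
    location-based clone (replica) detection."""
    first_seen = {}
    clones = set()
    for node_id, pos in location_claims:
        if node_id in first_seen and dist(first_seen[node_id], pos) > radio_range:
            clones.add(node_id)
        first_seen.setdefault(node_id, pos)
    return clones

# Hypothetical claims: node 1 appears at two distant positions
claims = [(1, (0.0, 0.0)), (2, (0.2, 0.1)), (1, (5.0, 5.0))]
bad = detect_clones(claims)
```

A real system would exchange such claims through witness nodes; here the list stands in for that protocol.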

[...] Read more.
Public vs Private vs Hybrid vs Community - Cloud Computing: A Critical Review

By Sumit Goyal

DOI: https://doi.org/10.5815/ijcnis.2014.03.03, Pub. Date: 8 Feb. 2014

These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: Public, Private, Hybrid and Community. This review paper answers the question of which model would be most beneficial for your business. All four models are defined, discussed and compared, with their benefits and pitfalls, giving you a clear idea of which model to adopt for your organization.

[...] Read more.
A Critical appraisal on Password based Authentication

By Amanpreet A. Kaur, Khurram K. Mustafa

DOI: https://doi.org/10.5815/ijcnis.2019.01.05, Pub. Date: 8 Jan. 2019

There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This paper reviews the field of password-based authentication, introducing and analyzing different authentication schemes, their respective advantages and disadvantages, and the probable causes of the 'very disconnect' between users and password mechanisms. The evolution of passwords and how deeply they have rooted themselves in our lives is remarkable. The paper addresses the gap between the user and industry perspectives on password authentication, the state of the art of password authentication, and how its most investigated topics have changed over time. The authors distinguish two levels of password-based authentication, a 'User Centric Design Level' and a 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened with respect to the issues currently holding them back.

[...] Read more.
Social Engineering: I-E based Model of Human Weakness for Attack and Defense Investigations

By Wenjun Fan, Kevin Lwakatare, Rong Rong

DOI: https://doi.org/10.5815/ijcnis.2017.01.01, Pub. Date: 8 Jan. 2017

Social engineering is an attack aimed at manipulating victims into divulging sensitive information or taking actions that help the adversary bypass the secure perimeter around information-related resources so that the attack goals can be achieved. Though there are a number of security tools, such as firewalls and intrusion detection systems, that protect machines from being attacked, a widely accepted mechanism for preventing victims from being defrauded is lacking. The human element is often the weakest link in an information security chain, especially in a human-centered environment. In this paper, we show that human psychological weaknesses result in the main vulnerabilities exploited by social engineering attacks. We capture two essential levels, the internal characteristics of human nature and external circumstantial influences, to explore the root cause of these weaknesses, and we find that the internal characteristics of human nature can be converted into weaknesses by external circumstantial influences. We therefore propose the I-E based model of human weakness for social engineering investigation. Based on this model, we analyze the vulnerabilities exploited by different social engineering techniques, and we summarize several defense approaches for fixing the human weaknesses. This work can help security researchers gain insight into social engineering from a different perspective and, in particular, enhance current and future research on social engineering defense mechanisms.

[...] Read more.
Forensics Image Acquisition Process of Digital Evidence

By Erhan Akbal, Sengul Dogan

DOI: https://doi.org/10.5815/ijcnis.2018.05.01, Pub. Date: 8 May 2018

To solve crimes committed on digital materials, the materials must be copied. Evidence must be copied properly, using valid methods that ensure legal admissibility; otherwise, the material cannot be used as evidence. Acquiring images of the materials from the crime scene with the proper hardware and software tools makes the obtained data legal evidence. The choice of format and verification function during image acquisition affects later steps in the investigation process. For this purpose, investigators use hardware and software tools. Hardware tools assure the integrity and authenticity of the image through write-protection. Software tools either drive certain write-protect hardware tools or acquire disks that are directly attached to a computer. Image acquisition through write-protect hardware tools gives the result the status of a forensic copy; image acquisition through software tools alone does not. During the acquisition process, different formats such as E01, AFF and DD can be chosen. To establish the integrity and authenticity of the copy, hash values have to be calculated using verification functions from the SHA and MD families. In this study, image acquisition processes using both hardware and software are shown. A 200 GB hard disk is acquired in hardware through Tableau TD3 and CRU Ditto; images of the same storage are also taken through Tableau, CRU and RTX USB bridges and through FTK Imager and Forensic Imager, and comparative performance assessment results are presented.
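The hash-based verification step described above can be sketched in a few lines; this is a generic illustration using Python's hashlib, not the workflow of any specific imaging tool, and the stand-in "images" are tiny temporary files.

```python
import hashlib
import os
import tempfile

def image_hashes(path, chunk=1 << 20):
    """Compute MD5 and SHA-256 of a disk image in streaming
    fashion, so arbitrarily large images fit in memory."""
    md5, sha = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            md5.update(block)
            sha.update(block)
    return md5.hexdigest(), sha.hexdigest()

def verify_copy(source, image):
    """A copy verifies only if both digests match the source."""
    return image_hashes(source) == image_hashes(image)

# Demo with two identical stand-in "images"
with tempfile.TemporaryDirectory() as d:
    src, dst = os.path.join(d, "src.dd"), os.path.join(d, "copy.dd")
    for p in (src, dst):
        with open(p, "wb") as f:
            f.write(b"\x00" * 4096)
    ok = verify_copy(src, dst)
```

Computing two independent digests guards against the (remote) possibility of a collision in a single hash function.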

[...] Read more.
Statistical Techniques for Detecting Cyberattacks on Computer Networks Based on an Analysis of Abnormal Traffic Behavior

By Zhengbing Hu, Roman Odarchenko, Sergiy Gnatyuk, Maksym Zaliskyi, Anastasia Chaplits, Sergiy Bondar, Vadim Borovik

DOI: https://doi.org/10.5815/ijcnis.2020.06.01, Pub. Date: 8 Dec. 2020

The presented paper is topical because the quantity and diversity of attacks on computer networks grow year on year, causing significant losses for companies. This work addresses problems such as existing methods for locating anomalies and current hazards in networks, the consideration of statistical methods as effective means of anomaly detection, and the experimental evaluation of the chosen method's effectiveness. A method for capturing and analyzing network traffic during passive monitoring of a network segment is considered, and a way of processing numerous network traffic indexes for subsequent evaluation of the network's information security level is proposed. Using the presented methods and concepts increases the reliability of a network segment through the timely capture of network anomalies that may indicate possible hazards; such information is very useful for the network administrator. To demonstrate the method's effectiveness, several network attacks whose data is stored in the specialised DARPA dataset were chosen, and the relevant parameters for every attack type were calculated. In this way, the start and end times of an attack can be determined by this method with insignificant error for some attack types.
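A minimal statistical anomaly detector of the kind discussed above can be sketched as a deviation test against a baseline; the traffic counts and the three-sigma threshold below are illustrative assumptions, not the paper's exact indexes.

```python
from statistics import mean, stdev

def flag_anomalies(traffic, baseline_len, k=3.0):
    """Flag time intervals whose traffic index deviates from the
    baseline mean by more than k standard deviations."""
    base = traffic[:baseline_len]          # assumed attack-free window
    mu, sigma = mean(base), stdev(base)
    return [i for i, x in enumerate(traffic)
            if abs(x - mu) > k * sigma]

# Hypothetical packets-per-second counts; a flood starts at index 8
pps = [100, 102, 98, 101, 99, 100, 103, 97, 900, 950]
alerts = flag_anomalies(pps, baseline_len=8)
```

The index of the first flagged interval approximates the attack start time, which is exactly the quantity the paper recovers from the DARPA traces.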

[...] Read more.
Comparative Analysis of KNN Algorithm using Various Normalization Techniques

By Amit Pandey, Achin Jain

DOI: https://doi.org/10.5815/ijcnis.2017.11.04, Pub. Date: 8 Nov. 2017

Classification is the technique of identifying and assigning individual quantities to a group or a set. In pattern recognition, the K-Nearest Neighbors algorithm is a non-parametric method for classification and regression. The K-Nearest Neighbor (KNN) technique has been widely used in data mining and machine learning because it is simple yet very useful, with distinguished performance. Classification is used to predict the labels of test data points after training on sample data. Over the past few decades, researchers have proposed many classification methods, but KNN remains one of the most popular ways to classify a dataset. The input consists of the k closest examples in the feature space; the neighbors are picked from a set of objects with known properties or values, which can be considered the training dataset. In this paper, we use two normalization techniques to classify the IRIS dataset and measure the classification accuracy with cross-validation using R programming. The two approaches considered in this paper are data with Z-Score normalization and data with Min-Max normalization.
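A minimal KNN classifier with Min-Max normalization can be sketched as follows (in Python rather than the paper's R, with toy values standing in for IRIS measurements; column ranges are assumed non-constant):

```python
from math import dist  # Euclidean distance

def min_max_columns(rows):
    """Min-Max normalize each feature column into [0, 1]."""
    cols = list(zip(*rows))
    lows, highs = [min(c) for c in cols], [max(c) for c in cols]
    return [
        tuple((v - lo) / (hi - lo) for v, lo, hi in zip(row, lows, highs))
        for row in rows
    ]

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance."""
    ranked = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], query))
    votes = [y for _, y in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy 2-feature data standing in for IRIS measurements
X = [(5.1, 3.5), (4.9, 3.0), (6.7, 3.1), (6.3, 2.5)]
y = ["setosa", "setosa", "versicolor", "versicolor"]
Xn = min_max_columns(X)
pred = knn_predict(Xn, y, query=(0.1, 0.9), k=3)
```

Normalization matters for KNN because unnormalized features with large ranges dominate the distance, which is precisely why the paper compares Z-Score against Min-Max scaling.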

[...] Read more.
Performance Analysis of 5G New Radio LDPC over Different Multipath Fading Channel Models

By Mohammed Hussein Ali, Ghanim A. Al-Rubaye

DOI: https://doi.org/10.5815/ijcnis.2023.04.01, Pub. Date: 8 Aug. 2023

Creating and developing wireless network communication that is fast, secure, dependable, and cost-effective enough to suit the needs of the modern world is a difficult undertaking. Channel coding schemes must be chosen carefully to ensure timely and error-free data transfer over a noisy, fading channel; channel coding is an essential part of a communication system's architecture for ensuring that the data received matches the data transmitted. The NR LDPC (New Radio Low Density Parity Check) code has been recommended for the fifth generation (5G) to meet the need for more internet traffic capacity in mobile communications and to provide both high coding gain and low energy consumption. This research presents NR-LDPC for data transmission over two different multipath fading channel models, Nakagami-m and Rayleigh, in AWGN. The BER performance of the NR-LDPC code using two kinds of rate-compatible base graphs is examined for the QAM-OFDM (Quadrature Amplitude Modulation-Orthogonal Frequency Division Multiplexing) system and compared to the uncoded QAM-OFDM system. The BER performance obtained via Monte Carlo simulation demonstrates that the LDPC code works efficiently over both non-fading and fading channel models and achieves significant BER improvement with high coding gain. LDPC codes make sense for 5G because they are more efficient for long data transmissions, and the key to a good code is an effective decoding algorithm. The results demonstrate a coding gain improvement of up to 15 dB at a BER of 10^-3.
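The Monte Carlo BER methodology mentioned above can be illustrated for the simplest case, uncoded BPSK over AWGN; this is a baseline sketch only, not the paper's NR-LDPC QAM-OFDM simulation, and the bit count is an arbitrary choice.

```python
import math
import random

def estimate_ber(ebn0_db, n_bits=200_000, seed=42):
    """Monte Carlo BER estimate for uncoded BPSK over AWGN:
    transmit random bits as +/-1, add Gaussian noise scaled to
    the requested Eb/N0, and count decision errors."""
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))   # noise std for unit-energy symbols
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit else -1.0
        rx = tx + rng.gauss(0.0, sigma)
        errors += (rx > 0) != bool(bit)  # hard decision vs. true bit
    return errors / n_bits
```

At 0 dB the theoretical BER is Q(sqrt(2)) ≈ 0.079; a coded scheme such as NR-LDPC would push the waterfall region far to the left of this uncoded curve.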

[...] Read more.
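As a rough illustration of the Monte Carlo BER methodology used above, the sketch below estimates the bit error rate of uncoded BPSK over a flat Rayleigh fading channel in AWGN. This is a deliberately simplified stand-in for the paper's LDPC-coded QAM-OFDM chain, intended only to show the simulate-count-errors loop; the function name and parameters are illustrative:

```python
import numpy as np

def ber_rayleigh_bpsk(ebn0_db, n_bits=200_000, seed=1):
    # Monte Carlo BER for uncoded BPSK over a flat Rayleigh fading
    # channel in AWGN, assuming ideal coherent detection.
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    s = 1 - 2 * bits                       # BPSK mapping: 0 -> +1, 1 -> -1
    # Rayleigh fading coefficient per symbol, E[|h|^2] = 1.
    h = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    noise = (rng.standard_normal(n_bits)
             + 1j * rng.standard_normal(n_bits)) * np.sqrt(1 / (2 * ebn0))
    r = h * s + noise
    # Equalize with the known channel and take a hard decision.
    bits_hat = (np.real(r / h) < 0).astype(int)
    return np.mean(bits_hat != bits)
```

Sweeping `ebn0_db` and plotting the result against the coded system's curve is how the coding gain (the horizontal dB gap at a target BER such as 10^-3) is read off.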
Optimal Route Based Advanced Algorithm using Hot Link Split Multi-Path Routing Algorithm

By Akhilesh A. Waoo Sanjay Sharma Manjhari Jain

DOI: https://doi.org/10.5815/ijcnis.2014.08.07, Pub. Date: 8 Jul. 2014

This research work describes an advancement of the standard AODV routing protocol for mobile ad-hoc networks. Our mechanism sets up multiple optimal paths, using bandwidth and delay as criteria, and stores these paths in the network. At the time of a link failure, it switches to the next available path. To set up multiple paths, we use the information carried in the RREQ packet and also send the RREP packet along more than one path. This reduces the overhead of local route discovery at the time of link failure, and consequently the end-to-end delay and drop ratio decrease. The main features of our mechanism are its simplicity and improved efficiency. We evaluate through simulations the performance of the AODV routing protocol including our scheme and compare it with the HLSMPRA (Hot Link Split Multi-Path Routing Algorithm) algorithm. Indeed, our scheme reduces the routing load of the network, end-to-end delay, packet drop ratio, and route errors sent. The simulations have been performed using the network simulator OPNET, a discrete-event simulator that models not only the sending and receiving of packets but also their forwarding and dropping. The modified algorithm has improved efficiency and greater reliability than the previous algorithm.

[...] Read more.
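The store-multiple-paths-and-fail-over idea above can be sketched as follows. This is a hypothetical in-memory representation for illustration only; the actual protocol builds these paths from RREQ/RREP packet exchanges inside the simulator:

```python
from dataclasses import dataclass

@dataclass
class Route:
    hops: list          # node IDs along the path, source to destination
    bandwidth: float    # available bandwidth criterion (e.g. Mbps)
    delay: float        # path delay criterion (e.g. ms)

def rank_routes(routes):
    # Prefer low delay; break ties with higher bandwidth.
    return sorted(routes, key=lambda r: (r.delay, -r.bandwidth))

def next_route(routes, failed_link):
    # On link failure, discard every stored path that traverses the
    # failed link (in either direction), then fail over to the best
    # remaining path instead of launching a fresh route discovery.
    a, b = failed_link
    alive = [r for r in routes
             if (a, b) not in zip(r.hops, r.hops[1:])
             and (b, a) not in zip(r.hops, r.hops[1:])]
    ranked = rank_routes(alive)
    return ranked[0] if ranked else None
```

Only when `next_route` returns `None` (no stored path survives) would the protocol fall back to a full AODV route discovery, which is the overhead the scheme avoids.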
Classification of HHO-based Machine Learning Techniques for Clone Attack Detection in WSN

By Ramesh Vatambeti Vijay Kumar Damera Karthikeyan H. Manohar M. Sharon Roji Priya C. M. S. Mekala

DOI: https://doi.org/10.5815/ijcnis.2023.06.01, Pub. Date: 8 Dec. 2023

Thanks to recent technological advancements, low-cost sensors with processing and communication capabilities are now feasible. A Wireless Sensor Network (WSN) is a network in which such nodes exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to a clone node or replication attack because of their limited processing power, memory, battery life, and absence of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID. This would give the attacker complete internal control of the network, allowing them to mimic the genuine nodes' behavior. This is why researchers are so intent on developing better clone attack detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place, using a low-cost identity verification process to identify clones both in specific locations and across the network. Using the Optimized Extreme Learning Machine (OELM), with the kernels of the ELM determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), this technique safeguards the network from node identity replicas, and the most reliable transmission path may be selected. The procedure is meant to be used to retrieve data from a network node. The simulation results demonstrate the performance analysis of several factors, including sensitivity, specificity, recall, and detection.

[...] Read more.
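The ELM component of the approach above can be sketched independently of the HHO tuning step: in an Extreme Learning Machine, the hidden-layer weights stay random and fixed, and only the output weights are solved in closed form by least squares. This is a minimal generic sketch, not the paper's optimized variant:

```python
import numpy as np

class ELM:
    # Minimal single-hidden-layer Extreme Learning Machine:
    # random fixed hidden weights, closed-form output weights.
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Hidden-layer activations with a tanh nonlinearity.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Output weights: least-squares solution of H @ beta ~= y
        # via the Moore-Penrose pseudoinverse.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        # Threshold the real-valued output for binary labels
        # (e.g. clone vs. genuine node).
        return (self._hidden(X) @ self.beta > 0.5).astype(int)
```

In the paper's setting, a metaheuristic such as HHO would search over the ELM's free choices (kernel/hidden-layer parameters) to maximize detection performance; the closed-form output solve is what keeps each candidate evaluation cheap.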
D2D Communication Using Distributive Deep Learning with Coot Bird Optimization Algorithm

By Nethravathi H. M. Akhila S. Vinayakumar Ravi

DOI: https://doi.org/10.5815/ijcnis.2023.05.01, Pub. Date: 8 Oct. 2023

D2D (device-to-device) communication plays a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication have several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a coot bird optimization algorithm has been proposed. In this work, D2D communication is combined with the Coot Bird Optimization algorithm to enhance the performance of distributed deep learning. Reducing the interference of the eNB with the use of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group, with each device working independently, which reduces the training time of the devices. This model confirms independent resource allocation with an optimized power value and the lowest bit error rate for D2D communication while sustaining the quality of service. The model is finally trained and tested successfully and is found to work for power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness value of 46%, a mean value of 6.76, and a standard deviation of 0.55, showing better performance compared to existing works.

[...] Read more.
Machine Learning-based Intrusion Detection Technique for IoT: Simulation with Cooja

By Ali H. Farea Kerem Kucuk

DOI: https://doi.org/10.5815/ijcnis.2024.01.01, Pub. Date: 8 Feb. 2024

The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features that we now depend on, with less effort and faster real-time response. However, it is still vulnerable to various threats and attacks due to the obstacles of its heterogeneous ecosystem, adaptive protocols, and self-configurations. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate the proposed dataset, which reflects the 6LoWPAN features of the IoT. For the analyzed attacks, six scenarios have been implemented. Three of these are free of malicious nodes, and the other scenarios include malicious nodes. The benign scenarios serve as a benchmark for the malicious scenarios, for comparison, extraction, and exploration of the features that are affected by attackers. These features are used as input to train and test our proposed hybrid Intrusion Detection and Prevention System (IDPS) to detect and prevent 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested, with improved accuracy, on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase of the proposed hybrid IDPS, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class modes. On our proposed dataset, the ANN classifier achieved 95.65% and 99.95% accuracy in the 4-class and 2-class modes, respectively, before accuracy optimization, and 99.84% and 99.97% after. On the Edge IIoT dataset, it achieved 95.14% and 99.86% in the 15-class and 2-class modes before optimization, and 97.64% and 99.94% after. Also, the decision-tree-based models are lightweight due to their lower computational complexity, making them appropriate for edge computing deployment, whereas the other ML models are heavyweight and require more computation, making them better suited to cloud or fog computing deployment in IoT networks.

[...] Read more.
A Critical appraisal on Password based Authentication

By Amanpreet A. Kaur Khurram K. Mustafa

DOI: https://doi.org/10.5815/ijcnis.2019.01.05, Pub. Date: 8 Jan. 2019

There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This paper reviews the field of password-based authentication, introducing and analyzing different authentication schemes, their respective advantages and disadvantages, and the probable causes of the 'very disconnect' between users and password mechanisms. The evolution of passwords, and how deeply they have rooted themselves in our lives, is remarkable. The paper addresses the gap between the user and industry perspectives on password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors distinguish password-based authentication into two levels, a 'User Centric Design Level' and a 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened against the issues that are currently holding them back.

[...] Read more.
Public vs Private vs Hybrid vs Community - Cloud Computing: A Critical Review

By Sumit Goyal

DOI: https://doi.org/10.5815/ijcnis.2014.03.03, Pub. Date: 8 Feb. 2014

These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: Public, Private, Hybrid and Community. This review paper answers the question of which model would be most beneficial for your business. All four models are defined, discussed, and compared with their benefits and pitfalls, giving you a clear idea of which model to adopt for your organization.

[...] Read more.
Detecting Remote Access Network Attacks Using Supervised Machine Learning Methods

By Samuel Ndichu Sylvester McOyowo Henry Okoyo Cyrus Wekesa

DOI: https://doi.org/10.5815/ijcnis.2023.02.04, Pub. Date: 8 Apr. 2023

Remote access technologies encrypt data to enforce policies and ensure protection. Attackers leverage such techniques to launch carefully crafted evasion attacks, introducing malware and other unwanted traffic into the internal network. Traditional security controls such as anti-virus software, firewalls, and intrusion detection systems (IDS) decrypt network traffic and employ signature- and heuristic-based approaches for malware inspection. In the past, machine learning (ML) approaches have been proposed for specific malware detection and traffic type characterization. However, decryption introduces computational overhead and dilutes the privacy goal of encryption, and these ML approaches employ limited features and are not objectively developed for remote access security. This paper presents a novel ML-based approach to encrypted remote access attack detection using a weighted random forest (W-RF) algorithm. Key features are determined using feature importance scores. Class weighting is used to address the imbalanced data distribution problem common in remote access network traffic, where attacks comprise only a small proportion of the traffic. Results are presented from evaluating the approach on benign virtual private network (VPN) and attack network traffic datasets that comprise verified normal hosts and common attacks in real-world network traffic. With recall and precision of 100%, the approach demonstrates effective performance. The results for k-fold cross-validation and the receiver operating characteristic (ROC) mean area under the curve (AUC) demonstrate that the approach effectively detects attacks in encrypted remote access network traffic, successfully averting attackers and network intrusions.

[...] Read more.
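The class-weighting remedy for imbalance described above can be sketched as follows. The common "balanced" heuristic assigns each class a weight inversely proportional to its frequency, so the rare attack class counts as much as the common benign class during training (an illustration of the general technique, not the paper's exact W-RF configuration):

```python
import numpy as np

def balanced_class_weights(y):
    # "Balanced" heuristic: weight_c = n_samples / (n_classes * count_c).
    # A class making up 10% of the data gets 9x the weight of a class
    # making up 90%, compensating for the imbalance.
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))
```

A dict like this is the shape accepted by, e.g., scikit-learn's `RandomForestClassifier(class_weight=...)` parameter, which also accepts the string `"balanced"` to apply the same formula internally.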
Synthesis of the Structure of a Computer System Functioning in Residual Classes

By Victor Krasnobayev Alexandr Kuznetsov Kateryna Kuznetsova

DOI: https://doi.org/10.5815/ijcnis.2023.01.01, Pub. Date: 8 Feb. 2023

An important task of designing complex computer systems is to ensure high reliability. Many authors investigate this problem and solve it in various ways. Most known methods are based on the use of natural or artificially introduced redundancy. This redundancy can be used passively and/or actively with (or without) restructuring of the computer system. This article explores new technologies for improving fault tolerance through the use of natural and artificially introduced redundancy of the applied number system. We consider a non-positional number system in residual classes and use the following properties: independence, equality, and small capacity of residues that define a non-positional code structure. This allows you to: parallelize arithmetic calculations at the level of decomposition of the remainders of numbers; implement spatial spacing of data elements with the possibility of their subsequent asynchronous independent processing; perform tabular execution of arithmetic operations of the base set and polynomial functions with single-cycle sampling of the result of a modular operation. Using specific examples, we present the calculation and comparative analysis of the reliability of computer systems. The conducted studies have shown that the use of non-positional code structures in the system of residual classes provides high reliability. In addition, with an increase in the bit grid of computing devices, the efficiency of using the system of residual classes increases. Our studies show that in order to increase reliability, it is advisable to reserve small nodes and blocks of a complex system, since the failure rate of individual elements is always less than the failure rate of the entire computer system.

[...] Read more.
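The residue arithmetic described above can be illustrated in a few lines: a number is stored as its residues modulo pairwise-coprime bases, digit-wise operations on the residues are fully independent (hence parallelizable and spatially separable), and the Chinese Remainder Theorem recovers the positional value. The 3-modulus base set below is an illustrative toy choice:

```python
from math import prod

MODULI = (3, 5, 7)   # pairwise coprime; dynamic range M = 105

def to_rns(x, moduli=MODULI):
    # Encode an integer as its residues; each residue is small-capacity
    # and independent of the others.
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli=MODULI):
    # Carry-free: each residue channel adds independently.
    return tuple((x + y) % m for x, y, m in zip(a, b, moduli))

def rns_mul(a, b, moduli=MODULI):
    return tuple((x * y) % m for x, y, m in zip(a, b, moduli))

def from_rns(r, moduli=MODULI):
    # Chinese Remainder Theorem reconstruction back to positional form.
    M = prod(moduli)
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
    return x % M
```

Because each residue channel is independent, a fault in one channel corrupts only that residue, which is the structural property the article exploits for redundancy and fault tolerance (e.g., adding redundant moduli to detect and correct errors).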
Two-Layer Security of Images Using Elliptic Curve Cryptography with Discrete Wavelet Transform

By Ganavi M. Prabhudeva S.

DOI: https://doi.org/10.5815/ijcnis.2023.02.03, Pub. Date: 8 Apr. 2023

Information security is an important part of the current interactive world. It is essential for the end user to preserve the confidentiality and integrity of their sensitive data, and information encoding is therefore significant to defend against access by non-authorized users. This paper aims to build a system that fuses cryptography and steganography methods, scrambling the input image and embedding it into a carrier medium with an enhanced security level. Elliptic Curve Cryptography (ECC) helps achieve high security with a smaller key size. In this paper, ECC with a modification is used to encrypt and decrypt the input image. The carrier medium is transformed into frequency bands using the Discrete Wavelet Transform (DWT). The encrypted hash of the input is hidden in the high-frequency bands of the carrier medium by the Least-Significant-Bit (LSB) process. This approach achieves data confidentiality along with data integrity, and the integrity is verified using SHA-256. Simulation outcomes of this method have been analyzed by measuring performance metrics. Compared to other existing scrambling methods, this method enhances the security of images, achieving a PSNR of 82.7528 dB, an MSE of 0.0012, and an SSIM of 1.

[...] Read more.
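The embed-then-verify idea above (a SHA-256 digest hidden in least significant bits, with integrity checked on extraction) can be sketched as follows. Operating directly on a uint8 array stands in for the paper's quantized DWT high-frequency coefficients, and the function names are illustrative:

```python
import hashlib
import numpy as np

def embed_hash_lsb(cover, payload):
    # Hide the 256-bit SHA-256 digest of `payload` in the least
    # significant bits of the first 256 cover samples.
    digest = hashlib.sha256(payload).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    stego = cover.copy()
    stego[:256] = (stego[:256] & 0xFE) | bits   # clear LSB, write hash bit
    return stego

def extract_and_verify(stego, payload):
    # Recover the 256 embedded bits and compare them against a freshly
    # computed SHA-256 of the payload (the data-integrity check).
    bits = stego[:256] & 1
    recovered = np.packbits(bits).tobytes()
    return recovered == hashlib.sha256(payload).digest()
```

Since each modified sample changes by at most 1, the distortion stays far below perceptual thresholds, which is why LSB embedding in high-frequency bands yields the very high PSNR reported in the abstract.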
Information Technology Risk Management Using ISO 31000 Based on ISSAF Framework Penetration Testing (Case Study: Election Commission of X City)

By I Gede Ary Suta Sanjaya Gusti Made Arya Sasmita Dewa Made Sri Arsa

DOI: https://doi.org/10.5815/ijcnis.2020.04.03, Pub. Date: 8 Aug. 2020

The Election Commission of X City is the institution that organizes elections in X City. It maintains a website as a medium for delivering information to the public and for managing and structuring the data of voters domiciled in X City. As a website that stores sensitive data, it requires risk management aimed at improving its security. The Information System Security Assessment Framework (ISSAF) is a penetration testing standard used to test website resilience through nine stages of attack testing. It has several advantages for probing security controls against threats and security gaps, and it serves as a bridge between the technical and managerial views of penetration testing by applying the necessary controls on both aspects. Penetration testing is carried out to find security holes in the website, which are then assessed under ISO 31000 risk management, comprising the stages of risk identification, risk analysis, and risk evaluation. The main contribution of this study is the combination of penetration testing using the ISSAF framework with ISO 31000 risk management to obtain the security risks posed by a website. The research found 18 security gaps through penetration testing; based on the ISO 31000 risk management assessment, two are high-level security risks, eight are medium-level security vulnerabilities, and eight are low-level security vulnerabilities. Recommendations are given to mitigate the risks of the gaps found on the website.

[...] Read more.
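The likelihood-impact bucketing behind a risk evaluation like the one above can be sketched as follows. The 1-5 scales and the thresholds are illustrative assumptions for this sketch, not values prescribed by ISO 31000, which leaves the rating scheme to the organization:

```python
def risk_level(likelihood, impact):
    # Qualitative rating: score = likelihood x impact on 1-5 scales,
    # bucketed into low / medium / high (thresholds are illustrative).
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

def assess(findings):
    # findings: list of (name, likelihood, impact) tuples, e.g. the
    # security gaps produced by ISSAF penetration testing.
    return {name: risk_level(l, i) for name, l, i in findings}
```

Feeding the 18 discovered gaps through such a matrix is what yields a breakdown like the paper's two high, eight medium, and eight low risks, which in turn prioritizes the remediation recommendations.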