ISSN: 2074-9090 (Print)
ISSN: 2074-9104 (Online)
DOI: https://doi.org/10.5815/ijcnis
Website: https://www.mecs-press.org/ijcnis
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 133
IJCNIS is committed to bridging the theory and practice of computer network and information security. From innovative ideas to specific algorithms and full system implementations, IJCNIS publishes original, peer-reviewed, high-quality articles in the areas of computer network and information security. IJCNIS is a well-indexed scholarly journal and is indispensable reading and reference for people working at the cutting edge of computer networks, information security, and their applications.
IJCNIS has been abstracted or indexed by several world-class databases: Scopus, SCImago, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
IJCNIS Vol. 16, No. 5, Oct. 2024
REGULAR PAPERS
A method of traffic engineering (TE) based on multi-path routing is proposed in this study. Today, one of the main challenges in networking is organizing an efficient TE system that provides quality of service (QoS) parameters such as an allowable packet-loss rate and traffic re-routing time. Traditional single-path routing facilities do not provide the QoS parameters required for TE. Modern computer networks use static and dynamic routing algorithms, which are characterized by high time complexity and a large amount of service information. This negatively affects the overall state of the network: it leads to network congestion, device failure, and loss of information during routing, and it increases the time needed for traffic re-routing. Research has shown that the most promising way to solve the TE problem in computer networks is a comprehensive approach that combines multi-path routing, SDN technology, and monitoring of the overall network state. This paper proposes a method of traffic engineering in a software-defined network with specified quality-of-service parameters, which reduces the traffic re-routing time and the percentage of packet loss by combining the centralized TE method with multi-path routing. From a practical point of view, the obtained method will improve the quality of service in computer networks in comparison with the known traffic engineering method.
The use of a person's cell phone to commit fraud is known as cell phone fraud. Such scams are usually carried out through fake phone calls or text messages. The victim receives a call from a cell phone scammer, usually claiming an emergency or a legal problem. The purpose of the scam is usually to convince the victim to provide personal or financial information, which may include private data such as social security numbers, bank account details, or credit card information. In addition, users are often subjected to unsolicited calls for marketing and information-gathering initiatives such as campaigns, advertisements, and surveys. In this study, a blockchain-based smartphone application is created to stop these nuisance activities. Transaction times and performance tests have been rigorously performed according to the difficulty levels of the blockchain structure.
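The relationship between blockchain difficulty and transaction time mentioned in the abstract can be illustrated with a minimal proof-of-work loop. This is a stand-alone sketch, not the paper's implementation; all names and the example data are hypothetical:

```python
import hashlib
import time

def mine_block(data: str, prev_hash: str, difficulty: int) -> tuple[str, int]:
    """Find a nonce so the block's SHA-256 hash starts with `difficulty`
    zero hex digits; each extra digit multiplies expected work by 16."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return digest, nonce
        nonce += 1

# Illustrative: mining time grows sharply with the difficulty level.
start = time.time()
block_hash, nonce = mine_block("call-record:blocked", "0" * 64, difficulty=4)
print(block_hash[:12], nonce, f"{time.time() - start:.3f}s")
```

Rerunning with a higher `difficulty` shows why transaction times in such an application must be measured per difficulty level.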
Security attacks have become major obstacles in Wireless Sensor Networks (WSNs), and trust-aware routing is a second line of defense. Aiming to improve on existing routing mechanisms, in this paper we propose Interactive, Onlooker and Capability Trust Aware Routing (IOC-TAR), a multi-trust-attribute framework for trust management in WSNs. IOC-TAR employs three trust features to establish a trustworthy relationship between sensor nodes for their cooperation: interactive trust uses communication interactions, onlooker trust uses neighbor nodes' opinions, and capability trust uses stability and fault tolerance for trust assessment. For each node, one composite trust factor is formulated, and it decides that node's trustworthiness. Extensive simulation experiments are conducted to evaluate the effectiveness and efficiency of the proposed IOC-TAR in identifying malicious nodes and providing attack resilience. The results show that IOC-TAR enhances attack resilience by improving the malicious detection rate and reducing the false positive rate.
Maintaining privacy is becoming increasingly challenging due to growing reliance on cloud services and software. Our data is stored in a virtual environment on unreliable cloud machines, making it susceptible to privacy breaches if not handled properly. Encrypting data before uploading it can be a solution to this problem, but it can be time-consuming. Moreover, the encryption methods used to safeguard digital data so far have not fulfilled privacy and integrity requirements, because encryption cannot function independently: data that is encrypted and stored on a single cloud server can still be accessed by attackers, compromising its privacy. In this paper, we propose a new model based on the user's classification of the privacy level. The proposed model divides the digital file into multiple fragments and encrypts each fragment separately as its own block. Additionally, a permutation is applied to the encrypted fragments before they are stored in the cloud, with replica fragments kept on another cloud service. This approach ensures that even if an attacker gains access to one fragment, they cannot access the entire file, thereby safeguarding the privacy of the data.
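The fragment-encrypt-permute idea can be sketched in a few lines. This is an illustrative sketch only: the XOR step stands in for a real cipher, the keyed shuffle stands in for the paper's permutation, and all function names are hypothetical:

```python
import random

def fragment_and_permute(data: bytes, n_fragments: int,
                         keys: list[bytes], seed: int):
    """Split data into fragments, XOR-'encrypt' each one (placeholder for a
    real cipher), then shuffle fragment order with a secret seed before upload."""
    size = -(-len(data) // n_fragments)  # ceiling division
    fragments = [data[i * size:(i + 1) * size] for i in range(n_fragments)]
    enc = [bytes(b ^ k[i % len(k)] for i, b in enumerate(f))
           for f, k in zip(fragments, keys)]
    order = list(range(n_fragments))
    random.Random(seed).shuffle(order)   # secret permutation of storage order
    return [enc[i] for i in order], order

def reassemble(stored: list[bytes], order: list[int], keys: list[bytes]) -> bytes:
    """Undo the permutation, decrypt each fragment, and rejoin the file."""
    enc = [b""] * len(order)
    for pos, i in enumerate(order):
        enc[i] = stored[pos]
    return b"".join(bytes(b ^ k[i % len(k)] for i, b in enumerate(f))
                    for f, k in zip(enc, keys))
```

An attacker holding a single stored fragment sees only one encrypted piece in an unknown position, which is the property the abstract describes.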
In Software Defined Networking (SDN), the data plane is separated from the control plane to achieve better functionality than traditional networking. However, this approach introduces many security vulnerabilities due to its centralized design. One significant issue is compromised SDN switches: because switches in the SDN architecture are dumb, in the absence of any intelligence they are an easy target for attackers. If one or more switches are attacked and compromised, the whole network might go down or become defunct. Therefore, in this work we have devised a strategy to detect compromised SDN switches, isolate them, and then reconstruct the whole network flow by bypassing the compromised switches. In our proposed detection approach, we use two controllers: a primary controller, and a secondary controller that runs and validates our algorithm in the detection process. Flow reconstruction is the next job of the secondary controller, which conveys the result to the primary controller after execution. The two-controller strategy offloads the additional detection and reconstruction work from the primary controller and thus achieves a balanced outcome in terms of running time and CPU utilization. All the propositions are validated by experimental analysis of the results and compared with the existing state of the art to support our claims.
A new method of propaganda analysis is proposed to identify signs of, and changes in, the behavioural dynamics of coordinated groups, based on machine learning at the disinformation-processing stages. In the course of the work, two models were implemented to recognise propaganda in textual data: one at the message level and one at the phrase level. Within the framework of analysing and recognising text data, in particular fake news on the Internet, an important component of NLP (natural language processing) technology is the classification of words in text data. In this context, classification is the assignment of textual data to one or more predefined categories or classes; for this purpose, the task of binary text classification was solved. Both models are built on logistic regression, and in data preparation and feature extraction the following methods are used: TF-IDF (Term Frequency - Inverse Document Frequency) vectorisation, the Bag-of-Words (BOW) model, Part-Of-Speech (POS) tagging, word embedding using the Word2Vec two-layer neural network, and manual feature-extraction methods aimed at identifying specific techniques of political propaganda in texts. Analogues of the project under development are analysed, and the subject area (the propaganda used in the media and its production methods) is studied. The software implementation is carried out in Python, using the seaborn, matplotlib, gensim, spacy, NLTK (Natural Language Toolkit), NumPy, pandas, and scikit-learn libraries. The model's score for propaganda recognition is 0.74 at the phrase level and 0.99 at the message level. The implementation of the results will significantly reduce the time required to make the most appropriate decision on counter-disinformation measures concerning identified coordinated groups generating disinformation, fake news, and propaganda.
Different classification algorithms for detecting fake and non-fake news from Internet resources and social media are compared by identification accuracy (non-fake/fake): the decision tree (0.98/0.9903), the k-nearest neighbours (0.83/0.999), the random forest (0.991/0.933), the multilayer perceptron (0.9979/0.9945), the logistic regression (0.9965/0.9988), and the Bayes classifier (0.998/0.913). The logistic regression (0.9965), the multilayer perceptron (0.9979), and the Bayesian classifier (0.998) are best for identifying non-fake news; the logistic regression (0.9988), the multilayer perceptron (0.9945), and the k-nearest neighbours (0.999) are best for identifying fake news.
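The TF-IDF vectorisation step named above has a simple classic form: term frequency times log(N / document frequency). A stand-alone sketch of that formulation follows (real pipelines such as scikit-learn's TfidfVectorizer use a smoothed variant; the toy corpus here is illustrative):

```python
import math
from collections import Counter

def tf_idf(corpus: list[list[str]]) -> list[dict[str, float]]:
    """Classic TF-IDF: terms occurring in every document get weight 0,
    while terms rare across the corpus score high."""
    n_docs = len(corpus)
    df = Counter(term for doc in corpus for term in set(doc))  # document freq
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n_docs / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["fake", "news"], ["real", "news"], ["fake", "news", "spread"]]
w = tf_idf(docs)
print(w[0]["news"])  # → 0.0 ("news" occurs in every document)
```

These per-term weights are exactly the kind of feature vector a logistic-regression classifier, as used in the paper, consumes.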
A fundamental and vital aspect of Smart Metering infrastructure is the communication technologies and techniques associated with it, especially between the Smart Meters and the Data Concentrator Unit. Among the many available communication technologies, ZigBee provides a low-cost, low-power, and easy-to-deploy network solution for a Smart Meter network. Limited literature discusses ZigBee as a potential communication technology for long-range networks; hence, a thorough analysis of the suitability of ZigBee for Smart Meter deployment under different environmental conditions, coverage ranges, and obstacles is needed. This work evaluates the performance of an extended ZigBee module in both outdoor and indoor conditions in the presence of different types of obstacles. Parameters are obtained for the path-loss exponent and the standard deviation of the Gaussian random variable to validate the log-normal shadowing model for modeling long-range ZigBee communication. The impact of obstacles on path loss is also considered. The results show that the log-normal shadowing model is a good approximation of ZigBee path-loss behavior. Accordingly, the suitability of ZigBee for a Smart Meter network spanned as a Neighborhood Area Network is also assessed based on the approximated model.
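The log-normal shadowing model validated above has a simple closed form, PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, where X_sigma is a zero-mean Gaussian random variable in dB. A minimal sketch follows; the default parameter values are illustrative, not the values fitted in the study:

```python
import math
import random

def path_loss_db(d: float, d0: float = 1.0, pl_d0: float = 40.0,
                 n: float = 2.9, sigma: float = 4.0, rng=None) -> float:
    """Log-normal shadowing: reference loss at d0, distance term scaled by the
    path-loss exponent n, plus zero-mean Gaussian shadowing of std dev sigma."""
    rng = rng or random
    return pl_d0 + 10.0 * n * math.log10(d / d0) + rng.gauss(0.0, sigma)

# Deterministic part only (sigma=0): loss at 50 m with the assumed parameters.
print(round(path_loss_db(50.0, sigma=0.0), 1))  # → 89.3
```

Fitting n and sigma to measured received-power samples, as the paper does, amounts to a linear regression of loss against log10(d) and taking the residuals' standard deviation.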
The Internet of Things (IoT) is revolutionizing the technological market with exponential year-on-year growth. This proliferation of IoT applications has also attracted hackers and malware seeking remote access to IoT devices. The security of IoT systems has become more critical for consumers and businesses because of their inherently heterogeneous design and open interfaces. Since the release of Mirai in 2016, IoT malware has grown at an exponential rate. IoT systems and their infrastructure have become critical resources, which triggers IoT malware injected by various stakeholders in different settings, and the enormous number of applications causes flooding of insecure packets and commands that fuels threats to IoT applications. The IoT botnet is one of the most critical malware types: it keeps evolving with the network traffic and may harm the privacy of IoT devices. In this work, we present several sets of malware-analysis mechanisms to understand the behavior of IoT malware. We devise an intelligent hybrid model (IHBOT) that integrates malware analysis with distinct machine learning algorithms for the identification and classification of different IoT malware families based on network traffic. A clustering mechanism is also integrated into the proposed model to identify malware families based on a similarity index. We have also applied YARA rules for the mitigation of IoT botnet traffic.
The banking sector increasingly uses smartphones enabled with Near Field Communication (NFC) to improve the services offered to customers. This usage requires a security enhancement of the systems that employ this technology, such as Automated Teller Machines (ATMs). One example is the Color Wheel Personal Identification Number (CWPIN) security protocol, designed to authenticate users on ATMs using NFC-enabled smartphones without typing the PIN code directly. CWPIN has been compared in the literature to several other protocols and was considered easier to use, more cost-effective, and more resistant to various attacks on ATMs such as card-reader skimming, keylogger injection, and shoulder surfing. Nevertheless, we demonstrate in this paper that CWPIN is vulnerable to a multiple-video-recordings intersection attack. We do so through concrete examples and a thorough analysis that reveals a high theoretical probability of attack success. A malicious party can use one or two hidden cameras to record the ATM and smartphone screens during several authentication sessions, then disclose the user's PIN code by intersecting the information extracted from the video recordings. In a more complex scenario, these recordings could be obtained by malware injected into the ATM and the user's smartphone to record their screens during CWPIN authentication sessions. Our intersection attack requires only a few recordings, usually three or four, to reveal the PIN code, and can lead to unauthorized transactions if the user's smartphone is stolen. We also propose a mitigation of the identified attack through several modifications to the CWPIN protocol and discuss its strengths and limitations.
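The core of such an intersection attack is simple set logic: each recorded session constrains the PIN to a candidate set, and the true PIN is the one that survives every session. A hypothetical sketch (extracting candidate sets from actual video frames is the hard part the paper analyses; the sets below are invented for illustration):

```python
def intersect_candidates(recordings: list[set[str]]) -> set[str]:
    """Intersect the PIN candidate sets derived from each recorded session;
    the true PIN is always a member of every set."""
    candidates = recordings[0].copy()
    for rec in recordings[1:]:
        candidates &= rec
    return candidates

# Illustrative: three sessions, each consistent with several PINs.
session1 = {"1234", "5678", "9012", "3456"}
session2 = {"1234", "9012", "7777"}
session3 = {"1234", "3456"}
print(intersect_candidates([session1, session2, session3]))  # → {'1234'}
```

This mirrors the abstract's observation that three or four recordings usually suffice: each additional session shrinks the intersection until only the real PIN remains.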
Multiple access technologies have evolved hand in hand from the first generation to the fifth generation (5G), improving both performance and quality. Non-Orthogonal Multiple Access (NOMA) is the most recent multiple access technology, adopted in 5G communication. The capacity requirements of wireless networks have grown to a large extent with the penetration of ultra-high-definition video transmission, Internet of Things (IoT), and virtual reality applications. This paper develops Physical Layer Network Coding (PNC) for collision resolution in a two-user NOMA environment. Traditionally, NOMA uses Successive Interference Cancellation (SIC) for collision resolution; here, a decoding algorithm is additionally combined with SIC to improve collision-resolution performance. A MATLAB-based simulation of the two-user NOMA environment is developed using Viterbi coding, Low-Density Parity Check (LDPC) coding, and Turbo coding, and the performance parameters of Bit Error Rate (BER) and throughput are compared for these three algorithms. Turbo coding performs best of the three in both BER and throughput: the BER obtained from SIC-Turbo improves by about 14% over the ordinary SIC implementation. Collision-resolution performance increases by 13% to 14% when joint decoding techniques are used, thus increasing the throughput of the NOMA paradigm.
D2D (device-to-device) communication has a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication have several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a coot bird optimization algorithm is proposed. In this work, D2D communication is combined with the coot bird optimization algorithm to enhance the performance of distributed deep learning. Reducing eNB interference with the use of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group, with each device working independently, which reduces the training time of the devices. The model confirms independent resource allocation with an optimized power value and the lowest bit error rate for D2D communication while sustaining quality of service. The model is trained and tested successfully and works for power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness of 46%, a mean of 6.76, and a standard deviation of 0.55, showing better performance than existing works.
Thanks to recent technological advancements, low-cost sensors with processing and communication capabilities are now feasible. A Wireless Sensor Network (WSN), for example, is a network in which the nodes are mobile computers that exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to a clone-node or replication attack because of their limited processing power, memory, battery life, and absence of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID. This would give the attacker complete internal control of the network, allowing them to mimic the genuine nodes' behavior, which is why researchers are intent on developing better clone-attack detection procedures. This research proposes a machine-learning-based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place, using a low-cost identity verification process to identify clones in specific locations as well as across the network. Using the Optimized Extreme Learning Machine (OELM), with the ELM kernels ideally determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), this technique safeguards the network from node identity replicas, and using the node identity replicas, the most reliable transmission path may be selected. The procedure is meant to be used to retrieve data from a network node. The simulation results demonstrate the performance analysis of several factors, including sensitivity, specificity, recall, and detection rate.
The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features that we depend on nowadays, with less effort and faster real-time response. However, it is still vulnerable to various threats and attacks due to the obstacles of its heterogeneous ecosystem, adaptive protocols, and self-configuration. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate the proposed dataset, which reflects the 6LoWPAN features of the IoT. Six scenarios have been implemented for the analyzed attacks: three are free of malicious nodes, and the others include malicious nodes. The benign scenarios serve as a benchmark for the malicious scenarios, enabling comparison, extraction, and exploration of the features affected by attackers. These features are used as input criteria to train and test our proposed hybrid Intrusion Detection and Prevention System (IDPS), which detects and prevents 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested with improved accuracy on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class modes. On our proposed dataset, the ANN classifier achieved 95.65% in the 4-class mode and 99.95% in the 2-class mode before the accuracy improvement, and 99.84% and 99.97%, respectively, after optimization. On the Edge IIoT dataset, it achieved 95.14% in the 15-class mode and 99.86% in the 2-class mode before the improvement, and 97.64% and 99.94%, respectively, after optimization. The decision-tree-based models are lightweight due to their lower computational complexity and are thus appropriate for edge-computing deployment, whereas the other ML models are heavyweight, require more computation, and are more appropriately deployed in cloud or fog computing in IoT networks.
There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This paper presents a review of password-based authentication, introducing and analyzing different authentication schemes, their respective advantages and disadvantages, and the probable causes of the disconnect between users and password mechanisms. The evolution of passwords, and how deeply they have become rooted in our lives, is remarkable. The paper addresses the gap between the user and industry perspectives on password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors distinguish password-based authentication into two levels, a 'User Centric Design Level' and a 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened against the issues that are currently holding them back.
To solve crimes committed on digital materials, the materials have to be copied. Evidence must be copied properly, using valid methods that ensure legal admissibility; otherwise, the material cannot be used as evidence. Acquiring an image of the materials from the crime scene using proper hardware and software tools makes the obtained data legal evidence. The choice of format and verification function during image acquisition affects the later steps of the investigation process. For this purpose, investigators use hardware and software tools. Hardware tools assure the integrity and trueness of the image through a write-protection method, while software tools either drive certain write-protect hardware tools or acquire disks that are directly connected to a computer. Image acquisition through write-protect hardware tools gives the copy the status of a forensic copy; image acquisition through software tools alone does not ensure this. During the image acquisition process, different formats such as E01, AFF, and DD can be chosen. To verify the integrity and trueness of the copy, hash values have to be calculated using verification functions from the SHA and MD series. In this study, the image acquisition process through hardware and software tools is demonstrated. A 200 GB hard disk is acquired in hardware through Tableau TD3 and CRU Ditto; images of the same storage are also taken through Tableau, CRU, and RTX USB bridges and through FTK Imager and Forensic Imager, and comparative performance assessment results are presented.
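The verification step described above reduces to comparing cryptographic digests of the source medium and its image. A minimal sketch using Python's standard hashlib (illustrative only, not the tooling used in the study; real disks would be hashed in chunks rather than as a single in-memory buffer):

```python
import hashlib

def verify_image(original: bytes, copy: bytes) -> dict[str, bool]:
    """Compare MD5 and SHA-256 digests of the source medium and the image;
    a forensic copy is valid only if every digest pair matches."""
    result = {}
    for name in ("md5", "sha256"):
        src = hashlib.new(name, original).hexdigest()
        dup = hashlib.new(name, copy).hexdigest()
        result[name] = (src == dup)
    return result

data = b"disk sector contents"
print(verify_image(data, data))           # both digests match
print(verify_image(data, data + b"\x00"))  # any change breaks verification
```

This is why imaging tools record the hash values at acquisition time: a later re-hash of the image proves it has not been altered since it was taken.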
Social engineering is an attack aimed at manipulating victims into divulging sensitive information or taking actions that help the adversary bypass the secure perimeter around information-related resources so that the attack goals can be achieved. Though there are many security tools, such as firewalls and intrusion detection systems, used to protect machines from attack, a widely accepted mechanism to protect people from fraud is lacking. The human element is often the weakest link in an information security chain, especially in a human-centered environment. In this paper, we show that human psychological weaknesses result in the main vulnerabilities that social engineering attacks exploit. We also examine two essential levels, the internal characteristics of human nature and external circumstantial influences, to explore the root cause of human weaknesses, and we show that internal characteristics of human nature can be converted into weaknesses by external circumstantial influences. Accordingly, we propose the I-E based model of human weakness for social engineering investigation. Based on this model, we analyze the vulnerabilities exploited by different social engineering techniques and present several defense approaches to address the human weaknesses. This work can help security researchers gain insights into social engineering from a different perspective and, in particular, enhance current and future research on social engineering defense mechanisms.
These days, cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should a business consider? There are four types of cloud models available in the market: public, private, hybrid, and community. This review paper answers the question of which model would be most beneficial for a business. All four models are defined, discussed, and compared along with their benefits and pitfalls, giving a clear idea of which model to adopt for an organization.
This paper is topical because of the year-on-year increase in the quantity and diversity of attacks on computer networks, which causes significant losses for companies. The work addresses problems such as existing methods for locating anomalies and current hazards in networks, the consideration of statistical methods as effective anomaly-detection methods, and the experimental evaluation of the chosen method's effectiveness. A method of capturing and analysing network traffic during passive monitoring of a network segment is considered. A way of processing numerous network traffic indices for subsequent evaluation of the network's information security level is also proposed. Using the presented methods and concepts increases the reliability of a network segment through timely capture of network anomalies that could indicate possible hazards; such information is very useful for the network administrator. To demonstrate the method's effectiveness, several network attacks whose data is stored in the specialised DARPA dataset were chosen, and relevant parameters were calculated for every attack type. In this way, the start and end times of an attack can be obtained by this method with insignificant error for some attack types.
Creating and developing wireless network communication that is fast, secure, dependable, and cost-effective enough to suit the needs of the modern world is a difficult undertaking. Channel coding schemes must be chosen carefully to ensure timely and error-free data transfer in a noisy and fading channel. To ensure that the data received matches the data transmitted, channel coding is an essential part of a communication system's architecture. The NR LDPC (New Radio Low-Density Parity-Check) code has been recommended for the fifth generation (5G) to meet the need for more internet traffic capacity in mobile communications and to provide both high coding gain and low energy consumption. This research presents NR-LDPC for data transmission over two different multipath fading channel models, Nakagami-m and Rayleigh, in AWGN. The BER performance of the NR-LDPC code using two kinds of rate-compatible base graphs has been examined for the QAM-OFDM (Quadrature Amplitude Modulation - Orthogonal Frequency Division Multiplexing) system and compared to the uncoded QAM-OFDM system. The BER performance obtained via Monte Carlo simulation demonstrates that the LDPC code works efficiently with both non-fading and fading channel models and achieves significant BER improvements with high coding gain. It makes sense to use LDPC codes in 5G because they are more efficient for long data transmissions, and the key to a good code is an effective decoding algorithm. The results demonstrate a coding gain improvement of up to 15 dB at a BER of 10^-3.
Classification is the technique of identifying and assigning individual quantities to a group or set. In pattern recognition, the K-Nearest Neighbors (kNN) algorithm is a non-parametric method for classification and regression. The kNN technique has been widely used in data mining and machine learning because it is simple yet very useful, with distinguished performance. Classification is used to predict the labels of test data points after training on sample data. Over the past few decades, researchers have proposed many classification methods, but kNN remains one of the most popular methods for classifying data sets. The input consists of the k closest examples in the feature space; the neighbors are picked from a set of objects with known properties or values, which can be considered the training dataset. In this paper, we use two normalization techniques to classify the Iris dataset and measure the classification accuracy with cross-validation in R. The two approaches considered in this paper are data with Z-score normalization and data with min-max normalization.
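The two normalization techniques compared above can be sketched in a few lines. This is a stand-alone Python sketch, although the study itself uses R; population statistics are assumed for the Z-score, and the sample values are illustrative:

```python
import math

def z_score(col: list[float]) -> list[float]:
    """Z-score: center on the mean, scale by the (population) std deviation."""
    mu = sum(col) / len(col)
    sd = math.sqrt(sum((x - mu) ** 2 for x in col) / len(col))
    return [(x - mu) / sd for x in col]

def min_max(col: list[float]) -> list[float]:
    """Min-max: rescale the column linearly onto [0, 1]."""
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

# Illustrative feature column (e.g. sepal lengths).
data = [5.1, 4.9, 6.2, 5.5]
print([round(v, 2) for v in min_max(data)])  # → [0.15, 0.0, 1.0, 0.46]
```

Because kNN is distance-based, the choice between these two scalings changes how much each feature contributes to the distance, which is exactly what the paper's cross-validated accuracy comparison measures.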
This research describes an advancement of the standard AODV routing protocol for mobile ad-hoc networks. Our mechanism sets up multiple optimal paths, chosen by bandwidth and delay criteria, and stores them in the network; at the time of link failure, it switches to the next available path. We use the information obtained in the RREQ packet and also send the RREP packet along more than one path to set up multiple paths. This reduces the overhead of local route discovery at the time of link failure, and as a result, the end-to-end delay and drop ratio decrease. The main features of our mechanism are its simplicity and improved efficiency. We evaluate through simulations the performance of the AODV routing protocol including our scheme and compare it with the HLSMPRA (Hot Link Split Multi-Path Routing Algorithm) algorithm. Indeed, our scheme reduces the routing load of the network, the end-to-end delay, the packet drop ratio, and the route errors sent. The simulations were performed with the OPNET network simulator, discrete-event simulation software that simulates not only the sending and receiving of packets but also their forwarding and dropping. The modified algorithm has improved efficiency and greater reliability than the previous algorithm.
[...] Read more.Thanks to recent technological advancements, low-cost sensors with dispensation and communication capabilities are now feasible. As an example, a Wireless Sensor Network (WSN) is a network in which the nodes are mobile computers that exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to a clone node or replication assault because of their limited processing power, memory, battery life, and absence of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID. This would give the attacker complete internal control of the network, allowing them to mimic the genuine nodes' behavior. This is why scientists are so intent on developing better clone assault detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place. Use a low-cost identity verification process to identify clones in specific locations as well as around the globe. Using the Optimized Extreme Learning Machine (OELM), with kernels of ELM ideally determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), this technique safeguards the network from node identity replicas. Using the node identity replicas, the most reliable transmission path may be selected. The procedure is meant to be used to retrieve data from a network node. The simulation result demonstrates the performance analysis of several factors, including sensitivity, specificity, recall, and detection.
D2D (device-to-device) communication plays a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication suffer from several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a coot bird optimization algorithm has been proposed. In this work, D2D communication is combined with the coot bird optimization algorithm to enhance the performance of distributed deep learning. By using deep learning to reduce eNB interference, near-optimal throughput can be achieved. Distributed deep learning trains the devices as a group, with each device working independently, which reduces the training time of the devices. This model confirms independent resource allocation with an optimized power value and the lowest bit error rate for D2D communication while sustaining quality of service. The model is finally trained and tested successfully and is found to work for power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness of 46%, a mean value of 6.76, and a standard deviation of 0.55, showing better performance compared to existing works.
There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This paper reviews the field of password-based authentication, introducing and analyzing different authentication schemes, their respective advantages and disadvantages, and probable causes of the 'disconnect' between users and password mechanisms. The evolution of passwords and how deeply they have rooted themselves in our lives is remarkable. The paper addresses the gap between user and industry perspectives on password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors distinguish password-based authentication into two levels, a 'User Centric Design Level' and a 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened against the issues that currently hold them back.
The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features on which we depend nowadays, delivered with less effort and in real time. However, it is still vulnerable to various threats and attacks due to its heterogeneous ecosystem, adaptive protocols, and self-configuration. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate the proposed dataset, which reflects the 6LoWPAN features of the IoT. To analyze the attacks, six scenarios have been implemented: three are free of malicious nodes and the others include malicious nodes. The benign scenarios serve as a benchmark against which the malicious scenarios are compared, and from which the features affected by attackers are extracted and explored. These features are used as input criteria to train and test our proposed hybrid Intrusion Detection and Prevention System (IDPS) to detect and prevent 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested with improved accuracy on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class modes. On our proposed dataset, before the accuracy improvement the ANN classifier achieved 95.65% in the 4-class mode and 99.95% in the 2-class mode, while after the accuracy optimization it reached 99.84% and 99.97%, respectively. On the Edge IIoT dataset, before the accuracy improvement the ANN classifier achieved 95.14% in the 15-class mode and 99.86% in the 2-class mode, while after optimization the accuracy reached 97.64% and 99.94%, respectively. The decision tree-based models yielded lightweight models due to their lower computational complexity, making them appropriate for edge computing deployment, whereas the other ML models are heavyweight, require more computation, and are better suited to deployment in cloud or fog computing in IoT networks.
An important task in designing complex computer systems is to ensure high reliability. Many authors have investigated this problem and solved it in various ways. Most known methods are based on the use of natural or artificially introduced redundancy, which can be used passively and/or actively, with or without restructuring of the computer system. This article explores new technologies for improving fault tolerance through the use of the natural and artificially introduced redundancy of the applied number system. We consider a non-positional number system in residual classes and use the following properties of its non-positional code structure: the independence, equality, and small word size of the residues. This allows us to parallelize arithmetic calculations at the level of the decomposed residues of numbers; implement spatial separation of data elements with the possibility of their subsequent asynchronous, independent processing; and perform tabular execution of the arithmetic operations of the base set and of polynomial functions with single-cycle sampling of the result of a modular operation. Using specific examples, we present the calculation and a comparative analysis of the reliability of computer systems. The conducted studies have shown that the use of non-positional code structures in the system of residual classes provides high reliability, and that the efficiency of the system of residual classes grows as the bit width of the computing devices increases. Our studies also show that, to increase reliability, it is advisable to provide redundancy for the small nodes and blocks of a complex system, since the failure rate of individual elements is always less than the failure rate of the entire computer system.
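The residue-class arithmetic the abstract describes can be illustrated in a few lines: a number is decomposed into small independent residues, operations run per modulus (and so can run in parallel on separate small units), and the positional result is recovered via the Chinese Remainder Theorem. The moduli below are illustrative, not the base set used in the article.

```python
# Minimal residue number system (RNS) sketch: decompose, operate on
# each residue channel independently, reconstruct by CRT.
# The moduli are illustrative assumptions, chosen pairwise coprime.

from math import prod

MODULI = (3, 5, 7)  # dynamic range M = 3 * 5 * 7 = 105

def to_residues(x):
    return tuple(x % m for m in MODULI)

def add(a, b):
    # Each residue channel is independent, so this is parallelizable
    # and a fault in one channel does not corrupt the others.
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def mul(a, b):
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_residues(r):
    # Chinese Remainder Theorem reconstruction to positional form.
    M = prod(MODULI)
    total = 0
    for x, m in zip(r, MODULI):
        Mi = M // m
        total += x * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return total % M
```

The small word size of each residue is what makes tabular (lookup-based) execution of modular operations practical, as the abstract notes.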
These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or large, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? Four types of cloud model are available in the market: public, private, hybrid, and community. This review paper answers the question of which model would be most beneficial for your business. All four models are defined, discussed, and compared, with their benefits and pitfalls, thus giving you a clear idea of which model to adopt for your organization.
Remote access technologies encrypt data to enforce policies and ensure protection, and attackers leverage the same techniques to launch carefully crafted evasion attacks that introduce malware and other unwanted traffic to the internal network. Traditional security controls such as anti-virus software, firewalls, and intrusion detection systems (IDS) decrypt network traffic and employ signature- and heuristic-based approaches for malware inspection. However, decryption introduces computational overheads and dilutes the privacy goal of encryption. Machine learning (ML) approaches have previously been proposed for specific malware detection and traffic-type characterization, but they employ limited features and are not objectively developed for remote access security. This paper presents a novel ML-based approach to encrypted remote access attack detection using a weighted random forest (W-RF) algorithm. Key features are determined using feature importance scores. Class weighting is used to address the imbalanced data distribution common in remote access network traffic, where attacks comprise only a small proportion of the traffic. The approach is evaluated on benign virtual private network (VPN) and attack network traffic datasets that comprise verified normal hosts and common attacks in real-world network traffic. With recall and precision of 100%, the approach demonstrates effective performance, and the k-fold cross-validation and receiver operating characteristic (ROC) mean area under the curve (AUC) results confirm that it effectively detects attacks in encrypted remote access network traffic, successfully averting attackers and network intrusions.
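The class-weighting idea behind the W-RF approach can be sketched with the common "balanced" weighting rule, where each class weight is inversely proportional to its frequency, so the rare attack class counts more during training. The counts and function name below are illustrative; the paper's actual weighting inside the random forest may differ.

```python
# Illustrative "balanced" class weights for imbalanced traffic:
# weight(c) = n_samples / (n_classes * count(c)), so minority classes
# (attacks) receive proportionally larger weights during training.

from collections import Counter

def balanced_class_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}
```

With 9 benign flows and 1 attack flow, the attack class gets weight 5.0 versus roughly 0.56 for benign, counteracting the skew that would otherwise bias a forest toward the majority class.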
Information security is an important part of today's interactive world, and it is essential for end-users to preserve the confidentiality and integrity of their sensitive data. As such, information encoding is significant for defending against access by non-authorized users. This paper aims to build a system that fuses cryptography and steganography methods, scrambling the input image and embedding it into a carrier medium to enhance the security level. Elliptic Curve Cryptography (ECC) helps achieve high security with a smaller key size, and in this paper a modified ECC is used to encrypt and decrypt the input image. The carrier medium is transformed into frequency bands using the Discrete Wavelet Transform (DWT), and the encrypted hash of the input is hidden in the high-frequency bands of the carrier by the Least-Significant-Bit (LSB) process. This approach achieves data confidentiality along with data integrity, the latter verified using SHA-256. Simulation outcomes of this method have been analyzed by measuring performance metrics: the method enhances the security of the obtained images with a PSNR of 82.7528 dB, an MSE of 0.0012, and an SSIM of 1, compared to other existing scrambling methods.
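The LSB step in this pipeline can be shown in isolation: payload bits replace the least-significant bits of integer-quantized high-frequency coefficients, which barely perturbs the carrier. This is a hypothetical minimal sketch; the ECC encryption, DWT decomposition, and SHA-256 verification from the paper are omitted.

```python
# Sketch of LSB embedding into integer-quantized high-frequency
# coefficients (hypothetical helper names; ECC/DWT stages omitted).

def embed_lsb(coeffs, bits):
    """Replace the LSB of each coefficient with one payload bit."""
    assert len(bits) <= len(coeffs), "carrier too small for payload"
    out = list(coeffs)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear LSB, then set payload bit
    return out

def extract_lsb(coeffs, n_bits):
    """Read back the first n_bits payload bits."""
    return [c & 1 for c in coeffs[:n_bits]]
```

Because only the lowest bit of each coefficient changes, the distortion stays small, which is consistent with the high PSNR and near-unity SSIM the abstract reports.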
Phishing attacks via malicious URLs/web links are common nowadays. User data such as login credentials and credit card numbers can be stolen through careless clicks on these links. Moreover, this can lead to the installation of malware on the target systems to freeze their activities, perform a ransomware attack, or reveal sensitive information. Recently, GAN-based models have become attractive for anti-phishing of URLs. The general idea is to use a Generator network (G) to generate fake URL strings and a Discriminator network (D) to distinguish real from fake URL samples. G and D are trained adversarially so that the URL samples synthesized by G become more and more similar to real ones. From the perspective of cybersecurity defense, this GAN-based setup can be exploited to obtain D as a phishing URL detector or classifier: after training the GAN on both malign and benign URL strings, a strong classifier/detector D is achieved. From the perspective of cyberattack, attackers would like to create fake URLs that are as close to real ones as possible in order to perform phishing attacks, making it easier to fool both users and detectors. In related proposals, GAN-based models are mainly exploited for anti-phishing of URLs; there have been no evaluations specific to GAN-generated fake URLs, which an attacker can exploit for phishing attacks. In this work, we propose to use TLD (top-level domain) and SSIM (Structural Similarity Index) scores to evaluate GAN-synthesized URL strings in terms of their structural similarity to real ones: the more similar in structure the GAN-generated URLs are to real URLs, the more likely they are to fool the classifiers. Different GAN models, from the basic GAN to the extensions DCGAN, WGAN, and SeqGAN, are explored in this work. We show from the intensive experiments that the D classifier of the basic GAN and DCGAN surpasses those of the other GAN models, WGAN and SeqGAN. The fake URL patterns generated by SeqGAN are the most effective compared to other GAN models, in both structural similarity and the ability to deceive the phishing URL classifiers LSTM (Long Short-Term Memory) and RF (Random Forest).
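One way to read the TLD score mentioned above is as the fraction of generated URL strings whose host ends in a recognized top-level domain, since a plausible TLD is a basic structural property of a real URL. The function below is a hypothetical sketch of such a check, not the paper's implementation, and the TLD list is an illustrative subset; the SSIM-based structural score is not reproduced here.

```python
# Hypothetical TLD check for GAN-generated URL strings: the fraction
# whose host part ends in a known top-level domain. Names and the TLD
# set are illustrative assumptions, not the paper's actual metric.

KNOWN_TLDS = {".com", ".org", ".net", ".edu"}  # illustrative subset

def tld_score(urls):
    def has_known_tld(url):
        host = url.split("//")[-1].split("/")[0]
        return any(host.endswith(t) for t in KNOWN_TLDS)
    return sum(has_known_tld(u) for u in urls) / len(urls)
```

A generator whose outputs score higher on such structural checks produces fake URLs that are harder for both users and downstream classifiers to flag.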