ISSN: 2074-9090 (Print)
ISSN: 2074-9104 (Online)
DOI: https://doi.org/10.5815/ijcnis
Website: https://www.mecs-press.org/ijcnis
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 128
ICV (2014): 8.19
SJR (2021): 0.438
IJCNIS is committed to bridging the theory and practice of computer network and information security. From innovative ideas to specific algorithms and full system implementations, IJCNIS publishes original, peer-reviewed, high-quality articles in the areas of computer network and information security. IJCNIS is a well-indexed scholarly journal and an indispensable reference for people working at the cutting edge of computer networks, information security, and their applications.
IJCNIS has been abstracted or indexed by several world-class databases: Scopus, SCImago, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
IJCNIS Vol. 15, No. 6, Dec. 2023
REGULAR PAPERS
Thanks to recent technological advancements, low-cost sensors with processing and communication capabilities are now feasible. A Wireless Sensor Network (WSN), for example, is a network in which the nodes are small mobile computers that exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to a clone node or replication attack because of their limited processing power, memory, battery life, and absence of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID. This would give the attacker complete internal control of the network, allowing them to mimic the genuine nodes' behavior, which is why researchers are so intent on developing better clone attack detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place, using a low-cost identity verification process to detect clones both in specific locations and across the whole network. Using an Optimized Extreme Learning Machine (OELM), with the kernels of the ELM optimally determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), the technique safeguards the network from node identity replicas, and the most reliable transmission path may be selected for retrieving data from a network node. The simulation results demonstrate the performance analysis of several factors, including sensitivity, specificity, recall, and detection.
Underwater communication is an important research area that involves the design and development of communication systems that can demonstrate a high data rate and a low Bit Error Rate (BER). In this work, three different modulation schemes are compared for their performance in terms of BER and Peak-to-Average Power Ratio (PAPR). The realistic channel model called WATERMARK is used as a benchmark to evaluate channel performance. The mathematical model is developed in MATLAB, and channel environments such as Norway Oslofjord (NOF1), Norway Continental Shelf (NCS1), Brest Commercial Harbour (BCH1), and Kauai (KAU1, KAU2) are considered for modelling different underwater channels. The data symbols are modulated using a Dual-Tree Complex Wavelet Transform (DTCWT) Orthogonal Frequency Division Multiplexing (OFDM) model to generate multiple subcarriers and are demodulated at the receiver considering the underwater channel environments. The BER results are evaluated for channel depths of less than 10 m and of 10-50 m. An improvement of 2×10^-2 in terms of BER is observed when compared with the Fast Fourier Transform (FFT) based OFDM model.
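As a rough illustration of the PAPR metric compared in the abstract above, the following self-contained Python sketch maps QPSK symbols onto a handful of subcarriers with a naive inverse DFT (standing in for the OFDM modulator) and computes the peak-to-average power ratio in dB. The subcarrier count and symbol values are illustrative assumptions, not taken from the paper.

```python
import cmath
import math

def ifft_naive(X):
    # Inverse DFT (naive O(N^2)) -- fine for a small illustrative N
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    # Peak-to-Average Power Ratio of a (complex) baseband signal, in dB
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

# QPSK constellation points on 8 subcarriers (hypothetical data)
symbols = [cmath.exp(1j * math.pi / 4 * (2 * b + 1)) for b in [0, 1, 2, 3, 0, 1, 2, 3]]
x = ifft_naive(symbols)
print(round(papr_db(x), 2))
```

A constant-envelope signal has a PAPR of 0 dB; OFDM's superposition of subcarriers is what pushes the peak power up, which is why PAPR is reported alongside BER for each modulation scheme.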
A mobile agent is a small piece of software that acts on the direction of its source platform on a regular basis. Because mobile agents roam around wide area networks autonomously, the protection of the agents and platforms is a serious concern. The number of mobile agent-based software applications has increased dramatically over the past year, which has also increased the security risks associated with such applications. Most of the security mechanisms in the mobile agent architecture focus solely on platform security, leaving mobile agent safety as a significant challenge. An efficient authentication scheme is proposed in this article to address the protection and authentication of mobile agents at the time of migration across multiple platforms in a malicious environment. An authentication mechanism for the mobile agent based on the Hopfield neural network is proposed. The mobile agent's identity and password are authenticated using the specified mechanism at the moment of execution of the assigned operation. An evaluative assessment is offered, along with complexity characteristics, in comparison to numerous agent authentication approaches. The proposed method has been put into practice, and its different aspects have been put to the test. In contrast to typical client-server and code-on-demand approaches, the analysis shows that computation here is generally safer and simpler.
Cloud computing's popularity and success are directly related to improvements in the use of Information and Communication Technologies (ICT). The adoption of cloud implementations and services has become crucial due to security and privacy concerns raised by outsourcing data and business applications to the cloud or a third party. To protect the confidentiality and security of cloud networks, a variety of Intrusion Detection System (IDS) frameworks have been developed in conventional works. However, the main issues with the current works are their lengthy processing, difficulty in intrusion detection, over-fitting, high error rates, and false alarm rates. As a result, the proposed study attempts to create a compact IDS architecture based on cryptography for cloud security. Here, a balanced and normalized dataset is produced using the z-score preprocessing procedure. The best attributes for enhancing intrusion detection accuracy are then selected using Intelligent Adorn Dragonfly Optimization (IADO). In addition, the trained features are used to classify normal and attack data using an Intermittent Deep Neural Network (IDNN) classification model. Finally, a Searchable Encryption (SE) mechanism is applied to ensure the security of cloud data against intruders. In this study, a thorough analysis has been conducted utilizing various parameters to validate the intrusion detection performance of the proposed I2ADO-DNN model.
In Mobile Adhoc Networks (MANETs), nodes are mobile and interact through wireless links. Mobility is a significant advantage of MANETs. However, due to the unpredictable nature of mobility, links may fail frequently, degrading the Quality of Service (QoS) of MANET applications. This paper outlines a novel Ad hoc On-Demand Distance Vector with Proactive Alternate Route Discovery (AODV-PARD) routing protocol that uses signal-strength-based link failure time estimation. A node predicts the link failure time and warns the upstream node of the impending failure through a warning message. On the basis of this information, a mechanism for identifying alternate routes is started in order to reroute traffic to the alternate route prior to the link failure. This significantly reduces packet loss and improves all the QoS parameters. The suggested protocol is compared to the traditional Ad hoc On-Demand Distance Vector (AODV) routing protocol, and the comparison shows that the outlined protocol results in an improvement in QoS.
The increasing number of cellular network subscribers is giving rise to network densification in next-generation networks, further increasing greenhouse gas emissions and the operational cost of the network. Such issues have ignited a keen interest in the deployment of energy-efficient communication technologies rather than modifying the infrastructure of cellular networks. In a cellular network, the largest portion of the power is consumed at the Base Stations (BSs). Hence, applying energy-saving techniques at the BS will help reduce the power consumption of the cellular network, further enhancing the energy efficiency (EE) of the network. As a result, BS sleep/wake-up techniques may significantly enhance cellular networks' energy efficiency. In the proposed work, a traffic- and interference-aware BS sleeping technique is proposed with the aim of reducing the power consumption of the network while offering the desired Quality of Service (QoS) to the users. To implement the BS sleep modes in an efficient manner, the prediction of network traffic load is carried out for future time slots using a Long Short-Term Memory (LSTM) model. Simulation results show that the proposed system provides a significant reduction in power consumption as compared with existing techniques while assuring the QoS requirements: power saving is enhanced by approximately 2% when compared with the existing techniques. The proposed system will help in establishing green communication networks with reduced energy and power consumption.
Broadband access networks require suitable differential modulation techniques that can provide better performance in real-time fading channels. A heterogeneous optical access network adopting spectrally efficient DAPSK - Orthogonal Frequency Division Multiple Access (OFDMA) - Passive Optical Network (PON) is proposed and simulated. The performance of the proposed heterogeneous network is analyzed in terms of received Bit Error Rate (BER) and spectral efficiency. The results show that 64 DAPSK - OFDMA over the proposed architecture achieves a better spectral efficiency of about 1.062 bps/Hz than 64 QAM - OFDMA, with less degradation in error performance.
Energy-aware routing in wireless ad hoc networks is one of the demanding fields of research. Nodes of the network are battery operated and difficult to recharge or replace, which is why, while developing a routing protocol, the energy consumption metric should always be a high priority. Nodes of mobile ad hoc networks are distributed in different directions, forming an arbitrary topology instantly. Proposing an energy-efficient routing protocol for such a dynamic, self-organized, self-configured, and self-controlled network is certainly a challenge and an open research problem. Energy constraints and mobility leading to link breakage are the motivating factors behind the development of the proposed Optimized Priority-based Ad Hoc on Demand Multi-path Distance Vector Energy Efficient Routing Protocol (OPAOMDV-EE). The routing protocol adds three fields (CE, MAX_E, MIN_E) to the traditional AOMDV RREQ and RREP packets, which are further used for calculating a total priority field value. This value is used by the source node for selecting an optimal prioritized energy-efficient route. The proposed OPAOMDV-EE protocol has been simulated on Network Simulator-2 (NS-2) for two different scenarios, which prove the effectiveness of OPAOMDV-EE in terms of various performance metrics with reduced energy consumption.
The healthcare industry is making rapid strides in sharing electronic health records with upgraded efficiency and delivery. Electronic health records comprise personal and sensitive information of patients that is confidential. The current security mechanisms in cloud computing for storing and sharing electronic health records have resulted in data breaches. In the recent era, blockchain has been introduced for storing and accessing electronic health records. Blockchain is utilized for numerous applications in the healthcare industry, such as remote patient tracking, biomedical research, collaborative decision making, and patient-centric data sharing with multiple healthcare providers. In all circumstances, blockchain guarantees the immutability, data privacy, data integrity, transparency, interoperability, and user privacy that are strictly required to access electronic health records. This review paper provides a systematic study of the security of blockchain-based electronic health records. Moreover, based on a thematic content analysis of various research literature, this paper presents open challenges in blockchain-based electronic health records.
D2D (Device-to-Device) communication has a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication have several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a Coot Bird Optimization algorithm is proposed. In this work, D2D communication is combined with the Coot Bird Optimization algorithm to enhance the performance of distributed deep learning. Reducing the interference of the eNB with the use of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group, and each device works independently to reduce the training time. This model confirms independent resource allocation with an optimized power value and the least Bit Error Rate for D2D communication while sustaining the quality of service. The model is finally trained and tested successfully and is found to work for power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness value of 46%, a mean value of 6.76, and an STD value of 0.55, showing better performance compared to the existing works.
A hybrid network, consisting of sections of communication lines that transmit signals of different physical nature over different transmission media, has been considered. Communication lines respond differently to threats, which makes it possible to choose the line with the best performance for the transmission of information. A causal diagram of the events that determine the state of the information transmission network, such as changes in emergency/accident-free time intervals, has been presented. The application scheme of the protection measures against dangerous events has been shown. To verify the measures, a matrix of their compliance with typical natural disasters has been developed, and relevant examples have been given. It is suggested that the flexibility of the telecommunication network be evaluated by its connectivity, characterized by the vertex connectivity number, the edge connectivity number, and the probability of connectivity. The presented scheme of the device for multi-channel information transmission in a hybrid network allows the channel with the best performance to be chosen for the transmission of information. Using this device is the essence of the suggestion for increasing the flexibility of the network.
This paper is topical because of the year-on-year increase in the quantity and diversity of attacks on computer networks, which causes significant losses for companies. This work addresses problems such as: existing methods for locating anomalies and current hazards in networks; the consideration of statistical methods as effective methods of anomaly detection; and the experimental evaluation of the chosen method's effectiveness. The method of capturing and analyzing network traffic during passive monitoring of a network segment is considered in this work. Also, a way of processing numerous network traffic indexes for further evaluation of the network information safety level is proposed. Using the presented methods and concepts allows the reliability of a network segment to be increased through the timely capture of network anomalies that could testify to possible hazards, and such information is very useful for the network administrator. To prove the method's effectiveness, several network attacks, whose data are stored in the specialised DARPA dataset, were chosen. Relevant parameters for every attack type were calculated. In this way, the start and termination times of an attack could be obtained by this method with insignificant error for some of the attack types.
There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. A review of the field of password-based authentication is presented, introducing and analyzing different schemes of authentication, their respective advantages and disadvantages, and the probable causes of the 'very disconnect' between users and password mechanisms. The evolution of passwords and how deeply they have rooted themselves in our lives is remarkable. This paper addresses the gap between the user and industry perspectives of password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors try to distinguish password-based authentication into two levels, a 'User Centric Design Level' and a 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened on the issues which are currently holding back password-based authentication.
An important task in designing complex computer systems is ensuring high reliability. Many authors investigate this problem and solve it in various ways. Most known methods are based on the use of natural or artificially introduced redundancy. This redundancy can be used passively and/or actively, with or without restructuring of the computer system. This article explores new technologies for improving fault tolerance through the use of natural and artificially introduced redundancy of the applied number system. We consider a non-positional number system in residual classes and use the following properties: the independence, equality, and small capacity of the residues that define a non-positional code structure. This makes it possible to: parallelize arithmetic calculations at the level of the decomposition of numbers into remainders; implement spatial separation of data elements with the possibility of their subsequent asynchronous independent processing; and perform tabular execution of arithmetic operations of the base set and polynomial functions with single-cycle sampling of the result of a modular operation. Using specific examples, we present the calculation and comparative analysis of the reliability of computer systems. The conducted studies have shown that the use of non-positional code structures in the system of residual classes provides high reliability. In addition, as the bit grid of computing devices increases, the efficiency of using the system of residual classes increases. Our studies show that, to increase reliability, it is advisable to provide redundancy for small nodes and blocks of a complex system, since the failure rate of individual elements is always less than the failure rate of the entire computer system.
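The carry-free, channel-parallel arithmetic of a residue number system described above can be sketched in a few lines of Python. The moduli set (3, 5, 7) is purely illustrative; each operation works independently per residue channel, and the Chinese Remainder Theorem recovers the positional value.

```python
from math import prod

MODULI = (3, 5, 7)  # illustrative pairwise-coprime base; dynamic range M = 105

def to_rns(x):
    # Decompose a number into its remainders modulo each base element
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Addition is carried out independently (in parallel) in each residue channel
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    # Multiplication likewise never propagates carries between channels
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese Remainder Theorem reconstruction back to positional form
    M = prod(MODULI)
    return sum(ri * (M // m) * pow(M // m, -1, m) for ri, m in zip(r, MODULI)) % M

a, b = to_rns(17), to_rns(4)
print(from_rns(rns_add(a, b)), from_rns(rns_mul(a, b)))  # prints: 21 68
```

Because each channel is small (here mod 3, 5, or 7), every modular operation can also be implemented as a table lookup, which is the tabular single-cycle execution the abstract refers to.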
Remote access technologies encrypt data to enforce policies and ensure protection. Attackers leverage such techniques to launch carefully crafted evasion attacks that introduce malware and other unwanted traffic into the internal network. Traditional security controls such as anti-virus software, firewalls, and intrusion detection systems (IDS) decrypt network traffic and employ signature- and heuristic-based approaches for malware inspection. In the past, machine learning (ML) approaches have been proposed for specific malware detection and traffic type characterization. However, decryption introduces computational overheads and dilutes the privacy goal of encryption, and the existing ML approaches employ limited features and are not objectively developed for remote access security. This paper presents a novel ML-based approach to encrypted remote access attack detection using a weighted random forest (W-RF) algorithm. Key features are determined using feature importance scores. Class weighting is used to address the imbalanced data distribution problem common in remote access network traffic, where attacks comprise only a small proportion of the traffic. Results obtained during the evaluation of the approach on benign virtual private network (VPN) and attack network traffic datasets, which comprise verified normal hosts and common attacks in real-world network traffic, are presented. With recall and precision of 100%, the approach demonstrates effective performance. The results for k-fold cross-validation and receiver operating characteristic (ROC) mean area under the curve (AUC) demonstrate that the approach effectively detects attacks in encrypted remote access network traffic, successfully averting attackers and network intrusions.
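The class-weighting idea used above to counter imbalance can be illustrated with the standard "balanced" heuristic, in which each class weight is inversely proportional to its frequency. The toy labels below are illustrative, and the W-RF specifics remain the paper's own.

```python
from collections import Counter

def balanced_class_weights(labels):
    # "balanced" heuristic: w_c = n_samples / (n_classes * count_c)
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Imbalanced toy labels: 95% benign (0), 5% attack (1)
y = [0] * 95 + [1] * 5
w = balanced_class_weights(y)
print(w)  # the rare attack class receives ~19x the weight of the benign class

# These weights would then be supplied to the forest, e.g. with scikit-learn
# (hypothetical usage, assuming features X exist):
# clf = RandomForestClassifier(class_weight=w).fit(X, y)
```

Weighting the minority class up makes each misclassified attack cost the learner as much as many misclassified benign flows, which is what pushes recall on rare attacks toward the figures the abstract reports.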
These days, cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: Public, Private, Hybrid, and Community. This review paper answers the question of which model would be most beneficial for your business. All four models are defined, discussed, and compared along with their benefits and pitfalls, giving you a clear idea of which model to adopt for your organization.
Social engineering is an attack aimed at manipulating victims into divulging sensitive information or taking actions that help the adversary bypass the secure perimeter in front of information-related resources so that the attacking goals can be completed. Though there are a number of security tools, such as firewalls and intrusion detection systems, which are used to protect machines from being attacked, a widely accepted mechanism to prevent victims from being defrauded is lacking. The human element is often the weakest link of an information security chain, especially in a human-centered environment. In this paper, we reveal that human psychological weaknesses result in the main vulnerabilities that can be exploited by social engineering attacks. Also, we capture two essential levels, internal characteristics of human nature and external circumstantial influences, to explore the root cause of human weaknesses. We unveil that the internal characteristics of human nature can be converted into weaknesses by external circumstantial influences. So, we propose the I-E based model of human weakness for social engineering investigation. Based on this model, we analyze the vulnerabilities exploited by different techniques of social engineering, and we also conclude with several defense approaches to fix the human weaknesses. This work can help security researchers to gain insights into social engineering from a different perspective and, in particular, enhance current and future research on social engineering defense mechanisms.
Activities in network traffic can be broadly classified into two categories: normal and malicious. Malicious activities are harmful, and their detection is necessary for security reasons. The intrusion detection process monitors network traffic to identify malicious activities in the system. Any algorithm that divides objects into two categories, such as good or bad, is a binary class predictor or binary classifier. In this paper, we utilized the Nearest Neighbor Distance Variance (NNDV) classifier for the prediction of intrusion. NNDV is a binary class predictor and uses the concept of variance on the distance between objects. We used the KDD CUP 99 dataset to evaluate NNDV and compared its predictive accuracy with that of the KNN, or K Nearest Neighbor, classifier. KNN is an efficient general-purpose classifier, but we only considered its binary aspect. The results are quite satisfactory and show that NNDV is comparable to KNN; in many cases, the performance of NNDV is better than that of KNN. We experimented with normalized and unnormalized data for NNDV and found that the accuracy results are generally better for normalized data. We also compared the accuracy results of different cross-validation techniques, such as 2-fold, 5-fold, 10-fold, and leave-one-out, on NNDV for the KDD CUP 99 dataset. Cross-validation results can be helpful in determining the parameters of the algorithm.
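The k-fold splitting behind the cross-validation comparison above can be sketched generically (leave-one-out is simply the case k = n). This is an illustrative implementation, not the authors' code.

```python
def k_fold_indices(n, k):
    # Split indices 0..n-1 into k contiguous folds of near-equal size;
    # each fold serves once as the test set, the rest as training data.
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return [(fold, [i for i in range(n) if i not in set(fold)]) for fold in folds]

for test_idx, train_idx in k_fold_indices(10, 5):
    pass  # fit the classifier on train_idx, score it on test_idx
```

Averaging the per-fold accuracies gives a more stable estimate than a single train/test split, which is why the paper reports 2-fold, 5-fold, 10-fold, and leave-one-out results side by side.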
The Internet is the most essential tool for communication in today's world. As a result, cyber-attacks are growing more frequent, and the severity of their consequences has risen as well. Distributed Denial of Service (DDoS), one of the most effective and costly top-five cyber attacks, is a type of cyber attack that prevents legitimate users from accessing network system resources. To minimize major damage, quick and accurate DDoS attack detection techniques are essential. To classify target classes, machine learning classification algorithms are faster and more accurate than traditional classification methods. This quantitative research applies the Logistic Regression, Decision Tree, Random Forest, AdaBoost, Gradient Boost, KNN, and Naive Bayes classification algorithms to detect DDoS attacks on the CIC-DDoS2019 dataset, which contains eleven different DDoS attacks, each described by 87 features. In addition, the classifiers' performance is evaluated in terms of evaluation metrics. Experimental results show that the AdaBoost and Gradient Boost algorithms give the best classification results; Logistic Regression, KNN, and Naive Bayes give good classification results; and Decision Tree and Random Forest produce poor classification results.
Over the last two years, the number of cyberattacks has grown significantly, paralleling the emergence of new attack types as intruders' skill sets have improved. It is possible to attack other devices on a botnet and launch a man-in-the-middle attack with an IoT device that is present in the home network. As time passes, an ever-increasing number of devices are added to a network. Such devices can be compromised completely if one or both of them are disconnected from the network, and detection of intrusions in a network becomes more difficult because of this. In most cases, manual detection and intervention are ineffective or impossible. Consequently, it is vital that numerous types of network threats can be better identified with less computational complexity and processing time. Numerous studies have already taken place in which specific attacks are examined. In order to quickly detect an attack, an IDS uses a well-trained classification model. In this study, a multi-layer perceptron classifier along with a random forest is used to examine the accuracy, precision, recall, and F-score of the IDS. The IoT environment-based intrusion benchmark datasets UNSW-NB15 and N_BaIoT are utilized in the experiment. Both of these datasets are relatively newer than other datasets and represent the latest attacks. Additionally, ensembles of different tree sizes and grid search algorithms are employed to determine the best classifier learning parameters. The research experiment's outcomes demonstrate the effectiveness of the IDS model using random forest over the multi-layer perceptron neural network model, since it outperforms comparable ensembles analyzed in the literature in terms of K-fold cross-validation techniques.
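The metrics named above (accuracy, precision, recall, F-score) all derive from the binary confusion matrix. A minimal sketch, using toy predictions rather than any paper's results:

```python
def binary_metrics(y_true, y_pred):
    # Confusion-matrix counts for the positive (attack) class = 1
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall, "f1": f1}

# Toy run: 3 attacks and 5 benign flows, with one miss and one false alarm
print(binary_metrics([1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 1]))
```

On imbalanced intrusion data, accuracy alone is misleading (predicting "benign" everywhere already scores high), which is why precision, recall, and F-score are reported alongside it.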
In the cloud computing platform, DDoS (Distributed Denial-of-Service) attacks are among the most commonly occurring attacks. Research studies on DDoS mitigation have rarely considered the data shift problem in real-time implementation. Concurrently, existing studies have attempted to perform DDoS attack detection, but they have been deficient in terms of detection rate. Hence, the proposed study proposes a novel DDoS mitigation scheme using the LCDT-M (Log-Cluster DDoS Tree Mitigation) framework for the hybrid cloud environment. LCDT-M detects and mitigates DDoS attacks in the Software-Defined Network (SDN) based cloud environment. LCDT-M comprises three algorithms: GFS (Greedy Feature Selection), TLMC (Two Log Mean Clustering), and DM (Detection-Mitigation) based on a DT (Decision Tree), to optimize the detection of DDoS attacks along with mitigation in SDN. The study simulated the defined cloud environment and considered the data shift problem during real-time implementation. As a result, the proposed architecture achieved an accuracy of about 99.83%, confirming its superior performance.
There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This paper reviews the field of password-based authentication by introducing and analyzing different authentication schemes, their respective advantages and disadvantages, and probable causes of the 'very disconnect' between users and password mechanisms. The evolution of passwords, and how deeply they have rooted themselves in our lives, is remarkable. The paper addresses the gap between the user and industry perspectives on password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors distinguish two levels of password-based authentication, a 'User Centric Design Level' and a 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened on the issues that currently hold them back.
Galois field multipliers are an important part of elliptic-curve-based cryptographic data protection. For elliptic-curve digital signatures, not only prime fields but also extended Galois fields GF(p^m) are used. The article provides a theoretical justification for the use of extended Galois fields GF(d^m) with characteristic d > 2, and a criterion for determining the best field is presented. With the use of the proposed criterion, the best fields, which are advisable to use in data protection, are determined.
Cores (VHDL descriptions of digital units) are considered as structural parts of FPGA-based devices. The article analyzes methods for creating cryptoprocessor cores and describes a generator of VHDL descriptions of extended Galois field multipliers with large characteristic (up to 2998). The use of mathematical packages for calculations to improve the quality of information security is also considered.
The Galois field multiplier generator creates VHDL descriptions of multiplier schemes, describes the connections of their parts, and generates VHDL descriptions of these parts as a result of the Quine-McCluskey Boolean function minimization method. However, the execution time of the algorithm increases with the amount of input data; accordingly, generating field multipliers with large characteristic can take from a few seconds to several tens of seconds.
It is important to simplify the design and minimize the number of logic gates in a field-programmable gate array (FPGA), because this speeds up the operation of the multipliers. The generator creates multipliers according to three variants.
The efficiency of using multipliers for fields with different characteristics is compared in the article.
The expediency of using extended Galois fields GF(d^m) with characteristic d > 2 in data protection tools is analyzed, a criterion for comparing data protection tools based on such Galois fields is determined, and the best fields according to the selected criterion, when implemented according to a certain algorithm, are identified.
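The arithmetic that such a multiplier core implements, multiplication of polynomials over GF(d) reduced modulo an irreducible polynomial, can be modeled in software. The sketch below is an illustrative reference model only (the article's artifact is a VHDL generator, not this code); the example field GF(3^2) and its irreducible polynomial x^2 + 1 are chosen for brevity.

```python
def gf_mul(a, b, d, irr):
    """Multiply field elements a, b of GF(d^m), given as coefficient
    lists (lowest degree first), reducing modulo the monic irreducible
    polynomial irr of degree m (length m + 1). Software model only."""
    m = len(irr) - 1
    # Schoolbook polynomial product with coefficients mod d.
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % d
    # Reduce: substitute x^k (k >= m) using x^m = -(irr[0] + ... + irr[m-1]x^(m-1)).
    for k in range(len(prod) - 1, m - 1, -1):
        c = prod[k]
        if c:
            prod[k] = 0
            for t in range(m):
                prod[k - m + t] = (prod[k - m + t] - c * irr[t]) % d
    return prod[:m] + [0] * (m - len(prod))  # pad short results to length m

# GF(3^2) with irreducible x^2 + 1, coefficients lowest-first: [1, 0, 1].
# (1 + x)(2 + x) = 2 + x^2 = 2 + (-1) = 1 in this field.
print(gf_mul([1, 1], [2, 1], 3, [1, 0, 1]))  # [1, 0]
```

A hardware multiplier realizes the same ring arithmetic as combinational logic, which is why minimizing the resulting Boolean functions (e.g. via Quine-McCluskey) matters for FPGA area and speed.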
A hybrid network, consisting of sections of communication lines that transmit signals of different physical nature over different transmission media, has been considered. Communication lines respond differently to threats, which makes it possible to choose the line with the best performance for the transmission of information. A causal diagram of the events that determine the state of the information transmission network, such as changes in emergency/accident-free time intervals, has been presented, and the application scheme of protection measures against dangerous events has been shown. To verify the measures, a matrix of their compliance with typical natural disasters has been developed and relevant examples have been given. It is suggested that the flexibility of the telecommunication network be evaluated by its connectivity, characterized by the vertex and edge connectivity numbers and the probability of connectivity. The presented scheme of a device for multi-channel information transmission in a hybrid network allows the channel with the best performance to be chosen for transmission; the use of this device is the essence of the proposal for increasing the flexibility of the network.
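The probability of connectivity mentioned above can be estimated numerically when each line survives a threat independently. The following Monte Carlo sketch is an illustration under that independence assumption; the four-node ring topology and survival probability are invented example values, not taken from the article.

```python
import random

def connected(nodes, edges):
    """Breadth/depth-first check that an undirected graph is connected."""
    if not nodes:
        return True
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def connectivity_probability(nodes, edges, p_up, trials=5000, seed=1):
    """Monte Carlo estimate: each line survives independently with prob p_up."""
    rng = random.Random(seed)
    ok = sum(connected(nodes, [e for e in edges if rng.random() < p_up])
             for _ in range(trials))
    return ok / trials

# Toy hybrid network: 4 nodes joined in a ring of mixed-media lines (assumption).
nodes = [0, 1, 2, 3]
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
p = connectivity_probability(nodes, ring, p_up=0.9)
print(round(p, 3))
```

For this ring the exact value is 0.9^4 + 4(0.9^3)(0.1) ≈ 0.948 (connected iff at most one line fails), which the estimate approaches as the trial count grows.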
The wide availability and prosperity of network technology have increased online media sharing over cloud platforms, which have become important resources and tools for development in our societies. In this epoch of enormous data, a great amount of sensitive information and many kinds of media are transmitted over the net for communication. Recently, fog computing has captured the world's attention due to its inherent features relative to the cloud domain, but this raises many issues of data security and privacy in fog computing, which is still understudied at this early juncture. Therefore, in this paper, we review a security system that relies on encryption as an effective solution to secure image data. Our approach combines a chaotic map with space-filling curve techniques: the confusion and diffusion strategies are carried out using the Hilbert curve and a chaotic map, the two-dimensional Henon map (2D-HM), to achieve image confusion with pixel-level permutation. The system also shuffles the image in blocks, using a randomly chosen key for each block, to reach a high degree of security. The efficiency of the proposed technique has been tested using different measures, including entropy [7.9993], NPCR [99.6908%], and UACI [33.6247%]. Analysis of the results reveals that the proposed image encryption system performs favorably, achieves good results, resists different attacks, and, in comparison with other techniques, fulfills a high security level with high quality.
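The confusion/diffusion pattern described above can be sketched in a few lines. This toy model uses the classic 2D Henon map (a = 1.4, b = 0.3, started from the origin) as the chaos source; the Hilbert-curve scan, block shuffling, and per-block keying of the actual scheme are omitted, and the quantization and keying choices here are illustrative assumptions, not the paper's design.

```python
def henon_keystream(n, burn_in=100):
    """Classic 2D Henon map (a=1.4, b=0.3) iterated from (0, 0);
    the orbit is quantized into bytes. Keying (e.g. via secret initial
    conditions) is elided in this sketch."""
    x = y = 0.0
    a, b = 1.4, 0.3
    for _ in range(burn_in):                 # discard transient
        x, y = 1 - a * x * x + y, b * x
    out = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        out.append(int(abs(x) * 1e6) % 256)  # crude byte quantization
    return out

def encrypt(pixels):
    ks = henon_keystream(len(pixels))
    # Confusion: permute pixel positions by chaotic ranking.
    order = sorted(range(len(pixels)), key=lambda i: (ks[i], i))
    # Diffusion: chained XOR with the keystream.
    prev, cipher = 0, []
    for i in order:
        prev = pixels[i] ^ ks[i] ^ prev
        cipher.append(prev)
    return cipher

def decrypt(cipher):
    ks = henon_keystream(len(cipher))
    order = sorted(range(len(cipher)), key=lambda i: (ks[i], i))
    plain, prev = [0] * len(cipher), 0
    for c, i in zip(cipher, order):
        plain[i] = c ^ ks[i] ^ prev
        prev = c
    return plain

pixels = [12, 200, 7, 7, 99, 130, 255, 0]
cipher = encrypt(pixels)
print(decrypt(cipher) == pixels)  # True: round trip recovers the image
```

Metrics such as NPCR and UACI would then be computed by encrypting two images differing in one pixel and comparing the ciphertexts.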
Phishing attacks via malicious URLs/web links are common nowadays. User data, such as login credentials and credit card numbers, can be stolen through careless clicks on these links. Moreover, such clicks can lead to the installation of malware on the target systems to freeze their activities, perform a ransomware attack, or reveal sensitive information. Recently, GAN-based models have become attractive for anti-phishing URL work. The general idea is to use a Generator network (G) to generate fake URL strings and a Discriminator network (D) to distinguish real from fake URL samples. G and D operate adversarially, so that the URL samples synthesized by G become more and more similar to the real ones. From the perspective of cybersecurity defense, this GAN-based approach can be exploited to obtain D as a phishing URL detector or classifier: after training the GAN on both malign and benign URL strings, a strong classifier/detector D is achieved. From the perspective of cyberattack, attackers would like to create fake URLs as close to real ones as possible to perform phishing attacks, making it easier to fool users and detectors. Related proposals mainly exploit GAN-based models for anti-phishing URLs; there have been no evaluations specific to GAN-generated fake URLs, which an attacker could use for phishing attacks. In this work, we propose using TLD (Top-Level Domain) and SSIM (Structural Similarity Index) scores to evaluate GAN-synthesized URL strings in terms of their structural similarity to the real ones. The more similar in structure the GAN-generated URLs are to the real ones, the more likely they are to fool the classifiers. Different GAN models, from the basic GAN to extensions such as DCGAN, WGAN, and SeqGAN, are explored in this work.
We show through intensive experiments that the D classifiers of the basic GAN and DCGAN surpass those of the other GAN models, WGAN and SeqGAN. The fake URL patterns generated by SeqGAN are the most effective compared to the other GAN models, in both structural similarity and the ability to deceive the phishing URL classifiers of LSTM (Long Short-Term Memory) and RF (Random Forest).
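A TLD-based score of the kind the abstract names can be illustrated as follows: extract each URL's top-level domain and measure how often generated URLs land on TLDs seen in the real set. This is a rough stand-in for the paper's metric (whose exact definition the abstract does not give); the TLD extraction here is a simplification of real public-suffix handling, and the sample URLs are invented.

```python
from urllib.parse import urlparse

def tld(url):
    """Last dot-separated label of the host part; a simplification of
    proper public-suffix-list handling."""
    host = urlparse(url).netloc or url
    return host.rsplit(".", 1)[-1].lower() if "." in host else ""

def tld_validity_rate(generated, real):
    """Share of generated URLs whose TLD occurs among the real URLs'
    TLDs -- an assumed proxy for the abstract's TLD score."""
    real_tlds = {tld(u) for u in real}
    hits = sum(tld(u) in real_tlds for u in generated)
    return hits / len(generated)

real = ["http://example.com/login", "https://shop.example.org/pay"]
fake = ["http://examp1e.com/login", "http://xqzv.zzz/verify"]
print(tld_validity_rate(fake, real))  # 0.5: one fake URL reuses a real TLD
```

A generator whose outputs score higher on such structural measures produces URLs more likely to pass both human inspection and trained detectors.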
Blockchain technology has, over more than a decade, gained widespread attention owing to its often-tagged disruptive nature and remarkable features of decentralization, immutability, and transparency, among others. However, the technology comes bundled with challenges. At center stage of these challenges is privacy preservation, which has been massively researched, with diverse solutions proposed toward privacy protection for transaction initiators, recipients, and transaction data. The dual-key stealth address protocol for IoT (DkSAP-IoT) is one such solution aimed at privacy protection for transaction recipients. Because it must reuse locally stored data, the current implementation of DkSAP-IoT is deficient in the realms of data confidentiality, integrity, and availability, defeating the core essence of the protocol in the event of unauthorized access, disclosure, or data tampering arising from a hack, or from theft or loss of the device. Data unavailability and other security-related data breaches in effect render the existing protocol inoperable. In this paper, we propose and implement solutions that augment data confidentiality, integrity, and availability in DkSAP-IoT in accordance with the tenets of information security, using symmetric encryption together with data storage on a decentralized storage architecture that consequently provides data integrity. Experimental results show that our solution provides content confidentiality, consequently strengthening privacy owing to the encryption utilized. We make the full code of our solution publicly available on GitHub.
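The confidentiality-plus-integrity combination described above follows the standard encrypt-then-MAC pattern. The sketch below illustrates that pattern using only the Python standard library; the SHA-256 counter-mode keystream is a toy stand-in for a real symmetric cipher (the paper's cipher choice is not stated in the abstract, and production code would use an authenticated cipher such as AES-GCM), and all names and data here are illustrative.

```python
import hashlib
import hmac
import os

def keystream(key, nonce, n):
    """Counter-mode keystream derived from SHA-256 (toy stand-in for a
    real symmetric cipher; do not use for actual protection)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key, plaintext):
    """Encrypt-then-MAC: XOR keystream for confidentiality, HMAC for integrity."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext,
                                     keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key, blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag,
                               hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")  # tampering detected
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)
blob = seal(key, b"stealth address metadata")
print(open_sealed(key, blob))  # round trip recovers the plaintext
```

Availability, the third property, would come from replicating the sealed blob across the decentralized storage nodes rather than from the cryptography itself.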