Workplace: National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine
E-mail: mivaschenko_51@lll.kpi.ua
Website:
Research Interests: Mathematics, Comparative Programming Language Analysis, Programming Language Theory, Mathematics of Computing, Data Structures and Algorithms, Computer Systems and Computational Processes
Biography
Mykhailo Ivashchenko is a graduate student. He is currently pursuing Master’s degrees at the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine, and at the University of Nebraska-Lincoln, Lincoln, NE, USA. His main research interests lie in machine learning and neural networks, specifically compositional/modular neural network learning, neural network verification, and safe reinforcement learning.
By Zhengbing Hu, Mykhailo Ivashchenko, Lesya Lyushenko, Dmytro Klyushnyk
DOI: https://doi.org/10.5815/ijmecs.2021.03.02, Pub. Date: 8 Jun. 2021
One of the trends in information technologies is the implementation of neural networks in modern software packages [1]. A distinctive feature of neural networks is that they cannot be programmed directly; they must be trained. This makes it important to ensure sufficient speed and quality of neural network training procedures. The training process can differ significantly depending on the problem. Verification methods that correspond to the task’s constraints are used to assess the training results. Such methods provide an estimate over the entire cardinal set of examples but do not allow estimating which subset of examples causes a significant error. As a result, a neural network may fail to perform with a given set of hyperparameters, and training a new one is time-consuming.
On the other hand, existing empirical methods for assessing neural network training use discrete sets of examples. With this approach, it is impossible to conclude that the network is suitable for classification on the whole cardinal set of examples.
This paper proposes a criterion for assessing the quality of classification results. The criterion is formed by describing the training states of the neural network; each state is specified by the correspondence of the set of errors to the function range representing a cardinal set of test examples. Using the criterion makes it possible to track the network’s classification defects and mark them as safe or unsafe. As a result, one can formally assess how the training and validation data sets must be altered to improve the network’s performance, whereas existing verification methods provide no information on which part of the dataset causes the network to underperform.
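As a rough illustration of the general idea of marking classification defects as safe or unsafe, the sketch below partitions test examples by per-example error and splits the defects by class. It is not the criterion from the paper; the function name mark_defects, the threshold, the notion of "safe classes", and the toy data are all illustrative assumptions.

import numpy as np

def mark_defects(errors, labels, threshold=0.5, safe_classes=frozenset({0})):
    """Split test examples into 'safe' and 'unsafe' defects by per-example error.

    errors       : per-example loss values (e.g. cross-entropy), shape (n,)
    labels       : ground-truth class of each example, shape (n,)
    threshold    : error level above which an example counts as a defect
    safe_classes : classes where a defect is considered tolerable (assumed here)
    """
    defect_indices = np.flatnonzero(errors > threshold)
    safe = [i for i in defect_indices if labels[i] in safe_classes]
    unsafe = [i for i in defect_indices if labels[i] not in safe_classes]
    return safe, unsafe

# Toy usage: errors for six test examples belonging to classes 0 and 1.
errors = np.array([0.1, 0.9, 0.2, 0.7, 0.05, 0.8])
labels = np.array([0,   0,   1,   1,   0,    1])
safe_defects, unsafe_defects = mark_defects(errors, labels)
print("safe defects:", safe_defects)      # tolerable misclassifications
print("unsafe defects:", unsafe_defects)  # examples suggesting the data sets be altered

A listing of unsafe defect indices is the kind of information that, unlike a single aggregate verification score, points at which part of the dataset causes the network to underperform.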
By Zhengbing Hu, Ivan Dychka, Mykola Onai, Mykhailo Ivashchenko, Su Jun
DOI: https://doi.org/10.5815/ijisa.2018.12.03, Pub. Date: 8 Dec. 2018
Elliptic curve cryptography is one of the popular ways of constructing encoding and decoding processes, and the public-key algorithms at its basis give people a convenient way of exchanging pieces of encoded information. Over time, many algorithms have emerged; some are still in use today, while others continue to be developed into new forms. The main point of algorithm innovation is to reduce the number of operations performed at every possible step, so as to achieve maximum efficiency and the highest speed of calculation. This article describes an improved method of López-Dahab-Montgomery (LD-Montgomery) scalar point multiplication for binary elliptic curves. It is shown that the possible improvement lies in reordering the set of operations used in the LD-Montgomery scalar point multiplication algorithm. The algorithm is used to compute point multiplication results for curves over binary Galois fields with the following m values: . The article also presents experimental results based on different scalars.
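For context, a minimal sketch of the classic Montgomery-ladder structure that the LD-Montgomery method builds on is shown below: the scalar is processed bit by bit with a fixed pattern of one "add" and one "double" per bit. The group operations here are stand-ins (addition modulo an arbitrary prime) rather than the López-Dahab projective formulas over GF(2^m) studied in the article, so this is only a structural sketch under that assumption.

# Structural sketch of the Montgomery ladder for scalar multiplication.
# The group here is (Z_p, +); a real LD-Montgomery implementation would
# replace add/double with López-Dahab point arithmetic over GF(2^m).

P_MOD = 2**255 - 19  # arbitrary modulus for the toy group

def add(a, b):
    return (a + b) % P_MOD

def double(a):
    return (2 * a) % P_MOD

def montgomery_ladder(k, point):
    """Compute k * point using the ladder's fixed add/double pattern.

    The same two operations run for every scalar bit, which is what makes
    the ladder attractive for regular, side-channel-resistant execution.
    """
    r0, r1 = 0, point              # 0 is the identity of this toy group
    for bit in bin(k)[2:]:         # scan scalar bits from most significant
        if bit == '1':
            r0 = add(r0, r1)
            r1 = double(r1)
        else:
            r1 = add(r0, r1)
            r0 = double(r0)
    return r0

# Toy check: the ladder agrees with plain modular multiplication.
k, point = 0b101101, 123456789
assert montgomery_ladder(k, point) == (k * point) % P_MOD

Reordering or interleaving the field operations inside each ladder step, which is the kind of improvement the article discusses, does not change this overall bit-by-bit structure.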