Workplace: Yuriy Fedkovych Chernivtsi National University, Chernivtsi, 58012, Ukraine
E-mail: y.ushenko@chnu.edu.ua
ORCID: https://orcid.org/0000-0003-1767-1882
Research Interests: Data Mining, Pattern Recognition
Biography
Yuriy Ushenko is a Professor at the Computer Science Department of Yuriy Fedkovych Chernivtsi National University, Chernivtsi, Ukraine. His research interests include data mining and analysis, computer vision and pattern recognition, optics and photonics, biophysics, information systems design, digital image processing, artificial neural networks, and laser polarimetry and interferometry.
By Victoria Vysotska, Denys Shavaiev, Michal Gregus, Yuriy Ushenko, Zhengbing Hu, Dmytro Uhryn
DOI: https://doi.org/10.5815/ijmecs.2024.05.05, Pub. Date: 8 Oct. 2024
The growing use of social networks and the steady popularity of online communication make the task of detecting gender from posts necessary for a variety of applications, including modern education, political research, public opinion analysis, personalized advertising, cyber security and biometric systems, marketing research, etc. This study aims to develop information technology for recognising a speaker's gender from voice sound based on supervised machine learning. A model, methods and means of recognition and gender classification of voice speech samples are proposed based on their acoustic properties and machine learning. In our voice gender recognition project, we used a neural-network model built with the TensorFlow library and Keras. The speaker's voice was analysed for various acoustic features, such as frequency, spectral characteristics, amplitude, modulation, etc. The basic model we created is a typical neural network for text classification. It consists of an input layer, hidden layers, and an output layer. For text processing, we use a pre-trained word vector space such as Word2Vec or GloVe. We also used techniques such as dropout to prevent model overfitting, activation functions such as ReLU (Rectified Linear Unit) for non-linearity, and a softmax function in the last layer to obtain class probabilities. To train the model, we used the Adam optimizer, a popular gradient-descent optimization method, and the sparse categorical cross-entropy loss function, since we are dealing with multi-class classification. After training the model, we saved it to a file for further use and evaluation on new data. The application of neural networks in our project allowed us to build a powerful model that can recognize a speaker's gender by voice with high accuracy.
The intelligent system was trained using machine learning methods, each of which was analysed for accuracy: K-Nearest Neighbours (98.10%), Decision Tree (96.69%), Logistic Regression (98.11%), Random Forest (96.65%), Support Vector Machine (98.26%), and neural networks (98.11%). Additional techniques such as regularization and optimization can be used to improve model performance and prevent overfitting.
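The architecture described above (hidden layers with ReLU non-linearity, softmax output for class probabilities) can be illustrated with a minimal pure-Python forward-pass sketch. This is not the authors' code: the weights, feature values and layer sizes below are arbitrary toy values chosen only to show how the layers compose; a real model would learn the weights with the Adam optimizer.

```python
import math

def relu(v):
    # ReLU non-linearity: negative activations are clipped to zero
    return [max(0.0, x) for x in v]

def softmax(v):
    # subtract the max for numerical stability before exponentiating
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def forward(features, w_hidden, w_out):
    # hidden layer: weighted sums followed by ReLU
    hidden = relu([sum(w * x for w, x in zip(row, features)) for row in w_hidden])
    # output layer: weighted sums turned into class probabilities by softmax
    return softmax([sum(w * h for w, h in zip(row, hidden)) for row in w_out])

# Toy setup: 2 acoustic features -> 3 hidden units -> 2 classes
probs = forward([0.8, 0.2],
                w_hidden=[[1.0, -0.5], [0.3, 0.9], [-0.4, 0.6]],
                w_out=[[1.2, 0.1, -0.3], [-0.7, 0.8, 0.5]])
print(probs)  # two class probabilities that sum to 1
```

The softmax output is what the sparse categorical cross-entropy loss mentioned above would be computed against during training.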
By Yevgen Burov, Victoria Vysotska, Lyubomyr Chyrun, Yuriy Ushenko, Dmytro Uhryn, Zhengbing Hu
DOI: https://doi.org/10.5815/ijieeb.2024.05.01, Pub. Date: 8 Oct. 2024
The use of ontological models for intelligent systems construction allows for improved quality characteristics at all stages of the life cycle of a software product. The main source of improvement in quality characteristics is the possibility of reusing the conceptualization and code provided by the corresponding models. Due to the use of a single conceptualization when creating various software products, the degree of interoperability and code portability increases. The implementation of new-generation electronic business analytics systems is based on the use of active models for business processes (BPs). Such models, on the one hand, reflect the BPs taking place in the organization in real time, and on the other hand, embody corporate and other regulatory rules and restrictions and monitor their compliance. The purpose of this article is to research the methods of presenting and building active executable BP models, determining the methods of their execution and coordination, and building the resulting intelligent network of BP models. In the process of its implementation, such a network ensures the implementation and support of decision-making and compliance with regulatory rules in the relevant real BPs. A formal specification of an intelligent system for modelling a complex of enterprise BPs using models has been proposed, along with a hierarchical approach to the introduction of intelligent functions into the modelling system. The simulation system is designed to be used for the design and management of complex intelligent systems.
Achieving the set goal involves solving several development tasks: methods of presenting BP models for different types of such models; methods of analysis and display of time relations and attributes in BP models; ways of presenting the association of artefacts and business analytics models with individual BP operations; metric ratios for evaluating the quality of process execution; and methods of interaction of various BPs and coordination of their implementation. The purpose of functioning of an intelligent model-driven software system is achieved through the interaction of a large number of simple models, where each model encapsulates a certain aspect of the expert's knowledge about the subject area. To apply executable conceptual models in the field of BP modelling, it is necessary to determine the types of conceptual models used, their purpose and functions, and the role they play in the operation of an intelligent system. Models used in BP modelling can be classified according to various characteristics, and the same model can be included in different classifications.
By Victoria Vysotska, Krzysztof Przystupa, Lyubomyr Chyrun, Serhii Vladov, Yuriy Ushenko, Dmytro Uhryn, Zhengbing Hu
DOI: https://doi.org/10.5815/ijcnis.2024.05.06, Pub. Date: 8 Oct. 2024
A new method of propaganda analysis is proposed to identify signs and changes in the dynamics of the behaviour of coordinated groups based on machine learning at the disinformation-processing stages. In the course of the work, two models were implemented to recognise propaganda in textual data: at the message level and at the phrase level. Within the framework of analysing and recognising text data, in particular fake news on the Internet, an important component of NLP (natural language processing) technology is the classification of words in text data. In this context, classification is the assignment of textual data to one or more predefined categories or classes. For this purpose, the task of binary text classification was solved. Both models are built on logistic regression, and in the process of data preparation and feature extraction, methods such as TF-IDF vectorisation (Term Frequency - Inverse Document Frequency), the BOW (Bag-of-Words) model, POS (Part-Of-Speech) tagging, word embedding using the Word2Vec two-layer neural network, as well as manual feature extraction methods aimed at identifying specific methods of political propaganda in texts, are used. Analogues of the project under development are analysed, and the subject area (the propaganda used in the media and its production methods) is studied. The software implementation is carried out in Python, using the seaborn, matplotlib, gensim, spacy, NLTK (Natural Language Toolkit), NumPy, pandas, and scikit-learn libraries. The model achieved a score of 0.74 for propaganda recognition at the phrase level and 0.99 at the message level. The implementation of the results will significantly reduce the time required to make the most appropriate decision on counter-disinformation measures concerning the identified coordinated groups generating disinformation, fake news and propaganda.
Different classification algorithms were applied to detecting fake news from Internet resources and social mass media, with non-fake/fake identification accuracy as follows: decision tree (0.98/0.9903), k-nearest neighbours (0.83/0.999), random forest (0.991/0.933), multilayer perceptron (0.9979/0.9945), logistic regression (0.9965/0.9988), and Bayes classifier (0.998/0.913). Logistic regression (0.9965), the multilayer perceptron (0.9979) and the Bayesian classifier (0.998) are most effective for identifying non-fake news. Logistic regression (0.9988), the multilayer perceptron (0.9945), and k-nearest neighbours (0.999) are most effective for identifying fake news.
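The TF-IDF weighting named above can be sketched in a few lines of plain Python. This is an illustrative sketch, not the paper's pipeline (scikit-learn's TfidfVectorizer uses a smoothed IDF variant); the tiny corpus below is invented for demonstration.

```python
import math

def tf_idf(corpus):
    """corpus: list of tokenised documents; returns per-document term weights."""
    n_docs = len(corpus)
    # document frequency: in how many documents each term appears
    df = {}
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in corpus:
        counts = {t: doc.count(t) for t in set(doc)}
        # tf = relative frequency in the document; idf = log(N / df)
        weights.append({t: (c / len(doc)) * math.log(n_docs / df[t])
                        for t, c in counts.items()})
    return weights

docs = [["fake", "news", "spreads"],
        ["news", "report"],
        ["fake", "claim"]]
w = tf_idf(docs)
# "report" occurs in only 1 of 3 documents, so it outweighs the common "news"
```

Rarer, more discriminative terms get larger weights, which is exactly why TF-IDF features help a logistic regression separate propaganda from neutral text.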
By Serhii Vladov, Ruslan Yakovliev, Victoria Vysotska, Dmytro Uhryn, Yuriy Ushenko
DOI: https://doi.org/10.5815/ijcnis.2024.04.05, Pub. Date: 8 Aug. 2024
This work focuses on developing a universal onboard neural network system for restoring information when helicopter turboshaft engine sensors fail. A mathematical task was formulated to determine the occurrence and location of these sensor failures using a multi-class Bayesian classification model that incorporates prior knowledge and updates probabilities with new data. The Bayesian approach was employed for identifying and localizing sensor failures, utilizing a Bayesian neural network with a 4-6-3 structure as the core of the developed system. A training algorithm for the Bayesian neural network was created, which estimates the prior distribution of network parameters through variational approximation, maximizes the evidence lower bound instead of the direct likelihood, and updates parameters by calculating gradients of the log-likelihood and evidence lower bound, while adding regularization terms for weights, distributions, and uncertainty estimates to interpret results. This approach ensures balanced data handling, effective training (achieving nearly 100% accuracy on both training and validation sets), and improved model understanding (with training losses not exceeding 2.5%). An example is provided that demonstrates solving the information restoration task in the event of a gas-generator rotor r.p.m. sensor failure in the TV3-117 helicopter turboshaft engine. The feasibility of implementing the developed onboard neural network system on a helicopter using the Intel Neural Compute Stick 2 neuro-processor has been analytically proven.
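The core Bayesian idea above (a prior over failure hypotheses updated as new data arrives) can be shown with a toy posterior update. This is an assumed illustration, not the authors' model: the three hypotheses, priors and likelihood values below are invented for demonstration.

```python
def update(prior, likelihoods):
    # Bayes' rule: posterior is proportional to prior times likelihood
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical hypotheses: sensor OK / drifting / failed
posterior = [0.90, 0.07, 0.03]          # prior: failure is rare
for obs_likelihood in ([0.2, 0.5, 0.9],  # each new residual reading
                       [0.1, 0.4, 0.95], # favours the "failed" hypothesis
                       [0.1, 0.3, 0.9]):
    posterior = update(posterior, obs_likelihood)
print(posterior)  # probability mass shifts toward "failed"
```

After only three consistent observations the posterior overturns the strong prior, which is the mechanism that lets such a system localise a failed sensor quickly.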
By Taras Basyuk, Andrii Vasyliuk, Yuriy Ushenko, Dmytro Uhryn, Zhengbing Hu, Mariia Talakh
DOI: https://doi.org/10.5815/ijmecs.2024.04.07, Pub. Date: 8 Aug. 2024
The article is dedicated to solving the problem of modeling and developing a computer simulator with working scenarios for training operating personnel in object detection. The features of human operator activity are analysed, a model of operator behavior is described, and it is shown that for the presented task the following three levels must be taken into account: behavior based on abilities (skills), behavior based on rules, and behavior based on knowledge. User models used in man-machine systems were created, and their use in modeling operator activity from the point of view of regular and irregular exposure was shown. This made it possible to create a prototype of a graphical window with a user-friendly interface. A system model of the human-machine interface for processing and recognition of visual information is mathematically described, and a model of image representation based on three possible scenarios of image formation is formed. The result of the study was the software implementation of an effective educational tool prototype that accurately replicates real-world conditions for the formation of working scenarios. The conducted experimental research showed the possibility of general image recognition tests, selection of different test modes, and support for arbitrary sets of image test tasks. Further research will be aimed at expanding the functionality of the created prototype, developing additional modules, automatically generating scenarios and verifying their operation.
By Vitaliy Danylyk, Victoria Vysotska, Vasyl Andrunyk, Dmytro Uhryn, Yuriy Ushenko
DOI: https://doi.org/10.5815/ijcnis.2024.03.09, Pub. Date: 8 Jun. 2024
In the modern world, the military sphere occupies a very high place in the life of a country, and it requires quick and accurate decisions. Such decisions can greatly affect the unfolding of events on the battlefield, so they must be made carefully, using all possible means. In wartime, the speed and quality of decisions are critical, so the relevance of this topic is growing sharply. The purpose of the work is to create a comprehensive information system that facilitates the work of commanders of tactical units by organizing the real-time visualization and classification of aerial objects, the classification of objects for radio-technical intelligence, and the structuring of military information, thereby easing the perception of military information. The object of research is the presence of slowing factors in the command-and-control process carried out by commanders of tactical units, factors that can delay decision-making and affect its correctness. The research aims to address these bottlenecks by providing improved visualization, analysis and handling of military data. The result of the work is an information system for processing military data to help commanders of tactical units. This system significantly improves on known officer-assistance tools, which have consisted of a set of separate programs used in parallel on an as-needed basis. Using modern information technologies and ease of use, the system covers problems that may arise for commanders. Each program included in the comprehensive information system also has its own degree of innovation. The information system for structuring military information is distinguished by the possibility of use on any device.
The information system for the visualization and clustering of aerial objects and the information system for the classification of objects for radio-technical intelligence are distinguished by their component nature: each application can use various sources of input information and provides an API through which other applications can use the processed information. The information system for integration into information materials defines largely unknown terms and abbreviations, which existing solutions cannot integrate into real documents. Therefore, using this comprehensive information system, the command of tactical units will be able to improve the quality and speed of the command-and-control process.
By Oleksandr Ushenko, Oleksandr Saleha, Yuriy Ushenko, Ivan Gordey, Oleksandra Litvinenko
DOI: https://doi.org/10.5815/ijigsp.2024.02.03, Pub. Date: 8 Apr. 2024
The fundamental component of the work contains a summary of the theoretical foundations of the algorithms of the scale-self-similar approach for the analysis of digital Mueller-matrix images of the birefringent architectonics of biological tissues. The theoretical consideration of multifractal analysis and the determination of singularity spectra of fractal dimensions of coordinate distributions of matrix elements (Mueller-matrix images, MMI) of biological tissue preparations is based on the wavelet transform modulus maxima (WTMM) method. The applied part of the work is devoted to comparing the diagnostic capabilities for determining the age of a mechanical brain injury using algorithms of statistical (central statistical moments of the 1st-4th orders), fractal (approximating curves to logarithmic dependences of power spectra) and multifractal (WTMM) analysis of MMI of the linear birefringence of fibrillar networks of neurons of nervous tissue. Excellent (~95%) accuracy of differential diagnosis of the age of mechanical injury has been achieved.
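The 1st-4th central statistical moments used above to characterise map distributions are mean, variance, skewness and kurtosis. A minimal, illustrative implementation over a flattened parameter map (the sample values below are arbitrary, not measured data):

```python
def central_moments(values):
    """Return mean, variance, skewness and kurtosis of a sample."""
    n = len(values)
    mean = sum(values) / n                              # 1st moment
    var = sum((v - mean) ** 2 for v in values) / n      # 2nd central moment
    std = var ** 0.5
    skew = sum((v - mean) ** 3 for v in values) / n / std ** 3  # 3rd, normalised
    kurt = sum((v - mean) ** 4 for v in values) / n / std ** 4  # 4th, normalised
    return mean, var, skew, kurt

# Symmetric toy sample: skewness must come out zero
m1, m2, m3, m4 = central_moments([1.0, 2.0, 2.0, 3.0])
```

In the diagnostic setting, shifts in the higher-order moments (skewness, kurtosis) of the MMI distributions are what discriminate between tissue states.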
By Serhiy Balovsyak, Oleksandr Derevyanchuk, Vasyl Kovalchuk, Hanna Kravchenko, Yuriy Ushenko, Zhengbing Hu
DOI: https://doi.org/10.5815/ijmecs.2024.02.04, Pub. Date: 8 Apr. 2024
A STEM project was implemented, which is intended for students of technical specialties to study the principles of building and using a computer system for segmentation of images of railway transport using fuzzy logic. The project consists of 4 stages, namely stage #1 "Reading images from video cameras using a personal computer or Raspberry Pi microcomputer", stage #2 "Digital image pre-processing (noise removal, contrast enhancement, contour selection)", stage #3 "Segmentation of images", stage #4 "Detection and analysis of objects on segmented images by means of fuzzy logic". Hardware and software tools have been developed for the implementation of the STEM project. A personal computer and a Raspberry Pi 3B+ microcomputer with attached video cameras were used as hardware. Software tools are implemented in the Python language using the Google Colab cloud platform. At each stage of the project, students deepen their knowledge and gain practical skills: they perform hardware and software settings, change program code, and process experimental images of vehicles. It is shown that the processing of experimental images ensures the correct selection of meaningful parts in images of vehicles, for example, windows and number plates in images of locomotives. Assessment of students' educational achievements was carried out by testing them before the start of the STEM project, as well as after the completion of the project. The topics of the test tasks corresponded to the topics of the stages of the STEM project. Improvements in educational achievements were obtained for all stages of the project.
[...] Read more.By Oleksandr Mediakov Victoria Vysotska Dmytro Uhryn Yuriy Ushenko Cennuo Hu
DOI: https://doi.org/10.5815/ijmecs.2024.01.03, Pub. Date: 8 Feb. 2024
The article develops technology for generating song lyrics extensions using large language models, in particular the T5 model, to speed up, supplement, and increase the flexibility of the process of writing lyrics to songs with or without taking into account the style of a particular author. To create the data, 10 different artists were selected and their lyrics were collected; a total of 626 unique songs were obtained. After splitting each song into several input-output pairs, 1874 training instances and 465 test instances were obtained. Two language models, NSA and SA, were retrained for the task of generating song lyrics. For both models, t5-base was chosen as the base model; this version of T5 contains 223 million parameters. The analysis of the original data showed that the NSA model has less degraded results, while for the SA model it is necessary to balance the amount of text per author. Several text metrics, such as BLEU, RougeL, and RougeN, were calculated to quantitatively compare the results of the models and generation strategies. The value of the BLEU metric is the most diverse, varying significantly depending on the strategy, while the Rouge metrics have less variability and a smaller range of values. In total, 8 different decoding methods for text generation supported by the transformers library were compared: greedy search, beam search, diverse beam search, multinomial sampling, beam-search multinomial sampling, top-k sampling, top-p sampling, and contrastive search. The comparison of the generated lyrics shows that the best method is beam search and its variations, including beam-search multinomial sampling. Contrastive search usually outperformed the plain greedy approach. The top-p and top-k methods have no clear advantage over each other, producing different results in different situations.
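Two of the decoding strategies compared above can be contrasted on a toy next-token distribution. This is an illustrative assumption, not the paper's T5 pipeline: greedy search always picks the most probable token, while top-k sampling draws from the k most probable ones, trading determinism for lyrical diversity.

```python
import random

# Invented toy distribution over candidate next words in a lyric
vocab_probs = {"love": 0.5, "night": 0.3, "rain": 0.15, "road": 0.05}

def greedy(probs):
    # deterministic: always the single most probable token
    return max(probs, key=probs.get)

def top_k_sample(probs, k, rng):
    # keep only the k most probable tokens, then sample by weight
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    words = [w for w, _ in top]
    weights = [p for _, p in top]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
g = greedy(vocab_probs)                      # always "love"
s = top_k_sample(vocab_probs, k=3, rng=rng)  # one of the top-3 words
```

Beam search (the study's best performer) generalises greedy decoding by keeping several candidate continuations at each step instead of one.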
By Yuriy Ushenko, Valentina Dvorzhak, Oleksandr Dubolazov, Oleksandr Ushenko, Ivan Mikirin, Zhengbing Hu
DOI: https://doi.org/10.5815/ijigsp.2023.06.04, Pub. Date: 8 Dec. 2023
A new local-topological approach is considered for describing the spatial and angular distributions of polarization parameters of laser fields multiply scattered by optically anisotropic biological layers. A new analytical parameter for describing the local polarization structure of a set of points of coherent object fields, the degree of local depolarization (DLD), is introduced for the first time. The experimental scheme and the technique of measuring coordinate distributions (maps) of the DLD are presented. The new method of local polarimetry was experimentally tested on histological specimens of biopsy sections of operatively extracted breast tumors. The measured DLD maps were processed using statistical, autocorrelation and scale-sampling approaches. Markers for the differential diagnosis of benign (fibroadenoma) and malignant (sarcoma) breast tumors were defined.
By Serhiy Balovsyak, Oleksandr Derevyanchuk, Hanna Kravchenko, Yuriy Ushenko, Zhengbing Hu
DOI: https://doi.org/10.5815/ijmecs.2023.06.03, Pub. Date: 8 Dec. 2023
The software for clustering students according to their educational achievements using fuzzy logic was developed in Python using the Google Colab cloud service. In the process of analyzing educational data, Data Mining problems are solved, since only some characteristics of the educational process are extracted from a large sample of data. Data clustering was performed using the classic K-Means method, which is characterized by simplicity and high speed. Cluster analysis was performed in the space of two features using the machine learning library scikit-learn (Python). The obtained clusters are described by fuzzy triangular membership functions, which made it possible to correctly determine the membership of each student in a certain cluster. The fuzzy membership functions are created using the scikit-fuzzy library. Developing fuzzy functions of objects' membership in clusters is also useful for educational purposes, as it allows a better understanding of the principles of fuzzy logic. As a result of processing test educational data with the developed software, correct results were obtained. It is shown that the use of fuzzy membership functions makes it possible to correctly determine the membership of students in certain clusters, even if such clusters are not clearly separated. Due to this, it is possible to more accurately determine the recommended level of difficulty of tasks for each student, depending on their previous grades.
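The triangular membership functions described above can be sketched in pure Python (the study itself uses scikit-fuzzy, whose trimf behaves this way). The grade scale and cluster parameters below are assumed values for illustration, not the paper's fitted clusters.

```python
def trimf(x, a, b, c):
    """Triangular membership: 0 at a, rising to 1 at the peak b, falling to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# Assumed example: degree to which a grade of 72 (on a 0-100 scale)
# belongs to a "medium-achievement" cluster centred at 70
membership = trimf(72, a=50, b=70, c=90)  # partial membership, not 0 or 1
```

A grade near a cluster boundary gets partial membership in two clusters at once, which is exactly what lets the system handle clusters that are not clearly separated.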
By Yuriy Ushenko, Ivan Gordey, Yuriy Tomka, Irina Soltys, Oksana Bakun, Zhengbing Hu
DOI: https://doi.org/10.5815/ijigsp.2023.05.06, Pub. Date: 8 Oct. 2023
At the current moment, all developed polarization methods utilize "single-point" statistical algorithms for the analysis of laser fields. A relevant task is to generalize traditional techniques by incorporating new correlation-based "two-point" algorithms for the analysis of polarization images. Theoretical foundations of the mutual- and autocorrelation processing of phase maps of polarization-structural images of dehydrated serum film samples are given. Maps of new polarization-correlation parameters, namely the complex degree of coherence (CDC) and the complex degree of mutual polarization (CDMP) of the boundary field of a soft-matter layer, are investigated using dehydrated serum film samples as an example. Two groups of representative samples were considered: patients with uterine myoma (control group 1) and patients with external genital endometriosis (study group 2). We applied a complex algorithm of analytical data processing covering statistical (1st-4th central statistical moments), correlation (Gram-Charlier expansion coefficients of autocorrelation functions) and fractal (fractal dimensions) parameters of the polarization-correlation parameter maps. Objective markers for diagnosing extragenital endometriosis were found.
By Dmytro Uhryn, Yuriy Ushenko, Vasyl Lytvyn, Zhengbing Hu, Olga Lozynska, Victor Ilin, Artur Hostiuk
DOI: https://doi.org/10.5815/ijmecs.2023.04.06, Pub. Date: 8 Aug. 2023
A generalized model of population migration is proposed. On its basis, models of the set of directions of population flows and of the duration of migration, which is determined by its nature in time, type and form, are developed. A model of indicators of actual migration (resettlement) is developed and their groups are delineated. The results of population migration are described by a number of absolute and relative indicators for the purpose of regression analysis of the data. To obtain the results of migration, the authors take into account the power of migration flows, which depends on the population of the territories between which the exchange takes place and on their location, on the basis of coefficients of the effectiveness and intensity of migration ties. The types of migration intensity coefficients are formed depending on their properties. The LightGBM algorithm for predicting population migration is implemented in the intelligent geographic information system. The migration forecasting system is also capable of predicting international migration, i.e. migration between different countries. The significance of this study lies in the increasing need for accurate and reliable migration forecasts. With globalization and the connectivity of nations, understanding and predicting migration patterns have become crucial for various domains, including social planning, resource allocation, and economic development. Through extensive experimentation and evaluation, the developed migration forecasting system has demonstrated its ability to forecast human migration based on machine learning algorithms. Performance metrics of the migration flow forecasting models are investigated, presenting the results obtained from evaluating these models with various performance indicators, including the mean squared error (MSE), root mean squared error (RMSE) and R-squared (R2).
The MSE measures the mean squared difference between predicted and actual values, the RMSE is its square root, and the R2 represents the proportion of variance explained by the model.
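The three evaluation metrics named above are easy to state in plain Python (the study itself would likely compute them with a library such as scikit-learn); the migration counts below are hypothetical values for illustration only.

```python
def mse(y_true, y_pred):
    # mean of the squared prediction errors
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # square root of the MSE, in the same units as the target
    return mse(y_true, y_pred) ** 0.5

def r2(y_true, y_pred):
    # 1 minus the ratio of residual variance to total variance
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [100, 150, 200, 250]   # hypothetical migration counts
y_pred = [110, 140, 210, 240]
scores = (mse(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```

An R2 close to 1 means the model explains most of the variance in the observed migration flows, while RMSE reports the typical error in people counts.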
By Oleh Prokipchuk, Victoria Vysotska, Petro Pukach, Vasyl Lytvyn, Dmytro Uhryn, Yuriy Ushenko, Zhengbing Hu
DOI: https://doi.org/10.5815/ijmecs.2023.03.06, Pub. Date: 8 Jun. 2023
The article develops a technology for finding tweet trends based on clustering, which forms a data stream in the form of short representations of clusters and their popularity for further research of public opinion. The accuracy of the result is affected by the natural-language features of the information flow of tweets. An effective approach to tweet collection, filtering, cleaning and pre-processing based on a comparative analysis of the Bag-of-Words, TF-IDF and BERT algorithms is described. The impact of stemming and lemmatization on the quality of the obtained clusters was determined: stemming and lemmatization reduce the input vocabulary of Ukrainian words by 40.21% and 32.52%, respectively. Optimal combinations of clustering methods (K-Means, Agglomerative Hierarchical Clustering and HDBSCAN) and tweet vectorization methods were found based on the analysis of 27 clusterings of one data sample. A method of presenting clusters of tweets in a short format was selected. Algorithms using the Levenshtein distance, i.e. fuzz sort, fuzz set and Levenshtein, showed the best results: these algorithms perform checks quickly and show a greater difference in similarities, so the similarity threshold can be determined more accurately. According to the clustering results, the optimal solutions are the HDBSCAN clustering algorithm with BERT vectorization for the most accurate results, and K-Means with TF-IDF for the best speed with an acceptable result. Stemming can be used to reduce execution time. In this study, the optimal options for comparing cluster fingerprints were experimentally found among the following similarity search methods: Fuzz Sort, Fuzz Set, Levenshtein, Jaro Winkler, Jaccard, Sorensen, Cosine, and Sift4. For some algorithms, the average fingerprint similarity exceeds 70%.
Three effective tools were found to compare their similarity, as they show a sufficient difference between comparisons of similar and different clusters (> 20%).
The experimental testing was conducted based on the analysis of 90,000 tweets over 7 days for 5 different weekly topics: President Volodymyr Zelenskyi, Leopard tanks, Boris Johnson, Europe, and the bright memory of the deceased. The research was carried out using a combination of K-Means and TF-IDF methods, Agglomerative Hierarchical Clustering and TF-IDF, HDBSCAN and BERT for clustering and vectorization processes. Additionally, fuzz sort was implemented for comparing cluster fingerprints with a similarity threshold of 55%. For comparing fingerprints, the most optimal methods were fuzz sort, fuzz set, and Levenshtein. In terms of execution speed, the best result was achieved with the Levenshtein method. The other two methods performed three times worse in terms of speed, but they are nearly 13 times faster than Sift4. The fastest method is Jaro Winkler, but it has a 19.51% difference in similarities. The method with the best difference in similarities is fuzz set (60.29%). Fuzz sort (32.28%) and Levenshtein (28.43%) took the second and third place respectively. These methods utilize the Levenshtein distance in their work, indicating that such an approach works well for comparing sets of keywords. Other algorithms fail to show significant differences between different fingerprints, suggesting that they are not adapted to this type of task.
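The Levenshtein distance underlying the best-performing fingerprint comparisons above can be written as a short dynamic-programming routine. This is an illustrative implementation; the study used library versions (fuzz sort, fuzz set) built on the same measure.

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning string a into b."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = cur
    return prev[-1]

d = levenshtein("kitten", "sitting")  # classic example: 3 edits
```

Fingerprint comparison then normalises such distances over keyword sets, which is why Levenshtein-based methods separated similar and dissimilar clusters more sharply than the set-overlap measures.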
By Vasyl Lytvyn, Olga Lozynska, Dmytro Uhryn, Myroslava Vovk, Yuriy Ushenko, Zhengbing Hu
DOI: https://doi.org/10.5815/ijmecs.2023.02.06, Pub. Date: 8 Apr. 2023
A method of choosing swarm optimization algorithms and using swarm intelligence for solving a certain class of optimization tasks in industry-specific geographic information systems was developed, considering the stationarity characteristic of such systems. The method consists of 8 stages. Classes of swarm algorithms were studied, and it is shown which classes should be used depending on the stationarity, quasi-stationarity or dynamics of the task solved by an industry-specific geographic information system. An information model of geodata was developed that consists of a formalized combination of their spatial and attributive components, which allows considering the relational, semantic and frame models of knowledge representation of the attributive component. A method of choosing optimization methods designed to work as part of a decision support system within an industry-specific geographic information system was developed. It includes conceptual information modeling, optimization criteria selection, and objective function analysis and modeling. This method allows choosing the most suitable swarm optimization method (or a set of methods).
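One representative of the swarm algorithm classes discussed above is particle swarm optimization. The following is a hedged, minimal 1-D sketch minimising f(x) = x^2; the inertia and attraction coefficients are conventional textbook values, not settings from the paper.

```python
import random

def pso(f, n_particles=20, iters=60, lo=-10.0, hi=10.0, seed=1):
    """Minimal particle swarm optimization minimising f over [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = xs[:]                     # each particle's best-known position
    gbest = min(pbest, key=f)         # swarm-wide best-known position
    w, c1, c2 = 0.7, 1.5, 1.5         # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            # velocity update: inertia + pull toward personal and global bests
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: x * x)  # converges near the minimum at 0
```

In a geographic information system the objective f would instead score candidate solutions of the industry-specific task, and the choice among swarm algorithm classes would follow the stationarity analysis described above.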