IJEME Vol. 15, No. 2, Apr. 2025
Cover page and Table of Contents: PDF (size: 542KB)
REGULAR PAPERS
Data leakage is the deliberate or accidental transfer of institutional or personal data to an external destination. In particular, with the increased use of IT assets since the pandemic, data leaks have become more common. Firewalls, anti-virus software, and Intrusion Prevention Systems (IPS) or Intrusion Detection Systems (IDS) are commonly deployed within the network to secure data sources. However, this type of security software operates server-side and mainly protects the network against external attacks, whereas the main source of recent data leaks has been internal vulnerabilities. Data Loss Prevention (DLP), an appropriate choice for preventing data leaks, is a system developed to identify, monitor, and protect data in motion or stored in a database. DLP systems are preferred to prevent the unauthorized distribution of data at its source, and DLP software is recommended as a technical measure for data security, particularly under the Personal Data Protection Law (KVKK) in Turkey and the General Data Protection Regulation (GDPR) in the European Union.
Test virtual machines were set up to implement real-world scenarios, and, using personal and corporate data, the behavior and robustness of DLP software were evaluated in cases of unauthorized data transfer to USB, CD/DVD, cloud resources, office software, e-mail, or an FTP server. It was observed that the potential leaks and risks arising in data discovery, data masking, data hiding, and data encryption vary with data density during data leakage prevention.
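The data discovery and data masking steps described above can be sketched as a simple content inspector. The patterns below (card numbers, an 11-digit Turkish national ID, e-mail addresses) are illustrative assumptions, not the rule set evaluated in the paper:

```python
import re

# Hypothetical DLP content-inspection rules -- illustrative assumptions only,
# not the actual rule set used in the study.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "tc_kimlik":   re.compile(r"\b\d{11}\b"),          # Turkish national ID is 11 digits
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def discover(text):
    """Data discovery: report which sensitive categories appear in the payload."""
    return {name: pat.findall(text)
            for name, pat in PATTERNS.items() if pat.search(text)}

def mask(text):
    """Data masking: redact each match so the payload can be logged or blocked."""
    for pat in PATTERNS.values():
        text = pat.sub(lambda m: "*" * len(m.group()), text)
    return text

msg = "Send 4111 1111 1111 1111 to alice@example.com"
print(discover(msg))
print(mask(msg))
```

A real DLP agent would hook file writes, clipboard events, and outbound network channels and apply such rules before data leaves the endpoint; the sketch only shows the inspection logic itself.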
A search engine acts as an interface between users and computers. Online search has rapidly and profoundly changed human experience, becoming a key technology that people rely on every day to find information about almost everything. Searching is typically performed with a common purpose underlying the query; a user who does not know the right keywords spends more time framing the query, and the results may not contain the intended answers. Understanding the meaning of the user's query is therefore an important role of the search engine. The query auto-completion feature is important for search engines: it runs continuously, dynamically listing terms with each keystroke, and provides recommendations that facilitate query formulation and improve the relevance of the search. Graphs and related data structures are used frequently in computer science and related fields, with applications of graph machine learning including data recovery, friendship recommendation, and social networking. Heterogeneous graphs (HGs) consist of different kinds of nodes and links and are useful for describing a wide range of complex real-world systems; the Relational Graph Convolutional Network (R-GCN) is a robust graph neural architecture for encoding a knowledge graph. The proposed model combines a supervised R-GCN with a Long Short-Term Memory (LSTM) network for query completion. The model predicts the object given the subject and predicate, achieving an accuracy of 92.4%.
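The relational message passing at the heart of an R-GCN layer can be sketched in a few lines. The toy knowledge graph and dimensions below are assumptions for illustration; the paper's full model stacks such layers with an LSTM, which is not reproduced here:

```python
import numpy as np

# Toy setup -- sizes and triples are illustrative, not the paper's data.
rng = np.random.default_rng(0)
num_nodes, num_rels, d_in, d_out = 4, 2, 8, 8
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 0)]  # (subject, relation, object)

H = rng.normal(size=(num_nodes, d_in))                  # initial node features
W_self = rng.normal(size=(d_in, d_out)) * 0.1           # self-loop weight
W_rel = rng.normal(size=(num_rels, d_in, d_out)) * 0.1  # one weight matrix per relation

def rgcn_layer(H, triples):
    """One R-GCN layer: h_i' = ReLU(W_self h_i + sum_r sum_{j in N_r(i)} W_r h_j / c_{i,r})."""
    out = H @ W_self
    # normalization constant c_{i,r}: number of incoming edges per (node, relation)
    deg = np.zeros((num_nodes, num_rels))
    for s, r, o in triples:
        deg[o, r] += 1
    # relation-specific message passing along each triple
    for s, r, o in triples:
        out[o] += (H[s] @ W_rel[r]) / deg[o, r]
    return np.maximum(out, 0.0)  # ReLU

H1 = rgcn_layer(H, triples)
print(H1.shape)  # (4, 8)
```

The per-relation weight matrices are what let the layer encode heterogeneous edges; for (subject, predicate) → object prediction, the resulting node embeddings would be fed to a scoring or sequence model such as the LSTM mentioned in the abstract.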
The objective of this article is to characterize the personal epistemological beliefs of prospective primary school teachers regarding their learning experiences with specific digital tools related to their digital culture. The research was conducted as a survey among a sample of prospective primary school teachers in Morocco. The results of this study indicate that the beliefs of prospective teachers vary depending on the digital tool used. Prospective teachers state that content posted on social networks and on trainers' personal blogs is understandable. They are willing to question content published on social networks, while a great number of subjects are also inclined to question content published on personal blogs and institutional portals. They fully accept ideas and content published on networks and personal blogs by authors they consider more experienced than themselves. They also state that they rely on experts rather than trusting a consensus of respondents regarding a given content.
Distance learning as a modality has been growing for some time, but it received a significant boost from 2020 onward. As teachers' workloads have increased, Intelligent Virtual Assistants (IVAs) have helped them cope with the high volume of requests. However, IVAs that include modules for empathy and teaching personalization are scarce. In the current work, we map, through a systematic literature review, the level of maturity of IVAs and how they can incorporate empathy and personalization to improve conversational outcomes. The study follows a systematic review methodology, analyzing a series of works involving the use of IVAs and empathetic modules, along with the platforms, resources, and functionalities available. We demonstrate the relevance of the topic in the scientific community, the diversity of countries involved, and the limitations and challenges that still need to be addressed.
The dark web is a vast and opaque place comprising hidden services, many of which contain illegal or offensive content. Hidden services are not accessible through regular search engines or browsers and can only be reached via specific software. The proposed work aims to identify these hidden services by analyzing their associated image and text data; by doing so, one can better understand the types of activity on the dark web and the kinds of content available. First, a dark web crawler is developed to collect hidden services. The collected images are manually classified into four categories: Cards, Devices, Hackers, and Money. Next, the dataset is preprocessed to remove irrelevant images, and a Convolutional Neural Network (CNN) is trained to identify new dark web image classes. Finally, Quantum Transfer Learning (QTL) is applied to improve the model's performance. The proposed work goes beyond conventional methods of categorizing datasets by including image classes of dark web hidden services that have not been considered before. It also examines image data together with the related text to establish a strong correlation between them, providing insights into dark web hidden services by confirming the relationship between their image and text data.
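The transfer-learning step described above can be sketched as follows: a frozen feature extractor (a stand-in for the pretrained CNN backbone) produces fixed feature vectors, and only a small classifier head over the four categories is trained. The features here are synthetic placeholders, not real dark-web data, and the quantum layer from the paper is not reproduced:

```python
import numpy as np

# Assumed setup: 64-d feature vectors from a frozen backbone; synthetic data only.
rng = np.random.default_rng(1)
CLASSES = ["Cards", "Devices", "Hackers", "Money"]

X = rng.normal(size=(200, 64))           # stand-in for extracted CNN features
y = rng.integers(0, 4, size=200)         # stand-in labels
centers = rng.normal(size=(4, 64)) * 3   # separate the classes so training visibly works
X += centers[y]

# Trainable softmax head; the backbone stays frozen (transfer learning).
W = np.zeros((64, 4))
b = np.zeros(4)
for _ in range(300):                     # plain gradient descent on cross-entropy
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p - np.eye(4)[y]              # softmax cross-entropy gradient
    W -= 0.01 * (X.T @ grad) / len(X)
    b -= 0.01 * grad.mean(axis=0)

acc = ((X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Freezing the backbone and training only the head is the standard classical transfer-learning recipe; in the paper's QTL variant, a quantum circuit would take the place of this classical head.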