Workplace: Department of Computer Science and Informatics, University of Energy and Natural Resources
Research Interests: Computer Vision, Artificial Intelligence, Graph and Image Processing, Embedded Systems, Image Compression, Image Manipulation, Image Processing
Obed Appiah received the BSc degree in Computer Science from the Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana, in 2005, and the MPhil degree in Computer Science from the same institution in 2013. He is currently a Lecturer at the University of Energy and Natural Resources (UENR), Sunyani, and a PhD Computer Science student at KNUST.
DOI: https://doi.org/10.5815/ijmecs.2023.05.06, Pub. Date: 8 Oct. 2023
During the COVID-19 pandemic, most tertiary institutions in Ghana were compelled to continue delivering lectures online using internet technologies, as was the case in other countries. Senior high schools in Ghana, however, were not asked to do the same, and most of the current literature on blended or online learning in Ghana focuses on tertiary education. This paper situates the blended learning model in a less endowed senior high school to unearth the prospects of its implementation. The research provides an alternative to traditional face-to-face learning, which faces the challenges of inadequate infrastructure, high student-to-class ratios, and limited compatibility with 21st-century learning skills and lifelong learning in Ghana.
A customised Moodle web application hosted students online in both synchronous and asynchronous interactions. A purposive quota sampling technique was used to select an appreciable sample of students, who went through the traditional face-to-face model for one term and then studied through the blended learning model for another term. Students' examination performances for both terms were analysed with a paired t-test. Interviews with participants were conducted to ascertain their evaluation of the blended learning model, and questionnaires were administered to assess the institutional, technological, and human-resource readiness for blended learning in senior high schools. The analysis of the data gathered showed that blended learning in senior high schools has high prospects and is a better alternative to face-to-face learning in Ghana.
[...] Read more.
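The paired t-test used above compares each student's score under one model with the same student's score under the other. A minimal sketch of that statistic in plain Python follows; the score lists are hypothetical illustrations, not the study's data.

```python
import math

def paired_t_statistic(before, after):
    """Paired t-test statistic for two matched samples.

    Returns (t, degrees_of_freedom). A |t| larger than the critical
    value for the given df at the chosen alpha indicates a
    statistically significant difference between the paired scores.
    """
    assert len(before) == len(after) and len(before) > 1
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (Bessel's correction, n - 1).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Hypothetical term scores for five students (illustrative only).
face_to_face = [55, 60, 48, 70, 65]
blended      = [62, 66, 55, 74, 70]
t, df = paired_t_statistic(blended, face_to_face)
```

In practice a library routine such as `scipy.stats.ttest_rel` would also return the p-value; the hand-rolled version above only shows where the statistic comes from.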
DOI: https://doi.org/10.5815/ijigsp.2021.03.03, Pub. Date: 8 Jun. 2021
This paper presents a digital image watermarking scheme in the frequency domain for colour images. The algorithm was developed using the discrete wavelet transform together with fractal encryption. A host image was first transformed into the frequency domain using the discrete wavelet transform, after which a binary watermark was permuted and encrypted with a fractal generated from the watermark and a random key. The encrypted watermark was then embedded into the host image in the frequency domain to form a watermarked image. The algorithm's performance was examined based on the image quality of the watermarked image using the peak signal-to-noise ratio. A perceptual metric, the structural similarity index (SSIM), was further used to examine the structural similarity of the watermarked image and the extracted watermark, and the normalised cross-correlation was introduced to further assess the robustness of the algorithm. The algorithm produced a peak signal-to-noise ratio of 51.1382 dB and a structural similarity index of 0.9999 when tested on the colour images Lena, baboon, and pepper, indicating the quality of the watermarked images and hence the high imperceptibility of the proposed algorithm. The extracted watermark also had a structural similarity of 1 and a normalised cross-correlation of 1, indicating a perfect match between the original and extracted watermarks and hence the high performance of the proposed algorithm. The algorithm also showed a very good level of robustness when attacks such as Gaussian noise, Poisson noise, salt-and-pepper noise, speckle noise, and filtering were applied.
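The core DWT embed-and-extract loop described above can be sketched with a single-level Haar wavelet transform and simple additive embedding in the LL band. This is a minimal illustration only: the paper's fractal encryption, watermark permutation, and choice of wavelet are omitted, and the embedding strength `alpha` is an assumed parameter, not taken from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D orthonormal Haar wavelet transform."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation band
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    img[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    img[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return img

def embed(host, watermark_bits, alpha=4.0):
    """Additively embed a binary watermark into the LL band."""
    ll, lh, hl, hh = haar_dwt2(host.astype(float))
    ll = ll + alpha * watermark_bits
    return haar_idwt2(ll, lh, hl, hh)

def psnr(orig, marked):
    """Peak signal-to-noise ratio for 8-bit images, in dB."""
    mse = np.mean((orig.astype(float) - marked) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

Extraction reverses the embedding: transform the watermarked image, subtract the original LL band, and divide by `alpha`. A stronger `alpha` raises robustness to noise attacks but lowers the PSNR, which is the trade-off the reported 51.1382 dB figure reflects.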
DOI: https://doi.org/10.5815/ijigsp.2018.03.04, Pub. Date: 8 Mar. 2018
The process of generating a histogram from a given image is a common practice in the image processing domain. Statistical information generated from histograms enables various algorithms to perform many pre-processing tasks within image processing and computer vision. The statistical subtasks of most algorithms are computed effectively when the histogram of the image is known: information such as the mean, median, mode, variance, and standard deviation can easily be computed once the histogram of a given dataset is provided. Image brightness, entropy, contrast enhancement, threshold value estimation, and image compression models or algorithms all employ the histogram to get their work done. The challenge with histogram generation is that, as the size of the image increases, the time needed to traverse all elements in the image also increases. This results in high computational time complexity for algorithms that employ histogram generation as a subtask. Generally, the time complexity of histogram algorithms can be estimated as O(N^2) when the height and width of the image are almost the same. This paper proposes an approximated method for histogram generation that significantly reduces the time needed to complete the computation while still producing histograms that are acceptable for further processing. The method can theoretically reduce the computational time to a fraction of that required by the exact method and still generate outputs of an acceptable level for algorithms such as Histogram Equalization (HE) for contrast enhancement and Otsu automatic threshold estimation.
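One plausible way to approximate a histogram without visiting every pixel, in the spirit of the trade-off described above, is to sample a regular grid of pixels and rescale the counts. The abstract does not specify the paper's exact approximation, so the sampling scheme and `step` parameter below are assumptions for illustration.

```python
import numpy as np

def approx_histogram(img, step=4, bins=256):
    """Approximate a grayscale histogram by sampling every `step`-th
    pixel along both axes, then rescaling the counts so they sum to
    the full pixel count.  Only ~1/step**2 of the pixels are visited,
    cutting traversal time by roughly that factor.
    """
    sample = img[::step, ::step]
    hist = np.bincount(sample.ravel(), minlength=bins).astype(float)
    scale = img.size / sample.size   # restore the original total count
    return hist * scale
```

Because the rescaled counts sum to the true pixel count, consumers such as histogram equalization or Otsu thresholding, which normalise the histogram into a probability distribution anyway, can use the approximation directly.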
DOI: https://doi.org/10.5815/ijigsp.2017.02.01, Pub. Date: 8 Feb. 2017
Image processing techniques for object tracking, identification, and classification have become common today as a result of the improved quality of cameras as well as camera prices becoming cheaper and cheaper by the day. The use of cameras also makes it possible for humans to analyse video streams or images where it is difficult for robots, algorithms, or machines to deal with the images effectively. However, the use of cameras for basic tracking and analysis does not come without challenges, such as sudden changes in illumination, shadows, occlusion, noise, and the high computational time and space complexities of algorithms. A typical image processing task may involve several subtasks, such as capturing and pre-processing, which demand high computational resources to complete. One of the main pre-processing tasks in image processing is image segmentation, which enables images to be divided into sections of interest so that analysis can be performed on them. Background subtraction is commonly used to segment images into background and foreground for further processing. Algorithms producing highly accurate results during this segmentation task normally demand high computation time or memory space, while algorithms that use smaller memory space and shorter time may suffer from limitations that lead to undesired results at some point in time. Poor outputs from algorithms will eventually lead to system failure, which must be avoided as much as possible. This paper proposes a median-based background updating algorithm which determines the median of a buffer containing values that are highly correlated. The algorithm achieves this by deleting an extreme value from the buffer whenever data is to be added to it. Experiments show that the method produces good results with less computational time, making it possible to implement on devices that do not have many computational resources.
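The buffer-updating idea above can be sketched per pixel: keep a sorted buffer of recent intensities, evict an extreme value on each update, and use the buffer median as the background estimate. The specific eviction rule below (drop whichever extreme lies farther from the current median) is an assumption made for illustration; the paper's exact rule may differ.

```python
import bisect

class MedianBackground:
    """Sketch of a median-based background model for a single pixel.

    A sorted buffer of recent intensities is maintained.  Before each
    new value is inserted, the extreme (minimum or maximum) farther
    from the current median is evicted, so the buffer keeps a fixed
    size and gradually forgets outliers.  The background estimate is
    simply the buffer median.
    """
    def __init__(self, initial_values):
        self.buf = sorted(initial_values)

    def median(self):
        n = len(self.buf)
        mid = n // 2
        if n % 2:
            return self.buf[mid]
        return (self.buf[mid - 1] + self.buf[mid]) / 2

    def update(self, value):
        m = self.median()
        # Evict whichever extreme lies farther from the median.
        if abs(self.buf[-1] - m) >= abs(self.buf[0] - m):
            self.buf.pop()       # drop the maximum
        else:
            self.buf.pop(0)      # drop the minimum
        bisect.insort(self.buf, value)   # keep the buffer sorted
        return self.median()
```

Because the buffer stays sorted, the median is read in O(1) and each update costs O(k) for a buffer of size k, which is what makes the approach attractive on devices with limited computational resources. A pixel would then be classified as foreground when it differs from the median estimate by more than a threshold.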