Work place: Department of Computer Science, Virtual University of Pakistan
E-mail: shabib.aftab@gmail.com
Website:
Research Interests: Applied machine learning, software process improvement
Biography
Dr. Shabib Aftab (Senior Member, IEEE) earned his Ph.D. in Computer Science from National College of Business Administration and Economics, Lahore. He previously received his M.S. degree in Computer Science from COMSATS University, Lahore, and his M.Sc. degree in Information Technology from the Punjab University College of Information Technology, Lahore. Currently, he is serving as a Lecturer in the Department of Computer Science at the Virtual University of Pakistan. He has research interests in applied machine learning and software process improvement.
By Samia Akhtar, Shabib Aftab, Munir Ahmad, Asma Akhtar
DOI: https://doi.org/10.5815/ijem.2024.06.04, Pub. Date: 8 Dec. 2024
Diabetic retinopathy (DR) is a severe eye condition that develops as a result of long-term diabetes mellitus. Timely detection is essential to prevent it from progressing to more advanced stages. Manual detection of DR is labor-intensive and time-consuming, requiring expertise and extensive image analysis. Our research aims to develop a robust, automated deep learning model to assist healthcare professionals by streamlining the detection process and improving diagnostic accuracy. This research proposes a multi-classification framework using transfer learning for diabetic retinopathy grading among diabetic patients. An image-based dataset, APTOS 2019 Blindness Detection, is utilized for model training and testing. Our methodology involves three key preprocessing steps: 1) cropping to remove extraneous background regions, 2) contrast enhancement using CLAHE (Contrast Limited Adaptive Histogram Equalization), and 3) resizing to a consistent dimension of 224x224x3. To address class imbalance, we applied SMOTE (Synthetic Minority Over-sampling Technique) to balance the dataset. Data augmentation techniques such as rotation, zooming, shifting, and brightness adjustment are used to further enhance the model's generalization. The dataset is split into a 70:10:20 ratio for training, validation, and testing. For classification, two transfer learning models, EfficientNetB3 and Xception, are used after fine-tuning, which includes the addition of dense, dropout, and fully connected layers. Hyperparameters such as batch size, number of epochs, and optimizer were adjusted prior to model training. The performance of our model is evaluated using various performance metrics, including accuracy, specificity, and sensitivity, among others. Results reveal the highest test accuracy of 95.16% on the APTOS dataset for grading diabetic retinopathy into five classes using the EfficientNetB3 model, followed by a test accuracy of 92.66% using the Xception model. Our top-performing model, EfficientNetB3, was compared against various state-of-the-art approaches, including DenseNet-169, hybrid models, and ResNet-50, and outperformed all of these methodologies.
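As a rough illustration of the three preprocessing steps named in the abstract, the following minimal sketch uses OpenCV to crop the dark background, apply CLAHE, and resize to 224x224. The function name, threshold, and CLAHE parameters are our own assumptions, not taken from the paper.

```python
# Sketch of the described preprocessing: crop -> CLAHE -> resize.
import cv2
import numpy as np

def preprocess_fundus(path: str, size: int = 224) -> np.ndarray:
    img = cv2.imread(path)

    # 1) Crop extraneous near-black background around the retina disc.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    coords = np.argwhere(gray > 10)        # assumed intensity threshold
    y0, x0 = coords.min(axis=0)
    y1, x1 = coords.max(axis=0) + 1
    img = img[y0:y1, x0:x1]

    # 2) CLAHE on the L channel of LAB space to enhance local contrast.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # 3) Resize to the consistent 224x224x3 input the CNN expects.
    return cv2.resize(img, (size, size))
```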
DOI: https://doi.org/10.5815/ijitcs.2024.06.05, Pub. Date: 8 Dec. 2024
Diabetic retinopathy stands as a significant concern for individuals managing diabetes. It is a severe eye condition that targets the delicate blood vessels within the retina. As it advances, it can inflict severe vision impairment or, in extreme cases, complete blindness. Regular eye examinations are vital for individuals with diabetes to detect abnormalities early. Detection of diabetic retinopathy is a challenging and time-consuming process, but deep learning and transfer learning techniques offer vital support by automating the process, providing accurate predictions, and simplifying diagnostic procedures for healthcare professionals. This study introduces a multi-classification framework for grading diabetic retinopathy into five classes using transfer learning and data fusion. The objective is to develop a robust, automated model for diabetic retinopathy detection to enhance the diagnostic process for healthcare professionals. We fused two distinct datasets, APTOS and IDRiD, resulting in a total of 4178 fundus images. The merged dataset underwent preprocessing to enhance image quality and to remove unwanted regions, noise, and artifacts from the fundus images. The preprocessed dataset is then resized, and a balancing technique called SMOTE is applied due to the uneven class distribution. To increase the diversity and size of the dataset, data augmentation techniques including flipping, brightness adjustment, and contrast adjustment are applied. The dataset is split into an 80:10:10 ratio for training, validation, and testing. Two pre-trained models, EfficientNetB5 and DenseNet121, are fine-tuned, and training parameters such as batch size, number of epochs, and learning rate are adjusted. The results demonstrate that the highest test accuracy of 96.06% is achieved using the EfficientNetB5 model, followed by 91.40% test accuracy using the DenseNet121 model. The performance of our best model, EfficientNetB5, is compared with several state-of-the-art approaches, including DenseNet-169, hybrid models, and ResNet-50, where our model outperformed these methodologies in terms of test accuracy.
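A minimal sketch of the SMOTE balancing step described above, using the imbalanced-learn library. The synthetic stand-in features and class proportions are illustrative assumptions; the paper applies SMOTE to the fused fundus-image dataset.

```python
# Sketch: equalize the five DR severity grades with SMOTE.
from collections import Counter
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))                          # stand-in image features
y = rng.choice(5, size=500, p=[.5, .2, .15, .1, .05])    # imbalanced grades 0-4

X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)
print(Counter(y), "->", Counter(y_bal))                  # all five classes equalized
```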
By Umair Ali, Shabib Aftab, Ahmed Iqbal, Zahid Nawaz, Muhammad Salman Bashir, Muhammad Anwaar Saeed
DOI: https://doi.org/10.5815/ijmecs.2020.05.03, Pub. Date: 8 Oct. 2020
Testing is considered one of the most expensive activities in the software development process. Fixing defects during the testing process can increase both the cost and the completion time of the project. The cost of the testing process can be reduced by identifying defective modules during the development (pre-testing) stage. This process is known as "Software Defect Prediction" and has been widely studied by researchers over the last two decades. This research proposes a classification framework for the prediction of defective modules using variant-based ensemble learning and feature selection techniques. The variant selection activity identifies the best optimized versions of classification techniques so that their ensemble can achieve high performance, whereas feature selection is performed to remove features that do not contribute to classification and would otherwise lower performance. The proposed framework is implemented on four cleaned NASA datasets from the MDP repository and evaluated using three performance measures: F-measure, Accuracy, and MCC. According to the results, the proposed framework outperformed 10 widely used supervised classification techniques: "Naïve Bayes (NB), Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), Support Vector Machine (SVM), K Nearest Neighbor (KNN), kStar (K*), One Rule (OneR), PART, Decision Tree (DT), and Random Forest (RF)".
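A minimal sketch of the framework's core idea under scikit-learn: select contributing features, tune several variants of base classifiers, and ensemble the best variants by voting. The filter, parameter grids, and soft-voting scheme are our assumptions, not the paper's exact configuration.

```python
# Sketch: feature selection + variant selection + voting ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=30, weights=[0.8],
                           random_state=1)              # stand-in defect dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Feature selection: drop features that do not participate in classification.
selector = SelectKBest(f_classif, k=15)

# Variant selection: grid-search each technique for its best optimized variant.
dt = GridSearchCV(DecisionTreeClassifier(), {"max_depth": [3, 5, None]}, cv=5)
rf = GridSearchCV(RandomForestClassifier(), {"n_estimators": [50, 100]}, cv=5)

ensemble = VotingClassifier([("dt", dt), ("rf", rf), ("nb", GaussianNB())],
                            voting="soft")
model = Pipeline([("select", selector), ("vote", ensemble)]).fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```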
DOI: https://doi.org/10.5815/ijitcs.2020.03.04, Pub. Date: 8 Jun. 2020
Prediction of defect-prone software modules is now considered an important activity of software quality assurance. This approach uses software metrics to predict whether a developed module is defective or not. This research presents an MLP-based ensemble classification framework to predict defect-prone software modules. The framework predicts defective modules along three dimensions: 1) tuned MLP, 2) tuned MLP with bagging, and 3) tuned MLP with boosting. In the first dimension, only the optimized MLP is used for classification. In the second dimension, the optimized MLP is integrated with the bagging technique, and in the third, with the boosting technique. Four publicly available cleaned NASA MDP datasets are used for the implementation of the proposed framework, and performance is evaluated using F-measure, Accuracy, ROC area, and MCC. The performance of the proposed framework is compared with ten widely used supervised classification techniques using the Scott-Knott ESD test, and the results reflect the high performance of the proposed framework.
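A minimal sketch of two of the three dimensions (a tuned MLP alone and the same MLP wrapped in bagging). The boosting dimension is analogous, but scikit-learn's AdaBoost requires sample-weight support that MLPClassifier lacks, so it is omitted here; the hyperparameter values and synthetic data are illustrative assumptions.

```python
# Sketch: tuned MLP vs. tuned MLP + bagging, scored with MCC.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=800, n_features=21, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

# Dimension 1: tuned MLP (layer sizes assumed found via prior search).
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=7)

# Dimension 2: the same optimized MLP integrated with bagging.
bagged = BaggingClassifier(mlp, n_estimators=10, random_state=7)

for name, clf in [("MLP", mlp), ("MLP+Bagging", bagged)]:
    clf.fit(X_tr, y_tr)
    print(name, "MCC:", round(matthews_corrcoef(y_te, clf.predict(X_te)), 3))
```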
DOI: https://doi.org/10.5815/ijmecs.2020.01.03, Pub. Date: 8 Feb. 2020
Producing high-quality software at lower cost is possible by detecting defect-prone software modules before the testing process. With this approach, less time and fewer resources are required, as only those modules predicted as defective are thoroughly tested. This paper presents a classification framework which uses a multi-filter feature selection technique and a Multi-Layer Perceptron (MLP) to predict defect-prone software modules. The proposed framework works in two dimensions: 1) with an oversampling technique, and 2) without it. Oversampling is introduced in the framework to analyze the effect of the class imbalance issue on the performance of classification techniques. The framework is implemented on twelve cleaned NASA MDP datasets, and performance is evaluated using F-measure, Accuracy, MCC, and ROC. According to the results, the proposed framework with the class balancing technique performed well on all of the used datasets.
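A minimal sketch of the multi-filter idea: combine the features ranked highly by several independent filters before training the MLP. The two filters chosen and the combination rule (set union) are our assumptions; the paper does not specify them here.

```python
# Sketch: keep features selected by at least one of several filters.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif

X, y = make_classification(n_samples=500, n_features=40, random_state=3)

filters = [SelectKBest(f_classif, k=10),
           SelectKBest(mutual_info_classif, k=10)]
selected = set()
for f in filters:
    f.fit(X, y)
    selected |= set(np.flatnonzero(f.get_support()))   # union of filter picks

X_reduced = X[:, sorted(selected)]
print("kept", X_reduced.shape[1], "of", X.shape[1], "features")
```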
By Faseeha Matloob, Shabib Aftab, Ahmed Iqbal
DOI: https://doi.org/10.5815/ijmecs.2019.12.02, Pub. Date: 8 Dec. 2019
Testing is one of the crucial activities of the software development life cycle, ensuring the delivery of a high-quality product. Since software testing consumes a significant amount of resources, a high-quality product can be delivered at lower cost if, instead of all software modules, only those likely to be defective are thoroughly tested. Software defect prediction, which has now become an essential part of software testing, can achieve this goal. This research presents a framework for software defect prediction using feature selection and ensemble learning techniques. The framework consists of four stages: 1) dataset selection, 2) preprocessing, 3) classification, and 4) reflection of results. The framework is implemented on six publicly available cleaned NASA MDP datasets, and performance is reported using various measures, including F-measure, Accuracy, MCC, and ROC. First, the performance of all search methods within the framework is compared on each dataset, and the method with the highest score on each performance measure is identified. Second, the results of the proposed framework with all search methods are compared with the results of 10 well-known supervised classification techniques. The results reflect that the proposed framework outperformed all of the other classification techniques.
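As a small sketch of the evaluation stage, the following computes the four reported measures (F-measure, Accuracy, MCC, ROC area) for a fitted classifier. The classifier and synthetic data are placeholders, not the paper's setup.

```python
# Sketch: the four performance measures used throughout these studies.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             matthews_corrcoef, roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, weights=[0.8], random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)

clf = RandomForestClassifier(random_state=5).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]     # scores for the positive class

print("F-measure:", f1_score(y_te, pred))
print("Accuracy :", accuracy_score(y_te, pred))
print("MCC      :", matthews_corrcoef(y_te, pred))
print("ROC area :", roc_auc_score(y_te, proba))
```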
By Ahmed Iqbal, Shabib Aftab, Faseeha Matloob
DOI: https://doi.org/10.5815/ijitcs.2019.11.05, Pub. Date: 8 Nov. 2019
Predicting defects at an early stage of the software development life cycle can improve the quality of the end product at lower cost. Machine learning techniques have proved to be an effective way of performing software defect prediction; however, imbalanced software defect datasets are the main cause of lower and biased classifier performance. This issue can be resolved by applying re-sampling methods to the software defect dataset before the classification process. This research analyzes the performance of three widely used resampling techniques for the class imbalance issue in software defect prediction: "Random Under Sampling", "Random Over Sampling", and "Synthetic Minority Oversampling Technique (SMOTE)". For the experiments, 12 publicly available cleaned NASA MDP datasets are used with 10 widely used supervised machine learning classifiers. The performance is evaluated through various measures, including F-measure, Accuracy, MCC, and ROC. According to the results, most of the classifiers performed better with the "Random Over Sampling" technique on many datasets.
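A minimal sketch comparing the three resampling techniques on an imbalanced stand-in dataset, using imbalanced-learn; the class proportions are illustrative.

```python
# Sketch: class distributions after each of the three resampling methods.
from collections import Counter
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, weights=[0.9], random_state=2)
print("original:", Counter(y))

for name, sampler in [("RUS", RandomUnderSampler(random_state=2)),
                      ("ROS", RandomOverSampler(random_state=2)),
                      ("SMOTE", SMOTE(random_state=2))]:
    _, y_res = sampler.fit_resample(X, y)
    print(name, Counter(y_res))
```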
By Ahmed Iqbal, Shabib Aftab, Israr Ullah, Muhammad Salman Bashir, Muhammad Anwaar Saeed
DOI: https://doi.org/10.5815/ijmecs.2019.09.06, Pub. Date: 8 Sep. 2019
Software defect prediction is one of the emerging research areas of software engineering. The prediction of defects at an early stage of the development process can produce high-quality software at lower cost. This research contributes by presenting a feature-selection-based ensemble classification framework which consists of four stages: 1) dataset selection, 2) feature selection, 3) classification, and 4) results. The proposed framework is implemented along two dimensions, one with feature selection and one without. The performance is evaluated through various measures, including Precision, Recall, F-measure, Accuracy, MCC, and ROC. Twelve cleaned, publicly available NASA datasets are used for the experiments. The results of both dimensions of the proposed framework are compared with other widely used classification techniques: "Naïve Bayes (NB), Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), Support Vector Machine (SVM), K Nearest Neighbor (KNN), kStar (K*), One Rule (OneR), PART, Decision Tree (DT), and Random Forest (RF)". The results reflect that the proposed framework outperformed the other classification techniques on some of the used datasets; however, the class imbalance issue could not be fully resolved.
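A minimal sketch of the two dimensions described above: the same ensemble classifier cross-validated with and without a feature-selection stage. The selector, the value of k, and the choice of Random Forest as the ensemble are illustrative assumptions.

```python
# Sketch: compare the "with feature selection" and "without" dimensions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=600, n_features=35, random_state=4)

with_fs = Pipeline([("select", SelectKBest(f_classif, k=12)),
                    ("clf", RandomForestClassifier(random_state=4))])
without_fs = RandomForestClassifier(random_state=4)

for name, model in [("with FS   ", with_fs), ("without FS", without_fs)]:
    score = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(name, "mean F-measure:", round(score, 3))
```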
By Ahmed Iqbal, Shabib Aftab, Israr Ullah, Muhammad Anwaar Saeed, Arif Husen
DOI: https://doi.org/10.5815/ijcnis.2019.09.05, Pub. Date: 8 Sep. 2019
The exponential increase in the use of online information systems has triggered the demand for secure networks in which any intrusion can be detected and aborted. Intrusion detection is considered one of the emerging research areas nowadays. This paper presents a machine learning based classification framework to detect Denial of Service (DoS) attacks. The framework consists of five stages: 1) selection of the relevant dataset, 2) data pre-processing, 3) feature selection, 4) detection, and 5) reflection of results. The feature selection stage uses the Decision Tree (DT) classifier as subset evaluator with four well-known selection techniques: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Best First (BF), and Rank Search (RS). For detection, a Decision Tree (DT) is used with the bagging technique. The proposed framework is compared with 10 widely used classification techniques, including Naïve Bayes (NB), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), K-Nearest Neighbor (kNN), Decision Tree (DT), Radial Basis Function (RBF), One Rule (OneR), PART, Bayesian Network (BN), and Random Tree (RT). The part of the NSL-KDD dataset related to Denial of Service attacks is used for the experiments, and performance is evaluated using various accuracy measures, including Precision, Recall, F-measure, FP rate, Accuracy, MCC, and ROC. The results reflect that the proposed framework outperformed all of the other classifiers.
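A minimal sketch of the detection stage only: a Decision Tree wrapped in bagging, as the framework uses. The synthetic 41-feature data stands in for the NSL-KDD DoS subset; the pre-processing and feature-search stages are omitted, and the number of estimators is an assumption.

```python
# Sketch: bagged Decision Tree as the DoS detector.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# 41 features mirrors the NSL-KDD feature count; values are synthetic.
X, y = make_classification(n_samples=2000, n_features=41, random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)

detector = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                             random_state=6).fit(X_tr, y_tr)
print("accuracy:", detector.score(X_te, y_te))   # normal vs. DoS
```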
DOI: https://doi.org/10.5815/ijcnis.2019.04.03, Pub. Date: 8 Apr. 2019
Network security is an essential element in the day-to-day IT operations of nearly every organization. Securing a computer network means considering the threats and vulnerabilities and arranging countermeasures. Network security threats are increasing rapidly, making wireless networks and internet services unreliable and insecure. An Intrusion Detection System plays a protective role in shielding a network from potential intrusions. In this research paper, a Feed Forward Neural Network and a Pattern Recognition Neural Network are designed and tested for the detection of various attacks using a modified KDD Cup99 dataset. In our proposed models, the Bayesian Regularization and Scaled Conjugate Gradient training functions are used to train the artificial neural networks. Various performance measures such as Accuracy, MCC, R-squared, MSE, DR, FAR, and AROC are used to evaluate the performance of the proposed neural network models. The results show that each model outperformed the other on different performance measures for different attack detections.
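A rough Python/Keras analogue of the feed-forward architecture described above. The original models use Bayesian Regularization and Scaled Conjugate Gradient (MATLAB training functions with no direct Keras equivalent), so Adam with L2 regularization is used here as a stand-in; the layer sizes and data are assumptions.

```python
# Sketch: feed-forward attack classifier; training function is a stand-in.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 41).astype("float32")   # stand-in KDD-style features
y = np.random.randint(0, 2, size=1000)           # attack vs. normal labels

model = keras.Sequential([
    keras.layers.Input(shape=(41,)),
    keras.layers.Dense(32, activation="relu",
                       kernel_regularizer=keras.regularizers.l2(1e-4)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```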
DOI: https://doi.org/10.5815/ijmecs.2018.01.03, Pub. Date: 8 Jan. 2018
Scrum has emerged as one of the most widely adopted and most desired Agile approaches, providing corporate strategic competency by laying a firm foundation for project management. Scrum, being more of a framework than a rigid methodology, offers maximum flexibility to its practitioners. However, several challenges are confronted during its implementation, for which some researchers have not only adapted but also augmented Scrum with other Agile practices. One such effort is IScrum, an improved Scrum process model. In this paper, an empirical study is conducted to analyze the two models, i.e., the classical Agile Scrum model and the IScrum process model. The study has two goals: first, to validate IScrum, and second, to evaluate it in comparison with the traditional Scrum model. The study then describes and highlights which characteristics of Scrum are enhanced in IScrum. Furthermore, a survey is used to investigate the teams' experience with both models. The results of the survey and case study are examined and compared to find out whether IScrum performs better than Scrum in software development. The outcomes suggest that the improvements were quite effective in resolving most of the problem areas; IScrum can thus be adopted by industry practitioners as the better choice.
DOI: https://doi.org/10.5815/ijmecs.2017.12.04, Pub. Date: 8 Dec. 2017
A software development process model plays a key role in developing high-quality software. However, no one-size-fits-all process model exists in the software industry; to accommodate specific project needs, process models have to be tailored. Extreme Programming (XP) is a well-known agile model. Due to its simplicity, best practices, and disciplined approach, researchers have tried to mold it for various types of projects and situations. As a result, a large number of customized versions of XP are available nowadays. The aim of this paper is to analyze the latest customizations of XP. For this purpose, a systematic literature review is conducted on studies published from 2013 to 2017. This detailed review identifies the objectives of the customizations, the specific areas in which customizations are made, and the practices and phases targeted for customization. This work will not only serve scholars seeking the current state of XP but will also help researchers predict future directions of software development with XP.
DOI: https://doi.org/10.5815/ijmecs.2017.11.07, Pub. Date: 8 Nov. 2017
Agile mania has revolutionized the software industry. Scrum, a widely adopted mainstream production process, has dominated other Agile family members. Both industrial and academic researchers have eagerly tailored and adapted the Scrum framework in quest of software process improvement. Their desire for innovation drives them to integrate other software development models with it, to leverage the strengths of all the models combined and stifle their weaknesses. This paper aims at providing a state-of-the-art, insightful understanding of how practices from different Agile process models have been plugged into the Scrum framework to bring about improvements in different aspects of development, resulting in enhanced productivity and product quality. To gain this in-depth perception, a systematic mapping study has been planned. This study identifies research on hybrid models of Scrum within the Agile family published between 2011 and 2017. Subsequently, these hybrid models of Scrum are examined broadly by classifying and thematically analyzing the literature, and the outcomes are presented. This study contributes a current, coarse-grained overview that in turn may guide researchers in future research endeavors.
DOI: https://doi.org/10.5815/ijmecs.2017.10.04, Pub. Date: 8 Oct. 2017
Social media and micro-blogging websites have become popular platforms where anyone can express their thoughts about any particular news, event, or product. Analyzing this massive amount of user-generated data is one of the hot topics today. Sentiment analysis includes the classification of a particular text as positive, negative, or neutral, which is known as polarity detection. Support Vector Machine (SVM) is one of the widely used machine learning algorithms for sentiment analysis. In this research, we propose a sentiment analysis framework and, using this framework, analyze the performance of SVM for textual polarity detection. We use three datasets for the experiments: two from Twitter and one from IMDB reviews. For the performance evaluation of SVM, we use three different ratios of training data to test data: 70:30, 50:50, and 30:70. Performance is measured in terms of precision, recall, and F-measure for each dataset.
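A minimal sketch of the described setup: a linear SVM on text features, evaluated at the 70:30 train/test ratio. The TF-IDF featurization and the tiny inline corpus are our assumptions, standing in for the Twitter and IMDB datasets used in the paper.

```python
# Sketch: SVM polarity detection with precision/recall/F-measure output.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great movie", "terrible plot", "loved it", "waste of time",
         "brilliant acting", "boring and slow", "fantastic", "awful"] * 10
labels = [1, 0, 1, 0, 1, 0, 1, 0] * 10          # 1 = positive, 0 = negative

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.30,
                                          random_state=8)   # 70:30 ratio
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```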
By Zahid Nawaz, Shabib Aftab, Faiza Anwer
DOI: https://doi.org/10.5815/ijmecs.2017.09.06, Pub. Date: 8 Sep. 2017
Feature Driven Development (FDD) is a process-oriented and client-centric agile software development model which develops software according to client-valued features. Like other agile models, it has an adaptive and incremental nature, implementing required functionality in short iterations. FDD mainly focuses on the designing and building aspects of software development, with more emphasis on quality. However, its main problems are low responsiveness to changing requirements, reliance on experienced staff, and limited suitability for small-scale projects. To overcome these problems, a Simplified Feature Driven Development (SFDD) model is proposed in this paper. In SFDD, we have modified the phases of classical FDD for small- to medium-scale projects so that changing requirements can be handled by small teams in an efficient and effective manner.
DOI: https://doi.org/10.5815/ijmecs.2017.08.03, Pub. Date: 8 Aug. 2017
Resolving a wide range of issues and offering a variety of benefits to software engineering makes Agile process models attractive to researchers. Scrum has been recognized as one of the most promising and successfully adopted agile process models in the software industry. The reason behind this wide recognition is its contribution toward increased productivity, improved collaboration, quick response to fluctuating market needs, and faster delivery of a quality product. Though Scrum performs well for small projects, there are certain challenges that practitioners encounter while implementing it. Experts have made efforts to adapt Scrum in ways that remove those drawbacks and limitations; however, no single effort addresses all the issues. This paper presents a tailored version of Scrum aimed at improving documentation, team performance, visibility of work, testing, and maintenance. The proposed model adapts and innovates on the traditional Scrum practices and roles to overcome these problems while preserving the integrity and simplicity of the model.
DOI: https://doi.org/10.5815/ijmecs.2017.07.02, Pub. Date: 8 Jul. 2017
Owing to the great number of benefits that Agile process models offer to the software industry, they have been the center of researchers' attention for a couple of decades. Scrum has emerged as one of the most prevalent contemporary Agile approaches; its adaptive and versatile nature makes it appropriate for adoption. Experts have been experimenting with and tweaking its practices for many years to enrich Scrum. This paper is intended to provide the latest insightful understanding of how Agile Scrum has been tailored and adapted in different areas for software process improvement, which in turn leads to increased productivity and product quality. A research strategy was designed to extract the literature since 2016 on pragmatic transformations of Scrum; the in-depth perception gained is presented in the paper as a comprehensive review, and the outcomes are discussed. This work contributes a state-of-the-art objective summary from which further research activities can be planned and carried out.
DOI: https://doi.org/10.5815/ijmecs.2017.06.04, Pub. Date: 8 Jun. 2017
Extreme Programming is one of the most widely used agile models in the software industry. It can handle unclear and changing requirements with a good level of customer satisfaction. However, lack of documentation, poor architectural structure, and less focus on design are its major drawbacks, which affect its performance. Due to these problems, it cannot be used for all kinds of projects; it is considered suitable for small, low-risk projects. It also has some controversial practices that cannot be applied in every situation, such as pair programming and the on-site customer. To overcome these limitations, a modified version of XP called "Simplified Extreme Programming" is proposed in this paper. This model provides solutions to these problems without affecting the simplicity and agility of Extreme Programming.