IJIGSP Vol. 11, No. 8, 8 Aug. 2019
Keywords: Image Processing, Edge Sharpening, Object Region Segmentation, Fruit Localization, Fruit Recognition, Convolutional Neural Networks
In this paper, an efficient approach is proposed to localize every clearly visible object, or region of an object, in an image using little memory and computing power. For object detection, every input image is processed to overcome several complexities that limit detection quality, such as overlap between multiple objects, noise in the image background, and poor resolution. An improved Convolutional Neural Network (CNN) based classification and recognition algorithm is also implemented, which has been shown to outperform baseline works. Combining these detection and recognition approaches, a competent multi-class Fruit Detection and Recognition (FDR) model is developed that remains proficient despite various limitations: high or poor image quality, complex backgrounds or lighting conditions, different fruits of the same shape and color, multiple overlapping fruits, the presence of non-fruit objects in the image, and variety in the size, shape, angle, and features of fruit. The proposed FDR model is also capable of detecting each fruit separately within a set of overlapping fruits. Another major contribution of the FDR model is that it is not dataset-oriented (i.e., it does not perform well on only one particular dataset): it has been shown to perform well on both real-world images (e.g., our own dataset) and several state-of-the-art datasets. Taking these challenges into consideration, the proposed model detects and recognizes fruits from images with better accuracy, achieving an average precision of about 0.9875.
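The detection stage described above (preprocessing an image, segmenting it into object regions, and localizing each region separately, even when objects touch or overlap) can be sketched roughly as follows. This is a minimal, hypothetical illustration in pure NumPy, not the authors' implementation: it substitutes Otsu's global threshold and a simple 4-connected flood fill for the paper's full pipeline (which also involves edge sharpening and morphological operations), emitting one bounding box per segmented region.

```python
import numpy as np

def threshold_otsu(gray):
    # Otsu's method: pick the threshold that maximizes between-class variance.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]          # weight of the "background" class
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                        # background mean
        m1 = (sum_all - sum0) / (total - w0)  # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def connected_boxes(mask):
    # 4-connected flood fill; returns one bounding box per foreground blob,
    # so each object region is localized separately.
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, ys, xs = [(y, x)], [], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    ys.append(cy)
                    xs.append(cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

# Synthetic 40x40 grayscale image with two bright "fruit" regions
# on a dark background.
img = np.zeros((40, 40), dtype=np.uint8)
img[5:15, 5:15] = 200
img[22:32, 20:35] = 180
t = threshold_otsu(img)
mask = img > t
print(sorted(connected_boxes(mask)))  # → [(5, 5, 14, 14), (20, 22, 34, 31)]
```

Each box could then be cropped and passed to the CNN classifier, which is how a detection stage and a recognition stage compose into a single detection-and-recognition model.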
Rafflesia Khan, Rameswar Debnath, "Multi Class Fruit Classification Using Efficient Object Detection and Recognition Techniques", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.11, No.8, pp. 1-18, 2019. DOI: 10.5815/ijigsp.2019.08.01