A Novel Image Acquisition Technique for Classifying Whole and Split Cashew Nuts Images Using Multi-CNN



Author(s)

A. Sivaranjani 1,*, S. Senthilrani 2, A. Senthil Murugan 3, B. Ashokkumar 4

1. Department of AIML, Malla Reddy University, Telangana, India

2. Department of Electrical and Electronics Engineering, Velammal College of Engineering and Technology, Madurai, Tamil Nadu, India

3. Department of CSE, St. Peter's Engineering College, Telangana, India

4. Department of EEE, Thiagarajar College of Engineering, Madurai, Tamil Nadu, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijem.2024.06.03

Received: 17 Jul. 2024 / Revised: 15 Aug. 2024 / Accepted: 4 Oct. 2024 / Published: 8 Dec. 2024

Index Terms

Image classification, Convolutional neural network, Multi-view, Image acquisition, Feature extraction

Abstract

Multi-view CNNs have recently gained popularity in image classification applications. In particular, computer vision has attracted considerable attention owing to its numerous potential uses in food quality management. Among the dry fruits grown in India, the cashew nut is a significant crop, and high-quality cashew nuts are in strong demand on the worldwide market. Although a variety of approaches exist for automatically identifying cashew nuts, most of them rely on a single-view image of the nut. The fundamental issue with such methods for recognizing whole and split cashew nuts is that a single-view image cannot capture the entire surface of the nut, which results in low classification accuracy. We propose a multi-view CNN framework for classifying three types of cashew nuts. Images of each sample cashew nut are captured from three distinct angles (top, left, and right) and fed into the proposed modified CNN architecture. For classification, the modified CNN extracts and fuses features from these three views and achieves an accuracy of 98.87%.
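To make the multi-view data flow concrete, the sketch below shows one plausible way such a network can be wired up in PyTorch: a shared convolutional branch encodes the top, left, and right images, the per-view features are concatenated, and a small fully connected head predicts one of three classes. The layer sizes, the weight-sharing choice, the 128x128 input resolution, and the fusion-by-concatenation step are illustrative assumptions for this sketch, not the exact architecture reported in the paper.

```python
# Minimal multi-view CNN sketch (illustrative; not the paper's exact architecture).
import torch
import torch.nn as nn

class MultiViewCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # One convolutional branch, reused (shared weights) for all three views.
        self.branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1)
            nn.Flatten(),             # -> (N, 32)
        )
        # Classifier over the concatenated per-view features (3 views x 32 features).
        self.classifier = nn.Sequential(
            nn.Linear(3 * 32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, top, left, right):
        # Extract features from each view, then fuse them by concatenation.
        feats = [self.branch(view) for view in (top, left, right)]
        return self.classifier(torch.cat(feats, dim=1))

if __name__ == "__main__":
    model = MultiViewCNN()
    # Dummy batch of 4 samples, one tensor per camera angle (top, left, right).
    views = [torch.randn(4, 3, 128, 128) for _ in range(3)]
    logits = model(*views)
    print(logits.shape)  # torch.Size([4, 3])
```

Concatenation is only one fusion option; element-wise max or average pooling across views, as used in view-pooling multi-view CNNs, would slot into the same forward pass.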

Cite This Paper

A. Sivaranjani, S. Senthilrani, A. Senthil Murugan, B. Ashokkumar, "A Novel Image Acquisition Technique for Classifying Whole and Split Cashew Nuts Images Using Multi-CNN", International Journal of Engineering and Manufacturing (IJEM), Vol.14, No.6, pp. 27-40, 2024. DOI:10.5815/ijem.2024.06.03
