IJISA Vol. 17, No. 1, 8 Feb. 2025
Keywords: Smart Farming, Machine Learning, GDPR, XAI, LIME, dice_ml
Smart farming is being transformed by the integration of machine learning (ML) and artificial intelligence (AI) to improve crop recommendations. Despite these advances, a critical gap remains: opaque ML models cannot explain their predictions, creating a trust deficit among farmers. This research addresses that gap by applying explainable AI (XAI) techniques, focusing specifically on crop recommendation in smart farming.
An experiment was conducted on a crop recommendation dataset, applying XAI algorithms such as Local Interpretable Model-agnostic Explanations (LIME), Diverse Counterfactual Explanations (DiCE, implemented in the dice_ml library), and SHapley Additive exPlanations (SHAP). These algorithms were used to generate local and counterfactual explanations, enhancing model transparency in compliance with the General Data Protection Regulation (GDPR), which mandates a right to explanation.
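The core mechanism behind LIME's tabular explanations can be sketched without the library itself: perturb the instance being explained, query the black-box model on the perturbations, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature importances. The scorer and feature semantics below are hypothetical stand-ins, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "crop suitability" scorer over two features
# (say, soil nitrogen and rainfall); stands in for any trained classifier.
def black_box(X):
    return 1 / (1 + np.exp(-(0.8 * X[:, 0] ** 2 + 1.5 * X[:, 1] - 2.0)))

x0 = np.array([1.2, 0.7])  # the instance whose prediction we explain

# 1) Perturb around x0 and query the black box on each perturbation
Z = x0 + rng.normal(scale=0.5, size=(500, 2))
y = black_box(Z)

# 2) Weight perturbations by proximity to x0 (exponential kernel)
d = np.linalg.norm(Z - x0, axis=1)
w = np.exp(-(d ** 2) / (2 * 0.5 ** 2))

# 3) Fit a weighted linear surrogate: solve (sqrt(w)·A) beta = sqrt(w)·y
A = np.column_stack([np.ones(len(Z)), Z])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# beta[1] and beta[2] are the local importances of the two features
# near x0 -- the numbers a LIME explanation reports per feature.
print(beta)
```

The lime library packages exactly this loop (plus feature discretization and sampling heuristics) behind `LimeTabularExplainer.explain_instance`.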
The results demonstrated the effectiveness of XAI in making ML models more interpretable and trustworthy. For instance, local explanations from LIME provided insights into individual predictions, while counterfactual scenarios from dice_ml offered alternative crop cultivation suggestions. Feature importances from SHAP gave a global perspective on the factors influencing the model's decisions. The study's statistical analysis revealed that integrating XAI increased farmers' understanding of the AI system's recommendations, potentially reducing food insufficiency by enabling the cultivation of alternative crops on the same land.
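The counterfactual idea behind dice_ml can likewise be sketched in a model-agnostic way: sample perturbations of the instance, keep those the model assigns to the desired class, and report the closest ones as "what would have to change" suggestions. The toy recommender and feature names here are illustrative assumptions, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class crop recommender over (soil_N, rainfall);
# stands in for any trained model exposing a predict method.
def predict(X):
    return (0.8 * X[:, 0] + 1.2 * X[:, 1] > 2.0).astype(int)

def counterfactuals(x, target, n_samples=5000, scale=1.0, k=3):
    """Sample perturbations of x, keep those classified as `target`,
    and return the k closest to x (the idea behind DiCE's random search)."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    hits = Z[predict(Z) == target]
    order = np.argsort(np.linalg.norm(hits - x, axis=1))
    return hits[order[:k]]

x0 = np.array([0.5, 0.5])            # model currently recommends crop 0
print(predict(x0[None])[0])          # -> 0
cfs = counterfactuals(x0, target=1)
# each row is a nearby (soil_N, rainfall) setting that flips the
# recommendation to crop 1 -- an "alternative crop" suggestion
print(cfs)
```

dice_ml adds diversity constraints and feature-range handling on top of this search (`dice_ml.Dice(...).generate_counterfactuals`), so farmers receive several distinct, plausible alternatives rather than one arbitrary flip.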
Yaganteeswarudu Akkem, Saroj Kumar Biswas, Aruna Varanasi, "Role of Explainable AI in Crop Recommendation Technique of Smart Farming", International Journal of Intelligent Systems and Applications (IJISA), Vol.17, No.1, pp.31-52, 2025. DOI:10.5815/ijisa.2025.01.03