Comparing the Explainability of Different Crop Disease Identification Models Using LIME
Document Type
Dissertation
Rights
This item is available under a Creative Commons License for non-commercial use only
Disciplines
Computer Sciences
Abstract
The complexity of state-of-the-art modelling techniques for image classification impedes the ability to explain model predictions in an interpretable way. Existing explanation methods generally create importance rankings in terms of pixels or pixel groups. However, the resulting explanations lack an optimal size, do not consider feature dependence and relate to only one class. Counterfactual explanation methods are considered promising for explaining complex model decisions, since they are associated with a high degree of human interpretability. In this thesis, LIME is introduced as a model-agnostic, instance-level explanation method for crop disease identification. For a given image and classification model, LIME searches for a small set of segments that contribute to the identification of a disease in a leaf. As image classification tasks are typically multiclass problems, deep learning provides a variety of algorithms for detecting plant diseases. Three models of two types were built in this research: (i) a CNN model built from scratch and (ii) two pre-trained transfer learning models (VGG19 and ResNet50), all for the classification of leaf disease images. Rather than focusing on obtaining the best accuracy for a classification model, this research aimed to deepen the understanding of model explainability, exploring whether the models were making their predictions based on the relevant pixels of the images. To this end, the popular PlantVillage dataset was used for plant disease classification; in each image, some part of the leaf relates to the disease. After the three classification models were built, two different processes were applied to the original images. In the first process, the exact pixels corresponding to the disease were identified. In the second process, the pixels contributing to the prediction of each of the three classification models were identified using LIME. Finally, the disease pixels from the first process were compared with the pixels selected by LIME in the second process, to evaluate how well the models based their predictions on the relevant pixels of the image. Since the pixel classes are unbalanced, that is, there are far more pixels of the non-relevant class than of the relevant class, this comparison is best expressed using the well-known F1 score. The transfer learning models performed better in terms of explainability than the model built from scratch: the ResNet50 model achieved an F1 score of 0.58, while the CNN model built from scratch scored 0.38, indicating that the latter produces a large number of false positives and false negatives.
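To make the modelling setup concrete, the sketch below shows one plausible way to build the ResNet50-based transfer learning classifier described in the abstract, using Keras. The frozen base, the dense head sizes and the 38-class output (the class count of the full PlantVillage dataset) are illustrative assumptions, not the dissertation's exact architecture.

```python
# A minimal sketch of a transfer learning classifier for PlantVillage,
# assuming a frozen ResNet50 feature extractor; layer sizes and the class
# count are assumptions, not the dissertation's exact configuration.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 38  # full PlantVillage has 38 crop/disease classes (assumed here)

base = keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # keep the pre-trained convolutional weights fixed

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```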
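The pixel-level comparison between the two processes can likewise be sketched as follows. The file names (resnet50_plant.h5, leaf_sample.jpg, leaf_sample_mask.png), the input preprocessing and the LIME settings are hypothetical placeholders rather than the dissertation's actual values.

```python
# A minimal sketch of the LIME-versus-ground-truth comparison described above.
import numpy as np
from lime import lime_image
from skimage.io import imread
from sklearn.metrics import f1_score
from tensorflow import keras

model = keras.models.load_model("resnet50_plant.h5")  # hypothetical saved model

def predict_fn(batch):
    # LIME passes batches of perturbed copies of the image; return class
    # probabilities. Assumes the model was trained on inputs scaled to [0, 1].
    return model.predict(batch.astype("float32") / 255.0)

# Leaf image (assumed already resized to the model's input size) and the
# binary mask of disease-related pixels produced in the first process.
image = imread("leaf_sample.jpg")
gt_mask = imread("leaf_sample_mask.png") > 0

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, hide_color=0, num_samples=1000
)
# Keep only the segments that push the prediction towards the predicted class.
_, lime_mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False,
)

# F1 between disease pixels and LIME-selected pixels; F1 suits the strong
# imbalance between non-relevant and relevant pixels noted in the abstract.
score = f1_score(gt_mask.ravel(), (lime_mask > 0).ravel())
print(f"Explainability F1: {score:.2f}")
```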
DOI
https://doi.org/10.21427/3qfj-4995
Recommended Citation
Samuel, T. (2021). Comparing the explainability of different crop disease identification models using LIME. Dissertation. Dublin: Technological University Dublin. doi:10.21427/3qfj-4995
Publication Details
A dissertation submitted in partial fulfilment of the requirements of Dublin Institute of Technology for the degree of M.Sc. in Computing (Data Analytics)