Use of explainable AI to interpret the results of NLP models for sentiment analysis

Vijaykumar Bidve, Pathan Mohd Shafi, Pakiriswamy Sarasu, Aruna Pavate, Ashfaq Shaikh, Santosh Borde, Veer Bhadra Pratap Singh, Rahul Raut


The use of artificial intelligence (AI) systems has increased significantly in the past few years. An AI system is expected to provide accurate predictions, but it is equally crucial that its decisions are humanly interpretable, i.e. anyone should be able to understand and comprehend the results the system produces. AI systems are now deployed even for simple decision support and are accessible to the common person at their fingertips. This growth in AI usage has come with a limitation of its own: interpretability. This work applies explainability methods such as local interpretable model-agnostic explanations (LIME) to interpret the results of various black-box models. The conclusion is that the bidirectional long short-term memory (LSTM) model is superior for sentiment analysis. In addition, the operation of a random forest classifier, a black-box model, is examined using explainable artificial intelligence (XAI) techniques such as LIME; this analysis, made possible by LIME, shows that the features the random forest model uses for classification are not entirely correct. The proposed approach can be used to enhance performance, which raises the trustworthiness and legitimacy of AI systems.
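The abstract describes explaining a black-box random forest sentiment classifier with LIME. As a minimal, self-contained sketch of that idea (the dataset, words, and helper function below are hypothetical illustrations, not taken from the paper), a LIME-style explanation perturbs the input text by dropping random words, queries the black-box model on each perturbation, and fits a locally weighted linear surrogate whose coefficients indicate per-word influence:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Toy sentiment dataset (hypothetical, for illustration only).
texts = ["great movie loved it", "terrible plot awful acting",
         "loved the acting", "awful movie hated it",
         "great plot great acting", "hated the terrible movie"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(texts)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

def explain(text, n_samples=500, seed=0):
    """LIME-style local explanation: perturb the text by masking random
    words, get black-box probabilities, fit a weighted linear surrogate."""
    rng = np.random.default_rng(seed)
    words = text.split()
    # Each row is a binary mask: 1 = keep word, 0 = drop it.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # include the unperturbed text in the sample
    perturbed = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    probs = rf.predict_proba(vec.transform(perturbed))[:, 1]
    # Weight perturbations by proximity to the original text
    # (fraction of words kept), as LIME weights by similarity.
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    return dict(zip(words, surrogate.coef_))

weights = explain("great movie terrible acting")
print(sorted(weights.items(), key=lambda kv: -kv[1]))
```

The surrogate's coefficients are the per-word contributions to the positive-class probability for this one input; inspecting them reveals whether the classifier is leaning on sensible words, which is the kind of sanity check the paper applies to the random forest.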


Artificial intelligence; Explainability; Machine learning; Natural language processing; Sentiment analysis


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
