Transformer based multi-head attention network for aspect-based sentiment classification

Abhinandan Shirahatti, Vijay Rajpurohit, Sanjeev Sannakki

Abstract


Aspect-based sentiment classification (ABSC) is vital in helping manufacturers identify the pros and cons of their products and features, and it has attracted a surge of interest in recent years because it predicts the sentiment polarity of an aspect term in a sentence rather than of the whole sentence. Most existing methods rely on recurrent neural networks with attention mechanisms, which fail to capture global dependencies of the input sequence and therefore lose information; others use sequence models, which are tedious to train. Here, we propose the multi-head attention transformation (MHAT) network, which employs a transformer encoder to minimize training time for ABSC tasks. First, pre-trained global vectors for word representation (GloVe) are used for word and aspect-term embeddings. Second, part-of-speech (POS) features, which most existing methods neglect, are fused with MHAT to capture the grammatical structure of the input sentence. On the SemEval 2014 dataset, the proposed model consistently outperforms state-of-the-art methods on aspect-based sentiment classification tasks.
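The fusion step described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): toy two-dimensional stand-ins for GloVe vectors are concatenated with one-hot POS features, and a single scaled dot-product attention head is applied over the fused embeddings; the full MHAT network runs several such heads in parallel inside a transformer encoder. All vector values and the example sentence are invented for illustration.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """One scaled dot-product attention head over small list-based vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of the query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        # Context vector: attention-weighted average of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# Hypothetical 2-D "GloVe" vectors and one-hot POS features (NOUN/VERB/ADJ).
words = {"food": [0.9, 0.1], "was": [0.1, 0.2], "great": [0.8, 0.3]}
pos   = {"food": [1, 0, 0], "was": [0, 1, 0], "great": [0, 0, 1]}
tokens = ["food", "was", "great"]

# Fuse the two views by concatenation before attention.
fused = [words[w] + pos[w] for w in tokens]

ctx = attention(fused, fused, fused)  # self-attention over fused embeddings
print(len(ctx), len(ctx[0]))  # → 3 5: one context vector per token, width 2+3
```

Because the softmax weights sum to one, each output component is a convex combination of the corresponding input components, so the context vectors stay in the range spanned by the fused embeddings.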

Keywords


Aspect; Attention; Part-of-speech; Sentiment; Transformer



DOI: http://doi.org/10.11591/ijeecs.v26.i1.pp472-481



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
