A comparative study of deep learning based language representation learning models

Mohammed Boukabous, Mostafa Azizi

Abstract


Deep learning (DL) approaches use multiple processing layers to learn hierarchical representations of data. Recently, many natural language processing (NLP) methods and model designs have shown significant progress, especially in text mining and analysis. Well-known models for learning vector-space representations of text include Word2vec, GloVe, and fastText, and NLP took a major step forward with the release of BERT and, more recently, GPT-3. In this paper, we highlight the most important language representation learning models in NLP and provide insight into their evolution. We also summarize, compare, and contrast these models on a sentiment analysis task, and discuss their main strengths and limitations. Our results show that BERT is the best-performing language representation learning model.
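The paper itself provides no code, but as a rough illustration of what "learning vector-space representations of text" means in practice, the sketch below trains a tiny Word2vec model with the gensim library. The toy corpus and all hyperparameter values are placeholder assumptions for the example, not the authors' experimental setup:

```python
from gensim.models import Word2Vec

# Toy corpus: a few pre-tokenized sentences (placeholder data,
# not the corpus used in the paper).
sentences = [
    ["deep", "learning", "models", "learn", "hierarchical", "representations"],
    ["word2vec", "maps", "words", "to", "dense", "vectors"],
    ["similar", "words", "end", "up", "close", "in", "vector", "space"],
]

# Train a skip-gram Word2vec model; vector_size, window, and min_count
# are illustrative defaults, not tuned values from the study.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1)

# Each word is now a 50-dimensional vector, and nearest neighbors in
# this space approximate semantic relatedness.
vector = model.wv["learning"]
print(vector.shape)                               # (50,)
print(model.wv.most_similar("learning", topn=3))  # closest words by cosine similarity
```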
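Similarly, since the comparison task is sentiment analysis and the paper's conclusion favors BERT, a minimal sketch of BERT-style sentiment classification with the Hugging Face transformers pipeline might look like the following. The checkpoint shown is a public DistilBERT model fine-tuned on SST-2, assumed here purely for illustration; it is not necessarily the model or dataset the authors evaluated:

```python
from transformers import pipeline

# Load a BERT-family sentiment classifier; this particular checkpoint
# is an assumption for the example, not the authors' trained model.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# The pipeline tokenizes the text, runs the transformer encoder, and
# maps the pooled representation to a POSITIVE/NEGATIVE label with a score.
print(classifier("This paper gives a clear overview of NLP models."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```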

Keywords


Natural Language Processing; Representation Models; BERT; GPT-2; Deep Learning; Sentiment Analysis

DOI: http://doi.org/10.11591/ijeecs.v22.i2.pp1032-1040

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
