Dual embedding with input embedding and output embedding for better word representation

Yoonjoo Ahn, Eugene Rhee, Jihoon Lee


Recent studies on distributed vector representations have produced a variety of ways to represent words. We propose several ways of combining the input embedding and the output embedding to represent words better than a single embedding does. We compared performance on word-analogy and word-similarity tasks for the input embedding, the output embedding, and various dual embeddings formed by combining the two. Evaluation results show that the proposed dual embeddings outperform each single embedding, particularly when the input and output embeddings are simply added. This paper establishes two points: i) not only the input embedding but also the output embedding carries meaningful information for representing words, and ii) combining the input and output embeddings into a dual embedding outperforms using either embedding individually.
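The additive combination described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy vocabulary, dimensionality, and random matrices are stand-ins for the input (context-to-hidden) and output (hidden-to-context) weight matrices that a Word2Vec-style model would learn.

```python
import numpy as np

# Hypothetical toy vocabulary; in practice these matrices come from
# a trained Word2Vec-style model (input and output weight matrices).
vocab = ["king", "queen", "man", "woman"]
index = {w: k for k, w in enumerate(vocab)}
dim = 8

rng = np.random.default_rng(0)
W_in = rng.normal(size=(len(vocab), dim))   # input embedding (stand-in)
W_out = rng.normal(size=(len(vocab), dim))  # output embedding (stand-in)

# Dual embedding by simple element-wise addition, the combination the
# abstract reports as performing best.
W_dual = W_in + W_out

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Word similarity is then scored on the combined representation.
sim = cosine(W_dual[index["king"]], W_dual[index["queen"]])
```

Other combinations (e.g., concatenation or weighted sums) fit the same pattern; only the line constructing `W_dual` changes.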


Keywords: Dual embedding; Natural language processing; Word embedding; Word representation; Word2Vec


DOI: http://doi.org/10.11591/ijeecs.v27.i2.pp1091-1099


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
