Recognizing geographical locations using a GAN-based text-to-image approach
Abstract
The goal of text-to-image generation (T2I) models is to produce photo-realistic images that align with textual descriptions; advances in machine learning have made such models practical aids for visualizing those descriptions. Given text as input, generative adversarial networks (GANs) can generate a series of images that depict the described content. Recent GANs have enabled remarkable gains over earlier T2I models, but they still have limitations. The main aim of this study is to address these limitations and thereby improve text-to-image generation models for location-based services. We build an attentional generative adversarial network, AttnGAN, which produces high-quality images through a multi-stage process. A multimodal similarity model computes the fine-grained image-text matching loss used to train AttnGAN's generator. On the PatternNet dataset, our AttnGAN model achieves an inception score of 4.81 and an R-precision of 70.61 percent. Because PatternNet consists only of photographs, we added a textual description to each image to turn it into a paired text-image dataset. Extensive experiments show that AttnGAN's attention mechanisms, which are critical for text-to-image generation in complex scenarios, are effective.
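As a rough illustration of the two components the abstract names, the sketch below shows word-level attention in the generator and a sentence-level image-text matching loss. This is a simplified PyTorch rendering: the function names, tensor shapes, and the gamma constant are our assumptions, and the released AttnGAN/DAMSM code includes additional normalization and word-level matching terms omitted here.

```python
# Minimal sketch, assuming PyTorch; illustrative shapes and names only,
# not the authors' released implementation.
import torch
import torch.nn.functional as F

def word_attention(region_feats, word_embs):
    """Word-level attention: each image sub-region attends over the words.

    region_feats: (batch, D, N) -- N sub-region features from the generator
    word_embs:    (batch, D, T) -- T word vectors from the text encoder
    Returns word-context vectors of shape (batch, D, N).
    """
    attn = torch.bmm(region_feats.transpose(1, 2), word_embs)  # (batch, N, T)
    attn = F.softmax(attn, dim=2)          # distribution over words per region
    return torch.bmm(word_embs, attn.transpose(1, 2))          # (batch, D, N)

def matching_loss(img_codes, sent_codes, gamma=10.0):
    """Sentence-level image-text matching loss over a batch.

    img_codes, sent_codes: (batch, D) global image/sentence embeddings;
    matched pairs lie on the diagonal of the similarity matrix.
    """
    scores = gamma * (F.normalize(img_codes, dim=1)
                      @ F.normalize(sent_codes, dim=1).t())     # (batch, batch)
    labels = torch.arange(img_codes.size(0))
    # Cross-entropy in both retrieval directions (image->text, text->image)
    return F.cross_entropy(scores, labels) + F.cross_entropy(scores.t(), labels)

# Toy usage: 4 image-text pairs, 64-dim features, 289 regions, 12 words
ctx = word_attention(torch.randn(4, 64, 289), torch.randn(4, 64, 12))
loss = matching_loss(torch.randn(4, 64), torch.randn(4, 64))
print(ctx.shape, loss.item())  # torch.Size([4, 64, 289]) and a scalar loss
```

Attending over words per region lets each generation stage refine image sub-regions conditioned on the most relevant words, while the symmetric cross-entropy pushes matched image-text pairs to score higher than mismatched ones within a batch, which is also what the reported R-precision metric measures.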
Keywords
AttnGAN model; Deep learning; Generative adversarial networks; Location-based services; Road infrastructure; Text reading; Text-to-image
DOI: http://doi.org/10.11591/ijeecs.v37.i2.pp1168-1182
Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760