FPGA-based architecture of hybrid multilayered perceptron neural network

Lee Yee Ann, P. Ehkan, M. Y. Mashor, S. M. Sharun

Abstract


The hybrid multilayered perceptron (HMLP) is an artificial neural network (ANN) similar to the multilayered perceptron (MLP), but with extra weighted connections that link the input nodes directly to the output nodes. An architecture of the HMLP neural network for implementation on an FPGA is proposed. The architecture is designed to be concurrent, exploiting the parallel nature of the HMLP, in which every node within the same hidden or output layer can calculate its output independently. It is also designed to be modular, so that if a module must be modified, only that module needs to change while all other modules are retained. This modularity is especially helpful when a different activation function is to be swapped in to replace the current one. All calculations in the HMLP are performed in floating-point arithmetic. The HMLP architecture is compiled, simulated, and finally implemented on the Cyclone V FPGA of the DE1-SoC board. The simulation results and FPGA outputs show that the developed HMLP architecture calculates correct output values for all test datasets.
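For reference, the following is a minimal Python sketch of the HMLP forward pass described above; it is a behavioural model, not the authors' VHDL implementation. The weight names, the list-based data layout, and the sigmoid activation are illustrative assumptions. The only difference from a plain MLP is the extra w_io term, the direct input-to-output connections that characterise the HMLP.

import math

def sigmoid(v):
    # Hypothetical choice of activation; the modular design described
    # in the abstract allows this function to be swapped for another.
    return 1.0 / (1.0 + math.exp(-v))

def hmlp_forward(x, w_ih, b_h, w_ho, w_io, act=sigmoid):
    # x    : list of input values
    # w_ih : w_ih[i][j], input-to-hidden weights
    # b_h  : b_h[j], hidden-node biases
    # w_ho : w_ho[j][k], hidden-to-output weights
    # w_io : w_io[i][k], direct input-to-output weights (HMLP extension)
    #
    # Each hidden node's sum depends only on the inputs and its own
    # weights, which is why the FPGA architecture can evaluate all
    # nodes of a layer concurrently.
    hidden = [act(sum(x[i] * w_ih[i][j] for i in range(len(x))) + b_h[j])
              for j in range(len(b_h))]
    n_out = len(w_ho[0])
    return [sum(hidden[j] * w_ho[j][k] for j in range(len(hidden)))
            + sum(x[i] * w_io[i][k] for i in range(len(x)))
            for k in range(n_out)]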

Keywords


FPGA-based architecture, Artificial Neural Network, Hybrid Multilayered Perceptron Neural Network, VHDL



DOI: http://doi.org/10.11591/ijeecs.v14.i2.pp949-956



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
