Enhancing ternary neural networks with adaptive threshold quantization
Abstract
Ternary neural networks (TNNs), with weights constrained to –1, 0, and +1, offer an efficient deep learning solution for low-cost computing platforms such as embedded systems and edge computing devices. These weights are typically obtained by quantizing real-valued weights during training. In this work, we propose an adaptive threshold quantization method that dynamically adjusts the quantization threshold based on the mean of the weight distribution. Unlike fixed-threshold approaches, our method recalculates the threshold at each training epoch according to the distribution of the real-valued synaptic weights. This adaptation significantly improves both training speed and model accuracy. Experimental results on the MNIST dataset demonstrate a 2.5× reduction in training time compared to conventional methods, along with a 2% improvement in recognition accuracy. On the Google Speech Commands dataset, the proposed method achieves an 8% improvement in recognition accuracy and a 50% reduction in training time compared to fixed-threshold quantization. These results highlight the effectiveness of adaptive quantization in improving the efficiency of TNNs, making them well suited for deployment on resource-constrained edge devices.
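To make the idea concrete, the sketch below shows one way such a per-epoch adaptive threshold could be implemented. It is a minimal illustration under assumptions the abstract leaves open: the helper names (adaptive_threshold, ternarize) are hypothetical, and the scaling factor alpha = 0.7 applied to the mean absolute weight follows the common ternary-weight-network heuristic rather than anything stated here; the abstract says only that the threshold is recalculated each epoch from the mean of the weight distribution.

```python
import numpy as np

def adaptive_threshold(weights, alpha=0.7):
    """Hypothetical threshold rule: scale the mean absolute weight.

    The abstract states only that the threshold tracks the mean of the
    weight distribution; alpha = 0.7 is an assumed scaling factor
    borrowed from the common ternary-weight-network heuristic.
    """
    return alpha * np.mean(np.abs(weights))

def ternarize(weights, delta):
    """Map real-valued weights to {-1, 0, +1} using threshold delta."""
    return np.where(weights > delta, 1.0,
                    np.where(weights < -delta, -1.0, 0.0))

# Per-epoch usage: recompute the threshold from the current real-valued
# weights, then quantize them for the forward pass.
rng = np.random.default_rng(0)
w_real = rng.normal(scale=0.05, size=(256, 128))  # toy weight matrix
for epoch in range(3):
    delta = adaptive_threshold(w_real)
    w_ternary = ternarize(w_real, delta)
    # ... forward/backward pass updates w_real; delta adapts next epoch ...
```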
Keywords
Deep neural network; Image recognition; Speech recognition; Ternary neural network; Binary neural network
DOI: http://doi.org/10.11591/ijeecs.v40.i2.pp700-706

Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760
This journal is published by the Institute of Advanced Engineering and Science (IAES).