Static hand gesture recognition of Arabic sign language by using deep CNNs

Mohammad H. Ismail, Shefa A. Dawwd, Fakhradeen H. Ali


An Arabic sign language recognition system using two concatenated deep convolutional neural network models, DenseNet121 and VGG16, is presented. The pre-trained models are fed with images, and the system then automatically recognizes the Arabic sign. To evaluate the performance of the two concatenated models on Arabic sign language recognition, a dataset of RGB images of various static signs was collected. The dataset comprises 220,000 images across 44 categories: 32 letters, 11 numbers (0-10), and one "none" category. For each static sign, 5,000 images were collected from different volunteers. The pre-trained models were modified and then trained on the prepared Arabic sign language data. In addition, two of the pre-trained models were adopted as parallel deep feature extractors; their extracted features were combined and passed to the classification stage. The results compare the performance of single models against multi-models and show that most multi-models outperform the single models in feature extraction and classification. They also show that, measured by the total number of incorrectly recognized sign images across the training, validation, and testing datasets, the best CNN for Arabic sign language feature extraction and classification is DenseNet121 among single models and DenseNet121 & VGG16 among multi-models.
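The parallel feature-extraction architecture described above can be sketched in Keras: DenseNet121 and VGG16 share one input, each produces a feature map, the pooled feature vectors are concatenated, and a dense softmax layer classifies over the 44 categories. This is a minimal illustration, not the authors' exact configuration; the pooling choice and the single-layer classification head are assumptions, and `weights=None` is used here only to keep the sketch self-contained (the paper uses pre-trained models).

```python
# Hypothetical sketch of the multi-model: two backbones run in parallel
# as feature extractors, their feature vectors are concatenated, and a
# softmax head classifies the 44 static-sign categories.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121, VGG16

NUM_CLASSES = 44  # 32 letters + 11 numbers (0-10) + 1 "none"

inputs = layers.Input(shape=(224, 224, 3))

# Parallel backbones sharing the same input tensor. weights=None is an
# assumption for this sketch; in practice pre-trained weights are loaded.
densenet = DenseNet121(include_top=False, weights=None, input_tensor=inputs)
vgg = VGG16(include_top=False, weights=None, input_tensor=inputs)

# Pool each backbone's feature maps to a vector, then concatenate.
f1 = layers.GlobalAveragePooling2D()(densenet.output)  # 1024-d
f2 = layers.GlobalAveragePooling2D()(vgg.output)       # 512-d
features = layers.Concatenate()([f1, f2])              # 1536-d

# Classification stage over the combined features.
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(features)
model = Model(inputs, outputs)
```

A single-model baseline for comparison is obtained by dropping one branch and feeding `f1` (or `f2`) directly to the dense head.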


Arabic sign language; Convolutional neural network; Deep learning; Multi-model; Static hand gesture




This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
