Human ear print recognition based on fusion of difference theoretic texture and gradient direction pattern features

Kawther Thabt Saleh, Raniah Ali Mustafa, Haitham Salman Chyad

Abstract


Human ear recognition is a branch of biometrics that identifies people from images of their ears. This paper presents a new ear print recognition approach based on the fusion of gradient direction pattern (GDP2) and difference theoretic texture features (DTTF). The approach begins by cropping the region of interest (ROI) from the grayscale ear print image, removing noise with a median filter, and applying histogram equalization and local normalization (LN). The preprocessed image is then passed to the fused GDP2 and DTTF descriptors to extract the features of the ear print image. Finally, a Gaussian distribution (GD) measure is used to compute the distance between fused feature vectors (FV), recognizing a person's ear print against a set of trained and tested images. The proposed approach was evaluated on ear print images from the unconstrained ear recognition challenge (UERC) database, which comprises 330 subjects. Experimental results on images from this benchmark dataset show that the proposed approach outperforms other algorithms in ear recognition accuracy, reaching approximately 93.70%.
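The preprocessing stage described above (median-filter denoising, histogram equalization, and local normalization) can be sketched as follows. This is an illustrative NumPy implementation under simplifying assumptions, not the authors' code: the function names and the toy 4×4 "ear print" patch are invented for demonstration, and the normalization is applied globally rather than in local windows for brevity.

```python
import numpy as np

def median_filter(img, k=3):
    """Suppress impulse (salt-and-pepper) noise with a k x k median filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def histogram_equalization(img):
    """Remap grey levels so the cumulative histogram becomes roughly linear."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

def local_normalization(img, eps=1e-6):
    """Zero-mean, unit-variance normalization (done globally here for brevity)."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + eps)

# Toy 8-bit patch standing in for a cropped ear print ROI; the bright
# pixels (200, 250) play the role of impulse noise.
ear = np.array([[10,  12, 200,  11],
                [13,  11,  12,  10],
                [12, 250,  11,  13],
                [11,  12,  13,  12]], dtype=np.uint8)

denoised = median_filter(ear)            # impulse spikes replaced by local medians
equalized = histogram_equalization(denoised)
normalized = local_normalization(equalized)
```

After these steps the normalized image would be handed to the GDP2 and DTTF feature extractors; in a real pipeline the local normalization would operate on small windows rather than the whole image.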

Keywords


Human ear recognition; Fusion feature vector; Difference theoretic texture features; Gradient direction pattern; Gaussian distribution



DOI: http://doi.org/10.11591/ijeecs.v29.i2.pp1017-1029



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
