Fingerprint recognition using mel-frequency cepstral coefficients


Fingerprint Recognition Using Mel-Frequency Cepstral Coefficients¹

F. G. Hashad, T. M. Halim, S. M. Diab, B. M. Sallam, and F. E. Abd El-Samie

Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
email: [email protected], [email protected]

Abstract—This paper presents a new fingerprint recognition method based on mel-frequency cepstral coefficients (MFCCs). In this method, cepstral features are extracted from a group of fingerprint images, which are first transformed to 1D signals by lexicographic ordering. MFCCs and polynomial shape coefficients are extracted from these 1D signals, or their transforms, to generate a database of features, which can be used to train a neural network. Fingerprint recognition can then be performed by extracting features from any new fingerprint image with the same method used in the training phase and testing these features with the neural network. Different domains are tested and compared for efficient feature extraction from the lexicographically ordered 1D signals. Experimental results show the success of the proposed cepstral method for fingerprint recognition at low as well as high signal-to-noise ratios (SNRs). Results also show that the discrete cosine transform (DCT) is the most appropriate domain for feature extraction.

Key words: fingerprint recognition, MFCCs, DCT, DST, DWT.

DOI: 10.1134/S1054661810030120

1. INTRODUCTION

Fingerprints are biometric signs that can be utilized for identification and authentication purposes in biometric systems. Among all the biometric indicators, fingerprints have one of the highest levels of reliability [1]. The main reasons for the popularity of fingerprint-based identification are the uniqueness and permanence of fingerprints. It has been claimed that no two individuals, including identical twins, have exactly the same fingerprints. It has also been claimed that the fingerprint of an individual does not change throughout his lifetime, except for a significant injury to the finger that creates a permanent scar [2].

Fingerprints are graphical patterns of locally parallel ridges and valleys with well-defined orientations on the surface of the fingertips. Ridges are the lines on the tip of one's finger; the unique pattern of lines can be a loop, whorl, or arch pattern. Valleys are the spaces or gaps on either side of a ridge. The most important features in fingerprints are called the minutiae, which are usually defined as the ridge endings and the ridge bifurcations. A ridge ending is the point where a ridge ends abruptly. A ridge bifurcation is the point where a ridge forks into a branch ridge [3]. Examples of minutiae are shown in Fig. 1. A full fingerprint normally contains between 50 and 80 minutiae. A partial fingerprint may contain fewer than 20 minutiae. According to the Federal Bureau of Investigation, it suffices to identify a fingerprint by matching 12 minutiae, but it has been reported that in most cases 8 matched minutiae are enough.

Several algorithms have been proposed in the literature for fingerprint recognition. Most of these algorithms are based on extracting geometrical features from the fingerprints and using them for fingerprint matching with available templates. Some of these algorithms take the minutiae and the singular points, including their coordinates and directions, as the distinctive features to represent the fingerprint in the matching process [4–7]. The minutiae features are then compared with the minutiae templates; if the matching score exceeds a predefined threshold, the two fingerprints can be regarded as belonging to the same finger. These geometrical methods have some limitations, such as the difficulty of locating the minutiae correctly and of working with distorted images.
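As a toy illustration of the threshold rule just described, the accept/reject decision on a matched-minutiae count might look as follows (the function name and the reduction of the match score to a bare count are ours, for illustration only):

```python
# Hypothetical sketch of the minutiae-count decision rule: a pair of
# fingerprints is accepted as coming from the same finger when the number
# of matched minutiae reaches a preset threshold.

def same_finger(matched_minutiae: int, threshold: int = 12) -> bool:
    """Return True if the match count meets the threshold (12 per the
    FBI guideline cited above; 8 is often reported as sufficient)."""
    return matched_minutiae >= threshold

print(same_finger(14))               # True: enough matched minutiae
print(same_finger(9))                # False at the stricter threshold of 12
print(same_finger(9, threshold=8))   # True under the looser 8-minutiae rule
```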

Fig. 1. Examples of minutiae: (a) ridge ending; (b) bifurcation.

¹ The article is published in the original.
Received September 2, 2009

ISSN 1054-6618, Pattern Recognition and Image Analysis, 2010, Vol. 20, No. 3, pp. 360–369. © Pleiades Publishing, Ltd., 2010.

This paper presents a new cepstral method for fingerprint pattern recognition that is not based on geometrical features. This method generates a database of fingerprint features using the MFCCs and polynomial shape coefficients extracted from different fingerprint images, with different dimensions, after they are lexicographically ordered into 1D signals. A matching process can be performed for any new fingerprint image, to classify it as belonging to the database or not, using a trained neural network. These coefficients are widely used in speaker identification because they are robust to noise and insensitive to time shifts in signals. As a result, there is no need for registration of images, and the extracted features can be very useful for fingerprint recognition in the presence of degradations.

The rest of the paper is organized as follows. Section 2 gives the steps of the proposed fingerprint recognition method. Section 3 discusses the process of feature extraction. Feature matching is discussed in Section 4. In Section 5, the experimental results are given. Finally, Section 6 summarizes the concluding remarks.

2. THE PROPOSED FINGERPRINT PATTERN RECOGNITION METHOD

The proposed fingerprint recognition method has two phases: a training phase and a testing phase. In the training phase, a database of fingerprint images is used to extract features from each image. These features are used to train a neural network. In the testing phase, features are extracted from every incoming image, and a feature matching process is performed to decide whether these features belong to a previously known fingerprint pattern or not. A schematic diagram of the steps of the proposed detection system is shown in Fig. 2. The steps of the feature extraction process from a fingerprint image can be summarized as follows:

(1) The image is lexicographically ordered into a 1D signal.
(2) The obtained 1D signal can be used in the time domain or in another discrete transform domain. The DCT, DST, and DWT can be used for this purpose.
(3) MFCCs and polynomial shape coefficients are extracted from the 1D signal, the discrete transform of the signal, or both.

3. FEATURE EXTRACTION

The concept of feature extraction using the MFCCs is widely known in speaker identification [8–17], where it contributes to the goal of identifying speakers from low-level properties. Fingerprint images, after lexicographic ordering, are treated in this paper like speech signals. The fingerprint clearly has oscillatory patterns, which supports the application of the cepstral method used with speech signals for feature extraction from these signals. In speaker identification, the extraction produces sufficient information for good speaker discrimination. Experimental results will show great success when the ideas of feature extraction from speech signals are applied to fingerprint 1D signals. Feature extraction can be defined as the process of reducing the amount of data present in a given fingerprint signal while retaining its discriminative information. In the following subsections, the extraction of the MFCCs and the polynomial coefficients is explained.

3.1. Extraction of MFCCs

The MFCCs are commonly extracted from signals through cepstral analysis. The input signal is first framed and windowed; the Fourier transform is then taken, and the magnitude of the resulting spectrum is warped by the Mel scale. The log of this spectrum is then taken, and the DCT is applied [8, 9]. Figure 3 shows the proposed steps of extraction of MFCCs from an image.

The 1D signal must first be broken up into small sections, each of N samples. These sections are called frames, and the motivation for this framing process is the quasi-stationary nature of the 1D signals. If we examine the signal over discrete sections that are sufficiently short in duration, these sections can be considered stationary and exhibit stable characteristics [8, 9]. To avoid loss of information, frame overlap is used. Each frame begins at some offset of L samples with respect to the previous frame, where L ≤ N. For each frame, a windowing function is usually applied to increase the continuity between adjacent frames. Common windowing functions include the rectangular window, the Hamming window, the Blackman window, and the flat-top window. Windowing in the time domain is a pointwise multiplication of the frame and the window function.
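The lexicographic ordering of step (1) and the overlapping framing and windowing just described can be sketched in NumPy as follows (the image size, frame length N = 16, and frame offset L = 8 are assumed values chosen for illustration; the paper does not fix them):

```python
import numpy as np

def lexicographic_order(image: np.ndarray) -> np.ndarray:
    """Stack the image rows one after another into a single 1D signal."""
    return image.reshape(-1)

def frame_signal(signal: np.ndarray, N: int, L: int) -> np.ndarray:
    """Split the 1D signal into overlapping frames of N samples,
    each starting L samples after the previous one (L <= N)."""
    starts = range(0, len(signal) - N + 1, L)
    return np.stack([signal[s:s + N] for s in starts])

# Example with an assumed 8x8 "image" and assumed N = 16, L = 8.
image = np.arange(64, dtype=float).reshape(8, 8)
signal = lexicographic_order(image)          # 64 samples
frames = frame_signal(signal, N=16, L=8)     # 50% frame overlap
windowed = frames * np.hamming(16)           # pointwise Hamming windowing
print(signal.shape, frames.shape, windowed.shape)  # (64,) (7, 16) (7, 16)
```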
According to the convolution theorem, the windowing corresponds to a convolution between the short-term spectrum and the window function frequency response. A good window function has a narrow main lobe and low side-lobe levels in its frequency response. The most commonly used window is the Hamming window. The DFT of a windowed frame of the 1D signal is computed to obtain the magnitude spectrum as follows [8, 9]:

X(k) = Σ_{n=0}^{N−1} w(n) x(n) e^{−j2πkn/N},   (1)

where x(n) is a time sample of the windowed frame and w(n) is the Hamming window.

Fig. 2. Schematic diagram of the proposed fingerprint recognition method: (a) training phase (fingerprint image → lexicographic ordering → discrete transform (DCT, DST, or DWT) → feature extraction (MFCCs + polynomial coefficients) → training of a neural network → database); (b) testing phase (test image → lexicographic ordering → discrete transform → feature extraction → feature matching with the trained neural network → decision making: identified fingerprint or not).

Fig. 3. Extraction of MFCCs from an image: lexicographic ordering → windowing → DFT and calculation of the magnitude spectrum → Mel filter bank and summation over each filter output → ln → DCT → MFCCs.

The magnitude spectrum |X(k)| is now scaled in both frequency and magnitude. First, the frequency is scaled logarithmically using the so-called Mel filter bank H(k, m), and then the logarithm is taken, giving

X'(m) = ln( Σ_{k=0}^{N−1} |X(k)| H(k, m) )   (2)

for m = 1, 2, …, M, where M is the number of filters in the bank and M ≪ N. The Mel filter bank is a collection of triangular filters defined by center frequencies calculated on the Mel scale [8, 9]. The triangular filters are spread over the entire frequency range from zero to the Nyquist frequency. The number of filters is one of the parameters that affect the recognition accuracy of the system. Finally, the MFCCs are obtained by computing the DCT of X'(m) using [8, 9]

c_l = Σ_{m=1}^{M} X'(m) cos( lπ(m − 1/2)/M )   (3)

for l = 1, 2, …, M, where c_l is the lth MFCC. The number of the resulting MFCCs is chosen between 12 and 20, since most of the signal information is represented by the first few coefficients. The 0th coefficient represents the average log energy of the frame.
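A minimal sketch of Eqs. (1)–(3) follows. The filter-bank construction uses the standard Mel-scale mapping 2595·log10(1 + f/700) from the speech literature and a nominal sampling rate fs = 1, both of which are assumptions (the lexicographically ordered signal has no physical sampling frequency, and the paper does not detail the filter-bank design):

```python
import numpy as np

def mel_filter_bank(M: int, N: int, fs: float = 1.0) -> np.ndarray:
    """Triangular Mel filter bank H(k, m) as an (M, N) matrix.
    Center frequencies are equally spaced on the Mel scale up to the
    Nyquist frequency, using the standard 2595*log10(1 + f/700) mapping."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = mel_inv(np.linspace(0.0, mel(fs / 2.0), M + 2))  # M+2 edge freqs
    bins = np.floor((N + 1) * edges / fs).astype(int)
    H = np.zeros((M, N))
    for m in range(1, M + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):                 # rising edge of triangle m
            H[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):                 # falling edge of triangle m
            H[m - 1, k] = (hi - k) / max(hi - c, 1)
    return H

def mfcc(frame: np.ndarray, H: np.ndarray) -> np.ndarray:
    """MFCCs of a single frame, following Eqs. (1)-(3) of the text."""
    N = len(frame)
    X = np.abs(np.fft.fft(frame * np.hamming(N)))     # Eq. (1): |X(k)|
    Xp = np.log(H @ X + 1e-12)                        # Eq. (2): X'(m)
    M = H.shape[0]
    l = np.arange(1, M + 1)                           # l = 1..M
    m = np.arange(1, M + 1)                           # m = 1..M
    C = np.cos(np.pi * np.outer(l, m - 0.5) / M)      # Eq. (3) cosine kernel
    return C @ Xp

rng = np.random.default_rng(0)
H = mel_filter_bank(M=20, N=256)
coeffs = mfcc(rng.standard_normal(256), H)
print(coeffs.shape)  # (20,); only the first 12-20 would be kept
```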


3.2. Extraction of Polynomial Coefficients

The MFCCs are sensitive to mismatches or time shifts between the training and testing data. As a result, other coefficients need to be added to the MFCCs to reduce this sensitivity. Polynomial coefficients can be used for this purpose; they help increase the similarity between the training and test signals. If each MFCC is modeled as a time waveform over adjacent frames, polynomial coefficients can be used to model the slope and curvature of this waveform. Adding these polynomial coefficients to the MFCC vector helps reduce the sensitivity to any mismatches between the training and testing data [15–17].

To calculate the polynomial coefficients, the time waveforms of the cepstral coefficients are expanded by orthogonal polynomials. The following two orthogonal polynomials can be used [15]:

P1(i) = i − 5,   (4)

P2(i) = i² − 10i + 55/3.   (5)

To model the shape of the MFCC time functions, a nine-element window at each MFCC is used. Based on this window assumption, the polynomial coefficients can be calculated as follows [15]:

a_l(t) = [ Σ_{i=1}^{9} P1(i) c_l(t + i − 1) ] / Σ_{i=1}^{9} P1²(i),   (6)

b_l(t) = [ Σ_{i=1}^{9} P2(i) c_l(t + i − 1) ] / Σ_{i=1}^{9} P2²(i),   (7)

where a_l(t) and b_l(t) are the slope and the curvature of c_l in the tth frame. The vectors containing all c_l, a_l, and b_l are concatenated to form a single feature vector.

4. FEATURE MATCHING USING ARTIFICIAL NEURAL NETWORKS

The classification step in the proposed detection method is in fact a feature matching process between the features of a new fingerprint image and the features saved in the database. Neural networks are widely used for feature matching. Multilayer perceptrons (MLPs), consisting of an input layer, one or more hidden layers, and an output layer, can be used for this purpose [18, 19]. Figure 4 shows an MLP having an input layer, a single hidden layer, and an output layer; only a single neuron of the output layer is shown for simplicity. This structure is used for feature matching because it is suitable for the problem considered in this paper.

Fig. 4. An MLP neural network.

Each neuron in the neural network is characterized by an activation function and its bias, and each connection between two neurons by a weight factor. In this paper, the neurons of the input and output layers have linear activation functions, and the hidden neurons have the sigmoid activation function F(u) = 1/(1 + e^{−u}). Therefore, for an input vector X, the neural network output vector Y can be obtained according to the following matrix equation [18, 19]:

Y = W2 F(W1 X + B1) + B2,   (8)

where W1 and W2 are the weight matrices between the input and the hidden layer and between the hidden and the output layer, respectively, and B1 and B2 are the bias vectors for the hidden and the output layer, respectively.

Training a neural network is accomplished by adjusting its weights using a training algorithm. The training algorithm adapts the weights by attempting to minimize the sum of the squared errors between the desired and actual outputs of the output neurons, given by [18, 19]

E = (1/2) Σ_{o=1}^{O} (D_o − Y_o)²,   (9)

where D_o and Y_o are the desired and actual outputs of the oth output neuron and O is the number of output neurons. Each weight in the neural network is adjusted by adding an increment to reduce E as rapidly as possible. The adjustment is carried out over several training iterations until a satisfactorily small value of E is obtained or a given number of epochs is reached. The error backpropagation algorithm can be used for this task [18, 19].

Fig. 5. Samples of the fingerprint images used in the training phase.
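Eqs. (4)–(8) can be sketched together as follows (the layer sizes, example MFCC trajectories, and random weights are assumed for illustration; the nine-frame window and the sigmoid F follow the text, and 13 MFCCs plus their 13 slopes and 13 curvatures give a 39-element feature vector):

```python
import numpy as np

P1 = lambda i: i - 5.0                          # Eq. (4)
P2 = lambda i: i**2 - 10.0 * i + 55.0 / 3.0     # Eq. (5)

def poly_coeffs(c: np.ndarray, t: int):
    """Slope a_l(t) and curvature b_l(t) of every MFCC trajectory,
    Eqs. (6)-(7), over a nine-frame window starting at frame t.
    c has shape (num_frames, num_mfccs); rows are frames."""
    i = np.arange(1, 10)                        # i = 1..9
    win = c[t + i - 1, :]                       # c_l(t + i - 1), shape (9, L)
    a = (P1(i) @ win) / np.sum(P1(i) ** 2)      # Eq. (6)
    b = (P2(i) @ win) / np.sum(P2(i) ** 2)      # Eq. (7)
    return a, b

def mlp_output(X, W1, B1, W2, B2):
    """Eq. (8): linear output of a one-hidden-layer MLP with
    sigmoid hidden units F(u) = 1/(1 + exp(-u))."""
    F = lambda u: 1.0 / (1.0 + np.exp(-u))
    return W2 @ F(W1 @ X + B1) + B2

# Assumed sizes: 13 MFCCs per frame, 12 frames, 10 hidden units, 1 output.
rng = np.random.default_rng(0)
c = rng.standard_normal((12, 13))               # MFCC trajectories
a, b = poly_coeffs(c, t=1)
feature = np.concatenate([c[1], a, b])          # c_l, a_l, b_l -> 39 values
W1, B1 = rng.standard_normal((10, 39)), rng.standard_normal(10)
W2, B2 = rng.standard_normal((1, 10)), rng.standard_normal(1)
out = mlp_output(feature, W1, B1, W2, B2)
print(feature.shape, out.shape)                 # (39,) (1,)
```

Note that P1 is orthogonal to a constant over the nine-point window (its values sum to zero), which is what makes a_l a pure slope estimate.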

5. EXPERIMENTAL RESULTS

In this section, several experiments are carried out to test the performance of the proposed fingerprint recognition method. Time and transform domains are used for feature extraction. The degradations considered are additive white Gaussian noise (AWGN), impulsive noise, and speckle noise, with and without blurring. These degradations are severe cases that are rarely studied in the fingerprint recognition literature. In the training phase of the proposed recognition method, a database is first composed. Twenty fingerprint images are used to generate this database. The MFCCs and polynomial coefficients are estimated to form the feature vectors of the database. In the testing phase, features similar to those used in the training phase are extracted from 100 degraded fingerprint images and used for matching. Samples of the fingerprint images used in the training phase are shown in Fig. 5.

Fig. 6. Recognition rate vs. the signal-to-noise ratio (SNR) for the different feature extraction methods from fingerprint images contaminated by AWGN.

Fig. 7. Recognition rate vs. the SNR for the different feature extraction methods from blurred fingerprint images contaminated by AWGN.

Fig. 8. Recognition rate vs. the percentage error for the different feature extraction methods from fingerprint images contaminated by impulsive noise.
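The degradations listed above can be simulated along the following lines (a sketch under assumed models — salt-and-pepper for the impulsive noise, multiplicative Gaussian for the speckle, and a moving-average kernel for the blur — since the paper does not specify its exact degradation models):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(img, variance):
    """Additive white Gaussian noise of the given variance."""
    return img + rng.normal(0.0, np.sqrt(variance), img.shape)

def add_impulsive(img, p):
    """Salt-and-pepper noise: a fraction p of pixels forced to 0 or 1."""
    out = img.copy()
    mask = rng.random(img.shape) < p
    out[mask] = rng.integers(0, 2, mask.sum()).astype(float)
    return out

def add_speckle(img, variance):
    """Multiplicative speckle noise: img * (1 + n), n ~ N(0, variance)."""
    return img * (1.0 + rng.normal(0.0, np.sqrt(variance), img.shape))

def blur(img, k=3):
    """Simple k x k moving-average blur (an assumed blur model)."""
    pad = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = pad[r:r + k, c:c + k].mean()
    return out

img = rng.random((16, 16))                     # stand-in fingerprint image
degraded = add_awgn(blur(img), variance=0.05)  # a blurred-plus-AWGN test case
print(degraded.shape)                          # (16, 16)
```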


Fig. 9. Recognition rate vs. the percentage error for the different feature extraction methods from blurred fingerprint images contaminated by impulsive noise.

Fig. 10. Recognition rate vs. the noise variance for the different feature extraction methods from fingerprint images contaminated by speckle noise.
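The recognition-rate comparisons in Figs. 6–11 favor DCT-domain features, and the usual explanation is the DCT's energy-compaction property. A small sketch can illustrate it (the test signal is an assumed stand-in for a fingerprint 1D signal; the orthonormal DCT-II is built directly from its definition):

```python
import numpy as np

def dct_ii(x):
    """Orthonormal DCT-II of a 1D signal, computed from its definition."""
    N = len(x)
    n = np.arange(N)
    C = np.cos(np.pi * np.outer(n, n + 0.5) / N)  # C[k, n]
    X = C @ x
    X[0] *= np.sqrt(1.0 / N)                       # orthonormal scaling
    X[1:] *= np.sqrt(2.0 / N)
    return X

def energy_fraction(X, k):
    """Fraction of the total energy carried by the first k coefficients."""
    return np.sum(X[:k] ** 2) / np.sum(X ** 2)

# A smooth, oscillatory test signal standing in for a fingerprint 1D signal.
t = np.linspace(0.0, 1.0, 256)
x = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)
X = dct_ii(x)
print(round(energy_fraction(X, 32), 3))  # close to 1: energy compaction
```

Because the transform is orthonormal, the total energy is preserved, yet the first few coefficients carry almost all of it for smooth oscillatory signals; this is the property that lets the first frames after the DCT characterize each signal.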

The features used in all the experiments are 13 MFCCs and 26 polynomial coefficients, forming feature vectors of 39 coefficients for each frame of the image. Seven methods for extracting these features are adopted in the paper. In the first method, the MFCCs and the polynomial coefficients are extracted from the time-domain signals only. In the second method, the features are extracted from the DWT of these signals. In the third method, the features are extracted from both the original signals and the DWT of these signals and concatenated in single feature vectors. In the fourth method, the features are extracted from the DCT of the time-domain signals. In the fifth method, the features are extracted from both the original signals and the DCT of these signals and concatenated in single feature vectors. In the sixth method, the features are extracted from the DST of the time-domain signals. In the last method, the features are extracted from both the original signals and the DST of these signals and concatenated in single feature vectors.

Fig. 11. Recognition rate vs. the noise variance for the different feature extraction methods from blurred fingerprint images contaminated by speckle noise.

A comparative study is held between all these extraction methods for the above-mentioned degradation cases, and the results are given in Figs. 6–11. From this comparison, it is clear that the features extracted from the DCT of the 1D signals achieve the highest recognition rates. This is attributed to the energy compaction property of the DCT, which enables accurate feature extraction from the first frames of the 1D signals after the DCT, which can characterize each signal. It is also clear that a recognition rate of about 100% can be achieved with the proposed method in low-degradation cases, unlike with the traditional minutiae-based fingerprint recognition techniques.

6. CONCLUSIONS

This paper presented a new cepstral method for feature extraction from fingerprint images and for fingerprint recognition. In this method, images are transformed to 1D signals, and the MFCCs and polynomial coefficients are extracted from the 1D signals and/or their transforms. The proposed method has two phases: a training phase and a testing phase. A database of the cepstral features of fingerprint images is generated in the training phase and used for feature matching in the testing phase. This feature extraction approach is mostly used in speaker identification, but the experimental results show that it can also be used for feature extraction from images. Feature extraction from the different transform domains has been tested, and it has been shown that the features extracted from the DCT of the 1D fingerprint signals are the most robust among all the features. This is attributed to the energy compaction property of the DCT, which makes the features extracted from the first frames after the DCT robust enough to characterize the signals. Results also show that recognition rates up to 100% are possible in the absence of degradations.

REFERENCES

1. W. Chaohong, S. Zhixin, and V. Govindaraju, "Fingerprint Image Enhancement Method Using Directional Median Filter," Proc. SPIE 5404, 66–75 (2004).

2. S. Kasaei, M. Deriche, and B. Boashash, "Fingerprint Feature Enhancement Using Block-Direction on Reconstructed Images," International Conference on Information, Communications, and Signal Processing, 1997, pp. 721–725.

3. L. Hong, Y. Wan, and A. Jain, "Fingerprint Image Enhancement: Algorithm and Performance Evaluation," IEEE Trans. Pattern Analysis and Machine Intelligence 20 (8), 777–789 (1998).

4. A. K. Jain, R. Bolle, and S. Pankanti, BIOMETRICS: Personal Identification in Networked Society (Kluwer, New York, 1999).

5. D. Zhang, Automated Biometrics: Technologies and Systems (Kluwer, New York, 2000).

6. K. Hrechak and J. A. McHugh, "Automated Fingerprint Recognition Using Structural Matching," Pattern Recognit. 23, 893–904 (1990).

7. A. Jain, H. Lin, and R. Bolle, "On-Line Fingerprint Verification," IEEE Trans. Pattern Anal. Mach. Intell. 19 (4), 302–314 (1997).

8. T. Kinnunen, "Spectral Features for Automatic Text-Independent Speaker Recognition," Licentiate's Thesis (University of Joensuu, Department of Computer Science, Finland, 2003).

9. R. Vergin, D. O'Shaughnessy, and A. Farhat, "Generalized Mel-Frequency Cepstral Coefficients for Large-Vocabulary Speaker-Independent Continuous-Speech Recognition," IEEE Trans. Speech Audio Proc. 7 (5), 525–532 (1999).

10. R. Chengalvarayan and L. Deng, "Speech Trajectory Discrimination Using the Minimum Classification Error Learning," IEEE Trans. Speech Audio Proc. 6 (6), 505–515 (1998).

11. P. D. Polur and G. E. Miller, "Experiments with Fast Fourier Transform, Linear Predictive and Cepstral Coefficients in Dysarthric Speech Recognition Algorithms Using Hidden Markov Model," IEEE Trans. Neural Systems and Rehabilitation Eng. 13 (4), 558–561 (2005).

12. S. Dharanipragada, U. H. Yapanel, and B. D. Rao, "Robust Feature Extraction for Continuous Speech Recognition Using the MVDR Spectrum Estimation Method," IEEE Trans. Audio, Speech, Language Proc. 15 (1), 224–234 (2007).

13. Z. Tufekci, PhD Dissertation (Clemson University, 2001).

14. R. Sarikaya, PhD Dissertation (Duke University, 2001).

15. S. Furui, "Cepstral Analysis Technique for Automatic Speaker Verification," IEEE Trans. Acoust., Speech, Signal Proc. ASSP-29 (2), 254–272 (1981).

16. R. Gandhiraj and P. S. Sathidevi, "Auditory-Based Wavelet Packet Filter Bank for Speech Recognition Using Neural Network," Proceedings of the 15th International Conference on Advanced Computing and Communications, 2007, pp. 666–671.

17. A. Katsamanis, G. Papandreou, and P. Maragos, "Face Active Appearance Modeling and Speech Acoustic Information to Recover Articulation," IEEE Trans. Audio, Speech, Language Proc. 17 (3), 411–422 (2009).

18. A. I. Galushkin, Neural Networks Theory (Springer-Verlag, Berlin, Heidelberg, 2007).

19. G. Dreyfus, Neural Networks: Methodology and Applications (Springer-Verlag, Berlin, Heidelberg, 2005).

Fatma G. Hashad received the B.Sc. degree from the Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, in 2001. She is currently working towards her M.Sc. degree in electrical communications engineering. Her current research interests are in image processing.

Tadros M. Halim received his B.Sc. in Communications Engineering from Cairo University, Faculty of Engineering, in June 1958. From 1958 to 1960, he was a full-time training engineer in the Egyptian Air Force Training Centre. He received his Ph.D. for research on deflection defocusing in TV cathode-ray tubes from the Electrotechnical Institute of Communications, Moscow, USSR, in 1965. He joined the teaching staff of the Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, in 1966. He has published several scientific papers in national and international conferences and journals. His current research areas of interest include adaptive signal processing techniques, superresolution reconstruction of images, speech processing, fingerprint processing, and spread spectrum communications.

Salaheldin M. Diab received the B.Sc. degree from the Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, in 1973, the M.Sc. degree from the Faculty of Engineering, Helwan University, Cairo, Egypt, in 1981, and the Ph.D. degree from Menoufia University in 1987. He joined the teaching staff of the Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, in 1987. He has published several scientific papers in national and international conferences and journals. His current research areas of interest include adaptive signal processing techniques, superresolution reconstruction of images, speech processing, and spread spectrum communications.


Bassiouny M. Sallam received the B.Sc. degree from the Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, in 1975, the M.Sc. degree from the Faculty of Engineering, Cairo University, Cairo, Egypt, in 1982, and the Ph.D. degree from Drexel University, USA, in 1989. He joined the teaching staff of the Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, in 1989. He has published about forty scientific papers in national and international conferences and journals. He received the most cited paper award from the Digital Signal Processing journal for 2008. His current research areas of interest include adaptive signal processing techniques, superresolution reconstruction of images, speech processing, and spread spectrum communications.


Fathi E. Abd El-Samie received the B.Sc. (Honors), M.Sc., and Ph.D. degrees from the Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, in 1998, 2001, and 2005, respectively. He joined the teaching staff of the Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, in 2005. He is a coauthor of about 100 papers in national and international conference proceedings and journals. He received the most cited paper award from the Digital Signal Processing journal for 2008. His current research areas of interest include image enhancement, image restoration, image interpolation, superresolution reconstruction of images, data hiding, multimedia communications, medical image processing, optical signal processing, and digital communications.

