A security system based on biometric human features that can be obtained without any contact with the registering sensor is presented. These features are extracted from the human voice, so the system is called a Voice Recognition System (VRS). The proposed system consists of a combination of three stages: signal pre-processing, feature extraction using the Wavelet Packet Transform (WPT), and feature matching using Artificial Neural Networks (ANNs). The feature vectors are formed in two steps: first, the speech signal is decomposed at level 7 with the Daubechies 20-tap wavelet (db20); second, the energy corresponding to each WPT node is calculated, and these energies are collected to form a feature vector. The resulting 128-element feature vector for each speaker was fed to a Feed-Forward Back-Propagation Neural Network (FFBPNN). The speech signals used in this paper are drawn from the English Language Speech Database for Speaker Recognition (ELSDSR), which comprises audio files for training and testing for each speaker. The performance of the proposed system was evaluated on this database. Our results showed that the correct-recognition rate of the proposed system is about 100% for the training files and 95.7% for one testing file per speaker from the ELSDSR database. These efficiency results were better than those of the well-known Mel-Frequency Cepstral Coefficients (MFCC) and the Zak transform.
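The feature-extraction stage described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it substitutes the 2-tap Haar wavelet for the db20 filter the paper uses (to keep the code self-contained), and a random vector stands in for a real pre-processed ELSDSR speech frame; the function name `wpt_energy_features` and its parameters are hypothetical.

```python
import numpy as np

# Haar analysis filters (orthonormal). The paper uses Daubechies db20;
# Haar is used here only so the sketch needs no wavelet library.
H0, H1 = 1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)   # low-pass taps
G0, G1 = 1.0 / np.sqrt(2.0), -1.0 / np.sqrt(2.0)  # high-pass taps

def _split(x):
    """One analysis step: filter with the Haar pair, then downsample by 2."""
    lo = H0 * x[0::2] + H1 * x[1::2]
    hi = G0 * x[0::2] + G1 * x[1::2]
    return lo, hi

def wpt_energy_features(signal, level=7):
    """Full wavelet packet decomposition down to `level`, returning the
    normalized energy of each of the 2**level terminal nodes
    (level 7 -> the 128-element feature vector described above)."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(level):
        next_nodes = []
        for node in nodes:
            next_nodes.extend(_split(node))  # split every node into (lo, hi)
        nodes = next_nodes
    energies = np.array([np.sum(n * n) for n in nodes])
    return energies / energies.sum()  # energy distribution over the 128 nodes

# Synthetic stand-in for one pre-processed speech frame (the length must be
# divisible by 2**level for this simple, boundary-free sketch).
frame = np.random.default_rng(0).standard_normal(16384)
features = wpt_energy_features(frame)
print(features.shape)  # (128,)
```

One such 128-element vector per speaker would then be the input to the FFBPNN matching stage.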