Experimental studies on continuous speech recognition using neural architectures with “adaptive” hidden activation functions
Torbjørn Svendsen, Filippo Sorbello, Sabato Marco Siniscalchi, Chin-hui Lee
Subject: Vocabulary, Artificial neural network, Generalization, Computer science, Speech recognition, Pattern recognition, TIMIT, Perceptron, Field (computer science), Orthonormal basis, Artificial intelligence, Hidden Markov model
The choice of the hidden non-linearity in a feed-forward multi-layer perceptron (MLP) architecture is crucial for obtaining good generalization capability and performance. Nonetheless, little attention has been paid to this aspect in the ASR field. In this work, we present initial, yet promising, studies toward improving ASR performance by adopting hidden activation functions that can be automatically learned from the data and change shape during training. This adaptive capability is achieved through the use of orthonormal Hermite polynomials. The “adaptive” MLP is used in two neural architectures that generate phone posterior estimates, namely a standalone configuration and a hierarchical structure. The posteriors are fed to a hybrid phone recognition system, with good results on the TIMIT corpus. A scheme for optimizing the contributions of high-accuracy neural architectures is also investigated, yielding a relative improvement of ∼9.0% over a non-optimized combination. Finally, initial experiments on the WSJ Nov92 task show that the proposed technique scales well to large vocabulary continuous speech recognition (LVCSR) tasks.
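The paper gives the exact formulation of the adaptive non-linearity; as a rough illustrative sketch only (the class name, per-layer coefficient sharing, truncation order, and initialization below are our assumptions, not taken from the paper), an activation of this kind can be written as a learnable linear combination of the first few orthonormal Hermite functions, whose coefficients are trained by backpropagation alongside the ordinary weights:

```python
import math
import torch
import torch.nn as nn

class HermiteActivation(nn.Module):
    """Adaptive activation: a learnable linear combination of the first
    `num_terms` orthonormal Hermite functions. Sketch only; the paper's
    normalization and coefficient-sharing details may differ."""

    def __init__(self, num_terms: int = 5):
        super().__init__()
        # One learnable coefficient per Hermite term; small random init
        # lets the activation's shape be learned from the data.
        self.coeffs = nn.Parameter(0.1 * torch.randn(num_terms))
        self.num_terms = num_terms

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Physicists' Hermite polynomials via the standard recurrence:
        # H_0 = 1, H_1 = 2x, H_n = 2x H_{n-1} - 2(n-1) H_{n-2}.
        gauss = torch.exp(-0.5 * x * x)   # Gaussian weight of the orthonormal functions
        h_nm2 = torch.ones_like(x)        # H_0
        h_nm1 = 2.0 * x                   # H_1
        out = torch.zeros_like(x)
        for n in range(self.num_terms):
            if n == 0:
                h_n = h_nm2
            elif n == 1:
                h_n = h_nm1
            else:
                h_n = 2.0 * x * h_nm1 - 2.0 * (n - 1) * h_nm2
                h_nm2, h_nm1 = h_nm1, h_n
            # Orthonormal Hermite function: H_n(x) e^{-x^2/2} / sqrt(2^n n! sqrt(pi))
            norm = math.sqrt((2.0 ** n) * math.factorial(n) * math.sqrt(math.pi))
            out = out + self.coeffs[n] * (h_n * gauss) / norm
        return out
```

Dropping such a module in place of a fixed sigmoid in each hidden layer lets the non-linearity change shape during training, since the Hermite coefficients receive gradients like any other network parameter.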
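The abstract does not spell out how the contributions of the two architectures are optimized. One minimal reading (function names, the accuracy criterion, and the grid search below are all our assumptions) is a frame-level linear interpolation of the two posterior streams, with the interpolation weight tuned on held-out data:

```python
import numpy as np

def combine_posteriors(p_standalone: np.ndarray,
                       p_hierarchical: np.ndarray,
                       weight: float) -> np.ndarray:
    """Frame-level linear interpolation of two phone-posterior streams,
    each of shape (num_frames, num_phones) with rows summing to 1.
    Assumption: the paper's actual combination scheme may differ."""
    combined = weight * p_standalone + (1.0 - weight) * p_hierarchical
    # Renormalize for numerical safety; a convex combination of
    # distributions already sums to 1 up to rounding error.
    return combined / combined.sum(axis=1, keepdims=True)

def tune_weight(p_a: np.ndarray, p_b: np.ndarray, labels: np.ndarray,
                grid=np.linspace(0.0, 1.0, 21)) -> float:
    """Pick the interpolation weight maximizing frame accuracy on a
    held-out set (a stand-in for whatever criterion the paper optimizes)."""
    best_w, best_acc = 0.5, -1.0
    for w in grid:
        acc = (combine_posteriors(p_a, p_b, w).argmax(axis=1) == labels).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w
```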
year | journal | country | edition | language
---|---|---|---|---
2010-03-01 | 2010 IEEE International Conference on Acoustics, Speech and Signal Processing | | |