Abstract
In this paper we present our results on using RNN-based language model (RNNLM) scores trained on different phone-gram orders and obtained with different phonetic ASR recognizers. To avoid data-sparseness problems and to reduce the vocabulary of all possible n-gram combinations, a K-means clustering procedure was applied to phone-vector embeddings as a pre-processing step. Additional experiments to optimize the number of classes, the batch size, the number of hidden neurons, and the state unfolding are also presented. We have worked with the KALAKA-3 database for the plenty-closed condition [1]. Thanks to our clustering technique and the combination of high-level phone-grams, our phonotactic system performs ~13% better than the unigram-based RNNLM system. The obtained RNNLM scores are also calibrated and fused with scores from an acoustic i-vector system and a traditional PPRLM system. This fusion provides additional improvements, showing that these systems contribute complementary information to the LID system.
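The following is a minimal sketch (not the authors' code) of the kind of pre-processing the abstract describes: clustering phone-vector embeddings with K-means so that the large phone-gram vocabulary is mapped onto a small number of classes before RNNLM training. The phone inventory, embedding dimension, concatenation strategy, and number of clusters below are illustrative assumptions, not values taken from the paper.

```python
# Sketch: reduce the phone-gram vocabulary via K-means over phone embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical phone inventory with 16-dimensional phone embeddings
# (in the paper these would come from the phonetic recognizers / embedding step).
phones = [f"ph{i:02d}" for i in range(40)]
phone_embeddings = {p: rng.normal(size=16) for p in phones}

def phonegram_embedding(gram):
    # One simple choice (an assumption): represent a phone-gram by
    # concatenating the embeddings of its constituent phones.
    return np.concatenate([phone_embeddings[p] for p in gram])

# Example trigram vocabulary; in practice these would be extracted from
# the phone sequences produced by the ASR recognizers.
trigrams = [tuple(rng.choice(phones, size=3)) for _ in range(5000)]
X = np.stack([phonegram_embedding(g) for g in trigrams])

# K-means collapses the huge phone-gram vocabulary into K classes,
# which then serve as the input/output symbols of the RNNLM.
K = 256
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)

# Map each phone-gram to its cluster id; the RNNLM is trained over
# these class sequences instead of raw phone-gram sequences.
gram_to_class = {g: int(c) for g, c in zip(trigrams, kmeans.labels_)}
print("vocabulary reduced from", len(set(trigrams)), "phone-grams to", K, "classes")
```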
| Field | Value |
|---|---|
| International | Yes |
| Congress | Odyssey 2016 - The Speaker and Language Recognition Workshop |
| | 960 |
| Place | Bilbao - Spain |
| Reviewers | Yes |
| ISBN/ISSN | 2312-2846 |
| Start Date | 21/06/2016 |
| End Date | 24/06/2016 |
| From page | 117 |
| To page | 123 |