Description

We propose three architectures for a word vector prediction system (WVPS) built with LSTMs that consider both the past and future contexts of a word to predict a vector in an embedded space whose surrounding area is semantically related to the considered word. In one of the architectures we introduce an attention mechanism, so that the system can assess the specific contribution of each context word to the prediction. All the architectures are trained under the same conditions and on the same training material, presenting the data in a curriculum-learning fashion. For the inputs, we employ pretrained word embeddings. We evaluate the systems after the same number of training steps, on two corpora of ground-truth speech transcriptions in Spanish: TCSTAR and the TV recordings used in the Search on Speech Challenge of IberSPEECH 2018. The results show significant differences between the architectures, consistent across both corpora. The attention-based architecture achieves the best results, suggesting its adequacy for the task. We also illustrate the usefulness of the systems for resolving out-of-vocabulary (OOV) regions marked by an ASR system capable of detecting OOV occurrences.
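The attention mechanism described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the bilinear scoring matrix `W`, the query vector, and the use of raw pretrained context embeddings (rather than LSTM hidden states) are simplifying assumptions. It only shows how softmax attention over past and future context words yields both a predicted vector and interpretable per-word contribution weights.

```python
import numpy as np

def predict_word_vector(context_vecs, query, W):
    """Predict an embedding for a target word from the embeddings of its
    past and future context words.

    Each context vector is scored against a query via the (assumed)
    bilinear map W; the softmax of the scores gives per-word attention
    weights, exposing how much each context word contributes.
    """
    scores = context_vecs @ W @ query               # one score per context word
    scores = scores - scores.max()                  # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum() # softmax over context
    prediction = weights @ context_vecs             # convex combination
    return prediction, weights

rng = np.random.default_rng(0)
d = 8                                   # toy embedding dimension
context = rng.normal(size=(4, d))       # e.g. 2 past + 2 future context embeddings
W = np.eye(d)                           # stand-in for a trained scoring matrix
query = context[1]                      # stand-in for a trained query state
vec, w = predict_word_vector(context, query, W)
```

In a trained system the weights `w` are what makes the attention architecture inspectable: they sum to one and can be read directly as the relative contribution of each surrounding word to the predicted vector.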
| Field | Value |
|---|---|
| International | Yes |
| Conference name | The 20th Annual Conference of the International Speech Communication Association - Interspeech 2019 |
| Type of participation | 960 |
| Conference location | Austria |
| Peer reviewed | Yes |
| ISBN or ISSN | 1990-9772 |
| DOI | http://dx.doi.org/10.21437/Interspeech.2019-2347 |
| Conference start date | 15/09/2019 |
| Conference end date | 19/09/2019 |
| From page | 3520 |
| To page | 3523 |
| Proceedings title | Proceedings of the 20th Annual Conference of the International Speech Communication Association - Interspeech 2019 |