Description

This paper introduces a continuous system capable of automatically producing the most adequate speaking style for synthesizing a desired target text. This is achieved by jointly modeling the acoustic and lexical parameters of the speaker models, adapting the CVSM projection of the training texts using MR-HMM techniques. We consider that, as long as sufficient variety is available in the training data, a continuous lexical space can be modeled into a continuous acoustic space. The proposed continuous automatic text-to-speech system was assessed through a perceptual evaluation in order to compare it with traditional approaches to the task. The system proved capable of conveying the correct expressiveness (average adequacy of 3.6) with an expressive strength comparable to oracle traditional expressive speech synthesis (average of 3.6), although with a drop in speech quality, mainly due to the semi-continuous nature of the data (average quality of 2.9). This means the proposed system can improve on traditional neutral systems without requiring any additional user interaction.
| Field | Value |
|---|---|
| International | Yes |
| Conference name | COLING 2016, The 26th International Conference on Computational Linguistics |
| Type of participation | 960 |
| Conference location | Osaka, Japan |
| Peer reviewed | Yes |
| ISBN or ISSN | 9781510833388 |
| DOI | |
| Conference start date | 11/12/2016 |
| Conference end date | 16/12/2016 |
| From page | 369 |
| To page | 376 |
| Proceedings title | Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics |