| Field | Value |
|---|---|
| Description | The lack of a standard emotion representation model hinders emotion analysis due to the incompatibility of annotation formats and models from different sources, tools and annotation services. This is also a limiting factor for multimodal analysis, since recognition services for different modalities (audio, video, text) tend to use different representation models (e.g., continuous vs. discrete emotions). This work presents a multi-disciplinary effort to alleviate this problem by formalizing conversion between emotion models. The specific contributions are: i) a semantic representation of emotion conversion; ii) an API proposal for services that perform automatic conversion; iii) a reference implementation of such a service; and iv) validation of the proposal through use cases that integrate different emotion models and service providers. |
| International | Yes |
| Conference name | Emotion and Sentiment in Social and Expressive Media: User Engagement and Interaction |
| Participation type | 960 |
| Conference location | Texas, USA |
| Peer-reviewed | Yes |
| ISBN or ISSN | 978-1-5386-0680-3 |
| DOI | 10.1109/ACIIW.2017.8272599 |
| Conference start date | 23/10/2017 |
| Conference end date | 26/10/2017 |
| From page | 111 |
| To page | 116 |
| Proceedings title | 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos |
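The abstract above concerns automatic conversion between emotion representation models, such as continuous (dimensional) and discrete (categorical) ones. As a purely illustrative sketch, not the paper's actual algorithm or API, a common way to perform one such conversion is to map a continuous valence–arousal point to the nearest discrete emotion centroid; the labels and centroid coordinates below are hypothetical examples:

```python
import math

# Hypothetical centroids of discrete emotion categories in a 2-D
# valence-arousal space. These values are illustrative only; they
# are not taken from the paper or any standard vocabulary.
CENTROIDS = {
    "joy": (0.8, 0.5),
    "sadness": (-0.7, -0.4),
    "anger": (-0.6, 0.7),
    "calm": (0.4, -0.6),
}

def to_discrete(valence: float, arousal: float) -> str:
    """Map a continuous (valence, arousal) point to the nearest
    discrete emotion label by Euclidean distance."""
    return min(
        CENTROIDS,
        key=lambda label: math.dist((valence, arousal), CENTROIDS[label]),
    )

# Example: a high-valence, moderately aroused point maps to "joy".
print(to_discrete(0.9, 0.4))
```

A real conversion service of the kind the paper proposes would expose such mappings behind a documented API and a semantic description of the source and target models, rather than hard-coding them.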