Contrast in concept-to-speech generation

Bibliographic Details
Published in: Computer Speech & Language, Vol. 16, No. 3, pp. 491-531
Main Author: Theune, Mariët
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-07-2002
Description
Summary: In concept-to-speech systems, spoken output is generated on the basis of a text that has been produced by the system itself. In such systems, linguistic information from the text generation component may be exploited to achieve a higher prosodic quality of the speech output than can be obtained in a plain text-to-speech system. In this paper we discuss how information from natural language generation can be used to compute prosody in a concept-to-speech system, focusing on the automatic marking of contrastive accents on the basis of information about the preceding discourse. We discuss and compare some formal approaches to this problem and present the results of a small perception experiment that was carried out to test which discourse contexts trigger a preference for contrastive accent, and which do not. Finally, we describe a method for marking contrastive accent in a generic concept-to-speech system called D2S. In D2S, contrastive accent is assigned to generated phrases expressing different aspects of similar events. Unlike in previous approaches, there is no restriction on the kind of entities that may be considered contrastive. This is in line with the observation that, given the 'right' context, any two items may stand in contrast to each other.
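Illustrative sketch: the abstract's core idea (assigning contrastive accent to phrases that express different aspects of similar events) can be pictured roughly as below. This is not the D2S implementation; the Event class, the mark_contrastive_accents function, and the train-announcement example are hypothetical names and data chosen only to illustrate the comparison of similar events described in the summary.

from dataclasses import dataclass, field

@dataclass
class Event:
    predicate: str
    arguments: dict                           # role -> filler; any kind of entity may contrast
    contrastive: set = field(default_factory=set)  # roles to be marked for contrastive accent

def mark_contrastive_accents(previous: Event, current: Event) -> Event:
    """If the two events are similar (same predicate), mark every role of the
    current event whose filler differs from that of the previous event."""
    if previous.predicate == current.predicate:
        for role, filler in current.arguments.items():
            if previous.arguments.get(role) != filler:
                current.contrastive.add(role)
    return current

# Example: "The 8:05 train leaves from platform 2. The 8:20 train leaves from platform 5."
prev = Event("leave", {"train": "8:05", "platform": "2"})
curr = Event("leave", {"train": "8:20", "platform": "5"})
print(mark_contrastive_accents(prev, curr).contrastive)  # {'train', 'platform'}

Note that the sketch places no restriction on what the argument fillers are, which mirrors the paper's point that, given the right context, any two items may stand in contrast.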
ISSN: 0885-2308, 1095-8363
DOI: 10.1016/S0885-2308(02)00010-4