Ctrl-P: Temporal Control of Prosodic Variation for Speech Synthesis

Authors: Devang S Ram Mohan*, Vivian Hu*, Tian Huey Teh*, Alexandra Torresquintero, Christopher Wallis, Marlene Staib, Lorenzo Foglianti, Jiameng Gao, Simon King. (*: equal contribution)

Abstract: Text does not fully specify the spoken form, so text-to-speech models must be able to learn from speech data that vary in ways not explained by the corresponding text. One way to reduce the amount of unexplained variation in training data is to provide acoustic information as an additional learning signal. When generating speech, modifying this acoustic information enables multiple distinct renditions of a text to be produced.

Since much of the unexplained variation is in the prosody, we propose a model that generates speech explicitly conditioned on the three primary acoustic correlates of prosody: F0, energy and duration. The model is flexible about how the values of these features are specified: they can be externally provided, or predicted from text, or predicted then subsequently modified.

Compared to a model that employs a variational auto-encoder to learn unsupervised latent features, our model provides more interpretable, temporally-precise, and disentangled control. When automatically predicting the acoustic features from text, it generates speech that is more natural than that from a Tacotron 2 model with reference encoder. Subsequent human-in-the-loop modification of the predicted acoustic features can significantly further increase naturalness.

Disentanglement

Description:

For each utterance, we shift the entire contour of a single input feature (Ctrl-P) or latent dimension (T-VAE baseline) by a multiple of the speaker-specific standard deviation for that dimension, holding the other features fixed.
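As a minimal sketch of the shift described above (assuming per-frame feature contours stored as NumPy arrays; function and variable names are illustrative, not from the released code):

```python
import numpy as np

def shift_contour(contour, speaker_std, multiple):
    """Shift an entire feature contour (e.g. the F0 track) by a
    multiple of the speaker-specific standard deviation, leaving
    the other features unchanged."""
    return contour + multiple * speaker_std

# Example: shift a toy F0 contour by +0.25 sigma for a speaker
# whose F0 standard deviation is 30 Hz (values are illustrative).
f0 = np.array([120.0, 125.0, 130.0, 128.0])
shifted = shift_contour(f0, speaker_std=30.0, multiple=0.25)
# every frame is raised by 0.25 * 30 = 7.5 Hz
```

The same operation with a negative `multiple` produces the −0.5σ and −0.25σ conditions in the tables below.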

Sentence 1

Ctrl-P (our model)

Shifted Feature | −0.5σ   | −0.25σ  | 0.0σ    | 0.25σ   | 0.5σ
F0              | [audio] | [audio] | [audio] | [audio] | [audio]
Energy          | [audio] | [audio] | [audio] | [audio] | [audio]
Duration        | [audio] | [audio] | [audio] | [audio] | [audio]

T-VAE Baseline

Shifted Feature | −2.0σ   | −1.0σ   | 0.0σ    | 1.0σ    | 2.0σ
Latent 1        | [audio] | [audio] | [audio] | [audio] | [audio]
Latent 2        | [audio] | [audio] | [audio] | [audio] | [audio]
Latent 3        | [audio] | [audio] | [audio] | [audio] | [audio]

Sentence 2

Ctrl-P (our model)

Shifted Feature | −0.5σ   | −0.25σ  | 0.0σ    | 0.25σ   | 0.5σ
F0              | [audio] | [audio] | [audio] | [audio] | [audio]
Energy          | [audio] | [audio] | [audio] | [audio] | [audio]
Duration        | [audio] | [audio] | [audio] | [audio] | [audio]

T-VAE Baseline

Shifted Feature | −2.0σ   | −1.0σ   | 0.0σ    | 1.0σ    | 2.0σ
Latent 1        | [audio] | [audio] | [audio] | [audio] | [audio]
Latent 2        | [audio] | [audio] | [audio] | [audio] | [audio]
Latent 3        | [audio] | [audio] | [audio] | [audio] | [audio]

Temporal Controllability

Description:

To demonstrate the ability of our model to provide fine-grained temporal control, we synthesise the same sentence but modify the acoustic features corresponding to specific phones to elicit semantically distinct renditions.
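A minimal sketch of this kind of phone-level edit (assuming per-phone feature values stored as NumPy arrays; the function name, indices, and scale factor are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def modify_phone_features(features, phone_indices, scale):
    """Scale the feature values (e.g. per-phone durations) of the
    selected phones only, leaving the rest of the utterance intact."""
    out = features.copy()
    out[phone_indices] *= scale
    return out

# Toy per-phone durations (in frames). Lengthening the phones of one
# word (indices 3 and 4 here, chosen for illustration) is one way to
# shift the perceived emphasis onto that word.
durations = np.array([5.0, 7.0, 6.0, 8.0, 5.0, 9.0])
emphasised = modify_phone_features(durations, phone_indices=[3, 4], scale=1.5)
```

Analogous edits to the F0 or energy contours over those phones can be combined with the duration change before synthesis.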

All renditions synthesise the sentence "María nunca me pide dinero prestado" ("María never asks to borrow money from me"), with the acoustic features of different phones modified in each rendition.

Speaker  | Rendition 1 | Rendition 2 | Rendition 3
Male 1   | [audio]     | [audio]     | [audio]
Female 1 | [audio]     | [audio]     | [audio]

© 2021 Papercup Technologies Ltd.