This example contains code used to train a JETS model with the Chinese Standard Mandarin Speech Corpus (CSMSC).
Download CSMSC from its Official Website.
We use MFA to get phonemes and durations for JETS. You can download baker_alignment_tone.tar.gz, or train your own MFA model by referring to the mfa example in our repo.
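If you use the prebuilt alignment archive, unpack it in this example's directory so that it matches the path assumed below (a minimal sketch, assuming the archive has already been downloaded to the current directory):

```bash
# Extract the prebuilt MFA alignments; this is expected to yield ./baker_alignment_tone.
tar -xzvf baker_alignment_tone.tar.gz
```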
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of CSMSC is `./baker_alignment_tone`.
Run the command below to
- source path.
- preprocess the dataset.
- train the model.
- synthesize wavs.
    - synthesize waveform from `metadata.jsonl`.
    - synthesize waveform from a text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage. For example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
Stage 0 runs the preprocessing script:
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the `dump` folder is listed below.
```text
dump
├── dev
│   ├── norm
│   └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
    ├── feats_stats.npy
    ├── norm
    └── raw
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The `raw` folder contains wave, mel spectrogram, speech, pitch, and energy features of each utterance, while the `norm` folder contains normalized ones. The statistics used to normalize features are computed from the training set, which is located in `dump/train/feats_stats.npy`.
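If you want to sanity-check the normalization statistics after preprocessing, you can inspect the array directly (a sketch, assuming a Python environment with `numpy` installed):

```bash
# Print the shape of the feature statistics computed from the training set.
python3 -c "import numpy as np; print(np.load('dump/train/feats_stats.npy').shape)"
```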
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains `phones`, `text_lengths`, the path of `feats`, `feats_lengths`, the path of pitch features, the path of energy features, the path of raw waves, `speaker`, and the id of each utterance.
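Since each line of `metadata.jsonl` is a standalone JSON object, you can pretty-print a single record to see these fields (a sketch, assuming `jq` is installed; any JSON pretty-printer works):

```bash
# Show the first record of the normalized training metadata.
head -n 1 dump/train/norm/metadata.jsonl | jq .
```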
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`. Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
                [--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
                [--ngpu NGPU] [--phones-dict PHONES_DICT]

Train a JETS model.

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       config file to overwrite default config.
  --train-metadata TRAIN_METADATA
                        training data.
  --dev-metadata DEV_METADATA
                        dev data.
  --output-dir OUTPUT_DIR
                        output dir.
  --ngpu NGPU           if ngpu == 0, use cpu.
  --phones-dict PHONES_DICT
                        phone vocabulary file.
```
- `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
- `--train-metadata` and `--dev-metadata` should be the metadata file in the normalized subfolder of `train` and `dev` in the `dump` folder.
- `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
- `--ngpu` is the number of gpus to use; if ngpu == 0, use cpu.
- `--phones-dict` is the path of the phone vocabulary file.
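For reference, a direct invocation of the training script might look like the following (a sketch only; `exp/default` is a hypothetical output directory, and the other paths follow the `dump` layout described above):

```bash
# Sketch: call train.py directly instead of going through ./local/train.sh.
python3 ${BIN_DIR}/train.py \
    --config=conf/default.yaml \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --output-dir=exp/default \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt
```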
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveforms from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveforms from a text file (see the sketch below for an assumed input format).
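In this sketch we assume the input file lists one utterance per line in a `utt_id sentence` layout; `sentences.txt` is a hypothetical file name:

```bash
# Hypothetical input file for synthesize_e2e: each line is "utt_id sentence" (assumed layout).
cat > sentences.txt << 'EOF'
001 今天天气很好。
002 我们一起去公园散步。
EOF
```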
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
The pretrained model can be downloaded here:
The static model can be downloaded here: