Danish ASR (automatic speech recognition) and TTS (text-to-speech) datasets and models, developed as part of the CoRal project, funded by the Innovation Fund.
Author and maintainer:
- Dan Saattrup Smart (dan.smart@alexandra.dk)
- Run `make install`, which installs `uv` (if it isn't already installed), sets up a virtual environment and installs all Python dependencies in it.
- Run `source .venv/bin/activate` to activate the virtual environment.
- Run `make` to see a list of available commands.
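Put together, a typical first-time setup (using only the commands above) looks like this:

```bash
make install
source .venv/bin/activate
make
```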
You can use the `finetune_asr_model` script to finetune your own ASR model:

```bash
python src/scripts/finetune_asr_model.py [key=value]...
```

Here are some of the more important available keys:
- `model`: The base model to finetune. Supports the following values:
  - `wav2vec2-small`
  - `wav2vec2-medium`
  - `wav2vec2-large`
  - `whisper-xxsmall`
  - `whisper-xsmall`
  - `whisper-small`
  - `whisper-medium`
  - `whisper-large`
  - `whisper-large-turbo`
- `datasets`: The datasets to finetune the models on. Can be a single dataset or an array of datasets (written like `[dataset1,dataset2,...]`). Supports the following values:
  - `coral_read_aloud`
  - `coral_conversation`
  - `coral_tts`
  - `fleurs`
  - `ftspeech`
  - `nota`
  - `nst`
- `dataset_probabilities`: In case you are finetuning on several datasets, you need to specify the probability of sampling each one. This is an array of probabilities that needs to sum to 1. If not set, the datasets are sampled uniformly.
- `model_id`: The model ID of the finetuned model. Defaults to the model type along with a timestamp.
- `push_to_hub`, `hub_organisation` and `private`: Whether to push the finetuned model to the Hugging Face Hub, and if so, which organisation to push it to. If `private` is set to `True`, the model will be private. The default is not to push the model to the Hub.
- `enable_experiment_tracking`: Whether experiment tracking should be enabled during training. Defaults to false. You can also set `experiment_tracking` to either `wandb` or `mlflow` to specify which experiment tracking tool to use (`wandb` is used by default).
- `per_device_batch_size` and `dataloader_num_workers`: The batch size and number of workers to use for training. These default to 8 and 4, respectively. Tweak these if you are running out of GPU memory.
- `model.learning_rate`, `total_batch_size`, `max_steps` and `warmup_steps`: Training parameters that you can tweak, although this shouldn't really be needed.
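As a concrete illustration, a finetuning run on two datasets with custom sampling probabilities might look like this (a sketch that only combines the keys documented above; `your-org` is a placeholder organisation name):

```bash
python src/scripts/finetune_asr_model.py \
    model=whisper-small \
    datasets=[coral_read_aloud,coral_conversation] \
    dataset_probabilities=[0.7,0.3] \
    per_device_batch_size=4 \
    push_to_hub=true \
    hub_organisation=your-org
```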
See all the finetuning options in the `config/asr_finetuning.yaml` file.
You can use the `evaluate_model` script to evaluate an ASR model:

```bash
python src/scripts/evaluate_model.py [key=value]...
```

Here are some of the more important available keys:
- `model_id` (required): The Hugging Face model ID of the ASR model to evaluate.
- `dataset`: The ASR dataset to evaluate the model on. Can be any ASR dataset on the Hugging Face Hub. Note that subsets are separated with "::". Defaults to `CoRal-project/coral_v3::conversation`.
- `eval_split_name`: The dataset split to evaluate on. Defaults to `test`.
- `text_column`: The name of the column in the dataset that contains the text. Defaults to `text`.
- `audio_column`: The name of the column in the dataset that contains the audio. Defaults to `audio`.
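For example, evaluating a finetuned model could look like this (a sketch; `your-org/your-asr-model` is a placeholder model ID, and the `dataset` and `eval_split_name` values shown are just the documented defaults made explicit):

```bash
python src/scripts/evaluate_model.py \
    model_id=your-org/your-asr-model \
    dataset=CoRal-project/coral_v3::conversation \
    eval_split_name=test
```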
See all the evaluation options in the `config/evaluation.yaml` file.
If you're on macOS and get an error along the lines of "fatal error: 'lzma.h' file not found", then try the following and rerun `make install` afterwards:

```bash
export CPPFLAGS="-I$(brew --prefix)/include"
```
Another macOS issue can occur if you get an error like "fatal error: 'cstddef' file not found" and/or "fatal error: 'climits' file not found". In this case, first ensure that you have Homebrew installed, then run the following and rerun `make install` afterwards:

```bash
brew install cmake boost zlib eigen
```