Utility scripts below (setup, import, start_server) require a working bash
installation (preferably on a Linux system).
## Prerequisites

If you're on Linux, you may need to install `gcc` and `build-essential` for the sqlite3 Go adapter to work properly:

```
$ sudo apt-get install gcc build-essential
```
## Set up Go

* Download: https://golang.org/dl/ (version 1.13 or higher)
* Installation instructions: https://golang.org/doc/install
## Install database support

* SQLite3: on Linux systems with `apt`, run `sudo apt install sqlite3`
* MariaDB: on Linux systems with `apt`, run `sudo apt install mariadb-server` or similar (version 10.1.3 or higher is required)

Please note that you need to install both databases if you intend to run unit tests or other automated tests.
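To confirm that the installed MariaDB meets the version requirement, you can compare version strings with GNU `sort -V`. This is only a sketch: the `installed` value below is a hypothetical example; in practice you would take it from the output of `mariadb --version` (or `mysql --version` on some systems).

```shell
# version_ge A B: succeeds if version A >= version B (uses GNU sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Hypothetical version string; in practice, extract it from `mariadb --version`.
installed="10.3.25"
if version_ge "$installed" "10.1.3"; then
  echo "MariaDB version OK: $installed"
else
  echo "MariaDB too old: $installed (need >= 10.1.3)"
fi
```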
## Clone the source code

```
$ git clone https://github.com/stts-se/pronlex.git
$ cd pronlex
```
## Test (optional)

```
pronlex$ go test ./...
```
Download an SQL lexicon dump file. In the following example, we use a Swedish lexicon: https://github.com/stts-se/wikispeech-lexdata/blob/master/sv-se/nst/swe030224NST.pron-ws.utf8.sqlite.sql.gz
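Note that the URL above is a GitHub "blob" page; to fetch the file itself you need the corresponding raw URL. A small sketch, assuming GitHub's usual `raw.githubusercontent.com` scheme (uncomment the `wget` line to actually download):

```shell
# Derive the raw-download URL from the GitHub blob URL and print it.
BLOB_URL="https://github.com/stts-se/wikispeech-lexdata/blob/master/sv-se/nst/swe030224NST.pron-ws.utf8.sqlite.sql.gz"
RAW_URL=$(printf '%s' "$BLOB_URL" | sed -e 's#github\.com#raw.githubusercontent.com#' -e 's#/blob/#/#')
echo "$RAW_URL"
# wget "$RAW_URL"
```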
Pre-compile the binaries (for faster execution times):

```
pronlex$ go build ./...
```
Create a database file (this takes a while):

```
pronlex$ importSql -db_engine sqlite -db_location ~/wikispeech/sqlite/ -db_name sv_db swe030224NST.pron-ws.utf8.sqlite.sql.gz
```
Test looking up a word:

```
pronlex$ lexlookup -db_engine sqlite -db_location ~/wikispeech/sqlite/ -db_name sv_db -lexicon swe_lex åsna
```
Set up the pronlex server:

```
pronlex$ bash scripts/setup.sh -a <application folder> -e <db engine> -l <db location>*
```

Example:

```
pronlex$ bash scripts/setup.sh -a ~/wikispeech/sqlite -e sqlite
```

Usage info:

```
pronlex$ bash scripts/setup.sh -h
```

This sets up the pronlex server using the specified database engine and location, along with a set of test data. *The db location folder is not required for sqlite (if it's not specified, the application folder is used as the db location).
If, for some reason, you are not using the setup script above to configure your pronlex installation, you need to configure MariaDB using the MariaDB setup script (as root):

```
sudo mysql -u root < scripts/mariadb_setup.sql
```
## Import lexicon data (optional)

```
pronlex$ bash scripts/import.sh -a <application folder> -e <db engine> -l <db location>* -f <lexdata git>
```

Example:

```
pronlex$ bash scripts/import.sh -a ~/wikispeech/sqlite -e sqlite -f ~/git_repos/wikispeech-lexdata
```

This imports lexicon databases (SQL dumps) for Swedish, Norwegian, US English, and a small set of test data for Arabic from the wikispeech-lexdata repository. If the `<lexdata git>` folder exists on disk, lexicon resources are read from this folder; if it doesn't exist, the lexicon data is downloaded from GitHub. *The db location folder is not required for sqlite (if it's not specified, the application folder is used as the db location).

If you want to import other lexicon data, or just a subset of the data above, you can either create your own lexicon files or use existing data in the wikispeech-lexdata repository. The lexicon file format is described here: https://godoc.org/github.com/stts-se/pronlex/line.
## Start the server

The server is started using this script:

```
pronlex$ bash scripts/start_server.sh -e <db engine> -l <db location> -a <application folder>
```

The startup script runs some init tests in a separate test server before starting the standard server.

When the standard (non-testing) server is started, it always creates a demo database and lexicon containing a few simple entries for demo and testing purposes. The server can thus be started and tested even if you haven't imported the lexicon data above.

For a complete set of options, run:

```
pronlex$ bash scripts/start_server.sh -h
```
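Once the server is up, you can do a quick liveness check from another terminal. This is only a sketch: the port below is an assumption, so adjust it to the address printed on the start_server.sh console.

```shell
# Assumed port; check the start_server.sh output for the actual address.
PORT=8787
BASE_URL="http://localhost:${PORT}"
echo "$BASE_URL"
# Fetch the server's start page once it is running:
# curl -s "$BASE_URL/"
```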
This work was supported by the Swedish Post and Telecom Authority (PTS) through the grant "Wikispeech – en användargenererad talsyntes på Wikipedia" (2016–2017).