CPU/GPU converter from eBooks to audiobooks with chapters and metadata,
using XTTSv2, Bark, VITS, Fairseq, YourTTS, Tacotron2 and more. Supports voice cloning and 1110+ languages!
Important
This tool is intended for use with non-DRM, legally acquired eBooks only.
The authors are not responsible for any misuse of this software or any resulting legal consequences.
Use this tool responsibly and in accordance with all applicable laws.
New Default Voice Demo: Sherlock.mp4

More Demos:
- ASMR Voice: WhisperASMR-Demo.mp4
- Rainy Day Voice: Rainy_Day_voice_Demo.mp4
- Scarlett Voice: ScarlettJohansson-Demo.mp4
- David Attenborough Voice: shortStory.mp4
Example
- Splits the eBook into chapters for organized audio.
- High-quality text-to-speech with Coqui XTTSv2 and Fairseq (and more).
- Optional voice cloning with your own voice file.
- Supports 1110+ languages (English by default); see the list of supported languages below.
- Designed to run on 4GB RAM.
| Arabic (ar) | Chinese (zh) | English (en) | Spanish (es) |
|---|---|---|---|
| French (fr) | German (de) | Italian (it) | Portuguese (pt) |
| Polish (pl) | Turkish (tr) | Russian (ru) | Dutch (nl) |
| Czech (cs) | Japanese (ja) | Hindi (hi) | Bengali (bn) |
| Hungarian (hu) | Korean (ko) | Vietnamese (vi) | Swedish (sv) |
| Persian (fa) | Yoruba (yo) | Swahili (sw) | Indonesian (id) |
| Slovak (sk) | Croatian (hr) | Tamil (ta) | Danish (da) |
- 2GB RAM minimum, 8GB recommended
- Virtualization enabled if running on Windows (Docker only)
- CPU (Intel, AMD, ARM), GPU (NVIDIA, AMD*, Intel*) (recommended), MPS (Apple Silicon) (*available very soon)
Important
Before posting an install or bug issue, search the open and closed issues tab carefully
to make sure your issue does not already exist.
Note
Since eBooks lack any standard structure defining what a chapter, paragraph, preface, etc. is,
you should first manually remove any text you do not want converted to audio.
- Clone the repo:

  ```bash
  git clone https://github.com/DrewThomasson/ebook2audiobook.git
  cd ebook2audiobook
  ```

- Run ebook2audiobook:

  - Linux/macOS: run `./ebook2audiobook.sh` (launch script).
    Note for macOS users: Homebrew is installed to install missing programs.
  - Mac Launcher: double-click `Mac Ebook2Audiobook Launcher.command`.
  - Windows: run `ebook2audiobook.cmd` (launch script) or double-click it.
    Note for Windows users: Scoop is installed to install missing programs without administrator privileges.
  - Windows Launcher: double-click `ebook2audiobook.cmd`.

- Open the Web App: click the URL shown in the terminal (http://localhost:7860/) to access the web app and convert eBooks.

- For a public link: `./ebook2audiobook.sh --share` (Linux/macOS), `ebook2audiobook.cmd --share` (Windows), or `python app.py --share` (all OS).
Important
If the script is stopped and run again, refresh your Gradio GUI page
so the web page can reconnect to the new connection socket.
- Linux/macOS:

  ```bash
  ./ebook2audiobook.sh --headless --ebook <path_to_ebook_file> \
      --voice [path_to_voice_file] --language [language_code]
  ```

- Windows:

  ```bash
  ebook2audiobook.cmd --headless --ebook <path_to_ebook_file> --voice [path_to_voice_file] --language [language_code]
  ```

- `--ebook`: path to your eBook file.
- `--voice`: voice cloning file path (optional).
- `--language`: language code in ISO-639-3 (e.g. ita for Italian, eng for English, deu for German...).
  The default language is eng, and --language is optional when using the default language set in ./lib/lang.py.
  ISO-639-1 two-letter codes are also supported.
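For example, a hypothetical headless run converting an eBook to Italian speech with a cloned voice could look like this (file paths are placeholders, not from the project):

```bash
# Hypothetical example: headless conversion with a cloned voice (paths are placeholders)
./ebook2audiobook.sh --headless --ebook ./books/my_novel.epub \
    --voice ./voices/my_voice.wav --language ita
```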
A custom model must be a .zip file containing the mandatory model files (example for XTTSv2: config.json, model.pth, vocab.json and ref.wav).

- Linux/macOS:

  ```bash
  ./ebook2audiobook.sh --headless --ebook <ebook_file_path> \
      --language <language> --custom_model <custom_model_path>
  ```

- Windows:

  ```bash
  ebook2audiobook.cmd --headless --ebook <ebook_file_path> --language <language> --custom_model <custom_model_path>
  ```

Note: the ref.wav of your custom model is always the voice selected for the conversion.

- `<custom_model_path>`: path to the `model_name.zip` file, which must contain (according to the TTS engine) all the mandatory files (see ./lib/models.py).
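As a sketch of how such a model could be packaged and used, assuming the `zip` utility is available and using the XTTSv2 file names from the example above (the archive name and eBook path are placeholders):

```bash
# Bundle the mandatory XTTSv2 files into a zip (file names from the example above)
zip my_xtts_model.zip config.json model.pth vocab.json ref.wav

# Convert an eBook in headless mode using that custom model (eBook path is a placeholder)
./ebook2audiobook.sh --headless --ebook ./books/my_novel.epub \
    --language eng --custom_model ./my_xtts_model.zip
```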
- Linux/macOS: `./ebook2audiobook.sh --help`
- Windows: `ebook2audiobook.cmd --help`
- All OS: `python app.py --help`
```text
usage: app.py [-h] [--session SESSION] [--share] [--headless] [--ebook EBOOK] [--ebooks_dir EBOOKS_DIR]
[--language LANGUAGE] [--voice VOICE]
[--device {{'proc': 'cpu', 'found': True},{'proc': 'cuda', 'found': False},{'proc': 'mps', 'found': False},{'proc': 'rocm', 'found': False},{'proc': 'xpu', 'found': False}}]
[--tts_engine {XTTSv2,BARK,VITS,FAIRSEQ,TACOTRON2,YOURTTS,xtts,bark,vits,fairseq,tacotron,yourtts}]
[--custom_model CUSTOM_MODEL] [--fine_tuned FINE_TUNED] [--output_format OUTPUT_FORMAT]
[--output_channel OUTPUT_CHANNEL] [--temperature TEMPERATURE] [--length_penalty LENGTH_PENALTY]
[--num_beams NUM_BEAMS] [--repetition_penalty REPETITION_PENALTY] [--top_k TOP_K] [--top_p TOP_P]
[--speed SPEED] [--enable_text_splitting] [--text_temp TEXT_TEMP] [--waveform_temp WAVEFORM_TEMP]
[--output_dir OUTPUT_DIR] [--version]
Convert eBooks to Audiobooks using a Text-to-Speech model. You can either launch the Gradio interface or run the script in headless mode for direct conversion.
options:
-h, --help show this help message and exit
--session SESSION Session to resume the conversion in case of interruption, crash,
or reuse of custom models and custom cloning voices.
**** The following options are for all modes:
Optional
**** The following options are for gradio/gui mode only:
Optional
--share Enable a public shareable Gradio link.
**** The following options are for --headless mode only:
--headless Run the script in headless mode
--ebook EBOOK Path to the ebook file for conversion. Cannot be used when --ebooks_dir is present.
--ebooks_dir EBOOKS_DIR
Relative or absolute path of the directory containing the files to convert.
Cannot be used when --ebook is present.
--language LANGUAGE Language of the e-book. The default language set
in ./lib/lang.py is used if not present. All compatible language codes are in ./lib/lang.py
optional parameters:
--voice VOICE (Optional) Path to the voice cloning file for TTS engine.
Uses the default voice if not present.
--device {{'proc': 'cpu', 'found': True},{'proc': 'cuda', 'found': False},{'proc': 'mps', 'found': False},{'proc': 'rocm', 'found': False},{'proc': 'xpu', 'found': False}}
(Optional) Processor unit type for the conversion.
Default is set in ./lib/conf.py if not present. Falls back to CPU if CUDA or MPS is not available.
--tts_engine {XTTSv2,BARK,VITS,FAIRSEQ,TACOTRON2,YOURTTS,xtts,bark,vits,fairseq,tacotron,yourtts}
(Optional) Preferred TTS engine (available: ['XTTSv2', 'BARK', 'VITS', 'FAIRSEQ', 'TACOTRON2', 'YOURTTS', 'xtts', 'bark', 'vits', 'fairseq', 'tacotron', 'yourtts']).
Default depends on the selected language. The TTS engine should be compatible with the chosen language.
--custom_model CUSTOM_MODEL
(Optional) Path to the custom model zip file containing the mandatory model files.
Please refer to ./lib/models.py
--fine_tuned FINE_TUNED
(Optional) Fine tuned model path. Default is builtin model.
--output_format OUTPUT_FORMAT
(Optional) Output audio format. Default is m4b set in ./lib/conf.py
--output_channel OUTPUT_CHANNEL
(Optional) Output audio channel. Default is mono set in ./lib/conf.py
--temperature TEMPERATURE
(xtts only, optional) Temperature for the model.
Default to config.json model. Higher temperatures lead to more creative outputs.
--length_penalty LENGTH_PENALTY
(xtts only, optional) A length penalty applied to the autoregressive decoder.
Default to config.json model. Not applied to custom models.
--num_beams NUM_BEAMS
(xtts only, optional) Controls how many alternative sequences the model explores. Must be equal to or greater than the length penalty.
Default to config.json model.
--repetition_penalty REPETITION_PENALTY
(xtts only, optional) A penalty that prevents the autoregressive decoder from repeating itself.
Default to config.json model.
--top_k TOP_K (xtts only, optional) Top-k sampling.
Lower values mean more likely outputs and increased audio generation speed.
Default to config.json model.
--top_p TOP_P (xtts only, optional) Top-p sampling.
Lower values mean more likely outputs and increased audio generation speed. Default to config.json model.
--speed SPEED (xtts only, optional) Speed factor for the speech generation.
Default to config.json model.
--enable_text_splitting
(xtts only, optional) Enable TTS text splitting. This option is known to not be very efficient.
Default to config.json model.
--text_temp TEXT_TEMP
(bark only, optional) Text Temperature for the model.
Default to config.json model.
--waveform_temp WAVEFORM_TEMP
(bark only, optional) Waveform Temperature for the model.
Default to config.json model.
--output_dir OUTPUT_DIR
(Optional) Path to the output directory. Default is set in ./lib/conf.py
--version Show the version of the script and exit
Example usage:
Windows:
Gradio/GUI:
ebook2audiobook.cmd
Headless mode:
ebook2audiobook.cmd --headless --ebook '/path/to/file' --language eng
Linux/Mac:
Gradio/GUI:
./ebook2audiobook.sh
Headless mode:
./ebook2audiobook.sh --headless --ebook '/path/to/file' --language eng
Docker build image mode:
Windows:
ebook2audiobook.cmd --script_mode build_docker
Linux/Mac
./ebook2audiobook.sh --script_mode build_docker
```
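Putting several of the documented options together, a hypothetical batch conversion might look like the sketch below (directory paths and the speed value are placeholders; --device is assumed to accept the processor names listed in the help, such as cpu or cuda):

```bash
# Hypothetical batch run: convert every eBook in a folder with XTTSv2 on an NVIDIA GPU
# (all flags appear in the --help output above; values are placeholders)
./ebook2audiobook.sh --headless --ebooks_dir ./my_ebooks \
    --language eng --tts_engine xtts --device cuda --speed 1.1
```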
Tip: to add silence (a random duration between 1.0 and 1.8 seconds) to your text, just use "###" or "[pause]".
NOTE: in Gradio/GUI mode, to cancel a running conversion, just click the [X] on the eBook upload component.
TIP: if you need more pauses, add '###' or '[pause]' between the words where you want them; one [pause] equals about 1.4 seconds.
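For instance, a passage marked up for extra pauses could look like this (the sentence itself is only illustrative):

```text
The detective stopped at the door. ### He listened for a moment, [pause] then knocked twice.
```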
For the pre-built image, uncomment `image: docker.io/athomasson2/ebook2audiobook:latest` in docker-compose.yml.
- Clone the repository (if you haven't already):

  ```bash
  git clone https://github.com/DrewThomasson/ebook2audiobook.git
  cd ebook2audiobook
  ```

- Set GPU support (disabled by default): to enable GPU support, modify docker-compose.yml and change `*gpu-disabled` to `*gpu-enabled`.

- Start the service:

  ```bash
  # Docker
  docker-compose up -d      # add --build to rebuild; stop with: docker-compose down

  # Podman
  podman compose -f podman-compose.yml up -d      # add --build to rebuild; stop with: podman compose -f podman-compose.yml down
  ```
- Access the service: The service will be available at http://localhost:7860.
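If you prefer to try the pre-built image directly rather than through Compose, a minimal sketch, assuming the image's default entrypoint starts the Gradio app on port 7860 as described above, would be:

```bash
# Sketch only: run the pre-built image and expose the default Gradio port
docker run -it --rm -p 7860:7860 athomasson2/ebook2audiobook:latest
```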
Build options in docker-compose.yml:

```yaml
SKIP_XTTS_TEST: "true"   # Saves space by not baking the XTTS model into the Docker image
TORCH_VERSION: cuda118   # Available tags: [cuda121, cuda118, cuda128, rocm, xpu, cpu]
                         # All CUDA version numbers should work, e.g. CUDA 11.6 -> cuda116
```

A headless example is already contained within the `docker-compose.yml` file.
The `docker-compose.yml` file acts as the base directory for any headless commands added.
By default, all Compose containers share the contents of your local `ebook2audiobook` folder.

- My NVIDIA GPU isn't being detected? -> See the GPU ISSUES Wiki page.
For an XTTSv2 custom model, a reference audio clip of the voice is mandatory.

- Supported input formats: .epub, .pdf, .mobi, .txt, .html, .rtf, .chm, .lit, .pdb, .fb2, .odt, .cbr, .cbz, .prc, .lrf, .pml, .snb, .cbc, .rb, .tcr
- Best results: .epub or .mobi for automatic chapter detection
- Creates an audio file in one of ['m4b', 'm4a', 'mp4', 'webm', 'mov', 'mp3', 'flac', 'wav', 'ogg', 'aac'] (set in ./lib/conf.py) with metadata and chapters.
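For instance, to pick one of those output formats in headless mode, the documented --output_format and --output_dir options can be combined as below (paths and the chosen format are placeholders):

```bash
# Hypothetical example: produce an MP3 audiobook into a custom output directory
./ebook2audiobook.sh --headless --ebook ./books/my_novel.epub \
    --language eng --output_format mp3 --output_dir ./audiobooks
```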
To update:

```bash
git pull                                           # locally / Compose
docker pull athomasson2/ebook2audiobook:latest     # for pre-built Docker images
```

You are free to modify lib/conf.py to add or remove the settings you wish. If you plan to do so, keep a copy of the original conf.py: on each ebook2audiobook update, back up your modified conf.py and put the original one back. Plan the same process for models.py. If you would like your own custom model added as an official ebook2audiobook fine-tuned model, please contact us and we'll add it to the models.py list.
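A minimal sketch of that backup routine, assuming your customized copy is kept next to the original (file names are arbitrary):

```bash
# Sketch of the suggested workflow: save your customized conf.py, restore the
# original before updating, then re-apply your changes afterwards
cp lib/conf.py lib/conf.py.custom     # save your modified settings
git checkout -- lib/conf.py           # put the original conf.py back
git pull                              # update ebook2audiobook
cp lib/conf.py.custom lib/conf.py     # re-apply your settings (check for new options first)
```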
Releases can be found on the GitHub Releases page.

```bash
git checkout tags/VERSION_NUM               # locally / Compose -> example: git checkout tags/v25.7.7
athomasson2/ebook2audiobook:VERSION_NUM     # for pre-built Docker images -> example: athomasson2/ebook2audiobook:v25.7.7
```

- My NVIDIA GPU isn't being detected? -> See the GPU ISSUES Wiki page.
- CPU conversion is slow (better on server SMP CPUs), while an NVIDIA GPU can convert in almost real time (see the discussion about this). For faster multilingual generation, I would suggest my other project, which uses piper-tts instead (it doesn't have zero-shot voice cloning and the voices are Siri quality, but it is much faster on CPU).
- "I'm having dependency issues" - Just use the Docker image; it's fully self-contained and has a headless mode. Add the --help parameter at the end of the docker run command for more information.
- "I'm getting a truncated audio issue!" - PLEASE MAKE AN ISSUE OF THIS; we don't speak every language and need advice from users to fine-tune the sentence splitting logic.
- Any help from people speaking any of the supported languages to improve the models
- Coqui TTS: Coqui TTS GitHub
- Calibre: Calibre Website
- FFmpeg: FFmpeg Website
- @shakenbake15 for better chapter saving method