Code repository for the paper *Music Boundary Detection using Convolutional Neural Networks: A comparative analysis of combined input features*. See the arXiv preprint. The paper is currently under review at the International Journal of Interactive Multimedia and Artificial Intelligence.
Music Structure Segmentation is a research area within Music Information Retrieval (MIR). Since 2009, MIREX campaigns have evaluated algorithms for this task, which include both unsupervised methods and supervised neural networks. These methods take as input audio features such as MFCCs, chroma vectors or spectrograms, as well as the well-known self-similarity matrices (SSMs) and self-similarity lag matrices (SSLMs).
Check the SelfSimilarityMatrices repository and its notebooks to follow the computation of the input SSLMs step by step.
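As a rough sketch of what those notebooks compute, an SSM and a lag matrix can be derived from any frame-level feature matrix. Cosine similarity is assumed here for simplicity; the repository's notebooks may use a different distance and additional smoothing:

```python
import numpy as np

def self_similarity(features):
    """SSM via cosine similarity; features has shape (n_frames, n_dims)."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    return f @ f.T  # shape (n_frames, n_frames)

def lag_matrix(ssm, max_lag):
    """SSLM-style view: similarity of frame i to frame i - lag."""
    n = ssm.shape[0]
    sslm = np.zeros((n, max_lag))
    for i in range(n):
        for lag in range(1, min(max_lag, i) + 1):
            sslm[i, lag - 1] = ssm[i, i - lag]
    return sslm
```

In the lag representation, repeated sections show up as horizontal stripes rather than the diagonal stripes of the plain SSM, which is why both are useful as CNN inputs.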
- Fix Training script
- Fix Evaluation script
- Write a test script that takes an audio file and outputs the predicted boundaries
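Such a test script would presumably end with a peak-picking step that turns the network's frame-wise boundary probabilities into boundary positions. A minimal, hypothetical sketch (the threshold and minimum-gap values are illustrative, not taken from the paper):

```python
import numpy as np

def pick_boundaries(novelty, threshold=0.5, min_gap=4):
    """Return frame indices of local maxima above `threshold`,
    keeping peaks at least `min_gap` frames apart."""
    peaks = []
    for i in range(1, len(novelty) - 1):
        is_peak = novelty[i] >= novelty[i - 1] and novelty[i] >= novelty[i + 1]
        if is_peak and novelty[i] > threshold:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks
```

The frame indices would then be converted to seconds with the hop size used for feature extraction.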
boundariesdetectioncnn/
module containing the CNN architecture plus training and data utilities.
boundariesdetectioncnn/models
CNN models.
boundariesdetectioncnn/data
data handling tools.
boundariesdetectioncnn/train
model training tools.
boundariesdetectioncnn/evaluation
model evaluation tools.
notebooks/
tutorial notebooks.
tests/
unit tests.
cd path/to/boundariesdetectioncnn
python setup.py install
Python 3.5 or later is required. On Ubuntu, Mint and Debian, Python 3 can be installed as follows:
sudo apt-get install python3 python3-pip
sudo pip install librosa
If you use conda/Anaconda environments, librosa can be installed from the conda-forge channel:
conda install -c conda-forge librosa
Below is a list of the datasets commonly used in Music Structural Analysis. This model has been trained, evaluated and tested on the SALAMI 2.0 dataset.
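SALAMI ships its ground truth as plain-text annotation files. Assuming the common two-column `time<TAB>label` layout of its parsed annotation files (an assumption; check the dataset documentation for the exact variant you download), a minimal reader could look like:

```python
def parse_annotation(text):
    """Parse two-column annotation text (time<TAB>label per line)
    into a list of (time_in_seconds, label) tuples."""
    boundaries = []
    for line in text.strip().splitlines():
        time_str, label = line.split("\t", 1)
        boundaries.append((float(time_str), label))
    return boundaries
```

The boundary times extracted this way are what the training targets and the evaluation (hit rate within a tolerance window) are computed from.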
RWC Goto Annotations: http://staff.aist.go.jp/m.goto/RWC-MDB/AIST-Annotation
RWC Quaero Project Annotations (MIREX10): http://musicdata.gforge.inria.fr/
Beatles-TUT Annotations: http://www.cs.tut.fi/sgn/arg/paulus/beatles_sections_TUT.zip
Isophonic Beatles or Beatles-ISO Annotations: http://isophonics.net/content/reference-annotations
SALAMI 2.0: https://ddmal.music.mcgill.ca/research/SALAMI/
[1] Cohen-Hadria, A., & Peeters, G. (2017, June). Music structure boundaries estimation using multiple self-similarity matrices as input depth of convolutional neural networks. In Audio Engineering Society Conference: 2017 AES International Conference on Semantic Audio. Audio Engineering Society.
[2] Grill, T., & Schlüter, J. (2015, October). Music Boundary Detection Using Neural Networks on Combined Features and Two-Level Annotations. In ISMIR (pp. 531-537).
[3] Grill, T., & Schlüter, J. (2015, August). Music boundary detection using neural networks on spectrograms and self-similarity lag matrices. In 2015 23rd European Signal Processing Conference (EUSIPCO) (pp. 1296-1300). IEEE.
[4] Serrà, J., Müller, M., Grosche, P., & Arcos, J. L. (2012, July). Unsupervised detection of music boundaries by time series structure features. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 26, No. 1).
- Carlos Hernández - carloshero@unizar.es
- David Díaz-Guerra - ddga@unizar.es
- José Ramón Beltrán - jrbelbla@unizar.es