{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T11:05:01Z","timestamp":1776078301156,"version":"3.50.1"},"reference-count":25,"publisher":"Wiley","license":[{"start":{"date-parts":[[2021,6,10]],"date-time":"2021-06-10T00:00:00Z","timestamp":1623283200000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Mobile Information Systems"],"published-print":{"date-parts":[[2021,6,10]]},"abstract":"<jats:p>With the advancement of multimedia and digital technologies, music resources on the Internet are growing rapidly, which has shifted listeners\u2019 habits from local hard drives to online music platforms. This has allowed researchers to apply classification technologies to the efficient storage, organization, retrieval, and recommendation of music resources. Traditional music classification methods rely on many hand-designed acoustic features, which require domain knowledge of music, and the features designed for one classification task are often not universal across tasks. This paper addresses the problem by proposing a novel recurrent neural network method with a channel attention mechanism for music feature classification. Music classification methods based only on a convolutional neural network ignore the temporal characteristics of the audio itself. Therefore, this paper combines a convolutional structure with a bidirectional recurrent neural network and uses the attention mechanism to assign different attention weights to the outputs of the recurrent neural network at different time steps, yielding a better representation of the overall characteristics of the music. The classification accuracy of the model on the GTZAN data set increases to 93.1%. The AUC on the multilabel MagnaTagATune data set reaches 92.3%, surpassing other comparison methods. The labeling of different music tags has been analyzed: the method labels most music-genre tags well and also performs well on some instrument, singing, and emotion tags.<\/jats:p>","DOI":"10.1155\/2021\/7629994","type":"journal-article","created":{"date-parts":[[2021,6,11]],"date-time":"2021-06-11T18:20:13Z","timestamp":1623435613000},"page":"1-10","source":"Crossref","is-referenced-by-count":22,"title":["Music Feature Classification Based on Recurrent Neural Networks with Channel Attention Mechanism"],"prefix":"10.1155","volume":"2021","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4351-0364","authenticated-orcid":true,"given":"Jie","family":"Gan","sequence":"first","affiliation":[{"name":"Huanghuai University, Zhumadian, Henan 463000, China"}]}],"member":"311","reference":[{"issue":"1","key":"1","article-title":"Design and implementation of fashion music resource website based on asp","volume":"1544","author":"Z. Li","year":"2020","journal-title":"Journal of Physics: Conference Series"},{"key":"2","doi-asserted-by":"publisher","DOI":"10.5334\/tismir.10"},{"key":"3","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-14802-7_10"},{"key":"4","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-63820-7_88"},{"key":"5","doi-asserted-by":"publisher","DOI":"10.1109\/jiot.2021.3071398"},{"key":"6","doi-asserted-by":"publisher","DOI":"10.1145\/3177849"},{"key":"7","doi-asserted-by":"publisher","DOI":"10.1002\/cpe.6147"},{"key":"8","doi-asserted-by":"publisher","DOI":"10.1109\/lgrs.2020.3026587"},{"key":"9","doi-asserted-by":"publisher","DOI":"10.1142\/s0218001418500428"},{"key":"10","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-020-10188-x"},{"key":"11","doi-asserted-by":"publisher","DOI":"10.1109\/jiot.2021.3058982"},{"key":"12","doi-asserted-by":"publisher","DOI":"10.52810\/tpris.2021.100012"},{"issue":"4","key":"13","doi-asserted-by":"crossref","first-page":"614","DOI":"10.1109\/TAFFC.2018.2820691","article-title":"Novel audio features for music emotion recognition","volume":"11","author":"R. Panda","year":"2018","journal-title":"IEEE Transactions on Affective Computing"},{"key":"14","doi-asserted-by":"publisher","DOI":"10.52810\/tpris.2021.100018"},{"key":"15","doi-asserted-by":"publisher","DOI":"10.1002\/cpe.6276"},{"key":"16","article-title":"A tutorial on deep learning for music information retrieval","author":"K. Choi","year":"2017"},{"key":"17","doi-asserted-by":"publisher","DOI":"10.1016\/j.future.2021.01.004"},{"key":"18","doi-asserted-by":"publisher","DOI":"10.5334\/tismir.13"},{"key":"19","doi-asserted-by":"publisher","DOI":"10.3389\/fpsyg.2017.01179"},{"key":"20","first-page":"417","article-title":"Attentive neural architecture incorporating song features for music recommendation","author":"N. Sachdeva"},{"key":"21","article-title":"Multi-label music genre classification from audio, text, and images using deep features","author":"S. Oramas","year":"2017"},{"key":"22","doi-asserted-by":"publisher","DOI":"10.3390\/s21041129"},{"key":"23","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2021.3066240"},{"key":"24","doi-asserted-by":"publisher","DOI":"10.1007\/s12652-020-02572-0"},{"key":"25","doi-asserted-by":"publisher","DOI":"10.1109\/mnet.011.2000303"}],"container-title":["Mobile Information Systems"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/misy\/2021\/7629994.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/misy\/2021\/7629994.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/misy\/2021\/7629994.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,6,11]],"date-time":"2021-06-11T18:20:18Z","timestamp":1623435618000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/misy\/2021\/7629994\/"}},"subtitle":[],"editor":[{"given":"Fazlullah","family":"Khan","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2021,6,10]]},"references-count":25,"alternative-id":["7629994","7629994"],"URL":"https:\/\/doi.org\/10.1155\/2021\/7629994","relation":{},"ISSN":["1875-905X","1574-017X"],"issn-type":[{"value":"1875-905X","type":"electronic"},{"value":"1574-017X","type":"print"}],"subject":[],"published":{"date-parts":[[2021,6,10]]}}}