Show simple item record

dc.creator Viloria, Amelec
dc.creator Pineda Lezama, Omar Bonerge
dc.creator Cabrera, Danelys
dc.description.abstract Over the past few years there has been a tendency to store audio tracks for later use on CD-DVDs, HDD-SSDs, and the internet, which makes it challenging to classify the information either online or offline. For this purpose, the audio tracks must be tagged. Tags are texts based on the semantic information of the sound [1]. Music analysis can therefore be done in several ways [2], since music is identified by its genre, artist, instruments, and structure through a tagging system that can be manual or automatic. Manual tagging allows the behavior of an audio track to be visualized either in the time domain or in the frequency domain, as in the spectrogram, making it possible to classify songs without listening to them. However, this process is very time consuming and labor intensive, and can even cause health problems [3]: "the volume, sound sensitivity, time and cost required for a manual labeling process is generally prohibitive." Automatic labelling requires three fundamental steps: pre-processing, feature extraction, and classification [4]. The present study developed an algorithm for performing automatic classification of music genres using a segmentation process employing spectral characteristics such as centroid (SC), flatness (SF) and spread (SS), as well as a time spectral
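The spectral characteristics named in the abstract can be sketched in a few lines of NumPy. This is an illustrative implementation using the common textbook definitions of spectral centroid, spread, and flatness over one audio frame; it is not the authors' implementation, and the exact formulation in the paper may differ.

```python
import numpy as np

def spectral_features(frame, sample_rate):
    """Compute spectral centroid (SC), spread (SS) and flatness (SF)
    for one audio frame. Textbook definitions, for illustration only."""
    spectrum = np.abs(np.fft.rfft(frame))                 # magnitude spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    power = spectrum ** 2
    total = power.sum() + 1e-12                           # guard against silence

    # SC: power-weighted mean frequency of the spectrum
    centroid = (freqs * power).sum() / total
    # SS: power-weighted standard deviation around the centroid
    spread = np.sqrt(((freqs - centroid) ** 2 * power).sum() / total)
    # SF: geometric mean / arithmetic mean of the magnitude spectrum
    # (near 0 for tonal sounds, near 1 for noise-like sounds)
    flatness = np.exp(np.mean(np.log(spectrum + 1e-12))) / (spectrum.mean() + 1e-12)
    return centroid, spread, flatness

# Example: a pure 440 Hz tone has a centroid near 440 Hz,
# a small spread, and a flatness close to 0
sr = 8000
t = np.arange(sr) / sr
sc, ss, sf = spectral_features(np.sin(2 * np.pi * 440 * t), sr)
```

Frame-wise features like these would then feed the classification stage described in the abstract; the segmentation step determines which frames are summarized together.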
dc.publisher Corporación Universidad de la Costa
dc.rights CC0 1.0 Universal
dc.source Procedia Computer Science
dc.subject Supervised learning algorithms
dc.subject Music genres classification
dc.subject Centroid (SC)
dc.subject Flatness (SF)
dc.subject Spread (SS)
dc.title Segmentation process and spectral characteristics in the determination of musical genres
dcterms.references [1] Viloria, A., Vargas, J., Cali, E. G., Sierra, D. M., Villalobos, A. P., Bilbao, O. R., … Hernández-Palma, H. (2020). Big Data Marketing During the Period 2012–2019: A Bibliometric Review. In Advances in Intelligent Systems and Computing (Vol. 1039, pp. 186–193). Springer.
dcterms.references [2] Mitrovic, D., Zeppelzauer, M., Eidenberger, H.: Analysis of the Data Quality of Audio Features of Environmental Sounds. Knowledge Creation Diffusion Utilization, pp. 4–17 (2006).
dcterms.references [3] Juthi, J. H., Gomes, A., Bhuiyan, T., & Mahmud, I. (2020). Music Emotion Recognition with the Extraction of Audio Features Using Machine Learning Approaches. In Proceedings of ICETIT 2019 (pp. 318–329). Springer.
dcterms.references [4] Duan, S., Zhang, J., Roe, P.: A survey of tagging techniques for music, speech and environmental sound, pp. 637–661 (2014).
dcterms.references [5] Lee, C. S., Tsai, Y. L., Wang, M. H., Sekino, H., Huang, T. X., Hsieh, W. F., ... & Yamaguchi, T. (2019, November). FML-based Machine Learning Tool for Human Emotional Agent with BCI on Music Application. In 2019 International Conference on Technologies and Applications of Artificial Intelligence (TAAI) (pp. 1–6).
dcterms.references [6] Rana, D., & Sandhu, R. (2019). Music Recommendation System using Machine Learning.
dcterms.references [7] Faisal-Ahmed, P.P., Paul, M.G.: Music Genre Classification Using a Gradient-Based Local Texture Descriptor. Springer International Publishing Switzerland, pp. 99–110 (2016).
dcterms.references [8] Tzanetakis, G.: Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, pp. 293–302 (2002).
dcterms.references [9] Munkhbat, K., & Ryu, K. H. (2020). Classifying Songs to Relieve Stress Using Machine Learning Algorithms. In Advances in Intelligent Information Hiding and Multimedia Signal Processing (pp. 411–417). Springer.
dcterms.references [10] Duarte, A. E. L. (2020). Algorithmic interactive music generation in videogames. SoundEffects-An Interdisciplinary Journal of Sound and Sound Experience, 9(1).
dcterms.references [11] Finley, M., & Razi, A. (2019, January). Musical Key Estimation with Unsupervised Pattern Recognition. In 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC) (pp. 0401–0408).
dcterms.references [12] Pelchat, N., & Gelowitz, C. M. (2019, May). Neural Network Music Genre Classification. In 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE) (pp. 1–4).
dcterms.references [13] Choi, J., Lee, J., Park, J., & Nam, J. (2019). Zero-shot learning for audio-based music classification and tagging. arXiv preprint.
dcterms.references [14] Ahuja, M., & Sangal, A. L. (2018, December). Opinion Mining and Classification of Music Lyrics Using Supervised Learning Algorithms. In 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC) (pp. 223–227).
dcterms.references [15] Calvo-Zaragoza, J., Micó, L., & Oncina, J. (2016). Music staff removal with supervised pixel classification. International Journal on Document Analysis and Recognition (IJDAR), 19(3).
dcterms.references [16] Schreiber, H., & Müller, M. (2017). A Post-Processing Procedure for Improving Music Tempo Estimates Using Supervised Learning. In ISMIR (pp. 235–242).
dcterms.references [17] Benavides, E. S., Charris, F. C., & Viloria, A. (2020). Inequality in Writing Competence at Higher Education in Colombia: With Linear Hierarchical Models. In Advances in Intelligent Systems and Computing (Vol. 1039, pp. 122–132). Springer.
dcterms.references [18] Viloria, A., Lis-Gutiérrez, J. P., Gaitán-Angulo, M., Godoy, A. R. M., Moreno, G. C., & Kamatkar, S. J. (2018). Methodology for the design of a student pattern recognition tool to facilitate the teaching - Learning process through knowledge data discovery (big data). In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10943 LNCS, pp. 670–679). Springer Verlag.
