
dc.contributor.author: Tao Li
dc.contributor.author: Mitsunori Ogihara
dc.contributor.editor: Holger H. Hoos
dc.contributor.editor: David Bainbridge
dc.date.accessioned: 2004-10-21T04:26:36Z
dc.date.available: 2004-10-21T04:26:36Z
dc.date.issued: 2003-10-26
dc.identifier.isbn: 0-9746194-0-X
dc.identifier.uri: http://jhir.library.jhu.edu/handle/1774.2/41
dc.description.abstract: Detection of emotion in music sounds is an important problem in music indexing. This paper studies the problem of identifying emotion in music through sound-signal processing. The problem is cast as a multiclass classification problem, decomposed into multiple binary classification problems, and solved with Support Vector Machines trained on the timbral-texture, rhythmic-content, and pitch-content features extracted from the sound data. Experiments were carried out on a data set of 499 thirty-second music clips spanning ambient, classical, fusion, and jazz. Classification into the ten adjective groups of Farnsworth (plus three additional groups), as well as into six supergroups formed by combining these basic groups, was attempted. Reasonably accurate performance was achieved for some groups and supergroups.
dc.format.extent: 41901 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Johns Hopkins University
dc.subject: IR Systems and Algorithms
dc.subject: Digital Libraries
dc.title: Detecting Emotion in Music
dc.type: Article
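
The abstract describes decomposing a multiclass emotion-labeling task into multiple binary classifiers, one Support Vector Machine per adjective group, trained on timbral, rhythmic, and pitch features. Below is a minimal sketch of that setup, assuming precomputed feature vectors; the feature dimensions, the randomly generated data, and the scikit-learn one-vs-rest wrapper are illustrative assumptions, not details taken from the paper.

    # Sketch: multiclass emotion labeling decomposed into one binary SVM per
    # adjective group (one-vs-rest), as the abstract describes. All data here
    # is synthetic stand-in material so the script runs on its own.
    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Stand-in for the timbral, rhythmic, and pitch features extracted from
    # each of the 499 thirty-second clips (30 dimensions is illustrative).
    X = rng.normal(size=(499, 30))
    # Stand-in labels: one of the 13 adjective groups (10 Farnsworth + 3).
    y = rng.integers(0, 13, size=499)

    # One binary RBF-kernel SVM per emotion group, on standardized features.
    clf = OneVsRestClassifier(make_pipeline(StandardScaler(), SVC(kernel="rbf")))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean cross-validated accuracy: {scores.mean():.3f}")

With real data, X would come from an audio feature extractor and y from listener annotations over the adjective groups; the same one-per-class decomposition applies to the six supergroups.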

