Music identification by Leadsheets

Show simple item record

dc.contributor.author Frank Seifert en_US
dc.contributor.author Wolfgang Benn en_US
dc.contributor.editor Holger H. Hoos en_US
dc.contributor.editor David Bainbridge en_US
dc.date.accessioned 2004-10-21T04:26:41Z
dc.date.available 2004-10-21T04:26:41Z
dc.date.issued 2003-10-26 en_US
dc.identifier.isbn 0-9746194-0-X en_US
dc.identifier.uri http://jhir.library.jhu.edu/handle/1774.2/52
dc.description.abstract Most experimental research on content-based automatic recognition and identification of musical documents is founded on the statistical distribution of timbre or on simple retrieval mechanisms such as the comparison of melodic segments. As a result, queries often return a vast number of relevant and irrelevant hits, including multiple appearances of the same document, or fail to reveal the actual document at all. To improve this situation, we propose a model for the recognition of music that enables the identification and comparison of musical documents independently of their actual instantiation. The resulting structures capture musical meaning and can be used to estimate identity and semantic relationships between musical documents. en_US
dc.description.provenance Made available in DSpace on 2004-10-21T04:26:41Z (GMT). No. of bitstreams: 1 paper.pdf: 414677 bytes, checksum: 4d865df9403270827699b381dbcafdb7 (MD5) Previous issue date: 2003-10-26 en
dc.format.extent 414677 bytes
dc.format.mimetype application/pdf
dc.language en en_US
dc.language.iso en_US
dc.publisher Johns Hopkins University en_US
dc.subject Music Analysis en_US
dc.subject Digital Libraries en_US
dc.title Music identification by Leadsheets en_US
dc.type article en_US
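
The abstract contrasts the proposed model with "simple retrieval mechanisms like comparison of melodic segments." As a hypothetical illustration of that baseline (not the authors' model), a melody can be reduced to its sequence of pitch intervals, which makes the comparison transposition-invariant, one simple form of independence from a tune's actual instantiation:

```python
# Hypothetical sketch of melodic-segment comparison, assuming melodies
# are given as lists of absolute MIDI pitch numbers. The interval
# representation is transposition-invariant: the same tune in two
# different keys yields the same interval sequence.

def intervals(midi_pitches):
    """Convert absolute MIDI pitches to successive pitch intervals."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

def contains_segment(melody, segment):
    """True if the interval sequence of `segment` occurs in `melody`."""
    m, s = intervals(melody), intervals(segment)
    return any(m[i:i + len(s)] == s for i in range(len(m) - len(s) + 1))

# Opening of "Frere Jacques" in C major, and a query segment
# transposed up a whole tone (D major): the intervals still match.
c_major = [60, 62, 64, 60, 60, 62, 64, 60]
d_major_query = [62, 64, 66, 62]
print(contains_segment(c_major, d_major_query))  # True
```

Because exact interval matching fires on any tune sharing the same local contour, such a baseline tends to return many spurious hits, which is the retrieval problem the abstract sets out to address.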

Files in this item

Files Size Format
paper.pdf 414.6Kb application/pdf
