
    • Improving Polyphonic and Poly-Instrumental Music to Score Alignment 

      Ferréol Soulez; Xavier Rodet; Diemo Schwarz (Johns Hopkins University, 2003-10-26)
      Music alignment links events in a score with points on the audio performance time axis. All the parts of a recording can thus be indexed according to score information. The automatic alignment presented in this paper is based ...
    • Features for audio and music classification 

      Martin McKinney; Jeroen Breebaart (Johns Hopkins University, 2003-10-26)
      Four audio feature sets are evaluated in their ability to classify five general audio classes and seven popular music genres. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and ...
    • Toward the Scientific Evaluation of Music Information Retrieval Systems 

      J. Stephen Downie (Johns Hopkins University, 2003-10-26)
      This paper outlines the findings-to-date of a project to assist in the efforts being made to establish a TREC-like evaluation paradigm within the Music Information Retrieval (MIR) research community. The findings and ...
    • Ground-Truth Transcriptions of Real Music from Force-Aligned MIDI Syntheses 

      Robert J. Turetsky; Daniel P. W. Ellis (Johns Hopkins University, 2003-10-26)
      Many modern polyphonic music transcription algorithms are presented in a statistical pattern recognition framework. But without a large corpus of real-world music transcribed at the note level, these algorithms are unable ...
    • Blind Clustering of Popular Music Recordings Based on Singer Voice Characteristics 

      Wei-Ho Tsai; Hsin-Min Wang; Dwight Rodgers; Shi-Sian Cheng; Hung-Min Yu (Johns Hopkins University, 2003-10-26)
      This paper presents an effective technique for automatically clustering undocumented music recordings based on their associated singer. This serves as an indispensable step towards indexing and content-based information ...
    • Automatic Segmentation, Learning and Retrieval of Melodies Using A Self-Organizing Neural Network 

      Steven Harford (Johns Hopkins University, 2003-10-26)
      We introduce a neural network, known as SONNET-MAP, capable of automatic segmentation, learning and retrieval of melodies. SONNET-MAP is a synthesis of Nigrin’s SONNET (Self-Organizing Neural NETwork) architecture and an ...
    • The C-BRAHMS project 

      Kjell Lemström; Veli Mäkinen; Anna Pienimäki; Mika Turkia; Esko Ukkonen (Johns Hopkins University, 2003-10-26)
      The C-BRAHMS project develops computational methods for content-based retrieval and analysis of music data. A summary of the recent algorithmic and experimental developments of the project is given. A search engine developed ...
    • A HMM-Based Pitch Tracker for Audio Queries 

      Nicola Orio; Matteo Sisti Sette (Johns Hopkins University, 2003-10-26)
      In this paper we present an approach to the transcription of musical queries based on an HMM. The HMM is used to model the audio features related to the singing voice, and the transcription is obtained through Viterbi ...
    • A scalable Peer-to-Peer System for Music Content and Information Retrieval 

      George Tzanetakis; Jun Gao; Peter Steenkiste (Johns Hopkins University, 2003-10-26)
      Currently a large percentage of Internet traffic consists of music files, typically stored in MP3 compressed audio format, shared and exchanged over Peer-to-Peer (P2P) networks. Searching for music is performed by specifying ...
    • Quantitative Comparisons into Content-Based Music Recognition with the Self Organising Map 

      Gavin Wood; Simon O'Keefe (Johns Hopkins University, 2003-10-26)
      With so much modern music being so widely available both in electronic form and in more traditional physical formats, a great opportunity exists for the development of a general-purpose recognition and music classification ...
    • Automatic Music Transcription from Multiphonic MIDI Signals 

      Haruto Takeda; Takuya Nishimoto; Shigeki Sagayama (Johns Hopkins University, 2003-10-26)
      For automatically transcribing human-performed polyphonic music recorded in the MIDI format, rhythm and tempo are decomposed through probabilistic modeling using Viterbi search in HMM for recognizing the rhythm and EM ...
    • The MUSART Testbed for Query-By-Humming Evaluation 

      Roger Dannenberg; William Birmingham; George Tzanetakis; Colin Meek; Ning Hu; Bryan Pardo (Johns Hopkins University, 2003-10-26)
      Evaluating music information retrieval systems is acknowledged to be a difficult problem. We have created a database and a software testbed for the systematic evaluation of various query-by-humming (QBH) search systems. ...
    • Was Parsons right? An experiment in usability of music representations for melody-based music retrieval 

      Alexandra Uitdenbogerd; Yaw Wah Yap (Johns Hopkins University, 2003-10-26)
      In 1975 Parsons developed his dictionary of musical themes based on a simple contour representation. The motivation was that people with little training in music would be able to identify pieces of music. We decided to ...
    • Effects of song familiarity, singing training and recent song exposure on the singing of melodies 

      Steffen Pauws (Johns Hopkins University, 2003-10-26)
      Findings of a singing experiment are presented in which trained and untrained singers sang melodies of familiar and less familiar Beatles songs from memory and after listening to the original song on CD. Results showed ...
    • The MAMI Query-By-Voice Experiment: Collecting and annotating vocal queries for music information retrieval 

      Micheline Lesaffre; Koen Tanghe; Gaëtan Martens; Dirk Moelants; Marc Leman (Johns Hopkins University, 2003-10-26)
      The MIR research community requires coordinated strategies in dealing with databases for system development and experimentation. Manually annotated files can accelerate the development of accurate analysis tools for music ...
    • The dangers of parsimony in query-by-humming applications 

      Colin Meek; William Birmingham (Johns Hopkins University, 2003-10-26)
      Query-by-humming systems attempt to address the needs of the non-expert user, for whom the most natural query format -- for the purposes of finding a tune, hook or melody of unknown provenance -- is to sing it. While human ...
    • Effectiveness of HMM-Based Retrieval on Large Databases 

      Jonah Shifrin; William Birmingham (Johns Hopkins University, 2003-10-26)
      We have investigated the performance of a hidden Markov model based QBH retrieval system on a large musical database. The database is synthetic, generated from statistics gleaned from our (smaller) database of musical ...
    • Chord Segmentation and Recognition using EM-Trained Hidden Markov Models 

      Alexander Sheh; Daniel P.W. Ellis (Johns Hopkins University, 2003-10-26)
      Automatic extraction of content description from commercial audio recordings has a number of important applications, from indexing and retrieval through to novel musicological analyses based on very large corpora of recorded ...
    • Geometric Algorithms for Transposition Invariant Content-Based Music Retrieval 

      Esko Ukkonen; Kjell Lemström; Veli Mäkinen (Johns Hopkins University, 2003-10-26)
      We represent music as sets of points or sets of horizontal line segments in the Euclidean plane. Via this geometric representation we cast transposition invariant content-based music retrieval problems as ones of matching ...
    • Classification of Dance Music by Periodicity Patterns 

      Simon Dixon; Elias Pampalk; Gerhard Widmer (Johns Hopkins University, 2003-10-26)
      This paper addresses the genre classification problem for a specific subset of music, standard and Latin ballroom dance music, using a classification method based only on timing information. We compare two methods of ...