Now showing items 1-20 of 48

    • An Auditory Model Based Transcriber of Vocal Queries 

      Tom De Mulder; Jean-Pierre Martens; Micheline Lesaffre; Marc Leman; Bernard De Baets; Hans De Meyer (Johns Hopkins University, 2003-10-26)
      In this paper a new auditory model-based transcriber of melodic queries produced by a human voice is presented. The newly presented system is tested systematically, together with some other state-of-the-art systems, on ...
    • Using morphological description for generic sound retrieval 

      Julien Ricard; Perfecto Herrera (Johns Hopkins University, 2003-10-26)
      Systems for sound retrieval are usually “source-centred”. This means that retrieval is based on using the proper keywords that define or specify a sound source. Although this type of description is of great interest, it ...
    • Music identification by Leadsheets 

      Frank Seifert; Wolfgang Benn (Johns Hopkins University, 2003-10-26)
      Most experimental research on content-based automatic recognition and identification of musical documents is founded on statistical distribution of timbre or simple retrieval mechanisms like comparison of melodic segments. ...
    • Design Patterns in XML Music Representation 

      Perry Roland (Johns Hopkins University, 2003-10-26)
      Design patterns attempt to formalize the discussion of recurring problems and their solutions. This paper introduces several XML design patterns and demonstrates their usefulness in the development of XML music representations. ...
    • Key-specific Shrinkage Techniques for Harmonic Models 

      Jeremy Pickens (Johns Hopkins University, 2003-10-26)
      Statistical modeling of music is rapidly gaining acceptance as a viable approach to a host of Music Information Retrieval related tasks, from transcription to ad hoc retrieval. As music may be viewed as an evolving pattern ...
    • Chopin Early Editions: Construction and Usage of Online Digital Scores 

      Tod Olson; J. Stephen Downie (Johns Hopkins University, 2003-10-26)
      The University of Chicago Library has digitized a collection of 19th century music scores. The online collection is generated programmatically from the scanned images and human-created descriptive and structural metadata, ...
    • Automatic Music Transcription from Multiphonic MIDI Signals 

      Haruto Takeda; Takuya Nishimoto; Shigeki Sagayama (Johns Hopkins University, 2003-10-26)
      For automatically transcribing human-performed polyphonic music recorded in the MIDI format, rhythm and tempo are decomposed through probabilistic modeling, using Viterbi search in an HMM for recognizing the rhythm and EM ...
    • Rhythmic Similarity through Elaboration 

      Mitchell Parry; Irfan Essa (Johns Hopkins University, 2003-10-26)
      Rhythmic similarity techniques for audio tend to evaluate how close to identical two rhythms are. This paper proposes a similarity metric based on rhythmic elaboration that matches rhythms that share the same beats regardless ...
    • Music Notation as a MEI Feasibility Test 

      Baron Schwartz (Johns Hopkins University, 2003-10-26)
      This project demonstrated that enough information can be retrieved from MEI, an XML format for musical information representation, to transform it into music notation with good fidelity. The process involved writing an ...
    • Ground-Truth Transcriptions of Real Music from Force-Aligned MIDI Syntheses 

      Robert J. Turetsky; Daniel P. W. Ellis (Johns Hopkins University, 2003-10-26)
      Many modern polyphonic music transcription algorithms are presented in a statistical pattern recognition framework. But without a large corpus of real-world music transcribed at the note level, these algorithms are unable ...
    • Improving Polyphonic and Poly-Instrumental Music to Score Alignment 

      Ferréol Soulez; Xavier Rodet; Diemo Schwarz (Johns Hopkins University, 2003-10-26)
      Music alignment links events in a score to points on the time axis of an audio performance. All parts of a recording can thus be indexed according to score information. The automatic alignment presented in this paper is based ...
    • The MUSART Testbed for Query-By-Humming Evaluation 

      Roger Dannenberg; William Birmingham; George Tzanetakis; Colin Meek; Ning Hu; Bryan Pardo (Johns Hopkins University, 2003-10-26)
      Evaluating music information retrieval systems is acknowledged to be a difficult problem. We have created a database and a software testbed for the systematic evaluation of various query-by-humming (QBH) search systems. ...
    • Blind Clustering of Popular Music Recordings Based on Singer Voice Characteristics 

      Wei-Ho Tsai; Hsin-Min Wang; Dwight Rodgers; Shi-Sian Cheng; Hung-Min Yu (Johns Hopkins University, 2003-10-26)
      This paper presents an effective technique for automatically clustering undocumented music recordings based on their associated singer. This serves as an indispensable step towards indexing and content-based information ...
    • Toward the Scientific Evaluation of Music Information Retrieval Systems 

      J. Stephen Downie (Johns Hopkins University, 2003-10-26)
      This paper outlines the findings-to-date of a project to assist in the efforts being made to establish a TREC-like evaluation paradigm within the Music Information Retrieval (MIR) research community. The findings and ...
    • Automatic Segmentation, Learning and Retrieval of Melodies Using A Self-Organizing Neural Network 

      Steven Harford (Johns Hopkins University, 2003-10-26)
      We introduce a neural network, known as SONNET-MAP, capable of automatic segmentation, learning and retrieval of melodies. SONNET-MAP is a synthesis of Nigrin’s SONNET (Self-Organizing Neural NETwork) architecture and an ...
    • The dangers of parsimony in query-by-humming applications 

      Colin Meek; William Birmingham (Johns Hopkins University, 2003-10-26)
      Query-by-humming systems attempt to address the needs of the non-expert user, for whom the most natural query format -- for the purposes of finding a tune, hook or melody of unknown provenance -- is to sing it. While human ...
    • Effectiveness of HMM-Based Retrieval on Large Databases 

      Jonah Shifrin; William Birmingham (Johns Hopkins University, 2003-10-26)
      We have investigated the performance of a hidden Markov model based QBH retrieval system on a large musical database. The database is synthetic, generated from statistics gleaned from our (smaller) database of musical ...
    • Features for audio and music classification 

      Martin McKinney; Jeroen Breebaart (Johns Hopkins University, 2003-10-26)
      Four audio feature sets are evaluated in their ability to classify five general audio classes and seven popular music genres. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and ...
    • The C-BRAHMS project 

      Kjell Lemström; Veli Mäkinen; Anna Pienimäki; Mika Turkia; Esko Ukkonen (Johns Hopkins University, 2003-10-26)
      The C-BRAHMS project develops computational methods for content-based retrieval and analysis of music data. A summary of the recent algorithmic and experimental developments of the project is given. A search engine developed ...
    • A HMM-Based Pitch Tracker for Audio Queries 

      Nicola Orio; Matteo Sisti Sette (Johns Hopkins University, 2003-10-26)
      In this paper we present an approach to the transcription of musical queries based on an HMM. The HMM is used to model the audio features related to the singing voice, and the transcription is obtained through Viterbi ...