
    • Ground-Truth Transcriptions of Real Music from Force-Aligned MIDI Syntheses 

      Robert J. Turetsky; Daniel P. W. Ellis (Johns Hopkins University, 2003-10-26)
      Many modern polyphonic music transcription algorithms are presented in a statistical pattern recognition framework. But without a large corpus of real-world music transcribed at the note level, these algorithms are unable ...
    • Blind Clustering of Popular Music Recordings Based on Singer Voice Characteristics 

      Wei-Ho Tsai; Hsin-Min Wang; Dwight Rodgers; Shi-Sian Cheng; Hung-Min Yu (Johns Hopkins University, 2003-10-26)
      This paper presents an effective technique for automatically clustering undocumented music recordings based on their associated singer. This serves as an indispensable step towards indexing and content-based information ...
    • Automatic Segmentation, Learning and Retrieval of Melodies Using A Self-Organizing Neural Network 

      Steven Harford (Johns Hopkins University, 2003-10-26)
      We introduce a neural network, known as SONNET-MAP, capable of automatic segmentation, learning and retrieval of melodies. SONNET-MAP is a synthesis of Nigrin’s SONNET (Self-Organizing Neural NETwork) architecture and an ...
    • Improving Polyphonic and Poly-Instrumental Music to Score Alignment 

      Ferréol Soulez; Xavier Rodet; Diemo Schwarz (Johns Hopkins University, 2003-10-26)
      Music alignment links events in a score to points on the time axis of an audio performance. All parts of a recording can thus be indexed according to score information. The automatic alignment presented in this paper is based ...
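Score-to-audio alignment of this kind is commonly computed with dynamic time warping over a pairwise cost matrix; the following is a minimal sketch, not the authors' system, and the pitch-difference cost is a placeholder for real spectral features:

```python
import numpy as np

def dtw_path(cost):
    """Cheapest monotonic alignment path through a pairwise cost matrix."""
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)   # accumulated cost, 1-based
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)], key=lambda p: D[p])
    return path[::-1]

# Align a "score" pitch sequence with a slowed-down "performance" (toy data).
score = np.array([60, 62, 64])
perf = np.array([60, 60, 62, 64, 64])          # each note held longer
cost = np.abs(score[:, None] - perf[None, :]).astype(float)
print(dtw_path(cost))
```

Each pair in the path maps a score event to a performance frame, which is the indexing the abstract describes.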
    • Features for audio and music classification 

      Martin McKinney; Jeroen Breebaart (Johns Hopkins University, 2003-10-26)
      Four audio feature sets are evaluated in their ability to classify five general audio classes and seven popular music genres. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and ...
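The "low-level signal properties" named above typically include quantities such as the zero-crossing rate and the spectral centroid. A hedged sketch of those two features (illustrative choices, not necessarily the exact set evaluated in the paper):

```python
import numpy as np

def low_level_features(x, sr):
    """Two classic low-level features used in audio classification."""
    # Zero-crossing rate: fraction of adjacent sample pairs changing sign.
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2
    # Spectral centroid: magnitude-weighted mean frequency, in Hz.
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    return zcr, centroid

sr = 8000
t = np.arange(sr) / sr
zcr, centroid = low_level_features(np.sin(2 * np.pi * 440 * t), sr)
print(round(centroid))   # near 440 for a pure 440 Hz tone
```

A classifier would compute such features per frame and feed the statistics to a genre or audio-class model.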
    • The C-BRAHMS project 

      Kjell Lemström; Veli Mäkinen; Anna Pienimäki; Mika Turkia; Esko Ukkonen (Johns Hopkins University, 2003-10-26)
      The C-BRAHMS project develops computational methods for content-based retrieval and analysis of music data. A summary of the recent algorithmic and experimental developments of the project is given. A search engine developed ...
    • A HMM-Based Pitch Tracker for Audio Queries 

      Nicola Orio; Matteo Sisti Sette (Johns Hopkins University, 2003-10-26)
      In this paper we present an approach to the transcription of musical queries based on an HMM. The HMM is used to model the audio features related to the singing voice, and the transcription is obtained through Viterbi ...
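The Viterbi decoding step mentioned above finds the most likely hidden-state path given the observations. A minimal sketch in log space (the two-state model and its probabilities below are invented for illustration, not the authors' voice model):

```python
import numpy as np

def viterbi(log_A, log_B, log_pi, obs):
    """Most likely state path for an observation sequence.

    log_A:  (S, S) log transition probabilities
    log_B:  (S, V) log emission probabilities
    log_pi: (S,)   log initial-state probabilities
    obs:    sequence of observation symbol indices
    """
    S, T = log_A.shape[0], len(obs)
    delta = np.full((T, S), -np.inf)    # best log-prob ending in each state
    psi = np.zeros((T, S), dtype=int)   # back-pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # (prev state, cur state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy model: two hypothetical "note" states, each favouring one symbol.
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])
log_B = np.log([[0.8, 0.2], [0.2, 0.8]])
log_pi = np.log([0.5, 0.5])
print(viterbi(log_A, log_B, log_pi, [0, 0, 1, 1]))
```

In a pitch tracker, states would correspond to candidate notes and emissions to frame-level acoustic features.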
    • Toward the Scientific Evaluation of Music Information Retrieval Systems 

      J. Stephen Downie (Johns Hopkins University, 2003-10-26)
      This paper outlines the findings-to-date of a project to assist in the efforts being made to establish a TREC-like evaluation paradigm within the Music Information Retrieval (MIR) research community. The findings and ...
    • The MUSART Testbed for Query-By-Humming Evaluation 

      Roger Dannenberg; William Birmingham; George Tzanetakis; Colin Meek; Ning Hu; Bryan Pardo (Johns Hopkins University, 2003-10-26)
      Evaluating music information retrieval systems is acknowledged to be a difficult problem. We have created a database and a software testbed for the systematic evaluation of various query-by-humming (QBH) search systems. ...
    • Effectiveness of HMM-Based Retrieval on Large Databases 

      Jonah Shifrin; William Birmingham (Johns Hopkins University, 2003-10-26)
      We have investigated the performance of a hidden Markov model based QBH retrieval system on a large musical database. The database is synthetic, generated from statistics gleaned from our (smaller) database of musical ...
    • Automatic Labelling of Tabla Signals 

      Olivier Gillet; Gaël Richard (Johns Hopkins University, 2003-10-26)
      Most of the recent developments in the field of music indexing and music information retrieval are focused on western music. In this paper, we present an automatic music transcription system dedicated to Tabla - a North ...
    • Application of missing feature theory to the recognition of musical instruments in polyphonic audio 

      Jana Eggink; Guy J. Brown (Johns Hopkins University, 2003-10-26)
      A system for musical instrument recognition based on a Gaussian Mixture Model (GMM) classifier is introduced. To enable instrument recognition when more than one sound is present at the same time, ideas from missing feature ...
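In missing-feature recognition with diagonal-covariance Gaussians, unreliable feature dimensions can be marginalized out simply by dropping them from the likelihood. A hedged sketch (the single-component "instrument" models and the mask below are invented for illustration):

```python
import numpy as np

def gmm_loglik_marginal(x, mask, weights, means, variances):
    """Log-likelihood of x under a diagonal-covariance GMM,
    marginalising out feature dimensions flagged unreliable.

    mask[d] is True where feature d is considered reliable.
    """
    x, mask = np.asarray(x), np.asarray(mask)
    comp = []
    for w, mu, var in zip(weights, means, variances):
        mu, var = np.asarray(mu)[mask], np.asarray(var)[mask]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x[mask] - mu) ** 2 / var)
        comp.append(np.log(w) + ll)
    return np.logaddexp.reduce(comp)   # log-sum-exp over mixture components

# Two hypothetical instrument models (parameters invented).
flute = dict(weights=[1.0], means=[[0.0, 0.0, 0.0]], variances=[[1.0, 1.0, 1.0]])
oboe = dict(weights=[1.0], means=[[3.0, 3.0, 3.0]], variances=[[1.0, 1.0, 1.0]])

x = np.array([0.1, 5.0, -0.2])          # dimension 1 corrupted by another sound
mask = np.array([True, False, True])    # so it is excluded from scoring
scores = {name: gmm_loglik_marginal(x, mask, **m)
          for name, m in [("flute", flute), ("oboe", oboe)]}
print(max(scores, key=scores.get))
```

The corrupted dimension is ignored, so the classification is driven only by the dimensions judged reliable.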
    • Using Transportation Distances for Measuring Melodic Similarity 

      Rainer Typke; Panos Giannopoulos; Remco C. Veltkamp; Frans Wiering; René van Oostrum (Johns Hopkins University, 2003-10-26)
      Most of the existing methods for measuring melodic similarity use one-dimensional textual representations of music notation, so that melodic similarity can be measured by calculating editing distances. We view notes as ...
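A transportation (Earth Mover's) distance measures the minimal effort to move one weighted point set onto another. For equal-weight 1-D point sets the optimum is simply to match sorted values; a hedged sketch on invented pitch sequences (a real system would use weighted 2-D points, e.g. onset time and pitch with duration weights):

```python
def emd_1d(a, b):
    """Earth Mover's Distance between two equal-weight 1-D point sets.

    With equal weights and a 1-D ground distance, the optimal transport
    simply matches the sorted values pairwise.
    """
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

# Two melodic fragments as MIDI pitch numbers (invented examples).
melody = [60, 62, 64, 65]      # C D E F
variant = [60, 62, 64, 67]     # C D E G: one note altered
print(emd_1d(melody, variant))
```

Unlike an edit distance on a textual encoding, this distance grows smoothly with how far a note is displaced, not just with how many symbols differ.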
    • A Large-Scale Evaluation of Acoustic and Subjective Music Similarity Measures 

      Adam Berenzweig; Beth Logan; Daniel Ellis; Brian Whitman (Johns Hopkins University, 2003-10-26)
      Subjective similarity between musical pieces and artists is an elusive concept, but one that must be pursued in support of applications to provide automatic organization of large music collections. In this paper, we examine ...
    • Discovering Musical Pattern through Perceptive Heuristics 

      Olivier Lartillot (Johns Hopkins University, 2003-10-26)
      This paper defends the view that the intricate difficulties challenging the emerging domain of Musical Pattern Discovery, which is dedicated to the automation of motivic analysis, will be overcome only through a thorough ...
    • Was Parsons right? An experiment in usability of music representations for melody-based music retrieval 

      Alexandra Uitdenbogerd; Yaw Wah Yap (Johns Hopkins University, 2003-10-26)
      In 1975 Parsons developed his dictionary of musical themes based on a simple contour representation. The motivation was that people with little training in music would be able to identify pieces of music. We decided to ...
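Parsons' contour representation reduces a melody to its up/down/repeat shape, which is what makes it usable by listeners with little musical training. A minimal encoder (the example melody is a standard illustration, not taken from the paper):

```python
def parsons_code(pitches):
    """Encode a melody as Parsons' contour: * start, U up, D down, R repeat."""
    code = "*"
    for prev, cur in zip(pitches, pitches[1:]):
        code += "U" if cur > prev else "D" if cur < prev else "R"
    return code

# Opening of "Ode to Joy" (E E F G G F E D) as MIDI pitch numbers.
print(parsons_code([64, 64, 65, 67, 67, 65, 64, 62]))
```

Retrieval then reduces to string matching of these contour codes against a dictionary of themes.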
    • Effects of song familiarity, singing training and recent song exposure on the singing of melodies 

      Steffen Pauws (Johns Hopkins University, 2003-10-26)
      Findings of a singing experiment are presented in which trained and untrained singers sang melodies of familiar and less familiar Beatles songs from memory and after listening to the original song on CD. Results showed ...
    • The MAMI Query-By-Voice Experiment: Collecting and annotating vocal queries for music information retrieval 

      Micheline Lesaffre; Koen Tanghe; Gaëtan Martens; Dirk Moelants; Marc Leman (Johns Hopkins University, 2003-10-26)
      The MIR research community requires coordinated strategies in dealing with databases for system development and experimentation. Manually annotated files can accelerate the development of accurate analysis tools for music ...
    • The dangers of parsimony in query-by-humming applications 

      Colin Meek; William Birmingham (Johns Hopkins University, 2003-10-26)
      Query-by-humming systems attempt to address the needs of the non-expert user, for whom the most natural query format -- for the purposes of finding a tune, hook or melody of unknown provenance -- is to sing it. While human ...
    • Automatic Mood Detection from Acoustic Music Data 

      Dan Liu; Lie Lu; Hong-Jiang Zhang (Johns Hopkins University, 2003-10-26)
      Music mood describes the inherent emotional meaning of a music clip. It is helpful for music understanding, music search, and other music-related applications. In this paper, a hierarchical framework is presented to automate ...