
    • A Large-Scale Evaluation of Acoustic and Subjective Music Similarity Measures 

      Adam Berenzweig; Beth Logan; Daniel Ellis; Brian Whitman (Johns Hopkins University, 2003-10-26)
      Subjective similarity between musical pieces and artists is an elusive concept, but one that must be pursued in support of applications to provide automatic organization of large music collections. In this paper, we examine ...
    • Discovering Musical Pattern through Perceptive Heuristics. 

      Olivier Lartillot (Johns Hopkins University, 2003-10-26)
      This paper defends the view that the intricate difficulties challenging the emerging domain of Musical Pattern Discovery, which is dedicated to the automation of motivic analysis, will be overcome only through a thorough ...
    • Automatic Mood Detection from Acoustic Music Data 

      Dan Liu; Lie Lu; Hong-Jiang Zhang (Johns Hopkins University, 2003-10-26)
      Music mood describes the inherent emotional meaning of a music clip. It is helpful in music understanding, music search, and other music-related applications. In this paper, a hierarchical framework is presented to automate ...
    • Using Transportation Distances for Measuring Melodic Similarity 

      Rainer Typke; Panos Giannopoulos; Remco C. Veltkamp; Frans Wiering; René van Oostrum (Johns Hopkins University, 2003-10-26)
      Most of the existing methods for measuring melodic similarity use one-dimensional textual representations of music notation, so that melodic similarity can be measured by calculating editing distances. We view notes as ...
    • Automatic Labelling of Tabla Signals 

      Olivier Gillet; Gaël Richard (Johns Hopkins University, 2003-10-26)
      Most of the recent developments in the field of music indexing and music information retrieval are focused on western music. In this paper, we present an automatic music transcription system dedicated to Tabla - a North ...
    • Application of missing feature theory to the recognition of musical instruments in polyphonic audio 

      Jana Eggink; Guy J. Brown (Johns Hopkins University, 2003-10-26)
      A system for musical instrument recognition based on a Gaussian Mixture Model (GMM) classifier is introduced. To enable instrument recognition when more than one sound is present at the same time, ideas from missing feature ...
    • Ground-Truth Transcriptions of Real Music from Force-Aligned MIDI Syntheses 

      Robert J. Turetsky; Daniel P. W. Ellis (Johns Hopkins University, 2003-10-26)
      Many modern polyphonic music transcription algorithms are presented in a statistical pattern recognition framework. But without a large corpus of real-world music transcribed at the note level, these algorithms are unable ...
    • Classification of Dance Music by Periodicity Patterns 

      Simon Dixon; Elias Pampalk; Gerhard Widmer (Johns Hopkins University, 2003-10-26)
      This paper addresses the genre classification problem for a specific subset of music, standard and Latin ballroom dance music, using a classification method based only on timing information. We compare two methods of ...
    • Blind Clustering of Popular Music Recordings Based on Singer Voice Characteristics 

      Wei-Ho Tsai; Hsin-Min Wang; Dwight Rodgers; Shi-Sian Cheng; Hung-Min Yu (Johns Hopkins University, 2003-10-26)
      This paper presents an effective technique for automatically clustering undocumented music recordings based on their associated singer. This serves as an indispensable step towards indexing and content-based information ...
    • Improving Polyphonic and Poly-Instrumental Music to Score Alignment 

      Ferréol Soulez; Xavier Rodet; Diemo Schwarz (Johns Hopkins University, 2003-10-26)
      Music alignment links events in a score to points on the audio performance time axis. All the parts of a recording can thus be indexed according to score information. The automatic alignment presented in this paper is based ...
    • Three Dimensional Continuous DP Algorithm for Multiple Pitch Candidates in Music Information Retrieval System 

      Sungphil Heo; Motoyuki Suzuki; Akinori Ito; Shozo Makino (Johns Hopkins University, 2003-10-26)
      This paper treats theoretical and practical issues that arise in implementing a music information retrieval system based on query by humming. In order to extract accurate features from the user's humming, we propose a new retrieval ...
    • Automatic Segmentation, Learning and Retrieval of Melodies Using A Self-Organizing Neural Network 

      Steven Harford (Johns Hopkins University, 2003-10-26)
      We introduce a neural network, known as SONNET-MAP, capable of automatic segmentation, learning and retrieval of melodies. SONNET-MAP is a synthesis of Nigrin’s SONNET (Self-Organizing Neural NETwork) architecture and an ...
    • Detecting Emotion in Music 

      Tao Li; Mitsunori Ogihara (Johns Hopkins University, 2003-10-26)
      Detection of emotion in music sounds is an important problem in music indexing. This paper studies the problem of identifying emotion in music by sound signal processing. The problem is cast as a multiclass classification ...
    • The C-BRAHMS project 

      Kjell Lemström; Veli Mäkinen; Anna Pienimäki; Mika Turkia; Esko Ukkonen (Johns Hopkins University, 2003-10-26)
      The C-BRAHMS project develops computational methods for content-based retrieval and analysis of music data. A summary of the recent algorithmic and experimental developments of the project is given. A search engine developed ...
    • The Importance of Cross Database Evaluation in Sound Classification 

      Arie Livshin; Xavier Rodet (Johns Hopkins University, 2003-10-26)
      In numerous articles (Martin and Kim, 1998; Fraser and Fujinaga, 1999; and many others) sound classification algorithms are evaluated using "self classification" - the learning and test groups are randomly selected out of ...
    • An Auditory Model Based Transcriber of Vocal Queries 

      Tom De Mulder; Jean-Pierre Martens; Micheline Lesaffre; Marc Leman; Bernard De Baets; Hans De Meyer (Johns Hopkins University, 2003-10-26)
      In this paper a new auditory model-based transcriber of melodic queries produced by a human voice is presented. The newly presented system is tested systematically, together with some other state-of-the-art systems, on ...
    • A SVM-Based Classification Approach to Musical Audio 

      Namunu Chinthaka Maddage; Changsheng Xu; Ye Wang (Johns Hopkins University, 2003-10-26)
      This paper describes an automatic hierarchical music classification approach based on support vector machines (SVM). Based on the proposed method, the music is classified into coarse classes such as vocal, instrumental ...
    • A HMM-Based Pitch Tracker for Audio Queries 

      Nicola Orio; Matteo Sisti Sette (Johns Hopkins University, 2003-10-26)
      In this paper we present an approach to the transcription of musical queries based on an HMM. The HMM is used to model the audio features related to the singing voice, and the transcription is obtained through Viterbi ...
    • Rhythmic Similarity through Elaboration 

      Mitchell Parry; Irfan Essa (Johns Hopkins University, 2003-10-26)
      Rhythmic similarity techniques for audio tend to evaluate how close to identical two rhythms are. This paper proposes a similarity metric based on rhythmic elaboration that matches rhythms that share the same beats regardless ...
    • Chopin Early Editions: Construction and Usage of Online Digital Scores 

      Tod Olson; J. Stephen Downie (Johns Hopkins University, 2003-10-26)
      The University of Chicago Library has digitized a collection of 19th century music scores. The online collection is generated programmatically from the scanned images and human-created descriptive and structural metadata, ...