Recent Submissions

  • Automatic Music Transcription from Multiphonic MIDI Signals 

    Haruto Takeda; Takuya Nishimoto; Shigeki Sagayama (Johns Hopkins University, 2003-10-26)
    For automatically transcribing human-performed polyphonic music recorded in the MIDI format, rhythm and tempo are decomposed through probabilistic modeling using Viterbi search in HMM for recognizing the rhythm and EM ...
  • Music identification by Leadsheets 

    Frank Seifert; Wolfgang Benn (Johns Hopkins University, 2003-10-26)
    Most experimental research on content-based automatic recognition and identification of musical documents is founded on statistical distribution of timbre or simple retrieval mechanisms like comparison of melodic segments. ...
  • Quantitative Comparisons into Content-Based Music Recognition with the Self Organising Map 

    Gavin Wood; Simon O'Keefe (Johns Hopkins University, 2003-10-26)
    With so much modern music being so widely available both in electronic form and in more traditional physical formats, a great opportunity exists for the development of a general-purpose recognition and music classification ...
  • Using morphological description for generic sound retrieval 

    Julien Ricard; Perfecto Herrera (Johns Hopkins University, 2003-10-26)
    Systems for sound retrieval are usually “source-centred”. This means that retrieval is based on using the proper keywords that define or specify a sound source. Although this type of description is of great interest, it ...
  • Design Patterns in XML Music Representation 

    Perry Roland (Johns Hopkins University, 2003-10-26)
    Design patterns attempt to formalize the discussion of recurring problems and their solutions. This paper introduces several XML design patterns and demonstrates their usefulness in the development of XML music representations. ...
  • Music Notation as a MEI Feasibility Test 

    Baron Schwartz (Johns Hopkins University, 2003-10-26)
    This project demonstrated that enough information can be retrieved from MEI, an XML format for musical information representation, to transform it into music notation with good fidelity. The process involved writing an ...
  • Rhythmic Similarity through Elaboration 

    Mitchell Parry; Irfan Essa (Johns Hopkins University, 2003-10-26)
    Rhythmic similarity techniques for audio tend to evaluate how close to identical two rhythms are. This paper proposes a similarity metric based on rhythmic elaboration that matches rhythms that share the same beats regardless ...
  • Key-specific Shrinkage Techniques for Harmonic Models 

    Jeremy Pickens (Johns Hopkins University, 2003-10-26)
    Statistical modeling of music is rapidly gaining acceptance as a viable approach to a host of Music Information Retrieval related tasks, from transcription to ad hoc retrieval. As music may be viewed as an evolving pattern ...
  • A HMM-Based Pitch Tracker for Audio Queries 

    Nicola Orio; Matteo Sisti Sette (Johns Hopkins University, 2003-10-26)
    In this paper we present an approach to the transcription of musical queries based on an HMM. The HMM is used to model the audio features related to the singing voice, and the transcription is obtained through Viterbi ...
  • Chopin Early Editions: Construction and Usage of Online Digital Scores 

    Tod Olson; J. Stephen Downie (Johns Hopkins University, 2003-10-26)
    The University of Chicago Library has digitized a collection of 19th century music scores. The online collection is generated programmatically from the scanned images and human-created descriptive and structural metadata, ...
  • The Importance of Cross Database Evaluation in Sound Classification 

    Arie Livshin; Xavier Rodet (Johns Hopkins University, 2003-10-26)
    In numerous articles (Martin and Kim, 1998; Fraser and Fujinaga, 1999; and many others) sound classification algorithms are evaluated using "self classification" - the learning and test groups are randomly selected out of ...
  • An Auditory Model Based Transcriber of Vocal Queries 

    Tom De Mulder; Jean-Pierre Martens; Micheline Lesaffre; Marc Leman; Bernard De Baets; Hans De Meyer (Johns Hopkins University, 2003-10-26)
    In this paper a new auditory model-based transcriber of melodic queries produced by a human voice is presented. The newly presented system is tested systematically, together with some other state-of-the-art systems, on ...
  • A SVM-Based Classification Approach to Musical Audio 

    Namunu Chinthaka Maddage; Changsheng Xu; Ye Wang (Johns Hopkins University, 2003-10-26)
    This paper describes an automatic hierarchical music classification approach based on support vector machines (SVM). Based on the proposed method, the music is classified into coarse classes such as vocal, instrumental ...
  • Detecting Emotion in Music 

    Tao Li; Mitsunori Ogihara (Johns Hopkins University, 2003-10-26)
    Detection of emotion in music sounds is an important problem in music indexing. This paper studies the problem of identifying emotion in music by sound signal processing. The problem is cast as a multiclass classification ...
  • The C-BRAHMS project 

    Kjell Lemström; Veli Mäkinen; Anna Pienimäki; Mika Turkia; Esko Ukkonen (Johns Hopkins University, 2003-10-26)
    The C-BRAHMS project develops computational methods for content-based retrieval and analysis of music data. A summary of the recent algorithmic and experimental developments of the project is given. A search engine developed ...
  • Music Scene Description Project: Toward Audio-based Real-time Music Understanding 

    Masataka Goto (Johns Hopkins University, 2003-10-26)
    This paper reports a research project intended to build a real-time music-understanding system producing intuitively meaningful descriptions of real-world musical audio signals, such as the melody lines and chorus sections. ...
  • Three Dimensional Continuous DP Algorithm for Multiple Pitch Candidates in Music Information Retrieval System 

    Sungphil Heo; Motoyuki Suzuki; Akinori Ito; Shozo Makino (Johns Hopkins University, 2003-10-26)
    This paper treats theoretical and practical issues in implementing a music information retrieval system based on query by humming. In order to extract accurate features from the user's humming, we propose a new retrieval ...
  • Automatic Segmentation, Learning and Retrieval of Melodies Using A Self-Organizing Neural Network 

    Steven Harford (Johns Hopkins University, 2003-10-26)
    We introduce a neural network, known as SONNET-MAP, capable of automatic segmentation, learning and retrieval of melodies. SONNET-MAP is a synthesis of Nigrin’s SONNET (Self-Organizing Neural NETwork) architecture and an ...
  • RWC Music Database: Music Genre Database and Musical Instrument Sound Database 

    Masataka Goto; Hiroki Hashiguchi; Takuichi Nishimura; Ryuichi Oka (Johns Hopkins University, 2003-10-26)
    This paper describes the design policy and specifications of the RWC Music Database, a copyright-cleared music database (DB) compiled specifically for research purposes. Shared DBs are common in other research fields and ...
  • Position Indexing of Adjacent and Concurrent N-Grams for Polyphonic Music Retrieval 

    Shyamala Doraisamy; Stefan Rüger (Johns Hopkins University, 2003-10-26)
    In this paper we examine the retrieval performance of adjacent and concurrent n-grams generated from polyphonic music data. We deploy a method to index polyphonic music using a word position indexer with the n-gram approach. ...