Library-Sponsored Conference Proceedings

    Now showing items 1-10 of 48

    Ground-Truth Transcriptions of Real Music from Force-Aligned MIDI Syntheses 

    Robert J. Turetsky; Daniel P. W. Ellis (Johns Hopkins University, 2003-10-26)
    Many modern polyphonic music transcription algorithms are presented in a statistical pattern recognition framework. But without a large corpus of real-world music transcribed at the note level, these algorithms are unable ...

    A HMM-Based Pitch Tracker for Audio Queries 

    Nicola Orio; Matteo Sisti Sette (Johns Hopkins University, 2003-10-26)
    In this paper we present an approach to the transcription of musical queries based on an HMM. The HMM is used to model the audio features related to the singing voice, and the transcription is obtained through Viterbi ...

    Toward the Scientific Evaluation of Music Information Retrieval Systems 

    J. Stephen Downie (Johns Hopkins University, 2003-10-26)
    This paper outlines the findings-to-date of a project to assist in the efforts being made to establish a TREC-like evaluation paradigm within the Music Information Retrieval (MIR) research community. The findings and ...

    Improving Polyphonic and Poly-Instrumental Music to Score Alignment 

    Ferréol Soulez; Xavier Rodet; Diemo Schwarz (Johns Hopkins University, 2003-10-26)
    Music alignment links events in a score to points on the time axis of an audio performance. All parts of a recording can thus be indexed according to score information. The automatic alignment presented in this paper is based ...

    Features for audio and music classification 

    Martin McKinney; Jeroen Breebaart (Johns Hopkins University, 2003-10-26)
    Four audio feature sets are evaluated in their ability to classify five general audio classes and seven popular music genres. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and ...

    Blind Clustering of Popular Music Recordings Based on Singer Voice Characteristics 

    Wei-Ho Tsai; Hsin-Min Wang; Dwight Rodgers; Shi-Sian Cheng; Hung-Min Yu (Johns Hopkins University, 2003-10-26)
    This paper presents an effective technique for automatically clustering undocumented music recordings based on their associated singer. This serves as an indispensable step towards indexing and content-based information ...

    The C-BRAHMS project 

    Kjell Lemström; Veli Mäkinen; Anna Pienimäki; Mika Turkia; Esko Ukkonen (Johns Hopkins University, 2003-10-26)
    The C-BRAHMS project develops computational methods for content-based retrieval and analysis of music data. A summary of the recent algorithmic and experimental developments of the project is given. A search engine developed ...

    Automatic Segmentation, Learning and Retrieval of Melodies Using A Self-Organizing Neural Network 

    Steven Harford (Johns Hopkins University, 2003-10-26)
    We introduce a neural network, known as SONNET-MAP, capable of automatic segmentation, learning and retrieval of melodies. SONNET-MAP is a synthesis of Nigrin’s SONNET (Self-Organizing Neural NETwork) architecture and an ...

    Automatic Music Transcription from Multiphonic MIDI Signals 

    Haruto Takeda; Takuya Nishimoto; Shigeki Sagayama (Johns Hopkins University, 2003-10-26)
    For automatically transcribing human-performed polyphonic music recorded in the MIDI format, rhythm and tempo are decomposed through probabilistic modeling using Viterbi search in HMM for recognizing the rhythm and EM ...

    Using morphological description for generic sound retrieval 

    Julien Ricard; Perfecto Herrera (Johns Hopkins University, 2003-10-26)
    Systems for sound retrieval are usually “source-centred”. This means that retrieval is based on using the proper keywords that define or specify a sound source. Although this type of description is of great interest, it ...

    DSpace software copyright © 2002-2016  DuraSpace
    Policies | Contact Us | Send Feedback
    Theme by Atmire NV
