    Library-Sponsored Conference Proceedings

    Now showing items 1-10 of 19


    Ground-Truth Transcriptions of Real Music from Force-Aligned MIDI Syntheses 

    Robert J. Turetsky; Daniel P. W. Ellis (Johns Hopkins University, 2003-10-26)
    Many modern polyphonic music transcription algorithms are presented in a statistical pattern recognition framework. But without a large corpus of real-world music transcribed at the note level, these algorithms are unable ...

    Improving Polyphonic and Poly-Instrumental Music to Score Alignment 

    Ferréol Soulez; Xavier Rodet; Diemo Schwarz (Johns Hopkins University, 2003-10-26)
    Music alignment links events in a score and points on the audio performance time axis. All the parts of a recording can thus be indexed according to score information. The automatic alignment presented in this paper is based ...

    Automatic Music Transcription from Multiphonic MIDI Signals 

    Haruto Takeda; Takuya Nishimoto; Shigeki Sagayama (Johns Hopkins University, 2003-10-26)
    For automatically transcribing human-performed polyphonic music recorded in the MIDI format, rhythm and tempo are decomposed through probabilistic modeling, using Viterbi search in an HMM for recognizing the rhythm and EM ...

    Using morphological description for generic sound retrieval 

    Julien Ricard; Perfecto Herrera (Johns Hopkins University, 2003-10-26)
    Systems for sound retrieval are usually “source-centred”. This means that retrieval is based on using the proper keywords that define or specify a sound source. Although this type of description is of great interest, it ...

    Music Notation as a MEI Feasibility Test 

    Baron Schwartz (Johns Hopkins University, 2003-10-26)
    This project demonstrated that enough information can be retrieved from MEI, an XML format for musical information representation, to transform it into music notation with good fidelity. The process involved writing an ...

    Music identification by Leadsheets 

    Frank Seifert; Wolfgang Benn (Johns Hopkins University, 2003-10-26)
    Most experimental research on content-based automatic recognition and identification of musical documents is founded on statistical distribution of timbre or simple retrieval mechanisms like comparison of melodic segments. ...

    Music Scene Description Project: Toward Audio-based Real-time Music Understanding 

    Masataka Goto (Johns Hopkins University, 2003-10-26)
    This paper reports a research project intended to build a real-time music-understanding system producing intuitively meaningful descriptions of real-world musical audio signals, such as the melody lines and chorus sections. ...

    Determining Context-Defining Windows: Pitch Spelling using the Spiral Array 

    Elaine Chew; Yun-Ching Chen (Johns Hopkins University, 2003-10-26)
    This paper presents algorithms for pitch spelling using the Spiral Array model. Accurate pitch spelling, assigning contextually consistent letter names to pitch numbers (for example, MIDI), is a critical component of music ...

    Key-specific Shrinkage Techniques for Harmonic Models 

    Jeremy Pickens (Johns Hopkins University, 2003-10-26)
    Statistical modeling of music is rapidly gaining acceptance as a viable approach to a host of Music Information Retrieval related tasks, from transcription to ad hoc retrieval. As music may be viewed as an evolving pattern ...

    An Auditory Model Based Transcriber of Vocal Queries 

    Tom De Mulder; Jean-Pierre Martens; Micheline Lesaffre; Marc Leman; Bernard De Baets; Hans De Meyer (Johns Hopkins University, 2003-10-26)
    In this paper a new auditory model-based transcriber of melodic queries produced by a human voice is presented. The newly presented system is tested systematically, together with some other state-of-the-art systems, on ...
