Now showing items 1-10 of 48
Ground-Truth Transcriptions of Real Music from Force-Aligned MIDI Syntheses
(Johns Hopkins University, 2003-10-26)
Many modern polyphonic music transcription algorithms are presented in a statistical pattern recognition framework. But without a large corpus of real-world music transcribed at the note level, these algorithms are unable ...
A HMM-Based Pitch Tracker for Audio Queries
(Johns Hopkins University, 2003-10-26)
In this paper we present an approach to the transcription of musical queries based on an HMM. The HMM is used to model the audio features related to the singing voice, and the transcription is obtained through Viterbi ...
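The following is a minimal, illustrative sketch of Viterbi decoding over semitone pitch states, assuming a per-frame matrix of pitch log-likelihoods has already been computed from the query audio; the names log_salience and switch_penalty are hypothetical stand-ins, and the paper's actual feature model and HMM topology are not reproduced here.

```python
import numpy as np

def viterbi_pitch_track(log_salience, switch_penalty=2.0):
    """Return the most likely pitch-state sequence.

    log_salience : (n_frames, n_pitches) array of per-frame log-likelihoods.
    switch_penalty : cost (in log units) per semitone of pitch movement,
                     a hypothetical stand-in for learned transition probabilities.
    """
    n_frames, n_pitches = log_salience.shape
    pitches = np.arange(n_pitches)
    # Log transition matrix: penalise large jumps between consecutive frames.
    log_trans = -switch_penalty * np.abs(pitches[:, None] - pitches[None, :])

    delta = log_salience[0].copy()                   # best score ending in each state
    backptr = np.zeros((n_frames, n_pitches), dtype=int)
    for t in range(1, n_frames):
        scores = delta[:, None] + log_trans          # (previous state, current state)
        backptr[t] = np.argmax(scores, axis=0)
        delta = scores[backptr[t], pitches] + log_salience[t]

    # Backtrace the optimal path.
    path = np.zeros(n_frames, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(n_frames - 1, 0, -1):
        path[t - 1] = backptr[t, path[t]]
    return path

# Toy usage with random "salience" values standing in for real audio features.
rng = np.random.default_rng(0)
track = viterbi_pitch_track(rng.normal(size=(200, 48)))
```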
Toward the Scientific Evaluation of Music Information Retrieval Systems
(Johns Hopkins University, 2003-10-26)
This paper outlines the findings-to-date of a project to assist in the efforts being made to establish a TREC-like evaluation paradigm within the Music Information Retrieval (MIR) research community. The findings and ...
Improving Polyphonic and Poly-Instrumental Music to Score Alignment
(Johns Hopkins University, 2003-10-26)
Music alignment links events in a score to points on the time axis of an audio performance. All parts of a recording can thus be indexed according to score information. The automatic alignment presented in this paper is based ...
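As a rough sketch of this kind of alignment (not the method of the paper itself), dynamic time warping over chroma features can map score time to performance time; both feature matrices below (score_chroma and audio_chroma, shaped frames x 12) are assumed to be precomputed.

```python
import numpy as np

def dtw_align(score_chroma, audio_chroma):
    """Return (score_frame, audio_frame) index pairs on the optimal warping path."""
    # Cosine distance between every score frame and every audio frame.
    a = score_chroma / (np.linalg.norm(score_chroma, axis=1, keepdims=True) + 1e-9)
    b = audio_chroma / (np.linalg.norm(audio_chroma, axis=1, keepdims=True) + 1e-9)
    dist = 1.0 - a @ b.T

    n, m = dist.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],      # skip a score frame
                                                 acc[i, j - 1],      # skip an audio frame
                                                 acc[i - 1, j - 1])  # match the two frames
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```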
Features for audio and music classification
(Johns Hopkins University, 2003-10-26)
Four audio feature sets are evaluated in their ability to classify five general audio classes and seven popular music genres. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and ...
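The snippet below illustrates the general kind of frame-level features such studies compare (low-level signal properties and mel-frequency cepstral coefficients), summarised into a fixed-length vector for a generic classifier. It is not the paper's evaluation code, and the use of the librosa library is an assumption of this sketch.

```python
import numpy as np
import librosa

def basic_feature_vector(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    # Low-level signal properties.
    zcr = librosa.feature.zero_crossing_rate(y)              # (1, frames)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr) # (1, frames)
    # Mel-frequency cepstral coefficients.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # (13, frames)
    # Summarise each feature by its mean and standard deviation over time.
    feats = np.concatenate([zcr, centroid, mfcc], axis=0)
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])
```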
Blind Clustering of Popular Music Recordings Based on Singer Voice Characteristics
(Johns Hopkins University, 2003-10-26)
This paper presents an effective technique for automatically clustering undocumented music recordings based on their associated singer. This serves as an indispensable step towards indexing and content-based information ...
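A hypothetical sketch of blind clustering by voice character follows: each recording is summarised by the mean and spread of its MFCCs, and the summaries are grouped with k-means. The paper's actual singer-voice features and clustering procedure are not reproduced here, and librosa and scikit-learn are assumptions of this sketch.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def voice_summary(path):
    # Summarise a recording by frame-level MFCC statistics.
    y, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)       # (20, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def cluster_by_singer(paths, n_singers):
    # Group recordings whose summaries are close, as a crude proxy for shared singer.
    X = np.stack([voice_summary(p) for p in paths])
    labels = KMeans(n_clusters=n_singers, n_init=10, random_state=0).fit_predict(X)
    return dict(zip(paths, labels))
```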
The C-BRAHMS project
(Johns Hopkins University, 2003-10-26)
The C-BRAHMS project develops computational methods for content-based retrieval and analysis of music data. A summary of the recent algorithmic and experimental developments of the project is given. A search engine developed ...
Automatic Segmentation, Learning and Retrieval of Melodies Using A Self-Organizing Neural Network
(Johns Hopkins University, 2003-10-26)
We introduce a neural network, known as SONNET-MAP, capable of automatic segmentation, learning and retrieval of melodies. SONNET-MAP is a synthesis of Nigrin’s SONNET (Self-Organizing Neural NETwork) architecture and an ...
Automatic Music Transcription from Multiphonic MIDI Signals
(Johns Hopkins University, 2003-10-26)
For automatically transcribing human-performed polyphonic music recorded in the MIDI format, rhythm and tempo are decomposed through probabilistic modeling, using Viterbi search in an HMM for recognizing the rhythm and EM ...
Using morphological description for generic sound retrieval
(Johns Hopkins University, 2003-10-26)
Systems for sound retrieval are usually “source-centred”. This means that retrieval is based on using the proper keywords that define or specify a sound source. Although this type of description is of great interest, it ...