Latest revision as of 16:06, 20 February 2012
A framework for real-time instrumental sound segmentation and labeling
Authors: Adriano Monteiro and Jônatas Manzolli
Download full paper: A framework for real-time instrumental sound segmentation and labeling.pdf
This paper presents a collection of Pure Data abstractions for real-time transcription of the audio signal produced by musical instruments. It is a tool:
1. for labeling musical notes and locating them in time;
2. for handling the representation of the musical data obtained in (1).
It is part of PDescriptors, a library for audio feature extraction developed by the author, mostly based on the BSP technique[1]. The text begins with a literature review of automatic music transcription and its central issues. It then presents the parametric transcription models adopted in the present research, covering onset detection, extraction of harmonic content, and timbre classification of percussion instruments. Finally, we discuss and detail our Pure Data implementation and present some results related to its application in a human-computer interaction system.
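To illustrate the kind of onset detection the abstract refers to, here is a minimal spectral-flux onset detector sketched in Python/NumPy. This is an assumption-laden illustration of one common technique, not the authors' Pure Data implementation or the BSP-based method used in PDescriptors; the function name, parameters, and threshold are invented for the example.

```python
# Minimal spectral-flux onset detection -- an illustrative sketch only,
# NOT the authors' Pd/PDescriptors implementation.
import numpy as np

def spectral_flux_onsets(signal, frame_size=1024, hop=512, threshold=0.1):
    """Return frame indices where positive spectral flux peaks above a threshold."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    prev_mag = np.zeros(frame_size // 2 + 1)
    flux = []
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_size] * window
        mag = np.abs(np.fft.rfft(frame))
        # Positive spectral flux: total increase in magnitude between frames.
        flux.append(np.sum(np.maximum(mag - prev_mag, 0.0)))
        prev_mag = mag
    flux = np.asarray(flux)
    if flux.max() > 0:
        flux /= flux.max()  # normalize so the threshold is level-independent
    # An onset is a frame whose flux is a local maximum above the threshold.
    return [i for i in range(1, len(flux) - 1)
            if flux[i] > threshold
            and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]]

# Example: half a second of silence followed by a 440 Hz tone; detected
# onsets cluster around the silence-to-tone boundary (frame ~43 at hop 512).
sr = 44100
t = np.arange(sr // 2) / sr
sig = np.concatenate([np.zeros(sr // 2), np.sin(2 * np.pi * 440.0 * t)])
print(spectral_flux_onsets(sig))
```

In a real-time setting the same frame-by-frame loop would run inside the audio callback, which is one reason a block-processing environment like Pure Data is a natural host for this kind of analysis.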
Video: https://vimeo.com/36979356
References
[1] Barknecht, 2010.
4th International Pure Data Convention 2011, Weimar ~ Berlin