GMU:Coding Gestalt/Rihards Vitols/Rihards Vitols

Documentation of Gestalt Choir on 12th September


Video documentation: https://vimeo.com/101410956


Final project sketch

[Image: GC M.png]


  1. Touch-screen input triggers generative visualizations. The human can interact with them during the performance, while a MIDI controller or sensors change processes in the audio and/or the visuals.

  2. The visuals are projected behind the performing human.

  3. At the same time, sound is generated from the data produced by the visualization.

  4. The data produced by the sound is analyzed, and that data triggers changes in the visualization. The new data produced by the visualization changes the sound in turn. In the middle of all this still stands the human, who can manually add changes to both of these processes (see the sketch after this list).
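
The loop described in steps 1 to 4 (visuals produce data, the data drives the sound, sound analysis feeds back into the visuals, and the human perturbs both) can be illustrated with a minimal sketch. This is a hypothetical simulation, not the project's actual implementation: every name in it is made up, and real touch, MIDI, and audio I/O are replaced by stubs.

import random

class Visualization:
    """Generative visual process; its state is the data source for the sound."""

    def __init__(self):
        self.energy = 0.5  # abstract visual parameter (illustrative)

    def on_touch(self, x, y):
        # Step 1: touch input triggers / steers the generative process.
        self.energy = min(1.0, self.energy + 0.1 * (x + y))

    def apply_audio_feature(self, feature):
        # Step 4: analyzed sound data triggers changes in the visualization.
        self.energy = 0.9 * self.energy + 0.1 * feature

    def produce_data(self):
        # Step 3: the visualization emits the data that drives the sound.
        return self.energy * random.random()

class Sound:
    """Sound process driven by the visualization's data."""

    def __init__(self):
        self.level = 0.0

    def update(self, visual_data, manual_offset=0.0):
        # manual_offset stands in for the MIDI controller / sensor input
        # with which the human changes the audio process (step 1).
        self.level = max(0.0, min(1.0, visual_data + manual_offset))

    def analyze(self):
        # Step 4: a trivial "analysis" feature fed back to the visuals.
        return self.level

def run(frames=10):
    vis, snd = Visualization(), Sound()
    vis.on_touch(0.3, 0.7)  # simulated touch-screen event
    for frame in range(frames):
        data = vis.produce_data()               # visuals -> data
        snd.update(data, manual_offset=0.05)    # data (+ human input) -> sound
        vis.apply_audio_feature(snd.analyze())  # sound analysis -> visuals
        print(f"frame {frame}: energy={vis.energy:.3f}  level={snd.level:.3f}")

if __name__ == "__main__":
    run()

Each pass through the loop closes the cycle once; in the actual piece the same cycle would run continuously while the performer intervenes in both processes.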

It is possible to exhibit the work as an installation. This requires an empty room, 8 loudspeakers, and 4 projectors.