<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Palex</id>
	<title>Medien Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Palex"/>
	<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/Special:Contributions/Palex"/>
	<updated>2026-04-25T23:29:47Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.6</generator>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=User:Palex&amp;diff=76745</id>
		<title>User:Palex</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=User:Palex&amp;diff=76745"/>
		<updated>2015-12-16T15:04:52Z</updated>

		<summary type="html">&lt;p&gt;Palex: Created page with &amp;quot;Alexander Pospischil is a Sound and Media Artist who currently lives in Weimar. More Information can be found on his webpage:  [http://www.alexanderpospischil.de alexanderpospisc...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Alexander Pospischil is a sound and media artist who currently lives in Weimar.&lt;br /&gt;
More information can be found on his website:&lt;br /&gt;
&lt;br /&gt;
[http://www.alexanderpospischil.de alexanderpospischil.de]&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU_talk:Equipment&amp;diff=65827</id>
		<title>GMU talk:Equipment</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU_talk:Equipment&amp;diff=65827"/>
		<updated>2014-04-12T13:38:43Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Proposed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;What do you think should be purchased for the GMU chair?&lt;br /&gt;
&lt;br /&gt;
Proposals go here:&lt;br /&gt;
&lt;br /&gt;
== Proposed ==&lt;br /&gt;
&lt;br /&gt;
=== 3D Printer ===&lt;br /&gt;
&lt;br /&gt;
The old [[GMU:RapMan]] is unfortunately still broken, it seems…? I think it is important that we regain the ability to print in 3D, for example with a [https://www.makerbot.com/  MakerBot].&lt;br /&gt;
&lt;br /&gt;
=== Wearable Tech Wishlist ===&lt;br /&gt;
&lt;br /&gt;
Make a wish ...&lt;br /&gt;
&lt;br /&gt;
=== Misc ===&lt;br /&gt;
We might want to get the following stuff:&lt;br /&gt;
* HD Projector&lt;br /&gt;
* [http://www.soundmakesthemovie.com Zoom Q3 HD]&lt;br /&gt;
* [http://www.enttec.com/index.php?main_menu=Products&amp;amp;prod=70304&amp;amp;show=description USB DMX Box] and then [http://www.thomann.de/de/sgm_genio_mobile_258_linse.htm this] one? -- Nice, isn&#039;t it :) --[[User:Matthias.breuer|Matthias.breuer]] 14:16, 14 September 2010 (CEST)&lt;br /&gt;
&lt;br /&gt;
== Got it / fixed it ==&lt;br /&gt;
* &amp;lt;s&amp;gt;3D Scanner, for example [http://www.david-laserscanner.com/ this one]. Or directly in the online shop: [https://ssl.david-vision-systems.de/shop/product_info.php/info/p84_DAVID-Starter-Kit.html Buy here].&amp;lt;/s&amp;gt; I think the Microsoft [[Kinect]] replaces that one.&lt;br /&gt;
* &amp;lt;s&amp;gt;Hand truck&amp;lt;/s&amp;gt; you can surely borrow one from the caretaker&lt;br /&gt;
* &amp;lt;s&amp;gt;[http://www.vd-shop.de/vga2ethernetconvertervgadvigrabber-p-280.html VGA/DVI Framegrabber]&amp;lt;/s&amp;gt; bought&lt;br /&gt;
* &amp;lt;s&amp;gt;Studio lighting for photo and video: [http://www.atld.de/Messe-Ladenbeleuchtung/LED-Technik/Studio-Par-LED-weiss-24x1W-schwarz-VARYTEC::20068.html LED RGB spot] or [http://www.bestoftechnic.de/Studioleuchten/Flaechenleuchte/Studioset-DOMINO-600-HS.html LED floodlight] [http://www.foto-walser.biz/shop/Artikel/3662/525/LED_walimex_pro_LED_500_Flaechenleuchte_WT_806_Stativ.htm walser]&amp;lt;/s&amp;gt; bought&lt;br /&gt;
* &amp;lt;s&amp;gt;Tripod for video and photo&amp;lt;/s&amp;gt; bought&lt;br /&gt;
* &amp;lt;s&amp;gt;HD Media Player with autostart and loop&amp;lt;/s&amp;gt; now available at the MediaPoint&lt;br /&gt;
* &amp;lt;s&amp;gt;3x Short Throw / Short Distance Projector&amp;lt;/s&amp;gt; now available at the MediaPoint: Sanyo&lt;br /&gt;
* &amp;lt;s&amp;gt;New mice for the Macs, the Apple mice are bad --[[User:Matthias.breuer|Matthias.breuer]] 11:19, 27 September 2010 (CEST)&amp;lt;/s&amp;gt; bought&lt;br /&gt;
* &amp;lt;s&amp;gt;Power strips would be great&amp;lt;/s&amp;gt; we have more here, just tell us where you need them&lt;br /&gt;
* &amp;lt;s&amp;gt;HD Recorder (playback device)&amp;lt;/s&amp;gt; since we can record directly to a card this isn&#039;t necessary any more&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58388</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58388"/>
		<updated>2013-06-07T19:37:01Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&#039;&#039;How do we think of places that we&#039;ve never been to before? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data can tell us almost everything about faraway places. &lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;edit: the following thoughts are formulated for one direction (San Diego to Weimar), but of course it should work vice versa, too.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
So, what happens if people in Weimar are confronted with a new random environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that can be discovered by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data: &lt;br /&gt;
&lt;br /&gt;
* a four-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego that were shot at a location close to the current position of the recording device &lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: the generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear due to the fragmentary/blurred/distorted way of displaying.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design some nice algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear waves at a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (potential latencies are not a problem).&lt;br /&gt;
The different parts of the images would then fade-out gently after some moments to be displaced by new pixel data.&lt;br /&gt;
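The mapping described above, a detected sound class driving a wave-like displacement of pixels, could be sketched roughly as follows. This is a minimal Python illustration rather than the actual Pd/openFrameworks chain; the class labels, amplitudes, and frequencies are invented for the example:

```python
import math

# Hypothetical mapping from a timbreID-style class label to animation
# parameters (amplitude in pixels, temporal frequency in Hz).
SOUND_TO_MOTION = {
    "waves":   {"amplitude": 40.0, "frequency": 0.25},
    "traffic": {"amplitude": 12.0, "frequency": 1.5},
    "birds":   {"amplitude": 6.0,  "frequency": 3.0},
}

def wave_offset(label, row, t, wavelength=120.0):
    """Vertical displacement for one pixel row at time t (seconds),
    giving the wave-like motion described above. Unknown labels
    produce no motion."""
    motion = SOUND_TO_MOTION.get(label, {"amplitude": 0.0, "frequency": 0.0})
    phase = 2 * math.pi * (row / wavelength - motion["frequency"] * t)
    return motion["amplitude"] * math.sin(phase)

# e.g. displace each of 480 rows of the current collage image:
offsets = [wave_offset("waves", row, t=0.5) for row in range(480)]
```

In an installation, the displacement would be applied per frame to the image currently on screen, so the pixel motion stays locked to the classified sound while the image content itself fades in and out.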
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* Setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we can train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? E.g. noise: bird sounds → visualization: flocking; noise: surf → visualization: undulation …&lt;br /&gt;
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that can implement the ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot1.png|thumb|some first pixel fun]]&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot2.png|thumb|some first pixel fun]]&lt;br /&gt;
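The training-and-misinterpretation experiment above boils down to a nearest-neighbour classifier over audio feature vectors, which is the core of the timbreID workflow. A minimal Python sketch, not the actual Pd patch; the feature values and labels are made up for illustration:

```python
import math

# Train on labelled feature vectors (e.g. spectral centroid,
# zero-crossing rate); feature extraction itself is omitted here.
training = [
    ("church bells", [0.82, 0.10]),
    ("street traffic", [0.45, 0.55]),
    ("birdsong", [0.91, 0.70]),
]

def classify(features):
    """Return the label of the nearest training example (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda item: dist(item[1], features))[0]

# A frame from an unfamiliar soundscape gets mapped onto the closest
# trained Weimar sound -- exactly the kind of "misinterpretation"
# the experiment is curious about.
label = classify([0.50, 0.50])  # → "street traffic"
```

A San Diego sound that lies between the trained classes will still be forced onto one of them, which is where the interesting mismatches come from.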
&lt;br /&gt;
===Video===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash type=vimeo&amp;gt;67899634|450|250&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
Here are some first renderings. I&#039;m sorry for the terrible quality, but my MacBook had a hard time analyzing the audio, generating the animation, and capturing the screen at the same time...&lt;br /&gt;
Special credits go to [[../Jonas/]], who was a great help explaining OpenGL &amp;amp; point clouds to me :D&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
* [https://vimeo.com/31319154 Ryoichi Kurokawa] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
* [https://vimeo.com/27500054 University of Dayton Interactive Wall] Another example, this time interesting in terms of two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
* [https://www.youtube.com/watch?v=-PYUZMj-lkY Jörn Loviscach] A series of video tutorials that give a great overview of and introduction to semantic audio analysis. Unfortunately in German only...&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58384</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58384"/>
		<updated>2013-06-07T17:55:18Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&#039;&#039;How do we think of places that we&#039;ve never been to before? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data can tell us almost everything about faraway places. &lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;edit: the following thoughts are formulated for one direction (San Diego to Weimar), but of course it should work vice versa, too.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
So, what happens if people in Weimar are confronted with a new random environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that can be discovered by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data: &lt;br /&gt;
&lt;br /&gt;
* a four-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego that were shot at a location close to the current position of the recording device &lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: the generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear due to the fragmentary/blurred/distorted way of displaying.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design some nice algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear waves at a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (potential latencies are not a problem).&lt;br /&gt;
The different parts of the images would then fade-out gently after some moments to be displaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* Setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we can train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? E.g. noise: bird sounds → visualization: flocking; noise: surf → visualization: undulation …&lt;br /&gt;
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that can implement the ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot1.png|thumb|some first pixel fun]]&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot2.png|thumb|some first pixel fun]]&lt;br /&gt;
&lt;br /&gt;
===Video===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash type=vimeo&amp;gt;67899634|450|250&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
Here are some first renderings. I&#039;m sorry for the terrible quality, but my MacBook had a hard time analyzing the audio, generating the animation, and capturing the screen at the same time...&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
* [https://vimeo.com/31319154 Ryoichi Kurokawa] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
* [https://vimeo.com/27500054 University of Dayton Interactive Wall] Another example, this time interesting in terms of two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
* [https://www.youtube.com/watch?v=-PYUZMj-lkY Jörn Loviscach] A series of video tutorials that give a great overview of and introduction to semantic audio analysis. Unfortunately in German only...&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58382</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58382"/>
		<updated>2013-06-07T17:51:26Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&#039;&#039;How do we think of places that we&#039;ve never been to before? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data can tell us almost everything about faraway places. &lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;edit: the following thoughts are formulated for one direction (San Diego to Weimar), but of course it should work vice versa, too.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
So, what happens if people in Weimar are confronted with a new random environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that can be discovered by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data: &lt;br /&gt;
&lt;br /&gt;
* a four-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego that were shot at a location close to the current position of the recording device &lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: the generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear due to the fragmentary/blurred/distorted way of displaying.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design some nice algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear waves at a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (potential latencies are not a problem).&lt;br /&gt;
The different parts of the images would then fade-out gently after some moments to be displaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* Setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we can train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? E.g. noise: bird sounds → visualization: flocking; noise: surf → visualization: undulation …&lt;br /&gt;
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that can implement the ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot1.png|thumb|some first pixel fun]]&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot2.png|thumb|some first pixel fun]]&lt;br /&gt;
&lt;br /&gt;
===Video===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash type=vimeo&amp;gt;67899634|450|250&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
* [https://vimeo.com/31319154 Ryoichi Kurokawa] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
* [https://vimeo.com/27500054 University of Dayton Interactive Wall] Another example, this time interesting in terms of two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
* [https://www.youtube.com/watch?v=-PYUZMj-lkY Jörn Loviscach] A series of video tutorials that give a great overview of and introduction to semantic audio analysis. Unfortunately in German only...&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58378</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58378"/>
		<updated>2013-06-07T17:30:22Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&#039;&#039;How do we think of places that we&#039;ve never been to before? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data can tell us almost everything about faraway places. &lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;edit: the following thoughts are formulated for one direction (San Diego to Weimar), but of course it should work vice versa, too.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
So, what happens if people in Weimar are confronted with a new random environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that can be discovered by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data: &lt;br /&gt;
&lt;br /&gt;
* a four-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego that were shot at a location close to the current position of the recording device &lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: the generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear due to the fragmentary/blurred/distorted way of displaying.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design some nice algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear waves at a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (potential latencies are not a problem).&lt;br /&gt;
The different parts of the images would then fade-out gently after some moments to be displaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* Setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we can train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? E.g. noise: bird sounds → visualization: flocking; noise: surf → visualization: undulation …&lt;br /&gt;
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that can implement the ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot1.png|thumb|some first pixel fun]]&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot2.png|thumb|some first pixel fun]]&lt;br /&gt;
&lt;br /&gt;
===Video===&lt;br /&gt;
&lt;br /&gt;
[https://www.dropbox.com/s/9ztw0wkz7t46xlf/WeSDPresentation.mov first look, provisional result]&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
* [https://vimeo.com/31319154 Ryoichi Kurokawa] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
* [https://vimeo.com/27500054 University of Dayton Interactive Wall] Another example, this time interesting in terms of two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
* [https://www.youtube.com/watch?v=-PYUZMj-lkY Jörn Loviscach] A series of video tutorials that give a great overview of and introduction to semantic audio analysis. Unfortunately in German only...&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58377</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=58377"/>
		<updated>2013-06-07T17:30:00Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&#039;&#039;How do we think of places that we&#039;ve never been to before? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data can tell us almost everything about faraway places. &lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;edit: the following thoughts are formulated for one direction (San Diego to Weimar), but of course it should work vice versa, too.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
So, what happens if people in Weimar are confronted with a new random environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that can be discovered by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data: &lt;br /&gt;
&lt;br /&gt;
* a four-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego that were shot at a location close to the current position of the recording device &lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: a generated movement of pixels may be obviously related to the sound, while the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage found on the internet or shot by the webcams every now and then(?). One goal is to design algorithms that assemble different parts of images in relation to the sonic events. As an example: if we hear waves at a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (potential latencies are not a problem).&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be displaced by new pixel data.&lt;br /&gt;
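The chain above can be sketched roughly as follows. This is a hypothetical stand-in for the actual timbreID/Pd and openFrameworks setup: the trained feature vectors, labels, and gesture names are made up for illustration; the real patch would extract timbre features per audio frame and classify them against trained examples.

```python
# Sketch: classify an incoming audio feature (e.g. spectral centroid and
# RMS energy) by nearest neighbour against trained examples, and map the
# resulting label to a visual gesture for the collage.
import math

# Hypothetical training data: label -> feature vector from example recordings.
TRAINED = {
    "waves":   (400.0, 0.30),
    "traffic": (900.0, 0.55),
    "birds":   (3200.0, 0.10),
}

# Label -> pixel-movement gesture, as in the wave/traffic example above.
GESTURES = {
    "waves":   "undulation",
    "traffic": "streaming",
    "birds":   "flocking",
}

def classify(feature):
    """Return the trained label whose feature vector is closest (1-NN)."""
    return min(TRAINED, key=lambda label: math.dist(TRAINED[label], feature))

def gesture_for(feature):
    """Map an incoming audio feature to the visual gesture it should drive."""
    return GESTURES[classify(feature)]
```

In the installation, `gesture_for` would run continuously on features of the live stream; since latency is not a problem, the classification can happen on whole seconds of audio rather than single frames.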
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* Setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we can train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work well, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? E.g. sound: birds → visualization: flocking; sound: surf → visualization: undulation …&lt;br /&gt;
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that can implement the ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot1.png|thumb|some first pixel fun]]&lt;br /&gt;
[[File:ImmersiveCollagePointcloudScreenshot2.png|thumb|some first pixel fun]]&lt;br /&gt;
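One of the pixel transformations listed above (surf → undulation) can be sketched like this. This is a made-up minimal example, not the openFrameworks code: the image is a plain list of pixel rows, and `amplitude` stands in for a value driven by the audio analysis.

```python
# Sketch: displace the pixels of an image row by row along a sine wave,
# so a louder sound (larger amplitude) produces a stronger "undulation".
import math

def undulate(image, amplitude, wavelength=8.0):
    """Shift each row of `image` (a list of rows of pixel values)
    horizontally by a sine offset that varies with the row index;
    `amplitude` would be driven by the audio analysis."""
    height = len(image)
    width = len(image[0])
    out = []
    for y in range(height):
        # Horizontal offset for this row; wraps around at the image edge.
        shift = int(round(amplitude * math.sin(2 * math.pi * y / wavelength)))
        row = image[y]
        out.append([row[(x - shift) % width] for x in range(width)])
    return out
```

With amplitude 0 the image passes through unchanged; as the sound gets louder, the rows drift apart into a wave.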
&lt;br /&gt;
===Video===&lt;br /&gt;
&lt;br /&gt;
[https://www.dropbox.com/s/9ztw0wkz7t46xlf/WeSDPresentation.mov First look: provisional result]&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
* [https://vimeo.com/31319154 Ryoichi Kurokawa] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences between sound and pixel processing.&lt;br /&gt;
* [https://vimeo.com/27500054 University of Dayton Interactive Wall] Another example, interesting for its two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
* [https://www.youtube.com/watch?v=-PYUZMj-lkY Jörn Loviscach] A series of video tutorials that give a great overview of and introduction to semantic audio analysis. Unfortunately in German only...&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:ImmersiveCollagePointcloudScreenshot2.png&amp;diff=58318</id>
		<title>File:ImmersiveCollagePointcloudScreenshot2.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:ImmersiveCollagePointcloudScreenshot2.png&amp;diff=58318"/>
		<updated>2013-06-03T16:56:34Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:ImmersiveCollagePointcloudScreenshot1.png&amp;diff=58316</id>
		<title>File:ImmersiveCollagePointcloudScreenshot1.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:ImmersiveCollagePointcloudScreenshot1.png&amp;diff=58316"/>
		<updated>2013-06-03T16:54:33Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Immersive_Collage_Processing-Chain.jpg&amp;diff=58170</id>
		<title>File:Immersive Collage Processing-Chain.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Immersive_Collage_Processing-Chain.jpg&amp;diff=58170"/>
		<updated>2013-05-27T16:37:20Z</updated>

		<summary type="html">&lt;p&gt;Palex: uploaded a new version of &amp;amp;quot;File:Immersive Collage Processing-Chain.jpg&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Alex&amp;diff=57923</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Alex</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Alex&amp;diff=57923"/>
		<updated>2013-05-23T23:03:13Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:AlexPortrait.jpg|thumb]]&lt;br /&gt;
Hi everyone,&lt;br /&gt;
I&#039;m a third-semester Master&#039;s student of Medienkunst/Mediengestaltung (Media Art &amp;amp; Design) at Bauhaus-University Weimar. Before that, I did my BA in Theater &amp;amp; Media at Bayreuth University (Germany).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interests==&lt;br /&gt;
Sound Art &amp;amp; Music, (Dance) Theater &amp;amp; Performance, Art Installations, Music Instruments &amp;amp; Interfaces, Digital Synesthesia, Physical Computing (Arduino), Generative Design (Processing)&lt;br /&gt;
&lt;br /&gt;
My latest work in these fields is the installation &amp;quot;ichi-kousatsu&amp;quot;:&lt;br /&gt;
&amp;lt;videoflash type=vimeo&amp;gt;64219870|437|236&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===My collaboration interest===&lt;br /&gt;
Mediated Theater &amp;amp; Performance, Audiovisual Installations, Digital Scenography&lt;br /&gt;
&lt;br /&gt;
==Project Idea==&lt;br /&gt;
[[../Immersive_Collage/]]&lt;br /&gt;
&lt;br /&gt;
==Platforms==&lt;br /&gt;
My platform is currently OS X, but I am open to and interested in working on the Raspberry Pi.&lt;br /&gt;
&lt;br /&gt;
==GetInTouch==&lt;br /&gt;
Mail: alexander {dot} p o s p i s c h i l {at} uni-weimar.de (no spaces)&lt;br /&gt;
Skype: Mallegalle&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Alex&amp;diff=57922</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Alex</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Alex&amp;diff=57922"/>
		<updated>2013-05-23T23:00:09Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:AlexPortrait.jpg|thumb]]&lt;br /&gt;
Hi everyone,&lt;br /&gt;
I&#039;m a third-semester Master&#039;s student of Medienkunst/Mediengestaltung (Media Art &amp;amp; Design) at Bauhaus-University Weimar. Before that, I did my BA in Theater &amp;amp; Media at Bayreuth University (Germany).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interests==&lt;br /&gt;
Sound Art &amp;amp; Music, (Dance) Theater &amp;amp; Performance, Art Installations, Musical Instruments &amp;amp; Interfaces, Digital Synesthesia, Physical Computing (Arduino), Generative Design (Processing)&lt;br /&gt;
&lt;br /&gt;
My latest work in these fields is the installation &amp;quot;ichi-kousatsu&amp;quot;:&lt;br /&gt;
&amp;lt;videoflash type=vimeo&amp;gt;64219870|437|236&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===My collaboration interest===&lt;br /&gt;
Mediated Theater&amp;amp;Performance, Audiovisual Installations, Digital Scenography&lt;br /&gt;
&lt;br /&gt;
==Project Idea==&lt;br /&gt;
[http://www.uni-weimar.de/medien/wiki/EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage immersive collage]&lt;br /&gt;
&lt;br /&gt;
==Platforms==&lt;br /&gt;
My platform is currently OS X, but I am open to and interested in working on the Raspberry Pi.&lt;br /&gt;
&lt;br /&gt;
==Get in touch==&lt;br /&gt;
Mail: alexander {dot} p o s p i s c h i l {at} uni-weimar.de (no spaces)&lt;br /&gt;
Skype: Mallegalle&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57921</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57921"/>
		<updated>2013-05-23T22:59:59Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;How do we think of places we&#039;ve never been to? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; these “unknown” places become “mappings” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it&#039;s no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;edit: the following thoughts are formulated for one direction (San Diego to Weimar), but of course it should also work vice versa.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
So, what happens if people in Weimar are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor is confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyrights, we will have to think about using other images, or perhaps attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio with semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of picture footage found on the internet, or shot by the webcams every now and then(?). One goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves at a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which causes the pixels from the images to appear in a wavelike movement related to the sound, and so on. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
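As a rough illustration of the mapping sketched above, here is a minimal Python sketch. The class labels and motion names are hypothetical placeholders, not the actual Pd/openFrameworks patch:

```python
import math

# assumed class labels a trained timbreID-style analysis might emit;
# the motion names stand in for collage behaviours
SOUND_TO_MOTION = {
    "waves":   "undulation",
    "traffic": "streaming",
    "birds":   "flocking",
}

def motion_for(sound_class):
    # unknown sounds fall back to a neutral drift
    return SOUND_TO_MOTION.get(sound_class, "drift")

def wave_offset(x, t, amplitude=20.0, wavelength=120.0, speed=2.0):
    # vertical displacement of a pixel column for the "undulation" motion
    phase = 2.0 * math.pi * (x / wavelength) - speed * t
    return amplitude * math.sin(phase)
```

The real system would drive such a displacement function per pixel region from the analysis results, rather than from a hard-coded table.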
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work well, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks good? What subjective associations could one try to code? E.g. sound: bird calls -&amp;gt; visualization: flocking; sound: surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that implements these ideas. Experiments with generative (stereoscopic?) pixel transformations.&lt;br /&gt;
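The training/misinterpretation experiment above can be caricatured in a few lines of Python (toy two-dimensional feature vectors, not real timbreID output): a classifier trained only on Weimar sounds must force every San Diego sound onto its nearest Weimar label.

```python
# toy feature vectors (brightness, noisiness) in the range 0..1;
# the labels are hypothetical Weimar training classes
TRAINED = {
    "church_bells":        (0.8, 0.1),
    "cobblestone_traffic": (0.4, 0.7),
    "park_birds":          (0.9, 0.3),
}

def classify(features):
    # nearest-centroid lookup: squared Euclidean distance to each class
    def distance(label):
        ref = TRAINED[label]
        return sum((a - b) ** 2 for a, b in zip(features, ref))
    return min(TRAINED, key=distance)

# a San Diego surf recording has no matching class, so it is pushed
# onto the closest Weimar label: a deliberate misinterpretation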
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
* University of Dayton Interactive Wall: [https://vimeo.com/27500054] Another example, interesting for its two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57920</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57920"/>
		<updated>2013-05-23T22:59:34Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;How do we think of places we&#039;ve never been to? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; these “unknown” places become “mappings” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it&#039;s no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;edit: the following thoughts are formulated for one direction (San Diego to Weimar), but of course it should also work vice versa.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
So, what happens if people in Weimar are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor is confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyrights, we will have to think about using other images, or perhaps attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio with semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of picture footage found on the internet, or shot by the webcams every now and then(?). One goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves at a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which causes the pixels from the images to appear in a wavelike movement related to the sound, and so on. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work well, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks good? What subjective associations could one try to code? E.g. sound: bird calls -&amp;gt; visualization: flocking; sound: surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that implements these ideas. Experiments with generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
* University of Dayton Interactive Wall: [https://vimeo.com/27500054] Another example, interesting for its two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57919</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57919"/>
		<updated>2013-05-23T21:50:13Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;How do we think of places we&#039;ve never been to? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; these “unknown” places become “mappings” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it&#039;s no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor is confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyrights, we will have to think about using other images, or perhaps attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio with semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of picture footage found on the internet, or shot by the webcams every now and then(?). One goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves at a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which causes the pixels from the images to appear in a wavelike movement related to the sound, and so on. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work well, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks good? What subjective associations could one try to code? E.g. sound: bird calls -&amp;gt; visualization: flocking; sound: surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that implements these ideas. Experiments with generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
* University of Dayton Interactive Wall: [https://vimeo.com/27500054] Another example, interesting for its two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57918</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57918"/>
		<updated>2013-05-23T21:46:19Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Links, Literature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;How do we think of places we&#039;ve never been to? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; these “unknown” places become “mappings” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it&#039;s no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor is confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyrights, we will have to think about using other images, or perhaps attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio with semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of picture footage found on the internet, or shot by the webcams every now and then(?). One goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves at a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which causes the pixels from the images to appear in a wavelike movement related to the sound, and so on. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work well, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks good? What subjective associations could one try to code? E.g. sound: bird calls -&amp;gt; visualization: flocking; sound: surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in combination with [http://marsyasweb.appspot.com/ marsyas] if the timbreID analysis isn&#039;t satisfying) that implements these ideas. Experiments with generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
* University of Dayton Interactive Wall: [https://vimeo.com/27500054] Another example, interesting for its two visual levels: the partially displayed image content and the movement of the enclosed animation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57917</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57917"/>
		<updated>2013-05-23T20:26:06Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;How do we think of places we&#039;ve never been to? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; these “unknown” places become “mappings” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it&#039;s no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor is confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyrights, we will have to think about using other images, or perhaps attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio with semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of picture footage found on the internet, or shot by the webcams every now and then(?). One goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves at a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which causes the pixels from the images to appear in a wavelike movement related to the sound, and so on. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work well, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks good? What subjective associations could one try to code? E.g. sound: bird calls -&amp;gt; visualization: flocking; sound: surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in combination with [http://marsyasweb.appspot.com/ marsyas] if the timbreID analysis isn&#039;t satisfying) that implements these ideas. Experiments with generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57916</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57916"/>
		<updated>2013-05-23T20:25:17Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Idea */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;How do we think of places we&#039;ve never been to? How do we imagine a place in San Diego if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; these “unknown” places become “mappings” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it&#039;s no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor is confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyrights, we will have to think about using other images, or perhaps attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for PD) and use the results to trigger a generative collage of the picture footage found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves on a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
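The sound-to-movement mapping described above could be sketched roughly as follows (a minimal Python sketch, not the planned openFrameworks/PD implementation; the class labels and displacement functions are invented for illustration):&lt;br /&gt;

```python
import math

# Hypothetical mapping from a timbreID-style class label to a pixel
# displacement function: given pixel coordinates (x, y) and time t,
# each function returns a vertical offset. The labels "waves" and
# "traffic" are placeholders, not real output of the actual patch.

def wave_offset(x, y, t):
    # wavelike undulation, e.g. for surf rolling onto the beach
    return 8.0 * math.sin(0.05 * x + 2.0 * t)

def jitter_offset(x, y, t):
    # small deterministic jitter for noisy, broadband sounds
    return 2.0 * math.sin(13.7 * x + 5.3 * y + 7.0 * t)

TRANSFORMS = {"waves": wave_offset, "traffic": jitter_offset}

def displace(label, x, y, t):
    # unknown classes fall back to no movement
    f = TRANSFORMS.get(label)
    if f is None:
        return 0.0
    return f(x, y, t)
```

Each detected sound class would drive one such displacement function over the projected pixels; unrecognized classes simply leave the image still.&lt;br /&gt;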
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] PD patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later employed on soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? e.g. bird sounds -&amp;gt; visualization: flocking; surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in connection with [http://marsyasweb.appspot.com/ marsyas] if the timbreID analysis isn&#039;t satisfying) to implement these ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
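The train-on-Weimar, test-on-San-Diego question from the first bullet can be mocked up with a toy nearest-centroid classifier (pure Python; the feature vectors and class labels are invented, and timbreID of course works on real timbral features instead):&lt;br /&gt;

```python
# Toy nearest-centroid classifier: train on "Weimar" feature vectors,
# then classify unseen "San Diego" vectors. All data here is made up.

def centroid(vectors):
    n = len(vectors)
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dims)]

def dist2(a, b):
    # squared Euclidean distance
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labelled):
    # labelled: dict mapping class label to a list of feature vectors
    return {label: centroid(vs) for label, vs in labelled.items()}

def classify(model, v):
    return min(model, key=lambda label: dist2(model[label], v))

# invented "Weimar" training data with two sound classes
weimar = {
    "church_bells": [[0.9, 0.1], [0.8, 0.2]],
    "tram": [[0.2, 0.9], [0.1, 0.8]],
}
model = train(weimar)

# an unheard "San Diego" sound lands on whichever Weimar class
# happens to be nearest
print(classify(model, [0.3, 0.7]))  # prints "tram"
```

A San Diego sound that fits neither Weimar class is still assigned to the nearest one: exactly the kind of misinterpretation the experiment is after.&lt;br /&gt;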
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57915</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57915"/>
		<updated>2013-05-23T20:23:27Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Current experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we&#039;ve never been to before?&lt;br /&gt;
How do we imagine a place in San Diego if we only hear its soundscape?&lt;br /&gt;
How do images affect the perception of unknown soundscapes?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place becomes a “mapping” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about far-away places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor should be confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attach webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between the San Diego soundscapes and the Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form alone: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for PD) and use the results to trigger a generative collage of the picture footage found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves on a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] PD patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later employed on soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? e.g. bird sounds -&amp;gt; visualization: flocking; surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in connection with [http://marsyasweb.appspot.com/ marsyas] if the timbreID analysis isn&#039;t satisfying) to implement these ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57914</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57914"/>
		<updated>2013-05-23T20:22:16Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Current experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we&#039;ve never been to before?&lt;br /&gt;
How do we imagine a place in San Diego if we only hear its soundscape?&lt;br /&gt;
How do images affect the perception of unknown soundscapes?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place becomes a “mapping” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about far-away places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor should be confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attach webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between the San Diego soundscapes and the Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form alone: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for PD) and use the results to trigger a generative collage of the picture footage found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves on a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] PD patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later employed on soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? e.g. bird sounds -&amp;gt; visualization: flocking; surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in connection with [http://marsyasweb.appspot.com/ marsyas] if the PD analysis isn&#039;t satisfying) to implement these ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57913</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57913"/>
		<updated>2013-05-23T20:21:33Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Current experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we&#039;ve never been to before?&lt;br /&gt;
How do we imagine a place in San Diego if we only hear its soundscape?&lt;br /&gt;
How do images affect the perception of unknown soundscapes?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place becomes a “mapping” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about far-away places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor should be confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attach webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between the San Diego soundscapes and the Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form alone: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for PD) and use the results to trigger a generative collage of the picture footage found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves on a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] PD patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later employed on soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? e.g. bird sounds -&amp;gt; visualization: flocking; surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in connection with [http://marsyasweb.appspot.com/ marsyas] if the PD analysis isn&#039;t satisfying) to implement these ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57912</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57912"/>
		<updated>2013-05-23T20:21:01Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we&#039;ve never been to before?&lt;br /&gt;
How do we imagine a place in San Diego if we only hear its soundscape?&lt;br /&gt;
How do images affect the perception of unknown soundscapes?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place becomes a “mapping” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about far-away places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor should be confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attach webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between the San Diego soundscapes and the Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form alone: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for PD) and use the results to trigger a generative collage of the picture footage found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves on a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] PD patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later employed on soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? e.g. bird sounds -&amp;gt; visualization: flocking; surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in connection with [http://marsyasweb.appspot.com/ marsyas] if the PD analysis isn&#039;t satisfying) to implement these ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57911</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57911"/>
		<updated>2013-05-23T20:20:13Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we&#039;ve never been to before?&lt;br /&gt;
How do we imagine a place in San Diego if we only hear its soundscape?&lt;br /&gt;
How do images affect the perception of unknown soundscapes?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experience onto it; the “unknown” place becomes a “mapping” supplied by our own minds.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps or Google Street View, it is no longer a big deal to demystify and explore unknown places online. A massive amount of available image data seems to tell us everything about far-away places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that visitors in Weimar can explore. The visitor should be confronted with a new, made-up space: an audiovisual environment fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, shot at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe attach webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between the San Diego soundscapes and the Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form alone: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for PD) and use the results to trigger a generative collage of the picture footage found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if you hear waves on a San Diego beach, or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be replaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] PD patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later employed on soundscapes from San Diego? Which sounds work fine, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What subjective associations could one try to code? e.g. bird sounds -&amp;gt; visualization: flocking; surf -&amp;gt; visualization: undulation …&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in connection with [http://marsyasweb.appspot.com/ marsyas] if the PD analysis isn&#039;t satisfying) to implement these ideas. Experiments on generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences in sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57910</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57910"/>
		<updated>2013-05-23T20:19:32Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we‘ve never been to before?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
How do images affect the perception of unknown soundscapes?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, randomly assembled environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that can be explored by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, taken at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe about attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: the generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current plan is to analyze the incoming live audio data via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear some waves on a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be displaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
===Current experiments===&lt;br /&gt;
* MIR (music information retrieval): setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work well, which don&#039;t? Are there interesting misinterpretations?&lt;br /&gt;
&lt;br /&gt;
* Setting up an audiovisual data stream&lt;br /&gt;
&lt;br /&gt;
* Collecting ideas and visualizations for image transformations. What looks nice? What are subjective associations one could try to code? e.g.:&lt;br /&gt;
noise: bird sounds -&amp;gt; visualization: flocking; &lt;br /&gt;
noise: surf -&amp;gt; visualization: undulation&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in combination with [http://marsyasweb.appspot.com/ Marsyas] if the Pd analysis isn&#039;t satisfying) that can implement these ideas. Experiments with generative (stereoscopic?) pixel transformations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57909</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57909"/>
		<updated>2013-05-23T19:46:18Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Technical approach */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we’ve never been to before?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
How do images affect the perception of unknown soundscapes?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, randomly assembled environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that can be explored by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, taken at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe about attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: the generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current experiments analyze the incoming live audio data via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear some waves on a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be displaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57908</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57908"/>
		<updated>2013-05-23T19:44:17Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we’ve never been to before?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
How do images affect the perception of unknown soundscapes?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, randomly assembled environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
The idea is to play with these thoughts and build an immersive space that can be explored by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, taken at locations close to the current position of the recording device&lt;br /&gt;
&#039;&#039;edit:&#039;&#039; if there is any trouble with copyright, we will have to think about using other images, or maybe about attaching webcams to the adc~ unit...&lt;br /&gt;
&lt;br /&gt;
The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: the generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary/blurred/distorted way it is displayed.&lt;br /&gt;
&lt;br /&gt;
===Technical approach===&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
The current experiments analyze the incoming live audio data via semantic analysis (timbreID for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear some waves on a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments, to be displaced by new pixel data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57907</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57907"/>
		<updated>2013-05-23T19:03:23Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Links, Literature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;These are just some first thoughts on a project idea. Of course, there is still plenty of room for changes or for merging with other project ideas.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we’ve never been to before?&lt;br /&gt;
How have new media changed that way of thinking?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, randomly assembled environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
My idea is to play with these thoughts and build an immersive space that can be explored by visitors in Weimar. I want the visitor to be confronted with a new, made-up space. I’m thinking of an audiovisual environment that is fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, taken at locations close to the current position of the recording device&lt;br /&gt;
&lt;br /&gt;
I want to discover new relations between San Diego soundscapes, Google/Panoramio image data, and the associations each visitor forms on their own once the images are shown only in a fragmentary way.&lt;br /&gt;
&lt;br /&gt;
The plan is to play back and analyze (FFT) the incoming live audio data (potential latencies are not a problem) and use it to trigger a generative collage of the picture footage that is found on the internet. I am thinking of designing some algorithms that put together different parts of images in relation to the sonic events. So maybe if you hear some waves on a San Diego beach or some car traffic, pixels from images taken near the recording device will appear in a wavelike movement, etc...&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
* Ryoichi Kurokawa: [https://vimeo.com/31319154] The impressive audiovisual installation &amp;quot;rheo&amp;quot; shows some interesting correspondences between sound and pixel processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57906</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57906"/>
		<updated>2013-05-23T19:00:09Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;These are just some first thoughts on a project idea. Of course, there is still plenty of room for changes or for merging with other project ideas.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we’ve never been to before?&lt;br /&gt;
How have new media changed that way of thinking?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, randomly assembled environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
My idea is to play with these thoughts and build an immersive space that can be explored by visitors in Weimar. I want the visitor to be confronted with a new, made-up space. I’m thinking of an audiovisual environment that is fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, taken at locations close to the current position of the recording device&lt;br /&gt;
&lt;br /&gt;
I want to discover new relations between San Diego soundscapes, Google/Panoramio image data, and the associations each visitor forms on their own once the images are shown only in a fragmentary way.&lt;br /&gt;
&lt;br /&gt;
The plan is to play back and analyze (FFT) the incoming live audio data (potential latencies are not a problem) and use it to trigger a generative collage of the picture footage that is found on the internet. I am thinking of designing some algorithms that put together different parts of images in relation to the sonic events. So maybe if you hear some waves on a San Diego beach or some car traffic, pixels from images taken near the recording device will appear in a wavelike movement, etc...&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
* [[../Kevin/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Immersive_Collage_Processing-Chain.jpg&amp;diff=57905</id>
		<title>File:Immersive Collage Processing-Chain.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Immersive_Collage_Processing-Chain.jpg&amp;diff=57905"/>
		<updated>2013-05-23T18:58:10Z</updated>

		<summary type="html">&lt;p&gt;Palex: uploaded a new version of &amp;amp;quot;File:Immersive Collage Processing-Chain.jpg&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Immersive_Collage_Processing-Chain.jpg&amp;diff=57904</id>
		<title>File:Immersive Collage Processing-Chain.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Immersive_Collage_Processing-Chain.jpg&amp;diff=57904"/>
		<updated>2013-05-23T18:55:37Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57544</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57544"/>
		<updated>2013-05-06T19:03:43Z</updated>

		<summary type="html">&lt;p&gt;Palex: /* Idea */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;These are just some first thoughts on a project idea. Of course, there is still plenty of room for changes or for merging with other project ideas.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we’ve never been to before?&lt;br /&gt;
How have new media changed that way of thinking?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, randomly assembled environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
My idea is to play with these thoughts and build an immersive space that can be explored by visitors in Weimar. I want the visitor to be confronted with a new, made-up space. I’m thinking of an audiovisual environment that is fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, taken at locations close to the current position of the recording device&lt;br /&gt;
&lt;br /&gt;
I want to discover new relations between San Diego soundscapes, Google/Panoramio image data, and the associations each visitor forms on their own once the images are shown only in a fragmentary way.&lt;br /&gt;
&lt;br /&gt;
The plan is to play back and analyze (FFT) the incoming live audio data (potential latencies are not a problem) and use it to trigger a generative collage of the picture footage that is found on the internet. I am thinking of designing some algorithms that put together different parts of images in relation to the sonic events. So maybe if you hear some waves on a San Diego beach or some car traffic, pixels from images taken near the recording device will appear in a wavelike movement, etc...&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments.&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57515</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57515"/>
		<updated>2013-05-06T16:32:05Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;These are just some first thoughts on a project idea. Of course, there is still plenty of room for changes or for merging with other project ideas.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we’ve never been to before?&lt;br /&gt;
How have new media changed that way of thinking?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, randomly assembled environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
My idea is to play with these thoughts and build an immersive space that can be explored by visitors in Weimar. I want the visitor to be confronted with a new, made-up space. I’m thinking of an audiovisual environment that is fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* a 4-channel (live) audio stream from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, taken at locations close to the current position of the recording device&lt;br /&gt;
&lt;br /&gt;
I want to discover new relations between San Diego soundscapes, Google/Panoramio image data, and the associations each visitor forms on their own once the images are shown only in a fragmentary way.&lt;br /&gt;
&lt;br /&gt;
The plan is to play back and analyze (FFT) the incoming live audio data (potential latencies are not a problem) and use it to trigger a generative collage of the picture footage that is found on the internet. I am thinking of designing some algorithms that put together different parts of images in relation to the sonic events. So maybe if you hear some waves on a San Diego beach or some car traffic, pixels from images taken near the recording device will appear in a wavelike movement, etc...&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Some first visualizations===&lt;br /&gt;
&lt;br /&gt;
[[File:360screenwall.jpg|left|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
[[File:WeimarCollagePanoramio1.jpg|left|thumb|collage of some Panoramio pictures of Weimar. Still far too clumpy, and not yet in the fine style I would like, since I quickly did this one with Photoshop ;)&lt;br /&gt;
]] &lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57504</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57504"/>
		<updated>2013-05-06T16:02:49Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;These are just some first thoughts on a project idea. Of course, there is still plenty of room for changes or for merging with other project ideas.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we’ve never been to before?&lt;br /&gt;
How have new media changed that way of thinking?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto it; the “unknown” place thus becomes a “mapping” supplied by our own mind.&lt;br /&gt;
&lt;br /&gt;
But thanks to internet services like Google Maps and Google Street View, it is no longer a big deal to demystify and explore unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, randomly assembled environment of San Diego?&lt;br /&gt;
&lt;br /&gt;
My idea is to play with these thoughts and build an immersive space that can be explored by visitors in Weimar. I want the visitor to be confronted with a new, made-up space. I’m thinking of an audiovisual environment that is fed by two different sorts of data:&lt;br /&gt;
&lt;br /&gt;
* 4-channel live audio from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego, taken at locations close to the current position of the recording device&lt;br /&gt;
&lt;br /&gt;
I want to discover new relations between San Diego soundscapes, Google/Panoramio image data, and the associations each visitor forms on their own once the images are shown only in a fragmentary way.&lt;br /&gt;
&lt;br /&gt;
The plan is to play back and analyze (FFT) the incoming live audio data (potential latencies are not a problem) and use it to trigger a generative collage of the picture footage that is found on the internet. I am thinking of designing some algorithms that put together different parts of images in relation to the sonic events. So maybe if you hear some waves on a San Diego beach or some car traffic, pixels from images taken near the recording device will appear in a wavelike movement, etc...&lt;br /&gt;
The different parts of the images would then fade out gently after a few moments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Some first visualizations===&lt;br /&gt;
&lt;br /&gt;
[[File:360screenwall.jpg|left|thumb|some first thoughts on the environment]]&lt;br /&gt;
&lt;br /&gt;
[[File:WeimarCollagePanoramio1.jpg|left|thumb|collage of some Panoramio pictures of Weimar. Still far too clumpy and not yet in the nice, fine style I would like them to appear in, as I quickly did this one with Photoshop ;)&lt;br /&gt;
]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:WeimarCollagePanoramio1.jpg&amp;diff=57496</id>
		<title>File:WeimarCollagePanoramio1.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:WeimarCollagePanoramio1.jpg&amp;diff=57496"/>
		<updated>2013-05-06T15:38:35Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:360screenwall.jpg&amp;diff=57495</id>
		<title>File:360screenwall.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:360screenwall.jpg&amp;diff=57495"/>
		<updated>2013-05-06T15:37:53Z</updated>

		<summary type="html">&lt;p&gt;Palex: some quick sketch of a 360° projection environment&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
some quick sketch of a 360° projection environment&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57494</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Immersive Collage</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Immersive_Collage&amp;diff=57494"/>
		<updated>2013-05-06T15:35:33Z</updated>

		<summary type="html">&lt;p&gt;Palex: Created page with &amp;quot;===Idea===  How do we think of places that we‘ve never been to before? How have the new media changed that way of thinking? How do we imagine a place in San Diego, if we only h...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Idea===&lt;br /&gt;
&lt;br /&gt;
How do we think of places that we‘ve never been to before?&lt;br /&gt;
How have the new media changed that way of thinking?&lt;br /&gt;
How do we imagine a place in San Diego, if we only hear its soundscape?&lt;br /&gt;
&lt;br /&gt;
First of all, when we hear the sounds of an unknown place, we project images from our own experiences onto these „unknown“ places, which thereby become „mappings“ produced by our own minds.&lt;br /&gt;
&lt;br /&gt;
But due to internet services like Google Maps or Google Street View, it&#039;s no longer a big deal to demystify and discover unknown places via the internet. A massive amount of available image data tells us almost everything about far-away places.&lt;br /&gt;
Still, the place we experience via the internet is only a fragmentary space.&lt;br /&gt;
&lt;br /&gt;
So, what happens if we are confronted with a new, random environment from San Diego?&lt;br /&gt;
&lt;br /&gt;
My idea is to play with these thoughts and build an immersive space that visitors in Weimar can discover. I want the visitor to be confronted with a new, made-up space. I&#039;m thinking of an audiovisual environment that is fed by two different sorts of data: &lt;br /&gt;
&lt;br /&gt;
* 4-channel live audio from San Diego&lt;br /&gt;
* Google Maps/Panoramio images from San Diego that were shot at a location close to the current position of the recording device&lt;br /&gt;
&lt;br /&gt;
I want to discover new relations between San Diego soundscapes, Google/Panoramio image data and the associations each visitor forms on their own once the images are shown only in fragments.&lt;br /&gt;
&lt;br /&gt;
The plan is to play back and analyze (FFT) the incoming live audio data (potential latencies are not a problem) and to use it to trigger a generative collage of the picture footage found on the internet. I am thinking of designing some nice algorithms that put together different parts of images in relation to the sonic events. So maybe, if you hear some waves at a San Diego beach or some car traffic, pixels from images located near the recording device will appear in a wavelike movement etc...&lt;br /&gt;
The different parts of the images would then fade out gently after some moments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Some first visualizations===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Participants===&lt;br /&gt;
&lt;br /&gt;
* [[../Alex/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Links, Literature===&lt;br /&gt;
more to come&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces&amp;diff=57492</id>
		<title>EKK:LoFi Sounds in HiFi Spaces</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces&amp;diff=57492"/>
		<updated>2013-05-06T15:31:10Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[:Category:Projektmodul|Projektmodul]]&lt;br /&gt;
&#039;&#039;Lecturers:&#039;&#039; [[Robin Minard]], [[/miller|Miller Puckette]], Katharina Rosenberger, [[/Shahrokh|Shahrokh Yadegari]], [[Ludger Hennig]], [[Max Neupert]]&lt;br /&gt;
&#039;&#039;Credits:&#039;&#039; 18 [[ECTS]], 16 [[SWS]]&lt;br /&gt;
&#039;&#039;Date:&#039;&#039; Monday, 18:00 until 21:30 h (due to the time difference to San Diego)&lt;br /&gt;
&#039;&#039;Venue:&#039;&#039; [[Coudraystraße 13a]], SeaM Werkstattstudio&lt;br /&gt;
&#039;&#039;First meeting:&#039;&#039; 15.04.2013&lt;br /&gt;
&lt;br /&gt;
==Description==&lt;br /&gt;
The sensation of space through spatialized sound may be influenced by sources that differ in their quality and quantity. Within this project, we will investigate ways to modify, expand or augment how space can be experienced through sound. The outcome may be, for instance, an electroacoustic composition, a simulation of spatial perception, sonic architecture or a set-design as a virtual or performative environment.&lt;br /&gt;
&lt;br /&gt;
This project is the beginning of a long-term cooperation between the Bauhaus-Universität Weimar and the University of California in San Diego. Partners in Weimar are Prof. Robin Minard and Ludger Hennig from the Studio for Electroacoustic Music (SeaM) and Max Neupert (Medial Environments). Partners in San Diego are Prof. Miller Puckette (Computer Music), Prof. Katharina Rosenberger (Music) and Prof. Shahrokh Yadegari (Theatre and Dance). Telematic classes will allow students from both faculties to work together. Classes will take place in Weimar in the evening, mid-morning in San Diego. The cooperation is funded by the German Academic Exchange Service (DAAD).&lt;br /&gt;
&lt;br /&gt;
===German description===&lt;br /&gt;
Die Wahrnehmung von Raum durch verräumlichten Klang kann durch Klangquellen beeinflusst werden, welche sich in ihrer Qualität und Quantität unterscheiden. In diesem Projekt werden wir untersuchen, auf welche Art wir die Erfahrung von Raum durch den Klang verändern, erweitern oder verbessern können. Das Ergebnis kann beispielsweise eine elektroakustische Komposition, eine Simulation einer räumlichen Wahrnehmung, klangliche Architektur oder ein Bühnenbild als virtuelle oder performative Umgebung sein.&lt;br /&gt;
&lt;br /&gt;
Dieses Projekt ist der Beginn einer langfristigen Zusammenarbeit zwischen der Bauhaus-Universität und der Universität von Kalifornien in San Diego. Partner aus Weimar sind Prof. Robin Minard und Ludger Hennig vom Studio für Elektroakustische Musik (SeaM) und Max Neupert (Gestaltung medialer Umgebungen). Partner aus San Diego sind Prof. Miller Puckette (Computermusik), Prof. Katharina Rosenberger (Musik) und Prof. Shahrokh Yadegari (Theater und Tanz). Gemeinsamer Unterricht mit einer Videoverbindung wird es den Studierenden beider Universitäten ermöglichen, zusammenzuarbeiten. Der Unterricht wird in Weimar am Abend und vormittags in San Diego stattfinden. Die Zusammenarbeit wird vom Deutschen Akademischen Austauschdienst (DAAD) gefördert.&lt;br /&gt;
&lt;br /&gt;
===Werk/Fachmodule===&lt;br /&gt;
We recommend the Werk/Fachmodul [[GMU:Dataflow II]] and/or Ludger Hennig&#039;s soundscapes class along with this project module.&lt;br /&gt;
&lt;br /&gt;
==Assignments==&lt;br /&gt;
# [[/Making connection/]] due 04-22&lt;br /&gt;
# [[/Mingling sounds/]] due 04-29&lt;br /&gt;
# [[/Textualize ideas/]] due 05-06&lt;br /&gt;
# [[/Proposing concepts/]] due 05-13&lt;br /&gt;
&lt;br /&gt;
==Evaluation==&lt;br /&gt;
Active participation, presentation, artistic examination, documentation, edits in the wiki.&lt;br /&gt;
&lt;br /&gt;
==Participants==&lt;br /&gt;
===Weimar===&lt;br /&gt;
* [[/Rico/]]&lt;br /&gt;
* [[/Alex/]]&lt;br /&gt;
* [[/Alice/]]&lt;br /&gt;
* [[/Jonas/]]&lt;br /&gt;
* [[/Andre/]]&lt;br /&gt;
* [[/Paul/]]&lt;br /&gt;
* [[/Ben/]]&lt;br /&gt;
* [[/Ludwigberger/]]&lt;br /&gt;
* [[/TimH/]]&lt;br /&gt;
* [[/Dorian/]]&lt;br /&gt;
* [[/Clemens/]]&lt;br /&gt;
* [[/Ives/]]&lt;br /&gt;
* [[/Cornelio/]]&lt;br /&gt;
* [[/Johannes/]]&lt;br /&gt;
&lt;br /&gt;
* [[/Max/]] (you may use this as a template for your self-introduction)&lt;br /&gt;
&lt;br /&gt;
===San Diego===&lt;br /&gt;
* [[/Brendan Bernhardt Gaffney/]]&lt;br /&gt;
* [[/Kevin/]]&lt;br /&gt;
* [[/Melanie/]]&lt;br /&gt;
* [[/Jeffrey/]]&lt;br /&gt;
* [[/Bo/]]&lt;br /&gt;
* [[/Andy/]]&lt;br /&gt;
* [[/Jamilah/]]&lt;br /&gt;
* [[/Marcelo/]]&lt;br /&gt;
* [[/Ryan Welsh/]]&lt;br /&gt;
* [[/ElisabetCurbelo/]]&lt;br /&gt;
* [[/Joe Cantrell/]]&lt;br /&gt;
* [[/czyskows/]]&lt;br /&gt;
* [[/NickDrashner/]]&lt;br /&gt;
&lt;br /&gt;
* [[/miller/]]&lt;br /&gt;
* [[/Shahrokh/]]&lt;br /&gt;
* [[/CarolineMiller/]]&lt;br /&gt;
&lt;br /&gt;
==Collaborative Projects==&lt;br /&gt;
* [[/Convergence/]]&lt;br /&gt;
* [[/Wind stream/]]&lt;br /&gt;
* [[/Body Instrument/]]&lt;br /&gt;
* [[/The Working System/]]&lt;br /&gt;
* [[/Expedition/]]&lt;br /&gt;
* [[/Immersive Collage/]]&lt;br /&gt;
&lt;br /&gt;
==Syllabus==&lt;br /&gt;
Dates below are given as MONTH/DAY (not ISO 8601 order).&lt;br /&gt;
# 04/01 Only in San Diego (Easter Monday is holiday in Germany; Robin, Ludger and Max join via Skype/Google+ Hangout) faculty presentations about technical means and/or artistic issues. Introduction to the wiki.&lt;br /&gt;
# 04/08 Only in San Diego – faculty presentations continued. Miller: Raspberry Pi as an audio and video platform. Shahrokh: Mugic protocol; video walls&lt;br /&gt;
# 04/15 First joint class&lt;br /&gt;
# 04/22 Robin, Ludger and Max are in San Diego&lt;br /&gt;
# 04/29&lt;br /&gt;
# 05/06&lt;br /&gt;
# 05/13&lt;br /&gt;
# 05/20&lt;br /&gt;
# 05/27 Only in Weimar (Memorial Day in the U.S.)&lt;br /&gt;
# 06/03&lt;br /&gt;
# 06/10&lt;br /&gt;
# 06/17&lt;br /&gt;
# 06/24&lt;br /&gt;
# 07/01&lt;br /&gt;
&lt;br /&gt;
==SAM connection==&lt;br /&gt;
We have two dedicated machines to facilitate the connection between the two places. In Weimar it is remote@141.54.193.23 (Linux Mint); in San Diego it is ideas@ideas-mac.ucsd.edu (OS X).&lt;br /&gt;
&lt;br /&gt;
==Literature==&lt;br /&gt;
* Föllmer, Golo: Netzmusik, Wolke Verlag ISBN 3936000336&lt;br /&gt;
* LaBelle, Brandon: Background Noise ISBN 9780826418456&lt;br /&gt;
* LaBelle, Brandon: Site of Sound: Or Architecture and the Ear ISBN 978-0982743904&lt;br /&gt;
* Minard, Robin: Silent Music ISBN 978-3933257130&lt;br /&gt;
* Minard, Robin: Sound Installation Art&lt;br /&gt;
* Leitner, Bernard: Sound:Space ISBN 978-3893224449&lt;br /&gt;
&lt;br /&gt;
Pd books:&lt;br /&gt;
{{Template:PdBooks}}&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
* [https://groups.google.com/forum/#!forum/buw-ucsd-class Google Group BUW-UCSD-class]&lt;br /&gt;
* [https://puredata.info/docs/raspberry-pi Pure Data on the Raspberry Pi]&lt;br /&gt;
&lt;br /&gt;
===Sending data over the network===&lt;br /&gt;
* [[OSC]]&lt;br /&gt;
* netsend/netreceive, netserver/netclient. In Pd, look at the &amp;lt;tt&amp;gt;mrpeach&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;iemnet&amp;lt;/tt&amp;gt; libraries&lt;br /&gt;
* [http://ivl.calit2.net/wiki/index.php/StreamingGraphics Mugic]&lt;br /&gt;
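As a concrete illustration of what such network data looks like on the wire, here is a sketch of the OSC 1.0 message layout in plain Python (standard library only). The address and values are made-up examples; in practice one would simply use the Pd objects listed above.

```python
import struct

def osc_string(s):
    """OSC strings are ASCII, NUL-terminated, padded to a multiple of 4 bytes."""
    data = s.encode("ascii") + b"\x00"
    return data + b"\x00" * ((-len(data)) % 4)

def osc_message(address, *args):
    """Encode one OSC message: address, type-tag string, big-endian arguments."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack("!f", a)  # 32-bit big-endian float
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack("!i", a)  # 32-bit big-endian int
        else:
            tags += "s"
            payload += osc_string(str(a))
    return osc_string(address) + osc_string(tags) + payload

# e.g. a hypothetical event; sending it is one UDP datagram:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, (host, port))
msg = osc_message("/sandiego/pitch", 440)
```

The 4-byte alignment is why latency-tolerant tools can parse these messages so cheaply: every field starts on a word boundary.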
&lt;br /&gt;
===Sending Audio and Video over the Internet===&lt;br /&gt;
* [[wikipedia:Audio Video Bridging]] AVB is a set of standards for audio and video transmission by the IEEE&lt;br /&gt;
&lt;br /&gt;
====Intended for Communication====&lt;br /&gt;
* [http://www.skype.com Skype] [[File:Icon newtux 16x16.png]] [[File:Icon android 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
* [https://tools.google.com/dlpage/hangoutplugin Google+ Hangout] [[File:Icon newtux 16x16.png]] [[File:Icon android 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]] (Seems to work best in Chrome/Chromium)&lt;br /&gt;
* [http://www.oovoo.com ooVoo] [[File:Icon android 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
* [http://www.voxox.com voxox] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
* [http://line.naver.jp/en LINE] [[File:Icon android 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
* [http://www.teamviewer.com TeamViewer] [[File:Icon newtux 16x16.png]] [[File:Icon android 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
* [http://www.teamspeak.com TeamSpeak] [[File:Icon newtux 16x16.png]] [[File:Icon android 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
&lt;br /&gt;
Teleconferencing&lt;br /&gt;
* [http://www.polycom.com Polycom] [[File:Icon windows8 16x16.png]] System available in Weimar&lt;br /&gt;
* [http://www.radvision.com/Products/Video-Conference-Systems Radvision Scopia] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
* [http://www.lifesize.com LifeSize] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]] System available in San Diego&lt;br /&gt;
&lt;br /&gt;
Unidirectional only (aka streaming)&lt;br /&gt;
* [http://giss.tv Global Independent Streaming Support], [http://giss.tv/wiki/index.php/Main_Page G.I.S.S wiki]&lt;br /&gt;
* [http://www.ustream.tv Ustream]&lt;br /&gt;
* [http://www.justin.tv justin.tv]&lt;br /&gt;
* [http://www.shoutcast.com/ SHOUTcast] also available as a VST plugin&lt;br /&gt;
&lt;br /&gt;
====Intended for Performance/Creative Use====&lt;br /&gt;
* [http://code.sat.qc.ca/redmine/projects/scenic/wiki/Scenic_fr Scenic] Audio and video in high quality, as well as MIDI [[File:Icon newtux 16x16.png]]&lt;br /&gt;
* [http://code.google.com/p/jacktrip JackTrip] [[File:Icon newtux 16x16.png]] [[File:Icon apple 16x16.png]] a network extension for [http://jackaudio.org/ Jack Audio Connection Kit] [[File:Icon newtux 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
* [http://gstreamer.freedesktop.org GStreamer] and bindings for Pd: [http://umlaeute.mur.at/Members/zmoelnig/projects/pdgst PdGst] [[File:Icon newtux 16x16.png]]&lt;br /&gt;
* netsend~ for Max/MSP and Pd: [http://www.nullmedium.de/dev/netsend~ Olaf Matthes&#039; Original Version] [[File:Icon newtux 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]], [http://www.olivierguillerminet.com/portfolio/netsend Olivier Guillerminet&#039;s/Remu&#039;s Fork] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
* [http://code.google.com/p/streaming-audio Streaming Audio Manager (SAM)] by [https://ccrma.stanford.edu/~danielsm/research.html Michelle Daniels] [[File:Icon newtux 16x16.png]] [[File:Icon apple 16x16.png]] [[File:Icon windows8 16x16.png]]&lt;br /&gt;
&lt;br /&gt;
===Performing Remotely===&lt;br /&gt;
* [http://www.netpd.org NetPd] &lt;br /&gt;
* [http://at.or.at/serendipd SerendiPd] collaborative patching&lt;br /&gt;
&lt;br /&gt;
[[Category:SS13]]&lt;br /&gt;
[[Category:Projektmodul]]&lt;br /&gt;
[[Category:Robin Minard]]&lt;br /&gt;
[[Category:Miller Puckette]]&lt;br /&gt;
[[Category:Katharina Rosenberger]]&lt;br /&gt;
[[Category:Shahrokh Yadegari]]&lt;br /&gt;
[[Category:Ludger Hennig]]&lt;br /&gt;
[[Category:Max Neupert]]&lt;br /&gt;
[[Category:Pure Data]]&lt;br /&gt;
[[Category:Performance]]&lt;br /&gt;
[[Category:Community]]&lt;br /&gt;
[[Category:Max/MSP]]&lt;br /&gt;
[[Category:Telematic]]&lt;br /&gt;
[[Category:Coorperation]]&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57379</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57379"/>
		<updated>2013-04-29T17:17:44Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and in regards to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier mediums, only telephony was characterized by two-way communications.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates the connections of people but also their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhan&#039;s thoughts are still interesting. Since the 20th century, physically ascertainable reality has been changing. Two technological innovations can be seen as reasons for this, recognizable in the development of mass media as well as in pieces of contemporary art: first, the increase of velocity, which is now beyond the capabilities of human cognition; and second, digitalization, the conversion of an analogue &amp;quot;reality&amp;quot; to a code-based (and therefore arbitrarily changeable) level. According to McLuhan, the message of every medium is &amp;quot;the change of scale or pace or pattern&amp;quot; that a new invention or innovation &amp;quot;introduces into human affairs.&amp;quot; In other words: media have the function of translating experiences into new forms; they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of dealing with reality and of shaping the experience of old and new phenomena in an artistic way are of major importance to my own works. McLuhan&#039;s thoughts also matter for the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. It is, due to the transcending of physicality and its implications, much more the transgression of the mechanistic system in general. Thus, the project title &amp;quot;LoFi Sounds in HiFi Spaces&amp;quot; already implies two cores of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound, or the bridged space we create.&lt;br /&gt;
Considering our joint meetings mediated via Skype, or small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of dealing with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or Raspberry Pi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and these are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; As Kevin already observed very well, we expect a correlation between the things we see and the things we hear. The perception of our natural environment has always been multimodal; it has always occurred through multiple sensory organs at once. That is because our natural environment does not appear in isolated and independent modalities; rather, it appears as a unit of optical, acoustical and/or other stimulations. The &amp;quot;non-natural&amp;quot; ability of split human perception is only possible through an enormous effort of abstraction. Works like those of Bill Fontana therefore offer the possibility of multimodal perception, but at the same time play with our ingrained habit of trying to put everything in relation.&lt;br /&gt;
I think that the locational switch of sound gives the place a different emotional shape and therefore influences our rational and visual perception. It challenges our habits and makes us reinterpret the occurring phenomena, which is an ideal mode of artistic examination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted shape our expectation and the perception of it (loudspeaker, telephone, alarm-clock), in other words, what if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; The need for re-evaluation is a nice point: it means that we pay much more attention to something than we would have otherwise.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the mingled videos:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash type=vimeo&amp;gt;65041245|437|236&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the original videos:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash type=vimeo&amp;gt;65041243|437|236&amp;lt;/videoflash&amp;gt;&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57376</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57376"/>
		<updated>2013-04-29T17:08:43Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and in regards to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier mediums, only telephony was characterized by two-way communications.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates the connections of people but also their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhan&#039;s thoughts are still interesting. Since the 20th century, physically ascertainable reality has been changing. Two technological innovations can be seen as reasons for this, recognizable in the development of mass media as well as in pieces of contemporary art: first, the increase of velocity, which is now beyond the capabilities of human cognition; and second, digitalization, the conversion of an analogue &amp;quot;reality&amp;quot; to a code-based (and therefore arbitrarily changeable) level. According to McLuhan, the message of every medium is &amp;quot;the change of scale or pace or pattern&amp;quot; that a new invention or innovation &amp;quot;introduces into human affairs.&amp;quot; In other words: media have the function of translating experiences into new forms; they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of dealing with reality and of shaping the experience of old and new phenomena in an artistic way are of major importance to my own works. McLuhan&#039;s thoughts also matter for the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. It is, due to the transcending of physicality and its implications, much more the transgression of the mechanistic system in general. Thus, the project title &amp;quot;LoFi Sounds in HiFi Spaces&amp;quot; already implies two cores of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound, or the bridged space we create.&lt;br /&gt;
Considering our joint meetings mediated via Skype, or small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of dealing with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or Raspberry Pi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and these are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; As Kevin already observed very well, we expect a correlation between the things we see and the things we hear. The perception of our natural environment has always been multimodal; it has always occurred through multiple sensory organs at once. That is because our natural environment does not appear in isolated and independent modalities; rather, it appears as a unit of optical, acoustical and/or other stimulations. The &amp;quot;non-natural&amp;quot; ability of split human perception is only possible through an enormous effort of abstraction. Works like those of Bill Fontana therefore offer the possibility of multimodal perception, but at the same time play with our ingrained habit of trying to put everything in relation.&lt;br /&gt;
I think that the locational switch of sound gives the place a different emotional shape and therefore influences our rational and visual perception. It challenges our habits and makes us reinterpret the occurring phenomena, which is an ideal mode of artistic examination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted shape our expectation and the perception of it (loudspeaker, telephone, alarm-clock), in other words, what if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; The need for re-evaluation is a nice point, because in re-evaluating we pay much more attention to something than we would have otherwise.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See the mingled videos:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash type=vimeo&amp;gt;65041245|437|236&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.dropbox.com/s/at8acfblodwj8om/IlmarkparkGliderport.mp4 Gliderport and Ilmpark]&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57349</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57349"/>
		<updated>2013-04-29T15:19:42Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and in regards to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier mediums, only telephony was characterized by two-way communications.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates the connections of people but also their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhan&#039;s thoughts are still interesting. Since the 20th century, physically ascertainable reality has been changing. Two technological innovations can be seen as the reasons for this, recognizable in the development of mass media as well as in works of contemporary art: first, the increase of velocity, which now exceeds the capacities of human cognition; and second, digitalization, the conversion of an analogue &quot;reality&quot; into a code-based (and therefore arbitrarily changeable) form. According to McLuhan, the message of every medium is &quot;the change of scale or pace or pattern&quot; that a new invention or innovation &quot;introduces into human affairs.&quot; In other words, media translate experiences into new forms; they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of engaging with reality and of artistically shaping the experience of old and new phenomena are of major importance to my own work. McLuhan&#039;s thoughts also matter for the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. By exceeding physicality and its implications, it is much more a transgression of the mechanistic system in general. Thus, the project title &quot;LoFi Sounds in HiFi Spaces&quot; already implies two cores of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound or the bridged space we create.&lt;br /&gt;
Considering our joint meetings mediated via Skype, or small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of engaging with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or raspberryPi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and they are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; As Kevin has already observed, we expect a correlation between the things we see and the things we hear. The perception of our natural environment has always been multimodal; it has always occurred through multiple sensory organs at once. That is because our natural environment does not appear in isolated, independent modalities; rather, it appears as a unity of optical, acoustical, and other stimuli. The &quot;non-natural&quot; ability to split human perception is only possible through an enormous effort of abstraction. Works like those of Bill Fontana therefore offer the possibility of a multimodal perception, but at the same time play with our ingrained habit of trying to relate everything to everything else.&lt;br /&gt;
I think that the locational switch of sound gives the place a different emotional shape and thereby influences our rational and visual perception. It challenges our habits and makes us reinterpret the occurring phenomena, which is an ideal mode of artistic examination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted (loudspeaker, telephone, alarm clock) shape our expectation and perception of it? In other words, if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; The need for re-evaluation is a nice point, because in re-evaluating we pay much more attention to something than we would have otherwise.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.dropbox.com/s/uh9e6gqr3qx56ab/gliderportIlmparkModified.mp4 see the mingled videos: Ilmpark and Gliderport]&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57348</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57348"/>
		<updated>2013-04-29T15:18:48Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and in regards to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier mediums, only telephony was characterized by two-way communications.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates the connections of people but also their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhan&#039;s thoughts are still interesting. Since the 20th century, physically ascertainable reality has been changing. Two technological innovations can be seen as the reasons for this, recognizable in the development of mass media as well as in works of contemporary art: first, the increase of velocity, which now exceeds the capacities of human cognition; and second, digitalization, the conversion of an analogue &quot;reality&quot; into a code-based (and therefore arbitrarily changeable) form. According to McLuhan, the message of every medium is &quot;the change of scale or pace or pattern&quot; that a new invention or innovation &quot;introduces into human affairs.&quot; In other words, media translate experiences into new forms; they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of engaging with reality and of artistically shaping the experience of old and new phenomena are of major importance to my own work. McLuhan&#039;s thoughts also matter for the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. By exceeding physicality and its implications, it is much more a transgression of the mechanistic system in general. Thus, the project title &quot;LoFi Sounds in HiFi Spaces&quot; already implies two cores of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound or the bridged space we create.&lt;br /&gt;
Considering our joint meetings mediated via Skype, or small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of engaging with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or raspberryPi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and they are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; As Kevin has already observed, we expect a correlation between the things we see and the things we hear. The perception of our natural environment has always been multimodal; it has always occurred through multiple sensory organs at once. That is because our natural environment does not appear in isolated, independent modalities; rather, it appears as a unity of optical, acoustical, and other stimuli. The &quot;non-natural&quot; ability to split human perception is only possible through an enormous effort of abstraction. Works like those of Bill Fontana therefore offer the possibility of a multimodal perception, but at the same time play with our ingrained habit of trying to relate everything to everything else.&lt;br /&gt;
I think that the locational switch of sound gives the place a different emotional shape and thereby influences our rational and visual perception. It challenges our habits and makes us reinterpret the occurring phenomena, which is an ideal mode of artistic examination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted (loudspeaker, telephone, alarm clock) shape our expectation and perception of it? In other words, if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; The need for re-evaluation is a nice point, because in re-evaluating we pay much more attention to something than we would have otherwise.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.dropbox.com/s/uh9e6gqr3qx56ab/gliderportIlmparkModified.mp4 see the mingled videos: Ilmpark and Gliderport]&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57345</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57345"/>
		<updated>2013-04-29T14:52:15Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and in regards to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier mediums, only telephony was characterized by two-way communications.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates the connections of people but also their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhan&#039;s thoughts are still interesting. Since the 20th century, physically ascertainable reality has been changing. Two technological innovations can be seen as the reasons for this, recognizable in the development of mass media as well as in works of contemporary art: first, the increase of velocity, which now exceeds the capacities of human cognition; and second, digitalization, the conversion of an analogue &quot;reality&quot; into a code-based (and therefore arbitrarily changeable) form. According to McLuhan, the message of every medium is &quot;the change of scale or pace or pattern&quot; that a new invention or innovation &quot;introduces into human affairs.&quot; In other words, media translate experiences into new forms; they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of engaging with reality and of artistically shaping the experience of old and new phenomena are of major importance to my own work. McLuhan&#039;s thoughts also matter for the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. By exceeding physicality and its implications, it is much more a transgression of the mechanistic system in general. Thus, the project title &quot;LoFi Sounds in HiFi Spaces&quot; already implies two cores of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound or the bridged space we create.&lt;br /&gt;
Considering our joint meetings mediated via Skype, or small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of engaging with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or raspberryPi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and they are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; As Kevin has already observed, we expect a correlation between the things we see and the things we hear. The perception of our natural environment has always been multimodal; it has always occurred through multiple sensory organs at once. That is because our natural environment does not appear in isolated, independent modalities; rather, it appears as a unity of optical, acoustical, and other stimuli. The &quot;non-natural&quot; ability to split human perception is only possible through an enormous effort of abstraction. Works like those of Bill Fontana therefore offer the possibility of a multimodal perception, but at the same time play with our ingrained habit of trying to relate everything to everything else.&lt;br /&gt;
I think that the locational switch of sound gives the place a different emotional shape and thereby influences our rational and visual perception. It challenges our habits and makes us reinterpret the occurring phenomena, which is an ideal mode of artistic examination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted (loudspeaker, telephone, alarm clock) shape our expectation and perception of it? In other words, if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; The need for re-evaluation is a nice point, because in re-evaluating we pay much more attention to something than we would have otherwise.&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57344</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57344"/>
		<updated>2013-04-29T14:51:37Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and in regards to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier mediums, only telephony was characterized by two-way communications.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates the connections of people but also their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhan&#039;s thoughts are still interesting. Since the 20th century, physically ascertainable reality has been changing. Two technological innovations can be seen as the reasons for this, recognizable in the development of mass media as well as in works of contemporary art: first, the increase of velocity, which now exceeds the capacities of human cognition; and second, digitalization, the conversion of an analogue &quot;reality&quot; into a code-based (and therefore arbitrarily changeable) form. According to McLuhan, the message of every medium is &quot;the change of scale or pace or pattern&quot; that a new invention or innovation &quot;introduces into human affairs.&quot; In other words, media translate experiences into new forms; they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of engaging with reality and of artistically shaping the experience of old and new phenomena are of major importance to my own work. McLuhan&#039;s thoughts also matter for the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. By exceeding physicality and its implications, it is much more a transgression of the mechanistic system in general. Thus, the project title &quot;LoFi Sounds in HiFi Spaces&quot; already implies two cores of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound or the bridged space we create.&lt;br /&gt;
Considering our joint meetings mediated via Skype, or small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of engaging with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or raspberryPi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and they are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; As Kevin has already observed, we expect a correlation between the things we see and the things we hear. The perception of our natural environment has always been multimodal; it has always occurred through multiple sensory organs at once. That is because our natural environment does not appear in isolated, independent modalities; rather, it appears as a unity of optical, acoustical, and other stimuli. The &quot;non-natural&quot; ability to split human perception is only possible through an enormous effort of abstraction. Works like those of Bill Fontana therefore offer the possibility of a multimodal perception, but at the same time play with our ingrained habit of trying to relate everything to everything else.&lt;br /&gt;
I think that the locational switch of sound gives the place a different emotional shape and thereby influences our rational and visual perception. It challenges our habits and makes us reinterpret the occurring phenomena, which is an ideal mode of artistic examination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted (loudspeaker, telephone, alarm clock) shape our expectation and perception of it? In other words, if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; The need for re-evaluation is a nice point, because in re-evaluating we pay much more attention to something than we would have otherwise.&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57342</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57342"/>
		<updated>2013-04-29T14:40:33Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and in regards to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier mediums, only telephony was characterized by two-way communications.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates the connections of people but also their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhans thoughts are still interesting. Since the 20th century, the physical ascertainable reality is being changed. As reasons for that one can see two technological innovations, which can be recognized in the development of mass media as well as in pieces of contemporary art: First, it&#039;s the increase of velocity which is now beyond the skills of human cognition, and second, it&#039;s the digitalization, the conversion of an anlogue &amp;quot;reality&amp;quot; towards a code-based (and therefore arbitrarily changeable) level. According to McLuhan, the message of every medium is &amp;quot;the change of scale or pace or pattern&amp;quot; that a new invention or innovation &amp;quot;introduces into human affairs.&amp;quot; In other words: Media have the function to translate experiences into new forms, they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of dealing with reality and of shaping the experience of old and new phenomena in an artistic way are of major importance to my own works. McLuhan&#039;s thoughts are also relevant to the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. Due to the transcending of physicality and its implications, it is much more a transgression of the mechanistic system in general. Thus, the project title &amp;quot;LoFi Sounds in HiFi Spaces&amp;quot; already implies two core concerns of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound or the bridged space we create.&lt;br /&gt;
Considering our joint meetings, which are mediated via Skype, and small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of dealing with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or Raspberry Pi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and these are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039; As Kevin already observed very well, we expect a correlation between the things we see and the things we hear. The perception of our natural environment has always been multimodal; it has always occurred with multiple sensory organs at once. That is due to the fact that our natural environment does not appear in isolated and independent modalities; it appears much more as a unit of optical, acoustical and/or other stimulations. The &amp;quot;non-natural&amp;quot; ability of a split human perception is only possible through an enormous effort of abstraction. Works like those of Bill Fontana therefore offer the possibility of a multimodal perception, but at the same time play with our ingrained habit of trying to put everything in relation.&lt;br /&gt;
I think that the locational switch of sound gives the place a different emotional shape and therefore influences our rational and visual perception. It challenges our habits and makes us reinterpret the occurring phenomena, which is an ideal mode of artistic examination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted (loudspeaker, telephone, alarm clock) shape our expectation and perception of it? In other words, if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57312</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57312"/>
		<updated>2013-04-29T10:54:32Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and with regard to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier media, only telephony was characterized by two-way communication.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates connections not only between people but also between their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhan&#039;s thoughts are still relevant. Since the 20th century, physically ascertainable reality has been undergoing change. Two technological innovations can be seen as reasons for this, and both can be recognized in the development of mass media as well as in works of contemporary art: first, the increase in velocity, which is now beyond the capacity of human cognition, and second, digitalization, the conversion of an analogue &amp;quot;reality&amp;quot; into a code-based (and therefore arbitrarily changeable) form. According to McLuhan, the message of every medium is &amp;quot;the change of scale or pace or pattern&amp;quot; that a new invention or innovation &amp;quot;introduces into human affairs.&amp;quot; In other words: media serve to translate experiences into new forms; they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of dealing with reality and of shaping the experience of old and new phenomena in an artistic way are of major importance to my own works. McLuhan&#039;s thoughts are also relevant to the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. Due to the transcending of physicality and its implications, it is much more a transgression of the mechanistic system in general. Thus, the project title &amp;quot;LoFi Sounds in HiFi Spaces&amp;quot; already implies two core concerns of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound or the bridged space we create.&lt;br /&gt;
Considering our joint meetings, which are mediated via Skype, and small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of dealing with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or Raspberry Pi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and these are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted (loudspeaker, telephone, alarm clock) shape our expectation and perception of it? In other words, if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57311</id>
		<title>EKK:LoFi Sounds in HiFi Spaces/Mingling sounds/Alex Kevin</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=EKK:LoFi_Sounds_in_HiFi_Spaces/Mingling_sounds/Alex_Kevin&amp;diff=57311"/>
		<updated>2013-04-29T10:44:52Z</updated>

		<summary type="html">&lt;p&gt;Palex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How does the sentence “The medium is the message” by Marshall McLuhan apply to your practice? Comment on this quote in the context of your own work and with regard to this transcontinental collaboration, etc.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Kevin:&#039;&#039;&#039; Like radio, telephony, and television before it, the emerging paradigm of screen-based, networked collaboration effectively eliminates distances.  Of those earlier media, only telephony was characterized by two-way communication.  Because of the power of vision, video as a bi-directional medium can bridge space in ways that sound alone can&#039;t.  It facilitates connections not only between people but also between their contexts - their environments.  This presents an interesting schism: on one hand, people who have never met are able to communicate intimately; on the other, the people we come to know in this way are people we are unlikely to ever meet in person.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alex:&#039;&#039;&#039;&lt;br /&gt;
I think McLuhan&#039;s thoughts are still relevant. Since the 20th century, physically ascertainable reality has been undergoing change. Two technological innovations can be seen as reasons for this, and both can be recognized in the development of mass media as well as in works of contemporary art: first, the increase in velocity, which is now beyond the capacity of human cognition, and second, digitalization, the conversion of an analogue &amp;quot;reality&amp;quot; into a code-based (and therefore arbitrarily changeable) form. According to McLuhan, the message of every medium is &amp;quot;the change of scale or pace or pattern&amp;quot; that a new invention or innovation &amp;quot;introduces into human affairs.&amp;quot; In other words: media serve to translate experiences into new forms; they are, as has often been said, the extensions of our sensory organs.&lt;br /&gt;
These possibilities of dealing with reality and of shaping the experience of old and new phenomena in an artistic way are of major importance to my own works. McLuhan&#039;s thoughts are also relevant to the cooperation with our partners in San Diego, because the mediatization of cognition has never been merely a process of removal from physicality. Due to the transcending of physicality and its implications, it is much more a transgression of the mechanistic system in general. Thus, the project title &amp;quot;LoFi Sounds in HiFi Spaces&amp;quot; already implies two core concerns of our research and our projects. To put it with McLuhan: beyond a mechanistic point of view, we are approaching a comprehension of virtual systems, such as sound or the bridged space we create.&lt;br /&gt;
Considering our joint meetings, which are mediated via Skype, and small experiments like the interchanged audio-visual relations in our recent works, I think the interesting implications of dealing with these specifics of recent media innovations are already recognizable at this stage of our project. And of course they will be marked clearly; they will be the message, no matter whether they are LoFi or HiFi or Raspberry Pi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* American sound artist Bill Fontana made several pieces in which he transfers sound from one location to another. How does this locational switch change our understanding of the space(s) in question? What new aspects of a sonic environment might emerge? What happens to our perception of a location once it is stripped of its original sounds and these are replaced by sounds from another location?&lt;br /&gt;
&lt;br /&gt;
A recording, whether visual or auditory, establishes a scene - a set of expectations - of the things which are likely to occur.&lt;br /&gt;
&lt;br /&gt;
For the most part, we expect a correlation between the things we see and the things we hear.  Both senses alert us to things to pay attention to: our hearing can inform us of things we don&#039;t see, and our vision can inform us of things we don&#039;t hear.  The alert itself may be the fact that the two senses aren&#039;t in agreement, in which case we work to make them align.&lt;br /&gt;
&lt;br /&gt;
A location swap is powerful because it provides juxtaposition, an opportunity to contrast two states.  Attention is drawn to those characteristics which don&#039;t align, and to those that do resolve for unexpected reasons.  We realize the things that we take for granted, and glimpse the structures that frame our perception.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How does an instrument through which sound is transmitted (loudspeaker, telephone, alarm clock) shape our expectation and perception of it? In other words, if the expectation is not met, what impact can this have on our perception?&lt;br /&gt;
&lt;br /&gt;
We have expectations of known things in terms of the kinds of sounds they&#039;re likely to make, and notions of what different sounds or qualities of sound might signify.  When our expectations don&#039;t align with our perceptions, it&#039;s a signal that our understanding of a situation may be in immediate need of re-evaluation.&lt;/div&gt;</summary>
		<author><name>Palex</name></author>
	</entry>
</feed>