<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Krnd</id>
	<title>Medien Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Krnd"/>
	<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/Special:Contributions/Krnd"/>
	<updated>2026-05-01T13:09:58Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.6</generator>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122282</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122282"/>
		<updated>2021-03-30T18:54:29Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* Project 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
Two channel stereo version of the song &amp;quot;150309-05&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
[[File:Bad comfort - 150309-05-scetch.ogg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software [http://jeskola.net/buzz/ Jeskola Buzz]. Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. &lt;br /&gt;
&lt;br /&gt;
Unitiy version of the song &amp;quot;150309-05&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://youtu.be/UWNMgUPRjXU &amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment it looks like a screen saver of an operation system from 1995. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;br /&gt;
&lt;br /&gt;
More music from bad comfort: [https://soundcloud.com/knrd/sets/bad-comfort soundcloud.com/knrd]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
A non-Unitiy, but somehow VR-related project I worked on this semester was a 360-degree guided tour through the vacant building &amp;quot;Lehngericht&amp;quot; in Augsburg Saxony. Motivated and inspired by the input from the Critical VR Lab course, I borrowed a 306° camera and tried to photograph all the rooms. Challenging was the fact of no electricity - especially in the basement. After creating this guide, I placed a QR code with the URL to this website at the main entrance. I hope that the people of Augustusburg will take advantage of this chance to see the building from the inside and use it again for cultural purposes. For embedding of the VR-Pictures I used the [https://wordpress.org/plugins/ipanorama-360-virtual-tour-builder-lite iPanorama 360 WordPress Virtual Tour Builder] plug-in.&lt;br /&gt;
&lt;br /&gt;
[[File:Vr-gericht.jpg ]]&lt;br /&gt;
&lt;br /&gt;
[https://augustusburg.blog/ipanorama/virtualtour/1 https://augustusburg.blog/ipanorama/virtualtour/1]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122281</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122281"/>
		<updated>2021-03-30T18:53:40Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
Two channel stereo version of the song &amp;quot;150309-05&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
[[File:Bad comfort - 150309-05-scetch.ogg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software [http://jeskola.net/buzz/ Jeskola Buzz]. Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. &lt;br /&gt;
&lt;br /&gt;
Unitiy version of the song &amp;quot;150309-05&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://youtu.be/UWNMgUPRjXU &amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment it looks like a screen saver of an operation system from 1995. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;br /&gt;
&lt;br /&gt;
More music from bad comfort: [https://soundcloud.com/knrd/sets/bad-comfort soundcloud.com/knrd]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
A non-Unitiy, but somehow VR-related project I worked on this semester was a 360-degree guided tour through the vacant building &amp;quot;Lehngericht&amp;quot; in Augsburg Saxony. Motivated and inspired by the input from the Critical VR Lab course, I borrowed a 306° camera and tried to photograph all the rooms. Challenging was the fact of no electricity - especially in the basement. After creating this guide, I placed a QR code with the URL to this website at the main entrance. I hope that the people of Augustusburg will take advantage of this chance to see the building from the inside and use it again for cultural purposes. For embedding of the VR-Pictures I used the [https://wordpress.org/plugins/ipanorama-360-virtual-tour-builder-lite iPanorama 360 WordPress Virtual Tour Builder] plug-in.&lt;br /&gt;
&lt;br /&gt;
[[File:Vr-gericht.jpg ]]&lt;br /&gt;
&lt;br /&gt;
[https://augustusburg.blog/ipanorama/virtualtour/1]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122280</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122280"/>
		<updated>2021-03-30T18:52:00Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
Two channel stereo version of the song &amp;quot;150309-05&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
[[File:Bad comfort - 150309-05-scetch.ogg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software [http://jeskola.net/buzz/ Jeskola Buzz]. Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. &lt;br /&gt;
&lt;br /&gt;
Unitiy version of the song &amp;quot;150309-05&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://youtu.be/UWNMgUPRjXU &amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment it looks like a screen saver of an operation system from 1995. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;br /&gt;
&lt;br /&gt;
More music from bad comfort: [https://soundcloud.com/knrd/sets/bad-comfort soundcloud.com/knrd]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
A non-Unitiy, but somehow VR-related project I worked on this semester was a 360-degree guided tour through the vacant building &amp;quot;Lehngericht&amp;quot; in Augsburg Saxony. Motivated and inspired by the input from the Critical VR Lab course, I borrowed a 306° camera and tried to photograph all the rooms. Challenging was the fact of no electricity - especially in the basement. After creating this guide, I placed a QR code with the URL to this website at the main entrance. I hope that the people of Augustusburg will take advantage of this chance to see the building from the inside and use it again for cultural purposes. For embedding of the VR-Pictures I used the [https://wordpress.org/plugins/ipanorama-360-virtual-tour-builder-lite iPanorama 360 WordPress Virtual Tour Builder] plug-in.&lt;br /&gt;
&lt;br /&gt;
[https://augustusburg.blog/ipanorama/virtualtour/1 [File:Vr-gericht.jpg ]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Vr-gericht.jpg&amp;diff=122279</id>
		<title>File:Vr-gericht.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Vr-gericht.jpg&amp;diff=122279"/>
		<updated>2021-03-30T18:49:00Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122277</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122277"/>
		<updated>2021-03-30T18:21:48Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
Two channel stereo version of the song &amp;quot;150309-05&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
[[File:Bad comfort - 150309-05-scetch.ogg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software [http://jeskola.net/buzz/ Jeskola Buzz]. Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. &lt;br /&gt;
&lt;br /&gt;
Unitiy version of the song &amp;quot;150309-05&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://youtu.be/UWNMgUPRjXU &amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment it looks like a screen saver of an operation system from 1995. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;br /&gt;
&lt;br /&gt;
More music from bad comfort: [https://soundcloud.com/knrd/sets/bad-comfort soundcloud.com/knrd]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122259</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122259"/>
		<updated>2021-03-30T18:19:46Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
Two channel stereo version of the song &amp;quot;150309-05&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
[[File:Bad comfort - 150309-05-scetch.ogg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software [http://jeskola.net/buzz/ Jeskola Buzz] Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. &lt;br /&gt;
&lt;br /&gt;
Unitiy version of the song &amp;quot;150309-05&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://youtu.be/UWNMgUPRjXU &amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment it looks like a screen saver of an operation system from 1995. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122258</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122258"/>
		<updated>2021-03-30T18:18:52Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
Two channel stereo version of the song &amp;quot;150309-05&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
[[File:Bad comfort - 150309-05-scetch.ogg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software Jeskola Buzz.([http://jeskola.net/buzz/]) Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. &lt;br /&gt;
&lt;br /&gt;
Unitiy version of the song &amp;quot;150309-05&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://youtu.be/UWNMgUPRjXU &amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment it looks like a screen saver of an operation system from 1995. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122257</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122257"/>
		<updated>2021-03-30T18:16:27Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
Two channel stereo versione of the song: &lt;br /&gt;
&lt;br /&gt;
[[File:Bad comfort - 150309-05-scetch.ogg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software Jeskola Buzz. Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://youtu.be/UWNMgUPRjXU &amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment it looks like a screen saver of an operation system from 1995. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122243</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122243"/>
		<updated>2021-03-30T16:39:10Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software Jeskola Buzz. Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://youtu.be/UWNMgUPRjXU &amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment it looks like a screen saver of an operation system from 1995. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Bad_comfort_-_150309-05-scetch.ogg&amp;diff=122207</id>
		<title>File:Bad comfort - 150309-05-scetch.ogg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Bad_comfort_-_150309-05-scetch.ogg&amp;diff=122207"/>
		<updated>2021-03-30T15:25:24Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122206</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=122206"/>
		<updated>2021-03-30T14:48:23Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project sound particles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For my music electronic/dub project “bad comfort” I wanted to create a new way of spatialization of the sound sources. To have a new experience in listening to my music, I tried to map parts of my composition to objects at the virtual Unity world. I general, my way of producing music under my moniker “bad comfort” is marked by randomizing of almost all musical  parameters. For this kind of organic sound aesthetic I wanted to create a immersive stage where all sounds are able to move in space to have a binaural experience by listening to headphones.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 1.jpeg]]&lt;br /&gt;
&lt;br /&gt;
To extract all sound of my pre existing song, I had to rearrange my sound patch which I created with the software Jeskola Buzz. Therefore I had to reconnect all sound sources to the master output to create mono sound files of each sound.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot jeskola buzz 2.jpeg]]&lt;br /&gt;
&lt;br /&gt;
I had to decide which sound effect I want to render into this mono file and which “room” effect will be created from the Unity reverb effect. I decided to use the Unity reverb effect only. All glitch and delay effects where baked into the mono file. To have a visual control to the sound I imported the files at the DAW Reaper.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-reaper-sound-particals.jpg]]&lt;br /&gt;
&lt;br /&gt;
Now I had to connect all this mono files to virtual sound sources in my Unity project. To have also an optical experience I also connected some simple objects to the sound sources. To visualize the path of the “flying sounds” I used the component “Tail Renderer”. So every sound hat some kind of tail behind the sound. To create a immersive experience in sound I had to move now all the sounds with the Animator function of Unity. I also created objects like a huge movable ball at which I connected to some basic sounds. Those objects could also collide to the border of the stage, on to each other and to the virtual visitor. The camera was connected to a FPS controller which made the visitor possible to walk around in sound. &lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-scetch.jpg]]&lt;br /&gt;
&lt;br /&gt;
In a early stage the colour of the “sound objects” were different in a later version I decided to make everything in black and white because it simply looked somehow too childish.&lt;br /&gt;
&lt;br /&gt;
[[File:Screentshots-unity-sound-praticals-minimal.jpg]]&lt;br /&gt;
&lt;br /&gt;
In this stage of the project I have to say especial the visual component of the experience is not very innovative. I hoped that the spatialization of the sound in connection to the visual objects a new way of listening could happened. At the moment it looks like a screen saver of an operation system from the nineteenth. May be using VR-Glasses and a head tracking audio system makes a new immersive moment. Until now I see more aesthetic problems the new artistic horizons.&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screentshots-unity-sound-praticals-minimal.jpg&amp;diff=122205</id>
		<title>File:Screentshots-unity-sound-praticals-minimal.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screentshots-unity-sound-praticals-minimal.jpg&amp;diff=122205"/>
		<updated>2021-03-30T14:48:03Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screentshots-unity-sound-praticals-scetch.jpg&amp;diff=122204</id>
		<title>File:Screentshots-unity-sound-praticals-scetch.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screentshots-unity-sound-praticals-scetch.jpg&amp;diff=122204"/>
		<updated>2021-03-30T14:47:35Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screentshots-reaper-sound-particals.jpg&amp;diff=122203</id>
		<title>File:Screentshots-reaper-sound-particals.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screentshots-reaper-sound-particals.jpg&amp;diff=122203"/>
		<updated>2021-03-30T14:46:58Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screenshot_jeskola_buzz_2.jpeg&amp;diff=122202</id>
		<title>File:Screenshot jeskola buzz 2.jpeg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screenshot_jeskola_buzz_2.jpeg&amp;diff=122202"/>
		<updated>2021-03-30T14:46:26Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screenshot_jeskola_buzz_1.jpeg&amp;diff=122201</id>
		<title>File:Screenshot jeskola buzz 1.jpeg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screenshot_jeskola_buzz_1.jpeg&amp;diff=122201"/>
		<updated>2021-03-30T14:45:30Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction&amp;diff=120548</id>
		<title>GMU:Critical VR Lab I - Unity Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction&amp;diff=120548"/>
		<updated>2021-01-07T17:18:19Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* 14.01.2021 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lecturer: [[Jörg Brinkmann]]&amp;lt;br&amp;gt;&lt;br /&gt;
Credits: 6 [[ECTS]], 4 [[SWS]]&amp;lt;br&amp;gt;&lt;br /&gt;
Date: Thursdays, 15:30 – 18:30&amp;lt;br&amp;gt; &lt;br /&gt;
Venue: Online Seminar&amp;lt;br&amp;gt;&lt;br /&gt;
First Date: Thursday, 05.11.2020, 15:30&lt;br /&gt;
&lt;br /&gt;
==Description==&lt;br /&gt;
&#039;&#039;&#039;Critical VR Lab I – Unity Introduction&#039;&#039;&#039; is a beginner module that offers an Introduction to the Unity game engine. The whole course will be taught online. Participants will be introduced to Unity through video tutorials, accompanied by PDFs and it will be possible to communicate through online meetings and individual consultations.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The course will be taught in two phases:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
In phase one participants will be introduced to an overview of the Unity interface and different techniques (Lights, Skyboxes, Prefabs, Timeline, Animation). The learning phase will be accompanied by lectures and discussions that are focused on finding strategies for dealing with the possibilities and challenges of working artistically with Game Engines and VR technologies. At the end of phase one, students will have created an experience with Unity and documented it on our GMU Wiki-Page. In phase two students are challenged to reflect, discuss, rethink and rework their Unity experience, based on individual research and experienced insights.&lt;br /&gt;
&lt;br /&gt;
The course emphasises on artistic and opposing ways of working with Virtual Reality concepts. It’s aim is to establish individual approaches to VR, a challenging medium which offers artists new possibilities for expression and intercultural communication, especially at times of the Corona Crisis which creates new challenges to the way we interact with each other. Please watch the video for more information.&lt;br /&gt;
&lt;br /&gt;
==Recommended Requirements==&lt;br /&gt;
No previous knowledge of Unity or other 3D software is needed, but applicants should have access to the Internet, a Computer and Headphones.&lt;br /&gt;
&lt;br /&gt;
==Criteria for passing==&lt;br /&gt;
In order to successfully participate you will have to develop and document your own project on the GMU Wiki. Also, complete the exercises and comply with the submission deadlines&lt;br /&gt;
&lt;br /&gt;
==Communication throughout the semester==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The GMU-Wiki&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
Here you can find all the info about our class and also present your work linked under your name&amp;lt;br&amp;gt; &lt;br /&gt;
Please create your account under this link with your University email adress:&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://www.uni-weimar.de/kunst-und-gestaltung/wiki/Special:CreateAccount&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Email&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For individual communication. I might send you download links to material&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Telegram&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
CriticalVR1 will be our collective chatroom&amp;lt;br&amp;gt;&lt;br /&gt;
Please download Telegram and join for group discussions: &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://t.me/joinchat/HZlOrxuHLynKhLE44cts_Q&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; BigBlueButton&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
This video conference utility will be used for our meetings and consultations&amp;lt;br&amp;gt;&lt;br /&gt;
To start the conversation click on the link&lt;br /&gt;
&lt;br /&gt;
==Student works (after login, please add your name and create a page)==&lt;br /&gt;
*[[Mudassir Sheikh]]&lt;br /&gt;
* [[/Ruo-Xuan Wu (Russian)/]]&lt;br /&gt;
*[[/Dominik_Kämmer/]]&lt;br /&gt;
* [[/Ulrike Katzschmann/]]&lt;br /&gt;
*[[/Laura Toledo/]]&lt;br /&gt;
* [[/Jin Wang/]]&lt;br /&gt;
* [[/Christian Dähne/]]&lt;br /&gt;
* [[/JCA/]]&lt;br /&gt;
* [[/Elia Zeißig/]]&lt;br /&gt;
* [[/Victoria Joelle Hesselbach/]]&lt;br /&gt;
* [[/Quan Zhou/]]&lt;br /&gt;
* [[/Deborah Meyer/]]&lt;br /&gt;
*[[/Beatrice Perlato/]]&lt;br /&gt;
*[[/Alena Isai/]]&lt;br /&gt;
*[[/Alexander Leschinez/]]&lt;br /&gt;
*[[/Sebastian Richter/]]&lt;br /&gt;
*[[/Konrad Behr/]]&lt;br /&gt;
&lt;br /&gt;
==Tutorials (Video + PDF)==&lt;br /&gt;
&lt;br /&gt;
===1.) Getting to know the Unity Editor===&lt;br /&gt;
&lt;br /&gt;
===2.) Lights, Skyboxes, Prefabs, Timeline===&lt;br /&gt;
&lt;br /&gt;
===3.) Animation in Mixamo and Unity===&lt;br /&gt;
&lt;br /&gt;
==Syllabus==&lt;br /&gt;
&lt;br /&gt;
===05.11.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
please watch the video to get some information about the course &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://youtu.be/sbhbC5y-jpI&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
if you have any questions, feel free to ask me anything at the online meeting&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===12.11.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===19.11.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===26.11.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
Choose your timeslot for an individual consultation. To start the conversation open the link which will start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Ruo-Xuan Wu) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – (put your name here)&amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – (put your name here) &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – Elia Zeissig &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (Juro Carl Anton Reinhardt) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (Jin Wang)&lt;br /&gt;
&lt;br /&gt;
===03.12.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Quan Zhou) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – Sebastian &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – Beatrice Perlato &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – Victoria Joelle Hesselbach &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – Laura Toledo &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (Mudassir Sheikh)&lt;br /&gt;
&lt;br /&gt;
===10.12.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – Elia Zeissig &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – Sebastian &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – Dominik Kämmer &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – (Deborah Meyer) &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (Ulrike Katzschmann) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (put your name here)&lt;br /&gt;
&lt;br /&gt;
===17.12.2020===&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Quan Zhou) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – (Ruo-Xuan Wu)&amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – (put your name) &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 17:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===21.12.2020===&lt;br /&gt;
&#039;&#039;&#039;Deadline:&#039;&#039;&#039;&amp;lt;br&amp;gt; &lt;br /&gt;
&#039;&#039;&#039;1.) Please create an Experience in Unity. Record a video of your Experience with OBS (https://obsproject.com) and post it in our Wiki (GMU:Critical VR Lab I) under your name&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Christmas and New Year Holidays===&lt;br /&gt;
&lt;br /&gt;
===07.01.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===14.01.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Quan Zhou) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – Eric Schlossberg &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – Konrad Behr &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – Christian Dähne &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (put your name here)&lt;br /&gt;
&lt;br /&gt;
===21.01.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Jin Wang) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – (put your name here) &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (put your name here)&lt;br /&gt;
&lt;br /&gt;
===28.01.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===04.02.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Quan Zhou) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – (put your name here) &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (Jin Wang)&lt;br /&gt;
&lt;br /&gt;
===31.03.2021===&lt;br /&gt;
&#039;&#039;&#039;Final Deadline:&#039;&#039;&#039;&amp;lt;br&amp;gt; &lt;br /&gt;
&#039;&#039;&#039;Please document your final project on our Wiki (GMU:Critical VR Lab I) under your name&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Template:VR Toolbox}}&lt;br /&gt;
&lt;br /&gt;
[[Category:SS20]]&lt;br /&gt;
[[Category:Werkmodul]]&lt;br /&gt;
[[Category:Fachmodul]]&lt;br /&gt;
[[Category:Jörg Brinkmann]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120510</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120510"/>
		<updated>2021-01-07T15:31:45Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* details */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;br /&gt;
&lt;br /&gt;
===future plans / another project===&lt;br /&gt;
&lt;br /&gt;
making a spacial &amp;quot;3D sound&amp;quot; version of one of my tracks of my music project bad comfort&lt;br /&gt;
&lt;br /&gt;
https://soundcloud.com/knrd/sets/bad-comfort&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120125</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120125"/>
		<updated>2020-12-22T14:08:59Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Preview of &amp;quot;Radio Dance 3D&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended for headphones!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120124</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120124"/>
		<updated>2020-12-22T14:08:15Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Preview of &amp;quot;Radio Dance 3D&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
Recommended for headphones!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance www.konrad-behr.de/projekt-radio-dance]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120123</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120123"/>
		<updated>2020-12-22T14:07:30Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Preview of &amp;quot;Radio Dance 3D&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
Recommended for headphones!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;br /&gt;
&lt;br /&gt;
More about the project in general: [https://konrad-behr.de/projekt-radio-dance]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120122</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120122"/>
		<updated>2020-12-22T13:51:31Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Preview of &amp;quot;Radio Dance 3D&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===details===&lt;br /&gt;
# I places several sound sources and video material at the virtual performance place&lt;br /&gt;
# 4 sources with the recording of the live performance at seam&lt;br /&gt;
# 2 descriptions (english/swedish)&lt;br /&gt;
# video of &amp;quot;behind the stage&amp;quot; documentation &lt;br /&gt;
# graphical score with marker following in time&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120121</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120121"/>
		<updated>2020-12-22T13:46:30Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Preview of &amp;quot;Radio Dance 3D&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;youtube&amp;quot;&amp;gt;https://www.youtube.com/watch?v=73BU_Y8e3S0&amp;lt;/embedvideo&amp;gt;&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120120</id>
		<title>GMU:Critical VR Lab I - Unity Introduction/Konrad Behr</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction/Konrad_Behr&amp;diff=120120"/>
		<updated>2020-12-22T13:35:14Z</updated>

		<summary type="html">&lt;p&gt;Krnd: Created page with &amp;quot;Preview of &amp;quot;Radio Dance 3D&amp;quot;  &amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score de...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Preview of &amp;quot;Radio Dance 3D&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Radio Dance 3D&amp;quot; is an invisible dance performance, acousmatic composition and live radio performance. The basis of &amp;quot;Radio Dance 3D&amp;quot; is a score developed by Konrad Behr with the dancer Emelie Bardon in the winter of 2018 at the Studio For Electroacoustic Music Weimar (seam.hfm-weimar.de). With the focus of the sounds a human body causes while performing a modern dance piece, both artists co-created a composition for one dancer.&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction&amp;diff=120083</id>
		<title>GMU:Critical VR Lab I - Unity Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Critical_VR_Lab_I_-_Unity_Introduction&amp;diff=120083"/>
		<updated>2020-12-21T19:46:30Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* Student works (after login, please add your name and create a page) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lecturer: [[Jörg Brinkmann]]&amp;lt;br&amp;gt;&lt;br /&gt;
Credits: 6 [[ECTS]], 4 [[SWS]]&amp;lt;br&amp;gt;&lt;br /&gt;
Date: Thursdays, 15:30 – 18:30&amp;lt;br&amp;gt; &lt;br /&gt;
Venue: Online Seminar&amp;lt;br&amp;gt;&lt;br /&gt;
First Date: Thursday, 05.11.2020, 15:30&lt;br /&gt;
&lt;br /&gt;
==Description==&lt;br /&gt;
&#039;&#039;&#039;Critical VR Lab I – Unity Introduction&#039;&#039;&#039; is a beginner module that offers an Introduction to the Unity game engine. The whole course will be taught online. Participants will be introduced to Unity through video tutorials, accompanied by PDFs and it will be possible to communicate through online meetings and individual consultations.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The course will be taught in two phases:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
In phase one participants will be introduced to an overview of the Unity interface and different techniques (Lights, Skyboxes, Prefabs, Timeline, Animation). The learning phase will be accompanied by lectures and discussions that are focused on finding strategies for dealing with the possibilities and challenges of working artistically with Game Engines and VR technologies. At the end of phase one, students will have created an experience with Unity and documented it on our GMU Wiki-Page. In phase two students are challenged to reflect, discuss, rethink and rework their Unity experience, based on individual research and experienced insights.&lt;br /&gt;
&lt;br /&gt;
The course emphasises on artistic and opposing ways of working with Virtual Reality concepts. It’s aim is to establish individual approaches to VR, a challenging medium which offers artists new possibilities for expression and intercultural communication, especially at times of the Corona Crisis which creates new challenges to the way we interact with each other. Please watch the video for more information.&lt;br /&gt;
&lt;br /&gt;
==Recommended Requirements==&lt;br /&gt;
No previous knowledge of Unity or other 3D software is needed, but applicants should have access to the Internet, a Computer and Headphones.&lt;br /&gt;
&lt;br /&gt;
==Criteria for passing==&lt;br /&gt;
In order to successfully participate you will have to develop and document your own project on the GMU Wiki. Also, complete the exercises and comply with the submission deadlines&lt;br /&gt;
&lt;br /&gt;
==Communication throughout the semester==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The GMU-Wiki&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
Here you can find all the info about our class and also present your work linked under your name&amp;lt;br&amp;gt; &lt;br /&gt;
Please create your account under this link with your University email adress:&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://www.uni-weimar.de/kunst-und-gestaltung/wiki/Special:CreateAccount&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Email&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For individual communication. I might send you download links to material&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Telegram&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
CriticalVR1 will be our collective chatroom&amp;lt;br&amp;gt;&lt;br /&gt;
Please download Telegram and join for group discussions: &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://t.me/joinchat/HZlOrxuHLynKhLE44cts_Q&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; BigBlueButton&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
This video conference utility will be used for our meetings and consultations&amp;lt;br&amp;gt;&lt;br /&gt;
To start the conversation click on the link&lt;br /&gt;
&lt;br /&gt;
==Student works (after login, please add your name and create a page)==&lt;br /&gt;
*[[Mudassir Sheikh]]&lt;br /&gt;
* [[/Ruo-Xuan Wu (Russian)/]]&lt;br /&gt;
*[[/Dominik_Kämmer/]]&lt;br /&gt;
* [[/Ulrike Sophie Katzschmann/]]&lt;br /&gt;
*[[/Laura Toledo/]]&lt;br /&gt;
* [[/Jin Wang/]]&lt;br /&gt;
* [[/Christian Dähne/]]&lt;br /&gt;
* [[/JCA/]]&lt;br /&gt;
* [[/Elia Zeißig/]]&lt;br /&gt;
* [[/Victoria Joelle Hesselbach/]]&lt;br /&gt;
* [[/Quan Zhou/]]&lt;br /&gt;
* [[/Deborah Meyer/]]&lt;br /&gt;
*[[/Beatrice Perlato/]]&lt;br /&gt;
*[[/Alena Isai/]]&lt;br /&gt;
*[[/Alexander Leschinez/]]&lt;br /&gt;
*[[/Sebastian Richter/]]&lt;br /&gt;
*[[/Konrad Behr/]]&lt;br /&gt;
&lt;br /&gt;
==Tutorials (Video + PDF)==&lt;br /&gt;
&lt;br /&gt;
===1.) Getting to know the Unity Editor===&lt;br /&gt;
&lt;br /&gt;
===2.) Lights, Skyboxes, Prefabs, Timeline===&lt;br /&gt;
&lt;br /&gt;
===3.) Animation in Mixamo and Unity===&lt;br /&gt;
&lt;br /&gt;
==Syllabus==&lt;br /&gt;
&lt;br /&gt;
===05.11.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
please watch the video to get some information about the course &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://youtu.be/sbhbC5y-jpI&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
if you have any questions, feel free to ask me anything at the online meeting&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===12.11.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===19.11.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===26.11.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
Choose your timeslot for an individual consultation. To start the conversation open the link which will start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Ruo-Xuan Wu) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – (put your name here)&amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – (put your name here) &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – Elia Zeissig &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (Juro Carl Anton Reinhardt) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (Jin Wang)&lt;br /&gt;
&lt;br /&gt;
===03.12.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Quan Zhou) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – Sebastian &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – Beatrice Perlato &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – Victoria Joelle Hesselbach &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – Laura Toledo &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (Mudassir Sheikh)&lt;br /&gt;
&lt;br /&gt;
===10.12.2020===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – Elia Zeissig &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – Sebastian &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – Dominik Kämmer &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – (Deborah Meyer) &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (Ulrike Katzschmann) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (put your name here)&lt;br /&gt;
&lt;br /&gt;
===17.12.2020===&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Quan Zhou) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – (Ruo-Xuan Wu)&amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – (put your name) &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 17:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===21.12.2020===&lt;br /&gt;
&#039;&#039;&#039;Deadline:&#039;&#039;&#039;&amp;lt;br&amp;gt; &lt;br /&gt;
&#039;&#039;&#039;1.) Please create an Experience in Unity. Record a video of your Experience with OBS (https://obsproject.com) and post it in our Wiki (GMU:Critical VR Lab I) under your name&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Christmas and New Year Holidays===&lt;br /&gt;
&lt;br /&gt;
===07.01.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===14.01.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Quan Zhou) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – Eric Schlossberg &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – Sebastian &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – Christian Dähne &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (put your name here)&lt;br /&gt;
&lt;br /&gt;
===21.01.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Jin Wang) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – Sebastian &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – (put your name here) &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (put your name here)&lt;br /&gt;
&lt;br /&gt;
===28.01.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Online meeting via BBB from 15:30 - 18:30 CET&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===04.02.2021===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Consultations via BBB&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
open the link, to start the video chat in your browser and click &#039;join&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;https://meeting.uni-weimar.de/b/jor-4gu-qjz&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
15:30 - 16:00 CEST – (Quan Zhou) &amp;lt;br&amp;gt;&lt;br /&gt;
16:00 - 16:30 CEST – Sebastian &amp;lt;br&amp;gt;&lt;br /&gt;
16:30 - 17:00 CEST – (put your name here) &amp;lt;br&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
17:00 - 17:30 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
17:30 - 18:00 CEST – (put your name here) &amp;lt;br&amp;gt;&lt;br /&gt;
18:00 - 18:30 CEST – (Jin Wang)&lt;br /&gt;
&lt;br /&gt;
===31.03.2021===&lt;br /&gt;
&#039;&#039;&#039;Final Deadline:&#039;&#039;&#039;&amp;lt;br&amp;gt; &lt;br /&gt;
&#039;&#039;&#039;Please document your final project on our Wiki (GMU:Critical VR Lab I) under your name&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Template:VR Toolbox}}&lt;br /&gt;
&lt;br /&gt;
[[Category:SS20]]&lt;br /&gt;
[[Category:Werkmodul]]&lt;br /&gt;
[[Category:Fachmodul]]&lt;br /&gt;
[[Category:Jörg Brinkmann]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113844</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113844"/>
		<updated>2020-03-12T19:58:23Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* &amp;quot;Car chase&amp;quot; by Konrad Behr */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;my work process:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Screenshots:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot youtube.png|thumb|footage found on YouTube]]&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot downsub.png|thumb|extracting the subtitle as srt-file]]&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot reaper.png|thumb|merging dubtitle and video to regions in Reaper]]&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot Edius.png|thumb|final cut at EDIUS]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113841</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113841"/>
		<updated>2020-03-12T19:53:02Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;my work process:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Screenshots:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot youtube.png|thumb|footage found on YouTube]]&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot downsub.png|thumb|extracting the subtitle as srt-file]]&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot reaper.png|thumb|merging dubtitle and video to regions in Reaper]]&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot Edius.png|thumb|final cut at EDIUS]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113840</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113840"/>
		<updated>2020-03-12T19:51:14Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;my work process:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Screenshots:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot youtube.png|thumb|caption]]&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot downsub.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot reaper.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot Edius.png]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113839</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113839"/>
		<updated>2020-03-12T19:49:55Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;my work process:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Screenshots:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot youtube.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot downsub.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot reaper.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot Edius.png]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113838</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113838"/>
		<updated>2020-03-12T19:49:15Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;my work process:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Screenshots:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:Car chase screenshot youtube.png]]&lt;br /&gt;
[[File:Car chase screenshot downsub.png]]&lt;br /&gt;
[[File:Car chase screenshot reaper.png]]&lt;br /&gt;
[[File:Screenshot Edius.png]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113837</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113837"/>
		<updated>2020-03-12T19:47:44Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* &amp;quot;Car chase&amp;quot; by Konrad Behr */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
my work process:&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
[[File:Car chase screenshot youtube.png]]&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com][[File:Car chase screenshot downsub.png]]&lt;br /&gt;
&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper[[File:Car chase screenshot reaper.png]]&lt;br /&gt;
&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
[[File:Screenshot Edius.png]]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113836</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113836"/>
		<updated>2020-03-12T19:47:06Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* &amp;quot;Car chase&amp;quot; by Konrad Behr */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
my work process:&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
[[File:Car chase screenshot youtube.png]]&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
[[File:Car chase screenshot downsub.png]]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
[[File:Car chase screenshot reaper.png]]&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
[[File:Screenshot Edius.png]]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screenshot_Edius.png&amp;diff=113835</id>
		<title>File:Screenshot Edius.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Screenshot_Edius.png&amp;diff=113835"/>
		<updated>2020-03-12T19:46:49Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Car_chase_screenshot_reaper.png&amp;diff=113834</id>
		<title>File:Car chase screenshot reaper.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Car_chase_screenshot_reaper.png&amp;diff=113834"/>
		<updated>2020-03-12T19:46:09Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Car_chase_screenshot_downsub.png&amp;diff=113833</id>
		<title>File:Car chase screenshot downsub.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Car_chase_screenshot_downsub.png&amp;diff=113833"/>
		<updated>2020-03-12T19:44:37Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113832</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113832"/>
		<updated>2020-03-12T19:43:44Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* &amp;quot;Car chase&amp;quot; by Konrad Behr */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
my work process:&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
[[File:Car chase screenshot youtube.png]]&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Car_chase_screenshot_youtube.png&amp;diff=113831</id>
		<title>File:Car chase screenshot youtube.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Car_chase_screenshot_youtube.png&amp;diff=113831"/>
		<updated>2020-03-12T19:42:39Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113830</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113830"/>
		<updated>2020-03-12T19:40:20Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
my work process:&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
[http://reaper.fm reaper] &lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113829</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113829"/>
		<updated>2020-03-12T19:38:03Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* &amp;quot;Car chase&amp;quot; by Konrad Behr */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
my work process:&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool [https://github.com/ytdl-org/youtube-dl youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub [https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW [http://reaper.fm reaper] and Reaper extension software package SWS [https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software [https://www.edius.de edius.de]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113828</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113828"/>
		<updated>2020-03-12T19:36:14Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* &amp;quot;Car chase&amp;quot; by Konrad Behr */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
my work process:&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool “youtube-dl” &lt;br /&gt;
[https://github.com/ytdl-org/youtube-dl github.com/ytdl-org/youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub&lt;br /&gt;
[https://downsub.com downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW Reaper and Reaper extension software package SWS&lt;br /&gt;
[http://reaper.fm]&lt;br /&gt;
[https://www.sws-extension.org sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software edius&lt;br /&gt;
[https://www.edius.de edius.de]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113827</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113827"/>
		<updated>2020-03-12T19:34:15Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
my work process:&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool “youtube-dl” &lt;br /&gt;
[https://github.com/ytdl-org/youtube-dl]&lt;br /&gt;
# extract srt subtitle file using web service downsub&lt;br /&gt;
[https://downsub.com]&lt;br /&gt;
#Importing video file and srt file using DAW Reaper and Reaper extension software package SWS&lt;br /&gt;
[http://reaper.fm]&lt;br /&gt;
[https://www.sws-extension.org]&lt;br /&gt;
# Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
# Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
# Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
#Import all the new created clips into video editing software edius&lt;br /&gt;
[https://www.edius.de/]&lt;br /&gt;
#combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
#render final video as master file&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113826</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113826"/>
		<updated>2020-03-12T19:32:19Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;br /&gt;
&lt;br /&gt;
Work process:&lt;br /&gt;
&lt;br /&gt;
#YouTube research for car chase videos.&lt;br /&gt;
# Download YouTube video file using command tool “youtube-dl” &lt;br /&gt;
[https://github.com/ytdl-org/youtube-dl]&lt;br /&gt;
    3. extract srt subtitle file using web service downsub&lt;br /&gt;
https://downsub.com&lt;br /&gt;
    4. Importing video file and srt file using DAW Reaper and Reaper extension software package SWS&lt;br /&gt;
http://reaper.fm/ &lt;br /&gt;
https://www.sws-extension.org/ &lt;br /&gt;
    5. Filter timeline regions after keywords (e.g. “dangerous”) using filter function of Reaper&lt;br /&gt;
    6. Select IN and OUT points of wanted phrase or word as Region and mark them&lt;br /&gt;
    7. Export filtered marked sections as mp4 video with useful names (all files with “dangerous” as ‘dangerous_[regionnumber].mp4’&lt;br /&gt;
    8. Import all the new created clips into video editing software&lt;br /&gt;
    9. combine clips with certain keywords randomly or after visual aesthetically or word content&lt;br /&gt;
    10. render final video as master file&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113823</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113823"/>
		<updated>2020-03-12T19:29:28Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;quot;Car chase&amp;quot; by Konrad Behr ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113822</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113822"/>
		<updated>2020-03-12T19:28:57Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Car chase&lt;br /&gt;
&lt;br /&gt;
&amp;lt;embedvideo service=&amp;quot;vimeo&amp;quot;&amp;gt;https://vimeo.com/392177952&amp;lt;/embedvideo&amp;gt;&lt;br /&gt;
&lt;br /&gt;
„Car chase“ is a Super Cut of footage of Police pursuits broadcast by US-American television stations. The editing is based on search words and phrases of the original live comments of the helicopter pilots and news reporters at the studio. To make the workflow more efficient I used the automatic subtitle function the YouTube footage. Scanning for search terms like “dangerous”, “you see ...” and emotional reactions like “Wow” or “Ohh” of the commentators presorted the material. To lead the viewers focus in to the material I organized the editing starting from silent watching to a peak of accidents back to the top view. &lt;br /&gt;
&lt;br /&gt;
The video shows the morally questionable live description between &amp;quot;Attention, there is danger there&amp;quot; to &amp;quot;We&#039;ll stay on until the end&amp;quot;. The high probability of a violent end of the chase combined with the hint &amp;quot;viewer description is advised&amp;quot; and the expectant reactions of news reporters in case of a &amp;quot;close call&amp;quot; produces a dubious media event.&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113817</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113817"/>
		<updated>2020-03-12T17:47:07Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://vimeo.com/392177952 link title]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113816</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113816"/>
		<updated>2020-03-12T17:45:34Z</updated>

		<summary type="html">&lt;p&gt;Krnd: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Hello!&lt;br /&gt;
[[vimeo:392177952]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113815</id>
		<title>GMU:Procedural Cut/Konrad</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut/Konrad&amp;diff=113815"/>
		<updated>2020-03-12T17:44:52Z</updated>

		<summary type="html">&lt;p&gt;Krnd: Created page with &amp;quot;Hello! vimeo:https://vimeo.com/392177952&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Hello!&lt;br /&gt;
[[vimeo:https://vimeo.com/392177952]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut&amp;diff=113259</id>
		<title>GMU:Procedural Cut</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut&amp;diff=113259"/>
		<updated>2020-01-20T11:36:53Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;Lecturer:&#039;&#039; [[GMU:Max Neupert|Max Neupert]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Credits:&#039;&#039; 6 [[ECTS]], 4 [[SWS]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Date:&#039;&#039; Mondays, 09:15-12:30&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Venue:&#039;&#039; [[Marienstraße 7b]],  [[Marienstraße 7b/204|Raum 204]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;First meeting:&#039;&#039; Monday, 14th of October 09:15-12:30&amp;lt;br&amp;gt;&lt;br /&gt;
Course language is English&lt;br /&gt;
&lt;br /&gt;
==Procedural Cut: Algorithmic Micro-editing==&lt;br /&gt;
Micro-editing is a technique of rearranging tiny fragments of media to form a new work. In the context of music, [[wikipedia:Microhouse|microhouse]] is a subgenre of house which employs this technique. [[wikipedia:Akufen|Akufen]]&#039;s &#039;&#039;Deck the House&#039;&#039; from 2002 may serve as an example. In the context of experimental film, [[wikipedia:Martin Arnold|Martin Arnold]] compiled his 1989 montage “Pièce Touchée” entirely from found-footage by copying frames in a specific order with an [[wikipedia:Optical printer|optical printer]], emphasizing and amplifying gestures from the original movie. [[wikipedia:Steina und Woody Vasulka|Steina Vasulka]], [[wikipedia:Kurt Hentschlager#Granular-Synthesis|Granular Synthesis]] and many other artists followed in exploring an aesthetic of deconstruction and reassembly of the timeline in moving images. In pop culture this “audiovisual cut-up” was used to expand the visual language of music clips and to have the audiences of live performances spellbound. Micro-edits are used in different contexts ranging from media art, experimental film-making to music clips and advertising. &lt;br /&gt;
&lt;br /&gt;
Digital video has become an almost infinite source of to-be-found-footage which is accessible to anyone, anytime through platforms like YouTube, which are essentially databases for moving images of almost any kind. They enabled pop culture phenomenons like [[wikipedia:Supercut|supercuts]]: compilations of short shots of the same action, or [[wikipedia:YouTube Poop|YouTube Poop]] mashups of videos with a comical and at times immature humour: &lt;br /&gt;
&lt;br /&gt;
Today, meta information, close captions, machine learning analysis and music information retrieval can provide the means to generate automated edits. Real-time reassembly of media fragments based on databases, feature extraction or meta-information has become entirely feasible.&lt;br /&gt;
&lt;br /&gt;
In the class &#039;&#039;Procedural Cut: Algorithmic Micro-editing&#039;&#039; we will learn to let algorithms cut and edit.&lt;br /&gt;
&lt;br /&gt;
This class builds upon two previous classes: [[GMU:Breaking the Timeline]] and [[GMU:Bits, Beats &amp;amp; Pieces]].&lt;br /&gt;
No previous knowledge is required.&lt;br /&gt;
&lt;br /&gt;
==Syllabus==&lt;br /&gt;
# 2019-10-21 [[/Early Montages and Flicker/]]&lt;br /&gt;
# 2019-10-28 [[/Supercuts/]]&lt;br /&gt;
# 2019-11-04 Lecture Joachim Goßmann&lt;br /&gt;
# 2019-11-11&lt;br /&gt;
# 2019-11-18&lt;br /&gt;
# 2019-11-25&lt;br /&gt;
# 2019-12-02&lt;br /&gt;
# 2019-12-09 [[/Musical recombinations/]]&lt;br /&gt;
# 2019-12-16&lt;br /&gt;
# 2020-01-06&lt;br /&gt;
# 2020-01-13&lt;br /&gt;
# 2020-01-20&lt;br /&gt;
# 2020-01-27&lt;br /&gt;
# 2020-02-03 Presentation&lt;br /&gt;
# 2020-02-10 Possible extra meeting to make up for the late start&lt;br /&gt;
&lt;br /&gt;
==Assignments==&lt;br /&gt;
# [[/Assignment Supercut/]]&lt;br /&gt;
# [[/Assignment Sequence/]]&lt;br /&gt;
# [[/Assignment Object Recognition/]]&lt;br /&gt;
# [[/Assignment Subtitle Automate/]]&lt;br /&gt;
&lt;br /&gt;
== Files==&lt;br /&gt;
* [https://github.com/BauhausUniversity/cuttlefish/tree/master Cuttlefish] Python scripts and tools&lt;br /&gt;
* [[Audiovideo]] Pure Data Workshop&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
* [https://medium.com/janbot/a-brief-history-of-algorithmic-editing-732c3e19884b Clint Enns: A Brief History of Algorithmic Editing]&lt;br /&gt;
* http://thisunruly.com&lt;br /&gt;
* http://reorder.tv&lt;br /&gt;
* https://www.reclaimingremix.com&lt;br /&gt;
* [https://journal.transformativeworks.org/index.php/twc/article/view/371/299 A history of subversive remix video before YouTube: Thirty political video mashups made between World War II and 2005]&lt;br /&gt;
* [[wikipedia:Edit decision list]] EDL&lt;br /&gt;
* [[wikipedia:Regular expression]] Online regex tester: [https://regex101.com RegEx101]&lt;br /&gt;
* David Claerbout - The pure necessity https://www.instagram.com/p/B3nHj-MgEyj/ &amp;amp; https://davidclaerbout.com/The-pure-necessity-2016&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
* [[Audio tools]]&lt;br /&gt;
* [[Video tools]]&lt;br /&gt;
&lt;br /&gt;
* [https://code.visualstudio.com Microsoft Visual Studio Code] Editor &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://vscodium.com VSCodium] Visual Studio Code without the Microsoft telemetry &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [https://www.videolan.org VLC] Media Player (helps you downloading YouTube videos: copy YouTube URL, open in VLC as network stream, then go to Tools -&amp;gt; Media Information, copy the Location and paste back into the Browser from where you can save it) &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://downsub.com downsub.com] Web service to get the subtitles from a YouTube video &amp;lt;span style=&amp;quot;color:#ff0000 &amp;gt;(web-service)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [http://www.aegisub.org Aegisub] Subtitle Editor &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://www.praxislive.org Praxis Live] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://python.org Python] (great [https://www.youtube.com/watch?v=_uQrJ0TkZlc video tutorial] for beginners) &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://pyscenedetect.readthedocs.io PySceneDetect] can autonatically detect scene changes in video and segment footage into clips &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://jupyter.org Jupyter Notebook] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [https://pjreddie.com/darknet/yolo/ YOLO: Real-Time Object Detection] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/deepfakes/faceswap Deepfakes Faceswap] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/apreshill/bakeoff apreshill/bakeoff] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/NVlabs/few-shot-vid2vid  NVlabs/few-shot-vid2vid] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* [[Open Frameworks]]&lt;br /&gt;
* [[Processing]]&lt;br /&gt;
* [[Pure Data - Getting started]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Max Neupert]]&lt;br /&gt;
[[Category:WS19]]&lt;br /&gt;
[[Category:Fachmodul]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut&amp;diff=113258</id>
		<title>GMU:Procedural Cut</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut&amp;diff=113258"/>
		<updated>2020-01-20T11:36:33Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;Lecturer:&#039;&#039; [[GMU:Max Neupert|Max Neupert]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Credits:&#039;&#039; 6 [[ECTS]], 4 [[SWS]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Date:&#039;&#039; Mondays, 09:15-12:30&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Venue:&#039;&#039; [[Marienstraße 7b]],  [[Marienstraße 7b/204|Raum 204]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;First meeting:&#039;&#039; Monday, 14th of October 09:15-12:30&amp;lt;br&amp;gt;&lt;br /&gt;
Course language is English&lt;br /&gt;
&lt;br /&gt;
==Procedural Cut: Algorithmic Micro-editing==&lt;br /&gt;
Micro-editing is a technique of rearranging tiny fragments of media to form a new work. In the context of music, [[wikipedia:Microhouse|microhouse]] is a subgenre of house which employs this technique. [[wikipedia:Akufen|Akufen]]&#039;s &#039;&#039;Deck the House&#039;&#039; from 2002 may serve as an example. In the context of experimental film, [[wikipedia:Martin Arnold|Martin Arnold]] compiled his 1989 montage “Pièce Touchée” entirely from found-footage by copying frames in a specific order with an [[wikipedia:Optical printer|optical printer]], emphasizing and amplifying gestures from the original movie. [[wikipedia:Steina und Woody Vasulka|Steina Vasulka]], [[wikipedia:Kurt Hentschlager#Granular-Synthesis|Granular Synthesis]] and many other artists followed in exploring an aesthetic of deconstruction and reassembly of the timeline in moving images. In pop culture this “audiovisual cut-up” was used to expand the visual language of music clips and to have the audiences of live performances spellbound. Micro-edits are used in different contexts ranging from media art, experimental film-making to music clips and advertising. &lt;br /&gt;
&lt;br /&gt;
Digital video has become an almost infinite source of to-be-found-footage which is accessible to anyone, anytime through platforms like YouTube, which are essentially databases for moving images of almost any kind. They enabled pop culture phenomenons like [[wikipedia:Supercut|supercuts]]: compilations of short shots of the same action, or [[wikipedia:YouTube Poop|YouTube Poop]] mashups of videos with a comical and at times immature humour: &lt;br /&gt;
&lt;br /&gt;
Today, meta information, close captions, machine learning analysis and music information retrieval can provide the means to generate automated edits. Real-time reassembly of media fragments based on databases, feature extraction or meta-information has become entirely feasible.&lt;br /&gt;
&lt;br /&gt;
In the class &#039;&#039;Procedural Cut: Algorithmic Micro-editing&#039;&#039; we will learn to let algorithms cut and edit.&lt;br /&gt;
&lt;br /&gt;
This class builds upon two previous classes: [[GMU:Breaking the Timeline]] and [[GMU:Bits, Beats &amp;amp; Pieces]].&lt;br /&gt;
No previous knowledge is required.&lt;br /&gt;
&lt;br /&gt;
==Syllabus==&lt;br /&gt;
# 2019-10-21 [[/Early Montages and Flicker/]]&lt;br /&gt;
# 2019-10-28 [[/Supercuts/]]&lt;br /&gt;
# 2019-11-04 Lecture Joachim Goßmann&lt;br /&gt;
# 2019-11-11&lt;br /&gt;
# 2019-11-18&lt;br /&gt;
# 2019-11-25&lt;br /&gt;
# 2019-12-02&lt;br /&gt;
# 2019-12-09 [[/Musical recombinations/]]&lt;br /&gt;
# 2019-12-16&lt;br /&gt;
# 2020-01-06&lt;br /&gt;
# 2020-01-13&lt;br /&gt;
# 2020-01-20&lt;br /&gt;
# 2020-01-27&lt;br /&gt;
# 2020-02-03 Presentation&lt;br /&gt;
# 2020-02-10 Possible extra meeting to make up for the late start&lt;br /&gt;
&lt;br /&gt;
==Assignments==&lt;br /&gt;
# [[/Assignment Supercut/]]&lt;br /&gt;
# [[/Assignment Sequence/]]&lt;br /&gt;
# [[/Assignment Object Recognition/]]&lt;br /&gt;
# [[/Assignment Subtitle Automate/]]&lt;br /&gt;
&lt;br /&gt;
== Files==&lt;br /&gt;
* [https://github.com/BauhausUniversity/cuttlefish/tree/master Cuttlefish] Python scripts and tools&lt;br /&gt;
* [[Audiovideo]] Pure Data Workshop&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
* [https://medium.com/janbot/a-brief-history-of-algorithmic-editing-732c3e19884b Clint Enns: A Brief History of Algorithmic Editing]&lt;br /&gt;
* http://thisunruly.com&lt;br /&gt;
* http://reorder.tv&lt;br /&gt;
* https://www.reclaimingremix.com&lt;br /&gt;
* [https://journal.transformativeworks.org/index.php/twc/article/view/371/299 A history of subversive remix video before YouTube: Thirty political video mashups made between World War II and 2005]&lt;br /&gt;
* [[wikipedia:Edit decision list]] EDL&lt;br /&gt;
* [[wikipedia:Regular expression]] Online regex tester: [https://regex101.com RegEx101]&lt;br /&gt;
* David Claerbout - The pure necessity &lt;br /&gt;
https://www.instagram.com/p/B3nHj-MgEyj/&lt;br /&gt;
https://davidclaerbout.com/The-pure-necessity-2016&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
* [[Audio tools]]&lt;br /&gt;
* [[Video tools]]&lt;br /&gt;
&lt;br /&gt;
* [https://code.visualstudio.com Microsoft Visual Studio Code] Editor &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://vscodium.com VSCodium] Visual Studio Code without the Microsoft telemetry &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [https://www.videolan.org VLC] Media Player (helps you downloading YouTube videos: copy YouTube URL, open in VLC as network stream, then go to Tools -&amp;gt; Media Information, copy the Location and paste back into the Browser from where you can save it) &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://downsub.com downsub.com] Web service to get the subtitles from a YouTube video &amp;lt;span style=&amp;quot;color:#ff0000 &amp;gt;(web-service)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [http://www.aegisub.org Aegisub] Subtitle Editor &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://www.praxislive.org Praxis Live] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://python.org Python] (great [https://www.youtube.com/watch?v=_uQrJ0TkZlc video tutorial] for beginners) &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://pyscenedetect.readthedocs.io PySceneDetect] can autonatically detect scene changes in video and segment footage into clips &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://jupyter.org Jupyter Notebook] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [https://pjreddie.com/darknet/yolo/ YOLO: Real-Time Object Detection] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/deepfakes/faceswap Deepfakes Faceswap] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/apreshill/bakeoff apreshill/bakeoff] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/NVlabs/few-shot-vid2vid  NVlabs/few-shot-vid2vid] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* [[Open Frameworks]]&lt;br /&gt;
* [[Processing]]&lt;br /&gt;
* [[Pure Data - Getting started]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Max Neupert]]&lt;br /&gt;
[[Category:WS19]]&lt;br /&gt;
[[Category:Fachmodul]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut&amp;diff=113257</id>
		<title>GMU:Procedural Cut</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Procedural_Cut&amp;diff=113257"/>
		<updated>2020-01-20T11:34:56Z</updated>

		<summary type="html">&lt;p&gt;Krnd: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;Lecturer:&#039;&#039; [[GMU:Max Neupert|Max Neupert]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Credits:&#039;&#039; 6 [[ECTS]], 4 [[SWS]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Date:&#039;&#039; Mondays, 09:15-12:30&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Venue:&#039;&#039; [[Marienstraße 7b]],  [[Marienstraße 7b/204|Raum 204]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;First meeting:&#039;&#039; Monday, 14th of October 09:15-12:30&amp;lt;br&amp;gt;&lt;br /&gt;
Course language is English&lt;br /&gt;
&lt;br /&gt;
==Procedural Cut: Algorithmic Micro-editing==&lt;br /&gt;
Micro-editing is a technique of rearranging tiny fragments of media to form a new work. In the context of music, [[wikipedia:Microhouse|microhouse]] is a subgenre of house which employs this technique. [[wikipedia:Akufen|Akufen]]&#039;s &#039;&#039;Deck the House&#039;&#039; from 2002 may serve as an example. In the context of experimental film, [[wikipedia:Martin Arnold|Martin Arnold]] compiled his 1989 montage “Pièce Touchée” entirely from found-footage by copying frames in a specific order with an [[wikipedia:Optical printer|optical printer]], emphasizing and amplifying gestures from the original movie. [[wikipedia:Steina und Woody Vasulka|Steina Vasulka]], [[wikipedia:Kurt Hentschlager#Granular-Synthesis|Granular Synthesis]] and many other artists followed in exploring an aesthetic of deconstruction and reassembly of the timeline in moving images. In pop culture this “audiovisual cut-up” was used to expand the visual language of music clips and to have the audiences of live performances spellbound. Micro-edits are used in different contexts ranging from media art, experimental film-making to music clips and advertising. &lt;br /&gt;
&lt;br /&gt;
Digital video has become an almost infinite source of to-be-found-footage which is accessible to anyone, anytime through platforms like YouTube, which are essentially databases for moving images of almost any kind. They enabled pop culture phenomenons like [[wikipedia:Supercut|supercuts]]: compilations of short shots of the same action, or [[wikipedia:YouTube Poop|YouTube Poop]] mashups of videos with a comical and at times immature humour: &lt;br /&gt;
&lt;br /&gt;
Today, meta information, close captions, machine learning analysis and music information retrieval can provide the means to generate automated edits. Real-time reassembly of media fragments based on databases, feature extraction or meta-information has become entirely feasible.&lt;br /&gt;
&lt;br /&gt;
In the class &#039;&#039;Procedural Cut: Algorithmic Micro-editing&#039;&#039; we will learn to let algorithms cut and edit.&lt;br /&gt;
&lt;br /&gt;
This class builds upon two previous classes: [[GMU:Breaking the Timeline]] and [[GMU:Bits, Beats &amp;amp; Pieces]].&lt;br /&gt;
No previous knowledge is required.&lt;br /&gt;
&lt;br /&gt;
==Syllabus==&lt;br /&gt;
# 2019-10-21 [[/Early Montages and Flicker/]]&lt;br /&gt;
# 2019-10-28 [[/Supercuts/]]&lt;br /&gt;
# 2019-11-04 Lecture Joachim Goßmann&lt;br /&gt;
# 2019-11-11&lt;br /&gt;
# 2019-11-18&lt;br /&gt;
# 2019-11-25&lt;br /&gt;
# 2019-12-02&lt;br /&gt;
# 2019-12-09 [[/Musical recombinations/]]&lt;br /&gt;
# 2019-12-16&lt;br /&gt;
# 2020-01-06&lt;br /&gt;
# 2020-01-13&lt;br /&gt;
# 2020-01-20&lt;br /&gt;
# 2020-01-27&lt;br /&gt;
# 2020-02-03 Presentation&lt;br /&gt;
# 2020-02-10 Possible extra meeting to make up for the late start&lt;br /&gt;
&lt;br /&gt;
==Assignments==&lt;br /&gt;
# [[/Assignment Supercut/]]&lt;br /&gt;
# [[/Assignment Sequence/]]&lt;br /&gt;
# [[/Assignment Object Recognition/]]&lt;br /&gt;
# [[/Assignment Subtitle Automate/]]&lt;br /&gt;
&lt;br /&gt;
== Files==&lt;br /&gt;
* [https://github.com/BauhausUniversity/cuttlefish/tree/master Cuttlefish] Python scripts and tools&lt;br /&gt;
* [[Audiovideo]] Pure Data Workshop&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
* [https://medium.com/janbot/a-brief-history-of-algorithmic-editing-732c3e19884b Clint Enns: A Brief History of Algorithmic Editing]&lt;br /&gt;
* http://thisunruly.com&lt;br /&gt;
* http://reorder.tv&lt;br /&gt;
* https://www.reclaimingremix.com&lt;br /&gt;
* [https://journal.transformativeworks.org/index.php/twc/article/view/371/299 A history of subversive remix video before YouTube: Thirty political video mashups made between World War II and 2005]&lt;br /&gt;
* [[wikipedia:Edit decision list]] EDL&lt;br /&gt;
* [[wikipedia:Regular expression]] Online regex tester: [https://regex101.com RegEx101]&lt;br /&gt;
* David Claerbout - The pure necessity https://www.instagram.com/p/B3nHj-MgEyj/ + https://davidclaerbout.com/The-pure-necessity-2016&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
* [[Audio tools]]&lt;br /&gt;
* [[Video tools]]&lt;br /&gt;
&lt;br /&gt;
* [https://code.visualstudio.com Microsoft Visual Studio Code] Editor &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://vscodium.com VSCodium] Visual Studio Code without the Microsoft telemetry &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [https://www.videolan.org VLC] Media Player (helps you downloading YouTube videos: copy YouTube URL, open in VLC as network stream, then go to Tools -&amp;gt; Media Information, copy the Location and paste back into the Browser from where you can save it) &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://downsub.com downsub.com] Web service to get the subtitles from a YouTube video &amp;lt;span style=&amp;quot;color:#ff0000 &amp;gt;(web-service)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [http://www.aegisub.org Aegisub] Subtitle Editor &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://www.praxislive.org Praxis Live] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://python.org Python] (great [https://www.youtube.com/watch?v=_uQrJ0TkZlc video tutorial] for beginners) &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://pyscenedetect.readthedocs.io PySceneDetect] can autonatically detect scene changes in video and segment footage into clips &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
** [https://jupyter.org Jupyter Notebook] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [https://pjreddie.com/darknet/yolo/ YOLO: Real-Time Object Detection] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/deepfakes/faceswap Deepfakes Faceswap] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/apreshill/bakeoff apreshill/bakeoff] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
* [https://github.com/NVlabs/few-shot-vid2vid  NVlabs/few-shot-vid2vid] &amp;lt;span style=&amp;quot;color:#008000&amp;gt;(open source)&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* [[Open Frameworks]]&lt;br /&gt;
* [[Processing]]&lt;br /&gt;
* [[Pure Data - Getting started]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Max Neupert]]&lt;br /&gt;
[[Category:WS19]]&lt;br /&gt;
[[Category:Fachmodul]]&lt;/div&gt;</summary>
		<author><name>Krnd</name></author>
	</entry>
</feed>