<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Diyang</id>
	<title>Medien Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Diyang"/>
	<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/Special:Contributions/Diyang"/>
	<updated>2026-04-12T13:06:25Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.6</generator>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=IFD:Kolloq_WS1718&amp;diff=93554</id>
		<title>IFD:Kolloq WS1718</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=IFD:Kolloq_WS1718&amp;diff=93554"/>
		<updated>2017-11-17T08:16:58Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Syllabus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[:Category:Kolloquium|Kolloquium]] &amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;Lecturers:&#039;&#039; [[Jens Geelhaar]], [[Johannes Deich]], [[Jason Reizner]]&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;Credits:&#039;&#039; up to 6 [[ECTS]] (depending on the applicable study regulations), 3 [[SWS]]&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;Time:&#039;&#039; Mondays &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;17:30-20:00&amp;lt;/span&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;Location:&#039;&#039; Marienstraße 7b, R104&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;First Meeting:&#039;&#039; To Be Announced&lt;br /&gt;
&lt;br /&gt;
==Description==&lt;br /&gt;
The Interface Design colloquium invites students, guests and staff members to give short talks about their work and practice. Each talk is followed by a brief discussion of its topic.&lt;br /&gt;
&lt;br /&gt;
== Registration ==&lt;br /&gt;
Attendance is open to everyone and requires no registration.&lt;br /&gt;
&lt;br /&gt;
==Target audience==&lt;br /&gt;
All Bachelor&#039;s and Master&#039;s students, graduates and staff of the Interface Design chair.&lt;br /&gt;
&lt;br /&gt;
== Notes for Speakers ==&lt;br /&gt;
&lt;br /&gt;
If you feel you have something interesting to share, please contact us; we are always looking for new work to present and talk about.&lt;br /&gt;
&lt;br /&gt;
Typically, we have two presentations per meeting. There is a projector with a VGA connector; if you need sound or a computer, please contact &amp;lt;strong&amp;gt;Johannes Deich - johannes [d0t] deich [at] uni-weimar.de&amp;lt;/strong&amp;gt;. If you bring your presentation on a USB stick, please use either Keynote or PDF, because we may or may not have access to a Windows PC with PowerPoint.&lt;br /&gt;
&lt;br /&gt;
To connect to the internet, you can use the &amp;quot;eduroam&amp;quot; WiFi network. If you do not have a university login, please get in touch in advance so we can arrange a guest account.&lt;br /&gt;
&lt;br /&gt;
The language of your talk (German or English) is up to you. Note, however, that usually about 5 to 10% of the audience are international participants who understand English better than German.&lt;br /&gt;
&lt;br /&gt;
The presentation should last about 20 minutes. Please understand that we will stop the talk after 30 minutes at the latest, so that there is enough time for feedback, questions and a short follow-up discussion.&lt;br /&gt;
&lt;br /&gt;
==Syllabus==&lt;br /&gt;
Preliminary semester schedule:&lt;br /&gt;
  PLEASE MINIMUM + MAXIMUM TWO PRESENTATIONS PER DATE!&lt;br /&gt;
  THE COLLOQUIUM ONLY TAKES PLACE IF THERE ARE TWO PRESENTATIONS!&lt;br /&gt;
&lt;br /&gt;
#16.10. No Colloquium&lt;br /&gt;
#23.10. Aline Martinez, Gabriel Tobar&lt;br /&gt;
#30.10. No Colloquium - Mark Shepard Workshop&lt;br /&gt;
#06.11. Tatyana Sagitova (internship presentation), Diana Cota&lt;br /&gt;
#13.11. Aline Martinez&lt;br /&gt;
#20.11.&lt;br /&gt;
#27.11.&lt;br /&gt;
#04.12.&lt;br /&gt;
#11.12.&lt;br /&gt;
#18.12.&lt;br /&gt;
#08.01. Steffen Harder&lt;br /&gt;
#15.01.&lt;br /&gt;
#22.01. Di Yang (internship presentation)&lt;br /&gt;
#29.01.&lt;br /&gt;
&lt;br /&gt;
== Previous Semesters ==&lt;br /&gt;
* [[IFD:Kolloq SS17]]&lt;br /&gt;
* [[IFD:Kolloq WS16]]&lt;br /&gt;
* [[IFD:Kolloq SS16]]&lt;br /&gt;
* [[IFD:Kolloq WS15]]&lt;br /&gt;
* [[IFD:Kolloq SS15]]&lt;br /&gt;
* [[IFD:Kolloq WS14]]&lt;br /&gt;
* [[IFD:Kolloq SS14]]&lt;br /&gt;
* [[IFD:Kolloq WS13]]&lt;br /&gt;
* [[IFD:Kolloq SS13]]&lt;br /&gt;
* [[IFD:Kolloq WS12]]&lt;br /&gt;
* [[IFD:Kolloq SS12]]&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
* tba&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86635</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86635"/>
		<updated>2016-08-15T03:24:56Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, only a little of which can be observed from their outward appearance. Observed that way alone, one can never understand much about another person. The better way is to get close with empathy, be friendly, and make the effort to create a channel of communication for mutual understanding. Even after a connection is formed, if we stop talking or sharing feelings with each other, the relation becomes unstable and needs care from both sides to maintain. Distance is another important factor affecting the link, and ’distance’ here does not only mean geographical distance, but also the distance between hearts, our own inner environment. When we have been apart from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. When we have no connections with others for a long time, our value and our very existence are eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices for the purpose of creating connections with others. Every participant in the motion tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, a character looks as if it were covered by a rain of triangles, circles or squares. This image conveys the unclear, vague, unknown, mysterious feeling of a stranger; it only becomes possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and speed of the ’rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes become; the more emotional you are, the faster the shapes fall, and vice versa. Thus the ‘rain’ partly reveals the inner feeling: one participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for a further connection.&lt;br /&gt;
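&lt;br /&gt;
To make this mapping concrete, the following minimal Processing sketch illustrates the idea (an illustration only, not the actual project code; the mouse position stands in for a tracked participant, with x as mood and y as emotional intensity):&lt;br /&gt;
&lt;br /&gt;
  // Illustrative quadrant mapping: position controls the size&lt;br /&gt;
  // and fall speed of a single falling shape.&lt;br /&gt;
  float dropY = 0;&lt;br /&gt;
  void setup() {&lt;br /&gt;
    size(400, 400);&lt;br /&gt;
  }&lt;br /&gt;
  void draw() {&lt;br /&gt;
    background(0);&lt;br /&gt;
    // happier (further right) gives smaller shapes&lt;br /&gt;
    float shapeSize = map(mouseX, 0, width, 40, 5);&lt;br /&gt;
    // more emotional (further down) gives a faster fall&lt;br /&gt;
    float fallSpeed = map(mouseY, 0, height, 1, 12);&lt;br /&gt;
    dropY = (dropY + fallSpeed) % height;&lt;br /&gt;
    ellipse(width / 2, dropY, shapeSize, shapeSize);&lt;br /&gt;
  }&lt;br /&gt;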
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants hear their own melody, matching their own ‘inner feeling’, while walking around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
In order to create the channel of communication with another participant, one just needs to wave a hand and spread part of the ‘rain’ toward the other. The other participant notices this through the sound of the foreign ‘rain’ hitting their own. As in the picture here: A waves and B waves back, and then they are able to connect to each other; you can see elements of their own flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel slowly closes. After the characters part, each carries a few shapes from the other, symbolizing the influence the other has brought. But as time goes on, these foreign shapes slowly disappear, indicating that the character is forgetting the other. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In short, by letting the participants play a role in the interactive scene, the concept emphasizes the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender together with the Unity or Unreal engine to create a VR environment proved too complicated: it is hard to set up with simple VR sets like Google Cardboard and to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for the project, while keeping the role-play mode:&lt;br /&gt;
starting from the 3D sketch, I developed a plain and simple way of showing the ‘rain’ around the participants with a projector.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I use a tracking system to locate the participants and project the dots of rainfall onto the ground where the participants are actually standing. The circles and dots change size according to the positions of the participants, and the patterns indicate their &#039;inner feeling&#039;. (This feeling is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind displays a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sound, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of the tracking system, I am using a color-detection sketch in Processing to capture the movements of specific objects. Ideally, it should be able to track several single-colored objects; but in the real world, the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
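&lt;br /&gt;
The basic idea behind such a color-detection sketch can be outlined as follows (a simplified sketch of the general approach, not James Alliban&#039;s original code; the target color and match threshold are assumptions): scan each webcam frame for the pixel whose color is closest to a chosen target color.&lt;br /&gt;
&lt;br /&gt;
  import processing.video.*;&lt;br /&gt;
  Capture cam;&lt;br /&gt;
  color target = #FF0000; // assumed target: a red object&lt;br /&gt;
  void setup() {&lt;br /&gt;
    size(320, 240);&lt;br /&gt;
    cam = new Capture(this, width, height);&lt;br /&gt;
    cam.start();&lt;br /&gt;
  }&lt;br /&gt;
  void draw() {&lt;br /&gt;
    if (cam.available()) cam.read();&lt;br /&gt;
    image(cam, 0, 0);&lt;br /&gt;
    cam.loadPixels();&lt;br /&gt;
    float bestDist = 999999;&lt;br /&gt;
    int bestX = 0, bestY = 0;&lt;br /&gt;
    for (int y = 0; y &amp;lt; cam.height; y++) {&lt;br /&gt;
      for (int x = 0; x &amp;lt; cam.width; x++) {&lt;br /&gt;
        color c = cam.pixels[y * cam.width + x];&lt;br /&gt;
        float d = dist(red(c), green(c), blue(c), red(target), green(target), blue(target));&lt;br /&gt;
        if (d &amp;lt; bestDist) { bestDist = d; bestX = x; bestY = y; }&lt;br /&gt;
      }&lt;br /&gt;
    }&lt;br /&gt;
    // mark the best match only when it is reasonably close to the target color&lt;br /&gt;
    if (bestDist &amp;lt; 60) ellipse(bestX, bestY, 12, 12);&lt;br /&gt;
  }&lt;br /&gt;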
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sketch based on James Alliban&#039;s work: [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as mentioned above, the field is a coordinate plane, and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
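&lt;br /&gt;
This rule can be sketched in a few lines of Processing (an illustration only; the threshold value and stand-in positions are assumptions): the falling particles only keep moving while the two tracked positions are further apart than a threshold distance.&lt;br /&gt;
&lt;br /&gt;
  PVector posA = new PVector(80, 200);  // first tracked object&lt;br /&gt;
  PVector posB = new PVector(320, 200); // second tracked object&lt;br /&gt;
  float threshold = 80; // assumed merge distance in pixels&lt;br /&gt;
  float particleY = 0;&lt;br /&gt;
  void setup() {&lt;br /&gt;
    size(400, 400);&lt;br /&gt;
  }&lt;br /&gt;
  void draw() {&lt;br /&gt;
    background(0);&lt;br /&gt;
    posB.x = mouseX; // move one object with the mouse for testing&lt;br /&gt;
    // particles only keep falling while the two objects are apart&lt;br /&gt;
    if (PVector.dist(posA, posB) &amp;gt; threshold) {&lt;br /&gt;
      particleY = (particleY + 4) % height;&lt;br /&gt;
    }&lt;br /&gt;
    ellipse(posA.x, particleY, 10, 10);&lt;br /&gt;
    ellipse(posB.x, particleY, 10, 10);&lt;br /&gt;
  }&lt;br /&gt;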
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for sound feedback: an individual buzzer for each single-colored object. A melody or sound plays when the distance between any two objects becomes close or far enough. But while testing the syncing of the tracking data with the Arduino, I found the buzzers did not work the way I wanted: a melody assigned to one specific pin played through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the Processing sketch through the computer&#039;s speakers. This is less immediate and natural than the original plan, but it still clearly conveys the meaning when the objects approach or part, with corresponding sound effects (close - heartbeat / far - heartbeat fading away).&lt;br /&gt;
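&lt;br /&gt;
The speaker-based fallback could look roughly like this in Processing with the bundled Minim audio library (a hedged sketch; the sample file name, the stand-in positions and the distance threshold are assumptions): a heartbeat sample plays whenever two positions come close.&lt;br /&gt;
&lt;br /&gt;
  import ddf.minim.*;&lt;br /&gt;
  Minim minim;&lt;br /&gt;
  AudioPlayer heartbeat;&lt;br /&gt;
  void setup() {&lt;br /&gt;
    size(200, 200);&lt;br /&gt;
    minim = new Minim(this);&lt;br /&gt;
    heartbeat = minim.loadFile(&amp;quot;heartbeat.mp3&amp;quot;); // hypothetical sample file&lt;br /&gt;
  }&lt;br /&gt;
  void draw() {&lt;br /&gt;
    // stand-in positions; the real sketch would use the two tracked objects&lt;br /&gt;
    float d = dist(mouseX, mouseY, width / 2, height / 2);&lt;br /&gt;
    if (d &amp;lt; 50) {&lt;br /&gt;
      if (!heartbeat.isPlaying()) {&lt;br /&gt;
        heartbeat.rewind();&lt;br /&gt;
        heartbeat.play();&lt;br /&gt;
      }&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;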
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not set up my camera properly. So I had to use magnets attached to the locker, which serves as the ground, and put my webcam on the table, so that I could project the animations onto the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:5l1.png|thumb|left|500px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|400px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|400px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
Generative music in SuperCollider &amp;amp; Processing - [https://www.youtube.com/watch?v=rMbcqv8rxnA]&lt;br /&gt;
&lt;br /&gt;
Barbican&#039;s Rain Room: it&#039;s raining, but you won&#039;t get wet - [https://www.youtube.com/watch?v=EkvazIZx-F0]&lt;br /&gt;
&lt;br /&gt;
Processing tutorial: Overview of data visualization | lynda.com - [https://www.youtube.com/watch?v=T5lRLA_Vn7o]&lt;br /&gt;
&lt;br /&gt;
Tracking Performance using Kinect - [https://vimeo.com/99301608]&lt;br /&gt;
&lt;br /&gt;
Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other&lt;br /&gt;
&lt;br /&gt;
Jenova Chen, Journey - [http://thatgamecompany.com/games/journey/]&lt;br /&gt;
&lt;br /&gt;
Arduino to Max/MSP via OSC guide - [http://liamtmlacey.blogspot.de/2011/03/arduino-to-maxmsp-via-osc-guide-and.html]&lt;br /&gt;
&lt;br /&gt;
Color detection in Processing - [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;br /&gt;
&lt;br /&gt;
OpenCV Tutorial: Real-Time Object Tracking Without Colour - [https://www.youtube.com/watch?v=X6rPdRZzgjg]&lt;br /&gt;
&lt;br /&gt;
Multiple Object Detection with Color Using OpenCV - [https://www.youtube.com/watch?v=hQ-bpfdWQh8]&lt;br /&gt;
&lt;br /&gt;
Play Melody - [https://www.arduino.cc/en/Tutorial/PlayMelody]&lt;br /&gt;
&lt;br /&gt;
TouchOSC - [http://hexler.net/software/touchosc]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86634</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86634"/>
		<updated>2016-08-15T03:12:44Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Reference */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and have different kinds of emotions inside, which you can observe a little bit from the outside appearance. But in this way, one can never understand much about the other. So, the better way is to get close with transpositional consideration, be friendly and make effort to create a channel to communicate with each other for a mutual understanding. After the connection is formed, when we stop talking or sharing feelings with each other, the relation will be unstable and needs care from both sides to maintain. And also the distance is an important factor that affects the link. And the ’distance’ here doesn’t only represents the geographical distance, but also the distance of heart or inner environment of our own. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
And as human being, we can not live alone or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, the value or the existence of ourselves will be eroded and little by little be washed out from the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene, where participants can walk around with VR devices in the purpose of creating connection with others. All the participants in the motion tracking system will have a corresponding virtual character with a fluid shied/wall consisting of falling shapes(triangle, circle, square, etc.). Each character has its own typical shape. Like the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
But from the outside, the character looks just like they are covered by the rain of triangles/circles/squares. This image conveys a unclear, vague, unknown, mysterious feel of a stranger. And it’s only possible to see what behind the ‘rain’, when participants create the connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
And the size and the speed of the ’rain’ is based on which zone the participant is standing on. As the picture below, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal line means how your mood is, and the vertical line shows how emotional you are. That is to say,the happier you are, the smaller the shapes will be. Meanwhile, the more emotional you are, the faster the shapes will drop. And vice versa. Thus, the inner feel will be partly shown by the ‘rain’. One participant can notice a general idea of others’ emotion, so that participants will try to get closer to match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants will hear their own melody when walking around in the four quadrants matching with the their own ‘inner feeling’. So, the participants will be aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
In order to create the channel to communicate with other participants, one just needs to wave his hand and spread part of the ‘rain’ to the other. The other will be noticed through the sound of other’s ‘rain’ hitting on his own. Just like picture here, A waves and B waves back, then they are able to connect to each other. You can see, there are elements of their own flowing bidirectionally. Once the connection is built, they can see each other a bit more but not entirely and they can directly talk to each other or use body languages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there’s no any kind of communication or the distance between the participants is too long, then the channel will be slowly closed. After the characters are parted, they will carry a few shapes from other characters symbolizing the influence others have brought. But as time goes on, the shapes from others will disappear slowly indicating the character is forgetting the others. And maybe, it’s the time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when the participant stop making connections with others too long, the character in the virtual scene will slowly fade away… as mentioned above, the existence of human being will be meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by allowing the participants play a role in the interactive scene, it’s a concept strengthening the necessity and significance of the connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
As my original idea using Blender and Unity or Unreal engine to create a VR environment is too complicated and it&#039;s hard to set up just using simple VR sets like Google cardboard and interact with multiple participants at the same time. &lt;br /&gt;
So, I finally chose to use some easier approach to do my project, but still in some role-play mode. &lt;br /&gt;
From the sketch for 3D and develop the concept of a plain and simple way to show the rains around the participants by using projector. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
Just like the pictures on the left side, I will use tracking system to locate the participants and project the dots of rainfall to the ground where the participants are actually standing. And the circles and dots will change its size according to the position of the participants. The patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.)  At the same time, the screen behind will display the corresponding curve like electrocardiogram of each participants in different colors with their own  rhythms of sounds. By looking at the curves on the screen and hearing the sound, the participants can get a notion of others&#039;  &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms will be alike and the dots will merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using the color detection sketch from processing to capture certain objects&#039; movements. Ideally, it should be able to track several single-color objects. But in the real world, the webcam just not accurate enough for the tracking and it depends on the light condition extremely.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sketch based on James Alliban&#039;s work: [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough just like the 5th pictures showing.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is my original idea of sound feedback using individual buzzer for each single-color object. The melody or sound will be played, when the distance between every two objects are close or far enough . But during my test with syncing the tracking data with the arduino, I found the buzzer couldn&#039;t work as the way I wanted, one melody for one specific pin will go through the all three buzzers. I could&#039;t solve this, so I had no choice but looking for another solution. And the simplest solution is just play the audio through the processing sketch from computer speakers. Which is not so instinct and natural comparing with the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project images from a locker onto the table, but I found I cannot set my camera properly. So, I have to use magnets which can be attached to the  locker which is used as the ground, and put my webcam on the table, so I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:5l1.png|thumb|left|500px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|400px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|400px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setting is able to track three objects and measure the distance between each other and trigger certain interactions mentioned above, although the tracking is not stable.So for the time being, two objects are proper for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
Generative music in Supercollider &amp;amp; Processing - [https://www.youtube.com/watch?v=rMbcqv8rxnA] &lt;br /&gt;
&lt;br /&gt;
Barbican&#039;s Rain Room: it&#039;s raining, but you won&#039;t get wet - [https://www.youtube.com/watch?v=EkvazIZx-F0]&lt;br /&gt;
&lt;br /&gt;
Processing tutorial: Overview of data visualization | lynda.com - [https://www.youtube.com/watch?v=T5lRLA_Vn7o]&lt;br /&gt;
&lt;br /&gt;
Tracking Performance using Kinect - [https://vimeo.com/99301608]&lt;br /&gt;
&lt;br /&gt;
Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other&lt;br /&gt;
&lt;br /&gt;
Jenova Chen, Journey - [http://thatgamecompany.com/games/journey/]&lt;br /&gt;
&lt;br /&gt;
Arduino to Maxmsp via OSC gudie - [http://liamtmlacey.blogspot.de/2011/03/arduino-to-maxmsp-via-osc-guide-and.html]&lt;br /&gt;
&lt;br /&gt;
Color detection in Processing - [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;br /&gt;
&lt;br /&gt;
OpenCV Tutorial: Real-Time Object Tracking Without Colour - [https://www.youtube.com/watch?v=X6rPdRZzgjg]&lt;br /&gt;
&lt;br /&gt;
Multiple Object Detection with Color Using OpenCV - [https://www.youtube.com/watch?v=hQ-bpfdWQh8]&lt;br /&gt;
&lt;br /&gt;
Play Melody - [https://www.arduino.cc/en/Tutorial/PlayMelody]&lt;br /&gt;
&lt;br /&gt;
TouchOSC - [http://hexler.net/software/touchosc]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86633</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86633"/>
		<updated>2016-08-15T03:11:26Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Reference */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and have different kinds of emotions inside, which you can observe a little bit from the outside appearance. But in this way, one can never understand much about the other. So, the better way is to get close with transpositional consideration, be friendly and make effort to create a channel to communicate with each other for a mutual understanding. After the connection is formed, when we stop talking or sharing feelings with each other, the relation will be unstable and needs care from both sides to maintain. And also the distance is an important factor that affects the link. And the ’distance’ here doesn’t only represents the geographical distance, but also the distance of heart or inner environment of our own. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
And as human being, we can not live alone or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, the value or the existence of ourselves will be eroded and little by little be washed out from the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene, where participants can walk around with VR devices in the purpose of creating connection with others. All the participants in the motion tracking system will have a corresponding virtual character with a fluid shied/wall consisting of falling shapes(triangle, circle, square, etc.). Each character has its own typical shape. Like the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
But from the outside, the character looks just like they are covered by the rain of triangles/circles/squares. This image conveys a unclear, vague, unknown, mysterious feel of a stranger. And it’s only possible to see what behind the ‘rain’, when participants create the connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
And the size and the speed of the ’rain’ is based on which zone the participant is standing on. As the picture below, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal line means how your mood is, and the vertical line shows how emotional you are. That is to say,the happier you are, the smaller the shapes will be. Meanwhile, the more emotional you are, the faster the shapes will drop. And vice versa. Thus, the inner feel will be partly shown by the ‘rain’. One participant can notice a general idea of others’ emotion, so that participants will try to get closer to match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants will hear their own melody when walking around in the four quadrants matching with the their own ‘inner feeling’. So, the participants will be aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
In order to create the channel to communicate with other participants, one just needs to wave his hand and spread part of the ‘rain’ to the other. The other will be noticed through the sound of other’s ‘rain’ hitting on his own. Just like picture here, A waves and B waves back, then they are able to connect to each other. You can see, there are elements of their own flowing bidirectionally. Once the connection is built, they can see each other a bit more but not entirely and they can directly talk to each other or use body languages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there’s no any kind of communication or the distance between the participants is too long, then the channel will be slowly closed. After the characters are parted, they will carry a few shapes from other characters symbolizing the influence others have brought. But as time goes on, the shapes from others will disappear slowly indicating the character is forgetting the others. And maybe, it’s the time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when the participant stop making connections with others too long, the character in the virtual scene will slowly fade away… as mentioned above, the existence of human being will be meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by allowing the participants play a role in the interactive scene, it’s a concept strengthening the necessity and significance of the connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
As my original idea using Blender and Unity or Unreal engine to create a VR environment is too complicated and it&#039;s hard to set up just using simple VR sets like Google cardboard and interact with multiple participants at the same time. &lt;br /&gt;
So, I finally chose to use some easier approach to do my project, but still in some role-play mode. &lt;br /&gt;
From the sketch for 3D and develop the concept of a plain and simple way to show the rains around the participants by using projector. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
Just like the pictures on the left side, I will use tracking system to locate the participants and project the dots of rainfall to the ground where the participants are actually standing. And the circles and dots will change its size according to the position of the participants. The patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.)  At the same time, the screen behind will display the corresponding curve like electrocardiogram of each participants in different colors with their own  rhythms of sounds. By looking at the curves on the screen and hearing the sound, the participants can get a notion of others&#039;  &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms will be alike and the dots will merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using the color detection sketch from processing to capture certain objects&#039; movements. Ideally, it should be able to track several single-color objects. But in the real world, the webcam just not accurate enough for the tracking and it depends on the light condition extremely.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sketch based on James Alliban&#039;s work: [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough just like the 5th pictures showing.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is my original idea of sound feedback using individual buzzer for each single-color object. The melody or sound will be played, when the distance between every two objects are close or far enough . But during my test with syncing the tracking data with the arduino, I found the buzzer couldn&#039;t work as the way I wanted, one melody for one specific pin will go through the all three buzzers. I could&#039;t solve this, so I had no choice but looking for another solution. And the simplest solution is just play the audio through the processing sketch from computer speakers. Which is not so instinct and natural comparing with the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project images from a locker onto the table, but I found I cannot set my camera properly. So, I have to use magnets which can be attached to the  locker which is used as the ground, and put my webcam on the table, so I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:5l1.png|thumb|left|500px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|400px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|400px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setting is able to track three objects and measure the distance between each other and trigger certain interactions mentioned above, although the tracking is not stable.So for the time being, two objects are proper for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
Generative music in Supercollider &amp;amp; Processing - [https://www.youtube.com/watch?v=rMbcqv8rxnA] &lt;br /&gt;
&lt;br /&gt;
Barbican&#039;s Rain Room: it&#039;s raining, but you won&#039;t get wet - [https://www.youtube.com/watch?v=EkvazIZx-F0]&lt;br /&gt;
&lt;br /&gt;
Processing tutorial: Overview of data visualization | lynda.com - [https://www.youtube.com/watch?v=T5lRLA_Vn7o]&lt;br /&gt;
&lt;br /&gt;
Tracking Performance using Kinect - [https://vimeo.com/99301608]&lt;br /&gt;
&lt;br /&gt;
Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other&lt;br /&gt;
&lt;br /&gt;
Jenova Chen, Journey - http://thatgamecompany.com/games/journey/&lt;br /&gt;
&lt;br /&gt;
Arduino to Maxmsp via OSC gudie - [http://liamtmlacey.blogspot.de/2011/03/arduino-to-maxmsp-via-osc-guide-and.html]&lt;br /&gt;
&lt;br /&gt;
Color detection in Processing - [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;br /&gt;
&lt;br /&gt;
OpenCV Tutorial: Real-Time Object Tracking Without Colour - [https://www.youtube.com/watch?v=X6rPdRZzgjg]&lt;br /&gt;
&lt;br /&gt;
Multiple Object Detection with Color Using OpenCV - [https://www.youtube.com/watch?v=hQ-bpfdWQh8]&lt;br /&gt;
&lt;br /&gt;
Play Melody - [https://www.arduino.cc/en/Tutorial/PlayMelody]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86632</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86632"/>
		<updated>2016-08-15T03:08:45Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Reference */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and have different kinds of emotions inside, which you can observe a little bit from the outside appearance. But in this way, one can never understand much about the other. So, the better way is to get close with transpositional consideration, be friendly and make effort to create a channel to communicate with each other for a mutual understanding. After the connection is formed, when we stop talking or sharing feelings with each other, the relation will be unstable and needs care from both sides to maintain. And also the distance is an important factor that affects the link. And the ’distance’ here doesn’t only represents the geographical distance, but also the distance of heart or inner environment of our own. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
And as human being, we can not live alone or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, the value or the existence of ourselves will be eroded and little by little be washed out from the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene, where participants can walk around with VR devices in the purpose of creating connection with others. All the participants in the motion tracking system will have a corresponding virtual character with a fluid shied/wall consisting of falling shapes(triangle, circle, square, etc.). Each character has its own typical shape. Like the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
But from the outside, the character looks just like they are covered by the rain of triangles/circles/squares. This image conveys a unclear, vague, unknown, mysterious feel of a stranger. And it’s only possible to see what behind the ‘rain’, when participants create the connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
And the size and the speed of the ’rain’ is based on which zone the participant is standing on. As the picture below, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal line means how your mood is, and the vertical line shows how emotional you are. That is to say,the happier you are, the smaller the shapes will be. Meanwhile, the more emotional you are, the faster the shapes will drop. And vice versa. Thus, the inner feel will be partly shown by the ‘rain’. One participant can notice a general idea of others’ emotion, so that participants will try to get closer to match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants will hear their own melody when walking around in the four quadrants matching with the their own ‘inner feeling’. So, the participants will be aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
In order to create the channel to communicate with other participants, one just needs to wave his hand and spread part of the ‘rain’ to the other. The other will be noticed through the sound of other’s ‘rain’ hitting on his own. Just like picture here, A waves and B waves back, then they are able to connect to each other. You can see, there are elements of their own flowing bidirectionally. Once the connection is built, they can see each other a bit more but not entirely and they can directly talk to each other or use body languages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there’s no any kind of communication or the distance between the participants is too long, then the channel will be slowly closed. After the characters are parted, they will carry a few shapes from other characters symbolizing the influence others have brought. But as time goes on, the shapes from others will disappear slowly indicating the character is forgetting the others. And maybe, it’s the time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when the participant stop making connections with others too long, the character in the virtual scene will slowly fade away… as mentioned above, the existence of human being will be meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by allowing the participants play a role in the interactive scene, it’s a concept strengthening the necessity and significance of the connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
As my original idea using Blender and Unity or Unreal engine to create a VR environment is too complicated and it&#039;s hard to set up just using simple VR sets like Google cardboard and interact with multiple participants at the same time. &lt;br /&gt;
So, I finally chose to use some easier approach to do my project, but still in some role-play mode. &lt;br /&gt;
From the sketch for 3D and develop the concept of a plain and simple way to show the rains around the participants by using projector. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
Just like the pictures on the left side, I will use tracking system to locate the participants and project the dots of rainfall to the ground where the participants are actually standing. And the circles and dots will change its size according to the position of the participants. The patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.)  At the same time, the screen behind will display the corresponding curve like electrocardiogram of each participants in different colors with their own  rhythms of sounds. By looking at the curves on the screen and hearing the sound, the participants can get a notion of others&#039;  &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms will be alike and the dots will merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using the color detection sketch from processing to capture certain objects&#039; movements. Ideally, it should be able to track several single-color objects. But in the real world, the webcam just not accurate enough for the tracking and it depends on the light condition extremely.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sketch based on James Alliban&#039;s work: [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough just like the 5th pictures showing.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is my original idea of sound feedback using individual buzzer for each single-color object. The melody or sound will be played, when the distance between every two objects are close or far enough . But during my test with syncing the tracking data with the arduino, I found the buzzer couldn&#039;t work as the way I wanted, one melody for one specific pin will go through the all three buzzers. I could&#039;t solve this, so I had no choice but looking for another solution. And the simplest solution is just play the audio through the processing sketch from computer speakers. Which is not so instinct and natural comparing with the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project images from a locker onto the table, but I found I cannot set my camera properly. So, I have to use magnets which can be attached to the  locker which is used as the ground, and put my webcam on the table, so I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:5l1.png|thumb|left|500px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|400px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|400px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setting is able to track three objects and measure the distance between each other and trigger certain interactions mentioned above, although the tracking is not stable.So for the time being, two objects are proper for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
Generative music in SuperCollider &amp;amp; Processing - [https://www.youtube.com/watch?v=rMbcqv8rxnA]&lt;br /&gt;
&lt;br /&gt;
Barbican&#039;s Rain Room: it&#039;s raining, but you won&#039;t get wet - [https://www.youtube.com/watch?v=EkvazIZx-F0]&lt;br /&gt;
&lt;br /&gt;
Processing tutorial: Overview of data visualization | lynda.com - [https://www.youtube.com/watch?v=T5lRLA_Vn7o]&lt;br /&gt;
&lt;br /&gt;
[https://vimeo.com/99301608]&lt;br /&gt;
&lt;br /&gt;
Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other&lt;br /&gt;
&lt;br /&gt;
Jenova Chen, Journey - http://thatgamecompany.com/games/journey/&lt;br /&gt;
&lt;br /&gt;
Arduino to Max/MSP via OSC guide - [http://liamtmlacey.blogspot.de/2011/03/arduino-to-maxmsp-via-osc-guide-and.html]&lt;br /&gt;
&lt;br /&gt;
Color detection in Processing - [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;br /&gt;
&lt;br /&gt;
OpenCV Tutorial: Real-Time Object Tracking Without Colour - [https://www.youtube.com/watch?v=X6rPdRZzgjg]&lt;br /&gt;
&lt;br /&gt;
Multiple Object Detection with Color Using OpenCV - [https://www.youtube.com/watch?v=hQ-bpfdWQh8]&lt;br /&gt;
&lt;br /&gt;
Play Melody - [https://www.arduino.cc/en/Tutorial/PlayMelody]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86627</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86627"/>
		<updated>2016-08-15T03:02:56Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Reference */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen only from the outside, one can never understand much about another person. The better way is to get close with empathy, to be friendly, and to make the effort to create a channel of communication for mutual understanding. Once a connection is formed, it becomes unstable whenever we stop talking or sharing feelings, and it needs care from both sides to be maintained. Distance is another important factor affecting the link, and the ’distance’ here does not only mean geographical distance, but also the distance between hearts, our own inner environment. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. If we have no connections with others for a long time, our value and our very existence will be eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices, with the purpose of creating connections with others. Every participant in the motion tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own typical shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants have created a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ’rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; meanwhile, the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. A participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for a further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, each participant will hear their own melody, matching their own ‘inner feeling’, while walking around the four quadrants. In this way the participants become aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
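&lt;br /&gt;
In Processing terms, the quadrant mapping could look like the following minimal sketch, where the mouse stands in for a tracked participant and the size and speed ranges are assumptions.&lt;br /&gt;
&lt;br /&gt;
 void setup() {&lt;br /&gt;
   size(400, 400);&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 void draw() {&lt;br /&gt;
   background(0);&lt;br /&gt;
   // demo: the mouse stands in for a tracked participant position&lt;br /&gt;
   float s = shapeSizeFor(mouseX);&lt;br /&gt;
   float v = fallSpeedFor(mouseY);&lt;br /&gt;
   float y = (frameCount * v) % height;  // a single falling shape as illustration&lt;br /&gt;
   ellipse(mouseX, y, s, s);&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 float shapeSizeFor(float x) {&lt;br /&gt;
   return map(x, 0, width, 24, 4);   // happier (further right) means smaller shapes&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 float fallSpeedFor(float y) {&lt;br /&gt;
   return map(y, 0, height, 6, 1);   // more emotional (towards the top) means faster drops&lt;br /&gt;
 }&lt;br /&gt;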
&lt;br /&gt;
To create the channel for communicating with another participant, one just needs to wave a hand and thereby spread part of the ‘rain’ towards the other, who notices it through the sound of the foreign ‘rain’ hitting their own. As in the picture here: A waves, B waves back, and then they are able to connect to each other. You can see elements of each character flowing in both directions. Once the connection is built, they can see a bit more of each other, though not everything, and they can talk to each other directly or use body language.&lt;br /&gt;
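&lt;br /&gt;
A wave could be detected from the tracked hand position alone; the fragment below is only an assumed heuristic (counting quick changes of horizontal direction), not the actual implementation of the project:&lt;br /&gt;
 // Count quick sign changes of the horizontal hand velocity vx.&lt;br /&gt;
 int flips = 0;&lt;br /&gt;
 float prevVX = 0;&lt;br /&gt;
 int lastFlipFrame = 0;&lt;br /&gt;
 boolean isWaving(float vx) {&lt;br /&gt;
   if (vx * prevVX &amp;lt; 0) {                    // direction flipped&lt;br /&gt;
     flips = (frameCount - lastFlipFrame &amp;lt; 15) ? flips + 1 : 1;&lt;br /&gt;
     lastFlipFrame = frameCount;&lt;br /&gt;
   }&lt;br /&gt;
   prevVX = vx;&lt;br /&gt;
   return flips &amp;gt;= 3;                        // three quick flips count as a wave&lt;br /&gt;
 }&lt;br /&gt;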
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel slowly closes. After the characters have parted, each carries a few shapes from the other, symbolizing the influence the other has brought. As time goes on, those foreign shapes slowly disappear, indicating that the character is forgetting the other. And maybe then it is time for a reunion.&lt;br /&gt;
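&lt;br /&gt;
This forgetting can be sketched as a simple fade-out on each carried shape; the decay rate below is an arbitrary assumption:&lt;br /&gt;
 // A shape carried over from another character slowly fades away.&lt;br /&gt;
 class CarriedShape {&lt;br /&gt;
   float alpha = 255;&lt;br /&gt;
   void update() {&lt;br /&gt;
     alpha = max(0, alpha - 0.2);   // slowly forget the other character&lt;br /&gt;
   }&lt;br /&gt;
   boolean forgotten() {&lt;br /&gt;
     return alpha &amp;lt;= 0;           // fully forgotten, ready for a reunion&lt;br /&gt;
   }&lt;br /&gt;
 }&lt;br /&gt;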
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… As mentioned above, human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In short, by letting the participants play a role in the interactive scene, the concept stresses the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setup ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea was to use Blender and Unity or the Unreal engine to create a VR environment, but this turned out to be too complicated: it is hard to set up with simple VR devices such as Google Cardboard, and letting multiple participants interact at the same time is difficult. &lt;br /&gt;
So I finally chose an easier approach for the project, while keeping its role-play character. &lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants with a projector. &lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots change their size according to the positions of the participants, and the patterns indicate each participant&#039;s &#039;inner feeling&#039;. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind displays a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sounds. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally it should be able to track several single-color objects, but in practice the webcam is simply not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
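&lt;br /&gt;
The core of such a color-detection sketch looks roughly like this (a condensed version in the spirit of the tutorial linked under Reference; the target color and the matching threshold are placeholders):&lt;br /&gt;
 import processing.video.*;&lt;br /&gt;
 Capture cam;&lt;br /&gt;
 color target = color(255, 0, 0);   // assumed: the tracked object is red&lt;br /&gt;
 void setup() {&lt;br /&gt;
   size(640, 480);&lt;br /&gt;
   cam = new Capture(this, width, height);&lt;br /&gt;
   cam.start();&lt;br /&gt;
 }&lt;br /&gt;
 void draw() {&lt;br /&gt;
   if (cam.available()) cam.read();&lt;br /&gt;
   image(cam, 0, 0);&lt;br /&gt;
   cam.loadPixels();&lt;br /&gt;
   float best = 50;                 // matching threshold in RGB space&lt;br /&gt;
   int bx = -1, by = -1;&lt;br /&gt;
   for (int y = 0; y &amp;lt; height; y += 4) {    // sample every 4th pixel&lt;br /&gt;
     for (int x = 0; x &amp;lt; width; x += 4) {&lt;br /&gt;
       color c = cam.pixels[y * width + x];&lt;br /&gt;
       float d = dist(red(c), green(c), blue(c), red(target), green(target), blue(target));&lt;br /&gt;
       if (d &amp;lt; best) { best = d; bx = x; by = y; }&lt;br /&gt;
     }&lt;br /&gt;
   }&lt;br /&gt;
   if (bx &amp;gt;= 0) ellipse(bx, by, 20, 20);    // mark the best match&lt;br /&gt;
 }&lt;br /&gt;
Because only the single best match per frame is kept, each additional object needs its own target color and its own search pass, which is exactly where the lighting sensitivity becomes a problem.&lt;br /&gt;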
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As mentioned above, the field is a coordinate plane, and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
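&lt;br /&gt;
The rule itself is just a distance check between two tracked positions; a minimal sketch, assuming a Particle class with move() and display() methods and a made-up pixel threshold:&lt;br /&gt;
 float CONNECT_DIST = 80;             // assumed threshold in pixels&lt;br /&gt;
 boolean connected(PVector a, PVector b) {&lt;br /&gt;
   return PVector.dist(a, b) &amp;lt; CONNECT_DIST;&lt;br /&gt;
 }&lt;br /&gt;
 void updateParticles(ArrayList&amp;lt;Particle&amp;gt; particles, boolean frozen) {&lt;br /&gt;
   for (Particle p : particles) {&lt;br /&gt;
     if (!frozen) p.move();           // particles keep falling only while apart&lt;br /&gt;
     p.display();&lt;br /&gt;
   }&lt;br /&gt;
 }&lt;br /&gt;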
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the syncing of the tracking data with the Arduino, I found that the buzzers could not work the way I wanted: a melody meant for one specific pin sounded through all three buzzers. (Arduino&#039;s tone() function can in any case only generate one tone at a time, so independent melodies on several buzzers would not have been possible with it.) I could not solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the computer speakers through the Processing sketch, which is less immediate and natural than the original plan.&lt;br /&gt;
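&lt;br /&gt;
For the fallback it is enough to trigger a sound file from the sketch, for example with Processing&#039;s Minim library; a minimal sketch, with the file name and the distance threshold as placeholders:&lt;br /&gt;
 import ddf.minim.*;&lt;br /&gt;
 Minim minim;&lt;br /&gt;
 AudioPlayer melody;&lt;br /&gt;
 void setup() {&lt;br /&gt;
   minim = new Minim(this);&lt;br /&gt;
   melody = minim.loadFile(&amp;quot;melody.mp3&amp;quot;);  // placeholder file&lt;br /&gt;
 }&lt;br /&gt;
 void playFeedback(float distance) {&lt;br /&gt;
   if (distance &amp;lt; 80 &amp;amp;&amp;amp; !melody.isPlaying()) {&lt;br /&gt;
     melody.rewind();                 // restart the melody from the beginning&lt;br /&gt;
     melody.play();&lt;br /&gt;
   }&lt;br /&gt;
 }&lt;br /&gt;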
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not set up my camera properly. So I had to use magnets, which can be attached to the locker that serves as the ground, and put my webcam on the table, so that I can project the animations onto the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:5l1.png|thumb|left|500px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|400px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|400px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
Generative music in Supercollider &amp;amp; Processing - [https://www.youtube.com/watch?v=rMbcqv8rxnA] &lt;br /&gt;
&lt;br /&gt;
Barbican&#039;s Rain Room: it&#039;s raining, but you won&#039;t get wet - [https://www.youtube.com/watch?v=EkvazIZx-F0]&lt;br /&gt;
&lt;br /&gt;
Processing tutorial: Overview of data visualization | lynda.com - [https://www.youtube.com/watch?v=T5lRLA_Vn7o]&lt;br /&gt;
&lt;br /&gt;
[https://vimeo.com/99301608]&lt;br /&gt;
&lt;br /&gt;
Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other&lt;br /&gt;
&lt;br /&gt;
Jenova Chen, Journey - http://thatgamecompany.com/games/journey/&lt;br /&gt;
&lt;br /&gt;
Arduino to Max/MSP via OSC guide - [http://liamtmlacey.blogspot.de/2011/03/arduino-to-maxmsp-via-osc-guide-and.html]&lt;br /&gt;
&lt;br /&gt;
Color detection in processing - [https://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86622</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86622"/>
		<updated>2016-08-12T15:50:20Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen only from the outside, one can never understand much about another person. The better way is to get close with empathy, be friendly, and make the effort to create a channel of communication that leads to mutual understanding. Even after a connection is formed, it becomes unstable when we stop talking or sharing feelings, and it needs care from both sides to maintain. Distance is another important factor that affects the link, and the ‘distance’ here means not only geographical distance but also the distance between hearts, our own inner environment. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
As human beings, we cannot live alone, or we may somehow lose the meaning of being alive. When we have no connection with others for a long time, our value and even our existence erode and are, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices for the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, like the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger, and it is only possible to see what is behind the ‘rain’ when participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. A participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
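&lt;br /&gt;
In Processing terms this is just two linear mappings from the tracked position to the rain parameters. A minimal sketch of the idea (the value ranges are assumptions, not my exact code):&lt;br /&gt;
&lt;br /&gt;
 // x encodes mood (happier = larger x = smaller shapes),&lt;br /&gt;
 // y encodes how emotional one is (larger y = faster fall).&lt;br /&gt;
 float shapeSize(float x) {&lt;br /&gt;
   return map(x, 0, width, 40, 5);   // happier gives smaller shapes&lt;br /&gt;
 }&lt;br /&gt;
 float fallSpeed(float y) {&lt;br /&gt;
   return map(y, 0, height, 1, 8);   // more emotional falls faster&lt;br /&gt;
 }&lt;br /&gt;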
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, participants hear their own melody, matching their own ‘inner feeling’, as they walk around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other, who notices it through the sound of the foreign ‘rain’ hitting their own. Just like in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see their own elements flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. As time goes on, the shapes from others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, a human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setup ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and the Unity or Unreal engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR sets like Google Cardboard and to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for the project, while keeping the role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots change their size according to the positions of the participants, and the patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind displays a corresponding electrocardiogram-like curve for each participant in a different color, each with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects; in practice, however, the webcam is not accurate enough for the tracking and depends heavily on the lighting conditions.&lt;br /&gt;
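&lt;br /&gt;
The detection itself follows the standard Processing video example: scan the webcam frame for the pixel closest in color to a stored target. A simplified single-object sketch (the target color and the match threshold are assumptions, not my exact code):&lt;br /&gt;
&lt;br /&gt;
 import processing.video.*;&lt;br /&gt;
 &lt;br /&gt;
 Capture cam;&lt;br /&gt;
 color target;&lt;br /&gt;
 &lt;br /&gt;
 void setup() {&lt;br /&gt;
   size(640, 480);&lt;br /&gt;
   cam = new Capture(this, width, height);&lt;br /&gt;
   cam.start();&lt;br /&gt;
   target = color(255, 0, 0);   // the single-color object to follow&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 void draw() {&lt;br /&gt;
   if (cam.available()) cam.read();&lt;br /&gt;
   image(cam, 0, 0);&lt;br /&gt;
   cam.loadPixels();&lt;br /&gt;
   float best = 999999;&lt;br /&gt;
   int bestX = 0, bestY = 0;&lt;br /&gt;
   for (int y = 0; y &lt; height; y += 4) {        // subsample for speed&lt;br /&gt;
     for (int x = 0; x &lt; width; x += 4) {&lt;br /&gt;
       color c = cam.pixels[y * width + x];&lt;br /&gt;
       float d = dist(red(c), green(c), blue(c),&lt;br /&gt;
                      red(target), green(target), blue(target));&lt;br /&gt;
       if (d &lt; best) { best = d; bestX = x; bestY = y; }&lt;br /&gt;
     }&lt;br /&gt;
   }&lt;br /&gt;
   if (best &lt; 60) ellipse(bestX, bestY, 20, 20); // accept close matches only&lt;br /&gt;
 }&lt;br /&gt;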
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As I mentioned above, the field is a coordinate plane, and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, just as the fifth picture shows.&lt;br /&gt;
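&lt;br /&gt;
The close-enough rule is just a distance threshold between the two tracked positions; when it fires, the motion update is skipped. A sketch of the idea (the threshold value and the Particle class with a move() method are assumptions):&lt;br /&gt;
&lt;br /&gt;
 boolean closeEnough(PVector a, PVector b) {&lt;br /&gt;
   return PVector.dist(a, b) &lt; 80;   // pixels; tune to the projection scale&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 void updateParticles(ArrayList&lt;Particle&gt; particles, PVector a, PVector b) {&lt;br /&gt;
   if (closeEnough(a, b)) return;    // freeze the rain while they are close&lt;br /&gt;
   for (Particle p : particles) p.move();&lt;br /&gt;
 }&lt;br /&gt;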
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the sync of the tracking data with the Arduino, I found the buzzers did not work the way I wanted: a melody meant for one specific pin went through all three buzzers. I could not solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer speakers, which is less instinctive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not set up my camera properly. So I had to use magnets, which can be attached to the locker that serves as the ground, and put my webcam on the table, so that I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:5l1.png|thumb|left|440px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|400px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|400px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects are better suited for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86621</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86621"/>
		<updated>2016-08-12T15:50:00Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen only from the outside, one can never understand much about another person. The better way is to get close with empathy, be friendly, and make the effort to create a channel of communication that leads to mutual understanding. Even after a connection is formed, it becomes unstable when we stop talking or sharing feelings, and it needs care from both sides to maintain. Distance is another important factor that affects the link, and the ‘distance’ here means not only geographical distance but also the distance between hearts, our own inner environment. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
As human beings, we cannot live alone, or we may somehow lose the meaning of being alive. When we have no connection with others for a long time, our value and even our existence erode and are, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices for the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, like the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger, and it is only possible to see what is behind the ‘rain’ when participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. A participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, participants hear their own melody, matching their own ‘inner feeling’, as they walk around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other, who notices it through the sound of the foreign ‘rain’ hitting their own. Just like in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see their own elements flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. As time goes on, the shapes from others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, a human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setup ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and the Unity or Unreal engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR sets like Google Cardboard and to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for the project, while keeping the role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots change their size according to the positions of the participants, and the patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind displays a corresponding electrocardiogram-like curve for each participant in a different color, each with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects; in practice, however, the webcam is not accurate enough for the tracking and depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As I mentioned above, the field is a coordinate plane, and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, just as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the sync of the tracking data with the Arduino, I found the buzzers did not work the way I wanted: a melody meant for one specific pin went through all three buzzers. I could not solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer speakers, which is less instinctive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not set up my camera properly. So I had to use magnets, which can be attached to the locker that serves as the ground, and put my webcam on the table, so that I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
[[File:5l1.png|thumb|left|410px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|400px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|400px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects are better suited for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86620</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86620"/>
		<updated>2016-08-12T15:49:38Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen only from the outside, one can never understand much about another person. The better way is to get close with empathy, be friendly, and make the effort to create a channel of communication that leads to mutual understanding. Even after a connection is formed, it becomes unstable when we stop talking or sharing feelings, and it needs care from both sides to maintain. Distance is another important factor that affects the link, and the ‘distance’ here means not only geographical distance but also the distance between hearts, our own inner environment. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
As human beings, we cannot live alone, or we may somehow lose the meaning of being alive. When we have no connection with others for a long time, our value and even our existence erode and are, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices for the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, like the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger, and it is only possible to see what is behind the ‘rain’ when participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. A participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, participants hear their own melody, matching their own ‘inner feeling’, as they walk around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other, who notices it through the sound of the foreign ‘rain’ hitting their own. Just like in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see their own elements flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. As time goes on, the shapes from others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, a human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setup ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and the Unity or Unreal engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR sets like Google Cardboard and to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for the project, while keeping the role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots change their size according to the positions of the participants, and the patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind displays a corresponding electrocardiogram-like curve for each participant in a different color, each with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects; in practice, however, the webcam is not accurate enough for the tracking and depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As I mentioned above, the field is a coordinate plane, and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, just as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the sync of the tracking data with the Arduino, I found the buzzers did not work the way I wanted: a melody meant for one specific pin went through all three buzzers. I could not solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer speakers, which is less instinctive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not set up my camera properly. So I had to use magnets, which can be attached to the locker that serves as the ground, and put my webcam on the table, so that I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
[[File:5l1.png|thumb|left|400px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|400px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|400px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects are better suited for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:7l1.png&amp;diff=86619</id>
		<title>File:7l1.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:7l1.png&amp;diff=86619"/>
		<updated>2016-08-12T15:41:22Z</updated>

		<summary type="html">&lt;p&gt;Diyang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86618</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86618"/>
		<updated>2016-08-12T15:40:57Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen only from the outside, one can never understand much about another person. The better way is to get close with empathy, be friendly, and make the effort to create a channel of communication that leads to mutual understanding. Even after a connection is formed, it becomes unstable when we stop talking or sharing feelings, and it needs care from both sides to maintain. Distance is another important factor that affects the link, and the ‘distance’ here means not only geographical distance but also the distance between hearts, our own inner environment. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
As human beings, we cannot live alone, or we may somehow lose the meaning of being alive. When we have no connection with others for a long time, our value and even our existence erode and are, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices for the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, like the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger, and it is only possible to see what is behind the ‘rain’ when participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. A participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, participants hear their own melody, matching their own ‘inner feeling’, as they walk around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other, who notices it through the sound of the foreign ‘rain’ hitting their own. Just like in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see their own elements flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. As time goes on, the shapes from others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, a human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setup ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and the Unity or Unreal engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR sets like Google Cardboard and to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for the project, while keeping the role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots change their size according to the positions of the participants, and the patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind displays a corresponding electrocardiogram-like curve for each participant in a different color, each with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects; in practice, however, the webcam is not accurate enough for the tracking and depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As I mentioned above, the field is a coordinate plane, and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, just as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the sync of the tracking data with the Arduino, I found the buzzers did not work the way I wanted: a melody meant for one specific pin went through all three buzzers. I could not solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer speakers, which is less instinctive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not set up my camera properly. So I had to use magnets, which can be attached to the locker that serves as the ground, and put my webcam on the table, so that I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
[[File:5l1.png|thumb|left|310px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|310px|Screenshot-2-Close]]&lt;br /&gt;
[[File:7l1.png|thumb|left|310px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects are better suited for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:6l1.png&amp;diff=86617</id>
		<title>File:6l1.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:6l1.png&amp;diff=86617"/>
		<updated>2016-08-12T15:39:26Z</updated>

		<summary type="html">&lt;p&gt;Diyang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:5l1.png&amp;diff=86616</id>
		<title>File:5l1.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:5l1.png&amp;diff=86616"/>
		<updated>2016-08-12T15:38:24Z</updated>

		<summary type="html">&lt;p&gt;Diyang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86615</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86615"/>
		<updated>2016-08-12T15:38:01Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen only from the outside, one can never understand much about another person. The better way is to get close with empathy, be friendly, and make the effort to create a channel of communication that leads to mutual understanding. Even after a connection is formed, it becomes unstable when we stop talking or sharing feelings, and it needs care from both sides to maintain. Distance is another important factor that affects the link, and the ‘distance’ here means not only geographical distance but also the distance between hearts, our own inner environment. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link with others.&lt;br /&gt;
As human beings, we cannot live alone, or we may somehow lose the meaning of being alive. When we have no connection with others for a long time, our value and even our existence erode and are, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices for the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, like the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger, and it is only possible to see what is behind the ‘rain’ when participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. A participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, participants hear their own melody, matching their own ‘inner feeling’, as they walk around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other, who notices it through the sound of the foreign ‘rain’ hitting their own. Just like in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see their own elements flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. As time goes on, the shapes from others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, a human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setup ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and the Unity or Unreal engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR sets like Google Cardboard and to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for the project, while keeping the role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots change their size according to the positions of the participants, and the patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind displays a corresponding electrocardiogram-like curve for each participant in a different color, each with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects; in practice, however, the webcam is not accurate enough for the tracking and depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As I mentioned above, the field is a coordinate plane, and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, just as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the sync of the tracking data with the Arduino, I found the buzzers did not work the way I wanted: a melody meant for one specific pin went through all three buzzers. I could not solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer speakers, which is less instinctive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not position my camera properly. So I attached magnets to the locker, which serves as the ground, put my webcam on the table, and in this way I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
[[File:5l1.png|thumb|left|310px|Screenshot-1-Color Detection]]&lt;br /&gt;
[[File:6l1.png|thumb|left|310px|Screenshot-2-Close]]&lt;br /&gt;
[[File:6l1.png|thumb|left|310px|Screenshot-3-Far]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So for the time being, two objects work best for the interaction.&lt;br /&gt;
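&lt;br /&gt;
For reference, the distance measurement behind this is a simple pairwise check; a sketch, assuming the three positions come from the color detection as PVectors:&lt;br /&gt;
&lt;br /&gt;
 PVector[] objects = new PVector[3];  // positions from the color detection&lt;br /&gt;
 void checkPairs() {&lt;br /&gt;
   for (int i = 0; i &amp;lt; objects.length; i++) {&lt;br /&gt;
     for (int j = i + 1; j &amp;lt; objects.length; j++) {&lt;br /&gt;
       float d = PVector.dist(objects[i], objects[j]);&lt;br /&gt;
       // trigger the merge graphics and the sound here, based on d&lt;br /&gt;
     }&lt;br /&gt;
   }&lt;br /&gt;
 }&lt;br /&gt;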
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86614</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86614"/>
		<updated>2016-08-12T15:17:58Z</updated>

		<summary type="html">&lt;p&gt;Diyang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen from the outside alone, one can never understand much about another person. The better way is to get close with empathy, be friendly and make the effort to create a channel of communication for mutual understanding. Once a connection is formed, it becomes unstable as soon as we stop talking or sharing feelings, and it needs care from both sides to be maintained. Distance is another important factor that affects the link, and ‘distance’ here does not only mean geographical distance, but also the distance between hearts, our own inner environment. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So when we have no connections with others for a long time, our value and our very existence are eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices, with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangle, circle, square, etc.). Each character has its own typical shape, like in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis shows your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes become; the more emotional you are, the faster the shapes drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for a further connection.&lt;br /&gt;
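&lt;br /&gt;
As a sketch of this mapping (the value ranges are placeholders), the position on the platform could be translated into shape size and falling speed like this:&lt;br /&gt;
&lt;br /&gt;
 // x: mood axis, y: emotion axis, both assumed normalized to -1..1&lt;br /&gt;
 float shapeSize(float x) {&lt;br /&gt;
   return map(x, -1, 1, 40, 5);  // happier (larger x) = smaller shapes&lt;br /&gt;
 }&lt;br /&gt;
 float fallSpeed(float y) {&lt;br /&gt;
   return map(y, -1, 1, 1, 10);  // more emotional (larger y) = faster drop&lt;br /&gt;
 }&lt;br /&gt;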
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants are aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create the channel to communicate with other participants, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. Just like in the picture here: A waves and B waves back, and then they are able to connect to each other. You can see elements of their own flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
As my original idea of using Blender and Unity or Unreal Engine to create a VR environment was too complicated, and it is hard to set up with simple VR sets like Google Cardboard while interacting with multiple participants at the same time,&lt;br /&gt;
I finally chose an easier approach for the project, though still in a role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants with a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
Just like the pictures on the left side, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally it should be able to track several single-color objects, but in the real world the webcam is simply not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate system and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, playing a melody or sound whenever the distance between two objects becomes close or far enough. But while syncing the tracking data with the Arduino, I found the buzzers didn&#039;t work the way I wanted: a melody assigned to one specific pin sounded through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the Processing sketch through the computer speakers, which is less immediate and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not position my camera properly. So I attached magnets to the locker, which serves as the ground, put my webcam on the table, and in this way I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86613</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86613"/>
		<updated>2016-08-12T15:17:25Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen from the outside alone, one can never understand much about another person. The better way is to get close with empathy, be friendly and make the effort to create a channel of communication for mutual understanding. Once a connection is formed, it becomes unstable as soon as we stop talking or sharing feelings, and it needs care from both sides to be maintained. Distance is another important factor that affects the link, and ‘distance’ here does not only mean geographical distance, but also the distance between hearts, our own inner environment. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So when we have no connections with others for a long time, our value and our very existence are eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices, with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangle, circle, square, etc.). Each character has its own typical shape, like in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis shows your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes become; the more emotional you are, the faster the shapes drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for a further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants are aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create the channel to communicate with other participants, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. Just like in the picture here: A waves and B waves back, and then they are able to connect to each other. You can see elements of their own flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
As my original idea of using Blender and Unity or Unreal Engine to create a VR environment was too complicated, and it is hard to set up with simple VR sets like Google Cardboard while interacting with multiple participants at the same time,&lt;br /&gt;
I finally chose an easier approach for the project, though still in a role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants with a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
Just like the pictures on the left side, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally it should be able to track several single-color objects, but in the real world the webcam is simply not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate system and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, playing a melody or sound whenever the distance between two objects becomes close or far enough. But while syncing the tracking data with the Arduino, I found the buzzers didn&#039;t work the way I wanted: a melody assigned to one specific pin sounded through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the Processing sketch through the computer speakers, which is less immediate and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not position my camera properly. So I attached magnets to the locker, which serves as the ground, put my webcam on the table, and in this way I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86612</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86612"/>
		<updated>2016-08-12T15:16:51Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen from the outside alone, one can never understand much about another person. The better way is to get close with empathy, be friendly and make the effort to create a channel of communication for mutual understanding. Once a connection is formed, it becomes unstable as soon as we stop talking or sharing feelings, and it needs care from both sides to be maintained. Distance is another important factor that affects the link, and ‘distance’ here does not only mean geographical distance, but also the distance between hearts, our own inner environment. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So when we have no connections with others for a long time, our value and our very existence are eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices, with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangle, circle, square, etc.). Each character has its own typical shape, like in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis shows your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes become; the more emotional you are, the faster the shapes drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for a further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants are aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create the channel to communicate with other participants, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. Just like in the picture here: A waves and B waves back, and then they are able to connect to each other. You can see elements of their own flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
As my original idea of using Blender and Unity or Unreal Engine to create a VR environment was too complicated, and it is hard to set up with simple VR sets like Google Cardboard while interacting with multiple participants at the same time,&lt;br /&gt;
I finally chose an easier approach for the project, though still in a role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants with a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
Just like the pictures on the left side, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally it should be able to track several single-color objects, but in the real world the webcam is simply not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate system and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, playing a melody or sound whenever the distance between two objects becomes close or far enough. But while syncing the tracking data with the Arduino, I found the buzzers didn&#039;t work the way I wanted: a melody assigned to one specific pin sounded through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the Processing sketch through the computer speakers, which is less immediate and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not position my camera properly. So I attached magnets to the locker, which serves as the ground, put my webcam on the table, and in this way I can project the animations to the correct position.&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86611</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86611"/>
		<updated>2016-08-12T15:16:21Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen from the outside alone, one can never understand much about another person. The better way is to get close with empathy, be friendly and make the effort to create a channel of communication for mutual understanding. Once a connection is formed, it becomes unstable as soon as we stop talking or sharing feelings, and it needs care from both sides to be maintained. Distance is another important factor that affects the link, and ‘distance’ here does not only mean geographical distance, but also the distance between hearts, our own inner environment. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So when we have no connections with others for a long time, our value and our very existence are eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices, with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangle, circle, square, etc.). Each character has its own typical shape, like in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis shows your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes become; the more emotional you are, the faster the shapes drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for a further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants are aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create the channel to communicate with other participants, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. Just like in the picture here: A waves and B waves back, and then they are able to connect to each other. You can see elements of their own flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
As my original idea of using Blender and Unity or Unreal Engine to create a VR environment was too complicated, and it is hard to set up with simple VR sets like Google Cardboard while interacting with multiple participants at the same time,&lt;br /&gt;
I finally chose an easier approach for the project, though still in a role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants with a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
Just like the pictures on the left side, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally it should be able to track several single-color objects, but in the real world the webcam is simply not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate system and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, playing a melody or sound whenever the distance between two objects becomes close or far enough. But while syncing the tracking data with the Arduino, I found the buzzers didn&#039;t work the way I wanted: a melody assigned to one specific pin sounded through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the Processing sketch through the computer speakers, which is less immediate and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker onto the table, but I found I could not position my camera properly. So I attached magnets to the locker, which serves as the ground, put my webcam on the table, and in this way I can project the animations to the correct position.&lt;br /&gt;
&lt;br /&gt;
[[File:setttt.png]]&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86610</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86610"/>
		<updated>2016-08-12T15:14:27Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are complex and carry many kinds of emotions inside, of which only a little can be observed from the outside. Seen from the outside alone, one can never understand much about another person. The better way is to get close with empathy, be friendly and make the effort to create a channel of communication for mutual understanding. Once a connection is formed, it becomes unstable as soon as we stop talking or sharing feelings, and it needs care from both sides to be maintained. Distance is another important factor that affects the link, and ‘distance’ here does not only mean geographical distance, but also the distance between hearts, our own inner environment. When we are parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So when we have no connections with others for a long time, our value and our very existence are eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices, with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangle, circle, square, etc.). Each character has its own typical shape, like in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, however, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ depend on which zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis shows your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes become; the more emotional you are, the faster the shapes drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for a further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants are aware of how they ‘feel’ in this virtual simulation scene.&lt;br /&gt;
&lt;br /&gt;
To create the channel to communicate with other participants, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. Just like in the picture here: A waves and B waves back, and then they are able to connect to each other. You can see elements of their own flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel slowly closes. After the characters part, they carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene slowly fades away… as mentioned above, human existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
As my original idea of using Blender and Unity or Unreal Engine to create a VR environment was too complicated, and it is hard to set up with simple VR sets like Google Cardboard while interacting with multiple participants at the same time,&lt;br /&gt;
I finally chose an easier approach for the project, though still in a role-play mode.&lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants with a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
Just like the pictures on the left side, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally it should be able to track several single-color objects, but in the real world the webcam is simply not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px|Diagram]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate system and the dots change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, playing a melody or sound whenever the distance between two objects becomes close or far enough. But while syncing the tracking data with the Arduino, I found the buzzers didn&#039;t work the way I wanted: a melody assigned to one specific pin sounded through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the Processing sketch through the computer speakers, which is less immediate and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker down onto the table, but I found I could not position my camera properly that way. So I had to attach magnets to the locker, which is used as the ground, and put my webcam on the table, so that I can project the animations onto the correct positions.&lt;br /&gt;
&lt;br /&gt;
[[File:setttt.png]]&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
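&lt;br /&gt;
One trick that might make the jittery tracking more usable, though it is not something the current sketch does, is to low-pass filter the raw positions before measuring distances; a minimal version, with a guessed easing factor of 0.2:&lt;br /&gt;
 PVector raw = new PVector();&lt;br /&gt;
 PVector smoothed = new PVector();&lt;br /&gt;
 &lt;br /&gt;
 void setup() { size(400, 200); }&lt;br /&gt;
 &lt;br /&gt;
 void draw() {&lt;br /&gt;
   background(0);&lt;br /&gt;
   raw.set(mouseX, mouseY);     // stand-in for a jittery tracked position&lt;br /&gt;
   smoothed.lerp(raw, 0.2);     // ease towards each new reading&lt;br /&gt;
   ellipse(smoothed.x, smoothed.y, 12, 12);&lt;br /&gt;
 }&lt;br /&gt;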
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86609</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86609"/>
		<updated>2016-08-12T15:13:40Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. Seen this way, one can never understand much about another person. The better way is to get close by putting oneself in the other&#039;s place, to be friendly, and to make the effort to create a channel of communication and mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other the relation becomes unstable and needs care from both sides to maintain. Distance is another important factor that affects the link, and ‘distance’ here means not only geographical distance but also the distance between hearts, between our own inner environments. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value, our very existence, erodes and is washed out of the world little by little...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
But from the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; meanwhile, the more emotional you are, the faster the shapes will drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
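&lt;br /&gt;
As a concrete sketch of this mapping, the mouse position can stand in for a participant on the platform; the size and speed ranges below are placeholder values:&lt;br /&gt;
 void setup() { size(400, 400); }&lt;br /&gt;
 &lt;br /&gt;
 void draw() {&lt;br /&gt;
   background(0);&lt;br /&gt;
   // happier (further right) -&gt; smaller shapes&lt;br /&gt;
   float shapeSize = map(mouseX, 0, width, 40, 5);&lt;br /&gt;
   // more emotional (further up) -&gt; faster drop&lt;br /&gt;
   float fallSpeed = map(mouseY, height, 0, 0.5, 8);&lt;br /&gt;
   float y = (frameCount * fallSpeed) % height;&lt;br /&gt;
   ellipse(width/2, y, shapeSize, shapeSize);&lt;br /&gt;
 }&lt;br /&gt;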
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants will hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants will be aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other. The other is notified by the sound of the foreign ‘rain’ hitting their own. Just as in the picture here: A waves and B waves back, and then they are able to connect with each other; you can see elements of each flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel will slowly close. After the characters part, each carries a few shapes from the other, symbolizing the influence the other has brought. But as time goes on, the foreign shapes slowly disappear, indicating that the character is forgetting the other. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to realise with simple VR gear like Google Cardboard, and hard to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for my project, while keeping a kind of role-play mode:&lt;br /&gt;
starting from the 3D sketch, I developed the concept towards a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where the participants are actually standing. And the circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, their rhythms become alike and their dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using a tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in the real world the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
[[File:cppp.png|thumb|left|600px]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the syncing of the tracking data with the Arduino, I found the buzzers would not work the way I wanted: a melody meant for one specific pin played through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the computer speakers through the Processing sketch, which is not as intuitive and natural as the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker down onto the table, but I found I could not position my camera properly that way. So I had to attach magnets to the locker, which is used as the ground, and put my webcam on the table, so that I can project the animations onto the correct positions.&lt;br /&gt;
&lt;br /&gt;
[[File:setttt.png]]&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86608</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86608"/>
		<updated>2016-08-12T15:12:07Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. Seen this way, one can never understand much about another person. The better way is to get close by putting oneself in the other&#039;s place, to be friendly, and to make the effort to create a channel of communication and mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other the relation becomes unstable and needs care from both sides to maintain. Distance is another important factor that affects the link, and ‘distance’ here means not only geographical distance but also the distance between hearts, between our own inner environments. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value, our very existence, erodes and is washed out of the world little by little...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
But from the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; meanwhile, the more emotional you are, the faster the shapes will drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants will hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants will be aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other. The other is notified by the sound of the foreign ‘rain’ hitting their own. Just as in the picture here: A waves and B waves back, and then they are able to connect with each other; you can see elements of each flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel will slowly close. After the characters part, each carries a few shapes from the other, symbolizing the influence the other has brought. But as time goes on, the foreign shapes slowly disappear, indicating that the character is forgetting the other. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to realise with simple VR gear like Google Cardboard, and hard to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for my project, while keeping a kind of role-play mode:&lt;br /&gt;
starting from the 3D sketch, I developed the concept towards a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where the participants are actually standing. And the circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, their rhythms become alike and their dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using a tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in the real world the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
[[File:cppp.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the syncing of the tracking data with the Arduino, I found the buzzers would not work the way I wanted: a melody meant for one specific pin played through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the computer speakers through the Processing sketch, which is not as intuitive and natural as the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker down onto the table, but I found I could not position my camera properly that way. So I had to attach magnets to the locker, which is used as the ground, and put my webcam on the table, so that I can project the animations onto the correct positions.&lt;br /&gt;
&lt;br /&gt;
[[File:setttt.png]]&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Cppp.png&amp;diff=86607</id>
		<title>File:Cppp.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Cppp.png&amp;diff=86607"/>
		<updated>2016-08-12T14:48:38Z</updated>

		<summary type="html">&lt;p&gt;Diyang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{self|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86606</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86606"/>
		<updated>2016-08-12T14:44:35Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. Seen this way, one can never understand much about another person. The better way is to get close by putting oneself in the other&#039;s place, to be friendly, and to make the effort to create a channel of communication and mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other the relation becomes unstable and needs care from both sides to maintain. Distance is another important factor that affects the link, and ‘distance’ here means not only geographical distance but also the distance between hearts, between our own inner environments. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value, our very existence, erodes and is washed out of the world little by little...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
But from the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; meanwhile, the more emotional you are, the faster the shapes will drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants will hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants will be aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other. The other is notified by the sound of the foreign ‘rain’ hitting their own. Just as in the picture here: A waves and B waves back, and then they are able to connect with each other; you can see elements of each flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel will slowly close. After the characters part, each carries a few shapes from the other, symbolizing the influence the other has brought. But as time goes on, the foreign shapes slowly disappear, indicating that the character is forgetting the other. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to realise with simple VR gear like Google Cardboard, and hard to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for my project, while keeping a kind of role-play mode:&lt;br /&gt;
starting from the 3D sketch, I developed the concept towards a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where the participants are actually standing. And the circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, their rhythms become alike and their dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using a tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in the real world the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
[[File:cppp.png]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the syncing of the tracking data with the Arduino, I found the buzzers would not work the way I wanted: a melody meant for one specific pin played through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the computer speakers through the Processing sketch, which is not as intuitive and natural as the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker down onto the table, but I found I could not position my camera properly that way. So I had to attach magnets to the locker, which is used as the ground, and put my webcam on the table, so that I can project the animations onto the correct positions.&lt;br /&gt;
&lt;br /&gt;
[[File:setttt.png]]&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86605</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86605"/>
		<updated>2016-08-12T14:43:55Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. Seen this way, one can never understand much about another person. The better way is to get close by putting oneself in the other&#039;s place, to be friendly, and to make the effort to create a channel of communication and mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other the relation becomes unstable and needs care from both sides to maintain. Distance is another important factor that affects the link, and ‘distance’ here means not only geographical distance but also the distance between hearts, between our own inner environments. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value, our very existence, erodes and is washed out of the world little by little...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
But from the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; meanwhile, the more emotional you are, the faster the shapes will drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants will hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants will be aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other. The other is notified by the sound of the foreign ‘rain’ hitting their own. Just as in the picture here: A waves and B waves back, and then they are able to connect with each other; you can see elements of each flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel will slowly close. After the characters part, each carries a few shapes from the other, symbolizing the influence the other has brought. But as time goes on, the foreign shapes slowly disappear, indicating that the character is forgetting the other. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to realise with simple VR gear like Google Cardboard, and hard to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for my project, while keeping a kind of role-play mode:&lt;br /&gt;
starting from the 3D sketch, I developed the concept towards a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where the participants are actually standing. And the circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, their rhythms become alike and their dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using a tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in the real world the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
[[File:Diagram.png]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the syncing of the tracking data with the Arduino, I found the buzzers would not work the way I wanted: a melody meant for one specific pin played through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the computer speakers through the Processing sketch, which is not as intuitive and natural as the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker down onto the table, but I found I could not position my camera properly that way. So I had to attach magnets to the locker, which is used as the ground, and put my webcam on the table, so that I can project the animations onto the correct positions.&lt;br /&gt;
&lt;br /&gt;
[[File:setttt.png]]&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86593</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86593"/>
		<updated>2016-08-10T14:59:46Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. Seen this way, one can never understand much about another person. The better way is to get close by putting oneself in the other&#039;s place, to be friendly, and to make the effort to create a channel of communication and mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other the relation becomes unstable and needs care from both sides to maintain. Distance is another important factor that affects the link, and ‘distance’ here means not only geographical distance but also the distance between hearts, between our own inner environments. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value, our very existence, erodes and is washed out of the world little by little...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system has a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own characteristic shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
But from the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ‘rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided into four quadrants according to the diagram.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; meanwhile, the more emotional you are, the faster the shapes will drop, and vice versa. Thus the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer and match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, the participants will hear their own melody while walking around the four quadrants, matching their own ‘inner feeling’. So the participants will be aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of one’s ‘rain’ towards the other. The other is notified by the sound of the foreign ‘rain’ hitting their own. Just as in the picture here: A waves and B waves back, and then they are able to connect with each other; you can see elements of each flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants grows too large, the channel will slowly close. After the characters part, each carries a few shapes from the other, symbolizing the influence the other has brought. But as time goes on, the foreign shapes slowly disappear, indicating that the character is forgetting the other. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to realise with simple VR gear like Google Cardboard, and hard to let multiple participants interact at the same time.&lt;br /&gt;
So I finally chose an easier approach for my project, while keeping a kind of role-play mode:&lt;br /&gt;
starting from the 3D sketch, I developed the concept towards a plain and simple way of showing the rain around the participants by using a projector.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where the participants are actually standing. And the circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of the participants. (The feeling is not your real feeling but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and hearing the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, their rhythms become alike and their dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using a tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in the real world the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
Just as I mentioned above, the field is a coordinate and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object, with a melody or sound played whenever the distance between two objects becomes close or far enough. But while testing the syncing of the tracking data with the Arduino, I found the buzzers would not work the way I wanted: a melody meant for one specific pin played through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one is to play the audio from the computer speakers through the Processing sketch, which is not as intuitive and natural as the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I tried to project the images from a locker down onto the table, but I found I could not position my camera properly that way. So I had to attach magnets to the locker, which is used as the ground, and put my webcam on the table, so that I can project the animations onto the correct positions.&lt;br /&gt;
&lt;br /&gt;
[[File:setttt.png]]&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86592</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86592"/>
		<updated>2016-08-10T14:54:19Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. In this way, one can never understand much about another person. The better way is to get close with empathy, to be friendly, and to make the effort to create a channel of communication for mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other, the relationship becomes unstable and needs care from both sides to be maintained. Distance is another important factor affecting the link, and the ’distance’ here does not only mean geographical distance, but also the distance between hearts, the inner environment of our own. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value and our very existence will be eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system will have a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own typical shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ’rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; and the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for further connection.&lt;br /&gt;
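&lt;br /&gt;
A minimal sketch of this mapping, assuming the platform coordinates are normalized to the sketch window; the ranges are illustrative.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// horizontal axis (mood): the happier, the smaller the shapes&lt;br /&gt;
float shapeSize(float x) {&lt;br /&gt;
  return map(x, 0, width, 14, 4);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// vertical axis (emotionality): the more emotional, the faster the fall&lt;br /&gt;
float fallSpeed(float y) {&lt;br /&gt;
  return map(y, 0, height, 1, 8);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;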
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, each participant will hear their own melody, matching their own ‘inner feeling’, while walking around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. As in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see elements of their own shapes flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel will slowly close. After the characters part, they will carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others will slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR kits like Google Cardboard and to interact with multiple participants at the same time. &lt;br /&gt;
So, I finally chose an easier approach for my project, while keeping a role-play mode. &lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants using a projector. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This feeling is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in practice the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As mentioned above, the field is a coordinate system, and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object. A melody or sound would be played whenever the distance between two objects became close or far enough. But while testing the syncing of the tracking data with the Arduino, I found that the buzzers could not work the way I wanted: a melody meant for one specific pin would play through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer&#039;s speakers, which is less intuitive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86591</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86591"/>
		<updated>2016-08-10T14:48:46Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Technical setting-up */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. In this way, one can never understand much about another person. The better way is to get close with empathy, to be friendly, and to make the effort to create a channel of communication for mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other, the relationship becomes unstable and needs care from both sides to be maintained. Distance is another important factor affecting the link, and the ’distance’ here does not only mean geographical distance, but also the distance between hearts, the inner environment of our own. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value and our very existence will be eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system will have a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own typical shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ’rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; and the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, each participant will hear their own melody, matching their own ‘inner feeling’, while walking around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. As in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see elements of their own shapes flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel will slowly close. After the characters part, they will carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others will slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR kits like Google Cardboard and to interact with multiple participants at the same time. &lt;br /&gt;
So, I finally chose an easier approach for my project, while keeping a role-play mode. &lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants using a projector. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This feeling is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in practice the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As mentioned above, the field is a coordinate system, and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object. A melody or sound would be played whenever the distance between two objects became close or far enough. But while testing the syncing of the tracking data with the Arduino, I found that the buzzers could not work the way I wanted: a melody meant for one specific pin would play through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer&#039;s speakers, which is less intuitive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86590</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86590"/>
		<updated>2016-08-10T14:47:28Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Concept */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. In this way, one can never understand much about another person. The better way is to get close with empathy, to be friendly, and to make the effort to create a channel of communication for mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other, the relationship becomes unstable and needs care from both sides to be maintained. Distance is another important factor affecting the link, and the ’distance’ here does not only mean geographical distance, but also the distance between hearts, the inner environment of our own. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value and our very existence will be eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system will have a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own typical shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ’rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; and the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, each participant will hear their own melody, matching their own ‘inner feeling’, while walking around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. As in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see elements of their own shapes flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel will slowly close. After the characters part, they will carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others will slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR kits like Google Cardboard and to interact with multiple participants at the same time. &lt;br /&gt;
So, I finally chose an easier approach for my project, while keeping a role-play mode. &lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants using a projector. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This feeling is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in practice the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As mentioned above, the field is a coordinate system, and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object. A melody or sound would be played whenever the distance between two objects became close or far enough. But while testing the syncing of the tracking data with the Arduino, I found that the buzzers could not work the way I wanted: a melody meant for one specific pin would play through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer&#039;s speakers, which is less intuitive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86589</id>
		<title>GMU:Human and Nonhuman Performances II SS16/Di Yang</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Human_and_Nonhuman_Performances_II_SS16/Di_Yang&amp;diff=86589"/>
		<updated>2016-08-10T14:47:01Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Final Result */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aloneg ==&lt;br /&gt;
— Connection and Communication&lt;br /&gt;
&lt;br /&gt;
== Idea ==&lt;br /&gt;
&lt;br /&gt;
   &#039;&#039;We’re born alone, we live alone, we die alone. Only&lt;br /&gt;
   through our love and friendship can we create the&lt;br /&gt;
   illusion for the moment that we’re not alone.&lt;br /&gt;
      — Orson Welles&#039;&#039;&lt;br /&gt;
Humans are always complex and carry many different kinds of emotions inside, of which only a little can be observed from the outside. In this way, one can never understand much about another person. The better way is to get close with empathy, to be friendly, and to make the effort to create a channel of communication for mutual understanding. Even after a connection is formed, when we stop talking or sharing feelings with each other, the relationship becomes unstable and needs care from both sides to be maintained. Distance is another important factor affecting the link, and the ’distance’ here does not only mean geographical distance, but also the distance between hearts, the inner environment of our own. When we have been parted from others for a while, we always need some time and some actions to get close again and rebuild the link.&lt;br /&gt;
And as human beings, we cannot live alone, or we may somehow lose the meaning of being alive. So, when we have no connections with others for a long time, our value and our very existence will be eroded and, little by little, washed out of the world...&lt;br /&gt;
&lt;br /&gt;
== Concept ==&lt;br /&gt;
&lt;br /&gt;
I want to create a virtual scene where participants can walk around with VR devices with the purpose of creating connections with others. Every participant in the motion-tracking system will have a corresponding virtual character with a fluid shield/wall consisting of falling shapes (triangles, circles, squares, etc.). Each character has its own typical shape, as in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:AA1.jpg]]&lt;br /&gt;
&lt;br /&gt;
From the outside, the characters look as if they are covered by a rain of triangles/circles/squares. This image conveys the unclear, vague, unknown, mysterious feel of a stranger. It is only possible to see what is behind the ‘rain’ once participants create a connection with each other.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The size and the speed of the ’rain’ are based on the zone the participant is standing in. As the picture below shows, the platform is divided according to the diagram into four quadrants.&lt;br /&gt;
&lt;br /&gt;
[[File:AA2.jpg]]&lt;br /&gt;
&lt;br /&gt;
The horizontal axis represents your mood, and the vertical axis shows how emotional you are. That is to say, the happier you are, the smaller the shapes will be; and the more emotional you are, the faster the shapes will drop, and vice versa. Thus, the inner feeling is partly shown by the ‘rain’. Each participant can get a general idea of the others’ emotions, so participants will try to get closer to match a similar emotional state for further connection.&lt;br /&gt;
&lt;br /&gt;
[[File:AA33.jpg]]&lt;br /&gt;
&lt;br /&gt;
Besides, each participant will hear their own melody, matching their own ‘inner feeling’, while walking around the four quadrants. In this way, the participants become aware of how they ‘feel’ in this virtual, simulated scene.&lt;br /&gt;
&lt;br /&gt;
In order to create a channel of communication with another participant, one just needs to wave a hand and spread part of the ‘rain’ towards the other, who will notice it through the sound of the other’s ‘rain’ hitting their own. As in the picture here, A waves and B waves back, and then they are able to connect to each other; you can see elements of their own shapes flowing in both directions. Once the connection is built, they can see each other a bit more, though not entirely, and they can talk to each other directly or use body language.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When there is no communication of any kind, or the distance between the participants becomes too great, the channel will slowly close. After the characters part, they will carry a few shapes from the other characters, symbolizing the influence the others have brought. But as time goes on, the shapes from the others will slowly disappear, indicating that the character is forgetting them. And maybe then it is time for a reunion.&lt;br /&gt;
&lt;br /&gt;
[[File:AA4.jpg]]&lt;br /&gt;
&lt;br /&gt;
Additionally, when a participant stops making connections with others for too long, their character in the virtual scene will slowly fade away… as mentioned above, a human being’s existence becomes meaningless without connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
In a word, by letting the participants play a role in the interactive scene, the concept underlines the necessity and significance of connection and communication with others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Technical setting-up  ==&lt;br /&gt;
[[File:AA5.png|thumb|left|200px|projected rainfall]]&lt;br /&gt;
My original idea of using Blender and Unity or the Unreal Engine to create a VR environment turned out to be too complicated: it is hard to set up with simple VR kits like Google Cardboard and to interact with multiple participants at the same time. &lt;br /&gt;
So, I finally chose an easier approach for my project, while keeping a role-play mode. &lt;br /&gt;
Starting from the 3D sketch, I developed the concept into a plain and simple way of showing the rain around the participants using a projector. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ddd12.png|600px|Diagram]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Object tracking test&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYY0tKMmlKZ3ZHeXc]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[File:AA6.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
[[File:AA7.png|thumb|left|200px|]]&lt;br /&gt;
&lt;br /&gt;
As the pictures on the left show, I will use a tracking system to locate the participants and project the dots of rainfall onto the ground where they are actually standing. The circles and dots will change size according to the participants&#039; positions. The patterns indicate the &#039;inner feeling&#039; of each participant. (This feeling is not your real feeling, but a simulated one in this specific environment.) At the same time, the screen behind will display a corresponding electrocardiogram-like curve for each participant, in a different color and with its own rhythm of sound. By looking at the curves on the screen and listening to the sounds, the participants can get a notion of the others&#039; &#039;feeling&#039; in this interactive &#039;game&#039;. When people get close, the rhythms become alike and the dots merge.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Updates:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Instead of using the tracking system, I am using a color-detection sketch in Processing to capture the movements of certain objects. Ideally, it should be able to track several single-color objects, but in practice the webcam is just not accurate enough for the tracking, and it depends heavily on the lighting conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ground graphics:&#039;&#039;&#039;&lt;br /&gt;
As mentioned above, the field is a coordinate system, and the dots will change according to the location data.&lt;br /&gt;
&lt;br /&gt;
[[File:pp1.png]]&lt;br /&gt;
[[File:pp2.png]]&lt;br /&gt;
[[File:pp3.png]]&lt;br /&gt;
[[File:pp4.png]]&lt;br /&gt;
[[File:pp5.png]]&lt;br /&gt;
[[File:pp6.png]]&lt;br /&gt;
&lt;br /&gt;
The particles will stop moving when two participants get close enough, as the fifth picture shows.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sound feedback:&#039;&#039;&#039;&lt;br /&gt;
[[File:BBuzzer.png|thumb|left|300px]]&lt;br /&gt;
[[File:B&amp;amp;ard.png|thumb|left|300px]]&lt;br /&gt;
&lt;br /&gt;
This was my original idea for the sound feedback: an individual buzzer for each single-color object. A melody or sound would be played whenever the distance between two objects became close or far enough. But while testing the syncing of the tracking data with the Arduino, I found that the buzzers could not work the way I wanted: a melody meant for one specific pin would play through all three buzzers. I couldn&#039;t solve this, so I had no choice but to look for another solution. The simplest one was to play the audio from the Processing sketch through the computer&#039;s speakers, which is less intuitive and natural than the original plan.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Result ==&lt;br /&gt;
&lt;br /&gt;
[[File:1l1.png|thumb|left|310px|Composition]]&lt;br /&gt;
[[File:2l1.png|thumb|left|310px|Projector]]&lt;br /&gt;
[[File:4l1.png|thumb|left|310px|Graphics]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The setup is able to track three objects, measure the distances between them, and trigger the interactions mentioned above, although the tracking is not stable. So, for the time being, two objects work best for the interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tests&#039;&#039;&#039;&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYTFpWRDVWTlhwUWc]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYT2gtU2RyQVdhN1k]&lt;br /&gt;
[https://drive.google.com/open?id=0BwFYVbmk3aHYdW9ZaDFScGl1X00]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Moxa0.png&amp;diff=85605</id>
		<title>File:Moxa0.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Moxa0.png&amp;diff=85605"/>
		<updated>2016-07-28T13:39:00Z</updated>

		<summary type="html">&lt;p&gt;Diyang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{BUW|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85604</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85604"/>
		<updated>2016-07-28T13:38:48Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting to the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right-hand frame of the video wall, so the screens will only react when you point your remote control towards the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, you activate the adjustment menu on the screen in the lower right corner, which is actually screen no. 1. You can use the arrow buttons in the menu to make specific adjustments, which will affect all the screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to change settings for individual screens, press the &amp;quot;select&amp;quot; button; each screen will then display some information, including its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, you can make changes individually.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software we will use only works on Windows. It has been uploaded to Google Drive. [https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program to configure the proper port for the screens, in order to get the other two programs working. After installation, you can access the COM port settings. &lt;br /&gt;
Usually, we only need to change the values under &amp;quot;Serial Parameters&amp;quot;, such as the &amp;quot;Baud Rate&amp;quot;.&lt;br /&gt;
[[File:moxa0.png|thumb|left|400px]][[File:moxa1.png|thumb|left|400px]][[File:moxa2.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
Here is a video setting tutorial: [https://www.youtube.com/watch?v=1gNnMDqkCm0]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85603</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85603"/>
		<updated>2016-07-28T13:36:58Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting to the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right-hand frame of the video wall, so the screens will only react when you point your remote control towards the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, you activate the adjustment menu on the screen in the lower right corner, which is actually screen no. 1. You can use the arrow buttons in the menu to make specific adjustments, which will affect all the screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to change settings for individual screens, press the &amp;quot;select&amp;quot; button; each screen will then display some information, including its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, you can make changes individually.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software we will use only works on Windows. It has been uploaded to Google Drive. [https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program to configure the proper port for the screens, in order to get the other two programs working. After installation, you can access the COM port settings. &lt;br /&gt;
Usually, we only need to change the values under &amp;quot;Serial Parameters&amp;quot;, such as the &amp;quot;Baud Rate&amp;quot;.&lt;br /&gt;
[[File:moxa1.png|thumb|left|400px]][[File:moxa2.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
Here is a video setting tutorial: [https://www.youtube.com/watch?v=1gNnMDqkCm0]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85602</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85602"/>
		<updated>2016-07-28T13:34:11Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting to the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right side of the video wall frame, so the screens react only when you point the remote control at the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, the adjustment menu appears on the screen in the lower right corner, which is screen no. 1. Use the arrow buttons to make specific adjustments in the menu; these affect all screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change settings on individual screens, press the &amp;quot;select&amp;quot; button; each screen then displays information that includes its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, individual screens can be adjusted separately.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software used here runs only on Windows. All programs are uploaded to Google Drive.[https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program inside the Drive folder is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program first: it configures the correct COM port for the screens, and the other two programs will not work without it. After installation, you can configure the COM port settings. Usually, only the values under &amp;quot;Serial Parameters&amp;quot;, such as the &amp;quot;Baud Rate&amp;quot;, need to be changed.&lt;br /&gt;
[[File:moxa1.png|thumb|left|400px]][[File:moxa2.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
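After installing the driver, you can quickly check that the virtual COM port created by the NPort mapping is actually visible to Windows. The following sketch uses pyserial&#039;s port enumeration; pyserial is an assumption here, not part of the EYEVIS software.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# List all COM ports so the NPort virtual port can be identified.&lt;br /&gt;
from serial.tools import list_ports&lt;br /&gt;
&lt;br /&gt;
for p in list_ports.comports():&lt;br /&gt;
    print(p.device, p.description)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;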
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85601</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85601"/>
		<updated>2016-07-28T13:33:41Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right side of the video wall frame, so the screens react only when you point the remote control at the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, the adjustment menu appears on the screen in the lower right corner, which is screen no. 1. Use the arrow buttons to make specific adjustments in the menu; these affect all screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change settings on individual screens, press the &amp;quot;select&amp;quot; button; each screen then displays information that includes its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, individual screens can be adjusted separately.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software used here runs only on Windows. All programs are uploaded to Google Drive.[https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program inside the Drive folder is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program first: it configures the correct COM port for the screens, and the other two programs will not work without it. After installation, you can configure the COM port settings. Usually, only the values under &amp;quot;Serial Parameters&amp;quot;, such as the baud rate, need to be changed.&lt;br /&gt;
[[File:moxa1.png|thumb|left|400px]][[File:moxa2.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
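Once the COM port settings match, talking to the wall is a plain request/response exchange over the serial line. The sketch below shows only the general pattern in Python with pyserial; the command bytes are a hypothetical placeholder, since the actual serial protocol of the EYE-LCD screens is documented in the manual linked above, not here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import serial&lt;br /&gt;
&lt;br /&gt;
# Hypothetical request; see the eyevis manual for real commands.&lt;br /&gt;
REQUEST = b&#039;STATUS&#039;&lt;br /&gt;
&lt;br /&gt;
with serial.Serial(&#039;COM1&#039;, 9600, timeout=1) as ser:&lt;br /&gt;
    ser.write(REQUEST)&lt;br /&gt;
    reply = ser.read(64)  # read up to 64 bytes of response&lt;br /&gt;
    print(reply)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;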
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85600</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85600"/>
		<updated>2016-07-28T13:33:24Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right side of the video wall frame, so the screens react only when you point the remote control at the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, the adjustment menu appears on the screen in the lower right corner, which is screen no. 1. Use the arrow buttons to make specific adjustments in the menu; these affect all screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change settings on individual screens, press the &amp;quot;select&amp;quot; button; each screen then displays information that includes its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, individual screens can be adjusted separately.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software used here runs only on Windows. All programs are uploaded to Google Drive.[https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program inside the Drive folder is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program first: it configures the correct COM port for the screens, and the other two programs will not work without it. After installation, you can configure the COM port settings. Usually, only the values under &amp;quot;Serial Parameters&amp;quot;, such as the baud rate, need to be changed.&lt;br /&gt;
[[File:moxa1.png|thumb|left|400px]][[File:moxa2.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85599</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85599"/>
		<updated>2016-07-28T13:32:48Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right side of the video wall frame, so the screens react only when you point the remote control at the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, the adjustment menu appears on the screen in the lower right corner, which is screen no. 1. Use the arrow buttons to make specific adjustments in the menu; these affect all screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change settings on individual screens, press the &amp;quot;select&amp;quot; button; each screen then displays information that includes its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, individual screens can be adjusted separately.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software used here runs only on Windows. All programs are uploaded to Google Drive.[https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program inside the Drive folder is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program first: it configures the correct COM port for the screens, and the other two programs will not work without it. After installation, you can configure the COM port settings. Usually, only the values under &amp;quot;Serial Parameters&amp;quot; need to be set.&lt;br /&gt;
[[File:moxa1.png|thumb|left|400px]][[File:moxa2.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85598</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85598"/>
		<updated>2016-07-28T13:28:51Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right side of the video wall frame, so the screens react only when you point the remote control at the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, the adjustment menu appears on the screen in the lower right corner, which is screen no. 1. Use the arrow buttons to make specific adjustments in the menu; these affect all screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change settings on individual screens, press the &amp;quot;select&amp;quot; button; each screen then displays information that includes its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, individual screens can be adjusted separately.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software used here runs only on Windows. All programs are uploaded to Google Drive.[https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program inside the Drive folder is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program first: it configures the correct COM port for the screens, and the other two programs will not work without it. After installation, you can configure the COM port settings.&lt;br /&gt;
[[File:moxa1.png|thumb|left|400px]][[File:moxa2.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85597</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85597"/>
		<updated>2016-07-28T13:27:25Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right side of the video wall frame, so the screens react only when you point the remote control at the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, the adjustment menu appears on the screen in the lower right corner, which is screen no. 1. Use the arrow buttons to make specific adjustments in the menu; these affect all screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change settings on individual screens, press the &amp;quot;select&amp;quot; button; each screen then displays information that includes its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, individual screens can be adjusted separately.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software used here runs only on Windows. All programs are uploaded to Google Drive.[https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program inside the Drive folder is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program first: it configures the correct COM port for the screens, and the other two programs will not work without it.&lt;br /&gt;
[[File:moxa1.png|thumb|left|400px]][[File:moxa2.png|thumb|left|400px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85596</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85596"/>
		<updated>2016-07-28T13:27:08Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right side of the video wall frame, so the screens react only when you point the remote control at the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, the adjustment menu appears on the screen in the lower right corner, which is screen no. 1. Use the arrow buttons to make specific adjustments in the menu; these affect all screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change settings on individual screens, press the &amp;quot;select&amp;quot; button; each screen then displays information that includes its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, individual screens can be adjusted separately.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software used here runs only on Windows. All programs are uploaded to Google Drive.[https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program inside the Drive folder is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program first: it configures the correct COM port for the screens, and the other two programs will not work without it.&lt;br /&gt;
[[File:moxa1.png|thumb|left|300px]][[File:moxa2.png|thumb|left|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Moxa2.png&amp;diff=85595</id>
		<title>File:Moxa2.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Moxa2.png&amp;diff=85595"/>
		<updated>2016-07-28T13:26:03Z</updated>

		<summary type="html">&lt;p&gt;Diyang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{BUW|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Moxa1.png&amp;diff=85594</id>
		<title>File:Moxa1.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Moxa1.png&amp;diff=85594"/>
		<updated>2016-07-28T13:25:43Z</updated>

		<summary type="html">&lt;p&gt;Diyang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status: ==&lt;br /&gt;
&lt;br /&gt;
== Licensing ==&lt;br /&gt;
{{BUW|c}}&lt;br /&gt;
== Source: ==&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85593</id>
		<title>GMU:Tutorials/Performance Platform/Videowall Calibration</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Tutorials/Performance_Platform/Videowall_Calibration&amp;diff=85593"/>
		<updated>2016-07-28T13:25:31Z</updated>

		<summary type="html">&lt;p&gt;Diyang: /* Part 2 With Software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The video wall consists of 4×4 screens, which are connected to each other via serial ports. All signals are processed by the signal box, which provides an HDMI port for connecting the computer. The signal from the computer is then distributed through the box to each screen.&lt;br /&gt;
The screens are connected in the order shown in the picture below.&lt;br /&gt;
&lt;br /&gt;
[[File:backofwall.png|thumb|left|Behind Video Wall|300px]]&lt;br /&gt;
[[File:screenbox.png|thumb|left|Signal Box|300px]]&lt;br /&gt;
[[File:grid.png|thumb|left|Numbers Of All Screens|300px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Useful links:&lt;br /&gt;
&lt;br /&gt;
More info about the Screens:&lt;br /&gt;
[http://www.eyevis.de/produkte/lcd-loesungen/videowand-lcd/eye-lcd-5500-xsn-ld-fx.html]&lt;br /&gt;
Manual:&lt;br /&gt;
[http://www.eyevis.de/files/um_eye-lcd-5500-xsn-ld-fx_en_v1.0_e03_1.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 1 With Remote Control ==&lt;br /&gt;
&lt;br /&gt;
The infrared receiver is attached to the right side of the video wall frame, so the screens react only when you point the remote control at the infrared receiver.&lt;br /&gt;
&lt;br /&gt;
[[File:remotecontrolll.png|thumb|left|Remote Control|300px]]&lt;br /&gt;
&lt;br /&gt;
[[File:receive.png|thumb|left|Infrared Receiver|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
When you press the &amp;quot;menu&amp;quot; button on the remote control, the adjustment menu appears on the screen in the lower right corner, which is screen no. 1. Use the arrow buttons to make specific adjustments in the menu; these affect all screens. Detailed instructions can be found in the manual PDF, pages 41 to 51.&lt;br /&gt;
&lt;br /&gt;
[[File:picture.png|thumb|left|Picture|300px]]&lt;br /&gt;
[[File:display.png|thumb|left|Display|300px]]&lt;br /&gt;
[[File:ssetup.png|thumb|left|Setup|300px]]&lt;br /&gt;
[[File:input.png|thumb|left|Input|300px]]&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change settings on individual screens, press the &amp;quot;select&amp;quot; button; each screen then displays information that includes its own number. You can then choose the screen you want to adjust by pressing the corresponding number on the remote control. In this way, individual screens can be adjusted separately.&lt;br /&gt;
&lt;br /&gt;
[[File:videowalll.png|thumb|left|Select|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Part 2 With Software  ==&lt;br /&gt;
&lt;br /&gt;
The software used here runs only on Windows. All programs are uploaded to Google Drive.[https://drive.google.com/folderview?id=0BwFYVbmk3aHYUkUzS2cyWndQSlk&amp;amp;usp=sharing]&lt;br /&gt;
&lt;br /&gt;
The path to each program inside the Drive folder is listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. NPort Windows Driver Manager&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\Moxa\Downloaded\drvmgr_setup_Ver1.18_Build_15022515_whql.exe&lt;br /&gt;
&lt;br /&gt;
You have to install this program first: it configures the correct COM port for the screens, and the other two programs will not work without it.&lt;br /&gt;
[[File:moxa1.png]][[File:moxa2.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. eye Device Setup&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\eyeDeviceSetup\Software\eyeDeviceSetup.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. EYE-LCD-CAS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Google Drive\EYEVIS-Color-Calibration\EYE-LCD-CAS\Software\eyeLCD-CAS.exe&lt;br /&gt;
&amp;lt;br style=&amp;quot;clear:both&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1-user.png|thumb|User|left|300px]]&lt;br /&gt;
[[File:2-setup.png|thumb|Setup|left|300px]]&lt;br /&gt;
[[File:3-service.png|thumb|Service|left|300px]]&lt;br /&gt;
[[File:4-status.png|thumb|Status|left|300px]]&lt;br /&gt;
[[File:5-gamma.png|thumb|Gamma|left|300px]]&lt;br /&gt;
[[File:6-preference.png|thumb|Preference|left|300px]]&lt;br /&gt;
[[File:7-reports.png|thumb|Reports|left|300px]]&lt;/div&gt;</summary>
		<author><name>Diyang</name></author>
	</entry>
</feed>