<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Flux999</id>
	<title>Medien Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Flux999"/>
	<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/Special:Contributions/Flux999"/>
	<updated>2026-05-13T20:46:20Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.6</generator>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Selena_Deger&amp;diff=135599</id>
		<title>Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Selena_Deger&amp;diff=135599"/>
		<updated>2023-04-04T19:29:48Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==[[Encapsulated//Encrypted]]==&lt;br /&gt;
SoSe22 - Observe Experiment Connect&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;an interpretation of the digital privacy concept through auto-biographical photography practices&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade1.png]]&lt;br /&gt;
&lt;br /&gt;
https://flux999.github.io/encapsulated/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==InterFace: how you see me==&lt;br /&gt;
&lt;br /&gt;
WiSe23 - Designing Utopias&lt;br /&gt;
&lt;br /&gt;
[[GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Selena_Deger&amp;diff=135598</id>
		<title>Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Selena_Deger&amp;diff=135598"/>
		<updated>2023-04-04T19:28:42Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==[[Encapsulated//Encrypted]]==&lt;br /&gt;
SoSe22 - Observe Experiment Connect&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;an interpretation of the digital privacy concept through auto-biographical photography practices&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade1.png]]&lt;br /&gt;
&lt;br /&gt;
https://flux999.github.io/encapsulated/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==InterFace: how you see me==&lt;br /&gt;
&lt;br /&gt;
WiSe23 - Designing Utopias&lt;br /&gt;
&lt;br /&gt;
[[GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Encapsulated//Encrypted&amp;diff=135597</id>
		<title>Encapsulated//Encrypted</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Encapsulated//Encrypted&amp;diff=135597"/>
		<updated>2023-04-04T19:21:46Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Created page with &amp;quot;==&amp;#039;&amp;#039;&amp;#039;Encapsulated//Encrypted&amp;#039;&amp;#039;&amp;#039;== &amp;#039;&amp;#039;an interpretation of the digital privacy concept through auto-biographical photography practices&amp;#039;&amp;#039;  File:selenade1.png  https://flux999...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;Encapsulated//Encrypted&#039;&#039;&#039;==&lt;br /&gt;
&#039;&#039;an interpretation of the digital privacy concept through auto-biographical photography practices&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade1.png]]&lt;br /&gt;
&lt;br /&gt;
https://flux999.github.io/encapsulated/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;abstract&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Photography is a relatively new medium of visual expression that also has many contemporary interpretations regarding archiving or even constructing a new identity for the self. Auto-biographical photos in particular have become a part of people’s identities with newly emerging online platforms and the continuously changing world views of younger people. The purpose of photography has moved from being only memoirs that recall the past to documenting daily life. Unlike in the past, there is no need for a special occasion to remember; every day is to be recorded and sometimes shared with others. These archives of insignificant moments mirror people’s lives and enable them to curate and communicate their (digital) identities. With this paradigm shift in the usage of the medium, photography has been democratized even more and has started to have a significant effect on social interactions, but this also came with many concerns about privacy in online environments. Digital privacy is a very contemporary subject in everyone’s lives, as we are going through a highly digitalized era where most experiences are moving online and even social interactions are becoming more and more framed on screens.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;about the project&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
The project&#039;s focal point, &amp;quot;Encapsulated//Encrypted&amp;quot;, utilizes my photo archive, which includes the photos I took and the photos of me taken by others. The first part, &amp;quot;Encapsulated&amp;quot;, refers to the archive where I document my private life from many aspects: the place I have been living, the people I know, the places I have been, my face, works, cat… These photos encapsulate the essential moments of my life, and most of them I don&#039;t share online. However, I was curious about how my decision process unfolds when I share a photo online with people I barely know, while acknowledging that the photo will probably stay on the internet forever.&lt;br /&gt;
&lt;br /&gt;
At this point, the second part of the project, &amp;quot;Encrypted&amp;quot;, comes in. Even though I have many concerns about the privacy of my life and am skeptical about sharing photos online, I gladly exist in online communities. And usually, I am more disturbed when passively consuming content than when creating or sharing my own. The dilemma is that I don&#039;t want to share my life online, but I want to share the photos I take, which are basically depictions of my life and contain a lot of information about my identity and surroundings. Therefore, to share the photos I took, I have to encrypt them so that they are based on the authentic images but are no longer the photos in my gallery.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;process&#039;&#039;&#039;==&lt;br /&gt;
&#039;&#039;&#039;Machine Learning Model for Public/Private Assessment&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;in collaboration with Tarkan Uskudar, Data Scientist&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To decide if a photo is available for sharing online or entirely private, I used a deep learning model based on my past decisions on sharing a photo. Two classes were fed to the algorithm to find the patterns: photos that I have already shared online (public) and another selection of photos from my archive rated explicitly as private.&lt;br /&gt;
&lt;br /&gt;
The TensorFlow library is used to implement a convolutional neural network architecture. The model involves three steps. During preprocessing, the images are resized to 540x540px for computational purposes. After resizing, data augmentation was used to create different representations of the images; this improves the generalisability of the model by creating flipped, zoomed, and rescaled versions of the photos. The second step was to train the model: convolutional layers with max pooling were used, the idea being to convey the meaning of the image through the layers and get information back from the model.&lt;br /&gt;
&lt;br /&gt;
After the training, the results are evaluated, and minor adjustments are made to reach the final accuracy of 89%. Finally, by feeding in a batch of 45 photos taken between September 2021 and September 2022, the privacy values are exported to be used for image processing in later steps.&lt;br /&gt;
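&lt;br /&gt;
&#039;&#039;a minimal sketch of such a two-class training pipeline in TensorFlow/Keras; the folder layout, augmentation parameters, and layer sizes here are illustrative assumptions, not the exact configuration used;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# sketch of the public/private classifier described above; paths,&lt;br /&gt;
# augmentation parameters, and layer sizes are illustrative assumptions&lt;br /&gt;
import tensorflow as tf&lt;br /&gt;
&lt;br /&gt;
# two folders of images, e.g. archive/public and archive/private (assumed layout)&lt;br /&gt;
train_ds = tf.keras.utils.image_dataset_from_directory(&lt;br /&gt;
    &#039;archive&#039;, image_size=(540, 540), batch_size=16)&lt;br /&gt;
&lt;br /&gt;
model = tf.keras.Sequential([&lt;br /&gt;
    tf.keras.layers.Rescaling(1.0 / 255),      # rescaled versions&lt;br /&gt;
    tf.keras.layers.RandomFlip(&#039;horizontal&#039;),  # flipped versions&lt;br /&gt;
    tf.keras.layers.RandomZoom(0.2),           # zoomed versions&lt;br /&gt;
    tf.keras.layers.Conv2D(16, 3, activation=&#039;relu&#039;),&lt;br /&gt;
    tf.keras.layers.MaxPooling2D(),&lt;br /&gt;
    tf.keras.layers.Conv2D(32, 3, activation=&#039;relu&#039;),&lt;br /&gt;
    tf.keras.layers.MaxPooling2D(),&lt;br /&gt;
    tf.keras.layers.Flatten(),&lt;br /&gt;
    tf.keras.layers.Dense(64, activation=&#039;relu&#039;),&lt;br /&gt;
    tf.keras.layers.Dense(1, activation=&#039;sigmoid&#039;),  # public vs. private&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
model.compile(optimizer=&#039;adam&#039;, loss=&#039;binary_crossentropy&#039;, metrics=[&#039;accuracy&#039;])&lt;br /&gt;
model.fit(train_ds, epochs=10)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;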
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pixel Sorting Algorithm for Image Processing&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After trying many methods to alter the photos, such as style transfer or mapping them onto different concepts like Lissajous figures and randomized geometric shapes, it was more appealing and fruitful to process them via the colors they already carry.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;trials with different algorithms;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:selenade3.png&lt;br /&gt;
File:selenade4.png&lt;br /&gt;
File:selenade5.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A bubble sorting algorithm and p5.js library features are used to change the images, with the aim of showing not the actual image but its &amp;quot;encrypted&amp;quot; version. First, the image&#039;s pixel values are retrieved and, with the bubble sorting, pixels are sorted according to their hue values and relocated until they neighbor pixels of similar hue. Additionally, a random color function is applied to the photos so that each run of the program gives slightly different results. The final look of the images shows flowing pixel paths and different color gradings.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pixel sorting initial experiments;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:selenade6.jpg &lt;br /&gt;
File:selenade7.jpg&lt;br /&gt;
File:selenade8.png&lt;br /&gt;
File:selenade9.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;code; &#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source style=&amp;quot;border:none; height:650px; overflow:scroll;&amp;quot; lang=&amp;quot;javascript&amp;quot;&amp;gt;&lt;br /&gt;
let img;&lt;br /&gt;
let imgCornerX;&lt;br /&gt;
let imgCornerY;&lt;br /&gt;
let counter = 0;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function preload() {&lt;br /&gt;
  img = loadImage(&amp;quot;cover/cover11.jpeg&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function setup() {&lt;br /&gt;
  img.resize(600, 0);&lt;br /&gt;
  createCanvas(windowWidth, windowHeight);&lt;br /&gt;
  pixelDensity(1);&lt;br /&gt;
  &lt;br /&gt;
  imgCornerX = (windowWidth - img.width) / 2;&lt;br /&gt;
  imgCornerY = (windowHeight - img.height) / 2;&lt;br /&gt;
  &lt;br /&gt;
  frameRate(10);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function draw() {&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  img.loadPixels();&lt;br /&gt;
  &lt;br /&gt;
  for (let i = 0; i &amp;lt; img.width - 1; i++) {&lt;br /&gt;
    for (let j = 0; j &amp;lt; img.height - 1; j++) {&lt;br /&gt;
&lt;br /&gt;
       if (random(2)&amp;lt;1) {&lt;br /&gt;
          vmovePixels(i, j);&lt;br /&gt;
       } else {&lt;br /&gt;
           hmovePixels(i, j);&lt;br /&gt;
         }&lt;br /&gt;
        &lt;br /&gt;
      counter++; // count pixel visits; the random color noise below stops after enough passes&lt;br /&gt;
        &lt;br /&gt;
      if (counter &amp;lt; img.width * img.height * 50) {&lt;br /&gt;
        randomColor(floor(random(img.width)), floor(random(img.height)));&lt;br /&gt;
      }&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  img.updatePixels();&lt;br /&gt;
&lt;br /&gt;
  background(0);&lt;br /&gt;
  &lt;br /&gt;
  image(img, imgCornerX, imgCornerY);&lt;br /&gt;
  //filter(INVERT);&lt;br /&gt;
  &lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// assign random colors to random pixels on the canvas&lt;br /&gt;
function randomColor(i, j, density = &amp;quot;low&amp;quot;) {&lt;br /&gt;
  let randomValue; // random draw compared against the fixed 0.05 threshold below&lt;br /&gt;
  &lt;br /&gt;
  if (density == &amp;quot;high&amp;quot;) {&lt;br /&gt;
    randomValue = random(50);&lt;br /&gt;
  } else {&lt;br /&gt;
    randomValue = random(100);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  if(randomValue&amp;lt;0.05) {&lt;br /&gt;
    const index5 = 4 * (j * img.width +i);&lt;br /&gt;
    img.pixels[index5]= random(200);&lt;br /&gt;
    img.pixels[index5+1]= random(200);&lt;br /&gt;
    img.pixels[index5+2]= random(200);      &lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function hmovePixels(x, y) { &lt;br /&gt;
  let neighbour;&lt;br /&gt;
  let pix;&lt;br /&gt;
  &lt;br /&gt;
  //locating the pixel values in the array of values&lt;br /&gt;
  const index = 4 * (y * img.width + x); &lt;br /&gt;
  const index3 = 4 * (y * img.width + (x+1)); &lt;br /&gt;
  neighbour = [img.pixels[index3], img.pixels[index3+1], img.pixels[index3+2]];&lt;br /&gt;
  pix = [img.pixels[index], img.pixels[index+1], img.pixels[index+2]];&lt;br /&gt;
   &lt;br /&gt;
  //bubble sort algorithm to first check the values of the pixels inside if condition,&lt;br /&gt;
  if (hue(pix) &amp;lt; hue(neighbour)) {&lt;br /&gt;
     let temp = [];&lt;br /&gt;
     &lt;br /&gt;
     temp[0]= img.pixels[index]; //then displacement by putting the first index to the temporary position in a new array and,&lt;br /&gt;
     img.pixels[index] = img.pixels[index3]; //exchanging the locations according to the if condition.&lt;br /&gt;
     img.pixels[index3] = temp[0]; &lt;br /&gt;
     &lt;br /&gt;
     //repeating it for all r,g,b values of the pixels.&lt;br /&gt;
     temp[1]= img.pixels[index+1];&lt;br /&gt;
     img.pixels[index+1] = img.pixels[index3+1];&lt;br /&gt;
     img.pixels[index3+1] = temp[1];&lt;br /&gt;
     &lt;br /&gt;
     temp[2]= img.pixels[index+2];&lt;br /&gt;
     img.pixels[index+2] = img.pixels[index3+2];&lt;br /&gt;
     img.pixels[index3+2] = temp[2];&lt;br /&gt;
     &lt;br /&gt;
    &lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function vmovePixels(x, y) { &lt;br /&gt;
  let neighbour;&lt;br /&gt;
  let pix;&lt;br /&gt;
  &lt;br /&gt;
  const index = 4 * (y * img.width + x); &lt;br /&gt;
  const index3 = 4 * ((y+1) * img.width + x); &lt;br /&gt;
  neighbour = [img.pixels[index3], img.pixels[index3+1], img.pixels[index3+2]];&lt;br /&gt;
  pix = [img.pixels[index], img.pixels[index+1], img.pixels[index+2]];&lt;br /&gt;
   &lt;br /&gt;
  let a = random([hue, brightness]); //vertical move method works with either hue or brightness.&lt;br /&gt;
  if (a(pix) &amp;lt; a(neighbour)) {&lt;br /&gt;
     let temp = [];&lt;br /&gt;
     &lt;br /&gt;
     temp[0]= img.pixels[index3];&lt;br /&gt;
     img.pixels[index3] = img.pixels[index];&lt;br /&gt;
     img.pixels[index] = temp[0];&lt;br /&gt;
     &lt;br /&gt;
     temp[1]= img.pixels[index+1];&lt;br /&gt;
     img.pixels[index+1] = img.pixels[index3+1];&lt;br /&gt;
     img.pixels[index3+1] = temp[1];&lt;br /&gt;
     &lt;br /&gt;
     temp[2]= img.pixels[index+2];&lt;br /&gt;
     img.pixels[index+2] = img.pixels[index3+2];&lt;br /&gt;
     img.pixels[index3+2] = temp[2];&lt;br /&gt;
     &lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Exhibition of the Final Images&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The final visuals generated by the algorithm are put into a website format to exhibit them. The first content the user sees when the page loads is an on-site processed image that works as a cover and changes when the page is refreshed. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt; &lt;br /&gt;
File:selenade2.png&lt;br /&gt;
File:selenade13.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A grid format is chosen for showing the collection; the reasons behind that are its resemblance to contemporary social media formatting of visual media and that it avoids focusing on only one image in favor of seeing all of them as a whole. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt; &lt;br /&gt;
File:selenade10.png  &lt;br /&gt;
File:selenade11.png&lt;br /&gt;
File:selenade12.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since on-site calculation of the image requires a lot of computational power and decreases the responsiveness of the website, for exhibition purposes a lite version with a static video is placed in place of the cover image, showing the process from the end to the beginning.&lt;br /&gt;
&lt;br /&gt;
lite version;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade15.mp4]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
on-site computed version;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade16.mp4]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;references&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;image processing inspirations&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Coding Train - Pixel Arrays (https://www.youtube.com/watch?v=nMUMZ5YRxHI)&lt;br /&gt;
&lt;br /&gt;
- Sortraits (https://wtracy.gitlab.io/sortraits/)&lt;br /&gt;
&lt;br /&gt;
- Pixel Sorting Article in satyarth.me (http://satyarth.me/articles/pixel-sorting/)&lt;br /&gt;
&lt;br /&gt;
- Kim Assendorf - Pixel Sorting with Threshold Control (https://github.com/kimasendorf/ASDFPixelSort)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;literature&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Barthes, R. (1981). Camera lucida: Reflections on photography (R. Howard, Trans.). New York: Hill and Wang.&lt;br /&gt;
&lt;br /&gt;
Lehtonen, T.-K., Koskinen, I., &amp;amp; Kurvinen, E. (n.d.). Mobile digital pictures: The future of the postcard. Retrieved September 28, 2022, from http://www2.uiah.fi/~ikoskine/recentpapers/mobile_multimedia/Mobiles_Vienna.pdf&lt;br /&gt;
&lt;br /&gt;
Lury, C. (2007). Prosthetic culture: Photography, memory and identity. Routledge.&lt;br /&gt;
&lt;br /&gt;
van Dijck, J. (2008). Digital photography: Communication, identity, memory. Visual Communication, 7(1), 57–76. https://doi.org/10.1177/1470357207084865&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Selena_Deger&amp;diff=135596</id>
		<title>Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Selena_Deger&amp;diff=135596"/>
		<updated>2023-04-04T19:20:58Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==[[Encapsulated//Encrypted]]==&lt;br /&gt;
&#039;&#039;SoSe22 - Observe Experiment Connect&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;an interpretation of the digital privacy concept through auto-biographical photography practices&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade1.png]]&lt;br /&gt;
&lt;br /&gt;
https://flux999.github.io/encapsulated/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[InterFace: how you see me]]==&lt;br /&gt;
&#039;&#039;WiSe23 - Designing Utopias&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;Encapsulated//Encrypted&#039;&#039;&#039;==&lt;br /&gt;
&#039;&#039;an interpretation of the digital privacy concept through auto-biographical photography practices&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade1.png]]&lt;br /&gt;
&lt;br /&gt;
https://flux999.github.io/encapsulated/&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;abstract&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Photography is a relatively new medium of visual expression that also has many contemporary interpretations regarding archiving or even constructing a new identity for the self. Auto-biographical photos in particular have become a part of people’s identities with newly emerging online platforms and the continuously changing world views of younger people. The purpose of photography has moved from being only memoirs that recall the past to documenting daily life. Unlike in the past, there is no need for a special occasion to remember; every day is to be recorded and sometimes shared with others. These archives of insignificant moments mirror people’s lives and enable them to curate and communicate their (digital) identities. With this paradigm shift in the usage of the medium, photography has been democratized even more and has started to have a significant effect on social interactions, but this also came with many concerns about privacy in online environments. Digital privacy is a very contemporary subject in everyone’s lives, as we are going through a highly digitalized era where most experiences are moving online and even social interactions are becoming more and more framed on screens.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;about the project&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
The project&#039;s focal point, &amp;quot;Encapsulated//Encrypted&amp;quot;, utilizes my photo archive, which includes the photos I took and the photos of me taken by others. The first part, &amp;quot;Encapsulated&amp;quot;, refers to the archive where I document my private life from many aspects: the place I have been living, the people I know, the places I have been, my face, works, cat… These photos encapsulate the essential moments of my life, and most of them I don&#039;t share online. However, I was curious about how my decision process unfolds when I share a photo online with people I barely know, while acknowledging that the photo will probably stay on the internet forever.&lt;br /&gt;
&lt;br /&gt;
At this point, the second part of the project, &amp;quot;Encrypted&amp;quot;, comes in. Even though I have many concerns about the privacy of my life and am skeptical about sharing photos online, I gladly exist in online communities. And usually, I am more disturbed when passively consuming content than when creating or sharing my own. The dilemma is that I don&#039;t want to share my life online, but I want to share the photos I take, which are basically depictions of my life and contain a lot of information about my identity and surroundings. Therefore, to share the photos I took, I have to encrypt them so that they are based on the authentic images but are no longer the photos in my gallery.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;process&#039;&#039;&#039;==&lt;br /&gt;
&#039;&#039;&#039;Machine Learning Model for Public/Private Assessment&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;in collaboration with Tarkan Uskudar, Data Scientist&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To decide if a photo is available for sharing online or entirely private, I used a deep learning model based on my past decisions on sharing a photo. Two classes were fed to the algorithm to find the patterns: photos that I have already shared online (public) and another selection of photos from my archive rated explicitly as private.&lt;br /&gt;
&lt;br /&gt;
The TensorFlow library is used to implement a convolutional neural network architecture. The model involves three steps. During preprocessing, the images are resized to 540x540px for computational purposes. After resizing, data augmentation was used to create different representations of the images; this improves the generalisability of the model by creating flipped, zoomed, and rescaled versions of the photos. The second step was to train the model: convolutional layers with max pooling were used, the idea being to convey the meaning of the image through the layers and get information back from the model.&lt;br /&gt;
&lt;br /&gt;
After the training, the results are evaluated, and minor adjustments are made to reach the final accuracy of 89%. Finally, by feeding in a batch of 45 photos taken between September 2021 and September 2022, the privacy values are exported to be used for image processing in later steps.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pixel Sorting Algorithm for Image Processing&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After trying many methods to alter the photos, such as style transfer or mapping them onto different concepts like Lissajous figures and randomized geometric shapes, it was more appealing and fruitful to process them via the colors they already carry.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;trials with different algorithms;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:selenade3.png&lt;br /&gt;
File:selenade4.png&lt;br /&gt;
File:selenade5.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A bubble sorting algorithm and p5.js library features are used to change the images, with the aim of showing not the actual image but its &amp;quot;encrypted&amp;quot; version. First, the image&#039;s pixel values are retrieved and, with the bubble sorting, pixels are sorted according to their hue values and relocated until they neighbor pixels of similar hue. Additionally, a random color function is applied to the photos so that each run of the program gives slightly different results. The final look of the images shows flowing pixel paths and different color gradings.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pixel sorting initial experiments;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:selenade6.jpg &lt;br /&gt;
File:selenade7.jpg&lt;br /&gt;
File:selenade8.png&lt;br /&gt;
File:selenade9.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;code; &#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source style=&amp;quot;border:none; height:650px; overflow:scroll;&amp;quot; lang=&amp;quot;javascript&amp;quot;&amp;gt;&lt;br /&gt;
let img;&lt;br /&gt;
let imgCornerX;&lt;br /&gt;
let imgCornerY;&lt;br /&gt;
let counter = 0;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function preload() {&lt;br /&gt;
  img = loadImage(&amp;quot;cover/cover11.jpeg&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function setup() {&lt;br /&gt;
  img.resize(600, 0);&lt;br /&gt;
  createCanvas(windowWidth, windowHeight);&lt;br /&gt;
  pixelDensity(1);&lt;br /&gt;
  &lt;br /&gt;
  imgCornerX = (windowWidth - img.width) / 2;&lt;br /&gt;
  imgCornerY = (windowHeight - img.height) / 2;&lt;br /&gt;
  &lt;br /&gt;
  frameRate(10);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function draw() {&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  img.loadPixels();&lt;br /&gt;
  &lt;br /&gt;
  for (let i = 0; i &amp;lt; img.width - 1; i++) {&lt;br /&gt;
    for (let j = 0; j &amp;lt; img.height - 1; j++) {&lt;br /&gt;
&lt;br /&gt;
       if (random(2)&amp;lt;1) {&lt;br /&gt;
          vmovePixels(i, j);&lt;br /&gt;
       } else {&lt;br /&gt;
           hmovePixels(i, j);&lt;br /&gt;
         }&lt;br /&gt;
        &lt;br /&gt;
      counter++; // count pixel visits; the random color noise below stops after enough passes&lt;br /&gt;
        &lt;br /&gt;
      if (counter &amp;lt; img.width * img.height * 50) {&lt;br /&gt;
        randomColor(floor(random(img.width)), floor(random(img.height)));&lt;br /&gt;
      }&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  img.updatePixels();&lt;br /&gt;
&lt;br /&gt;
  background(0);&lt;br /&gt;
  &lt;br /&gt;
  image(img, imgCornerX, imgCornerY);&lt;br /&gt;
  //filter(INVERT);&lt;br /&gt;
  &lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// assign random colors to random pixels on the canvas&lt;br /&gt;
function randomColor(i, j, density = &amp;quot;low&amp;quot;) {&lt;br /&gt;
  let randomValue; // random draw compared against the fixed 0.05 threshold below&lt;br /&gt;
  &lt;br /&gt;
  if (density == &amp;quot;high&amp;quot;) {&lt;br /&gt;
    randomValue = random(50);&lt;br /&gt;
  } else {&lt;br /&gt;
    randomValue = random(100);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  if(randomValue&amp;lt;0.05) {&lt;br /&gt;
    const index5 = 4 * (j * img.width +i);&lt;br /&gt;
    img.pixels[index5]= random(200);&lt;br /&gt;
    img.pixels[index5+1]= random(200);&lt;br /&gt;
    img.pixels[index5+2]= random(200);      &lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function hmovePixels(x, y) { &lt;br /&gt;
  let neighbour;&lt;br /&gt;
  let pix;&lt;br /&gt;
  &lt;br /&gt;
  //locating the pixel values in the array of values&lt;br /&gt;
  const index = 4 * (y * img.width + x); &lt;br /&gt;
  const index3 = 4 * (y * img.width + (x+1)); &lt;br /&gt;
  neighbour = [img.pixels[index3], img.pixels[index3+1], img.pixels[index3+2]];&lt;br /&gt;
  pix = [img.pixels[index], img.pixels[index+1], img.pixels[index+2]];&lt;br /&gt;
   &lt;br /&gt;
  //bubble sort algorithm to first check the values of the pixels inside if condition,&lt;br /&gt;
  if (hue(pix) &amp;lt; hue(neighbour)) {&lt;br /&gt;
     let temp = [];&lt;br /&gt;
     &lt;br /&gt;
     temp[0]= img.pixels[index]; //then displacement by putting the first index to the temporary position in a new array and,&lt;br /&gt;
     img.pixels[index] = img.pixels[index3]; //exchanging the locations according to the if condition.&lt;br /&gt;
     img.pixels[index3] = temp[0]; &lt;br /&gt;
     &lt;br /&gt;
     //repeating it for all r,g,b values of the pixels.&lt;br /&gt;
     temp[1]= img.pixels[index+1];&lt;br /&gt;
     img.pixels[index+1] = img.pixels[index3+1];&lt;br /&gt;
     img.pixels[index3+1] = temp[1];&lt;br /&gt;
     &lt;br /&gt;
     temp[2]= img.pixels[index+2];&lt;br /&gt;
     img.pixels[index+2] = img.pixels[index3+2];&lt;br /&gt;
     img.pixels[index3+2] = temp[2];&lt;br /&gt;
     &lt;br /&gt;
    &lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function vmovePixels(x, y) { &lt;br /&gt;
  let neighbour;&lt;br /&gt;
  let pix;&lt;br /&gt;
  &lt;br /&gt;
  const index = 4 * (y * img.width + x); &lt;br /&gt;
  const index3 = 4 * ((y+1) * img.width + x); &lt;br /&gt;
  neighbour = [img.pixels[index3], img.pixels[index3+1], img.pixels[index3+2]];&lt;br /&gt;
  pix = [img.pixels[index], img.pixels[index+1], img.pixels[index+2]];&lt;br /&gt;
   &lt;br /&gt;
  let a = random([hue, brightness]); //vertical move method works with either hue or brightness.&lt;br /&gt;
  if (a(pix) &amp;lt; a(neighbour)) {&lt;br /&gt;
     let temp = [];&lt;br /&gt;
     &lt;br /&gt;
     temp[0]= img.pixels[index3];&lt;br /&gt;
     img.pixels[index3] = img.pixels[index];&lt;br /&gt;
     img.pixels[index] = temp[0];&lt;br /&gt;
     &lt;br /&gt;
     temp[1]= img.pixels[index+1];&lt;br /&gt;
     img.pixels[index+1] = img.pixels[index3+1];&lt;br /&gt;
     img.pixels[index3+1] = temp[1];&lt;br /&gt;
     &lt;br /&gt;
     temp[2]= img.pixels[index+2];&lt;br /&gt;
     img.pixels[index+2] = img.pixels[index3+2];&lt;br /&gt;
     img.pixels[index3+2] = temp[2];&lt;br /&gt;
     &lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Exhibition of the Final Images&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The final visuals generated by the algorithm are put into a website format to exhibit them. The first content the user sees when the page loads is an on-site processed image that works as a cover and changes when the page is refreshed. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt; &lt;br /&gt;
File:selenade2.png&lt;br /&gt;
File:selenade13.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A grid format is chosen for showing the collection; the reasons behind that are its resemblance to contemporary social media formatting of visual media and that it avoids focusing on only one image in favor of seeing all of them as a whole. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt; &lt;br /&gt;
File:selenade10.png  &lt;br /&gt;
File:selenade11.png&lt;br /&gt;
File:selenade12.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since on-site calculation of the image requires a lot of computational power and decreases the responsiveness of the website, for exhibition purposes a lite version with a static video is placed in place of the cover image, showing the process from the end to the beginning.&lt;br /&gt;
&lt;br /&gt;
lite version;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade15.mp4]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
on-site computed version;&lt;br /&gt;
&lt;br /&gt;
[[File:selenade16.mp4]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;references&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;image processing inspirations&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Coding Train - Pixel Arrays (https://www.youtube.com/watch?v=nMUMZ5YRxHI)&lt;br /&gt;
&lt;br /&gt;
- Sortraits (https://wtracy.gitlab.io/sortraits/)&lt;br /&gt;
&lt;br /&gt;
- Pixel Sorting Article in satyarth.me (http://satyarth.me/articles/pixel-sorting/)&lt;br /&gt;
&lt;br /&gt;
- Kim Assendorf - Pixel Sorting with Threshold Control (https://github.com/kimasendorf/ASDFPixelSort)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;literature&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Barthes, R. (1981). Camera lucida: Reflections on photography (R. Howard, Trans.). New York: Hill and Wang.&lt;br /&gt;
&lt;br /&gt;
Lehtonen, T.-K., Koskinen, I., &amp;amp; Kurvinen, E. (n.d.). Mobile digital pictures: The future of the postcard. Retrieved September 28, 2022, from http://www2.uiah.fi/~ikoskine/recentpapers/mobile_multimedia/Mobiles_Vienna.pdf&lt;br /&gt;
&lt;br /&gt;
Lury, C. (2007). Prosthetic culture: Photography, memory and identity. Routledge.&lt;br /&gt;
&lt;br /&gt;
van Dijck, J. (2008). Digital photography: Communication, identity, memory. Visual Communication, 7(1), 57–76. https://doi.org/10.1177/1470357207084865&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=135415</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=135415"/>
		<updated>2023-03-29T21:52:44Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer side, it is translated into a set of colors to be seen by the viewer, who is in turn triggered by colors that have a relatively universal meaning. It is designed to be used in a public space, but rather than being the center of attention, it aims to exist in the periphery while still stimulating the people around.&lt;br /&gt;
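&lt;br /&gt;
&#039;&#039;a sketch of this translation step: each emotion label coming from the detector is paired with a set of colors; the labels follow DeepFace&#039;s emotion categories, but the color values here are placeholder assumptions, not the palette actually used;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# illustrative emotion-to-color mapping; labels follow DeepFace&#039;s&lt;br /&gt;
# emotion categories, color values are placeholder choices&lt;br /&gt;
EMOTION_COLORS = {&lt;br /&gt;
    &#039;happy&#039;:    [(255, 200, 0), (255, 120, 0)],    # warm yellows/oranges&lt;br /&gt;
    &#039;sad&#039;:      [(40, 60, 160), (90, 110, 200)],   # cool blues&lt;br /&gt;
    &#039;angry&#039;:    [(200, 30, 30), (120, 0, 0)],      # reds&lt;br /&gt;
    &#039;fear&#039;:     [(80, 0, 120), (40, 40, 60)],      # dark purples&lt;br /&gt;
    &#039;surprise&#039;: [(0, 200, 200), (255, 255, 255)],  # bright cyans&lt;br /&gt;
    &#039;disgust&#039;:  [(90, 130, 40), (60, 90, 30)],     # greens&lt;br /&gt;
    &#039;neutral&#039;:  [(128, 128, 128), (200, 200, 200)] # greys&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
def colors_for(emotion):&lt;br /&gt;
    # fall back to neutral when the detector returns an unknown label&lt;br /&gt;
    return EMOTION_COLORS.get(emotion, EMOTION_COLORS[&#039;neutral&#039;])&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;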
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans resulted in many features for creating greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that gives us the ability to build and sustain these social structures is our ability to empathize with the people around us. However, in modern societies, we are growing more and more apart from each other, lost in the rush of our modern-day problems. Our perception of social interactions gets trapped in a closer circle even though we encounter many different faces in a single day of our lives. As those faces become blurry to us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to their body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons for the mouth and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, it is a naturally evolved survival mechanism to avoid an unwanted situation with the help of others around. According to the findings of Adams et al. (2006), two studies suggest that there is accuracy in detecting movements in angry and fearful faces, either moving towards or away from the observer. They found that observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. However, the results were not the same for fear faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing behavior. Therefore, translating the emotions of one party to another has an essential role in sharing “data” collected from the outer world sensed by human body receptors. Besides the expressions of emotion being a means of non-verbal communication, unlike gestures, which can change from culture to culture, they are also relatively universal. According to Ekman (1970), basic emotions have a pancultural nature in that they are identified and also expressed in similar ways in different cultures with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. However, it is distinct from vocal communication in that it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Rather, the expression moves to the other party to be evaluated and has its effect on them, and that is where the reflection forms. One person feels the emotion, but the other one sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of these individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See the detailed development process below&#039;&#039; &lt;br /&gt;
==[[Hardware and Software Systems Processes]]==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interaction==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Video Walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As InterFace is an object to be used in public space, the evaluation of its interactivity was performed with a video walk. The walk starts at the university campus, follows the most crowded streets to Theaterplatz, and then goes back to the campus again. The observations on this route show that the most interactivity occurred when the wearer was facing the viewer and the distance between them was small (as when walking on the same sidewalk in opposite directions). In some instances, curious viewers also turned their heads to have another look after they passed. Moreover, interest in the tool was more visible when the observational video recording was stopped and it became a standalone object. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Emotions on the viewer screen after the video walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|WUd2enyf2lQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File: emo_ss.jpg|300px]]&lt;br /&gt;
[[File: emo_archive.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
During the walk, the emotion output was saved with an automated screenshot script, while the laptop in the bag was also remote-controlled to ensure the stability of the system. The collected emotion visuals (static images) were then blended together with a frame interpolator to create the transitions between them.&lt;br /&gt;
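&lt;br /&gt;
&#039;&#039;a minimal sketch of such a periodic screenshot script; the library choice (pyautogui), the interval, and the output path are assumptions for illustration, as the script actually used is not documented here;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# periodic screenshot capture; library, interval, and output path are assumed&lt;br /&gt;
import os&lt;br /&gt;
import time&lt;br /&gt;
import pyautogui&lt;br /&gt;
&lt;br /&gt;
INTERVAL_S = 5  # assumed capture interval in seconds&lt;br /&gt;
os.makedirs(&#039;emotions&#039;, exist_ok=True)&lt;br /&gt;
&lt;br /&gt;
count = 0&lt;br /&gt;
while True:&lt;br /&gt;
    shot = pyautogui.screenshot()  # grab the current screen contents&lt;br /&gt;
    shot.save(f&#039;emotions/frame_{count:04d}.png&#039;)&lt;br /&gt;
    count += 1&lt;br /&gt;
    time.sleep(INTERVAL_S)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;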
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions &amp;amp; Further Development==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Limitations of Facial Emotion Recognition Through Deep Learning Algorithms===&lt;br /&gt;
&lt;br /&gt;
Even though emotion detection with AI has been studied extensively and used in many different ways, it is important to consider its problems. One of the main limitations is accuracy. The DeepFace framework claims that, according to experiments, humans can recognize faces with an accuracy of 97.53%, whereas such models have already attained and surpassed that mark. From my personal experience, I can say that it can be used for simple emotions; however, its power to assess complex or micro emotions is not close to the human capability of understanding emotional expressions. Current emotion detection systems can still struggle to identify emotions accurately due to cultural and individual differences and context. The technology is also prone to algorithmic bias, leading to inaccuracies for certain groups of people. My own experience with the tool ran into more or less the same hurdles: the emotion detection was usually tagging me as sad rather than neutral. This kind of algorithm is a black box, which makes it harder to understand what goes wrong when it gives a very different result than expected. Therefore, it is crucial to approach these tools with a critical mindset and use them as a supplement rather than a replacement for human analysis. From another point of view, the fragility and inaccuracy of the system make it vulnerable to misinterpretation in an automated setting. It still needs an operator or interpreter who extracts the source of truth. This highlights the importance of human involvement in decision-making processes, especially in complex systems such as emotion recognition. While automation or massification, instead of personalization, of these kinds of tools can improve efficiency, they should not completely replace human oversight and intervention.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another source of bias in facial emotion recognition systems is that faces are evaluated at the 2D level. Human facial expressions have historically been investigated using either static 2D images or 2D video sequences. Recent datasets have taken 3D facial expressions into account to better facilitate an examination of the fine structural changes inherent to spontaneous expressions; however, a 2D-based analysis has difficulty handling large variations in pose and subtle facial movements (Ko, 2018). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The OpenCV library used in this project retrieves static images from the camera data and assesses whether they contain a human face or not. DeepFace then evaluates the facial emotion of these face screenshots. As a result, the system judges the emotion from a static example of the face rather than from its context, which makes it less reliable than the human eye. Moreover, it was not able to recognize a face from another angle, which means the camera needs to be stable enough to see the face horizontally aligned, and this can limit the range of actions possible while wearing the device.&lt;br /&gt;
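&lt;br /&gt;
&#039;&#039;a condensed sketch of that pipeline (camera frame, face check via an OpenCV Haar cascade, emotion via DeepFace); the detector settings and loop structure here are assumptions, not the exact configuration of the project;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# camera frame, then OpenCV face detection, then DeepFace emotion analysis;&lt;br /&gt;
# cascade parameters and loop structure are illustrative assumptions&lt;br /&gt;
import cv2&lt;br /&gt;
from deepface import DeepFace&lt;br /&gt;
&lt;br /&gt;
face_cascade = cv2.CascadeClassifier(&lt;br /&gt;
    cv2.data.haarcascades + &#039;haarcascade_frontalface_default.xml&#039;)&lt;br /&gt;
cap = cv2.VideoCapture(0)&lt;br /&gt;
&lt;br /&gt;
while True:&lt;br /&gt;
    ok, frame = cap.read()&lt;br /&gt;
    if not ok:&lt;br /&gt;
        break&lt;br /&gt;
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)&lt;br /&gt;
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)&lt;br /&gt;
    if len(faces) &amp;gt; 0:&lt;br /&gt;
        # analyze the static frame; enforce_detection=False avoids an error&lt;br /&gt;
        # when DeepFace itself cannot re-detect the face&lt;br /&gt;
        result = DeepFace.analyze(frame, actions=[&#039;emotion&#039;],&lt;br /&gt;
                                  enforce_detection=False)&lt;br /&gt;
        print(result[0][&#039;dominant_emotion&#039;])  # recent versions return a list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;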
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Color Usage to Evoke Emotions===&lt;br /&gt;
&lt;br /&gt;
The psychophysical effects and evolutionary interpretations of colors, which guided the color selection for the emotion visualizer, are a relatively controversial topic in the context of this work.&lt;br /&gt;
&lt;br /&gt;
Many factors, such as culture, age, and gender, can influence the psychological effects of colors. White, for instance, can signify innocence and purity in some cultures, but death and mourning in others. Similarly, people of different ages and sexes may associate different colors with different things. Human vision cannot be assumed to be uniform, and there are even some limitations regarding the perception of color, such as several types of color blindness. Even with these limitations, I think it&#039;s still important to look into the general psychological properties of colors and use this knowledge to create meaning that speaks to a wider audience. &lt;br /&gt;
&lt;br /&gt;
From this point of view, the choice of colors for this work could go in a more personal direction, depending on who is looking at it, when the communication is at a more exclusive level and not shown to the public. Personalized color selection can help create a deeper emotional connection with the viewer, which can lead to a more impactful and memorable experience. However, it is important to consider the context and purpose of the work before making any color choices.&lt;br /&gt;
&lt;br /&gt;
Overall, I think that the use of color in art is a powerful way to make people feel things and get emotions across, even if the colors carry a more personalized or symbolic meaning, and I am particularly fascinated by the idea of manipulating a viewer&#039;s mood and feelings through color. By understanding the psychological properties of colors and using them in a deliberate, thoughtful, and also more personal manner, colors can have a lasting impact on the viewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Limitations Regarding the Wearable Piece===&lt;br /&gt;
&lt;br /&gt;
The usage of the setup in the wild showed that the most important factors affecting the visibility of the device are the distance between the wearer and the viewer and its positioning. As mentioned above, the most visible results came when the distance was smaller than two arm lengths, and it was also visible that some curious people turned their heads to look, but the device had already passed. For the next phase, the setup will be more controlled and assistive but still interactive, in the sense that there will be no active motion but a more exclusive communication between the parties. &lt;br /&gt;
&lt;br /&gt;
Also, for the intended usage, the lighting and size of the display were not ideal. The screen was too small and became negligible for most people who were further away. But this problem can&#039;t just be fixed by making the screen bigger, because then it stops being an extension of the person wearing it. Hence, the visual could instead be placed behind the wearer in a more immersive environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. Motivation and Emotion, 30(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
De’Aira, Deepak, &amp;amp; Ayanna. (2021). Age bias in emotion detection An analysis of facial emotion recognition performance on young middle-aged and older adults.&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal Facial Expressions of Emotions. California Mental Health Research Digest, 8(4), 151-158.&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. Neuronal Correlates of Empathy, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
Ko, B. (2018). A brief review of facial emotion recognition based on visual information. Sensors, 18(2), 401. https://doi.org/10.3390/s18020401 &lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=135414</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=135414"/>
		<updated>2023-03-29T21:48:13Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer side, it is translated into a set of colors to be seen by the viewer, who is in turn triggered by colors that have a relatively universal meaning. It is designed to be used in a public space, but rather than being the center of attention, it aims to exist in the periphery while still stimulating the people around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans resulted in many features for creating greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that gives us the ability to build and sustain these social structures is our ability to empathize with the people around us. However, in modern societies, we are growing more and more apart from each other, lost in the rush of our modern-day problems. Our perception of social interactions gets trapped in a closer circle even though we encounter many different faces in a single day of our lives. As those faces become blurry to us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to their body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons for the mouth and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, it is a naturally evolved survival mechanism to avoid an unwanted situation with the help of others around. According to the findings of Adams et al. (2006), two studies suggest that there is accuracy in detecting movements in angry and fearful faces, either moving towards or away from the observer. They found that observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. However, the results were not the same for fear faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing behavior. Therefore, translating the emotions of one party to another has an essential role in sharing “data” collected from the outer world sensed by human body receptors. Besides the expressions of emotion being a means of non-verbal communication, unlike gestures, which can change from culture to culture, they are also relatively universal. According to Ekman (1970), basic emotions have a pancultural nature in that they are identified and also expressed in similar ways in different cultures with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. However, it is distinct from vocal communication in that it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Rather, the expression moves to the other party to be evaluated and has its effect on them, and that is where the reflection forms. One person feels the emotion, but the other one sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of these individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See the detailed development process below&#039;&#039; &lt;br /&gt;
==[[Hardware and Software Systems Processes]]==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interaction==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Video Walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As InterFace is an object to be used in public space, the evaluation of its interactivity was performed with a video walk. The walk starts at the university campus, follows the most crowded streets to Theaterplatz, and then goes back to the campus again. The observations on this route show that the most interactivity occurred when the wearer was facing the viewer and the distance between them was small (as when walking on the same sidewalk in opposite directions). In some instances, curious viewers also turned their heads to have another look after they passed. Moreover, interest in the tool was more visible when the observational video recording was stopped and it became a standalone object. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Emotions on the viewer screen after the video walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|WUd2enyf2lQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File: emo_ss.jpg|300px]]&lt;br /&gt;
[[File: emo_archive.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
During the walk, the emotion output was saved with an automated screenshot script, while the laptop in the bag was also remote-controlled to ensure the stability of the system. The collected emotion visuals (static images) were then blended together with a frame interpolator to create the transitions between them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions &amp;amp; Further Development==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Limitations of Facial Emotion Recognition Through Deep Learning Algorithms===&lt;br /&gt;
&lt;br /&gt;
Even though emotion detection with AI has been studied extensively and used in many different ways, it is important to consider its problems. One of the main limitations is accuracy. The DeepFace framework claims that, according to experiments, humans recognize faces with an accuracy of 97.53%, and that its models have already attained and surpassed that mark. From my personal experience, I can say that it works for simple emotions; however, its ability to assess complex or micro emotions is nowhere near the human capability of understanding emotional expressions. Current emotion detection systems can still struggle to identify emotions accurately due to cultural and individual differences and context. The technology is also prone to algorithmic bias, leading to inaccuracies for certain groups of people. My own experience with the tool ran into more or less the same hurdles: the emotion detection was usually tagging me as sad rather than neutral. This kind of algorithm is a black box, which makes it harder to understand what goes wrong in the process when it gives a very different result than expected. Therefore, it is crucial to approach these tools with a critical mindset and use them as a supplement rather than a replacement for human analysis.&lt;br /&gt;
&lt;br /&gt;
From another point of view, the fragility and inaccuracy of the system make it vulnerable to misinterpretation in an automated setting. It still needs an operator or interpreter who extracts the source of truth. This highlights the importance of human involvement in decision-making processes, especially in complex systems such as emotion recognition. While automation or massification of these kinds of tools, instead of personalization, can improve efficiency, it should not completely replace human oversight and intervention.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another source of bias in facial emotion recognition systems is that the evaluation happens at the 2D level. Human facial expressions have historically been investigated using either static 2D images or 2D video sequences. Recent datasets have taken 3D facial expressions into account to better capture the fine structural changes inherent to spontaneous expressions; a 2D-based analysis, however, has difficulty handling large variations in pose and subtle facial movements (Ko, 2018). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The OpenCV library used in this project retrieves static images from the camera stream and assesses whether they contain a human face. DeepFace then evaluates the facial emotion on these face screenshots. As a result, the system judges the emotion from a static sample of the face rather than from its context, which makes it less reliable than the human eye. Moreover, it was not able to recognize a face from another angle, which means the camera needs to be stable enough to see the face horizontally aligned; this limits the range of possible actions while wearing the device.&lt;br /&gt;
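&lt;br /&gt;
A minimal sketch of that two-stage pipeline, assuming OpenCV&#039;s stock Haar cascade as the face detector and DeepFace&#039;s emotion action; the camera index and detector parameters are illustrative assumptions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
import cv2&lt;br /&gt;
from deepface import DeepFace&lt;br /&gt;
&lt;br /&gt;
# Stock Haar cascade shipped with OpenCV for frontal faces.&lt;br /&gt;
detector = cv2.CascadeClassifier(cv2.data.haarcascades + &#039;haarcascade_frontalface_default.xml&#039;)&lt;br /&gt;
cap = cv2.VideoCapture(0)  # camera index is an assumption&lt;br /&gt;
&lt;br /&gt;
while True:&lt;br /&gt;
    ok, frame = cap.read()&lt;br /&gt;
    if not ok:&lt;br /&gt;
        break&lt;br /&gt;
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)&lt;br /&gt;
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):&lt;br /&gt;
        face = frame[y:y + h, x:x + w]  # static crop, no temporal context&lt;br /&gt;
        # Recent DeepFace versions return a list of result dicts.&lt;br /&gt;
        result = DeepFace.analyze(face, actions=[&#039;emotion&#039;], enforce_detection=False)&lt;br /&gt;
        print(result[0][&#039;dominant_emotion&#039;])&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;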
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Color Usage to Evoke Emotions===&lt;br /&gt;
&lt;br /&gt;
The psychophysical effects and evolutionary interpretations of color that guided the color selection for the emotion visualizer are a relatively controversial topic in the context of this work, and worth unpacking.&lt;br /&gt;
&lt;br /&gt;
Many factors, such as cultural background, age, and gender, can influence the psychological effects of colors. White, for instance, can signify innocence and purity in some cultures, but death and mourning in others. Similarly, people of different ages and sexes may associate different colors with different things. Human vision cannot be assumed to be uniform either; there are even limitations in the perception of color itself, such as the several types of color blindness. Even with these limitations, I think it&#039;s still important to look into the general psychological properties of colors and use this knowledge to create meaning that speaks to a wider audience. &lt;br /&gt;
&lt;br /&gt;
From this point of view, the choice of colors for this work could go in a more personal direction, depending on who is looking at it, when it operates on a more exclusive communication level and is not shown to the public. Personalized color selection can help create a deeper emotional connection with the viewer, which can lead to a more impactful and memorable experience. However, it is important to consider the context and purpose of the work before making any color choices.&lt;br /&gt;
&lt;br /&gt;
Overall, I think the use of color in art is a powerful way to make people feel things and to get emotions across, even when it carries a more personalized or symbolic meaning, and I am particularly fascinated by the idea of shaping a viewer&#039;s mood and feelings through color. By understanding the psychological properties of colors and using them in a deliberate, thoughtful, and also more personal manner, colors can have a lasting impact on the viewer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Limitations Regarding the Wearable Piece===&lt;br /&gt;
&lt;br /&gt;
Using the setup in the wild showed that the most important factors affecting the visibility of the device are its positioning and the distance between the wearer and the viewer. As mentioned above, the most visible results occurred when the distance was smaller than about two arm lengths, and some curious people turned their heads to look after the device had already passed. For the next phase, the setup will be more controlled and assistive but still interactive: there will be no active motion, but a more exclusive communication between the parties. &lt;br /&gt;
&lt;br /&gt;
Also, given the goals of usage, the lighting and size of the display were not ideal. The screen was too small and became negligible for most people further away. This cannot be fixed simply by making the screen bigger, because the device would then stop being an extension of the person wearing it. Hence, the visual could instead be placed behind the wearer in a more immersive environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
De’Aira, Deepak, &amp;amp; Ayanna (2021). Age bias in emotion detection: An analysis of facial emotion recognition performance on young, middle-aged, and older adults.&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal facial expressions of emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=135413</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=135413"/>
		<updated>2023-03-29T21:30:12Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer&#039;s side, it is translated into a set of colors seen by the viewer, who is in turn stimulated by colors that carry a relatively universal meaning. It is designed to be used in public space, but rather than being the center of attention, it aims to exist in the periphery while still stimulating the people around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans resulted in many features for creating greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that gives us the ability to build and sustain these social structures is our capacity to empathize with the people around us. However, in modern societies we drift further and further apart, lost in the rush of our modern-day problems. Our perception of social interaction gets trapped in a narrow circle, even though we encounter many different faces in a single day. As those faces blur for us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to body language, facial expressions, tone of voice, and words, as well as considering past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons for the mouth region and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, reading others&#039; expressions is a naturally evolved survival mechanism for avoiding unwanted situations with the help of those around us. Adams et al. (2006) report two studies suggesting that observers can accurately detect whether angry and fearful faces are moving towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. The results were different for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing. Translating the emotions of one party to another therefore plays an essential role in sharing “data” collected from the outer world through the body&#039;s receptors. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural: they are identified and expressed in similar ways across cultures, with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and takes its effect on them; that is where the reflection forms. One person feels the emotion, but the other sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See the detailed development process below&#039;&#039; &lt;br /&gt;
==[[Hardware and Software Systems Processes]]==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interaction==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Video Walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As an object intended for public space, the interactivity of the piece was evaluated with a video walk. The walk starts at the university campus, follows the most crowded streets to Theaterplatz, and then returns to the campus. The observations along this route show that the most interaction occurred when the wearer was facing the viewer and the distance between them was small (as when walking in opposite directions on the same sidewalk). In some instances, curious viewers also turned their heads to have another look after they had passed. Moreover, interest in the tool became more visible once the observational video recording was stopped and it became a standalone object. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Emotions on the viewer screen after the video walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|WUd2enyf2lQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File: emo_ss.jpg|300px]]&lt;br /&gt;
[[File: emo_archive.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
During the walk, the emotion output was saved with an automated screenshot script, while the laptop in the bag was remote-controlled to ensure the stability of the system. The collected emotion visuals (static images) were then blended together with a frame interpolator to create transitions between them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Even though emotion detection with AI technology has been widely researched and used in a variety of applications, it is essential to consider its drawbacks. One of the main limitations is accuracy. The framework used in this project is reported to be among the most accurate, yet its reported accuracy still sits around 97%. From my personal experience, I can say that it can be used for basic emotions; however, its ability to assess complex or micro emotions is nowhere near the human capability of understanding emotional expressions. Current emotion detection systems can still struggle to identify emotions accurately due to cultural and individual differences and context. The technology is also prone to algorithmic bias, leading to inaccuracies for certain groups of people. My own experience with the tool ran into more or less the same hurdles: the emotion detection was usually tagging me as sad rather than neutral. This kind of algorithm is a black box, which makes it harder to understand what goes wrong in the process when it gives a very different result than expected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On the other hand, face recognition systems have their own disadvantages. One of them is the inability to recognize a face from another angle, which means the camera needs to be stable enough to see the face horizontally aligned; this limits the range of possible actions while wearing the device.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
De’Aira, Deepak, &amp;amp; Ayanna (2021). Age bias in emotion detection: An analysis of facial emotion recognition performance on young, middle-aged, and older adults.&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal facial expressions of emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=135412</id>
		<title>Hardware and Software Systems Processes</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=135412"/>
		<updated>2023-03-29T21:21:38Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Hardware Setup=&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model of display did not work because the holder was too heavy to be balanced on the head, and since the camera cannot sit too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
*Backpack (to carry the laptop connected to the camera)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_24.jpg|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which gains its stability from the upper body it wraps around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup, which was more or less a cyclical process that went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
=Software Setup=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, instances of the face are captured each second and fed to the DeepFace algorithm. DeepFace outputs the emotional data as a label drawn on the face.&lt;br /&gt;
The default emotion read-write was too fast (&amp;lt;1 sec intervals) to drive the more stable visual planned for later in the process, so a limiter was designed to output an emotion only when the same emotion is detected at least 2 times in a row.&lt;br /&gt;
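&lt;br /&gt;
A minimal sketch of such a limiter, assuming labels arrive roughly once per second; the helper names here are hypothetical:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
from collections import deque&lt;br /&gt;
&lt;br /&gt;
# Keep only the last two detected labels; emit when they match.&lt;br /&gt;
history = deque(maxlen=2)&lt;br /&gt;
&lt;br /&gt;
def limited_emit(label, emit):&lt;br /&gt;
    history.append(label)&lt;br /&gt;
    if len(history) == 2 and history[0] == history[1]:&lt;br /&gt;
        emit(label)&lt;br /&gt;
&lt;br /&gt;
# Example: only the repeated &#039;happy&#039; reaches the display.&lt;br /&gt;
for detected in [&#039;sad&#039;, &#039;happy&#039;, &#039;happy&#039;, &#039;neutral&#039;]:&lt;br /&gt;
    limited_emit(detected, print)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;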
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website where all the emotion detection visuals come together. This experiment was successful, which created room to elaborate the emotion-driven visual. &lt;br /&gt;
&lt;br /&gt;
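The wiki does not spell out how the backend hands the detected label to the web page; one plausible bridge, sketched purely as an assumption, is a small HTTP endpoint that the p5.js sketch polls (Flask, the route name, and the port are hypothetical choices):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
from flask import Flask, jsonify&lt;br /&gt;
&lt;br /&gt;
app = Flask(__name__)&lt;br /&gt;
state = {&#039;emotion&#039;: &#039;neutral&#039;}  # updated by the detection loop&lt;br /&gt;
&lt;br /&gt;
@app.route(&#039;/emotion&#039;)&lt;br /&gt;
def emotion():&lt;br /&gt;
    # The p5.js sketch can fetch this JSON once per second and recolor itself.&lt;br /&gt;
    return jsonify(state)&lt;br /&gt;
&lt;br /&gt;
app.run(host=&#039;0.0.0.0&#039;, port=8000)  # reachable from the phone on the same wifi&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;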
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving gradient effect is created with a particle system consisting of several ellipses of different sizes and with different alpha values in their colors. &lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors that signify the emotions, research on color psychology was carried out to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less influenced by factors such as culture, age, and gender than commonly believed. It&#039;s important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer by building a layer that distorts reality, rather than confining it within the limits of a single emotion.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Besides the effects of the physical properties of colors on human psychology, a couple of studies suggest that humans are hard-wired to understand and interpret certain colors as part of the natural processes that keep the most primitive actions going. Trichromatic color vision, according to Changizi and coworkers (2006), evolved so that primates could detect minute color changes in the skin caused by underlying blood flow. Visual sensitivity to these modulations is extremely useful in interpersonal interaction because they reflect the emotion, state, or condition of perceived mating partners (as cited in Elliot &amp;amp; Maier, 2014).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I also appreciate the distinction between color psychology and color symbolism. While color symbolism can be influenced more by cultural and contextual factors, color psychology focuses on the inherent emotional and psychological effects of colors. By understanding these effects, the color combinations used in this work can be communicated in a way that is not only aesthetically pleasing but also emotionally representative. By building a layer of color on top of reality, I can create a visual experience that is both familiar and unfamiliar, creating a sense of unease or disorientation in the viewer. This technique allows me to communicate emotions and ideas in a way that goes beyond the limits of traditional representation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Colors representing the wearer&#039;s emotions are as follows (a compact mapping of these palettes as code follows the list):&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature such as greens and earthy tones to trigger the calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent the &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger the curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger the negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger the alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
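&lt;br /&gt;
Summarized as data, the mapping might look like the sketch below. The keys follow DeepFace&#039;s emotion labels, and the hex values are illustrative assumptions, not the exact palette used in the project files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Hypothetical palette per detected emotion; hex values are illustrative only.&lt;br /&gt;
EMOTION_COLORS = {&lt;br /&gt;
    &#039;neutral&#039;: [&#039;#6b8e23&#039;, &#039;#c2b280&#039;],   # greens and earthy tones&lt;br /&gt;
    &#039;sad&#039;: [&#039;#5e5e5e&#039;, &#039;#b0b0b0&#039;],       # gray tones&lt;br /&gt;
    &#039;happy&#039;: [&#039;#ffa500&#039;, &#039;#ffd700&#039;],     # oranges and yellows&lt;br /&gt;
    &#039;surprise&#039;: [&#039;#a020f0&#039;, &#039;#ff00ff&#039;],  # bright purple and magenta&lt;br /&gt;
    &#039;angry&#039;: [&#039;#8b0000&#039;, &#039;#b22222&#039;],     # dark reds&lt;br /&gt;
    &#039;fear&#039;: [&#039;#ff0000&#039;, &#039;#00ff00&#039;],      # bright red and green&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
def palette_for(label):&lt;br /&gt;
    # Fall back to neutral for labels without a palette (e.g. disgust).&lt;br /&gt;
    return EMOTION_COLORS.get(label, EMOTION_COLORS[&#039;neutral&#039;])&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;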
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen has to be connected to the same wifi. This method has its advantages and disadvantages: making the page full screen on the phone is hard (not impossible, but the wearer has very little control over the screen), yet there is no significant latency in displaying the emotion signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative to this solution might be broadcasting the laptop screen directly on a streaming platform, so that the display on the phone screen is easier to control, although it requires a remote operator for the laptop. However, in experiments using OBS and YouTube streaming, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via wifi.&lt;br /&gt;
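&lt;br /&gt;
For reference, hosting the page itself on the laptop so the phone can load it over the shared wifi can be as simple as Python&#039;s built-in server; the port and bind address here are illustrative:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Serve the current directory (containing the signifier page) on the local network.&lt;br /&gt;
from http.server import HTTPServer, SimpleHTTPRequestHandler&lt;br /&gt;
&lt;br /&gt;
HTTPServer((&#039;0.0.0.0&#039;, 8080), SimpleHTTPRequestHandler).serve_forever()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;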
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
During the video walk, an automated screen capture script was used, taking a capture every 10 seconds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Save a timestamped JPEG of the screen every 10 seconds (macOS screencapture).&lt;br /&gt;
while true; do&lt;br /&gt;
    vardate=$(date +&#039;%d-%m-%Y_%H.%M.%S&#039;)&lt;br /&gt;
    screencapture -t jpg -x ~/Desktop/ss/$vardate.jpg&lt;br /&gt;
    sleep 10&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;References&#039;&#039;==&lt;br /&gt;
Elliot, A. J., &amp;amp; Maier, M. A. (2014). Color psychology: Effects of perceiving color on psychological functioning in humans. Annual Review of Psychology, 65(1), 95–120. https://doi.org/10.1146/annurev-psych-010213-115035&lt;br /&gt;
&lt;br /&gt;
Kurt, S., &amp;amp; Osueke, K. K. (2014). The Effects of Color on the Moods of College Students. SAGE Open, 4(1). https://doi.org/10.1177/2158244014525423&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=134610</id>
		<title>Hardware and Software Systems Processes</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=134610"/>
		<updated>2023-02-08T22:37:52Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model of display did not work because the holder was too heavy to be balanced on the head, and since the camera cannot sit too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
*Backpack (to carry the laptop connected to the camera)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_24.jpg|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which gains its stability from the upper body it wraps around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup, which was more or less a cyclical process that went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, instances of the face are captured each second and fed to the DeepFace algorithm. DeepFace outputs the emotional data as a label drawn on the face.&lt;br /&gt;
The default emotion read-write was too fast (&amp;lt;1 sec intervals) to drive the more stable visual planned for later in the process, so a limiter was designed to output an emotion only when the same emotion is detected at least 2 times in a row.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website where all the emotion detection visuals come together. This experiment was successful, which created room to elaborate the emotion-driven visual. &lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving gradient effect is created with a particle system consisting of several ellipses of different sizes and with different alpha values in their colors. &lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying the emotions, research on color psychology was carried out to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less influenced by factors such as culture, age, and gender than commonly believed. It&#039;s important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer by building a layer that distorts reality, rather than confining it within the limits of a single emotion.&lt;br /&gt;
&lt;br /&gt;
Colors representing the wearer&#039;s emotions are as follows:&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature such as greens and earthy tones to trigger the calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent the &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger the curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger the negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger the alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen has to be connected to the same wifi. This method has its advantages and disadvantages: making the page full screen on the phone is hard (not impossible, but the wearer has very little control over the screen), yet there is no significant latency in displaying the emotion signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative to this solution might be broadcasting the laptop screen directly on a streaming platform, so that the display on the phone screen is easier to control, although it requires a remote operator for the laptop. However, in experiments using OBS and YouTube streaming, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via wifi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
During the video walk, an automated screen capture script was used, taking a capture every 10 seconds. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Save a timestamped JPEG of the screen every 10 seconds (macOS screencapture).&lt;br /&gt;
while true; do&lt;br /&gt;
    vardate=$(date +&#039;%d-%m-%Y_%H.%M.%S&#039;)&lt;br /&gt;
    screencapture -t jpg -x ~/Desktop/ss/$vardate.jpg&lt;br /&gt;
    sleep 10&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=134609</id>
		<title>Hardware and Software Systems Processes</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=134609"/>
		<updated>2023-02-08T22:06:41Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* Software Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model of display did not work because the holder was too heavy to be balanced on the head, and since the camera cannot sit too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
*Backpack (to carry the laptop connected to the camera)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_24.jpg|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which gains its stability from the upper body it wraps around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup, which was more or less a cyclical process that went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, instances of the face are captured each second and fed to the DeepFace algorithm. DeepFace outputs the emotional data as a label drawn on the face.&lt;br /&gt;
The default emotion read-write was too fast (&amp;lt;1 sec intervals) to drive the more stable visual planned for later in the process, so a limiter was designed to output an emotion only when the same emotion is detected at least 2 times in a row.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website where all the emotion detection visuals come together. This experiment was successful, which created room to elaborate the emotion-driven visual. &lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving gradient effect is created with a particle system consisting of several ellipses of different sizes and with different alpha values in their colors. &lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying the emotions, research on color psychology was carried out to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less influenced by factors such as culture, age, and gender than commonly believed. It&#039;s important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer by building a layer that distorts reality, rather than confining it within the limits of a single emotion.&lt;br /&gt;
&lt;br /&gt;
Colors representing the wearer&#039;s emotions are as follows:&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature such as greens and earthy tones to trigger the calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent the &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger the curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger the negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger the alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen has to be connected to the same wifi. This method has its advantages and disadvantages: making the page full screen on the phone is hard (not impossible, but the wearer has very little control over the screen), yet there is no significant latency in displaying the emotion signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative to this solution might be broadcasting the laptop screen directly on a streaming platform, so that the display on the phone screen is easier to control, although it requires a remote operator for the laptop. However, in experiments using OBS and YouTube streaming, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via wifi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134608</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134608"/>
		<updated>2023-02-08T21:23:22Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer&#039;s side, it is translated into a set of colors seen by the viewer, who is in turn stimulated by colors that carry a relatively universal meaning. It is designed to be used in public space, but rather than being the center of attention, it aims to exist in the periphery while still stimulating the people around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans resulted in many features for creating greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that gives us the ability to build and sustain these social structures is our capacity to empathize with the people around us. However, in modern societies we drift further and further apart, lost in the rush of our modern-day problems. Our perception of social interaction gets trapped in a narrow circle, even though we encounter many different faces in a single day. As those faces blur for us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to body language, facial expressions, tone of voice, and words, as well as considering past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons for the mouth region and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, reading others&#039; expressions is a naturally evolved survival mechanism for avoiding unwanted situations with the help of those around us. Adams et al. (2006) report two studies suggesting that observers can accurately detect whether angry and fearful faces are moving towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. The results were different for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing. Translating the emotions of one party to another therefore plays an essential role in sharing “data” collected from the outer world through the body&#039;s receptors. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural: they are identified and expressed in similar ways across cultures, with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and takes its effect on them; that is where the reflection forms. One person feels the emotion, but the other sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See the detailed development process below&#039;&#039; &lt;br /&gt;
==[[Hardware and Software Systems Processes]]==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interaction==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Video Walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As an object intended for public space, the interactivity of the piece was evaluated with a video walk. The walk starts at the university campus, follows the most crowded streets to Theaterplatz, and then returns to the campus. The observations along this route show that the most interaction occurred when the wearer was facing the viewer and the distance between them was small (as when walking in opposite directions on the same sidewalk). In some instances, curious viewers also turned their heads to have another look after they had passed. Moreover, interest in the tool became more visible once the observational video recording was stopped and it became a standalone object. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Emotions on the viewer screen after the video walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|WUd2enyf2lQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File: emo_ss.jpg|300px]]&lt;br /&gt;
[[File: emo_archive.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
During the walk, the emotion output was saved with an automated screenshot script, while the laptop in the bag was remote-controlled to ensure the stability of the system. The collected emotion visuals (static images) were then blended together with a frame interpolator to create transitions between them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Even though emotion detection with AI technology has been widely researched and used in a variety of applications, it is essential to consider its drawbacks. One of the main limitations is accuracy. The framework used in this project is reported to be among the most accurate, yet its reported accuracy still sits around 97%. From my personal experience, I can say that it can be used for basic emotions; however, its ability to assess complex or micro emotions is nowhere near the human capability of understanding emotional expressions. Current emotion detection systems can still struggle to identify emotions accurately due to cultural and individual differences and context. The technology is also prone to algorithmic bias, leading to inaccuracies for certain groups of people. My own experience with the tool ran into more or less the same hurdles: the emotion detection was usually tagging me as sad rather than neutral. This kind of algorithm is a black box, which makes it harder to understand what goes wrong in the process when it gives a very different result than expected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On the other hand, face recognition systems have their own disadvantages. One of them is the inability to recognize a face from another angle, which means the camera needs to be stable enough to see the face horizontally aligned; this limits the range of possible actions while wearing the device.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal facial expressions of emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134607</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134607"/>
		<updated>2023-02-08T20:59:46Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer&#039;s side, it is translated into a set of colors seen by the viewer, who is in turn stimulated by colors that carry a relatively universal meaning. It is designed to be used in public space, but rather than being the center of attention, it aims to exist in the periphery while still stimulating the people around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans resulted in many features for creating greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that gives us the ability to build and sustain these social structures is our capacity to empathize with the people around us. However, in modern societies we drift further and further apart, lost in the rush of our modern-day problems. Our perception of social interaction gets trapped in a narrow circle, even though we encounter many different faces in a single day. As those faces blur for us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to body language, facial expressions, tone of voice, and words, as well as considering past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons for the mouth region and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, reading others&#039; expressions is a naturally evolved survival mechanism for avoiding unwanted situations with the help of those around us. Adams et al. (2006) report two studies suggesting that observers can accurately detect whether angry and fearful faces are moving towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. The results were different for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing. Translating the emotions of one party to another therefore plays an essential role in sharing “data” collected from the outer world through the body&#039;s receptors. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural: they are identified and expressed in similar ways across cultures, with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and takes its effect on them; that is where the reflection forms. One person feels the emotion, but the other sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See the detailed development process below&#039;&#039; &lt;br /&gt;
==[[Hardware and Software Systems Processes]]==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interaction==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Video Walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As InterFace is an object to be used in public space, its interactivity was evaluated with a video walk. The walk started at the university campus, followed the most crowded streets to Theaterplatz, and then returned to the campus. Along this route, the most interaction occurred when the wearer was facing the viewer and the distance between them was small (as when walking in opposite directions on the same sidewalk). In some instances, curious passers-by also turned their heads for another look after passing. Interest in the tool became even more visible once the observational video recording was stopped and the device became a standalone object.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Emotions on the viewer screen after the video walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|WUd2enyf2lQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File: emo_ss.jpg|300px]]&lt;br /&gt;
[[File: emo_archive.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
During the walk, the emotion output was saved with an automated screenshot script, while the laptop in the bag was also remote-controlled to ensure the stability of the system. The collected emotion visuals (static images) were then blended together with a frame interpolator to create the transitions between them.&lt;br /&gt;
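A minimal sketch of such an automated screenshot loop, assuming the mss package; the capture interval and filename pattern are placeholders, not the project’s actual script:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Periodically archive the emotion output shown on screen
# (assumes: pip install mss; interval and filenames are placeholders).
import time
import mss

with mss.mss() as sct:
    shot_index = 0
    while True:
        # Grab the primary monitor into a numbered PNG file.
        sct.shot(output="emotion_{:04d}.png".format(shot_index))
        shot_index += 1
        time.sleep(5)  # one screenshot every five seconds
&lt;/pre&gt;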
&lt;br /&gt;
&lt;br /&gt;
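The frame interpolator itself is not specified; as a simple stand-in, consecutive stills of the same size can be cross-faded with OpenCV (a sketch of the blending idea, not the tool actually used):&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Cross-fade between two consecutive screenshots as a simple
# stand-in for frame interpolation (assumes opencv-python;
# file paths and step count are placeholders).
import cv2

def crossfade(img_a_path, img_b_path, steps=10):
    a = cv2.imread(img_a_path)
    b = cv2.imread(img_b_path)
    frames = []
    for i in range(steps + 1):
        t = i / steps
        # Weighted blend: fully image A at t=0, fully image B at t=1.
        frames.append(cv2.addWeighted(a, 1.0 - t, b, t, 0.0))
    return frames
&lt;/pre&gt;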
&lt;br /&gt;
==Discussions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Even though emotion detection with AI has been widely researched and used in a variety of applications, it is essential to consider its drawbacks. One of the main limitations is accuracy. The algorithm used in this project is reported to be among the most accurate available, yet it still tops out at about 97%. From my personal experience, it works for basic emotions, but its power to assess complex emotions or micro-expressions is nowhere near the human capability for understanding emotional expression. Current emotion detection systems can still struggle to identify emotions accurately because of cultural and individual differences and context. The technology is also prone to algorithmic bias, leading to inaccuracies for certain groups of people. My own experience with the tool ran into much the same hurdles: the emotion detection usually tagged me as sad rather than neutral. Such an algorithm is a black box, which makes it harder to understand what goes wrong in the process when it gives a very different result than expected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On the other hand, face recognition systems have their own disadvantages. One of them is the inability to recognize a face from another angle: the camera needs to be stable enough to see the face roughly horizontally aligned, and this requirement can limit the range of possible actions while wearing the device.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal Facial Expressions of Emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134606</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134606"/>
		<updated>2023-02-08T20:39:47Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that detects emotions from facial expressions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer’s side, it is translated into a set of colors shown to the viewer, who in turn is affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Human evolutionary development has produced many features that allow extensive groups of people to form greater societies, ideally living in harmony with one another. One of the most influential of these features, the one that lets us build and sustain such social structures, is our ability to empathize with the people around us. In modern societies, however, we grow further and further apart, lost in the rush of our modern-day problems. Our perception of social interaction shrinks to a narrow circle, even though we encounter many different faces in a single day. As those faces blur, the uplifting effect of being social and sharing decays even more. The project examines the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of that. It involves paying attention to the other person’s body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation of being emotionally in tune with others. While the embodiment of emotions does not cover all aspects of the empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Ferrari &amp;amp; Coudé, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, empathy is a naturally evolved survival mechanism for avoiding an unwanted situation with the help of others around us. In two studies, Adams et al. (2006) found that observers can reliably detect whether angry and fearful faces are moving towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey an intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing. Translating the emotions of one party to another therefore plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural: they are identified, and expressed with the same facial muscle responses, in similar ways across different cultures.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, while the other sees the facial expression. The viewer is thus both the bridge to the outer world and the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions on a larger scale, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of individual emotions by making them visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See the detailed development process below&#039;&#039; &lt;br /&gt;
==[[Hardware and Software Systems Processes]]==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interaction==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Video Walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As InterFace is an object to be used in public space, its interactivity was evaluated with a video walk. The walk started at the university campus, followed the most crowded streets to Theaterplatz, and then returned to the campus. Along this route, the most interaction occurred when the wearer was facing the viewer and the distance between them was small (as when walking in opposite directions on the same sidewalk). In some instances, curious passers-by also turned their heads for another look after passing. Interest in the tool became even more visible once the observational video recording was stopped and the device became a standalone object.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Emotions on the viewer screen after the video walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|WUd2enyf2lQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File: emo_ss.jpg|300px]]&lt;br /&gt;
[[File: emo_archive.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
During the walk, the emotion output was saved with an automated screenshot script, while the laptop in the bag was also remote-controlled to ensure the stability of the system. The collected emotion visuals (static images) were then blended together with a frame interpolator to create the transitions between them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal Facial Expressions of Emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=134605</id>
		<title>Hardware and Software Systems Processes</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=134605"/>
		<updated>2023-02-08T20:37:13Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* Hardware Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model of display did not work because the holder was too heavy to be balanced on the head. And since the camera cannot be placed too close to the face (it needs enough distance to see and detect the face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
*Backpack (to carry the laptop connected to the camera)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_24.jpg|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which gains its stability from the upper body it goes around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup process, which was more or less cyclical and went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, an instance of the face is captured each second and fed to the DeepFace algorithm, which outputs the emotion data as a label on the face.&lt;br /&gt;
The default emotion read-write cycle was too fast (&amp;lt;1-second intervals) for the more stable visual planned for the later phases, so a limiter was designed that outputs an emotion only when the same emotion has been detected at least two times in a row.&lt;br /&gt;
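&lt;br /&gt;
A minimal sketch of this loop, assuming the opencv-python and deepface packages; the return shape of DeepFace.analyze differs between library versions, so the result is read defensively:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Phase 1 sketch: face/emotion detection once per second with a
# two-in-a-row limiter to stabilize the output.
import time
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)              # camera input
last_emotion, repeat_count = None, 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = DeepFace.analyze(frame, actions=["emotion"],
                              enforce_detection=False)
    if isinstance(result, list):       # newer versions return a list
        result = result[0]
    emotion = result["dominant_emotion"]

    # Limiter: emit the emotion only when the same label has been
    # detected at least twice in a row, to avoid flicker.
    if emotion == last_emotion:
        repeat_count += 1
    else:
        last_emotion, repeat_count = emotion, 1
    if repeat_count == 2:
        print("stable emotion:", emotion)

    time.sleep(1)                      # one face instance per second

cap.release()
&lt;/pre&gt;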
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website where all the emotion detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual. &lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving gradient effect is created from a particle system consisting of several ellipses of different sizes and with different alpha values in their colors. &lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying the emotions, research on color psychology was done to better understand the interpretation of colors. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less impacted by factors such as culture, age, and gender than commonly believed. It is important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer, adding a layer that distorts reality, rather than confining it within the limits of a single named emotion. &lt;br /&gt;
&lt;br /&gt;
The colors representing the wearer’s emotions are listed below; a small mapping sketch follows the list.&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature, such as greens and earthy tones, to trigger a calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent what is &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
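A small sketch of this mapping, keyed by DeepFace’s emotion labels; the hex values are illustrative placeholders, not the project’s exact palette:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Map DeepFace emotion labels to the color families listed above.
# Hex values are placeholders for illustration only.
EMOTION_PALETTES = {
    "neutral":  ["#6b8e23", "#8fbc8f", "#a0522d"],  # greens, earthy tones
    "sad":      ["#696969", "#a9a9a9", "#d3d3d3"],  # gray tones
    "happy":    ["#ffa500", "#ffd700", "#ffcc33"],  # oranges and yellows
    "surprise": ["#9932cc", "#ff00ff", "#da70d6"],  # bright purple, magenta
    "angry":    ["#8b0000", "#a52a2a", "#7c0a02"],  # dark reds
    "fear":     ["#ff0000", "#00ff00", "#adff2f"],  # bright red and green
}

def palette_for(emotion):
    # Fall back to the neutral palette for unmapped labels.
    return EMOTION_PALETTES.get(emotion, EMOTION_PALETTES["neutral"])
&lt;/pre&gt;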
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same wifi network. This method has disadvantages and advantages: making the page full screen on the phone is difficult (not impossible, but the wearer has very little control over the screen), yet there is no significant latency in displaying the emotion signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative would be broadcasting the laptop screen on a streaming platform, so that displaying it on the phone screen is easier to control, although this requires a remote operator for the laptop. In experiments with OBS and YouTube streaming, however, the latency was so long that the visual lost its purpose of being in sync with the wearer’s real facial expression. It is therefore better to go with the first option, connecting via wifi.&lt;br /&gt;
&lt;br /&gt;
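A minimal way to host the signifier page for a phone on the same wifi is Python’s built-in web server; this is a sketch in which the port is a placeholder and the page files are assumed to sit in the current directory:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Serve the signifier web page over the local network; a phone on the
# same wifi can then open http://LAPTOP_IP:8000 in its browser.
import http.server
import socketserver

PORT = 8000
handler = http.server.SimpleHTTPRequestHandler  # serves current directory

with socketserver.TCPServer(("", PORT), handler) as httpd:
    print("Serving on port", PORT)
    httpd.serve_forever()
&lt;/pre&gt;
&lt;br /&gt;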
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_24.jpg&amp;diff=134604</id>
		<title>File:Emotiondet 24.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_24.jpg&amp;diff=134604"/>
		<updated>2023-02-08T20:35:53Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134603</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134603"/>
		<updated>2023-02-08T20:34:30Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* Interaction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that detects emotions from facial expressions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer’s side, it is translated into a set of colors shown to the viewer, who in turn is affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Human evolutionary development has produced many features that allow extensive groups of people to form greater societies, ideally living in harmony with one another. One of the most influential of these features, the one that lets us build and sustain such social structures, is our ability to empathize with the people around us. In modern societies, however, we grow further and further apart, lost in the rush of our modern-day problems. Our perception of social interaction shrinks to a narrow circle, even though we encounter many different faces in a single day. As those faces blur, the uplifting effect of being social and sharing decays even more. The project examines the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of that. It involves paying attention to the other person’s body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation of being emotionally in tune with others. While the embodiment of emotions does not cover all aspects of the empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Ferrari &amp;amp; Coudé, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, empathy is a naturally evolved survival mechanism for avoiding an unwanted situation with the help of others around us. In two studies, Adams et al. (2006) found that observers can reliably detect whether angry and fearful faces are moving towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey an intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing. Translating the emotions of one party to another therefore plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural: they are identified, and expressed with the same facial muscle responses, in similar ways across different cultures.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, while the other sees the facial expression. The viewer is thus both the bridge to the outer world and the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions on a larger scale, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of individual emotions by making them visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
[[Hardware and Software Systems Processes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interaction==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Video Walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As InterFace is an object to be used in public space, its interactivity was evaluated with a video walk. The walk started at the university campus, followed the most crowded streets to Theaterplatz, and then returned to the campus. Along this route, the most interaction occurred when the wearer was facing the viewer and the distance between them was small (as when walking in opposite directions on the same sidewalk). In some instances, curious passers-by also turned their heads for another look after passing. Interest in the tool became even more visible once the observational video recording was stopped and the device became a standalone object.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Emotions on the viewer screen after the video walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|WUd2enyf2lQ}}&lt;br /&gt;
&lt;br /&gt;
During the walk, the emotion output was saved with an automated screenshot script, while the laptop in the bag was also remote-controlled to ensure the stability of the system. The collected emotion visuals (static images) were then blended together with a frame interpolator to create the transitions between them.&lt;br /&gt;
&lt;br /&gt;
[[File: emo_ss.jpg|300px]]&lt;br /&gt;
[[File: emo_archive.png|200px]]&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal Facial Expressions of Emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134602</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134602"/>
		<updated>2023-02-08T20:30:45Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that detects emotions from facial expressions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer’s side, it is translated into a set of colors shown to the viewer, who in turn is affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Human evolutionary development has produced many features that allow extensive groups of people to form greater societies, ideally living in harmony with one another. One of the most influential of these features, the one that lets us build and sustain such social structures, is our ability to empathize with the people around us. In modern societies, however, we grow further and further apart, lost in the rush of our modern-day problems. Our perception of social interaction shrinks to a narrow circle, even though we encounter many different faces in a single day. As those faces blur, the uplifting effect of being social and sharing decays even more. The project examines the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of that. It involves paying attention to the other person’s body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation of being emotionally in tune with others. While the embodiment of emotions does not cover all aspects of the empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Ferrari &amp;amp; Coudé, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, empathy is a naturally evolved survival mechanism for avoiding an unwanted situation with the help of others around us. In two studies, Adams et al. (2006) found that observers can reliably detect whether angry and fearful faces are moving towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey an intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing. Translating the emotions of one party to another therefore plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural: they are identified, and expressed with the same facial muscle responses, in similar ways across different cultures.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, while the other sees the facial expression. The viewer is thus both the bridge to the outer world and the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions on a larger scale, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of individual emotions by making them visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
[[Hardware and Software Systems Processes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Interaction==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Video Walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As InterFace is an object to be used in public space, its interactivity was evaluated with a video walk. The walk started at the university campus, followed the most crowded streets to Theaterplatz, and then returned to the campus. Along this route, the most interaction occurred when the wearer was facing the viewer and the distance between them was small (as when walking in opposite directions on the same sidewalk). In some instances, curious passers-by also turned their heads for another look after passing. Interest in the tool became even more visible once the observational video recording was stopped and the device became a standalone object.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Emotions on the viewer screen after the video walk&#039;&#039;&lt;br /&gt;
{{#ev:youtube|WUd2enyf2lQ}}&lt;br /&gt;
&lt;br /&gt;
During the walk, the emotion output was saved with an automated screenshot script, while the laptop in the bag was also remote-controlled to ensure the stability of the system. The collected emotion visuals (static images) were then blended together with a frame interpolator to create the transitions between them.&lt;br /&gt;
&lt;br /&gt;
[[:File: emo_ss.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[:File: emo_archive.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal Facial Expressions of Emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emo_archive.png&amp;diff=134601</id>
		<title>File:Emo archive.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emo_archive.png&amp;diff=134601"/>
		<updated>2023-02-08T20:27:38Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emo_ss.jpg&amp;diff=134600</id>
		<title>File:Emo ss.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emo_ss.jpg&amp;diff=134600"/>
		<updated>2023-02-08T20:27:37Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134599</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134599"/>
		<updated>2023-02-08T18:50:52Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that detects emotions from facial expressions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer’s side, it is translated into a set of colors shown to the viewer, who in turn is affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Human evolutionary development has produced many features that allow extensive groups of people to form greater societies, ideally living in harmony with one another. One of the most influential of these features, the one that lets us build and sustain such social structures, is our ability to empathize with the people around us. In modern societies, however, we grow further and further apart, lost in the rush of our modern-day problems. Our perception of social interaction shrinks to a narrow circle, even though we encounter many different faces in a single day. As those faces blur, the uplifting effect of being social and sharing decays even more. The project examines the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of that. It involves paying attention to the other person’s body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation of being emotionally in tune with others. While the embodiment of emotions does not cover all aspects of the empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Ferrari &amp;amp; Coudé, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, empathy is a naturally evolved survival mechanism for avoiding an unwanted situation with the help of others around us. In two studies, Adams et al. (2006) found that observers can reliably detect whether angry and fearful faces are moving towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey an intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing. Translating the emotions of one party to another therefore plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural: they are identified, and expressed with the same facial muscle responses, in similar ways across different cultures.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, while the other sees the facial expression. The viewer is thus both the bridge to the outer world and the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions on a larger scale, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of individual emotions by making them visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware Setup&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Software Setup&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
[[Hardware and Software Systems Processes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interactions&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Video Walk&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal Facial Expressions of Emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134598</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134598"/>
		<updated>2023-02-08T18:50:13Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that detects emotions from facial expressions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer’s side, it is translated into a set of colors shown to the viewer, who in turn is affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Human evolutionary development has produced many features that allow extensive groups of people to form greater societies, ideally living in harmony with one another. One of the most influential of these features, the one that lets us build and sustain such social structures, is our ability to empathize with the people around us. In modern societies, however, we grow further and further apart, lost in the rush of our modern-day problems. Our perception of social interaction shrinks to a narrow circle, even though we encounter many different faces in a single day. As those faces blur, the uplifting effect of being social and sharing decays even more. The project examines the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of that. It involves paying attention to the other person’s body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation of being emotionally in tune with others. While the embodiment of emotions does not cover all aspects of the empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have emerged through evolution (Ferrari &amp;amp; Coudé, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, empathy is a naturally evolved survival mechanism for avoiding an unwanted situation with the help of others around us. In two studies, Adams et al. (2006) found that observers can reliably detect whether angry and fearful faces are moving towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey an intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing. Translating the emotions of one party to another therefore plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural: they are identified, and expressed with the same facial muscle responses, in similar ways across different cultures.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, while the other sees the facial expression. The viewer is thus both the bridge to the outer world and the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions on a larger scale, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of individual emotions by making them visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Hardware and Software Systems Processes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Software System&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interactions&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Video Walk&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal Facial Expressions of Emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=134597</id>
		<title>Hardware and Software Systems Processes</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=Hardware_and_Software_Systems_Processes&amp;diff=134597"/>
		<updated>2023-02-08T18:49:45Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Created page with &amp;quot;==Hardware Setup== &amp;#039;&amp;#039;Initial Sketch&amp;#039;&amp;#039;  400px   &amp;#039;&amp;#039;Experiments with the holder&amp;#039;&amp;#039;   Placed on head, camera facing the wearer, screen facing out.  Tools...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model of display did not work because the holder was too heavy to be balanced on the head. And since the camera cannot be placed too close to the face (it needs enough distance to see and detect the face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]] //change this&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which gains its stability from the upper body it goes around.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup process, which was more or less cyclical and went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, an instance of the face is captured each second and fed to the DeepFace algorithm, which outputs the emotion data as a label on the face.&lt;br /&gt;
The default emotion read-write cycle was too fast (&amp;lt;1-second intervals) for the more stable visual planned for the later phases, so a limiter was designed that outputs an emotion only when the same emotion has been detected at least two times in a row.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website where all the emotion detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual. &lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving gradient effect is created from a particle system consisting of several ellipses of different sizes and with different alpha values in their colors. &lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying the emotions, research on color psychology was done to better understand the interpretation of colors. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less impacted by factors such as culture, age, and gender than commonly believed. It is important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer, adding a layer that distorts reality, rather than confining it within the limits of a single named emotion. &lt;br /&gt;
&lt;br /&gt;
The colors representing the wearer’s emotions are:&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature, such as greens and earthy tones, to trigger a calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent what is &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same Wi-Fi network. This method has its disadvantages and advantages: making the page full screen on the phone is difficult (not impossible, but not easy either, since the wearer has very little control over the screen), but there is no significant latency in displaying the emotion-signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative solution might be broadcasting the laptop screen directly on a streaming platform, so that the page displayed on the phone screen is easier to control, although this requires a remote operator for the laptop. However, in experiments with OBS and YouTube streaming, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via Wi-Fi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134596</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134596"/>
		<updated>2023-02-08T18:46:14Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions, creating an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer&#039;s side, it is translated into a set of colors to be seen by the viewer, who is in turn affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans has resulted in many features for building greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that give us the ability to build and sustain these social structures is our capacity to empathize with the people around us. In modern societies, however, we are drifting further and further apart, lost in the rush of our modern-day problems. Our perception of social interactions gets trapped in an ever smaller circle, even though we encounter many different faces in a single day of our lives. As those faces become blurry to us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to their body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill may have evolved (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, it is a naturally evolved survival mechanism to avoid unwanted situations with the help of others around us. Two studies by Adams et al. (2006) suggest that people detect with accuracy whether angry and fearful faces are moving towards or away from the observer. They found that observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing behavior. Therefore, translating the emotions of one party to another plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions have a pancultural nature in that they are identified and expressed in similar ways in different cultures, with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, but the other one sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of these individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Process]]&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This display model did not work: the holder was too heavy to be balanced on the head, and since the camera cannot sit too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which goes around the upper body and thereby keeps the holder steady.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up as part of the software setup, which was more or less a cyclical process that went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System Diagram&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, a snapshot of the face is fed to the DeepFace algorithm every second. DeepFace outputs the emotion data, labeled on the face.&lt;br /&gt;
The default emotion read-out was too fast (&amp;lt;1 sec intervals) to drive the more stable visual planned for the later phases, so a limiter was designed that outputs an emotion only when the same emotion is detected at least two times in a row.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website, where all of the emotion-detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving-gradient effect is created with a particle system consisting of several ellipses of different sizes and with different alpha values in their colors.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying the emotions, research on color psychology was done to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less impacted by factors such as culture, age, and gender than commonly believed. It&#039;s important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer, adding a layer that distorts reality rather than confining the feeling within the limits of a single named emotion.&lt;br /&gt;
&lt;br /&gt;
The colors representing the wearer&#039;s emotions are:&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature, such as greens and earthy tones, to trigger a calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent the &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same Wi-Fi network. This method has its disadvantages and advantages: making the page full screen on the phone is difficult (not impossible, but not easy either, since the wearer has very little control over the screen), but there is no significant latency in displaying the emotion-signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative solution might be broadcasting the laptop screen directly on a streaming platform, so that the page displayed on the phone screen is easier to control, although this requires a remote operator for the laptop. However, in experiments with OBS and YouTube streaming, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via Wi-Fi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 5: Interactions&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Video Walk&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal facial expressions of emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134595</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134595"/>
		<updated>2023-02-08T18:43:20Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|e1CXrPU11XQ}}&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions, creating an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer&#039;s side, it is translated into a set of colors to be seen by the viewer, who is in turn affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans has resulted in many features for building greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that give us the ability to build and sustain these social structures is our capacity to empathize with the people around us. In modern societies, however, we are drifting further and further apart, lost in the rush of our modern-day problems. Our perception of social interactions gets trapped in an ever smaller circle, even though we encounter many different faces in a single day of our lives. As those faces become blurry to us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to their body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill may have evolved (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, it is a naturally evolved survival mechanism to avoid unwanted situations with the help of others around us. Two studies by Adams et al. (2006) suggest that people detect with accuracy whether angry and fearful faces are moving towards or away from the observer. They found that observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing behavior. Therefore, translating the emotions of one party to another plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions have a pancultural nature in that they are identified and expressed in similar ways in different cultures, with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, but the other one sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of these individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This display model did not work: the holder was too heavy to be balanced on the head, and since the camera cannot sit too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which goes around the upper body and thereby keeps the holder steady.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up as part of the software setup, which was more or less a cyclical process that went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System Diagram&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, a snapshot of the face is fed to the DeepFace algorithm every second. DeepFace outputs the emotion data, labeled on the face.&lt;br /&gt;
The default emotion read-out was too fast (&amp;lt;1 sec intervals) to drive the more stable visual planned for the later phases, so a limiter was designed that outputs an emotion only when the same emotion is detected at least two times in a row.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website, where all of the emotion-detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving-gradient effect is created with a particle system consisting of several ellipses of different sizes and with different alpha values in their colors.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying the emotions, research on color psychology was done to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less impacted by factors such as culture, age, and gender than commonly believed. It&#039;s important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer, adding a layer that distorts reality rather than confining the feeling within the limits of a single named emotion.&lt;br /&gt;
&lt;br /&gt;
The colors representing the wearer&#039;s emotions are:&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature, such as greens and earthy tones, to trigger a calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent the &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same Wi-Fi network. This method has its disadvantages and advantages: making the page full screen on the phone is difficult (not impossible, but not easy either, since the wearer has very little control over the screen), but there is no significant latency in displaying the emotion-signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative solution might be broadcasting the laptop screen directly on a streaming platform, so that the page displayed on the phone screen is easier to control, although this requires a remote operator for the laptop. However, in experiments with OBS and YouTube streaming, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via Wi-Fi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 5: Interactions&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Video Walk&lt;br /&gt;
{{#ev:youtube|i5LFJzIUPKk}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal facial expressions of emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134363</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134363"/>
		<updated>2023-02-02T00:10:12Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions, creating an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer&#039;s side, it is translated into a set of colors to be seen by the viewer, who is in turn affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans has resulted in many features for building greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that give us the ability to build and sustain these social structures is our capacity to empathize with the people around us. In modern societies, however, we are drifting further and further apart, lost in the rush of our modern-day problems. Our perception of social interactions gets trapped in an ever smaller circle, even though we encounter many different faces in a single day of our lives. As those faces become blurry to us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to their body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill may have evolved (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, it is a naturally evolved survival mechanism to avoid unwanted situations with the help of others around us. Two studies by Adams et al. (2006) suggest that people detect with accuracy whether angry and fearful faces are moving towards or away from the observer. They found that observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing behavior. Therefore, translating the emotions of one party to another plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions have a pancultural nature in that they are identified and expressed in similar ways in different cultures, with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, but the other one sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of these individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This display model did not work: the holder was too heavy to be balanced on the head, and since the camera cannot sit too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which goes around the upper body and thereby keeps the holder steady.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up as part of the software setup, which was more or less a cyclical process that went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System Diagram&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, a snapshot of the face is fed to the DeepFace algorithm every second. DeepFace outputs the emotion data, labeled on the face.&lt;br /&gt;
The default emotion read-out was too fast (&amp;lt;1 sec intervals) to drive the more stable visual planned for the later phases, so a limiter was designed that outputs an emotion only when the same emotion is detected at least two times in a row.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website, where all of the emotion-detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving-gradient effect is created with a particle system consisting of several ellipses of different sizes and with different alpha values in their colors.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying the emotions, research on color psychology was done to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less impacted by factors such as culture, age, and gender than commonly believed. It&#039;s important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer, adding a layer that distorts reality rather than confining the feeling within the limits of a single named emotion.&lt;br /&gt;
&lt;br /&gt;
The colors representing the wearer&#039;s emotions are:&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature, such as greens and earthy tones, to trigger a calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent the &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same Wi-Fi network. This method has its disadvantages and advantages: making the page full screen on the phone is difficult (not impossible, but not easy either, since the wearer has very little control over the screen), but there is no significant latency in displaying the emotion-signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative solution might be broadcasting the laptop screen directly on a streaming platform, so that the page displayed on the phone screen is easier to control, although this requires a remote operator for the laptop. However, in experiments with OBS and YouTube streaming, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via Wi-Fi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal facial expressions of emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
DeepFace https://github.com/serengil/deepface&lt;br /&gt;
&lt;br /&gt;
OpenCV https://opencv.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134362</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134362"/>
		<updated>2023-02-02T00:08:10Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* Discussions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions, creating an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer&#039;s side, it is translated into a set of colors to be seen by the viewer, who is in turn affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of humans has resulted in many features for building greater societies out of extensive groups of people, ideally living in harmony with each other. One of the most influential features that give us the ability to build and sustain these social structures is our capacity to empathize with the people around us. In modern societies, however, we are drifting further and further apart, lost in the rush of our modern-day problems. Our perception of social interactions gets trapped in an ever smaller circle, even though we encounter many different faces in a single day of our lives. As those faces become blurry to us, the uplifting effect of being social and sharing decays even more. The project evaluates the effects of emotions conveyed through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to their body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth region and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill may have evolved (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, it is a naturally evolved survival mechanism to avoid unwanted situations with the help of others around us. Two studies by Adams et al. (2006) suggest that people detect with accuracy whether angry and fearful faces are moving towards or away from the observer. They found that observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing behavior. Therefore, translating the emotions of one party to another plays an essential role in sharing the “data” that human body receptors collect from the outer world. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions have a pancultural nature in that they are identified and expressed in similar ways in different cultures, with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them, and that is where the reflection forms. One person feels the emotion, but the other one sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of these individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This display model did not work: the holder was too heavy to be balanced on the head, and since the camera cannot sit too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which goes around the upper body and thereby keeps the holder steady.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up as part of the software setup, which was more or less a cyclical process that went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System Diagram&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, a snapshot of the face is fed to the DeepFace algorithm every second. DeepFace outputs the emotion data, labeled on the face.&lt;br /&gt;
The default emotion read-out was too fast (&amp;lt;1 sec intervals) to drive the more stable visual planned for the later phases, so a limiter was designed that outputs an emotion only when the same emotion is detected at least two times in a row.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website, where all of the emotion-detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a moving-gradient effect is created with a particle system consisting of several ellipses of different sizes and with different alpha values in their colors.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying the emotions, research on color psychology was done to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less impacted by factors such as culture, age, and gender than commonly believed. It&#039;s important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer, adding a layer that distorts reality rather than confining the feeling within the limits of a single named emotion.&lt;br /&gt;
&lt;br /&gt;
The colors representing the wearer&#039;s emotions are:&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature, such as greens and earthy tones, to trigger a calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent the &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same Wi-Fi network. This method has its disadvantages and advantages: making the page full screen on the phone is difficult (not impossible, but not easy either, since the wearer has very little control over the screen), but there is no significant latency in displaying the emotion-signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative solution might be broadcasting the laptop screen directly on a streaming platform, so that the page displayed on the phone screen is easier to control, although this requires a remote operator for the laptop. However, in experiments with OBS and YouTube streaming, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via Wi-Fi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
*inclusivity, color blindness&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal facial expressions of emotions. &#039;&#039;California Mental Health Research Digest, 8&#039;&#039;(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion, 30&#039;&#039;(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134361</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134361"/>
		<updated>2023-02-02T00:07:24Z</updated>

		<summary type="html">&lt;p&gt;Flux999: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that uses facial expressions to detect emotions, creating an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer&#039;s side, it is translated into a set of colors to be seen by the viewer, who is in turn affected by colors that carry a relatively universal meaning.&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The evolutionary development of human resulted in many features to create greater societies from extensive groups of people, ideally living in harmony with each other. One of the most influential features that bring us the ability to build and sustain these social structures is our ability to empathize with the other people around us. However, in modern societies, we are getting more and more apart from each other and lost in the rush of our modern-day problems. Our perception of social interactions gets trapped in a closer circle even though we are encountering many different faces even in one day of our lives. As those faces got blurry for us, the lifting effect of being social and sharing decays even more. The project evaluates the effects of emotions through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to their body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have evolved through evolution (Coudé &amp;amp; Ferrari, 2018).&lt;br /&gt;
&lt;br /&gt;
Moreover, it is a naturally evolved survival mechanism to avoid an unwanted situation with the help of others around. According to the findings of Adams et al(2006), two studies suggest that there is accuracy in detecting movements in angry and fearful faces, either moving towards or away from the observer. They found that observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. However, the results were not the same for fear faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than a behavior of fleeing. Therefore, translating the emotions of one party to another has an essential role in sharing “data” collected from the outer world sensed by human body receptors. Besides the expressions of emotion being a means of non-verbal communication, unlike the gestures that can change from culture to culture, they are also relatively universal. According to Ekman(1970), basic emotions have a pancultural nature in that they are identified and also expressed in similar ways in different cultures with the same facial muscle responses.&lt;br /&gt;
&lt;br /&gt;
The embodiment of emotions through facial expressions is thus a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. Instead, the expression travels to the other party, is evaluated there, and has its effect on them; that is where the reflection forms. One person feels the emotion, but the other one sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotions in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of these individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out.&lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This display model did not work because the holder was too heavy to balance on the head. And since the camera cannot sit too close to the face (it needs distance to see and detect the face), the weight distribution was off.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]] //change this&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted ones. The holder is clipped to the strap, which goes around the upper body, so that the body itself provides the stability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup, which was more or less a cyclical process that went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System Diagram&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, a snapshot of the face is fed to the DeepFace algorithm every second. DeepFace outputs the emotional data, labeled on the face.&lt;br /&gt;
The default emotion read-write was too fast (&amp;lt;1 sec intervals) to drive the more stable visual planned for the later phases, so a limiter is designed to output an emotion only when the same emotion is detected at least 2 times in a row.&lt;br /&gt;
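&lt;br /&gt;
A minimal sketch of this loop, assuming a recent DeepFace version (where analyze returns a list of results) and camera index 0; the project&#039;s original script is not part of this page:&lt;br /&gt;
&lt;pre&gt;
import time
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)          # camera input (here the external camera)
last_emotion, stable_count = None, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    try:
        # actions=[&quot;emotion&quot;] limits DeepFace to emotion recognition
        result = DeepFace.analyze(frame, actions=[&quot;emotion&quot;])
        emotion = result[0][&quot;dominant_emotion&quot;]
    except ValueError:
        emotion = None             # DeepFace found no face in this frame
    # limiter: pass an emotion on only after seeing it twice in a row
    if emotion is not None and emotion == last_emotion:
        stable_count += 1
        if stable_count &gt;= 2:
            print(emotion)         # handed to the frontend in Phase 2
    else:
        stable_count = 1
    last_emotion = emotion
    time.sleep(1)                  # one face instance per second
&lt;/pre&gt;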
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the website where all the emotion-detection visuals come together. This experiment was successful, so it created space for elaborating the emotion-driven visual.&lt;br /&gt;
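&lt;br /&gt;
The page does not spell out how the emotion value reaches the browser; one plausible bridge, sketched here with Flask purely as an assumption, is a tiny endpoint that the p5.js sketch polls:&lt;br /&gt;
&lt;pre&gt;
from flask import Flask, jsonify

app = Flask(__name__)
state = {&quot;emotion&quot;: &quot;neutral&quot;}     # updated by the detection loop

@app.route(&quot;/emotion&quot;)
def emotion():
    # the p5.js sketch fetches this JSON and switches its visual
    return jsonify(state)

if __name__ == &quot;__main__&quot;:
    # host=&quot;0.0.0.0&quot; makes it reachable from the phone on the same Wi-Fi
    app.run(host=&quot;0.0.0.0&quot;, port=5000)
&lt;/pre&gt;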
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using plain JavaScript, a moving-gradient effect is created from a particle system of ellipses in different sizes, with different alpha values in their colors.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
To select the colors signifying emotions, research on color psychology was done to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less impacted by factors such as culture, age, and gender than commonly believed. It&#039;s important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt &amp;amp; Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer, rather than confining it within the limits of a named emotion, by adding a layer that distorts reality.&lt;br /&gt;
&lt;br /&gt;
Colors representing the wearer&#039;s emotions, summarized in the sketch after this list, are:&lt;br /&gt;
&lt;br /&gt;
*Neutral &lt;br /&gt;
Colors from nature, such as greens and earthy tones, to trigger a calm feeling&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_22.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Sad &lt;br /&gt;
Gray tones to represent a sense of something &amp;quot;missing&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_18.png|200px]] &lt;br /&gt;
&lt;br /&gt;
*Happy&lt;br /&gt;
Oranges and yellows, which are connected to optimistic thoughts&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Surprised&lt;br /&gt;
&lt;br /&gt;
Bright purple and magenta to trigger curiosity&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_21.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Angry&lt;br /&gt;
&lt;br /&gt;
Dark reds to trigger negative/hostile feelings&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_20.png|200px]]&lt;br /&gt;
&lt;br /&gt;
*Fear&lt;br /&gt;
&lt;br /&gt;
Bright red and green to trigger alertness&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_23.png|200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
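Summarized as a plain mapping, keyed by DeepFace&#039;s emotion labels; the hex values are illustrative guesses at the palettes above, not the project&#039;s exact colors:&lt;br /&gt;
&lt;pre&gt;
# illustrative palette mapping; hex values are assumptions, not the
# exact colors used in the project
EMOTION_PALETTES = {
    &quot;neutral&quot;:  [&quot;#4c7a4c&quot;, &quot;#8a6f4d&quot;],  # greens and earthy tones
    &quot;sad&quot;:      [&quot;#5e5e5e&quot;, &quot;#9a9a9a&quot;],  # gray tones
    &quot;happy&quot;:    [&quot;#ff9a00&quot;, &quot;#ffd23f&quot;],  # oranges and yellows
    &quot;surprise&quot;: [&quot;#b400ff&quot;, &quot;#ff00aa&quot;],  # bright purple and magenta
    &quot;angry&quot;:    [&quot;#7a0c0c&quot;, &quot;#a31515&quot;],  # dark reds
    &quot;fear&quot;:     [&quot;#ff1f1f&quot;, &quot;#2eff6a&quot;],  # bright red and green
}
&lt;/pre&gt;
&lt;br /&gt;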
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same Wi-Fi network. This method has its disadvantages and advantages: making the page full screen on the phone is hard (not impossible, but the wearer has very little control over the screen), yet there is no significant latency in the display of the emotion signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative would be broadcasting the laptop screen on a streaming platform, so that the display on the phone screen is easier to control, although this requires a remote operator for the laptop. In experiments with OBS and YouTube streaming, however, the latency was so long that the visual lost its purpose of being in sync with the wearer&#039;s real facial expression. It is therefore better to go with the first option, connecting via Wi-Fi.&lt;br /&gt;
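&lt;br /&gt;
If the signifier page is a set of static files, even Python&#039;s built-in server is enough to expose it on the shared Wi-Fi; a minimal sketch, not necessarily the hosting that was used:&lt;br /&gt;
&lt;pre&gt;
# serve the current directory on the LAN; the phone then opens
# http://&lt;laptop-ip&gt;:8000 while connected to the same Wi-Fi
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer((&quot;0.0.0.0&quot;, 8000), SimpleHTTPRequestHandler).serve_forever()
&lt;/pre&gt;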
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
*emotion detection with AI&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
Ekman, P. (1970). Universal Facial Expressions of Emotions. &#039;&#039;California Mental Health Research Digest&#039;&#039;, 8(4), 151–158.&lt;br /&gt;
&lt;br /&gt;
Adams, R. B., Ambady, N., Macrae, C. N., &amp;amp; Kleck, R. E. (2006). Emotional expressions forecast approach-avoidance behavior. &#039;&#039;Motivation and Emotion&#039;&#039;, 30(2), 177–186. https://doi.org/10.1007/s11031-006-9020-2&lt;br /&gt;
&lt;br /&gt;
Ferrari, P. F., &amp;amp; Coudé, G. (2018). Mirror neurons, embodied emotions, and empathy. &#039;&#039;Neuronal Correlates of Empathy&#039;&#039;, 67–77. https://doi.org/10.1016/b978-0-12-805397-3.00006-1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_21.png&amp;diff=134360</id>
		<title>File:Emotiondet 21.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_21.png&amp;diff=134360"/>
		<updated>2023-02-02T00:05:54Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 21.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_23.png&amp;diff=134359</id>
		<title>File:Emotiondet 23.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_23.png&amp;diff=134359"/>
		<updated>2023-02-02T00:04:04Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_22.png&amp;diff=134358</id>
		<title>File:Emotiondet 22.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_22.png&amp;diff=134358"/>
		<updated>2023-02-01T23:57:01Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_21.png&amp;diff=134357</id>
		<title>File:Emotiondet 21.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_21.png&amp;diff=134357"/>
		<updated>2023-02-01T23:57:01Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_20.png&amp;diff=134356</id>
		<title>File:Emotiondet 20.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_20.png&amp;diff=134356"/>
		<updated>2023-02-01T23:57:01Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_19.png&amp;diff=134355</id>
		<title>File:Emotiondet 19.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_19.png&amp;diff=134355"/>
		<updated>2023-02-01T23:57:01Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_18.png&amp;diff=134354</id>
		<title>File:Emotiondet 18.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_18.png&amp;diff=134354"/>
		<updated>2023-02-01T23:57:00Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134346</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134346"/>
		<updated>2023-02-01T21:54:42Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool which uses the facial expressions to detect the emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer side, it is translated into a set of colors to be seen by the viewer who is also triggered by the colors that has a relatively universal meaning. &lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
The evolutionary development of human resulted in many features to create greater societies from extensive groups of people, ideally living in harmony with each other. One of the most influential features that bring us the ability to build and sustain these social structures is our ability to empathize with the other people around us. However, in modern societies, we are getting more and more apart from each other and lost in the rush of our modern-day problems. Our perception of social interactions gets trapped in a closer circle even though we are encountering many different faces even in one day of our lives. As those faces got blurry for us, the lifting effect of being social and sharing decays even more. The project evaluates the effects of emotions through facial expressions in the contexts of empathy and modern-day social structures.&lt;br /&gt;
The process of empathy starts with people imagining themselves in another person’s shoes and trying to form meaning out of it. This involves paying attention to their body language, facial expressions, tone of voice, and words, as well as considering their past experiences and current circumstances. Several experts think that mirror neurons, or at least a similar mechanism, play a role in some forms of basic empathy. Mirror neurons in the mouth and the ability to imitate facial expressions are likely the foundation for being in tune with others emotionally. While the embodiment of emotions does not cover all aspects of empathetic experience, it provides a straightforward explanation of how we may share emotions with others and how this skill could have evolved through evolution. (Coudé &amp;amp; Ferrari, 2018)&lt;br /&gt;
Moreover, it is a naturally evolved survival mechanism to avoid unwanted situations with the help of others around. According to the findings of Adams et al. (2006), two studies suggest that observers can accurately detect movement in angry and fearful faces, whether the faces move towards or away from them. Observers were quicker to correctly identify angry faces moving towards them, suggesting that anger displays convey the intent to approach. The results were not the same for fearful faces, which may indicate that fear signals a &amp;quot;freeze&amp;quot; response rather than fleeing behavior. Translating the emotions of one party to another therefore has an essential role in sharing the “data” collected from the outer world by human body receptors. Besides being a means of non-verbal communication, expressions of emotion are also relatively universal, unlike gestures, which can change from culture to culture. According to Ekman (1970), basic emotions are pancultural in that they are identified, and also expressed with the same facial muscle responses, in similar ways across different cultures.&lt;br /&gt;
The embodiment of emotions through facial expressions is thus a means of communication with the outer world. Unlike vocal communication, however, it is not self-reflective: people cannot see or feel the immediate effect of their own expressions. The expression instead travels to the other party, is evaluated there, and has its effect on them; that is where the reflection forms. One person feels the emotion, while the other sees the facial expression. The viewer is the bridge to the outer world as well as the reflection of the inside.&lt;br /&gt;
&lt;br /&gt;
To explore the nature of these interactions through facial expressions of emotion in a bigger picture, and to disrupt the woven structure of daily life, InterFace seeks to create a space that emphasizes the power of these individual emotions becoming visible and vivid to the outside world.&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model of display did not work because the holder was too heavy to be balanced on the head, and since the camera cannot be too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]] //change this&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted one. The holder is clipped to the strap, which goes around the upper body to ensure stability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup process, which was more or less cyclical and went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System Diagram&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, an instance of the face is captured each second and fed to the DeepFace algorithm. DeepFace outputs the emotion data as a label drawn on the face.&lt;br /&gt;
The default emotion read-write cycle was too fast (&amp;lt;1 sec intervals) for the more stable visual planned later in the process, so a limiter was designed that outputs an emotion only when the same emotion is detected at least two times in a row.&lt;br /&gt;
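&lt;br /&gt;
A minimal sketch of this backend, assuming the documented OpenCV capture and Haar-cascade calls and a recent DeepFace whose analyze() returns a list of result dicts (the exact project code may differ):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch: OpenCV finds a face in the camera frame about once per second,&lt;br /&gt;
# DeepFace labels its emotion, and a limiter emits the emotion only&lt;br /&gt;
# when the same label is detected twice in a row.&lt;br /&gt;
import time&lt;br /&gt;
import cv2&lt;br /&gt;
from deepface import DeepFace&lt;br /&gt;
&lt;br /&gt;
cascade = cv2.CascadeClassifier(&lt;br /&gt;
    cv2.data.haarcascades + &#039;haarcascade_frontalface_default.xml&#039;)&lt;br /&gt;
cap = cv2.VideoCapture(0)   # default camera; the action camera is another index&lt;br /&gt;
last = None                 # previous raw reading&lt;br /&gt;
stable = None               # confirmed, limited emotion&lt;br /&gt;
&lt;br /&gt;
while True:&lt;br /&gt;
    ok, frame = cap.read()&lt;br /&gt;
    if not ok:&lt;br /&gt;
        break&lt;br /&gt;
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)&lt;br /&gt;
    if len(cascade.detectMultiScale(gray, 1.3, 5)) == 0:&lt;br /&gt;
        continue            # no face in this frame&lt;br /&gt;
    result = DeepFace.analyze(frame, actions=[&#039;emotion&#039;],&lt;br /&gt;
                              enforce_detection=False)&lt;br /&gt;
    emotion = result[0][&#039;dominant_emotion&#039;]&lt;br /&gt;
    if emotion == last and emotion != stable:&lt;br /&gt;
        stable = emotion    # same emotion twice in a row: accept it&lt;br /&gt;
        print(stable)&lt;br /&gt;
    last = emotion&lt;br /&gt;
    time.sleep(1)           # roughly one reading per second&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;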
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the web page where all the emotion detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual. &lt;br /&gt;
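&lt;br /&gt;
How the emotion value reaches the page is not detailed here; one possible transport, purely as an assumption, is a small JSON endpoint on the laptop that the p5.js sketch polls with fetch():&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Assumed transport (not specified in the project): the detection loop&lt;br /&gt;
# updates &#039;current&#039;, and the sketch polls /emotion for the latest value.&lt;br /&gt;
from flask import Flask, jsonify&lt;br /&gt;
&lt;br /&gt;
app = Flask(__name__)&lt;br /&gt;
current = {&#039;emotion&#039;: &#039;neutral&#039;}   # written by the backend limiter&lt;br /&gt;
&lt;br /&gt;
@app.route(&#039;/emotion&#039;)&lt;br /&gt;
def emotion():&lt;br /&gt;
    return jsonify(current)&lt;br /&gt;
&lt;br /&gt;
if __name__ == &#039;__main__&#039;:&lt;br /&gt;
    # bind to 0.0.0.0 so a phone on the same wifi can reach the laptop&lt;br /&gt;
    app.run(host=&#039;0.0.0.0&#039;, port=5000)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;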
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a particle system consisting of several ellipses in different sizes and with different alpha values in their colors creates a moving gradient effect. &lt;br /&gt;
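&lt;br /&gt;
The visual itself is plain JavaScript; purely as an illustration (kept in Python like the other sketches here, with a hypothetical color palette), the bookkeeping behind such a moving gradient amounts to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustration only: many translucent ellipses of different sizes drift&lt;br /&gt;
# slowly; drawn on top of each other every frame, they read as a moving&lt;br /&gt;
# gradient in the color of the current stable emotion.&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
# hypothetical emotion-to-color palette; the actual mapping is the author&#039;s&lt;br /&gt;
PALETTE = {&#039;happy&#039;: (255, 200, 0), &#039;angry&#039;: (220, 40, 40), &#039;neutral&#039;: (160, 160, 160)}&lt;br /&gt;
&lt;br /&gt;
class Particle:&lt;br /&gt;
    def __init__(self, w, h):&lt;br /&gt;
        self.x = random.uniform(0, w)&lt;br /&gt;
        self.y = random.uniform(0, h)&lt;br /&gt;
        self.r = random.uniform(20, 120)       # ellipse size&lt;br /&gt;
        self.alpha = random.uniform(10, 90)    # translucency&lt;br /&gt;
        self.vx = random.uniform(-1, 1)&lt;br /&gt;
        self.vy = random.uniform(-1, 1)&lt;br /&gt;
&lt;br /&gt;
    def step(self):&lt;br /&gt;
        self.x += self.vx&lt;br /&gt;
        self.y += self.vy&lt;br /&gt;
&lt;br /&gt;
particles = [Particle(720, 1280) for _ in range(40)]&lt;br /&gt;
color = PALETTE[&#039;neutral&#039;]   # swapped whenever a new stable emotion arrives&lt;br /&gt;
for p in particles:&lt;br /&gt;
    p.step()   # a draw loop would render each ellipse in color with p.alpha&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;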
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same wifi. This method has its advantages and disadvantages: the page cannot easily be made full screen on the phone (not impossible, but not easy either, since the wearer has so little control over the screen), but there is no significant latency in displaying the emotion signifier output.&lt;br /&gt;
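&lt;br /&gt;
Assuming the page is a static folder, one simple way to host it for the phone is Python’s built-in server; the only extra step is finding the laptop’s wifi address for the phone’s browser:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Serve the sketch folder on the local network (an assumption; any&lt;br /&gt;
# local web server works):   python -m http.server 8000 --bind 0.0.0.0&lt;br /&gt;
# Then find the laptop address the phone should open:&lt;br /&gt;
import socket&lt;br /&gt;
&lt;br /&gt;
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)&lt;br /&gt;
s.connect((&#039;8.8.8.8&#039;, 80))   # no traffic is sent; just picks the wifi interface&lt;br /&gt;
print(s.getsockname()[0])    # e.g. 192.168.x.x, open http://THIS_IP:8000&lt;br /&gt;
s.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;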
&lt;br /&gt;
&lt;br /&gt;
An alternative to this solution might be to broadcast the laptop screen on a streaming platform: displayed on the phone this way, the screen is easier to control, but it requires a remote operator for the laptop. In experiments with OBS and YouTube streaming, however, the latency was so long that the visual lost its purpose of being in sync with the wearer’s real facial expression. It is therefore better to go with the first option, connecting via wifi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134327</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134327"/>
		<updated>2023-02-01T18:49:23Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* Software Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that reads facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer’s side, it is translated into a set of colors shown to the viewer, who in turn responds to colors that carry a relatively universal meaning. &lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model of display did not work because the holder was too heavy to be balanced on the head, and since the camera cannot be too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]] //change this&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted one. The holder is clipped to the strap, which goes around the upper body to ensure stability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup process, which was more or less cyclical and went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System Diagram&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, an instance of the face is captured each second and fed to the DeepFace algorithm. DeepFace outputs the emotion data as a label drawn on the face.&lt;br /&gt;
The default emotion read-write cycle was too fast (&amp;lt;1 sec intervals) for the more stable visual planned later in the process, so a limiter was designed that outputs an emotion only when the same emotion is detected at least two times in a row.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the web page where all the emotion detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual. &lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a particle system consisting of several ellipses in different sizes and with different alpha values in their colors creates a moving gradient effect. &lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|320px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same wifi. This method has its advantages and disadvantages: the page cannot easily be made full screen on the phone (not impossible, but not easy either, since the wearer has so little control over the screen), but there is no significant latency in displaying the emotion signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative to this solution might be to broadcast the laptop screen on a streaming platform: displayed on the phone this way, the screen is easier to control, but it requires a remote operator for the laptop. In experiments with OBS and YouTube streaming, however, the latency was so long that the visual lost its purpose of being in sync with the wearer’s real facial expression. It is therefore better to go with the first option, connecting via wifi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134326</id>
		<title>File:Emotiondet 4.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134326"/>
		<updated>2023-02-01T18:49:04Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 4.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134325</id>
		<title>File:Emotiondet 4.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134325"/>
		<updated>2023-02-01T18:48:18Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 4.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_5.png&amp;diff=134324</id>
		<title>File:Emotiondet 5.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_5.png&amp;diff=134324"/>
		<updated>2023-02-01T18:45:30Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 5.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134323</id>
		<title>File:Emotiondet 4.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134323"/>
		<updated>2023-02-01T18:45:29Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 4.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_5.png&amp;diff=134322</id>
		<title>File:Emotiondet 5.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_5.png&amp;diff=134322"/>
		<updated>2023-02-01T18:44:43Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 5.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134321</id>
		<title>File:Emotiondet 4.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134321"/>
		<updated>2023-02-01T18:44:41Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 4.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_5.png&amp;diff=134320</id>
		<title>File:Emotiondet 5.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_5.png&amp;diff=134320"/>
		<updated>2023-02-01T18:44:06Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 5.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134319</id>
		<title>File:Emotiondet 4.png</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_4.png&amp;diff=134319"/>
		<updated>2023-02-01T18:44:04Z</updated>

		<summary type="html">&lt;p&gt;Flux999: Flux999 uploaded a new version of File:Emotiondet 4.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134318</id>
		<title>GMU:Designing Utopias: Theory and Practice/Selena Deger</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=GMU:Designing_Utopias:_Theory_and_Practice/Selena_Deger&amp;diff=134318"/>
		<updated>2023-02-01T18:43:31Z</updated>

		<summary type="html">&lt;p&gt;Flux999: /* Hardware Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;InterFace: How You See Me&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
InterFace is an interactive tool that reads facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer’s side, it is translated into a set of colors shown to the viewer, who in turn responds to colors that carry a relatively universal meaning. &lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Hardware Setup==&lt;br /&gt;
&#039;&#039;Initial Sketch&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_12.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Experiments with the holder&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the head, camera facing the wearer, screen facing out. &lt;br /&gt;
Tools used:&lt;br /&gt;
*Phone holder&lt;br /&gt;
*Headphones&lt;br /&gt;
*Bike helmet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]&lt;br /&gt;
&lt;br /&gt;
This model of display did not work because the holder was too heavy to be balanced on the head, and since the camera cannot be too close to the face (it has to see and detect the whole face), the weight distribution was faulty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Placed on the shoulder, camera facing the wearer, screen facing out. &lt;br /&gt;
*Phone holder&lt;br /&gt;
*Adjustable strap&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_19.JPG|300px]] //change this&lt;br /&gt;
&lt;br /&gt;
This model was more stable than the head-mounted one. The holder is clipped to the strap, which goes around the upper body to ensure stability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Camera&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An external camera (an action camera with a built-in wide-angle lens) was successfully set up during the software setup process, which was more or less cyclical and went hand in hand with the hardware setup.&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_17.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Hardware System Diagram&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:emotiondet_14.png|400px]]&lt;br /&gt;
&lt;br /&gt;
==Software Setup==&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_13.png|700px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 1: Backend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sources used:&lt;br /&gt;
*OpenCV  &#039;&#039;Face Detection&#039;&#039;&lt;br /&gt;
*DeepFace  &#039;&#039;Emotion Recognition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Starting with the OpenCV library, which enables face detection from the camera input, an instance of the face is captured each second and fed to the DeepFace algorithm. DeepFace outputs the emotion data as a label drawn on the face.&lt;br /&gt;
The default emotion read-write cycle was too fast (&amp;lt;1 sec intervals) for the more stable visual planned later in the process, so a limiter was designed that outputs an emotion only when the same emotion is detected at least two times in a row.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_2.png|300px]] [[File: emotiondet_3.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 2: Frontend&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The emotion output is used to control a simple p5.js sketch on the web page where all the emotion detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual. &lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_4.png|300px]] [[File: emotiondet_5.png|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 3: Emotion Signifier Visual&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_11.jpg|400px]]&lt;br /&gt;
&lt;br /&gt;
Using pure JavaScript, a particle system consisting of several ellipses in different sizes and with different alpha values in their colors creates a moving gradient effect. &lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_6.png|300px]] [[File: emotiondet_7.png|300px]]&lt;br /&gt;
[[File: emotiondet_1.mov|300px]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Phase 4: Connection to the hardware &amp;amp; collecting the signifier output&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To show the web page hosted on the laptop, the phone used as the screen must be connected to the same wifi. This method has its advantages and disadvantages: the page cannot easily be made full screen on the phone (not impossible, but not easy either, since the wearer has so little control over the screen), but there is no significant latency in displaying the emotion signifier output.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An alternative to this solution might be to broadcast the laptop screen on a streaming platform: displayed on the phone this way, the screen is easier to control, but it requires a remote operator for the laptop. In experiments with OBS and YouTube streaming, however, the latency was so long that the visual lost its purpose of being in sync with the wearer’s real facial expression. It is therefore better to go with the first option, connecting via wifi.&lt;br /&gt;
&lt;br /&gt;
[[File: emotiondet_8.png|200px]] [[File: emotiondet_9.png|150px]] [[File: emotiondet_10.png|300px]]&lt;br /&gt;
&lt;br /&gt;
==Discussions==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;early sensor experiments&#039;&#039;&lt;br /&gt;
*[[ Analog sound and ultrasonic distance sensor ]]&lt;br /&gt;
*[[ line tracking sensor]]&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
	<entry>
		<id>https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_17.JPG&amp;diff=134317</id>
		<title>File:Emotiondet 17.JPG</title>
		<link rel="alternate" type="text/html" href="https://www.uni-weimar.de/kunst-und-gestaltung/wiki/index.php?title=File:Emotiondet_17.JPG&amp;diff=134317"/>
		<updated>2023-02-01T18:39:13Z</updated>

		<summary type="html">&lt;p&gt;Flux999: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Flux999</name></author>
	</entry>
</feed>