GMU:Designing Utopias: Theory and Practice/Selena Deger

'''InterFace: How You See Me'''

InterFace is an interactive tool that uses facial expressions to detect emotions and creates an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer's side, it is translated into a set of colors seen by the viewer, who in turn is affected by colors that carry a relatively universal meaning.

==Abstract==

==Hardware Setup==

==Software Setup==
[[File: emotiondet_13.png|400px]]


''Phase 1: Backend''

Sources used:
* OpenCV Face Detection
* DeepFace Emotion Recognition

Starting with the OpenCV library, which enables face detection from the camera input, an instance of the face is captured each second and fed to the DeepFace algorithm. DeepFace outputs the emotional data, labeled on the face. The default emotion read-write was too fast (<1 s intervals) to drive the more stable visual planned for later in the process, so a limiter was designed that outputs an emotion only when the same emotion has been detected at least twice in a row.
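
The exact backend code is not documented here; the following is a minimal sketch of this loop, assuming the standard OpenCV (cv2) and DeepFace Python APIs: sample roughly one frame per second, classify it, and only emit an emotion once the limiter has seen it twice in a row.

<syntaxhighlight lang="python">
import time
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)   # default webcam
last_emotion = None         # emotion seen on the previous sample
streak = 0                  # how many consecutive samples agreed

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # recent DeepFace versions return a list of result dicts
    result = DeepFace.analyze(frame, actions=["emotion"],
                              enforce_detection=False)
    emotion = result[0]["dominant_emotion"]

    streak = streak + 1 if emotion == last_emotion else 1
    last_emotion = emotion

    # the limiter: pass the emotion on only once it is stable
    if streak >= 2:
        print(emotion)      # e.g. hand it over to the frontend from here

    time.sleep(1)           # ~1 s sampling interval

cap.release()
</syntaxhighlight>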


[[File: emotiondet_2.png|300px]] [[File: emotiondet_3.png|300px]]


----


''Phase 2: Frontend''

The emotion output is used to control a simple p5.js sketch on the website, where all of the emotion detection visuals come together. This experiment was successful, so it created space for elaborating the emotion-driven visual.
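
How the emotion output reaches the sketch is not documented on this page; one plausible bridge (an illustrative assumption, including the /emotion endpoint name) is a small Flask server that publishes the limiter's latest stable emotion, which the p5.js sketch can then poll with fetch() inside draw().

<syntaxhighlight lang="python">
from flask import Flask, jsonify

app = Flask(__name__)
current_emotion = "neutral"   # in practice, updated by the detection loop

@app.route("/emotion")
def emotion():
    # the sketch polls this and maps the label to its visual state
    return jsonify({"emotion": current_emotion})

if __name__ == "__main__":
    # bind to 0.0.0.0 so other devices on the same wifi can reach it
    app.run(host="0.0.0.0", port=5000)
</syntaxhighlight>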


[[File: emotiondet_4.png|300px]] [[File: emotiondet_5.png|300px]]
 
----


''Phase 3: Emotion Signifier Visual''


[[File: emotiondet_11.jpg|400px]]


Using pure JavaScript, a moving gradient effect is created from a particle system of several ellipses in different sizes, each with a different alpha value in its color.
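
The actual particle system is written in plain JavaScript; as an illustration of the same idea, here is a Python/Pillow transliteration (the PALETTE mapping is hypothetical): many ellipses of varying size and alpha are layered onto each frame so they blend into a soft gradient, and re-rendering with drifting positions makes the gradient move.

<syntaxhighlight lang="python">
import random
from PIL import Image, ImageDraw

W, H = 400, 400
PALETTE = {                       # hypothetical emotion -> color map
    "happy":   (255, 200, 40),
    "sad":     (60, 90, 200),
    "angry":   (220, 50, 50),
    "neutral": (180, 180, 180),
}

def render_frame(emotion, n_particles=60):
    base = Image.new("RGBA", (W, H), (0, 0, 0, 255))
    r, g, b = PALETTE.get(emotion, PALETTE["neutral"])
    for _ in range(n_particles):
        x, y = random.randint(0, W), random.randint(0, H)
        size = random.randint(20, 120)    # different sizes
        alpha = random.randint(10, 80)    # different alpha values
        layer = Image.new("RGBA", (W, H), (0, 0, 0, 0))
        ImageDraw.Draw(layer).ellipse(
            (x - size, y - size, x + size, y + size),
            fill=(r, g, b, alpha))
        # alpha-compositing each translucent ellipse gives the soft blend
        base = Image.alpha_composite(base, layer)
    return base

render_frame("happy").save("frame.png")
</syntaxhighlight>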


[[File: emotiondet_6.png|300px]] [[File: emotiondet_7.png|300px]]
[[File: emotiondet_1.mov|300px]]
----


''Phase 4: Connection to the hardware & collecting the signifier output''

To show the web page hosted on the laptop, the phone used as the screen has to be connected to the same wifi network. This method has both disadvantages and advantages: making the page full screen on the phone is difficult (not impossible, but the wearer has very little control over the screen), yet there is no significant latency in the display of the emotion signifier output.

An alternative would be to broadcast the laptop screen directly to a streaming platform, so that it is easier to control when displayed on the phone screen, although this requires someone to operate the laptop remotely. In experiments with OBS and YouTube streaming, however, the latency was so long that the visual lost its purpose of being in sync with the wearer's real facial expression. It is therefore better to go with the first option, connecting via wifi.
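
The hosting details are not given on this page; a minimal way to realize the wifi option (a sketch, assuming the page consists of plain static files) is Python's built-in http.server, bound to all interfaces so the phone can open the laptop's LAN address, e.g. http://<laptop-ip>:8000, in its browser.

<syntaxhighlight lang="python">
import http.server
import socketserver

PORT = 8000
handler = http.server.SimpleHTTPRequestHandler  # serves the current directory

# bind to 0.0.0.0 so devices on the same wifi network can reach the page
with socketserver.TCPServer(("0.0.0.0", PORT), handler) as httpd:
    httpd.serve_forever()
</syntaxhighlight>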


==Discussions==

''early sensor experiments''
