GMU:Designing Utopias: Theory and Practice/Selena Deger

From Medien Wiki




==Discussions & Further Development==




===Limitations of Facial Emotion Recognition Through Deep Learning Algorithms===


Even though emotion detection with AI has been studied extensively and used in many applications, it is important to consider its problems. One of the main limitations is accuracy. The DeepFace framework claims that, according to experiments, humans recognize faces with an accuracy of 97.53%, and that its models have already attained and surpassed that mark. From my personal experience, I can say that it works for simple emotions; however, its ability to assess complex or micro emotions is not close to the human capability of reading emotional expressions. Current emotion detection systems can still struggle to identify emotions accurately because of cultural and individual differences and context. The technology is also prone to algorithmic bias, leading to inaccuracies for certain groups of people. My own experience with the tool involved much the same hurdles: the emotion detection usually tagged me as sad rather than neutral. This kind of algorithm is a black box, which makes it harder to understand what goes wrong during the process when it gives a very different result than expected. Therefore, it is crucial to approach these tools with a critical mindset and use them as a supplement rather than a replacement for human analysis.

From another point of view, the fragility and inaccuracy of the system make it vulnerable to misinterpretation in an automated setting. It still needs an operator or interpreter who extracts the source of truth. This highlights the importance of human involvement in decision-making processes, especially in complex systems such as emotion recognition. While automating or massifying these kinds of tools, instead of personalizing them, can improve efficiency, they should not completely replace human oversight and intervention.


On the other hand, face recognition systems also have their own disadvantages. One of them is the inability to recognize a face from an off-axis angle: the camera needs to be stable enough to see the face horizontally aligned, which limits the range of actions possible while wearing the device.
 
Another point of bias in facial emotion recognition systems is that they are evaluated at the 2D level. Human facial expressions have historically been investigated using either static 2D images or 2D video sequences. Recent datasets have taken 3D facial expressions into account to better capture the fine structural changes inherent to spontaneous expressions; a 2D-based analysis, however, has difficulty handling large variations in pose and subtle facial movements (Ko, 2018).
 
 
The OpenCV library used in this project retrieves static images of a face from the camera data and assesses whether a human face is present. DeepFace then evaluates the facial emotion of these face screenshots. As a result, the system judges the emotion from a static snapshot of the face rather than from the context of the face, which makes it less reliable than the human eye.
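The per-frame judgment described above could be made somewhat less jittery by voting over several recent frames instead of trusting each snapshot alone. The sketch below is a minimal illustration of that idea, not the code used in the piece; `smooth_emotions` is a hypothetical helper, and the DeepFace call shown in the comment is only how such per-frame labels would typically be obtained.

```python
from collections import Counter, deque

# In the actual setup, each frame's label would come from something like:
#   result = DeepFace.analyze(frame, actions=["emotion"])
#   label = result[0]["dominant_emotion"]
# smooth_emotions() (a hypothetical helper, not part of DeepFace) reduces
# per-frame jitter -- e.g. a single "sad" frame flickering into a run of
# "neutral" ones -- by majority vote over a sliding window of recent labels.

def smooth_emotions(frame_labels, window=5):
    """Majority-vote smoothing over a sliding window of frame labels."""
    recent = deque(maxlen=window)  # keeps only the last `window` labels
    smoothed = []
    for label in frame_labels:
        recent.append(label)
        # most_common(1) returns [(label, count)] for the current window;
        # ties resolve to the earliest-seen label.
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# A noisy sequence where single-frame "sad" detections flicker in:
frames = ["neutral", "sad", "neutral", "neutral", "sad", "neutral"]
print(smooth_emotions(frames))
```

With a window of five frames, the isolated "sad" detections are outvoted by the surrounding "neutral" ones, which is exactly the kind of context a single static screenshot cannot provide.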
 
 
 
===Color Usage to Evoke Emotions===
 
The psychophysical effects and evolutionary interpretations that informed the color selection for the emotion visualizer are a relatively controversial topic in the context of this work.
 
There are many factors, such as culture, age, and gender, that can influence the psychological effects of colors. White, for instance, can signify innocence and purity in some cultures, but death and mourning in others. Similarly, people of different ages and sexes may associate colors with different things. Human vision cannot be assumed to be uniform, and there are even limitations in the perception of color itself, such as the several types of color blindness. Even with these limitations, I think it is still important to look into the general psychological properties of colors and use this knowledge to create meaning that speaks to a wider audience.
 
From this point of view, the choice of colors for this work could go in a more personal direction, depending on who is looking at it, especially when the piece operates on a more exclusive communication level and is not shown to the public. Personalized color selection can help create a deeper emotional connection with the viewer, which can lead to a more impactful and memorable experience. However, it is important to consider the context and purpose of the work before making any color choices.
 
Overall, I think that the use of color in art is a powerful way to make people feel things and get emotions across, even when it carries a more personalized or symbolic meaning, and I am particularly fascinated by the idea of manipulating a viewer's mood and feelings through color. By understanding the psychological properties of colors and using them in a deliberate, thoughtful, and also more personal manner, an artist can have a lasting impact on the viewer.
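The idea of a general palette with a personal override layer, as discussed above, can be sketched as a simple lookup. Everything in this snippet is an illustrative assumption: the hex values are not the palette actually used in the emotion visualizer, and `emotion_color` is a hypothetical helper, not part of the project code.

```python
# A sketch of an emotion-to-color lookup with a per-viewer override layer.
# The default hex values below are illustrative assumptions, not the
# palette actually used in the emotion visualizer.

DEFAULT_PALETTE = {
    "happy": "#FFD700",    # warm yellow, often read as energetic
    "sad": "#4169E1",      # cool blue, commonly linked with melancholy
    "angry": "#B22222",    # saturated red, high arousal
    "neutral": "#C0C0C0",  # desaturated grey, low emotional charge
}

def emotion_color(emotion, personal_palette=None):
    """Return a hex color for an emotion; a viewer-specific palette
    overrides the general (culturally loaded) defaults."""
    if personal_palette and emotion in personal_palette:
        return personal_palette[emotion]
    # Unknown emotions fall back to the neutral color.
    return DEFAULT_PALETTE.get(emotion, DEFAULT_PALETTE["neutral"])

# A viewer who associates sadness with white (as in the mourning
# traditions mentioned above) can override just that one entry:
print(emotion_color("sad", {"sad": "#FFFFFF"}))  # personalized
print(emotion_color("sad"))                      # general default
```

The override layer is the code-level counterpart of the point above: the general mapping stays useful for a wide audience, while an exclusive, one-to-one showing can swap in the viewer's own associations.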
 
 
 
===Limitations Regarding the Wearable Piece===
 
The usage of the setup in the wild showed that the most important factors affecting the visibility of the device are its positioning and the distance between the wearer and the viewer. As mentioned above, the most visible results occurred when the distance was smaller than two arm lengths; it was also noticeable that some curious people turned their heads to look, but the device had already passed by. For the next phase, the setup will be more controlled and assistive, yet still interactive, in the sense that there will be no active motion but a more exclusive communication between the parties.
 
Given the intended use, the lighting and size of the display were also not ideal. The screen was too small, and it became negligible to most people further away. But this problem cannot be fixed simply by making the screen bigger, because then it stops being an extension of the person wearing it. Hence, the visual could instead be placed behind the wearer in a more immersive environment.