GMU:I and my Max/Mojjo Krenz

===Project===
'''Chicken Tracker'''


We humans communicate: at work, with friends, colleagues and relatives. To classify the messages communicated to us, we use our senses. In direct verbal communication, our auditory perception is primarily engaged. At first glance one might think the acoustic channel is clear and unambiguous. What may be true in individual cases, however, reveals itself in everyday life as a diverse, complex act. Human communication and perception are multi-layered: besides the acoustic channel, we use optical, olfactory and tactile stimuli to communicate with our fellow human beings. What happens to our counterparts' understanding when visual communication is restricted, when feeling and smelling are eliminated entirely, and when eye contact, facial expressions and gestures are distorted by the Internet?

[[File:photo_2021-01-14 17.16.54.jpeg|400px]]


The current pandemic and the associated lockdown limit verbal, interpersonal exchange to two elementary channels: acoustic communication and the visual communication made possible by video telephony come to the fore in our everyday lives. However, contact via the medium of the Internet lacks the possibility of expressing empathy and feelings. For this purpose I have invented the "Chicken Tracker". As a support tool for the usual video chat, the "Chicken Tracker" is supposed to visually amplify a feeling.

The basic idea is simple. Participant A starts talking and initially appears completely realistic in the video chat. If A talks himself into a rage, the volume of his acoustic output increases steadily. The amplified signal in turn makes his skin color increasingly red through the "Chicken Tracker", until A appears completely red. Once in a rage, the volume of A's output keeps rising, and for as long as it stays above a threshold X, A turns into a chicken.

The whole idea is about loosening up online discussions and not taking yourself too seriously.

[[File:photo_2021-01-14 17.15.54.jpeg|400px]]
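
As a compact restatement of this rule, the sketch below assumes a microphone level already normalized to 0..1; the threshold value standing in for "X" is a placeholder of mine, not a value from the project.

<syntaxhighlight lang="python">
# Sketch of the Chicken Tracker rule: volume drives redness, and above
# the threshold X the video is swapped for a chicken. `level` is an
# assumed microphone level normalized to 0..1.
CHICKEN_THRESHOLD_X = 0.8   # placeholder value for "X"

def chicken_tracker(level: float) -> str:
    if level > CHICKEN_THRESHOLD_X:
        return "show chicken"                # A appears as a chicken
    return f"redden skin by {level:.0%}"     # redness grows with volume

print(chicken_tracker(0.3))   # -> redden skin by 30%
print(chicken_tracker(0.9))   # -> show chicken
</syntaxhighlight>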


===Development===
The project has several stages of development.


====Change certain colours in Max video using grab====
To do this, I built a patch that exchanges bright and dark colours via a matrix operation.


<gallery>
File:Bildschirmfoto 2021-01-18 um 17.15.33.png
</gallery>
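
The patch itself exists only as a Max/Jitter file, so as a rough illustration of the idea, here is a Python stand-in. Using OpenCV for the webcam grab and a full channel inversion as the bright/dark exchange are my assumptions, not details taken from the patch.

<syntaxhighlight lang="python">
# Illustrative sketch: exchange bright and dark colours in one webcam
# frame, roughly what the Max patch does on a Jitter matrix.
import cv2  # assumption: OpenCV stands in for Max's grab object

cap = cv2.VideoCapture(0)   # open the default webcam, like jit.grab
ok, frame = cap.read()      # one frame as a uint8 matrix (NumPy array)
cap.release()

if ok:
    # Inverting every channel maps bright values to dark ones and vice
    # versa: the simplest "exchange bright/dark colours" matrix operation.
    swapped = 255 - frame
    cv2.imwrite("swapped.png", swapped)
</syntaxhighlight>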


====Tracking human skin color by RGB code====
<gallery>
File:Bildschirmfoto 2021-01-15 um 09.44.05.png|Source: https://colorswall.com/palette/2513/
</gallery>
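
A hedged sketch of this stage: mark every pixel whose RGB value lies close to a reference skin tone. The reference tone and the tolerance below are illustrative choices of mine; the actual values would come from the palette linked above.

<syntaxhighlight lang="python">
# Sketch: track skin by RGB code, i.e. mask pixels near a reference tone.
import cv2
import numpy as np

SKIN_RGB = np.array([224, 172, 105], dtype=np.float32)  # example tone
TOLERANCE = 60.0                                        # assumed max distance

frame = cv2.imread("frame.png")                         # placeholder input
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32)

# Euclidean distance of every pixel to the reference tone.
dist = np.linalg.norm(rgb - SKIN_RGB, axis=-1)
mask = dist < TOLERANCE                                 # True where "skin"

cv2.imwrite("skin_mask.png", (mask * 255).astype(np.uint8))
</syntaxhighlight>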


====Changing color by sound====
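No patch is posted for this stage yet. As a sketch of the idea, assuming a block of microphone samples is already available as floats in -1..1 (in Max, a level meter on the mic signal would provide this), the loudness can drive a colour parameter:

<syntaxhighlight lang="python">
# Sketch: derive a colour change from loudness. `samples` stands in for
# one block of microphone input as floats in -1..1.
import numpy as np

def level_to_hue_shift(samples: np.ndarray, max_shift: int = 90) -> int:
    """Map the RMS level of an audio block to a hue shift in degrees."""
    rms = float(np.sqrt(np.mean(samples ** 2)))  # 0 (silence) .. ~1 (loud)
    return int(min(rms, 1.0) * max_shift)

samples = np.random.uniform(-0.3, 0.3, 1024)     # stand-in audio block
print("hue shift:", level_to_hue_shift(samples), "degrees")
</syntaxhighlight>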


====Changing skin color to red by volume====
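This stage is also still open in the patch. Combining the two sketches above, a hedged outline of the intended effect: push the red channel inside the skin mask in proportion to the current level. The frame, mask and level are assumed to come from the earlier steps; the demo data is random.

<syntaxhighlight lang="python">
# Sketch: redden skin pixels in proportion to the speaker's volume.
import numpy as np

def redden_skin(frame: np.ndarray, mask: np.ndarray, rms: float) -> np.ndarray:
    out = frame.astype(np.float32)
    # Raise the red channel (channel 0, RGB order assumed) of skin pixels
    # toward 255 with the volume; at rms == 1.0 the skin is fully red.
    out[mask, 0] += (255.0 - out[mask, 0]) * min(rms, 1.0)
    return out.clip(0, 255).astype(np.uint8)

# Tiny demo with random data standing in for a real frame and skin mask.
frame = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(redden_skin(frame, mask, rms=0.7)[1, 1])
</syntaxhighlight>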


====Building two interconnected patches====
*to use the OSC protocol (see the sketch after this list)
*alternatively "jit.net.send/recv"
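
In Max the connection would use OSC objects, or jit.net.send/jit.net.recv for whole matrices. As an illustration of the message exchange only, here are a sender and a receiver in Python with the python-osc package; the address "/chicken/level" and port 9000 are placeholders of mine.

<syntaxhighlight lang="python">
# Sketch: two processes exchanging a level value over OSC, standing in
# for the two interconnected Max patches. Requires the python-osc package.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# --- patch A: send the current volume level --------------------------
def send_level(level: float) -> None:
    SimpleUDPClient("127.0.0.1", 9000).send_message("/chicken/level", level)

# --- patch B: receive the level and react ----------------------------
def on_level(address: str, level: float) -> None:
    print(f"{address}: {level}")  # here the video patch would react

if __name__ == "__main__":
    dispatcher = Dispatcher()
    dispatcher.map("/chicken/level", on_level)
    BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
</syntaxhighlight>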






===Exercises===
"Greenscreen"  
"Greenscreen"  


#Try two videos in one patch - the goal is that they run mixed into each other.
[[:File:VIDEO_PLAY1.maxpat]]
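
The patch above is the Max solution to this exercise. As a language-neutral illustration of the goal (two videos running mixed into each other), a weighted blend of two frames, similar to what Jitter's jit.xfade does; the frame files and the fixed mix weight are placeholders.

<syntaxhighlight lang="python">
# Sketch: mix two video frames into each other with a crossfade weight.
import cv2
import numpy as np

a = cv2.imread("video_a_frame.png").astype(np.float32)
b = cv2.imread("video_b_frame.png").astype(np.float32)  # same size assumed

xfade = 0.5                          # 0.0 = only A, 1.0 = only B
mixed = (1.0 - xfade) * a + xfade * b
cv2.imwrite("mixed.png", mixed.astype(np.uint8))
</syntaxhighlight>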


*https://www.studioroosegaarde.net/info
*https://www.studioroosegaarde.net/project/space-waste-lab
