[[:File:firststeps.maxpat]]
                                                         
=Motion to sound // Aliya Sayfart=


''Idea''


The initial idea is to use the face as an instrument with the help of the webcam. For this, the face-detection tool cv.jit.faces is to be used.


[[File:BodyFace.jpg|400px]]
fig. 1 (Body/Face)


As shown in fig. 1 (Body), the webcam image is divided into four quadrants. If the face is detected in quadrant Q1, a C is heard; if it is detected in Q2, the note G is played, and so on.

More generally, the aim of the project is to play sounds by moving the body. I would like to use three webcams that run under different tracking systems. The movement is captured in Max/MSP and converted into numbers; these numbers are then turned into sounds or used to trigger them.
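
In the patch the face detection itself is meant to be done with cv.jit.faces inside Max. Just to illustrate the quadrant-to-note logic outside Max, a small Python/OpenCV sketch could look like this (the Haar cascade, the webcam index and the notes for Q3/Q4 are assumptions here, not part of the patch):

<syntaxhighlight lang="python">
import cv2

# Quadrant -> note mapping from fig. 1: Q1 = C, Q2 = G (the notes for
# Q3/Q4 are placeholders, the page only fixes the first two).
NOTES = ["C", "G", "E", "A"]

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)      # assumed webcam index
last_note = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        cx, cy = x + w // 2, y + h // 2                  # face centre
        half_w, half_h = frame.shape[1] // 2, frame.shape[0] // 2
        quadrant = (1 if cx >= half_w else 0) + (2 if cy >= half_h else 0)
        note = NOTES[quadrant]
        if note != last_note:                            # trigger only on change
            print("face in Q%d -> play %s" % (quadrant + 1, note))
            last_note = note
        # for the "extra tool" described further down, the NOTES list
        # could be re-shuffled (random.shuffle) on a key press
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) == 27:                             # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>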


fig. 1 (Face): This could be an extra tool that you can switch to. When a key is pressed, e.g. Enter, the program zooms in and the order of the notes in the quadrants changes randomly.

Here is a little impression from outside:


[[Datei:347F475E-2330-4DA2-B6E9-726813D376A8.JPEG|mini|]]


[[File:closeopen1.jpg|400px]]
close/open eyes

You can switch to this variant when it gets boring: with closed eyes a pitch from e.g. D to F♯ is played, and with open eyes from F♯ to C. The two variants could also alternate one after the other.

The first step is to work out how to get three webcams running in Max at the same time. For this I used jit.qt.grab (or jit.dx.grab on Windows) and checked whether I can have three of these objects running simultaneously for the three different inputs.
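
Inside Max this is done with three jit.qt.grab (or jit.dx.grab) objects. As a quick check outside Max whether the machine can really deliver three camera streams at once, a small Python/OpenCV sketch might look like this (the device indices 0–2 are assumptions and depend on how the cameras are enumerated):

<syntaxhighlight lang="python">
import cv2

# try to open three webcams at once; the device indices 0-2 are
# assumptions and depend on how the cameras are enumerated
caps = [cv2.VideoCapture(i) for i in range(3)]
for i, cap in enumerate(caps):
    print("camera", i, "opened:", cap.isOpened())

while all(cap.isOpened() for cap in caps):
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    if len(frames) < 3:
        break
    for i, frame in enumerate(frames):        # one window per camera
        cv2.imshow("camera %d" % i, frame)
    if cv2.waitKey(1) == 27:                  # Esc quits
        break

for cap in caps:
    cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>
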
So now we come to the point where I start to play around with different possibilities for tracking.
; Face tracking : runs with FaceOSC
**For this I need to install FaceOSC and connect it to Max with the object udpreceive 8338; the number is the UDP port FaceOSC sends to, and you can change it if you like. Then you need the object OSC-route /…, with which you can say that you want to route e.g. the mouth or the left eyebrow. If the routed value changes, it produces a bang which triggers the sound (a small sketch of reading these messages outside Max follows below).
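
To illustrate what arrives on that port, here is a small Python sketch using the python-osc package in place of udpreceive; /gesture/mouth/height and /gesture/eyebrow/left are standard FaceOSC gesture addresses, while the 0.05 change threshold is just an assumption:

<syntaxhighlight lang="python">
from pythonosc import dispatcher, osc_server

last = {}

def on_gesture(address, value):
    # fire a "bang" only when the routed value really changes,
    # mirroring the change -> bang -> sound logic of the patch
    if abs(last.get(address, 0.0) - value) > 0.05:    # assumed threshold
        last[address] = value
        print("bang from", address, "value:", value)  # trigger the sound here

disp = dispatcher.Dispatcher()
disp.map("/gesture/mouth/height", on_gesture)         # mouth opening
disp.map("/gesture/eyebrow/left", on_gesture)         # left eyebrow height

# FaceOSC sends to UDP port 8338 by default, the same number as in
# the Max object "udpreceive 8338"
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 8338), disp)
print("listening for FaceOSC on port 8338 ...")
server.serve_forever()
</syntaxhighlight>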


***The Patch:


[[File:roundsound.jpg|400px]]
2. This would be a simpler alternative: the notes would get fixed positions in the quadrants and would be triggered as soon as a face is detected.
