GMU:Home Made Bioelectronics/Marah Doleh/Documentation of progress



===What has been done so far===
*Testing sending data from Arduino to Touchdesigner, using an ultrasonic distance sensor (used only for this test)
*:Code on the Arduino
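A quick way to sanity-check this step is to read the Arduino's serial stream from Python before pointing Touchdesigner's Serial DAT at it. The sketch below is not the project's code: it assumes the Arduino prints one distance reading per line at 9600 baud, and the port name is a placeholder that differs per machine.
<syntaxhighlight lang="python">
# Minimal check of the Arduino serial stream (assumption: one numeric
# reading per line at 9600 baud; the port name is a placeholder).
import serial  # pyserial

PORT = "COM3"   # e.g. "/dev/ttyUSB0" on Linux, "/dev/cu.usbmodem..." on macOS
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # timeout or empty read
        try:
            distance_cm = float(line)
        except ValueError:
            continue  # skip partial or garbled lines
        print(distance_cm)
</syntaxhighlight>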

*Manipulating a 3D object using the data received from Arduino
*:Links used for this step: Feedback in Touchdesigner, TOUCHDESIGNER & ARDUINO
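Inside Touchdesigner, one way to drive the 3D object from the incoming value is a CHOP Execute DAT. The sketch below is only an illustration under assumptions: the parsed Arduino value arrives as a single CHOP channel monitored by the CHOP Execute (with Value Change enabled), the scene has a Geometry COMP named 'geo1', and the sensor range and mapping are arbitrary.
<syntaxhighlight lang="python">
# CHOP Execute DAT callback (illustrative sketch, not the project's network).
# Assumes the monitored CHOP carries the distance value from the Arduino and
# that a Geometry COMP named 'geo1' exists; enable "Value Change" on the DAT.

SENSOR_MIN = 2.0     # assumed sensor range in cm
SENSOR_MAX = 200.0

def onValueChange(channel, sampleIndex, val, prev):
    # Map the distance to a uniform scale between 0.2 and 2.0.
    s = tdu.remap(val, SENSOR_MIN, SENSOR_MAX, 0.2, 2.0)
    geo = op('geo1')
    geo.par.scale = s
    # Arbitrary extra mapping: closer objects rotate a little faster.
    geo.par.ry = geo.par.ry.eval() + (2.2 - s)
    return
</syntaxhighlight>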

*Audio-reactive 3D model
*:Links used for this step: [https://youtu.be/SJZIMGg-thY Instancing in Touchdesigner], [https://youtu.be/R7sAomk2vR4 Touchdesigner audio analysis]
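One way to combine the two linked techniques is to generate the per-instance transform channels in a Script CHOP and use it as the Geometry COMP's instance source. The sketch below is a rough illustration rather than the project's setup: it assumes an Analyze CHOP named 'analyze1' (e.g. RMS of an Audio Device In or Audio File In CHOP) provides one level channel, and it places the instances on a circle whose radius and scale follow that level.
<syntaxhighlight lang="python">
# Script CHOP callbacks (illustrative sketch). The operator name 'analyze1'
# and the number of instances are assumptions; map the resulting tx/ty/tz and
# scale channels in the Geometry COMP's instancing parameters.
import math

NUM_INSTANCES = 60
BASE_RADIUS = 1.0

def onSetupParameters(scriptOp):
    return

def onPulse(par):
    return

def onCook(scriptOp):
    level = op('analyze1')[0].eval()   # current audio level (e.g. RMS)

    scriptOp.clear()
    scriptOp.numSamples = NUM_INSTANCES
    tx = scriptOp.appendChan('tx')
    ty = scriptOp.appendChan('ty')
    tz = scriptOp.appendChan('tz')
    sc = scriptOp.appendChan('scale')

    radius = BASE_RADIUS + level * 2.0          # the ring breathes with the music
    txv, tyv, tzv, scv = [], [], [], []
    for i in range(NUM_INSTANCES):
        a = 2.0 * math.pi * i / NUM_INSTANCES
        txv.append(radius * math.cos(a))
        tyv.append(radius * math.sin(a))
        tzv.append(0.0)
        scv.append(0.1 + level)                 # louder audio -> larger instances
    tx.vals, ty.vals, tz.vals, sc.vals = txv, tyv, tzv, scv
    return
</syntaxhighlight>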


===What's next?===

====Arduino====

*Send '''pulse''' data to TD (see the sketch below)
*Send the pulse data to the emotion recognition code
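How the pulse value travels is still open. One possible arrangement, sketched below, is a small bridge script on the computer that reads the pulse from the serial port and forwards it as OSC both to Touchdesigner (OSC In CHOP) and to the emotion recognition process. The ports, the OSC address and the assumption that the Arduino prints one BPM value per line are placeholders.
<syntaxhighlight lang="python">
# Hypothetical serial-to-OSC bridge (a sketch, not the project's code).
# Assumes the Arduino prints one pulse value (BPM) per line at 9600 baud.
import serial                                     # pyserial
from pythonosc.udp_client import SimpleUDPClient  # python-osc

PORT = "COM3"     # placeholder serial port
BAUD = 9600

td_client = SimpleUDPClient("127.0.0.1", 7000)       # Touchdesigner OSC In CHOP port
emotion_client = SimpleUDPClient("127.0.0.1", 7001)  # emotion recognition script port

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            bpm = float(line)
        except ValueError:
            continue
        td_client.send_message("/pulse", bpm)       # -> TD, arrives as channel 'pulse'
        emotion_client.send_message("/pulse", bpm)  # -> emotion recognition code
</syntaxhighlight>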

====Python | Emotion recognition====

*Emotion recognition
*Send the data to TD (see the sketch below)
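A possible shape for this part, sketched below: detect the dominant emotion from the webcam with the open-source fer package (one option among several; DeepFace is another) and send it to Touchdesigner over OSC as an index so a CHOP can pick it up. The emotion list, port and address are assumptions, not decisions.
<syntaxhighlight lang="python">
# Hypothetical emotion recognition loop (a sketch, not the project's code).
# Uses OpenCV for the webcam and the 'fer' package for emotion detection;
# the emotion-to-index mapping and the OSC port are assumptions.
import cv2
from fer import FER
from pythonosc.udp_client import SimpleUDPClient

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

detector = FER()                              # face + emotion detector
client = SimpleUDPClient("127.0.0.1", 7000)   # Touchdesigner OSC In CHOP port
cap = cv2.VideoCapture(0)                     # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    emotion, score = detector.top_emotion(frame)   # e.g. ("happy", 0.92)
    if emotion in EMOTIONS:
        client.send_message("/emotion", EMOTIONS.index(emotion))
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):           # press q to stop
        break

cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>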

====Touchdesigner====

*Test other forms of 3D (point clouds, more complex instancing, ...)
*Try other sets of reactions to music
*Create multiple scenes for different emotions
*Intersection with Arduino: react to the '''pulse''' input
*Intersection with Python: trigger scene changes based on the '''emotional''' input (see the scene-switching sketch below)
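For the emotion-triggered scene change, one simple pattern (matching the sender sketch above) is to receive the emotion index with an OSC In CHOP and switch between pre-built scenes with a Switch TOP. The CHOP Execute DAT sketch below assumes a Switch TOP named 'switch1' with one input per emotion scene; all operator names are assumptions.
<syntaxhighlight lang="python">
# CHOP Execute DAT on the OSC In CHOP carrying the emotion index
# (illustrative sketch; 'switch1' is an assumed Switch TOP with one
# input per emotion scene). Enable "Value Change" on the DAT.

def onValueChange(channel, sampleIndex, val, prev):
    scene_index = int(round(val))
    switch = op('switch1')
    # Clamp to the number of connected scene inputs to avoid out-of-range errors.
    max_index = max(len(switch.inputs) - 1, 0)
    switch.par.index = min(max(scene_index, 0), max_index)
    return
</syntaxhighlight>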