GMU:My Computer, Max, and I/Pedro Ramos/Neurosynthesizer (Human-Machine Feedback)

As a rather contemporary issue, I am interested in the potential opportunities and challenges of music creation with Artificial Intelligence - the interconnection between human and technological agents. As a time-efficient solution, rather than developing the system from scratch, using a stand-alone technology proved to be a reasonable alternative. Although to some extent a black box, Wekinator, a software tool for applying Machine Learning to creative applications, seemed a good choice for creatively exploring a connection between Max/MSP and Machine Learning.


[[File:Diagrams_Neurosynth_02.JPG|800px]]


In this framework, the human agent emits signals from their body, which are processed through sound synthesis and constitute a neuron of the system. The synthesis is built from sine wave oscillators, envelopes and sound effects. The same signals are received by the technological agent, processed by the Machine Learning model, and sent back to Max/MSP. The intercommunication between the human and technological agents through the music created in this feedback cycle is the main interest of the work, which is currently being developed as an investigation of the affective reaction between a human and a technological agent in music creation, and of the use of AI as a creative tool for making music.
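
The exact patch is not documented here, so the following is only a minimal sketch of the feedback cycle, written in Python instead of Max/MSP and assuming Wekinator's default OSC configuration (inputs received on port 6448 at /wek/inputs, model outputs sent to port 12000 at /wek/outputs). The sensor values are placeholders; in the actual piece the Max/MSP patch streams the body signals and maps the returned values onto the oscillators, envelopes and effects.

<syntaxhighlight lang="python">
# Sketch of the human-machine feedback loop via OSC (assumes the
# python-osc package and Wekinator's default ports/addresses).
import random
import threading
import time

from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Forwards the "body signals" (here: placeholder values) to Wekinator.
to_wekinator = SimpleUDPClient("127.0.0.1", 6448)

def on_wekinator_output(address, *values):
    # In the Max/MSP patch these values would drive oscillator
    # frequencies, envelopes and effect parameters.
    print(address, values)

def listen_for_outputs():
    dispatcher = Dispatcher()
    dispatcher.map("/wek/outputs", on_wekinator_output)
    server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
    server.serve_forever()

# Listen for the model's outputs in the background.
threading.Thread(target=listen_for_outputs, daemon=True).start()

# Continuously stream input features to the learner.
while True:
    features = [random.random(), random.random()]  # stand-ins for sensor data
    to_wekinator.send_message("/wek/inputs", features)
    time.sleep(0.05)
</syntaxhighlight>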


[[File:Diagrams_Neurosynth_04.JPG|400px]]