User:Jo El/1

'''From Random to Fiction'''<br />
''Professor:'' [[Ursula Damm]]<br/>
''Credits:'' 18 [[ECTS]], 16 [[SWS]]<br/>
''Semester:'' WS 2019<br/>


[https://vimeo.com/417414029?activityReferer=1 Video on Vimeo]
Project by Joel Schaefer
<br/>
{{#ev:youtube|BRHOUWcj2sI|560|left}}
<br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
<br/><br/><br/><br/><br/><br/>
 
During the winter semester 2019 I took the project module "From Random to Fiction" with Ursula Damm.
My intention at the beginning of the course was to do something generative and real-time with particles and sound.
So not much of a conceptual idea there, close to none to be honest, but I did have a technical idea in mind.
 
In the previous semesters I had been getting into real-time graphics software (Unity & Processing).
Besides that, I had already been making music with voltage-controlled analog gear (modular synthesizers) for some time, which is more like playing an instrument (so also real-time) than like writing music in a digital audio workstation. So I came up with the idea of making Unity and my modular synth work together as performative tools: not just on an audio-reactive level, but on a level where the digital and analog data flows are tightly connected, so that there is no hierarchy between sound and visuals, as there is in audio reactivity, where the visuals merely react to different aspects of the incoming sound. My claim was that already at the stage where the sound is generated there should be a connection to the visuals, and vice versa. In modular synthesizers, the signals that determine how and when sound is generated are represented by voltages in a range of at most -10 V to +10 V, mostly less. In digital graphics software, the parameters that determine how, when and where a visual element appears can be represented by floats.
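
To make that mapping concrete, here is a minimal sketch of the idea in Processing (the voltage range and what the float ends up controlling are placeholder assumptions for the example, not the project's exact values):

<pre>
// Minimal sketch of the CV <-> float mapping. The -5 V..+5 V range and the
// 0..1 parameter range are assumptions for illustration, not project values.
float cvToParam(float volts, float vMin, float vMax) {
  // a bipolar control voltage becomes a normalized float,
  // usable e.g. as a particle size, noise amount or spawn rate
  return constrain(map(volts, vMin, vMax, 0, 1), 0, 1);
}

float paramToCv(float param, float vMin, float vMax) {
  // the other direction: a normalized float from the simulation
  // becomes a voltage to send out through the CV interface
  return map(constrain(param, 0, 1), 0, 1, vMin, vMax);
}

void setup() {
  println(cvToParam(-5, -5, 5));   // 0.0
  println(cvToParam(2.5, -5, 5));  // 0.75
  println(paramToCv(0.5, -5, 5));  // 0.0
}
</pre>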
 
So the hardware part of my project was clear: I needed an interface able to convert between digital and analog modulation data. In parallel to the project module I was taking the course "Analog Circuits and Interfaces" with Clemens Wegener, and together with my friend Paul Plattner we developed our [[CV Interface and Function Generator]]. This device was able to do exactly what I needed for this project; the only restriction was that it had just 2x2 in/out channels (check the link for more info).
 
Also in parallel I took the course "Maschinelles Lernen" (machine learning) by Alexander König, because my interest in AI was constantly growing, though I didn't know much about it yet. Through this seminar I learned a lot more, but the result wasn't that I started coding or using AI algorithms at this point. Instead I discovered the concept of Artificial Consciousness, which isn't an applied field like Artificial Intelligence but rather a philosophical discussion about what makes a system alive and what doesn't, what intelligence even means, and how that differs from being a (more or less) intelligent being that is able to experience its own existence and the world.
 
Georg Trogemann and Ursula Damm pointed me to a 1995 paper by David Chalmers on "the hard problem of consciousness", which I found highly interesting and through which I found the place, or rather the people, who have been working scientifically on these topics for a long time. That was great for me, because I had been interested in the phenomenon of consciousness and the philosophy of mind since my school days, and though I cancelled my philosophy studies in Rostock, my general interest in philosophy never really disappeared. Still, it felt like rediscovering it through this topic and its connection to my other work in my media art studies.
 
So I spent a lot of my time in this course reading papers and listening to podcasts with very different people talking about AI and consciousness: philosophers, physicians, media scientists, neuroscientists, psychologists, psychotherapists, biologists, ethnopharmacologists, ...
The scientific field of consciousness studies is still young, in the sense that it is still in its beginning stages, and it is spread widely across different disciplines.
 
I found it hard at that time to come up with coding experiments directly from the topics and ideas I had and read about, mainly because my programming skills weren't that sophisticated yet, I guess. As I said, the classical route of coding and using neural networks wasn't what I was searching for at that stage, by which I mean using highly customizable libraries and frameworks to solve specific tasks; I couldn't really think of such tasks, or of different ways to solve them effectively, at that point. I needed to take a step back first and try to really understand what is actually going on at the computational level in all the different approaches that get summed up under the term "Artificial Intelligence".
 
Through different presentations during the course I got to know many new and very creative works around AI, like "Emissaries" by Ian Cheng for example, which I found highly inspirational. This and other more Artificial-Life-oriented approaches gave me some ideas and directions I wanted to go in.
 
So I got back to Processing and practised with Daniel Shiffman's book "The Nature of Code", starting with the vector math basics and steering behaviours and moving on to autonomous agents and genetic algorithms. My aim at that time was to be able, by the end of the course, to write my own "intelligent" agents with very simple neural network algorithms. I didn't get as far as neural nets, because evolutionary computing and artificial life simulations took me quite a while to get into, but they were also very interesting to me, so I decided to base my project more around that.
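
To give a flavour of those exercises, here is a minimal seek-steering sketch in the style of "The Nature of Code" (illustration only, not code from the project itself): an agent steers towards the mouse by adding a limited steering force to its velocity.

<pre>
// Minimal seek behaviour in the style of "The Nature of Code"
// (illustration only, not code from the project).
PVector pos, vel;
float maxSpeed = 3;
float maxForce = 0.1;

void setup() {
  size(400, 400);
  pos = new PVector(width / 2, height / 2);
  vel = new PVector(0, 0);
}

void draw() {
  background(20);
  // desired velocity points from the agent towards the mouse, at full speed
  PVector desired = PVector.sub(new PVector(mouseX, mouseY), pos);
  desired.setMag(maxSpeed);
  // steering force = desired minus current velocity, limited in strength
  PVector steer = PVector.sub(desired, vel);
  steer.limit(maxForce);
  vel.add(steer);
  vel.limit(maxSpeed);
  pos.add(vel);
  fill(255);
  ellipse(pos.x, pos.y, 12, 12);
}
</pre>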
 
Time to come to the actual project, "Artificial Ocean". Please take a look at the video for visual documentation of the setup, some recorded final outcomes and a rather amateurish recording of a very early test projection in my living room (because of the Covid-19 situation I wasn't able to access any university infrastructure, and because I didn't take the time to make a better recording the quality turned out really bad, but you can at least see and hear the audio and video working together in real time).
 
It was planned as a simple wall projection installation, with no special shapes or masks in the projection mapping stage. Because of Covid-19 it didn't make sense for me at that time to plan the hardware setup any further.
 
The style was meant to be generative: basically a sound and video setup that constantly generates related, continuously changing output from "itself", based on the artificial life simulation running in the background. So I thought of it as three layers, video, audio and computation, all connected through abstract relations.
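
One way to picture that structure (just a sketch; the listener interface and its method names are made up for illustration, they are not from the project code): the computation layer runs the simulation and notifies the video and audio layers whenever something it considers meaningful happens, and each layer decides for itself what to do with the event.

<pre>
// Sketch of the three-layer idea (names are illustrative, not from the project code).
// The computation layer owns the simulation; video and audio only listen to it.
interface SimListener {
  void onAgentDeath(float x, float y);
  void onOldestAgentMoved(float normX);
}

class VideoLayer implements SimListener {
  public void onAgentDeath(float x, float y) {
    // e.g. push a circular impulse into the particle flow field at (x, y)
  }
  public void onOldestAgentMoved(float normX) {
    // e.g. shift the x offset of the Perlin noise velocity field
  }
}

class AudioLayer implements SimListener {
  public void onAgentDeath(float x, float y) {
    // e.g. send a trigger through the CV interface to the clocked noise generator
  }
  public void onOldestAgentMoved(float normX) {
    // e.g. convert normX into a control voltage for the filter cutoff
  }
}

SimListener[] listeners;

void setup() {
  listeners = new SimListener[] { new VideoLayer(), new AudioLayer() };
  // the simulation would call something like this whenever an agent dies
  for (SimListener l : listeners) l.onAgentDeath(120, 80);
}
</pre>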
 
As I still hadn't really come up with a story or anything else that would guide me towards more well-thought-out system structures in the coding stage, I chose events like the death of an agent, the birth of a new agent, or the position, velocity and size of the longest-living agent as data for the particle system in Unity and for the modular synthesizer. In the particle system, the death of an agent was shown as a circular impulse on the flow field of the particles. At the same time, the death triggered a clocked analog noise generator from which I sampled different modulation signals; those then went into different parameters of a stereo delay sitting behind the synth voice, which was also triggered. So on every death a wide variety of delayed sounds played, with corresponding visual feedback. Another example is the position of the oldest agent, whose x value determined the frequency of a filter layered over the background drone sound. The same value was hooked up to the x transform of a Perlin noise velocity field which affects the particles' positions (wave movement).
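
For the visual side of the death event, the idea is roughly the following (a sketch with assumed grid size, radius and strength, not the actual project code): every flow-field cell within a certain radius of the dying agent gets a velocity component pushing outwards, fading with distance, which the particles then follow.

<pre>
// Sketch of the "death impulse" on the flow field (grid size, radius and
// strength are assumptions for illustration, not the project's values).
int cols = 40, rows = 30;
PVector[][] field = new PVector[cols][rows];

void setup() {
  size(800, 600);
  for (int i = 0; i < cols; i++)
    for (int j = 0; j < rows; j++)
      field[i][j] = new PVector(0, 0);
  // example: an agent just died at (400, 300)
  addDeathImpulse(new PVector(400, 300), 150, 2.0);
}

void addDeathImpulse(PVector deathPos, float radius, float strength) {
  float cellW = width / float(cols);
  float cellH = height / float(rows);
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      PVector cellCenter = new PVector((i + 0.5) * cellW, (j + 0.5) * cellH);
      float d = PVector.dist(cellCenter, deathPos);
      if (d < radius) {
        // push outwards from the death position, fading towards the edge
        PVector push = PVector.sub(cellCenter, deathPos);
        push.setMag(strength * (1 - d / radius));
        field[i][j].add(push);
      }
    }
  }
}
</pre>

On the audio side, the same death event simply left the computer as a trigger signal for the noise generator and synth, as described above.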
 
As I already said, unfortunately this project never became a real installation because of Covid, but I also have to say that I never quite figured out what I really wanted to do with my agent system and those algorithms, and that's why I didn't come up with a consistent idea that really connects those layers. So on different levels my initial project idea failed, but still, I'm very happy with this project because I learned a lot on many different and foundational levels. Besides that, the whole setup process was very good practice for making Unity, Processing and modular synths work together, not just for installations but also for performative use cases, since the setup would work just as well for an audiovisual performance piece.
