==Concept==
Digital images can be treated as pseudo-random distributions of color. If these are interpreted as three-dimensional point-clouds, it is possible to overcome the planar image by adding depth, creating new ways to interpret the image by exploring its color distribution in its purest form. After using different color metrics to calculate the three-dimensional position of every point, complex structures and patterns emerge: colors shift, strands form and clusters of colors move from position to position.
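The color-metric step described above can be sketched outside of Unity. The project itself does this in a compute-shader; the following is a minimal Python illustration of the idea, with two possible metrics: interpreting the raw RGB channels directly as coordinates, and mapping HSV onto a cylinder. The function names and the choice of metrics are illustrative, not taken from the project.

```python
import colorsys
import math

def rgb_metric(r, g, b):
    # Interpret the raw RGB channels (0..1) directly as x/y/z coordinates.
    return (r, g, b)

def hsv_metric(r, g, b):
    # Map the color onto a cylinder: hue becomes the angle,
    # saturation the radius, value (brightness) the height.
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    angle = 2.0 * math.pi * h
    return (s * math.cos(angle), s * math.sin(angle), v)

def image_to_cloud(pixels, metric=rgb_metric):
    # Turn a flat list of (r, g, b) tuples into a 3-D point cloud.
    return [metric(r, g, b) for (r, g, b) in pixels]
```

Swapping the metric function regroups the same pixels into a different spatial structure, which is what makes the transitions between mappings possible.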
----
Examples of transitions between mappings (left), images (middle) and resolutions (right).
{{#ev:youtube|cA7D-l4XI84|735}}
===Manipulation===
Since VR-headsets were rare this semester, I tried to use the [https://www.ultraleap.com/product/leap-motion-controller/ Leap-controller] to capture the movement of the hands and manipulate the image with it.
While I was able to set up and calibrate the controller in Unity, the captured motion was inconsistent and jittery, and the range of hand movement was limited. Since the camera had to remain static, the added value of the three-dimensionality was limited; using a VR-headset in combination with the Leap-controller might solve this problem.
[[File:ImageCloud-Manipulation1.mp4|Hand Manipulation|735px]]
Another problem with manipulating the voxels is that the particle representations of the voxels are not part of the physics-world and cannot collide with other objects. Every collision of every particle with every object has to be calculated in the compute-shader. While I was able to implement a simple interaction with the center of one hand, I failed to implement a collision for simple geometric forms in the compute-shader that was efficient enough to run every frame. This is possible, but GPU-collision is a difficult task that takes a lot more work than expected.
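The "simple interaction with the center of one hand" amounts to a per-particle sphere test that the compute-shader evaluates for every particle, every frame. A hedged re-creation of that logic in plain Python (the original runs as HLSL on the GPU; the names and the push formula here are illustrative):

```python
import math

def repel_from_hand(position, hand_center, radius, strength):
    # Per-particle step: if the particle lies inside a sphere around
    # the hand center, push it outward along the direction from the
    # hand to the particle, proportional to how deep it has entered.
    dx = position[0] - hand_center[0]
    dy = position[1] - hand_center[1]
    dz = position[2] - hand_center[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist >= radius or dist == 0.0:
        return position  # outside the sphere (or exactly at center): no effect
    push = strength * (radius - dist) / dist
    return (position[0] + dx * push,
            position[1] + dy * push,
            position[2] + dz * push)
```

The difficulty mentioned above is that replacing this single sphere with arbitrary geometric forms requires a far more expensive intersection test per particle per frame, which is where the GPU-collision attempt stalled.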
----
===Video===
The uniforms of the compute-shader can be updated during runtime. With the Unity-Video-Player, the input image can be the current frame of a video. If the video is updated every frame, the particles get a new target position each frame, creating a moving cloud of particles that forms ever-emerging patterns of movement.
{{#ev:youtube|ZSVnI2nV1V4|735}}
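The per-frame retargeting described above boils down to a simple interpolation step: each frame, a particle moves a fraction of the remaining distance toward a target that is itself recomputed from the new video frame. A minimal sketch of that step (parameter names are illustrative, not from the project's shader):

```python
def step_toward(position, target, t):
    # Move the particle a fraction t of the remaining distance toward
    # its target. Because the target is recomputed from every new video
    # frame, the cloud never settles and stays in constant motion.
    return tuple(p + (q - p) * t for p, q in zip(position, target))
```

Running this once per frame with a small t gives the trailing, fluid motion visible in the video: particles chase targets that keep moving away.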
==Media==