GMU:Critical VR Lab II - Lab Work/L.-E. Kuehr


===Context===

In this project I wanted to revisit the idea of treating digital images as spatial pseudo-random distributions of color entities (pixels in 2D, voxels in 3D). Since digital color is mostly stored as a triplet of color channels (red, green and blue), it can easily be mapped into three-dimensional space. We can further take advantage of the perception of true three-dimensionality offered by a VR headset. With the image represented as a cloud of voxels in a digital environment, we can use the VR controllers to manipulate the image in this three-dimensional domain with our hands, pushing and pulling parts of the image cloud and then remapping the cloud onto the two-dimensional canvas. This technique might provide a more intuitive approach to manipulating images and their colors.


===Concept===


===Approach===

For every pixel in the image a colored voxel is spawned. The position of the voxel is then determined by different mapping functions that map the pixel (x, y, r, g, b) to a position in space (x, y, z). In the default mapping every voxel lies on a plane (z = 0) and its position in the plane is directly determined by the pixel's position in the original two-dimensional image.
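As a minimal sketch of this step, assuming a readable Texture2D as input: the following CPU-side C# illustrates the default mapping (in the actual project this runs inside a compute shader, and the names here are placeholders, not the project's own).

<syntaxhighlight lang="csharp">
using UnityEngine;

public static class DefaultMapping
{
    // Builds one position and one color entry per pixel.
    // Default mapping: x and y follow the image, z = 0 (flat plane).
    public static void Build(Texture2D image, out Vector3[] positions, out Color[] colors)
    {
        int w = image.width, h = image.height;
        positions = new Vector3[w * h];
        colors = image.GetPixels(); // texture must be import-flagged as readable

        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                int i = y * w + x;
                // Normalize to [0, 1] so the cloud size is resolution-independent.
                positions[i] = new Vector3((float)x / w, (float)y / h, 0f);
            }
        }
    }
}
</syntaxhighlight>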


Since the virtual space is three-dimensional, the third dimension or depth (the z-axis in this case) can be used to convey additional information, revealing a new perspective on the image. The red channel could, for example, be mapped to the z-coordinate of every voxel. But we are not limited to using the z-axis. Since every digital image represents a subset of the RGB space (a cube), we can also map every pixel to its position in that cube.

===Mapping===

Since there are further three-dimensional representations of digital color, such as HSV (or HSL) and YCbCr, each with its own three-dimensional interpretation (HSV a cylinder, YCbCr a cube), and every single channel can be mapped to the z-axis, the following mappings were implemented:

{| class="wikitable"
! Name !! X-Axis !! Y-Axis !! Z-Axis
|-
| Red Color Channel Depth || Original || Original || Red
|-
| Green Color Channel Depth || Original || Original || Green
|-
| Blue Color Channel Depth || Original || Original || Blue
|-
| RGB Space || Red || Green || Blue
|-
| Hue Channel Depth || Original || Original || Hue
|-
| Saturation Channel Depth || Original || Original || Saturation
|-
| Value Channel Depth || Original || Original || Value
|-
| HSV Space (Cylinder) || Hue || Saturation || Value
|-
| Y Channel Depth || Original || Original || Y (luma component)
|-
| Cb Channel Depth || Original || Original || Cb
|-
| Cr Channel Depth || Original || Original || Cr
|-
| YCbCr Space || Y (luma component) || Cb || Cr
|}
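A few of these mappings, written out as a hypothetical CPU-side sketch (the project computes them in the compute shader; the function names are illustrative, not taken from the project):

<syntaxhighlight lang="csharp">
using UnityEngine;

public static class VoxelMappings
{
    // "Red Color Channel Depth": keep the original image position,
    // use the red channel as depth.
    public static Vector3 RedChannelDepth(float x, float y, Color c)
        => new Vector3(x, y, c.r);

    // "RGB Space": place every pixel at its coordinates in the RGB cube.
    public static Vector3 RgbSpace(float x, float y, Color c)
        => new Vector3(c.r, c.g, c.b);

    // "HSV Space": hue, saturation and value on the three axes,
    // as listed in the table above.
    public static Vector3 HsvSpace(float x, float y, Color c)
    {
        Color.RGBToHSV(c, out float hue, out float sat, out float val);
        return new Vector3(hue, sat, val);
    }
}
</syntaxhighlight>

The remaining channel-depth mappings follow the same pattern, swapping in the green, blue, saturation, value, Y, Cb or Cr component as the third coordinate.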

===Implementation===

Depending on the resolution of the original image, the number of voxels that have to be rendered and updated can easily run into the millions. Since the resolution of VR headsets is still limited to around 2K, it is sufficient to rescale the input image to about 1024 × 1024 = 1,048,576 pixels. This is still too many to render and update as independent objects in real time. A solution is GPU instancing, passing just the color and position of each instance in a buffer. When using direct instanced drawing in Unity, calculating the bounds of the object is very important to support native features like view-frustum culling. To compute the position and color buffers in real time, a compute shader is used, which can easily perform millions of independent calculations in parallel per frame. Compute-shader inputs can be changed at runtime, enabling transitions between different resolutions, mappings and positions.
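A condensed sketch of this setup, assuming a compute shader with a kernel called CSMain and a material whose shader reads the position and color buffers under the names used below; all identifiers are placeholders, not the project's actual ones:

<syntaxhighlight lang="csharp">
using UnityEngine;

public class VoxelCloudRenderer : MonoBehaviour
{
    public ComputeShader mappingShader; // computes positions/colors per voxel
    public Material voxelMaterial;      // instanced shader reading the buffers
    public Mesh voxelMesh;              // e.g. a small cube

    const int Count = 1024 * 1024;      // ~1M voxels for a 1024x1024 image

    ComputeBuffer positions, colors, args;
    int kernel;

    void Start()
    {
        positions = new ComputeBuffer(Count, sizeof(float) * 3);
        colors = new ComputeBuffer(Count, sizeof(float) * 4);

        // Indirect args: index count, instance count, start index, base vertex, start instance.
        args = new ComputeBuffer(1, sizeof(uint) * 5, ComputeBufferType.IndirectArguments);
        args.SetData(new uint[] { voxelMesh.GetIndexCount(0), Count, 0, 0, 0 });

        kernel = mappingShader.FindKernel("CSMain");
        mappingShader.SetBuffer(kernel, "_Positions", positions);
        mappingShader.SetBuffer(kernel, "_Colors", colors);

        voxelMaterial.SetBuffer("_Positions", positions);
        voxelMaterial.SetBuffer("_Colors", colors);
    }

    void Update()
    {
        // One thread per voxel; 64 threads per group (must match the shader).
        mappingShader.Dispatch(kernel, Count / 64, 1, 1);

        // The bounds matter: Unity uses them for view-frustum culling.
        var bounds = new Bounds(transform.position, Vector3.one * 10f);
        Graphics.DrawMeshInstancedIndirect(voxelMesh, 0, voxelMaterial, bounds, args);
    }

    void OnDestroy()
    {
        positions.Release();
        colors.Release();
        args.Release();
    }
}
</syntaxhighlight>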

===Transitions===

Because the compute shader can run every frame, transitions between different states can be calculated. In this case a simple particle-based system is used, where every particle (voxel) has a position, velocity and acceleration.

[[File:ImageCloud-Transition1.mp4|Mapping Transition|700px]]

[[File:ImageCloud-Transition2.mp4|Image Transition|700px]]

[[File:ImageCloud-Transition3.mp4|Resolution Transition|700px]]
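A minimal sketch of such a per-particle update, written as CPU-side C# for readability (in the project this logic runs per voxel in the compute shader); the spring and damping constants are illustrative:

<syntaxhighlight lang="csharp">
using UnityEngine;

public struct VoxelParticle
{
    public Vector3 position;
    public Vector3 velocity;

    // Accelerate toward the target position of the new state; damping
    // prevents endless oscillation, so every voxel settles at its target.
    public void Step(Vector3 target, float dt)
    {
        const float stiffness = 8f; // pull strength toward the target (illustrative)
        const float damping = 3f;   // velocity decay (illustrative)

        Vector3 acceleration = (target - position) * stiffness - velocity * damping;
        velocity += acceleration * dt;
        position += velocity * dt;
    }
}
</syntaxhighlight>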

===Manipulation===


===Outlook===


===Media===