GMU:Critical VR Lab I/L.-E. Kuehr

===Wordsphere===
[[File:VRI-LEK-3.png|1000px]]
[https://drive.google.com/uc?export=download&id=1PoyErB2gvz9RsHYwUdlAzBbnTrHwVEIq]


==Further Reading:==

Revision as of 07:49, 5 November 2020

The Internet is a calm and soothing place

Context:


Concept:


Approach:

Raw JSON data

To generate the word point-clouds, the text of the posts and comments is extracted from the JSON files and analyzed using the natural-language-processing technique word2vec.
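The extraction step might look like the following Python sketch. The JSON field names ("posts", "text", "comments") are assumptions about the dump layout, not the project's actual schema, and the word2vec call shown in the comment uses gensim's implementation as one possible choice.

```python
import json

def extract_sentences(json_path):
    """Collect post and comment texts from one JSON dump.

    Field names ("posts", "text", "comments") are assumed, not taken
    from the project's real schema.
    """
    with open(json_path, encoding="utf-8") as f:
        data = json.load(f)
    texts = []
    for post in data.get("posts", []):
        texts.append(post.get("text", ""))
        for comment in post.get("comments", []):
            texts.append(comment.get("text", ""))
    # word2vec expects tokenized sentences: one list of words per text
    return [t.lower().split() for t in texts if t.strip()]

# Training could then be done with gensim's word2vec implementation:
# from gensim.models import Word2Vec
# model = Word2Vec(sentences, vector_size=100, window=5, min_count=2)
```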

Learning Process

The resulting vector space has a very high dimensionality and therefore cannot be easily visualized. To reduce the high-dimensional space to three dimensions, the method t-distributed stochastic neighbor embedding (t-SNE) is used, which keeps words that are close in the high-dimensional space close together.
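The reduction step can be sketched with scikit-learn's t-SNE implementation. The random 100-dimensional input here is only a stand-in for the word2vec vectors; perplexity and seed are illustrative values, not the project's settings.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for word2vec output: 50 words with 100-dimensional vectors.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(50, 100))

# t-SNE keeps points that are neighbours in the high-dimensional space
# close together in the 3-D embedding; perplexity must stay below the
# number of samples.
embedding = TSNE(n_components=3, perplexity=10, random_state=0).fit_transform(vectors)
print(embedding.shape)  # (50, 3)
```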

Resulting Graph

The resulting data is then imported into Unity via a CSV file, and for every data point a billboard text of the word is generated. This process is repeated for every text.
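The export to CSV could be done as below. The column layout (word, x, y, z) is an assumption about what the Unity-side importer expects, not the project's documented format.

```python
import csv

def write_point_cloud(path, words, points):
    """Write one row per word: the word plus its 3-D t-SNE position.

    The (word, x, y, z) layout is assumed, not the project's actual
    CSV format.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["word", "x", "y", "z"])
        for word, (x, y, z) in zip(words, points):
            writer.writerow([word, f"{x:.4f}", f"{y:.4f}", f"{z:.4f}"])
```

On the Unity side, each row would then spawn one billboard text at position (x, y, z).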

VRI-LEK-5.png

The world in Unity that the viewer walks through is generated using tileable noise as displacement for a plane. As the user walks through the world, new chunks are generated on the fly, giving the illusion of an infinite world.
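The on-the-fly chunk idea can be sketched as follows, using hash-based value noise sampled in world coordinates as a stand-in for the project's tileable noise; because every height is derived purely from the world position and a seed, any chunk can be regenerated independently and neighbouring chunks line up.

```python
import math
import hashlib

def lattice_value(ix, iy, seed=0):
    """Deterministic pseudo-random height in [0, 1) for a lattice point."""
    digest = hashlib.md5(f"{seed}:{ix}:{iy}".encode()).digest()
    return int.from_bytes(digest[:4], "big") / 2**32

def smooth(t):
    return t * t * (3 - 2 * t)  # smoothstep interpolation

def noise(x, y, seed=0):
    """Bilinear value noise sampled at world coordinates."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = smooth(x - ix), smooth(y - iy)
    a = lattice_value(ix, iy, seed)
    b = lattice_value(ix + 1, iy, seed)
    c = lattice_value(ix, iy + 1, seed)
    d = lattice_value(ix + 1, iy + 1, seed)
    top = a + (b - a) * fx
    bot = c + (d - c) * fx
    return top + (bot - top) * fy

def chunk_heights(cx, cy, size=8, scale=0.25, seed=0):
    """Heightmap for chunk (cx, cy); sampling in world space means
    adjacent chunks continue each other seamlessly."""
    return [[noise((cx * size + i) * scale, (cy * size + j) * scale, seed)
             for i in range(size)] for j in range(size)]
```

In Unity this heightmap would be applied as a displacement to the chunk's plane mesh; chunks outside the viewer's range can simply be discarded and regenerated later from the same seed.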

The grass, cuboids and word-spheres are generated and distributed per tile using seeded randomness. Every tile has its own noise and therefore its own distribution pattern of objects, making the world feel even more endless. A swarm of letters driven by the boid algorithm guides the player through the world and towards the word-spheres: the closer the viewer's view direction is to a word-sphere, the more letters join the swarm and the closer they fly to each other. The boid algorithm simulates the flocking behaviour of birds by enforcing simple rules for every boid, such as separation, alignment and cohesion towards all other boids. A compute shader is used to speed up the simulation.
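One update step of the three boid rules can be sketched in Python as below (2-D for brevity; the weights and time step are illustrative, not the project's tuning). Each boid reacts to all others, which makes the naive update O(n²) per frame and explains why a compute shader pays off.

```python
import math

def boid_step(positions, velocities, dt=0.1,
              sep_w=1.5, align_w=1.0, coh_w=1.0):
    """One update of the classic boid rules against all other boids:
    separation, alignment and cohesion. Weights are illustrative."""
    new_pos, new_vel = [], []
    n = len(positions)
    for i in range(n):
        px, py = positions[i]
        sep = [0.0, 0.0]; avg_v = [0.0, 0.0]; center = [0.0, 0.0]
        for j in range(n):
            if i == j:
                continue
            qx, qy = positions[j]
            dx, dy = px - qx, py - qy
            dist = math.hypot(dx, dy) or 1e-9
            # separation: push away, stronger the closer the neighbour
            sep[0] += dx / (dist * dist); sep[1] += dy / (dist * dist)
            avg_v[0] += velocities[j][0]; avg_v[1] += velocities[j][1]
            center[0] += qx; center[1] += qy
        m = n - 1
        vx = velocities[i][0] + dt * (sep_w * sep[0]
                                      + align_w * (avg_v[0] / m - velocities[i][0])
                                      + coh_w * (center[0] / m - px))
        vy = velocities[i][1] + dt * (sep_w * sep[1]
                                      + align_w * (avg_v[1] / m - velocities[i][1])
                                      + coh_w * (center[1] / m - py))
        new_vel.append((vx, vy))
        new_pos.append((px + dt * vx, py + dt * vy))
    return new_pos, new_vel
```

A GPU version would evaluate the inner loop for all boids in parallel, one thread per boid, which is exactly what the compute shader does.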


Reflection / Outlook:

The interaction with the word-cloud would not yet work with a real VR controller: text input needs a heads-up keyboard, and scaling and rotation could be mapped to two controllers. To fully benefit from the word embeddings, it would be great to make simple arithmetic in the word-vector space available, perhaps by dragging and dropping words onto each other. To further underline the idea of an immersive walk, there should be ambient and interaction sound effects. The distribution of objects in the world should not be totally random, and interaction should have consequences.


Media:

Wordcloud

VRI-LEK-4.png

Boids

VRI-LEK-1.png

Lettertrails

VRI-LEK-2.png

Wordsphere

VRI-LEK-3.png
https://drive.google.com/uc?export=download&id=1PoyErB2gvz9RsHYwUdlAzBbnTrHwVEIq

Further Reading: