GMU:Critical VR Lab I/L.-E. Kuehr


==Title==

==Context:==


==Concept:==


==Approach:==

To generate the point word-clouds, the texts are analyzed using the natural language processing technique ''word2vec''. The resulting vector space has a very high dimensionality and therefore cannot be visualized directly. To reduce the high-dimensional space to three dimensions, the method ''t-distributed stochastic neighbor embedding'' (t-SNE) is used, which keeps words close together that were close in the high-dimensional space.
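
A minimal sketch of this step, assuming the ''gensim'' and ''scikit-learn'' libraries; the file name, vector size and t-SNE parameters are placeholders rather than the project's actual settings:

<syntaxhighlight lang="python">
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess
from sklearn.manifold import TSNE

# Split the text into tokenized sentences (here: one line = one sentence).
with open("text.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f if line.strip()]

# Learn a high-dimensional embedding (100 dimensions) for every word.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=2, epochs=50)

words = model.wv.index_to_key
vectors = model.wv[words]

# Project the vectors down to 3 dimensions; t-SNE keeps words that were close
# in the high-dimensional space close in the resulting 3D point cloud.
coords = TSNE(n_components=3, perplexity=30, init="pca", random_state=0).fit_transform(vectors)
</syntaxhighlight>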

[[File:VRI-LEK-Epochs.PNG|none|1000px|Learning Process]]
[[File:VRI-LEK-Graph.png|frame|500px|Resulting Graph]]

The resulting data is then imported into Unity via a ''csv'' file, and for every data point a billboard text of the word is generated. This process is repeated for every text.
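
The exact layout of the ''csv'' file is not documented here; as an illustration, one row per word with its three t-SNE coordinates would be enough for Unity to read the file and spawn one billboard text per row (the file name and values below are made up):

<syntaxhighlight lang="python">
import csv

# Hypothetical word positions, as they would come out of the t-SNE step above.
points = [("memory", 1.2, -0.4, 3.1), ("forest", 0.8, 2.6, -1.0)]

with open("wordcloud.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["word", "x", "y", "z"])  # header row: word plus its 3D position
    writer.writerows(points)
</syntaxhighlight>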

[[File:VRI-LEK-5.png]]

The world in Unity that the viewer walks through is generated using tileable noise as displacement for a plane. As the user walks through the world, new chunks are generated on the fly, giving the illusion of an infinite world.
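
A simplified stand-in for the chunking idea (the project does this in Unity; the value-noise function, chunk size and frequency below are assumptions, not the actual noise used): every chunk samples the same global noise function at its world-space coordinates, so neighbouring chunks line up and new ones can be produced on the fly as the viewer moves.

<syntaxhighlight lang="python">
import numpy as np

CHUNK = 16    # quads per chunk edge (assumption)
FREQ = 0.05   # noise frequency (assumption)

def value_noise(x, y, seed=0):
    """Smooth seeded value noise: pseudo-random values on an integer grid, bilinearly interpolated."""
    def grid_value(ix, iy):
        h = np.sin(ix * 127.1 + iy * 311.7 + seed * 74.7) * 43758.5453
        return h - np.floor(h)  # pseudo-random value in [0, 1)
    x0, y0 = np.floor(x), np.floor(y)
    fx, fy = x - x0, y - y0
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep interpolation weights
    n00, n10 = grid_value(x0, y0), grid_value(x0 + 1, y0)
    n01, n11 = grid_value(x0, y0 + 1), grid_value(x0 + 1, y0 + 1)
    return (n00 * (1 - fx) + n10 * fx) * (1 - fy) + (n01 * (1 - fx) + n11 * fx) * fy

def chunk_heights(cx, cy):
    """Displacement values for the plane of chunk (cx, cy), sampled in world space."""
    xs = (np.arange(CHUNK + 1) + cx * CHUNK) * FREQ  # CHUNK + 1 vertices so chunk borders are shared
    ys = (np.arange(CHUNK + 1) + cy * CHUNK) * FREQ
    gx, gy = np.meshgrid(xs, ys)
    return value_noise(gx, gy)

# Only the chunks around the viewer's current position need to be generated.
heights = chunk_heights(0, 0)
</syntaxhighlight>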

The grass, cuboids and word-spheres are generated and distributed per tile using seeded randomness. Every tile has its own noise and therefore its own distribution pattern of objects, making the world feel even more endless. A swarm of letters driven by the boid algorithm guides the player through the world and towards the word-spheres: the closer the viewer's view direction is to a word-sphere, the more letters join the swarm and the closer together they fly. The boid algorithm simulates the flocking behaviour of birds by enforcing simple rules for every boid, such as separation, alignment and cohesion with respect to all other boids. A compute shader is used to speed up the simulation.
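
A single-threaded sketch of the three boid rules (the project evaluates them in a compute shader; the boid count, neighbourhood radius and rule weights below are assumptions):

<syntaxhighlight lang="python">
import numpy as np

N, RADIUS = 64, 2.0                   # number of boids and neighbourhood radius
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0   # rule weights
rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, (N, 3))    # random start positions
vel = rng.normal(0, 1, (N, 3))        # random start velocities

def step(pos, vel, dt=0.02):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < RADIUS)  # neighbours of boid i
        if not mask.any():
            continue
        separation = -(offsets[mask] / dist[mask, None] ** 2).sum(axis=0)  # steer away from close boids
        alignment = vel[mask].mean(axis=0) - vel[i]                        # match the neighbours' heading
        cohesion = pos[mask].mean(axis=0) - pos[i]                         # move towards their centre
        new_vel[i] += dt * (W_SEP * separation + W_ALI * alignment + W_COH * cohesion)
    return pos + dt * new_vel, new_vel

pos, vel = step(pos, vel)  # one simulation step
</syntaxhighlight>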


[[File:VRI-LEK-4.png]] [[File:VRI-LEK-1.png]] [[File:VRI-LEK-2.png]] [[File:VRI-LEK-3.png]]

==Reflection / Outlook:==


==Further Reading:==