GMU:The Hidden Layer:Topics

From Medien Wiki
Revision as of 10:08, 8 May 2017 by Fbonowski

General Information on word embeddings

For a general explanation look here: [1]
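The core idea: each word is mapped to a vector of numbers, and words with similar meanings end up with similar vectors, which you can measure with cosine similarity. A minimal sketch of that measurement, using made-up 3-dimensional vectors (real models use hundreds of dimensions):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Hypothetical toy embeddings, purely for illustration.
king  = [0.9, 0.8, 0.1]
queen = [0.8, 0.9, 0.1]
apple = [0.1, 0.1, 0.9]

print(cosine(king, queen))  # close to 1.0: similar words
print(cosine(king, apple))  # much lower: unrelated words
```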

Word2vec

Made by Google; uses a neural network and performs well on semantic tasks (e.g. word analogies).

Installation + getting started:

pip install gensim

Here are some of the things you can do with the model: [2]

Here is a bit of background information and an explanation of how to train your own models: [3].

fastText

Made by Facebook, based on word2vec. Better at capturing syntactic relations (like apparent ---> apparently) because it also models character n-grams; see here: [4]. Note: the pretrained model files are HUGE.

GloVe

Made by Stanford; based on global word co-occurrence statistics rather than a neural net. Pre-trained models are available.