Tools for computing distributed representations of words
---------------------------------------------------------

We provide an implementation of the Continuous Bag-of-Words (CBOW) and Skip-gram (SG) models, as well as several demo scripts.

Given a text corpus, the word2vec tool learns a vector for every word in the vocabulary using the Continuous
Bag-of-Words or the Skip-gram neural network architecture. The user should specify the following (see the example
invocation after this list):
 - desired vector dimensionality
 - the size of the context window, for either the Skip-gram or the Continuous Bag-of-Words model
 - training algorithm: hierarchical softmax and/or negative sampling
 - threshold for downsampling the frequent words
 - number of threads to use
 - the format of the output word vector file (text or binary)
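For example, a typical training command covering all of these settings could look like the following (a sketch only; the corpus file text8 and the parameter values are illustrative, in the spirit of demo-word.sh):

    ./word2vec -train text8 -output vectors.bin -cbow 1 -size 200 -window 8 \
        -negative 25 -hs 0 -sample 1e-4 -threads 20 -binary 1 -iter 15

Here -cbow 1 selects the Continuous Bag-of-Words architecture (use -cbow 0 for Skip-gram), -hs 0 together with -negative 25 selects negative sampling with 25 noise words, -sample 1e-4 sets the downsampling threshold for frequent words, and -binary 1 writes the vectors in binary format.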

Other hyper-parameters, such as the learning rate, usually do not need to be tuned for different training sets.
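Should tuning ever be needed, the starting learning rate can be set explicitly with the -alpha flag (a sketch; in this code the default is 0.025 for Skip-gram and 0.05 for CBOW):

    ./word2vec -train text8 -output vectors.bin -cbow 0 -alpha 0.02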

The script demo-word.sh downloads a small (100 MB) text corpus from the web and trains a small word vector model on it. After
the training is finished, the user can interactively explore the similarity between words.
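A minimal session might look like the following (a sketch; the interactive prompt comes from the distance tool, which the script launches after training):

    ./demo-word.sh
    ...
    Enter word or sentence (EXIT to break): france

The tool then prints the words whose vectors are closest to the query, along with their cosine distances.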

More information about the scripts is provided at https://code.google.com/p/word2vec/
