Word2Vec computes distributed vector representations of words. The main advantage of distributed representations is that similar words are close in the vector space, which makes generalization to novel patterns easier and model estimation more robust. Distributed vector representations have been shown to be useful in many natural language processing applications such as named entity recognition, disambiguation, parsing, tagging and machine translation.
### Model
In our implementation of Word2Vec, we used the skip-gram model. The training objective of skip-gram is to learn word vector representations that are good at predicting the context of a word in the same sentence. Mathematically, given a sequence of training words $w_1, w_2, \dots, w_T$, the objective of the skip-gram model is to maximize the average log-likelihood

$$\frac{1}{T} \sum_{t = 1}^{T}\sum_{j=-k}^{k} \log p(w_{t+j} | w_t)$$

where $k$ is the size of the training window.
In the skip-gram model, every word $w$ is associated with two vectors $u_w$ and $v_w$, which are the vector representations of $w$ as word and context respectively. The probability of correctly predicting word $w_i$ given word $w_j$ is determined by the softmax model, which is

$$p(w_i | w_j) = \frac{\exp(u_{w_i}^{\top} v_{w_j})}{\sum_{l=1}^{V} \exp(u_l^{\top} v_{w_j})}$$

where $V$ is the vocabulary size.
The skip-gram model with softmax is expensive because the cost of computing $\log p(w_i | w_j)$ is proportional to $V$, which can easily be in the order of millions. To speed up Word2Vec training, we used hierarchical softmax, which reduces the complexity of computing $\log p(w_i | w_j)$ to $O(\log(V))$.
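To make the cost contrast concrete, the sketch below (plain Scala, not Spark code; the tiny vocabulary and vectors are illustrative assumptions) evaluates the softmax probability $p(w_i | w_j)$ directly. The denominator sums over the entire vocabulary, which is exactly the $O(V)$ work per probability that hierarchical softmax avoids.

```scala
// A minimal sketch of the plain softmax in the skip-gram model:
// p(w_i | w_j) = exp(u_{w_i} . v_{w_j}) / sum_l exp(u_l . v_{w_j}).
// The 3-word vocabulary and 2-dimensional vectors are illustrative only.
object SoftmaxSketch {
  def dot(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => x * y }.sum

  // u(w): "word" vectors, v(w): "context" vectors, one row per vocabulary entry.
  def softmaxProb(u: Array[Array[Double]], v: Array[Array[Double]],
                  i: Int, j: Int): Double = {
    val denom = u.map(uw => math.exp(dot(uw, v(j)))).sum  // O(V) work
    math.exp(dot(u(i), v(j))) / denom
  }

  def main(args: Array[String]): Unit = {
    val u = Array(Array(1.0, 0.0), Array(0.0, 1.0), Array(1.0, 1.0))
    val v = Array(Array(0.5, 0.5), Array(0.2, 0.8), Array(0.9, 0.1))
    val probs = (0 until 3).map(i => softmaxProb(u, v, i, 0))
    println(probs.sum)  // probabilities over the vocabulary sum to ~1
  }
}
```

Hierarchical softmax replaces this flat sum with a walk down a binary tree over the vocabulary, so only $O(\log(V))$ inner products are needed per prediction.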
### Example
The example below demonstrates how to load a text file, parse it as an RDD of `Seq[String]`, and construct a `Word2Vec` instance with specified parameters. We then fit a Word2Vec model with the input data. Finally, we display the top 40 words most similar to the specified word.
<div class="codetabs">

<div data-lang="scala">
{% highlight scala %}
import org.apache.spark._
import org.apache.spark.rdd._
import org.apache.spark.SparkContext._
import org.apache.spark.mllib.feature.Word2Vec
// The corpus path below is a placeholder; point it at any whitespace-tokenized text file.
val input = sc.textFile("text8").map(line => line.split(" ").toSeq)

val word2vec = new Word2Vec()

val model = word2vec.fit(input)

// "china" is an example query word; use any word that occurs in your corpus.
val synonyms = model.findSynonyms("china", 40)

for ((synonym, cosineSimilarity) <- synonyms) {
  println(s"$synonym $cosineSimilarity")
}
{% endhighlight %}
</div>
</div>