Commit 8d7458f

committed
code reformat
1 parent 6df0dcb commit 8d7458f

File tree

1 file changed: +25 additions, -18 deletions

docs/mllib-feature-extraction.md

Lines changed: 25 additions & 18 deletions
@@ -9,28 +9,43 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
 
 ## Word2Vec
 
-Wor2Vec computes distributed vector representation of words. The main advantage of the distributed representations is that similar words are close in the vector space, which makes generalization to novel patterns easier and model estimation more robust. Distributed vector representation is showed to be useful in many natural language processing applications such as named entity recognition, disambiguation, parsing, tagging and machine translation.
+Word2Vec computes distributed vector representations of words. The main advantage of the distributed
+representations is that similar words are close in the vector space, which makes generalization to
+novel patterns easier and model estimation more robust. Distributed vector representations have been
+shown to be useful in many natural language processing applications such as named entity
+recognition, disambiguation, parsing, tagging and machine translation.
 
 ### Model
-In our implementation of Word2Vec, we used skip-gram model. The training objective of skip-gram is to learn word vector representations that are good at predicting its context in the same sentence. Mathematically, given a sequence of training words `$w_1, w_2, \dots, w_T$`, the objective of the skip-gram model is to maximize the average log-likelihood
+
+In our implementation of Word2Vec, we used the skip-gram model. The training objective of skip-gram
+is to learn word vector representations that are good at predicting a word's context in the same
+sentence. Mathematically, given a sequence of training words `$w_1, w_2, \dots, w_T$`, the objective
+of the skip-gram model is to maximize the average log-likelihood
 `\[
 \frac{1}{T} \sum_{t = 1}^{T}\sum_{j=-k}^{j=k} \log p(w_{t+j} | w_t)
 \]`
 where $k$ is the size of the training window.
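An editorial aside, not part of this commit: the averaged log-likelihood above is easy to sanity-check in plain Scala. The sketch below assumes a hypothetical predictive distribution `p` (here uniform) standing in for the trained model; `SkipGramObjective` and `averageLogLikelihood` are illustrative names, not MLlib APIs. Note that, as is conventional, the sketch skips `j = 0` (the center word itself), which the formula's summation bounds leave implicit.

```scala
import scala.math.log

// Illustrative sketch only: evaluates (1/T) * sum_t sum_{j=-k..k, j!=0} log p(w_{t+j} | w_t)
// on a toy sentence, clipping the window at the sentence boundaries.
object SkipGramObjective {
  def averageLogLikelihood(words: Seq[String], k: Int,
                           p: (String, String) => Double): Double = {
    val T = words.length
    val total = (0 until T).map { t =>
      (-k to k)
        .filter(j => j != 0 && t + j >= 0 && t + j < T) // skip center word and clipped positions
        .map(j => log(p(words(t + j), words(t))))       // log p(context word | center word)
        .sum
    }.sum
    total / T
  }
}
```

With a 4-word sentence, `k = 1`, and a uniform `p = 0.25`, there are 6 (center, context) pairs, so the objective is `6 * log(0.25) / 4`.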

-In the skip-gram model, every word $w$ is associated with two vectors $u_w$ and $v_w$ which are vector representations of $w$ as word and context respectively. The probability of correctly predicting word $w_i$ given word $w_j$ is determined by the softmax model, which is
+In the skip-gram model, every word $w$ is associated with two vectors $u_w$ and $v_w$, which are
+vector representations of $w$ as word and context respectively. The probability of correctly
+predicting word $w_i$ given word $w_j$ is determined by the softmax model, which is
 `\[
 p(w_i | w_j ) = \frac{\exp(u_{w_i}^{\top}v_{w_j})}{\sum_{l=1}^{V} \exp(u_l^{\top}v_{w_j})}
 \]`
 where $V$ is the vocabulary size.
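Another editorial aside: the softmax formula above can be checked numerically. This is a toy sketch in plain Scala with made-up 2-dimensional vectors for a hypothetical 3-word vocabulary, not the MLlib implementation.

```scala
// Illustrative sketch: p(w_i | w_j) = exp(u_i . v_j) / sum_l exp(u_l . v_j)
object SkipGramSoftmax {
  def dot(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => x * y }.sum

  // u: "word" vectors for the whole vocabulary; vj: "context" vector of w_j
  def softmax(u: Array[Array[Double]], vj: Array[Double], i: Int): Double = {
    val scores = u.map(ui => math.exp(dot(ui, vj)))
    scores(i) / scores.sum
  }
}
```

By construction the probabilities over the vocabulary sum to 1, which is a cheap way to verify an implementation.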

 The skip-gram model with softmax is expensive because the cost of computing $\log p(w_i | w_j)$
-is proportional to $V$, which can be easily in order of millions. To speed up Word2Vec training, we used hierarchical softmax, which reduced the complexity of computing of $\log p(w_i | w_j)$ to
+is proportional to $V$, which can easily be on the order of millions. To speed up training of
+Word2Vec, we used hierarchical softmax, which reduces the complexity of computing
+$\log p(w_i | w_j)$ to
 $O(\log(V))$
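An editorial aside on why hierarchical softmax is $O(\log(V))$: each word sits at a leaf of a binary tree over the vocabulary, and its probability is the product of one binary (sigmoid) decision per tree level, so only about $\log_2 V$ terms are evaluated instead of $V$. A toy sketch in plain Scala over a hypothetical 4-word vocabulary on a complete binary tree; the names and node scores are made up, not MLlib code.

```scala
// Illustrative sketch: hierarchical softmax over 4 words = 2 binary decisions each.
object HierarchicalSoftmax {
  def sigmoid(x: Double): Double = 1.0 / (1.0 + math.exp(-x))

  // scores(n) stands in for the inner product u_n . v_context at internal node n;
  // path: (nodeIndex, goLeft) decisions from the root down to the word's leaf.
  def pathProb(scores: Array[Double], path: Seq[(Int, Boolean)]): Double =
    path.map { case (n, left) =>
      if (left) sigmoid(scores(n)) else 1.0 - sigmoid(scores(n))
    }.product
}
```

Because left and right probabilities at each node sum to 1, the leaf probabilities form a valid distribution without ever normalizing over the whole vocabulary.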
 
 ### Example
 
-The example below demonstrates how to load a text file, parse it as an RDD of `Seq[String]` and then construct a `Word2Vec` instance with specified parameters. Then we fit a Word2Vec model with the input data. Finally, we display the top 40 similar words to the specified word.
+The example below demonstrates how to load a text file, parse it as an RDD of `Seq[String]`,
+construct a `Word2Vec` instance and then fit a `Word2VecModel` with the input data. Finally,
+we display the top 40 synonyms of the specified word. To run the example, first download
+the [text8](http://mattmahoney.net/dc/text8.zip) data and extract it to your preferred directory.
+Here we assume the extracted file is `text8` and that it is in the same directory as where you
+run the spark shell.
 
 <div class="codetabs">
 <div data-lang="scala">
@@ -40,27 +55,19 @@ import org.apache.spark.rdd._
 import org.apache.spark.SparkContext._
 import org.apache.spark.mllib.feature.Word2Vec
 
-val input = sc.textFile().map(line => line.split(" ").toSeq)
-val size = 100
-val startingAlpha = 0.025
-val numPartitions = 1
-val numIterations = 1
+val input = sc.textFile("text8").map(line => line.split(" ").toSeq)
 
 val word2vec = new Word2Vec()
-  .setVectorSize(size)
-  .setSeed(42L)
-  .setNumPartitions(numPartitions)
-  .setNumIterations(numIterations)
 
 val model = word2vec.fit(input)
 
-val vec = model.findSynonyms("china", 40)
+val synonyms = model.findSynonyms("china", 40)
 
-for((word, cosineSimilarity) <- vec) {
-  println(word + " " + cosineSimilarity.toString)
+for((synonym, cosineSimilarity) <- synonyms) {
+  println(synonym + " " + cosineSimilarity.toString)
 }
 {% endhighlight %}
 </div>
 </div>
 
-## TFIDF
+## TFIDF
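A final editorial aside: the `cosineSimilarity` value the example pairs with each synonym is the ordinary cosine of the angle between two word vectors. A minimal sketch in plain Scala with hypothetical small vectors; the object and method names are ours, not an MLlib API.

```scala
// Illustrative sketch of cosine similarity: dot(a, b) / (|a| * |b|)
object CosineSimilarity {
  def cosine(a: Array[Double], b: Array[Double]): Double = {
    val dot = a.zip(b).map { case (x, y) => x * y }.sum
    val normA = math.sqrt(a.map(x => x * x).sum)
    val normB = math.sqrt(b.map(x => x * x).sum)
    dot / (normA * normB)
  }
}
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is why higher values in the example's output indicate closer words.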

0 commit comments