@@ -26,11 +26,11 @@ of the vector.
<div data-lang="scala" markdown="1">

The base class of local vectors is
- [`Vector`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector), and we provide two
- implementations: [`DenseVector`](api/mllib/index.html#org.apache.spark.mllib.linalg.DenseVector) and
- [`SparseVector`](api/mllib/index.html#org.apache.spark.mllib.linalg.SparseVector). We recommend
+ [`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector), and we provide two
+ implementations: [`DenseVector`](api/scala/index.html#org.apache.spark.mllib.linalg.DenseVector) and
+ [`SparseVector`](api/scala/index.html#org.apache.spark.mllib.linalg.SparseVector). We recommend
using the factory methods implemented in
- [`Vectors`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector) to create local vectors.
+ [`Vectors`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) to create local vectors.

{% highlight scala %}
import org.apache.spark.mllib.linalg.{Vector, Vectors}
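// A minimal sketch (not the original listing, which is truncated in this hunk)
// of the Vectors factory methods described above.
// Create the dense vector (1.0, 0.0, 3.0).
val dv: Vector = Vectors.dense(1.0, 0.0, 3.0)
// Create the same vector in sparse form: size 3, nonzeros at indices 0 and 2.
val sv: Vector = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))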
@@ -53,11 +53,11 @@ Scala imports `scala.collection.immutable.Vector` by default, so you have to imp
<div data-lang="java" markdown="1">

The base class of local vectors is
- [`Vector`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector), and we provide two
- implementations: [`DenseVector`](api/mllib/index.html#org.apache.spark.mllib.linalg.DenseVector) and
- [`SparseVector`](api/mllib/index.html#org.apache.spark.mllib.linalg.SparseVector). We recommend
+ [`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector), and we provide two
+ implementations: [`DenseVector`](api/scala/index.html#org.apache.spark.mllib.linalg.DenseVector) and
+ [`SparseVector`](api/scala/index.html#org.apache.spark.mllib.linalg.SparseVector). We recommend
using the factory methods implemented in
- [`Vectors`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector) to create local vectors.
+ [`Vectors`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) to create local vectors.

{% highlight java %}
import org.apache.spark.mllib.linalg.Vector;
@@ -117,7 +117,7 @@ For multiclass classification, labels should be class indices starting from zero:
<div data-lang="scala" markdown="1">

A labeled point is represented by the case class
- [`LabeledPoint`](api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint).
+ [`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint).

{% highlight scala %}
import org.apache.spark.mllib.linalg.Vectors
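import org.apache.spark.mllib.regression.LabeledPoint

// A minimal sketch of the case class in use (the original listing is truncated in this hunk).
// A positive example (label 1.0) with a dense feature vector.
val pos = LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0))
// A negative example (label 0.0) with a sparse feature vector.
val neg = LabeledPoint(0.0, Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0)))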
@@ -134,7 +134,7 @@ val neg = LabeledPoint(0.0, Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0)))
<div data-lang="java" markdown="1">

A labeled point is represented by
- [`LabeledPoint`](api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint).
+ [`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint).

{% highlight java %}
import org.apache.spark.mllib.linalg.Vectors;
@@ -184,7 +184,7 @@ After loading, the feature indices are converted to zero-based.
<div class="codetabs">
<div data-lang="scala" markdown="1">

- [`MLUtils.loadLibSVMFile`](api/mllib/index.html#org.apache.spark.mllib.util.MLUtils$) reads training
+ [`MLUtils.loadLibSVMFile`](api/scala/index.html#org.apache.spark.mllib.util.MLUtils$) reads training
examples stored in LIBSVM format.

{% highlight scala %}
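import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.rdd.RDD

// A minimal sketch of the call described above; assumes `sc` is an existing
// SparkContext, and the file path is a placeholder, not the original dataset.
val training: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(sc, "path/to/sample_libsvm_data.txt")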
@@ -197,7 +197,7 @@ val training: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(sc, "mllib/data/sample_
</div>

<div data-lang="java" markdown="1">
- [`MLUtils.loadLibSVMFile`](api/mllib/index.html#org.apache.spark.mllib.util.MLUtils$) reads training
+ [`MLUtils.loadLibSVMFile`](api/scala/index.html#org.apache.spark.mllib.util.MLUtils$) reads training
examples stored in LIBSVM format.

{% highlight java %}
@@ -227,10 +227,10 @@ We are going to add sparse matrix in the next release.
<div data-lang="scala" markdown="1">

The base class of local matrices is
- [`Matrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.Matrix), and we provide one
- implementation: [`DenseMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.DenseMatrix).
+ [`Matrix`](api/scala/index.html#org.apache.spark.mllib.linalg.Matrix), and we provide one
+ implementation: [`DenseMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.DenseMatrix).
Sparse matrix will be added in the next release. We recommend using the factory methods implemented
- in [`Matrices`](api/mllib/index.html#org.apache.spark.mllib.linalg.Matrices) to create local
+ in [`Matrices`](api/scala/index.html#org.apache.spark.mllib.linalg.Matrices) to create local
matrices.

{% highlight scala %}
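import org.apache.spark.mllib.linalg.{Matrix, Matrices}

// A minimal sketch of the Matrices factory method described above.
// Values are column-major, so this is ((1.0, 2.0), (3.0, 4.0), (5.0, 6.0)).
val dm: Matrix = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))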
@@ -244,10 +244,10 @@ val dm: Matrix = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
<div data-lang="java" markdown="1">

The base class of local matrices is
- [`Matrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.Matrix), and we provide one
- implementation: [`DenseMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.DenseMatrix).
+ [`Matrix`](api/scala/index.html#org.apache.spark.mllib.linalg.Matrix), and we provide one
+ implementation: [`DenseMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.DenseMatrix).
Sparse matrix will be added in the next release. We recommend using the factory methods implemented
- in [`Matrices`](api/mllib/index.html#org.apache.spark.mllib.linalg.Matrices) to create local
+ in [`Matrices`](api/scala/index.html#org.apache.spark.mllib.linalg.Matrices) to create local
matrices.

{% highlight java %}
@@ -284,7 +284,7 @@ limited by the integer range but it should be much smaller in practice.
<div class="codetabs">
<div data-lang="scala" markdown="1">

- A [`RowMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) can be
+ A [`RowMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) can be
created from an `RDD[Vector]` instance. Then we can compute its column summary statistics.

{% highlight scala %}
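import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// A minimal sketch of the construction described above; assumes `sc` is an
// existing SparkContext and the two rows are illustrative only.
val rows = sc.parallelize(Seq(
  Vectors.dense(1.0, 2.0, 3.0),
  Vectors.dense(4.0, 5.0, 6.0)))          // an RDD[Vector] of local vectors
val mat: RowMatrix = new RowMatrix(rows)  // create the distributed row matrix
val m = mat.numRows()
val n = mat.numCols()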
@@ -303,7 +303,7 @@ val n = mat.numCols()

<div data-lang="java" markdown="1">

- A [`RowMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) can be
+ A [`RowMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) can be
created from a `JavaRDD<Vector>` instance. Then we can compute its column summary statistics.

{% highlight java %}
@@ -334,7 +334,7 @@ which could be faster if the rows are sparse.
<div data-lang="scala" markdown="1">

`RowMatrix#computeColumnSummaryStatistics` returns an instance of
- [`MultivariateStatisticalSummary`](api/mllib/index.html#org.apache.spark.mllib.stat.MultivariateStatisticalSummary),
+ [`MultivariateStatisticalSummary`](api/scala/index.html#org.apache.spark.mllib.stat.MultivariateStatisticalSummary),
which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the
total count.

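As a quick illustration of that summary (a sketch only, assuming `mat` is an existing `RowMatrix` as in the previous section):

{% highlight scala %}
import org.apache.spark.mllib.stat.MultivariateStatisticalSummary

// Compute column-wise summary statistics of the row matrix.
val summary: MultivariateStatisticalSummary = mat.computeColumnSummaryStatistics()
println(summary.mean)         // column means
println(summary.variance)     // column variances
println(summary.numNonzeros)  // number of nonzeros in each column
{% endhighlight %}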
@@ -366,9 +366,9 @@ an RDD of indexed rows, where each row is represented by its index (long-typed)
<div data-lang="scala" markdown="1">

An
- [`IndexedRowMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix)
+ [`IndexedRowMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix)
can be created from an `RDD[IndexedRow]` instance, where
- [`IndexedRow`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRow) is a
+ [`IndexedRow`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRow) is a
wrapper over `(Long, Vector)`. An `IndexedRowMatrix` can be converted to a `RowMatrix` by dropping
its row indices.

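For reference, a minimal sketch of the conversion described above (assuming `sc` is an existing SparkContext; the rows are illustrative only):

{% highlight scala %}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix, RowMatrix}

// Each IndexedRow pairs a long-typed row index with a local vector.
val indexedRows = sc.parallelize(Seq(
  IndexedRow(0L, Vectors.dense(1.0, 2.0)),
  IndexedRow(2L, Vectors.dense(3.0, 4.0))))
val mat: IndexedRowMatrix = new IndexedRowMatrix(indexedRows)
// Drop the row indices to obtain a plain RowMatrix.
val rowMat: RowMatrix = mat.toRowMatrix()
{% endhighlight %}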
@@ -391,9 +391,9 @@ val rowMat: RowMatrix = mat.toRowMatrix()
<div data-lang="java" markdown="1">

An
- [`IndexedRowMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix)
+ [`IndexedRowMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix)
can be created from a `JavaRDD<IndexedRow>` instance, where
- [`IndexedRow`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRow) is a
+ [`IndexedRow`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRow) is a
wrapper over `(long, Vector)`. An `IndexedRowMatrix` can be converted to a `RowMatrix` by dropping
its row indices.

@@ -427,9 +427,9 @@ dimensions of the matrix are huge and the matrix is very sparse.
<div data-lang="scala" markdown="1">

A
- [`CoordinateMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.CoordinateMatrix)
+ [`CoordinateMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.CoordinateMatrix)
can be created from an `RDD[MatrixEntry]` instance, where
- [`MatrixEntry`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.MatrixEntry) is a
+ [`MatrixEntry`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.MatrixEntry) is a
wrapper over `(Long, Long, Double)`. A `CoordinateMatrix` can be converted to an `IndexedRowMatrix`
with sparse rows by calling `toIndexedRowMatrix`. In this release, we do not provide other
computation for `CoordinateMatrix`.
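A minimal sketch of the types described above (assuming `sc` is an existing SparkContext; the entries are illustrative only):

{% highlight scala %}
import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, IndexedRowMatrix, MatrixEntry}

// Each MatrixEntry is a (row index, column index, value) triple.
val entries = sc.parallelize(Seq(
  MatrixEntry(0L, 1L, 1.2),
  MatrixEntry(3L, 0L, 2.5)))
val mat: CoordinateMatrix = new CoordinateMatrix(entries)
// Convert to an IndexedRowMatrix whose rows are sparse vectors.
val indexedRowMatrix: IndexedRowMatrix = mat.toIndexedRowMatrix()
{% endhighlight %}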
@@ -453,9 +453,9 @@ val indexedRowMatrix = mat.toIndexedRowMatrix()
<div data-lang="java" markdown="1">

A
- [`CoordinateMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.CoordinateMatrix)
+ [`CoordinateMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.CoordinateMatrix)
can be created from a `JavaRDD<MatrixEntry>` instance, where
- [`MatrixEntry`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.MatrixEntry) is a
+ [`MatrixEntry`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.MatrixEntry) is a
wrapper over `(long, long, double)`. A `CoordinateMatrix` can be converted to an `IndexedRowMatrix`
with sparse rows by calling `toIndexedRowMatrix`.
