
Commit 35bdeb9

    api/mllib -> api/scala

1 parent e4afaa8 · commit 35bdeb9
7 files changed, +57 −57 lines changed

docs/mllib-basics.md
Lines changed: 29 additions & 29 deletions

@@ -26,11 +26,11 @@ of the vector.
 <div data-lang="scala" markdown="1">
 
 The base class of local vectors is
-[`Vector`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector), and we provide two
-implementations: [`DenseVector`](api/mllib/index.html#org.apache.spark.mllib.linalg.DenseVector) and
-[`SparseVector`](api/mllib/index.html#org.apache.spark.mllib.linalg.SparseVector). We recommend
+[`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector), and we provide two
+implementations: [`DenseVector`](api/scala/index.html#org.apache.spark.mllib.linalg.DenseVector) and
+[`SparseVector`](api/scala/index.html#org.apache.spark.mllib.linalg.SparseVector). We recommend
 using the factory methods implemented in
-[`Vectors`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector) to create local vectors.
+[`Vectors`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) to create local vectors.
 
 {% highlight scala %}
 import org.apache.spark.mllib.linalg.{Vector, Vectors}
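
For reference, a minimal sketch of the `Vectors` factory methods these links point at, in the style of the surrounding docs (values illustrative):

{% highlight scala %}
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Create a dense vector (1.0, 0.0, 3.0).
val dv: Vector = Vectors.dense(1.0, 0.0, 3.0)
// Create the same vector sparsely: size, indices of nonzeros, and their values.
val sv: Vector = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))
{% endhighlight %}
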
@@ -53,11 +53,11 @@ Scala imports `scala.collection.immutable.Vector` by default, so you have to imp
 <div data-lang="java" markdown="1">
 
 The base class of local vectors is
-[`Vector`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector), and we provide two
-implementations: [`DenseVector`](api/mllib/index.html#org.apache.spark.mllib.linalg.DenseVector) and
-[`SparseVector`](api/mllib/index.html#org.apache.spark.mllib.linalg.SparseVector). We recommend
+[`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector), and we provide two
+implementations: [`DenseVector`](api/scala/index.html#org.apache.spark.mllib.linalg.DenseVector) and
+[`SparseVector`](api/scala/index.html#org.apache.spark.mllib.linalg.SparseVector). We recommend
 using the factory methods implemented in
-[`Vectors`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector) to create local vectors.
+[`Vectors`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) to create local vectors.
 
 {% highlight java %}
 import org.apache.spark.mllib.linalg.Vector;

@@ -117,7 +117,7 @@ For multiclass classification, labels should be class indices starting from zero:
 <div data-lang="scala" markdown="1">
 
 A labeled point is represented by the case class
-[`LabeledPoint`](api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint).
+[`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint).
 
 {% highlight scala %}
 import org.apache.spark.mllib.linalg.Vectors
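
For context, a labeled point pairs a double label with a feature vector. A small sketch, consistent with the `neg` line quoted in the next hunk header:

{% highlight scala %}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// A positive example with a dense feature vector.
val pos = LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0))
// A negative example with a sparse feature vector.
val neg = LabeledPoint(0.0, Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0)))
{% endhighlight %}
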
@@ -134,7 +134,7 @@ val neg = LabeledPoint(0.0, Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0)))
 <div data-lang="java" markdown="1">
 
 A labeled point is represented by
-[`LabeledPoint`](api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint).
+[`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint).
 
 {% highlight java %}
 import org.apache.spark.mllib.linalg.Vectors;

@@ -184,7 +184,7 @@ After loading, the feature indices are converted to zero-based.
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
 
-[`MLUtils.loadLibSVMFile`](api/mllib/index.html#org.apache.spark.mllib.util.MLUtils$) reads training
+[`MLUtils.loadLibSVMFile`](api/scala/index.html#org.apache.spark.mllib.util.MLUtils$) reads training
 examples stored in LIBSVM format.
 
 {% highlight scala %}

@@ -197,7 +197,7 @@ val training: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(sc, "mllib/data/sample_
 </div>
 
 <div data-lang="java" markdown="1">
-[`MLUtils.loadLibSVMFile`](api/mllib/index.html#org.apache.spark.mllib.util.MLUtils$) reads training
+[`MLUtils.loadLibSVMFile`](api/scala/index.html#org.apache.spark.mllib.util.MLUtils$) reads training
 examples stored in LIBSVM format.
 
 {% highlight java %}

@@ -227,10 +227,10 @@ We are going to add sparse matrix in the next release.
 <div data-lang="scala" markdown="1">
 
 The base class of local matrices is
-[`Matrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.Matrix), and we provide one
-implementation: [`DenseMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.DenseMatrix).
+[`Matrix`](api/scala/index.html#org.apache.spark.mllib.linalg.Matrix), and we provide one
+implementation: [`DenseMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.DenseMatrix).
 Sparse matrix will be added in the next release. We recommend using the factory methods implemented
-in [`Matrices`](api/mllib/index.html#org.apache.spark.mllib.linalg.Matrices) to create local
+in [`Matrices`](api/scala/index.html#org.apache.spark.mllib.linalg.Matrices) to create local
 matrices.
 
 {% highlight scala %}
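
One note on the `Matrices.dense` call quoted in the next hunk header: the entries are read in column-major order. A sketch:

{% highlight scala %}
import org.apache.spark.mllib.linalg.{Matrix, Matrices}

// A 3x2 dense matrix; the array is column-major, so this builds
// ((1.0, 2.0), (3.0, 4.0), (5.0, 6.0)).
val dm: Matrix = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
{% endhighlight %}
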
@@ -244,10 +244,10 @@ val dm: Matrix = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
 <div data-lang="java" markdown="1">
 
 The base class of local matrices is
-[`Matrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.Matrix), and we provide one
-implementation: [`DenseMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.DenseMatrix).
+[`Matrix`](api/scala/index.html#org.apache.spark.mllib.linalg.Matrix), and we provide one
+implementation: [`DenseMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.DenseMatrix).
 Sparse matrix will be added in the next release. We recommend using the factory methods implemented
-in [`Matrices`](api/mllib/index.html#org.apache.spark.mllib.linalg.Matrices) to create local
+in [`Matrices`](api/scala/index.html#org.apache.spark.mllib.linalg.Matrices) to create local
 matrices.
 
 {% highlight java %}

@@ -284,7 +284,7 @@ limited by the integer range but it should be much smaller in practice.
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
 
-A [`RowMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) can be
+A [`RowMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) can be
 created from an `RDD[Vector]` instance. Then we can compute its column summary statistics.
 
 {% highlight scala %}

@@ -303,7 +303,7 @@ val n = mat.numCols()
 
 <div data-lang="java" markdown="1">
 
-A [`RowMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) can be
+A [`RowMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) can be
 created from a `JavaRDD<Vector>` instance. Then we can compute its column summary statistics.
 
 {% highlight java %}

@@ -334,7 +334,7 @@ which could be faster if the rows are sparse.
 <div data-lang="scala" markdown="1">
 
 `RowMatrix#computeColumnSummaryStatistics` returns an instance of
-[`MultivariateStatisticalSummary`](api/mllib/index.html#org.apache.spark.mllib.stat.MultivariateStatisticalSummary),
+[`MultivariateStatisticalSummary`](api/scala/index.html#org.apache.spark.mllib.stat.MultivariateStatisticalSummary),
 which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the
 total count.
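
Pulling the `RowMatrix` hunks together, a sketch of the workflow the surrounding docs describe (assuming `rows: RDD[Vector]` is already defined):

{% highlight scala %}
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.stat.MultivariateStatisticalSummary

val mat: RowMatrix = new RowMatrix(rows) // rows: RDD[Vector]

// Matrix dimensions.
val m = mat.numRows()
val n = mat.numCols()

// Column-wise summary statistics: max, min, mean, variance, nonzero counts.
val summary: MultivariateStatisticalSummary = mat.computeColumnSummaryStatistics()
println(summary.mean)        // column-wise mean
println(summary.variance)    // column-wise variance
println(summary.numNonzeros) // number of nonzeros in each column
{% endhighlight %}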

@@ -366,9 +366,9 @@ an RDD of indexed rows, where each row is represented by its index (long-typed)
 <div data-lang="scala" markdown="1">
 
 An
-[`IndexedRowMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix)
+[`IndexedRowMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix)
 can be created from an `RDD[IndexedRow]` instance, where
-[`IndexedRow`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRow) is a
+[`IndexedRow`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRow) is a
 wrapper over `(Long, Vector)`. An `IndexedRowMatrix` can be converted to a `RowMatrix` by dropping
 its row indices.
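
A sketch of the creation and conversion described above (`rows` is a placeholder, in the docs' own style):

{% highlight scala %}
import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix, RowMatrix}

val rows: RDD[IndexedRow] = ... // an RDD of (Long, Vector) wrappers
val mat: IndexedRowMatrix = new IndexedRowMatrix(rows)

// Drop the row indices to get a plain RowMatrix.
val rowMat: RowMatrix = mat.toRowMatrix()
{% endhighlight %}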

@@ -391,9 +391,9 @@ val rowMat: RowMatrix = mat.toRowMatrix()
 <div data-lang="java" markdown="1">
 
 An
-[`IndexedRowMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix)
+[`IndexedRowMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix)
 can be created from a `JavaRDD<IndexedRow>` instance, where
-[`IndexedRow`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRow) is a
+[`IndexedRow`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.IndexedRow) is a
 wrapper over `(long, Vector)`. An `IndexedRowMatrix` can be converted to a `RowMatrix` by dropping
 its row indices.

@@ -427,9 +427,9 @@ dimensions of the matrix are huge and the matrix is very sparse.
 <div data-lang="scala" markdown="1">
 
 A
-[`CoordinateMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.CoordinateMatrix)
+[`CoordinateMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.CoordinateMatrix)
 can be created from an `RDD[MatrixEntry]` instance, where
-[`MatrixEntry`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.MatrixEntry) is a
+[`MatrixEntry`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.MatrixEntry) is a
 wrapper over `(Long, Long, Double)`. A `CoordinateMatrix` can be converted to an `IndexedRowMatrix`
 with sparse rows by calling `toIndexedRowMatrix`. In this release, we do not provide other
 computation for `CoordinateMatrix`.
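
A sketch of the same pattern for `CoordinateMatrix` (`entries` is a placeholder):

{% highlight scala %}
import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, MatrixEntry}

val entries: RDD[MatrixEntry] = ... // each entry is (i: Long, j: Long, value: Double)
val mat: CoordinateMatrix = new CoordinateMatrix(entries)

// Convert to an IndexedRowMatrix whose rows are sparse vectors.
val indexedRowMatrix = mat.toIndexedRowMatrix()
{% endhighlight %}
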
@@ -453,9 +453,9 @@ val indexedRowMatrix = mat.toIndexedRowMatrix()
 <div data-lang="java" markdown="1">
 
 A
-[`CoordinateMatrix`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.CoordinateMatrix)
+[`CoordinateMatrix`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.CoordinateMatrix)
 can be created from a `JavaRDD<MatrixEntry>` instance, where
-[`MatrixEntry`](api/mllib/index.html#org.apache.spark.mllib.linalg.distributed.MatrixEntry) is a
+[`MatrixEntry`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.MatrixEntry) is a
 wrapper over `(long, long, double)`. A `CoordinateMatrix` can be converted to an `IndexedRowMatrix`
 with sparse rows by calling `toIndexedRowMatrix`.

docs/mllib-clustering.md
Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@ a given dataset, the algorithm returns the best clustering result).
 Following code snippets can be executed in `spark-shell`.
 
 In the following example after loading and parsing data, we use the
-[`KMeans`](api/mllib/index.html#org.apache.spark.mllib.clustering.KMeans) object to cluster the data
+[`KMeans`](api/scala/index.html#org.apache.spark.mllib.clustering.KMeans) object to cluster the data
 into two clusters. The number of desired clusters is passed to the algorithm. We then compute Within
 Set Sum of Squared Error (WSSSE). You can reduce this error measure by increasing *k*. In fact the
 optimal *k* is usually one where there is an "elbow" in the WSSSE graph.
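
A sketch of the K-means workflow that paragraph describes (assuming `parsedData: RDD[Vector]` is already loaded; parameter values illustrative):

{% highlight scala %}
import org.apache.spark.mllib.clustering.KMeans

// Cluster the data into two classes using KMeans.
val numClusters = 2
val numIterations = 20
val clusters = KMeans.train(parsedData, numClusters, numIterations)

// Evaluate clustering by computing Within Set Sum of Squared Errors.
val WSSSE = clusters.computeCost(parsedData)
println("Within Set Sum of Squared Errors = " + WSSSE)
{% endhighlight %}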

docs/mllib-collaborative-filtering.md
Lines changed: 1 addition & 1 deletion

@@ -48,7 +48,7 @@ user for an item.
 
 <div data-lang="scala" markdown="1">
 In the following example we load rating data. Each row consists of a user, a product and a rating.
-We use the default [ALS.train()](api/mllib/index.html#org.apache.spark.mllib.recommendation.ALS$)
+We use the default [ALS.train()](api/scala/index.html#org.apache.spark.mllib.recommendation.ALS$)
 method which assumes ratings are explicit. We evaluate the
 recommendation model by measuring the Mean Squared Error of rating prediction.
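
A sketch of the ALS calls that paragraph describes (`ratings` is a placeholder; the rank, iteration count, and 0.01 regularization value are illustrative):

{% highlight scala %}
import org.apache.spark.mllib.recommendation.{ALS, Rating}

val ratings: RDD[Rating] = ... // Rating(user: Int, product: Int, rating: Double)

// Build the recommendation model using ALS on explicit ratings.
val rank = 10
val numIterations = 20
val model = ALS.train(ratings, rank, numIterations, 0.01)

// Predict the rating of user 1 for product 2.
val predictedRating = model.predict(1, 2)
{% endhighlight %}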

docs/mllib-guide.md
Lines changed: 7 additions & 7 deletions

@@ -28,7 +28,7 @@ filtering, dimensionality reduction, as well as underlying optimization primitiv
 * limited-memory BFGS (L-BFGS)
 
 MLlib is a new component under active development.
-The APIs marked `Experimental`/`DeveloperApi` may change in the future releases,
+The APIs marked `Experimental`/`DeveloperApi` may change in future releases,
 and we will provide a migration guide between releases.
 
 ## Dependencies

@@ -62,9 +62,9 @@ take advantage of sparsity in both storage and computation.
 <div data-lang="scala" markdown="1">
 
 We used to represent a feature vector by `Array[Double]`, which is replaced by
-[`Vector`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector) in v1.0. Algorithms that used
+[`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) in v1.0. Algorithms that used
 to accept `RDD[Array[Double]]` now take
-`RDD[Vector]`. [`LabeledPoint`](api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint)
+`RDD[Vector]`. [`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint)
 is now a wrapper of `(Double, Vector)` instead of `(Double, Array[Double])`. Converting
 `Array[Double]` to `Vector` is straightforward:

@@ -75,7 +75,7 @@ val array: Array[Double] = ... // a double array
 val vector: Vector = Vectors.dense(array) // a dense vector
 {% endhighlight %}
 
-[`Vectors`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vectors$) provides factory methods to create sparse vectors.
+[`Vectors`](api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$) provides factory methods to create sparse vectors.
 
 *Note*. Scala imports `scala.collection.immutable.Vector` by default, so you have to import `org.apache.spark.mllib.linalg.Vector` explicitly to use MLlib's `Vector`.

@@ -84,9 +84,9 @@ val vector: Vector = Vectors.dense(array) // a dense vector
 <div data-lang="java" markdown="1">
 
 We used to represent a feature vector by `double[]`, which is replaced by
-[`Vector`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vector) in v1.0. Algorithms that used
+[`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) in v1.0. Algorithms that used
 to accept `RDD<double[]>` now take
-`RDD<Vector>`. [`LabeledPoint`](api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint)
+`RDD<Vector>`. [`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint)
 is now a wrapper of `(double, Vector)` instead of `(double, double[])`. Converting `double[]` to
 `Vector` is straightforward:

@@ -98,7 +98,7 @@ double[] array = ... // a double array
 Vector vector = Vectors.dense(array); // a dense vector
 {% endhighlight %}
 
-[`Vectors`](api/mllib/index.html#org.apache.spark.mllib.linalg.Vectors$) provides factory methods to
+[`Vectors`](api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$) provides factory methods to
 create sparse vectors.
 
 </div>

docs/mllib-linear-methods.md
Lines changed: 9 additions & 9 deletions

@@ -233,7 +233,7 @@ val modelL1 = svmAlg.run(training)
 {% endhighlight %}
 
 Similarly, you can replace `SVMWithSGD` with
-[`LogisticRegressionWithSGD`](api/mllib/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD).
+[`LogisticRegressionWithSGD`](api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD).
 
 </div>
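
For context on the `svmAlg` fragment in the hunk header above, a sketch of the L1-regularized setup the surrounding docs build (assuming `training: RDD[LabeledPoint]`):

{% highlight scala %}
import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.optimization.L1Updater

val svmAlg = new SVMWithSGD()
svmAlg.optimizer.
  setNumIterations(200).
  setRegParam(0.1).
  setUpdater(new L1Updater) // swap in L1 regularization
val modelL1 = svmAlg.run(training)
{% endhighlight %}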

@@ -328,8 +328,8 @@ println("training Mean Squared Error = " + MSE)
 {% endhighlight %}
 
 Similarly you can use
-[`RidgeRegressionWithSGD`](api/mllib/index.html#org.apache.spark.mllib.regression.RidgeRegressionWithSGD)
-and [`LassoWithSGD`](api/mllib/index.html#org.apache.spark.mllib.regression.LassoWithSGD).
+[`RidgeRegressionWithSGD`](api/scala/index.html#org.apache.spark.mllib.regression.RidgeRegressionWithSGD)
+and [`LassoWithSGD`](api/scala/index.html#org.apache.spark.mllib.regression.LassoWithSGD).
 
 </div>
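
A minimal sketch of that swap (`parsedData` and `numIterations` assumed from the surrounding linear regression example):

{% highlight scala %}
import org.apache.spark.mllib.regression.LassoWithSGD

// Same training data and iteration count as the linear regression example.
val lassoModel = LassoWithSGD.train(parsedData, numIterations)
{% endhighlight %}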

@@ -380,11 +380,11 @@ all three possible regularizations (none, L1 or L2).
 
 Algorithms are all implemented in Scala:
 
-* [SVMWithSGD](api/mllib/index.html#org.apache.spark.mllib.classification.SVMWithSGD)
-* [LogisticRegressionWithSGD](api/mllib/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD)
-* [LinearRegressionWithSGD](api/mllib/index.html#org.apache.spark.mllib.regression.LinearRegressionWithSGD)
-* [RidgeRegressionWithSGD](api/mllib/index.html#org.apache.spark.mllib.regression.RidgeRegressionWithSGD)
-* [LassoWithSGD](api/mllib/index.html#org.apache.spark.mllib.regression.LassoWithSGD)
+* [SVMWithSGD](api/scala/index.html#org.apache.spark.mllib.classification.SVMWithSGD)
+* [LogisticRegressionWithSGD](api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD)
+* [LinearRegressionWithSGD](api/scala/index.html#org.apache.spark.mllib.regression.LinearRegressionWithSGD)
+* [RidgeRegressionWithSGD](api/scala/index.html#org.apache.spark.mllib.regression.RidgeRegressionWithSGD)
+* [LassoWithSGD](api/scala/index.html#org.apache.spark.mllib.regression.LassoWithSGD)
 
 Python calls the Scala implementation via
-[PythonMLLibAPI](api/mllib/index.html#org.apache.spark.mllib.api.python.PythonMLLibAPI).
+[PythonMLLibAPI](api/scala/index.html#org.apache.spark.mllib.api.python.PythonMLLibAPI).

docs/mllib-naive-bayes.md
Lines changed: 6 additions & 6 deletions

@@ -27,11 +27,11 @@ sparsity. Since the training data is only used once, it is not necessary to cach
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
 
-[NaiveBayes](api/mllib/index.html#org.apache.spark.mllib.classification.NaiveBayes$) implements
+[NaiveBayes](api/scala/index.html#org.apache.spark.mllib.classification.NaiveBayes$) implements
 multinomial naive Bayes. It takes an RDD of
-[LabeledPoint](api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint) and an optional
+[LabeledPoint](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint) and an optional
 smoothing parameter `lambda` as input, and outputs a
-[NaiveBayesModel](api/mllib/index.html#org.apache.spark.mllib.classification.NaiveBayesModel), which
+[NaiveBayesModel](api/scala/index.html#org.apache.spark.mllib.classification.NaiveBayesModel), which
 can be used for evaluation and prediction.
 
 {% highlight scala %}

@@ -59,11 +59,11 @@ val accuracy = 1.0 * predictionAndLabel.filter(x => x._1 == x._2).count() / test
 
 <div data-lang="java" markdown="1">
 
-[NaiveBayes](api/mllib/index.html#org.apache.spark.mllib.classification.NaiveBayes$) implements
+[NaiveBayes](api/scala/index.html#org.apache.spark.mllib.classification.NaiveBayes$) implements
 multinomial naive Bayes. It takes a Scala RDD of
-[LabeledPoint](api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint) and an
+[LabeledPoint](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint) and an
 optional smoothing parameter `lambda` as input, and outputs a
-[NaiveBayesModel](api/mllib/index.html#org.apache.spark.mllib.classification.NaiveBayesModel), which
+[NaiveBayesModel](api/scala/index.html#org.apache.spark.mllib.classification.NaiveBayesModel), which
 can be used for evaluation and prediction.
 
 {% highlight java %}
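
A sketch of the train/predict cycle these hunks describe (`training` and `test` assumed to be `RDD[LabeledPoint]` splits, as in the surrounding docs):

{% highlight scala %}
import org.apache.spark.mllib.classification.NaiveBayes

// Train with additive smoothing, then score held-out examples.
val model = NaiveBayes.train(training, lambda = 1.0)
val predictionAndLabel = test.map(p => (model.predict(p.features), p.label))
val accuracy = 1.0 * predictionAndLabel.filter(x => x._1 == x._2).count() / test.count()
{% endhighlight %}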

docs/mllib-optimization.md
Lines changed: 4 additions & 4 deletions

@@ -170,17 +170,17 @@ each iteration, to compute the gradient direction.
 
 Available algorithms for gradient descent:
 
-* [GradientDescent.runMiniBatchSGD](api/mllib/index.html#org.apache.spark.mllib.optimization.GradientDescent)
+* [GradientDescent.runMiniBatchSGD](api/scala/index.html#org.apache.spark.mllib.optimization.GradientDescent)
 
 ### L-BFGS
 L-BFGS is currently only a low-level optimization primitive in `MLlib`. If you want to use L-BFGS in various
 ML algorithms such as Linear Regression and Logistic Regression, you have to pass the gradient of the objective
 function and an updater into the optimizer yourself instead of using the training APIs like
-[LogisticRegressionWithSGD](api/mllib/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD).
+[LogisticRegressionWithSGD](api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD).
 See the example below. It will be addressed in the next release.
 
 L1 regularization using
-[L1Updater](api/mllib/index.html#org.apache.spark.mllib.optimization.L1Updater) will not work since the
+[L1Updater](api/scala/index.html#org.apache.spark.mllib.optimization.L1Updater) will not work since the
 soft-thresholding logic in L1Updater is designed for gradient descent. See the developer's note.
 
 The L-BFGS method

@@ -274,4 +274,4 @@ the actual gradient descent step. However, we're able to take the gradient and
 loss of objective function of regularization for L-BFGS by ignoring the part of logic
 only for gradient descent such as adaptive step size stuff. We will refactor
 this into a regularizer to replace the updater to separate the logic between
-regularization and step update later.
+regularization and step update later.
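
Since this section describes passing a gradient and an updater into the optimizer by hand, a hedged sketch of that pattern (`training: RDD[(Double, Vector)]` and `initialWeights: Vector` assumed; parameter values illustrative):

{% highlight scala %}
import org.apache.spark.mllib.optimization.{LBFGS, LogisticGradient, SquaredL2Updater}

val (weights, lossHistory) = LBFGS.runLBFGS(
  training,                // RDD of (label, features) pairs
  new LogisticGradient(),  // gradient of the logistic loss
  new SquaredL2Updater(),  // L2 regularization (L1Updater will not work here)
  10,                      // numCorrections
  1e-4,                    // convergenceTol
  20,                      // maxNumIterations
  0.1,                     // regParam
  initialWeights)
{% endhighlight %}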
