
Commit dc1ba9e

sarutak authored and marmbrus committed
[SPARK-3378] [DOCS] Replace the word "SparkSQL" with right word "Spark SQL"
Author: Kousuke Saruta <[email protected]>

Closes apache#2251 from sarutak/SPARK-3378 and squashes the following commits:

0bfe234 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-3378
bb5938f [Kousuke Saruta] Replaced rest of "SparkSQL" with "Spark SQL"
6df66de [Kousuke Saruta] Replaced "SparkSQL" with "Spark SQL"
1 parent 4feb46c commit dc1ba9e

File tree: 6 files changed, +8 -8 lines changed


dev/run-tests

Lines changed: 1 addition & 1 deletion
@@ -89,7 +89,7 @@ echo "========================================================================="
 echo "Running Spark unit tests"
 echo "========================================================================="
 
-# Build Spark; we always build with Hive because the PySpark SparkSQL tests need it.
+# Build Spark; we always build with Hive because the PySpark Spark SQL tests need it.
 # echo "q" is needed because sbt on encountering a build file with failure
 # (either resolution or compilation) prompts the user for input either q, r,
 # etc to quit or retry. This echo is there to make it not block.

docs/programming-guide.md

Lines changed: 1 addition & 1 deletion
@@ -385,7 +385,7 @@ Apart from text files, Spark's Python API also supports several other data forma
 
 * SequenceFile and Hadoop Input/Output Formats
 
-**Note** this feature is currently marked ```Experimental``` and is intended for advanced users. It may be replaced in future with read/write support based on SparkSQL, in which case SparkSQL is the preferred approach.
+**Note** this feature is currently marked ```Experimental``` and is intended for advanced users. It may be replaced in future with read/write support based on Spark SQL, in which case Spark SQL is the preferred approach.
 
 **Writable Support**
 
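As a quick illustration of the experimental SequenceFile support the note above refers to, here is a minimal PySpark sketch; the output path is hypothetical and the reader/writer names reflect an assumption about the Spark 1.1-era Python API.

from pyspark import SparkContext

sc = SparkContext(appName="SequenceFileSketch")

# Write an RDD of key-value pairs as a Hadoop SequenceFile (hypothetical path).
pairs = sc.parallelize([(1, "a"), (2, "b"), (3, "c")])
pairs.saveAsSequenceFile("/tmp/example-seqfile")

# Read it back; keys and values are converted from Writables to Python types.
print(sc.sequenceFile("/tmp/example-seqfile").collect())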

python/pyspark/sql.py

Lines changed: 3 additions & 3 deletions
@@ -900,7 +900,7 @@ def __reduce__(self):
 
 class SQLContext:
 
-    """Main entry point for SparkSQL functionality.
+    """Main entry point for Spark SQL functionality.
 
     A SQLContext can be used create L{SchemaRDD}s, register L{SchemaRDD}s as
     tables, execute SQL over tables, cache tables, and read parquet files.
@@ -946,7 +946,7 @@ def __init__(self, sparkContext, sqlContext=None):
 
     @property
     def _ssql_ctx(self):
-        """Accessor for the JVM SparkSQL context.
+        """Accessor for the JVM Spark SQL context.
 
         Subclasses can override this property to provide their own
         JVM Contexts.
@@ -1507,7 +1507,7 @@ class SchemaRDD(RDD):
     """An RDD of L{Row} objects that has an associated schema.
 
     The underlying JVM object is a SchemaRDD, not a PythonRDD, so we can
-    utilize the relational query api exposed by SparkSQL.
+    utilize the relational query api exposed by Spark SQL.
 
     For normal L{pyspark.rdd.RDD} operations (map, count, etc.) the
     L{SchemaRDD} is not operated on directly, as it's underlying
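To make the SQLContext and SchemaRDD docstrings above concrete, here is a minimal sketch assuming the Spark 1.1-era PySpark API (inferSchema, registerTempTable); the data and the "people" table name are made up for illustration.

from pyspark import SparkContext
from pyspark.sql import SQLContext, Row

sc = SparkContext(appName="SparkSQLSketch")
sqlContext = SQLContext(sc)

# Build a SchemaRDD from an RDD of Row objects by inferring the schema.
people = sc.parallelize([Row(name="Alice", age=34), Row(name="Bob", age=19)])
schemaPeople = sqlContext.inferSchema(people)

# Register the SchemaRDD as a table and run SQL over it.
schemaPeople.registerTempTable("people")
adults = sqlContext.sql("SELECT name FROM people WHERE age >= 21")

# A SchemaRDD is still an RDD, so normal operations like collect() work on it.
print(adults.collect())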

python/run-tests

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ FAILED=0
 
 rm -f unit-tests.log
 
-# Remove the metastore and warehouse directory created by the HiveContext tests in SparkSQL
+# Remove the metastore and warehouse directory created by the HiveContext tests in Spark SQL
 rm -rf metastore warehouse
 
 function run_test() {

sql/core/src/main/scala/org/apache/spark/sql/api/java/Row.scala

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ import scala.math.BigDecimal
 import org.apache.spark.sql.catalyst.expressions.{Row => ScalaRow}
 
 /**
- * A result row from a SparkSQL query.
+ * A result row from a Spark SQL query.
  */
 class Row(private[spark] val row: ScalaRow) extends Serializable {
 
sql/hive/src/main/scala/org/apache/spark/sql/hive/parquet/FakeParquetSerDe.scala

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector
 import org.apache.hadoop.io.Writable
 
 /**
- * A placeholder that allows SparkSQL users to create metastore tables that are stored as
+ * A placeholder that allows Spark SQL users to create metastore tables that are stored as
  * parquet files. It is only intended to pass the checks that the serde is valid and exists
  * when a CREATE TABLE is run. The actual work of decoding will be done by ParquetTableScan
  * when "spark.sql.hive.convertMetastoreParquet" is set to true.
