docs/sql-programming-guide.md
83 lines changed: 0 additions & 83 deletions
@@ -573,37 +573,6 @@ for teenName in teenNames.collect():
</div>
-<div data-lang="r" markdown="1">
-
-Spark SQL can convert an RDD of lists of objects to a DataFrame, inferring the datatypes. The keys of the list define the column names of the table, and the types are inferred by looking at the first row. Since we currently only look at the first row, it is important that there is no missing data in the first row of the RDD. In future versions we
-plan to more completely infer the schema by looking at more data, similar to the inference that is
-performed on JSON files.
-
-{% highlight r %}
-# sc is an existing SparkContext.
-sqlContext <- sparkRSQL.init(sc)
-
-# Load a text file and convert each line to a Row.
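The first-row inference described in the removed paragraph can be sketched outside Spark as a small, self-contained Python illustration. The helper `infer_schema` below is hypothetical and is not SparkR's actual implementation; it only demonstrates why a missing value in the first row breaks this style of inference:

```python
def infer_schema(rows):
    """Infer column names and types from the first row only.

    Mirrors the first-row inference described above: column names come
    from the keys of the first record, and each column's type is taken
    from that record's value. (Illustrative sketch, not Spark code.)
    """
    if not rows:
        raise ValueError("cannot infer a schema from an empty collection")
    first = rows[0]
    for name, value in first.items():
        # A None in the first row makes that column's type unknowable,
        # which is why the text warns against missing data in the first row.
        if value is None:
            raise ValueError(f"missing data in first row for column {name!r}")
    return {name: type(value).__name__ for name, value in first.items()}

rows = [
    {"name": "Justin", "age": 19},
    {"name": "Andy", "age": 30},
]
print(infer_schema(rows))  # {'name': 'str', 'age': 'int'}
```

Looking at more rows (as the text says future versions plan to do, similar to JSON inference) would instead merge types across records rather than trusting a single row.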