
[SPARK-4916][SQL][DOCS]Update SQL programming guide about cache section #3759


Closed
wants to merge 4 commits into from

Conversation

luogankun
Contributor

`SchemaRDD.cache()` now uses in-memory columnar storage.
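
For context, a minimal Scala sketch of the behavior this PR documents, assuming a local Spark 1.2 setup and a hypothetical `Person` case class (neither is part of the patch itself):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SchemaRDD}

// Hypothetical record type used only for this illustration.
case class Person(name: String, age: Int)

object SchemaRDDCacheSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cache-sketch").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    // Implicitly converts an RDD of case classes into a SchemaRDD (Spark 1.x API).
    import sqlContext.createSchemaRDD

    val people: SchemaRDD = sc.parallelize(Seq(Person("Alice", 30), Person("Bob", 25)))

    // As of Spark 1.2, cache() on a SchemaRDD stores the data in the in-memory
    // columnar format, the same storage used by sqlContext.cacheTable(...).
    people.cache()
    people.count()      // materializes the cached columnar buffers
    people.unpersist()  // releases them again

    sc.stop()
  }
}
```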

@AmplabJenkins

Can one of the admins verify this patch?

@andrewor14
Contributor

@marmbrus

@@ -835,8 +835,7 @@ Spark SQL can cache tables using an in-memory columnar format by calling `sqlCon
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
 memory usage and GC pressure. You can call `sqlContext.uncacheTable("tableName")` to remove the table from memory.
 
-Note that if you call `schemaRDD.cache()` rather than `sqlContext.cacheTable(...)`, tables will _not_ be cached using
-the in-memory columnar format, and therefore `sqlContext.cacheTable(...)` is strongly recommended for this use case.
+Note that you call `schemaRDD.cache()` alike `sqlContext.cacheTable(...)` in 1.2 release of Spark, tables will be cached using the in-memory columnar format.
Contributor


Maybe a bit of rewording:

Note that starting from Spark 1.2, both `schemaRDD.cache()` and `sqlContext.cacheTable(...)` leverage the in-memory columnar format.
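
For reference, a minimal sketch of the table-level caching path described in the documentation text above, continuing from the earlier sketch and assuming the same `sqlContext` and `people` SchemaRDD (both hypothetical names):

```scala
// Register the SchemaRDD so it can be referenced by name in SQL.
people.registerTempTable("people")

// Cache the table in the in-memory columnar format; subsequent queries scan
// only the columns they need, with compression tuned automatically.
sqlContext.cacheTable("people")
sqlContext.sql("SELECT name FROM people WHERE age > 26").collect()

// Remove the table from memory once it is no longer needed.
sqlContext.uncacheTable("people")
```

Either path ends up in the same columnar cache, which is the point of the doc change in this PR.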

@marmbrus
Contributor

Thanks! Merged to master.

@asfgit closed this in f7a41a0 on Dec 30, 2014