* [SPARK-33641][SQL][DOC][FOLLOW-UP] Add migration guide for CHAR VARCHAR types
### What changes were proposed in this pull request?
Add migration guide for CHAR VARCHAR types
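To make the new semantic concrete, here is a minimal sketch of the behavior the guide describes (runnable in spark-shell, where `spark` is predefined; the table name is made up):

```scala
// Spark 3.1 char semantics: values in a CHAR(n) column are padded to the
// declared length, so length() sees the padded value (table is hypothetical).
spark.sql("CREATE TABLE t (c CHAR(4)) USING parquet")
spark.sql("INSERT INTO t VALUES ('ab')")
spark.sql("SELECT length(c) FROM t").show()  // 4: 'ab' is stored as 'ab  '

// CAST is the one exception that still treats char/varchar as plain STRING.
spark.sql("SELECT CAST('ab' AS CHAR(4))").show()

// Restore the pre-3.1 behavior (treat CHAR/VARCHAR as STRING, ignore length):
spark.conf.set("spark.sql.legacy.charVarcharAsString", "true")
```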
### Why are the changes needed?
for migration
### Does this PR introduce _any_ user-facing change?
doc change
### How was this patch tested?
passing ci
Closes apache#30654 from yaooqinn/SPARK-33641-F.
Authored-by: Kent Yao <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
* [SPARK-33669] Wrong error message from YARN application state monitor when sc.stop in yarn client mode
### What changes were proposed in this pull request?
This change makes InterruptedIOException be treated like InterruptedException when closing YarnClientSchedulerBackend, so that no error like "YARN application has exited unexpectedly xxx" is logged.
### Why are the changes needed?
In YARN client mode, stopping YarnClientSchedulerBackend first interrupts the YARN application monitor thread. MonitorThread.run() catches InterruptedException so it can respond gracefully to the stop request.
However, client.monitorApplication also throws InterruptedIOException when a Hadoop RPC call is in flight. In that case MonitorThread does not know it was interrupted: a YARN application failure is reported, and "Failed to contact YARN for application xxxxx; YARN application has exited unexpectedly with state xxxxx" is logged at error level, which is very confusing to users.
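A minimal sketch of the fix's idea (placeholder names, not the actual Spark code): treat InterruptedIOException thrown by the blocked Hadoop RPC call the same as InterruptedException, so a deliberate stop is logged quietly instead of as an unexpected YARN failure.

```scala
import java.io.InterruptedIOException

// Sketch only: the real monitor thread lives in YarnClientSchedulerBackend.
class MonitorThread extends Thread {
  override def run(): Unit = {
    try {
      monitorApplication() // would block on a Hadoop RPC call
    } catch {
      // Both exceptions now mean "we were asked to stop", not a YARN failure.
      case _: InterruptedException | _: InterruptedIOException =>
        logInfo("Interrupted while monitoring the YARN application; shutting down.")
      case e: Exception =>
        logError(s"YARN application has exited unexpectedly: ${e.getMessage}")
    }
  }
  // Placeholders so this sketch is self-contained:
  private def monitorApplication(): Unit = ()
  private def logInfo(msg: String): Unit = println(msg)
  private def logError(msg: String): Unit = Console.err.println(msg)
}
```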
### Does this PR introduce _any_ user-facing change?
Yes
### How was this patch tested?
Very simple patch; no new test seems needed.
Closes apache#30617 from sqlwindspeaker/yarn-client-interrupt-monitor.
Authored-by: suqilong <[email protected]>
Signed-off-by: Mridul Muralidharan <mridul<at>gmail.com>
* [SPARK-33655][SQL] Improve performance of processing FETCH_PRIOR
### What changes were proposed in this pull request?
Currently, when a client sends FETCH_PRIOR to Thriftserver, Thriftserver reiterates from the start position. Because Thriftserver caches the query result in an array when the THRIFTSERVER_INCREMENTAL_COLLECT feature is off, FETCH_PRIOR can be implemented without reiterating the result. A trait FetchIterator is added to separate the implementations for an iterator and an array. FetchIterator also supports moving the cursor to an absolute position, which will be useful for implementing FETCH_RELATIVE and FETCH_ABSOLUTE.
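A rough sketch of the abstraction (signatures are assumptions based on the description above, not the exact code from the patch): an array-backed iterator can reposition its cursor in O(1), so FETCH_PRIOR becomes a cursor move instead of a re-iteration from the start.

```scala
// Sketch of the FetchIterator idea; the real trait in the patch may differ.
trait FetchIterator[A] extends Iterator[A] {
  def fetchAbsolute(pos: Long): Unit // move the cursor to an absolute offset
  def getPosition: Long
}

// Used when THRIFTSERVER_INCREMENTAL_COLLECT is off and results are cached.
class ArrayFetchIterator[A](batch: Array[A]) extends FetchIterator[A] {
  private var cursor = 0L
  override def fetchAbsolute(pos: Long): Unit =
    cursor = math.max(0L, math.min(pos, batch.length.toLong))
  override def getPosition: Long = cursor
  override def hasNext: Boolean = cursor < batch.length
  override def next(): A = { val v = batch(cursor.toInt); cursor += 1; v }
}
```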
### Why are the changes needed?
For better performance of Thriftserver.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
FetchIteratorSuite
Closes apache#30600 from Dooyoung-Hwang/refactor_with_fetch_iterator.
Authored-by: Dooyoung Hwang <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
* [SPARK-33719][DOC] Add make_date/make_timestamp/make_interval into the doc of ANSI Compliance
### What changes were proposed in this pull request?
Add make_date/make_timestamp/make_interval into the doc of ANSI Compliance
### Why are the changes needed?
So that users know these functions throw runtime exceptions under ANSI mode when the result is not valid.
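For example (runnable in spark-shell, where `spark` is predefined), an out-of-range month makes `make_date` fail fast once ANSI mode is on:

```scala
spark.conf.set("spark.sql.ansi.enabled", "true")
// Month 13 is invalid: under ANSI mode this throws a runtime exception,
// whereas with ANSI mode off the expression would return NULL instead.
spark.sql("SELECT make_date(2020, 13, 1)").show()
```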
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Built the doc and checked it in a browser.
Closes apache#30683 from gengliangwang/improveDoc.
Authored-by: Gengliang Wang <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
* [SPARK-33071][SPARK-33536][SQL][FOLLOW-UP] Rename deniedMetadataKeys to nonInheritableMetadataKeys in Alias
### What changes were proposed in this pull request?
This PR is a followup of apache#30488. This PR proposes to rename `Alias.deniedMetadataKeys` to `Alias.nonInheritableMetadataKeys` to make it less confusing.
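For context, a sketch of how the renamed parameter reads at a call site (the curried parameter list is recalled from the Spark 3.1 sources and may not match exactly; the metadata key is hypothetical):

```scala
import org.apache.spark.sql.catalyst.expressions.{Alias, AttributeReference}
import org.apache.spark.sql.types.StringType

val attr = AttributeReference("c", StringType)()
// Metadata under the listed keys is not inherited from the child expression.
val alias = Alias(attr, "c2")(nonInheritableMetadataKeys = Seq("some_key"))
```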
### Why are the changes needed?
To make it easier to maintain and read.
### Does this PR introduce _any_ user-facing change?
No. This is rather a code cleanup.
### How was this patch tested?
Manually ran the unit tests written in the previous PR. Jenkins and GitHub Actions in this PR should also run them.
Closes apache#30682 from HyukjinKwon/SPARK-33071-SPARK-33536.
Authored-by: HyukjinKwon <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
* [SPARK-33722][SQL] Handle DELETE in ReplaceNullWithFalseInPredicate
### What changes were proposed in this pull request?
This PR adds `DeleteFromTable` to supported plans in `ReplaceNullWithFalseInPredicate`.
### Why are the changes needed?
This change allows Spark to optimize delete conditions like we optimize filters.
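For intuition (table and column names are hypothetical): a row whose delete condition evaluates to NULL is not deleted, exactly as if the condition were FALSE, so the NULL literal can be replaced safely.

```scala
// The IF may produce NULL, which behaves like FALSE in a predicate position.
spark.sql("DELETE FROM target WHERE IF(id > 0, NULL, TRUE)")
// ReplaceNullWithFalseInPredicate can rewrite the condition as
// IF(id > 0, FALSE, TRUE), which later rules may simplify further
// (e.g. toward NOT(id > 0)).
```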
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
This PR extends the existing test cases to also cover `DeleteFromTable`.
Closes apache#30688 from aokolnychyi/spark-33722.
Authored-by: Anton Okolnychyi <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
Co-authored-by: Kent Yao <[email protected]>
Co-authored-by: suqilong <[email protected]>
Co-authored-by: Dooyoung Hwang <[email protected]>
Co-authored-by: Gengliang Wang <[email protected]>
Co-authored-by: HyukjinKwon <[email protected]>
Co-authored-by: Anton Okolnychyi <[email protected]>
docs/sql-migration-guide.md (2 additions, 0 deletions):

@@ -58,6 +58,8 @@ license: |

 - In Spark 3.1, creating or altering a view will capture runtime SQL configs and store them as view properties. These configs will be applied during the parsing and analysis phases of the view resolution. To restore the behavior before Spark 3.1, you can set `spark.sql.legacy.useCurrentConfigsForView` to `true`.

+- Since Spark 3.1, CHAR/CHARACTER and VARCHAR types are supported in the table schema. Table scan/insertion will respect the char/varchar semantic. If char/varchar is used in places other than table schema, an exception will be thrown (CAST is an exception that simply treats char/varchar as string like before). To restore the behavior before Spark 3.1, which treats them as STRING types and ignores a length parameter, e.g. `CHAR(4)`, you can set `spark.sql.legacy.charVarcharAsString` to `true`.
+
 ## Upgrading from Spark SQL 3.0 to 3.0.1

 - In Spark 3.0, JSON datasource and JSON function `schema_of_json` infer TimestampType from string values if they match to the pattern defined by the JSON option `timestampFormat`. Since version 3.0.1, the timestamp type inference is disabled by default. Set the JSON option `inferTimestamp` to `true` to enable such type inference.
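For example, since timestamp inference is now off by default, reading JSON timestamps as TimestampType requires opting in (the path is hypothetical; runnable in spark-shell, where `spark` is predefined):

```scala
val df = spark.read
  .option("inferTimestamp", "true")
  .json("/path/to/data.json")
```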
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceNullWithFalseInPredicateSuite.scala