
Commit 16a3f6c

szehon-ho authored and dongjoon-hyun committed
[SPARK-54595][SQL] Keep existing behavior of MERGE INTO without SCHEMA EVOLUTION clause
### What changes were proposed in this pull request?

Keep the existing behavior of MERGE INTO without a SCHEMA EVOLUTION clause for UPDATE SET * and INSERT *, as well as for UPDATE or INSERT of a struct: throw an exception if the source and target schemas are not exactly the same.

### Why are the changes needed?

While testing this feature, aokolnychyi noted that as of Spark 4.1 the behavior of MERGE INTO changed even without a SCHEMA EVOLUTION clause. In particular:

- Source has fewer columns/nested fields than target => we fill with NULL or DEFAULT for inserts, and the existing value for updates (though this was disabled for nested structs by default in [SPARK-54525](https://issues.apache.org/jira/browse/SPARK-54525) ([#53229](https://github.com/apache/spark/pull/53229))).
- Source has more columns/fields than target => we drop the extra fields.

Initially, I thought this was a good improvement to MERGE INTO, not strictly related to SCHEMA EVOLUTION because the schema is not altered. But Anton made a good point that it may surprise some users, so for now it is better to be conservative and keep exactly the same behavior when no SCHEMA EVOLUTION clause is given. Note: this behavior is still enabled when SCHEMA EVOLUTION is specified, as the user is then explicit about the decision.

### Does this PR introduce _any_ user-facing change?

No, this keeps behavior exactly the same as 4.0 when no SCHEMA EVOLUTION clause is given.

### How was this patch tested?

Added a test and changed existing test output to expect the exception when SCHEMA EVOLUTION is not specified.

### Was this patch authored or co-authored using generative AI tooling?

No

Closes #53326 from szehon-ho/merge_restriction.

Authored-by: Szehon Ho <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
(cherry picked from commit 74b6a93)
Signed-off-by: Dongjoon Hyun <[email protected]>
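For context, a short Spark SQL sketch of the two modes described above (table and column names here are hypothetical, not from this commit; the `MERGE WITH SCHEMA EVOLUTION INTO` form follows the Spark SQL MERGE grammar):

```sql
-- Suppose target t has columns (id, a, b) and source s has only (id, a).

-- Without the clause, a schema mismatch with UPDATE SET * / INSERT *
-- again raises an exception, as it did in Spark 4.0:
MERGE INTO t USING s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- With SCHEMA EVOLUTION, the user has opted in explicitly, so the
-- relaxed alignment (NULL/DEFAULT fill, extra-field handling) applies:
MERGE WITH SCHEMA EVOLUTION INTO t USING s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```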
1 parent edb2ac7 commit 16a3f6c

File tree

4 files changed: +711 −786 lines changed

sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala (4 additions, 6 deletions)

```diff
@@ -1736,9 +1736,8 @@ class Analyzer(
           Assignment(key, sourceAttr)
         }
       } else {
-        sourceTable.output.flatMap { sourceAttr =>
-          findAttrInTarget(sourceAttr.name).map(
-            targetAttr => Assignment(targetAttr, sourceAttr))
+        targetTable.output.map { attr =>
+          Assignment(attr, UnresolvedAttribute(Seq(attr.name)))
         }
       }
       UpdateAction(
@@ -1775,9 +1774,8 @@ class Analyzer(
           Assignment(key, sourceAttr)
         }
       } else {
-        sourceTable.output.flatMap { sourceAttr =>
-          findAttrInTarget(sourceAttr.name).map(
-            targetAttr => Assignment(targetAttr, sourceAttr))
+        targetTable.output.map { attr =>
+          Assignment(attr, UnresolvedAttribute(Seq(attr.name)))
         }
       }
       InsertAction(
```
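The behavioral difference in the Analyzer hunks above can be sketched in plain Python (a simplification for illustration, not Spark code; function names are made up). The removed 4.1 code iterated over *source* columns and silently dropped any that had no match in the target; the restored code expands `UPDATE SET *` / `INSERT *` to one assignment per *target* column, looked up in the source by name, so a target column missing from the source surfaces as an error (in Spark, as an unresolved-attribute failure during analysis):

```python
def expand_star_old(target_cols, source_cols):
    # Spark 4.1 behavior: iterate over source columns and keep only those
    # present in the target -- extra source columns are silently dropped,
    # and target columns absent from the source are simply never assigned.
    return [(c, c) for c in source_cols if c in target_cols]

def expand_star_restored(target_cols, source_cols):
    # Restored 4.0 behavior: one assignment per target column, resolved
    # against the source by name -- a missing source column is an error.
    assignments = []
    for c in target_cols:
        if c not in source_cols:
            raise ValueError(f"cannot resolve column '{c}' in the source")
        assignments.append((c, c))
    return assignments

target = ["id", "a", "b"]
source = ["id", "a", "extra"]

print(expand_star_old(target, source))   # 'extra' dropped, 'b' never assigned
try:
    expand_star_restored(target, source)
except ValueError as e:
    print(e)
```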

sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveRowLevelCommandAssignments.scala (1 addition, 1 deletion)

```diff
@@ -53,7 +53,7 @@ object ResolveRowLevelCommandAssignments extends Rule[LogicalPlan] {
     case m: MergeIntoTable if !m.skipSchemaResolution && m.resolved && m.rewritable && !m.aligned &&
         !m.needSchemaEvolution =>
       validateStoreAssignmentPolicy()
-      val coerceNestedTypes = SQLConf.get.coerceMergeNestedTypes
+      val coerceNestedTypes = SQLConf.get.coerceMergeNestedTypes && m.withSchemaEvolution
       m.copy(
         targetTable = cleanAttrMetadata(m.targetTable),
         matchedActions = alignActions(m.targetTable.output, m.matchedActions,
```
