Add an option to control checking the strictness of Spark schema when writing an EBCDIC file #833

@yruslan

Description

Background

Currently, if a field exists in the copybook but not in the Spark schema, writing of that field is skipped, effectively emitting zero bytes in its place. To make schema naming bugs easier to catch, add an option to fail the job when a field is defined in the copybook but missing from the Spark schema.

Feature

Add an option to control checking the strictness of Spark schema when writing an EBCDIC file.

Example [Optional]

// Make it true by default
.option("strict_schema", "true")
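In context, the proposed option could be passed to the Spark writer like any other data source option. A minimal sketch, assuming Cobrix's `cobol` format name is also used for writing and using placeholder paths:

```scala
// Sketch only: "strict_schema" is the option proposed in this issue,
// not an existing Cobrix option; paths are placeholders.
df.write
  .format("cobol")
  .option("copybook", "/path/to/copybook.cob")
  // Fail the job if the copybook defines a field absent from df's schema
  .option("strict_schema", "true")
  .save("/path/to/output")
```

With `strict_schema` set to `false`, the writer would keep the current behavior of silently skipping copybook fields that have no matching Spark column.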

Proposed Solution [Optional]

--

Metadata

Labels

    enhancement (New feature or request)
