
When running my job, I am getting the following exception:

Exception in User Class: org.apache.spark.SparkException : Job aborted due to stage failure: Task 32 in stage 2.0 failed 4 times, most recent failure: Lost task 32.3 in stage 2.0 (TID 50) (10.100.1.48 executor 8): org.apache.spark.SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: reading dates before 1582-10-15 or timestamps before 1900-01-01T00:00:00Z from Parquet INT96 files can be ambiguous, as the files may be written by Spark 2.x or legacy versions of Hive, which uses a legacy hybrid calendar that is different from Spark 3.0+'s Proleptic Gregorian calendar. See more details in SPARK-31404. You can set spark.sql.legacy.parquet.int96RebaseModeInRead to 'LEGACY' to rebase the datetime values w.r.t. the calendar difference during reading. Or set spark.sql.legacy.parquet.int96RebaseModeInRead to 'CORRECTED' to read the datetime values as it is.

I have tried to apply the requested configuration value, as follows:

    import com.amazonaws.services.glue.GlueContext
    import com.amazonaws.services.glue.util.{GlueArgParser, Job}
    import org.apache.spark.{SparkConf, SparkContext}
    import scala.collection.JavaConverters._

    // Set the rebase mode on the SparkConf before the SparkContext is created
    val conf = new SparkConf()
    conf.set("spark.sql.legacy.parquet.int96RebaseModeInRead", "LEGACY")

    val spark: SparkContext = new SparkContext(conf)

    val glueContext: GlueContext = new GlueContext(spark)
    val args = GlueArgParser.getResolvedOptions(
      sysArgs, // the Array[String] passed to the job's main method
      Seq("JOB_NAME").toArray
    )
    Job.init(args("JOB_NAME"), glueContext, args.asJava)

but the same error occurs. I have also tried setting it to "CORRECTED" via the same approach.

It seems that the config is not making its way into the Spark session. What is the proper way to set Spark config values from a Scala Spark job on Glue?

2 Answers


  1. Chosen as BEST ANSWER

    This code at the top of my Glue job seems to have done the trick:

        import com.amazonaws.services.glue.GlueContext
        import org.apache.spark.{SparkConf, SparkContext}

        val conf = new SparkConf()

        // Alternatively, use LEGACY if rebasing to the old hybrid calendar is required
        conf.set("spark.sql.legacy.parquet.int96RebaseModeInRead", "CORRECTED")
        conf.set("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED")
        conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "CORRECTED")
        conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "CORRECTED")

        // The conf must be applied before the SparkContext is instantiated
        val spark: SparkContext = new SparkContext(conf)

        val glueContext: GlueContext = new GlueContext(spark)
    
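    To confirm the values actually reached the session, you can read them back at runtime. A minimal sanity check, assuming the glueContext from above (GlueContext extends Spark's SQLContext, so the underlying SparkSession should be reachable):

        // Read the effective value back from the session created for the job;
        // this should print "CORRECTED" if the SparkConf was picked up.
        val session = glueContext.sparkSession
        println(session.conf.get("spark.sql.legacy.parquet.int96RebaseModeInRead"))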

  2. When you are migrating between versions, it is always best to check the AWS migration guides. In your case, the settings can be passed through your Glue job properties via the job parameter below, as required. To set it, navigate to the Glue console -> Jobs -> select your job -> Job details -> Advanced properties -> Job parameters.

    - Key: --conf
    - Value: spark.sql.legacy.parquet.int96RebaseModeInRead=[CORRECTED|LEGACY] --conf spark.sql.legacy.parquet.int96RebaseModeInWrite=[CORRECTED|LEGACY] --conf spark.sql.legacy.parquet.datetimeRebaseModeInRead=[CORRECTED|LEGACY]
    
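    These are runtime SQL options, so as an alternative to job parameters they can also (as far as I can tell) be set on the session once it exists. A minimal sketch, assuming a GlueContext named glueContext has already been initialized:

        // Set the rebase modes on the live session instead of via job parameters.
        // Use LEGACY instead of CORRECTED if the old hybrid-calendar behaviour is needed.
        val session = glueContext.sparkSession
        session.conf.set("spark.sql.legacy.parquet.int96RebaseModeInRead", "CORRECTED")
        session.conf.set("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED")
        session.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "CORRECTED")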

    Please refer to the guide below for more information:

    https://docs.aws.amazon.com/glue/latest/dg/migrating-version-30.html#migrating-version-30-from-20
