I am using a flow as follows (basically to fetch a file from S3, convert a few records from the main CSV file, and later push it to Elasticsearch):
GetSQS -> UpdateAttribute -> SplitJson -> EvaluateJsonPath -> UpdateAttribute -> ConvertRecord -> other processors…
I am able to fetch the file from S3 correctly, but the ConvertRecord processor throws an error: Invalid char between encapsulated token and delimiter.
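For reference, that message comes from the CSV parser underneath the record reader: it is thrown when extra characters appear between a closing quote and the delimiter that follows it. A made-up illustration:

    "2018-05-30 10:00:00",1.23          <- parses fine
    "2018-05-30 10:00:00" extra,1.23    <- fails: "extra" sits between the
                                           closing quote and the comma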
Please find the ConvertRecord Configs below:
**CSVRecordReader**: Schema Access Strategy is set to "Use 'Schema Text' Property".
Schema Text:
{
  "type": "record",
  "name": "AVLRecord0",
  "fields": [
    {"name": "TimeOfDay", "type": "string", "logicalType": "timestamp-millis"},
    {"name": "Field_0", "type": "double"},
    {"name": "Field_1", "type": "double"},
    {"name": "Field_2", "type": "double"},
    {"name": "Field_3", "type": "double"},
  ]
}
**CSVRecordSetWriter**:
Schema Write Strategy: Set 'avro.schema' Attribute
Schema Access Strategy: Use 'Schema Text' Property
Please tell me why I am not able to see the converted record after successfully fetching from S3.
The desired output is CSV format only. Please find the attached sample file uploaded on S3; I want to convert only up to Field_5 (see the sketch below).
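If the goal is to emit only the columns up to Field_5, one approach is to list exactly those columns in the writer's Schema Text, since a record writer only emits the fields present in its schema. A sketch, assuming Field_4 and Field_5 are also doubles (they do not appear in the schema above):

    {
      "type": "record",
      "name": "AVLRecord0",
      "fields": [
        {"name": "TimeOfDay", "type": "string", "logicalType": "timestamp-millis"},
        {"name": "Field_0", "type": "double"},
        {"name": "Field_1", "type": "double"},
        {"name": "Field_2", "type": "double"},
        {"name": "Field_3", "type": "double"},
        {"name": "Field_4", "type": "double"},
        {"name": "Field_5", "type": "double"}
      ]
    }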
Attached are the controller services screenshots:
Thank you!
2 Answers
I have figured out my error:
1. I forgot to add a FetchS3Object processor after EvaluateJsonPath.
2. There was an extra comma in my Schema Text property.
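For clarity: in the Schema Text as posted above, the extra comma appears to be the trailing one after the Field_3 entry, which Avro's JSON parser rejects. With it removed, the schema parses cleanly:

    {
      "type": "record",
      "name": "AVLRecord0",
      "fields": [
        {"name": "TimeOfDay", "type": "string", "logicalType": "timestamp-millis"},
        {"name": "Field_0", "type": "double"},
        {"name": "Field_1", "type": "double"},
        {"name": "Field_2", "type": "double"},
        {"name": "Field_3", "type": "double"}
      ]
    }

And the repaired flow, with FetchS3Object placed after EvaluateJsonPath as described:

    GetSQS -> UpdateAttribute -> SplitJson -> EvaluateJsonPath -> FetchS3Object -> UpdateAttribute -> ConvertRecord -> other processors…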
Can you tell me exactly where that extra comma was in your ConvertRecord processor?
I am facing the same issue.
As per my understanding, the issue is occurring because of the size_dimension field.
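One plausible explanation, assuming size_dimension holds values with inch marks (the actual data is below): a quote that opens a CSV field must close it immediately before the delimiter, so a value such as 24" x 36" breaks the parse. A hypothetical line:

    1,"24" x 36",10.0    <- fails: the parser reads "24" as the whole quoted
                            token and then finds ' x 36"' before the comma

A common workaround is to set the reader's Quote Character property to a character that never occurs in the data.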
Below is my CSV data:
And the Avro schema which I have used is: