PySpark – How to read JSON with nested arrays as "column-row" or "key-value"
I have a JSON file like the one below, and I need to read it and generate a table with the person's attributes.

```json
{
  "person": [
    ["name", "Guy"],
    ["age", "25"],
    ["height", "2.00"]
  ]
}
```
…
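Before reaching for Spark, it may help to see the target reshape in plain Python: each inner `[key, value]` pair becomes one column of a single row. The sketch below uses only the standard library and an inline copy of the question's payload; the same logic can then be ported to PySpark (e.g. by exploding the `person` array and pivoting on the key).

```python
import json

# Inline copy of the JSON from the question: "person" is a list of
# [key, value] pairs rather than a plain object.
payload = '{ "person": [ ["name", "Guy"], ["age", "25"], ["height", "2.00"] ] }'

data = json.loads(payload)

# Collapse the list of pairs into a single row: column name -> value.
row = {key: value for key, value in data["person"]}
print(row)  # → {'name': 'Guy', 'age': '25', 'height': '2.00'}
```

This makes the desired table shape explicit: one row with columns `name`, `age`, and `height`.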