Alright. I have a table that has SUPER type fields. These fields hold values like below:
id mycol
---------------------------------
1 [{"Title":"first"},{"Title":"Second"},{"Title":"Third"},{"Title":"fourth"}]
2 [{"Title":"some"},{"Title":"thing"}]
3 [{"Title":"else"}]
I want to extract the JSON values and concatenate them with commas in each row.
My current query:
with cte as (
    SELECT t.id, item
    FROM mytable t, t.mycol AS item AT index
)
select id, item."Title" from cte;
This generates output as:
id mycol
-------------
1 first
1 second
1 third
1 fourth
2 some
......and so on
I want my results to be:
id mycol
-------------
1 first,second,third,fourth
2 some,thing
3 else
So to merge rows with the same ID, I used LISTAGG:
SELECT id, LISTAGG(item."Title", ',') AS mycol
FROM mytable t, t.mycol AS item AT index
GROUP BY id;
This throws an error that LISTAGG(SUPER) is not available. Any function I apply to the SUPER field throws the same kind of "not available" error.
I don't want to leave them as SUPER fields, because further operations like LISTAGG/CONCAT become difficult. So I want to convert the field to VARCHAR after extracting it from the JSON. Any help?
2 Answers
I believe it's a problem with inferring the argument type. Try casting it explicitly to VARCHAR.
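For example (untested, reusing the table and column names from your question):
SELECT id, LISTAGG(item."Title"::VARCHAR, ',') AS mycol
FROM mytable t, t.mycol AS item
GROUP BY id;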
By default Redshift is case insensitive for identifiers. "SELECT FOO from table;" and "SELECT foo from table;" are exactly the same.
This presents a problem when it comes to SUPER data types, as the identifiers inside a SUPER value are case sensitive. So if you use anything other than lower-case identifiers in your SUPER columns, you need to tell Redshift to be case sensitive with a session-level SET.
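If I remember the parameter name correctly, it is enable_case_sensitive_identifier:
SET enable_case_sensitive_identifier TO true;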
Since your SUPER has the identifier "Title", which uses upper case, I suspect this is your issue. Just be careful: the two SELECT examples above won't produce the same result after you turn on case sensitivity.
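Putting both suggestions together, something like this should produce the comma-separated output you want (untested sketch, using the table and column names from the question):
SET enable_case_sensitive_identifier TO true;

WITH cte AS (
    SELECT t.id, item."Title"::VARCHAR AS title
    FROM mytable t, t.mycol AS item
)
SELECT id, LISTAGG(title, ',') AS mycol
FROM cte
GROUP BY id;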