I’ve got three files containing line-delimited, compact JSON objects. All objects share the same root fields "id" and "test":
file1.json:
{"test":{"a":2},"id":"850303847"}
{"test":{"a":3},"id":"2742540872"}
{"test":{"a":4},"id":"1358887220"}
file2.json:
{"test":{"b":3},"id":"850303847"}
{"test":{"b":3},"id":"2742540872"}
{"test":{"b":4},"id":"1358887220"}
file3.json:
{"test":{"c":8},"id":"850303847"}
{"test":{"c":4},"id":"2742540872"}
{"test":{"c":5},"id":"1358887220"}
I would like to merge these files based on their IDs, resulting in:
{"test":{"a":2,"b":3,"c":8},"id":"850303847"}
{"test":{"a":3,"b":3,"c":4},"id":"2742540872"}
{"test":{"a":4,"b":4,"c":5},"id":"1358887220"}
I’ve looked into jq’s -s (slurp) option for this but failed to find a way. Any ideas how to achieve this in Bash (with or without jq)?
2 Answers
You could reduce all inputs into an object with .id as key, deep-merge using *, then iterate over .[] to obtain a stream again (use the -c flag for compact output):
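A sketch of that approach (the exact command isn’t preserved here, so treat this as a reconstruction; the `// {}` guards the first merge for each id, since the slot starts out null):

```shell
# Build one object keyed by id, deep-merging each input object into its
# slot with *; then .[] streams the merged values back out, and -c keeps
# each result on a single line.
jq -cn 'reduce inputs as $i ({}; .[$i.id] = (.[$i.id] // {}) * $i) | .[]' \
  file1.json file2.json file3.json
# {"test":{"a":2,"b":3,"c":8},"id":"850303847"}
# {"test":{"a":3,"b":3,"c":4},"id":"2742540872"}
# {"test":{"a":4,"b":4,"c":5},"id":"1358887220"}
```

Because jq objects preserve insertion order, the output keeps the ids in the order they first appear across the input files.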
Here is one way:
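The snippet for this answer is not preserved; one plausible variant, using the -s (slurp) flag the question mentions together with group_by (an assumption on my part, not the original author’s code), could look like:

```shell
# -s slurps every object from all files into a single array;
# group_by(.id) buckets them by id (note: it also sorts by id),
# and each bucket is deep-merged into one object with *.
jq -cs 'group_by(.id) | map(reduce .[] as $i ({}; . * $i)) | .[]' \
  file1.json file2.json file3.json
# {"test":{"a":4,"b":4,"c":5},"id":"1358887220"}
# {"test":{"a":3,"b":3,"c":4},"id":"2742540872"}
# {"test":{"a":2,"b":3,"c":8},"id":"850303847"}
```

Unlike the reduce-over-inputs approach, group_by sorts, so the merged objects come out ordered by id rather than in input order.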