That jq command, broken down and commented:
jq -c '
  .results[] |         # iterate over the elements of the .results array
                       # (which are also arrays)
  map(.key=.field) |   # for each of those arrays, transform the
                       # elements (which are objects) by adding a
                       # field of key "key" with same value as that
                       # with "field" key in each, as that's what
                       # from_entries needs
  from_entries |       # transforms those [{"key":"foo","value":"bar"}]
                       # (the "field" field is ignored) to {"foo":"bar"}
  del(."@ptr")         # deletes the field with key "@ptr" from those
                       # objects
  ' file.json
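For illustration, here's a minimal made-up input of that shape (the @ptr field suggests AWS CloudWatch Logs Insights query results, but any .results array of field/value pairs works; the file name and values are invented for this example):

  $ cat file.json
  {
    "results": [
      [
        {"field": "@timestamp", "value": "2024-01-01 00:00:01.000"},
        {"field": "@message", "value": "first message"},
        {"field": "@ptr", "value": "..."}
      ],
      [
        {"field": "@timestamp", "value": "2024-01-01 00:00:02.000"},
        {"field": "@message", "value": "second message"},
        {"field": "@ptr", "value": "..."}
      ]
    ]
  }
  $ jq -c '.results[]|map(.key=.field)|from_entries|del(."@ptr")' file.json
  {"@timestamp":"2024-01-01 00:00:01.000","@message":"first message"}
  {"@timestamp":"2024-01-01 00:00:02.000","@message":"second message"}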
The result is not a single JSON document but several JSON documents concatenated together; both jq and mlr support that. With -c (compact output), that's NDJSON (newline-delimited JSON), one JSON document per line, which vd also supports. To get proper JSON, we'd need:
jq -c '.results|map(map(.key=.field)|from_entries|del(."@ptr"))' file.json
Here we use map on the .results array so that it produces another JSON array instead of iterating over its elements, so the end result is one large array. That's also supported by jq (obviously, as it's proper JSON), mlr and vd, but it's a bit longer to type and means those tools need to read up to the closing ] at the very end before they have anything to chew on. In practice, I haven't checked whether that makes any difference in terms of performance, though.
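With the same sample input as above, that variant yields a single array:

  $ jq -c '.results|map(map(.key=.field)|from_entries|del(."@ptr"))' file.json
  [{"@timestamp":"2024-01-01 00:00:01.000","@message":"first message"},{"@timestamp":"2024-01-01 00:00:02.000","@message":"second message"}]

And a sketch of feeding the NDJSON form to the other tools (the --ijsonl flag is Miller 6 syntax and -f jsonl is vd's explicit filetype option; both are from memory, so double-check against your installed versions):

  $ jq -c '.results[]|map(.key=.field)|from_entries|del(."@ptr")' file.json | mlr --ijsonl --ocsv cat
  @timestamp,@message
  2024-01-01 00:00:01.000,first message
  2024-01-01 00:00:02.000,second message
  $ jq -c '.results[]|map(.key=.field)|from_entries|del(."@ptr")' file.json | vd -f jsonl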