
I have two files with the following JSON that I need to combine using the relative array position of each object:

PS: I am restricted to jq version 1.4, as I am on Solaris, so I don't have the inputs feature.

File 1

{
  "input": [
    { "email": "[email protected]", "firstName": "Fred" },
    { "email": "[email protected]", "firstName": "James" }
  ]
}

File 2:

{
  "result": [
    { "id": 50, "status": "created" },
    { "id": 51, "status": "rejected" }
  ]
}

The expected result combines each element of input with the element of result at the same position, as follows:

{
  "combined": [
    { "email": "[email protected]", "firstName": "Fred", "id": 50, "status": "created" },
    { "email": "[email protected]", "firstName": "James", "id": 51, "status": "rejected" }
  ]
}

2 Answers


You can use the --slurp option to read both files into one array, and from there it's relatively simple to loop over the indices of one of the arrays and add the corresponding elements of both arrays together.

jq --slurp '
  { combined: [ .[0].input as $is
              | .[1].result as $rs
              | range(0; $is|length) as $n
              | $is[$n] + $rs[$n] ] }
' file1.json file2.json
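If it helps to see what the filter is doing, here is a rough Python analogue of the same index-wise merge (not jq itself; the file contents are inlined for illustration, and jq's + on objects behaves like Python's right-biased dict merge):

```python
import json

# The two documents from the question, inlined for illustration.
file1 = json.loads('{"input": [{"email": "[email protected]", "firstName": "Fred"},'
                   ' {"email": "[email protected]", "firstName": "James"}]}')
file2 = json.loads('{"result": [{"id": 50, "status": "created"},'
                   ' {"id": 51, "status": "rejected"}]}')

inputs, results = file1["input"], file2["result"]
# For each index n, merge input[n] with result[n] -- like $is[$n] + $rs[$n] in jq.
combined = {"combined": [{**inputs[n], **results[n]} for n in range(len(inputs))]}
print(json.dumps(combined))
```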

4 Comments

This approach nicely avoids reduce but using keys introduces an unnecessary inefficiency. It would be far better to use range(0; $is|length) to effect the iteration.
@peak That seems to be right (for values of "far better" equal to about 5% performance gain and about 1% memory saved on 1M entries). Changing the code.
@MichałPolitowski - By "far better" I had in mind the role that SO plays in illustrating best practices.
@peak, well, best practices include measuring the impact of a change, don't they? :)
1

If more recent versions of jq were available to you, you could take advantage of the transpose function to combine them rather easily:

$ jq -n '{ combined: ([inputs[]] | transpose | map(add)) }' input1.json input2.json 
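For comparison, jq's transpose | map(add) corresponds roughly to zip plus a dict merge in Python (a sketch with the question's data inlined, not the jq implementation):

```python
inputs = [{"email": "[email protected]", "firstName": "Fred"},
          {"email": "[email protected]", "firstName": "James"}]
results = [{"id": 50, "status": "created"},
           {"id": 51, "status": "rejected"}]

# zip pairs up element n of each list, like transpose;
# {**a, **b} merges each pair of objects, like add.
combined = {"combined": [{**a, **b} for a, b in zip(inputs, results)]}
```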

However, since you are limited to 1.4, your options are a bit limited. When working with multiple files, it's useful to have all the inputs read into memory; --slurp does this by reading every input into a single array. You will, however, have to zip the inputs together differently.

$ jq --slurp '
    add
    | reduce range(0; .input | length) as $i
        (.; .combined += [.input[$i] + .result[$i]])
    | {combined}
  ' input1.json input2.json
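Step by step: slurping gives an array of the two documents, add merges them into one object, the reduce appends each merged pair to a combined key, and the final {combined} keeps only that key. A hedged Python analogue of that reduce (data inlined for illustration):

```python
file1 = {"input": [{"email": "[email protected]", "firstName": "Fred"},
                   {"email": "[email protected]", "firstName": "James"}]}
file2 = {"result": [{"id": 50, "status": "created"},
                    {"id": 51, "status": "rejected"}]}

doc = {**file1, **file2}        # like `add` on the slurped array
acc = dict(doc)                 # the `.` the reduce starts from
acc["combined"] = []
for i in range(len(doc["input"])):  # reduce range(0; .input | length) as $i
    acc["combined"].append({**doc["input"][i], **doc["result"][i]})
combined = {"combined": acc["combined"]}  # like the final {combined}
```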

