
I'm using jq (the command-line JSON processor) in a shell script to parse JSON.

I've got two JSON files and want to merge them into a single file.

Here is the content of the files:

file1

{
  "value1": 200,
  "timestamp": 1382461861,
  "value": {
    "aaa": { "value1": "v1", "value2": "v2" },
    "bbb": { "value1": "v1", "value2": "v2" },
    "ccc": { "value1": "v1", "value2": "v2" }
  }
}

file2

{
  "status": 200,
  "timestamp": 1382461861,
  "value": {
    "aaa": { "value3": "v3", "value4": 4 },
    "bbb": { "value3": "v3" },
    "ddd": { "value3": "v3", "value4": 4 }
  }
}

expected result

{
  "value": {
    "aaa": { "value1": "v1", "value2": "v2", "value3": "v3", "value4": 4 },
    "bbb": { "value1": "v1", "value2": "v2", "value3": "v3" },
    "ccc": { "value1": "v1", "value2": "v2" },
    "ddd": { "value3": "v3", "value4": 4 }
  }
}

I've tried a lot of combinations, but the only result I get is the following, which is not what I expected:

{ "ccc": { "value2": "v2", "value1": "v1" }, "bbb": { "value2": "v2", "value1": "v1" }, "aaa": { "value2": "v2", "value1": "v1" } }
{ "ddd": { "value4": 4, "value3": "v3" }, "bbb": { "value3": "v3" }, "aaa": { "value4": 4, "value3": "v3" } }

Using this command:

jq -s '.[].value' file1 file2 
  • Have you tried jsontool? github.com/trentm/json (Commented Oct 22, 2013 at 22:57)
  • To do this with json use: cat f1 f2 | json --deep-merge (Commented Nov 21, 2014 at 5:47)
  • where/how do you get json @xer0x? (Commented Oct 27, 2017 at 13:17)
  • @Gus oh, to get the json tool go to github.com/trentm/json (Commented Nov 2, 2017 at 20:14)

9 Answers


Since jq 1.4 this is possible with the * operator. When given two objects, it merges them recursively. For example,

jq -s '.[0] * .[1]' file1 file2 

Important: note the -s (--slurp) flag, which reads both files into a single array.

This would get you:

{
  "value1": 200,
  "timestamp": 1382461861,
  "value": {
    "aaa": { "value1": "v1", "value2": "v2", "value3": "v3", "value4": 4 },
    "bbb": { "value1": "v1", "value2": "v2", "value3": "v3" },
    "ccc": { "value1": "v1", "value2": "v2" },
    "ddd": { "value3": "v3", "value4": 4 }
  },
  "status": 200
}

If you also want to get rid of the other keys (like your expected result), one way to do it is this:

jq -s '.[0] * .[1] | {value: .value}' file1 file2 

Or the presumably somewhat more efficient (because it doesn't merge any other values):

jq -s '.[0].value * .[1].value | {value: .}' file1 file2 
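As a minimal illustration of the difference between the two operators (assuming jq 1.4 or later), shallow + replaces nested objects wholesale, while * combines them key by key:

```shell
# Shallow merge: the right-hand "a" replaces the left-hand "a" wholesale
echo '{"a":{"x":1},"b":2} {"a":{"y":3}}' | jq -sc '.[0] + .[1]'
# → {"a":{"y":3},"b":2}

# Recursive merge: nested objects are combined key by key
echo '{"a":{"x":1},"b":2} {"a":{"y":3}}' | jq -sc '.[0] * .[1]'
# → {"a":{"x":1,"y":3},"b":2}
```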

8 Comments

  • NOTE: the -s flag is important, as without it the two objects are not in an array.
  • @SimoKinnunen Here we are merging two JSON files. Is it possible to merge one JSON variable and one JSON file? I tried, but it doesn't seem to work for me.
  • If the individual files are sorted by a key, is it possible to preserve the order in the resulting file?
  • Note that {value: .value} can be shortened to just {value}.
  • For @JayeshDhandha and future readers, merging one variable and one file can be done like this: jq ". * $json_variable" json_input_file.json > json_output_file.json

Use jq -s add:

$ echo '{"a":"foo","b":"bar"} {"c":"baz","a":0}' | jq -s add
{
  "a": 0,
  "b": "bar",
  "c": "baz"
}

This reads all JSON texts from stdin into an array (jq -s does that), then it "reduces" them.

(add is defined as def add: reduce .[] as $x (null; . + $x);, which iterates over the input array's/object's values and adds them. Object addition == merge.)
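To illustrate that definition, add folds any number of slurped objects left to right with +, so later values win for repeated keys (a minimal sketch):

```shell
# add is equivalent to .[0] + .[1] + .[2] + ...; later keys overwrite earlier ones
echo '{"a":1} {"b":2} {"a":9}' | jq -sc 'add'
# → {"a":9,"b":2}
```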

3 Comments

  • Is it possible to do the recursive merge (* operator) with this approach?
  • @DaveFoster Yes, with something like reduce .[] as $x ({}; . * $x) – see also jrib's answer.
  • echo '{"a":1}' | jq -s '[.[0], {"b":$b}] | add' --argjson b 2

Here's a version that works recursively (using *) on an arbitrary number of objects:

echo '{"A": {"a": 1}}' '{"A": {"b": 2}}' '{"B": 3}' |
  jq --slurp 'reduce .[] as $item ({}; . * $item)'
{
  "A": {
    "a": 1,
    "b": 2
  },
  "B": 3
}

2 Comments

  • Great answer. Works well when merging all files in a directory: jq -s 'reduce .[] as $item ({}; . * $item)' *.json
  • My json content is an array of objects, so this works for me: echo $JSON_ONE $JSON_TWO | jq -s 'flatten | group_by(keys[]) | map(reduce .[] as $item ({}; . * $item))'

Who knows if you still need it, but here is the solution.

Once you get to the --slurp option, it's easy!

--slurp/-s: Instead of running the filter for each JSON object in the input, read the entire input stream into a large array and run the filter just once. 

Then the + operator will do what you want:

jq -s '.[0] + .[1]' config.json config-user.json 

(Note: if you want to merge inner objects instead of just overwriting the left file ones with the right file ones, you will need to do it manually)
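That caveat matters for the question's data: with + the right-hand value object replaces the left one wholesale, while * merges them. A small demonstration with simplified input:

```shell
# Shallow merge discards file1's nested keys under "value"
echo '{"value":{"aaa":1}} {"value":{"bbb":2}}' | jq -sc '.[0] + .[1]'
# → {"value":{"bbb":2}}

# Recursive merge keeps both
echo '{"value":{"aaa":1}} {"value":{"bbb":2}}' | jq -sc '.[0] * .[1]'
# → {"value":{"aaa":1,"bbb":2}}
```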

1 Comment

Thanks… really useful point about how right-most inputs take precedence.

None of the solutions or comments given so far uses input to access the second file. Employing it makes the buildup of an additional structure to extract from unnecessary, such as the all-embracing array created by the --slurp (or -s) option, which features in almost all of the other approaches.

To merge two files on top level, simply add the second file from input to the first in . using +:

jq '. + input' file1.json file2.json 

To merge two files recursively on all levels, do the same using * as operator instead:

jq '. * input' file1.json file2.json 

That said, to recursively merge your two files, with both objects reduced to their value field, filter them first using {value}:

jq '{value} * (input | {value})' file1.json file2.json 
{
  "value": {
    "aaa": { "value1": "v1", "value2": "v2", "value3": "v3", "value4": 4 },
    "bbb": { "value1": "v1", "value2": "v2", "value3": "v3" },
    "ccc": { "value1": "v1", "value2": "v2" },
    "ddd": { "value3": "v3", "value4": 4 }
  }
}

Demo

Note that a solution which reduces only after the merge, such as . * input | {value}, is shorter in code but reintroduces the overhead of building an additional structure to extract from, which can be significant if the parts eventually cut off are big.

In order to operate on more than two files, either accordingly use input multiple times, or programmatically iterate over all of them using inputs instead, as in

jq 'reduce inputs as $i (.; . * $i)' file*.json 

Note that in either case the first file is always accessed via the input context . while input(s) only addresses the remaining files, i.e. starting from the second (unless, of course, the --null-input or -n option is given).
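As a sketch of that last point (with hypothetical file names): with --null-input, . starts as null, so every file, including the first, arrives via inputs and is treated uniformly:

```shell
# With -n, no file is bound to "."; all files come from inputs
jq -n 'reduce inputs as $i ({}; . * $i)' file1.json file2.json
```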



This can be used to merge any number of files specified on the command line:

jq -rs 'reduce .[] as $item ({}; . * $item)' file1.json file2.json file3.json ... file10.json

or this, for all JSON files in the current directory:

jq -rs 'reduce .[] as $item ({}; . * $item)' ./*.json

1 Comment

I do not think that raw output (-r) of objects makes much sense here.

First, {"value": .value} can be abbreviated to just {value}.

Second, the --argfile option (available in jq 1.4 and jq 1.5) may be of interest as it avoids having to use the --slurp option.

Putting these together, the two objects in the two files can be combined in the specified way as follows:

$ jq -n --argfile o1 file1 --argfile o2 file2 '$o1 * $o2 | {value}' 

The '-n' flag tells jq not to read from stdin, since inputs are coming from the --argfile options here.

Note on --argfile

The jq manual deprecates --argfile because its semantics are non-trivial: if the specified input file contains exactly one JSON entity, then that entity is read as is; otherwise, the items in the stream are wrapped in an array.

If you are uncomfortable using --argfile, there are several alternatives you may wish to consider. In doing so, be assured that using --slurpfile does not incur the inefficiencies of the -s command-line option when the latter is used with multiple files.

1 Comment

jq has deprecated --argfile in favor of --slurpfile

I didn't want to discard the previous values of non-unique keys in my objects:

jq -n '{a:1, c:2}, {b:3, d:4}, {a:5, d:6}' |
  jq -s 'map(to_entries) | flatten | group_by(.key) | map({(.[0].key): map(.value) | add}) | add'
{
  "a": 6,
  "b": 3,
  "c": 2,
  "d": 10
}

or alternatively, if you want to keep an array of the values instead, remove the final add after the extraction of the values (map(.value) instead of map(.value) | add):

jq -n '{a:1, c:2}, {b:3, d:4}, {a:5, d:6}' |
  jq -s 'map(to_entries) | flatten | group_by(.key) | map({(.[0].key): map(.value)}) | add'
{
  "a": [1, 5],
  "b": [3],
  "c": [2],
  "d": [4, 6]
}

To see how each step transforms the array of objects, run the pipeline one stage at a time and compare the outputs:

map(to_entries)
map(to_entries) | flatten
map(to_entries) | flatten | group_by(.key)
map(to_entries) | flatten | group_by(.key) | map({(.[0].key): map(.value)})
map(to_entries) | flatten | group_by(.key) | map({(.[0].key): map(.value)}) | add

0

Merging two JSONL files line by line

I came across this closely related use case, which may be of interest to others. Suppose you have:

in1.jsonl

{"a":1, "b":2}
{"a":3, "b":4}

in2.jsonl

{"c":5, "d":6}
{"c":7, "d":8}

and you want:

{"a":1, "b":2, "c":5, "d":6}
{"a":3, "b":4, "c":7, "d":8}

I've managed to do that with:

paste in1.jsonl in2.jsonl | jq '. * input' 

The way this works is that paste first joins the two JSONL files line by line (tab-separated), producing a weird JSONL-like format with two objects per line:

{"a":1,"b":2}	{"c":5,"d":6}
{"a":3,"b":4}	{"c":7,"d":8}

but jq doesn't care and treats that like a stream of JSON values. Then input pulls the next object out of the stream ahead of time, and we merge the current one (.) with it using *.

Unfortunately I couldn't get rid of the paste and do it in pure jq, but it's not too bad.

Tested on jq 1.7, Ubuntu 25.04.
