
I need to use jq to convert the JSON string shown below into a table format for display in the terminal.

{
  "results": [
    [
      { "field": "@timestamp",     "value": "2023-07-03 08:28:00.000" },
      { "field": "CpuReserved",    "value": "8192.0" },
      { "field": "CpuUtilized",    "value": "4056.412942708333" },
      { "field": "MemoryReserved", "value": "61440" },
      { "field": "MemoryUtilized", "value": "4311" },
      { "field": "@ptr",           "value": "CpABClUKUTE0NjcxNzAzNzI0NzovYXdzL2Vjcy9jb250YWluZXJpbnNpZ2h0cy9pcmwtaW5mcmEtc2hhcmVkLWplbmtpbnMtbWFpbi9wZXJmb3JtYW5jZRABEjUaGAIGSaxosQAAAAAd+TFeAAZKKHIwAAABEiABKIDF9taRMTDA7v3WkTE4BkC9NEjNPFCBIRgAEAAYAQ==" }
    ],
    [
      { "field": "@timestamp",     "value": "2023-07-03 08:28:00.000" },
      { "field": "CpuReserved",    "value": "8192.0" },
      { "field": "CpuUtilized",    "value": "4056.412942708333" },
      { "field": "MemoryReserved", "value": "61440" },
      { "field": "MemoryUtilized", "value": "4311" },
      { "field": "@ptr",           "value": "CpABClUKUTE0NjcxNzAzNzI0NzovYXdzL2Vjcy9jb250YWluZXJpbnNpZ2h0cy9pcmwtaW5mcmEtc2hhcmVkLWplbmtpbnMtbWFpbi9wZXJmb3JtYW5jZRABEjUaGAIGSaxosQAAAAAd+TFeAAZKKHIwAAABEiABKIDF9taRMTDA7v3WkTE4BkC9NEjNPFCBIRgAEAEYAQ==" }
    ]
  ]
}

What I want to display in the terminal is as follows:

@timestamp               CpuReserved  CpuUtilized        MemoryReserved  MemoryUtilized
==========================================================================================
2023-07-03 08:16:00.000  8192.0       410.5300065104166  61440           1417
2023-07-03 08:15:00.000  8192.0       702.310791015625   61440           792

Can someone guide me in the right direction?

4 Answers


Maybe:

$ jq -c '.results[]|map(.key=.field)|from_entries|del(."@ptr")' file.json | mlr --ijson --opprint --barred cat
+-------------------------+-------------+-------------------+----------------+----------------+
| @timestamp              | CpuReserved | CpuUtilized       | MemoryReserved | MemoryUtilized |
+-------------------------+-------------+-------------------+----------------+----------------+
| 2023-07-03 08:28:00.000 | 8192.0      | 4056.412942708333 | 61440          | 4311           |
| 2023-07-03 08:28:00.000 | 8192.0      | 4056.412942708333 | 61440          | 4311           |
+-------------------------+-------------+-------------------+----------------+----------------+

Without --barred, that looks like:

@timestamp              CpuReserved CpuUtilized       MemoryReserved MemoryUtilized
2023-07-03 08:28:00.000 8192.0      4056.412942708333 61440          4311
2023-07-03 08:28:00.000 8192.0      4056.412942708333 61440          4311

You could also pipe the output of that jq command to vd -f json (VisiData) instead of mlr (Miller) to get an interactive table viewer.

Those use jq to extract the information and mlr only to format the table. There is some overlap between the feature set of jq and that of mlr. For instance, you could also remove the @ptr column with mlr's cut:

jq -c '.results[]|map(.key=.field)|from_entries' file.json | mlr --ijson --opprint cut -xf @ptr 

That jq command, broken down and commented:

jq -c '
  .results[] |        # iterate over the elements of the .results array
                      # (which are also arrays)
  map(.key=.field) |  # for each of those arrays, transform the elements
                      # (which are objects) by adding a field with key "key"
                      # and the same value as the "field" entry, as that is
                      # what from_entries needs
  from_entries |      # transforms those [{"key":"foo","value":"bar"}]
                      # (the "field" field is ignored) to {"foo":"bar"}
  del(."@ptr")        # deletes the field with key "@ptr" from those objects
' file.json
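To see the del step in isolation, here is a minimal sketch with a toy object (not the original data):

```shell
# del(."@ptr") removes the "@ptr" key from an object; the quoting is needed
# because @ has special meaning in jq identifiers:
echo '{"@timestamp":"2023-07-03 08:28:00.000","@ptr":"xyz"}' | jq -c 'del(."@ptr")'
# → {"@timestamp":"2023-07-03 08:28:00.000"}
```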

The result is not one JSON document, but several JSON documents concatenated together; both jq and mlr support that. With -c (compact), that's NDJSON (newline-delimited JSON), with one JSON document per line, which vd also supports. To get proper JSON, we'd need:

jq -c '.results|map(map(.key=.field)|from_entries|del(."@ptr"))' file.json 

Here we use map on the .results array so the result is another JSON array instead of iterating over the elements, and the end result is one large array. That's also supported by jq (obviously, as it's proper JSON), mlr and vd, but it's a bit longer to type and means those tools need to read up to the closing ] at the very end before they have anything to chew on. In practice, I've not checked whether that makes any difference in terms of performance though.
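To contrast the two output shapes, here is a toy two-row input (hypothetical, not the original data):

```shell
# Minimal input with the same structure as the question's JSON:
json='{"results":[[{"field":"a","value":"1"}],[{"field":"a","value":"2"}]]}'

# Stream of objects (NDJSON with -c): one JSON document per line
printf '%s' "$json" | jq -c '.results[]|map(.key=.field)|from_entries'
# → {"a":"1"}
# → {"a":"2"}

# Single proper JSON array
printf '%s' "$json" | jq -c '.results|map(map(.key=.field)|from_entries)'
# → [{"a":"1"},{"a":"2"}]
```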

  • Thanks for sharing Miller! I'll try it next. I currently use jtbl -m (jtbl) for my tabulation. – Commented Jul 9, 2023 at 11:59

Yet another:

$ jq -r '.results|map(map({key:.field,value}|select(.key!="@ptr"))|from_entries)|(.[0]|keys_unsorted) as $keys|([$keys]+map([.[$keys[]]]))[]|@csv' input.json | xsv table
@timestamp               CpuReserved  CpuUtilized        MemoryReserved  MemoryUtilized
2023-07-03 08:28:00.000  8192.0       4056.412942708333  61440           4311
2023-07-03 08:28:00.000  8192.0       4056.412942708333  61440           4311

Notes:

  • from_entries will convert an array of key-value pairs to an object.
  • To convert the array of objects to CSV: collect the keys of the first object with keys_unsorted, then emit that header row followed by one row of values per object.
  • xsv table is for pretty-printing only, you can remove it.
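A stripped-down sketch (toy input, not the original data) of the keys_unsorted/@csv part:

```shell
# Build a header row from the first object's keys, then one value row per
# object, and render everything as CSV:
echo '[{"a":"1","b":"2"},{"a":"3","b":"4"}]' |
  jq -r '(.[0]|keys_unsorted) as $keys | ([$keys] + map([.[$keys[]]]))[] | @csv'
# → "a","b"
# → "1","2"
# → "3","4"
```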
  • I like this answer most since it uses jq only (no mlr) and works without explicit indices. – Commented Jul 7, 2023 at 10:15

I think I found the answer:

jq -r '([ "@timestamp", "CpuReserved", "CpuUtilized", "MemoryReserved", "MemoryUtilized"] | (., map(length*"-"))), ( .results[] | [.[0,1,2,3,4].value] ) | @tsv' | column -ts $'\t'

@timestamp               CpuReserved  CpuUtilized         MemoryReserved  MemoryUtilized
----------               -----------  -----------         --------------  --------------
2023-07-03 08:28:00.000  8192.0       4056.412942708333   61440           4311
2023-07-03 08:28:00.000  8192.0       4056.412942708333   61440           4311
2023-07-03 08:27:00.000  8192.0       1056.0744270833331  61440           4436
2023-07-03 08:27:00.000  8192.0       1056.0744270833331  61440           4436
2023-07-03 08:26:00.000  8192.0       2756.6764583333334  61440           5138
2023-07-03 08:26:00.000  8192.0       2756.6764583333334  61440           5138
2023-07-03 08:25:00.000  8192.0       5715.494895833333   61440           6600
2023-07-03 08:25:00.000  8192.0       5715.494895833333   61440           6600
2023-07-03 08:24:00.000  8192.0       7977.704166666666   61440           8451
2023-07-03 08:24:00.000  8192.0       7977.704166666666   61440           8451
2023-07-03 08:23:00.000  8192.0       7288.051666666666   61440           4757
2023-07-03 08:23:00.000  8192.0       7288.051666666666   61440           4757
2023-07-03 08:22:00.000  8192.0       4286.02375          61440           4815
2023-07-03 08:22:00.000  8192.0       4286.02375          61440           4815
2023-07-03 08:21:00.000  8192.0       3357.523776041666   61440           2146
2023-07-03 08:21:00.000  8192.0       3357.523776041666   61440           2146
2023-07-03 08:20:00.000  8192.0       990.2647916666666   61440           1692
2023-07-03 08:20:00.000  8192.0       990.2647916666666   61440           1692
2023-07-03 08:19:00.000  8192.0       4533.409375         61440           1816
2023-07-03 08:19:00.000  8192.0       4533.409375         61440           1816
2023-07-03 08:18:00.000  8192.0       939.4855208333333   61440           1810
2023-07-03 08:18:00.000  8192.0       939.4855208333333   61440           1810
2023-07-03 08:17:00.000  8192.0       4770.659791666667   61440           1924
2023-07-03 08:17:00.000  8192.0       4770.659791666667   61440           1924
2023-07-03 08:16:00.000  8192.0       410.5300065104166   61440           1417
2023-07-03 08:16:00.000  8192.0       410.5300065104166   61440           1417
2023-07-03 08:15:00.000  8192.0       702.310791015625    61440           792
2023-07-03 08:15:00.000  8192.0       702.310791015625    61440           792
2023-07-03 08:14:00.000  8192.0       0.0                 61440           0
2023-07-03 08:14:00.000  8192.0       0.0                 61440           0
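The map(length*"-") part of that command is what builds the dashed rule under each header: multiplying a string by a number in jq repeats the string. In isolation:

```shell
# For each header, emit a run of dashes exactly as wide as the header text:
echo '["@timestamp","CpuReserved"]' | jq -c 'map(length*"-")'
# → ["----------","-----------"]
```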
  • It's good to hear that you found a solution. You could make your answer more helpful for future readers by explaining how it works and why it's better than the earlier answers. – Commented Jul 7, 2023 at 12:25

Another take that uses jq to emit tab-separated values, and column to prettify it:

jq -r '
  .results |
  first as $first |
  [$first | map(.field)] + [.[] | map(.value)] |
  map(.[:-1])[] |
  @tsv
' file.json | column -t -s $'\t'

@timestamp               CpuReserved  CpuUtilized        MemoryReserved  MemoryUtilized
2023-07-03 08:28:00.000  8192.0       4056.412942708333  61440           4311
2023-07-03 08:28:00.000  8192.0       4056.412942708333  61440           4311
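The map(.[:-1]) step is what drops the @ptr column here: array slicing with .[:-1] keeps everything but the last element, and in the input JSON @ptr happens to be the last field of every row. A toy illustration:

```shell
# Slice off the last element of an array (here standing in for a row whose
# trailing entry is the unwanted @ptr column):
echo '["@timestamp","CpuReserved","@ptr"]' | jq -c '.[:-1]'
# → ["@timestamp","CpuReserved"]
```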
