I have a command whose output prints one element per line. I want to be able to parse that output in chunks of N lines, even when the total number of lines is not a multiple of N.

```shell
find . -f | chunk 10 1   # shows first chunk of size 10
find . -f | chunk 10 2   # second chunk of size 10
find . -f | chunk 10 3   # last chunk, of size < N
find . -f | chunk 10 4   # does nothing
```
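For illustration, with `seq 25` standing in for the `find` output and N = 10, the chunks wanted would be the following (`sed` is used here only to show the expected line ranges, not as the requested solution):

```shell
seq 25 | sed -n '1,10p'    # chunk 1: lines 1..10
seq 25 | sed -n '11,20p'   # chunk 2: lines 11..20
seq 25 | sed -n '21,25p'   # chunk 3: lines 21..25 (size < N)
seq 25 | sed -n '31,40p'   # chunk 4: no output
```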

1 Answer

Your chunk function can be implemented in bash as follows (it uses only POSIX sh features, so it is portable):

```shell
chunk() {
    size=$1
    n=$2
    # chunks are 1-indexed, so chunk n starts after (n - 1) * size lines
    skip=$(((n - 1) * size))
    i=0
    # discard everything before the requested chunk
    while [ $i -lt $skip ]; do
        read -r junk || return
        i=$((i + 1))
    done
    i=0
    # print the next size lines (or fewer, if input runs out)
    while [ $i -lt $size ]; do
        read -r str || return
        printf '%s\n' "$str"
        i=$((i + 1))
    done
}
```
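As a quick sanity check, the function can be exercised with `seq` standing in for `find`. This self-contained sketch repeats the function and assumes the 1-indexed convention from the question (`chunk 10 1` is the first chunk):

```shell
# chunk, repeated here so this snippet runs on its own;
# 1-indexed: chunk 10 1 is the first chunk of 10 lines.
chunk() {
    size=$1 n=$2 i=0
    # discard the (n - 1) * size lines before the requested chunk
    while [ $i -lt $(((n - 1) * size)) ]; do
        read -r junk || return
        i=$((i + 1))
    done
    i=0
    # print up to size lines
    while [ $i -lt $size ] && read -r line; do
        printf '%s\n' "$line"
        i=$((i + 1))
    done
}

seq 25 | chunk 10 1   # lines 1..10
seq 25 | chunk 10 3   # lines 21..25 (last chunk, size < N)
seq 25 | chunk 10 4   # no output
```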

Note that the results may be unexpected if files are added or removed between calls to find. So you may want to implement a save_chunks/get_chunk API (snapshot the list once, then index into the saved copy) instead of the one you requested, or do something like:

```shell
catn() {
    i=0
    while [ $i -lt "$1" ] && read -r s; do
        printf '%s\n' "$s"
        i=$((i + 1))
    done
}

find -f . | (catn 10;   # shows first chunk of size 10
             catn 10;   # second chunk of size 10
             catn 10;   # last chunk, of size < n
             catn 10)   # does nothing
```

This should also be faster, since find runs only once and every line is read only once (the chunk approach reruns find and rereads all preceding chunks on every call).
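The single-pass behavior can be checked with `seq` in place of `find`; `catn` is repeated here so the sketch runs on its own, and the redirect to `/dev/null` just discards the first chunk:

```shell
# catn as above, repeated so this snippet is self-contained.
catn() {
    i=0
    while [ $i -lt "$1" ] && read -r s; do
        printf '%s\n' "$s"
        i=$((i + 1))
    done
}

# Each call consumes the next 10 lines of the same stream:
seq 25 | {
    catn 10 > /dev/null   # chunk 1 discarded
    catn 10               # prints lines 11..20
}
```

This works because `read` on a pipe never consumes past the newline it stops at, so consecutive `catn` calls pick up exactly where the previous one left off.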

  • Awesome. Not sure how the series of head functions works, though; it seems to just output the first chunk. Commented Mar 7, 2016 at 3:27
  • @barrrista Does the new catn function work for you? Commented Mar 7, 2016 at 3:40
