36

I want to print lines from a file backwards without using the tac command. Is there any other way to do this with bash?

0

12 Answers

46

Using sed to emulate tac:

sed '1!G;h;$!d' "${inputfile}" 
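For example, with some throwaway input (the sample lines are mine, not from the original):

```shell
# h copies each line into the hold space; 1!G appends the hold space
# (the reversed lines so far) to every line after the first; $!d deletes
# every cycle except the last, whose pattern space now holds the whole
# file in reverse order.
printf 'one\ntwo\nthree\n' | sed '1!G;h;$!d'
```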
10
  • 7
    This is a good solution, but some explanation for how this works will be even better. Commented Mar 16, 2011 at 11:19
  • 17
    It's a famous sed one-liner. See "36. Reverse order of lines (emulate "tac" Unix command)." in Famous Sed One-Liners Explained for a full explanation of how it works. Commented Mar 16, 2011 at 11:37
  • 7
    Perhaps worth noting: "These two one-liners actually use a lot of memory because they keep the whole file in hold buffer in reverse order before printing it out. Avoid these one-liners for large files." Commented Mar 17, 2011 at 14:33
  • So do all the other answers (except maybe the one using sort - there's a chance it will use a temporary file). Commented Apr 15, 2011 at 20:39
  • 2
    Note that tac is faster for regular files because it reads the file backward. For pipes, it has to do the same as the other solutions (hold in memory or in temp files), so is not significantly faster. Commented Sep 14, 2012 at 5:48
11

With ed:

ed -s infile <<IN
g/^/m0
,p
q
IN
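A sketch of it in action on a temporary file (the sample input and the mktemp scaffolding are mine):

```shell
# g/^/m0 moves each successive line to the top of the buffer, which
# reverses it; ,p prints the whole buffer; q quits without saving.
tmp=$(mktemp)
printf 'one\ntwo\nthree\n' > "$tmp"
ed -s "$tmp" <<'IN'
g/^/m0
,p
q
IN
rm -f "$tmp"
```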

If you're on BSD/OSX (and hopefully soon on GNU/Linux too, as it will be POSIX):

tail -r infile 
9

awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file.txt

via awk one liners
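For example, feeding it three throwaway lines on stdin instead of file.txt (sample input is mine):

```shell
# Store every line in the array a, then walk the array backwards.
printf 'a\nb\nc\n' | awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }'
```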

8
  • 5
    A shorter one: awk 'a=$0RS a{}END{printf a}' Commented Mar 17, 2011 at 23:34
  • @ninjalj: it may be shorter, but it becomes extremely slow as the file size gets larger. I gave up after waiting for 2min 30sec. But your first perl `reverse<>` is the best/fastest answer on the page (to me), at 10 times faster than this awk answer (all the awk answers are about the same, time-wise) Commented Aug 6, 2012 at 17:26
  • 3
    Or awk '{a[NR]=$0} END {while (NR) print a[NR--]}' Commented Sep 14, 2012 at 5:41
  • @ninjalj appending to a variable is one of the slowest operations in awk because awk has to calculate the resulting size, find new memory, move the contents there, and change the variable to point to that new location. Doing that with a large string isn't tenable. Also never do printf a for any input data as it'll fail when that data contains printf formatting strings, always do printf "%s", a instead. Commented Oct 1, 2021 at 19:59
  • @ninjalj what does the a{} part do. normally {} contains some command, here it is, however, empty? Commented Mar 7, 2023 at 16:44
6

You can pipe it through:

awk '{print NR" "$0}' | sort -k1 -n -r | sed 's/^[^ ]* //g' 

The awk prefixes each line with the line number followed by a space. The sort reverses the order of the lines by sorting on the first field (line number) in reverse order, numeric style. And the sed strips off the line numbers.

The following example shows this in action:

pax$ echo 'a
b
c
d
e
f
g
h
i
j
k
l' | awk '{print NR" "$0}' | sort -k1 -n -r | sed 's/^[^ ]* //g'

It outputs:

l
k
j
i
h
g
f
e
d
c
b
a
6
  • Ok. Thanks! I'll try this. And thanks for the comments! Commented Mar 16, 2011 at 11:04
  • 5
    cat -n acts much like awk '{print NR" "$0}' Commented Mar 16, 2011 at 11:09
  • 2
    I think that's called Schwartzian transform, or decorate-sort-undecorate Commented Mar 17, 2011 at 23:25
  • This general approach is nice in that sort handles using files on disk if the task is too big to reasonably do in memory. It might be gentler on memory though if you used temporary files between the steps rather than pipes. Commented Aug 10, 2016 at 6:15
  • 1
    awk -v OFS='\t' '{print NR, $0}' file | sort -k1,1nr | cut -f2- is IMHO the cleanest way to write that as it's still using mandatory POSIX tools but is using awk's OFS instead of hard-coding a separator char in the print and doing string concatenation, is using awk to generate input that uses the default separator for cut, \t, is using cut for its sole purpose instead of making sed do what cut exists to do, and is only sorting on the one, necessary field that contains the line number. Commented Oct 1, 2021 at 21:01
6

As you asked to do it in bash, here is a solution that doesn't make use of awk, sed or perl, just a bash function:

reverse () {
    local line
    if IFS= read -r line
    then
        reverse
        printf '%s\n' "$line"
    fi
}

The output of

echo 'a
b
c
d
' | reverse

is

d
c
b
a

As expected.

But beware that lines are stored in memory, one line in each recursively called instance of the function. So careful with big files.

1
  • 5
    It quickly becomes impractically slow as file size increases, compared to even the slowest of the other answers, and as you have suggested, it blows memory pretty easily: bash: recursive function: Segmentation fault... but it's an interesting idea. Commented Aug 6, 2012 at 16:55
2

POSIX vi does this, so also ed or ex.

vi:

:g/^/m0 

ex:

ex -s infile <<EOS
g/^/move0
x
EOS

ed:

ed -s infile <<EOF
g/^/m0
,p
w
EOF
1

In perl:

cat <somefile> | perl -e 'while(<>) { push @a, $_; } foreach (reverse(@a)) { print; }' 
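For example, with made-up input on stdin (you could also drop the cat and let perl read the file directly):

```shell
# Collect all lines in @a, then print them in reverse order.
printf 'a\nb\nc\n' | perl -e 'while(<>) { push @a, $_; } foreach (reverse(@a)) { print; }'
```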
4
  • 2
    or the shorter perl -e 'print reverse<>' Commented Mar 17, 2011 at 23:21
  • 2
    (Actually, there's a shorter one, but it's really ugly, witness its awfulness: perl -pe '$\=$_.$\}{' ) Commented Mar 17, 2011 at 23:21
  • @Frederik Deweerdt: Fast, but it loses the first line... @ninjalj: reverse<> is fast: good! But the "really ugly" one is extremely slow as the number of lines increases... Commented Aug 6, 2012 at 16:41
  • The nice thing about this one is that the contents of the file is not necessarily read into memory (except possibly in chunks by sort). Commented Aug 8, 2018 at 11:19
0

Bash, with mapfile as mentioned in the comments to fiximan's answer, and actually a possibly better version:

# last [LINES=50]
_last_flush(){ BUF=("${BUF[@]:$(($1-LINES)):$1}"); }   # flush the lines, can be slow.
last(){
    local LINES="${1:-10}" BUF
    ((LINES)) || return 2
    mapfile -C _last_flush -c $(( (LINES<5000) ? 5000 : LINES+5 )) BUF
    BUF=("${BUF[@]}")   # Make sure the array subscripts make sense, can be slow.
    ((LINES="${#BUF[@]}" > LINES ? LINES : "${#BUF[@]}"))
    for ((i="${#BUF[@]}"; i>"${#BUF[@]}"-LINES; i--)); do echo -n "${BUF[i]}"; done
}

Its performance is basically comparable to the sed solution, and gets faster as the number of requested lines decreases.
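If you just need a plain full reversal rather than tail-like behaviour, a much smaller mapfile sketch would do (my own function name; bash 4+; the whole input is held in memory):

```shell
reverse_file() {
  local -a lines
  local i
  mapfile -t lines            # slurp stdin, one array element per line
  for ((i = ${#lines[@]} - 1; i >= 0; i--)); do
    printf '%s\n' "${lines[i]}"
  done
}

printf 'a\nb\nc\n' | reverse_file
```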

0
Simply run this on your file to output the data in reverse order:

sed -n '1h;1d;G;h;$p'
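For example, with throwaway input of mine: 1h;1d seeds the hold space with the first line, each later line gets the hold space appended by G and saved back by h, and $p prints the fully reversed buffer on the last line:

```shell
printf 'a\nb\nc\n' | sed -n '1h;1d;G;h;$p'
```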

0
  • Number the lines with nl
  • sort in reverse order by number
  • remove the numbers with sed

as shown here:

echo 'e
> f
> a
> c
> ' | nl -ba | sort -nr | sed -r 's/^ *[0-9]+\t//'

Result:

c
a
f
e

Note that "-ba", which makes nl include empty lines in the numbering, is an option of GNU nl and might not work with every nl.

3
  • 1
    nl does not number blank lines. There is already another answer implementing this Schwartzian transform idea. Commented Apr 27, 2021 at 20:19
  • You're right, nl needs the option -ba for empty lines and might not exist for every nl. Commented Apr 28, 2021 at 5:23
  • nl isn't a mandatory POSIX tool so if you don't have tac you probably don't have nl either. Commented Oct 1, 2021 at 20:53
-1

BASH-only solution

Read the file into a bash array (one line = one element of the array) and print out the array in reverse order:

i=0
while read line[$i] ; do
    i=$(($i+1))
done < FILE

for (( i=${#line[@]}-1 ; i>=0 ; i-- )) ; do
    echo ${line[$i]}
done
3
  • Try it with indented lines... philfr's version is a bit better but still veeeery slooooow so really, when it comes to text processing, never use while..read. Commented Jul 31, 2015 at 12:56
  • Use IFS='' and read -r to prevent all kinds of escapes and trailing IFS removal from screwing it up. I think the bash mapfile ARRAY_NAME builtin is a better solution for reading into arrays though. Commented Sep 18, 2015 at 16:17
  • This approach should never be used. See unix.stackexchange.com/q/169716/135943 Commented Aug 30, 2024 at 19:45
-1
awk -v OFS='\t' '{ print NR,$0 }' | sort -nr | cut -f 2- 

This prepends the line number before each line, with a delimiting tab character, sorts the lines in reverse order on these line numbers, and then removes the numbers again with cut.
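For example, with made-up input:

```shell
printf 'a\nb\nc\n' | awk -v OFS='\t' '{ print NR,$0 }' | sort -nr | cut -f 2-
```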

See also: Schwartzian transform (Wikipedia link)

1
