114

In the shell, I pipe to awk when I need a particular column.

This prints column 9, for example:

... | awk '{print $9}' 

How can I tell awk to print all the columns including and after column 9, not just column 9?


11 Answers

115
awk '{ s = ""; for (i = 9; i <= NF; i++) s = s $i " "; print s }' 

10 Comments

a couple of slight refinements: awk -v N=9 '{sep=""; for (i=N; i<=NF; i++) {printf("%s%s",sep,$i); sep=OFS}; printf("\n")}'
@SiegeX: It doesn't add NUL bytes, it leaves the FS in place between each empty field.
Please see @Ascherer's answer for elegance.
@veryhungrymike: Elegance is nice, but I'd rather be correct. :p
Code-only answer. Some explanation about what it does and how would be welcome... :(
88

When you want to print a range of fields, awk doesn't really have a straightforward way to do it. I would recommend cut instead:

cut -d' ' -f 9- ./infile 

Edit

Added the space field delimiter, since the default is a tab. Thanks to Glenn for pointing this out.
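As a quick illustration with made-up, single-space-delimited input (the caveats about runs of spaces are covered in the comments below):

echo 'a b c d e f g h i j k l' | cut -d' ' -f 9-
# prints: i j k l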

6 Comments

One thing about cut is that it uses a specific delimiter (tab by default), where awk uses "whitespace". With cut, 2 consecutive tabs delimit an empty field.
As @glennjackman pointed out, awk's delimiter is "whitespace" (any amount of it, too). So setting the cut delimiter to a single space would not match that behavior either. Unfortunately the loop is the best one can do, it seems.
This one does not work properly. Try the command find . | xargs ls -l | cut -d' ' -f 9-. For some reason double spaces are counted as well. Example: lrwxrwxrwx 1 me me 21 Dec 12 00:00 ./file_a lrwxrwxrwx 1 me me 64 Dec 6 00:06 ./file_b will result in ./file_a 00:06 ./file_b
@MarcoPashkov please elaborate on This one does not work properly, especially considering you use the exact same code in your pipeline. By the way, you should never try to parse the output of ls
cut does not do the job here. For example, if your input is "foo bar" (single space) on one line, and "foo ___ bar" (i.e. multiple spaces, but SO is too smart to show it) on another, cut will process them differently.
57
awk '{print substr($0, index($0,$9))}' 

Edit: Note, this doesn't work if any field before the ninth contains the same value as the ninth.
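To make that caveat concrete, here is a throwaway illustration; the second command shows the failure when an earlier field repeats the value of $9:

echo 'a b c d e f g h i j' | awk '{print substr($0, index($0,$9))}'
# prints: i j
echo 'i b c d e f g h i j' | awk '{print substr($0, index($0,$9))}'
# prints the whole line, because index() finds the first "i" at position 1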

7 Comments

@veryhungrymike: ...and doesn't work if any field before the ninth contains the same value as the ninth.
Probably because of the classic sentence "hopefully your file doesn't have that issue". It's a total no-no in s/w engineering to state: "we're not going to waste time including error-checking for input of e. g. negative values, because 'we hope the user will be intelligent enough to not try them out, crashing our tool'". HAHAHA! Always love to hear this! (I like a good sense of humor) Well, as idiots do exist, it's the developer's duty to make his stuff idiot-proof! Instead of "hoping for the good in man". That's rather an attitude expected with philosophers, not s/w engineers...LOL
I wasn't saying not to check for errors, but if you know you're not going to run into the issue, then this solution is fine, like I stated. But thank you for the unnecessary downvote @syntaxerror. This solution will work for some, as the (currently) 19 upvotes will show, but if it doesn't, then don't use it for your solution. There are lots of ways to solve the OP's problem.
If you are using awk on the command line in your daily work, this is definitely the solution you want. Is it not obvious? Error checking, etc., doesn't really matter in that case, since you are typing it in and can catch these sorts of things before you press enter (personally, I don't think awk should be used for anything else anyway; that's why we've got perl, python, tcl, and about 100+ other, better, faster, less annoying scripting languages!). 'Course maybe I'm giving my fellow software developers too much credit and they really do need error checking even on the stuff they type on the fly (??)
Not that it needed it, as it's right below the answer, but I added it @atti
13
sed -re 's,\s+, ,g' | cut -d ' ' -f 9- 

Instead of dealing with variable width whitespace, replace all whitespace as single space. Then use simple cut with the fields of interest.

It doesn't use awk so isn't germane but seemed appropriate given the other answers/comments.
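For example, with made-up input containing runs of spaces (note -r is a GNU sed flag; BSD sed would want -E instead):

printf 'a  b   c d e f g h i j k l\n' | sed -re 's,\s+, ,g' | cut -d ' ' -f 9-
# prints: i j k l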

4 Comments

Please make your answer more thorough, otherwise post this as a comment to the question.
This is ideal for ps faux | use. Never be afraid of admitting tool XYZ is not the most appropriate.
@Kevin Even more ideal is ps faux | perl -pe 's/^(\H*\h*){8}//'. See my answer.
Downvoting because the question asked how to do this in awk, not sed, perl, ruby, java, python, bash.
12

Generally perl replaces awk/sed/grep et al., and is much more portable (as well as just being a better penknife).

perl -lane 'print "@F[8..$#F]"' 

Timtowtdi applies of course.
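A quick check with made-up input (perl's @F is zero-indexed, so index 8 is the ninth field):

echo 'a b c d e f g h i j k l' | perl -lane 'print "@F[8..$#F]"'
# prints: i j k l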

5 Comments

You need to add command line option -l, or add \n to the print statement.
@glenn jackman: Possibly. Not required if part of another message, or being assigned to a variable etc. As far as "better" goes, perl certainly looks better in the small. Can look very untidy in the large admittedly.
Don't get me wrong, I like Perl. I love awk for what it is though.
My embedded device doesn't come with Perl, but it does have awk.
Downvoting because the question asked how to do this in awk, not perl, ruby, java, python, bash.
5
awk -v m="\x01" -v N="3" '{$N=m$N ;print substr($0, index($0,m)+1)}' 

This chops off whatever comes before the given field number N and prints all the rest of the line, including field N, maintaining the original spacing (it does not reformat). It doesn't matter if the string in that field also appears somewhere else in the line, which is the problem with Ascherer's answer.

Define a function:

fromField () {
  awk -v m="\x01" -v N="$1" '{$N=m$N; print substr($0,index($0,m)+1)}'
}

And use it like this:

$ echo " bat bi iru lau bost " | fromField 3 iru lau bost $ echo " bat bi iru lau bost " | fromField 2 bi iru lau bost 

Output maintains everything, including trailing spaces. For N=0 it returns the whole line, as is, and for N>NF the empty string.

2 Comments

This is a good idea. It doesn't quite work on a current Mac using typical gawk, because $0 collapses. The fix is to set a variable to $0 as the first step, such as: '{s=$0; ... print substr(s,index(s,m)+1)}'
That definitely WILL reformat the line since $N=m$N is changing the value of a field which causes awk to rebuild $0 replacing all FSs with OFSs. I can't imagine how you're getting the output you show given that script.
1

Here is an example of ls -l output:

-rwxr-----@ 1 ricky.john 1493847943 5610048 Apr 16 14:09 00-Welcome.mp4
-rwxr-----@ 1 ricky.john 1493847943 27862521 Apr 16 14:09 01-Hello World.mp4
-rwxr-----@ 1 ricky.john 1493847943 21262056 Apr 16 14:09 02-Typical Go Directory Structure.mp4
-rwxr-----@ 1 ricky.john 1493847943 10627144 Apr 16 14:09 03-Where to Get Help.mp4

My solution to print anything post $9 is awk '{print substr($0, 61, 50)}'


0

To display the first 3 fields and print the remaining fields you can use:

awk '{s = ""; for (i=4; i<= NF; i++) s= s $i : "; print $1 $2 $3 s}' filename 

where $1 $2 $3 are the first 3 fields.
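With throwaway six-field input, the corrected one-liner behaves like this (the fields are re-joined with single spaces, and a trailing space remains):

echo 'a b c d e f' | awk '{s = ""; for (i=4; i<=NF; i++) s = s $i " "; print $1, $2, $3, s}'
# prints: a b c d e f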


0
function print_fields(field_num1, field_num2){
    input_line = $0
    j = 1
    for (i = field_num1; i <= field_num2; i++){
        $(j++) = $(i)
    }
    NF = field_num2 - field_num1 + 1
    print $0
    $0 = input_line
}
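The answer gives only the function; a minimal sketch of how it might be wired into a complete program (the guarded pattern-action block and the infile name are assumptions for illustration, not part of the original answer):

awk 'NF >= 9 { print_fields(9, NF) }
function print_fields(field_num1, field_num2){
    input_line = $0
    j = 1
    for (i = field_num1; i <= field_num2; i++){
        $(j++) = $(i)
    }
    NF = field_num2 - field_num1 + 1
    print $0
    $0 = input_line
}' infile

Note that assigning to fields and to NF makes awk rebuild $0 with OFS, so the original spacing is not preserved.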


0

Using cut instead of awk, you can sidestep figuring out which column to start at by using cut's -c option to select character positions.

Here I am saying, give me all but the first 49 characters of the output.

 ls -l /some/path/*/* | cut -c 50- 

The /*/* at the end of the ls command is saying show me what is in subdirectories too.

You can also pull out certain ranges of characters, à la the cut man page. E.g., to show the names and login times of the currently logged-in users:

 who | cut -c 1-16,26-38 


-3
ruby -lane 'print $F[3..-1].join(" ")' file 
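Note that $F is zero-indexed, so for the ninth-field-onwards case in the question this would presumably become:

ruby -lane 'print $F[8..-1].join(" ")' file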

