4

I have a file with ~3 million rows; here are the first few lines of my file:

$ head out.txt
NA
NA
NA
NA
NA
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753,gene85754
gene85752,gene85753,gene85754
gene85752,gene85753,gene85754
gene85752,gene85753,gene85754
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752
gene85752

For those rows whose entries are separated by ",", I want to keep only what comes after the first comma and before the second comma. This is my desired output:

outgood.txt:
NA
NA
NA
NA
NA
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85752
gene85752

4 Answers

18

Since cut prints non-delimited lines by default, the following works:

cut -f2 -d, file 
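A quick sanity check on two illustrative lines (the printf input below is made up, not taken from the original file; any POSIX cut should behave the same way):

$ printf 'NA\ngene85752,gene85753,gene85754\n' | cut -f2 -d,
NA
gene85753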
  • It's nice when someone remembers the little quirks of standard tools. Commented Apr 8, 2019 at 18:16
3
awk -F, 'NF > 1 { $1 = $2 } { print $1 }' file 

This uses awk to parse the file as lines consisting of comma-delimited fields.

The code detects when there is more than a single field on a line, and when there is, the first field is replaced by the second field. The first field, either unmodified or modified by the conditional code, is then printed.
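A minimal check of that behaviour, again on made-up sample lines:

$ printf 'NA\ngene85752,gene85753,gene85754\n' | awk -F, 'NF > 1 { $1 = $2 } { print $1 }'
NA
gene85753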

  • With a big file, this would probably be faster: awk -F, '{print(NF>1 ? $2 : $1)}' -- since you won't have to rewrite $0 Commented Apr 8, 2019 at 19:53
  • @glennjackman Well, the cut solution would be even faster in any case. Commented Apr 8, 2019 at 19:57
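For what it's worth, the ternary variant from the comment above can be checked the same way on illustrative input:

$ printf 'NA\ngene85752,gene85753,gene85754\n' | awk -F, '{print(NF>1 ? $2 : $1)}'
NA
gene85753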
1
awk -F, 'NF == 1 {print $1} NF > 1 { print $2}' filename 

This prints just the first field if there is no comma, and the second field if there are one or more commas.
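A short check of this variant on the same kind of made-up input:

$ printf 'NA\ngene85752,gene85753,gene85754\n' | awk -F, 'NF == 1 {print $1} NF > 1 { print $2}'
NA
gene85753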

0

You can do this with Perl as follows.

Command-line:

$ perl -F, -pale '$_ = $F[1] // $_' out.txt 

Explanation:

  • -p reads records line by line and autoprints each record before reading the next one (or reaching EOF).
  • -l sets the input and output record separators to "\n": the trailing newline is chomped on input and added back on output.
  • -F, makes FS a comma.
  • -a splits each record $_ on the field separator, in our case a comma, and stores the resulting fields in the array @F, which is zero-indexed.
  • -e means that what follows is the Perl code to be applied to each record.
  • $_ = $F[1] // $_ reads as follows: if the 2nd field $F[1] is defined, use it; otherwise keep the current record $_. The result of this expression is assigned back to the current record $_ (see the short check after this list).
  • owing to the -p switch, the current record is printed to stdout before the next record is read in.
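A quick check of the defined-or fallback on two made-up lines:

$ printf 'NA\ngene85752,gene85753,gene85754\n' | perl -F, -pale '$_ = $F[1] // $_'
NA
gene85753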

Result:

NA
NA
NA
NA
NA
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85752
gene85752

You may also do it with the GNU version of the sed editor as shown below:

$ sed -ne '
    s/,/\n/
    s/.*\n//
    s/,/\n/
    P
' out.txt
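Roughly: the first s/,/\n/ puts the first field on its own line, s/.*\n// deletes that line, the second s/,/\n/ isolates the second field, and P prints the pattern space up to the first embedded newline (the \n in the s replacement is a GNU extension). A quick check on made-up input:

$ printf 'NA\ngene85752,gene85753,gene85754\n' | sed -ne '
    s/,/\n/
    s/.*\n//
    s/,/\n/
    P
'
NA
gene85753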
