



Sort column by duplicates and keep the first occurrence

I have a file that is as follows:

1:A
2:B
3:A

I need the output to be:

1:A
2:B

Because the third entry's second column contains A, just as the first entry does, it should be removed. The comparison also needs to be case-sensitive.

It's an extremely large file, so a time-efficient solution would be appreciated.

I have tried the following, but it only seems to print the unique lines:

sort -u -t':' -k3,3 file
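
For reference, here is a minimal sketch of one common single-pass awk idiom that might fit, assuming the part after the colon is the key to deduplicate on and that the first occurrence of each key should be kept:

# Print a line only the first time its second ':'-separated field is seen.
# awk's string comparisons are case-sensitive, and a single pass preserves
# the original order while keeping only the distinct keys in memory.
awk -F':' '!seen[$2]++' file

On the sample above this would print 1:A and 2:B; treat it as a sketch of the approach rather than a drop-in answer.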