  • 4
    Not as fast as the awk command in other answers, but conceptually simple! Commented Mar 31, 2015 at 23:11
  • @Johann I am doing this pretty often on files with hundreds of thousands (even millions) of short newline-terminated strings. I get the results pretty quickly for the experiments I am doing. It matters more in scripts that are run again and again, where the time savings can be considerable. Commented Mar 31, 2015 at 23:13
  • 4
    Use sort -u to remove duplicates during the sort, rather than after. (It also saves the memory bandwidth of piping the output to another program.) This is only better than the awk version if you want your output sorted, too. (The OP on this question wants his original ordering preserved, so this is a good answer for a slightly different use-case; see the sketch after these comments.) Commented Sep 14, 2015 at 15:39
  • Took about a minute for me on a 5.5-million-line file (1.8 GB in total). Brilliant. Commented Jan 4, 2018 at 11:23
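
For reference, here is a minimal sketch of the two approaches these comments compare (assuming a file named input.txt; the awk one-liner is the usual order-preserving dedup idiom the earlier comments appear to refer to):

    # Sorted output; duplicates are removed during the sort itself
    sort -u input.txt

    # Original input order preserved; prints each line only the first time it is seen
    awk '!seen[$0]++' input.txt

sort -u avoids piping through a second program, but its output is necessarily sorted; the awk version keeps the original line order at the cost of holding every distinct line in memory.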