My question is similar to [this question][1] but with a couple of different constraints: 

 - I have a large `\n`-delimited wordlist -- one word per line. The files range in size from 2GB to as large as 10GB.
 - I need to remove any duplicate lines. 
 - The process may sort the list while removing the duplicates, but it is not required to.
 - There is enough space on the partition to hold the resulting unique wordlist.

I have tried both of the following methods, but each fails with an out-of-memory error.

    sort -u wordlist.lst > wordlist_unique.lst
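In case the issue is just how I am invoking it: as I understand it, GNU `sort` can spill to temporary files on disk, and its `-S` and `-T` options cap the in-memory buffer and choose the spill directory. This is what I believe such an invocation would look like (the buffer size and temp path below are placeholders; I have not verified this succeeds on the 10GB files):

    sort -u -S 512M -T /path/to/tmp wordlist.lst > wordlist_unique.lst

The `awk` attempt died partway through the file: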

    awk '!seen[$0]++' wordlist.lst > wordlist_unique.lst
    awk: (FILENAME=wordlist.lst FNR=43601815) fatal: assoc_lookup: bucket-ahname_str: can't allocate 10 bytes of memory (Cannot allocate memory)
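To clarify the kind of answer I am after: below is a rough, untested sketch of an external split/sort/merge approach I imagine could work (the chunk size and file names are arbitrary). I would welcome anything simpler or more memory-safe:

    # Split into fixed-size chunks, sort and dedupe each chunk,
    # then merge the sorted chunks while dropping duplicates.
    split -l 10000000 wordlist.lst chunk_
    for f in chunk_*; do
        sort -u "$f" > "$f.sorted" && rm "$f"
    done
    sort -mu chunk_*.sorted > wordlist_unique.lst
    rm chunk_*.sorted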

What other approaches can I try?

 [1]: https://unix.stackexchange.com/questions/11939/how-to-get-only-the-unique-results-without-having-to-sort-data