
I'm having trouble coming up with an algorithm. Can you guys help me out here?

I have a file which is huge and thus cannot be loaded into memory at once. It contains duplicate data (generic data, possibly strings), and I need to remove the duplicates.

  • Does the order of the data in the file need to be preserved? Commented May 22, 2011 at 16:34
  • Yes, the order of the data needs to be preserved. Commented May 22, 2011 at 16:35
  • More than the available RAM. Say, 10 GB. Commented May 22, 2011 at 16:35
  • The more, the better, of course. But simplicity is a virtue as well. Any proposals are welcome! Commented May 22, 2011 at 16:38
  • I think we need more information about the file's "nature". Is it a multiline text file (ASCII, UTF-8?), XML, CSV, or something like that? Can you show head -50? Commented May 22, 2011 at 16:44

4 Answers


One easy but slow solution is to read the first gigabyte into a HashSet, then read the rest of the file sequentially and remove the duplicate strings that are already in the set. Then read the second gigabyte into memory (a HashSet) and remove its duplicates from the file, and so on. It is quite easy to program, and if you only need to do this once, it could be enough.
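
A minimal sketch of that chunked approach, assuming line-based records; the file names and the CHUNK_LINES budget are made up and should be tuned to the available heap. Each pass streams the file into a fresh copy, so order is preserved by the copying itself and the HashSet is only used for membership tests:

    import java.io.*;
    import java.nio.file.*;
    import java.util.HashSet;
    import java.util.Set;

    public class ChunkedDedup {
        static final int CHUNK_LINES = 1_000_000;   // unique lines to hold in memory per pass

        public static void main(String[] args) throws IOException {
            Path current = Paths.get("huge-input.txt");   // hypothetical input file
            long alreadyClean = 0;                        // prefix lines already known to be unique
            boolean more = true;

            while (more) {
                Path next = Files.createTempFile("dedup-pass", ".txt");
                Set<String> chunk = new HashSet<>();

                try (BufferedReader in = Files.newBufferedReader(current);
                     BufferedWriter out = Files.newBufferedWriter(next)) {

                    // 1) copy the already de-duplicated prefix untouched
                    for (long i = 0; i < alreadyClean; i++) {
                        writeLine(out, in.readLine());
                    }

                    // 2) load the next chunk, dropping duplicates inside the chunk
                    String line;
                    while (chunk.size() < CHUNK_LINES && (line = in.readLine()) != null) {
                        if (chunk.add(line)) {            // add() returns false for a duplicate
                            writeLine(out, line);
                        }
                    }

                    // 3) stream the rest of the file, dropping anything seen in the chunk
                    more = false;
                    while ((line = in.readLine()) != null) {
                        more = true;                      // there is still unprocessed data
                        if (!chunk.contains(line)) {
                            writeLine(out, line);
                        }
                    }
                }

                alreadyClean += chunk.size();
                current = next;                           // the next pass works on the cleaned copy
            }

            Files.move(current, Paths.get("deduplicated.txt"), StandardCopyOption.REPLACE_EXISTING);
        }

        private static void writeLine(BufferedWriter out, String line) throws IOException {
            out.write(line);
            out.newLine();
        }
    }

Every pass rewrites the remaining file, so with a 10 GB input and roughly 1 GB chunks this is on the order of ten full passes over the data, which is exactly the "slow but simple" trade-off described above.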


5 Comments

Nice suggestion. After reading a chunk of records (1GB or any other size), you only need to scan forward from there. If records cannot be removed in place, do this as a series of file copies. Don't forget to scan for duplicates in each chunk before scanning the rest of the file!
HashSet has its own ordering, so the initial order is lost, isn't it? LinkedHashSet would be a solution, though.
HashSet is "ordered" only according to the hash, so yes, the initial order is lost. That is why you read the n-th gigabyte into memory and then read the whole file to remove the duplicates.
Load a gigabyte into a HashSet and iterate through the file, comparing as I go?
Yes, and then load the second gigabyte into memory and iterate over the file again, then the third gigabyte... I know it is not optimal, but it is simple.

You can calculate a hash for each record and keep that in a Map<Hash, Set<Position>>.

Read through the file building the map; if you find that the hash key already exists in the map, seek to the stored position(s) to double-check whether the records are really equal (and if they are not equal, add the new location to the mapped set).
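
A rough sketch of that idea, assuming plain ASCII, line-based records and a hypothetical input file name (RandomAccessFile.readLine does not decode multi-byte encodings). Only hashes and file positions are held in memory; a suspected duplicate is verified by seeking back to the earlier position and comparing the actual records:

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    public class HashIndexDedup {
        public static void main(String[] args) throws IOException {
            // hash of a record -> file positions of the unique records with that hash
            Map<Integer, List<Long>> index = new HashMap<>();

            try (RandomAccessFile in = new RandomAccessFile("huge-input.txt", "r");
                 BufferedWriter out = Files.newBufferedWriter(Paths.get("deduplicated.txt"))) {

                long pos = 0;                                // start of the current line
                String line;
                while ((line = in.readLine()) != null) {
                    List<Long> candidates =
                            index.computeIfAbsent(line.hashCode(), k -> new ArrayList<>());

                    boolean duplicate = false;
                    long after = in.getFilePointer();        // remember where we stopped
                    for (long candidatePos : candidates) {
                        in.seek(candidatePos);               // re-read the earlier record
                        if (line.equals(in.readLine())) {
                            duplicate = true;
                            break;
                        }
                    }
                    in.seek(after);                          // resume sequential reading

                    if (!duplicate) {
                        candidates.add(pos);                 // first occurrence: remember it
                        out.write(line);
                        out.newLine();
                    }
                    pos = after;
                }
            }
        }
    }

As the comments note, the hash itself can be made as small as you like, but the map still stores one position (a long) per unique record, so memory use grows with the number of unique records.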

2 Comments

Yes, this sounds good. If you have enough memory for all the hashes, it will be a simple and good solution.
Actually, the hash size can be limited arbitrarily (balanced against collisions), but the locations might explode (that's a long for each unique record).

Second solution:

  1. Create a new file where you write pairs <String, Position in original file>.
  2. Then use classic sorting for big files on the String (sorting big files = sort small parts of the file in memory, then merge them together); during this pass you remove the duplicates (see the sketch after this list).
  3. Then rebuild the original order: sort it again, but this time by "Position in original file".
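
A compact sketch of all three steps, assuming line-based records that never contain a tab character (the tab is used here to glue each line to its original position); the file names and the RUN_SIZE run length are made up. The external sort is the classic one the answer describes: sort small runs in memory, then k-way merge them:

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    public class SortDedup {
        static final int RUN_SIZE = 1_000_000;   // lines sorted in memory per run

        public static void main(String[] args) throws IOException {
            Path pairs = Paths.get("pairs.tmp");
            Path byText = Paths.get("by-text.tmp");
            Path byPos = Paths.get("by-pos.tmp");

            // Step 1: write <line, original position> pairs.
            try (BufferedReader in = Files.newBufferedReader(Paths.get("huge-input.txt"));
                 BufferedWriter out = Files.newBufferedWriter(pairs)) {
                String line;
                long n = 0;
                while ((line = in.readLine()) != null) {
                    out.write(line + "\t" + n++);
                    out.newLine();
                }
            }

            // Step 2: sort by text (position breaks ties, so the first occurrence
            // comes first); equal texts become adjacent and the extras are dropped.
            externalSort(pairs, byText,
                    Comparator.comparing(SortDedup::text).thenComparingLong(SortDedup::position), true);

            // Step 3: sort by original position to restore the order.
            externalSort(byText, byPos, Comparator.comparingLong(SortDedup::position), false);

            // Strip the positions again to produce the final file.
            try (BufferedReader in = Files.newBufferedReader(byPos);
                 BufferedWriter out = Files.newBufferedWriter(Paths.get("deduplicated.txt"))) {
                String pair;
                while ((pair = in.readLine()) != null) {
                    out.write(text(pair));
                    out.newLine();
                }
            }
        }

        static String text(String pair) { return pair.substring(0, pair.lastIndexOf('\t')); }
        static long position(String pair) { return Long.parseLong(pair.substring(pair.lastIndexOf('\t') + 1)); }

        // Classic external merge sort: sorted in-memory runs, then a k-way merge.
        static void externalSort(Path input, Path output, Comparator<String> cmp,
                                 boolean dropDuplicateText) throws IOException {
            List<Path> runs = new ArrayList<>();
            try (BufferedReader in = Files.newBufferedReader(input)) {
                List<String> buffer = new ArrayList<>();
                String line;
                while ((line = in.readLine()) != null) {
                    buffer.add(line);
                    if (buffer.size() == RUN_SIZE) { runs.add(writeRun(buffer, cmp)); buffer.clear(); }
                }
                if (!buffer.isEmpty()) runs.add(writeRun(buffer, cmp));
            }
            PriorityQueue<Head> heap = new PriorityQueue<>((a, b) -> cmp.compare(a.line, b.line));
            for (Path run : runs) {
                BufferedReader r = Files.newBufferedReader(run);
                String first = r.readLine();
                if (first != null) heap.add(new Head(first, r)); else r.close();
            }
            try (BufferedWriter out = Files.newBufferedWriter(output)) {
                String lastText = null;
                while (!heap.isEmpty()) {
                    Head h = heap.poll();
                    if (!dropDuplicateText || !text(h.line).equals(lastText)) {
                        out.write(h.line);
                        out.newLine();
                        lastText = text(h.line);
                    }
                    String next = h.reader.readLine();
                    if (next != null) heap.add(new Head(next, h.reader)); else h.reader.close();
                }
            }
            for (Path run : runs) Files.deleteIfExists(run);
        }

        static Path writeRun(List<String> buffer, Comparator<String> cmp) throws IOException {
            buffer.sort(cmp);
            Path run = Files.createTempFile("run", ".tmp");
            try (BufferedWriter out = Files.newBufferedWriter(run)) {
                for (String s : buffer) { out.write(s); out.newLine(); }
            }
            return run;
        }

        static class Head {
            final String line;
            final BufferedReader reader;
            Head(String line, BufferedReader r) { this.line = line; this.reader = r; }
        }
    }

Breaking ties on the original position in step 2 means the first occurrence of each string is the one that survives, so the rebuilt file keeps the original first-occurrence order.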

Comments


Depending on how the input is laid out in the file, i.e. if each line can be represented as a row of data:

Another way is to use a database server: insert your data into a database table with a unique-value column, reading from the file and inserting row by row. At the end the database will contain all the unique lines/rows.
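
A small sketch of that approach, using an embedded SQLite database through JDBC rather than a full database server (the sqlite-jdbc driver is assumed to be on the classpath; any database with a unique constraint works the same way). The UNIQUE column rejects duplicate lines and the auto-incremented id preserves the original first-occurrence order:

    import java.io.*;
    import java.nio.file.*;
    import java.sql.*;

    public class DatabaseDedup {
        public static void main(String[] args) throws Exception {
            try (Connection db = DriverManager.getConnection("jdbc:sqlite:dedup.db")) {
                try (Statement s = db.createStatement()) {
                    s.execute("CREATE TABLE IF NOT EXISTS lines (" +
                              "id INTEGER PRIMARY KEY AUTOINCREMENT, content TEXT UNIQUE)");
                }

                db.setAutoCommit(false);                   // run all inserts in one transaction
                try (BufferedReader in = Files.newBufferedReader(Paths.get("huge-input.txt"));
                     PreparedStatement insert =
                             db.prepareStatement("INSERT OR IGNORE INTO lines (content) VALUES (?)")) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        insert.setString(1, line);
                        insert.executeUpdate();            // duplicates are silently ignored
                    }
                }
                db.commit();

                // Write the unique lines back out in their original order.
                try (Statement s = db.createStatement();
                     ResultSet rs = s.executeQuery("SELECT content FROM lines ORDER BY id");
                     BufferedWriter out = Files.newBufferedWriter(Paths.get("deduplicated.txt"))) {
                    while (rs.next()) {
                        out.write(rs.getString(1));
                        out.newLine();
                    }
                }
            }
        }
    }

With this many rows, wrapping the inserts in a single transaction (or using addBatch/executeBatch) matters a lot for speed.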

Comments
