Timeline for Find duplicate files using Python
Current License: CC BY-SA 3.0
4 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Sep 14, 2015 at 7:46 | vote: accept | pepoluan | | |
| Sep 14, 2015 at 3:51 | comment: added | pepoluan | | Oh, about incrementing the order manually: I did that instead of using `enumerate()` because `flist` was actually iterated over a dict's values, and the lengths of the values in the dict are not guaranteed to be the same. E.g.: `{1: ['a', 'b'], 2: ['c', 'd', 'e'], 3: ['f', 'g', 'h', 'i'], 4: ['j', 'k']}` |
| Sep 14, 2015 at 3:38 | comment: added | pepoluan | | Agree with using `sys.argv` instead of hardcoding. In fact, that's next on the drawing board :-) I just hardcoded them for the moment to focus on correctness. Anyway, nice tips! I'll certainly consider the 'class-ification' of `hashing_strategy`. |
| Sep 11, 2015 at 17:23 | history: answered | vnp | CC BY-SA 3.0 | |
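The comment from Sep 14, 2015 at 3:51 argues that a manually incremented counter is needed because the items come from a dict's values, whose lists have unequal lengths. A minimal sketch (using a hypothetical `groups` dict modeled on the example in the comment) shows that a single running index can still come from `enumerate()` by chaining the values first:

```python
import itertools

# Hypothetical stand-in for the dict described in the comment:
# values are lists of unequal length.
groups = {1: ['a', 'b'], 2: ['c', 'd', 'e'], 3: ['f', 'g', 'h', 'i'], 4: ['j', 'k']}

# enumerate() over the chained values yields one continuous index
# across all the lists, so no manually incremented counter is needed.
flat = list(enumerate(itertools.chain.from_iterable(groups.values())))

print(flat[:3])  # first few (index, item) pairs
```

This is only an illustration of the `enumerate()` alternative discussed in the comments, not the answer's actual code.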