Timeline for Remove duplicates csv based on first value keeping the longest line between duplicates
Current License: CC BY-SA 4.0
14 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Apr 4, 2020 at 21:30 | history | edited | user313992 | CC BY-SA 4.0 | added 6 characters in body |
| Apr 4, 2020 at 21:21 | history | edited | user313992 | CC BY-SA 4.0 | deleted 25 characters in body |
| Apr 4, 2020 at 21:19 | comment | added | user313992 | | I was kind of second-guessing the OP, my bad; I have clarified the answer. If the challenge really is to sort by first field and then by length, then preprocessing the input to prepend the length is probably the way to go, something like `awk '{print length";"$0}' file \| sort -t';' -k2,2 -k1,1rn \| sed 's/[^;]*;//' \| sort -st';' -k1,1 -u` |
| Apr 4, 2020 at 19:08 | history | edited | user313992 | CC BY-SA 4.0 | clarify |
| Apr 4, 2020 at 15:23 | comment | added | Ed Morton | | That won't keep the longest lines, since `sort -rt';'` would output `foo;z;` before `foo;bar;lots;of;extras`. |
| Apr 2, 2020 at 14:02 | history | edited | user313992 | CC BY-SA 4.0 | deleted 18 characters in body |
| Apr 2, 2020 at 13:56 | history | undeleted | CommunityBot | | |
| Apr 2, 2020 at 13:54 | history | edited | user313992 | CC BY-SA 4.0 | added 7 characters in body |
| Apr 2, 2020 at 13:48 | history | edited | user313992 | CC BY-SA 4.0 | added 7 characters in body |
| Apr 2, 2020 at 13:43 | history | deleted | CommunityBot | | via Vote |
| Apr 2, 2020 at 13:31 | history | undeleted | CommunityBot | | |
| Apr 2, 2020 at 13:31 | history | edited | user313992 | CC BY-SA 4.0 | added 382 characters in body |
| Apr 2, 2020 at 13:11 | history | deleted | CommunityBot | | via Vote |
| Apr 2, 2020 at 13:09 | history | answered | user313992 | CC BY-SA 4.0 | |
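The pipeline quoted in the Apr 4, 2020 comment can be sketched end to end. This is an illustration only: the sample lines are invented, and it assumes a `sort` where `-s` is stable and `-u` keeps the first line of each equal-key run (GNU coreutils behavior).

```shell
# Keep the longest line per first ';'-separated field, as in the comment:
# 1) prepend each line's length, 2) sort by first field then length (desc),
# 3) strip the length prefix, 4) stable unique sort on the first field.
printf '%s\n' 'foo;bar;lots;of;extras' 'foo;z;' 'baz;a' |
  awk '{print length";"$0}' |      # prepend line length, e.g. "22;foo;..."
  sort -t';' -k2,2 -k1,1rn |      # group by first field, longest first
  sed 's/[^;]*;//' |              # drop the "NN;" length prefix
  sort -st';' -k1,1 -u            # keep first (longest) line per key
# prints:
# baz;a
# foo;bar;lots;of;extras
```

The final `sort -s -u` is the step that depends on stability: among lines sharing a first field, it keeps whichever arrived first, which step 2 arranged to be the longest one.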