Timeline for How to generalize and speed up this program?
Current License: CC BY-SA 3.0
5 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Aug 26, 2013 at 2:16 | comment added | Michael E2 | | @EdenHarder I'm not entirely sure, but perhaps it's not that strange. The data in both cases has 50000 elements. In the case of f00[a, 500], it consists of 100 segments of 500 elements. In f00[a, 5], it consists of 10000 segments of 5 elements. It processes the segments differently than the list of them. For instance, f00[a, 5000] is slower than f00[a, 500], so it doesn't keep getting faster as the length of the segments grows. The time depends mostly on the total number of elements and varies with how it's segmented. |
| Aug 26, 2013 at 1:11 | comment added | Eden Harder | | @MichaelE2 Why is f00[a, 500] // timeAvg faster than f00[a, 5] // timeAvg? |
| Aug 25, 2013 at 18:16 | comment added | Mr.Wizard | | Yes, this is exactly how I came to visualize the problem when I was working on my second answer. Thanks for the illustration. I haven't read your code yet and I've got to leave, but if you haven't yet you might try my idea under the "Room for improvement" heading in my answer. (It's a bit vague, but better formed in my head; we should chat later.) |
| Aug 25, 2013 at 17:08 | comment added | Michael E2 | | @Mr.Wizard This started just as a way to help Eden Harder visualize your algorithm, but it gave me an idea how to tweak your solution a little. Somehow I feel it might be improved, but I'm drawing a blank. |
| Aug 25, 2013 at 17:05 | history: answered | Michael E2 | CC BY-SA 3.0 | |
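The exchange above turns on how the same 50000 elements are split into segments: 100 segments of 500 versus 10000 segments of 5, with per-segment overhead paid once per segment. Since the original f00 is Mathematica code not shown on this page, here is a minimal Python analogue (the `segment` helper and the data are hypothetical stand-ins, not the author's code) that makes the segment counts concrete:

```python
# Hypothetical analogue of the segmentation discussed in the comments above.
# The data has 50000 elements in every case; only the segment length changes.

def segment(data, n):
    """Split data into consecutive segments of length n
    (roughly what Partition does in Mathematica)."""
    return [data[i:i + n] for i in range(0, len(data), n)]

data = list(range(50000))

seg500 = segment(data, 500)  # 100 segments of 500 elements
seg5 = segment(data, 5)      # 10000 segments of 5 elements

print(len(seg500), len(seg5))  # 100 10000

# Whatever fixed cost is paid per segment is paid 100 times in the first
# case and 10000 times in the second, even though both cover the same
# 50000 elements -- which is why the short-segment call can be slower.
```

This only illustrates the counting argument in the first comment; it says nothing about where the optimum segment length lies, which (as noted there, with f00[a, 5000] being slower than f00[a, 500]) depends on the actual per-segment versus per-element costs.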