I'm not sure exactly what you mean about data coming in `M` times. Assuming an endless stream of `N` timeslot/workload triplets, the best I can come up with is also `O(N*N)`. My main difference is that I do not use a dictionary/hashtable.

Basic idea: Initially we have 0 timeslots, so there is no violation of `MaxWorkLoad`. On each new timeslot, find all overlapping timeperiods and check whether the new addition causes a violation. In the worst case, all of the current entries overlap.

If we use a sorted structure, we can order entries primarily by begin time, secondarily by end time (and, as a micro-optimisation, by largest workload as the third priority). Then when we add a new timeslot `(xBegin, xEnd, workload)`, we can find the left bound (and also do the insertion) in `log N` time, by comparing `xBegin` with `yEnd` for each already inserted entry `y`. Then we sum the workloads until we reach an entry with `yBegin > xEnd`, which is at most `N` entries. If the sum at any point exceeds `MaxWorkLoad`, return true; otherwise return false. (Strictly speaking, entries that overlap the new slot need not all overlap each other at a single instant, so to avoid false positives the sum should be taken pointwise; see the sketch below.)

If the granularity of the workload is fixed, for example integers, and `M` is the max workload, then this changes to `O(min(N*M, N*N))`: each entry contributes at least one unit, so the sum exceeds `M` after at most `M` entries.

I don't think you can do better than `O(N*N)` in general. In order to determine future violations, you must remember all `N` values received so far (unless some agree on both begin and end time, in which case they can be merged).

The most closely related problem I can think of is the [Lecture Hall Assignment problem](http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Greedy/activity.htm) (at the bottom of the page), which is `O(N*N)`. To convert a triplet `(xBegin, xEnd, K)` to lecture hall activities, make `K` copies of the activity `(xBegin, xEnd)`.
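To make this concrete, here is a minimal Python sketch of the approach (the names `add_timeslot`, `MAX_WORKLOAD`, and `slots` are mine, not from the question). It follows the sorted-insert-then-scan idea above, except that the sum is taken pointwise with a begin/end event sweep over the overlapping entries, for the reason noted above; sorting the events adds a log factor to the `O(N)` scan per insertion.

```python
import bisect

MAX_WORKLOAD = 10  # hypothetical threshold; stands in for MaxWorkLoad

slots = []  # sorted list of (begin, end, workload) tuples


def add_timeslot(begin, end, workload):
    """Insert a timeslot; return True if the concurrent workload at
    some instant now exceeds MAX_WORKLOAD, else False."""
    # O(log N) search (plus O(N) list shifting), sorted by begin, then end.
    bisect.insort(slots, (begin, end, workload))
    # Entries with yBegin > xEnd cannot overlap; bisect finds that cutoff.
    hi = bisect.bisect_right(slots, (end, float("inf"), float("inf")))
    # Keep entries with yEnd >= xBegin and turn each into a pair of
    # +workload / -workload events for a pointwise sweep.
    events = []
    for y_begin, y_end, y_load in slots[:hi]:
        if y_end >= begin:
            events.append((y_begin, y_load))  # workload switches on
            events.append((y_end, -y_load))   # workload switches off
    # At equal times, the "off" event sorts first (-w < +w), so slots
    # that merely touch, e.g. [a, t] and [t, b], do not stack.
    events.sort()
    load = 0
    for _, delta in events:
        load += delta
        if load > MAX_WORKLOAD:
            return True
    return False
```

For example, with `MAX_WORKLOAD = 10`:

```python
print(add_timeslot(0, 5, 6))    # False
print(add_timeslot(10, 15, 6))  # False
print(add_timeslot(4, 11, 3))   # False: the raw sum of overlaps is 15,
                                # but no single instant exceeds 10
print(add_timeslot(4, 11, 2))   # True: load on [4, 5] is 6 + 3 + 2 = 11
```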