Not sure exactly what you mean about the data coming in for M times. If we assume an endless stream of N timeslot/workload triplets, the best I can come up with is also O(N*N). The main difference is that I do not use a dictionary/hashtable.

Basic idea: initially we have 0 timeslots, so there is no violation of MaxWorkLoad. On each new timeslot, find all overlapping time periods and check whether the new addition causes a violation. Worst case, all the current entries overlap.

If we use a sorted structure, we can keep entries sorted primarily by begin time, with end time secondary (and, as a micro-optimisation, largest workload as the third priority). Then when we add a new timeslot (xBegin, xEnd, workload) we can find the right bound (and also do the insertion) in log N time, by comparing xEnd with yBegin for the already inserted entries y. Then we sum the workload going left from there until the end (we have to: an entry far to the left can still have a late end time, so we cannot stop early). If the sum at any point is larger than MaxWorkLoad then return true, otherwise return false. Worst case we have to sum over N elements each time, which we would have to do anyway if we just looped through all of them.
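A minimal sketch of the procedure above in Python, with a sorted list standing in for the sorted structure (a plain list insert is itself O(N), which does not change the overall bound). The class and method names are my own, purely illustrative:

```python
import bisect

class WorkloadChecker:
    """Streaming overlap-sum check as described above: O(log N) to locate
    the insertion point, O(N) to sum, so O(N*N) over the whole stream."""

    def __init__(self, max_workload):
        self.max_workload = max_workload
        self.entries = []  # kept sorted by (begin, end)

    def add(self, begin, end, workload):
        """Insert the timeslot; return True if it causes a violation."""
        bisect.insort(self.entries, (begin, end, workload))
        # Right bound: entries from this index onward have yBegin >= end,
        # so they cannot overlap the half-open interval [begin, end).
        hi = bisect.bisect_left(self.entries, (end,))
        # Sum leftward all the way to the start: an entry far to the left
        # may still have a late end time, so we cannot stop early.
        total = 0
        for y_begin, y_end, y_load in self.entries[:hi]:
            if y_end > begin:  # overlaps the new slot
                total += y_load
                if total > self.max_workload:
                    return True
        return False
```

The early `return True` is also what gives the O(min(N*M, N*N)) behaviour for integer workloads: each overlapping entry adds at least 1, so at most M of them are summed before the check trips.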

If the granularity of the workload is fixed, for example integers, and M is the max workload, then this changes to O(min(N*M, N*N)), because during the sum at most M overlapping entries will be processed before the sum exceeds MaxWorkLoad.

I don't think you can, in general, do better than O(N*N). You must remember all N values received so far (unless some agree on both start and end time) in order to detect any future violation.

The most closely related problem I can think of is the Lecture Hall Assignment problem, which is O(N*N). To convert (xBegin, xEnd, K) to lecture hall activities, make K copies of (xBegin, xEnd). Going the other direction, we can reduce the decision variant of the lecture hall assignment problem to your problem: for each activity, make a timeslot with the workload set to 1. The lecture hall decision problem is then satisfiable with P halls exactly when your corresponding problem never violates MaxWorkLoad = P. If we believe the decision version of the lecture hall problem is also at least O(N*N), then so is your problem.
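Both reductions are mechanical. A sketch, assuming integer workloads K for the first direction; the helper names are hypothetical:

```python
def to_lecture_hall_activities(timeslots):
    """(xBegin, xEnd, K) -> K copies of the activity (xBegin, xEnd)."""
    activities = []
    for begin, end, k in timeslots:
        activities.extend([(begin, end)] * k)
    return activities

def to_workload_timeslots(activities):
    """Lecture hall activity (xBegin, xEnd) -> timeslot with workload 1.
    P halls suffice exactly when MaxWorkLoad = P is never violated."""
    return [(begin, end, 1) for begin, end in activities]
```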
