You don't need to remember every single bucket you've seen.
Once a bucket is added to the top of the pile, any wider bucket whose rim is lower than the rim of the new bucket becomes irrelevant, as do all the narrower buckets it sits on.
So a better data structure would store only the altitudes of the rims currently in play, initialised to a single entry (0, 0) representing the ground. When you add a bucket, find the widest rim that is no wider than the new bucket; set height[0] to that rim's altitude and height[bucket_width] to height[0] + bucket_height. Then remove all narrower rims, and all wider rims up to the new height - they can no longer affect the result.
I've added a working demonstration, with tests, below.
Other items for review:
- `using namespace std;` - bringing all names in from a namespace is problematic; namespace `std` particularly so. See *Why is "using namespace std" considered bad practice?*.
- Spellings: `highestPoint`, `maximum`.
- Naming: `temp` and `temporary` are declared far from their use, so it's hard to appreciate the difference.
- You can omit `return 0;` in `main()`.
Working solution
```cpp
#include <algorithm>
#include <iostream>
#include <istream>
#include <iterator>
#include <map>
#include <sstream>

// Assume that unsigned int is large enough to represent the tallest
// tower
unsigned tower_height(std::istream& input)
{
    unsigned count;
    input >> count;
    if (!input) {
        std::cerr << "reading count" << std::endl;
        return 0;
    }

    // start with a point at 0,0 on which everything balances
    std::map<unsigned,unsigned> m{ {0,0} };   // radius -> altitude

    while (count-->0) {
        unsigned radius, height;
        input >> radius >> height;
        if (!input) {
            std::cerr << "reading size" << std::endl;
            return 0;
        }
        // find the widest bucket this sits on top of
        auto smaller = std::prev(m.upper_bound(radius));
        // floor height is now smaller's height
        m[0] = smaller->second;
        // floor extends out over smaller
        if (smaller != m.begin()) {
            m.erase(std::next(m.begin()), std::next(smaller));
        }
        // put this bucket in place (it will always be second)
        auto current = m.insert({radius, m[0]+height}).first;
        // find the first wider bucket that's taller
        auto height_test = [current](auto const& e){ return e.second > current->second; };
        auto wider = std::find_if(current, m.end(), height_test);
        // and remove values up to it
        m.erase(std::next(current), wider);
    }

    return m.rbegin()->second;
}

static bool test(unsigned expected, const std::string& input)
{
    std::istringstream is{input};
    auto actual = tower_height(is);
    if (actual != expected) {
        std::cerr << "Got " << actual << " instead of " << expected
                  << " for input " << input << std::endl;
        return false;
    }
    return true;
}

int main()
{
    int errors = 0;
    errors += !test(0, "0");
    errors += !test(10, "1 20 10");
    errors += !test(20, "2 20 10 20 10");
    errors += !test(15, "2 20 10 10 15");
    errors += !test(25, "2 10 15 20 10");
    errors += !test(25, "3 10 15 20 10 5 10");
    errors += !test(20, "3 20 10 5 5 10 15");
    return errors;
}
```
A note on complexity
At first glance, we might think that the code here is O(n²), because we call `std::find_if()` each time we add a bucket. But `find_if()` returns at the first match, and we then erase all the elements it scanned past, reducing the work for subsequent iterations. Since each bucket is inserted once and erased at most once, the total scanning work is linear: on average we scale as O(n), and that's demonstrable:
```cpp
// in addition to the headers above, this needs <chrono> and <random>
std::mt19937 gen{std::random_device()()};
std::uniform_int_distribution dist{1, 1000};

for (unsigned n: {10000, 100000, 1000000}) {
    std::stringstream s;
    s << n;
    for (unsigned i = 0;  i < n;  ++i) {
        s << ' ' << dist(gen) << ' ' << dist(gen);
    }
    auto start_time = std::chrono::high_resolution_clock::now();
    auto height = tower_height(s);
    auto end_time = std::chrono::high_resolution_clock::now();
    auto millis = std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time);
    std::clog << "Measured " << n << "-bucket tower as " << height
              << " in " << millis.count() << "ms" << std::endl;
}
```
```none
Measured 10000-bucket tower as 2897237 in 1ms
Measured 100000-bucket tower as 28587991 in 13ms
Measured 1000000-bucket tower as 285125976 in 136ms
```
The very worst case is when each bucket is narrower and shorter than the one before, so nothing is ever erased and the map keeps growing. This introduces an O(log n) factor, since that's how map operations scale. So we have worst-case performance that scales as O(n log n):
```cpp
for (unsigned n: {10000, 100000, 1000000}) {
    std::stringstream s;
    s << n;
    for (unsigned i = n;  i > 0;  --i) {
        s << ' ' << i << ' ' << i;
    }
    auto start_time = std::chrono::high_resolution_clock::now();
    auto height = tower_height(s);
    auto end_time = std::chrono::high_resolution_clock::now();
    auto millis = std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time);
    std::clog << "Measured " << n << "-bucket tower as " << height
              << " in " << millis.count() << "ms" << std::endl;
}
```
```none
Measured 10000-bucket tower as 10000 in 2ms
Measured 100000-bucket tower as 100000 in 36ms
Measured 1000000-bucket tower as 1000000 in 539ms
```