This question relates to the question [here][1], but I'll generalise it so that you can answer effectively without reading all of that.

[1]: http://programmers.stackexchange.com/questions/315951/caching-in-3d-data-structures-and-octree-recursion-hrrm/315955?noredirect=1#comment668879_315955

____

Context:
------

Imagine you had a large set of data, greater than your available RAM, partitioned into chunks. <sub>(Sized such that access is handled efficiently by virtual memory managers etc.)</sub>

An application with a GUI prompts a user to record a sequential process that will require access to sections of this data over time. In realtime this would be managed by a kind of LRU cache for the user, so they get feedback (perhaps at a lower resolution to account for latency). Data that is required but not in RAM would be loaded by replacing data tagged 'least recently used'...

> But now instead, imagine that I *know the sequence in advance* - i.e. I effectively have **look-ahead/clairvoyance** of future memory access requirements.

Questions:
---

What *optimal* algorithms/strategies are there to manage this in the case of:

 1. Needing to 'play' back the sequence in a kind of pseudo-realtime (sequential) manner for a user.
 2. Needing to just process it *as fast as possible* in a non-sequential 'offline' fashion.

Imagine a worst case where a chunk is required 'on and off', but *often*, throughout the sequence - i.e. the gap between its uses is just longer than the window after which an LRU or LFU strategy would evict it as 'not required'. By most definitions of optimal, we'd rather just keep it in RAM, right?
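To make that worst case concrete, here's a minimal toy simulation (Python; the chunk IDs, cache capacity, and access pattern are all made up for illustration). It contrasts plain LRU with the clairvoyant rule "evict the resident chunk whose next use is farthest in the future" - which, as far as I understand, is essentially Bélády's OPT. With five chunks cycling through four cache slots, every chunk's next use arrives just after LRU has evicted it, so LRU misses on every access while the clairvoyant policy barely misses at all:

```python
from collections import OrderedDict

def simulate_lru(accesses, capacity):
    """Miss count for a plain LRU cache over an access sequence."""
    cache = OrderedDict()   # chunk id -> True, ordered oldest-first
    misses = 0
    for chunk in accesses:
        if chunk in cache:
            cache.move_to_end(chunk)        # refresh recency
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict least recently used
            cache[chunk] = True
    return misses

def simulate_clairvoyant(accesses, capacity):
    """Miss count when the whole sequence is known in advance: on a full
    cache, evict the resident chunk whose next use is farthest away."""
    cache = set()
    misses = 0
    for i, chunk in enumerate(accesses):
        if chunk in cache:
            continue
        misses += 1
        if len(cache) >= capacity:
            def next_use(c):
                # naive O(n) scan; a real version would precompute this
                for j in range(i + 1, len(accesses)):
                    if accesses[j] == c:
                        return j
                return float('inf')         # never needed again
            cache.remove(max(cache, key=next_use))
        cache.add(chunk)
    return misses

# Worst case for LRU: five chunks cycling through four slots.
capacity = 4
pattern = [0, 1, 2, 3, 4] * 20
print("LRU misses:        ", simulate_lru(pattern, capacity))          # every access misses
print("Clairvoyant misses:", simulate_clairvoyant(pattern, capacity))  # far fewer
```

(In a real implementation you'd precompute each chunk's next-use index in one backward pass rather than rescanning inside `next_use`, but the toy version keeps the idea visible.)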