
  • I can't speak about practice, but two things come to my mind from Haskell: zippers, allowing constant-time "steps" over data structures, and comonads, which are related to zippers by some theory which I neither remember nor properly understand ;) (a minimal zipper sketch follows after these comments) Commented Jun 15, 2016 at 20:44
  • How big is this playing board? Big O characterizes how an algorithm scales, not how fast it is. On a small board (say, less than 100 in each direction), O(1) vs. O(n) is unlikely to matter much, if you only touch each square once. Commented Jun 15, 2016 at 21:09
  • @RobertHarvey It will vary. But to give an example: in chess we have an 8x8 board (64 squares), and all the computations that check which moves are possible and that evaluate the current position's heuristic value (difference in material, whether the king is in check, passed pawns, etc.) need to access squares of the board. Commented Jun 15, 2016 at 21:33
  • 1
    You have an 8x8 board in chess. In a memory-mapped language like C, you can make a mathematical calculation to get the exact address of a cell, but that's not true in memory-managed languages (where ordinal addressing is an implementation detail). It wouldn't surprise me if jumping across (a maximum of) 14 nodes takes roughly the same amount of time as addressing an array element in a memory-managed language. Commented Jun 15, 2016 at 21:42
  • See also stackoverflow.com/q/9611904/124319 Commented Jun 15, 2016 at 22:38
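
To make the zipper comment concrete, here is a minimal sketch of a list zipper in Haskell. The names (Zipper, fromList, left, right, modify) are illustrative, not from any particular library; the point is only that moving or editing the focus reallocates a constant number of cells.

    -- A list zipper: the elements left of the focus (stored reversed),
    -- the focused element, and the elements to its right.
    data Zipper a = Zipper [a] a [a] deriving Show

    fromList :: [a] -> Maybe (Zipper a)
    fromList []     = Nothing
    fromList (x:xs) = Just (Zipper [] x xs)

    -- Moving the focus one step is O(1): only a few cons cells change.
    left, right :: Zipper a -> Maybe (Zipper a)
    left  (Zipper (l:ls) f rs) = Just (Zipper ls l (f:rs))
    left  _                    = Nothing
    right (Zipper ls f (r:rs)) = Just (Zipper (f:ls) r rs)
    right _                    = Nothing

    -- Replacing the focused element is also O(1), purely functionally.
    modify :: (a -> a) -> Zipper a -> Zipper a
    modify g (Zipper ls f rs) = Zipper ls (g f) rs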
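And for the addressing point, a rough sketch assuming the board is kept in a flat 64-element immutable array: Haskell's Data.Array gives O(1) indexing with (!), and the cell address is the same row * 8 + col arithmetic a C array index compiles down to. The Square type and the mkBoard/at helpers are hypothetical names for illustration.

    import Data.Array

    type Square = Char                 -- e.g. 'R', 'p', '.' for empty

    -- Flat 64-element array indexed 0..63.
    mkBoard :: [Square] -> Array Int Square
    mkBoard = listArray (0, 63)

    -- O(1) lookup: compute the offset, then index directly.
    at :: Array Int Square -> Int -> Int -> Square
    at board row col = board ! (row * 8 + col)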