  • Thank you, and I admire how you were able to sniff out the problem (pun intended :-) ) so promptly and repro it succinctly on DB Fiddle. I'd just gotten to the point where I could repro the problem on my development server by reloading the DB from a backup. Simply recreating or altering the procedure was not enough, because the table in the development DB had experienced several hundred thousand inserts with much higher id values but lower timestamps (i.e. primary key values) than those at the 'working end' of the table, which must have been enough to keep the planner from going down the wrong path. Commented Sep 25 at 19:11
  • @MartinSmith: I can understand where this optimisation is coming from: if things look right, go straight for the first or last index block and start a scan (expecting the result to be on the first and only page accessed) for a single LRead, instead of doing an index descent, which on non-trivial tables usually costs something like 3 or 4 LReads. However, for me as a programmer, a plan with a single index descent is already a perfectly good plan that executes in fractions of a millisecond unless physical reads are involved. Commented Sep 25 at 21:41
  • Mitigating the optimisation by way of OPTION (RECOMPILE) tends to cost something on the order of 10 ms, which is a pessimisation by more than an order of magnitude. If we didn't have TOP 1 queries with a suitable ORDER BY clause as a workaround, I would regard this optimisation as majorly iffy ... Commented Sep 25 at 21:45
  • It does consider the transformation rule for the case in the Fiddle. But it is choosing between a plan with (according to its estimates) Forward Seek of 1 row -> Stream Aggregate, and one with Backward Seek of 1 row -> Top -> Stream Aggregate, and the seeks and aggregates are costed identically. The Top operator introduces a minuscule extra operator cost of 0.0000001, which is enough to make that alternative seem more expensive, despite that plan in reality being much more resilient in the face of incorrect estimates. Commented Sep 25 at 23:00
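The TOP 1 workaround mentioned in the comments can be sketched as follows. This is a minimal illustration, not the original poster's actual schema: the table `dbo.Events`, its `id` column, and the `@since` parameter are all hypothetical names, standing in for a table whose primary key is a timestamp and whose `id` values do not correlate with it.

```sql
-- Aggregate form: the optimizer may cost this as a seek expected to
-- produce one row, a plan that degrades badly when the estimate is wrong.
SELECT MIN(id)
FROM dbo.Events
WHERE ts >= @since;

-- TOP 1 form: costed almost identically (the Top operator adds only a
-- ~0.0000001 operator cost, per the discussion above), but in practice
-- far more resilient to incorrect cardinality estimates.
SELECT TOP (1) id
FROM dbo.Events
WHERE ts >= @since
ORDER BY id;
```

Both queries return the same value; the difference discussed above lies entirely in which physical plan shape the optimizer picks and how that plan behaves when its row estimates are off.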