Conversation

@brandtbucher brandtbucher commented Feb 11, 2022

On further reflection, I think it's definitely possible to get our current ~80% hit rate up to ~90%. These detailed failure stats will help guide my work on this.

Example full pyperformance run with this patch applied:

| Kind | Count | Ratio |
| --- | ---: | ---: |
| unquickened | 7837567 | 0.1% |
| deferred | 1187894724 | 19.8% |
| deopt | 600612 | 0.0% |
| hit | 4756557559 | 79.5% |
| miss | 32075102 | 0.5% |

Specialization attempts

| | Count | Ratio |
| --- | ---: | ---: |
| Success | 948284 | 5.0% |
| Failure | 18123381 | 95.0% |

| Failure kind | Count | Ratio |
| --- | ---: | ---: |
| and int | 2841285 | 15.7% |
| rshift | 1964549 | 10.8% |
| lshift | 1690147 | 9.3% |
| xor | 1608323 | 8.9% |
| remainder | 1574409 | 8.7% |
| add other | 1411333 | 7.8% |
| true divide different types | 1272004 | 7.0% |
| floor divide | 993317 | 5.5% |
| true divide float | 855450 | 4.7% |
| multiply different types | 806998 | 4.5% |
| subtract different types | 782979 | 4.3% |
| subtract other | 722394 | 4.0% |
| power | 598295 | 3.3% |
| or | 345072 | 1.9% |
| add different types | 326248 | 1.8% |
| multiply other | 165969 | 0.9% |
| and other | 140198 | 0.8% |
| true divide other | 17550 | 0.1% |
| and different types | 6861 | 0.0% |

Note that "add other" and "multiply different types" also correspond more specifically to sequence concatenation and repetition, respectively. Similarly, "remainder" probably contains many examples of old-style string formatting.

I figure it's not worth the clutter of breaking out specific stats for these non-numeric cases yet, since their proportion of the total failures is still pretty low (and the benefit of specialization seems slim).
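To make that concrete, here are a few purely illustrative expressions (my own examples, not pulled from the benchmark output) and the buckets they would land in:

```python
# Illustrative only -- not taken from the pyperformance run above.
[1, 2] + [3, 4]      # "add other": sequence concatenation rather than a numeric add
"ab" * 3             # "multiply different types": sequence repetition (str * int)
"%s=%d" % ("x", 1)   # "remainder": old-style string formatting, not a numeric modulo
```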

https://bugs.python.org/issue46072

@markshannon

This looks like it adds some overhead when not gathering stats.
The way we usually avoid that is to put a function call into the macro, i.e.:

```c
default:
    SPECIALIZATION_FAIL(BINARY_OP, binary_op_failure_kind(lhs, rhs));
```
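For illustration, here is a minimal sketch of that pattern under a `Py_STATS`-style build flag; apart from `SPECIALIZATION_FAIL` and `binary_op_failure_kind`, every name below is a hypothetical stand-in rather than the real CPython code. The point is that when stats are off, the macro drops its arguments, so the failure-kind call is never evaluated.

```c
/* Hypothetical sketch of the stats-gated macro pattern; not CPython's
 * actual definitions. */
#include <stdio.h>

#ifdef Py_STATS
static int binary_op_failures[32];  /* hypothetical counters, indexed by kind */
#define SPECIALIZATION_FAIL(opcode, kind) (binary_op_failures[(kind)]++)
#else
/* The expansion discards both arguments, so the `kind` expression --
 * including any function call used to compute it -- is never evaluated.
 * Builds that don't gather stats pay no overhead. */
#define SPECIALIZATION_FAIL(opcode, kind) ((void)0)
#endif

#define BINARY_OP 0  /* stand-in value, not the real opcode number */

/* Stand-in classifier; the real one would inspect the operand types. */
static int binary_op_failure_kind(int lhs_is_int, int rhs_is_int)
{
    return (lhs_is_int && rhs_is_int) ? 0 : 1;
}

int main(void)
{
    /* In the specializer this call sits in the `default:` arm shown above. */
    SPECIALIZATION_FAIL(BINARY_OP, binary_op_failure_kind(1, 0));
#ifdef Py_STATS
    printf("failures of kind 1: %d\n", binary_op_failures[1]);
#endif
    return 0;
}
```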
@brandtbucher brandtbucher merged commit 580cd9a into python:main Feb 16, 2022
@brandtbucher brandtbucher deleted the binary-op-stats branch July 21, 2022 20:08