TL;DR
Is `a and b` equivalent to `tf.logical_and(a, b)` in terms of optimization and performance? (`a` and `b` are TensorFlow tensors.)
Details:
I use Python with TensorFlow. My first priority is to make the code run fast; my second priority is to make it readable. I have working, fast code that, to my eye, looks ugly:
```python
@tf.function  # @tf.function(jit_compile=True)
def my_tf_func():
    # ...
    a = ...  # some tensorflow tensor
    b = ...  # another tensorflow tensor
    # currently ugly: prefix notation with tf.logical_and
    c = tf.math.count_nonzero(tf.logical_and(a, b))
    # more readable alternative: infix notation:
    c = tf.math.count_nonzero(a and b)
    # ...
```

The code that uses prefix notation works and runs fast, but I don't think it's very readable due to the prefix notation (it's called prefix notation because the name of the operation, `logical_and`, comes before the operands `a` and `b`).
Can I use infix notation, i.e. the alternative at the end of the code above, with the usual Python operators like `and`, `+`, `-`, or `==`, and still get all the benefits of TensorFlow on the GPU and compile it with XLA support? Will it compile to the same result?
The same question applies to unary operators like `not` vs. `tf.logical_not(...)`.
This question was crossposted at https://software.codidact.com/posts/289588 .
`and` and `tf.logical_and` do different things, first of all. `tf.logical_and(a, b)` is an element-wise logical AND when `a` and `b` are both tensors, while `a and b` evaluates to `a` if `bool(a)` is false and to `b` otherwise, because `and` is controlled only by `__bool__` and you can't overload the `and` operator for custom classes. For a `tf.Tensor`, `bool()` only works on single-element tensors in eager mode; on a multi-element tensor it raises an ambiguity error, and inside a `@tf.function` graph it raises an error outright, AFAICT. For `+` this should not give a performance penalty, since `tf.Tensor` overloads `__add__` to dispatch to the corresponding TensorFlow op.
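The `__bool__` vs `__add__` distinction can be sketched in plain Python, no TensorFlow needed; the `Vec` class below is a made-up stand-in for a tensor-like type:

```python
class Vec:
    """Hypothetical tensor-like container (stand-in for tf.Tensor)."""
    def __init__(self, data):
        self.data = list(data)

    def __bool__(self):
        # Python's `and`/`or`/`not` always go through __bool__;
        # there is no hook to make them element-wise.
        return len(self.data) > 0

    def __add__(self, other):
        # `+` goes through __add__, so it CAN be element-wise,
        # which is why tf.Tensor can map `+` to tf.add.
        return Vec(x + y for x, y in zip(self.data, other.data))

a = Vec([1, 0, 1])
b = Vec([0, 0, 1])

print((a and b) is b)   # True: `and` just returns an operand, no element-wise AND
print((a + b).data)     # [1, 0, 2]: `+` dispatches to __add__ and is element-wise
```

Note that, if you want readable element-wise infix logic on boolean tensors, `tf.Tensor` does overload the bitwise operators `&`, `|`, and `~` (via `__and__`, `__or__`, `__invert__`) to the corresponding logical ops, so `a & b` is the infix equivalent of `tf.logical_and(a, b)`.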