div and idiv, in their current implementations in x86 processors, do have an execution time that depends on their operands, but only indirectly: it really depends on the magnitude of the result. Dividing such that the result is 0 or 1 is faster (but certainly not fast) than dividing and getting a large result. The exact details depend on the microarchitecture, and the performance difference can be larger or smaller.
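For illustration, here is a minimal microbenchmark sketch of that effect (my own, so treat the numbers it prints with the usual skepticism). It times a dependent chain of 64-bit divisions whose quotients are either tiny or huge; the volatile reads are only there to keep the compiler from strength-reducing or deleting the div instructions.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Time a dependent chain of `iters` divisions and return total nanoseconds.
   Marking dividend/divisor volatile stops the compiler from folding the
   division into a multiply or removing it outright. */
static double time_divs(uint64_t dividend, uint64_t divisor, int iters) {
    volatile uint64_t d = dividend, q = divisor;
    uint64_t acc = dividend;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        /* XOR in one bit of the previous quotient so each division depends
           on the last one (measuring latency, not throughput), without
           changing the magnitude of the dividend. */
        acc = (d ^ (acc & 1)) / q;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile uint64_t sink = acc;   /* keep the whole chain alive */
    (void)sink;
    return (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
}

int main(void) {
    const int n = 10 * 1000 * 1000;
    printf("small quotient: %.2f ns/div\n", time_divs(7, 3, n) / n);
    printf("large quotient: %.2f ns/div\n", time_divs(UINT64_MAX, 3, n) / n);
    return 0;
}
```

On the microarchitectures described above, the second loop should come out noticeably slower per iteration.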
But the division instruction does not care about powers of two on any implementation I know of, and on every implementation I know of it is, even in the best case, much slower than a bitwise AND: by a factor of at least 9 (Core 2 45nm), usually more like 20, and worse still for 64-bit operands.
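For comparison, the manual replacement is just this (a sketch with a made-up name, valid for unsigned values and a b that really is a power of two):

```c
#include <assert.h>
#include <stdint.h>

/* Same result as a % b, but a single cheap ALU operation instead of div. */
static inline uint32_t mod_pow2(uint32_t a, uint32_t b) {
    assert(b != 0 && (b & (b - 1)) == 0);  /* b must have exactly one bit set */
    return a & (b - 1);
}
```

For example, mod_pow2(1000, 64) gives 40, the same as 1000 % 64.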
If the compiler knows that b is a power of two, it may do something about it, but that is usually reserved for obvious cases, for example compile-time constants (not necessarily literals) or a value created as 1 << n. Your compiler may be able to figure out more cases than that, or fewer. Either way, the point is that it is not enough for you to know; the compiler has to know, and the rules for that are obviously compiler-specific.
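To make that concrete, here is roughly what typical compilers can and cannot see through (hypothetical function names, unsigned types to keep it simple); checking the generated assembly for your compiler and flags is the only way to know for sure:

```c
#include <stdint.h>

/* Literal power of two: compilers emit an AND (a & 63), no div. */
uint32_t mod_const(uint32_t a) { return a % 64; }

/* Divisor built as 1 << n: many compilers still recognize this and emit
   an AND with ((1u << n) - 1). */
uint32_t mod_shift(uint32_t a, unsigned n) { return a % (1u << n); }

/* Arbitrary runtime b: the compiler cannot know it is a power of two,
   so it emits a real division even if every caller passes one. */
uint32_t mod_runtime(uint32_t a, uint32_t b) { return a % b; }
```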