The performance of the casting depends on the JVM implementation.
JLS 5.5 only determines the requirements for the cast itself (including a recursive algorithm for compile-time checking), but sets no requirements on the implementation. The runtime cast rules in JLS 5.5.3 are specified the same way: any JVM implementation that produces the same result as the proposed algorithm is accepted as a proper JVM.
Generally, casting down to C takes a little more time, since the JVM must examine the runtime type of the object. When casting up to A it has no reason to perform the same check, since B extends A.
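As a minimal sketch (assuming the hierarchy from the question, with C extending B extending A; the variable names are mine):

    class A {}
    class B extends A {}
    class C extends B {}

    public class UpDownCast {
        public static void main(String[] args) {
            A a = new C();    // upcast: resolved at compile time, no runtime check
            C c = (C) a;      // downcast: the runtime type of 'a' must be inspected
            B b = new B();
            // C c2 = (C) b;  // compiles, but would throw ClassCastException at runtime
        }
    }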
Actually, the JVM does not care about the number of methods and fields. It only compares the type hierarchy, the same one you can examine with reflection (starting from o.getClass()).
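For example (an illustrative snippet, class names arbitrary), you can walk the same metadata yourself:

    public class HierarchyWalk {
        public static void main(String[] args) {
            Object o = "hello";
            // Walk the superclass chain; this is the metadata a cast
            // consults, regardless of how large the classes are.
            for (Class<?> k = o.getClass(); k != null; k = k.getSuperclass()) {
                System.out.println(k.getName());
            }
            // prints: java.lang.String, java.lang.Object
        }
    }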
I wrote a small sample, one downcast followed by an upcast:
    Object o = new Integer(1);
    Integer i = (Integer) o;
    Object o2 = i;
The compiled bytecode (as printed by javap -c, for instance) is the following:
     0 new java.lang.Integer [16]
     3 dup
     4 iconst_1                                    <-- 1 as a parameter to the constructor
     5 invokespecial java.lang.Integer(int) [18]   <-- constructor
     8 astore_1 [o]                                <-- store in 'o'
     9 aload_1 [o]
    10 checkcast java.lang.Integer [16]            <-- DOWNCAST CHECK, SPECIAL BYTECODE
    13 astore_2 [i]
    14 aload_2 [i]
    15 astore_3 [o2]                               <-- WITH UPCAST NO CHECK
So there is a specific JVM instruction, checkcast, that checks the object on top of the stack against a given class. With an upcast, there is no check at all.
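To see the check in action from the Java side (a small illustrative example, separate from the bytecode above):

    public class CheckcastDemo {
        public static void main(String[] args) {
            Object o = new Object();
            try {
                Integer i = (Integer) o;  // checkcast fails: runtime type is Object
            } catch (ClassCastException e) {
                System.out.println("checkcast rejected the cast");
            }
            Object boxed = Integer.valueOf(1);
            Integer ok = (Integer) boxed; // checkcast succeeds: runtime type is Integer
            Object back = ok;             // upcast: no check emitted at all
        }
    }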
The size of the classes (number of fields and methods, actual memory footprint) does not matter, because the cast examines only the Class metadata (which is itself an object).
The number of hierarchy levels, and the number of implemented interfaces (when casting to an interface), do matter, because that is the inheritance/implementation tree that has to be traversed for the check; see the sketch below.
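To make the traversal concrete, here is a rough sketch in plain Java using reflection; a real JVM does this natively and far more efficiently, and arrays and primitives are ignored here:

    public class SubtypeSketch {
        // Illustrative only: walk superclass and interface edges until
        // the target type is found or the tree is exhausted.
        static boolean isSubtypeOf(Class<?> sub, Class<?> target) {
            if (sub == null) return false;              // walked past java.lang.Object
            if (sub == target) return true;
            for (Class<?> itf : sub.getInterfaces()) {  // interface edges
                if (isSubtypeOf(itf, target)) return true;
            }
            return isSubtypeOf(sub.getSuperclass(), target); // superclass edge
        }

        public static void main(String[] args) {
            System.out.println(isSubtypeOf(Integer.class, Number.class));     // true
            System.out.println(isSubtypeOf(Integer.class, Comparable.class)); // true
            System.out.println(isSubtypeOf(Number.class, Integer.class));     // false
        }
    }

The deeper the hierarchy and the more interfaces involved, the more edges a check like this has to visit, which is exactly why those numbers matter.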
I would be surprised if there weren't some kind of cache for this check. (HotSpot, for instance, is known to cache the result of the last successful supertype check per class.)