The hash is used to compute an element's placement within the HashMap. It would be supremely inefficient if adding a key to your HashMap required re-computing the hash every time a collision needed to be resolved. Not only that, but some objects have expensive hash functions (for example, String.hashCode walks every character in the String). It is therefore desirable to compute such functions once and cache the result, which is safe because you should not change the hashCode/equals behavior of an object while it is a key in a Map.
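To make this concrete, here is a minimal sketch (not the actual java.util.HashMap source; the class and field names here are illustrative) of a hash-map node that caches the key's hash at insertion time:

```java
// A simplified node that caches its key's hash once, at construction.
final class CachedNode<K, V> {
    final int hash;            // computed once, reused for every probe and resize
    final K key;
    V value;
    CachedNode<K, V> next;     // collision chain

    CachedNode(K key, V value, CachedNode<K, V> next) {
        this.hash = key.hashCode(); // the only hashCode() call for this entry
        this.key = key;
        this.value = value;
        this.next = next;
    }

    // During lookup, the cheap int comparison short-circuits before the
    // potentially expensive equals() call.
    boolean matches(int h, Object k) {
        return hash == h && (key == k || key.equals(k));
    }
}
```

Resizing can likewise redistribute nodes using the stored `hash` field, without ever calling `hashCode()` again.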
Let's consider a HashMap of capacity n that is half full (n/2 entries), with independent elements being placed into it. For any given element being added, there is at least a 1/2 probability of a collision. The table can hold n entries, but the default load factor is 0.75, which means we have 3n/4 - n/2 = n/4 insertions left before the map resizes. Each collision among those insertions saves us time (thanks to caching), so caching only fails to pay off if every one of them is collision-free. Taking the maximum possible probability of 1/2 for a single insertion avoiding a collision, the probability of no collisions occurring before HashMap expansion is at most (1/2)^(n/4). This means that for any sizeable HashMap (n >= 29, of which only 0.5 * 29 ≈ 15 keys have to be filled), there is a greater than 99% chance that you get time savings off of caching hash values.
tl;dr It saves time, especially as collisions become frequent during insertions and lookups.
(The cached hash is also reused when the table is resized and when a bucket is converted into a tree in treeifyBin(...).)
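You can observe the caching empirically. The experiment below (which assumes OpenJDK-style HashMap behavior; the Key class is made up for the demonstration) counts hashCode() invocations: each put and each get calls it exactly once, and no additional calls happen during collision resolution, resizing, or treeification, because the nodes carry their cached hash:

```java
import java.util.HashMap;

public class HashCallCounter {
    static int calls = 0;

    static final class Key {
        final int id;
        Key(int id) { this.id = id; }
        // Count every invocation; "% 8" forces heavy bucket collisions.
        @Override public int hashCode() { calls++; return id % 8; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id == id;
        }
    }

    public static void main(String[] args) {
        HashMap<Key, Integer> map = new HashMap<>(16);
        Key[] keys = new Key[100];          // enough inserts to trigger resizes
        for (int i = 0; i < 100; i++) {
            keys[i] = new Key(i);
            map.put(keys[i], i);            // one hashCode() call per put
        }
        int afterPuts = calls;
        for (Key k : keys) {
            map.get(k);                     // one more hashCode() call per get
        }
        System.out.println(afterPuts + " puts, " + (calls - afterPuts) + " gets");
    }
}
```

If hashes were recomputed on every collision or resize, the counter would be far higher than one call per operation.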