Taking your questions in turn:

> what value does unit testing have then

Unit tests are cheap to write and run, and they give you early feedback. If you break class X, good tests will tell you more or less immediately.
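For example, a trivial test like this (JUnit 5, with invented names) runs in milliseconds, so a breaking change surfaces on the very next build:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: a simple price calculator.
class PriceCalculator {
    int totalInPence(int unitPence, int quantity) {
        return unitPence * quantity;
    }
}

class PriceCalculatorTest {
    @Test
    void totalIsUnitPriceTimesQuantity() {
        // Cheap to run: a breaking change here fails the next build immediately.
        assertEquals(600, new PriceCalculator().totalInPence(200, 3));
    }
}
```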

> Does it really only tell you that when all tests pass, you haven't
> introduced a breaking change

Having tests that pass tells you very little on its own. You may not have written enough tests, or tested enough scenarios. Code coverage can help here, but it isn't a silver bullet - high coverage can coexist with an untested edge case, as the sketch below shows.
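As a hypothetical illustration (JUnit 5 again, invented names): both tests below pass and a coverage tool will report 100% line coverage, yet the important boundary case is never exercised:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class Discounter {
    // Bug: customers with exactly 10 orders are supposed to qualify,
    // but '>' silently excludes them.
    boolean qualifiesForDiscount(int orderCount) {
        return orderCount > 10;
    }
}

class DiscounterTest {
    @Test
    void bigCustomersQualify() {
        assertTrue(new Discounter().qualifiesForDiscount(50));
    }

    @Test
    void newCustomersDoNotQualify() {
        assertFalse(new Discounter().qualifiesForDiscount(1));
    }
    // Both tests pass and every line is covered, but nothing exercises
    // qualifiesForDiscount(10), so the boundary bug ships anyway.
}
```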

> And when some class's behaviour changes (willingly or unwillingly),
> how can you detect (preferably in an automated way) all the
> consequences

More testing - although the tools are getting better all the time. But more importantly, you should be defining class behaviour in an interface (see below).

> Shouldn't we focus more on integration testing?

Ever more integration tests are not the answer either: they're expensive to write, run, and maintain. Depending on your build setup, your build manager may exclude them from the default build anyway, making them reliant on a developer remembering to run them (never a good thing!).

I've seen developers spend hours picking through broken integration tests to find a bug that good unit tests would have pinpointed in five minutes. Failing that, try just running the software - that is all your end users will care about. There's no point having a million unit tests that pass if the whole house of cards falls down the moment a user actually runs the application.

If you want to make sure class A always consumes class X in the same way, have it depend on an interface rather than a concretion. Then a breaking change is far more likely to be picked up at compile time - see the sketch below.
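A minimal sketch of that idea, with invented names:

```java
// All names here are invented for illustration.
interface PaymentProcessor {          // the contract "class X" must honour
    void charge(int amountInPence);
}

class StripeProcessor implements PaymentProcessor {   // "class X"
    public void charge(int amountInPence) {
        // ... talk to the real payment gateway ...
    }
}

class CheckoutService {               // "class A"
    private final PaymentProcessor processor;  // depends on the interface

    CheckoutService(PaymentProcessor processor) {
        this.processor = processor;
    }

    void checkout(int amountInPence) {
        processor.charge(amountInPence);
    }
}
// If charge() changes its signature on the interface, CheckoutService
// stops compiling - the break surfaces immediately, not in a distant
// integration test or at runtime.
```

As a bonus, a fake implementation of the interface makes CheckoutService trivially unit-testable in isolation.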