I'm writing unit tests for a steering system for a video game. The system has several behaviours (avoid this area because of reason A, avoid that area because of reason B, and so on), each adding a bit of context to a map of the region. A separate function then parses the map and produces a desired movement.
I'm having trouble deciding how to write the unit tests for the behaviours. As TDD suggests, I'm interested only in how the behaviours affect the desired movement. For instance, avoid-because-of-reason-A should result in a movement away from the suggested bad position. I don't actually care how or why the behaviour adds context to the map, only that the desired movement is away from the position.
So my tests for each behaviour set up the behaviour, make it write to the map, then execute the map-parsing function to work out the desired movement. If that movement satisfies my specifications, I'm happy.
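To make this concrete, here is a minimal sketch of that test shape in Python. All of the names (`InfluenceMap`, `AvoidReasonA`, `desired_movement`) are invented stand-ins for the real system, and the map and parser are deliberately simplified: the point is only the structure of the test, which exercises the behaviour end to end through the parsing step.

```python
import math

# Hypothetical stand-in: a sparse grid of (x, y) -> danger weight.
class InfluenceMap:
    def __init__(self):
        self.weights = {}

    def add_danger(self, pos, weight):
        self.weights[pos] = self.weights.get(pos, 0.0) + weight

# Hypothetical behaviour under test: marks a bad position on the map.
class AvoidReasonA:
    def __init__(self, bad_pos):
        self.bad_pos = bad_pos

    def write_to(self, imap):
        imap.add_danger(self.bad_pos, 1.0)

# Hypothetical map-parsing step: net movement directly away from danger.
def desired_movement(imap, agent_pos):
    dx = dy = 0.0
    for (x, y), w in imap.weights.items():
        dx += (agent_pos[0] - x) * w
        dy += (agent_pos[1] - y) * w
    length = math.hypot(dx, dy) or 1.0
    return (dx / length, dy / length)

# The test: set up the behaviour, let it write to the map, run the
# parser, then assert only on the resulting movement direction.
def test_avoid_reason_a_moves_away_from_bad_position():
    imap = InfluenceMap()
    AvoidReasonA(bad_pos=(5, 0)).write_to(imap)
    move = desired_movement(imap, agent_pos=(0, 0))
    assert move[0] < 0  # agent at origin moves away from (5, 0)

test_avoid_reason_a_moves_away_from_bad_position()
```

Note that the assertion never mentions the map's contents, only the direction of the final movement.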
However, my tests now depend on both the behaviours and the map-parsing function working correctly. If the parsing function fails, I get hundreds of failed tests rather than a couple. Many test-writing guides suggest this is a bad idea.
However, if I test directly against the output of the behaviours by mocking out the map, then surely I'm coupling too tightly to the implementation? If a slightly different behaviour produces the same desired movement from the map, the tests should still pass.
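Here is a sketch of that alternative, again with invented names (`InfluenceMap`, `AvoidReasonA`), asserting on what the behaviour writes rather than on the movement. It illustrates the coupling worry: the assertion pins an implementation detail.

```python
# Hypothetical stand-ins, as before.
class InfluenceMap:
    def __init__(self):
        self.weights = {}

    def add_danger(self, pos, weight):
        self.weights[pos] = self.weights.get(pos, 0.0) + weight

class AvoidReasonA:
    def __init__(self, bad_pos):
        self.bad_pos = bad_pos

    def write_to(self, imap):
        imap.add_danger(self.bad_pos, 1.0)

def test_avoid_reason_a_marks_bad_position():
    imap = InfluenceMap()
    AvoidReasonA(bad_pos=(5, 0)).write_to(imap)
    # Brittle: this pins the exact cell and weight. A behaviour that
    # instead spread danger over the cells *around* (5, 0) could steer
    # the agent just as well, yet this assertion would fail.
    assert imap.weights == {(5, 0): 1.0}

test_avoid_reason_a_marks_bad_position()
```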
So now I'm suffering cognitive dissonance. What's the best way to structure these tests?