  • 2
    This is sensible, especially given that everything depends on many other components. A good unit test should test the minimum possible; anything within that minimum should already be covered by a preceding unit test. If you've completely tested Parser, you can assume that you can safely use Parser to test ParseStatement. Commented Dec 4, 2014 at 16:25
  • 6
    The main purity concern (I think) is to avoid writing circular dependencies in your unit tests. If either the parser or the parser tests use the expander, and this expander test relies on the parser working, then you have a difficult-to-manage risk that all you're testing is that the parser and the expander are consistent, whereas what you wanted to do was test that the expander actually does what it's supposed to. But as long as there's no dependency back the other way, using parser in this unit test isn't really any different from using a standard library in a unit test. Commented Dec 4, 2014 at 18:15
  • @SteveJessop Good point. It's important to use independent components. Commented Dec 4, 2014 at 18:33
  • 3
    Something I've done in cases where the parser itself is an expensive operation (e.g. reading data out of Excel files via COM interop) is to write test-generation methods that run the parser and output code to the console that recreates the data structure the parser returns. I then copy the output from the generator into more conventional unit tests. This reduces the cross-dependency: the parser only needs to be working correctly when the tests are created, not every time they're run. (Not wasting a few seconds per test to create/destroy Excel processes was a nice bonus.) Commented Dec 4, 2014 at 22:18
  • +1 for @DanNeely's approach. We use something similar to store several serialized versions of our data model as test data, so that we can be sure new code can still work with older data. Commented Dec 5, 2014 at 3:10
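The generate-then-freeze approach in the last two comments can be sketched roughly as below. This is a minimal illustration, not anyone's actual code: `parse` stands in for the real, expensive parser, and the generated literal shows how its output gets frozen into an ordinary unit test with no parser dependency at test time.

```python
def parse(source: str) -> dict:
    """Hypothetical stand-in for an expensive parser
    (e.g. one reading data via COM interop)."""
    key, _, value = source.partition("=")
    return {"key": key.strip(), "value": value.strip()}

def emit_fixture(source: str) -> str:
    """Run the real parser once and print code that rebuilds its output.

    The printed literal is pasted into a conventional unit test, so the
    parser only has to be correct when the fixture is generated, not on
    every test run."""
    return f"EXPECTED = {parse(source)!r}"

# Generation step (run manually; output copied into the test file):
print(emit_fixture("answer = 42"))

# Frozen result, as it would then appear in the test file:
EXPECTED = {'key': 'answer', 'value': '42'}

def test_downstream_component():
    # The code under test consumes EXPECTED directly,
    # with no dependency on the parser at test time.
    assert EXPECTED["value"] == "42"

test_downstream_component()
```

The same idea underlies the serialized-data-model variant in the last comment: older frozen fixtures double as a compatibility check that new code still accepts old data.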