
  • 2
    It could be the case that when I later change or refactor this method, I do something that causes negative input to be handled wrongly, which would have been caught if I had that test in place. Should I just not worry about adding such tests until I make changes that could affect that behavior? Commented Nov 11, 2012 at 8:10
  • 3
    @JoshuaHarris: Exactly. Like I say, you could make a change that causes it not to work when passed exactly 40.23. Are you going to test for that too? If you're doing tests like that then you're moving to unit testing, rather than TDD, and then your red-green-refactor cycle isn't quite as important. Commented Nov 11, 2012 at 8:40
  • 2
    @JoshuaHarris: exactly. If you see no possible error in the code that could cause a test to fail, the test is worthless. For example, take a function that adds 1 to an unsigned int -- you could write a test for every possible value an unsigned int might hold, but really, how many of them would be useful? Commented Nov 11, 2012 at 8:43
  • 2
    I agree with the answer and comments here regarding testing vast permutations of numbers, but I think it would be fine to add a test for negative numbers in this case. It seems like it would give you more confidence in the code you've written and your ability to refactor it in the future. Gaining that confidence is one of the reasons we write tests. The test passed on the first run? That's a warning sign, not a sin. Investigate why it passed to make sure something unexpected isn't happening. You wrote the code correctly the first time? Clever you. Pat yourself on the back and move forward. Commented Sep 20, 2014 at 15:23