
  • 5
    The problem with that study is that they didn't unit test the code before adopting TDD. TDD is not a magic tool that decreases the number of defects by 40-90% simply by being adopted. Commented Jul 29, 2013 at 7:45
  • 1
    @BЈовић I don't think they claim "magic" anywhere in that paper. They claim that some teams adopted TDD, some teams didn't, they were given "similar" work and some defect densities and development times were recorded. If they had forced the non-TDD teams to write unit tests anyway just so that everyone had unit tests, it wouldn't be an ecologically valid study. Commented Jul 29, 2013 at 16:23
  • 1
    An ecologically valid study? Sorta depends on what you're measuring. If you want to know whether writing your tests up front matters, then everyone needs to be writing unit tests, not just the TDD group. Commented Jul 29, 2013 at 16:42
  • 1
    @RobertHarvey That's a question of confounding variables, not ecological validity. Designing a good experiment involves trading those off. For example, if the control group had been writing unit tests post hoc, people would argue the experiment was unsound because the control group was working in a way uncommonly found in the wild. Commented Jul 29, 2013 at 17:01
  • 2
    Luckily I didn't say they were. Commented Jul 29, 2013 at 17:12