7 events
Jun 15, 2023 at 18:20 comment added candied_orange I know this is old but I gotta say, the point of checking in a failing test is to keep the build from building. It's something a peer reviewer can do to show you what is wrong. The person who wrote a failing test doesn't have to be the same as the one who satisfies it. But this code had better not be headed to production until the test either passes or is deleted.
Mar 15, 2011 at 18:00 comment added user20194 The question should have been clearer: the test will compile, but the assertion on the expected result will fail.
Mar 15, 2011 at 15:58 comment added Tieson T. @Chad: Right, I totally forgot about CI servers. That would definitely be a point to consider. It is also worth clarifying what we mean by "broken" tests; are they just plain "bad" tests, or is the test failing because the API changed in some way?
Mar 15, 2011 at 15:40 comment added CaffGeek I was just adding a point to consider: some continuous integration build servers run the tests, and if any fail, the build doesn't get deployed. Rightfully so; if the tests fail, the code is broken, and there is no point in deploying a product known to be broken.
Mar 15, 2011 at 15:33 comment added unholysampler @Chad: Building and testing are two separate pieces of one automated process. Building ensures that everything compiles; testing ensures that the result of the build is valid. My interpretation of the question was not, "should I check in code that doesn't compile?" Instead it was, "should I check in a test I know will fail?"
Mar 15, 2011 at 15:20 comment added CaffGeek But you should never check in a failing test, as your build server shouldn't build a project with a broken test.
Mar 15, 2011 at 15:16 history answered Tieson T. CC BY-SA 2.5