30 events
Jan 14 at 8:48 answer added gnasher729 timeline score: 0
Jan 10 at 12:15 answer added MakePeaceGreatAgain timeline score: 1
Jan 16, 2022 at 15:25 comment added donquixote @GregBurghardt Or alternatively, it could be part of an answer to distinguish when to use which strategy. There might be a line somewhere, but I don't think it always has to be between integration vs functional vs unit tests. I think it applies to any case where the thing you test is too complex or too important to fix immediately. For a lot of unit tests, this is not the case, but for some it might be. Btw, I might post an answer myself, once I have developed a consistent opinion about it.
Jan 16, 2022 at 13:59 comment added Greg Burghardt Ok. I think you are getting a variety of answers because most of us are thinking unit tests. Integration tests or functional tests change the question. You might get a better answer by posting another question scoped specifically to those kinds of tests. Plus a little more info about the complexity of the tests.
Jan 16, 2022 at 3:24 comment added donquixote @GregBurghardt When asking the question I was mostly thinking of integration tests and functional tests, and some unit tests for non-trivial components.
Jan 15, 2022 at 20:17 review Close votes (completed Jan 20, 2022 at 3:07)
Jan 15, 2022 at 20:01 comment added Greg Burghardt Are you asking about unit tests, integration tests or functional tests? The type and complexity of tests can make a difference.
Jan 15, 2022 at 6:32 comment added Hartmut Braun @donquixote then you need two tests: one that describes the correct behaviour and is marked as xfail, and another test that describes the current behaviour and is run normally. The second test should then be marked as "known problem" and requires a reference to the correct-behaviour test.
Jan 15, 2022 at 0:52 comment added donquixote @DavidCary In this reply I see "test driven development - which says nothing about committing code." But I think the devil is in the details: We don't want these failing tests to block the CI for other development. So if we can somehow mark them as "known to fail", in a way that is understood by CI, then we can merge them.
Jan 15, 2022 at 0:45 history edited donquixote CC BY-SA 4.0
added 124 characters in body
Jan 15, 2022 at 0:44 comment added donquixote @HartmutBraun Very nice episode! xfail seems a good idea, but then the test won't describe in detail the current behavior. We want to detect if the buggy behavior changes over time, even if it is still wrong, but in a different flavor of wrong.
Jan 14, 2022 at 23:52 comment added David Cary @donquixote: If I'm reading that right, overall consensus seems to be, when "the test doesn't even compile", don't merge. I don't see how that's relevant to test cases that compile and run (and print out "failed") and the rest of the application is just as shippable as it was before. At least one commenter there recommends "For a large bugfix, commit after each improvement (e.g. refactoring), even if the bug is not fixed yet." and I claim adding a test that fails is an "improvement" that should be treated the same as a refactoring.
Jan 14, 2022 at 23:44 history edited donquixote CC BY-SA 4.0
added 240 characters in body
Jan 14, 2022 at 23:32 history edited donquixote CC BY-SA 4.0
More structure for the "EDIT" section explaining reasons why.
Jan 14, 2022 at 22:03 history edited donquixote CC BY-SA 4.0
added 88 characters in body
Jan 14, 2022 at 20:10 comment added donquixote @DavidCary Relevant to "should I merge a failing test?": softwareengineering.stackexchange.com/questions/201743/… - Overall consensus seems no.
Jan 14, 2022 at 19:34 comment added David Cary @mmathis: Many people recommend "commit early, commit often", so I definitely commit after I write a (failing) test and before I fix the bug. That test doesn't make the application any "less shippable" than it was before (it has zero effect on the end-user experience), so "merge early, merge often" seems to imply merging right away.
Jan 14, 2022 at 18:11 comment added mmathis @DavidCary But you don't write the (failing) test and merge it to your main branch without actually fixing the bug, do you? Sure you write the test first, but you fix the bug at the same time...
Jan 14, 2022 at 17:57 comment added David Cary @mmathis: test-first programming (TDD) requires writing tests that are expected to fail before fixing bugs. TDD is part of agile software development.
Jan 14, 2022 at 12:22 comment added Hartmut Braun There is a podcast by Brian Okken where he and a guest discuss a feature called xfail (tests which are known to fail) share.fireside.fm/episode/DOAjrBV2+iHYGDqvh . The podcast is about pytest but as far as I remember they also discuss quite generally the question if failing tests may or shouldn’t be merged into the main line.
Jan 14, 2022 at 8:24 comment added Filip Milovanović ... if you get the same string when you shuffle code around (so, you're not even thinking what the behavior really means). Once you get to a point where you have established concepts (workable methods/classes) in your code that you can reason about on some level, you can write tests that describe (or, I guess, prescribe) the correct behavior for those, one at a time (probably not a good idea to do it all at once), and bring the system into shape bit by bit. Eventually throw away the ad hoc "safety net" test. 2/2
Jan 14, 2022 at 8:24 comment added Filip Milovanović IMO, if one of the roles associated with the tests is to describe behavior, then they should describe the correct behavior; however if you're refactoring, you shouldn't change the overall behavior of the system, just the structure and the way different components (old or newly created) interact. So you can write a temporary "safety net" test that allows you to do the restructuring first, to get you to a point where you can begin to understand what the hell is going on in the code - this sort of test could do something "stupid" like concat all the output in a string, and then check 1/2
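The temporary "safety net" test described in the two comments above is essentially a characterization (golden master) test: concatenate the output into a string and pin it verbatim before refactoring. A minimal sketch, where `legacy_report` and its exact output are hypothetical stand-ins for the messy legacy code:

```python
def legacy_report(items):
    # Hypothetical legacy function whose behaviour we don't fully
    # understand yet; we pin its output as-is before restructuring.
    out = ""
    for i, item in enumerate(items):
        out += f"{i}:{item.upper()};"
    return out

def test_safety_net():
    # "Stupid" characterization test: the expected string is not
    # asserted to be *correct*, it is simply what the code produces
    # today. Any refactoring that changes it gets flagged immediately.
    assert legacy_report(["a", "b"]) == "0:A;1:B;"
```

Once the refactoring has carved out components you can reason about, you replace this ad hoc net with tests that describe the intended behaviour, and throw the characterization test away.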
Jan 14, 2022 at 8:14 answer added Kain0_0 timeline score: 4
Jan 14, 2022 at 3:50 answer added Greg Burghardt timeline score: 4
Jan 14, 2022 at 3:34 history edited donquixote CC BY-SA 4.0
Add "EDIT" section so I don't have to write so many comments
Jan 14, 2022 at 3:23 comment added donquixote Also, perhaps some users or other packages depend on aspects of the old "faulty" behavior. So the test for the old behavior can reveal the full scope of change from the bug fix.
Jan 14, 2022 at 3:17 answer added mmathis timeline score: 7
Jan 14, 2022 at 3:16 comment added donquixote Because then I can also catch side effects from other changes I might make before that bug is fixed. Perhaps fixing it is more complicated, and the tests cover a wider range of functionality.
Jan 14, 2022 at 3:11 comment added mmathis Why do you want to add the tests to your main branch before you fix the bug?
Jan 14, 2022 at 2:52 history asked donquixote CC BY-SA 4.0