Timeline for "Is it reasonable to insist on reproducing every defect before diagnosing and fixing it?"
Current License: CC BY-SA 3.0
8 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Jan 30, 2015 at 23:46 | comment added | Jaydel Gluckie | | We are saying the same thing? From my post: "There are times when a bug occurs and I fix it and I don't bother to test it. I know 100% for sure that it works." |
| Jan 29, 2015 at 22:30 | comment added | supercat | | ...testing and production environments are apt to have enough timing differences that judging whether a particular bad timing can actually occur is extremely difficult and not terribly informative. What's important is to examine the places that could be timing-sensitive and ensure that they aren't, since tests for timing sensitivity are prone to a lot of false negatives. |
| Jan 29, 2015 at 22:25 | comment added | supercat | | With suspected threading problems, even if one manages to jinx things so as to force them to happen at precisely the "wrong" time, is there any way to really know whether the problem you reproduced is the same one the customer observed? If code has a defect such that things happening with a certain timing would cause a failure, and it is at least theoretically possible for such timing to occur, I would think the code should be fixed whether or not one can jinx a test environment to make the requisite timings occur. In many such situations... |
| Jan 29, 2015 at 22:12 | comment added | Jaydel Gluckie | | Well, that's the whole "it depends on the situation" argument. If it were a mission-critical, life-or-death system, or the customer expected that kind of testing, then yes: make a best effort at reproducing the issue and test. I have had to download code to a customer's machine so I could debug, because we could not reproduce an issue on our test servers. It was some sort of Windows security issue. We created a fix and everyone was happy. It's difficult if setting up the test environment is harder than fixing the bug. Then you can ask the customer; most of the time they are OK with testing it themselves. |
| Jan 28, 2015 at 17:58 | comment added | supercat | | In such a case, would it be necessary to reproduce the failure, or would the fact that the code in question would clearly be capable of causing failures like the indicated one be sufficient, if one inspects the code for any other places where similar failure modes could occur? |
| Jan 28, 2015 at 17:55 | comment added | supercat | | Suppose, e.g., that one receives a report that a program occasionally formats some decimal numbers incorrectly when installed on a French version of Windows; a search for culture-setting code reveals a method that saves the current thread culture and sets it to InvariantCulture within a CompareExchange loop, but resets it afterward [such that if the CompareExchange fails the first time, the "saved" culture variable will get overwritten]. Reproducing the circumstances of failure would be hard, but the code is clearly wrong and could cause the indicated problem. [A sketch of this defect appears below the table.] |
| Oct 9, 2013 at 17:53 | review | | | First posts (completed Oct 9, 2013 at 18:23) |
| Oct 9, 2013 at 17:33 | answered | Jaydel Gluckie | CC BY-SA 3.0 | |
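
The defect supercat describes in the Jan 28, 2015 comments can be made concrete. Below is a minimal C# sketch of that failure mode; the class, field, and method names are invented for illustration and are not taken from any actual report:

```csharp
using System;
using System.Globalization;
using System.Threading;

static class CultureBugDemo
{
    static int _busy; // 0 = free, 1 = held

    static string FormatInvariant(double value)
    {
        CultureInfo saved;
        do
        {
            // BUG: "saved" is recaptured on every iteration. If the
            // CompareExchange fails once, the next pass captures
            // InvariantCulture as the culture to restore, and the
            // thread's real culture (e.g. fr-FR) is silently lost.
            saved = Thread.CurrentThread.CurrentCulture;
            Thread.CurrentThread.CurrentCulture = CultureInfo.InvariantCulture;
        } while (Interlocked.CompareExchange(ref _busy, 1, 0) != 0);

        try
        {
            return value.ToString("N2"); // formats with the invariant culture
        }
        finally
        {
            // Restores whatever was captured last; after a retry,
            // that is InvariantCulture, not the user's culture.
            Thread.CurrentThread.CurrentCulture = saved;
            Interlocked.Exchange(ref _busy, 0);
        }
    }

    static void Main()
    {
        Thread.CurrentThread.CurrentCulture = new CultureInfo("fr-FR");
        Console.WriteLine(FormatInvariant(1234.5)); // "1,234.50"
        // Still "fr-FR" only if no retry occurred; under contention it
        // becomes "" (the invariant culture's name) and stays that way.
        Console.WriteLine(Thread.CurrentThread.CurrentCulture.Name);
    }
}
```

Because nothing restores the culture before the loop retries, a single failed CompareExchange is enough to leave the thread in InvariantCulture permanently. That is exactly supercat's point: the code is demonstrably wrong on inspection, yet reproducing the customer's symptom on demand would require contriving contention at just the wrong moment.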