Timeline for Is there a pattern for unit/integration testing where tests that are higher level are intended to act as "gates" for other more specific tests?
Current License: CC BY-SA 4.0
17 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Dec 7, 2020 at 14:24 | answer | added | Flater | | timeline score: 3 |
| Dec 7, 2020 at 12:57 | comment | added | Flater | | Secondly, you open yourself up to a different delay. In the classic unit-then-integration ordering, you don't even start the slower, lengthier integration tests until the quick unit tests have passed. If you invert that order, any tiny bug means running both the slow integration tests and the subsequent unit tests for each failed integration test. That's not going to be faster. |
| Dec 7, 2020 at 12:54 | comment | added | Flater | | Your proposed approach implies the ability to know which tests hide behind which gate. What's the expected failure rate of the gating configuration itself? Because if you make one mistake and this fails to catch a bug that now slips through, you've just broken your release. Is cutting down testing time really worth potentially compromising your release quality? Unit tests aren't even that slow to begin with, as they are short, in-memory, and easily multithreadable. This seems like a matter of confused priorities. |
| Dec 6, 2020 at 17:19 | comment | added | davidbak | | P.S. Another problem sometimes found with large suites of unit tests is that they take a long time to build. This indicates that the unit tests are too big, including too many dependencies. It could just mean a problem with the build system (great! easy to fix!), it could signal a troubling lack of modularity in your system (hard to fix but probably worth it), or, most likely, it's just badly written unit tests that are not in fact unit tests (tedious to fix but worth doing). |
| Dec 6, 2020 at 17:15 | comment | added | davidbak | | The "theory" of unit tests is that they are supposed to be fast; you run them every minute if not every keystroke... There are very easy and common ways to make them slow, usually due to hacks used to make poorly written, failure-prone tests more "deterministic" (fail only 1 out of 10 times instead of 1 in 2). Those slow tests should be identified, taken out and shot, and replaced with properly written fast tests. Then you have no problem with unit tests. I have done this with a large codebase, with great results for the team. BTW, fast means < 0.3s, preferably < 0.1s, per test. |
| Dec 6, 2020 at 14:34 | answer | added | Ewan | | timeline score: 0 |
| Dec 5, 2020 at 16:07 | comment | added | Thorbjørn Ravn Andersen | | Unit test frameworks expect tests to be independent. This makes it harder to use them for dependent integration tests. |
| Dec 5, 2020 at 8:24 | comment | added | Steven Lu | | @gnat It doesn't... |
| Dec 5, 2020 at 7:06 | review | Close votes | | | completed Dec 10, 2020 at 3:07 |
| Dec 5, 2020 at 6:44 | comment | added | gnat | | Does this answer your question? How to write unit tests a method with a result that is highly based on another method |
| Dec 5, 2020 at 6:23 | history | edited | Steven Lu | CC BY-SA 4.0 | added 31 characters in body |
| Dec 5, 2020 at 6:12 | history | edited | Steven Lu | CC BY-SA 4.0 | added 508 characters in body |
| Dec 5, 2020 at 6:00 | history | edited | Steven Lu | CC BY-SA 4.0 | edited title |
| Dec 5, 2020 at 5:51 | history | edited | Steven Lu | CC BY-SA 4.0 | edited title |
| Dec 5, 2020 at 5:35 | history | edited | Steven Lu | CC BY-SA 4.0 | added 34 characters in body |
| Dec 5, 2020 at 5:28 | history | edited | Steven Lu | CC BY-SA 4.0 | added 34 characters in body |
| Dec 5, 2020 at 5:23 | history | asked | Steven Lu | CC BY-SA 4.0 |
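The unit-then-integration ordering Flater describes in the comments above can be sketched as a tiny gated runner: run the fast unit suite first, and start the slower integration suite only if every unit test passed. This is a minimal illustration, not any answerer's actual code; all function names are hypothetical, and a real project would get the same effect from a test framework or CI pipeline stages.

```python
def run_suite(tests):
    """Run each test callable; return True only if every test passes."""
    ok = True
    for test in tests:
        try:
            test()
        except AssertionError:
            ok = False  # record the failure but keep running the suite
    return ok

def run_gated(unit_tests, integration_tests):
    """Unit tests act as the gate: integration tests run only on success."""
    if not run_suite(unit_tests):
        return "failed at unit stage"  # cheap, fast feedback; skip slow tests
    if not run_suite(integration_tests):
        return "failed at integration stage"
    return "all passed"
```

Inverting this order, as the question proposes, means every failure pays the cost of the slow integration suite before any fast unit test runs, which is the delay Flater's second comment points out.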