Test configuration failures in JUnit
posted 22 years ago
I am a big fan of JUnit and my workplace is gradually adopting it (and similar XUnit frameworks for other languages).
As much as possible, our unit tests are written to be "self-sufficient"; that is, they should set up all resources that they require. But this is not always possible. Sometimes, a test really does require that the environment in which it runs be set up in a particular way (e.g. a certain file here, an environment variable there).
I have difficulty knowing what to do in my test cases when they determine that required resources are not available. As far as I know, JUnit only allows a test to pass or fail (throw or assert). There is no third state meaning "couldn't run the test".
A test could be coded to do nothing (pass), if its required resources aren't there. But that could lead to people thinking all tests are running nicely, when in fact they are not running at all.
Alternatively, a test could be coded to fail, if its required resources aren't there. But that makes people think something is wrong with the code, whereas in fact something is wrong with their set-up.
Anyone have similar dilemmas?
Betty Rubble? Well, I would go with Betty... but I'd be thinking of Wilma.
posted 22 years ago
The more dangerous case is letting the test pass: if the required configuration isn't there, the test hasn't actually proved anything. If you have the test fail, then at least someone will presumably look at it.
In your situation, I think I might set up a specific exception called BadTestEnvironment. That way, you could immediately see from the output that the failure is associated with the test environment, and not necessarily with the code.
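A minimal sketch of that suggestion, in plain Java (JUnit itself has no such exception; the class name BadTestEnvironmentException and the checked path are illustrative):

```java
import java.io.File;

// Hypothetical exception name, following the suggestion above: its type
// makes it obvious in the output that the test environment, not the code
// under test, is the problem.
class BadTestEnvironmentException extends RuntimeException {
    BadTestEnvironmentException(String message) {
        super(message);
    }
}

public class EnvironmentGuard {
    // Throws a clearly named exception when a required file is absent.
    public static void requireFile(String path) {
        if (!new File(path).exists()) {
            throw new BadTestEnvironmentException(
                "Required file not found: " + path + " (check your set-up)");
        }
    }

    public static void main(String[] args) {
        // A test would call this at the top of setUp() or the test method.
        requireFile(System.getProperty("java.io.tmpdir"));
        System.out.println("environment ok");
    }
}
```

A test mis-run on an unconfigured machine then produces a BadTestEnvironmentException trace instead of an ordinary assertion failure, which answers the "is it my code or my set-up?" question at a glance.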
posted 22 years ago
Or you could simply use a descriptive failure message that explains the reason for the failure.
But, as you already pointed out, it would be best if the unit tests were self-contained.
Perhaps you could give some concrete examples where you are having difficulty *not* depending on the test environment, so that we can try to discuss possible solutions?
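For instance, a descriptive message might look like this; sketched as plain Java so it stands alone (the fail() helper stands in for JUnit's Assert.fail(String), and the file name is illustrative):

```java
import java.io.File;

public class DescriptiveFailure {
    // Stand-in for JUnit's Assert.fail(String), so this sketch runs
    // without the JUnit jar on the classpath.
    static void fail(String message) {
        throw new AssertionError(message);
    }

    public static void main(String[] args) {
        File config = new File("test-config.properties");  // illustrative name
        try {
            if (!config.exists()) {
                // The message points the reader at the set-up, not the code.
                fail("Test set-up problem, not a code bug: " + config
                        + " is missing; see the project set-up notes");
            }
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```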
The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus
posted 22 years ago
Unfortunately JUnit doesn't explicitly factor out the notion of verification (per the usual QA definition of the term), which is essentially what you are talking about. This is something I've grappled with in my own JUnit tests, and I've been doing some more thinking about it while working on a book.
There are a few possibilities.
1. Multiple asserts per test. In any given test, use more than one assert: the first assert, or group of asserts, verifies the testing environment, and the later asserts are the "real" tests. This approach is simple when:
(a) different test methods have different environment conditions, and
(b) you are using a development environment that quickly tosses you to the location of the failure (e.g. JBuilder). I use this approach a lot to catch cut-and-paste errors in test code that I re-use.
2. Verify in setUp(). When all the tests in the class have the same environmental conditions, use setUp() to establish *and* verify the environment; setUp() bombs if the environment can't be created. This works well when:
(a) you have a complex but shared set of environment conditions across the tests, and
(b) the tests are non-destructive (i.e. they don't change the state of the environment).
3. Extend TestCase to modify JUnit functionality. Create verification methods (perhaps based on some naming convention) that are executed, and can fail, independently of the unit tests.
[ January 29, 2003: Message edited by: Reid M. Pinchback ]
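The second option might look roughly like this; sketched as plain Java so it stands alone (a real version would extend junit.framework.TestCase, and the system property name is illustrative):

```java
public class SharedEnvironmentTest {
    private String serviceUrl;

    // In JUnit 3 this would be protected void setUp() in a TestCase
    // subclass; an exception thrown here is reported as an error before
    // any test method in the class runs.
    void setUp() {
        serviceUrl = System.getProperty("test.service.url");  // illustrative
        if (serviceUrl == null || serviceUrl.length() == 0) {
            throw new IllegalStateException(
                "Environment not configured: pass -Dtest.service.url=...");
        }
    }

    public static void main(String[] args) {
        SharedEnvironmentTest t = new SharedEnvironmentTest();
        try {
            t.setUp();
            System.out.println("environment verified: " + t.serviceUrl);
        } catch (IllegalStateException e) {
            System.out.println("setUp bombed: " + e.getMessage());
        }
    }
}
```

Because setUp() runs before every test method, a broken environment fails the whole class loudly instead of producing a string of confusing individual assertion failures.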
Reid - SCJP2 (April 2002)
Ilja Preuss
author
Posts: 14112
posted 22 years ago
There is another possibility, which, in my humble opinion, would be preferable:
Write the tests (and structure the production code) in such a way that configuring the test environment is under full control of the automatic test setup and virtually can't go wrong.
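A sketch of that idea: instead of assuming a file already exists on the machine, the test creates it in setUp() and removes it in tearDown(), so the environment can't be mis-configured. Plain Java, using JUnit 3-style method names (the file contents are illustrative):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class SelfSufficientFixture {
    private File configFile;

    // setUp() creates the exact file the code under test needs,
    // rather than hoping someone placed it there by hand.
    void setUp() {
        try {
            configFile = File.createTempFile("test-config", ".properties");
            FileWriter out = new FileWriter(configFile);
            out.write("greeting=hello\n");
            out.close();
        } catch (IOException e) {
            throw new RuntimeException("could not create test fixture", e);
        }
    }

    // tearDown() removes it again, leaving the machine as it was found.
    void tearDown() {
        if (configFile != null) {
            configFile.delete();
        }
    }

    File getConfigFile() {
        return configFile;
    }

    public static void main(String[] args) {
        SelfSufficientFixture fixture = new SelfSufficientFixture();
        fixture.setUp();
        System.out.println("created: " + fixture.getConfigFile().exists());
        fixture.tearDown();
        System.out.println("removed: " + fixture.getConfigFile().exists());
    }
}
```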
Reid M. Pinchback
Ranch Hand
Posts: 775
posted 22 years ago
Agreed. For true unit tests the obvious thing to do is to use mock objects every place you would otherwise face complex system interdependencies. For example, you can easily create a mock context object whose lookup() returns whatever you want, instead of connecting to a real JNDI service.
The limitation to this is when you have something you really can't mock up, like an integration test against a live service. If something makes it impossible or impractical to have a special dev/test instance that you can configure to suit your tests, then you may be faced with simply integrating what you have, and doing some verification that the service is both available and currently configured in a way that matches your test expectations.
I ran into a case like that last fall. The tests were an attempt to isolate some performance problems, and the tests had to be run against the live system to be meaningful. Unfortunately the data content and configuration of the system changed weekly, so you had to verify that the current state still met your test expectations. If not, you had to update the test.
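The mock-context idea mentioned above can be sketched with a hand-rolled interface rather than the real javax.naming API (all the names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// A narrow seam the production code depends on, instead of a concrete
// JNDI InitialContext.
interface NamingContext {
    Object lookup(String name);
}

// The mock returns whatever the test registered; no JNDI server needed.
class MockNamingContext implements NamingContext {
    private final Map<String, Object> bindings = new HashMap<String, Object>();

    void bind(String name, Object value) {
        bindings.put(name, value);
    }

    public Object lookup(String name) {
        if (!bindings.containsKey(name)) {
            throw new IllegalArgumentException("not bound: " + name);
        }
        return bindings.get(name);
    }
}

public class MockContextDemo {
    // Code under test: fetches a collaborator by name from the context.
    static String greet(NamingContext ctx) {
        return "hello, " + ctx.lookup("java:comp/env/userName");
    }

    public static void main(String[] args) {
        MockNamingContext ctx = new MockNamingContext();
        ctx.bind("java:comp/env/userName", "ranch hand");
        System.out.println(greet(ctx));  // prints "hello, ranch hand"
    }
}
```

The test is now completely self-sufficient: no server, no environment variables, and the lookup can be made to return bad data on purpose to exercise error paths.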