Test configuration failures in JUnit

 
Ranch Hand
Posts: 1970
I am a big fan of JUnit and my workplace is gradually adopting it (and similar XUnit frameworks for other languages).
As much as possible, our unit tests are written to be "self-sufficient"; that is, they should set up all resources that they require. But this is not always possible. Sometimes, a test really does require that the environment in which it runs be set up in a particular way (e.g. a certain file here, an environment variable there).
I have difficulty knowing what to do in my test cases when they determine that required resources are not available. As far as I know, JUnit only allows a test to pass or fail (throw or assert). There is no third state meaning "couldn't run the test".
A test could be coded to do nothing (pass) if its required resources aren't there. But that could lead people to think all tests are running nicely, when in fact they are not running at all.
Alternatively, a test could be coded to fail if its required resources aren't there. But that makes people think something is wrong with the code, when in fact something is wrong with their set-up.
Anyone have similar dilemmas?
 
Ranch Hand
Posts: 150
The more dangerous case is letting the test pass: if the required configuration isn't there, then a passing result hasn't actually proved anything. If you have the test fail, then someone presumably looks at it.
In your situation, I think I might set up a specific exception called BadTestEnvironment. That way, in your output you could immediately see that it's associated with the test environment and not necessarily associated with the code.
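A minimal sketch of that idea, assuming a hand-rolled exception and helper (the names BadTestEnvironment, EnvCheck, and requireFile are illustrative, not part of JUnit):

```java
import java.io.File;

// Dedicated exception type so environment problems stand out in test output
class BadTestEnvironment extends RuntimeException {
    BadTestEnvironment(String message) {
        super(message);
    }
}

public class EnvCheck {
    // Throws if a resource the test depends on is missing
    static void requireFile(String path) {
        if (!new File(path).exists()) {
            throw new BadTestEnvironment("missing required file: " + path);
        }
    }

    public static void main(String[] args) {
        try {
            requireFile("/no/such/test-fixture.dat");
        } catch (BadTestEnvironment e) {
            // In a JUnit run, this exception name would appear in the report,
            // immediately signalling a set-up problem rather than a code bug
            System.out.println("BadTestEnvironment: " + e.getMessage());
        }
    }
}
```

In a real test you would call requireFile() at the top of the test method (or in setUp()) and let the exception propagate, so the distinctive class name shows up in the failure report.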
 
author
Posts: 14112
Or you could simply use a descriptive message, explaining the reason for the failure.
But, as you already pointed out, it would be best if the unit tests were self contained.
Perhaps you could give some concrete examples where you are having difficulties *not* depending on the test environment, so that we can try to discuss possible solutions?
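The descriptive-message approach might look like this sketch; the file path is hypothetical, and a plain AssertionError stands in for JUnit's fail(...):

```java
import java.io.File;

public class DescriptiveFailure {
    public static void main(String[] args) {
        String required = "config/integration-test.properties"; // hypothetical path
        try {
            if (!new File(required).exists()) {
                // In JUnit this would be fail("..."); the message tells the
                // reader the problem is the environment, not the code
                throw new AssertionError(
                    "Set-up problem, not a code bug: missing " + required);
            }
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```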
 
Ranch Hand
Posts: 775
Unfortunately JUnit doesn't explicitly factor out the notion of verification (per the usual QA definition of the term), which is essentially what you are talking about. This is something I've grappled with in my own JUnit tests, and I've been doing some more thinking about it while working on a book.
There are a few possibilities.
  • In any given test, you have more than one assert. The first assert, or group of asserts, verifies the testing environment; later asserts are the "real" tests. This approach is simple when (a) different test methods have different environment conditions, and (b) you are using a development environment that quickly tosses you to the location of the failure (e.g. JBuilder). I use this approach a lot to catch cut-and-paste errors in test code that I re-use.
  • When all the tests in the class have the same environmental conditions, you use setUp() to establish *and* verify the environment. setUp() bombs if the environment can't be created. This works well when (a) you have a complex but shared set of environment conditions across the tests, and (b) the tests are non-destructive (i.e. they don't change the state of the environment).
  • Extend TestCase to modify JUnit functionality. Create verification methods (perhaps based on some naming convention) that are executed and can fail independently of the unit tests.
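The second option above, verifying the shared environment in setUp(), can be sketched in plain Java as follows (class name and directory check are illustrative; in JUnit 3 the check would live in an overridden TestCase.setUp()):

```java
import java.io.File;

public class SharedEnvSuite {
    private static File dataDir;

    // In JUnit 3 this would be: protected void setUp() throws Exception.
    // It both establishes and verifies the shared environment.
    static void setUp() {
        dataDir = new File(System.getProperty("java.io.tmpdir"));
        if (!dataDir.isDirectory()) {
            // Throwing here aborts every test in the class with a clear
            // environment message, before any "real" assertions run
            throw new IllegalStateException(
                "cannot run tests: data directory unavailable: " + dataDir);
        }
    }

    public static void main(String[] args) {
        setUp();
        System.out.println("environment verified: " + dataDir.isDirectory());
    }
}
```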

[ January 29, 2003: Message edited by: Reid M. Pinchback ]
 
Ilja Preuss
author
Posts: 14112
There is another possibility, which, in my humble opinion, would be preferable:
Write the tests (and structure the production code) in such a way that configuring the test environment is under full control of the automatic test setup and virtually can't go wrong.
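One way to read this: have the test create the resources it needs rather than assume they were provisioned. A sketch, using a temp file as the stand-in resource (all names here are illustrative):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class SelfContainedTest {
    public static void main(String[] args) throws IOException {
        // "setUp": create the resource instead of assuming it exists
        File config = File.createTempFile("test-config", ".properties");
        try (FileWriter w = new FileWriter(config)) {
            w.write("mode=test\n");
        }
        // The "test" can now rely on the file unconditionally --
        // there is no external set-up step that could have been skipped
        System.out.println("config exists: " + config.exists());
        config.delete(); // "tearDown": leave no trace behind
    }
}
```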
     
Reid M. Pinchback
Ranch Hand
Posts: 775
Agreed. For true unit tests the obvious thing to do is to use mock objects everyplace you would otherwise be facing complex system interdependencies. For example, you can easily create a mock context object with a lookup() that will return whatever you want instead of connecting to a real JNDI service.
The limitation to this is when you have something you really can't mock up, like an integration test against a live service. If something makes it impossible or impractical to have a special dev/test instance that you can configure to suit your tests, then you may be faced with simply integrating what you have, and doing some verification that the service is both available and currently configured in a way that matches your test expectations.
I ran into a case like that last fall. The tests were an attempt to isolate some performance problems, and the tests had to be run against the live system to be meaningful. Unfortunately the data content and configuration of the system changed weekly, so you had to verify that the current state still met your test expectations. If not, you had to update the test.
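The mock-lookup idea can be sketched with a hand-rolled narrow interface standing in for the JNDI Context (implementing the full javax.naming.Context would be overkill; NamingLookup and MockLookup are hypothetical names):

```java
import java.util.HashMap;
import java.util.Map;

// Narrow stand-in for the one JNDI operation the code under test uses
interface NamingLookup {
    Object lookup(String name);
}

// Mock that returns pre-registered objects instead of hitting a live service
class MockLookup implements NamingLookup {
    private final Map<String, Object> bindings = new HashMap<>();

    void bind(String name, Object value) {
        bindings.put(name, value);
    }

    public Object lookup(String name) {
        Object value = bindings.get(name);
        if (value == null) {
            throw new IllegalStateException("no test binding for: " + name);
        }
        return value;
    }
}

public class MockLookupDemo {
    public static void main(String[] args) {
        MockLookup ctx = new MockLookup();
        ctx.bind("jdbc/testDS", "fake-datasource");
        // Code under test would receive ctx and never touch a real JNDI server
        System.out.println(ctx.lookup("jdbc/testDS"));
    }
}
```

The production code then depends on the narrow interface, so the test controls exactly what lookup() returns.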
     