Consider the following test output:

[A passing test spec illustrating my situation]

To summarize, Word documents are not supported, and PDFs are. So we immediately reject Word documents. But for PDFs, there are a lot more steps that need testing, so we do those.

The problem is, let's say I also want to support text files, which have the exact same workflow as PDFs. The code I am testing essentially looks like this:

function uploadDocument(type, document) {
  if (type !== "application/pdf" && type !== "text/plain") {
    throw new UnsupportedMediaTypeError();
  }
  // do all the steps involving temp documents, hashing, ownership, etc.
}

My question is: how can I structure my tests for this? I don't want to duplicate the entire tree underneath "when uploading a PDF" as "when uploading a text file".

I feel like I run into this problem a lot. As you can see I've already done some duplication (the entries under "and deleting the temporary document succeeds" and "and committing the temporary document succeeds" are the same).

Essentially it's an issue of varying multiple dimensions of the system and testing them in combination. Someone must have thought of how to structure such tests.

  • Why can't you parameterize the narrative and use data-driven testing for this? In one run you execute the specification for PDF, and in another run you execute the spec for a text file. By the way, which tool are you using for BDD? It is impressive. Commented May 8, 2012 at 21:28
  • So it sounds like there is a solution, and it goes under names like "parameterizing the narrative" and "data-driven testing." I'd love an answer explaining those! As for the tool, we're using Mocha. Commented May 8, 2012 at 21:50
  • Have you explored the "Shared Behaviours" feature under Mocha? I am not familiar with Mocha, so you might want to ask in their forum how to do parameterization. Commented May 8, 2012 at 21:54
  • Hmm, thanks for the pointer to the shared behaviours keyword. It looks like it is not natively supported; instead you essentially factor those tests out into a function and call that function: github.com/visionmedia/mocha/wiki/Shared-Behaviours. This is OK and probably the best I can hope for (indeed, I'm doing it already); it just seems wasteful to end up with another 40 tests run under almost-identical situations. Commented May 8, 2012 at 21:58
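
To make the shared-behaviours suggestion from the comments concrete, here is a minimal Mocha sketch. The helper name, the fixture content, and the ./upload module path are illustrative assumptions, not from the thread; uploadDocument and UnsupportedMediaTypeError are as in the question.

var assert = require("assert");
// Assumption: the module under test exports the function from the question.
var uploadDocument = require("./upload").uploadDocument;

// The shared behaviours: expectations common to every supported media
// type, defined once and invoked per type.
function behavesLikeASupportedUpload(type) {
  it("accepts the media type", function () {
    assert.doesNotThrow(function () {
      uploadDocument(type, "raw bytes");
    });
  });
  // ...further shared expectations: temp document, hashing, ownership
}

describe("uploadDocument", function () {
  describe("when uploading a PDF", function () {
    behavesLikeASupportedUpload("application/pdf");
  });
  describe("when uploading a text file", function () {
    behavesLikeASupportedUpload("text/plain");
  });
});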

2 Answers


It seems like you need to break this up into two tests (probably more, but that's a different subject):

Given a document of type <doc_type>
Then it should have an allowed status of <status>

Examples:
  | doc_type | status |
  | word     | fail   |
  | text     | accept |
  | pdf      | accept |

And then your test would simplify to

... when uploading a valid file type ... 
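
In Mocha (the tool mentioned in the comments), the examples table above translates into a plain loop over the rows. A minimal sketch, assuming uploadDocument and UnsupportedMediaTypeError are exported by a hypothetical ./upload module, and with an illustrative MIME type standing in for Word:

var assert = require("assert");
// Assumed module path; uploadDocument as in the question.
var upload = require("./upload");

// One row per example from the table above.
var examples = [
  { type: "application/msword", allowed: false },
  { type: "text/plain",         allowed: true  },
  { type: "application/pdf",    allowed: true  },
];

describe("uploadDocument media type check", function () {
  examples.forEach(function (example) {
    var label = (example.allowed ? "accepts " : "rejects ") + example.type;
    it(label, function () {
      var attempt = function () {
        upload.uploadDocument(example.type, "raw bytes");
      };
      if (example.allowed) {
        assert.doesNotThrow(attempt);
      } else {
        assert.throws(attempt, upload.UnsupportedMediaTypeError);
      }
    });
  });
});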

Happy Testing, Llewellyn




I understand your example in the following way:

  • You distinguish between supported and unsupported media types.
  • The behaviour for all unsupported media is identical.
  • The behaviour for all supported media is identical.
  • For supported media, the content is used, but only as raw bytes (hashing).

Thus, it seems there is little value in performing the tests twice for different types of supported media. Let me explain this for the different possible development processes you might be using:

Possibility a. If you are doing BDD, then the tests are chosen to "drive" your implementation. After you have implemented everything for PDF, however, the code is already in place. You only need one extra test case to "drive" you into adding the && type !== "text/plain" condition. For all other tests from the duplicate tree, you would not be adding any extra code; thus, those tests have no "driving" characteristic.
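
Concretely, that single driving test might be no more than this (a Mocha sketch; uploadDocument as in the question, the document fixture is illustrative):

it("accepts text/plain documents", function () {
  assert.doesNotThrow(function () {
    uploadDocument("text/plain", "raw bytes");
  });
});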

Possibility b. You are doing test case design and just use the BDD notation to formulate your tests. In this case, your goal is to define a test suite that will find the possible bugs. You have white-box knowledge of the code, which means you know that the handling is the same for PDF and text files. Thus, you only need separate test cases for PDF files and text files where the distinction could reveal additional bugs. There might be some such scenarios, for example if the file extension could prevent the document from being stored, but most likely the distinction is not relevant for every scenario.

Possibility c. Finally, there is a possible scenario similar to b, but where you don't have white-box knowledge. Given that your example test cases discuss internals, this does not really fit your situation, so I won't elaborate on it.

Having discussed this on the level of tests, there is also something to be mentioned on the level of design. Your code a) distinguishes between supported and unsupported media and b) handles all supported media alike, and it does all of this within the same function. That may even be the reason you initially considered duplicating the tests: since both arguments, type and document, are accessible within this one big function, someone might later use type in the handling part as well, making the handling less uniform. Consider splitting the responsibilities into separate functions and then testing them individually: one function that determines whether the type is supported (using only the type argument), and one function that does the handling for supported documents (using only the document argument), as sketched below.
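
A minimal sketch of that split (the names assertSupportedType and processDocument are illustrative, not from the question):

// Responsibility 1: decide whether the media type is supported.
// Uses only the type argument.
function assertSupportedType(type) {
  if (type !== "application/pdf" && type !== "text/plain") {
    throw new UnsupportedMediaTypeError();
  }
}

// Responsibility 2: the workflow shared by all supported media.
// Uses only the document argument.
function processDocument(document) {
  // temp documents, hashing, ownership, etc.
}

function uploadDocument(type, document) {
  assertSupportedType(type);
  return processDocument(document);
}

The type check can then be tested with a small table of media types, while the shared workflow needs to be tested only once.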

