When FIT doesn’t fit
Marcus Ahnve recently wrote about why he doesn’t like FIT.
Here are my thoughts:
When I first started doing customer-specified integration (“story”) testing on enterprise Java projects, JUnit was the tool of choice. It was just as simple to program and maintain integration tests as it was to write and maintain unit tests. Because the developers were responsible for translating the test plans into their executable form, it was an obvious and simple choice. It enabled us to run a debugger when we needed to, the tests ran fast, and we didn’t have to learn a new syntax or language. Admittedly, it was not the most natural form of input for certain test cases, but for the overwhelming majority of tests it was very comfortable. For those tests where we wanted to read tabular data, we would store and read data from comma-separated value (CSV) files.
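The CSV approach is simple enough to sketch. Here is a minimal, self-contained version (plain Java with the CSV inlined so it runs standalone; in practice the loop would live inside a JUnit test reading a real file, and the pricing example and column names are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class CsvDrivenTest {

    // Hypothetical system under test: 10% discount at 5 units or more.
    static double total(int qty, double unitPrice) {
        double t = qty * unitPrice;
        return qty >= 5 ? t * 0.9 : t;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a CSV file the customer maintains:
        // quantity, unitPrice, expectedTotal
        String csv = "quantity,unitPrice,expectedTotal\n"
                   + "2,10.00,20.00\n"
                   + "5,10.00,45.00\n"; // bulk discount row

        try (BufferedReader in = new BufferedReader(new StringReader(csv))) {
            in.readLine(); // skip the header row
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split(",");
                int qty = Integer.parseInt(cols[0]);
                double price = Double.parseDouble(cols[1]);
                double expected = Double.parseDouble(cols[2]);
                double actual = total(qty, price);
                if (actual != expected) {
                    throw new AssertionError(
                        "row [" + line + "] expected " + expected + " but got " + actual);
                }
            }
        }
    }
}
```

The appeal was that the tabular data stayed in a plain file the customer could edit, while the test driver remained ordinary Java we could step through in a debugger.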
But as developers, we like clear separation of responsibility within our code, and we often have the same expectation of the real world. Though we were collaborating with customers to write integration tests in JUnit, we wanted a clearer distinction between test specification and test execution.
I joined a project that wanted to have the customer (in an XP sense) be the author of the story tests. We adopted a tool (I can’t remember the name) which allowed us to use XML to define tests. The idea was that the customer could understand and write XML-based tests, hand them off to us, and we would integrate them into the testing framework. This was an interesting idea because we thought it would help separate responsibilities between developers and the customer. We also figured that XML was a safe choice because the files could easily be version controlled along with our code.
But there were some trade-offs. We couldn’t easily debug the tests in our IDE, and when we did want to debug, we had to dig through the testing framework code. What we often ended up doing was creating a JUnit “replica” of each integration test. These replicas were used to drive the development of the functionality specified in the test, and were meant to be discarded thereafter to eliminate duplication. Unfortunately, we often needed to resurrect them when we wanted to update an integration test. So even though the customer still had responsibility for the “gold” version of the test specifications, the developers were not saved from the responsibility of translating the XML-based tests into Java anyway.
A few months into the project, the customer slowly gave up on writing the tests, and the responsibility of translating their written or verbal specifications fell to the developers. After a few weeks of messing around with XML tests, we realized that it would be much simpler to get rid of the XML tool and just translate all of our tests into JUnit. After all, why should we worry about maintaining XML files when we’re really good at managing, reading, and writing Java code? After doing this, our tests ran faster and were much easier for us to maintain, debug, and update.
At the time, we blamed the technology. We thought, “Well, XML is not that friendly for non-technical people. If we could just run Excel files or Wiki tables, things would be much better.” This seemed like the logical assessment at the time because the customer still had to specify the tests, but they were doing it in whatever medium made sense for the given spec: Excel, paper and pencil, a Wiki, etc.
When I first read about FIT/FitNesse, it seemed like a great solution. The customer specifies the tests in tabular form on a Wiki page. On my previous project, the customer was already specifying several things on a Wiki anyway, so they could have specified everything in this fashion and it would have been executable. I read quite a bit about FIT/FitNesse and conceptually agreed with the ideas.
But I had some concerns as well. Version control was going to be an issue. Would I version control the entire Wiki structure? How easy would it be to restore or diff a given test? Does the Wiki have version control built in? Even if it does, how can I associate a version of a Wiki test with a version of my code? I was also concerned about the difficulty of defining tests. These are tests that my non-technical customer is supposed to specify. Understanding row fixtures, for example, seemed trivial to me, but I have been technical for so long that I tend to overestimate what non-technical people can pick up quickly. In addition, I still had the problem of having to create “replica” tests in JUnit just to make development and debugging easier.
From a technical perspective, I didn’t like the FIT API for developing fixtures either. It didn’t seem to follow common Java/OO idioms. For example, it uses arrays rather than collections, and it makes class attributes public, a choice its authors justified by claiming that fixtures aren’t really objects, but rather data structures.
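To make that concrete, here is roughly the shape of a FIT column fixture, sketched as a standalone class so it compiles without fit.jar (a real fixture would extend fit.ColumnFixture, and the division example here is just illustrative):

```java
// Sketch of the FIT column-fixture style. The framework binds table
// columns directly to public fields and calls a public method for each
// "calculated" column — fixtures are treated as data structures, not
// encapsulated objects.
public class DivisionFixture {
    // Input columns from the test table become public fields...
    public double numerator;
    public double denominator;

    // ...and a column named "quotient()" is checked against this result.
    public double quotient() {
        return numerator / denominator;
    }
}
```

Whatever the justification, this style grated on us: the same codebase that banned public fields everywhere else had to embrace them at the test boundary.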
After I used FIT on a couple of projects, I confirmed my belief that adopting an integration testing tool requires more justification than most people provide. The problems are a combination of the overhead the technology imposes and the real-world constraints that limit its benefit.
I see two main issues at play here. The first is one of responsibility. Regardless of the testing tool, developers often end up writing and maintaining the integration tests. In many projects this is inescapable because of limits on the customer’s involvement, availability, or desire. When that is the case, I find little value in using integration testing tools. Developers are significantly better off writing the integration tests with a unit testing tool they already know.
The second issue is one of technical capability. If the customer is able and willing to write the test specifications, I think a tool like FIT can work, but for me it is still a difficult thing to justify. If the customer is not technical, my experience is that you have to coach them significantly; at that point, I’ve found it easier to just pair with them and write the tests in Java. If they are technical, I would question whether it would be easier for them to write the tests in Java directly and avoid the overhead that FIT imposes in version control and duplicated test writing.
I’m not saying that an integration testing tool like FIT is always unjustified. What I hope to communicate is that we often overlook the option of just sitting down with the customer and pair-programming with them. They explain the requirements, we code them into a JUnit test. I’ve found that this helps to facilitate and explore the requirements, boundary conditions, and exceptional cases much better than setting the customer off on their own with a fancy UI and telling them to let us know when they’ve got the tests ready.