I’ve been experimenting with different test frameworks to validate my Ansible scripts. So far I have found very few that will do this type of testing; I suppose the idea is still so new that it seems foreign to many people. To me, once you hear the idea of automating your IT testing, it seems like the obviously right thing to do. serverspec comes the closest of the ones I am aware of. I also like rspec and cucumber.
cucumber is interesting because it transforms integration (or acceptance) testing code into a fully readable document. This reminds me of the literate programming concept. Literate programming reverses the normal paradigm: it makes comments the normal mode and code the exceptional mode. If only this idea caught on with open source programmers! How vastly more useful their tools would be.
This blog has a great example of IT integration testing with cucumber. He walks us through testing a change to a DNS server using cucumber, and you can see an example of near-perfect integration of documentation and code. He even linked the trouble ticket ID to the text, making it part of a fully auditable historical record of changes to the system. His well-written post also serves as an introduction to cucumber, so I highly recommend you read it.
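To give a flavor of what such a feature file looks like, here is a hypothetical sketch (the ticket ID, hostnames, and address are invented for illustration, and the Given/When/Then steps would still need Ruby step definitions behind them):

```gherkin
# Hypothetical cucumber feature; OPS-1234, the hostnames, and the
# address are all invented examples.
Feature: DNS change for ticket OPS-1234
  As requested in ticket OPS-1234, app.example.com must point
  at the new application server.

  Scenario: app.example.com resolves to the new server
    Given the DNS server at ns1.example.com
    When I look up the A record for app.example.com
    Then the answer should be 192.0.2.10
```

Note how the ticket ID, the narrative, and the executable test all live in one readable file.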
After reading that post, my question was how to balance the readability and auditability of the cucumber format with the quantity of tests required. rspec and serverspec allow you to specify and run many tests in a concise format. My serverspec tests already number about 50 and I’ve barely started. But if the blogger had simply updated the core IT test suite with this change, then the record of it as a change would be lost. That historical record is important, but so is maintainability.
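For contrast, here is a sketch of how compactly serverspec expresses the same kind of checks. This is only a fragment of a hypothetical spec file for a DNS host; it assumes serverspec is already set up with the usual spec_helper and a backend (e.g. SSH), and the service name is just an example.

```ruby
# Hypothetical serverspec fragment; requires a configured serverspec
# backend and spec_helper. Service name 'bind9' is an example.
require 'spec_helper'

# The DNS service should be installed as a service, enabled at boot,
# and currently running.
describe service('bind9') do
  it { should be_enabled }
  it { should be_running }
end

# DNS should be listening on its standard port.
describe port(53) do
  it { should be_listening }
end
```

Two or three lines per check is what makes a 50-test suite manageable, but nothing here reads as a record of *why* any of it changed.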
On the one hand you might have a large set of small cucumber feature files (the “deltas”), and on the other a single rspec/serverspec test file (the desired state). I suppose you could add the trouble ticket ID and a narrative to the git commit comment. My experience with developers, however, makes me doubt such a practice would be followed consistently, if at all.
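The git-commit-comment approach would look something like this (a self-contained sketch using a throwaway repository; the OPS-1234 ticket ID and message text are invented):

```shell
# Sketch of the commit-message convention; OPS-1234 is an invented ticket ID.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .

# The ticket ID and a short narrative go into the commit message itself.
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty \
  -m "OPS-1234: point app.example.com at the new application server" \
  -m "Requested in ticket OPS-1234; see the updated DNS tests."

# Later, the ticket ID makes the change findable in history:
git log --oneline --grep "OPS-1234"
```

This keeps the audit trail, but only if every committer remembers to do it, which is exactly my doubt.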
For a DevOps-style change management process, the workflow might go something like this:
- Change request entered into the tracking system (e.g. Jira)
- Changes are reviewed, approved, and assigned.
- In good TDD style, the end-state of the change is documented in cucumber as in the example above. That cucumber script could even be validated with the person who proposed the change, if required; cucumber tests can be understood by non-technical people. The cucumber test scripts become part of the change record.
- The change is made to the appropriate automated IT management script (e.g., in Ansible or chef).
- The change is run and tested in the right test environments. This means that the full IT test suite is run as well as the new individual test.
- Once it passes, the change is pushed to production and verified.
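The last three steps of that workflow could be wired into a CI pipeline. This is only a hypothetical GitLab-CI-style config fragment; the stage names, inventory and playbook paths are invented, and it assumes ansible-playbook, cucumber, and rspec are available on the runner.

```yaml
# Hypothetical CI sketch of the workflow above; all names and paths
# are invented for illustration.
stages:
  - test
  - production

test_environment:
  stage: test
  script:
    - ansible-playbook -i inventories/test site.yml   # apply the change
    - cucumber features/                              # the new change-specific test
    - rspec spec/                                     # the full serverspec suite

push_to_production:
  stage: production
  when: manual                                        # only after tests pass
  script:
    - ansible-playbook -i inventories/production site.yml
    - rspec spec/                                     # verify in production
```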
I think cucumber vs. serverspec for IT testing is another interesting topic that definitely warrants further experimentation.