Sunday, 29 November 2009

Trying to Achieve Flow in Software Development

I've been trying to apply the Lean principle of Flow to my software projects for the last few months and it's teaching me a few things about our process. I've also been attempting to draw a value stream map of the whole process to help me do this, and I'm finding it all very useful.

So, what does a basic process look and feel like now that I've mapped it out? Well, here's my rough take:
  1. We take a feature from the to-do pile (or a few bugs from the bug list) and write out a complete set of requirements and a specification.
  2. The developers write code. For this phase to be complete, we'd like to have all code reviewed and a whole bunch of new unit tests and integration tests passing.
  3. Once enough development work is complete, a build is made available to QA, who then write tests. For this step to be complete, all tests should be approved by the lead test engineer (and hopefully the product manager).
  4. The testers carry out the tests and hopefully find no bugs or issues. If they do, we go back to step 2...
If you take a look at your software process in this manner, you should quickly notice that you have queues of work between the various processes and sub-processes. If they're small and manageable, then you're already on the way to achieving flow. If they're huge or unknown in size, you potentially have problems hidden in there: stop piling them up and clear out the backlog!
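
To make that concrete, here's a tiny sketch in Python of the idea of putting a hard cap on a hand-off queue, so a growing backlog shouts at you instead of hiding. The names and the limit are mine, purely for illustration:

    from collections import deque

    WIP_LIMIT = 5  # hypothetical cap; tune to what the next stage can actually consume

    build_queue = deque()  # completed builds waiting for QA

    def hand_off(queue, item):
        """Refuse to pile work up beyond the limit."""
        if len(queue) >= WIP_LIMIT:
            raise RuntimeError("queue full (%d items): stop starting, "
                               "start finishing!" % len(queue))
        queue.append(item)

    # Development finishing features faster than QA consumes them:
    try:
        for n in range(7):
            hand_off(build_queue, "feature-%d" % n)
    except RuntimeError as err:
        print(err)  # blows up at the sixth feature, making the backlog visible

The point isn't the code, of course: it's that a limit turns an invisible pile of work into an immediate, visible problem.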

Now, these queues of work hide different sorts of problem, and levelling out the various work rates to achieve flow requires a different approach for each.

First of all, the feature queue is in some sense always going to exist, at least abstractly: once a project has a guiding vision and a high-level specification, a large number of the features already exist, if only in a project manager's head. However, the queue needs to be consumed and turned into completed specification items, with accompanying acceptance tests and so on, and this is often a lengthy process. More importantly, this process is extremely context sensitive, in that individual feature specifications are often related and can be significantly influenced by what has gone before. Herein lies the classic waterfall problem: do too much up front and there tends to be a lot of re-work. Keep the completed-specification queue small, or you're potentially wasting your time!

Completed specifications are traditionally consumed by development, as code is written to satisfy the requirements. Once this is complete, QA then ensure the requirements have been satisfied by writing and carrying out tests. The queue between development and QA is normally BIG: it can often accumulate for an entire project, leaving QA to clear it out in one monumental effort at the end. This isn't very fair, as it then becomes QA's responsibility to ship on time: everyone else has done their job to the best of their ability and it simply remains for QA to ensure quality before we can start making some money! Think about this for a moment alongside the Lean mantra of "building in quality". Doesn't fit, does it?

Not only is the queue between development and QA usually large and symptomatic of a non-Lean process, it can also hide some big and horrific problems. Ever got halfway through QA and discovered that something fundamental is broken that was finished an age ago? How much code is subsequently "broken" as a result of these horrors? It's obvious, then, that keeping this queue small is important if you want to avoid re-work.
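
To put a rough number on why this hurts, here's a back-of-envelope sketch using Little's Law (standard queueing arithmetic relating queue size to waiting time; the figures below are entirely made up):

    # Little's Law: lead time = WIP / throughput. Both numbers are hypothetical.
    wip = 120          # completed-but-untested features sitting in the queue
    throughput = 4.0   # features QA can verify per week

    lead_time = wip / throughput
    print("A feature finished today waits ~%.0f weeks for test feedback" % lead_time)
    # ~30 weeks: plenty of time for other code to be built on top of a hidden horror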

There is something that can be done to alleviate both of these problems, and that is to create parallel processes of coding and test writing that both start from the finished requirements and specifications. If everyone sits down together when specifications are being written, QA can start to design tests to verify the requirements have been satisfied, plus further tests to cover more general quality. During this conversation, development can even pinch some of the tests that can be automated, and you can start to "build in quality". If you can get this going, you'll find that QA are almost ready to start running tests the moment they get a build, so the completed-code queue can be consumed faster. Finding bugs early in completed code is essential to avoiding re-work, and (using the TPS "Andon" idea) you can stop coding if a Big Problem is discovered and go back to solving that before you proceed with new work.
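
As a sketch of what "pinching" a test at specification time might look like (the module and function names here are invented for illustration), QA and development could capture an acceptance criterion as an executable test before any code exists:

    import unittest

    # Hypothetical spec item: exporting an empty report must produce just
    # the header row. The report_export module doesn't exist yet; this
    # test *is* the executable specification, and fails (with an
    # ImportError) until the module is actually built.

    class TestEmptyReportExport(unittest.TestCase):
        def test_empty_report_is_header_only(self):
            from report_export import export_csv  # module still to be written
            self.assertEqual(export_csv(rows=[]).splitlines(),
                             ["id,name,total"])

    if __name__ == "__main__":
        unittest.main()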

Now, I've not even touched on documentation here, as I'm not that confident about how and when most people carry it out. For sure, if you do it at the end, you have a massive queue to be consumed by the poor mites, and the "blame" for being late is soundly passed on to them. Perhaps more people do it in parallel with the final QA Big Push, which is better, but still not great. In common with the principle of starting test writing early, you can potentially start documentation early (at least loosely) at the requirements/specification stage. How much overlap do you think there might be between tutorials and quick-start guides and the general quality tests? If it's a lot, then why not do them together? Maybe you can even use these tutorials and guides as a form of quality test: it's all about feature coverage!
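
One concrete way of doing that (again a sketch, reusing the invented report_export example from above) is Python's doctest module, which runs the examples in your documentation as tests:

    """Quick-start guide for the (hypothetical) report_export module.

    The tutorial below doubles as a quality test:

    >>> from report_export import export_csv
    >>> print(export_csv(rows=[(1, "widgets", 42)]))
    id,name,total
    1,widgets,42
    """
    import doctest

    if __name__ == "__main__":
        doctest.testmod(verbose=True)  # running the tutorial verifies the feature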

I've noticed, while thinking about these things, that one of the most fundamental requirements for achieving flow is to involve everyone at every stage. Assembling the whole team at the very start of a project keeps re-work to a minimum, as everyone "picks off" the parts of the process that are their responsibility at the earliest opportunity as features flow through. Not only does a collaborative approach encourage non-dependent tasks to be done in parallel, but you can't beat different perspectives for solving problems!
