Sunday, 29 November 2009

Trying to Achieve Flow in Software Development

I've been trying to apply the Lean principle of Flow to my software projects for the last few months and it's teaching me a few things about our process. I've also been attempting to draw out a value stream map for the whole process to help me do this, and I'm finding it all very useful.

So, what does a basic process look and feel like now I've walked it out? Well, here's my rough take on it:
  1. We take a feature from the to-do pile (or a few bugs from the bug list) and write out a complete set of requirements and a specification.
  2. The developers write code. For this phase to be complete, we'd like to have all code reviewed and a whole bunch of new unit tests and integration tests passing.
  3. Once development of sufficient work is complete, a build will be made available to QA and they then write tests. For this to be complete, all tests should be approved by the lead test engineer (and hopefully the product manager).
  4. The testers carry out the tests and hopefully find no bugs or issues. If they do, we go back to step 2...
If you take a look at your software process in this manner, you should quickly notice that you have queues of work between the various processes and sub-processes. If they're small and manageable, then you're already on the way to achieving flow. If they're huge or unknown in size, you potentially have problems hidden in there: stop piling them up and clear out the backlog!
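
To make that a little more concrete, here's a toy sketch in Python of what I mean by watching the hand-off queues between stages; the stage names and the work-in-progress limit are just made up for illustration, not a description of any particular tool:

  # Toy sketch: model the hand-offs between process stages as queues and
  # flag any that have grown beyond a chosen work-in-progress limit.
  # The stage names and the limit are invented for illustration only.
  from collections import deque

  stages = {
      "specification -> development": deque(["feature A", "feature B"]),
      "development -> QA": deque(["feature X", "feature Y", "feature Z", "feature W"]),
      "QA -> release": deque(),
  }

  WIP_LIMIT = 3  # an arbitrary threshold for "small and manageable"

  for handoff, queue in stages.items():
      status = "OK" if len(queue) <= WIP_LIMIT else "TOO BIG - stop and drain it"
      print("%s: %d item(s) queued [%s]" % (handoff, len(queue), status))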

Now, these queues of work hide different sorts of problems, and levelling out the various work rates to achieve flow requires a different approach for each.

First of all, the feature queue is always going to exist, at least in an abstract sense: once a project has a guiding vision and a high-level specification exists, a large number of the features exist, if only in a project manager's head. However, the queue needs to be consumed and turned into completed specification items, with accompanying acceptance tests and so on, and this is often a lengthy process. More importantly, this process is extremely context sensitive, in that individual feature specifications are often related and can be significantly influenced by what has gone before. Herein lies the classic waterfall problem: do too much and there tends to be a lot of re-work. Keep the completed specification queue small, or you're potentially wasting your time!

Completed specifications are traditionally consumed by development, as code is written to satisfy the requirements. Once this is complete, QA then ensure the requirements have been satisfied by writing and carrying out tests. The queue between development and QA is normally BIG: it can often accumulate for an entire project, and QA has to clear it out in one monumental effort at the end. This isn't very fair, as it then becomes QA's responsibility to ship on time: everyone else has done their job to the best of their ability and it simply remains for QA to ensure quality before we can start making some money! Think about this for a moment alongside the Lean mantra of "building in quality". Doesn't fit, does it?

Not only is the queue between development and QA usually large and symptomatic of a non-Lean process, it can also hide some big and horrific problems. Ever got halfway through QA and discovered that something fundamental is broken that was finished an age ago? How much code is subsequently "broken" as a result of these horrors? It's obvious, then, that keeping this queue small is important if you want to avoid re-work.

There is something that can be done to alleviate both of these problems, and that is to create parallel processes of coding and test writing that start with the finished requirements and specifications. If everyone sits down together when specifications are being written, QA can start to design tests to verify the requirements have been satisfied, and further tests to cover more general quality. During this conversation, development can even pinch some of the tests that can be automated, and you can start to "build in quality". If you can get this going, you'll find that QA are almost ready to start running tests the moment they get a build, so the completed code queue can be consumed faster. Finding bugs early in completed code is essential to avoiding re-work and (using the TPS "Andon" idea) you can stop coding if a Big Problem is discovered and go back to solving that before you proceed with new work.
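
As a rough sketch of what one of those "pinched" automated tests might look like: the feature here (totalling counts per category) and all the names are hypothetical, but the point is that the acceptance criterion is written straight from the spec, and development then writes just enough code to satisfy it.

  # Hypothetical example: an acceptance test agreed with QA while the spec
  # was being written, plus the minimal code that satisfies it.
  import unittest

  def build_summary(records):
      """Hypothetical feature: total the counts for each category."""
      summary = {}
      for category, count in records:
          summary[category] = summary.get(category, 0) + count
      return summary

  class TestSummaryReportSpec(unittest.TestCase):
      # Acceptance criterion lifted from the specification conversation.
      def test_totals_are_accumulated_per_category(self):
          records = [("widgets", 3), ("gadgets", 5), ("widgets", 2)]
          self.assertEqual(build_summary(records), {"widgets": 5, "gadgets": 5})

  if __name__ == "__main__":
      unittest.main()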

Now, I've not even touched on documentation here, as I'm not that confident about how and when most people carry it out. For sure, if you do it at the end, you have a massive queue to be consumed by the poor mites doing it, and the "blame" for being late is soundly passed on to them. Perhaps more people do it in parallel with the final QA Big Push, which is better, but still not great. In common with the principle of starting the test writing early, you can potentially start documentation early (at least loosely) at the requirements/specification stage. How much overlap do you think there might be between tutorials and quick-start guides and the general quality tests? If it's a lot, then why not do them together? Maybe you can even use these tutorials and guides as some form of quality test: it's all about feature coverage!

I've noticed while thinking about these things that one of the most fundamental requirements for achieving flow is to involve everyone at every stage. Assembling the whole team at the very start of a project means that re-work can be kept to a minimum, as everyone "picks off" the parts of the process that are their responsibility at the earliest opportunity as the features flow through. Not only does a collaborative approach encourage non-dependent tasks to be done in parallel, but you can't beat different perspectives for solving problems!

Monday, 2 November 2009

Personal Kaizen

Last Monday was my personal Kaizen day and I've been a bit late on it this month. So, what did I try in October that worked and didn't?

First off, I was sticking with vision boxes and Evo-style customer values and the experience has thus far been extremely good (see my last post). More significantly, other people in the company have been taking notice of this way of presenting the data, so it may gather some momentum and become a bit more widely used. The experiment is not yet over, and I can't comment on whether it's a success until I've successfully delivered a valuable product at the end of the project.

I also tried a Google site for communicating between members of the EMHD, but have not yet had the chance to use it properly: the next event I'm involved in, I'll wheel it out and see what everyone thinks. My initial reaction is that it's a lot easier to set up than a SharePoint site and provides all of the functions that I need. I don't need a very customised experience for EMHD purposes, so it's fine just the way it is. It took about 30 minutes to get everything on there that I needed, so I reckon that's a pretty shallow learning curve and a decent return on my time. If it works, that is...

I've also been trying to drive project work from our SharePoint site, which I have to say isn't proving so tractable. The best I've managed so far is to send blanket updates to everyone about changing specs, new features, tasks and so on; it's no finer-grained than that. We (I?) now have a choice to make about how to integrate basic calendar functions into the planning tools we're using, which could be quite a challenge. The kinds of things I see as essential are:
  • Blocking out iterations/timeboxes
  • Attaching start/finish dates to features and tasks
  • Entering holidays, out of office days and any other non-project days for planning purposes
  • Entering similar retrospective "holes" in the project to assist analysis of the data
  • Displaying project finish dates and milestones on the calendar
One other thing I tried this month was Python. (Yes, I should have learned this AGES ago, I know...) Basically, it rocks! I particularly like dictionaries (well, the fact that I can initialise them in place: roll on C++0x for that) and decorators (although I haven't used them properly yet). I can't see it replacing Matlab for prototype code for me (well, at least for very algorithmic stuff), but it certainly has a place in my toolbox.
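
For anyone who hasn't met them, here's roughly the kind of thing I mean; a trivial, made-up example showing both an in-place dictionary literal and a simple decorator:

  import functools
  import time

  # In-place (literal) initialisation: the whole lookup table in one expression.
  status_text = {
      200: "OK",
      404: "Not Found",
      500: "Internal Server Error",
  }

  # A simple decorator: wraps a function and reports how long it took to run.
  def timed(func):
      @functools.wraps(func)
      def wrapper(*args, **kwargs):
          start = time.time()
          result = func(*args, **kwargs)
          print("%s took %.6f seconds" % (func.__name__, time.time() - start))
          return result
      return wrapper

  @timed
  def describe(code):
      return status_text.get(code, "Unknown")

  print(describe(404))  # prints the timing line, then "Not Found"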

So, what next? Well, building on the things that are staying from last month, during November I'll be trying:
  • Driving project planning entirely from a SharePoint task list and calendar
  • Running more code through GCC on Linux to get my C++ more standards-compliant (Dev Studio, I feel you've been leading me astray...)
  • Figuring out what Jidoka means for Lean software development