But what happens when you pass feature complete? There's still a pile of testing to be done on the features, and the bug list begins to grow. We know this is going to happen and we've (hopefully) planned for this final phase of production in some form. But how do we know whether, or when, we're going to get the work finished? If product management needs to be told not to announce a December release, we'd better bloody well tell them now!
Before I get into the ins and outs, I should admit that a truly Agile, iterative approach will have rolled the entire develop-test-bug-fix cycle in over the course of the project, so "feature complete" only actually happens when the last iteration is complete, which (like all previous iterations) means the last batch of features has passed all acceptance tests and any bugs thrown up have been addressed. However, this doesn't render my quandary moot; it simply spreads the worry throughout the whole schedule, breaking it down into smaller pieces of unknown bug-fix work.
So what's the problem? Well, when we extrapolate our velocity on features forwards, we are dealing with a known, bounded set of tasks, each with an estimate of how long it will take. Most of the variability between estimated and real-world effort can be accounted for by random fluctuation plus (or minus) a fixed bias. We seek to calibrate that bias in the estimation process and rely on a large number of measurements to capture the inherent variance in the estimates. But for bug-fixes, whether rolled into self-contained iterations, lumped at the end or any mixture of the two, we don't know how long each work item will take. Any effort to guess (and that's all it would be: calling it estimation would be a terrible misuse of the word!) would be subject to enormous variability, so any range of figures it produced would probably be useless for managing the project to a finish.
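To make the contrast concrete, here's a minimal sketch (in Python, with made-up numbers) of the calibration we can do for the known, estimated feature work: measure the ratio of actual to estimated effort on what's already done, and use it to correct the forecast for what remains. The function names and figures are mine, purely for illustration.

```python
# Hypothetical sketch: calibrate a fixed estimation bias from completed
# feature work, then apply it to the remaining, estimated backlog.

def estimation_bias(completed):
    """Ratio of total actual to total estimated effort across finished tasks."""
    total_estimated = sum(est for est, _ in completed)
    total_actual = sum(actual for _, actual in completed)
    return total_actual / total_estimated

def forecast_remaining(remaining_estimates, bias):
    """Correct the raw estimates for the bias measured so far."""
    return sum(remaining_estimates) * bias

# Invented figures: (estimated days, actual days) for completed features.
completed = [(3, 4), (5, 5), (2, 3), (8, 10)]
bias = estimation_bias(completed)            # 22/18, i.e. we run ~22% over estimate
print(forecast_remaining([4, 6, 2], bias))   # remaining work, bias-corrected
```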
What options do we have? Well, maybe the traditional approach of simply attacking the critical bugs first, followed by the important ones and so on, is the best we can do: we get as much done by the end date as humanly possible and make a strategic decision to ship or delay based on what's outstanding. The problem here is that you find out you're not going to make it when it's too late: fait accompli, a big bollocking for the project manager, and we all look stupid because we didn't release on the day we announced to the world.
Or perhaps we should make some attempt to estimate the bugs. Perhaps bring legacy data to bear: it might tell us what proportion of the initial development time bugs typically take to fix in certain code areas: low-level library functions, core application components, application business logic, the GUI and so on. Legacy data could even give us some prediction of how many bugs to expect, before any of them have been found.
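For what it's worth, here's a rough sketch of how legacy data might be applied in that spirit: a per-area ratio of historical bug-fix effort to initial development effort, multiplied by the development time already booked on the current project. The area names and ratios are entirely invented.

```python
# Hypothetical sketch: project bug-fix effort per code area from a
# legacy-derived ratio of fix time to initial development time.

# Made-up ratios from past projects: fix effort as a fraction of dev effort.
legacy_fix_ratio = {
    "low_level_libs": 0.15,
    "core_components": 0.25,
    "business_logic": 0.20,
    "gui": 0.35,
}

# Development days booked per area on the current project (also invented).
dev_days = {"low_level_libs": 40, "core_components": 60,
            "business_logic": 80, "gui": 50}

projected_fix_days = {area: dev_days[area] * legacy_fix_ratio[area]
                      for area in dev_days}
print(projected_fix_days)               # per-area bug-fix effort projection
print(sum(projected_fix_days.values())) # total bug-fix effort to plan for
```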
My feeling is that using Agile estimation tools is at least worth a try: make a guess at the time it will take to fix bugs, with a default of, say, two minor bugs per day, but use the legacy data you have to guide you. The data you have gathered for the completed features is perhaps an excellent candidate: take sub-sets of that data and see which code areas and features were easy (came in under estimate), which were unpredictable (large variance in velocity) and which were hard (consistently came in over estimate), and use this to generate (or perhaps bias) your bug-fix effort estimates. I suggest this because it's odds-on that a feature that was particularly sticky to code in the first place is more likely to contain sticky bugs than one that was trivial. There will of course be exceptions to the rule, but the point is to come up with a few rules that get the bulk of the estimates roughly right.
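As a sketch of what I mean (again with invented data and names), you could group the completed-feature records by code area, work out the mean and spread of the actual-to-estimate ratios, and use the per-area mean to scale a default per-bug figure:

```python
# Hypothetical sketch: use per-area feature velocity data to bias
# per-bug effort estimates. All names and numbers are illustrative.
from collections import defaultdict
from statistics import mean, pstdev

# (code area, estimated days, actual days) for completed features - invented.
features = [
    ("gui", 2, 2), ("gui", 3, 3.5),
    ("core", 5, 8), ("core", 4, 7),
    ("libs", 3, 3), ("libs", 2, 1.5),
]

ratios = defaultdict(list)
for area, est, actual in features:
    ratios[area].append(actual / est)

DEFAULT_BUG_DAYS = 0.5   # e.g. two minor bugs per day, as a starting point

for area, rs in ratios.items():
    bias, spread = mean(rs), pstdev(rs)
    # Sticky areas (bias well over 1) get proportionally fatter bug estimates;
    # a large spread warns us that the figure is unreliable.
    print(f"{area}: estimate {DEFAULT_BUG_DAYS * bias:.2f} days/bug "
          f"(bias {bias:.2f}, spread {spread:.2f})")
```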
Of course I don't know the answer here: I simply suggest using the Agile estimation tools we already have to inform project management decisions during this period, rather than surrendering to fate. Maybe in some cases the testing phase of a project is just too short to yield any useful answers. Maybe the inherent variability in the estimates is just too great. But it's worth a go. Isn't it?