Wednesday, 30 December 2009

Software Development Best Practices: Enabling or Stifling?

It seems that the arguments for and against best practices and standardised ways of writing code will never cease. But what motivates people to be strongly against or vehemently in favour of best practices?
  • Fear and distrust are big issues, and it's no surprise that people fear losing control over what they see as a creative process to best practices, boilerplate code or standardised working practices. The way in which arguments for best practices are presented may affect whether they are perceived as a threat or not.
  • Understanding is another key area here: differences in people's perceptions of what constitutes "best practice" or "coding standard" will undoubtedly lead to differences in levels of acceptance. Inconsistency between the constraints imposed by best practices and the other pressures present in writing good, solid software is a close cousin to understanding (or misunderstanding) standards and best practices.
  • Finally, personality is an unfortunately big part of this: developers are extremely independent thinkers (which is no bad thing) with strong independent personalities to match. Ideas such as coding standards and best practices can seem too authoritarian for many, so may be resisted simply because they "feel wrong". Worse, an idea can fail to be adopted by an individual simply because it's put forward by someone they generally disagree with or even personally dislike.
Best practices come in many shapes and sizes (depending on opinion, of course), but let's lay out a few along the continuum:
  • Low level best practices such as memory management or object storage. For instance "always use boost::shared_ptr rather than a raw pointer" or "avoid using a vector of std::auto_ptr". Such ideas are really rather hard to argue against: they're about avoiding common pitfalls and bugs. Of course there are (rare) situations where we may need to diverge from such rules, but a coding guideline should at the very least make it necessary to justify your decision to do so.
  • Next come mid level ideas such as common pattern usage. For instance a standardised way (or a couple of ways) to do Observer may be recommended. How much you can standardise through generic code, boiler-plate or simply guidelines is language dependent, but it's at this level that divergences in opinion start to form. Observer is a great example, as you don't have to write too much code to find that a generic "Observer with Hint" template might not appear to be enough: you might need two hints, visibility both ways or perhaps two types of observer. The balance that needs to be made at this level is the cost of modification to the code that uses the pattern versus the cost of partial duplication of a new pattern variant. These costs can be in time to code, clarity, maintainability and bug fixing.
  • Finally, the largest ideas we might consider putting forward as best practices are architectural paradigms such as "all communication via asynchronous events" or "no database access on the non-GUI thread". Such rules are unlikely to be effective across many projects, but XP suggests ideas of this sort are essential in its "System Metaphor" practice.
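To make the mid-level case concrete, here's a minimal sketch of the kind of generic "Observer with Hint" template being described. The names and shape are my own illustration, not a recommendation:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A minimal generic "Observer with Hint": the subject passes a hint
// describing what changed, so observers can react selectively.
// Names are illustrative only.
template< typename Hint >
class observer
{
public:
    virtual ~observer() {}
    virtual void notify( const Hint & hint ) = 0;
};

template< typename Hint >
class subject
{
public:
    void attach( observer<Hint> * o ) { m_observers.push_back( o ); }
    void detach( observer<Hint> * o )
    {
        m_observers.erase(
            std::remove( m_observers.begin(), m_observers.end(), o ),
            m_observers.end() );
    }
    void notify_all( const Hint & hint )
    {
        for( std::size_t i = 0; i < m_observers.size(); ++i )
            m_observers[i]->notify( hint );
    }
private:
    std::vector< observer<Hint> * > m_observers;
};
```

The moment you need two hints, or two kinds of observer, you either grow this template or duplicate a variant, which is exactly the cost trade-off described above.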
Lean manufacturing suggests that standard working practices are at the heart of continuous improvement, but those using them must be empowered and motivated to improve them. If developers are in charge of their best practices, then it's less likely that they will be viewed as an authoritarian diktat. In addition, issues of trust and respect become potentially less thorny, as the standards are maintained by a group of peers by consensus. Going back to an earlier example, if someone has a real need to diverge from a best practice such as a generic Observer pattern, then the particular problem they are trying to solve should be discussed and the standards amended if need be. The main point of such an exercise is to inspect and adapt standards and best practices to suit, rather than view them as something fixed.

Tuesday, 29 December 2009

Kanban Style Project Development Status

I was talking with a colleague this morning about the beautifully simple and effective way the Toyota Production System embodies Lean tools such as Kanban and Andon using bits and pieces bought from the stationery supplier and thrift store (laminated cards, coloured flags and so on), and an incredibly simple idea emerged for managing multiple projects.

Say you have three big projects on - we'll call them "The Big One", "Middleweight" and "Express" to illustrate the fact that they're of differing importance. Management have made decisions about available resources (or people as I often like to call them). It has been decided to dedicate 3 full time developers, 2 testers and one product owner to The Big One, 2 developers, 1 tester to Middleweight and 1 developer and a part time tester to Express. Middleweight and Express will share a product owner.

Why not just create a few simple coloured cards for these? 6 big red ones for The Big One, 3 big green ones and 1 small green one for Middleweight, and 1 big yellow one and 2 small yellow ones for Express. Write the project name in big letters and "Developer", "Tester" or "Product Owner" on them accordingly and place them in a marked out area on a table. When you allocate your staff, they take a card and pin it over their desk somewhere visible. No-one is allowed more than one big card or two small ones.
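The limit rule is simple enough to express in a few lines of code. This is my own formulation, on one reading of the rule where big and small cards never mix:

```cpp
#include <string>
#include <vector>

// Toy model of the card limit: a big card must be held alone; small
// cards may be held up to two at a time. One reading of the rule,
// expressed as a check before handing a card out.
struct card
{
    std::string project;  // e.g. "The Big One"
    std::string role;     // "Developer", "Tester" or "Product Owner"
    bool        big;
};

bool may_take( const std::vector<card> & held, const card & next )
{
    if( held.empty() )
        return true;
    if( next.big || held.front().big )
        return false;             // big cards don't share a desk
    return held.size() < 2;       // at most two small cards
}
```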

Now, take a look at the "holding pen" we created on the table: are there any cards in there still? If so, you know you're under-staffed and in what area. Look around the office: has anyone got more work on than they should, or is anyone seriously slacking? No? Smashing, everyone's got just the right amount of work on :)

Now, obviously, we have to manage other activities: the odd emergency here and there, process improvement, perhaps some personal learning and so on, and we should create temporary cards for these. So, for example, you have a Big Emergency to deal with on a legacy project or for a demo and you need to pull someone: grab a temporary wipe-clean card and write it on. Go pick your developer/tester/PM and they pin this card up. Obviously, any work they have on that takes them over their limit has to come down and go back to the holding pen.

Now you can see the effect that the emergency has had. When the head of PM walks past the table and sees a big red card sitting in the holding pen, they immediately know that something's awry and can start asking questions. No cards in there and they're happy.

Monday, 21 December 2009

Personal Kaizen

I've missed the first Monday of the month by a mile, so I'll roll November's and December's Kaizen into one. Where did this month go?!

I've been continuing to try to increase the integration of planning tools on a Sharepoint site for project management and probably reached a sensible end point at the beginning of December: any further work would be effort spent for a diminishing return. However, this seems to be the month that a turning point has been reached: we have Product Managers suddenly interested in Agile techniques and a department head showing a keen interest in moving our ad-hoc methods towards a more unified tool set. This represents both a great opportunity and a challenge. It's a time for enthusiasm and commitment for the future, yet also caution and careful decision making.

I've rather failed to get regular GCC Linux builds going, which is annoying; this still needs to be done, as I think it has the potential to push up the quality of my code. The effort was unfortunately hijacked because my main Windows box died and the box I was ready to install Linux on was suddenly needed to do all of my work. Now I have three Windows PCs, which should reduce to two (when the worst of the bunch is rejected). One will then be turned into a Linux box, or perhaps have a Linux VM installed.

I also promised myself that I'd look into how the Toyota principle of Jidoka could be applied in our process. It seems that many people share a view on this, but know it by different names and arrive at it from different perspectives. There seem to be two things that we can do in the relatively short term:
  1. It's emerging that if we can all "move to the blockage" in our development line when a Big Bug is discovered and focus on getting it moving again, then productivity will actually go up and worries that highly paid developers will be doing menial testing are unfounded.
  2. We have to solve the "root problem" before we re-start the line (part of the principle!), and there seems to be consensus that this normally means putting the automated tests in place to prevent similar show-stopping bugs ever making it to QA again.
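Point 2 boils down to a gate: work doesn't restart until every show-stopper has a passing regression test guarding against its root cause. A toy sketch of that check, in my own formulation (not an established API):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy "stop the line" gate: the development line may only restart once
// every blocking bug has an automated regression test in place and
// passing. My own formulation of the idea.
struct blocking_bug
{
    std::string id;
    bool        regression_test_in_place;
    bool        regression_test_passing;
};

bool line_may_restart( const std::vector<blocking_bug> & bugs )
{
    for( std::size_t i = 0; i < bugs.size(); ++i )
        if( !bugs[i].regression_test_in_place ||
            !bugs[i].regression_test_passing )
            return false;
    return true;
}
```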
I've had some interesting side benefits this month after installing Windows 7 on my main PC after it was repaired. There are just a handful of productivity shortcuts that really seem to keep things moving fast without me having to actually use the operating system much at all! Also, a simple system for mirroring our centralised automated build and test system on local machines has proved incredibly useful in flushing out bugs in merges of release branches to our trunk prior to committing the changes, and it all happens while I'm safely tucked up in bed.

Next month I've a few things down, which I'll report back on at the beginning of February:
  • My GCC Linux box or VM.
  • Looking into ways to inject a bit of FMEA into our processes without too much effort.
  • Find some good project management courses before my CPD assessment: Masters courses hopefully.
  • More books: I seem to be coming to the end of the last pile I picked out.
  • Find out if I've become a Chartered Engineer!

Friday, 11 December 2009

Project Management Top Trumps

Ever get the feeling that discussions about project management are turning into a game of Top Trumps? You suggest an improvement to the way you might do something and you get the response:
"That's the least of our worries."
"I disagree, THIS is more important."
I'm sure this happens all the time, and it's a close cousin to "Escalation" that seems to be talked about in the Agile/Lean blogosphere at the moment. Although it can seem to be quite counter-productive, I've been wondering how to interpret and use such opinions and feelings, because when someone says this to us, they're not lying: it's truly the way they feel and what they believe!

Having thought about it a bit (and I stress "a bit"), here are my thoughts on what we should do when the Top Trumps start:

First of all, think about whether it's a fair point and if it does truly "trump" your original idea. That is, you can't begin on your idea until the suggestion is dealt with. For example, you've suggested doing more TDD, but it's pointed out that you don't have a working test framework: they're right. You'll need that testing framework sorted first if you're to make a go of TDD.

If you aren't "trumped", then there's still the issue of which idea is most important. Your time is probably best spent on your original idea, unless the counter idea can be implemented just as easily by you (if you intend to do it yourself), or would have a much greater impact to balance the increased cost of doing something you're not familiar with or don't have the resources, responsibility or authority to carry out. Be sure to explain this rough cost-benefit analysis to the person you disagree with, particularly if they're in charge!

If there aren't clear priorities, then think about whether the various topics that have been thrown into the discussion share anything. If you can find common ground, then you have an opportunity to get more done, because of a shared goal. If you can't find common ground, then you'll have to agree to disagree. If someone believes with equal passion in something else, then why not just do both? You do your thing and suggest that they do theirs (rather than expecting you to do it for them). This brings me on to the last point, which is very important if many people have different agendas and that is...

Make sure you're pushing in the right direction! What you suggest must be in line with overall strategy if it's to have a positive impact. Absolutely make sure you're not going against the grain: if you really want to and believe that's the way forward, then you have to obtain some serious buy-in at a higher level, because you won't be thanked for sabotaging existing efforts with your own agenda. Lots of different opinions could be symptomatic of an overall lack of clear strategy - a serious problem - but we're generally a highly opinionated bunch, so we have to learn to live with disagreement and confrontation, particularly when the stakes are high.

Sunday, 6 December 2009

Learning from Our Mistakes: Using FMEA in Software Development

I've heard the phrases so many times now:
"We've learned from our mistakes."
"We won't make the same mistakes again."
However, I've not been privy to an explanation of how this will be ensured. If I make a mistake, how do I make sure that someone else doesn't repeat it? If someone in our US office found serious problems with a tool they use, how do I know and not get burned the same way?

The answer in engineering is Failure Mode and Effects Analysis and this has been applied to software development for many years now.

Everyone tends to do their own slightly home-brewed version of FMEA that fits well with their processes and systems, and I'm keen to try the techniques out. As a first step, all existing knowledge about foreseen and unforeseen project problems, delays, failures, additional costs and so on should be pooled. This pool of knowledge should then be used as a starting point for any future risk analyses, rather than each analysis being an exercise in memory, intuition and guesswork.
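For concreteness: most FMEA variants score each failure mode for severity, occurrence and detectability, and rank by the product of the three, the Risk Priority Number. A minimal sketch of that scoring applied to a pooled list of known problems (field names are my own):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Classic FMEA scoring: each known failure mode is rated 1-10 for
// severity, occurrence and (lack of) detectability; the Risk Priority
// Number is the product, and the highest RPNs get attention first.
struct failure_mode
{
    std::string description;
    int severity;    // 1 = negligible .. 10 = catastrophic
    int occurrence;  // 1 = very rare  .. 10 = near certain
    int detection;   // 1 = always caught .. 10 = undetectable
    int rpn() const { return severity * occurrence * detection; }
};

bool by_rpn_descending( const failure_mode & a, const failure_mode & b )
{
    return a.rpn() > b.rpn();
}

void rank_by_risk( std::vector<failure_mode> & modes )
{
    std::sort( modes.begin(), modes.end(), by_rpn_descending );
}
```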

I've been reading around a little and found the following useful:

I think I'll set myself the goal of kicking off a closer look at FMEA as my first personal Kaizen of the New Year and will post back here as usual.

Sunday, 29 November 2009

Trying to Achieve Flow in Software Development

I've been trying to apply the Lean principle of Flow to my software projects for the last few months and it's teaching me a few things about our process. I've also been attempting to draw out a value map for the whole process to help me do this and am finding it all very useful.

So, what does a basic process look and feel like now I've walked it out? Well, here's my rough take on it:
  1. We take a feature from the to-do pile (or a few bugs from the bug list) and write out a complete set of requirements and a specification.
  2. The developers write code. For this phase to be complete, we'd like to have all code reviewed and a whole bunch of new unit tests and integration tests passing.
  3. Once development of sufficient work is complete, a build will be made available to QA and they then write tests. For this to be complete, all tests should be approved by the lead test engineer (and hopefully the product manager).
  4. The testers carry out the tests and hopefully find no bugs or issues. If they do, we go back to step 2...
If you take a look at your software process in this manner, you should quickly notice that you have queues of work between the various processes and sub-processes. If they're small and manageable, then you're already on the way to achieving flow. Huge or unknown in size and you potentially have problems hidden in there: stop piling them up and clear out the backlog!
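A handy yardstick for those queues is Little's law: the average time an item spends in a queue is the number of items in it divided by the rate at which they're completed. A one-line sketch:

```cpp
// Little's law: average wait = work in progress / throughput.
// E.g. 60 queued specs consumed at 5 per week means a 12 week wait.
double average_wait_weeks( double items_in_queue, double items_per_week )
{
    return items_in_queue / items_per_week;
}
```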

Now, these queues of work hide different sorts of problem, and levelling out the various work rates to achieve flow requires a different approach for each.

First of all, the feature queue is always going to exist, at least in an abstract sense: once a project has a guiding vision and a high level specification exists, a large number of the features already exist, if only in a project manager's head. However, the queue needs to be consumed and turned into completed specification items, with accompanying acceptance tests and so on, and this is often a lengthy process. More importantly, this process is extremely context sensitive, in that individual feature specifications are often related and can be significantly influenced by what has gone before. Herein lies the classic waterfall problem: do too much and there tends to be a lot of re-work. Keep the completed specification queue as small as possible, or you're potentially wasting your time!

Completed specifications are traditionally consumed by development, as code is written to satisfy the requirements. Once this is complete, QA then ensure the requirements have been satisfied by writing and carrying out tests. The queue between development and QA is normally BIG (it can often accumulate for an entire project, leaving QA to clear it out in one monumental effort at the end). This isn't very fair, as it then becomes QA's responsibility to ship on time: everyone else has done their job to the best of their ability and it simply remains for QA to ensure quality before we can start making some money! Think about this for a moment alongside the Lean mantra of "building in quality". Doesn't fit, does it?

Not only is the queue between development and QA usually large and symptomatic of a non-Lean process, it can also hide some pretty big and horrific problems. Ever got halfway through QA and discovered that something pretty fundamental is broken that was finished an age ago? How much code is subsequently "broken" as a result of these horrors? It's pretty obvious then that keeping this queue small is important if you want to avoid re-work.

There is something that can be done to alleviate both of these problems, and that is to create parallel processes of coding and test writing that start with the finished requirements and specifications. If everyone sits down together when specifications are being written, QA can start to design tests to verify the requirements have been satisfied and further tests to cover more general quality. During this conversation, development can even pinch some of the tests that can be automated and you can start to "build in quality". If you can get this going, then you find that QA are almost ready to start running tests the moment they get a build, so the completed code queue can be consumed faster. Finding bugs early in completed code is essential in avoiding re-work and (using the TPS "Andon" idea) you can stop coding if a Big Problem is discovered and go back to solving that before you proceed with new work.

Now, I've not even touched on documentation here, as I'm not that confident about how and when most people carry it out. For sure, if you do it at the end, you have a massive queue to be consumed by the poor mites, and the "blame" for being late is soundly passed on to them. Perhaps more people do it in parallel with the final QA Big Push, which is better, but still not great. In common with the principle of starting the test writing early, you can potentially start documentation early (at least loosely) at the requirements/specification stage. How much overlap do you think there might be between tutorials, quick start guides and the general quality tests? If it's a lot, then why not do them together, and maybe you can even use these tutorials and guides as some form of quality test: it's all about feature coverage!

I've noticed while thinking about these things that one of the most fundamental things that needs to be done to achieve flow is to involve everyone at every stage. Assembling the whole team at the very start of a project means that re-work can be kept to a minimum as everyone "picks off" the parts of the process that are their responsibility at the earliest opportunity as the features flow through. Not only does a collaborative approach encourage non-dependent tasks to be done in parallel, but you can't beat different perspectives for solving problems!

Monday, 2 November 2009

Personal Kaizen

Last Monday was my personal Kaizen day and I've been a bit late on it this month. So, what did I try in October that worked and didn't?

First off, I was sticking with vision boxes and Evo-style customer values and the experience has thus far been extremely good (see my last post). More significantly, other people in the company have been taking notice of this way of presenting the data, so it may gather some momentum and become a bit more widely used. The experiment is not yet over, and I can't comment on whether it's a success until I've successfully delivered a valuable product at the end of the project.

I also tried a Google site for communicating between members of the EMHD, but have not yet had the chance to use it: the next event I'm involved in, I'll wheel it out and see what everyone thinks. My initial reaction has been that it's a lot easier to set up than a Sharepoint and provides all of the functions that I need. I don't need a very customised experience for EMHD purposes, so it's fine just the way it is. It took about 30 minutes to get everything on there that I needed, so I reckon that's a pretty shallow learning curve and a decent return on my time. If it works that is...

I've also been trying to drive project work from our Sharepoint, which I have to say isn't proving so tractable. The best I've managed so far is to send blanket updates to everyone about changing specs, new features, tasks and so on; it's no finer grained than that. We (I?) now have a choice to make about how to integrate basic calendar functions into the planning tools we're using, which could be quite a challenge. The kinds of things I see as essential are:
  • Blocking out iterations/timeboxes
  • Attaching start/finish dates to features and tasks
  • Entering holidays, out of office days and any other non-project days for planning purposes
  • Entering similar retrospective "holes" in the project to assist analysis of the data
  • Displaying project finish dates and milestones on the calendar
One other thing I tried this month was Python. (Yes, I should have learned this AGES ago, I know...) Basically, it rocks! I particularly like dictionaries (well, the fact I can in-place initialise them: roll on C++0x for that) and decorators (although I haven't used them properly yet). I can't see it replacing Matlab for prototype code for me (well, at least for very algorithmic stuff), but it certainly has a place in my toolbox.
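For the record, the in-place initialisation being wished for is in the C++0x draft: initializer lists let you build a std::map much like a Python dictionary literal. A sketch (needs a compiler with C++0x/C++11 support; the names are mine):

```cpp
#include <map>
#include <string>

// C++0x/C++11 in-place initialisation: the rough analogue of Python's
//   colours = { "red": 1, "green": 2 }
std::map<std::string, int> make_colours()
{
    return std::map<std::string, int>{ { "red", 1 }, { "green", 2 } };
}
```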

So, what next? Well, building on the things that are staying from last month, during November I'll be trying:
  • Project planning integration completely from a Sharepoint task list and calendar
  • Running more code through GCC on Linux to get my C++ more compliant (Dev studio, I feel you've been leading me astray...)
  • Figuring out what Jidoka means for Lean software development

Friday, 23 October 2009

Lean/Agile Project Control on Sharepoint

Looking back over October's "Personal Kaizen", I see that I've somewhat succeeded in making Lean/Agile project status and control more visible on our development Sharepoint and am feeling a great benefit from applying the core values from our "Vision Box". I've airbrushed out lots here to protect the innocent and removed our benchmarks and current status from the value gauges, but here's how our project Sharepoint now looks (and I'll tell you how I did it further down):

The main focus of this is to allow the interested parties and stakeholders to be able to do three important things:
  • Find out what this project is all about (clear vision statement and core values)
  • Check at a glance where we are (value gauges and task list)
  • Make project planning decisions (value gauges and task list)
These fundamentals rely on a couple of things that you can't see, which are:
  • Links in the task list (a Kanban board really) to a set of lightweight documentation that can be edited in-place (OK, it's a wiki: what else would it be on Sharepoint?!)
  • Cross referencing of the features in the task list to the core values
  • Estimated costs of the features in the task list
You can probably see that the task list is divided into a few sections, allowing features to flow through the development process:
  1. In Backlog
  2. In Elaboration
  3. In Development
  4. In Testing
  5. Signed Off
(There are a couple of legacy categories in there from before this little re-work too: don't worry about those.) As work flows through, it is assigned to different people and (if the Sharepoint hasn't crashed...) everyone gets nice little updates to tell them tasks have been assigned, have changed or that the feature specification and tests have been added to or modified.

One of the things I'm loving about this shift in focus of the project planning information is the way the core values have changed the way I think: I no longer see the work to do as a pile of features (or code points, story points, whatever's your fancy) to chew through, but rather as a concerted effort to push the value gauges to the right! We're "done" when we're in the green (or could grudgingly ship with them in the amber). Of course, we have deadlines to deal with and have to come up with sensible project planning data such as when we think we're likely to finish, and we can do this, so long as we continually evaluate how much "value" there is in the backlog and whether it will be sufficient to hit the targets.

All in all, I'm reasonably pleased with progress. OK, it's not a fully integrated, all-singing, all-dancing approach, but it's doing a decent job with the tools to hand and the existing project data. More to the point, this feels like just enough improvement from little effort, rather than an enormous integration/porting job or costly migration to a new tool.

If you fancy trying this yourself, then the key is publishing parts of an Excel spreadsheet within your Sharepoint document library: that way you get html, gifs, pngs inside your site that you can display as content. If you want to go the whole hog, you could integrate things like the task list (and perhaps a project calendar for tracking holidays, significant dates, out-of-office periods and so on) with your Excel calculations using an Access database.

Of course you probably can do all of this on a Google site too...

Wednesday, 7 October 2009

The Personal Toyota Way

I've been reading about the organisational philosophy behind the Toyota Way lately and it's something I agree with as an Engineer. On my dark and wet bike ride home, I started thinking about how individual philosophies and ethics affect a company (and more specifically, a department) and how these personal outlooks can often be correlated with those of the company.

Have a look around in your department and gauge who "contributes" to your internal economy by doing things for the greater good: clearing up their own messes and those of others, figuring out ways to make everyone's lives easier, enhancing productivity, trying out new tools and letting you know which ones are good. See some? Good. (So do I).

Now think about whether there are any "capitalist consumers" who simply seize any advantage offered them by someone else's hard work: rapidly delivering their own work using carefully crafted components, but leaving nothing re-usable in their wake, changing a whole swathe of behaviour, confident that existing unit tests ensure they've broken no-one else's code, but providing no new unit tests to protect their own. Maybe even just grabbing a well indexed book from your reference library and tossing it back in the general direction when they are done or finishing the last of the milk and not going to get some more. Notice any? Oh.

And of course there are those in between: Mr and Mrs average like you and me...

The nature of the people you see around you is likely reflected in the outward philosophy of your company, as a conscientious employer will likely take on people with ethics that fit in and will foster attitudes that match the company's vision. Likewise, a company out to make money and nothing else is likely to value those who have demonstrated their ability to focus on short term gain, and may even be reluctant to employ someone who appears to come with a large "ethical overhead", such as being an active BCS or ACCU member, sitting on the C++ standards committee or being evangelically devoted to a "clearly wasteful" practice (in their opinion) such as TDD or unit testing.

In the "closed market" of our company and even our department, we have to carefully maintain the ecosystem such that we don't consume more of the "good stuff" than we create: this equals technical debt (or a mess: thank you Uncle Bob for making this distinction). There's also the non technical "good stuff" such as keeping the place tidy and a pleasure to work in, organising the annual dinner out and sticking a new air freshener in the bog that can so easily taken advantage of and rung dry, making a place a misery to work in. Take care of yourself, and each other (TM).

Monday, 5 October 2009

Personal Kaizen

Today was the first Monday of the month and that means a quick check on my CPD and my personal Kaizen. So, what did I plan to do over the last month to try and improve things and how did it turn out? The plan was:
  • Use Outlook tasks to set deadlines with/for other people.
  • Create "vision boxes" to create very obvious visionary metaphors for new projects.
Additional things I've tried as the month rolled on have been:
  • Twitter
  • Google Reader
The reason I tried the Outlook task thing was to see if I could extend the useful leverage I've been getting from email flags and tasks to other people by handing tasks out: this didn't really work, and I'm not so surprised in hindsight, as the ways in which we organise our time, and the tools we use to do it, vary from person to person. So that idea's been shelved, but I still need a way to communicate deadlines and milestones between interested parties on a project: more on that later.

Vision boxes are an idea I got from Sanjiv Augustine's Managing Agile Projects book and I'd say it's too early to tell whether they can have a profound effect on my work and those around me. I personally found the act of creating the box incredibly useful, as it focused my mind on exactly and succinctly what a project's purpose is. If asked by anyone, I can deliver a sharp one-liner that gives them the idea immediately. Also, I tried to write four or five bullets on the back of my boxes and I put quite a lot of effort into making these tangible things of genuine value to the user that I can measure. These are now core values on one of my projects, and I've never had those before, so let's see if they help us to focus on delivering value: they stay on the list.

Moving on to Twitter: you probably think I'm hopelessly late to the party for not being on there from the off! But there you go, I don't seek to spend my life on the bleeding edge. My experience so far is that I'm finding it hard to filter out the noise, but having said that, it's shown me dozens of good posts and pointed me to plenty of new blogs to read. The real problem I can see with this model, though, is that all of this extra content takes time to read! But I'll take the rough with the smooth and carry on tweeting.

Related to starting to read a lot more content on t'internet has been Google Reader, and I have to say that I think it's just brilliant! I've found it extremely easy to use and it makes the initially frightening volume of information seem quite manageable. I've also started sharing among colleagues when we find relevant and useful items. Of course this feature becomes obsolete once we're all reading the same blogs and there's nothing new to share. Nonetheless, Google Reader is definitely staying on my bookmark bar!

So, I've found a couple of things that have been useful and closed the door on a couple of non starters: a successful month I'd say. The agenda for the next month (to start with) is:
  • Stick with the "vision boxes" and see if the derived project values help us (well, me) focus on delivering value rather than features.
  • Try out a Google Site for sharing information among groups I'm part of at the IMechE EMHD board.
  • Try driving project work assignment, deadlines and milestones with our internal Sharepoint. This is mainly driven by a Kanban board, but I need to try and work in scheduling, meetings and other planning issues also.
Until the next time, ttfn.

Tuesday, 29 September 2009

Making Error Codes Mandatory

Lately, I've been looking over some code I re-factored a while ago during a monumental code merge: I'd replaced a load of functions that returned booleans with error strings passed back by reference argument with simple integer error code returns. (If you think this is wrong, then you must be one of the people writing hard-coded error strings into core libraries, giving your app designers and document writers headaches: the spelling mistakes take them an age to remove, and the strings prevent them from localising your apps or having different strings in different deployments.)

Anyway, back to the main point: I considered using mandatory error codes at the time so clients couldn't ignore them. There's a Dr. Dobb's article along similar lines, but I wanted to make an important extension to the logical design.

The problem with the idea of the mandatory error code is that it throws on destruction if it hasn't been "queried" (had the embedded error code extracted), which in exception safety terms is the worst thing we can do! So, the thing we have to do is make sure they don't live (long) on the stack. The way I addressed this is to make them non-copyable, so they can construct from their embedded type only:

// requires <stdexcept> for std::logic_error
template< typename T >
class mandatory_error_code
{
public:
    mandatory_error_code( const T & i_value )
        : m_value( i_value ), m_throw( true ) {}

    // Throws if the embedded code was never queried
    ~mandatory_error_code()
    {
        if( m_throw )
            throw std::logic_error( "un-handled error code" );
    }

    // Querying the value disarms the destructor
    operator const T() const
    {
        m_throw = false;
        return m_value;
    }

private:
    mandatory_error_code( const mandatory_error_code & );             // non-copyable
    mandatory_error_code & operator=( const mandatory_error_code & );

    const T m_value;
    mutable bool m_throw;
};

There are two nice advantages to this approach:
  1. The only place you can write mandatory_error_code<> is as a return type, and it must be constructed from a convertible type. This makes porting code really easy, as all you do is change your function signatures.
  2. Client code that ignores an error code (by neither copying the value nor comparing it to a convertible type) immediately starts throwing on the line where the function returning the code is called. Because the class is non-copyable, it's impossible to make the object live longer and risk run-time termination should a different exception already be in flight.
The original Dr. Dobb's article on this has a non-throwing error code class that wraps the value, but I'm not sure this is still required: why wrap an error code in another class when that class does nothing? If you want to capture the error code for later comparison, just use the "original" type and sacrifice perhaps a tiny bit of readability to save a whole extra class. There is, however, still a way to compromise your run time, and that is by throwing from the copy of the embedded type. Despite not being protected by design here, we are most likely protected by circumstance: if copying the value throws, it throws during the construction of mandatory_error_code, so the destructor is never reached.

There is another "Modern C++" idea that you could use to extend this class, and that is to replace the hard coded throw with a policy (as a template argument), but until I need more than one policy, I'll assume "YAGNI" and leave it as it is.

Monday, 21 September 2009

Becoming Agile? But How Do I Stop Being Non-Agile?

Books on Lean and Agile are stuffed with examples of companies that are and companies that aren't. There seems to be plenty of advice about the core values of Lean/Agile and even some advice on how to cultivate them, but there is very little advice about how to "stop being Non-Agile".

How on earth do you go from every developer 110% loaded, 3 or 4 projects each and each person wearing analyst, architect, developer, test writer and project leader hats to the 20% slack, 1 project each and proper role allocation when management are tearing down the door with the next Big Thing that's going to make the company money every other day?

Now I realise that an answer many practitioners would give would be:
"Get management buy-in"
Great, that seems like good advice, but those of us who aren't on a beautifully functioning Agile team know that when we suggest a change in working practices, the answer's going to be:
"Fine, we can do that when we've finished project X"
Of course, project Y comes along long before X is finished. X 1.0 is rubbish anyway and needs an X 1.0.1 service release, backwards compatibility of X with W rears its ugly head, project Z is already in the marketing pipe (and if you're unlucky, it's already been sold).

It seems that there are quite a few things that may have to happen that are - in the short to medium term at least - extremely incompatible with business needs:
  1. New projects cannot be started. This is terrible news for a company that is scraping by: new products and new releases of existing products are needed to continue to drive sales, and a gap in the product pipeline means depriving the sales team of oxygen.
  2. Fewer projects can be executed. See 1, the sales team turns blue.
  3. More people are needed. Team members wearing multiple hats is not good, so more roles need to be created if the project workload is to remain high.
  4. Project management has to be improved. Money has to be spent on training and/or coaching.
  5. Development teams need to be shielded from sales and management pressures. This is a perceived problem when management feels the need to control development excessively in order to get anything out of them (and let's face it, they will view a development team operating as described above as extremely under-productive).
  6. People have to change. The most important thing about being Lean/Agile is its culture, without which a conducive environment can never be created, only simulated in short bursts when the business can afford the spare time, resources and money to do so.
Are there any answers? Perhaps one option is to transition one team. This is probably attractive to management, as it is only a toe in the water and they can pull the plug and go back to the old methods should it fail. The challenge here is likely to be providing the right leadership such that the team is properly shielded from existing external pressures. Additionally, if everyone's working on multiple projects, you can't do this overnight: you probably have to live with people being on many teams to start with and that slowly coming right over time.

What answers do we have to curry favour with management? Of course, we can promise them more productivity in future, but maybe we have to stick our necks out and offer them more productivity right now. If we can start showing management a clear advantage in Lean/Agile practices such as visual control, a clear value stream, better schedule estimation and so on, perhaps this would be sufficient to show greater productivity long before a project is finished.

All of this is of course dependent on changing that key person's mind. Depending on who they are (and who you are), this can be extremely challenging. They have to trust you, so that when you say you can improve things, they'll be confident enough to let you go against history and try it. How can you gain this trust in your ability to pull off something as managerially and technically challenging as Lean or Agile? Maybe you have to go ahead and Just Do It(TM) (see Fearless Change) and prove that you can with evidence that you just did.

Saturday, 19 September 2009

ESMAC 2009: My Best Yet

ESMAC 2009 was cut a little short for me when I had to come home last night, but it's still been my best so far in terms of the scientific proceedings. I can't comment on the social side as I missed the Gala dinner, and besides, Marseilles 2003 will take some beating. (The orthopaedic surgeons who re-opened the bar after the staff had left to serve us all Veuve Clicquot at 2am know who they are.)

Some really good technical papers, a large variety of well thought out clinical studies and a few excellent keynote talks gave the proceedings a nice balance. Adam Shortland never fails to impress/amuse in equal measure and his stint as a stand up comic is obvious when he's in full flow behind the podium. Congratulations to Adam for organising an excellent and well attended event.

So, off the back of the conference, I've a lot to think about and do over the next 12 months if I'm to hold to my word and do the right research and development to support the needs of the clinical movement analysis community.

Tuesday, 15 September 2009

The Ageing Population: Are Current Trends to be Trusted?

A few things I've been exposed to lately have given rise to the question:
"Is the ageing population trend to be trusted?"
My old supervisor, and joint #1 reason for my being a biomechanical engineer, Garth Johnson has organised a conference on engineering for the ageing population. This is a hot topic, as there is lots of data to suggest that the old will soon outnumber the young and that our increasingly long lives will put extra pressure on health and medical resources. That much is indisputable. Right?

Maybe it isn't. I also read a piece by Michael Blastland on population trends, which showed all attempts to predict future population to be basically guesses that rarely resemble reality. And this is only half the story for predicting the future split between old and young: mortality is the other key factor. Nudging me to question the statistics on this is a conversation I've had with my wife that goes along the lines of:
"Is our generation (or the next) going to live as long on average as the current elderly generation?"
Maybe we should consider this. Was the current generation of over-70s as obese on average as our generation is? I don't know, but I'd hazard a guess at no. Did the current set of over-60s smoke and drink as much as we do in our 30s? Again, I don't know, but maybe we should ask these questions when attempting to extrapolate from current trends. There are a lot of mechanisms "in the pipe" that will affect the health and mortality of the elderly population in two, three and four decades' time, and I'd suggest we won't understand them until they are revealed to us.

Now I'm not suggesting for a minute that we don't do our level best to cater for the ageing population: the elderly are already under-funded and under-cared-for. The ageing conference should be a great success (I hope) and generate lots of interest in engineering the health of the elderly. After all, we're preparing for our own futures by doing this.

Monday, 14 September 2009

Is Software Development Lean Enough?

A question has arisen while I've been reading The Toyota Way:
"Is Lean software development currently a million miles behind what is truly possible?"
I have asked myself this question because it turns out that many Western manufacturers that are believed to be lean - and indeed are accredited as such - turn out to be nowhere near when the experts weigh in. A particular example is given in the book of a North American manufacturer that had been awarded the Shingo Prize for its lean-ness and was later visited by Toyota's Supplier Support Centre. They slashed 93% of the supposedly lean production time. 93%! They were just miles off the mark.

So, I wonder if - despite believing we're on top of software development practices - we are going to be surprised by a completely left-field player wading in and showing us all up with defect free software written in 5% of the time for 10% of the cost at some point in the near future.

My bet is yes, we are, only perhaps not in such a dramatic manner. The reason I'm sure we're nowhere near the mark for true lean-ness is that the core cultural values required to successfully follow the example of TPS are just as poorly embodied in our software industry as they are in our manufacturing industry. However, software development being such an "immature profession" does give it a slight advantage in trying to move towards a lean culture, in that there are fewer entrenched ideals and practices that would need to be changed.

Lean Software Development is More than a Set of Tools: Still Missing the Point?

I read an interesting post on the NetObjectives blog this morning that said:
"Lean is more than a set of tools."
I think it was still missing the point a bit. Someone had commented that we should read the various books out there on the Toyota Production System (TPS). I'm currently reading The Toyota Way by Jeffrey Liker and completely agree. It seems that even when clever people like those at NetObjectives look beyond the "Lean Toolkit", they simply look for more tools from other areas and see the act of being "truly lean" as being a good tool selector. Yes, that's part of what TPS and Lean Manufacturing teaches us to do, but there are some massive fundamentals that come before any of this and they are:
  • A learning culture
  • Respect for people
  • Continuous improvement (Kaizen)
Without these core principles, selecting the right tools for the job is just going through the motions without really "feeling it". Selecting the right methods is a consequence of striving for continuous improvement, not the way to go about it, and continuous improvement can only work when everyone is allowed to develop to their maximum potential and create core quality and efficiency "from within" by learning from and improving working practices on the shop floor.

Don't get me wrong, I'm on board with Lean and Agile. One of the practices referred to in the "Lean Toolkit" is to map the value stream of your production process and this is undoubtedly hugely important in attempting to reduce end-to-end production time, assessing "one piece flow" and removing waste. However, I'd like to look at Lean and Agile as some of the things I should learn and try out in order to try and improve my personal practices rather than them being the end goal.

Monday, 7 September 2009

Agile Velocity: Speed or Direction?

I've heard quite a few people make this quip when presented with the concept of "velocity" in an Agile project. They're quite right, as it is rather a nonsense to use the term when the measure we are referring to is a scalar. However, it's rather an established term and I've never let it get in my way: I'm not one of the unlucky people for whom it simply causes a parse error from which they cannot recover.

But I was thinking today (odd, because I spent most of it singing songs to and building cup stacks with my 20-month-old son) about whether it would be possible to "expand" the Agile velocity measure a bit and make it a vector. Now, this isn't intended to be deadly serious, rather a tongue-in-cheek attempt to assuage the guilt of using the term velocity when I should know better. With this caveat in place, I'll proceed...

When a project starts, we have a big deficit: our pile of story points, code points, ideal days, whatever we have chosen. We're going to chew through this in a sensible order and keep track of our progress by measuring the rate at which we're doing it. The deficit can change if we re-estimate (not usually a good idea) or new features are added (commonplace), so the target advances towards us or recedes away accordingly. In addition to the usual changes to the work remaining, I've spent a fair bit of time trying to estimate the additional effort emergent bugs introduce to a project, mainly so I can estimate the factor I should apply to future estimates to account for bug-fixing. If we're being extremely tight in our project iterations, this maybe doesn't really figure, because we're not entitled to knock off the effort for a story until it's passed all user acceptance tests and QA. That's fine when it's possible, but it often isn't feasible to go through an entire QA cycle to get feedback about progress if the cycle takes 3 weeks and the project is 8 weeks long. It's also quite difficult for projects on legacy code bases, where introducing a new feature can cause bugs to arise in different areas. So, the idea I had is to introduce a second dimension orthogonal to "feature effort": "bug-fix effort".

Velocity is now a 2D vector in "feature effort" vs "bug-fix effort" space! Now, as I've already said, this isn't intended to be taken too seriously and there are flaws. Most importantly, feature effort and bug-fix effort aren't really orthogonal, they are collinear and dependent. To make schedule estimation stand up mathematically, we have to represent the magnitudes of the velocity vector and the distance to the target in Manhattan distance: Euclidean distance would artificially reduce the required effort to complete the project.

However, thinking about the two dimensions of velocity represented in this way could reveal some insight into the way our project is going, for example:
  1. During the early phases, the direction is entirely in feature effort and the target is on the feature effort axis: everything's fine!
  2. Later on in a project, the target travels in the bug-fix effort direction yet our velocity remains unchanged: maybe this is a warning to start applying some bug-fixing effort to avoid a large "course change" later.
  3. We focus on bugs excessively and our heading passes the target to the "sea of bugs" side: perhaps we are losing sight of the main game and will miss the feature target.
So, perhaps there are some indicators for a project manager in this way of looking at things, but I suspect it's more likely a bit too much of a joke to be useful. But if you're firmly attached to your "velocity" measure and desperate to dodge the ridicule of the pedants, then be my guest and try it.

Saturday, 5 September 2009

First to Market or Best in Market?

I was chatting with a friend yesterday about the pressure to be first to market with software products. I started to think about how you could quantify the benefit - or cost - of pushing something out the door early to make sure you're first out there.

Basically, there are gains to be made when you're without competition:
First to market profit = no. of deals x price x gross margin
Here, we're assuming that the number of deals is independent of your ability to sell: we'll say it's simply the number of customers wanting to buy.

Once competition enters the marketplace, there are further gains to be made, but they are dependent on your market share (of course now less than 100%):
"Normal" profit = no. of deals x market share x price x gross margin
Now, there's more going on in this simple equation than meets the eye, so we have to ask: what can we control in these equations?

First off, there's price. Now I don't know that much about marketing, but I do know that you generally get what you pay for, so the best product can usually be sold for the highest price. How do we make sure we have the best product? By putting in the effort and taking the time, which is likely to be at odds with the first to market strategy we often encounter!

Second, we can control our gross margin. Once a software product is shipping, the margin isn't 100%, as some people might think: software needs maintaining and supporting. And guess what, software that's rushed costs more to maintain and support!

The last thing we can control is market share, and this is extremely sensitive to the quality of the software product we're selling: again, an annoying, buggy product doesn't give your sales team the best chance of dominating.

So, what does this all add up to? (Sorry, multiply.) Well, it's probably impossible to substitute real numbers into these equations, but that's not really my point: the point is simply that there is a relationship between the potential benefits before and after a brief period of being a monopoly, and that this is affected considerably by the quality of a software product. The brief period of profitability enjoyed after getting buggy, difficult software out before anyone else can be abruptly ended by a late-arriving competitor product that is easier to use and crashes less.

So, which do you want? 100% market share for 1 year or 70% market share for life?

Friday, 4 September 2009

Vicon Launches Boujou 5

Today was the day we made our release build of Boujou 5 at Vicon after it came through QA: it went on sale earlier this week, but will be available for download on Monday and I'm rather proud to have been on the development team. I'm glad to have been involved for two reasons: firstly because it's been a very interesting product to work on - quite a diversion from my usual biomechanics product area - and secondly because I've learned a few things along the way.

The interesting things I learned along the way in terms of coding were about when to re-factor towards patterns, a serious lesson in why we should use RAII everywhere and some interesting uses for the Visitor pattern.

After working on projects of varying maturity, I've now seen code bases that are pattern-heavy and those without many pre-formed architectural patterns. If either type of code base needs modifying - particularly if the modifications are to be structurally significant - the pattern-heavy architecture is resistant and brittle, whereas the pattern-light code base is pliable and much more stable under change. The lesson I've learned from this is that patterns should come late: try not to create a pattern for single use; wait for the pattern to emerge from multiple client uses; re-factor when you write the same code three times, rather than twice.

I've been a fan of RAII since I learned about it and have come to realise that it's not just an exception safety paradigm. The code that can most benefit from RAII is many coders' worst nightmare: the OpenGL state machine. GL bugs where colours change, textures disappear, lighting goes wrong and so on are notoriously hard to debug, requiring quite a talent and intuition (which I must admit I don't possess). Why then don't we write a suite of RAII objects for doing glEnable and glDisable and so forth? If we did this, then we'd be guaranteed to leave the GL state machine in the same state across calls to any draw() function we could ever write. For general setup code, more scene graph-like hierarchies of draw code and nested transformations, we simply use the stack to nest RAII object lifetimes.

My last interesting encounter on our boujou development journey has been the Visitor pattern: surprising, because we're taught that this should be a seldom-used pattern. Now, I'm not talking about the full-blown Visitor pattern, I'm talking "visitor-lite", whereby a set of visited objects contribute to a visiting class's state via identical, primitive calls (rather than visitA, visitB, visitC and so on). This proves immensely useful when the visitor is polymorphic (either literally or by binding different classes and functions to a simple functor). Imagine a selection of entities that can contribute to a context menu, a UI panel or a script engine through exactly the same visitor framework. How about using the same drawing code to render entities in 2D and 3D, find their extents, cache draw lists and fill structures for algorithms?

So, despite the usual ups and downs of a commercial software project, boujou's been a pleasure to work on and it's been good for me. I hope it's good to its users.

Wednesday, 2 September 2009

North West Wine Blog

After finally getting my Dad on the internet to boost his network for wine sales, I'm going to see if he's ready to take the next step: a wine blog!

I've set him up a blog and popped on some example content, and I'm impressed with how well it seems to fit what I think the requirements are. A blog is a natural forum for people to leave their comments and opinions on a wide range of products such as wines. I like the reactions tool on Blogger, but the straightforward rating tool on Wordpress is just right for keeping a leader board of his clients' favourites. Filtering by grape and wine type is also possible using tags and categories, which would hopefully suit both browsing and specific searching.

So, I'll say it again, if you like a glass of wine and you live in the North West then my Dad's your man :)

Tuesday, 1 September 2009

Personal Kaizen

It's the first of the month today, so my calendar has reminded me to do two things: update my CPD and do "personal Kaizen". My CPD is relatively mundane and has been going on a long time, but my personal Kaizen is a recent thing that I've been doing only for the last few months.

Over the last month, I tried a couple of things to improve the way I do things and these were:
  • Outlook categories
  • Outlook flags
These were respectively practically pointless and a huge success for me.

The problem I found with Outlook categories is that I ended up spending more time managing the categories than using them. Email titles and bodies contain almost all of the relevant information and can be searched effectively. (I'm still quite prone to organising my mail into a folder structure, though, which is maybe at odds with this view, but because of this, I spend very little time searching.)

However, the Outlook flags were incredibly useful to me, particularly with the To-Do bar visible in the 2007 version. I used to create tasks in my task list and add appointments to my calendar, and I realised that 80% of the time, I was doing this in response to an email. Now, by marking an email as to-do today, tomorrow, next week, or by a specific date, I can do the same in a single click most of the time, then file the email away without going round the houses. This also fits in well with the way I think, because if I have an idea of something to do or stumble upon something important on the internet (particularly if I have the idea at home), I usually send myself a mail.

To build on this and to try some completely new things, this month I'll be trying to:
  • Use Outlook tasks to set deadlines with/for other people
  • Create "vision boxes" to create very obvious visionary metaphors for new projects

Monday, 31 August 2009

Wii-Hab
I found this article today: the first I've noticed that uses the term "Wii-Hab". (Unfortunately, Urban Dictionary's definition is not quite the same!) We need a new definition:
"The use of the Wii in rehabilitation"
I'm getting more and more interested (in principle, scientifically and commercially) in the possibility of better rehabilitation tools that can be shipped to a greater number of people.

As the population ages and the number of people having suffered a stroke or other injury requiring physical and neurological rehabilitation increases, it becomes more important to provide health care solutions that can be accessed by these people. So I believe in principle (as I've stated before) that lower cost, lower fidelity solutions may be the way forward and the Wii seems to be stepping into this gap. Scientifically, I'm keeping a close eye on this, as such technology is only going to become widespread when enough studies have been done that prove an improvement in recovery due to the use of the Wii. These studies are already out there and are increasing in number, so it's probably just a matter of time. Commercially speaking, I work for a company that provides motion capture solutions to the health care industry, so I'm naturally curious (and contractually obliged!) to spot the up-coming waves in health care and medical research and make sure we're in the water and moving forwards when they start to break.

However, I think there's more to rehabilitation than just Wii-Hab style tools, and I know that the bigger picture also includes higher technology, techniques more prescriptive than just game-play, and quantitative analysis of movement.

So it remains to be seen exactly where this will go and what therapy we'll be undergoing to recover from injury or surgery in the future, but it's undeniable that some forms of motion capture technology will be implicated in almost all cases as our yardstick for wellness in this field is effectively co-ordinated movement.

Tuesday, 25 August 2009

User Interaction: Guiding Visions

After reading The Inmates are Running the Asylum recently, I was surprised and happy to have a conversation with my brother-in-law at the weekend about how he (a user) sees the problems of user interaction. He had a very demanding but simple idea:
"A common interaction method for all of my electronic devices"
What he meant was that switching on his telly and selecting a channel, recording a program on his PVR, operating his iPod, starting programs on his laptop, selecting a destination on his sat-nav, looking up a contact in his phone (and so-on) should all share some common usage paradigms.

Apple have got this closer to right than most with iTunes and the iPod: if you can operate one, you can probably operate them both, and the iPod click wheel is (excuse my opinion) one of the best bits of industrial design of the last 20 years. Microsoft have tried to move console and PC gaming closer together (in terms of the user experience) by pushing developers towards a common interface device (the Xbox controller). On a side note, why oh why don't games just take over your PC and start when you put in the DVD?! Yes, I know, security, security... But it's my PC and surely I should be able to make the decisions about what I put in it! (Rant over, back to the main topic.)

So what could we do? Well, as individual developers, probably very little other than follow emerging patterns, but this is often contrary to the desire to be innovative. Perhaps only large companies that manufacture a range of devices can foster these visions (Apple, Sony, Microsoft and so on). Despite these potential barriers, I think that good user interaction design will begin to solve this problem. In order to design products such that the user experience is a positive one that engages them with tractable methods, aren't we trying to tap into some "universal interaction ideas" that pre-date software and electronic devices? I stress begin to solve the problem, because this idea of a core vision that underpins all interaction design is a Big Idea and requires coming up with ideas that transfer easily from PCs (that have a keyboard and mouse) to games consoles that have a controller, to phones with a much more limited physical interface and sat-navs with just a button or two.

Who knows, maybe such universal concepts may be over the horizon. I'd hedge my bets and say it's not immediately possible while technology companies are competing to be the one with the next big idea. Maybe the user utopia of common interaction is a commercial dystopia of complete domination by one or two manufacturers. One thing's for sure, we shouldn't stop trying on a product-by-product basis to improve our users' experience of software (and hardware).

Friday, 21 August 2009

Re-Factoring: Tidy away your Tools!

I had a slightly drunken chat with my good friend Paul on Wednesday night and told him my analogy for re-factoring as the software development equivalent of the 5S Seiso (Shine) mantra:
"Re-factoring is tidying away your tools at the end of the day"
He liked it, and saw it as a useful way of communicating our intent when we insist to our managers and stakeholders that we need scheduled time for re-factoring. So often the word "re-factoring" conjures up images of costly re-writes of large portions of the code for no perceived gain, and this is partially our fault for misusing the word when we should have said "re-write" in the first place. So ask anyone reticent to commit to any amount of re-factoring effort:
"Would you be happy if accounts didn't put anything back in filing?"
"Is it OK for engineering drawings to be left covered in construction lines?"
"Can kitchen staff let the washing up pile up until the end of the month?"
You could even ask them how often they would ask their children to tidy their rooms, but that's maybe a step too far!

Thursday, 20 August 2009

Agile Planning Application Beta including Google Visualization Integration...

I noticed this post a few days ago on the Agile Alliance LinkedIn group. Although the developers make it clear that it is mainly core components - not yet up to trading blows with XPlanner, Rally, VersionOne, Mingle etc. - it could be a very enticing prospect for those also using a customer relationship management (CRM) tool such as Salesforce, for the tight integration it would seem to offer.

What has become clear to me about tracking, estimating and planning (whether you're practicing fairly traditional project management, a light version of Agile or something as aggressively Agile as Scrum) is that the most valuable commodities are the hardest to capture: time and effort. Without accurate representation of the real-world effort designers, developers, testers and managers are expending on the project, we are powerless to predict progress and create reliable estimates in the future. Yet the simple act of recording effort is so easily and so often pushed aside by those doing the work. Once this task has been left for more than even a couple of days, most people's ability to remember exactly what they did is diminished practically to zero.

There are obviously clever ways to get people to enter these details (maybe even gather them automatically) and there are oppressive methods, such as detailed time sheets, that leave a distinctly sour taste in the mouth. Education must not be discounted of course, as enlightening everyone to the value of their actions, particularly to themselves, can go a long way. So, it is with eager anticipation that I wait to see how:
"...a very nice interface for entering time has been developed and will be part of the initial release"
I have my own ideas about an interface for recording project/feature/story/task time which I'll maybe talk about at a later date, but for now I'll wait and see what the Salesforce team have to offer.

Monday, 17 August 2009

Up-Front Design in Agile Projects

The last few things I've read have got me thinking about where design work goes in Agile project management and whether it is consistent with creating the most value for the end user.

Agile methodologies of course claim to deliver value and seem, by many measures, to be a significant improvement over the traditional waterfall approach at doing so. But are they the best? I have a few significant worries about Agile applied in its most rapid, aggressive forms and they are:
  1. People who make high level business decisions need good estimates of cost and schedule up-front to effectively plan and support a business strategy. There is a simple correlation between the amount of planning and design done and the quality of an estimate, which means better estimates possibly mean less agility.
  2. It is during the up-front design phase that important core interaction ideas should be conceived, rather than allowing iterated, piecemeal attempts at the GUI by the developers to calcify into the final product without a clear vision. True value to the end user lies in a usable product more than in its feature list.
  3. How long does iterative design take? We need to account for costs and time required to elaborate our user stories along the way and, just like code, some stories are stickier than others and can conceal enormous, costly problems.
(Before I go any further, I should make it clear that I realise that the perceived accuracy in an estimate driven by a very detailed up-front design is subject to a very large variance due to the unpredictable nature of the development process, so please don't stop reading because you assume I don't understand the value of Agile in controlling this!)

As with all problems concerning conflicting forces, the answer lies in a careful balance: a reasonable amount of up-front design (greater than the bare minimum) such that a clear vision for the software usage can be established and some estimates of cost and schedule can be made that are in line with the expectations of the stakeholders making the Big Decisions. I'd go so far as to say that the interaction vision should be non-negotiable after this point, but we need to accept that some of our design decisions may be called into question along the way and should be reconsidered with care, with reference to the initial design and those who created it. (Strategy and tactics: never lose sight of the main goal!)

As for accounting for the design time, this is as hard as estimating development time, so we should adopt the same methods for dealing with the uncertainty: just do it carefully (remembering to ask those that will do the design to create the estimate) and try to calibrate our estimating procedures over time with historical data.

Sunday, 16 August 2009

Best Reads of 2008-2009

I finished The Inmates Are Running the Asylum while we were away on holiday last week and thought it was a great read. As it's been my birthday recently, the usual bout of introspection got me thinking about which of the books I've read over the last year have been the best reads, have changed the way I think or do things, or have just made me happier in what I do.

To start with, The Inmates Are Running the Asylum by Alan Cooper is easily one of the best reads of the year for many reasons. First off, I think it will be one of those books that changes the way I do (or don't do) things: I'll try very hard in future to push interaction design up front on projects I can influence and try to persuade it into the hands of those most qualified to do it. I'll undoubtedly still be expected to do some interaction design myself (sorry Alan!) but I'll be using personas much more to help me do this. Secondly, it was a very enjoyable read and quite hard to put down. There's a strong possibility that I enjoyed it because I sympathise with the author's strongly held views about the nature of software development and software developers in general, though, and I realise this makes my opinion quite biased.

I read The Pragmatic Programmer by Andrew Hunt and David Thomas about a year ago and thoroughly enjoyed it. This book falls into the "made me happier in what I do" category and I specifically remember writing a quick review on our wiki at work, recommending this book for anyone feeling a bit disillusioned with software development. It's an excellent compendium of ideas, practices and patterns for becoming a successful developer that will probably not age. One slight caveat is that you must forgive the authors' obvious lean towards command line tools and away from anything with a GUI, as being "part of their upbringing" on Unix.

User Stories Applied: For Agile Software Development was the second of Mike Cohn's books that I have read and was just as good as, if not better than, the first (Agile Estimating and Planning). What I liked about this book was the clear summary at the end of each chapter, with a run-down of everyone's responsibilities in the process. I'd recommend this book to developers, project managers and product managers to help them understand how to deliver value through an Agile project.

Finally, 5S Kaizen in 90 Minutes by Andrew Scotchmer was, as the title suggests, an enjoyable quick read that has whetted my appetite for understanding the values that make Toyota one of the most successful manufacturers in the world.

Friday, 7 August 2009

Holiday Reading

We're off to (hopefully) sunny Cornwall for a week now, far from a PC. My (beach?) reading material is The Inmates Are Running the Asylum by Alan Cooper. I'm one chapter in and finding it an excellent read, and my goodness does this book have An Opinion!

I've already started noticing the kind of behaviour this book aims to eradicate that all of us developers are guilty of and I'm wondering what it will take to attenuate our high-held opinions of how software should work and prevent them leaking through into the (un)usability of software.

I can remember more than one occasion on which I've thought about the way a feature should work and concluded that it should be a true reflection of the underlying implementation. Talk about arse-backwards! The fatal flaw in this assumption was of course that the user's model of the underlying process will be the same as mine, if they have one at all! My personal model will have been completely skewed by my attempt to implement the feature: I will have been effectively forcing my own prejudices on the user.

So, for any work I've produced that causes users pain: I apologise and promise to try not to do it again. You're the king in this land and I'll do my level best to listen to you more in future.

Tuesday, 4 August 2009

Software Development Schedule Estimation After Feature Complete

Today, I've been thinking about what happens to project schedule estimation after feature complete. Up until the feature complete date, we have a feature backlog and a task list with estimated effort to work with. We record how much actual time we spend on each task and build up a picture of the probable rate(s) at which we can complete the remaining tasks. We are of course relying on a number of assumptions to let these simple extrapolations guide us, but it gives us a range of feasible project outcomes and provides at least some assistance in effective project management.
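
The extrapolation described here can be sketched in a few lines. Everything below (the task names, the hour figures and the choice of best/worst observed per-task velocity as the feasible range) is an invented illustration, not a prescription:

```python
# Forecast remaining schedule from observed velocity.
# All task names and figures below are hypothetical examples.

completed = {          # task -> (estimated hours, actual hours)
    "parser":    (8, 10),
    "exporter":  (5, 6),
    "settings":  (12, 18),
}
remaining_estimates = [6, 9, 4, 15]   # estimated hours for the backlog

# Velocity: how many estimated hours we burn per actual hour worked.
est_total = sum(e for e, _ in completed.values())
act_total = sum(a for _, a in completed.values())
velocity = est_total / act_total      # < 1.0 means we run over estimate

# Feasible range: apply the best and worst observed per-task velocities.
per_task = [e / a for e, a in completed.values()]
best, worst = max(per_task), min(per_task)

backlog = sum(remaining_estimates)
print(f"expected: {backlog / velocity:.1f}h")    # expected: 46.2h
print(f"range: {backlog / best:.1f}h - {backlog / worst:.1f}h")
```

A real tracking tool would use many more completed tasks and something better than a crude min/max range, but the principle of pushing the backlog through observed burn rates is the same.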

But what happens when you pass feature complete? There is still a pile of testing to be done on the features and the bug list begins to grow. We know this is going to happen and we've (hopefully) planned for this final phase of production in some form. But how do we know whether or when we're going to get the work finished? If product management need to know not to announce a December release, we'd better bloody well tell them now!

Before I start on the ins-and-outs, I must first admit that I realise that a truly Agile and iterative approach will have rolled the entire develop-test-bug fix cycle in over the course of the project and "feature complete" only actually happens when the last iteration is complete, which - like all previous iterations - means the last batch of features has passed all acceptance tests and any bugs thrown up have been addressed. However, this doesn't render my quandary a moot point, it simply spreads the worry throughout the whole schedule, breaking it down into smaller pieces of unknown bug-fix work.

So what's the problem? Well, when we extrapolate our velocity on features forwards, we are dealing with a known, bounded set of tasks in the future. We also have estimates of how long each of these tasks will take. Most variability in estimated vs real world effort can be accounted for by random fluctuations plus (or minus) a fixed bias. We seek to calibrate the bias in the estimation process and rely on a large number of measurements to hopefully capture the inherent variance in the estimate. But for bug-fixes - whether rolled into self contained iterations, lumped at the end or any mixture of the two - we don't know how long each work item will take. Any effort to guess (and that's all it would be: calling this estimation would be a terrible misuse of the word!) would probably be subject to enormous variability, such that any range of estimates this might generate would probably be useless in the realms of managing the project to a finish.

What options do we have? Well, maybe the traditional approach of simply attacking the critical bugs first, followed by the important ones and so on is the best we can do: we simply get as much done by the end date as humanly possible and make a strategic decision to ship or delay based on what's outstanding. The problem here is that you find out you're not going to make it when it's too late: fait accompli, big bollocking for the project manager, we all look stupid because we don't release on the day we announced to the world.

Or perhaps we should make some attempt to estimate the bugs. Perhaps bring legacy data to bear: this might tell us what proportion of initial development time bug fixes tend to consume in certain code areas: low level library functions, core application components, application business logic, GUI and so on. Legacy data could even predict how many bugs we might expect to be raised before any are found.

My feeling is that using Agile estimation tools is at least worth a try: make a guess at the time it will take to fix bugs, with a default of, say, 2 bugs per day for minor ones, but use the legacy data you have to guide you. The data you have gathered for the complete features is perhaps an excellent candidate: take sub-sets of the data and see which code areas and features were easy (came in under estimate), which were unpredictable (large variance in velocity), and which were hard (consistently over estimate), and use this to generate (or perhaps bias) your bug-fix effort estimates. I suggest this because it's odds on that a feature that was particularly sticky to code initially is more likely to contain sticky bugs than one that was trivial. There will of course be exceptions to the rule, but the point is to try and create some rules that capture a large portion of the estimates.
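
As a rough sketch of biasing bug-fix guesses by an area's historical "stickiness" (all code areas and figures here are hypothetical, and a real scheme would draw on far more history):

```python
# Bias bug-fix guesses by how "sticky" each code area proved during
# feature development. All areas and figures are invented for illustration.

history = {   # area -> list of (estimated hours, actual hours) per feature
    "core": [(10, 11), (8, 9)],
    "gui":  [(6, 12), (4, 9)],   # consistently over estimate: sticky
}

def overrun(area):
    """Average actual/estimated ratio for an area (1.0 = spot on)."""
    pairs = history[area]
    return sum(a / e for e, a in pairs) / len(pairs)

def bugfix_estimate(area, base_guess_hours):
    """Scale a naive bug-fix guess by the area's historical overrun."""
    return base_guess_hours * overrun(area)

print(bugfix_estimate("core", 4))   # close to the naive guess
print(bugfix_estimate("gui", 4))    # inflated for the sticky area
```

The default guess (4 hours here) could come from the "2 minor bugs per day" rule of thumb above; the scaling factor is where the legacy data earns its keep.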

Of course I don't know the answer here: I simply suggest using the Agile estimation tools we already have to help us with project management decisions during this period rather than surrendering to fate. Maybe the testing phase of a project is just too short to start getting any useful answers in some cases. Maybe the inherent variability in the estimates is just too great. But it's worth a go. Isn't it?

Monday, 3 August 2009

Agile Documentation and Passing Your Audit

Or should this be called "Agile Documentation versus Passing your Audit"? Remember the Agile Manifesto:
"Working software over comprehensive documentation."
(Please do not take this too literally: I understand that the Agile Manifesto is not against documentation, it suggests that we expend effort elsewhere rather than on unnecessary documents that no-one reads).

There's been some discussion lately on the Agile Alliance LinkedIn group about documentation in Agile software development which has been all a bit high brow for me (a deficiency of concentration on my part), because I want real, concrete answers to questions and useful "do this, not that" advice about how to get documentation out of an Agile software development process that will satisfy our ISO auditor.

So, in the absence of concrete answers, I suppose I'd better start thinking of some for myself...

First of all, in the interest of keeping documentation to a minimum, let's enumerate the things we need it to do for us, from both a development perspective and an audit perspective. From the development perspective, it needs to:
  • Communicate the over-arching vision (perhaps optional, if it's communicated in some other way?).
  • Capture the value requirements so we build the right thing and test functionality against genuine user expectations.
  • Record our progress (during designing, coding and testing) such that we can effectively plan and manage the project.
From our auditor's point of view (and I don't claim to know this inside out: this will be mostly conjecture), there are two main categories of document:
  1. The description of the processes we use to design, develop and test our software.
  2. The instances of various documents for specific projects that illustrate that project's state and history.
The former set of documents is in a much slower state of flux, but it is still in motion (if we practice Kaizen, that is). Let's leave that set of documents out of this discussion and come back to it some other time (although I realise that a process audit is going to look at both sets and ensure that the project documents are a "product of" the process specification, rather than some ad-hoc jumble that bears no resemblance to your promised quality controlled production system).

The second set of documents try to capture a living process (which is why it's so hard, and why Agile suggests not wasting effort creating documents that are immediately out of date!) and they overlap considerably with our requirements as software engineers, test engineers, project managers and quality managers. So to ensure we only do what's necessary, it's sensible to figure out what the minimum requirements are to pass the audit: If a set of index cards with user stories on is satisfactory, then use it. If your spreadsheet of test cases will do, stick with it. Taken lots of pictures of whiteboards? Check if that's OK to archive and do no more than is necessary. There is rarely a genuine need for a part of a quality process to be captured in a full blown, expensive to produce and maintain text document, so avoid them where you can.
The big idea for me here is that we're not actually interested in documents per se, rather an information repository.
Many of the artifacts I referred to in the previous paragraph are great for capturing a moving target: spreadsheets are meant to be filled out, modified, filtered and transformed. Taking a picture of a whiteboard captures an extremely accurate record, not prone to the scribe's opinion. Index cards are perfect for user stories, can be easily marked with start/end dates, developer, tester and can be moved around to illustrate their current place in the process with great ease and many software versions of this popular system are now in existence. There are two types of information being captured here:
  • The current state of affairs: the current set of features and their acceptance tests; development task breakdown and costings for each; un-started, in progress and "code complete" tasks; the current set of QA tests and which have passed, failed or not been run; the current prioritized bug list.
  • The history of the project: when features were added or removed; re-estimations; when coding tasks were started and finished; particular obstacles and delays; tests run for older builds; time spent bug fixing; records of meetings (minutes, photos, archived flip charts).
The current state of affairs is likely the information we'll use to plan and manage our projects, whereas the history is very much an audit tool. The two are not at odds though: it's easy to allow one to transform into the other or to have the history generated simply as backups or deltas to the current documents. Something I've come to realise more and more of late is that recording the history of your projects is not just a necessary evil to satisfy your auditor, but it is a vital tool in estimating and planning future projects.

So, with a good idea of the kinds of information we need to capture and the methods we want to use to manage it, it's not very hard to envision a system that contains all of the information and has different views upon it to allow interactions in the ways required. In fact this is probably yesterday's news, as tools like XPlanner and Version One provide a significant portion of this functionality. The things I'd say are important to capture are:
  • Vision: a paragraph of text; perhaps a few photos of your "vision box".
  • Requirements and design activities: the current feature set, together with user acceptance tests; design meeting records and discussion threads; mock-ups. Include all changes to this, including features that are removed.
  • Estimation: coarse preliminary estimates; any secondary detailed breakdowns and estimates; the current task breakdown and estimates. Again, include all changes made, including re-estimation.
  • Coding Activity: the current task backlog; current tasks in progress, start/end dates and revision numbers; assigned developers. Record all true effort expended on these tasks, as this gives you the true picture of progress required for ongoing estimation and future project estimation.
  • Testing/QA Activity: the current set of QA tests; features in testing with start/end dates; assigned testers; bugs created, addressed and resolved. Recording both the testing and re-development effort is again a vital activity.
  • Project Management Activity: strategic planning; project meeting records; project progress metrics; project resources; risk assessments. Again, changes to the information should be recorded and it is very important that the nature of strategic decisions can be understood at a later date from an audit perspective.
Imagine all of this data in a flat database, for simplicity's sake, say a single XML file. Now could someone write some XSLT and a bit of CSS to create a very nice looking specification based on the features and their acceptance tests? They most definitely could. The same to give a dashboard overview of development and testing progress? Easy too. The view for a group of developers and tasks, including some schedule predictions? You get my point...
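
As a minimal sketch of that idea: the "database" below is a tiny invented XML fragment, and two throwaway views are generated from it. I've used Python's standard xml.etree here rather than XSLT and CSS, purely to keep the sketch self-contained; the schema is made up for illustration:

```python
# One flat XML record of the project, two "views" generated from it.
# The schema and feature names below are invented purely for illustration.
import xml.etree.ElementTree as ET

project_xml = """
<project name="Demo">
  <feature id="f1" title="Login" status="done"/>
  <feature id="f2" title="Export" status="in-progress"/>
  <feature id="f3" title="Reports" status="todo"/>
</project>
"""

root = ET.fromstring(project_xml)

# View 1: a specification-style listing of the current feature set.
spec = "\n".join(f"- {f.get('title')}" for f in root.iter("feature"))

# View 2: a dashboard-style progress summary.
done = sum(1 for f in root.iter("feature") if f.get("status") == "done")
total = len(list(root.iter("feature")))
dashboard = f"{done}/{total} features complete"

print(spec)
print(dashboard)   # 1/3 features complete
```

The same single source could just as easily feed an XSLT-plus-CSS rendering for the auditor's version; the point is that every view is disposable because the data, not the document, is the artifact of record.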

So, capturing the data is the key thing, not the creation of "documents": just make sure the data gets recorded and come up with some way of transforming it into media to suit the various requirements of your internal stakeholders and your auditors.

Well, I've rambled on for a very long time, so I'll leave it at that for now. I'll probably come back to the nature of the process documentation at a later date.

Sunday, 2 August 2009

Measuring & Sensing in Medicine & Health Seminar

Excuse the shameless plug, but the EMHD are running a seminar this October - hopefully the first in a series on "Measurement and Sensing" - called Capturing Motion & Musculoskeletal Dynamics at the IMechE headquarters.

I've put quite a lot into this, so I'm hoping it will be a success, and this will be the first time I've chaired an event like this! Posting links to this is my way of trying to improve traffic to the event website and hopefully boost registrations. I'm hoping to start an EMHD group on LinkedIn so we can hold other discussions and gain feedback from our audience, but for now I'm also posting events off my own profile.