Tuesday, 29 September 2009

Making Error Codes Mandatory

Lately, I've been looking over some code I re-factored a while ago during a monumental code merge: I'd replaced a load of functions that had boolean returns and error strings returned by reference argument with simple integer error code returns. (If you think this is wrong, then you must be one of the people writing hard-coded error strings into core libraries, giving your app designers and document writers headaches with spelling mistakes that take an age to remove, and preventing them from localising your apps or having different strings in different deployments.)

Anyway, back to the main point: I considered using mandatory error codes at the time so clients couldn't ignore them. There's a Dr. Dobb's article that goes along similar lines, but I wanted to make an important extension to the logical design.

The problem with the idea of the mandatory error code is that it throws on destruction if it hasn't been "queried" (had the embedded error code extracted), which in exception safety terms is the worst thing we can do! So, the thing we have to do is make sure they don't live (long) on the stack. The way I addressed this is to make them non-copyable, so they can construct from their embedded type only:

template< typename T >
class mandatory_error_code
{
public:
    mandatory_error_code( const T & i_value )
    : m_value( i_value )
    , m_throw( true )
    {
    }
    ~mandatory_error_code()
    {
        if( m_throw )
        {
            throw std::logic_error( "un-handled error code" );
        }
    }
    operator const T() const
    {
        m_throw = false;
        return m_value;
    }
private:
    mandatory_error_code( const mandatory_error_code & );
    mandatory_error_code & operator=( const mandatory_error_code & );
    const T m_value;
    mutable bool m_throw;
};

There are two nice advantages to this approach:
  1. The only place you can write mandatory_error_code<> is as a return type, and it must be constructed from a convertible type. This makes porting code really easy: all you do is change your function signatures.
  2. Client code that ignores an error code (by neither copying it into a value nor comparing it to a convertible type) immediately starts throwing on the line where the function returning the code is called. It is impossible to make the class live longer and risk run-time termination should a different exception be thrown.
The original Dr. Dobb's article on this has a non-throwing error code class that wraps the value, but I'm not so sure this is required: why wrap an error code in another class when that class does nothing? If you want to capture the error code for later comparison, just use the "original" type and sacrifice perhaps a tiny bit of readability to save a whole extra class. There is, however, still a way to compromise your run time, and that is by throwing from the copy constructor of the embedded type. Despite not being protected by design here, we are most likely protected by circumstance: if the copy throws, it will throw during the construction of mandatory_error_code, so the destructor will never be reached.

There is another "Modern C++" idea that you could use to extend this class: replace the hard-coded throw with a policy (as a template argument). But until I need more than one policy, I'll assume "YAGNI" and leave it as it is.

Monday, 21 September 2009

Becoming Agile? But How Do I Stop Being Non-Agile?

Books on Lean and Agile are stuffed with examples of companies that are and companies that aren't. There seems to be plenty of advice about the core values of Lean/Agile and even some advice on how to cultivate them, but there is very little advice about how to "stop being Non-Agile".

How on earth do you go from every developer 110% loaded, with 3 or 4 projects each and each person wearing the analyst, architect, developer, test writer and project leader hats, to 20% slack, 1 project each and proper role allocation, when management are tearing down the door every other day with the next Big Thing that's going to make the company money?

Now I realise that an answer many practitioners would give would be:
"Get management buy-in"
Great, that seems like good advice, but those of us who aren't on a beautifully functioning Agile team know that when we suggest a change in working practices, the answer's going to be:
"Fine, we can do that when we've finished project X"
Of course, project Y comes along long before X is finished. X 1.0 is rubbish anyway and needs an X 1.0.1 service release, backwards compatibility of X with W rears its ugly head, project Z is already in the marketing pipe (and if you're unlucky, it's already been sold).

It seems that there are quite a few things that may have to happen that are - in the short to medium term at least - extremely incompatible with business needs:
  1. New projects cannot be started. This is terrible news for a company that is scraping by: new products and new releases of existing products are needed to continue to drive sales and a gap in product production means depriving the sales team of oxygen.
  2. Fewer projects can be executed. See 1, the sales team turns blue.
  3. More people are needed. Team members wearing multiple hats is not good, so more roles need to be created if the project workload is to remain high.
  4. Project management has to be improved. Money has to be spent on training and/or coaching.
  5. Development teams need to be shielded from sales and management pressures. This is a perceived problem when management feels the need to control development excessively in order to get anything out of them (and let's face it, they will view a development team operating in the manner described above as extremely under-productive).
  6. People have to change. The most important thing about being Lean/Agile is its culture, without which a conducive environment can never be created, only simulated in short bursts as the business can afford the spare time, resources and money to do so.
Are there any answers? Perhaps one option is to transition one team. This is probably attractive to management, as it is only a toe in the water and they can pull the plug and go back to the old methods should it fail. The challenge here is likely to be providing the right leadership such that the team is properly shielded from existing external pressures. Additionally, if everyone's working on multiple projects, you can't do this overnight: you probably have to live with people being on many teams to start with and let that slowly come right over time.

What answers do we have to curry favour with management? Of course, we can promise them more productivity in future, but maybe we have to stick our necks out and offer them more productivity right now. If we can start showing management a clear advantage in Lean/Agile practices such as visual control, a clear value stream, better schedule estimation and so on, perhaps this would be sufficient to show greater productivity long before a project is finished.

All of this is of course dependent on changing that key person's mind. Depending on who they are (and who you are), this can be extremely challenging. They have to trust you, so that when you say you can improve things, they'll be confident enough to let you go against history and try it. How can you gain this trust in your abilities to pull off something as managerially and technically challenging as Lean or Agile? Maybe you have to go ahead and Just Do It(TM) (see Fearless Change) and prove that you can, with evidence that you just did.

Saturday, 19 September 2009

ESMAC 2009: My Best Yet

ESMAC 2009 was cut a little short for me when I had to come home last night, but it's still been my best so far in terms of the scientific proceedings. I can't comment on the social side as I missed the Gala dinner, and besides, Marseilles 2003 will take some beating. (The orthopaedic surgeons who re-opened the bar after the staff had left to serve us all Veuve Clicquot at 2am know who they are.)

Some really good technical papers, a large variety of well thought out clinical studies and a few excellent keynote talks gave the proceedings a nice balance. Adam Shortland never fails to impress/amuse in equal measure and his stint as a stand up comic is obvious when he's in full flow behind the podium. Congratulations to Adam for organising an excellent and well attended event.

So, off the back of the conference, I've a lot to think about and do over the next 12 months if I'm to hold to my word and do the right research and development to support the needs of the clinical movement analysis community.

Tuesday, 15 September 2009

The Ageing Population: Are Current Trends to be Trusted?

A few things I've been exposed to lately have given rise to the question:
"Is the ageing population trend to be trusted?"
My old supervisor and joint #1 reason for being a biomechanical engineer Garth Johnson has organised a conference on engineering for the ageing population. This is a hot topic, as there is lots of data to suggest that the old will soon outnumber the young and that our increasingly long lives will put extra pressure on health and medical resources. That much is indisputable. Right?

Maybe it isn't. I also read a piece by Michael Blastland on population trends, which showed all attempts to predict future population to be basically guesses that rarely resemble reality. This is only half the story for predicting the future split between old and young: mortality is the other key factor. And nudging me to question the statistics on this is a conversation I've had with my wife that goes along the lines of:
"Is our generation (or the next) going to live as long on average as the current elderly generation?"
Maybe we should consider this. Was the current generation of over 70s as obese on average as our generation is? I don't know, but I'd hazard a guess at no. Did the current set of over 60s smoke and drink as much as we do in our 30s? Again, I don't know, but maybe we should ask these questions when attempting to extrapolate figures from current trends. There are a lot of mechanisms that will affect the health and mortality of the elderly population in two, three and four decades "in the pipe", that I'd suggest we won't understand until they are revealed to us.

Now I'm not suggesting for a minute that we don't do our level best to cater for the ageing population: the elderly are already under-funded and under-cared-for. The ageing conference should be a great success (I hope) and generate lots of interest in engineering the health of the elderly. After all, we're preparing for our own futures by doing this.

Monday, 14 September 2009

Is Software Development Lean Enough?

A question has arisen while I've been reading The Toyota Way:
"Is Lean software development currently a million miles behind what is truly possible?"
I have asked myself this question because it turns out that many Western manufacturers that are believed to be lean - and indeed are accredited as such - turn out to be nowhere near when the experts weigh in. A particular example is given in the book of a North American manufacturer that had been awarded the Shingo Prize for its lean-ness and was latterly visited by Toyota's Supplier Support Centre. They cut the supposedly lean production time by 93%. 93%! It was just miles off the mark.

So, I wonder if - despite believing we're on top of software development practices - we are going to be surprised by a completely left-field player wading in and showing us all up with defect free software written in 5% of the time for 10% of the cost at some point in the near future.

My bet is yes, we are, only perhaps not in such a dramatic manner. The reason I'm sure we're nowhere near the mark for true lean-ness is that the core cultural values required to successfully follow the example of TPS are just as poorly embodied in our software industry as they are in our manufacturing industry. However, software development being such an "immature profession" does give it a slight advantage in trying to move towards a lean culture, in that there are fewer entrenched ideals and practices that would need to be changed.

Lean Software Development is More than a Set of Tools: Still Missing the Point?

I read an interesting post on the NetObjectives blog this morning that said:
"Lean is more than a set of tools."
I think it was still missing the point a bit. Someone had commented that we should read the various books out there on the Toyota Production System (TPS). I'm currently reading The Toyota Way by Jeffrey Liker and completely agree. It seems that even when clever people like those at NetObjectives look beyond the "Lean Toolkit", they simply look for more tools from other areas and see the act of being "truly lean" as being a good tool selector. Yes, that's part of what TPS and Lean Manufacturing teaches us to do, but there are some massive fundamentals that come before any of this and they are:
  • A learning culture
  • Respect for people
  • Continuous improvement (Kaizen)
Without these core principles, selecting the right tools for the job is just going through the motions without really "feeling it". Selecting the right methods is a consequence of striving for continuous improvement, not the way to go about it, and continuous improvement can only work when everyone is allowed to develop to their maximum potential and create core quality and efficiency "from within" by learning from and improving working practices on the shop floor.

Don't get me wrong, I'm on board with Lean and Agile. One of the practices referred to in the "Lean Toolkit" is to map the value stream of your production process and this is undoubtedly hugely important in attempting to reduce end-to-end production time, assessing "one piece flow" and removing waste. However, I'd like to look at Lean and Agile as some of the things I should learn and try out in order to try and improve my personal practices rather than them being the end goal.

Monday, 7 September 2009

Agile Velocity: Speed or Direction?

I've heard quite a few people make this quip when presented with the concept of "velocity" in an Agile project. They're quite right, as it is rather a nonsense to use the term when the measure we are referring to is a scalar. However, it's rather an established term and I've never let it get in my way: I'm not one of the unlucky people for whom it simply causes a parse error from which I cannot recover.

But I was thinking today (odd, because I spent most of it singing songs to and building cup stacks with my 20-month-old son) about whether it would be possible to "expand" the Agile velocity measure a bit and make it a vector. This isn't intended to be deadly serious, rather a tongue-in-cheek attempt to assuage the guilt of using the term velocity when I should know better. With this caveat in place, I'll proceed...

When a project starts, we have a big deficit: our pile of story points, code points, ideal days, whatever we have chosen. We're going to chew through this in a sensible order and keep track of our progress by measuring the rate at which we're doing it. The deficit can change if we re-estimate (not usually a good idea) or if new features are added (commonplace), so the target advances towards us or recedes away accordingly.

In addition to the usual changes to the work remaining, I've spent a fair bit of time trying to estimate the additional effort emergent bugs introduce to a project, mainly so I can estimate the factor I should apply to future estimates to account for bug-fixing. If we're being extremely tight in our project iterations, this maybe doesn't really figure, because we're not entitled to knock off the effort for a story until it's passed all user acceptance tests and QA. That's fine when it's possible, but it often isn't feasible to go through an entire QA cycle to get feedback about progress if the cycle takes 3 weeks and the project is 8 weeks long. It's also quite difficult for projects on legacy code bases, where introducing a new feature can cause bugs to arise in different areas. So, the idea I had is to introduce a second dimension orthogonal to the "feature effort": "bug-fix effort".

Velocity is now a 2D vector in "feature effort" vs "bug-fix effort" space! Now, as I've already said, this isn't intended to be taken too seriously and there are flaws. Most importantly, feature effort and bug-fix effort aren't really orthogonal: they are correlated and dependent. To make schedule estimation stand up mathematically, we have to measure the magnitudes of the velocity vector and the distance to the target in Manhattan distance: Euclidean distance would artificially reduce the required effort to complete the project.

However, thinking about the two dimensions of velocity represented in this way could reveal some insight into the way our project is going, for example:
  1. During the early phases, the direction is entirely in feature effort and the target is on the feature effort axis: everything's fine!
  2. Later on in a project, the target travels in the bug-fix effort direction yet our velocity remains unchanged: maybe this is a warning to start applying some bug-fixing effort to avoid a large "course change" later.
  3. We focus on bugs excessively and our heading passes the target to the "sea of bugs" side: perhaps we are losing sight of the main game and will miss the feature target.
So, perhaps there are some indicators for a project manager in this way of looking at things, but I suspect it's a bit too much of a joke to be useful. But if you're firmly attached to your "velocity" measure and are desperate to escape the ridicule of the pedants, then be my guest and try it.

Saturday, 5 September 2009

First to Market or Best in Market?

I was chatting with a friend yesterday about the pressure to be first to market with software products. I started to think about how you could quantify the benefit - or cost - of pushing something out the door early to make sure you're first out there.

Basically, there are gains to be made when you're without competition:
First to market profit = no. of deals x price x gross margin
Here, we're assuming that the number of deals is independent of your ability to sell: we'll say it's simply the number of customers wanting to buy.

Once competition enters the marketplace, there are further gains to be made, but they are dependent on your market share (of course now less than 100%):
"Normal" profit = no. of deals x market share x price x gross margin
Now, there's more going on in this simple equation than meets the eye, so we have to ask: what can we control in these equations?

First off, there's price. Now I don't know that much about marketing, but I do know that you generally get what you pay for, so the best product can usually be sold for the highest price. How do we make sure we have the best product? By putting in the effort and taking the time, which is likely to be at odds with the first to market strategy we often encounter!

Second, we can control our gross margin. Once a software product is shipping, it's not 100% as some people might think, as software needs maintaining and supporting. And guess what, software that's rushed costs more to maintain and support!

The last thing we can control is market share, and this is extremely sensitive to the quality of the software product we're selling: again, an annoying, buggy product doesn't give your sales team the best chance of dominating.

So, what does this all add up to? (Sorry, multiply.) Well, it's probably impossible to substitute real numbers into these equations, but that's not really my point: the point is simply that there is a relationship between the potential benefits before and after a brief period of being a monopoly, and that this is affected considerably by the quality of a software product. The brief period of profitability enjoyed after getting buggy, difficult software out before anyone else can be abruptly ended by a late-arriving competitor product that is easier to use and crashes less.

So, which do you want? 100% market share for 1 year or 70% market share for life?

Friday, 4 September 2009

Vicon Launches Boujou 5

Today was the day we made our release build of Boujou 5 at Vicon after it came through QA: it went on sale earlier this week, but will be available for download on Monday and I'm rather proud to have been on the development team. I'm glad to have been involved for two reasons: firstly because it's been a very interesting product to work on - quite a diversion from my usual biomechanics product area - and secondly because I've learned a few things along the way.

The interesting things I learned along the way in terms of coding were about when to re-factor towards patterns, a serious lesson in why we should use RAII everywhere and some interesting uses for the Visitor pattern.

After working on projects of varying maturity, I've now seen code bases that are pattern-heavy and those without many pre-formed architectural patterns. When either type of code base needs modifying - particularly if the modifications are structurally significant - the pattern-heavy architecture is resistant and brittle, whereas the pattern-light code base is pliable and more stable. The lesson I've learned from this is that patterns should come late: try not to create a pattern for a single use; wait for the pattern to emerge from multiple client uses; re-factor when you write the same code three times, rather than twice.

I've been a fan of RAII since I learned about it and have come to realise that it's not just an exception safety paradigm. The code that can most benefit from RAII is many coders' worst nightmare: the OpenGL state machine. GL bugs where colours change, textures disappear, lighting goes wrong and so on are notoriously hard to debug, requiring quite a talent and intuition (which I must admit I don't possess). Why then don't we write a suite of RAII objects for glEnable, glDisable and so forth? If we did this, we'd be guaranteed to leave the GL state machine in the same state across calls to any draw() function we could ever write. For general setup code, more scene graph-like hierarchies of draw code and nested transformations, we simply use the stack to nest RAII object lifetimes.

My last interesting encounter on our boujou development journey has been the Visitor pattern: surprising, because we're taught that it should be a seldom-used pattern. Now, I'm not talking about the full-blown Visitor pattern, I'm talking "visitor-lite", whereby a set of visited objects contribute to a visiting class's state via identical, primitive calls (rather than visitA, visitB, visitC and so on). This proves to be immensely useful when the visitor is polymorphic (either literally or by binding different classes and functions to a simple functor). Imagine a selection of entities that can contribute to a context menu, a UI panel or a script engine through exactly the same visitor framework. How about using the same drawing code to render entities in 2D and 3D, find their extents, cache draw lists and fill structures for algorithms?

So, despite the usual ups and downs of a commercial software project, boujou's been a pleasure to work on and it's been good for me. I hope it's good to its users.

Wednesday, 2 September 2009

North West Wine Blog

After finally getting my Dad on the internet to boost his network for wine sales (northwestwineguild.co.uk and winedrinkersnorthwest.co.uk), I'm going to see if he's ready to take the next step: a wine blog!

I've set him up a Wordpress.com blog and popped on some example stuff, and I'm impressed with the way it seems to fit what I think the requirements are. A blog is a natural forum for people to leave their comments and opinions on a wide range of products such as wines. I like the reactions tool on Blogger, but the straightforward rating tool on Wordpress is just right for keeping a leader board of his clients' favourites. Filtering by grape and wine type is also possible using tags and categories, which would hopefully suit both browsing and specific searching.

So, I'll say it again, if you like a glass of wine and you live in the North West then my Dad's your man :)

Tuesday, 1 September 2009

Personal Kaizen

It's the first of the month today, so my calendar has reminded me to do two things: update my CPD and do "personal Kaizen". My CPD is relatively mundane and has been going on a long time, but my personal Kaizen is a recent thing that I've been doing only for the last few months.

Over the last month, I tried a couple of things to improve the way I do things and these were:
  • Outlook categories
  • Outlook flags
These were respectively practically pointless and a huge success for me.

The problem I found with Outlook categories is that I ended up spending more time managing the categories than I did using them. Email titles and bodies contain almost all of the relevant information and can be searched effectively. (I'm still quite prone to organising my mail into a folder structure, which is maybe at odds with this view, but because of this I spend very little time searching.)

However, the Outlook flags were incredibly useful to me, particularly with the To-Do bar visible in the 2007 version. I used to create tasks in my task list and add appointments to my calendar, and realised that 80% of the time I was doing this in response to an email. Now, by simply marking an email as to-do today, tomorrow, next week, or by a specific date, I can capture the task in a single click most of the time, then file the email away without going round the houses. This also fits in well with the way I think, because if I have an idea of something to do or I stumble upon something important on the internet (particularly if I have the idea at home), I usually send myself a mail.

To build on this and to try some completely new things, this month I'll be trying to:
  • Use Outlook tasks to set deadlines with/for other people
  • Create "vision boxes" to create very obvious visionary metaphors for new projects