Monday, 31 August 2009


I found this article today: the first one I've noticed that uses the term "Wii-Hab". (Unfortunately, the Urban Dictionary's definition is not quite the same!) We need a new definition:
"The use of the Wii in rehabilitation"
I'm getting more and more interested, in principle, scientifically and commercially, in the possibility of better rehabilitation tools that can be shipped to a greater number of people.

As the population ages and the number of people having suffered a stroke or other injury requiring physical and neurological rehabilitation increases, it becomes more important to provide health care solutions that can be accessed by these people. So I believe in principle (as I've stated before) that lower cost, lower fidelity solutions may be the way forward and the Wii seems to be stepping into this gap. Scientifically, I'm keeping a close eye on this, as such technology is only going to become widespread when enough studies have been done that prove an improvement in recovery due to the use of the Wii. These studies are already out there and are increasing in number, so it's probably just a matter of time. Commercially speaking, I work for a company that provides motion capture solutions to the health care industry, so I'm naturally curious (and contractually obliged!) to spot the upcoming waves in health care and medical research and make sure we're in the water and moving forwards when they start to break.

However, I think there's more to rehabilitation than just Wii-Hab style tools, and I know the bigger picture also includes higher technology, more prescriptive techniques than pure game-play, and quantitative analysis of movement.

So it remains to be seen exactly where this will go and what therapy we'll be undergoing to recover from injury or surgery in the future, but it's undeniable that some form of motion capture technology will be involved in almost all cases, as our yardstick for wellness in this field is effectively co-ordinated movement.

Tuesday, 25 August 2009

User Interaction: Guiding Visions

After reading The Inmates are Running the Asylum recently, I was surprised and happy to have a conversation with my brother-in-law at the weekend about how he (a user) sees the problems of user interaction. He had a very demanding but simple idea:
"A common interaction method for all of my electronic devices"
What he meant was that switching on his telly and selecting a channel, recording a program on his PVR, operating his iPod, starting programs on his laptop, selecting a destination on his sat-nav, looking up a contact in his phone (and so-on) should all share some common usage paradigms.

Apple have got this closer to right than most with iTunes and the iPod: if you can operate one, you can probably operate them both and the idea of the iPod click wheel is (excuse my opinion) one of the best bits of industrial design of the last 20 years. Microsoft have tried to move console and PC gaming closer together (in terms of the user experience) by pushing developers towards a common interface device (the Xbox controller). On a side note, why oh why don't games just take over your PC and start when you put in the DVD?! Yes, I know, security, security... But it's my PC and surely I should be able to make the decisions about what I put in it! (Rant over, back to the main topic).

So what could we do? Well, as individual developers, probably very little other than follow emerging patterns, but this is often contrary to the desire to be innovative. Perhaps only large companies that manufacture a range of devices can foster these visions (Apple, Sony, Microsoft and so on). Despite these potential barriers, I think that good user interaction design will begin to solve this problem. In order to design products such that the user experience is a positive one that engages them with tractable methods, aren't we trying to tap into some "universal interaction ideas" that pre-date software and electronic devices? I stress begin to solve the problem, because this idea of a core vision that underpins all interaction design is a Big Idea and requires coming up with ideas that transfer easily from PCs (that have a keyboard and mouse) to games consoles that have a controller, to phones with a much more limited physical interface and sat-navs with just a button or two.

Who knows, maybe such universal concepts are just over the horizon. I'd hedge my bets and say it's not immediately possible while technology companies are competing to be the one with the next big idea. Maybe the user utopia of common interaction is a commercial dystopia of complete domination by one or two manufacturers. One thing's for sure: we shouldn't stop trying, on a product-by-product basis, to improve our users' experience of software (and hardware).

Friday, 21 August 2009

Re-Factoring: Tidy away your Tools!

I had a slightly drunken chat with my good friend Paul on Wednesday night and told him the analogy I have for re-factoring being the software development equivalent of the 5S Seiso (Shine) mantra:
"Re-factoring is tidying away your tools at the end of the day"
He liked it, and saw it as a useful way of communicating our intent when we insist to our managers and stakeholders that we need scheduled time for re-factoring. So often the word "re-factoring" conjures up images of costly re-writes of large portions of the code for no perceived gain, and this is partially our fault for misusing the word when we should have said "re-write" in the first place. So ask anyone reticent to commit to any amount of re-factoring effort:
"Would you be happy if accounts didn't put anything back in filing?"
"Is it be OK for engineering drawings to be left covered in construction lines?"
"Can kitchen staff let the washing up pile up until the end of the month?"
You could even ask them how often they would ask their children to tidy their rooms, but that's maybe a step too far!

Thursday, 20 August 2009

Agile Planning Application Beta for Salesforce, including Google Visualization Integration...

I noticed this post a few days ago on the Agile Alliance LinkedIn group. Although the developers make it clear that it is mainly core components - not yet up to trading blows with XPlanner, Rally, VersionOne, Mingle etc. - it could be a very enticing prospect for those also using a customer relationship management (CRM) tool such as Salesforce, for the tight integration it would seem to offer.

What has become clear to me about tracking, estimating and planning (whether you're practicing fairly traditional project management, a light version of Agile or something as aggressively Agile as Scrum) is that the most valuable commodities are the hardest to capture: time and effort. Without accurate representation of the real-world effort designers, developers, testers and managers are expending on the project, we are powerless to predict progress and create reliable estimates in the future. Yet the simple act of recording effort is so easily and so often pushed aside by those doing the work. Once this task has been left for more than even a couple of days, most people's ability to remember exactly what they did is diminished practically to zero.

There are obviously clever ways to get people to enter these details (maybe even gather them automatically) and there are oppressive methods such as detailed time sheets that leave a distinctly sour taste in the mouth. Education must not be discounted of course, as enlightening everyone to the value of their actions, particularly to themselves, can go a long way. So, it is with eager anticipation that I wait to see how:
"...a very nice interface for entering time has been developed and will be part of the initial release"
I have my own ideas about an interface for recording project/feature/story/task time which I'll maybe talk about at a later date, but for now I'll wait and see what the Salesforce team have to offer.

Monday, 17 August 2009

Up-Front Design in Agile Projects

The last few things I've read have got me thinking about where design work goes in Agile project management and whether it is consistent with creating the most value for the end user.

Agile methodologies of course claim to deliver value and seem, by many measures, to be a significant improvement over the traditional waterfall approach at doing so. But are they the best? I have a few significant worries about Agile applied in its most rapid, aggressive forms and they are:
  1. People who make high level business decisions need good estimates of cost and schedule up-front to effectively plan and support a business strategy. There is a simple correlation between the amount of planning and design done and the quality of an estimate, which means better estimates possibly mean less agility.
  2. It is during the up-front design phase that important core interaction ideas should be conceived, rather than allowing iterated, piecemeal attempts at GUI by the developers to calcify into the final product with limited clear vision. True value to the end user is in a usable product more than its feature list.
  3. How long does iterative design take? We need to account for costs and time required to elaborate our user stories along the way and, just like code, some stories are stickier than others and can conceal enormous, costly problems.
(Before I go any further, I should make it clear that I realise that the perceived accuracy in an estimate driven by a very detailed up-front design is subject to a very large variance due to the unpredictable nature of the development process, so please don't stop reading because you assume I don't understand the value of Agile in controlling this!)

As with all problems concerning conflicting forces, the answer lies in a careful balance: a reasonable amount of up-front design (greater than the bare minimum) such that a clear vision for the software usage can be established and some estimates of cost and schedule can be made that are in line with the expectations of the stakeholders making the Big Decisions. I'd go so far as to say that the interaction vision should be non-negotiable after this point, but we need to accept that some of our design decisions may be called into question along the way, and to reconsider them with care and with reference to the initial design and those who created it. (Strategy and tactics: never lose sight of the main goal!)

As for accounting for the design time, this is as hard as estimating development time, so we should adopt the same methods for dealing with the uncertainty: just do it carefully (remembering to ask those that will do the design to create the estimate) and try to calibrate our estimating procedures over time with historical data.
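That calibration idea can be sketched in a few lines. This is a hypothetical illustration, not a real tool: the task history, the hours and the simple bias-plus-spread model are all invented for the example.

```python
# Hypothetical illustration: calibrating effort estimates against history.
# All figures are invented for the sketch.
from statistics import mean, stdev

# (estimated hours, actual hours) for past tasks
history = [(10, 14), (8, 9), (20, 31), (5, 6), (12, 15)]

ratios = [actual / estimated for estimated, actual in history]
bias = mean(ratios)     # average over/under-run factor
spread = stdev(ratios)  # how unpredictable our estimates are

def calibrated_range(raw_estimate_hours):
    """Turn a raw estimate into a (low, likely, high) range."""
    likely = raw_estimate_hours * bias
    return (raw_estimate_hours * (bias - spread),
            likely,
            raw_estimate_hours * (bias + spread))

low, likely, high = calibrated_range(16)
print(f"16h raw -> {low:.0f}-{high:.0f}h, most likely {likely:.0f}h")
```

The point is not the arithmetic but the habit: keep feeding real outcomes back into the model so the bias and spread reflect your team, not a guess.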

Sunday, 16 August 2009

Best Reads of 2008-2009

I finished The Inmates are Running the Asylum while we were away on holiday last week and thought it was a great read. As it's been my birthday recently, the usual bout of introspection got me thinking about which of the books I've read over the last year have been the best reads, have changed the way I think or do things, or have just made me happier in what I do.

To start with, The Inmates are Running the Asylum by Alan Cooper is easily one of the best reads of the year, for many reasons. First off, I think it will be one of those books that changes the way I do (or don't do) things: I'll try very hard in future to push interaction design up front on projects I can influence and try to persuade it into the hands of those most qualified to do it. I'll undoubtedly still be expected to do some interaction design myself (sorry Alan!) but I'll be using personas much more to help me do this. Secondly, it was a very enjoyable read and quite hard to put down. There's a strong possibility that I enjoyed it because I sympathise with the author's strongly held views about the nature of software development and software developers in general, though, and I realise this makes my opinion quite biased.

I read The Pragmatic Programmer by Andrew Hunt and David Thomas about a year ago and thoroughly enjoyed it. This book falls into the "made me happier in what I do" category and I specifically remember writing a quick review on our wiki at work, recommending this book for anyone feeling a bit disillusioned with software development. It's an excellent compendium of ideas, practices and patterns for becoming a successful developer that will probably not age. One slight caveat is that you must forgive the authors' obvious lean towards command line tools and away from anything with a GUI, as being "part of their upbringing" on Unix.

User Stories Applied: for Agile Software Development was the second of Mike Cohn's books that I have read and was just as good, if not better than the first (Agile Estimating and Planning). What I liked about this book was the clear summary at the end of each chapter, with a run-down of everyone's responsibilities in the process. I'd recommend this book to developers, project managers and product managers to help them understand how to deliver value through an Agile project.

Finally, 5S Kaizen in 90 Minutes by Andrew Scotchmer was, as the title suggests, an enjoyable quick read that has whetted my appetite for understanding the values that make Toyota one of the most successful manufacturers in the world.

Friday, 7 August 2009

Holiday Reading

We're off to (hopefully) sunny Cornwall for a week now, far from a PC. My (beach?) reading material is The Inmates are Running The Asylum by Alan Cooper. I'm one chapter in and finding it an excellent read, and my goodness does this book have An Opinion!

I've already started noticing the kind of behaviour this book aims to eradicate that all of us developers are guilty of and I'm wondering what it will take to attenuate our high-held opinions of how software should work and prevent them leaking through into the (un)usability of software.

I can remember more than one occasion on which I've thought about the way a feature should work and concluded that it should be a true reflection of the underlying implementation. Talk about arse-backwards! The fatal flaw was of course the assumption that the user's model of the underlying process would be the same as mine, if they had one at all! My personal model will have been completely skewed by my attempt to implement the feature: I will have been effectively forcing my own prejudices on the user.

So, for any work I've produced that causes users pain: I apologise and promise to try not to do it again. You're the king in this land and I'll do my level best to listen to you more in future.

Tuesday, 4 August 2009

Software Development Schedule Estimation After Feature Complete

Today, I've been thinking about what happens to project schedule estimation after feature complete. Up until the feature complete date, we have a feature backlog and a task list with estimated effort to work with. We record how much actual time we spend on each task and build up a picture of the probable rate(s) at which we can complete the remaining tasks. We are of course relying on a number of assumptions to let these simple extrapolations guide us, but it gives us a range of feasible project outcomes and provides at least some assistance in effective project management.
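As a toy illustration of that kind of extrapolation (all figures invented for the sketch): take the actual hours logged per week so far and divide the remaining estimated work by the best and worst weekly rates, giving a range of feasible outcomes rather than a single date.

```python
# Sketch of the extrapolation described above; all figures are invented.

# Actual hours of task work logged in each week so far
weekly_hours_done = [32, 41, 28, 35]

# Estimated hours for the remaining tasks in the backlog
remaining_estimates = [8, 13, 5, 21, 8, 3, 13]

best_week = max(weekly_hours_done)
worst_week = min(weekly_hours_done)
remaining = sum(remaining_estimates)

# A range of feasible project outcomes, not a single prediction
optimistic_weeks = remaining / best_week
pessimistic_weeks = remaining / worst_week
print(f"{remaining}h left: {optimistic_weeks:.1f}-{pessimistic_weeks:.1f} weeks")
```

A real tool would do something less naive with the week-to-week variation, but even this crude range is more honest than one number.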

But what happens when you pass feature complete? There is still a pile of testing to be done on the features and the bug list begins to grow. We know this is going to happen and we've (hopefully) planned for this final phase of production in some form. But how do we know whether or when we're going to get the work finished? If product management need to know not to announce a December release, we'd better bloody well tell them now!

Before I start on the ins-and-outs, I must first admit that I realise that a truly Agile and iterative approach will have rolled the entire develop-test-bug fix cycle in over the course of the project and "feature complete" only actually happens when the last iteration is complete, which - like all previous iterations - means the last batch of features has passed all acceptance tests and any bugs thrown up have been addressed. However, this doesn't render my quandary a moot point, it simply spreads the worry throughout the whole schedule, breaking it down into smaller pieces of unknown bug-fix work.

So what's the problem? Well, when we extrapolate our velocity on features forwards, we are dealing with a known, bounded set of tasks in the future. We also have estimates of how long each of these tasks will take. Most variability in estimated vs real-world effort can be accounted for by random fluctuations plus (or minus) a fixed bias. We seek to calibrate the bias in the estimation process and rely on a large number of measurements to hopefully capture the inherent variance in the estimate. But for bug-fixes - whether rolled into self-contained iterations, lumped at the end or any mixture of the two - we don't know how long each work item will take. Any effort to guess (and that's all it would be: calling this estimation would be a terrible misuse of the word!) would probably be subject to enormous variability, such that any range of estimates this might generate would probably be useless in the realms of managing the project to a finish.

What options do we have? Well, maybe the traditional approach of simply attacking the critical bugs first, followed by the important ones and so on is the best we can do: we simply get as much done by the end date as humanly possible and make a strategic decision to ship or delay based on what's outstanding. The problem here is that you find out you're not going to make it when it's too late: fait accompli, big bollocking for the project manager, we all look stupid because we don't release on the day we announced to the world.

Or perhaps we should make some attempt to estimate the bugs, perhaps bringing legacy data to bear: this might tell us what proportion of initial development time bug fixing tends to take in certain code areas: low-level library functions, core application components, application business logic, GUI and so on. Legacy data could even predict how many bugs we might expect to be raised before any have been found.

My feeling is that using Agile estimation tools is at least worth a try: make a guess at the time it will take to fix bugs, with a default of say 2 bugs per day for minor ones, but use the legacy data you have to guide you. The data you have gathered for the complete features is perhaps an excellent candidate: take sub-sets of the data and see which code areas and features were easy (came in under estimate), which were unpredictable (large variance in velocity), or which were hard (consistently over estimated) and use this to generate (or perhaps bias) your bug-fix effort estimates. I suggest this, because it's odds on that a particularly sticky feature to initially code is more likely to contain sticky bugs than one that was trivial. There will of course be exceptions to the rule, but the point is to try and create some rules that capture a large portion of the estimates.
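Here's a rough sketch of that suggestion, with entirely invented data: group the estimate-vs-actual history of the completed features by code area, then scale a default bug-fix guess by how "sticky" each area proved to be. The area names, the figures and the four-hour default are all assumptions made up for the example.

```python
# Hypothetical sketch: biasing default bug-fix guesses per code area,
# using the estimate-vs-actual history of the completed features.
from collections import defaultdict
from statistics import mean

# (code area, estimated hours, actual hours) from the development phase
feature_history = [
    ("gui", 8, 10), ("gui", 5, 9),
    ("core", 13, 12), ("core", 21, 22),
    ("library", 3, 3), ("library", 5, 4),
]

area_ratios = defaultdict(list)
for area, estimated, actual in feature_history:
    area_ratios[area].append(actual / estimated)

DEFAULT_FIX_HOURS = 4  # e.g. "two minor bugs per day"

def bug_fix_estimate(area):
    """Scale the default guess by how sticky this area proved to be."""
    return DEFAULT_FIX_HOURS * mean(area_ratios[area])

for area in sorted(area_ratios):
    print(f"{area}: {bug_fix_estimate(area):.1f}h per bug")
```

An area that consistently blew its feature estimates ends up with a bigger bug-fix allowance than one that came in under, which is exactly the "sticky features contain sticky bugs" rule of thumb above.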

Of course I don't know the answer here: I simply suggest using the Agile estimation tools we already have to help us with project management decisions during this period rather than surrendering to fate. Maybe the testing phase of a project is just too short to start getting any useful answers in some cases. Maybe the inherent variability in the estimates is just too great. But it's worth a go. Isn't it?

Monday, 3 August 2009

Agile Documentation and Passing Your Audit

Or should this be called "Agile Documentation versus Passing your Audit"? Remember the Agile Manifesto:
"Working software over comprehensive documentation."
(Please do not take this too literally: I understand that the Agile Manifesto is not against documentation, it suggests that we expend effort elsewhere rather than on unnecessary documents that no-one reads).

There's been some discussion lately on the Agile Alliance LinkedIn group about documentation in Agile software development which has been all a bit high brow for me (a deficiency of concentration on my part), because I want real, concrete answers to questions and useful "do this, not that" advice about how to get documentation out of an Agile software development process that will satisfy our ISO auditor.

So, in the absence of concrete answers, I suppose I'd better start thinking of some for myself...

First of all, in the interest of keeping documentation to a minimum, let's enumerate the things we need it to do for us, from both a development perspective and an audit perspective. From the development perspective, it needs to:
  • Communicate the over-arching vision (perhaps optional, if it's communicated in some other way?).
  • Capture the value requirements so we build the right thing and test functionality against genuine user expectations.
  • Record our progress (during designing, coding and testing) such that we can effectively plan and manage the project.
From our auditor's point of view (and I don't claim to know this inside out: this will be mostly conjecture), there are two main categories of document:
  1. The description of the processes we use to design, develop and test our software.
  2. The instances of various documents for specific projects that illustrate that project's state and history.
The former set of documents is in a much slower state of flux, but it is still in motion (if we practice Kaizen that is). Let's leave that set of documents out of this discussion and come back to it some other time (although I realise that a process audit is going to look at both sets and ensure that the project documents are a "product of" the process specification, rather than some ad-hoc jumble that bears no resemblance to your promised quality controlled production system).

The second set of documents try to capture a living process (which is why it's so hard, and why Agile suggests not wasting effort creating documents that are immediately out of date!) and they overlap considerably with our requirements as software engineers, test engineers, project managers and quality managers. So to ensure we only do what's necessary, it's sensible to figure out what the minimum requirements are to pass the audit: If a set of index cards with user stories on is satisfactory, then use it. If your spreadsheet of test cases will do, stick with it. Taken lots of pictures of whiteboards? Check if that's OK to archive and do no more than is necessary. There is rarely a genuine need for a part of a quality process to be captured in a full blown, expensive to produce and maintain text document, so avoid them where you can.
The big idea for me here is that we're not actually interested in documents per se, rather an information repository.
Many of the artifacts I referred to in the previous paragraph are great for capturing a moving target: spreadsheets are meant to be filled out, modified, filtered and transformed. Taking a picture of a whiteboard captures an extremely accurate record, not prone to the scribe's opinion. Index cards are perfect for user stories, can be easily marked with start/end dates, developer, tester and can be moved around to illustrate their current place in the process with great ease and many software versions of this popular system are now in existence. There are two types of information being captured here:
  • The current state of affairs: the current set of features and their acceptance tests; development task breakdown and costings for each; un-started, in progress and "code complete" tasks; the current set of QA tests and which have passed, failed or not been run; the current prioritized bug list.
  • The history of the project: when features were added or removed; re-estimations; when coding tasks were started and finished; particular obstacles and delays; tests run for older builds; time spent bug fixing; records of meetings (minutes, photos, archived flip charts).
The current state of affairs is likely the information we'll use to plan and manage our projects, whereas the history is very much an audit tool. The two are not at odds though: it's easy to allow one to transform into the other or to have the history generated simply as backups or deltas to the current documents. Something I've come to realise more and more of late is that recording the history of your projects is not just a necessary evil to satisfy your auditor, but it is a vital tool in estimating and planning future projects.

So, with a good idea of the kinds of information we need to capture and the methods we want to use to manage it, it's not very hard to envision a system that contains all of the information and has different views upon it to allow interactions in the ways required. In fact this is probably yesterday's news, as tools like XPlanner and Version One provide a significant portion of this functionality. The things I'd say are important to capture are:
  • Vision: a paragraph of text; perhaps a few photos of your "vision box".
  • Requirements and design activities: the current feature set, together with user acceptance tests; design meeting records and discussion threads; mock-ups. Include all changes to this, including features that are removed.
  • Estimation: coarse preliminary estimates; any secondary detailed breakdowns and estimates; the current task breakdown and estimates. Again, include all changes made, including re-estimation.
  • Coding Activity: the current task backlog; current tasks in progress, start/end dates and revision numbers; assigned developers. Record all true effort expended on these tasks, as this gives you the true picture of progress required for ongoing estimation and future project estimation.
  • Testing/QA Activity: the current set of QA tests; features in testing with start/end dates; assigned testers; bugs created, addressed and resolved. Recording both the testing and re-development effort is again a vital activity.
  • Project Management Activity: strategic planning; project meeting records; project progress metrics; project resources; risk assessments. Again, changes to the information should be recorded and it is very important that the nature of strategic decisions can be understood at a later date from an audit perspective.
Imagine all of this data in a flat database, for simplicity's sake, say a single XML file. Now could someone write some XSLT and a bit of CSS to create a very nice looking specification based on the features and their acceptance tests? They most definitely could. The same to give a dashboard overview of development and testing progress? Easy too. The view for a group of developers and tasks, including some schedule predictions? You get my point...
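As a toy demonstration of the "one data store, many views" idea (using Python rather than XSLT, and a schema invented purely for illustration), the same flat XML can drive both a specification-style listing and a dashboard summary:

```python
# Toy "one repository, many views" demo; the XML schema is invented.
import xml.etree.ElementTree as ET

project_xml = """
<project name="Demo">
  <story id="1" title="Login screen"  status="done"/>
  <story id="2" title="Export report" status="in-progress"/>
  <story id="3" title="Undo support"  status="not-started"/>
</project>
"""

root = ET.fromstring(project_xml)

# View 1: a specification-style listing of the stories
for story in root.iter("story"):
    print(f'{story.get("id")}. {story.get("title")}')

# View 2: a dashboard-style progress summary from the same data
stories = list(root.iter("story"))
done = sum(1 for s in stories if s.get("status") == "done")
print(f"Progress: {done}/{len(stories)} stories complete")
```

Neither view owns the data: add a third (say, per-developer workload) and nothing already written has to change, which is the whole attraction of the repository-not-documents approach.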

So, capturing the data is the key thing, not the creation of "documents": just make sure the data gets recorded and come up with some way of transforming it into media to suit the various requirements of your internal stakeholders and your auditors.

Well, I've rambled on for a very long time, so I'll leave it at that for now. I'll probably come back to the nature of the process documentation at a later date.

Sunday, 2 August 2009

Measuring & Sensing in Medicine & Health Seminar

Excuse the shameless plug, but the EMHD are running a seminar this October - hopefully the first in a series on "Measurement and Sensing" - called Capturing Motion & Musculoskeletal Dynamics at the IMechE headquarters.

I've put quite a lot into this, so I'm hoping it will be a success, and this will be the first time I've chaired an event like this! Posting links to it is my way of trying to improve traffic to the event website and hopefully boost subscriptions. I'm hoping to start an EMHD group on LinkedIn so we can hold other discussions and gain feedback from our audience, but for now I'm also posting events from my own profile.

Saturday, 1 August 2009

Meyers and Sutter "Quick Lookup"

Lately, I seem to have been having a few chats about the Scott Meyers books (Effective C++, More Effective C++ and Effective STL) and the Herb Sutter books (Exceptional C++, More Exceptional C++ and Exceptional C++ Style). These are popular books among the C++ community and for good reason.

The only problem seems to be remembering exactly which one to pick up when you need to be reminded of the detail of one of the items. Granted, some of the items leave a simple message with you that you don't really need a refresher in (make non-leaf classes abstract, catch exceptions by reference, never use a vector of bools), but others are more subtle (the cost of throwing an exception, making equality operators free functions rather than members, Koenig lookup and so on). Maybe there are more indexes out there such as this one that categorise the examples in a helpful manner and make it easy to jump in, but what I really want is a large chart, spreadsheet or web page that has the items from all 6 books in it.

So, maybe my only option is to make one myself... I've gathered all of the contents tables together and see that they are usefully organised in quite similar ways, making my job potentially easier. The first question is what format? Big poster? Web page? Spreadsheet? Document? (Probably the least tractable!) And what to do about example source code: this is very important for some items and probably suggests a web page as a good medium. The next question is what headings to use: broadly follow the headings provided, or sub-divide and add my own? I'm not about to do this first thing on a Saturday morning, so I'll no doubt follow this up in a later post.