Last week I was reminded of a classic problem in software project management when our management team told us they wanted us to start filling in time sheets again. The request was almost universally met with derision and a very obvious intention to ignore it completely. Not exactly polite, but it's a glimpse of a pretty standard problem in our industry.
Now, rather than get into why software developers don't want to fill in time sheets, I'll instead focus on what we want out of it as project managers, and how (if possible) we can make the process of gathering the information easier, less of a bind, and perhaps even completely invisible.
Monday, 14 November 2011
Sunday, 16 October 2011
Keeping the Critical Path Clear
Lately I've learned something. It happens from time to time, and - as they say - the day is not wasted when it does. Yet again, it's thanks in some part to those very clever engineers at Toyota, and it came to me while learning about how they manage to design cars roughly twice as fast as their closest competitors. It's gratifying that a principle I picked up about a month ago, and have told my product and program managers I want to apply, is now echoing back to me from other members of the management team as one they wish to pursue.
Lesson one is simple:
Keep technological innovation off the critical path of new product development.

Let's look at this in detail.
Wednesday, 4 May 2011
Theory of Constraints in Software Development
I finally got around to reading The Goal recently, and let me tell you, if you haven't already read it, you must do so; it's simply brilliant. I really mean that:
- It's simple (TOC is incredibly obvious once you know it).
- It's brilliant (our internal LRQA auditor gave me a story yesterday of doubling a company's turnover in two years by applying it).
So what's it got to do with software development? EVERYTHING! Practically every human endeavour to make something (anything) new is a sequence of linked operations with statistical variation in them, which means TOC applies. Simple as that: no arguments, no buts, no "we're different". It applies. End of.
Thursday, 7 April 2011
Usability Design of a Micro System Clock
I just bought my wife a lovely birthday present: a DVD micro system (Philips DCD 377) to replace the very tired CD micro system in our bedroom. There was one thing on the old system that I was confident the new one could easily match: an LCD display with a clock on it. How wrong I was. Just how is it possible to get the design of a clock so wrong?!
So, first, I thought I'd list the use cases of this clock. I think they're pretty uncontroversial:
- I look at it to find out what time it is when I enter the room
- I look at it to see what time it is when I'm woken in the middle of the night by one of our sons
- I peek at it in the early hours to see if it's time to get up yet
It turns out that none of these have been accounted for in the simplest sense!
You see, the interesting feature built into the system is "eco": I can switch the system into this mode and turn off the LCD display (probably saving a couple of watts). However, even if I choose to turn this feature off (my choice, remember), the system switches itself into eco mode anyway after three or four minutes. So for any of my use cases above, I have to find the remote and press a button. Actually, hold one button down for three seconds and then press another. Notice that a lot of this happens in the dark: not good, and seriously error-prone. I completely re-tuned the radio this morning instead of discovering the time.
Two big problems with the design here:
- Turning eco mode off was my decision and shouldn't be messed with.
- That first three or four minutes when the clock stays on is precisely the period during which I WON'T need to look at a clock again: I just looked at it, so I've a fairly accurate guess at the time anyway!
If the designers were so desperate for me to save some energy, they've completely missed the mark, because I'm going to hunt down an alarm clock with an LCD display and plug it in anyway. Fail.
Saturday, 26 March 2011
Whole Software Cost: Reducing Support Costs
A few years ago I started to pay proper attention to the possible return on investment (ROI) of software projects. At the very least, I took an interest in having a rough estimate of what it would be so I could prioritize better and, what's more, arm myself with adequate justification.
However, lately I've realized there's information in that calculation that could direct us to make better software: software that actually makes more money in the long run by having lower ongoing costs. The bolt of insight came when I took a niggly support question from one of our support engineers and was prompted to ask, "How often do users ask about this sort of thing?" The answer in this particular case was "lots".
So this led me to consider two things: first, how we can make software that reduces the support load, and secondly, how to financially evaluate a design that does so.
Wednesday, 9 March 2011
Things I Hate Doing in C++
Why do I do it to myself? Manage memory, manage low-level services, write out the same loops time and time again, hand-roll all of my thread locking, monitor-pattern boilerplate and observers?
Because I've forgotten to behave like the "real engineer" I claim to be, that's why. If I was behaving the way I was taught (as a mechanical engineer), I wouldn't revisit the same problem over and over; I'd capture a solution once and for all and apply the pattern repeatedly, refactoring it slightly to accommodate edge cases as I discover them. Now, I know we do this sort of thing in C++ already to some extent: that's what the STL's for, and that's what boost picks up and runs with for miles more. But it's still not enough. I was also reminded recently that one of my esteemed colleagues had written a multi-methods pre-compiler for C++ (http://www.op59.net/cmm/readme.html), solving an age-old problem, and I thought, heck, why don't we just do that for all common language problems?
Monday, 28 February 2011
What's Wrong With The Way We Write Software?
Lately, I've realized that when discussing "what's wrong with the way we write software?" I've been starting out on the wrong foot. The clue is in the question: there's not really that much wrong with the way we write it; it's all to do with the way we design it. (A caveat here: I've been drafting a post about what's actually wrong with the way we literally write software too, so we're not completely off the hook.)
This shift in viewpoint is coupled to another misnomer about software development that we regularly need to challenge our business sponsors on:

If you're not typing, you're not developing.

This has been covered very well by various authors, so I won't go into it (but if you need some ammunition, then read up on it; I wish I could readily remember where the best starting point would be).

It has been said that one of the original flaws in the emergence of the field of "software engineering" was to compare the act of creating software to the act of production in other engineering disciplines. This has since been recognised as mistaken: our production/assembly/manufacturing/building cost is practically nil and almost entirely error-free, because it's just compiling, linking and deploying the software. (OK, deployment of some software has its challenges, but if we imagine the simple case of uploading a package to your website for people to purchase and download, then it's pretty trivial.) The act of writing software is really the design process, as the code is effectively a detailed specification of the program that the compiler turns into an executable.

So why do we still talk about "writing" software? I suppose it may be because most people tend to design at least partially "on screen". I've met few people who design without the use of at least some skeleton code, to allow them to sketch out the way interfaces might look, or to look for creative solutions in the way they might put together some template code, for instance. It's pretty hard to purely "think about" some aspects of software design at the detailed technical level, because we're not very good at holding the full problem in our heads, and we're pretty rubbish at remembering language specifications in sufficient detail to get things right without typing them out at least once. The detailed job of assessing some facets of technical design is, after all, best left to the compiler.

However, it's said that our creativity could benefit from freeing ourselves from our monitors and keyboards, using other media and involving all of our senses (see Pragmatic Thinking and Learning for some good ideas here). Time at the whiteboard, sticky notes, paper cut-outs, Lego bricks, even role playing all have a place. (I found some paper cut-outs of a mouse pointer, icons and basic graphics components a great help in designing a new interaction idea lately.)
I perhaps digress from the main point here, so let's try to get back to it. I can see, in my way of thinking about it at least, that making the conscious decision to stop talking about writing software and instead talk about designing it could have a profound impact on how effective the process is. The reasons are both intrinsic and extrinsic to the development activity:
- Intrinsically, we can be more creative in design and perhaps more productive if we release ourselves from the idea of developing software purely in text, at our screens. Free your mind and the rest will follow, as the song goes: let your creative, non-logical side in on the act and you'll double the brain power available, not to mention, maybe enjoy yourself a little more.
- Extrinsically, if those around us understand the process as one of design, (but with the necessary and quite mundane constraint that it is translated into code to be compiled), then they may be better placed to support us in this activity. Fundamentally, the idea that "thinking time" is more important than "typing time" can germinate more easily, and also, supporting resources and activities might become more available.
- Moreover, perhaps we can more readily engage non-developers in the technical design process if it is more accessible: product managers, analysts, UI designers, testers, support staff, technical writers and so on. If the design meeting is Lego and playing cards, then there's no barrier to inclusion, as almost everyone can get on board with these sorts of design metaphors.
If we shift to talking about designing software, then perhaps even the tools we use might change: an IDE whose primary form of interaction is a UML diagram; dependency management tools that simply involve changing an "efferent arrow" to an "afferent arrow" (and doing the necessary dependency inversion under the hood for you); containers that really look like containers; who knows. One thing's for sure, though: if we encumber ourselves with the dogma that we must carry on typing at all costs, then even if some mysterious benefactor chooses to create these tools for us, we might miss them.
The Estimation Contract
Lately, I've been chatting with some colleagues about estimation in the context of a business decision-making process. It seems we shared ideas about the outputs of the estimation process and how they must be communicated in full context, with all assumptions and risks, but we forgot to talk about the inputs to the process.
I suggest that we look at estimates in this context as a sort of "contract". If you want to know how much time MegaApp 2.0 is going to take and what it'll cost, there are a few things I need from you. If you fall short on your end of the deal, then I can't offer the best estimate and it'll be lacking in certainty and packed with more risks.
So, here's my first stab at it:
In return for:
- A vision statement (elevator statement or similar)
- A rough project scope
- A list of essential features
- A list of known exclusions
I promise to provide:
- A feasible resource profile for developers and testers
- A most likely completion date and a 90% confident finish date
- A most likely cost and a 90% confident cost
- A list of assumptions made during the estimation process
- A list of the major risks that could affect the project cost and timeline, together with a rough action plan (high risk = delegate upwards, medium risk = suggested mitigation plan, low risk = no action)
Now, I'm not suggesting that there won't be an estimate without the proper "payment", nor that the estimate will be watertight given an excellent set of inputs. What I'm trying to suggest is a format for communication that does its best to be explicit about the inherent nature of estimation, while staying aware of the reason for producing the estimate: to drive a business decision.
This is by no means a set of exclusive rules, nor is it a minimum list: I'm sure plenty of people can find cause to adjust this contract for particular cases. If you do, I'd be interested to know.