
Wednesday, 13 July 2011

Procrustean sizing

Conventional agile estimation involves assigning story points based on the relative size of each piece of work. Most teams refuse to accept stories whose size falls outside a certain limit and send them back for further analysis. Teams using Procrustean sizing take this to the extreme and only accept stories of a single, fixed size.

Procrustes was a mythical ancient Greek bandit whose speciality was capturing unwary travellers and making them fit his iron bed. If they were too short, he would stretch them. If they were too tall, he would cut off an appropriate section of their legs.

Procrustean has come to describe the coercion of data into an arbitrary container or structure. The term is often used pejoratively to refer to simplistic, one-size-fits-all approaches.

However, I believe that Procrustean story sizing is a viable technique for teams using kanban to manage their work. Limiting all stories to a pre-determined size places extra restrictions on story formation, but it provides stronger assurances about how work and value will flow through your team.

When all stories are of the same size it becomes easier to reason about the backlog as a whole. Adding up stories of different sizes is dangerous because teams tend to over- or under-estimate larger stories differently from smaller ones. When the relationship between story points and effort is not strictly linear, an eight-point story might take considerably more or less than four times as long as a two-pointer.

This effect is stronger with greater variability. Some teams avoid the temptation to add up stories of different sizes by estimating in animals or some other arbitrary non-numeric scale. However, when there is only one size, this complication disappears and calculations of the total size of the backlog become more reliable.
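To make the arithmetic concrete, here is a minimal sketch in C with invented numbers (none of them come from a real backlog): when every story is the same size, forecasting reduces to counting the stories left and dividing by the team's observed throughput:
#include <stdio.h>

int main(void)
{
  /* Hypothetical figures, for illustration only. */
  int stories_remaining = 48;  /* uniformly sized stories left in the backlog */
  int stories_per_week = 6;    /* observed throughput */

  /* Round up: a partial week of work still occupies a week. */
  int weeks_left = (stories_remaining + stories_per_week - 1) / stories_per_week;

  printf("Roughly %d weeks of work remain.\n", weeks_left);
  return 0;
}
With mixed sizes, the same forecast would need a weighted sum of points, and any non-linearity between points and effort would be baked straight into it.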

More importantly, limiting variability helps to maintain a constant flow. Production levelling (as described by Taiichi Ohno in his account of the Toyota Production System) is essential for reducing waste because it ensures that all parts of the pipeline receive work at a rate they can handle. Uneven flow (mura) results in work building up in some areas while others are forced to remain idle, and it disrupts feedback.

Setting limits on work-in-progress (WIP) is also simpler when there is no distinction between points and stories. A WIP limit of story points can mean that there are too few stories in development to usefully work on in parallel, but a WIP limit of stories can allow too much work to be in progress at once if large stories are in development. When there is too much WIP, it takes longer for problems to show. Also, it becomes difficult to change priorities because expedited stories are blocked by large pieces of work already in progress.

The most obvious disadvantage of the technique is that stories may need to be divided, even when the product owner envisages them as a single unit. This is a problem for any team that does not accept stories of an unlimited magnitude, but it happens more frequently when using Procrustean sizing.

Adopting Procrustean sizing is a tradeoff. If reliable flow is more important than the narrative integrity of stories, then it can assist a team's development process. If preserving stories in the form that product owners originally imagined them is important for communicating with stakeholders, then levelling your stories may not be a good option.

Procrustean sizing forces teams to plan stories in the form in which they can most easily be worked on. It's the software equivalent of Ohno's advice to reduce batch size in manufacturing and, like reducing batch size, it takes discipline and effort. Procrustean sizing is a simple constraint, but kanban has already demonstrated the value and power of simple constraints correctly applied.

NB: I have not used this technique in its pure form on a real project. This post is intended as an RFC and exposition of the Procrustean sizing concept. I would be very interested in hearing others' experiences with this or similar ideas.

Saturday, 11 June 2011

Agile teams need the same skills

There is occasionally anxiety about how project managers, business analysts, QAs and other traditional roles fit into agile projects. A project manager new to agile development might feel that their skills and experience aren't valued because they don't see their old job title explicitly mentioned in a description of an agile team.

Agile schedules still need to be managed. Agile requirements still need to be analysed. Agile codebases still need to be tested. The difference is that agile teams don't assume that these activities have to be done by dedicated team members.

A cross-functional team needs a wide variety of skills to succeed. Sometimes these will be provided by specialists and sometimes by generalists. The balance has to be struck for the specific needs of a project and can't be determined by a crude, one-size-fits-all rule.

Though agile is a great step forward in software development, it would be arrogant and unfair to think that there is no place in agile teams for software professionals who happen to come from traditionally structured organisations.

Monday, 30 May 2011

Elastic teams absorb shocks

I've come across the idea that because cross-functional teams need a variety of skills, members of the team should aspire to be generalists. I think that is overstating the case somewhat.

Agile teams need to be able to handle workloads requiring varying mixtures of skillsets. In order to cope, teams need elasticity; that is, they need to be able to temporarily boost their capacity in a particular kind of work as the situation requires.

This can be better achieved by a mixture of flexibility and focus than by either radical specialisation or radical generalisation.

Specialisation has the obvious drawback that if the specialist in a given area is at full capacity the team cannot take on any more of this kind of work. This leads to the situation where some team members aren't fully utilised but the team cannot commit to taking any more stories to "done done" in the current iteration.

Total generalisation doesn't produce idle team members, but it does prevent a team from reaching its full potential. Software development is full of difficult, technically specific problems that may not be understood by someone who has not deeply immersed themselves in that discipline. Furthermore, it's hard for someone who isn't fully comfortable with a domain to coordinate others at the same level.

I prefer teams where there is an expert ready to take charge in the face of any given challenge and who can rely on the rest of the team to fall in behind them. This happens best when team members are specialists in their own discipline and generalists in everything else.

To return to the metaphor in the title of this post, effective agile teams have a definite shape but they have enough elasticity to absorb the shocks of atypical workloads.

Sunday, 15 May 2011

Declarative management

Broadly speaking, programming languages can be categorised as either imperative or declarative. This post will explain the difference, and also argue that a declarative approach to management leads to better outcomes.

Imperative programming

Imperative programs are lists of commands. Here is a piece of C code that tells the computer how to make a meal based on a recipe. Get a bowl, start at the first ingredient, add and mix ingredients one at a time and stop when there are none left:
Bowl make_recipe(char** ingredients, int num_ingredients)
{
  int bowl_size = num_ingredients / 3;
  Bowl bowl = fetch_bowl(bowl_size);

  for(int i=0; i < num_ingredients; i++)
  {
    add_ingredient(ingredients[i], bowl);
    mix(bowl);
  }
  return bowl;
}
For some recipes it doesn't matter what order the ingredients are added in. It wouldn't make a whole lot of difference to your pancake batter if you added the milk before the eggs or the other way around. But this code doesn't trust the computer to decide when rearranging ingredients is safe; they must be added in the exact order that they appear on the list.

Declarative programming

Prolog is a kind of declarative language known as a logic programming language. I write Prolog code by declaring facts (Susan is Mark's parent) and rules (a grandparent is a parent of a parent):
parent(susan, mark).
parent(gerald, susan).

grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
I can now ask Prolog questions based on these declarations:  
| ?- grandparent(gerald, mark).
Yes
| ?- grandparent(mark, susan).
No
I didn't describe how to arrive at those answers. I merely provided the necessary information and left the rest up to Prolog. If someone changes details in Prolog's algorithm it won't affect my program so long as it continues to interpret facts and rules in the same way. There are many different kinds of declarative programming language but they all focus on what needs to be achieved and leave the how to the implementors' discretion.

Declarative management

Declarative management describes the desired outcome but gives the team freedom to achieve it in the way that appears best to them. One way that we encourage this on agile projects is to make sure that all our stories have a "so that" clause that explains their intent. This allows requirements to be much more concise, because we concentrate on the things that matter - in other words, the what.

If the story says "As a user, I want the system to know my favourite colour, so that the site can be personalised to suit my aesthetic taste", then the team are free to bring their expertise to bear on the problem. The solution they come up with might even surprise the person who originally came up with the story. In the agile world, we call this "delighting the customer".

On the other hand, imperative management is very specific about exactly how the things must be done. To follow the recipe example above, workers in a fast-food chain might be given exact steps on how to put together a hamburger.

There is a place for both imperative and declarative programming. Declarative programming is a good choice when it's important to make intent very clear or when the computer is in a position to make better optimisations than the programmer. Imperative languages are often the only sensible choice when the how needs to be tightly controlled e.g. to ensure high performance in a video game.

However, there is very little justification for imperative management. The only time that it's helpful to give a team the how is when you believe that they would come up with a worse solution than you. If you think that's true, you're probably either wrong or you haven't properly trained your team.

I'm sure there are rare cases when imperative management can help the business. In the fast food example, the fastest possible way to assemble a burger might not be independently arrived at by every part-time employee. Just like a C++ programmer might use her detailed understanding of a PS3's hardware to force it to behave in a certain way, central management might find it advantageous to promulgate an optimised burger production process that's faster than what employees could come up with themselves. But as a general rule this doesn't happen, because while computers are naive and unable to show initiative, your employees are not.

Imperative management is micromanagement. Declarative management is empowerment. And since your employees are (hopefully) better at their jobs than you are, empowerment is more likely to achieve the results you are after.

Thursday, 14 April 2011

The waterfall mindset

Some developers are reluctant to delete code. If I understand their logic correctly, it goes something like this:

Writing code takes effort. Therefore, removing code means wasted effort.

This relies on two fallacies.

Firstly, it assumes that code is valuable in and of itself. But code is a liability, not an asset. Removing code while maintaining functionality creates value because it improves agility without costing anything that stakeholders care about.

Secondly, it assumes that code/functionality is the only purpose of development. But agile practitioners use development as a way of gaining information. So even if a change is fully backed-out and never restored, the process of developing that change yielded an improved understanding of the solution space.

I realise that deconstructing fallacies is somewhat of a fallacy itself. Counterproductive practices are more likely to be driven by psychological factors than by logical errors. In this case, I think that a reluctance to delete code is motivated by an attitude I call the waterfall mindset.

Within the waterfall mindset, coding proceeds through a series of decisions that are never revisited once made. It's very similar to a sequential development process, except that the phases can be in the mind of a single developer. Like normal waterfall development, it relies on the supremely unlikely possibility of getting decisions right the first time:
  • Generating reports is too slow? Let's cache them on the webserver.
  • Cached files are out of date? Let's write a cron job to renew them every night.
  • The webserver is under unacceptable load during the regeneration process? Let's delegate to a separate report server.
  • Report generation fails silently when there are network problems? Let's develop a custom protocol with failure semantics so that the webserver knows when to re-send messages to the report server.
  • And so on...
Or perhaps the report generation process itself could be profiled and optimised, eliminating the need for a caching mechanism altogether.

When I discover developers (including myself) in the grip of waterfall coding, I'm reminded of the nursery rhyme about the woman who swallowed a fly. It didn't work out well for her either.

Sunday, 6 March 2011

Freedom of movement

One of the key disciplines of agile software development is avoiding Big Up Front Design. The conscientious XP developer will resolutely refuse to build anything that isn't required for the current iteration. Speculative code is poison to a project's agility, because it costs effort to build, effort to maintain and obscures the code that is performing useful functions. And often, as Ron Jeffries' adage goes, you ain't gonna need it.

Quality is ensured by a kind of mathematical induction. The original version of the application is small, focused and well-tested, which makes it responsive to change. By writing the minimum of new code every time a new feature is added, developers ensure that the code-base never accrues excess baggage that might impede it in the future.

This approach is counter-intuitive at first, because most people associate planning ahead with saving effort. The idea is that tackling future challenges before they arise can be cheaper. The first catch is that if a developer guesses wrongly they will have spent effort actively contaminating the project with detritus. The second catch is that the first catch happens every time.

However, I do not take this to mean that a developer should never look ahead when making decisions about the present. Simplicity is merely a means to an end, which is the greatest possible flexibility to take the project where it might need to go.

Flexibility must be evaluated relative to the needs of the project in question. If a website is likely to be promoted to a mass audience, then preserving code quality means maintaining the option to introduce caching in a later iteration. It would be a mistake to introduce any caching logic "just in case", because performance optimisations are notoriously hard to second-guess. But it would also be a mistake to take a decision that would make caching hard or impossible to implement.

The minimal solution is not the one with the fewest files, classes or lines of code. Minimal code makes the fewest possible assumptions about the future. Developers need to interpret what that means for their particular project by maintaining an awareness of upcoming requirements.

Freedom of movement is only meaningful if you know what moves you might need to make.


Friday, 29 October 2010

Against technical debt

Technical debt is a very useful concept for explaining the consequences of dirty code to management. However, there is a problem that I have with the debt metaphor. The phrase technical debt implies that it's possible to avoid the debt. If I don't write shoddy code today, I won't have to pay for it tomorrow.

This obscures the fact that though dirty code costs more than clean code, every line of code impedes your agility. Sometimes product owners ask for features that compromise a system's architecture or domain model. When I've tried to describe the technical debt that will be incurred by an awkward feature, I've (quite reasonably) been asked how much effort it would take to "do it properly". I'm stumped, because no matter how thoroughly I implement the feature, it will still cause problems down the line.

Sometimes I fall back on depreciation, which I can use to explain anything that reduces the system's ability to meet future needs. Unlike debt, depreciation isn't automatically reversible. I've also considered that fear-driven estimation might produce estimates that more accurately reflect the long-term cost of a story.

I don't want to see the technical debt analogy deprecated, but I do want to encourage people to think critically about how they use it, because all metaphors have their limits.

Sunday, 24 October 2010

As a stakeholder

A common template for user stories is "As a user, I want". This forces stakeholders to make the business value of the story explicit and encourages consistency.

However, there are some stories that this doesn't make sense for, including ones that are to the business' advantage and the users' detriment. Stating all stories in terms of users' wants can result in bizarre stories that conceal who has a stake in their completion:
As a user, I want my DVDs to not work in other regions, so that I have to buy them again if I move countries.
As much as we focus on users, we don't build commercial software for them. It just so happens that satisfying users is a necessary part of achieving our other aims - like making money.

Users are stakeholders, but they aren't the only stakeholders. If we revise the template to "As a stakeholder, I want", then we're able to state anti-user stories much more naturally:
As the sales department, I want to prevent DVDs bought in one region from being played in another, so that I can release and price DVDs in different markets independently.
Thanks to @MrsSarahJones for pointing this out to me.

Saturday, 2 October 2010

My agile canon

As much as I enjoy reading blog posts, they cannot match the sustained argument from a well-written book. Fortunately, the agile movement has many articulate advocates who have been busy committing their thoughts to paper (and eReader) over the past few years.

Here are some agile books that I recommend:
The exciting thing about reading these books is that they are part of an ongoing conversation. We haven't worked out how to build software well yet, but I think that these books bring us a little closer.

Tuesday, 24 August 2010

Tolstoy on iterative development

But their mother country was too far off, and a man who has six or seven hundred miles to walk before reaching his destination must be able to put his final goal out of his mind and say to himself that he will 'do thirty miles today and then spend the night somewhere'; and during this first stage of the journey that resting-place for the night eclipses the image of his ultimate goal and absorbs all his hopes and desires.

- Leo Tolstoy, War and Peace

Monday, 16 August 2010

Fear-based estimation

I prefer agile development because early feedback mechanisms like TDD, pair-programming and frequent releases reduce my fear of the unknown. I don't like feeling afraid, but when I am I try to take notice, because it's usually an indication that something is not right.

A great agile tradition is acknowledging the human element in software development. In that spirit, I'd like to suggest that developers' fear can be harnessed to improve estimation.

Humans are notoriously bad at temporal reasoning and programmers are even-more-notoriously bad at guessing how long a given piece of work will take. Agile teams work around this by estimating effort in 'story points' that measure the comparative size of tasks. We might not be sure how long designing the new schema will take, but we assign it 2 story points because we know it will take twice as long as another task we gave 1 point.

The rate at which a team completes points is known as its 'velocity' and can be determined by examining the team's progress over time. It's a robust, self-correcting system, but I've observed two drawbacks on projects I've worked on:
  • The confidence of estimates isn't captured.
  • The technical debt incurred by a particular piece of work is ignored altogether.
A story that has a chance of going horribly wrong and that will in any case cripple the system's architecture could receive a low story point score because it (probably) won't take long to implement. Confronted with such a story, developers are terrified - and helpless.
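To see what the conventional arithmetic captures - and what it leaves out - here is a minimal sketch in C with made-up numbers: velocity is just the average points completed per recent iteration, and a forecast is the remaining backlog divided by that average. Nothing in the calculation says anything about confidence or technical debt:
#include <stdio.h>

int main(void)
{
  /* Invented figures, for illustration only. */
  int completed_points[] = {11, 9, 13, 10};  /* points finished in the last four iterations */
  int iterations = 4;
  int backlog_points = 60;                   /* estimated points remaining */

  int total = 0;
  for(int i = 0; i < iterations; i++)
    total += completed_points[i];

  double velocity = (double) total / iterations;
  printf("Velocity: %.1f points per iteration\n", velocity);
  printf("Forecast: about %.1f iterations remaining\n", backlog_points / velocity);
  return 0;
}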

I propose that we estimate in terms of 'fear points' to give product owners a disincentive to prioritise dangerous stories. The question we ask in estimation meetings shouldn't be "how big is this task?" but "how afraid does this task make you?".

So long as the developers have an appropriate level of professional cowardice, a story's fear points will reflect the danger the story represents to the project's current and future timelines. And I think that 'danger' is as useful to communicate to management as 'size'.

If product owners want to negotiate a reduction in a story's fear points then they need to reduce uncertainty, remove risky features and compromise on ideas that would incur a lot of technical debt. They could also prioritise other stories that improve the team's confidence in the development process, like improved Continuous Integration.

Whenever I have seen developers afraid of a feature, it has turned out badly. It ends up buggy, expensive to change and probably doesn't serve its intended purpose. By gauging developers' fear, agile organisations have an opportunity to avoid trouble before it happens.

If your developers are afraid of a feature - be very afraid.