Technical debt is a very useful concept for explaining the consequences of dirty code to management. However, I have a problem with the debt metaphor. The phrase "technical debt" implies that it's possible to avoid the debt: if I don't write shoddy code today, I won't have to pay for it tomorrow.
This obscures the fact that, though dirty code costs more than clean code, every line of code impedes your agility. Sometimes product owners ask for features that compromise a system's architecture or domain model. When I've tried to describe the technical debt that an awkward feature will incur, I've (quite reasonably) been asked how much effort it would take to "do it properly". I'm stumped, because no matter how thoroughly I implement the feature, it will still cause problems down the line.
Sometimes I fall back on depreciation, which I can use to explain anything that reduces the system's ability to meet future needs. Unlike debt, which can be paid off, depreciation isn't automatically reversible. I've also considered that fear-driven estimation might produce estimates that more accurately reflect the long-term cost of a story.
I don't want to see the technical debt analogy deprecated, but I do want to encourage people to think critically about how they use it, because all metaphors have their limits.
"Let us concentrate on explaining to human beings what we want a computer to do"
Friday, 29 October 2010
Sunday, 24 October 2010
As a stakeholder
A common template for user stories is "As a user, I want X, so that Y". This forces stakeholders to make the business value of each story explicit and encourages consistency.
However, there are some stories that this doesn't make sense for, including ones that are to the business' advantage and the users' detriment. Stating all stories in terms of users' wants can result in bizarre stories that conceal who has a stake in their completion:
As a user, I want my DVDs to not work in other regions, so that I have to buy them again if I move countries.

As much as we focus on users, we don't build commercial software for them. It just so happens that satisfying users is a necessary part of achieving our other aims - like making money.
Users are stakeholders, but they aren't the only stakeholders. If we revise the template to "As a stakeholder, I want", then we're able to state anti-user stories much more naturally:
As the sales department, I want to prevent DVDs bought in one region from being played in another, so that I can release and price DVDs in different markets independently.

Thanks to @MrsSarahJones for pointing this out to me.
Saturday, 16 October 2010
Tests are facts. Code is theory.
Programmers have turned to science to help resolve the software crisis. But they're doing it wrong.
Science envy
Programmers have science envy. We feel that, unlike much of our code, science works. Scientists have spent hundreds of years honing a methodology that helps them assimilate new knowledge and correct error, while we have spent decades frantically accumulating complexity that we can't handle. Strangely, scientific theories become more accurate over time, whereas software systems often decay.

The software industry has tried to learn from science and engineering's success. We call our programming degrees "Computer Science" and "Software Engineering", though they are neither. "Computer Science" students do almost no experiments. The "Software Engineering" concept of exhaustive up-front design has become so discredited that even those who can't imagine any other way feel obliged to pretend that they "don't do Waterfall".
Of course, science and engineering are just analogies when applied to programming. They are meant to be useful ways of imagining our profession, not to be literally true. But in their naive form, I don't think analogies between programming and science are very useful. If we want to benefit from scientific rigour, we need to be more rigorous in how we appropriate scientific concepts.
Scientific testing
Some software testers have used the scientific method as a way of framing their testing activities. For example, David Saff, Marat Boshernitsan and Michael D. Ernst explicitly cite Karl Popper and the scientific method in their paper on test theories. Test theories are invariant properties possessed by a piece of code, which Saff et al attempt to falsify over a wide range of data points with an extension to the JUnit testing framework.

I find the reciprocal of this approach useful when debugging. I start with a defect, form a theory as to its cause, then design a test to try to falsify that theory. If I suspect that the issue is caused by rogue JavaScript, I'll disable JavaScript and attempt to reproduce the issue. If I can, I've disproved my theory and I need to find another explanation. This helps me to eliminate false causes and gradually home in on the bug.
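A minimal sketch of a test theory, using JUnit 4's experimental theories extension (the one Saff et al describe); the absolute-value invariant here is just a hypothetical example:

```java
import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class AbsoluteValueTheories {

    // Data points the runner uses to try to falsify each theory.
    @DataPoints
    public static int[] samples = { -3, -1, 0, 2, 7, Integer.MAX_VALUE };

    // A "theory": an invariant the code under test should satisfy for
    // every data point. A single counterexample falsifies it.
    @Theory
    public void absoluteValueIsNeverNegative(int x) {
        assertTrue(Math.abs(x) >= 0);
    }
}
```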
The problem with analogies that treat tests as theories and code as a phenomenon is that they tell us nothing about how to write code. The software under test is like gravity, a chemical reaction or the weather. It may or may not have an underlying structure and beauty, but any insights we gain during testing are inevitably after-the-fact.
Worse, they are static models. When software changes over time, the knowledge gathered through "scientific testing" may no longer apply. The scope of scientific testing is confined to a specific version of the software. For example, a tested and verified "theory" about the memory profile of an application may become invalid when a programmer makes a small change to a caching policy.
Tests are facts. Code is theory.
Science's strength is its ability to assimilate new discoveries. If we want to share in its success, a scientific model of software development needs to preserve science's adaptability.

We can go some way to achieving this by reversing the roles of testing and coding in the scientific testing model. Tests are facts. Code's role is to be a theory that explains those facts as gracefully and simply as possible.
New requirements mean new tests. New tests are newly discovered facts that must be incorporated into the code's model of reality. Software can be seen as a specialised theory that attempts to embody what the stakeholders want the application to do.
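As a sketch of this framing (the library-fine domain below is invented purely for illustration), each test records a fact, and the code under test is the current theory that explains all the facts gathered so far:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LateFineTest {

    // The code under test: our current "theory" of what stakeholders want.
    static int fineInPence(int daysOverdue) {
        return 25 * Math.max(daysOverdue, 0);
    }

    // Each test is a fact the theory must explain.
    @Test
    public void overdueLoansIncurADailyFine() {
        assertEquals(75, fineInPence(3));
    }

    @Test
    public void loansReturnedOnTimeIncurNoFine() {
        assertEquals(0, fineInPence(0));
    }
}
```

When a new requirement arrives, it lands here first as another test; the theory is then revised until it explains the enlarged set of facts.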
How does that help us?
Once we accept code as a theory, we are in a position to justify employing the most powerful weapon in science's armoury - Occam's razor. Our role is to write the simplest possible code that is consistent with the facts/tests/requirements. Whenever we have the opportunity to eliminate concepts from our code, we should.

Simple code isn't just cheaper. It's more valuable too, because it's easier to change and extend. We can justify this with reference to scientists' experience that the simplest theory is the most likely to survive subsequent discoveries.
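A hypothetical illustration of the razor: both implementations below are consistent with the same facts, but the simpler theory is the one more likely to survive the next fact.

```java
public class Pricing {

    // Facts so far: price(1) == 10, price(2) == 20, price(5) == 50.

    // A theory full of special cases: it fits today's facts,
    // but every new fact forces another branch.
    static int priceWithSpecialCases(int quantity) {
        if (quantity == 1) return 10;
        if (quantity == 2) return 20;
        if (quantity == 5) return 50;
        throw new IllegalArgumentException("unknown quantity: " + quantity);
    }

    // The simpler theory: it explains the same facts, and accommodates
    // new ones, like price(7) == 70, without changing at all.
    static int price(int quantity) {
        return quantity * 10;
    }

    public static void main(String[] args) {
        System.out.println(priceWithSpecialCases(5)); // 50
        System.out.println(price(5));                 // 50: same fact, fewer concepts
    }
}
```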
As new requirements arrive and our understanding of the domain deepens, we have the opportunity to refactor. Refactoring isn't rework or throwing away effort. Refactoring enhances code's value by incorporating new knowledge of what we want our software to do. This could be by adding functionality or by reducing complexity. Either makes the software as a whole more valuable.
Science celebrates refactoring. Each new piece of evidence clarifies scientists' understanding of phenomena and helps yield more useful theories. Often these refinements are small, but occasionally an Einstein will have an insight that supersedes Newton's laws of motion. Domain-driven design founder Eric Evans describes such pivotal moments on software projects as "breakthroughs".
Non-developers often assume an application is invariably more valuable with a feature than without it. Yet the example of special relativity allows us to explain otherwise. Newton's laws of motion are perfectly adequate for ordinary use. Unless we are interested in bodies moving close to the speed of light, it's not worth bothering with the additional complexity Einstein's theories bring.
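To make the analogy concrete, here is a small sketch comparing the two theories at an everyday speed; the relativistic correction is far too small to justify the extra complexity:

```java
public class KineticEnergy {

    static final double C = 299_792_458.0; // speed of light, m/s

    // Newton: perfectly adequate for ordinary use.
    static double newtonian(double massKg, double speedMps) {
        return 0.5 * massKg * speedMps * speedMps;
    }

    // Einstein: only worth the complexity near the speed of light.
    static double relativistic(double massKg, double speedMps) {
        double gamma = 1.0 / Math.sqrt(1.0 - (speedMps * speedMps) / (C * C));
        return (gamma - 1.0) * massKg * C * C;
    }

    public static void main(String[] args) {
        double v = 30.0; // roughly motorway speed, m/s
        // Both print approximately 450 J; at this speed the relativistic
        // correction is smaller than double-precision rounding error.
        System.out.println(newtonian(1.0, v));
        System.out.println(relativistic(1.0, v));
    }
}
```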
If stakeholders are willing to accept that the application targets the common case and excludes troublesome edge cases, they will enjoy software that is simpler and therefore cheaper and more valuable. Sometimes, there is value in absent features. Always, there is value in simpler code.
Crave simplicity. Celebrate deletion. If science responded to new information by adding special cases, then science would be in as big a mess as the software industry. As you incorporate new requirements, attempt to refine your code so that it remains flexible enough to accommodate tomorrow's requirements. Otherwise, your code will become less and less fit for its purpose, which is to provide business value.
Conclusion
When John Maynard Keynes was attacked for repeatedly revising his economic theories, he said, "When the facts change, I change my mind – what do you do, sir?" Take the same attitude with your code, but treat requirements and tests as your facts. And remember, your code is just your best approximation of what your stakeholders want it to be.

Saturday, 2 October 2010
My agile canon
As much as I enjoy reading blog posts, they cannot match the sustained argument from a well-written book. Fortunately, the agile movement has many articulate advocates who have been busy committing their thoughts to paper (and eReader) over the past few years.
Here are some agile books that I recommend:
- Extreme Programming Explained by Kent Beck is the original XP manifesto.
- The Art of Agile Development by James Shore is a comprehensive reference for agile/XP.
- Clean Code by Uncle Bob is a detailed manual for producing high quality code, right down to variable naming.
- The Art of Unit Testing (with examples in .NET) by Roy Osherove is the only good book I've found specifically on unit testing.
- Working Effectively with Legacy Code by Michael Feathers is an interesting guide to bringing legacy code under test and under control.
- I'm currently reading Agile Estimating and Planning by Mike Cohn, which covers story points, sprint length, agile planning and many other topics that are as important for managers to understand as for developers.
- Two very reputable books on use cases and user stories are Writing Effective Use Cases by Alistair Cockburn and User Stories Applied by Mike Cohn, but I've read neither (yet).