Sunday, 19 October 2014

Types don't substitute for tests

When reading discussions about the benefits of types in software construction, I've come across the following claim:
When I use types, I don't need as many unit tests.
This statement is not consistent with my understanding of either types or test-driven design. When I've inquired into the reasoning behind the claim, it often boils down to the following:
Types provide assurance over all possible arguments (universal quantification). Unit tests provide assurance only for specific examples (existential quantification). Therefore, when I have a good type system I don't need to rely on unit tests.
This argument does not hold in my experience, because I use types and unit tests to establish different kinds of properties about a program.

Types prove that functions within a program will terminate successfully for all possible inputs (I'm ignoring questions of totality for the sake of simplifying the discussion).

Unit tests demonstrate that functions yield the correct result for a set of curated inputs. The practice of test-driven design aims to provide confidence that the inputs are representative of the function's behaviour through the discipline of expanding a function's definition only in response to an example that doesn't yet hold.
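To make the contrast concrete, here is a minimal sketch of such a curated, example-based check. It is my illustration rather than anything from the discussions I'm responding to, and it assumes Haskell and the hspec library purely for the sake of having something runnable:

    import Data.List (sort)
    import Test.Hspec

    -- A hypothetical function under test. The examples below are curated inputs
    -- that drove out its definition; they are not a proof over all inputs.
    median :: [Double] -> Double
    median xs
      | odd n     = sorted !! mid
      | otherwise = (sorted !! (mid - 1) + sorted !! mid) / 2
      where
        sorted = sort xs
        n      = length xs
        mid    = n `div` 2

    main :: IO ()
    main = hspec $
      describe "median" $ do
        it "returns the middle element of an odd-length list" $
          median [3, 1, 2] `shouldBe` 2
        it "averages the two middle elements of an even-length list" $
          median [4, 1, 3, 2] `shouldBe` 2.5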

All of the examples that I use in my practice of test-driven design are well-typed, whether or not I use a type system. I do not write unit tests that exercise the behaviour of the system in the presence of badly-typed input, because in an untyped programming language it would be a futile exercise and in a typed programming language such tests would be impossible to write.

If I write a program using a type system, I still require just as many positive examples to drive my design and establish that the generalisations I've created are faithful to the examples that drove them. Simply put, I can't think of a unit test that I would write in the absence of a type system that I would not have to write in the presence of one.

I don't use a type system to prove that my functions return the output I intend for all possible inputs. I use a type system to prove that there does not exist an input, such that my functions will not successfully terminate (again, sidestepping the issue of non-total functions). In other words, a type checker proves the absence of certain undesirable behaviours, but it does not prove the presence of the specific desirable behaviours that I require.
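A tiny sketch of my own (again in Haskell, which is incidental to the argument) shows the gap: the definition below satisfies the type checker completely, and yet it is not the function I intended. Only a positive example exposes that.

    -- The type checker proves this never yields a value of the wrong shape,
    -- and (setting totality aside) never goes wrong at runtime...
    sortInts :: [Int] -> [Int]
    sortInts xs = xs   -- ...but this well-typed definition doesn't sort anything.

    -- A single curated example is enough to reject it:
    --   sortInts [3, 1, 2] `shouldBe` [1, 2, 3]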

Type systems are becoming more sophisticated and are capable of proving increasingly interesting properties about programs. In particular, dependently typed programming languages like Idris can be used to establish properties such as the non-emptiness of a list or the parity of addition.
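Idris is the example above, but a rough analogue (my own sketch, using Haskell's GADTs and DataKinds rather than full dependent types) gives the flavour: the type of safeHead rules out empty lists, so applying it to one is a compile-time error rather than a runtime fault.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Natural numbers promoted to the type level, used to index vectors by length.
    data Nat = Z | S Nat

    -- A list whose length is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- The argument type excludes the empty vector, so no runtime check is needed.
    safeHead :: Vec ('S n) a -> a
    safeHead (VCons x _) = x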

But unless the type system proves that there is exactly one inhabitant of a particular type, I still require a positive example to check that I've implemented the right well-typed solution. And even if the type provably has only one inhabitant, I would still likely write a unit test to help explain to myself how the abstract property enforced by the type system manifests itself.
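For instance (my example, not one from the discussions above), parametricity guarantees that the polymorphic identity type has exactly one total inhabitant, so a test adds little there; a monomorphic list transformation has many inhabitants, so an example is still needed to say which one I meant:

    -- Exactly one total function has this type, so no example is needed
    -- to pin down its behaviour.
    f :: a -> a
    f x = x

    -- Many functions inhabit this type (id, reverse, sort, ...), so a positive
    -- example is still required to single out the one I actually wanted.
    g :: [Int] -> [Int]
    g = reverse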

A type system is complementary to unit tests produced by test-driven design. The presence of a type system provides additional confidence as to the correctness of a program, but as I write software it does not reduce the need for examples in the form of unit tests.

Sunday, 12 October 2014

Hiding the REPL

I depend heavily on Clojure's REPL, because it's where I write music. Over time, however, I've become less focused on directly interacting with the REPL, and pushed it more and more into the background.

I use Vim Fireplace, which gives me the ability to evaluate forms in a buffer by sending them to an nREPL instance. There's also a REPL available within Fireplace, but I find I only use it for simple commands like stopping a piece of music or printing a data structure.

Speaking to Aidy Lewis on Twitter today, I've come to realise that there may be two different models for REPL-driven development.

Aidy described a model where the REPL is ever-present in a split-window. This brings the REPL to the foreground, and makes it conveniently available for experimentation. I would describe this as a side-by-side model.

On the other hand, I treat the buffer itself as my REPL. I write and refine forms, evaluating them as I go. If I want to experiment, I do so by writing code in my buffer and either evolving it or discarding it. My navigation and interaction are as they would be in any other Vim session, punctuated by occasional re-evaluation of something I've changed. This seems to me more like a layered model, with my buffer on the surface and the REPL below.

The reason I value this mode of interaction is that it makes me feel more like I'm directly interacting with my code. When I make a change and re-evaluate the form, I have the sense that I'm somehow touching the code. I don't have a mental separation between my code-as-text and the state of my REPL session. Rather, they're two ways of perceiving the same thing.

Saturday, 7 June 2014

Pragmatic people

She came from Greece. She had a thirst for knowledge.
She studied Haskell at Imperial College -
that's where I caught her eye.
She told me monads are burritos too.
I said, "In that case Maybe I have a beef with you."
She said, "Fine",
and then in thirty metaphors time she said,

"I wanna live like pragmatic people.
I wanna do whatever pragmatic people do.
Wanna sleep with pragmatic people,
I wanna sleep with pragmatic people like you."
Oh, what else could I do?
I said, "I'll see what I can do."

I showed her some JavaScript.
I don't know why,
but I had to start it somewhere,
so it started there.

I said, "Pretend undefined == null"
And she just laughed and said, "Oh you're so droll!"
I said, "Yeah?
I can't see anyone else smiling in here."

"Are you sure you want to live like pragmatic people?
You wanna see whatever pragmatic people see?
Wanna sleep with pragmatic people,
You wanna sleep with pragmatic people like me?"
But she didn't understand.
And she just smiled and held my hand.

Rent a flat above a shop,
Cut your hair and get a job.
Write a language in 10 days
and you're stuck with it always.

But still you'll never get it right,
'cos when you're debugging late at night,
watching errors climb the wall,
if you just used types you could stop it all, yeah.

You'll never live like pragmatic people.
You'll never do whatever pragmatic people do.
You'll never fail like pragmatic people.
You'll never watch Curry-Howard slide out of view.
Time flies like a Kleisli arrow
because your perspective is so narrow.

Sing along with the pragmatic people.
Sing along and it might just get you through.
Laugh along with the pragmatic people.
Laugh along even though they're laughing at you.
And the stupid things that you do,
because you think that proof is cool.

Like a dog lying in a corner,
faults will bite you and never warn you.
Look out they'll tear your insides out.
Pragmatists hate a tourist,
especially one who thinks it's all such laffs.
Yeah and the stains of faulty reasoning
will come out in the maths.

You will never understand
how it feels to live your life
with no meaning or control.
And I'm with nowhere left to go.
You are amazed untyped languages exist
and they burn so bright
while you can only wonder.

Why rent a flat above a shop,
cut your hair and get a job?
Dynamic languages are cool.
Pretend you never went to school.

Still you'll never get it right,
'cos when you're debugging late at night,
watching stacktraces 10 screens tall,
if you had types you could stop it all, yeah.

Never live like pragmatic people.
Never do what pragmatic people do.
Never fail like pragmatic people.
Never watch Curry-Howard slide out of view,
and on error resume,
because there's nothing else to do.

With apologies to Jarvis Cocker.

Sunday, 27 April 2014

Microservices relax

Recently I blogged explaining that I view microservice architectures as a collection of strategies to independently vary system characteristics. I suggested that these strategies could be framed in terms of a generic architectural constraint:
X can be varied independently of the rest of the system.
Mark Baker pointed out that while I employed the language of architectural constraints, my one "isn't upon components, connectors, and data." He referred me to Roy Fielding's excellent thesis, famous for defining the REST architectural style, for a rigorous description of how architectural constraints can be described.

Thinking about the problem further, I considered what kind of data flows through the microservice systems I've worked with. I realised that many of the qualities that characterise a microservice architecture are not visible during a single system run. Microservice architecture can only be explained in terms of components, connectors and data if the constituent services are themselves regarded as data, and the processes that evolve them as components and connectors.

I don't think that's too much of a stretch. For systems that are deployed via continuous delivery pipelines, the automated deployment infrastructure is part of the system itself. If the build is red, or otherwise unavailable, then the production system loses one of its most important capabilities - to evolve. The manner in which the code/data that specifies the behaviour of the system evolves can be subject to similar constraints to the introduction of novelty into conventional system data.

Viewed that way, it becomes apparent that the constraint I proposed was the inverse of what it should have been, namely:
X's variation must be synchronised with the rest of the system.
Monolithic architectures are subject to this constraint for various values of 'X'. Microservice architectures are produced via the relaxation of this constraint along different axes.

The process of starting with a highly constrained system and gradually relaxing constraints that it is subject to is the opposite of the approach Fielding advocates in his thesis. When deriving REST, Fielding starts with the null style and layers on six constraints to cumulatively define the style.

However, the microservice style is a reaction against highly-constrained monolithic architectures, so it has taken the opposite approach, which is to peel away constraints layer by layer in order to introduce new flexibility into the system.

For example, a monolithic system might consist of five functional areas, all integrated within a single codebase. For a functional change to make its way to production, the data describing the system's behaviour (its source code) is modified, checked in and moved through the deployment pipeline into production. This must happen for all five functional areas as one, because the transaction of deployment encompasses all five pieces of data.

This system is devolved into a system of five microservices, each living in an independent source control root. A change to any one of those services can be checked in and move through its pipeline to production independently of the others. The deployment transactions are now smaller in scope, encompassing only a single service at a time.

The change in the scope of a single deployment generates options for the maintainers of the system. They can now release changes to one of the five functional areas without downtime to any of the other four.

But, as is the case with real options, this comes at a price. If the five areas interact, their APIs need to be versioned and kept backwards compatible. Changes that crosscut multiple services need to be carefully orchestrated to avoid breaking changes, whereas in the original monolithic system such refactorings might have been made with impunity.
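As a small illustration of that coordination cost, here is one common way to keep a change backwards compatible: make the new field optional, so that payloads from not-yet-upgraded services still parse. The sketch is mine, in Haskell with the aeson library; the type and field names are invented, and nothing in the example above prescribes any particular technology.

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Aeson (FromJSON (parseJSON), withObject, (.:), (.:?))

    -- A hypothetical message exchanged between two of the services.
    data Customer = Customer
      { name     :: String
      , nickname :: Maybe String  -- added in a later release; older producers omit it
      }

    instance FromJSON Customer where
      parseJSON = withObject "Customer" $ \o ->
        Customer
          <$> o .:  "name"
          <*> o .:? "nickname"  -- tolerate absence, so old and new versions interoperate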

Generally speaking, for each constraint that is relaxed in a microservice architecture, there is a corresponding real option generated and a corresponding coordination cost incurred. For good explanations of real options and how they apply to software development, see the various works of Chris Matts, including Commitment, the book he wrote with Olav Maassen.

I think that it's a productive approach to describe microservice architectures in terms of the relaxation of architectural constraints relative to a monolithic architecture. This provides us with a model for analysing not only the runtime properties of the system, but also its evolution and the options it affords its maintainers.

Sunday, 13 April 2014

Review: Adaptive web design

Adaptive web design by Aaron Gustafson is a good book on an important topic. How the web evolves, and how it caters to the broad spectrum of users who now make up the majority of the population in affluent societies, is a crucial consideration.

The book is a curious mixture of tips, philosophy and polemic. Gustafson includes specific technical detail, for example on microformats, the structuring of CSS selectors and ARIA roles.

He also treats progressive enhancement as a moral issue, which is both a reasonable point of view and a rhetorical device. For example, he describes the technique of graceful degradation as "fault tolerance's superficial, image-obsessed sister", and recounts how "smart folks working on the web began to realize" the shortcomings of "graceful degradation's emphasis". He thus frames issues in such a way as to ensure that the reader's conclusion cannot be anything other than agreement, which is a time-honoured technique of polemic.

This book is a good overview of a topic that can fade into the background under the pressure of deadlines and corporate disregard for users, so it's great that Gustafson has put his energy into a book that software developers can rally around.

However it has two serious flaws, one of omission and one of commission, that make it less useful to a contemporary software team than it otherwise might have been.

The flaw of omission is that it does not engage with the recent crop of client-side MVC applications. Adaptive web design was published in 2011, two years after the release of Google's framework AngularJS, but it contains no mention of this style of application development, which challenges some of the fundamental assumptions of the progressive enhancement movement.

Client-side MVC is, literally speaking, progressive enhancement, but it's not of the sort that Gustafson lionises. These applications, usually constructed with the help of extensive JavaScript frameworks, place rich interactivity on top of APIs, whereas traditional progressive enhancement layers unobtrusive interactivity on top of HTML.

The major difference from Gustafson's vision is that the HTML comes on top of the JavaScript rather than the other way around. Data is pulled into the browser by Ajax calls before being rendered directly into the DOM by client-side templating engines. This allows the application to keep a model of its state and rules in the browser, and to update that model and apply those rules in the interests of a rich experience without the latency of calls back to the server.

I don't know how such applications should be regarded. They don't, for example, allow users to access content if JavaScript is unavailable. This may or may not have the same urgency as it did when Adaptive web design was written three years ago, but in this book at least, Gustafson is unable to give us guidance.

That substantially diminishes the relevance of this book, as the most pressing question most contemporary teams will have about progressive enhancement is how it relates to the major movement in the JavaScript world towards client-side applications.

The flaw of commission, which makes it difficult to treat Adaptive web design as even a point-in-time reference, is the way that Gustafson uses definitions and terminology.

One such example is the term "accessibility". Gustafson takes the perfectly reasonable view that accessibility is a broader issue than just the needs of people who use assistive technologies. The problem with this definition is that it's incompatible with the common usage of the term.

Interestingly, this is an example of a failure of progressive enhancement of language itself. Specialising normal language into terms of art allows the common punter to at least get the gist of what's being discussed. Expanding the definition of terms is much more confusing, because it leads to sentences where the word simply does not make sense under its old definition.

But all this becomes moot when we get to the "Accessibility" chapter itself, where Gustafson reverts to the classic idea of accessibility being about people with physical impairments that might impede their access to content. I guess Gustafson just found it too difficult to sustain his own revised definition.

But the terminology that grates on me most is the assertion that progressive enhancement can be defined as "fault tolerance". The most obvious issue is that there are many instances of progressive enhancement that don't involve any "fault". For example, it doesn't make sense to speak of a fault when my browser doesn't specifically recognise the XFN microformat, yet my ability to browse the page regardless is, as Gustafson points out, a classic case of progressive enhancement.

The greater problem is that there are times when fault tolerance and progressive enhancement are actively opposed. For example, when browsers tolerate malformed markup, it becomes harder to add new features to HTML later, because any new syntax is more likely to clash with the "fault tolerant" HTML implementations of older browsers.

The pity is that I don't think this particular confusion is necessary. Progressive enhancement is a very specific kind of fault tolerance, and Gustafson's attempt to compare progressive enhancement to Galapagos finches is entirely misleading, if entertaining. If Gustafson had chosen an alternative term to "fault tolerance", perhaps "graceful gradation" (to contrast with "graceful degradation"), then the same point could have been made with much greater clarity.

Adaptive web design is a solid book, written with energy and with a strong sense of the historical development of the web. If a team reads this book together, it will spark a lot of important discussions, but its occasional problems with terminology and its failure to engage with client-side MVC mean it will not settle them.

Sunday, 16 March 2014

The microservice declaration of independence

Recently, an article on microservice architectures by my colleagues Martin Fowler and James Lewis has caused some mild controversy in the Twittersphere. For example, Kelly Sommers said:
I’m still trying to figure out what is distinct period. Maybe I’ll accept the name once I see a diff
I have yet to find someone who believes it a bad idea to build systems from small, loosely-coupled services that can be independently changed and deployed. The hesitance that some have shown for the term "microservice architecture" derives from their doubt that the term adds anything above and beyond the well-worn term service-oriented architecture (here described by Stefan Tilkov).

This is a serious charge. The value that concepts like "microservice" have is in their use for communication and education. If all a term does is to add to the menagerie of poorly-defined architectural buzzwords, then we might be better off sticking with the poorly-defined buzzwords we're already familiar with.

On the other hand, that ideas of such uncontroversial value are still worthy of discussion is some indication that there is room for improvement in the current conceptual landscape.

In order for "microservice architecture" to be a useful way of characterising systems, it needs an accepted and well-understood definition. Mark Baker laments the lack of a precise description, because he feels its absence compromises the value of the term:
Really unfortunate that the Microservices style isn't defined in terms of architectural constraints #ambiguity #herewegoagain
In their article, Martin and James have this to say about the style's definition:
We cannot say there is a formal definition of the microservices architectural style, but we can attempt to describe what we see as common characteristics for architectures that fit the label.
I'm still optimistic that a clearer picture will emerge in time, though it might not be as unambiguous as the REST architectural style, which is defined in terms of six constraints.

My own opinion is that microservice architectures can be understood through a single abstract architectural constraint which can be interpreted along many different degrees of freedom.
X can be varied independently of the rest of the system.
What might X be? There are many qualities that might be locally varied in an architecture to beneficial effect. Here are some ways we could introduce novelty into a service without disturbing the greater system:
  • Select implementation and storage technology
  • Test
  • Deploy to production
  • Recover from failure
  • Monitor
  • Horizontally scale
  • Replace
There are other aspects of systems that are harder to formalise, but which the human beings who work on our software systems might wish to deal with without being troubled by the fearsome complexity of the whole architecture. In any case, I don't wish to imply that this is an exhaustive list. To me, the commonality lies in the concept of independent variation, not the kinds of variation themselves.

Monolithic architectures are monolithic in the sense that they have few or no degrees of freedom. This may be a perfectly valid choice. The more degrees of freedom available in a system, the more coordination is required to keep the disparate parts of the system working together.

For example, a human shoulder is well articulated. This makes it an amazingly flexible subsystem. I can touch my toes, reach above my head and even scratch my own back. On the other hand, the infrastructure required to support such free movement is quite complex and therefore fragile. My right shoulder is currently broken. However, if it were simpler and flexed only in one direction there is a good chance that it would have been able to withstand the impact undamaged.

(Luckily, my shoulders are able to fail independently and therefore my left is still in good working order!)

Systems suffer when architectures are not articulated in the directions they need to be. We may wish to horizontally scale, but be unable to isolate two parts of the system which exhibit very different behaviour under load. We might wish to enable two teams of people in different countries to work on different features, but find that they step on each other's toes because they are hacking on the same deployable artifact.

I value the concept of independence because it gives me a way of evaluating architectural choices beyond "microservices are good" and "microservices are bad". Degrees of architectural freedom are valuable, but they may or may not be worth the effort required to sustain them.

For example, independent deployment is useful because you don't have to stop the whole world to introduce a change in a small part of the system. To steal an example from conversation with James, a bank would not want to stop processing credit card transactions in order to release a new feature on its mobile app.

However, once you move away from monolithic deployments, API versioning and backwards compatibility are required to coordinate releases. That can be hard work, and if you never need to exercise your ability to upgrade a part of the system in isolation, then that work might be wasted.

I've worked on a system which had two services that talked to each other through a poorly designed API. Every time one service changed, the other had to as well. Backwards compatibility was almost impossible to maintain due to some very bad design choices we made early on. Consequently, we almost always had to do double deployments.

In the end we merged the two repositories holding the two services and started deploying them simultaneously. We lost the ability to take one to production without the other, but in our situation that was more a trap than a valuable feature. The two services continued to be distinct as runtime entities, but they now evolved along a single timeline.

This brings me to how I think service-oriented architecture might relate to microservice architecture. Stefan's fourth principle of service-oriented architectures is that services be autonomous:
Services can be changed and deployed, versioned and managed independently of each other.
Personally, I see that as the principal attribute from which all the others derive. However, under that definition the two services that my team merged no longer qualify as such, at least not in terms of traditional service-oriented architecture. We consciously gave up the ability to change, deploy and version them independently.

However, I would argue that they still qualify as microservices. We could understand and discuss them separately. They had distinct responsibilities. We could monitor, diagnose and restart them separately in production. If we'd wanted to horizontally scale one without the other, we could have. It just happened that on the evolution axis we did not require independence.

According to the terms of the suggested constraint family that "X can be varied independently of the rest of the system", traditional service-oriented architecture could be seen as a specific set of values for X. Some of the degrees of freedom that were important ten years ago are less important now. Most are still relevant. New kinds of variation have risen to prominence.

I don't think that microservice architectures are a break from the past. I would probably characterise the movement as a kind of neo-SOA that is trying to get back to the roots of the original movement. But I do think that microservice architectural analysis has a different and more general preoccupation than service-oriented architectural analysis because it revolves around the notion of independent variation itself, rather than prescribing certain kinds of independence.