Sunday, 13 April 2014

Review: Adaptive web design

Adaptive web design by Aaron Gustafson is a good book on an important topic. How the web evolves, and how it caters to the broad spectrum of its users, who now make up the majority of the population in affluent societies, is a crucial consideration.

The book is a curious mixture of tips, philosophy and polemic. Gustafson includes specific technical detail, for example on microformats, structuring CSS selectors and ARIA roles.

He also treats progressive enhancement as a moral issue, which is both a reasonable point of view and a rhetorical device. For example, he describes the technique of graceful degradation as "fault tolerance's superficial, image-obsessed sister", and writes of how "smart folks working on the web began to realize" the shortcomings of graceful degradation's emphasis. He thus frames the issues in such a way that the reader's conclusion can hardly be anything other than agreement, which is a time-honoured technique of polemic.

This book is a good overview of a topic that can fade into the background under the pressure of deadlines and corporate disregard for users, so it's great that Gustafson has put his energy into a book that software developers can rally around.

However, it has two serious flaws, one of omission and one of commission, that make it less useful to a contemporary software team than it otherwise might have been.

The flaw of omission is that it does not engage with the recent crop of client-side MVC applications. Adaptive web design was published in 2011, after the initial release of Google's AngularJS framework, but it contains no mention of this style of application development, which challenges some of the fundamental assumptions of the progressive enhancement movement.

Client-side MVC is, literally speaking, progressive enhancement, but it's not of the sort that Gustafson lionises. These applications, usually constructed with the help of extensive JavaScript frameworks, place rich interactivity on top of APIs, whereas traditional progressive enhancement layers unobtrusive interactivity on top of HTML.
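
As a rough sketch of the traditional approach (my own illustration, not an example from the book; the element ids and the /search endpoint are invented), the HTML form works on its own, and the script, where it happens to run, layers richer behaviour on top:

    // Traditional progressive enhancement: the markup below works without JavaScript.
    //
    //   <form id="search" action="/search" method="get">
    //     <input name="q"><button>Search</button>
    //   </form>
    //   <div id="results"></div>
    //
    // If this script runs, it enhances the form; if it doesn't, the form still submits.
    const form = document.querySelector<HTMLFormElement>("#search");

    if (form && "fetch" in window) {
      form.addEventListener("submit", async (event) => {
        event.preventDefault(); // enhancement: update in place instead of a full reload
        const query = String(new FormData(form).get("q") ?? "");
        const response = await fetch(`/search?q=${encodeURIComponent(query)}`);
        document.querySelector("#results")!.innerHTML = await response.text();
      });
    }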

The major difference from Gustafson's vision is that the HTML comes on top of the JavaScript rather than the other way around. Data is pulled into the browser by Ajax calls, then rendered by client-side templating engines directly into the DOM. This allows the application to keep a model of its state and rules in the browser, and to update that model and apply those rules in the interests of a rich experience, without the latency of calls back to the server.
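
By contrast, a client-side MVC application might look something like this sketch (again my own, with an invented /api/articles endpoint and field names): without JavaScript there is no content at all, because the HTML itself is produced in the browser:

    // Client-side MVC sketch: the server provides only a JSON API; the browser
    // holds the model and produces the markup.
    interface Article { title: string; body: string; }

    let model: Article[] = []; // application state lives in the browser

    function render(articles: Article[]): void {
      const markup = articles
        .map((a) => `<article><h2>${a.title}</h2><p>${a.body}</p></article>`)
        .join("");
      document.querySelector("#app")!.innerHTML = markup; // templating happens client-side
    }

    async function load(): Promise<void> {
      const response = await fetch("/api/articles"); // data arrives as JSON, not HTML
      model = await response.json();
      render(model);
    }

    load();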

I don't know how such applications should be regarded. They don't, for example, allow users to access content if JavaScript is unavailable. This may or may not have the same urgency as it did when Adaptive web design was written three years ago, but in this book at least, Gustafson is unable to give us guidance.

That substantially diminishes the relevance of this book, as the most pressing question most contemporary teams will have about progressive enhancement is how it relates to the major movement in the JavaScript world towards client-side applications.

The flaw of commission, which makes it difficult to treat Adaptive web design as even a point-in-time reference, is the way that Gustafson uses definitions and terminology.

One such example is the term "accessibility". Gustafson takes the perfectly reasonable view that accessibility is a broader issue than simply catering for people who use assistive technologies. The problem with this definition is that it's incompatible with the common usage of the term.

Interestingly, this is an example of a failure of progressive enhancement of language itself. Specialising normal language into terms of art allows the common punter to at least get the gist of what's being discussed. Expanding the definition of terms is much more confusing, because it leads to sentences where the word simply does not make sense under its old definition.

But all this is bunk when we get to the "Accessibility" chapter itself, where Gustafson reverts to the classic idea of accessibility being about people with physical needs that might impede them from accessing content. I guess Gustafson just found it too difficult to sustain his own revised definition.

But the terminology that most grates with me is the assertion that progressive enhancement can be defined as "fault tolerance". The most obvious issue is that there are many instances of progressive enhancement that don't involve "fault". For example, it doesn't make sense to talk about the fault of my browser not specifically recognising the XFN microformat, but my ability to browse the page regardless is, as Gustafson points out, a classic case of progressive enhancement.

The greater problem is that there are times when fault tolerance and progressive enhancement are actively opposed. For example, when browsers tolerate malformed markup, it becomes harder to add new features to HTML later, because those features are more likely to clash with the "fault tolerant" parsing of older browsers.

The pity is that I don't think this particular confusion is necessary. Progressive enhancement is a very specific kind of fault tolerance, and Gustafson's attempt to compare progressive enhancement to Galapagos finches is entirely misleading, if entertaining. If Gustafson had chosen an alternative term to "fault tolerance", perhaps "graceful gradation" (to contrast with "graceful degradation"), then the same point could have been made with much greater clarity.

Adaptive web design is a solid book, written with energy and with a strong sense of the historical development of the web. If a team reads this book together, it will spark a lot of important discussions, but its occasional problems with terminology and its failure to engage with client-side MVC mean it will not settle them.

Sunday, 16 March 2014

The microservice declaration of independence

Recently, an article on microservice architectures by my colleagues Martin Fowler and James Lewis has caused some mild controversy in the Twittersphere. For example, Kelly Sommers said:
I’m still trying to figure out what is distinct period. Maybe I’ll accept the name once I see a diff
I have yet to find someone who believes it a bad idea to build systems from small, loosely-coupled services that can be independently changed and deployed. The hesitance that some have shown for the term "microservice architecture" derives from their doubt that the term adds anything above and beyond the well-worn term service-oriented architecture (here described by Stefan Tilkov).

This is a serious charge. The value that concepts like "microservice" have is in their use for communication and education. If all a term does is to add to the menagerie of poorly-defined architectural buzzwords, then we might be better off sticking with the poorly-defined buzzwords we're already familiar with.

On the other hand, that ideas of such uncontroversial value are still worthy of discussion is some indication that there is room for improvement in the current conceptual landscape.

In order for "microservice architecture" to be a useful way of characterising systems, it needs an accepted and well-understood definition. Mark Baker laments the lack of a precise description, because he feels its absence compromises the value of the term:
Really unfortunate that the Microservices style isn't defined in terms of architectural constraints #ambiguity #herewegoagain
In their article, Martin and James have this to say about the style's definition:
We cannot say there is a formal definition of the microservices architectural style, but we can attempt to describe what we see as common characteristics for architectures that fit the label.
I'm still optimistic that a clearer picture will emerge, though it might not be as unambiguous as the REST architectural style, which is defined in terms of six constraints.

My own opinion is that microservice architectures can be understood through a single abstract architectural constraint which can be interpreted along many different degrees of freedom.
X can be varied independently of the rest of the system.
What might X be? There are many qualities that might be locally varied in an architecture to beneficial effect. Here are some ways we could introduce novelty into a service without wishing to disturb the greater system:
  • Select implementation and storage technology
  • Test
  • Deploy to production
  • Recover from failure
  • Monitor
  • Horizontally scale
  • Replace
There are other aspects of systems that are harder to formalise, but which the human beings who work on our software systems might wish to deal with without being troubled by the fearsome complexity of the whole architecture.
I don't wish to imply that the list above is exhaustive. To me, the commonality lies in the concept of independent variation, not in the kinds of variation themselves.

Monolithic architectures are monolithic in the sense that they have few or no degrees of freedom. This may be a perfectly valid choice. The more degrees of freedom available in a system, the more coordination is required to keep the disparate parts of the system working together.

For example, a human shoulder is well articulated. This makes it an amazingly flexible subsystem. I can touch my toes, reach above my head and even scratch my own back. On the other hand, the infrastructure required to support such free movement is quite complex and therefore fragile. My right shoulder is currently broken. However, if it were simpler and flexed only in one direction there is a good chance that it would have been able to withstand the impact undamaged.

(Luckily, my shoulders are able to fail independently and therefore my left is still in good working order!)

Systems suffer when architectures are not articulated in the directions they need to be. We may wish to horizontally scale, but are unable to isolate two parts of the system which exhibit very different behaviour under load. We might wish to enable two teams of people in different countries to work on different features, but find that they step on each others' toes because they are hacking on the same deployable artifact.

I value the concept of independence because it gives me a way of evaluating architectural choices beyond "microservices are good" and "microservices are bad". Degrees of architectural freedom are valuable, but they may or may not be worth the effort required to sustain them.

For example, independent deployment is useful because you don't have to stop the whole world to introduce a change in a small part of the system. To steal an example from conversation with James, a bank would not want to stop processing credit card transactions in order to release a new feature on its mobile app.

However, once you move away from monolithic deployments, API versioning and backwards compatibility are required to coordinate releases. That can be hard work, and if you never need to exercise your ability to upgrade a part of the system in isolation, then that work might be wasted.
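
As a rough sketch of what that coordination looks like in practice (my own illustration, with invented message shapes and field names, not anything from Martin and James's article), a consumer that wants to release independently of its producers ends up tolerating more than one version of a message:

    // Backwards compatibility as the price of independent deployment: old producers
    // may still be running, so the consumer accepts both shapes indefinitely.
    interface PaymentV1 { amount: number; }                    // original contract
    interface PaymentV2 { amount: number; currency: string; }  // later revision

    function normalise(message: PaymentV1 | PaymentV2): PaymentV2 {
      // assumption for illustration: missing currency defaults to AUD
      return "currency" in message ? message : { ...message, currency: "AUD" };
    }

If, as in my merged-repository example below, the two sides always release together anyway, this tolerance is effort spent sustaining a degree of freedom that is never exercised.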

I've worked on a system which had two services that talked to each other through a poorly designed API. Every time one service changed, the other had to as well. Backwards compatibility was almost impossible to maintain due to some very bad design choices we made early on. Consequently, we almost always had to do double deployments.

In the end we merged the two repositories holding the two services and started deploying them simultaneously. We lost the ability to take one to production without the other, but in our situation that was more a trap than a valuable feature. The two services continued to be distinct as runtime entities, but they now evolved along a single timeline.

This brings me to how I think service-oriented architecture might relate to microservice architecture. Stefan's fourth principle of service-oriented architectures is that services be autonomous:
Services can be changed and deployed, versioned and managed independently of each other.
Personally, I see that as the principal attribute from which all others derive. However, under that definition the two services that my team merged no longer qualify as such, at least not in terms of traditional service-oriented architecture. We consciously gave up the ability to change, deploy and version them independently.

However, I would argue they still qualify as microservices. We could understand and discuss them separately. They had distinct responsibilities. We could monitor, diagnose and restart them separately in production. If we'd wanted to horizontally scale one without the other we could have. It just happened that on the evolution axis we did not require independence.

According to the terms of the suggested constraint family that "X can be varied independently of the rest of the system", traditional service-oriented architecture could be seen as a specific set of values for X. Some of the degrees of freedom that were important 10 years ago are less important now. Most are still relevant. New kinds of variation have risen to prominence.

I don't think that microservice architectures are a break from the past. I would probably characterise the movement as a kind of neo-SOA that is trying to get back to the roots of the original movement. But I do think that microservice architectural analysis has a different and more general preoccupation than service-oriented architectural analysis because it revolves around the notion of independent variation itself, rather than prescribing certain kinds of independence.

Tuesday, 20 August 2013

Real live music

“Yes,” said Golg. “I have heard of those little scratches in the crust that you Topdwellers call mines. But that’s where you get dead gold, dead silver, dead gems. Down in Bism we have them alive and growing. There I’ll pick you bunches of rubies that you can eat and squeeze out a cupful of diamond juice. You won’t care much about fingering the cold, dead treasures of your shallow mines after you have tasted the live ones in Bism.”

- C. S. Lewis, The Silver Chair
I am fascinated by our collective fascination with live music. Why does it make a difference whether we access a piece of music through the thick dark air of a crowded pub or via a commercial recording and distribution process?

I question the mechanism, but I have no doubt as to the potency of its effect. I revel in live performance. I even listen to live albums - which I simultaneously find conceptually daft and utterly compelling.

My own pet theory is that liveness (life?) is a perception of the possibility of being otherwise. As each note is struck, plucked or sung, the audience knows that though the piece may be written a certain way, in a live rendition it ain't necessarily so. Any given note has the potential to be substituted, varied or flubbed.

But once a note has transformed from a musical intention into a perturbation of air and propagated through the room at (approximately) 340.29 metres per second, it drops down dead to the floor. The high note has been hit, or not. The high hat has been hit, or not. The blue note has been blown and the possibility of being otherwise has ceased.

Music-as-code offers a variation on this theme. If I, for example, describe my music with Clojure and Overtone, then I can render my music to my speakers. But I can also alter it at will. If I take care to directly represent the deep structure of my music in code, then I can easily make changes to instrumentation/tempo/swing that even editors and sequencers of binary sound files cannot.
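
To sketch the idea in a general-purpose language (a toy of my own in TypeScript, not an Overtone example), the point is that the piece is ordinary data, so tempo and swing become ordinary transformations over it:

    // Music as data: the piece is a value, so altering it is just a function.
    interface Note { pitch: string; start: number; duration: number; } // times in beats

    const riff: Note[] = [
      { pitch: "C4", start: 0,   duration: 0.5 },
      { pitch: "E4", start: 0.5, duration: 0.5 },
      { pitch: "G4", start: 1,   duration: 1 },
    ];

    // Doubling the tempo is halving every time value...
    const faster = riff.map((n) => ({ ...n, start: n.start / 2, duration: n.duration / 2 }));

    // ...and a swing feel is nudging the off-beats a little later.
    const swung = riff.map((n) =>
      n.start % 1 === 0.5 ? { ...n, start: n.start + 0.1 } : n
    );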

I can bring the power of iteration, of version control, of collective review, of unit testing, of my combined past and present selves, to bear on a piece of music. The possibility of being otherwise is preserved.

Music created with Overtone isn't just live - in a sense it's immortal. What's more, we now have the perfect retort to the unkind critic - "Pull requests accepted".

Wednesday, 12 June 2013

Chomsky on static typing

Colorless green ideas sleep furiously.

- Noam Chomsky, Syntactic Structures

Saturday, 25 May 2013

Goldilocks services

GitHub is a document store. Each repository is a document. Transactions are easy to execute within a document and only eventually consistent across documents.

If your documents are too large, the scope of your transactions will be excessive, and you will get contention. If your documents are too small, the scope of your transactions will be insufficient, and you will be forced to manage changes across multiple transactions.

Each service is an aggregate root: design your services so that they can be evolved independently.
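
As a rough sketch of the trade-off (my own illustration, with invented types): a change within one document can be a single atomic operation, while a change that spans documents decomposes into separate steps that are only eventually consistent:

    // Within a document/aggregate, a change is one atomic transaction.
    interface Repository { name: string; issues: string[]; }

    function closeIssue(repo: Repository, issue: string): Repository {
      return { ...repo, issues: repo.issues.filter((i) => i !== issue) };
    }

    // Across documents there is no shared transaction: each update stands alone,
    // and the system as a whole is consistent only eventually.
    function transferIssue(from: Repository, to: Repository, issue: string): [Repository, Repository] {
      const source = closeIssue(from, issue);                   // step one
      const target = { ...to, issues: [...to.issues, issue] };  // step two, applied later
      return [source, target];
    }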

Saturday, 18 May 2013

Second order poka-yoke

Poka-yoke is the practice of "mistake-proofing" - designing interfaces in ways that prevent users from making mistakes.

Continuous integration is a form of poka-yoke for software development. Developers are prevented from releasing (obviously) broken software to production by a battery of tests.

However, poka-yoke is about designing to mitigate specific user mistakes (at least as I understand Shingo's definition). In other words, it's about putting up safety barriers between users and anticipated failure modes.

How much better to develop in a software ecosystem that not only discourages the occurrence of mistakes, but encourages the growth of designs that do not admit the possibility of those mistakes.

My colleague Gurpreet Luthra cites the example of making heat-resistant bolts a different size to ordinary ones so that they cannot be accidentally interchanged. However, were the design to be simplified so that the engine was a single piece of metal, bolts would not be required at all.

Sometimes simplification is possible. Other times essential elements of the domain force us to introduce scope for error. Perhaps engines could be constructed differently to simplify assembly. Perhaps not. The only way to tell the difference is to be constantly vigilant for the possibility of eliminating accidental complexity.

Having thorough test coverage is good. Preventing the complexity that disrupts developers' ability to comprehend the system well enough to avoid mistakes is better.

Simple designs can not only make mistakes avoidable, they can make them literally inconceivable. By that construction, you could consider a disciplined pursuit of simplicity second-order poka-yoke.
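
One way to make this concrete, as a sketch of my own rather than anything from Shingo: first-order poka-yoke puts a runtime barrier in front of a known mistake, while a simpler design removes the mistake from the space of expressible programs altogether. (The connection example and its names are invented.)

    // First-order poka-yoke: a runtime check catches the anticipated mistake.
    class Connection {
      private open = false;
      connect(): void { this.open = true; }
      send(message: string): void {
        if (!this.open) throw new Error("not connected"); // safety barrier
        console.log(`sending ${message}`);
      }
    }

    // Second-order: an unconnected connection simply cannot send, so the mistaken
    // program is no longer expressible and no barrier (or test for it) is needed.
    class OpenConnection {
      send(message: string): void { console.log(`sending ${message}`); }
    }

    function connect(host: string): OpenConnection {
      console.log(`connected to ${host}`);
      return new OpenConnection();
    }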

Saturday, 11 May 2013

Fixation and variability

While explaining Why Everyone (Eventually) Hates (or Leaves) Maven, Neal Ford divides extensible software into two categories: composable and contextual.

Contextual software is best exemplified by plugin architectures - new features are added by slotting code into explicitly provided extension points. Composable software is best exemplified by the Unix ecosystem - new features are added by wiring together a combination of existing and custom components.

So what is it about contextual systems that provokes such hatred?

The key problem when making software extensible is where to put the variability. I have a fixed set of core features, but I also want others to contribute extra features that vary depending on their needs. How do I bind the fixed and variable functionality together so that they form a coherent system?

Maven accommodates variability like a classic contextual system. The promise (and the curse) of Maven is the consistency of its build lifecycle. There is a place for everything, and everything must be in its place. Maven affords very little flexibility on the relationship between variable additions and its fixed feature set.

Rake, on the other hand, empowers developers by allowing extension via a general purpose programming language. Rake qualifies as a composable system because any means of adding new functionality that can be expressed in Ruby can be used to extend it.

In composable systems, orchestration is variable. In contextual systems, orchestration is fixed. Keeping the means of combination in userland is an additional burden on those who would extend your system, but it's more than offset by the flexibility and power they enjoy in the long term.
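
As a rough sketch of the distinction (my own, with invented names rather than Maven's or Rake's actual APIs): in the contextual style the framework owns the orchestration and offers fixed extension points, while in the composable style the extender owns the orchestration and wires ordinary pieces together:

    // Contextual: a fixed lifecycle with explicit plugin hooks.
    interface BuildPlugin { beforeCompile?(): void; afterTest?(): void; }

    function runLifecycle(plugins: BuildPlugin[]): void {
      plugins.forEach((p) => p.beforeCompile?.());
      console.log("compile");
      console.log("test");
      plugins.forEach((p) => p.afterTest?.()); // only these seams are on offer
    }

    // Composable: the steps are ordinary functions and the user decides the wiring.
    const compile = () => console.log("compile");
    const test = () => console.log("test");
    const release = () =>
      [compile, test, () => console.log("package")].forEach((step) => step());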

Your software will grow, but your relationship with Maven won't. When that happens, it's time to leave.