Sunday, 21 June 2009

Email considered harmful

I don't like email. More precisely, I'm disappointed that we don't have something that's better than email.

I suppose it's to its creators' credit that we are still using a tool that was standardised in close to its present form in 1973. But email has so many shortcomings in its very architecture that it's high time we upgraded to something else. There are many partial solutions to the problems listed below, but to properly fix them all requires a ground-up rebuild.

No guarantee of delivery

If an SMTP server swallows your email, tough luck. An email is like a postcard hurled out into the void. If it disappears somehow then no one will ever know.

No support for high-level abstractions like conversations

People do not send emails in isolation. Often, an email will be part of a series of replies, perhaps involving multiple recipients.

Email gives you no good way of grouping individual messages into a conversation, other than by dumping the entire previous contents of the conversation at the bottom of each message. Gmail does a valiant job of threading emails, but the process it's using doesn't help you if you're not using Gmail, is unreliable and is inherently just a hack.
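The raw material for threading does exist in the Message-ID, In-Reply-To and References headers, but nothing forces clients to set or preserve them, which is why threading ends up a hack. A minimal sketch in Python (the messages are invented for illustration) shows how a client can walk those headers back to a conversation's root, and how the chain snaps if any client along the way drops them:

```python
from email.message import EmailMessage

# Two hand-built messages standing in for a real mailbox.
original = EmailMessage()
original["Message-ID"] = "<1@example.com>"
original["Subject"] = "Lunch on Friday?"

reply = EmailMessage()
reply["Message-ID"] = "<2@example.com>"
reply["In-Reply-To"] = "<1@example.com>"  # threading only works if the client sets this
reply["Subject"] = "Re: Lunch on Friday?"

by_id = {m["Message-ID"]: m for m in (original, reply)}

def thread_root(msg):
    """Follow In-Reply-To links back as far as our mailbox allows."""
    while msg["In-Reply-To"] in by_id:
        msg = by_id[msg["In-Reply-To"]]
    return msg

print(thread_root(reply)["Subject"])  # Lunch on Friday?
```

If the reply had been sent by a client that strips In-Reply-To, the two messages would be indistinguishable from unrelated mail — which is exactly the guesswork Gmail's threading has to fall back on.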

The lack of any coherent high-level organising principle makes email communication chaotic when the number of messages involved is large. Sometimes this is so unmanageable that it causes individuals to take the drastic step of declaring email bankruptcy, notably including Donald Knuth (founder of literate programming) and Lawrence Lessig (of the Creative Commons and the EFF).

No canonical and independent copy

An email exists in its sender's outbox and its receiver's inbox. It may also be stored by an email server somewhere. If these copies are deleted or lost then it's gone.

If someone tampers with an email that you sent them, you may have no way of proving this to a third party. If someone tampers with your email en route then you have no way of proving this even to the receiver.
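Making tampering detectable would require some independent record of the message as sent — precisely what email lacks. As a sketch (the idea of lodging a digest with a neutral third party is hypothetical, not anything email provides), a cryptographic hash taken at send time would let anyone check a disputed copy later:

```python
import hashlib

sent = b"I approve the payment."

# At send time, lodge this digest with an independent third party.
digest = hashlib.sha256(sent).hexdigest()

# Later, a disputed copy of the email surfaces...
disputed = b"I approve the payment immediately."

# Anyone holding the lodged digest can check the copy against it.
print(hashlib.sha256(disputed).hexdigest() == digest)  # False: the copy was altered
```

Without that independent record, it's one party's mailbox against the other's.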

There's also no good way to introduce someone into an email conversation they have not been following (you can forward an email containing a bunch of replies, but that's hardly usable). Emails don't have a URL that you can pass around or use as a reference if, for example, the email contains an important decision that needs documenting.

No native encryption

It is possible to encrypt emails. But if you do, then both sender and receiver need to be using email clients that support encryption. The sender would also have to have access to the receiver's public key.

No way of verifying the sender's identity

The only way you know who sent an email is by looking at the 'from' field. If that field is filled out wrongly then there is no way to tell. Impersonating someone over email is technically trivial (unless you use digital signatures, which have the same disadvantages as encryption).
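The 'from' field is just another header that the sending client writes, as this Python sketch shows (the addresses are invented). Nothing in the message format verifies it:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "The Treasurer <treasurer@example.gov>"  # pure assertion; nothing checks it
msg["To"] = "journalist@example.com"
msg["Subject"] = "Confidential"
msg.set_content("This definitely came from me.")

print(msg["From"])  # The Treasurer <treasurer@example.gov>
```

Any SMTP server willing to relay this message will deliver it with that 'from' line intact.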

The future

I have high hopes that Google Wave will solve some or all of these problems. But there are two big advantages email has over Google Wave:
  • It's proven
  • It's widely supported and understood
Email is not going to disappear overnight. After all, fax machines are still reasonably common and faxes have been almost entirely superseded - by emails.

For Google to get wide adoption of Wave they're going to have to come up with a solution that allows incremental adoption. Perhaps the Google Wave client could support email as well as Waves so that I can communicate with the vast majority of my contacts who aren't bleeding-edge adopters.

But until then we're going to have to suffer the absurdity of disagreements and uncertainty about whether a particular email was sent, who sent it and what was in it - like this Australian political scandal.

Update: The scandalous email has turned out to be a fake.

Saturday, 13 June 2009

Crime, punishment and reinventing the wheel

Crime and Punishment, the classic 19th century novel by Fyodor Dostoevsky, describes the execution and aftermath of a brutal double murder committed by the poor ex-student Raskolnikov. I am not spoiling the ending by telling you this - the murder itself takes place early on and the bulk of the novel deals with Raskolnikov's guilt and mental anguish.

Initially, Raskolnikov justifies his crime by imagining himself to be one of the elite few who transcend ordinary morality. Like Napoleon Bonaparte, these extraordinary men are destined to seize society and bend it to their will. Their higher purpose excuses them from the constraints of morality that ordinary members of society must abide by.

The reader soon realises that Raskolnikov is not a member of this elite cadre. True Napoleons are too busy invading Spain to construct self-serving pseudo-philosophical justifications. As the novel progresses, Raskolnikov's crippling doubts reveal to him the fallacy of his delusions of grandeur. He realises that men who are preordained to shake civilisation to its very foundations do not agonise over their calling.

In the world of software, it is not at all uncommon to encounter a developer who is convinced that they are a Napoleon. Perhaps it's ignorance. Perhaps it's arrogance. Whatever the reason, they are motivated to create their own inadequate solutions to problems that have already been well and truly solved. Often they take it on themselves to improve upon things that ordinary programmers take as given (like the nature of truth itself).

Google Wave may just be an example of a revolution that we actually need. Email is a tried-and-true technology, but it has its limits and could benefit from a ground-up redesign. The success of Google Maps certainly suggests that the Rasmussen brothers are candidates for web Napoleons.

On the other hand, Google's non-standard implementation of OpenID looks more like it was designed by Rodion Romanovich Raskolnikov. The whole point of OpenID is that it is a universal protocol, yet they have extended it for their own specific needs (they want to be able to use Gmail addresses rather than URLs). What's worse, every developer who wishes to accommodate Google OpenIDs on their site will have to contaminate their code with a special case to handle Gmail addresses.

If you are contemplating producing your own version of a well-established technology, it is just possible that you possess a unique insight and that by reinventing the wheel you will drag software in a bright new direction. But if you are not sure, then your code is more likely to resemble an opportunistic act of violence than the Code Napoléon.

And even if you are certain that your way is better, you're probably still wrong.

Tuesday, 9 June 2009

When is the right time for the fancy stuff?

Derek Featherstone wrote an interesting post on When is the right time for accessibility? His thesis is that accessibility should be planned for at the design stage, but that implementing it should not be a high priority early on. The idea is that accessibility should be worked on as the product matures and included in a subsequent release.

As far as web accessibility and interaction-heavy sites are concerned, he is asking the wrong question. What he should be wondering is, When is the right time for the fancy stuff?

Sites with a lot of flash and javascript tend to be the worst accessibility offenders because the meaning of the site is only apparent by interacting with scripts on the page. The content is not comprehensible from the DOM itself. If you're sight impaired, don't use a mouse or don't have the reflexes of a twenty-something-year-old flash developer then you might be out of luck.

Derek is of course aware of the importance of accessibility. However his approach seems to be to build the stairs first and then put in the wheelchair ramp later. No matter how well you plan, you will always come across implementation problems you had not considered.

My main criticism of Derek's article is that he constructs a dichotomy between business imperatives (getting the site up) and doing the right thing (implementing accessibility). However there are tangible benefits for your project in getting accessibility right early on.

Most importantly, there is one blind and deaf user that every web developer should be concerned about: Googlebot. If a site is not accessible for a human with a disability then it almost certainly will not be indexed properly by Google. You cannot improve your site through user feedback if you have no users because no one can find your site.

The other main advantage of tackling accessibility early is that progressive enhancement is a sounder development methodology than building everything big-bang style. Build your pages so that they work without any flash or javascript. Once you have that working, you have a sound basis on which to build your incredibly sexy ajax effects.

That affords your testing finer granularity. You can test the plain version of your page before you spoon on the flash and javascript. If there is a bug, you will know whether it occurs as part of the form-submission process or in the interaction layer. That beats monitoring HTTP requests with Firebug trying to work out where the hell the problem is.

Of course, a big motivation for web developers to make their sites accessible is that it's the right thing to do. And it is. But if you follow progressive enhancement and make accessibility part of your development process then you'll get more out of it than just a warm fuzzy feeling.

Sunday, 7 June 2009

Literate programming

The title of this blog is a reference to literate programming, a software development methodology founded by Donald Knuth. Literate programming is best described in Knuth's own words:

Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style. Such an author, with thesaurus in hand, chooses the names of variables carefully and explains what each variable means. He or she strives for a program that is comprehensible because its concepts have been introduced in an order that is best for human understanding, using a mixture of formal and informal methods that reinforce each other.

- Donald Knuth, "Literate Programming" (1984)

I have great affinity for this way of viewing software development. Software design has more in common with the composition of an essay than any strictly scientific activity. I think it's an accident of history that programming is placed within engineering faculties rather than being understood as an outgrowth of philosophy and formal logic.

Literate programming acknowledges software development's place among the humanities. By extension, it acknowledges the relevance of non-scientific ideas to the process of cutting code. Our craft requires the creative and disciplined presentation of thought, so we would be foolhardy to ignore thousands of years of the history of ideas. Programming does not exist inside a vacuum. Neither should the programmer.

I am not trying to argue that programmers do not need a firm grasp of science. But good programmers cannot rely solely on scientific concepts if they wish their code to be comprehensible to their peers (or future selves).

In the spirit of literate programming I will use this blog to explore software development and its interplay with literature, philosophy, politics and mathematics.