Pragmatic progressive enhancement - why you should bother with it

Introduction

In this article I want to point out why progressive enhancement is a very clever way of developing software and give you some tips and tricks on how to apply it. I am a strong believer in progressive enhancement and have had my fair share of criticism for it, as a lot of people consider it neither a necessity nor an easy technique to apply — it is often seen as an unnecessary overhead rather than a safety precaution.

Here are some of my ideas about software development:

  • You never develop for yourself, but for the person that takes over from you. How many times have you sworn about code that you had to alter and called the initial developer names?
  • You never have enough time to fix things - once you get into the hotfix phase, code quality will suffer — a lot.
  • Computers and browsers are stupid. They will fail, and they will fail in ways you didn't expect and cannot reproduce. Code as if everything will break, prepare for it, and you will come out laughing.
  • You have no clue what is used out there to access the content your code turns into an interface — don't assume you know what people need or have, and don't fool yourself into believing you can dictate anything.

Things that help

The code examples in this article use the Yahoo User Interface Library and the reason I am using it is pure pragmatism. There are only three reasons not to use JavaScript libraries in this day and age:

  • You are creating a very bespoke, high traffic project for a very special environment.
  • You enjoy trying to fix random, non-reproducible browser bugs and revel in trying to find reasons why things break in a certain browser in a certain version and on a certain operating system.
  • You still feel guilty about leaving your parents back on Krypton.

Of course you can use any library you want, as all good libraries aim to make your life as a developer easier and help you work in a predictable fashion until browsers do what they promise to do consistently and everybody has upgraded to them, leaving the old ones behind.

I'd be very happy to see someone do a jQuery or prototype version of this article.

The seven rules of pragmatic progressive enhancement

Having worked with progressive enhancement in mind for a long time, I found that the following rules all make a lot of sense and have quite an impact on the final product in terms of maintainability, size of code and general sturdiness:

  1. Separate as much as possible
  2. Build on things that work
  3. Generate dependent markup
  4. Test for everything before you apply it
  5. Explore the environment
  6. Load on demand
  7. Modularize code

1. Separate as much as possible

This is really the first step towards progressive enhancement. Only when you separate your structure (HTML), presentation (CSS) and behaviour (JavaScript) from one another do you reap the following benefits (a small example follows the list):

  • You leave it up to the browser to apply the technologies it supports and you don't need to guess or assume.
  • You know where to fix a problem when it occurs.
  • You allow the browser to cache things it should not need to re-load on every single page.
  • You allow for easier maintenance, as the maintainer does not need to have all the skill-sets involved — instead you allow experts and people passionate about the subject to do the job, and that means a job well done.
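
As a quick illustration (the class and file names here are made up), compare a link that mixes all three layers in the markup with one that keeps them apart; the styling lives in an external style sheet and the behaviour gets attached by ID from an external script:

<!-- structure, presentation and behaviour mixed together -->
<a href="/help" style="color:#c00" onclick="doHelp();return false;">Help</a>

<!-- structure only - styles.css handles the look,
     behaviour.js hooks into the ID -->
<a href="/help" id="help" class="help">Help</a>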

2. Build on things that work

Using JavaScript you can turn anything in the document into an interactive element — it can react to clicks, the mouse hovering, keys being pressed and so on.

However, you are simulating the real thing, and a simulation is never as good. Native interactive elements in browsers go a long way — they send information to the back end, they talk to assistive technology and, most important of all, they work with a multitude of input devices.

If you are up for this challenge, godspeed and may you be successful. Personally, I'd rather build on things that work, and in the case of browsers these are forms, links and buttons. Take, for example, a span with a click handler:

<span onclick="help()">Help</span>

This does the job when JavaScript is enabled and the user has a mouse. However, you cannot reach it with a keyboard and without JavaScript nothing happens.

Instead, it makes more sense to use something that works without JavaScript (and can be followed by a search engine, for example) and enhance it when possible. For example a link to a "help" view of your application enhanced to call a doHelp() function (in this case using YUI):

<a href="/help" id="help">Help</a>
<script type="text/javascript">
// make sure the YAHOO.example namespace exists
YAHOO.namespace('example');
YAHOO.example.helpdemo = function(){
  function doHelp(){
    // other work here…
  }
  // enhance the link: call doHelp() on click and keep the
  // browser from following the href
  YAHOO.util.Event.on('help','click',function(e){
    doHelp();
    YAHOO.util.Event.preventDefault(e);
  });
}();
</script>

This way you know that if JavaScript fails — for whatever reason — your site/application will still be usable.

It is a matter of trust — visitors coming to your site or using your application trust you to give them what they came for. If you offer them a link and it doesn't do what it promises, you have violated their trust. You lose credibility in their eyes and they are less likely to come back or to continue using your site (or, in the worst case, to complete the checkout of a buying process).

You can make any construct look like a menu bar, a tree or a group of tabs, but it is very important to think of several things:

  • If JavaScript is not available, does the construct still make sense? Is there a hierarchy?
  • You simulate an interaction pattern that is available in the wider world, specifically in application design. Are you aware of how people use it there? Menus, for example, need to be navigable with the arrow keys, not by jumping from link to link using the tab key (see the sketch after this list).
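
To give a rough idea of what arrow key support involves, here is a minimal sketch (assuming a menu element with the ID menu that contains the links; a production-ready solution, such as the YUI Menu control, handles many more cases):

<script type="text/javascript">
// move the focus between the menu links with the arrow keys
YAHOO.util.Event.on('menu','keydown',function(e){
  var key = e.keyCode;
  // only react to the arrow keys (key codes 37 to 40)
  if(key < 37 || key > 40){ return; }
  var links = document.getElementById('menu').getElementsByTagName('a');
  for(var i = 0; i < links.length; i++){
    if(links[i] === document.activeElement){
      // right and down move forward, left and up move back
      var next = (key === 39 || key === 40) ? i + 1 : i - 1;
      if(links[next]){ links[next].focus(); }
      YAHOO.util.Event.preventDefault(e);
      break;
    }
  }
});
</script>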

3. Generate dependent markup

Sometimes you need to have HTML that only makes sense when it gets enhanced by JavaScript. The classic example is a "print this" link.

<a href="window.print()">Print this document</a>

Technically, what we are doing here is simulating browser behaviour and generating a shortcut to the print button of the browser. It is not really needed, but there might be reasons for it. However, in order to make this safe, you once again need to not rely on the JavaScript functionality being available.

So instead of the above example, generate the link only when JavaScript is available using the DOM:

<div id="printthis">
<h5>Easy printing</h5>
<p>This document has a separate style for print 
versions. All you need to do is print it with your 
browser.</p>
</div>      
<script type="text/javascript">
// make sure the YAHOO.example namespace exists
YAHOO.namespace('example');
YAHOO.example.printThis = function(){
// test if the element with the ID printthis is available  
// and can be modified
YAHOO.util.Event.onContentReady('printthis',function(){
  // create a new Paragraph and append it to the element
  var p = document.createElement('p');
  this.appendChild(p);
  // set the text content of the paragraph
  var text = 'Alternatively you can use the following link: ';
  p.appendChild(document.createTextNode(text));
  // create a new link, set its href to window.print()
  var printLink = document.createElement('a');
  printLink.setAttribute('href','javascript:window.print()');
  // set the text content and append it to the main element
  var linkText = 'Print this document';
  printLink.appendChild(document.createTextNode(linkText));
  p.appendChild(printLink);
});
}();
</script>

That way you can tell visitors how they can print out the document, and offer them a link that does it for them.

Note: It is normally a big no-no to have a link that uses the unofficial javascript: pseudo-protocol. However, as we generate the link ourselves, it is a shortcut for creating the link functionality. We could instead add an event handler and call window.print() in it, but then we would need a pointless href attribute like "#" to make the link reachable via keyboard.
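
For comparison, the event handler variant mentioned in the note could look like this, replacing the link creation lines in the example above (printLink and p are the same variables as there):

  // create the link with a dummy href so it can be reached via keyboard
  var printLink = document.createElement('a');
  printLink.setAttribute('href','#');
  printLink.appendChild(document.createTextNode('Print this document'));
  // open the print dialogue and stop the browser from following "#"
  YAHOO.util.Event.on(printLink,'click',function(e){
    window.print();
    YAHOO.util.Event.preventDefault(e);
  });
  p.appendChild(printLink);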

This example uses a clever YUI method called onContentReady(). This method checks that the element with the ID you pass in is available and that its children can be modified. This kind of safety check is another big part of pragmatic progressive enhancement.

4. Test for everything before you apply it

This should be very obvious — after all, you check the depth of the water before you plunge into it, or see if the ice can hold you before stepping on it — but we still get this wrong all the time.

It is simple though: if you want to apply something to something else, test that the latter is available first. Say for example you want to add a class to an element when it is available and JavaScript is turned on. You could do it the following way:

<script type="text/javascript" charset="utf-8">
var c = document.getElementById('intro');
c.className = 'jsenabled';
</script>

The problem is that if there is no element with the ID intro, the second line will throw an error, as getElementById() returns null and you cannot set a property of null. The safer way is to check whether c exists before trying to modify properties or call methods:

<script type="text/javascript" charset="utf-8">
var c = document.getElementById('intro');
if(c){
  c.className = 'jsenabled';
}
</script>

The if(c){} check is a lazy approach to testing — nothing in it tells you that c really is an element. A paranoid version of the same code would be:

<script type="text/javascript" charset="utf-8">
var c = document.getElementById('intro');
if(c && c.nodeType && c.nodeType === 1){
  c.className = 'jsenabled';
}
</script>

Notice the triple equals sign. This is safer than a double equals, as it tests both for type and for value. A double equals, for example, says that 1 and true are the same thing - but you might need the value to be a number.
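
A few quick comparisons illustrate the difference:

1 == true    // true  - the values get converted before comparing
1 === true   // false - a number is not a boolean
'1' == 1     // true
'1' === 1    // false - a string is not a number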

Another great tool is typeof, which allows you to test with certainty that something you want to use is what you need it to be:

<script type="text/javascript" charset="utf-8">
  myObject = 1;
  // check if myObject is an object
  if(typeof myObject !== 'object'){
    // if it isn't, check that it is defined
    if(typeof myObject !== 'undefined'){
      // if it is defined, keep a copy in stored
      var stored = myObject;
    }
    // create a new object
    var myObject = {};
    // store the old information as a property
    myObject.stored = stored;
  }
  // now it is safe to add new properties, as the 
  // object is created when there is none;
  myObject.state = 'saved';
</script>

5. Explore the environment

One thing that you can do once you know that JavaScript is available is to use it to explore what the browser can safely display. You can do that in a few ways:

  • You read the browser viewport size
  • You read the position and dimensions of elements
  • You read the document dimensions and compare the viewport with it, thus knowing which part of the document is currently visible.
  • You read how far the user has scrolled in the document.

That way you can write much cleaner and cleverer solutions. Say for example you have a menu with several sub-menus that pop out when you hover over or click on it. This can be a problem, as shown in the following figure.

[Figure: schema of a multi-level menu causing a horizontal scrollbar]

At step 1 all is fine, the problems start when you expand a third-level menu. If you just assume that you can show it to the left of the second-level menu (and this is what *all* CSS solutions do), you might cause a horizontal scroll-bar. This wouldn't be the end of the world either, but it makes the third level menu inaccessible to mouse users (keyboard users — for once — wouldn't be that bothered). If you try to scroll the document to reach the third menu level, the menu will collapse again — a frustrating experience.

The work-around is to read the width of the browser viewport (the window minus any chrome and scrollbars), check the starting point of the menu level you want to show and its dimensions, and calculate whether it will fit. If it doesn't fit (either vertically or horizontally), you show it on the opposite side instead (as shown in step 3) and avoid the scrollbar issue.
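
A rough sketch of that check with the YUI Dom utilities could look like the following (the submenu ID and the flipped class are made-up names, and a real solution would also check the vertical fit and re-test when the window is resized):

<script type="text/javascript">
YAHOO.util.Event.onContentReady('submenu',function(){
  var Dom = YAHOO.util.Dom;
  // where does the submenu start and how big is it?
  var region = Dom.getRegion('submenu');
  // how wide is the visible part of the window, and how far
  // has the document been scrolled horizontally?
  var viewport = Dom.getViewportWidth();
  var scrolled = Dom.getDocumentScrollLeft();
  // if the right edge would end up outside the viewport,
  // flip the submenu to the other side of its parent
  if(region.right > viewport + scrolled){
    Dom.addClass('submenu','flipped');
  }
});
</script>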

Note: Tricks like these are both expensive in terms of calculation time and a real pain to make work across browsers. To keep your sites responsive and to stay sane it is a good plan to use a JavaScript library that has both these issues tackled.

6. Load on demand

A different aspect of progressive enhancement is so-called "lazy loading", or loading on demand. This means that you start your solution with as little code as possible and download further components when and if they are needed, rather than up-front. Say for example you want to check that JavaScript is available and the DOM is fully supported before you load the library files. You can do that by creating new script nodes using the DOM:

<script type="text/javascript" charset="utf-8">
  (function(){
    if(document.getElementById && document.createTextNode){
      var s = document.createElement('script');
      s.setAttribute('src','lib.js');
      s.setAttribute('type','text/javascript');
      var head = document.getElementsByTagName('head');
      if(typeof head[0] !== 'undefined'){
        head[0].appendChild(s);
      }
    }
  })();
</script>

This technique can be used in many different ways. You can, for example, check for an element with a certain ID, or for class names on elements (load the animation code only when there is an "animate" class somewhere in the document) — you can even extend it to include style sheets, as nothing stops you from creating link elements:

<script type="text/javascript" charset="utf-8">
  (function(){
    if(document.getElementById && document.createTextNode){

      var s = document.createElement('script');
      s.setAttribute('src','lib.js');
      s.setAttribute('type','text/javascript');

      var skin = document.createElement('link');
      skin.setAttribute('rel','stylesheet');
      skin.setAttribute('type','text/css');
      skin.setAttribute('href','skin.css');

      var head = document.getElementsByTagName('head');
      if(typeof head[0] !== 'undefined'){
        head[0].appendChild(s);
        head[0].appendChild(skin);
      }
    }
  })();
</script>
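
The class name check mentioned above could work in the same way (the animate class and the animations.js file are made-up names):

<script type="text/javascript">
  (function(){
    if(document.getElementById && document.createTextNode){
      // scan the document for an element with the class "animate"
      var all = document.getElementsByTagName('*');
      var needsAnimation = false;
      for(var i = 0; i < all.length; i++){
        if(/\banimate\b/.test(all[i].className)){
          needsAnimation = true;
          break;
        }
      }
      // only load the animation script when it is really needed
      if(needsAnimation){
        var s = document.createElement('script');
        s.setAttribute('src','animations.js');
        s.setAttribute('type','text/javascript');
        var head = document.getElementsByTagName('head');
        if(typeof head[0] !== 'undefined'){
          head[0].appendChild(s);
        }
      }
    }
  })();
</script>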

Note: The only problem you will find is that you cannot rely on the includes arriving in a particular order, which matters when one depends on another. To work around that, you either need to write your script includes in a certain manner (described in this article on 24ways - also available in German) or use the Get utility of the YUI. This utility allows you to define callback methods that get triggered when a certain include (script or CSS file) has been retrieved successfully.
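
With the Get utility loaded, such a callback could roughly look like this (lib.js and init() are placeholders for your own include and set-up function):

<script type="text/javascript">
  YAHOO.util.Get.script('lib.js',{
    onSuccess:function(){
      // lib.js has arrived, it is now safe to use what it defines
      init();
    },
    onFailure:function(){
      // the include could not be loaded - the page simply
      // stays un-enhanced
    }
  });
</script>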

7. Modularize code

In order to allow for loading on demand you need to modularize your code. This starts in your scripts and ends with separate physical includes for different needs.

Instead of creating a massive chunk of code that does everything and becomes unmaintainable over time, consider creating a main script object and extending it as needed with smaller utilities that each do a single job. This is how large JavaScript libraries and solutions are built without becoming a nightmare to maintain and use.

Collate these different utilities into logical include files (dom.js, forms.js, i18n.js, ajax.js) and you'll have a much easier time debugging and extending the final product.
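
As a sketch of that structure (myApp and the module names are made up), the separate includes all extend one main object:

<script type="text/javascript">
  // core.js - the main object, created once
  var myApp = myApp || {};

  // forms.js - one utility that does a single job
  myApp.forms = {
    validate:function(form){
      // validation work here
    }
  };

  // ajax.js - another small, self-contained module
  myApp.ajax = {
    load:function(url,callback){
      // transport work here
    }
  };
</script>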

Performance concerns and build processes

This modularization and separation into lots of smaller files goes against some of the best practices in web performance you might have heard of.

Every HTTP request your document makes while the page gets rendered slows it down. For each request the browser needs to resolve the host name via DNS, open a connection, and then load and render the resource. This is bad with images and CSS, but even worse with scripts: every time the browser encounters a script, it stops rendering the HTML, loads the script, executes it and only then resumes rendering. This can take quite a while.

While all of this is true, modularization is still a very good thing to do. We have to start moving away from the concept of "write it and open it in a browser". While this ease of development is probably the main reason why people took up JavaScript in the first place, it is not clever. Other programming languages have build processes, and so should web applications and web sites.

A build process is there to turn human readable code into computer readable code. Things you can do in a build process:

  • Validate the code (JSLint is a good tool for that)
  • Remove unnecessary whitespace - this saves file size
  • Replace strings inside your scripts with array lookups (this is a hardcore performance tip, as MSIE creates a string object every time it encounters a string - even in loops and conditions; see the example after this list)
  • Collate several includes into a single file
  • Check your code for security vulnerabilities
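
As a rough illustration of the string replacement tip (the class name is made up), a build step can turn repeated string literals into a single array lookup:

<script type="text/javascript">
  var items = document.getElementsByTagName('li');

  // before: the string literal appears on every iteration
  for(var i = 0; i < items.length; i++){
    items[i].className = 'enhanced';
  }

  // after: the string is defined once and looked up from an array
  var css = ['enhanced'];
  for(var j = 0; j < items.length; j++){
    items[j].className = css[0];
  }
</script>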

The last point in the list, checking for security vulnerabilities, is hard to automate, but there are several projects in the works at the moment that try to tackle it.

Use and extend this document

Last but not least, I hope this document gave you some ideas and was helpful. It is licensed under a Creative Commons licence, which means you can re-publish it, build upon it and use it commercially for training - the only thing you need to do is mention me as the original author.

Written by , a web developer living and working in London, England.
This work is licensed under a Creative Commons Licence.