

Hi there! I'm Julian M Bucknall, a programmer by trade, an actor by ambition, and an algorithms guy by osmosis. I chat here about pretty much everything but those occupations. Unless you're really lucky...

Most recently this is what I've come up with:

Contact details

Although this certainly didn’t all come about from the get-go, this is a fun little graphic for the different ways to contact me:


My contact details

Now, agreed, you still have to know how Twitter and LinkedIn construct their URLs for member pages (Twitter: + handle; LinkedIn: + id), but it’s pretty cool.

(This was mercilessly stolen from an idea I saw on John D Cook’s blog.)

Now playing:
Jean-Michel Jarre & Vince Clarke - Automatic, Pt. 1
(from Electronica 1: The Time Machine)

Making your web pages fast (part three)

Now that we’ve seen that it’s perception that defines how your users grade the speed of your webpages (although spending a good deal of time speeding things up in an absolute sense certainly won’t go amiss), and how to analyze the network traffic that goes into displaying your pages (one, two), it’s time to look for solutions to the performance issues we saw.

Finally, the fruits of all this work!

Number of files

A simple one, this; it even belongs in the duh! category. Reduce the number of files that your markup requires. As I said, simple.

OK, a bit more information. My classic car website required 5 CSS files to be downloaded, all from my domain. All of them were marked as media=”all”, so they can all be concatenated one onto another, in the same order, with no ill effects. By doing so, I am (A) reducing the work that my web server needs to do (every reader visiting the site downloads just one file instead of issuing five requests at the same time), and (B) reducing the work that the browser has to do, especially if it batches requests. A win all round. The only issue is that I have to incorporate a concatenation step into my web deployment script, but that is, after all, very simple.
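To give an idea of what that step looks like, here’s a minimal sketch assuming a Node-based deployment script; the file names are made up, but the important point is that the files are joined in their original order:

    // Minimal CSS concatenation sketch (Node.js). File names are hypothetical;
    // what matters is joining the files in the same order the markup loaded them.
    const fs = require("fs");

    const cssFiles = [
      "css/reset.css",
      "css/layout.css",
      "css/typography.css",
      "css/theme.css",
      "css/shortcode.css",
    ];

    const combined = cssFiles
      .map((file) => fs.readFileSync(file, "utf8"))
      .join("\n");

    fs.writeFileSync("css/site.combined.css", combined);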

Next, my webpage requires 14 JavaScript files to be downloaded. So, instead of 14 requests, batched, I could have one request for a concatenated file.

There are a couple of issues to bear in mind here. The first one is that by doing so you are sidestepping any obvious caching that can occur. In my example from last time, the jQuery script file was being downloaded from a well-known CDN URL. Obviously, if I visit a lot of sites that also download that same file from the same URL, the file will be cached in the browser. It’ll only be downloaded once. If, however, I concatenate it into a big ol’ JavaScript file for my site, I will only benefit from caching provided the reader has visited my webpage at least once. This is one of those things that you will have to experiment with and measure, to be honest.

The next one is a little more obscure. We can call it the Strict Mode problem. Strict mode? These days we are encouraged to use the “use strict”; pragma in our code. This tells the browser’s JavaScript interpreter to use Strict Mode when parsing the code and executing it. Strict mode helps developers out in a few ways:

  • It catches some common coding bloopers, throwing exceptions.
  • It prevents, or throws errors, when relatively "unsafe" actions are taken (such as gaining access to the global object).
  • It disables features that are confusing or poorly thought out.

There are two scopes to strict mode: the whole file – what you might call Global Strict Mode – and per function. The issue with creating concatenated JavaScript is that one script file in the set may invoke Strict Mode globally, meaning that every script file after it will have to obey Strict Mode or face the consequences. You may find that the unconcatenated JavaScript works fine, but the concatenated JavaScript does not. So, test. Mind you, these days, the recommendation is to only apply strict mode at the function level, not at the file level, but please test to make sure either way.
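One way around the problem – a sketch only, and it assumes the individual files don’t rely on top-level var declarations becoming globals – is to have the concatenation step wrap each file in an immediately-invoked function, so that a “use strict”; at the top of one file only governs that file:

    // Sketch: wrap each source file in an IIFE during concatenation so that a
    // top-of-file "use strict"; only applies to that file, not everything after it.
    // File names are hypothetical; test carefully if any file relies on implicit
    // globals created by top-level var declarations.
    const fs = require("fs");

    const jsFiles = ["js/plugin-a.js", "js/plugin-b.js", "js/site.js"];

    const combined = jsFiles
      .map((file) => {
        const source = fs.readFileSync(file, "utf8");
        return "(function () {\n" + source + "\n}());";
      })
      .join("\n");

    fs.writeFileSync("js/site.combined.js", combined);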

Again, all this solution requires is that you have a special concatenation task in your deployment script.

Size of files

Another simple solution: reduce the size of the files that have to be downloaded. Reduced file sizes mean reduced time to download them. For JavaScript and CSS files, the easiest way to reduce the file sizes is to minify them. Or, equivalently, use the minified version of any library files that you are using.

What minification does is to remove unnecessary whitespace (and comments) and to rename private identifiers to smaller names, usually one character long. By doing this, some serious reductions in file sizes can be achieved. As an example, unminified jQuery 1.11 is 277KB, whereas the official minified version is 95KB, a third the size. Obviously if you are going to be minifying your JavaScript, you should do your debugging with the unminified code, because doing it the other way round is an exercise in complete frustration.
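To give a flavor of what minification does – this is a contrived example, not the output of any particular minifier – compare these two versions of the same function:

    // Before: readable source with comments and descriptive names.
    function calculateTotalPrice(unitPrice, quantity, taxRate) {
      // Apply tax to the subtotal.
      var subtotal = unitPrice * quantity;
      return subtotal * (1 + taxRate);
    }

    // After: roughly what a minifier might emit. The parameters and locals are
    // renamed, but the public function name is left alone.
    function calculateTotalPrice(n,t,r){return n*t*(1+r)}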

You can also minify your HTML, but, to be honest, I don’t know of many sites that do. After all, a lot of CMSs are going to be generating the page’s HTML on the fly from the content and a template; it’s not static HTML by any means. Therefore, you’d have to add a minification step to your web server’s codebase so that any page that is served up is minified before being sent to the browser. The time savings of reducing the transit time to the browser versus the extra processing time needed on the server for just a single file are probably just not worth it.

There are many open-source minifiers out there. Personally I use the YUI compressor, but the big problem with it is that it’s built in Java, so you need to have that installed as well. .NET developers would probably go for the Microsoft Ajax Minifier (AjaxMin).

Obviously, if you are going to be minifying your code and CSS, it makes sense to concatenate them afterwards as well to produce a single minified CSS file and a single minified script file.

Optimizing images

Once you have minified and concatenated your code, don’t stop there. One extra trick that not many people bother with is optimizing your images.

The very first optimization is to add image width and height attributes to the IMG tags in your page markup. This simple change has one huge benefit: the browser can kick off a request to download an image, but still knows how big the image will be to display. The browser can reserve space in the rendered display for the image when it finally arrives. The big bonus here is that the browser does not have to wait to get the image before continuing rendering the page, or -- equally -- doesn’t just “guess” the image size (say by using the space occupied by the default “cannot load image” icon) and then have to re-render the page once the size of the image is known. To the user, the page just loads more smoothly (and hence is perceived as quicker). Luckily most blog post editors will calculate the image sizes for you and insert them into the markup (Windows Live Writer, which I use for all my blogs, does this).
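In markup terms, it’s just a couple of attributes (the file name and dimensions here are made up):

    <!-- The browser can reserve a 600 x 400 box for the image before it arrives. -->
    <img src="engine-bay.jpg" width="600" height="400" alt="Engine bay of the Volvo 1800S">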

Optimizing JPG photos? Not really. The JPEG format is lossy, so, in theory you’d be accepting a “fuzzier” image for a reduction in file size.

Optimizing PNG images? Now there’s a real possibility for improvement: PNGs use lossless compression rather than JPEG’s lossy compression, so all you have to do is tweak the DEFLATE compression “knobs” to find a better compression of a particular image. Yes, the DEFLATE algorithm is tunable to a certain extent. One of the better algorithms for compressing PNG files is PNGOUT by Ken Silverman. I use a commercial version of it called PNGOUTWin. There are others; TinyPNG, for example, has a plug-in for Photoshop.

Scripts in the markup

As it happens, last time we discussed one of those performance gains: putting your script elements at the end of the markup, just before the </body> tag.

The reason for doing this is that, unless we explicitly mark the script with the async attribute in the markup, the browser will stop parsing the markup, request and download the script (if it’s external), then compile and execute the code. For an external script, the browser has to issue a DNS request to get the IP address, construct a request packet and send it to that IP address, wait for and download the response (which presumably contains the script file), compile the script, and call any entry points that can be immediately executed, all before it can continue parsing the markup. These types of script elements are known as blocking scripts, because, well, they block the browser from doing what the reader wants: displaying content. And note that even if your script looks like this:

$(function ($) {
  "use strict";
  // code code code
});

…it’s still going to have to execute that call to the jQuery $(document).ready() function.

(As an example of what I’m talking about here, Steve Souders has created a simple webpage where it takes a full 10 seconds to download the script file (select “Rule 6” then “Example 2” to see it in action). You can view the page with the script element in the head, and with it at the bottom of the markup. You can observe directly what I’m talking about when I discuss perceived performance.)

How to avoid these blocking scripts? By far the easiest option is to put them at the end of the markup as I discussed. That way they continue to block, sure, but only after the content has been displayed to the reader. This is one of those perceived performance improvements: overall the time taken to fully display the functioning page has not changed, it’s just that the reader is unaware of it because they can start scanning/reading/interacting with the page earlier.
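In other words, the skeleton of the markup ends up looking something like this (the file names are placeholders):

    <body>
      <!-- visible content first... -->
      <article>...</article>

      <!-- blocking scripts last, just before the closing body tag -->
      <script src="js/jquery.min.js"></script>
      <script src="js/site.js"></script>
    </body>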

Another option for current browsers (and by that I mean anything reasonably modern, desktop or mobile, but for IE it means IE10 or later), is to mark the script elements as async. This is an HTML5 attribute, so don’t look for it in markup before that (but then again you are using HTML5, right? Right?)

    <script async src=""></script>

What this does is to instruct the browser to initiate a download, but that it’s not required to block until the script has downloaded. Sounds ideal but it comes with its own caveats. Say, for example, I have script A and script B, with B building on something in A. If they’re blocking scripts, the browser will execute them in the order you specify:

    <script src=""></script>
    <script src=""></script>

If, however, I mark both as async, then I no longer know in which order the scripts will be downloaded and executed: B could be downloaded and executed before A, for example. So I have to alter B in some way to ensure that A is present and correct (by using RequireJS, for example).
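As a very crude sketch of the kind of guard I mean – a module loader such as RequireJS does this properly, and the names here are made up – script B could simply wait until script A has defined what B needs:

    // Script B: don't assume script A (which defines window.libA) has already run.
    // A module loader is the proper fix; this just illustrates the idea.
    (function waitForLibA() {
      if (window.libA) {
        window.libA.init();          // safe: A has downloaded and executed
      } else {
        setTimeout(waitForLibA, 50); // not there yet; try again shortly
      }
    }());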

Another option, for smaller scripts, say, is to inline the script into the markup. Yes, I just told you to copy and paste. Horror! As software developers we intimately know in our bones how this can go wrong: you want to make a change to the script and now you have to identify … every … single … webpage into which it was inserted. Forget one and boom! your site is dead. Let’s just say, I’ve never reached this particular point in my web development career.
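For completeness, inlining just means something like this, with the code embedded directly in the page (a contrived example):

    <!-- No extra request for this script, but any change to it means editing
         every page into which it has been pasted. -->
    <script>
      document.documentElement.className += " js-enabled";
    </script>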

Domain sharding

If you recall, in the previous installment, we observed batching in the file download requests, at least in Firefox. The browser was batching the number of requests in groups of five or six at a time and would only release the next batch of requests once the previous batch had completed. This is done for two reasons: to minimize the resources needed in the browser and also at the server. After all, for a popular website, the number of requests coming in at any time would be large, so batching might help make the server more efficient at the cost perhaps of making the performance at the browser slightly slower.

One way to make batching work for you at the browser is to separate the files that need to be downloaded onto different domains. We saw that earlier with the webpage downloading jQuery from a CDN rather than the website’s domain. At first glance, such a strategy – storing your files on several hostnames – might speed things up, especially for webpages that have a lot of files to download (we assume that you haven’t concatenated your scripts and CSS). This strategy is known as domain sharding.

In reality, it seems that having more than, say, three hostnames doesn’t really speed anything up at all. For each new domain, the browser must do a DNS lookup to find the server’s IP address. That takes time too. Better is to try and put as much on the main domain hostname as you can (only one DNS lookup required!), with images stored on some other service (say, AWS), and possibly getting your JavaScript libraries from their respective CDNs. To be brutally honest, though, I don’t see much benefit these days from domain sharding; maybe in earlier years it was more important than it is now. By all means try it out, but I think you’ll find the other suggestions here far more important and effective.


To improve the perceived performance of your webpages, try these recommendations:

  • Minimize the number of files. Concatenate your JavaScript and CSS files to have just two downloads rather than tens of downloads.
  • Minimize file sizes. Minify your scripts and CSS. Optimize your images, especially PNGs.
  • Avoid blocking script problems. Put the script elements at the bottom of the markup.
  • Investigate domain sharding. But don’t spend too long on it.

With those easy-to-make changes, you’ll find your readers will thank you for the speedier interactions with your site.


Now playing:
Incognito - Pieces Of A Dream
(from Best Of/20th Century)

Making your web pages fast (part two)

In the previous episode of this series I discussed why you might want to speed up your web pages and how it is more about perceived performance, rather than absolute performance. However, this optimization, as with anything, comes with a cost. If you have a site that receives occasional use, then maybe you don't want to overdo the time and effort that these performance optimizations might entail. Or maybe what I'll be describing may not go far enough: in which case, I hope the analysis side of things helps you more.

Anyway, the first thing that needs to be done is to analyze the web page with the developer tools that come with your browser. Me, I prefer Firebug in Firefox for this work, but you can use Chrome's tools or even the new ones in Microsoft Edge. Whichever you choose, you need to open up the tool that displays and times the requests and responses initiated by the web page.

Once the developer tools are open, load (or reload) the page you wish to analyze. Unfortunately there's another catch here: ideally you want to clear the cache for that particular page. (For Firefox: load up the page so it’s first in your history list, then go to the History list, click Show All History, right-click the page you just loaded and select Forget About This Site.) Different browsers do this in different ways, but I've been known in the past to clear the entire cache for the browser as a whole. Nuke it from orbit, in other words; just to make sure it's all gone.

Then, it's the moment of truth: refresh the page. Let the refresh complete and you will have a display that looks something like this (click to enlarge, although it might be better to right-click and open in its own window):

Network analysis

This network analysis is for the blog I have for my Volvo 1800S. The content is served from an instance of GraffitiCMS on shared GoDaddy hosting. I purchased a theme for the look and feel. I’ll note that the theme’s template is not particularly optimized, which is great for this series: we can see how it could be improved.

Let's look over these results. The very first item is the request for the main page itself; every other request is going to be triggered because of something in that page's markup: CSS files, script files, images, and so on. So the very first possible optimization you could make is to improve that initial response. If your pages are served up from some CMS (Content Management System) or blog engine (like my pages here are), then they are probably generated on the fly by inserting content into one or more templates. Can that process be improved? Maybe it's a case of needing a faster web server? No matter what, if the initial response takes 5 or more seconds to arrive, you are going to have real issues in producing a fast web page, perceived or not.

For me, on my Volvo blog, the page itself is being returned really quickly: waiting 97ms (purple bar) with an almost instantaneous receive time (too thin a green bar to see) isn’t too shabby at all. Nothing much to optimize there.

Then notice that nothing happens for a few milliseconds. What is actually happening in the background is that the browser is starting to parse the HTML returned. It’s going to be building up a DOM (Document Object Model) in memory representing the elements and their relationships in that HTML. Along the way it’s going to identify other files that have to be downloaded for the webpage and it will start those downloads.

The next thing to notice about this network display is that it shows a waterfall. Some file is downloaded, some other file is downloaded, and yet some other file is downloaded. The timeline progresses towards the right. For Firebug, a purple color in a file download timeline indicates the time spent waiting for a response, green indicates the receiving time. The grey at the beginning of a file timeline is the “blocking time”, the time during which the file has been identified to be downloaded, but the browser is too busy doing “other stuff” and so blocks initiating the file download. Hence, for each file download in Firebug, you’ll see a grey segment (which may be so thin as to be invisible), a purple segment, and a green segment (which for small files may be very thin again).

The thin blue vertical line represents an event when the DOM content has been loaded (essentially, the markup minus images, etc), the red one is when the page Load event is fired. In this case, the onload event is fired at 2.42s, which again isn’t too shabby. Room for improvement certainly, but it’s not the end of the world if nothing is done (the number of readers of this site is in the order of tens a day).

My head hurts

The next thing to observe is a little subtle. Observe the start of the purple “waiting” bars, ignoring the grey “blocking” segments. The second to seventh file downloads on the timeline all start waiting at the same time. The first is a webfont from Google, the other five are CSS files from the same domain as the webpage. If you look carefully, this same batching occurs elsewhere as well: files seem to be downloaded in batches of five or six. This batch size is browser-dependent, by the way; in my experience Firefox certainly seems to use six-at-a-time batches.

Another small point to notice here: jQuery is downloaded from the CDN (Content Delivery Network) and although it is a smaller file than shortcode.css, it’s taking longer to download (the length of the green bar). That’s possibly a hint I should host my own copy of jQuery rather than using the CDN.

The next major point to notice is that, once all the JavaScript files are downloaded (and there are 14!), there’s another gap: the downloads of the images are blocked until jQuery has been downloaded. This in fact brings up a very important optimization point: the way that browsers work is that script elements are processed immediately they are encountered in the HTML markup. (And yes, before someone brings it up, you can mark a script element as async so that it doesn’t happen immediately. I’ve yet to see that in the websites I look at.) They have to: after all the script may want to change the DOM – add some elements, change some styles – before the rest of the markup is parsed.
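A contrived illustration of why the parser has to stop: a script encountered mid-markup is allowed to write into the document at exactly that point.

    <p>Before the script.</p>
    <script>
      // The parser cannot know what follows until this has run:
      // document.write injects markup at exactly this point in the page.
      document.write("<p>Injected by the script.</p>");
    </script>
    <p>After the script.</p>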

This in fact is one of the major “perceived performance” optimizations you can make for your webpages: declare your script elements as late in the markup as possible. In an ideal world, that would be just before the </body> tag. In this particular example blog, I deliberately have all of my script elements in the head element, the absolute worst place for them. After all, at that point in the HTML, no displayable markup has been parsed at all. The user is sitting there with a blank window, wondering what happened to all the text they’re expecting to see, with the browser beavering away madly in the background compiling JavaScript, executing what it can.

In this example, it’s nearly a whole second before the DOM content is finally loaded, with well over half of that time just sitting around waiting for jQuery to download and be parsed and be executed (together with all the other JavaScript files that depend on it). Now, I’m lucky here in that, even with this delay, it’s not too long to ask the reader to wait. But in other sites, it’s much, much worse.

Anyway, that’s possibly about all we can analyze from this one network session. Already, in this particular article, we’ve identified two optimizations: sometimes downloading from your own domain can be quicker (an absolute optimization), and declaring your script elements as late as possible means the reader can see the content before the JavaScript parsing, compiling, and executing takes over (a perceived optimization). I’m sure just by looking at this bar graph, you can immediately see the other main optimizations we can do: reduce the number of files and make them smaller.

But that can wait until the next post in this series. Until then have fun analyzing your webpages’ network traffic.


Now playing:
Jean-Michel Jarre & Boys Noize - The Time Machine
(from Electronica 1: The Time Machine)

Making your web pages fast (part one)

Recently, I had occasion to want to read an article on <a well-known development company>’s developer blog. It took, believe it or not, over 17 seconds to load and display on my wired connection, around 10 seconds longer than I would have waited had I not wanted to read the content. Apparently on a phone it took over 60 seconds to load. I ran it under Firebug because I just didn’t believe it and wanted to see what was taking so long. This is the tweet I sent:

So this one blog post used 117 HTTP requests for various files (HTML, CSS, JavaScript, images, whatever) from 16 separate web servers. It took a smidge under 7 seconds just to generate and receive the initial HTML page (from which all the other requests would be derived). It was a grand total of 17 seconds before the browser signaled the onload event (after which a whole bunch of scripts would run, etc). All in all, pretty bad. And in reality a lot of this can be avoided with just a little more care.

About to reveal all

When we navigate to a web page, we tend to have certain expectations. We assume that the page renders quickly, or at least quickly enough that we’re not aware of it (rather than the opposite case: has our sketchy internet connection died again). We also take it as read that there won’t be weird rendering artifacts, such as the content rendering one way and then immediately rendering in another. After all, we are visiting a web page because we have to accomplish one or more tasks with that page. Our task may be as simple as just reading the article, or it may be that we need to see some list of products, one of which we want to buy, or it may be a login screen.

In this series of posts, I want to explore how to present the content of a web page as quickly as possible to the reader. It’s based on a session I’ve presented at various conferences over the past year, and it’s also been used by others at DevExpress when I’ve been unable to attend.

The first thing to realize is that it’s not necessarily about raw performance – although that has a lot to do with it – but rather it’s about perceived performance. If the web developer was canny enough to present the content you, the reader, needed as fast as possible, but the remaining parts of the page took longer (say, a list of recent posts or similar posts, ads, the tweet stream, whatever), you’d rate the page as a whole as faster than the alternative (that is, get all the data and render only when it was available). The overall time to render the whole page would be the same, give or take, but the reader’s task (read the content) could be started much earlier. The “performance experience”, if I may call it that, is essentially subjective, and not necessarily objective.

As an example: navigate to Amazon’s home page. As you do so, don’t look at the page, but stare at where the scrollbar will be displayed on the right. Depending on your connection speed, general traffic, etc, you’ll glimpse the main banner displayed on the left out of the corner of your eye, well before the scrollbar appears once the rest of the page has downloaded and rendered enough content to need one. An Amazon shopper’s perception then will be that the website is displayed instantaneously, even though the content “below the fold” doesn’t arrive immediately. You could say Amazon’s devs have heightened this perception to the level of performance art.

There have been studies published on web response times showing that taking longer than a few seconds means the user will probably leave and maybe never come back. One such study is Jakob Nielsen’s article on the subject, where he divides response times into orders of magnitude.

  • At around 0.1 second, the user feels it’s instantaneous. To quote Nielsen: “The outcome feels like it was caused by the user, not the computer. This level of responsiveness is essential to support the feeling of direct manipulation.” That is, at this order of magnitude, it feels like the browser is directly responding to you, the user.
  • At around 1 second, the user is aware that the browser is doing something or that the network is introducing some level of latency, but their train of thought is not broken (and as devs we know how annoying that can be). Quote: “Users [. . .] still feel in control of the overall experience and that they're moving freely rather than waiting on the computer.”
  • At 10 seconds, well, that’s it: you have pretty much lost the user. Nielsen again: “They start thinking about other things, making it harder to get their brains back on track once the computer finally does respond.” That page I was describing above? I really wouldn’t have bothered had I not been timing it. (And in fact I haven’t visited that blog site again since, so it could have the best content in the world and I wouldn’t know.)

Next time, in part two, I’ll talk about how to measure the speed of a webpage and how to use that information to speed up your own webpages. As an amuse-bouche until then, I will consider optimizing three main things: the number of requests for files, the sizes of the files returned, and where they are coming from. Sharding to the rescue! After that I’ll consider the content of the markup and how that can affect your perception of the rendering speed of the page.


Now playing:
Enigma - Mea Culpa
(from MCMXC A.D.)

The HTML end tag means end of document, or does it?

As anyone who’s ever written an HTML document would surely know, everything apart from the initial DOCTYPE declaration appears in between <html> and </html> . Putting it in XML terms, an HTML document consists of one element, the HTML element. And, as it happens, it has two elements within it: the head and the body. End of story? Well, no; otherwise I wouldn’t be writing this. Paul Usher (DevExpress tech evangelist extraordinaire) and I were perusing some extremely – can I be blunt here...

Read more »

That time when CSS’ position:fixed didn’t

There’s been an ongoing bug with my blog after I added the “hamburger” menu/options on the left side. In essence, adding it interfered with another “feature” of the individual blog post pages where the title of the post sticks to the top of the browser window as you scroll down through the text. And, yes, you guessed it, both features are provided by JavaScript libraries, different ones, by different people. It’s this week’s edition of JavaScript Libraries Gone Wild! Let me describe what was happening...

Read more »

Putting on the Blue Apron

In our house, we’ve divided up what might be called the food duties. I’m the savory cook and Donna the pastry chef. It’s not like we sat down early on in our relationship and threw the dice; I’m just not interested in baking cakes, making cookies, rolling out pastry for a fruit pie, whereas Donna is. She, on the other hand, would way prefer someone else do the meats, the veg, the salads. Of course, over the months and years, I’ve got into a rut. Every now and then, I’ll read up on...

Read more »

The Auto Warranty sleaze

A month ago, we purchased my wife’s Acura off the lease. She’d done less than 30,000 miles in the three years she’d had it, there was nothing wrong with it, and there wasn’t anything available for the models she liked, in the colors and with the luxury level she was keen on. So rather than worry too much about that elusive new car, we just bought the current one off the lease. Maybe in a couple of years there’ll be something she likes and we’ll consider what to do then. Anyway, this is not about...

Read more »

Windows 10 upgrade: the Microsoft Money mess

OK, I get it: I’m behind the times. I still use Microsoft Money, the “sunset” edition. Yes, it’s been six years since it was retired, but I prefer it way, WAY more than Quicken. And, to be honest, thus far – I’ve now been using it for 20 years, believe it or not (first entry: July 3, 1995) – it’s been just fine. However, yesterday, I was suddenly brought up short with a jolt, or to be more accurate with an error message about Internet Explorer 6. It wants IE6??? So what was special about yesterday...

Read more »

WOFF files and Azure: the 404 conundrum

More than anything, this is going to be a discussion about testing, but the headline is all. This afternoon, in trying to keep cool inside on this hot day, I thought I’d remove the Google Ads on this site. Frankly they were a pain in the neck to design for: they used to be a sidepanel on the right and trying to get the code to make them disappear when the browser window was too small width-wise was just annoying. Plus the ads were being loaded anyway even if they weren’t being displayed...

Read more »



About Me

I'm Julian M Bucknall, an ex-pat Brit living in Colorado, an atheist, a microbrew enthusiast, a Volvo 1800S owner, a Pet Shop Boys fanboy, a slide rule and HP calculator collector, an amateur photographer, an Altoids muncher.


I'm Chief Technology Officer at Developer Express, a software company that writes some great controls and tools for .NET and Delphi. I'm responsible for the technology oversight and vision of the company.


