## Welcome!

Hi there! I'm Julian M Bucknall, a programmer by trade, an actor by ambition, and an algorithms guy by osmosis. I chat here about pretty much everything but those occupations. Unless you're really lucky...

Most recently this is what I've come up with:

## Dumb CSS: cursor pointer or hand?

So I had occasion to peruse someone else’s CSS today, and came across this peculiar construct:

```css
.someClass
{
    cursor: pointer;
    cursor: hand;
}
```

Do what? Set the cursor to “pointer” and then set it to “hand”? Whut?

After a bit of research, I found out that this is an accepted hack, provided that you are supporting IE5 and IE5.5 users. Double whut?

Back to the Olde Days. It seems that in those times, Microsoft had gone its own way with regard to displaying a pointing cursor, usually shaped like a hand with extended index finger. They used, well, hand. Everyone else, pointer. With IE6, Microsoft added cursor:pointer, while still accepting the off-spec cursor:hand. So, in essence, from IE6 onwards, cursor:pointer was – and is – the way to go. The double declaration works because browsers ignore declarations they don’t understand: IE5 and IE5.5 skip the pointer value and apply hand, whereas standards-compliant browsers apply pointer and drop the nonstandard hand. (See here on the quirksmode site.)

Hence, a gentle nudge, dear reader. Check your own CSS and get rid of those cursor:hands. You really don’t want users who are still relying on IE5 or IE5.5 for their browser, do you?
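If you want to automate that nudge, here’s a tiny sketch (the helper name is mine, not a real tool) that counts the nonstandard cursor:hand declarations in a chunk of CSS text so you know there’s something to clean up:

```javascript
// Hypothetical helper: count nonstandard "cursor: hand" declarations in a
// CSS string. A sketch only -- a real linter would parse the stylesheet.
function countHandCursors(cssText) {
  var matches = cssText.match(/cursor\s*:\s*hand\b/gi);
  return matches ? matches.length : 0;
}

countHandCursors(".someClass { cursor: pointer; cursor: hand; }"); // 1
```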

Now playing:
Electric Light Orchestra - Here Is The News
(from The Very Best of Electric Light Orchestra)

## CSS3 line height is important for drop caps

Recently I was playing around and added drop caps to the blog posts on blog.boyet.com. I decided to go for a pure CSS3 version (so, you’ll have to view this site in a reasonably fresh browser to see the effect) rather than a hacky <span> version that mixes presentation “hints” in the content. (For a brief discussion on the two possible methods, see Chris Coyier’s blog post here.) I certainly didn’t want to change all my posts to include spans on the first letter of the first paragraph.

The way I implemented it was to add a class style to the surrounding div:

```css
.initialcap > p:first-child:first-letter {
    background: url("images/classy_fabric.png") repeat scroll 0 0;
    color: #efefef;
    font-size: 48px;
    margin-right: 3px;
    float: left;
}
```

The style makes reference to a paragraph child of the div, and uses the :first-child pseudo-class and the :first-letter pseudo-element. The initial letter is styled with a background image, a contrasting color, a larger size, and relevant padding and margins. The whole lot is then floated left, so that the text wraps around it.

Pretty good. I viewed it in Firefox, saw it was good, and went off to do something else.

A few days later, I happened to run Chrome and immediately saw a problem: the drop cap was stretched vertically:

The same problem happened in IE10, too. What was going on? Firefox still showed the initial capital just fine.

It turned out that I’d missed off a line-height declaration from my style, and this was affecting the display in Chrome and IE but not, for some unknown reason, in Firefox. So I changed the style to this:

```css
.initialcap > p:first-child:first-letter {
    background: url("images/classy_fabric.png") repeat scroll 0 0;
    color: #efefef;
    font-size: 48px;
    line-height: 32px; /* that is, 48px font size - padding top&bottom */
    margin-right: 3px;
    float: left;
}
```


As you can see, I added a comment for myself to explain how I’d calculated the value since it’s a little bit “magic”. If you now visit this blog in Chrome and IE (and Safari for that matter) the drop caps are displayed correctly.
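The comment’s arithmetic generalizes, so here’s a quick sketch of it. The individual 8px paddings are my assumption purely for illustration; the post only pins down the 48px font size and the 32px result:

```javascript
// Sketch of the "magic" value's arithmetic:
// line-height = font-size minus total vertical padding.
function dropCapLineHeight(fontSizePx, paddingTopPx, paddingBottomPx) {
  return fontSizePx - (paddingTopPx + paddingBottomPx);
}

dropCapLineHeight(48, 8, 8); // 32, the value used in the style above
```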

The moral of this tale is: test your websites in all four major desktop browsers. You’d be wrong in believing that they all render the same way, even in this day and age.

Now playing on Pandora:
Groove Armada – Inside My Mind (Blue Skies) on Vertigo (Import)

## Weird XML bug on iPad when displaying this site

The call came through the batphone from Mehul Harry: he was seeing an issue displaying blog posts from this site on an iPad. It was a new one on me, and I quickly checked on my iPad using Safari: no problem.

After a bit of back-and-forth, we’d nailed it down. If you navigated to a blog post via the Facebook app, the iPad displayed an XML error (due to the missing schema, essentially). If you navigated to the exact same blog post using Safari, it displayed fine. The weird thing is that this site doesn’t use XML; it’s HTML5, not XHTML.

Although this was news to me on the iPad, it was something I’d run into before: when I try to validate the site with the W3C HTML5 validator, it complains of exactly the same problem: bad XML. At least with the W3C validator, I could glean a little more information: the web server was sending Content-Type: application/xhtml+xml in the response header, not text/html (which is what I was expecting, and which is what I would get in a browser). Why would the web server (IIS7, as it happens) send this erroneous Content-Type?

By default, ASPX files (which this site serves up; it’s a big old .NET program) are assumed to be text/html by both IIS and ASP.NET. Since the same page gets the correct Content-Type when it’s a browser making the requests, I can only assume that ASP.NET 3.5 was miscategorizing the user-agent of the request when it’s a UIWebView (what the Facebook app is using) or the W3C validator and assuming that it required XHTML. (This is the job of the machine.config file, in essence; something that, in a shared hosting environment, I have no access to in order to change it.)

Time to be explicit, since the default was wrong. I altered the ancestor page class’s OnLoad() method (it’s the TemplatedThemePage class, if you’re interested) to state explicitly that the pages returned by GraffitiCMS were text/html:

```csharp
Response.AppendHeader("Content-Type", "text/html");
```

I recompiled, uploaded the new assemblies, and – zing – the problem was solved both on the iPad and for the validator.
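The crux is that the Content-Type header decides which parser the client uses: application/xhtml+xml triggers strict XML parsing (hence the error page), while text/html gets the forgiving HTML parser. A little sketch of that decision (the helper is mine, just to illustrate the distinction):

```javascript
// Sketch: which parser a Content-Type header value selects.
// Hypothetical helper, not real browser code.
function parserFor(contentType) {
  if (/^application\/xhtml\+xml\b/i.test(contentType)) return "xml";  // strict: any error aborts
  if (/^text\/html\b/i.test(contentType)) return "html";              // forgiving tag soup
  return "other";
}

parserFor("application/xhtml+xml"); // "xml"  -- what the UIWebView got
parserFor("text/html");             // "html" -- what Safari got
```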

Now playing on Pandora:
IIO – Runaway on Runaway [Maxi-Single]

## Adventures with JSONP and jQuery

This whole thing started out as a nice-to-have. I have a blog (you’re reading it). I have a URL shortener (jmbk.nl). They are separate apps on separate domains. When I publish a post here, I diligently create the short URL for it manually in order to publish that short URL on social sites (the URL shortener has some minimalist stats associated with each short URL; so minimal, it’s only a count of the number of times it was used). Yeah, I know, silly, huh: why can’t each post generate its own short URL?

Now I could have done this in C# and .NET as a plug-in to the blogging software I use, but where’s the fun in that? Let’s do it in JavaScript!

What I want is some code that will get the URL of a blog post when that post is displayed in the browser (easy!), call my URL shortener service with it via good old AJAX, and receive the short URL as a reply. The big issue with this simple plan is the so-called “Same-Origin Policy”. In essence, getting data from the same site (that is, protocol, domain/host, and port number) via JavaScript is smiled upon, but getting data from another site entirely is frowned upon to the point it doesn’t work. Since my pages are on blog.boyet.com, I can only get data from blog.boyet.com. It’s all to do with maintaining strict security boundaries (think of cookies as a big example).
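The same-origin test itself is simple to sketch: two URLs share an origin only when scheme, host, and port all match. (The helper name is mine; browsers do this check internally.)

```javascript
// Sketch of the same-origin check: protocol, hostname, and port must all match.
function sameOrigin(a, b) {
  var ua = new URL(a), ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

sameOrigin("http://blog.boyet.com/post1", "http://blog.boyet.com/feed"); // true
sameOrigin("http://blog.boyet.com/post1", "http://jmbk.nl/MakeUrl/");    // false
```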

Nevertheless, sometimes it might be advantageous for your code to get some data from another site. An example might be to display the top ten most recent of your tweets from Twitter in your blog webpage. The problem here is that twitter.com is not the domain of your personal blog, so what can be done?

If you think about it, the one thing you can get from other sites is JavaScript code inside script elements. Here, as an example, is how this site gets jQuery:

```html
<script type="text/javascript" src="http://ajax.microsoft.com/ajax/jQuery/jquery-1.9.1.min.js"></script>
```

When the browser encounters a script element like this, two things happen: first, the JavaScript file referenced by that URL is downloaded (via HTTP GET), and second, the script thus downloaded is executed. Hmm.

Let’s continue this thought experiment. These days we get our data in the JSON format; that is, code that defines a JavaScript object. Here’s an example of a JSON object that is relevant to my discussion:

```javascript
{
    shortURL: "http://jmbk.nl/j6Q2K",
    requestCount: 42
}
```

Running before I can walk, I could construct the source URL I need in a <script> tag, say something like “http://jmbk.nl/MakeUrl/?shorten=http://blog.boyet.com/” (that isn’t the real URL I use for this operation, by the way, so there’s no point in trying it), and the browser would GET the URL, which would return the above JSON. Which would then be executed and crash with some kind of syntax error: it’s not executable code.

OK, then, let’s alter the URL shortener’s code to return this instead:

```javascript
foobar({ shortURL: "http://jmbk.nl/j6Q2K", requestCount: 42 });
```

This time it’s real executable code, and the browser will call the foobar() function for me once the AJAX request is done, passing the JSON object I want as a parameter. Except I don’t have a foobar() function yet in my client code, so I have to write one and include its code in some kind of script tag. This function must take the JSON object and do something with it (like replace the href attribute on an anchor element I have somewhere in my HTML markup).

This, in essence, is this mysterious protocol called JSONP you may have heard of. JSONP stands for JSON with Padding, where the “padding” is wrapping the JSON object inside a function call as a parameter.
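The padding step the server performs can be sketched in a couple of lines (the function name is mine, just for illustration):

```javascript
// Minimal sketch of JSONP's server-side "padding": wrap the JSON object
// in a call to the callback function the client asked for.
function padJson(callbackName, obj) {
  return callbackName + "(" + JSON.stringify(obj) + ");";
}

padJson("foobar", { shortURL: "http://jmbk.nl/j6Q2K", requestCount: 42 });
// 'foobar({"shortURL":"http://jmbk.nl/j6Q2K","requestCount":42});'
```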

Now it’s a real pain to have two different ways of getting JSON objects from servers: one for your “same origin” server (a simple AJAX GET request) and one for everything else (adding a silly script element, having a special function that does something with the JSON object, yadda, yadda). So jQuery wraps both up in one function: $.getJSON(). This function looks at the origin of the URL that’s passed in. If it’s for the same site, it issues an AJAX GET request and your callback gets called normally on completion. Otherwise, jQuery does some pretty fancy footwork in the background: it creates a temporary script element on the fly with a slightly modified version of the URL you pass in, creates a callback function that saves the returned JSON object and calls your callback with that object, and then deletes the temporary script element. To you, the developer, a ‘local’ and a ‘remote’ getJSON call work in the same way: you specify the URL, and your callback is executed once the JSON object is returned.

I mentioned that for a ‘remote’ call the URL is modified a little: jQuery adds a callback query string to the end of the URL. This query string gives the name of the function that should wrap the returned JSON. For example, for “http://jmbk.nl/MakeUrl/?shorten=someurl”, the URL actually used is something like “http://jmbk.nl/MakeUrl/?shorten=someurl&callback=foobar”. The server is supposed to read this query string and construct the reply so that it becomes a call to this function. Better still is to let jQuery pick the name: write the URL as “http://jmbk.nl/MakeUrl/?shorten=someurl&callback=?” and jQuery will make up a function with a unique name on the fly. This is by far the preferred way to do it: jQuery will ensure the returned JSON is not cached, for example.

(Note: it does get a little confusing, since there are two callbacks in play. There’s the callback function you write and pass to $.getJSON(); it has one parameter, the returned JSON object, and jQuery calls it once the AJAX request has completed. The other callback is the function the server uses to wrap the JSON; the browser calls it by executing the code returned from the AJAX request.)

Of course, in my case, since the URL I’m trying to shorten may itself have query strings, it behooves me to encode the long URL and for the server to decode it. Here’s the code I use on the client:

```javascript
$(function () {
    var url = encodeURIComponent("http://blog.boyet.com/?foo=this&bar=that");
    url = "http://jmbk.nl/MakeUrl/?shorten=" + url + "&callback=?";
    $.getJSON(url, function (json) {
        $("#shortURL").attr("href", json.shortUrl);
    });
});
```

And here’s the (somewhat redacted) code on the server:

```csharp
public void ProcessRequest(HttpContext context) {
    string urlToShorten = context.Request.QueryString["shorten"];
    if (!string.IsNullOrEmpty(urlToShorten)) {
        string jsonpCallback = context.Request.QueryString["callback"];
        if (!string.IsNullOrEmpty(jsonpCallback)) {
            context.Response.ContentType = "text/javascript";
            string responseFormat = jsonpCallback + "({{ shortUrl : '{0}', requestCount : {1} }})";
            urlToShorten = Uri.UnescapeDataString(urlToShorten);
            ShortUrl shortUrl = new ShortUrl(urlToShorten);
            shortUrl.Save();
            context.Response.Write(string.Format(responseFormat, shortUrl.PublicShortenedUrl, shortUrl.UsageCount));
        }
    }
}
```

And that’s about it. If you want to do more with JSONP, check out the relevant options for $.ajax().
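As an aside, here’s a quick sketch of why that encoding step matters: the long URL’s own “?”, “&”, and “=” characters would otherwise be parsed as part of the shortener’s query string.

```javascript
// Round-trip of the encoding the client and server perform on the long URL.
var longUrl = "http://blog.boyet.com/?foo=this&bar=that";
var encoded = encodeURIComponent(longUrl);
// encoded === "http%3A%2F%2Fblog.boyet.com%2F%3Ffoo%3Dthis%26bar%3Dthat"

// Safe to embed: the shortener sees one "shorten" value, not stray parameters.
var requestUrl = "http://jmbk.nl/MakeUrl/?shorten=" + encoded + "&callback=?";

decodeURIComponent(encoded); // back to the original long URL on the server side
```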

Now playing:
Karminsky Experience Inc. - Exploration
(from The Power Of Suggestion)

## PCPlus 321: Tilt-shift photography

This was one of those articles where I had to start from scratch with my research: I knew pretty much nothing about the subject. Sure, I was familiar enough with those photos of real buildings that looked as if they were made as a model on the kitchen table, but I had no idea how they were produced. I’d assumed that it might be some kind of digital post-processing of a photo, but I didn’t have any idea that you could purchase special tilt-shift lenses for DSLRs. I start off with the universal...

## Buying a sleeve for the Dell XPS 12

My new XPS 12 is a pretty nice machine. Small (0.8” × 12.5” × 8.5”) and light (3.35 lbs), despite the intricacies of the flip screen and the touch capability. The top and bottom are covered in some kind of black matt surface that’s easy to grip and doesn’t seem to mark that easily. Yet, I’d prefer having something to protect it. Unfortunately it’s not such a bestseller like, say, the 13” MacBook Air that a whole accessories industry has...

## Upgrading hardware: Dell XPS 15z and XPS 12

We’re getting close to the conference season at work: in June I’m going to both TechEd US in New Orleans and //build/ in San Francisco. Rather than cart around my very unloved Surface RT (I haven’t used it in possibly two months), I’ve been dithering about buying an Intel Windows 8 machine, a touch device, say a convertible. I’ve had my eye on two possibilities, the Surface Pro and the Dell XPS 12. Since the touch machines available right now just aren’t powerful or expansive enough to be my...

## PCPlus 320: Error detection and correction

OK, I admit it. I’ve been in the programming industry for more years than I care to count, and although I’d vaguely considered error detection in the past, it wasn’t until I did some research for this next article that I finally got to have some understanding of it all. And not only error detection but correction too: now that was pure magic. But, as with all these things, once you get the basic idea about how it works, all the magic gets stripped away. I suppose the first kind...

## Scientists in Paris

We were vacationing in Paris last week in a hotel in the 16eme arrondissement, just outside the boundary of the 8eme. We’d stayed there before, but I noticed this time that several streets in the vicinity have names commemorating scientists and mathematicians. I thought it would be fun to take pictures of the ones I could find that were close and put them in a collage: In order, they are: Augustin-Jean Fresnel . We also saw a lighthouse lens built on his principles in the Musée Maritime...

## Inline scripts: sometimes the web is just screwed up

I don’t know about you, but one of my favorite commands in the browser is “View Page Source”, especially on a site that’s modern, visually attractive, or shows off some clever interactions. After all, I’m a developer: I like to find out how things work so I can, if I want to, replicate them on my own web sites. Some web pages, though, are really nasty when you look at their source. And one of the places they excel at nastiness is in their use of inline scripts. Now, don’t get me wrong, I’m not particularly...

# Extras


I'm Julian M Bucknall, an ex-pat Brit living in Colorado, an atheist, a microbrew enthusiast, a Volvo 1800S owner, a Pet Shop Boys fanboy, a slide rule and HP calculator collector, an amateur photographer, and an Altoids muncher.

## DevExpress

I'm Chief Technology Officer at Developer Express, a software company that writes some great controls and tools for .NET and Delphi. I'm responsible for the technology oversight and vision of the company.
