Guillermo Esteves

Web design & stuff.

I love Chrome’s automatic updates

Last night I signed up for Clicky Web Analytics, and looking around their site I saw that they offer market share stats for the major browsers (IE, Chrome, Safari, Firefox, and Opera), both overall and split by browser version. Looking at Chrome’s stats, I noticed something interesting in the graph:

Chrome market share

Thanks to Chrome’s silent automatic updates, as soon as a new version is released, the previous one virtually disappears in a matter of days! I’m sure there are valid arguments against updating software automatically and silently – for example, at organizations that need to control what software their employees use, or that need to test existing applications in new browser versions before deploying them – but from a developer’s point of view I think it’s awesome, because for all intents and purposes there’s only one version of Chrome: the current one. Since older versions aren’t a big concern, testing in Chrome becomes simpler and easier: there’s no need to hunt down and keep multiple versions around for testing.

Compare that to Internet Explorer, where the four most recent versions coexist, so if it represents a major portion of your visits (and it probably does), you’ll have to support at least two of them: Internet Explorer 8, for the large number of people still running Windows XP; and 9, for those running Windows Vista and 7. Unfortunately, unless dropping XP and Vista is an option, you’ll probably have to keep supporting both even after Internet Explorer 10 comes out, since IE10 won’t run on Vista (and IE9 already doesn’t run on XP).

Internet Explorer market share

Safari’s market share behaves a bit like IE’s, inasmuch as it doesn’t automatically update and the newest version coexists with the older one, but remarkably Safari 5.1 has already overtaken the previous version, just a month after its release with the launch of Lion. Still, until 5.0 is gone, testing in it might be problematic unless you have an older Mac nearby, or a Snow Leopard Server disc you can install in VMware Fusion or Parallels.

Safari market share

Firefox, meanwhile, behaves like a combination of the two. Since Firefox 4, which introduced automatic updates, it has behaved like Chrome, with the previous version dropping off soon after each new release; but like IE, it retains a good number of users still on version 3.6, which didn’t have automatic updates.

Firefox market share

And Opera… oh, who cares, I’m pretty sure Opera’s market share is composed entirely of developers testing their sites in Opera.

Anyway, knowing that I can stop worrying about testing in older versions of Chrome (and, to a much lesser degree, Firefox and Safari) makes my job much easier, but as usual, your mileage may vary. Let your own browser stats be your guide.

Better infinite scrolling with the HTML5 History API

Now that Piictu has finally launched and is out of beta, I want to write a bit about one of my favorite things I worked on as the front-end web developer there: our implementation of infinite scrolling, improved with the HTML5 History API – the problem we were trying to solve, and the solution we arrived at.

What’s Piictu?

A bit of background first. In case you haven’t tried it (and you totally should), Piictu is an iPhone social photo app built around the concept of “photo streams”, or threads of photos by different users on the same subject. For example, you can take a photo of a sandwich, start a stream titled “eating a sandwich”, and watch as your friends and followers reply with photos of their own sandwiches, or whatever they’re having for lunch. Check it out – there are some incredibly creative games and memes going on over there. It’s a lot of fun.

Since these streams could conceivably have hundreds of photos, and we wanted an uninterrupted photo-viewing experience, we decided early on to implement each photo stream as an infinitely-scrolling page instead of using regular pagination. However, this concept of streams of thematically-related photos defined one of the main requirements for the design: we never wanted to take a photo out of its context, which meant that when people shared a photo, we couldn’t send them to a traditional permalink page containing just that one photo. The challenge was to figure out the best way to allow a user to share any photo without taking it out of the context of its stream.

The problem with infinite scroll

I’m not a big fan of many sites’ implementations of infinite/endless scroll, and given a choice, I turn it off. Most of the time it just drives me nuts. For example, on most sites that use it, if my Internet connection goes out, or there’s a server error, or my browser crashes, I’m forced to start back at the top, which I find infuriating if I’m really deep down the page. Another problem is that I usually can’t bookmark my position, so if I leave and come back later, I have to start over. So, in addition to the photo-sharing-on-an-infinite-page problem, I also wanted to tackle these issues, for a better user experience.

The old Ajax way

My first idea when tackling this problem was a traditional solution using Ajax and fragment identifiers: start the stream of photos at an arbitrary point by storing the ID of the desired photo in the URL hash (e.g. /stream/123/#/photo/456). Since anything after the hash (#) character – the fragment identifier – isn’t sent to the server, this would require passing the photo ID to the server using Ajax, and loading the correct sequence of photos with JavaScript. To make sharing easier, I wanted the hash fragment to update with the ID of the photo currently in the viewport as the user scrolls up and down, so they could share it by simply copying and pasting the URL.
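In rough pseudo-jQuery, that approach would have looked something like this (a hypothetical sketch: the JSON endpoint, the base_url variable, and the renderPhotos() helper are all invented for illustration):

// On page load, read the photo ID from the fragment identifier and ask the
// server for the stream starting at that photo via Ajax, since the server
// itself never sees anything after the # character.
$(document).ready(function () {
  var match = location.hash.match(/^#\/photo\/(\d+)/);
  var photo_id = match ? match[1] : null;
  // e.g. GET /stream/123/photos.json?start_at=456
  $.getJSON(base_url + '/photos.json', { start_at: photo_id }, function (photos) {
    renderPhotos(photos); // hypothetical function that renders the photo markup
  });
});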

However, I had a few issues with this approach. The first, obvious one is that it doesn’t degrade gracefully: if visitors don’t have JavaScript enabled, or an error prevents the JavaScript from loading, they’ll get a nice empty page – probably not the best experience. It also prevents the page from being crawled, not just by Google, but also by Facebook. When a page is shared or liked, Facebook determines what title, description, and thumbnail to display in the News Feed by crawling the page and looking for Open Graph tags, falling back to things like the <title> tag, description meta tags, and other images on the page. On a traditional permalink page like Instagram’s, it’s easy: just set the Open Graph tags with the metadata of the one photo on the page. But on a page with a multitude of Ajax-loaded photos, without the server knowing which photo is being requested (remember, the server never gets the hash fragment), how do you set these tags? If Facebook can’t see that information for the photo being shared, it won’t know what to display in the News Feed, undermining what we set out to accomplish in the first place: making it easier for users to share the photos. As an up-and-coming startup looking for traffic and exposure, this was a real deal-breaker, so I quickly scrapped this solution.
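For reference, the Open Graph tags Facebook looks for are just meta elements in the page’s <head>, along these lines (the values here are made up for illustration):

<meta property="og:title" content="Eating a sandwich" />
<meta property="og:description" content="A photo stream on Piictu" />
<meta property="og:image" content="http://example.com/photos/456.jpg" />

The trouble with the hash-based approach is that the server has no way of knowing which photo’s values to put in them.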

A better solution using the HTML5 History API

Instead, I decided to use the HTML5 History API: rather than putting the ID of the photo currently in the viewport in the fragment identifier, I update the URL in the address bar by calling the replaceState() method. The basic idea is this:

  1. Wait for the scroll event to fire. (Note that since the scroll event can fire a lot, for performance reasons it’s best to run any code attached to this event after a small delay, using a setInterval, as per John Resig’s recommendation.)
  2. When the page has scrolled, get the ID of the top-most photo in the viewport. For this I used the Viewport Selectors jQuery plugin, which adds a handy :in-viewport selector. I also embedded the ID of each photo as a data-photo-id attribute in their markup, to make it easy to get with JavaScript.
  3. If the browser supports the History API, use replaceState() to add the photo ID to the base URL of the stream page, or to remove it if it’s the first photo in the stream (i.e. if we’ve scrolled back to the top). I chose replaceState() (which updates the current browser history entry) over pushState() (which adds a new history entry) because I didn’t want users to have to click “back” through every photo just to get to the previous page.

An abridged version of the JavaScript code used in Piictu looks somewhat like this (I removed some functionality that isn’t necessary for the History API explanation, such as the actual infinite scroll implementation, but I hope it’s clear enough):

// base_url (defined elsewhere) holds the original URL of the stream page,
// stored in a data-* attribute in the markup for easy retrieval.
// Set a flag when the page scrolls, instead of doing work in the scroll
// handler itself, since the scroll event can fire very often.
var did_scroll = false;

$(window).scroll(function () {
  did_scroll = true;
});

// Every 250ms, check if the page has scrolled
setInterval(function () {
  if (did_scroll) {
    updatePhotoPath();
    loadMorePhotos(); // Infinite scroll, etc.
    did_scroll = false;
  }
}, 250);

// Update the URL with the ID of the top-most photo in the viewport
function updatePhotoPath() {
  var new_url;
  var in_viewport = $('div.piic:in-viewport').first();
  if (history.replaceState) {
    if (in_viewport.hasClass('original')) {
      new_url = base_url; // The original URL of the stream page
    } else {
      new_url = base_url + "/photo/" + in_viewport.data('photoId');
    }
    history.replaceState('', '', new_url);
  }
}

You can see this in action by going to any stream on Piictu, such as this Hipstamatic stream I started a few months ago. As you scroll up and down, you’ll notice that the ID of the photo in the viewport is appended to the URL of the stream page, and when you return to the top, the URL is restored to the original (the base_url variable in the source code, which is saved in a data-* attribute in the markup for easy retrieval).

Screenshot of a Piictu stream page

So what happens on the server when we request a stream? If we request a plain stream URL, such as /streams/123, the server returns the first few photos normally, starting with the first photo in the stream. If we request a URL that contains a photo ID, like /streams/123/photo/345, the server again returns a few photos, but this time starting at the photo with the specified ID, with an option to load the photos above it, or scroll down to load more photos below. There’s no need to use JavaScript to figure out which photos to show; it all comes directly from the server. The metadata of the requested photo is also returned as Open Graph tags in the <head> of the page, so when you post /streams/123/photo/345 on Facebook or Google+, they’ll show the correct thumbnail and caption for that photo. This solves our goal for the photos, which was to help users easily share them: whether they use the sharing buttons next to each photo, or simply grab the URL from the address bar and paste it into an instant message or their favorite social network, it’ll just work.
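To illustrate, the routing logic boils down to something like this (a hypothetical sketch in Express-style JavaScript rather than our actual backend code; getPhotos() is an invented helper). The key point is that the photo ID arrives as part of the path, so the server can pick both the photos and the Open Graph metadata without any client-side help:

var express = require('express');
var app = express();

// Plain stream URL: return the first few photos, starting at the top.
app.get('/streams/:streamId', function (req, res) {
  var photos = getPhotos(req.params.streamId, { startAt: null }); // invented helper
  res.render('stream', { photos: photos, ogPhoto: photos[0] });
});

// Photo URL: return a few photos starting at the requested one, and use
// that photo's metadata for the Open Graph tags in the <head>.
app.get('/streams/:streamId/photo/:photoId', function (req, res) {
  var photos = getPhotos(req.params.streamId, { startAt: req.params.photoId });
  res.render('stream', { photos: photos, ogPhoto: photos[0] });
});

app.listen(3000);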

It also alleviates some of my pet peeves with infinite scrolling. Since the URL updates automatically as you move up and down the page, you can easily bookmark your position, which is particularly handy on very long streams; and if for whatever reason you’re forced to reload the page, or your browser crashes, you’ll start where you left off, avoiding the frustration of having to start over (assuming your browser reopens your tabs after a crash).

Finally, it degrades somewhat gracefully, as it’ll show the appropriate photos even if JavaScript is disabled, since JavaScript isn’t necessary to figure out which photos to load. (I say “somewhat” because it doesn’t yet offer regular pagination as a fallback, but it’s on the to-do list.)

What about Internet Explorer?

As always, the biggest issue with using any modern technology is Internet Explorer, which in this case doesn’t support the History API in versions 9 and below. I briefly worked on a workaround for IE, using the ol’ hash fragments as a fallback, but in the end we simply decided not to support it, mainly because between January and May, Internet Explorer accounted for only 2.42% of the visits to our signup and teaser page, so the added effort and maintenance it would require seemed counterproductive. Besides, our implementation degrades gracefully in IE: the URL may not change as the user scrolls, but everything else works properly, and sharing photos is still possible using the Twitter and Facebook buttons. In other words, it simply behaves like a traditional implementation of infinite scroll. Finally, it’s a temporary situation, as Internet Explorer 10 will support the History API and shouldn’t require any further work. I tested it in the Windows 8 Developer Preview, which includes a preview version of Internet Explorer 10, and it worked perfectly.
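For the curious, that scrapped fallback would have amounted to replacing the replaceState() call in updatePhotoPath() with something like this (a hypothetical sketch, not code we shipped):

// When replaceState() isn't available, mirror the photo ID in the hash
// fragment instead. location.replace() swaps out the current history entry
// without reloading the page, much like replaceState() does.
if (history.replaceState) {
  history.replaceState('', '', new_url);
} else {
  var hash = in_viewport.hasClass('original') ? '#' : '#/photo/' + in_viewport.data('photoId');
  location.replace(hash);
}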

Conclusion

I really believe that using the HTML5 History API to augment infinite scrolling offers a superior user experience, alleviating some of the annoyances of traditional approaches, such as the lack of bookmarking and sharing. I expect this technique will see more use once Internet Explorer supports the History API, but if you’re willing to live without IE support for a bit (or use one of the many polyfills available), it’s definitely worth trying now.

Let me know what you think about this; I look forward to your comments and questions, even though I still haven’t gotten around to adding comments to this blog. In the meantime, feel free to tweet @gesteves or send me an email.

Hello, World

After almost four years on Tumblr, I’ve decided it’s time to switch blog platforms. My blog now runs on Octopress and Heroku.

The reason I’m switching, and the reason I was using Tumblr in the first place, are a bit of a long story. I used to have a real blog, one I built myself in 2005 or so, when I was teaching myself Rails, and which I loved and updated frequently. However, a couple of years later, thanks to Venezuela’s foreign currency restrictions, which forbid us from spending more than US$400 a year on electronic payments, having a self-hosted blog running on paid hosting – even cheap, shared hosting – became untenable. Back then I was spending $9 a month at Rails Playground to host my blog, which may not sound like a lot, but it added up to $108 a year – over a quarter of what the Venezuelan government allows me to spend on the Internet in a year. So, in 2008, when Tumblr began to take off in popularity, I decided to cancel my hosting plan, scrap my blog, write a small script to import all my content, and switch to Tumblr.

Tumblr had the advantage that I didn’t have to worry about servers or hosting costs, while being flexible enough to let me tinker with the code and design to my heart’s content – plus an amazing community that led me to meet some of the best friends I’ve ever had. However, in the past few months I’ve become quite dissatisfied with the service and its constant outages and downtime, like this recent, ongoing issue. I’m also a bit uneasy with the content I post over there, because it’s a strange mix of work/professional stuff and personal posts, reblogs, memes, and inside jokes that probably aren’t interesting to anyone outside my close circle of friends. So I thought it would be better to have a place that’s just for serious business, and leave Tumblr for personal posts, socializing with my friends, and sharing photos of cats. All-Encompassing Trip will keep going at its new address, but this will be my primary blog for now.

As for my choice of platform, I chose Octopress after reading Matt Gemmell rave about it. I’ve always liked the idea of having a “baked” blog (i.e. one that’s entirely static HTML), and I’ve experimented in the past with things like nanoc, but Octopress makes it dead simple to set up, generate, and deploy a static HTML blog. If you’re considering starting a blog, and feel comfortable working in the Terminal, I can’t recommend Octopress enough.

There are plenty of advantages to the “baked” approach. Since there are no slow and expensive database calls, it’s blazing fast, lighter, and more responsive, and it won’t fall over at the first traffic spike it gets. Not that I’m expecting to get fireballed or anything, but it also means the site is lightweight enough that I can probably get away with running it on a single Heroku dyno for the foreseeable future – which makes it free. There’s also a security argument to be made, since there’s no admin interface to hack and no chance of SQL injection. I also like that it makes backups really simple: the posts live on my computer, so they get backed up with Time Machine and SuperDuper as part of my regular backup process; my Sites folder is symlinked into my Dropbox folder, so there’s a backup there too; and since the posts are also under source control, everything gets committed to my GitHub repo as well. Finally, migration is trivial, because the blog is just a bunch of static HTML files: put them on a new server and it’ll work. Octopress can even automate copying the files to the server with rsync after writing a post.

A few other things of note:

  • The theme is mostly built from scratch, based on the design of my website. Unfortunately, I mostly destroyed all the sensible patterns and defaults Brandon (the creator of Octopress) created for theming it, so I think I’ll have to do some work rebuilding them to keep the code more organized. I did keep his awesome port of Solarized syntax highlighting, though.
  • I modified it for deployment to Heroku. This included removing the public folder from .gitignore, per Brandon’s instructions; adding everything but the public folder and the config files to .slugignore, to keep the slug size as small as possible (it clocks in at 460 KB, vs. 4.4 MB for my website’s slug) – there’s a sketch of that file after this list; and adding a rake task for Heroku deployment, which is mostly a copy of the default GitHub one, but pushing to Heroku’s master branch instead.
  • I wanted to use iA Writer as my blogging software, so I modified the new_post rake task to call open #{filename} at the end; this opens the newly created post in the default editor for Markdown files, which I’d previously set to Writer.
  • I also symlinked the _posts folder to the Writer and Elements folders in my Dropbox, so I can theoretically write posts from my iPad and iPhone, although I’d have to SSH into my computer to actually publish them.
  • I’m trying to simplify the process of creating new posts and deploying to Heroku, and integrate it better with the OS. I’m currently trying to figure out how to add the rake tasks to OS X’s Services menu, to make it easier to publish posts after writing them. I’ve also replaced some of the puts calls in the Rakefile with calls to growlnotify, to get nice Growl notifications on successful deploys and whatnot.
  • I still haven’t decided whether or not I want comments here, or if I want to use Disqus or Facebook Comments. In the meantime, you can tweet @gesteves with any comments.
  • I didn’t import any of the old content from Tumblr; I couldn’t figure out a good way to do it without breaking a ton of links, since Tumblr’s permalink format doesn’t match Octopress’s. I thought about modifying Octopress’s permalinks and working something out using my local backup of Tumblr, but instead I opted for a clean break and a fresh start, using a bit of Sinatra code to 301-redirect any traffic looking for a Tumblr permalink to my Tumblr’s new domain.
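As for that .slugignore, something along these lines does the trick – the directory names are Octopress defaults, so treat this as a sketch rather than my exact file:

source
sass
plugins
.themes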

Anyway, I’m not entirely sure how much or how often I’ll write here – Twitter & Tumblr seem to have atrophied my ability to write more than 140 characters at a time, and writing this post took longer than I care to admit – but I do hope to at least comment on interesting web design & development resources I find, in the style of Assaf Arkin’s “Rounded Corners” series – which I love – and maybe get back in the habit of writing well enough to, you know, express, like, opinions and stuff. Wish me luck, and thanks for reading.