localStorage DOS, Lunr.js, Vlug

01 Mar 2013 | By Alex Young | Comments | Tags security libraries search benchmarking node

localStorage DOS

Even though the Web Storage specification says user agents should limit the amount of space used to store data, a new exploit uses it to store gigabytes of junk. The exploit is based around storing data per-subdomain, which gets around the limits most browsers have already implemented. Users testing it found Chrome would crash when run in incognito mode, but Firefox was immune to the attack.

Other security researchers have raised concerns about localStorage in the past. Joey Tyson talked about storing malicious code in localStorage, and Todd Anglin wrote about some of the more obscure facts about localStorage which touches on security.


Lunr.js

Oliver Nightingale from New Bamboo sent in his extremely well-presented full-text browser-based search library (GitHub: olivernn / lunr.js, License: MIT), which indexes JSON documents using some of the core techniques of larger server-side full-text search engines: tokenising, stemming, and stop word removal.

By removing the need for extra server-side processes, search can become a feature of sites or apps that otherwise wouldn’t have warranted the extra complexity.

A trie is used to map tokens to matching documents, so if you’re interested in JavaScript implementations of data structures then take a look at the source. The source includes tests and benchmarks, and a build script so you can generate your own builds.
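The pipeline described above (tokenising, stop word removal, stemming) can be sketched in a few lines of plain JavaScript. This is a simplified illustration with toy helper names and a toy stemmer, not lunr.js’s actual code:

```javascript
// A toy version of a full-text indexing pipeline:
// tokenise, drop stop words, then stem each token.
var stopWords = ['the', 'a', 'an', 'and', 'is', 'of'];

function tokenise(text) {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

function removeStopWords(tokens) {
  return tokens.filter(function(token) {
    return stopWords.indexOf(token) === -1;
  });
}

function stem(token) {
  // Crude suffix stripping; a real engine uses something like the Porter stemmer
  return token.replace(/(ing|ed|s)$/, '');
}

function pipeline(text) {
  return removeStopWords(tokenise(text)).map(stem);
}

pipeline('The searching of indexed documents');
// → ['search', 'index', 'document']
```

A real engine would then store each stemmed token in the trie, pointing at the documents that contain it.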


Vlug

Vlug (GitHub: pllee / vlug, License: MIT, npm: vlug) by Patrick Lee is a small instrumentation library for benchmarking code without manually adding log statements. The Vlug.Interceptor object takes a specification of things to log, and dynamically inserts calls to console.time and console.timeEnd to collect benchmarks.

Patrick has tested it with browsers and Node, and has included Vlug.Runner for running iterations on functions. The readme and homepage both have documentation and examples.
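The core trick — wrapping a method so timing calls happen automatically — can be sketched like this (a simplified illustration, not Vlug’s actual implementation):

```javascript
// Replace a method with a wrapper that times each call
// using console.time/console.timeEnd.
function instrument(obj, name) {
  var original = obj[name];
  obj[name] = function() {
    console.time(name);
    var result = original.apply(this, arguments);
    console.timeEnd(name);
    return result;
  };
}

var math = { square: function(n) { return n * n; } };
instrument(math, 'square');
math.square(4); // prints a 'square: ...ms' timing and returns 16
```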

Upgrading to Grunt 0.4

28 Feb 2013 | By Alex Young | Comments | Tags backbone.js node backgoog

I was working on dailyjs-backbone-tutorial and I noticed issue #5, where “fiture” was unable to run the build script. That tutorial uses Grunt to invoke r.js from RequireJS, and it turned out I had forgotten to specify the version of Grunt in the project’s package.json file, which meant newcomers were getting an incompatible version of Grunt.

I changed the project to specify the version of Grunt first, then renamed the grunt file from grunt.js to Gruntfile.js, and it pretty much worked. You can see these changes in commit 0f98f7.

So, what’s the big deal? Why is Grunt breaking projects and how can this be avoided in the future?

Global vs. Local

If you’re a client-side developer, npm is probably just part of your toolkit and you don’t really care about how it works. It gets things like Grunt for you so you can work more efficiently. However, we server-side developers like to obsess over things like dependency management, and to us it’s important to be careful about specifying the version of a given module.

Previous versions of Grunt kind of broke this whole idea, because Grunt’s documentation assumed you wanted to install Grunt “globally”. I’ve never liked doing that, as I’ve experienced why this is bad first-hand with the Ruby side projects I’ve been involved with. What I’ve always preferred to do with Node is write a package.json for every project, and specify the version of each dependency. I either specify the exact version, or the minor version if the project uses semantic versioning.

For example, with Grunt I might write this:

 , "grunt": "0.3.x"

This causes the grunt command-line tool to appear in ./node_modules/.bin/grunt, which probably isn’t in your $PATH. Therefore, when you’re ready to build the project and you type grunt, the command won’t be found.

Knowing this, I usually add node_modules/.bin/grunt as a “script” in package.json, which allows grunt to be invoked through the npm command. This works on both Unix and Windows, which was partly why I used Grunt instead of make in the first place.
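For example, a scripts entry along these lines (the script name is illustrative):

```json
"scripts": {
  "grunt": "node_modules/.bin/grunt"
}
```

With this in place, npm run-script grunt finds the local binary without it needing to be in your $PATH.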

There were problems with this approach, however. Grunt comes with a load of built-in tasks, so when the developers updated one of these smaller sub-modules they had to release a whole new version of Grunt. This is dangerous when a module is installed globally – what happens if an updated task has an API breaking change? Now all of your projects that use it need to be updated.

To fix this, the Grunt developers have pulled out the command-line part of Grunt from the base package, and they’ve also removed the tasks and released those as plugins. That means you can now write this:

 , "grunt": "0.4.x"

And install the command-line tool globally:

npm install -g grunt-cli

Since grunt-cli is a very simple module it’s safer to install it globally, while the part that we want to manage more carefully is locked down to a version range that shouldn’t break our project.

Built-in Tasks: Gone

The built-in tasks have been removed in Grunt 0.4. I prefer this approach because Grunt was getting extremely large, so it seems natural to move them out into plugins. You’ll need to add them back as devDependencies to your package.json file.

If you’re having trouble finding the old plugins, they’ve been flagged on the Grunt plugin site with stars.

Uninstall Grunt

Before switching to the latest version of Grunt, be sure to uninstall the old one with npm uninstall -g grunt if you installed it globally.

Other Changes

There are other changes in 0.4 that you may run into that didn’t affect my little Backbone project. Fortunately, the Grunt developers have written up a migration guide which explains everything in detail.

Also worth reading is Tearing Grunt Apart in which Tyler Kellen and Ben Alman explain why Grunt has been changed, and what to look forward to in 0.5.

Peer Dependencies

If you write Grunt plugins, then I recommend reading Peer Dependencies on the Node blog by Domenic Denicola. As a plugin author, you can now take advantage of the peerDependencies property in package.json for defining the version of Grunt that your plugin is compatible with.

Take a look at grunt-contrib-requirejs/package.json to see how this is used in practice. The authors have locked the plugin to Grunt 0.4.x.
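A plugin’s own package.json might declare something like this (name and version range illustrative):

```json
{
  "name": "grunt-example-plugin",
  "version": "0.1.0",
  "peerDependencies": {
    "grunt": "~0.4.0"
  }
}
```

npm then checks that the host project’s Grunt falls within the declared range, rather than installing a private, possibly incompatible copy for the plugin.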

Node Roundup: 0.8.21, Node Redis Pubsub, node-version-assets

27 Feb 2013 | By Alex Young | Comments | Tags node modules redis databases grunt
You can send in your Node projects for review through our contact form.

Node 0.8.21

Node 0.8.21 is out. There are fixes for the http and zlib modules, so it’s safe and sensible to update.

Node Redis Pubsub

Louis Chatriot sent in NRP (Node Redis Pubsub) (GitHub: louischatriot / node-redis-pubsub, License: MIT, npm: node-redis-pubsub) which provides a Node-friendly API to Redis’ pub/sub functionality. The API looks a lot like EventEmitter, so if you need to communicate between separate processes and you’re using Redis then this might be a good solution.

Louis says he’s using it at tl;dr in production, and the project comes with tests.


node-version-assets

node-version-assets (GitHub: techjacker / node-version-assets, License: MIT, npm: node-version-assets) by Andrew Griffiths is a module for hashing assets and placing the hash in the filename. For example, styles.css would become styles.7d47723e723251c776ce9deb5e23062b.css. This is implemented using Node’s file system streams, and the author has provided a Grunt example in case you want to invoke it that way.

jQuery Roundup: jQuery.IO, Animated Table Sorter, jQuery-ui-pic

26 Feb 2013 | By Alex Young | Comments | Tags jquery plugins forms json icons
Note: You can send your plugins and articles in for review through our contact form.


jQuery.IO

jQuery.IO (GitHub: sporto / jquery_io.js, License: MIT) by Sebastian Porto can be used to convert between form data, query strings, and JSON strings. It uses JSON.parse, and comes with tests and a Grunt build script.

Converting a form to a JavaScript object is as simple as $.io.form($('form')).object(), and the output uses form field names as keys rather than the array of name/value pairs returned by .serializeArray().
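The underlying transformation — folding .serializeArray()-style name/value pairs into a single object — can be sketched in plain JavaScript (not the plugin’s actual code):

```javascript
// jQuery's .serializeArray() returns [{ name: ..., value: ... }, ...];
// fold that into a plain object keyed by field name.
function toObject(pairs) {
  return pairs.reduce(function(obj, pair) {
    obj[pair.name] = pair.value;
    return obj;
  }, {});
}

toObject([
  { name: 'user', value: 'alex' },
  { name: 'age', value: '30' }
]);
// → { user: 'alex', age: '30' }
```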

Animated Table Sorter

Animated Table Sorter (GitHub: matanhershberg / animated_table_sorter, License: MIT, jquery: AnimatedTableSorter) by Matan Hershberg is a table sorting plugin that moves rows using .animate when they’re reordered.

All you need to do is call .tableSort() on a table. CSS and images have been provided for styling the selected column and sort direction.


jQuery-ui-pic

jQuery-ui-pic (GitHub: rtsinani / jQuery-ui-pic) by Artan Sinani provides an extracted version of the icons from Bootstrap and the image sprites from jQuery UI.

In this version the CSS classes are all prefixed with pic-, so you can use them like this: <i class="pic-trash"></i>. This might prove useful if you’re looking for a quick way to reuse jQuery UI’s icons without using the rest of jQuery UI. I licensed Glyphicons Pro myself because I find myself using them so much.

Mobile Testing on the Chromebook Pixel

25 Feb 2013 | By Alex Young | Comments | Tags testing laptops touchscreen mobile chrome-os

The Pixel.

Last week I was invited to a “secret workshop” at one of the Google campuses in London. Knowing that Addy Osmani works there I expected something to do with Yeoman or AngularJS, but it turned out to be a small launch event for the Chromebook Pixel. I ended up walking out of there with my very own Pixel – no, I didn’t slip one into my backpack and run off wearing a balaclava, each attendee was given one to test.

Looking at this slick metallic slab of Google-designed hardware I was left wondering what to do with it as a JavaScript hacker. It runs Chrome OS and has a high density multi-touch display. That made me wonder: how useful is the Pixel as a multi-touch, “retina” testing device? My personal workflow for client-side development is to preview sites on my desktop or laptop, then switch to testing with mobile devices towards the end of the project. I may occasionally have a tablet or phone close by for experimental work or feasibility studies, but generally I leave the device testing until later.

By using a Pixel early in development the potential is there to work “touch natively” – focusing on touch as a primary input mode rather than a secondary option.

If you’re working on mobile sites, responsive designs, or browser-based games, then how well does the Pixel function as a testing machine? With one device you get several features that are useful for testing these kinds of sites:

  1. The touchscreen. Rather than struggling with a tablet or phone during early development you get a touchscreen and the standard inputs we’re more used to.
  2. The screen’s high resolution is useful for testing sites that optionally support high density displays.
  3. Chrome’s developer tools make it easy to override the browser’s reported size and user agent which is useful for testing mobile and responsive designs.

I’ve been using the Pixel with several mobile frameworks and well-known mobile-friendly sites to see how well these points play out in practice.

Testing Mobile Sites with Chrome

Pressing ctrl-shift-j opens the JavaScript console on a Chromebook. Once you’re in there, selecting the settings icon opens up some options that can be used to simulate a mobile browser. The ‘Overrides’ tab has a user agent switcher which has some useful built-in browsers like Firefox Mobile and Android 4.x. There’s also an option for changing the resolution, which changes the resolution reported to the DOM rather than resizing the window.

The Screen and Multi-Touch

The screen itself is 2560x1700 and 3:2 (239 ppi). It’s sharp, far better than my battle-worn last gen MacBook Air. One of the Google employees at the event said the unusual aspect ratio was because websites are usually tall rather than wide, so they optimised for that. In practice I haven’t noticed it – the high pixels per inch is the most significant thing about it.

I tried Multitouch Test and it was able to see 10 unique points – I’m not sure what the limit is and I can’t find it documented in Google’s technical specs for the Pixel.

The Touchpad

This article isn’t intended to be a review of the Pixel. However, I really love the touchpad. I’ve struggled with non-Apple trackpads on laptops before, but the Pixel’s touchpad is accurate and ignores accidental touches fairly well. The click action feels right – it doesn’t take too much pressure but has a satisfying click. I also like the soft finish, which makes it feel comfortable to use even with my sweaty hands.

Testing Mobile Frameworks

I tested some well-known mobile frameworks and sites purely using the touchscreen. I set Chrome to send the Android browser user agent, and I also tried Chrome for Android and Firefox Mobile.

The Google employees at the event were quick to point out that everything works using touch. It seems like a lot of effort has been put into translating touches into events that allow UI elements to behave as expected.

That made me wonder if sites optimised for other touch devices would fail to interpret touch events and gestures on the Pixel – perhaps reading them as mouse events instead – but all of the widgets and gestures I tried seemed to work as intended. I even ran the touch event reporting in Enyo and Sencha Touch to confirm the gestures were being reported correctly.

During the event, I opened The Verge on my phone just to check what the press was saying about the Pixel. There was mention of touchscreen interface lag, and Gruber picked this up on Daring Fireball. I don’t have any way of measuring the lag scientifically myself (I hope to see a Digital Foundry-style analysis of the device), but in practice it feels like a modern tablet so I haven’t had a problem with it. I’m not sure where Gruber gets “janky” from, but as the Pixel will be sold in high street stores across the UK and US you should be able to try it out in person.

jQuery Mobile worked using the touchscreen for standard navigation, and also recognised swipes and dragging.

jQuery Mobile's widgets worked with touch-based gestures.

Enyo also seemed to recognise the expected gestures.

Enyo worked as it would on a touchscreen phone or tablet.

The Sencha Touch demos behaved as they would on a mobile device.

Sencha Touch, showing the event viewer.

Bootstrap’s responsive design seemed to cope with different sizes and touch gestures.

Bootstrap on the Pixel.

Testing Mobile Sites

The Guardian's mobile site running on the Pixel.

I tested some sites that I know work well on mobile devices, and used the touchscreen to interact with them. Again, this was with Chrome’s user agent changed to several mobile browsers.

Development Test

The way I write both client-side projects and server-side code is with Vim, tmux, and command-line tools. This doesn’t translate well to Chrome OS – it can be done by switching the machine into developer mode, but this requires some Linux experience. The Pixel supports dual booting, and Crouton seems worth checking out if you’re a Chromebook user.

I wrote this article primarily for client-side developers, so I imagine you’d prefer to use the OS as it was intended rather than installing Linux. With that in mind, I tried making some small projects using jQuery Mobile and Cloud9 IDE. Cloud9 worked well for the most part – I had the occasional crashed tab, but I managed to get a project running.

Cloud9 IDE with its HTML preview panel.

One quirk I found was that the jQuery Mobile CDN assets I used were served over HTTP, whereas Cloud9 is always served over SSL. When I tried to preview my HTML files, Chrome blocked the CDN assets, and only a small shield icon in the address bar indicated this, so it wasn’t immediately obvious.

Also, Cloud9 might not fit into your existing workflow. While it supports GitHub, Bitbucket, SSH, and FTP, it takes a bit of effort to get an existing project running with it.

If you were sold on using the Pixel as a high DPI touchscreen testing device, then the fact you can at least get some kind of JavaScript-friendly development going is useful. However, prepare to make some compromises.

Other Notes

Chrome syncs quickly. Try signing in with Chrome on multiple computers and installing apps or changing themes to see what I mean. The upshot is that the Chromebook is reliable when syncing with Google’s services. You lose this somewhat with other services, depending on how they’re built. Cloud9 IDE, for example, has an offline mode, but I haven’t tested it enough to see how resilient it is at syncing data back again.

Switching accounts on a Chromebook isn’t much fun. Chrome OS doesn’t support anything like fast user switching, and I use a ridiculously long password stored in a password manager, so I’ll do anything to avoid typing it in. Also, 1Password doesn’t have an extension for Chrome OS – you can use the HTML version (1Password Anywhere), but it’s limited and isn’t particularly friendly. LastPass works, though.


I love the look of the Pixel, it exudes luxury, and the OS is incredibly low maintenance. As a mobile development testing rig it does the job, but you may find Chrome’s remote debugging tools and a cheap tablet work well enough. Being able to dip into Chrome’s developer tools on a local device and use a keyboard and mouse is natural and convenient: it makes mobile testing feel like cheating!

Chromebooks are designed to sync constantly, which means you technically don’t have to worry about losing data if yours gets damaged or stolen. As it stands it’s a trade-off: you lose the ability to install your standard development tools but gain a lower maintenance and potentially more secure OS.

While I respect Cloud9 IDE, I feel like there are people clamouring for a product close to the Pixel that better supports developers. Perhaps Native Client will make this possible. We are the ultimate early adopters, so sell us machines we can code on with our preferred tools!

Discussion: What Do You Want From Components?

22 Feb 2013 | By Alex Young | Comments | Tags component bower

Yo, Grunt, Bower... Component!

Assume for a second that TJ Holowaychuk’s Component project isn’t the future. And then let’s say that, due to support from Twitter and Google (through Yeoman), Bower becomes the de facto tool for managing and installing client-side dependencies. Whether you’re using Yeoman or Bower with another build tool, you’re still left with a gap where reusable UI widgets should be.

While Yeoman improves workflow, Component also tackles the notion of sharing “widgets” that contain markup, stylesheets, and code. If you read TJ’s tutorials he pushes the idea of structural markup – stripping away unnecessary markup to leave behind a vanilla slice of templates that can easily be reskinned with CSS.

With more advanced client-side workflows provided by libraries like Backbone.js, RequireJS, and tools like Yeoman and Bower, I feel like moving away from monolithic UI projects is necessary. While I like to jump start projects with jQuery UI, Bootstrap, Closure Library, or perhaps even Dojo’s Dijit and DojoX, these projects are more monolithic than the modular dream promised by the Component project.

I believe there’s a missing piece in the Bower/Yeoman future: something to support the notion of reusable widgets. Packaging chunks of markup, styles, and JavaScript is nothing new – but what is being done to solidify this goal outside of the Component project and older monolithic approaches?


Web Components

The component idea is being formalised in Introduction to Web Components, which lists Dominic Cooney and Dimitri Glazkov from Google as the editors. The concept is being standardised, then, but this particular vision of components seems very different to the one TJ and other developers have envisioned.


Accessibility

Should a widget/component library enforce ARIA, or provide tools for making accessible components? jQuery UI went through many iterations to improve its accessibility, and Dojo has documentation on creating accessible widgets.

Data Binding APIs

What about data binding or MVC style development? How would UI components fit in? If you’re shipping JavaScript inside a component, how would the API provide hooks that can work with Knockout, AngularJS, and other libraries without manually plugging them in?

It feels like we’re settling on binding using data- attributes, so this might be relatively trivial in practice. Perhaps the “ultimate” component library would address this, or perhaps it’s unnecessary.

What Do You Want?

Let’s say you settle on Yeoman as your workflow tool for client-side development. What do you think reusable client-side widgets should look like? What would be the perfect fit alongside Yeoman, Bower, Grunt, and a data binding library?

Cloudinary Tutorial



This tutorial introduces Cloudinary, and demonstrates how to build a gallery application using Express and Node. Cloudinary is a service for streamlining asset management – if you’re tired of optimising images and manually uploading them to an asset server or CDN, then Cloudinary might be what you’re looking for.

One reason Cloudinary is useful to us as Node developers is the Cloudinary Node module (GitHub: cloudinary / cloudinary_npm, License: MIT, npm: cloudinary). It can be used to easily generate optimised images, thumbnails, and automatically upload them to Cloudinary. Let’s drop it into an Express application to see what happens!

The full source for this tutorial is available here: alexyoung / dailyjs-cloudinary-gallery.

Step 1: Create a Cloudinary Account

Register for a free account at Cloudinary. That’ll give you 500 MB of storage and a gigabyte of monthly bandwidth. Paid plans start at $39 a month, and that increases the storage to 10 GB and adds 40 GB a month of bandwidth.

Once you’ve created your account, sign in and take a look at the right-hand panel that reads “Account Details”.

The Cloudinary management interface.

To follow this tutorial, you’ll need the api_key and api_secret, so make a note of those.

Step 2: The Express App

Assuming you’ve already installed Node, open a terminal and run npm install -g express. Once that’s done, run express dailyjs-cloudinary-gallery to create a new Express project.

Open package.json and add the cloudinary dependency:

  {
    "name": "application-name",
    "version": "0.0.1",
    "private": true,
    "scripts": {
      "start": "node app"
    },
    "dependencies": {
      "express": "3.0.3",
      "jade": "*",
      "cloudinary": "*"
    }
  }

Once you’ve done that, save the file and run npm install. This will install the project’s dependencies, including the Cloudinary module.

Step 3: Image Uploads

Cloudinary can be used for every aspect of an image gallery:

  • Uploading images
  • Creating and serving thumbnails
  • Fetching a list of images to display

Go to the top of app.js and add the following require statements:

var cloudinary = require('cloudinary')
  , fs = require('fs')

Now add a new route to handle uploads:

app.post('/upload', function(req, res){
  var imageStream = fs.createReadStream(req.files.image.path, { encoding: 'binary' })
    , cloudStream = cloudinary.uploader.upload_stream(function() { res.redirect('/'); });

  imageStream.on('data', cloudStream.write).on('end', cloudStream.end);
});

Add your Cloudinary configuration to app.configure:

app.configure('development', function(){
  cloudinary.config({ cloud_name: 'yours', api_key: 'yours', api_secret: 'yours' });
});

app.locals.api_key = cloudinary.config().api_key;
app.locals.cloud_name = cloudinary.config().cloud_name;

This allows you to use different Cloudinary accounts for development and production purposes, depending on your requirements.

The last two lines make the values available to the templates. The secret is not meant to be accessible from outside the server, but the api_key and cloud_name options can be used by client-side scripts.

Change views/index.jade to show an upload form:

extends layout

block content
  h1= title
  p Welcome to #{title}

  form(action="/upload", method="post", enctype="multipart/form-data")
    input(type="file", name="image")
    input(type="submit", value="Upload Image")

  - if (images && images.length)
    - images.forEach(function(image){
      img(src=image.url)
    - })

Before you try out the app, change the / route in app.js to load a set of images from Cloudinary:

app.get('/', function(req, res, next){
  cloudinary.api.resources(function(items){
    res.render('index', { images: items.resources, title: 'Gallery' });
  });
});
The cloudinary.api.resources method fetches all of the images in your account. Note that this is rate limited to 500 requests per hour – I’ve used it here to keep the tutorial simple, but in production you should cache the results or store them in a database.

At this point, image uploads should work if you start up the app with npm start and navigate to http://localhost:3000, but the output won’t look great without thumbnails.

Step 4: Thumbnails

Cloudinary supports a wide range of image transformations. The API is based around generating a URL that includes parameters to change the image in some way. All we need for the gallery is a simple crop, which is supported through the cloudinary.url method. Change the / route to pass a reference to the cloudinary object to the template:

app.get('/', function(req, res, next){
  cloudinary.api.resources(function(items){
    res.render('index', { images: items.resources, title: 'Gallery', cloudinary: cloudinary });
  });
});

Now, update views/index.jade so that it calls cloudinary.url with some options to get the desired effect:

- images.forEach(function(image){
    img(src=cloudinary.url(image.public_id + '.' + image.format, { width: 100, height: 100, crop: 'fill', version: image.version }))
- })

The important part here is this:

cloudinary.url(image.public_id + '.' + image.format, { width: 100, height: 100, crop: 'fill', version: image.version })

The cloudinary.url method generates a URL that includes the width, height, and crop options, following Cloudinary’s standard URL layout – along these lines, with a hypothetical cloud name, version, and image ID:

  http://res.cloudinary.com/demo/image/upload/c_fill,h_100,w_100/v1361234567/abc123.jpg

Because the API is based around URLs, you could easily use this from browser-based JavaScript, utilising Cloudinary to add behaviour that would typically be associated with server-side web development.

I’ve also included the version property, which is recommended by Cloudinary when overriding the public_id. It’s returned by both the upload API and the admin API.

Step 5: Effects

The previous example can easily be adapted to generate lots of interesting effects. This change makes it generate images that feature a vignette lens effect:

- images.forEach(function(image){
    img(src=cloudinary.url(image.public_id + '.' + image.format, { width: 100, height: 100, crop: 'fill', effect: 'vignette', version: image.version }))
- })

The effect parameter can be one of the following effects:

  • grayscale
  • blackwhite
  • vignette
  • sepia
  • brightness
  • saturation
  • contrast
  • hue
  • pixelate
  • blur
  • sharpen

Some effects take an argument, and this is simply prefixed with a colon. For example, brightness:40.

As well as effects, transformations like face detection, rounded corners, and overlays are available. Again, check the image transforms documentation for full details.

The vignette effect applied to several images.

jQuery Uploads

Cloudinary has a CORS API for file uploads which degrades to an iframe in legacy browsers. This means you can do image uploads with no server-side code at all! The cloudinary_js repository has a jQuery plugin, with jQuery UI support, which can be used to upload images.

Download the JavaScript files from the cloudinary/cloudinary_js repository and add them to the public/ folder. Then edit views/layout.jade to load jQuery and the plugin files, in this order (paths assume Express’s default /javascripts directory; file names as shipped in that repository):

  script(src='/javascripts/jquery.min.js')
  script(src='/javascripts/jquery.ui.widget.js')
  script(src='/javascripts/jquery.iframe-transport.js')
  script(src='/javascripts/jquery.fileupload.js')
  script(src='/javascripts/jquery.cloudinary.js')

  block scripts

The block scripts part at the end is some Jade jiggery-pokery to allow HTML to be appended to this template from another template. Open views/index.jade and add this markup:

  h2 jQuery Uploads

  //- Upload form generated by Cloudinary's helper (arguments reconstructed)
  != cloudinary.uploader.image_upload_tag('image')
  .preview

block scripts
  script
    // Configure Cloudinary
    $.cloudinary.config({ api_key: '!{api_key}', cloud_name: '!{cloud_name}' });

    // Upload started
    $('.cloudinary-fileupload').bind('fileuploadstart', function(e){
      $('.preview').html('Upload started...');
    });

    // Upload finished: show the uploaded image as a thumbnail
    $('.cloudinary-fileupload').bind('cloudinarydone', function(e, data){
      $('.preview').html(
        $.cloudinary.image(data.result.public_id, { format: data.result.format, version: data.result.version, crop: 'scale', width: 100, height: 100 })
      );
      return true;
    });

This displays a form with a file input, generated by the cloudinary.uploader.image_upload_tag helper. That keeps the markup lightweight by doing all of the signing and other things Cloudinary’s API needs behind the scenes.

The client-side JavaScript at the end of the template will display a message when an image is being uploaded, and then display it once it’s finished uploading. The other event which I haven’t used here is fileuploadfail, which is, of course, useful for displaying errors when file uploads fail.

If you want to read more about Cloudinary and jQuery, check out these articles: Direct image uploads from the browser to the cloud with jQuery and Upload Images: Remote Uploads.


The completed gallery.

In this tutorial you’ve seen how to integrate both Node and client-side projects with Cloudinary. If you’d like more details on the service, visit Cloudinary.com.

This gallery example could be easily expanded using features from Cloudinary’s API to do a lot of practical and cool stuff:

  • Pagination could be added
  • The effects API could be used for editing photos
  • Face detection could be used to tag people in photos

The full source for my example Express app is available here: https://github.com/alexyoung/dailyjs-cloudinary-gallery.

Node Roundup: 0.8.20, 0.9.10, continuation.js, selenium-node-webdriver

20 Feb 2013 | By Alex Young | Comments | Tags node modules testing functional

Node 0.8.20, 0.9.10

Node 0.8.20 was released last week. The most significant updates in this version are fixes for the HTTP core module, so if you’re on 0.8.19 then I can’t see any reason not to upgrade.

Node 0.9.10 meanwhile has several stream-related updates. The default options for WriteStream have been updated to improve performance, and empty strings and buffers no longer signal EOF.


continuation.js

continuation.js (GitHub: dai-shi / continuation.js, License: BSD, npm: continuation.js) by Daishi Kato automatically adds tail call optimisation to modules loaded with require. It’s written using esprima and escodegen to parse existing code and generate a new version of it. It does this with trampolined functions, a technique also used to implement tail calls in some functional language runtimes.

The author has included benchmarks that show where the module improves performance. There are cases where it won’t be faster due to how trampolining is handled – there are also some interesting posts by Guillaume Lathoud about implementing tail call optimisation without trampolining.
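Trampolining itself is easy to demonstrate: instead of recursing directly (and growing the stack), a function returns a thunk, and a driver loop keeps calling thunks until it gets a value. This sketch shows the general technique, not continuation.js’s generated code:

```javascript
// Keep calling returned functions until a non-function value appears,
// so deep 'recursion' runs in constant stack space.
function trampoline(fn) {
  var result = fn;
  while (typeof result === 'function') {
    result = result();
  }
  return result;
}

function sum(n, acc) {
  // The tail call is rewritten as a thunk instead of a direct recursive call
  return n === 0 ? acc : function() { return sum(n - 1, acc + n); };
}

trampoline(function() { return sum(100000, 0); });
// → 5000050000, without blowing the stack
```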


selenium-node-webdriver

selenium-node-webdriver (GitHub: WaterfallEngineering / selenium-node-webdriver, License: Apache 2, npm: selenium-node-webdriver) by Lon Ingram packages a prebuilt WebDriver client so it’s easier to get started writing tests that use WebDriver. Lon notes that it was designed to work with PhantomJS, but it could be used with any WebDriver server.

jQuery Roundup: Durandal, Version.js, Navi.js

19 Feb 2013 | By Alex Young | Comments | Tags jquery plugins frameworks libraries testing navigation


Durandal

Durandal (GitHub: BlueSpire / Durandal, License: MIT) combines jQuery, Knockout, and RequireJS with some of its own code to create a framework for developing single page applications. Durandal apps are built using AMD-based modules, and it also supports the notion of a widget.

One interesting feature is application-wide messaging – the main app object can handle events, so it can be used as a universal message bus to help keep functionality nicely decoupled.

The project includes Jasmine/PhantomJS tests in the test/ directory, but the documentation itself doesn’t mention tests and the application skeletons don’t include them either. That seems like an oversight to me, given that the project claims to be “single page apps done right”.


Justin Stayton sent in Version.js (jstayton / version.js, License: MIT, bower: version.js), which he developed while testing scripts against multiple versions of jQuery. It works by using attributes to specify the required versions of libraries:

<script src="version.js" data-url="google" data-lib="jquery" data-ver="1.7.2"></script>

This will cause jQuery 1.7.2 to be loaded from Google’s CDN as the default. If another version is required, the versionjs GET parameter can be used. This makes it easy to switch between versions of a dependency, which might be useful in tests or during local development.

Navi.js (GitHub: tgrant54 / Navi.js, License: MIT) by Tyler Grant makes single pages behave like a full site using hash routing. It has breadcrumb support, and can be called multiple times. The jQuery plugin method takes a hash option so you could embed multiple menus on a page, each using a different hash to distinguish between them:

$('#navi').navi({
  hash: '#!/'
, content: $('#naviContent')
});

The project’s homepage has markup samples and demos.

memdiff, numerizerJS, Obfuscate.js

18 Feb 2013 | By Alex Young | Comments | Tags testing debugging memory node modules parsing text


memdiff (GitHub: azer / memdiff, License: WTFPL, npm: memdiff) by Azer Koculu is a BDD-style memory leak tool based on memwatch. It can either be used by writing scripts with describe and it, and then running them with memdiff:

function SimpleClass(){}
var leaks = [];

describe('SimpleClass', function() {
  it('is leaking', function() {
    leaks.push(new SimpleClass);
  });

  it('is not leaking', function() {
    new SimpleClass;
  });
});

Or by loading memdiff with require and passing a callback to memdiff. The memwatch module itself has an event-based API, and includes a platform-independent native module – so both of these projects are tied to Node and won’t work in a browser.


numerizerJS (GitHub: bolgovr / numerizerJS, License: MIT, npm: numerizer) by Roman Bolgov is a library for parsing English language string representations of numbers:

var numerizer = require('numerizer');
numerizer('forty two'); // '42'

It’s currently very simple, and doesn’t support browsers out of the box, but I like the fact the author has included Mocha tests. It’d work well alongside other libraries like Moment.js for providing intuitive text-based interfaces.


Obfuscate.js (GitHub: miohtama / obfuscate.js, License: MIT) by Mikko Ohtamaa is a client-side script for replacing text on pages with nonsense that may be more desirable than private information. Mikko suggests this might be useful for making screenshots, so post-processing isn’t required to blur out personal information. The obfuscate function takes an optional selector, so either the entire body of a document can be obfuscated, or just the contents of a given selector.

It walks through each child node looking for text nodes, so it’s lightweight and doesn’t have any dependencies. It also tries to make the text look similar (at a glance) to the original text.

HHHHold, w2ui, Event Spy

15 Feb 2013 | By Alex Young | Comments | Tags node testing ui events jquery


HHHHold (GitHub: ThisIsJohnBrown / hhhhold-js, License: MIT) is a library for faking user generated content with hhhhold!:

Drop hhhhold URLs into your code for quick access to safe-for-work, attributed images from ffffound. Simulate real user content in your project.

It can be included as a client-side script for automatically generating random images whenever an image element has hhhhold.js/ in the src attribute. This allows various parameters to be passed to hhhhold, like the size of the image, or other options such as image brightness.



w2ui (GitHub: vitmalina / w2ui, License: MIT) by Vitali Malinouski is a UI library that is designed to be used with jQuery. The site has demos which use Bootstrap, but it doesn’t actually depend on Bootstrap as such – the project’s CSS files have been designed to work alongside other CSS libraries. There’s also a w2ui demo page that shows what the various widgets look like without Bootstrap.

So, what’s included? There are some widgets I find myself needing for a lot of projects that don’t come with Bootstrap, like sidebars and the data grid. There are also utility functions for validating values, and for base64 encoding and decoding.

The JavaScript is all namespaced in w2utils and w2ui, and the CSS styles all have a w2ui- prefix, so it should be easy to drop it into a project to see what the widgets look like alongside existing functionality.

Event Spy

Event Spy (Google Code: event-spy, License: New BSD, Chrome Web Store: Event Spy) by Johan Laursen is a Chrome extension that adds event tracking to the developer tools. Only events with a listener will be displayed, and the target will be highlighted on the page.

The Chrome Web Store page for the project has a video that demonstrates the plugin in action, along with screenshots.

Backbone.js Tutorial: Testing with Mocks

14 Feb 2013 | By Alex Young | Comments | Tags backbone.js mvc node backgoog testing


Before starting this tutorial, you’ll need the following:

  • alexyoung / dailyjs-backbone-tutorial at commit 5b0a529
  • The API key from part 2
  • The “Client ID” key from part 2
  • Update app/js/config.js with your keys (if you’ve checked out my source)

To check out the source, run the following commands (or use a suitable Git GUI tool):

git clone git@github.com:alexyoung/dailyjs-backbone-tutorial.git
cd dailyjs-backbone-tutorial
git reset --hard 5b0a529


Last week I wrote about testing a custom Backbone.sync implementation using Sinon’s spies. This worked well in our situation where the transport layer isn’t necessarily pinned down – Sinon includes Fake XMLHttpRequest, but this won’t work with Google’s API as far as I know. This week I want to introduce another testing concept that Sinon provides: mocks.

Mocks are fake methods that allow expectations to be registered. Historically, you’ll find mocks being used in unit tests where I/O occurs. If you’re testing business logic you don’t need to check whether a file was written or a network call was made; it’s often preferable to attach an expectation to make sure the appropriate API would have been called.

In Sinon, creating a mock returns an object that can be decorated with expectations. The API is chainable, so it’s low on boilerplate and high on readability. What you’re aiming to do is state “whenever this method is called, ensure it was called with these parameters”. This can be done through mocks by setting up expectations using matchers.

Matchers are similar to assertions – they can be used to check that arguments are everything from primitive types to instances of a constructor, or even literal values.

Last week we used spies to ensure Google’s API was accessed in the expected way. Mocks could be used for this as well. We don’t really care about the request so much as the fact a particular CRUD operation was requested. The signature for Backbone.gapiRequest is request, method, model, options – the method argument is generally what we’re interested in. Therefore, to set up an expectation that saving an existing task caused update to fire, we can use a mock with sinon.match.object:

var mock = sinon.mock(Backbone);
mock.expects('gapiRequest').once().withArgs(sinon.match.object, 'update');

// Do UI stuff to cause the task to be edited and the form to be submitted

mock.verify();
mock.restore();

Mocks Compared to Spies

The previous example looked a lot like last week’s spies, and using spies for the same thing took less code. So, when should we use mocks, and when should we use spies? Mocks give you fine-grained control over the order and behaviour of method calls. Spies have a different API which focuses on checking how callbacks or methods are used. If you were testing a method that accepts a callback, you could pass in a spy to see how the callback gets used. With a mock, the callback would be from the system under test, and you’d set up expectations on it.

When it comes to UI testing – triggering interface actions to invoke code – I find it’s easier to treat the entire Backbone stack as a whole and use spies to ensure the expected behaviour occurs. Rather than writing a test for each model, view, and collection, it makes more sense to drive the UI and hook into model or sync operations to verify the outcome.

In last week’s tests where lists were being tested, I probably wouldn’t use mocks because mocks should have a closer relationship to a given method under test. The kinds of tests we’re writing involve more than one method, so spies and assertions on the DOM make more sense.

Mock Example

A good place to use mocks is for testing app/js/gapi.js. Let’s say we’re interested in making sure gapiRequest gets called by Backbone.sync. We could use mocks:

test('gapiRequest is called by Backbone.sync', function() {
  var mock = sinon.mock(Backbone);
  mock.expects('gapiRequest').once();

  Backbone.sync('update', model, {});

  mock.verify();
  mock.restore();
});

This calls Backbone.sync to cause gapiRequest to be called once. This test doesn’t verify the behaviour of gapiRequest itself, just the fact it gets called.

One quirk of the custom Backbone.sync API is that Task.prototype.get is called twice: once to fetch the task’s ID, and again to get the list’s ID. We could test this with mocks if it was deemed important:

test('Ensure Task.prototype.get is called twice', function() {
  var mock = sinon.mock(model);
  mock.expects('get').twice();

  Backbone.sync('update', model);

  mock.verify();
  mock.restore();
});
This uses the twice expectation with another mock.

Hopefully you’re starting to understand how mocks and spies differ. There’s another major part of Sinon, though, and that’s the stub API.


Digging further into Backbone.gapiRequest, requests are expected to have an execute method which gets called to send data to Google’s API. Both spies and stubs can be used to test this using the yieldsTo method:

test('gapiRequest causes the execute callback to fire', function() {
  var spy = sinon.spy();
  sinon.stub(Backbone, 'gapiRequest').yieldsTo('execute', spy);
  Backbone.sync('update', model);

  assert.ok(spy.calledOnce);
  Backbone.gapiRequest.restore();
});


This test causes the following chain of events to occur:

  1. Backbone.sync calls Backbone.gapiRequest
  2. Backbone.gapiRequest receives an object with an execute property, which we’ve replaced with a spy
  3. Backbone.gapiRequest calls this execute method, therefore satisfying assert.ok(spy.calledOnce)

Putting these ideas together can be used to make sure the right success or error callbacks are triggered after a request has completed:

test('Errors get called', function() {
  var spy = sinon.spy()
    , options = { error: spy };

  // Stub the internal update method that would usually come from Google
  sinon.stub(gapi.client.tasks.tasks, 'update').returns({
    execute: sinon.stub().yields(options)
  });

  // Invoke a sync with a fake model and the options with the error callback
  Backbone.sync('update', model, options);

  assert.ok(spy.calledOnce);
  gapi.client.tasks.tasks.update.restore();
});


This test makes sure error gets called by using a spy, and it also stubs out gapi.client.tasks.tasks.update with our own object. This object has an execute property which causes the callback inside gapiRequest to run, and ultimately call error.

Clearing Up

I’ve written a test suite for tasks. It’s based on last week’s tests so there isn’t really anything new, apart from the teardown method:

setup(function() {
  // ...

  spyUpdate = sinon.spy(gapi.client.tasks.tasks, 'update');
  spyCreate = sinon.spy(gapi.client.tasks.tasks, 'insert');
  spyDelete = sinon.spy(gapi.client.tasks.tasks, 'delete');

  // ...
});

teardown(function() {
  spyUpdate.restore();
  spyCreate.restore();
  spyDelete.restore();
});
I’ve found this pattern is better than calling reset, because it’s easy to attempt to wrap objects more than once when multiple test files are loaded.

Writing Good Tests with Sinon

Sinon might look like a small library that you can drop into Mocha, Jasmine, or QUnit, but there’s an art to writing good Sinon tests. Sinon’s documentation has some explanation of when exactly spies, mocks, and stubs are useful, but there is a subjective factor at play particularly when it comes to deciding whether a test is best written with a mock or a stub.

A few tips I’ve found useful are:

  • Spies are great for the times when you want to test the entire application, rather than a specific class or method
  • Stubs come in handy when there are methods you don’t want run or want to force execution down a given path
  • Mocks are good for testing specific methods
  • A single mock per test case should be used
  • You should call restore() after using spies and stubs; it’s easy to forget, and forgetting causes “double wrap” errors


Stylistically spies, stubs, and mocks are very different, but they’re vexingly similar until you’ve had some practice with Sinon. There have been mock vs. stub discussions on the Sinon.JS Google Group, so it’s probably best to ask Christian on that group if you’re struggling to get Sinon to do what you want.

The source for this tutorial can be found in alexyoung / dailyjs-backbone-tutorial, commit 0c6de32.

Node Roundup: 0.8.19, 0.9.9, Peer Dependencies, hapi, node-treap

13 Feb 2013 | By Alex Young | Comments | Tags node modules frameworks web
You can send in your Node projects for review through our contact form.

Node 0.8.19, 0.9.9, and Peer Dependencies

Node 0.8.19 was released last week, and this version includes an update for npm that supports peer dependencies. I’m excited about this feature, and I’ll be interested to see how it pans out over time. Basically, you can now specify dependencies for “plugins”: think jQuery plugins, or Grunt plugins in a Node project.

This will require plugin authors to update their package.json files with a peerDependencies property, but it should make managing things like Express middleware and Grunt easier in the future. I already find npm’s dependency management relatively stress-free, and this seems like a step in the right direction.
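As a sketch (the plugin name and version range here are made up), a Grunt plugin’s package.json might declare a peer dependency like this:

```json
{
  "name": "grunt-example-plugin",
  "version": "0.1.0",
  "peerDependencies": {
    "grunt": "0.4.x"
  }
}
```

Installing the plugin then tells npm the host project needs a compatible grunt, rather than nesting a private copy inside the plugin.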

Node 0.9.9 was also released last week, and it features a streams2-powered tls module.



hapi (GitHub: walmartlabs / hapi, License: see LICENSE, npm: hapi) from Walmart Labs is a framework for building RESTful API services. There are already a few solid RESTful API modules for Node (like restify), so hapi looks to be building on that concept rather than being an MVC web framework.

There’s a basic example that provides an overview of the API:

var Hapi = require('hapi');

// Create a server with a host, port, and options
var server = new Hapi.Server('localhost', 8080);

// Define the route
var hello = {
  handler: function(request) {
    request.reply({ greeting: 'hello world' });

// Add the route
  method : 'GET',
  path : '/hello',
  config : hello

// Start the server

Gone is the req, res pattern (which comes from Node’s core modules, not Connect). The hapi API documentation is extremely detailed, and includes examples for the main features. The routing API does seem extremely flexible, but it’s hard to judge it without seeing a large hapi application.

hapi, a Prologue by Eran Hammer is a detailed post that compares hapi to Express, which is useful if you’re familiar with Express. Eran writes:

We also had some bad experience with Express’ lack of true extensibility model. Express was a pleasure and easy to use 2 years ago with a limited set of middleware and very little interdependencies among them. But with a long list of chained middleware, we found hard to debug problems when we simply changed the order in which middleware modules were being loaded.

I’ve always thought the answer to this was to make smaller, interconnected services. Rather than a large Express application with complex middleware, shouldn’t we be using multiple Express applications that communicate with each other? Technically Express could be a veneer on top of a more complex architecture.

Eran brings up other points as well, but it’s difficult to say how well hapi or Express satisfy the goals of building large production web applications because people building things at that level don’t release gory details about their solutions to architectural problems. I occasionally run into a friend who works on a Rails project with thousands of models. It doesn’t work very well. But where are concrete details on real solutions to the problem of scaling business logic?


node-treap (GitHub: brenden / node-treap, License: MIT, npm: treap) by Brenden Kokoszka is a Treap implementation. A treap is a self-balanced binary search tree. Once a tree has been created, keys can be added with data objects:

var treap = require('treap');
var t = treap.create();

// Insert some keys, augmented with data fields
t.insert(5, { foo: 12 });
t.insert(2, { foo: 8 });
t.insert(7, { foo: 1000 });

Then elements can be fetched and removed:

var a = t.find(5);
t.remove(a); // by reference to node
t.remove(7); // by key

Brenden has included tests, and each API method has documentation in the readme. He’s included some notes on what treaps are, so you don’t need to be fresh off the back of a computer science degree to figure out what the module does.
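For the curious, the core treap idea (binary search tree order on keys, plus heap order on random priorities, maintained by rotations) can be sketched in a few lines. This is illustrative only, not node-treap's internals:

```javascript
// Sketch of the core treap idea: each node gets a random priority;
// insertion keeps BST order on keys and max-heap order on priorities,
// restoring the heap property with rotations.
function node(key, data) {
  return { key: key, data: data, priority: Math.random(), left: null, right: null };
}

function rotateRight(n) {
  var l = n.left;
  n.left = l.right;
  l.right = n;
  return l;
}

function rotateLeft(n) {
  var r = n.right;
  n.right = r.left;
  r.left = n;
  return r;
}

function insert(root, key, data) {
  if (!root) return node(key, data);
  if (key < root.key) {
    root.left = insert(root.left, key, data);
    if (root.left.priority > root.priority) root = rotateRight(root);
  } else {
    root.right = insert(root.right, key, data);
    if (root.right.priority > root.priority) root = rotateLeft(root);
  }
  return root;
}

// In-order traversal yields the keys in sorted order
function keysInOrder(root, out) {
  out = out || [];
  if (root) {
    keysInOrder(root.left, out);
    out.push(root.key);
    keysInOrder(root.right, out);
  }
  return out;
}

var root = null;
[5, 2, 7, 1, 9].forEach(function(key) {
  root = insert(root, key, { foo: key * 2 });
});

console.log(keysInOrder(root)); // [1, 2, 5, 7, 9]
```

The random priorities are what keep the tree balanced in expectation, without the bookkeeping of red-black or AVL trees.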

jQuery Roundup: Formwin, Three Sixty Slider, slideToucher

12 Feb 2013 | By Alex Young | Comments | Tags jquery plugins animations forms ui
Note: You can send your plugins and articles in for review through our contact form.


Formwin (GitHub: rocco / formwin, License: MIT) by Rocco Georgi started as a fork of Uniform, but is now very different. It drops legacy browser support (IE8 and above are supported) and relies on CSS for things like rounded corners.

The required markup is documented in the project’s readme. In general it relies on labels and spans:

<label class="formwintexts">
  <span>Label Text</span>
  <input type="text" name="yourinput">
</label>

It must be initialised to be used on a page, either with $.formwin.init(); or by setting $.formwinSettings. The init method accepts several options for configuring which CSS classes get used for things like active elements and hovering. This is similar to Uniform, and makes it extremely easy to drop into an existing project.

Three Sixty Slider


Three Sixty Slider (GitHub: creativeaura / threesixty-slider, License: MIT/GPL) by Gaurav Jassal allows multiple images to be displayed to give the illusion of multiple viewing angles. It features smooth animations, mouse and touchscreen support, and has a lot of tweakable options.

Basic usage is like this:

var product = $('.product').ThreeSixty({
  totalFrames: 72, // Total number of images for the 360 slider
  endFrame: 72, // End frame for the auto spin animation
  currentFrame: 1, // This is the start frame for auto spin
  imgList: '.threesixty_images', // Selector for the image list
  progress: '.spinner', // Selector to show the loading progress
  imagePath: '/assets/product1/', // Path of the image assets
  filePrefix: 'ipod-', // File prefix, if any
  ext: '.jpg', // Extension for the assets
  height: 265,
  width: 400,
  navigation: true
});

It’ll figure out the image names based on the settings, so the markup doesn’t need to include lots of img tags. There’s a live demo on creativeaura.github.com/threesixty-slider/.


slideToucher (GitHub: Yuripetusko / slideToucher, License: MIT) by Yuri Petusko is a swipe gesture plugin that is designed to be high performance. It supports horizontal and vertical swipes, and uses translate3d to produce smooth animations where available.

It expects markup with the slide and row classes, and is invoked with $(selector).slideToucher({ vertical: true, horizontal: true });. The author has posted a demo here: yuripetusko.github.com/slideToucher/.

Numeric JavaScript, howler.js, depot.js

11 Feb 2013 | By Alex Young | Comments | Tags localStorage html5 mathematics audio

Numeric JavaScript

Numeric JavaScript (GitHub: sloisel / numeric, License: MIT) by Sébastien Loisel is a library that provides tools for matrix and vector calculations, convex optimisation, and linear programming. This library was sent in by Emil Bay, who uses it for computationally intensive tasks like genetic programming and AI. Emil says it’s extremely fast, and the Numeric author has some detailed benchmarks of Numeric with comparisons against Closure and Sylvester.



howler.js (GitHub: goldfire / howler.js, License: MIT) by James Simpson and GoldFire Studios is an audio library that works with Web Audio and HTML5 Audio. Like similar libraries, it can automatically load the right file format for a given browser, but also comes with a bevy of other features as well. It has an event-based API, and methods like fadeIn for handling some of the basic tasks you’ll face when working with audio.

It implements a cache pool and automatically fetches the audio files, which explains why it seemed so fast when I played around with the examples. It’s implemented without any dependencies, and I noticed the source was consistently formatted and easy to follow.


depot.js (GitHub: mkuklis / depot.js, License: MIT, bower: depot) by Michal Kuklis is a localStorage wrapper that can be used with CommonJS or AMD, but also works with plain-old script tags. To use it, define a store and then call methods on the store’s instance:

var todoStore = depot('todos');

todoStore.save({ title: 'todo1' });
todoStore.updateAll({ completed: false });

// Fetch all:
todoStore.all();
It comes with Mocha tests which can be run with PhantomJS.
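The pattern behind this kind of wrapper (namespaced keys plus JSON serialisation) is easy to sketch. This is illustrative, not depot.js's actual implementation; the storage backend is injected so the sketch also runs outside a browser:

```javascript
// Sketch of a depot-style store (not depot.js's real code). Records are
// JSON-serialised under namespaced keys, with a separate key tracking IDs.
// Pass window.localStorage in a browser, or a plain object elsewhere.
function createStore(ns, storage) {
  return {
    save: function(record) {
      var ids = this.ids();
      if (ids.indexOf(record.id) === -1) ids.push(record.id);
      storage[ns + '-ids'] = JSON.stringify(ids);
      storage[ns + '-' + record.id] = JSON.stringify(record);
      return record;
    },
    get: function(id) {
      var raw = storage[ns + '-' + id];
      return raw ? JSON.parse(raw) : null;
    },
    ids: function() {
      return JSON.parse(storage[ns + '-ids'] || '[]');
    },
    all: function() {
      return this.ids().map(this.get, this);
    }
  };
}

var todoStore = createStore('todos', {});
todoStore.save({ id: 1, title: 'todo1' });
todoStore.save({ id: 2, title: 'todo2' });
console.log(todoStore.all().length); // 2
```

Injecting the backend also makes the store trivial to unit test, since localStorage itself never has to be touched.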

voxel.js, holla, Blitz, OneJS 2.0

08 Feb 2013 | By Alex Young | Comments | Tags graphics games webrtc WebSocket frameworks modules



When I was at BathCamp this week, Andrew Nesbitt mentioned voxel.js – a collection of projects for building browser-based 3D games. The core components were written by Max Ogden and James Halliday, take a look at voxel-engine (GitHub: maxogden / voxel-engine, License: BSD, npm: voxel-engine) if you want to see some code examples.

There are lots of demos on the voxel.js site, at the moment most of them support simple world traversal and the removal of blocks just like Minecraft. The project also has add-ons which includes voxel-creature for adding NPCs and player-physics. A huge amount of effort has already gone into the project, and it was apparently inspired by the awesome 0 FPS blog posts about voxels.


holla (GitHub: wearefractal / holla, License: MIT, npm: holla) from Fractal is a module for WebRTC signalling. The author calls it “WebRTC sugar” – compared to the underlying API the library’s use of methods like .pipe make it a lot easier to get the hang of.

It has some helpers for creating audio and video streams, and there’s a demo up at holla.jit.su that accesses your webcam and microphone.


Blitz (GitHub: Blitz, License: Modified MIT) by Eli Snow can help safely extend objects, overload functions based on types and arguments, and provides some native type recognition across global contexts:

Unlike other frameworks that have one generic wrapper for every object, Blitz creates unique wrappers for every prototype. So, for example, instead of having one method replace that works only with Arrays we can have a replace method for Arrays another for HTMLElements and/or any other object type.

Some of the functionality is accessible through a chainable API, so you can do things like this:

// [35, 16]
blitz([35, 16, 21, 9]).length(2).value;

Function overloading works using blitz.overload, which accepts an object that lists types alongside target functions.
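The general idea of type-based overloading can be sketched with a toy dispatcher. This is not Blitz's actual API, just an illustration of selecting an implementation by argument type:

```javascript
// Toy type-based dispatch (the general idea, not Blitz's API): pick an
// implementation based on the internal [[Class]] of the first argument.
function typeOf(value) {
  return Object.prototype.toString.call(value).slice(8, -1); // e.g. 'Array'
}

function overload(table) {
  return function(value) {
    var fn = table[typeOf(value)] || table['default'];
    if (!fn) throw new TypeError('No overload for ' + typeOf(value));
    return fn.apply(this, arguments);
  };
}

var describe = overload({
  Array: function(a) { return 'array of length ' + a.length; },
  String: function(s) { return 'string "' + s + '"'; },
  Number: function(n) { return 'number ' + n; }
});

console.log(describe([35, 16])); // 'array of length 2'
console.log(describe('hi'));     // 'string "hi"'
```

Using Object.prototype.toString rather than typeof makes the dispatch reliable for arrays and host objects, which is presumably why type recognition across global contexts matters to Blitz.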

OneJS 2.0

Azer Koculu has updated OneJS to version 2.0. OneJS converts CommonJS modules to standalone, browser-compatible files. It now supports splitting bundles into multiple files, and loading them asynchronously. It also has a more flexible build system: you can use it from the command-line, package.json, or from within a Node script.

Backbone.js Tutorial: Spies, Stubs, and Mocks

07 Feb 2013 | By Alex Young | Comments | Tags backbone.js mvc node backgoog testing


Before starting this tutorial, you’ll need the following:

  • alexyoung / dailyjs-backbone-tutorial at commit 9691fc1
  • The API key from part 2
  • The “Client ID” key from part 2
  • Update app/js/config.js with your keys (if you’ve checked out my source)

To check out the source, run the following commands (or use a suitable Git GUI tool):

git clone git@github.com:alexyoung/dailyjs-backbone-tutorial.git
cd dailyjs-backbone-tutorial
git reset --hard 9691fc1

Testing Custom Backbone.Sync APIs

The goal of this tutorial is to demonstrate how to write tests so they don’t need to use live APIs. The way this is usually done is through a technique known as mocking – when the application attempts to talk to the API, it will communicate with a special object that we can control.

When testing applications that revolve around a custom Backbone.sync implementation, you need to cleanly separate application testing from testing the remote API. We just want to test what we’re responsible for, and run tests without an Internet connection! If you look at the way app/js/gapi.js works, it relies on gapi.client which is provided by Google. This is an easy target for mocking – we can replace Google’s library with something that returns sample data instead.

Sinon.JS does all of this, and more. Sinon makes it easy to “spy” on the methods that would usually result from trying to connect to the remote server, and these spies can easily be plugged into suitable assertions.

The basic approach I employ for testing Backbone applications with Sinon is as follows:

  • Create spies for CRUD operations
  • Script the DOM to trigger the things I want to test
  • Ensure the spies have seen the expected calls
  • Check that the UI has been updated accordingly

Here’s an example in our application:

  • The user clicks ‘Add List’ and enters a list title
  • The form is submitted
  • Assert that gapi.client.tasks.tasklists has been called to insert the new list
  • Assert that a new list item has been added to the UI

In Mocha/Sinon, that could be expressed like this:

suite('Lists', function() {
  var spyUpdate = sinon.spy(gapi.client.tasks.tasklists, 'update')
    , spyCreate = sinon.spy(gapi.client.tasks.tasklists, 'insert')
    , spyDelete = sinon.spy(gapi.client.tasks.tasklists, 'delete');

  setup(function() {
    // ...
  });

  test('Creating a list', function() {
    // TODO: Do the DOM stuff for creating a list
    assert.equal(1, spyCreate.callCount);
  });

  test('Editing a list', function() {
    // TODO: Do the DOM stuff for editing a list
    assert.equal(1, spyUpdate.callCount);
  });

  // Example: Abstraction for testing
  test('Deleting a list', function() {
    // TODO: Do the DOM stuff for deleting a list
    assert.equal(1, spyDelete.callCount);
  });
});
Save this as test/lists.test.js and add lists.test.js as a script tag to test/index.html (after app.test.js).

Sinon spies are created with sinon.spy(). In this example I’ve provided two arguments to sinon.spy, an object and a method. This causes Sinon to replace the method with a wrapped version that can count the number of times it gets called. It behaves like the original method, but makes testing things easier.

Spies can be invoked in other ways, but in this tutorial I’m just going to focus on this particular pattern. In general this is the part of Sinon that I find myself using the most when it comes to testing Backbone applications.
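Conceptually, the wrapping that sinon.spy performs looks something like this. It's an illustration of the idea, not Sinon's actual implementation:

```javascript
// Illustrative only -- not Sinon's real code. spyOn(obj, name) replaces
// obj[name] with a wrapper that behaves like the original but records
// how many times it was called, and with what arguments.
function spyOn(obj, name) {
  var original = obj[name];

  function wrapped() {
    wrapped.callCount++;
    wrapped.args.push(Array.prototype.slice.call(arguments));
    return original.apply(this, arguments);
  }

  wrapped.callCount = 0;
  wrapped.args = [];
  wrapped.restore = function() { obj[name] = original; };

  obj[name] = wrapped;
  return wrapped;
}

// Usage: the wrapped method still behaves exactly as before
var api = {
  insert: function(list) { return 'created ' + list.title; }
};

var spy = spyOn(api, 'insert');
api.insert({ title: 'Example list' });

console.log(spy.callCount); // 1
spy.restore();
```

Because the wrapper delegates to the original method, the application under test can't tell the difference, which is what makes this style of spying so unobtrusive.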


In my quest to keep these tutorials clear and simple, I noticed I made some changes to the app which caused the tests to break. That means you’ll need to do a bit of housekeeping to get the tests to work correctly.

First, let’s get the application views loaded inside a container instead of overwriting the entire body. Mocha needs a div to display test results, and the last version of the app caused it to be overwritten when the tests ran.

Add a new div to app/index.html and test/index.html just after the body opening tag:

<div id="todo-app"></div>

You can hide this div in the tests if you like, setting display: none won’t break anything. This change also requires that app/js/views/app.js is updated to use #todo-app instead of body for the el property (near the top of the file).

You’ll also need to add some new script tags to test/index.html:

<script src="lib/sinon.js"></script>
<script src="fixtures/gapi.js"></script>

There’s one thing left to do in test/index.html – change the way the Mocha tests are invoked at the bottom of the file:

  // Only run the tests when the application is ready
  require(['app'], function() {
    bTask.apiManager.on('ready', function() { mocha.run(); });
  });

This is a good way to make sure the tests run only when the application’s dependencies have all loaded and have been evaluated.

Download Sinon.JS

This is the version of Sinon.JS that I’ve used: sinon-1.5.2.js. Save it to test/lib and create the lib/ directory if it doesn’t already exist.

Mocking Google’s API

To mock Google’s API, I’ve simply overwritten the library it provides with my own object that runs the expected callbacks with suitable test data. I created this test data by using the app and looking at the Network tab in WebKit Inspector, so it’s based on a subset of my actual to-do lists and tasks.

You can use my test data if you want to follow this tutorial instead of checking out the full source from GitHub: alexyoung/4730178 (gapi.js).

Save it to test/fixtures/gapi.js, creating the directory if required.

Now when app/js/gapi.js calls gapi.client.load and logs in with OAuth2, it’ll actually receive fake user data. I’ve used similar data structures to real values, but I’ve removed a few things like etags to make it easier to read.
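The fixture pattern itself is simple. Here's a stripped-down sketch (the shapes and sample values are illustrative; the real fixture mirrors Google's API much more closely): a fake object exposes the same surface as the remote client, but yields canned data synchronously.

```javascript
// Simplified sketch of a test fixture for a remote API client. The fake
// exposes the same call chain the app expects, but invokes callbacks with
// canned data instead of making network requests.
var fakeGapi = {
  client: {
    load: function(name, version, callback) {
      callback(); // pretend the API library loaded instantly
    },
    tasks: {
      tasklists: {
        list: function() {
          return {
            execute: function(callback) {
              callback({ items: [{ id: 'list-1', title: 'Example list' }] });
            }
          };
        }
      }
    }
  }
};

// Application code written against the real client runs unchanged:
fakeGapi.client.tasks.tasklists.list().execute(function(result) {
  console.log(result.items[0].title); // 'Example list'
});
```

As long as the fixture is loaded before the application code, nothing in the app needs to know it's talking to canned data.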

Testing Lists

Now test/lists.test.js can be finished off. Here’s one way to test that lists are created correctly:

test('Creating a list', function() {
  var $el = bTask.views.app.$el
    , listName = 'Example list';

  // Show the add list form

  // Fill out a value for the new list's title
  $el.find('#list_title').val(listName);

  // Submit the form

  // Make sure the spy has seen a call for a list being created
  assert.equal(1, spyCreate.callCount);

  // Ensure the expected UI element has been added
  assert.equal(listName, $('.list-menu-item:last').text().trim());
});

This test uses jQuery to click the button that opens the list add/edit form, then fills out a title, and subsequently submits the form. That will cause Backbone.sync to run, and call an insert operation from Google’s API. Because I’ve replaced gapi it will call the fixture instead, and since Sinon is spying on it we’ll get an incremented call count. Boom!

Editing a list is practically the same:

test('Editing a list', function() {
  var $el = bTask.views.app.$el;

  // Show the edit list form

  $el.find('#list_title').val('Edited list');

  assert.equal(1, spyUpdate.callCount);
  assert.equal('Edited list', $('.list-menu-item:first').text().trim());
});

This time an update API call will fire rather than an insert.

There is one curious wrinkle in these tests – handling delete. I’ve used a confirm dialog instead of a fancy modern modal widget, which means the tests will cause a confirm dialog to appear. It’s possible to stop the dialog from appearing by replacing confirm:

test('Deleting a list', function() {
  var $el = bTask.views.app.$el;

  // Automatically accept the confirmation
  window.confirm = function() { return true; };

  // Show the edit list form

  // Click the list delete button

  assert.equal(1, spyDelete.callCount);
});

The problem with this is it’ll cause Mocha to see a “global leak”. You can add confirm to test/setup.js to prevent that error. However, ideally we shouldn’t need to do this – coupling tests to implementation details like this is usually a bad idea. It would be better to design the application to allow the dialog to be turned off to better support testing. I don’t usually mind adapting applications to make them easier to test, it can be beneficial in the long run.
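For example (the names here are hypothetical, not from the tutorial app), confirmations could be routed through a small helper that tests can switch off, instead of stubbing the global confirm:

```javascript
// A sketch of a confirmation helper designed for testability. The real
// confirm implementation is injected, and tests can disable dialogs
// entirely -- no global leak, no real dialog popping up mid-test.
function createConfirmer(confirmImpl) {
  var enabled = true;
  return {
    disable: function() { enabled = false; },
    ask: function(message) {
      // When dialogs are disabled (e.g. under test), auto-accept
      return enabled ? confirmImpl(message) : true;
    }
  };
}

// In the browser: var dialogs = createConfirmer(window.confirm.bind(window));
var dialogs = createConfirmer(function(msg) { return false; });
dialogs.disable();
console.log(dialogs.ask('Delete this list?')); // true
```

The application then calls dialogs.ask() wherever it previously called confirm(), and the test suite disables dialogs once during setup.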

To run these tests, launch the Node app with npm start and then visit http://localhost:8080/test/ in your browser. Make sure you type the URL correctly or the tests won’t work due to the use of relative paths.


In this part you’ve seen how to write tests for an application based around a custom Backbone.sync implementation. Sinon spies provide the perfect way to wrap internal parts of a Backbone application to write succinct test suites.

This particular type of testing focuses on the business logic represented by Backbone models, collections, and views. This ultimately helps in maintaining your client-side applications as they grow and change over time.

The majority of the source for this tutorial can be found in alexyoung / dailyjs-backbone-tutorial, commit 5b0a529. The final commit was 45dd59.

Node Roundup: GNOME, fs, procjs

06 Feb 2013 | By Alex Young | Comments | Tags node modules gnome desktop bindings browser
You can send in your Node projects for review through our contact form.

JavaScript and GNOME

GNOME now recommends JavaScript for authoring GNOME applications. For information on what this means for the near future of GNOME desktop development, see JavaScript in GNOME. Although it looks like they’re using SpiderMonkey rather than Node, Jérémy Lal sent in an email detailing his positive experiences with node-gir (GitHub: creationix / node-gir, npm: gir) by Tim Caswell which provides bindings for GObject Introspection.

These bindings can be used to make dynamic calls to any library that has GI annotations installed – Jérémy said he was using it to generate PDFs from HTML.

Component: fs

fs (GitHub: matthewp / fs, License: MIT, component: matthewp/fs) by Matthew Phillips is a component that brings Node’s fs module to the browser. It’s designed to be cross-browser, with the FileSystem API for Chrome and IndexedDB for Firefox and Internet Explorer.
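The kind of feature detection a cross-browser shim like this performs can be sketched as follows (the property names checked here are illustrative; the component's actual detection logic may differ):

```javascript
// Pick a storage backend based on which browser APIs are present.
function pickBackend(global) {
  if (global.requestFileSystem || global.webkitRequestFileSystem) {
    return 'filesystem-api'; // Chrome
  }
  if (global.indexedDB) {
    return 'indexeddb'; // Firefox, Internet Explorer 10+
  }
  return 'unsupported';
}

console.log(pickBackend({ webkitRequestFileSystem: function() {} })); // 'filesystem-api'
console.log(pickBackend({ indexedDB: {} }));                          // 'indexeddb'
```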


procjs (GitHub: vzaccaria / procjs, License: MIT, npm: procjs) by Vittorio Zaccaria is a set of command-line utilities for getting JSON representations from the output of ps. It also comes with a REST server that provides a JSON API for the same data.

The project is built with LiveScript, and can be invoked with jsps along with several arguments.

jQuery Roundup: 1.9.1, jui_datagrid, jQuery Waiting, jquery.defer

05 Feb 2013 | By Alex Young | Comments | Tags jquery plugins animations database backbone.js
Note: You can send your plugins and articles in for review through our contact form.

jQuery 1.9.1

jQuery 1.9.1 has been released:

Whether you’re using 1.9.0 or using an older version, these are the droids you’re looking for.

There are bug fixes for Chrome, IE, and Safari, and a few small enhancements like #13150: Be able to determine if $.Callback() has functions.



jui_datagrid (GitHub: pontikis / jui_datagrid, License: MIT) by Christos Pontikis is one of those "rich table" plugins that makes tabular data sortable, editable, and so on. It has a specific focus on editing server-side data, and will work with JSON data out of the box. It supports multiple instances on the same page, jQuery UI themes, localisation, and a modular design that makes adding new data filters easier.

There is a demo of jui_datagrid that shows the major features.

jQuery Waiting

jQuery Waiting (GitHub: trentrichardson / jQuery-Waiting, License: MIT/GPL) by Trent Richardson is a plugin for displaying spinners that’s designed to be cross-browser. Instead of relying on modern CSS animations, it simply switches CSS classes on sets of elements. It has a namespaced event-based API, so you can see when the control is enabled, starts playing, and so on:

// Initialise

// Play

// Event example
$el.bind('play.waiting', function(e){});


jquery.defer/jquery.undefer (GitHub: wheresrhys / jquery.defer, License: MIT) by Rhys Evans are a pair of utility methods for making an object’s methods wait until a deferred object has resolved. The example Rhys provides of this in action is lazy loading Google Maps:

$.defer(GoogleMaps.prototype, _mapsLoaded, {exclude: 'init'});

Rhys also sent in Backbone Namespaced Events (GitHub: wheresrhys / backbone.namespaced-events, License: MIT), which uses the syntax of namespaced events for Backbone’s custom events implementation. To use namespaced events, call Backbone.extend(obj, Backbone.NamespacedEvents) on a Backbone object instance. Alternatively, Backbone.NamespacedEvents.overwriteNativeEvents() can be called to use it everywhere.
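Namespaced events, familiar from jQuery, let you unbind a whole group of handlers at once (e.g. everything under `.sync`). A minimal sketch of the bookkeeping involved, independent of Backbone Namespaced Events' actual implementation:

```javascript
// Tiny event emitter supporting jQuery-style 'name.namespace' events.
function Events() { this.handlers = []; }

Events.prototype.on = function(event, fn) {
  var parts = event.split('.');
  this.handlers.push({ name: parts[0], ns: parts[1] || '', fn: fn });
};

// off('save.sync') removes that one binding; off('.sync') removes the namespace.
Events.prototype.off = function(event) {
  var parts = event.split('.');
  var name = parts[0], ns = parts[1] || '';
  this.handlers = this.handlers.filter(function(h) {
    return !((!name || h.name === name) && (!ns || h.ns === ns));
  });
};

Events.prototype.trigger = function(name) {
  this.handlers.forEach(function(h) {
    if (h.name === name) h.fn();
  });
};
```

With this, calling `off('.sync')` detaches every handler bound as `save.sync`, `error.sync`, and so on, without touching handlers in other namespaces.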

Meet the New Stack, Same as the Old Stack

04 Feb 2013 | By Alex Young | Comments | Tags components twitter google jquery

Five years ago, if you asked any client-side developer which library or framework to use, the most likely answer would have been jQuery. Since then, client-side development has become far more complex. A friendly wrapper for DOM programming and a browser compatibility layer isn't enough to help us write modern applications.

Starting a project today involves selecting a package manager, module system, build system, templating language, data binding library, a sync layer, a widget library, and a test framework. And it doesn’t end there: many of us also regularly use a dynamic CSS language, and mobile SDKs.

I wouldn’t say this is a negative trend, but with large companies backing or offering solutions to these problems, making an informed decision on each of these technologies is difficult. And soon the major players will offer a complete set of tools and libraries to satisfy each requirement: we’ll be back to something that looks like the original monolithic frameworks.

Until recently, starting a client-side web application might have looked like this:

  • Module system: RequireJS, AMD
  • Build system: RequireJS (r.js)
  • Templates: text.js
  • Data binding: Backbone.js
  • Sync: Backbone.js
  • Widgets: Bootstrap
  • Test framework: QUnit

There are other popular choices for each of these bullets, of course: Zurb Foundation is a popular front-end framework, and I’ve used Mocha instead of QUnit since Mocha appeared. I also like Knockout for data binding, because the two-way declarative bindings are easy to get the hang of.

These libraries are not interchangeable once a project has been started – Bootstrap uses different CSS classes to jQuery UI, for example. The major difficulty is keeping libraries up to date, particularly if they have a lot of dependencies.

And that’s when you need a package manager. Using a package manager can make the choices even more fine grained, because managing each library and its dependencies is easier. Switching to something like Component is one option, which can lead to a totally new stack:

  • Package manager: Component
  • Module system: CommonJS
  • Build system: Component
  • Templates: Take your pick
  • Data binding: Reactive or Rivets (you could easily use Knockout or Backbone though)
  • Sync: component/model can communicate with JSON APIs
  • Widgets: Componentised UI widgets are popular
  • Test framework: test/assert, Mocha

Bootstrap and Zurb Foundation can be provided as components; there are projects on GitHub that do this. I've tried to design projects 100% around components without these larger libraries, and it was a huge amount of work. It may get easier with time, or once the right balance of functionality is found. I've noticed there are some "meta packages" that exist to group commonly used dependencies together.

You’ll notice I haven’t mentioned AngularJS yet. The reason for that is AngularJS is now compatible with Closure Library, which makes it possible to use an almost 100% Google-powered development stack:

  • Package manager: None (to my knowledge)
  • Module system: Closure Library modules
  • Build system: ClosureBuilder
  • Templates: AngularJS
  • Data binding: AngularJS
  • Sync: AngularJS services
  • Widgets: Closure Library
  • Test framework: Closure Library testing

While Closure Library is more like the "last generation" monolithic frameworks, each module can be loaded separately, so you don't need to use the whole thing. You could make a project with Closure Library, ClosureBuilder, Backbone.js, and Bootstrap if you wanted. You could also go the other way: deploy a Go/Python/Java app to App Engine that's built on Closure Library and AngularJS. Google effectively provides the entire stack, including server-side development, data storage, user authentication, billing, and client-side development.

Recently we’ve also seen a huge amount of open source projects coming out of Twitter. A pure Twitter stack looks like this:

  • Package manager: Bower
  • Module system: Flight/AMD
  • Build system:
  • Templates: Hogan.js
  • Data binding: Flight
  • Sync:
  • Widgets: Bootstrap
  • Test framework:

Using components through Flight could satisfy the other dependencies as well. It wouldn’t be difficult to use Flight as a test runner with a suitable assertion library, although the authors use Jasmine and PhantomJS at the moment.

Let’s not forget, however, the incredible features provided by Dojo. Compare Google and Twitter’s client-side stacks to this:

  • Package manager: Dojo Packages, CPM
  • Module system: AMD
  • Build system: dojoBuild
  • Templates: Dijit templates
  • Data binding: dojo.store.Observable
  • Sync: Dojo Object Store
  • Widgets: Dijit
  • Test framework: D.O.H.

Dojo has the entire client-side stack covered, and also includes many more features that I haven't mentioned here; YUI is comparable.

On the surface, it seems like Twitter’s developers aim to create something more like Component, where each piece of the client-side puzzle can be picked à la carte. Closure Library is more like the older, monolithic model used by YUI, where a custom module system is used, and modules are harder to reuse without the base framework.

The question is, can these larger companies support modern, "componentised" client-side development, or is it easier to offer a consolidated monolithic framework? Projects like AngularJS and Flight suggest to me that developers within Twitter and Google want to promote the componentised approach, though how that fits into the wider organisation remains to be seen. Will we see AngularJS alongside Closure Library in the Python, Java, and Go App Engine documentation, or will it remain a useful library that exists outside of Google's core developer offering?

An interesting third option is Yeoman. This is another project from Google that provides a selection of smaller libraries to kick-start development:

  • Package manager: yeoman
  • Module system: RequireJS, AMD
  • Build system: Grunt
  • Widgets: Bootstrap

Yeoman generates application templates by helping you select various options, and then gives you a build system so you can easily generate something deployable. It doesn't enforce decisions like which template language to use, or whether to write CoffeeScript, but it provides a harness for installing packages and for building and testing your application.


Client-side development is changing. It's no longer enough to learn a small amount of jQuery – an entire client-side stack is becoming the norm. The libraries that plugged the gaps in jQuery and its clones now have serious competition from the tech giants.