Backbone.js Tutorial: Authenticating with OAuth2

13 Dec 2012 | By Alex Young | Comments | Tags backbone.js mvc node backgoog

In Part 2: Google’s APIs, I laid the groundwork for talking to Google’s JavaScript APIs. Now you’re in a position to start talking to the Tasks API, but first a user account is required.


Before starting this tutorial, you’ll need the following:

  • alexyoung / dailyjs-backbone-tutorial at commit 9d09a66b1f
  • The API key from part 2
  • The “Client ID” key from part 2
  • Update app/js/config.js with your keys (if you’ve checked out my source)

To check out the source, run the following commands (or use a suitable Git GUI tool):

git clone
cd dailyjs-backbone-tutorial
git reset --hard 9d09a66b1f

Google’s OAuth 2.0 Client-side API

Open app/js/gapi.js and take a look at lines 11 to 25. There’s a method, provided by Google, called gapi.auth.authorize. This uses the “Client ID” and some scopes to attempt to authenticate. I’ve already set the scopes in app/js/config.js:

config.scopes = '';

This tells the authentication system that our application would like to access the user’s profile and Gmail tasks. Everything is almost ready to work, but two things are missing: an implementation for handleAuthResult and an interface.


RequireJS can load templates by using the text plugin. Download text.js from GitHub and save it to app/js/lib/text.js.

This is my preferred technique for handling templates. Although this application could easily fit into a monolithic index.html file, breaking up projects into smaller templates is more manageable in the long run, so it’s a good idea to get used to doing this.

Now open app/js/main.js and add the text plugin to the paths property of the RequireJS configuration:

paths: {
  text: 'lib/text'
}

Finally, add this to app/js/config.js:

_.templateSettings = {
  interpolate: /\{\{(.+?)\}\}/g
};

This tells Underscore’s templating system to use double curly braces for inserting values, otherwise known as interpolation.
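To see what that regex does in isolation, here’s a tiny hand-rolled substitute (a sketch only – Underscore’s `_.template` does much more):

```javascript
// Minimal sketch: replace each {{key}} with the matching property,
// using the same interpolation regex configured above.
function renderTemplate(template, data) {
  return template.replace(/\{\{(.+?)\}\}/g, function(match, key) {
    return data[key.trim()];
  });
}

console.log(renderTemplate('<p>Hello, {{name}}!</p>', { name: 'world' }));
// <p>Hello, world!</p>
```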

The app needs some directories to store template-related things:

  • app/js/views – This is for Backbone.js views
  • app/js/templates – Plain HTML templates, to be loaded by the views
  • app/css

The app/index.html file needs to load the CSS, so add a link tag to it:

<link rel="stylesheet" href="css/app.css">

And create a suitable CSS file in app/css/app.css:

#sign-in-container, #signed-in-container { display: none }

The application will start up by hiding both the sign-in button and the main content. The OAuth API will be queried for existing user credentials – if the user has already logged in recently, their details will be stored in a cookie, so the views need to be configured appropriately.


The templates aren’t particularly remarkable at this stage, just dump this into app/js/templates/app.html:

<div class="row-fluid">
  <div class="span2 main-left-col" id="lists-panel">
    <div class="left-nav"></div>
  </div>
  <div class="main-right-col">
    <small class="pull-right" id="profile-container"></small>
    <div id="sign-in-container"></div>
    <div id="signed-in-container">
      <p>You're signed in!</p>
    </div>
  </div>
</div>

This template includes some elements that we won’t be using yet; ignore them for now and focus on sign-in-container and signed-in-container.

Next, paste the following into app/js/templates/auth.html:

<a href="#" id="authorize-button" class="btn btn-primary">Sign In with Google</a>

The auth.html template will be inserted into sign-in-container. It’s very simple at the moment, I only really included it for an excuse to create extra Backbone.js views so you can see how it’s done.

Backbone Views

These templates need corresponding Backbone.js views to manage them. This part demonstrates how to load templates with RequireJS and render them. Create a file called app/js/views/app.js:


define(['text!templates/app.html'],

function(template) {
  var AppView = Backbone.View.extend({
    id: 'main',
    tagName: 'div',
    className: 'container-fluid',
    el: 'body',
    template: _.template(template),

    events: {
    },

    initialize: function() {
    },

    render: function() {
      this.$el.html(this.template());
      return this;
    }
  });

  return AppView;
});

The AppView class doesn’t have any events yet, but it does bind to an element, body, and load a template: define(['text!templates/app.html']). The text! directive is provided by the RequireJS “text” plugin we added earlier. The template itself is just a string that contains the corresponding HTML. It’s rendered by binding it to the Backbone.View, then calling jQuery’s html() method, which replaces the HTML within an element: this.$el.html(this.template());.

The AuthView is a bit different. Create a file called app/js/views/auth.js:

define(['text!templates/auth.html'], function(template) {
  var AuthView = Backbone.View.extend({
    el: '#sign-in-container',
    template: _.template(template),

    events: {
      'click #authorize-button': 'auth'
    },

    initialize: function(app) {
      this.app = app;
    },

    render: function() {
      this.$el.html(this.template());
      return this;
    },

    auth: function() {
      this.app.apiManager.checkAuth();
      return false;
    }
  });

  return AuthView;
});

The app object is passed to initialize when AuthView is instantiated (with new AuthView(this) later on). The reason I’ve done this is to allow the view to call the required authentication code from ApiManager. This could also be handled with events, or many other ways – I just wanted to show that we can initialise views with values just like any other class.
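The pattern boils down to plain constructor injection, and it works the same way outside Backbone.js. A standalone sketch (names hypothetical):

```javascript
// Hypothetical sketch of constructor injection: the child object keeps a
// reference to its parent so it can call back into it later.
function Auth(app) {
  this.app = app;
}

Auth.prototype.signIn = function() {
  return this.app.checkAuth();
};

var app = {
  checkAuth: function() { return 'auth requested'; }
};

console.log(new Auth(app).signIn()); // auth requested
```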

App Core

The views need to be instantiated and rendered. Open app/js/app.js and change it to load the views using RequireJS:

define([
  'gapi'
, 'views/app'
, 'views/auth'
],

function(ApiManager, AppView, AuthView) {
  var App = function() {
    this.views = {};
    this.views.app = new AppView();
    this.views.app.render();

    this.views.auth = new AuthView(this);
    this.views.auth.render();
  };

The rest of the file can remain the same. Notice that the order these views are rendered is important – AuthView won’t work unless it has some of AppView’s tags available. A better way of modeling this might be to move AuthView inside AppView so the dependency is reflected. You can try this yourself if you want to experiment.

Authentication Implementation

The app/js/gapi.js file still doesn’t have the handleAuthResult function, so nothing will work yet. Here’s the code to handle authentication:

function handleAuthResult(authResult) {
  var authTimeout;

  if (authResult && !authResult.error) {
    // Schedule a check when the authentication token expires
    if (authResult.expires_in) {
      authTimeout = (authResult.expires_in - 5 * 60) * 1000;
      setTimeout(checkAuth, authTimeout);
    }
  } else {
    if (authResult && authResult.error) {
      // TODO: Show error
      console.error('Unable to sign in:', authResult.error);
    }
  }
}

this.checkAuth = function() {
  gapi.auth.authorize({ client_id: config.clientId, scope: config.scopes, immediate: false }, handleAuthResult);
};

The trick to a smooth sign-in flow is to determine when the user is already signed in. If so, then authentication should be handled transparently, otherwise the user should be prompted.

The handleAuthResult function is called by gapi.auth.authorize from the checkAuth function, which isn’t displayed here (it’s before handleAuthResult in the source file if you want to check it). The this.checkAuth method is different – this is a public method that calls gapi.auth.authorize with immediate set to false, while the other invocation calls it with true.
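The scheduling arithmetic in handleAuthResult – (authResult.expires_in - 5 * 60) * 1000 – converts a token lifetime in seconds into a millisecond delay that fires five minutes early. As a standalone sketch:

```javascript
// expires_in is in seconds; setTimeout expects milliseconds.
// Refresh five minutes (300 seconds) before the token expires.
function refreshDelay(expiresInSeconds) {
  return (expiresInSeconds - 5 * 60) * 1000;
}

console.log(refreshDelay(3600)); // 3300000 ms – i.e. refresh after 55 minutes
```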

The immediate option is important because it determines whether a popup will be displayed or not. I’ve used it to check if the user is already signed in, otherwise it’s called again with immediate: false and will display a suitable popup so the user can see what permissions the app wants to use:

Authentication process

I designed it this way based on the Google APIs Client Library for JavaScript documentation:

“The standard authorize() method always shows a popup, which can be a little jarring if you are just trying to refresh an OAuth 2.0 token. Google’s OAuth 2.0 implementation also supports “immediate” mode, which refreshes a token without a popup. To use immediate mode, just add “immediate: true” to the login config as in the example above.”

I’ve also changed the ApiManager class to store a reference to App:

// Near the top of gapi.js
var app;

function ApiManager(_app) {
  app = _app;
}


In this tutorial you’ve seen how to use Google’s APIs to sign into an app you’ve previously registered with the Google API Console (documented in part 2). It might seem like a lot of work to get RequireJS, Backbone.js, and Google OAuth working together, but think about what this has achieved: 100% client-side scripting that can authenticate with existing Google accounts.

If I’ve missed anything here, you can check out the full source from commit c1d5a2e7c.

Node Roundup: thin-orm, node-tar.gz, connect-bruteforce

12 Dec 2012 | By Alex Young | Comments | Tags node modules middleware express compression databases
You can send in your Node projects for review through our contact form or @dailyjs.


thin-orm (License: MIT, npm: thin-orm) by Steve Hurlbut is a lightweight ORM module for SQL databases with a MongoDB-inspired API:

var orm = require('thin-orm');

orm.table('users')
   .columns('id', 'login', 'firstName', 'lastName', 'createdAt');

It’s designed to be used with existing libraries, like pg and sqlite3, so you’ll need one of those modules installed to use it.

thin-orm currently supports the following features:

  • Filtering
  • Sorting
  • Pagination
  • Joins
  • Optional camelCase property-to-field mapping
  • SQL injection protection

Steve has included Nodeunit tests that cover the basic functionality, and some integration tests for PostgreSQL and SQLite.


node-tar.gz (License: MIT, npm: tar.gz) by Alan Hoffmeister is a tar helper module and command-line utility, built with Node’s zlib module, tar, and commander.

The module can be used to easily tar and compress a folder, and it will install a targz script that supports the zxvf flags. There are also Vows tests.


connect-bruteforce (License: GPLv2, npm: connect-bruteforce) by Pedro Narciso García Revington provides middleware that can help prevent bruteforce attacks. It will add a small delay to requests when an attack is detected.

The author has written a useful example that requires captcha validation after a number of successive validation failures: express-recaptcha.

For a simpler example, see express-hello-world.

The project includes Mocha tests.

jQuery Roundup: SocialCount, Literally Canvas, Socrates

11 Dec 2012 | By Alex Young | Comments | Tags jquery plugins social markdown apps images
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.


Social buttons can slow down page loading times for a variety of reasons. One solution is to lazy load them, and this is exactly what the SocialCount plugin (GitHub: filamentgroup / SocialCount, License: MIT) from Filament Group does. It can show “like”-style buttons and counts, and lazy load the social network’s original buttons.

It’s designed using progressive enhancement techniques, and is tested against IE 7+, as well as the other major browsers, and various touchscreen devices. It also includes QUnit tests.

Literally Canvas

Drawing with a trackpad is tricky business.

Literally Canvas (GitHub: literallycanvas / literallycanvas, License: BSD) by Stephen Johnson and Cameron Paul is a drawing widget built with jQuery and Underscore.js. It has some basic drawing tools and can upload images to imgur.

The plugin accepts an options object that can be used to enable or disable certain features:

{
  backgroundColor: 'rgb(255, 0, 0)', // default rgb(230, 230, 230)
  keyboardShortcuts: false,          // default true
  sizeToContainer: false,            // default true
  toolClasses: [LC.Pencil]           // see coffee/
}


Real-time Markdown editing with Socrates.

Socrates (GitHub: segmentio / socrates, License: MIT) by Ilya Volodarsky and Ian Storm Taylor is a Markdown editor and previewer. It’s built with jQuery, Backbone.js, and a client-side Markdown parser by Christopher Jeffrey.

The data is stored in Firebase, so you’ll need an account with Firebase to install your own instance of Socrates.

Extender, Gridy.js, grunt-reduce

10 Dec 2012 | By Alex Young | Comments | Tags libraries build grunt tv browser


Extender (GitHub: doug-martin / extender, npm: extender, License: MIT) by Doug Martin is a library for making chainable APIs. It works as a Node module or with RequireJS.

Extender has a define method that accepts a function and an object with methods that will form the API:

function isString(obj) {
  return !isUndefinedOrNull(obj) && (typeof obj === "string" || obj instanceof String);
}

var myExtender = extender
  .define(isString, {
    multiply: function(str, times) {
      var ret = str, i;
      for (i = 1; i < times; i++) {
        ret += str;
      }
      return ret;
    },
    toArray: function(str, delim) {
      delim = delim || '';
      return str.split(delim);
    }
  });

myExtender('hello').multiply(2).value(); // hellohello

The author has included tests and lots of examples. Although making chainable APIs is pretty straightforward, Extender might be a more explicit and testable way to do it.
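The chaining technique itself is simple: each method returns a wrapper around the current value, with value() to unwrap. A minimal sketch of the idea (not Extender’s actual implementation):

```javascript
// Minimal chainable wrapper: every method returns a new wrapper so calls
// can be strung together, and value() extracts the result.
function chain(value) {
  return {
    multiply: function(times) {
      var ret = value, i;
      for (i = 1; i < times; i++) {
        ret += value;
      }
      return chain(ret);
    },
    value: function() {
      return value;
    }
  };
}

console.log(chain('hello').multiply(2).value()); // hellohello
```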



In the UK only one of my favourite shows is on Netflix. The situation might be better in the US with Hulu, but if you’re a cult TV fan outside of the US finding content can be challenging. Even with a dearth of legitimate content sources, I’ll always prefer hacking my TV to using locked down devices – I had loads of fun this weekend with a Raspberry Pi and open source media projects.

One thing that’s often lacking is cool web interfaces for media software. Igor Alpert sent in Gridy.js (GitHub: ialpert / gridy.js), which is a library designed for building media browser interfaces. It includes tools for carousels, grids, and sliders.

Igor said he’s tested it with the Samsung SDK, Opera TV, and Google TV for the LG and Vizio platforms.


grunt-reduce by Peter Müller is a Grunt task built on AssetGraph and other related projects. AssetGraph is a Node-based module for optimising web pages. By adding grunt-reduce to your project, you can bundle and minify assets, rename assets to the MD5 sums of their content, optimise images, and even generate CSS sprites automatically.

Although AssetGraph doesn’t currently work with AngularJS, Peter notes that this is being addressed: #84: Support AngularJS templates.

JavaScript Survey 2012, Gitgraph, ES6 Proxies

07 Dec 2012 | By Alex Young | Comments | Tags ES6 git graphs Canvas survey community

JavaScript Survey 2012: RFC

I’m currently researching the next JavaScript Developer Survey. I’d like feedback on questions. If there’s anything you’d strongly like to see in the survey, please contact me and I’ll see if I can incorporate it.

Previous surveys can be found here:

In general, the surveys try to determine:

  • How many readers are client-side or server-side developers
  • Whether or not readers write tests
  • Other languages used (C#, Java, Objective-C, PHP, Ruby, Python, etc.)

It’s not necessarily used to design content for DailyJS – the results are shared with the community to benefit everyone.


Gitgraph (GitHub: bitpshr / Gitgraph, License: WTFPL) by Paul Bouchon is a Canvas-based GitHub participation graph library. It’s based around a constructor function that accepts arguments for things like GitHub username, width, height, and colours:

var graph = new Gitgraph({
  user        : 'nex3',                // any github username
  repo        : 'sass',                // name of repo
  domNode     : document.body,         // (optional) DOM node to attach to
  width       : 800,                   // (optional) graph width
  height      : 300,                   // (optional) graph height
  allColor    : "rgb(202, 202, 202)",  // (optional) color of user's participation
  userColor   : "rgb(51, 102, 153)",   // (optional) color of total participation
  background  : "white",               // (optional) background styles
  showName    : true                   // (optional) show or hide name of user / repo
});

The author wrote some background on it in GitHub Graphs Fo’ Errbody, because he had to wrap missing API functionality with a proxy.

Multiple Inheritance in ES6 with Proxies

Multiple Inheritance in ES6 with Proxies is an introduction to ES6 proxies by Jussi Kalliokoski. The author’s example uses EventEmitter, which I find useful because multiple inheritance with EventEmitter is something I’ve seen typically implemented using a for loop to copy properties.

The Proxy solution isn’t far off that approach and requires more code, but it’s worth reading if you’re struggling to understand proxies.

Backbone.js Tutorial: Google's APIs and RequireJS

06 Dec 2012 | By Alex Young | Comments | Tags backbone.js mvc node backgoog

In Part 1: Build Environment, I explained how to set up a simple Node server to host your Backbone.js app and test suite. Something that confused people was the way I used relative paths, which meant the tests could fail if you didn’t visit /test/ (/test won’t work). There was a reason for this: I developed the original version to run on Dropbox, so I wanted to use relative paths. It’s probably safer to use absolute paths, so I should have made this clearer.

In this part you’ll learn the following:

  • How Backbone.sync works
  • How to load Backbone.js and Underscore.js with RequireJS
  • How to get started with Google’s APIs

The Backbone.sync Method

Network access in Backbone.js is nicely abstracted through a single method which has the following signature:

Backbone.sync = function(method, model, options) {

The method argument contains a string that can be one of the following values:

  • create
  • update
  • delete
  • read

Internally, Backbone.js maps these method names to HTTP verbs:

var methodMap = {
  'create': 'POST',
  'update': 'PUT',
  'delete': 'DELETE',
  'read':   'GET'
};

If you’re familiar with that particular flavour of RESTful API then this should all look familiar.
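A custom sync implementation can use a table like methodMap to translate a sync call into a request. A small sketch:

```javascript
// Sketch: translating a Backbone.sync method name into an HTTP request
// description using the methodMap table.
var methodMap = {
  'create': 'POST',
  'update': 'PUT',
  'delete': 'DELETE',
  'read':   'GET'
};

function describeRequest(method, url) {
  return methodMap[method] + ' ' + url;
}

console.log(describeRequest('read', '/tasks'));     // GET /tasks
console.log(describeRequest('delete', '/tasks/1')); // DELETE /tasks/1
```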

The second argument, model, is a Backbone.Model or Backbone.Collection – collections are used when reading multiple values.

The final argument, options, is an object that contains success and error callbacks. It’s ultimately handed off to the jQuery Ajax API.

To work with Google’s APIs we need to write our own Backbone.sync method. In general terms our implementation should be structured like this:

Backbone.sync = function(method, model, options) {
  switch (method) {
    case 'create':
      googleAPI.create(model, options);
      break;

    case 'update':
      googleAPI.update(model, options);
      break;

    case 'delete':
      googleAPI.destroy(model, options);
      break;

    case 'read':
      // The model value is a collection in this case
      googleAPI.list(model, options);
      break;

    default:
      // Something probably went wrong
      console.error('Unknown method:', method);
  }
};

The googleAPI object is fictitious, but this is basically how Backbone.sync is usually extended – a lightweight wrapper that maps the method names and models to another API. Using a lightweight wrapper means the underlying target API can easily be used outside of a Backbone.js application.

In our case, Google actually provides a JavaScript API – there will be a gapi.client object available once the Google APIs have been loaded.

Setting Up A Google API Account

The main page for Google’s developer documentation is at, but what we’re interested in is the Google Tasks API which can be found under Application APIs.

Google’s Application APIs are designed to work well with both server-side scripting and client-side JavaScript. To work with the Google Tasks API you’ll need three things:

  1. A Google account (an existing one is fine)
  2. Google API Console access (you may have already enabled it if you work with Google’s services)
  3. An API key

To set up your account to work with the Google API Console, visit Once you’ve enabled it, scroll down to Tasks API:

Google Tasks API switch

Then switch the button to on, and accept the terms (if you agree to them). Next, click API Access in the left-hand navigation bar, and look under Simple API Access for the API key. This “browser apps” key is what we need. Make a note of it for later.

OAuth 2.0 for Client-side Applications

Still in the API Access section of the console, click the button to create an OAuth 2.0 project. Enter “bTask” (or whatever you want) for the product name, then http://localhost:8080 for the URL. In the next dialog, make sure http:// is selected instead of https://, then enter localhost:8080 and click “Create client ID”.

You’ll now see a set of values under “Client ID for web applications”. The field that says “Client ID” is important – make a note of this one as well.

You should now have an API key and a Client ID. These will be used to load Google’s APIs and allow us to use an OAuth 2.0 service from within the browser – we won’t need to write our own server-side code to authenticate users.

Follow Along

If you want to check out the source from Part 1 so you can follow along, you can use Git to get the exact revision from last week:

git clone
cd dailyjs-backbone-tutorial
git reset --hard 2a8517e

Required Libraries

Before progressing, download the following libraries to app/js/lib/:

Open app/js/main.js and edit the shim property under requirejs.config to load Underscore.js and Backbone.js:

requirejs.config({
  baseUrl: 'js',

  paths: {
  },

  shim: {
    'lib/underscore-min': {
      exports: '_'
    },
    'lib/backbone-min': {
      deps: ['lib/underscore-min']
    , exports: 'Backbone'
    },
    'app': {
      deps: ['lib/underscore-min', 'lib/backbone-min']
    }
  }
});

require(['app'],

function(App) {
  window.bTask = new App();
});

This looks weird, but remember we’re using RequireJS to load scripts as modules. RequireJS “shims” allow dependencies to be expressed for libraries that aren’t implemented using AMD.

Loading the Tasks API

The basic pattern for loading the Google Tasks API is:

  1. Load the Google API client library:
  2. Call gapi.client.load to load the “tasks” API
  3. Set the API key using gapi.client.setApiKey()

To implement this, you’ll need a place to put the necessary credentials. Create a file called app/js/config.js and add the API key and Client ID to it:

define([], function() {
  var config = {};
  config.apiKey = 'your API key';
  config.scopes = '';
  config.clientId = 'your client ID';
  return config;
});

This file can be loaded by our custom Google Tasks API/Backbone.sync implementation.

Now create a new file called app/gapi.js:

define(['config'], function(config) {
  function ApiManager() {
  }

  _.extend(ApiManager.prototype, Backbone.Events);

  ApiManager.prototype.init = function() {
  };

  ApiManager.prototype.loadGapi = function() {
  };

  Backbone.sync = function(method, model, options) {
    options || (options = {});

    switch (method) {
      case 'create':
        break;

      case 'update':
        break;

      case 'delete':
        break;

      case 'read':
        break;
    }
  };

  return ApiManager;
});

This skeleton module shows the overall layout of our Google Tasks API loader and Backbone.sync implementation. The ApiManager is a standard constructor, and I’ve used Underscore.js to inherit from Backbone.Events. This code will be asynchronous, so event handling will be useful later.

The loadGapi method loads Google’s JavaScript using RequireJS. Once the gapi global object has been found, it will do the rest of the configuration by calling the init method:

ApiManager.prototype.loadGapi = function() {
  var self = this;

  // Don't load gapi if it's already present
  if (typeof gapi !== 'undefined') {
    return this.init();
  }

  require([''], function() {
    // Poll until gapi is ready
    function checkGAPI() {
      if (gapi && gapi.client) {
        self.init();
      } else {
        setTimeout(checkGAPI, 100);
      }
    }

    checkGAPI();
  });
};
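The checkGAPI polling is a general-purpose technique: check for a condition, and reschedule the check if it isn’t met yet. A generic sketch (names illustrative):

```javascript
// Generic poll-until-ready helper: runs onReady once condition() is true,
// otherwise checks again after `interval` milliseconds.
function pollUntil(condition, onReady, interval) {
  (function check() {
    if (condition()) {
      onReady();
    } else {
      setTimeout(check, interval);
    }
  })();
}

// With a condition that's already true the callback fires immediately;
// otherwise the setTimeout branch keeps retrying.
var log = [];
pollUntil(function() { return true; }, function() { log.push('ready'); }, 100);
console.log(log[0]); // ready
```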

All the init method needs to do is load the Tasks API with gapi.client.load:

ApiManager.prototype.init = function() {
  var self = this;

  gapi.client.load('tasks', 'v1', function() { /* Loaded */ });

  function handleClientLoad() {
    gapi.client.setApiKey(config.apiKey);
    window.setTimeout(checkAuth, 100);
  }

  function checkAuth() {
    gapi.auth.authorize({ client_id: config.clientId, scope: config.scopes, immediate: true }, handleAuthResult);
  }

  function handleAuthResult(authResult) {
    // Implemented in the next part
  }

  handleClientLoad();
};

The config variable was one of the dependencies for this file, and contains the credentials required by Google’s API.

Loading the API Manager

Now open app/js/app.js and make it depend on gapi, then create an instance of ApiManager:


define(['gapi'], function(ApiManager) {
  var App = function() {
    this.connectGapi();
  };

  App.prototype = {
    connectGapi: function() {
      this.apiManager = new ApiManager();
      this.apiManager.loadGapi();
    }
  };

  return App;
});

If you want to check this works by running the tests, you’ll need to change test/setup.js to flag gapi as a global:

var assert = chai.assert;

mocha.setup({
  ui: 'tdd'
, globals: ['bTask', 'gapi', '___jsl']
});

However, I don’t intend to load the API remotely during testing – this will effectively be mocked. I’ll come onto that in a later tutorial.


gapi loaded

If you run the app or tests and open a JavaScript console, a gapi global object should be available. Using Google’s APIs with RequireJS and Backbone.js seems like a lot of work, but most of this stuff is effectively just configuration, and once it’s done it should work solidly enough, allowing you to focus on the app design and development side of things.

Full Source Code

Commit 9d09a6.


Node Roundup: pkgcloud, rewire, ssh2

05 Dec 2012 | By Alex Young | Comments | Tags node modules ssh cloud testing


pkgcloud (GitHub: nodejitsu / pkgcloud, License: MIT, npm: pkgcloud) from Nodejitsu is a module for scripting interactions with cloud service providers. It supports various services from Joyent, Microsoft, Rackspace, and several database providers like MongoHQ and RedisToGo. The authors have attempted to unify the vocabulary used by each provider – for example, pkgcloud uses the term ‘Server’ to refer to Joyent’s “machines” and Amazon’s “instances”.

Services can be introspected and resources can be fetched. The API is naturally asynchronous, with callback arguments using the standard error-first pattern.
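The error-first convention means every callback receives a possible error as its first argument. A sketch with a stubbed-out client (names hypothetical):

```javascript
// Hypothetical stub following the error-first callback convention:
// the first argument is an error (or null), the second the result.
function getServers(cb) {
  // A real client would make a network request; this stub succeeds.
  cb(null, [{ name: 'server-1' }]);
}

getServers(function(err, servers) {
  if (err) {
    console.error('Request failed:', err);
    return;
  }
  console.log(servers[0].name); // server-1
});
```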

The roadmap promises support for more services in the future, including CDN and DNS.


rewire (License: MIT, npm: rewire) by Johannes Ewald is a dependency injection implementation that can be used to inject mocks into other modules and access private variables.

As an example, consider a module within your project that uses the standard fs module to read a file. When writing tests for this module, it would be entirely possible to use rewire to modify the fs module to mock the readFile method:

var rewire = require('rewire')
  , exampleModule = rewire('./exampleModule')
  ;

exampleModule.__set__('fs', {
  readFile: function(path, encoding, cb) {
    cb(null, 'Success!');
  }
});

// Tests would follow...

Notice that rewire was used instead of require – rewire itself works by appending special getters and setters to modules rather than using an eval-based solution.
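The idea can be sketched without rewire itself: expose a setter that swaps a module-internal dependency for a mock. Everything below is illustrative, not rewire’s real code:

```javascript
// Illustrative module with a private `fs` dependency and a __set__-style
// setter that swaps it for a mock – the idea behind rewire, not its code.
function createModule() {
  var fs = {
    readFile: function(path, encoding, cb) { cb(new Error('no real fs here')); }
  };

  return {
    read: function(cb) { fs.readFile('/tmp/example', 'utf8', cb); },
    __set__: function(name, value) { if (name === 'fs') fs = value; }
  };
}

var mod = createModule();
mod.__set__('fs', {
  readFile: function(path, encoding, cb) { cb(null, 'Success!'); }
});

mod.read(function(err, data) {
  console.log(data); // Success!
});
```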


SSH2 (License: MIT, npm: ssh2) by Brian White is an SSH2 client written in pure JavaScript. It’s built with the standard Node modules – streams, buffers, events, and lots of prototype objects and regular expressions.

It supports several authentication methods, including keys, as well as bidirectional port forwarding, execution of remote commands, interactive sessions, and SFTP. Brian has provided some detailed examples of how to use the library’s event-based API.

jQuery Roundup: jquery.snipe, Photobooth.js, jHERE

04 Dec 2012 | By Alex Young | Comments | Tags jquery images plugins maps


jquery.snipe showing two images, one black and white.

jquery.snipe (GitHub: RayFranco / jquery.snipe, License: Apache 2.0, Component: RayFranco/jquery.snipe) by Franco Bouly shows a pleasing lens effect over images that follows the mouse. The “zoom” image is separate, so effects can be created – Franco’s examples have black and white images with a colour image for the zoom.

Note that Franco’s example image is slightly risqué – it’s perfectly safe for work in the DailyJS office but it might not be elsewhere.


Photobooth.js (GitHub: WolframHempel / photobooth-js, License: BSD) by Wolfram Hempel is an API for working with webcams. It currently works in recent versions of Chrome, Firefox, and Opera – browsers that support navigator.getUserMedia. It can be used like a class or through jQuery. It supports hue, saturation, and brightness adjustments and image resizing.


jHERE displaying a route using KML.

jHERE (GitHub: mmarcon / jhere, License: MIT) by Massimiliano Marcon is a jQuery and Zepto API for working with maps. It supports markers, KML (Keyhole Markup Language), and heatmaps.

The map service used is Here from Nokia.

JS101: Deep Equal

03 Dec 2012 | By Alex Young | Comments | Tags js101 tutorials language beginner testing

Back in JS101: Equality I wrote about the difference between == and ===. This is one area of the language that quite clearly causes issues for beginners. In addition, there is another equality concept that can come in handy when writing tests: deep equal. It also illustrates some of the underlying mechanics of the language. As an intermediate JavaScript developer, you should have at least a passing familiarity with deepEqual and how it works.

Unit Testing/1.0

Deep equality is defined in CommonJS Unit Testing/1.0, under subsection 7. The algorithm assumes two arguments: expected and actual. The purpose of the algorithm is to determine if the values are equivalent. It supports both primitive values and objects.

  1. Strict equals (===) means the values are equivalent
  2. Compare dates using the getTime method
  3. If values are not objects, compare with ==
  4. Otherwise, compare each object’s size, keys, and values

The fourth point is probably what you would assume deep equality actually means. The other stages reveal things about the way JavaScript works – the third stage means values that are not objects can easily be compared with == because they’re primitive values (Undefined, Null, Boolean, Number, or String).

The second step works because getTime is the most convenient way of comparing dates:

var assert = require('assert')
  , a = new Date(2012, 1, 1)
  , b = new Date(2012, 1, 1)

assert.ok(a !== b);
assert.ok(a != b);
assert.ok(a.getTime() == b.getTime());
assert.deepEqual(a, b);

This script can be run in Node, or with a suitable CommonJS assertion library. It illustrates the point that dates are not considered equal using the equality or strict equality operators – the easiest way to compare them is with getTime.

Object comparison implies recursion, as some values may also be objects. Also, key comparison isn’t as simple as it might seem: real implementations sort keys, compare length, then compare each value.
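The algorithm’s four steps can be condensed into a short function. This is an illustration of the idea only, not Node’s actual implementation (which also handles the regular expression special case discussed below):

```javascript
// A minimal deepEqual sketch following the four steps of Unit Testing/1.0.
function deepEqual(actual, expected) {
  // 1. Strict equality means the values are equivalent
  if (actual === expected) return true;

  // 2. Compare dates by their millisecond timestamps
  if (actual instanceof Date && expected instanceof Date) {
    return actual.getTime() === expected.getTime();
  }

  // 3. Non-objects are primitives; compare with ==
  if (typeof actual !== 'object' || actual === null ||
      typeof expected !== 'object' || expected === null) {
    return actual == expected;
  }

  // 4. Otherwise compare each object's size, keys, and values recursively
  var aKeys = Object.keys(actual).sort();
  var eKeys = Object.keys(expected).sort();
  if (aKeys.length !== eKeys.length) return false;
  for (var i = 0; i < aKeys.length; i++) {
    if (aKeys[i] !== eKeys[i]) return false;
    if (!deepEqual(actual[aKeys[i]], expected[eKeys[i]])) return false;
  }
  return true;
}

console.log(deepEqual({ a: [1, 2] }, { a: [1, 2] }));                  // true
console.log(deepEqual(new Date(2012, 1, 1), new Date(2012, 1, 1)));   // true
```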


Bugs have been found in the Unit Testing/1.0 specification since it originally appeared. Two have been flagged up on the main Unit Testing page. The Node assert module addresses these points. For example, regular expressions are a special case in the deepEqual implementation:

return actual.source === expected.source &&
       actual.global === expected.global &&
       actual.multiline === expected.multiline &&
       actual.lastIndex === expected.lastIndex &&
       actual.ignoreCase === expected.ignoreCase;

The source property has a string that represents the original regular expression, and then each flag has to be compared.

Object Comparison

The next time you’re writing a test, or even just comparing objects, remember that == will only work for “shallow” comparisons. Testing other values like arrays, dates, regular expressions, and objects requires a little bit more effort.

Viewport Component, grunt-saucelabs, FastClick

30 Nov 2012 | By Alex Young | Comments | Tags grunt mobile components libraries

Viewport Component

Viewport Component (License: MIT, component: pazguille/viewport) by Guille Paz can be used to get information about a browser’s viewport. This includes the width and height as well as the vertical and horizontal scroll positions.

It also emits events for scrolling, resizing, and when the top or bottom has been reached during scrolling.

var Viewport = require('viewport')
  , viewport = new Viewport();

viewport.on('scroll', function() {
  // the viewport has been scrolled
});

viewport.on('top', function() {
  // the top of the page has been reached
});

viewport.on('bottom', function() {
  // the bottom of the page has been reached
});


grunt-saucelabs (License: MIT, npm: grunt-saucelabs) by Parashuram N (axemclion) is a Grunt task for qunit and jasmine tests using Sauce Labs’ Cloudified Browsers. This is similar to the built-in qunit Grunt task, but uses the remote service provided by Sauce Labs instead.

Sauce Connect can be used to create a tunnel for testing pages that aren’t accessible on the Internet.


FastClick (License: MIT, npm: fastclick, component: ftlabs/fastclick) from the Financial Times helps remove the delay in mobile browsers that occurs between a tap and the trigger of click events.

The authors have included some simple tests, documentation, and examples. The project is extremely well packaged, including support for npm, component, AMD, and Google Closure Compiler’s ADVANCED_OPTIMIZATIONS.

Internally, FastClick works by binding events to a “layer” and binding several touch event handlers. These handlers use internal properties to determine how elements are being interacted with. If certain conditions are met, then a click event will be generated, and several attributes will be added to the event to allow further tracking.

The event handlers can be easily removed using FastClick.prototype.destroy, and the project has a wide range of special cases for handling divergent behaviour in iOS and Android.

Backbone.js Tutorial: Build Environment

29 Nov 2012 | By Alex Young | Comments | Tags backbone.js mvc node backgoog

This new Backbone.js tutorial series will walk you through building a single page web application that has a customised Backbone.sync implementation. I started building the application that these tutorials are based on back in August, and it’s been running smoothly for a few months now so I thought it was safe enough to unleash it.

Gmail to-do lists: not cool enough!

The application itself was built to solve a need of mine: a more usable Google Mail to-do list. The Gmail-based interface rubs me the wrong way to put it mildly, so I wrote a Backbone.sync method that works with Google’s APIs and stuck a little Bootstrap interface on top. As part of these tutorials I’ll also make a few suggestions on how to customise Bootstrap – there’s no excuse for releasing vanilla Bootstrap sites!

The app we’ll be making won’t feature everything that Google’s to-do lists support: I haven’t yet added support for indenting items for example. However, it serves my needs very well so hopefully it’ll be something you’ll actually want to use.


Over the next few weeks I’ll cover the following topics:

  • Creating a new Node project for building the single page app
  • Using RequireJS with Backbone.js
  • Google’s APIs
  • Writing and running tests
  • Creating the Backbone.js app itself
  • Techniques for customising Bootstrap
  • Deploying to Dropbox, Amazon S3, and potentially other services

Creating a Build Environment

If your focus is on client-side scripting, then I think this will be useful to you. Our goal is to create a development environment that can do the following:

  • Allow the client-side code to be written as separate files
  • Combine separate files into something suitable for deployment
  • Run the app locally using separate files (to make development and debugging easier)
  • Manage supporting Node modules
  • Run tests
  • Support Unix and Windows

To do this we’ll need a few tools and libraries:

Make sure your system has Node installed. The easiest way to install it is by using one of the Node packages for your system.

Step 1: Installing the Node Modules

Create a new directory for this project, and create a new file inside it called package.json that contains this JSON:

{
  "name": "btask"
, "version": "0.0.1"
, "private": true
, "dependencies": {
    "requirejs": "latest"
  , "connect": "2.7.0"
  }
, "devDependencies": {
    "mocha": "latest"
  , "chai": "latest"
  , "grunt": "latest"
  , "grunt-exec": "latest"
  }
, "scripts": {
    "grunt": "node_modules/.bin/grunt"
  }
}

Run npm install. These modules along with their dependencies will be installed in ./node_modules.

The private property prevents accidentally publicly releasing this module. This is useful for closed source commercial projects, or projects that aren’t suitable for release through npm.

Even if you’re not a server-side developer, managing dependencies with npm is useful because it makes it easier for other developers to work on your project. When a new developer joins your project, they can just type npm install instead of figuring out what downloads to grab.

Step 2: Local Web Server

Create a directory called app and a file called app/index.html:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <script type="text/javascript" src=""></script>
  <script type="text/javascript" src="js/lib/require.js"></script>
</head>
<body>
</body>
</html>

Once you’ve done that, create a file called server.js in the top-level directory:

var connect = require('connect')
  , http = require('http')
  , app;

app = connect()
  .use(connect.static('app'))
  .use('/js/lib/', connect.static('node_modules/requirejs/'))
  .use('/node_modules', connect.static('node_modules'));

http.createServer(app).listen(8080, function() {
  console.log('Running on http://localhost:8080');
});

This file uses the Connect middleware framework to act as a small web server for serving the files in app/. You can add new paths to it by copying the .use(connect.static('app')) line and changing app to something else.

Notice how I’ve mapped the web path for /js/lib/ to node_modules/requirejs/ on the file system – rather than copying RequireJS to where the client-side scripts are stored we can map it using Connect. Later on the build scripts will copy node_modules/requirejs/require.js to build/js/lib so the index.html file won’t have to change. This will enable the project to run on a suitable web server, or a hosting service like Amazon S3 for static sites.

To run this Node server, type npm start (or node server.js) and visit http://localhost:8080. It should display an empty page with no client-side errors.

Step 3: Configuring RequireJS

This project will consist of modules written in the AMD format. Each Backbone collection, model, view, and so on will exist in its own file, with a list of dependencies so RequireJS can load them as needed.

RequireJS projects that work this way are usually structured around a “main” file that loads the necessary dependencies to boot up the app. Create a file called app/js/main.js that contains the following skeleton RequireJS config:

requirejs.config({
  baseUrl: 'js',

  paths: {
  },

  shim: {
  }
});

require(['app'], function(App) {
  window.bTask = new App();
});

The part that reads require(['app']) will load app/js/app.js. Create this file with the following contents:

define([], function() {
  var App = function() {
  };

  App.prototype = {
  };

  return App;
});

This is a module written in the AMD format – the define function is provided by RequireJS and in future will contain all of the internal dependencies for the project.

To finish off this step, main.js should be loaded. Add a script tag near the bottom of app/index.html, before the closing </body> tag:

<script type="text/javascript" src="js/main.js"></script>

If you refresh http://localhost:8080 in your browser and open the JavaScript console, you should see that bTask has been instantiated.


Step 4: Testing

Everything you’ve learned in the previous three steps can be reused to create a unit testing suite. Mocha has already been installed by npm, so let’s create a suitable test harness.

Create a new directory called test/ (next to the ‘app/’ directory) that contains a file called index.html:

<html>
<head>
  <meta charset="utf-8">
  <title>bTask Tests</title>
  <link rel="stylesheet" href="/node_modules/mocha/mocha.css" />
  <style>
    .toast-message, #main { display: none }
  </style>
</head>
<body>
  <div id="mocha"></div>
  <script src=""></script>
  <script src="/node_modules/chai/chai.js"></script>
  <script src="/node_modules/mocha/mocha.js"></script>
  <script src="/js/lib/require.js"></script>
  <script src="/js/main.js"></script>
  <script src="setup.js"></script>
  <script src="app.test.js"></script>
  <script>require(['app'], function() { mocha.run(); });</script>
</body>
</html>

The require near the end just makes sure mocha.run() only runs when /js/app.js has been loaded.

Create another file called test/setup.js:

var assert = chai.assert;

mocha.setup({
  ui: 'tdd'
, globals: ['bTask']
});

This file makes Chai’s assertions available as assert, which is how I usually write my tests. I’ve also told Mocha that bTask is an expected global variable.

With all this in place we can write a quick test. This file goes in test/app.test.js:

suite('App', function() {
  test('Should be present', function() {
    assert.ok(window.bTask);
  });
});

All it does is check that window.bTask has been defined – it proves RequireJS has loaded the app.

Finally we need to update where Connect looks for files to serve. Modify ‘server.js’ to look like this:

var connect = require('connect')
  , http = require('http')
  , app;

app = connect()
  .use(connect.static('app'))
  .use('/js/lib/', connect.static('node_modules/requirejs/'))
  .use('/node_modules', connect.static('node_modules'))
  .use('/test', connect.static('test/'))
  .use('/test', connect.static('app'));

http.createServer(app).listen(8080, function() {
  console.log('Running on http://localhost:8080');
});

Restart the web server (from step 2), and visit http://localhost:8080/test/ (the last slash is important). Mocha should display that a single test has passed.

Step 5: Making Builds

Create a file called grunt.js for our “gruntfile”:

module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-exec');

  grunt.initConfig({
    exec: {
      build: {
        command: 'node node_modules/requirejs/bin/r.js -o require-config.js'
      }
    }
  });

  grunt.registerTask('copy-require', function() {
    grunt.file.copy('node_modules/requirejs/require.js', 'build/js/lib/require.js');
  });

  grunt.registerTask('default', 'exec copy-require');
};

This file uses the grunt-exec plugin by Jake Harding to run the RequireJS command that generates a build of everything in the app/ directory. To tell RequireJS what to build, create a file called require-config.js:

({
  appDir: 'app/'
, baseUrl: 'js'
, paths: {}
, dir: 'build/'
, modules: [{ name: 'main' }]
})

RequireJS will minify and concatenate the necessary files. The other Grunt task copies the RequireJS client-side code to build/js/lib/require.js, because our local Connect server was mapping this for us. Why bother? Well, it means whenever we update RequireJS through npm the app and builds will automatically get the latest version.

To run Grunt, type npm run-script grunt. The npm command run-script is used to invoke scripts that have been added to the package.json file. The package.json created in step 1 contained "grunt": "node_modules/.bin/grunt", which makes this work. I prefer this to installing Grunt globally.

I wouldn’t usually use Grunt for my own projects because I prefer Makefiles. In fact, a Makefile for the above would be very simple. However, this makes things more awkward for Windows-based developers, so I’ve included Grunt in an effort to support Windows. Also, if you typically work as a client-side developer, you might find Grunt easier to understand than learning GNU Make or writing the equivalent Node code (Node has a good file system module).


In this tutorial you’ve created a Grunt and RequireJS build environment for Backbone.js projects that use Mocha for testing. You’ve also seen how to use Connect to provide a convenient local web server.

This is basically how I build and manage all of my Backbone.js single page web applications. Although we haven’t written much code yet, as you’ll see over the coming weeks this approach works well for using Backbone.js and RequireJS together.

The code for this project can be found here: dailyjs-backbone-tutorial (2a8517).


Node Roundup: 0.8.15, JSPath, Strider

28 Nov 2012 | By Alex Young | Comments | Tags node modules apps express json
You can send in your Node projects for review through our contact form or @dailyjs.

Node 0.8.15

Node 0.8.15 is out, which updates npm to 1.1.66, fixes a net and tls resource leak, and has some miscellaneous fixes for Windows and Unix systems.


JSPath (License: MIT/GPL, npm: jspath) by Filatov Dmitry is a DSL for working with JSON. The DSL can be used to select values, and looks a bit like CSS selectors:

// var doc = { "books" : [ { "id" : 1, "title" : "Clean Code", "author" : { "name" : "Robert C. Martin" } ...

JSPath.apply('', doc);
// [{ name : 'Robert C. Martin' }, ...

It can also be used to apply conditional expressions, like this: .books{ === 'Robert C. Martin'}.title. Other comparison operators are also supported like >= and !==. Logical operators can be used to combine predicates, and the API also supports substitution.

The author has included unit tests, and build scripts for generating a browser-friendly version.


Strider (GitHub: Strider-CD / strider, License: BSD, npm: strider) by Niall O’Higgins is a continuous deployment solution built with Express and MongoDB. It’s designed to be easy to deploy to Heroku, and can be used to deploy applications to your own servers. It directly supports Node and Python applications, and the author is also working on supporting Rails and JVM languages.

Strider integrates with GitHub, so it can display a list of your repositories and allow them to be deployed as required. It can also test projects, so it can be used for continuous integration if deployment isn’t required.

The documentation includes full details for installing Strider, linking a GitHub account, and then setting it up as a CI/CD server with an example project.

jQuery Roundup: 1.8.3, UI 1.9.2, oolib.js

27 Nov 2012 | By Alex Young | Comments | Tags jquery jquery-ui oo
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

jQuery 1.8.3

jQuery 1.8.3 and jQuery Color 2.1.1 are out. There are a few interesting bug fixes in this release that you might want to check out:

jQuery UI 1.9.2

jQuery UI 1.9.2 is out:

This update brings bug fixes for Accordion, Autocomplete, Button, Datepicker, Dialog, Menu, Tabs, Tooltip and Widget Factory.

The 1.9.2 changelog contains a full breakdown of the recent changes.


oolib.js (GitHub: idya / oolib, License: MIT) by Zsolt Szloboda is a JavaScript object-oriented library that is conceptually similar to jQuery UI’s Widget Factory. It supports private methods, class inheritance, object initialisation and deinitialisation, super methods, and it’s fairly small (min: 1.6K, gz: 0.7K).

It looks like this in practice:

var MyClass = oo.createClass({
  _create: function(foo) {
    this.myField = foo;
  },

  _myPrivateMethod: function(bar) {
    return this.myField + bar;
  },

  myPublicMethod: function(baz) {
    return this._myPrivateMethod(baz);
  }
});

var MySubClass = oo.createClass(MyClass, {
  _myPrivateMethod: function(bar) {
    return this.myField + bar + 1;
  }
});

JS101: __proto__

26 Nov 2012 | By Alex Young | Comments | Tags js101 tutorials language beginner

When I originally wrote about prototypes in JS101: Prototypes a few people were confused that I didn’t mention the __proto__ property. One reason I didn’t mention it is I was sticking to standard ECMAScript for the most part, using the Annotated ECMAScript 5.1 site as a reference. It’s actually hard to talk about prototypes without referring to __proto__, though, because it serves a very specific and useful purpose.

Recall that objects are created using constructors:

function User() {
}

var user = new User();

The prototype property can be used to add properties to instances of User:

function User() {
}

User.prototype.greet = function() {
  return 'hello';
};

var user = new User();

So far so good. The original constructor can be referenced using the constructor property on an instance:

assert.equal(user.constructor, User);

However, user.prototype is not the same as User.prototype. What if we wanted to get hold of the original prototype where the greet method was defined based on an instance of a User?

That’s where __proto__ comes in: it’s a reference to the internal prototype the instance was constructed with. Given that, we now know the following two statements to be true:

assert.equal(user.constructor, User);
assert.equal(user.__proto__, User.prototype);

Unfortunately, __proto__ doesn’t appear in ECMAScript 5 – so where does it come from? As noted by the documentation on MDN it’s a non-standard property. Or is it? It’s included in Ecma-262 Edition 6, which means whether it’s standard or not depends on the version of ECMAScript that you’re using.

It follows that an instance’s constructor should contain a reference to the constructor’s prototype. If this is true, then we can test it using these assertions:

assert.equal(user.constructor.prototype, User.prototype);
assert.equal(user.constructor.prototype, user.__proto__);

The standards also define Object.getPrototypeOf – this returns the internal property of an object. That means we can use it to access the constructor’s prototype:

assert.equal(Object.getPrototypeOf(user), User.prototype);

Putting all of this together gives this script which will pass in Node and Chrome (given a suitable assertion library):

var assert = require('assert');

function User() {
}

var user = new User();

assert.equal(user.__proto__, User.prototype);
assert.equal(user.constructor, User);
assert.equal(user.constructor.prototype, User.prototype);
assert.equal(user.constructor.prototype, user.__proto__);
assert.equal(Object.getPrototypeOf(user), User.prototype);

Internal Prototype

The confusion around __proto__ arises because of the term internal prototype:

All objects have an internal property called [[Prototype]]. The value of this property is either null or an object and is used for implementing inheritance.

Internally there has to be a way to access the constructor’s prototype to correctly implement inheritance – whether or not this is available to us is another matter. Why is accessing it useful to us? In the wild you’ll occasionally see people setting an object’s __proto__ property to make objects look like they inherit from another object. This used to be the case in Node’s assertion module, but Node’s util.inherits method is a more idiomatic way to do it:

// Compare to: assert.AssertionError.__proto__ = Error.prototype;
util.inherits(assert.AssertionError, Error);

This was changed in assert: remove unnecessary use of __proto__.

The Constructor’s Prototype

The User example’s internal prototype is set to Function.prototype:

assert.equal(User.__proto__, Function.prototype);

If you’re about to put on your hat, pick up your briefcase, and walk right out the door: hold on a minute. You’re coming to the end of the chain – the prototype chain that is:

assert.equal(User.__proto__, Function.prototype);
assert.equal(Function.prototype.__proto__, Object.prototype);
assert.equal(Object.prototype.__proto__, null);

Remember that the __proto__ property is the internal prototype – this is how JavaScript’s inheritance chain is implemented. Every User inherits from Function.prototype which in turn inherits from Object.prototype, and Object.prototype’s internal prototype is null which allows the inheritance algorithm to know it has reached the end of the chain.

Therefore, adding a method to Object.prototype will make it available to every object. Properties of the Object Prototype Object include toString, valueOf, and hasOwnProperty. That means instances of the User constructor in the previous example will have these methods.

Pithy Closing Remark

JavaScript’s inheritance model is not class-based. Joost Diepenmaat’s post, Constructors considered mildly confusing, summarises this as follows:

In a class-based object system, typically classes inherit from each other, and objects are instances of those classes. … constructors do nothing like this: in fact constructors have their own [[Prototype]] chain completely separate from the [[Prototype]] chain of objects they initialize.

Rather than visualising JavaScript objects as “classes”, try to think in terms of two parallel lines of prototype chains: one for constructors, and one for initialised objects.


Blanket.js, xsdurationjs, attr

23 Nov 2012 | By Alex Young | Comments | Tags libraries testing node browser dates


Blanket and QUnit

Blanket.js (GitHub: Migrii / blanket, License: MIT, npm: blanket) by Alex Seville is a code coverage library tailored for Mocha and QUnit, although it should work elsewhere. Blanket wraps around code that requires coverage, and this can be done by applying a data-cover attribute to script tags, or by passing it a path, regular expression, or array of paths in Node.

It actually parses and instruments code using uglify-js, and portions of Esprima and James Halliday’s falafel library.

The author has prepared an example test suite that you can run in a browser: backbone-koans-qunit. Check the “Enable coverage” box, and it will run through the test suite using Blanket.js.


xsdurationjs (License: MIT, npm: xsdurationjs) by Pedro Narciso García Revington is an implementation of Adding durations to dateTimes from the W3C Recommendation XML Schema Part 2. By passing it a duration and a date, it will return a new date by evaluating the duration expression.

The duration expressions are ISO 8601 durations – these can be quite short like P5M, or contain year, month, day, and time:

For example, “P3Y6M4DT12H30M5S” represents a duration of “three years, six months, four days, twelve hours, thirty minutes, and five seconds”.

The project includes Vows tests that include coverage for the W3C functions (fQuotient and modulo).


attr (License: MIT) by Jonah Fox is a component for “evented attributes with automatic dependencies.” Once an attribute has been created with attr('name'), it will emit events when the value changes. Convenience methods are also available for toggling boolean values and getting the last value.

It’s designed to be used in browsers, and comes with Mocha tests.

The State of Backbone.js

22 Nov 2012 | By Alex Young | Comments | Tags backbone.js mvc


Looking at the Backbone.js website you’d be forgiven for thinking the project has stagnated somewhat. It’s currently at version 0.9.2, released back in March, 2012. So what’s going on? It turns out a huge amount of work! The developers have committed a slew of changes since then. The latest version and commit history is readily available in the master Backbone.js branch on GitHub. Since March there has been consistent activity on the master branch, including community contributions. The core developers are working hard on releasing 1.0.

If you’ve been sticking with the version from the Backbone.js website (0.9.2), you’re probably wondering what’s changed between that version and the current code in the master branch. Here’s a summary of the new features and tweaks:

In addition to these changes, there are a lot of fixes, refactored internals, and documentation improvements.

If you’re interested in testing this against your Backbone-powered apps, then download the Backbone.js edge version to try it out. I’m not sure when the next major version will be released, but I’ll be watching both the Backbone.js Google Group and GitHub repository for news.

Node Roundup: Knockout Winners, Node for Raspberry Pi, Benchtable

21 Nov 2012 | By Alex Young | Comments | Tags node raspberry-pi hardware benchmarking

Node.js Knockout Winners Announced


Node.js Knockout had an outstanding 167 entries this year. The overall winner was Disasteroids by SomethingCoded. It’s an original take on several arcade classics: imagine a multiplayer version of Asteroids crossed with the shooting mechanics of Missile Command, but with projectiles that are affected by gravity.

The other winners are currently listed on the site, but I’ve reproduced them here to give the entrants more well-earned kudos:

Congratulations to all the winners, and be sure to browse the rest of the entries for hours of fun!

Node on Raspberry Pi

Node Pi

If you’ve got a Raspberry Pi you probably already know it’s possible to run Node on the ARM-based tiny computer. If not then Node.js Debian package for ARMv5 by Vincent Rabah explains how to get Node running with his custom Debian package.

“But the Raspberry Pi is just a cheap computer, what’s so great about it?” I hear you cry in the comments. There’s an intrinsic value to the Raspberry Pi Foundation’s efforts in making such hardware suitable for school children. No offence to Microsoft, but in a country where Office was on the curriculum for “IT” we can use any help we can get aiding the next generation of hackers and professional engineers.



I love the command-line, it’s where I write code, DailyJS, notes, email – colourful text and ancient Unix utilities abound. But, I also like to fiddle with the way things look. For example, if I’m writing benchmarks I don’t want to just print them out in boring old monochrome text, I want them to look cool.

Ivan Zuzak’s Benchtable (License: Apache 2.0, npm: benchtable) is built for just such a need. It prints benchmarks in tables, making it a little bit easier to compare values visually. It’s built on Benchmark.js, which is one of the most popular benchmarking modules.

The API is built around the Benchtable prototype, which derives from Benchmark.Suite, so it can be dropped into an existing benchmarking suite without too much effort.

jQuery Roundup: pickadate.js, jQuery Interdependencies, Timer.js

20 Nov 2012 | By Alex Young | Comments | Tags jquery date-pickers forms timers



pickadate.js (GitHub: amsul / pickadate.js, License: MIT) by Amsul is a date picker that works with type="date" or regular text fields, supports various types of date formatting options, and is easy to theme.

The pickadate.js documentation explains how to use and configure the plugin. Basic usage is just $('.datepicker').datepicker(), given a suitable form field.

jQuery Interdependencies

jQuery Interdependencies (GitHub: miohtama / jquery-interdependencies, License: MIT) by Mikko Ohtamaa is a plugin for expressing relationships between form fields. Rule sets can be created that relate the value of a field to the presence of another field. The simplest example of this would be selecting “Other”, and then filling out a value in a text field.

It works with all standard HTML inputs, and can handle nested decision trees. There’s also some detailed documentation, jQuery Interdependencies documentation and an introductory blog post that covers the basics.


Florian Schäfer sent in his forked version of jQuery Chrono, Timer.js. It’s a periodic timer API for browsers and Node, with some convenience methods and time string expression parsing:

timer.every('2 seconds', function () {});
timer.after('5 seconds', function () {});

He also sent in Lambda.js which is a spin-off from Oliver Steele’s functional-javascript library. String expressions are used to concisely represent small functions, or lambdas:

lambda('x -> x + 1')(1); // => 2
lambda('x y -> x + 2*y')(1, 2); // => 5
lambda('x, y -> x + 2*y')(1, 2); // => 5

Mastering Node Streams: Part 2

19 Nov 2012 | By Roly Fentanes | Comments | Tags tutorials node streams

If you’ve ever used the Request module, you’ve probably noticed that calling it returns a stream object synchronously. You can pipe it right away. To see what I mean, this is how you would normally pipe HTTP responses:

var http = require('http');

http.get('', function onResponse(response) {
  response.pipe(destination);
});

Compare that example to using the Request module:

var request = require('request');

request('').pipe(destination);
That’s easier to understand, shorter, and requires one less level of indentation. In this article, I’ll explain how this is done so you can make modules that work this way.

How to do It

First, it’s vitally important to understand how the stream API works. If you haven’t done so yet, take a look at the stream API docs, I promise it’s not too long.

First, we’ll take a look at readable streams. Readable streams can be pause()d and resume()d. If we’re using another object to temporarily represent the stream while it’s not available, the reasonable thing to do is to keep a paused property on this object, updated as pause() and resume() are called. Some readable streams also have destroy() and setEncoding(). Again, the first thing that might come to mind is to keep destroyed and encoding properties on the temporary stream.

But, not all readable streams are created equal, some might have more methods or they might not have a destroy() method. The most reliable method I’ve found is to look at the stream’s prototype, iterate through the functions including those it inherits, and buffer all calls to them until the real stream is available. This works for a writable stream’s write() and end() methods, and for even emitter methods such as on().
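A minimal sketch of that buffering technique might look like this (the names are illustrative – this is not streamify’s actual implementation):

```javascript
// Wrap a set of method names so that calls made before the real stream
// exists are queued, then replayed in order once it arrives.
function createBufferingProxy(methodNames) {
  var queue = [];
  var real = null;
  var proxy = {};

  methodNames.forEach(function(name) {
    proxy[name] = function() {
      if (real) return real[name].apply(real, arguments);
      queue.push([name, arguments]);
    };
  });

  // Called when the real stream becomes available
  proxy.resolve = function(stream) {
    real = stream;
    queue.forEach(function(call) {
      real[call[0]].apply(real, call[1]);
    });
    queue = [];
  };

  return proxy;
}
```

A call such as proxy.write('a') made before resolve() is simply replayed against the real stream afterwards, so callers never notice the stream wasn’t there yet.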

Standard stream methods don’t return anything, except for write(), which returns false if the write buffer is full. In the temporary object’s case, write() should return false as long as the real stream is not yet available.

Another special case is pipe(). Every readable stream’s pipe method works the same way. It doesn’t need to be overwritten or queued. When pipe() is called, it listens for events from both the source and destination streams. It writes to the destination stream whenever data is emitted from the source, and it pauses and resumes the source as needed. We’re already queueing calls to methods inherited from event emitter.

What about emitting an event before the real source stream is available? You couldn’t do this if you queued calls to emit(). The events would fire only after the real stream becomes available. If you’re a perfectionist, you would want to consider this very rare case and come up with a solution.

Introducing Streamify

Streamify does all of this for you, so you don’t have to deal with the complexities and still get the benefits of a nicer API. Our previous http example can be rewritten to work like Request does.

var http = require('http');
var streamify = require('streamify');

var stream = streamify();
http.get('', function onResponse(response) {
  stream.resolve(response);
});

// `stream` can be piped already

You might think this is unnecessary since Request already exists and it already does this. Keep in mind Request is only one module which handles one type of stream. This can be used with any type of stream which is not immediately available in the current event loop iteration.

You could even do something crazy like this:

var http = require('http');
var fs = require('fs');
var streamify = require('streamify');

function uploadToFirstClient() {
  var stream = streamify({ superCtor: http.ServerResponse });

  var server = http.createServer(function onRequest(request, response) {
    stream.resolve(response);
  });
  server.listen(3000);

  stream.on('pipe', function onpipe(source) {
    source.on('end', server.close.bind(server));
  });

  return stream;
}


In the previous example I used Node’s standard HTTP module, but it could easily be replaced with Request – Streamify works fine with Request.

Streamify also helps when you need to make several requests before the stream you actually want is available:

var request = require('request');
var streamify = require('streamify');

module.exports = function myModule() {
  var stream = streamify();

  request.get('', function onAuthenticate(err, response) {
    if (err) return stream.emit('error', err);
    var options = { uri: '', json: true };
    request.get(options, function onList(err, result) {
      if (err) return stream.emit('error', err);
      stream.resolve(request.get('' + result.file));
    });
  });

  return stream;
};

This works wonders for any use case in which we want to work with a stream that will be around in the future, but is preceded by one or many asynchronous operations.


LinkAP, typed, SCXML Simulation

16 Nov 2012 | By Alex Young | Comments | Tags libraries testing node browser


LinkAP (GitHub: pfraze / link-ap, License: MIT) by Paul Frazee is a client-side application platform based around web workers and services. It actually prevents the use of what the author considers dangerous APIs, including XMLHttpRequest – one of the LinkAP design goals is to create an architecture for safely coordinating untrusted programs within an HTML document. The design document also addresses sessions:

In LinkAP, sessions are automatically created on the first request from an agent program to a domain. Each session must be approved by the environment. If a destination responds with a 401 WWW-Authenticate, the environment must decide whether to provide the needed credentials for a subsequent request attempt.

To build a project with LinkAP, check out the LinkAP repository and then run make, which creates a fresh project to work with. It expects to be hosted by a web server; you can't just open the index.html page locally. It comes with Bootstrap, so you get some fairly clean CSS to work with out of the box.


typed (GitHub: alexlawrence / typed, License: MIT, npm: typed) by Alex Lawrence is a static typing library. It can be used with Node and browsers. The project’s homepage has live examples that can be experimented with.

A function is provided called typed which can be used to create constructors that bestow runtime static type checking on both native types and prototype classes. There are two ways to declare types: comment parsing and suffix parsing:

// The 'greeting' argument must be a string
var Greeter = typed(function(greeting /*:String*/) {
  this.greeting = greeting;
});

// This version uses suffix parsing
var Greeter = typed(function(greeting_String) {
  this.greeting = greeting_String;
});
The library can be turned off if desired by using = false, which could be useful in production environments.

The author has included a build script and tests.
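To demystify the comment-parsing approach, here is a sketch of how such annotations could be extracted at runtime via Function.prototype.toString, which returns a function's source text including comments. This is not typed's actual implementation; the checked() wrapper and its error message are made up for illustration:

```javascript
// Sketch only: one way comment-based annotations *could* be parsed.
// checked() reads /*:Type*/ comments out of the function source and
// validates arguments on every call.
function checked(fn) {
  var src = fn.toString();
  var params = src.slice(src.indexOf('(') + 1, src.indexOf(')'));
  // Collect a type name (or null) per parameter position
  var types = params.split(',').map(function (p) {
    var m = p.match(/\/\*:(\w+)\*\//);
    return m ? m[1] : null;
  });
  return function () {
    var args = arguments;
    types.forEach(function (type, i) {
      if (type && Object.prototype.toString.call(args[i]) !== '[object ' + type + ']') {
        throw new TypeError('argument ' + i + ' must be a ' + type);
      }
    });
    return fn.apply(this, args);
  };
}

var greet = checked(function (greeting /*:String*/) {
  return 'Greeting: ' + greeting;
});

console.log(greet('hello')); // prints "Greeting: hello"
try { greet(42); } catch (e) { console.log(e.message); } // prints "argument 0 must be a String"
```

A real library would also handle constructors, nested parentheses, and minified code (which strips comments, one reason typed offers suffix parsing as an alternative).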

SCXML Simulation

“Check out this cool thing I built using d3,” says Jacob Beard. That does look cool, but what is it? It’s a visual representation of a state chart, based on SCXML. Jacob has written two libraries for working with SCXML.

We previously wrote about the SCION project in Physijs, SCION, mmd, Sorting.