Backbone.js Tutorial: Build Environment

29 Nov 2012 | By Alex Young | Comments | Tags backbone.js mvc node backgoog

This new Backbone.js tutorial series will walk you through building a single page web application that has a customised Backbone.sync implementation. I started building the application that these tutorials are based on back in August, and it’s been running smoothly for a few months now so I thought it was safe enough to unleash it.

Gmail to-do lists: not cool enough!

The application itself was built to solve a need of mine: a more usable Google Mail to-do list. The Gmail-based interface rubs me the wrong way to put it mildly, so I wrote a Backbone.sync method that works with Google’s APIs and stuck a little Bootstrap interface on top. As part of these tutorials I’ll also make a few suggestions on how to customise Bootstrap – there’s no excuse for releasing vanilla Bootstrap sites!

The app we’ll be making won’t feature everything that Google’s to-do lists support: I haven’t yet added support for indenting items for example. However, it serves my needs very well so hopefully it’ll be something you’ll actually want to use.

Roadmap

Over the next few weeks I’ll cover the following topics:

  • Creating a new Node project for building the single page app
  • Using RequireJS with Backbone.js
  • Google’s APIs
  • Writing and running tests
  • Creating the Backbone.js app itself
  • Techniques for customising Bootstrap
  • Deploying to Dropbox, Amazon S3, and potentially other services

Creating a Build Environment

If your focus is on client-side scripting, then I think this will be useful to you. Our goal is to create a development environment that can do the following:

  • Allow the client-side code to be written as separate files
  • Combine separate files into something suitable for deployment
  • Run the app locally using separate files (to make development and debugging easier)
  • Manage supporting Node modules
  • Run tests
  • Support Unix and Windows

To do this we’ll need a few tools and libraries, starting with Node itself.

Make sure your system has Node installed. The easiest way to install it is by using one of the Node packages for your system.

Step 1: Installing the Node Modules

Create a new directory for this project, and create a new file inside it called package.json that contains this JSON:

{
  "name": "btask"
, "version": "0.0.1"
, "private": true
, "dependencies": {
    "requirejs": "latest"
  , "connect": "2.7.0"
  }
, "devDependencies": {
    "mocha": "latest"
  , "chai": "latest"
  , "grunt": "latest"
  , "grunt-exec": "latest"
  }
, "scripts": {
    "grunt": "node_modules/.bin/grunt"
  }
}

Run npm install. These modules along with their dependencies will be installed in ./node_modules.

The private property prevents accidentally publicly releasing this module. This is useful for closed source commercial projects, or projects that aren’t suitable for release through npm.

Even if you’re not a server-side developer, managing dependencies with npm is useful because it makes it easier for other developers to work on your project. When a new developer joins your project, they can just type npm install instead of figuring out what downloads to grab.

Step 2: Local Web Server

Create a directory called app and a file called app/index.html:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>bTask</title>
  <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
  <script type="text/javascript" src="js/lib/require.js"></script>
</head>
<body>
</body>
</html>

Once you’ve done that, create a file called server.js in the top-level directory:

var connect = require('connect')
  , http = require('http')
  , app
  ;

app = connect()
  .use(connect.static('app'))
  .use('/js/lib/', connect.static('node_modules/requirejs/'))
  .use('/node_modules', connect.static('node_modules'))
  ;

http.createServer(app).listen(8080, function() {
  console.log('Running on http://localhost:8080');
});

This file uses the Connect middleware framework to act as a small web server for serving the files in app/. You can add new paths to it by copying the .use(connect.static('app')) line and changing app to something else.

Notice how I’ve mapped the web path for /js/lib/ to node_modules/requirejs/ on the file system – rather than copying RequireJS to where the client-side scripts are stored we can map it using Connect. Later on the build scripts will copy node_modules/requirejs/require.js to build/js/lib so the index.html file won’t have to change. This will enable the project to run on a suitable web server, or a hosting service like Amazon S3 for static sites.
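
For example, if the project later gains a hypothetical vendor/ directory, it could be exposed by adding one more line to the chain:

  .use('/vendor', connect.static('vendor'))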

To run this Node server, type npm start (or node server.js) and visit http://localhost:8080. It should display an empty page with no client-side errors.
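
Incidentally, npm start works here without a "start" entry because npm falls back to node server.js when a server.js file exists. If you prefer to be explicit, you can add one to the scripts section of package.json:

, "scripts": {
    "grunt": "node_modules/.bin/grunt"
  , "start": "node server.js"
  }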

Step 3: Configuring RequireJS

This project will consist of modules written in the AMD format. Each Backbone collection, model, view, and so on will exist in its own file, with a list of dependencies so RequireJS can load them as needed.

RequireJS projects that work this way are usually structured around a “main” file that loads the necessary dependencies to boot up the app. Create a file called app/js/main.js that contains the following skeleton RequireJS config:

requirejs.config({
  baseUrl: 'js',

  paths: {
  },

  shim: {
  }
});

require(['app'],

function(App) {
  window.bTask = new App();
});

The part that reads require(['app'], ...) will load app/js/app.js. Create this file with the following contents:

define([], function() {
  var App = function() {
  };

  App.prototype = {
  };

  return App;
});

This is a module written in the AMD format – the define function is provided by RequireJS and in future will contain all of the internal dependencies for the project.
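
As the app grows, internal dependencies will be listed in the first argument to define. For example, a module that depends on a collection might look like this (the collections/tasks path is purely hypothetical at this stage):

define(['collections/tasks'], function(Tasks) {
  var App = function() {
    this.tasks = new Tasks();
  };

  return App;
});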

To finish off this step, main.js needs to be loaded. Add a script tag near the bottom of app/index.html, just before the closing </body> tag:

<script type="text/javascript" src="js/main.js"></script>

If you refresh http://localhost:8080 in your browser, open the JavaScript console, and type the following, you should see that bTask has been instantiated:

window.bTask

Step 4: Testing

Everything you’ve learned in the previous three steps can be reused to create a unit testing suite. Mocha has already been installed by npm, so let’s create a suitable test harness.

Create a new directory called test/ (next to the app/ directory) that contains a file called index.html:

<html>
<head>
  <meta charset="utf-8">
  <title>bTask Tests</title>
  <link rel="stylesheet" href="/node_modules/mocha/mocha.css" />
  <style>
.toast-message, #main { display: none }
  </style>
</head>
<body>
  <div id="mocha"></div>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
  <script src="/node_modules/chai/chai.js"></script>
  <script src="/node_modules/mocha/mocha.js"></script>
  <script src="/js/lib/require.js"></script>
  <script src="/js/main.js"></script>
  <script src="setup.js"></script>
  <script src="app.test.js"></script>
  <script>require(['app'], function() { mocha.run(); });</script>
</body>
</html>

The require near the end just makes sure mocha.run only runs when /js/app.js has been loaded.

Create another file called test/setup.js:

var assert = chai.assert;

mocha.setup({
  ui: 'tdd'
, globals: ['bTask']
});

This file makes Chai’s assertions available as assert, which is how I usually write my tests. I’ve also told Mocha that bTask is an expected global variable.

With all this in place we can write a quick test. This file goes in test/app.test.js:

suite('App', function() {
  test('Should be present', function() {
    assert.ok(window.bTask);
  });
});

All it does is check that window.bTask has been defined – it proves RequireJS has loaded the app.

Finally, we need to update where Connect looks for files to serve. Modify server.js to look like this:

var connect = require('connect')
  , http = require('http')
  , app
  ;

app = connect()
  .use(connect.static('app'))
  .use('/js/lib/', connect.static('node_modules/requirejs/'))
  .use('/node_modules', connect.static('node_modules'))
  .use('/test', connect.static('test/'))
  .use('/test', connect.static('app'))
  ;

http.createServer(app).listen(8080, function() {
  console.log('Running on http://localhost:8080');
});

Restart the web server (from step 2), and visit http://localhost:8080/test/ (the last slash is important). Mocha should display that a single test has passed.

Step 5: Making Builds

Create a file called grunt.js for our “gruntfile”:

module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-exec');

  grunt.initConfig({
    exec: {
      build: {
        command: 'node node_modules/requirejs/bin/r.js -o require-config.js'
      }
    }
  });

  grunt.registerTask('copy-require', function() {
    grunt.file.mkdir('build/js/lib');
    grunt.file.copy('node_modules/requirejs/require.js', 'build/js/lib/require.js');
  });

  grunt.registerTask('default', 'exec copy-require');
};

This file uses the grunt-exec plugin by Jake Harding to run the RequireJS command that generates a build of everything in the app/ directory. To tell RequireJS what to build, create a file called require-config.js:

({
  appDir: 'app/'
, baseUrl: 'js'
, paths: {}
, dir: 'build/'
, modules: [{ name: 'main' }]
})

RequireJS will minify and concatenate the necessary files. The other Grunt task copies the RequireJS client-side code to build/js/lib/require.js, because our local Connect server was mapping this path for us. Why bother? It means that whenever we update RequireJS through npm, the app and builds will automatically get the latest version.

To run Grunt, type npm run-script grunt. The npm command run-script is used to invoke scripts that have been added to the package.json file. The package.json created in step 1 contained "grunt": "node_modules/.bin/grunt", which makes this work. I prefer this to installing Grunt globally.

I wouldn’t usually use Grunt for my own projects because I prefer Makefiles. In fact, a Makefile for the above would be very simple. However, this makes things more awkward for Windows-based developers, so I’ve included Grunt in an effort to support Windows. Also, if you typically work as a client-side developer, you might find Grunt easier to understand than learning GNU Make or writing the equivalent Node code (Node has a good file system module).

Summary

In this tutorial you’ve created a Grunt and RequireJS build environment for Backbone.js projects that use Mocha for testing. You’ve also seen how to use Connect to provide a convenient local web server.

This is basically how I build and manage all of my Backbone.js single page web applications. Although we haven’t written much code yet, as you’ll see over the coming weeks this approach works well for using Backbone.js and RequireJS together.

The code for this project can be found here: dailyjs-backbone-tutorial (2a8517).

Node Roundup: 0.8.15, JSPath, Strider

28 Nov 2012 | By Alex Young | Comments | Tags node modules apps express json
You can send in your Node projects for review through our contact form or @dailyjs.

Node 0.8.15

Node 0.8.15 is out, which updates npm to 1.1.66, fixes a net and tls resource leak, and has some miscellaneous fixes for Windows and Unix systems.

JSPath

JSPath (License: MIT/GPL, npm: jspath) by Filatov Dmitry is a DSL for working with JSON. The DSL can be used to select values, and looks a bit like CSS selectors:

// var doc = { "books" : [ { "id" : 1, "title" : "Clean Code", "author" : { "name" : "Robert C. Martin" } ...

JSPath.apply('.books.author', doc);
// [{ name : 'Robert C. Martin' }, ...

It can also be used to apply conditional expressions, like this: .books{.author.name === 'Robert C. Martin'}.title. Other comparison operators are also supported like >= and !==. Logical operators can be used to combine predicates, and the API also supports substitution.
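
Applied to the sample document above, a conditional query looks something like this (the result is inferred from the snippet, so treat it as illustrative):

JSPath.apply(".books{.author.name === 'Robert C. Martin'}.title", doc);
// ['Clean Code', ...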

The author has included unit tests, and build scripts for generating a browser-friendly version.

Strider

Strider (GitHub: Strider-CD / strider, License: BSD, npm: strider) by Niall O’Higgins is a continuous deployment solution built with Express and MongoDB. It’s designed to be easy to deploy to Heroku, and can be used to deploy applications to your own servers. It directly supports Node and Python applications, and the author is also working on supporting Rails and JVM languages.

Strider integrates with GitHub, so it can display a list of your repositories and allow them to be deployed as required. It can also test projects, so it can be used for continuous integration if deployment isn’t required.

The documentation includes full details for installing Strider, linking a GitHub account, and then setting it up as a CI/CD server with an example project.

jQuery Roundup: 1.8.3, UI 1.9.2, oolib.js

27 Nov 2012 | By Alex Young | Comments | Tags jquery jquery-ui oo
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

jQuery 1.8.3

jQuery 1.8.3 and jQuery Color 2.1.1 are out. There are a few interesting bug fixes in this release that are worth checking out.

jQuery UI 1.9.2

jQuery UI 1.9.2 is out:

This update brings bug fixes for Accordion, Autocomplete, Button, Datepicker, Dialog, Menu, Tabs, Tooltip and Widget Factory.

The 1.9.2 changelog contains a full breakdown of the recent changes.

oolib.js

oolib.js (GitHub: idya / oolib, License: MIT) by Zsolt Szloboda is a JavaScript object-oriented library that is conceptually similar to jQuery UI’s Widget Factory. It supports private methods, class inheritance, object initialisation and deinitialisation, super methods, and it’s fairly small (min: 1.6K, gz: 0.7K).

It looks like this in practice:

var MyClass = oo.createClass({
  _create: function(foo) {
    this.myField = foo;
  },

  _myPrivateMethod: function(bar) {
    return this.myField + bar;
  },

  myPublicMethod: function(baz) {
    return this._myPrivateMethod(baz);
  }
});

var MySubClass = oo.createClass(MyClass, {
  _myPrivateMethod: function(bar) {
    return this.myField + bar + 1;
  }
});

JS101: __proto__

26 Nov 2012 | By Alex Young | Comments | Tags js101 tutorials language beginner

When I originally wrote about prototypes in JS101: Prototypes a few people were confused that I didn’t mention the __proto__ property. One reason I didn’t mention it is I was sticking to standard ECMAScript for the most part, using the Annotated ECMAScript 5.1 site as a reference. It’s actually hard to talk about prototypes without referring to __proto__, though, because it serves a very specific and useful purpose.

Recall that objects are created using constructors:

function User() {
}

var user = new User();

The prototype property can be used to add properties to instances of User:

function User() {
}

User.prototype.greet = function() {
  return 'hello';
};

var user = new User();
user.greet();

So far so good. The original constructor can be referenced using the constructor property on an instance:

assert.equal(user.constructor, User);

However, user.prototype is not the same as User.prototype. What if we wanted to get hold of the original prototype where the greet method was defined based on an instance of a User?

That’s where __proto__ comes in. Given that fact, we now know the following two statements to be true:

assert.equal(user.constructor, User);
assert.equal(user.__proto__, User.prototype);

Unfortunately, __proto__ doesn’t appear in ECMAScript 5 – so where does it come from? As noted by the documentation on MDN it’s a non-standard property. Or is it? It’s included in Ecma-262 Edition 6, which means whether it’s standard or not depends on the version of ECMAScript that you’re using.

It follows that an instance’s constructor should contain a reference to the constructor’s prototype. If this is true, then we can test it using these assertions:

assert.equal(user.constructor.prototype, User.prototype);
assert.equal(user.constructor.prototype, user.__proto__);

The standards also define Object.getPrototypeOf – this returns the internal [[Prototype]] property of an object. That means we can use it to access the constructor’s prototype:

assert.equal(Object.getPrototypeOf(user), User.prototype);

Putting all of this together gives this script which will pass in Node and Chrome (given a suitable assertion library):

var assert = require('assert');

function User() {
}

var user = new User();

assert.equal(user.__proto__, User.prototype);
assert.equal(user.constructor, User);
assert.equal(user.constructor.prototype, User.prototype);
assert.equal(user.constructor.prototype, user.__proto__);
assert.equal(Object.getPrototypeOf(user), User.prototype);

Internal Prototype

The confusion around __proto__ arises because of the term internal prototype:

All objects have an internal property called [[Prototype]]. The value of this property is either null or an object and is used for implementing inheritance.

Internally there has to be a way to access the constructor’s prototype to correctly implement inheritance – whether or not this is available to us is another matter. Why is accessing it useful to us? In the wild you’ll occasionally see people setting an object’s __proto__ property to make objects look like they inherit from another object. This used to be the case in Node’s assertion module, but Node’s util.inherits method is a more idiomatic way to do it:

// Compare to: assert.AssertionError.__proto__ = Error.prototype;
util.inherits(assert.AssertionError, Error);

This was changed in assert: remove unnecessary use of __proto__.

The Constructor’s Prototype

The User example’s internal prototype is set to Function.prototype:

assert.equal(User.__proto__, Function.prototype);

If you’re about to put on your hat, pick up your briefcase, and walk right out the door: hold on a minute. You’re coming to the end of the chain – the prototype chain that is:

assert.equal(User.__proto__, Function.prototype);
assert.equal(Function.prototype.__proto__, Object.prototype);
assert.equal(Object.prototype.__proto__, null);

Remember that the __proto__ property is the internal prototype – this is how JavaScript’s inheritance chain is implemented. The User constructor inherits from Function.prototype, which in turn inherits from Object.prototype, and Object.prototype’s internal prototype is null, which lets the inheritance algorithm know it has reached the end of the chain.

Therefore, adding a method to Object.prototype will make it available to every object. Properties of the Object Prototype Object include toString, valueOf, and hasOwnProperty. That means instances of the User constructor in the previous example will have these methods.
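
For example – and this is only to illustrate the chain, since extending Object.prototype is rarely a good idea in real code – a method added there shows up on User instances too:

Object.prototype.describe = function() {
  return 'An object with ' + Object.keys(this).length + ' own properties';
};

var user = new User();
user.describe(); // inherited from Object.prototype via the prototype chain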

Pithy Closing Remark

JavaScript’s inheritance model is not class-based. Joost Diepenmaat’s post, Constructors considered mildly confusing, summarises this as follows:

In a class-based object system, typically classes inherit from each other, and objects are instances of those classes. … constructors do nothing like this: in fact constructors have their own [[Prototype]] chain completely separate from the [[Prototype]] chain of objects they initialize.

Rather than visualising JavaScript objects as “classes”, try to think in terms of two parallel lines of prototype chains: one for constructors, and one for initialised objects.

Blanket.js, xsdurationjs, attr

23 Nov 2012 | By Alex Young | Comments | Tags libraries testing node browser dates

Blanket.js

Blanket and QUnit

Blanket.js (GitHub: Migrii / blanket, License: MIT, npm: blanket) by Alex Seville is a code coverage library tailored for Mocha and QUnit, although it should work elsewhere. Blanket wraps around code that requires coverage, and this can be done by applying a data-cover attribute to script tags, or by passing it a path, regular expression, or array of paths in Node.

It actually parses and instruments code using uglify-js, and portions of Esprima and James Halliday’s falafel library.

The author has prepared an example test suite that you can run in a browser: backbone-koans-qunit. Check the “Enable coverage” box, and it will run through the test suite using Blanket.js.

xsdurationjs

xsdurationjs (License: MIT, npm: xsdurationjs) by Pedro Narciso García Revington is an implementation of Adding durations to dateTimes from the W3C Recommendation XML Schema Part 2. By passing it a duration and a date, it will return a new date by evaluating the duration expression.

The duration expressions are ISO 8601 durations – these can be quite short like P5M, or contain year, month, day, and time:

For example, “P3Y6M4DT12H30M5S” represents a duration of “three years, six months, four days, twelve hours, thirty minutes, and five seconds”.

The project includes Vows tests that include coverage for the W3C functions (fQuotient and modulo).

attr

attr (License: MIT) by Jonah Fox is a component for “evented attributes with automatic dependencies.” Once an attribute has been created with attr('name'), it will emit events when the value changes. Convenience methods are also available for toggling boolean values and getting the last value.

It’s designed to be used in browsers, and comes with Mocha tests.

The State of Backbone.js

22 Nov 2012 | By Alex Young | Comments | Tags backbone.js mvc

Looking at backbonejs.org you’d be forgiven for thinking the project has stagnated somewhat. It’s currently at version 0.9.2, released back in March, 2012. So what’s going on? It turns out a huge amount of work! The developers have committed a slew of changes since then. The latest version and commit history is readily available in the master Backbone.js branch on GitHub. Since March there has been consistent activity on the master branch, including community contributions. The core developers are working hard on releasing 1.0.

If you’ve been sticking with the version from the Backbone.js website (0.9.2), you’re probably wondering what’s changed between that version and the current code in the master branch. The commit history lists a range of new features and tweaks, along with a lot of fixes, refactored internals, and documentation improvements.

If you’re interested in testing this against your Backbone-powered apps, then download the Backbone.js edge version to try it out. I’m not sure when the next major version will be released, but I’ll be watching both the Backbone.js Google Group and GitHub repository for news.

Node Roundup: Knockout Winners, Node for Raspberry Pi, Benchtable

21 Nov 2012 | By Alex Young | Comments | Tags node raspberry-pi hardware benchmarking
You can send in your Node projects for review through our contact form or @dailyjs.

Node.js Knockout Winners Announced

Disasteroids

Node.js Knockout had an outstanding 167 entries this year. The overall winner was Disasteroids by SomethingCoded. It’s an original take on several arcade classics: imagine a multiplayer version of Asteroids crossed with the shooting mechanics of Missile Command, but with projectiles that are affected by gravity.

The other winners are currently listed on the Node.js Knockout site, and they all deserve plenty of well-earned kudos.

Congratulations to all the winners, and be sure to browse the rest of the entries for hours of fun!

Node on Raspberry Pi

Node Pi

If you’ve got a Raspberry Pi, you probably already know it’s possible to run Node on the tiny ARM-based computer. If not, then Node.js Debian package for ARMv5 by Vincent Rabah explains how to get Node running with his custom Debian package.

“But the Raspberry Pi is just a cheap computer, what’s so great about it?” I hear you cry in the comments. There’s an intrinsic value to the Raspberry Pi Foundation’s efforts in making such hardware suitable for school children. No offence to Microsoft, but in a country where Office was on the curriculum for “IT” we can use any help we can get aiding the next generation of hackers and professional engineers.

Benchtable

I love the command line: it’s where I write code, DailyJS, notes, and email – colourful text and ancient Unix utilities abound. But I also like to fiddle with the way things look. For example, if I’m writing benchmarks I don’t want to just print them out in boring old monochrome text – I want them to look cool.

Ivan Zuzak’s Benchtable (License: Apache 2.0, npm: benchtable) is built for just such a need. It prints benchmarks in tables, making it a little bit easier to compare values visually. It’s built on Benchmark.js, which is one of the most popular benchmarking modules.

The API is based around the Benchtable prototype which is based on Benchmark.Suite, so it can be dropped into an existing benchmarking suite without too much effort.

jQuery Roundup: pickadate.js, jQuery Interdependencies, Timer.js

20 Nov 2012 | By Alex Young | Comments | Tags jquery date-pickers forms timers
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

pickadate.js

pickadate.js (GitHub: amsul / pickadate.js, License: MIT) by Amsul is a date picker that works with type="date" or regular text fields, supports various types of date formatting options, and is easy to theme.

The pickadate.js documentation explains how to use and configure the plugin. Basic usage is just $('.datepicker').datepicker(), given a suitable form field.

jQuery Interdependencies

jQuery Interdependencies (GitHub: miohtama / jquery-interdependencies, License: MIT) by Mikko Ohtamaa is a plugin for expressing relationships between form fields. Rule sets can be created that relate the value of a field to the presence of another field. The simplest example of this would be selecting “Other”, and then filling out a value in a text field.

It works with all standard HTML inputs, and can handle nested decision trees. There’s also some detailed documentation (jQuery Interdependencies documentation) and an introductory blog post that covers the basics.

Timer.js

Florian Schäfer sent in his forked version of jQuery Chrono, Timer.js. It’s a periodic timer API for browsers and Node, with some convenience methods and time string expression parsing:

timer.every('2 seconds', function () {});
timer.after('5 seconds', function () {});

He also sent in Lambda.js which is a spin-off from Oliver Steele’s functional-javascript library. String expressions are used to concisely represent small functions, or lambdas:

lambda('x -> x + 1')(1); // => 2
lambda('x y -> x + 2*y')(1, 2); // => 5
lambda('x, y -> x + 2*y')(1, 2); // => 5

Mastering Node Streams: Part 2

19 Nov 2012 | By Roly Fentanes | Comments | Tags tutorials node streams

If you’ve ever used the Request module, you’ve probably noticed that calling it returns a stream object synchronously. You can pipe it right away. To see what I mean, this is how you would normally pipe HTTP responses:

var http = require('http');

http.get('http://www.google.com', function onResponse(response) {
  response.pipe(destinationStream);
});

Compare that example to using the Request module:

var request = require('request');

request('http://www.google.com').pipe(destinationStream);

That’s easier to understand, shorter, and requires one less level of indentation. In this article, I’ll explain how this is done so you can make modules that work this way.

How to do It

First, it’s vitally important to understand how the stream API works. If you haven’t done so yet, take a look at the stream API docs – I promise it’s not too long.

We’ll start with readable streams. Readable streams can be pause()d and resume()d. If we’re using another object to temporarily represent one while it’s not available, the reasonable thing to do would be to keep a paused property on this object, updated properly as pause() and resume() are called. Some readable streams also have destroy() and setEncoding(). Again, the first thing that might come to mind is to keep destroyed and encoding properties on the temporary stream.

But not all readable streams are created equal: some might have more methods, or they might not have a destroy() method at all. The most reliable approach I’ve found is to look at the stream’s prototype, iterate through its functions (including inherited ones), and buffer all calls to them until the real stream is available. This works for a writable stream’s write() and end() methods, and even for event emitter methods such as on().

Standard stream methods don’t return anything, except for write() which returns false if the write buffer is full. In this case it will be false as long as the real stream is not yet available.
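
To make the buffering idea concrete, here’s a rough sketch of how such a placeholder might work – the names and structure are mine, not taken from the Streamify source:

var Stream = require('stream');

// Record method calls until resolve() provides the real stream, then replay them.
function createPlaceholder(proto) {
  var queued = [];
  var real = null;
  var placeholder = new Stream();

  for (var name in proto) {
    // pipe is handled separately (see below)
    if (typeof proto[name] !== 'function' || name === 'pipe') continue;
    (function(method) {
      placeholder[method] = function() {
        if (real) return real[method].apply(real, arguments);
        queued.push([method, arguments]);
        // write() should report a full buffer until the real stream arrives
        return method === 'write' ? false : undefined;
      };
    })(name);
  }

  placeholder.resolve = function(stream) {
    real = stream;
    queued.forEach(function(call) {
      real[call[0]].apply(real, call[1]);
    });
    queued = [];
  };

  return placeholder;
}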

Another special case is pipe(). Every readable stream’s pipe method works the same way. It doesn’t need to be overwritten or queued. When pipe() is called, it listens for events from both the source and destination streams. It writes to the destination stream whenever data is emitted from the source, and it pauses and resumes the source as needed. We’re already queueing calls to methods inherited from event emitter.

What about emitting an event before the real source stream is available? You couldn’t do this if you queued calls to emit(). The events would fire only after the real stream becomes available. If you’re a perfectionist, you would want to consider this very rare case and come up with a solution.

Introducing Streamify

Streamify does all of this for you, so you don’t have to deal with the complexities, but still get the benefits of a nicer API. Our previous http example can be rewritten to work the way Request does.

var http = require('http');
var streamify = require('streamify');

var stream = streamify();
http.get('http://www.google.com', function onResponse(response) {
  stream.resolve(response);
});

// `stream` can be piped already
stream.pipe(destinationStream);

You might think this is unnecessary since Request already exists and it already does this. Keep in mind Request is only one module which handles one type of stream. This can be used with any type of stream which is not immediately available in the current event loop iteration.

You could even do something crazy like this:

var http = require('http');
var fs = require('fs');
var streamify = require('streamify');

function uploadToFirstClient() {
  var stream = streamify({ superCtor: http.ServerResponse });

  var server = http.createServer(function onRequest(request, response) {
    response.writeHead(200);
    stream.resolve(response);
  }).listen(3000);

  stream.on('pipe', function onpipe(source) {
    source.on('end', server.close.bind(server));
  });

  return stream;
}

fs.createReadStream('/path/to/myfile').pipe(uploadToFirstClient());

In the previous example I used Node’s standard HTTP module, but it could easily be replaced with Request – Streamify works fine with Request.

Streamify also helps when you need to make several requests before the stream you actually want is available:

var request = require('request');
var streamify = require('streamify');

module.exports = function myModule() {
  var stream = streamify();

  request.get('http://somesite.com/authenticate', function onAuthenticate(err, response) {
    if (err) return stream.emit('error', err);
    
    var options = { uri: 'http://somesite.com/listmyfiles', json: true };
    request.get(options, function onList(err, result) {
      if (err) return stream.emit('error', err);
      stream.resolve(request.get('http://somesite.com/download/' + result.file));
    });
  });

  return stream;
};

This works wonders for any use case in which we want to work with a stream that will be around in the future, but is preceded by one or many asynchronous operations.

LinkAP, typed, SCXML Simulation

16 Nov 2012 | By Alex Young | Comments | Tags libraries testing node browser

LinkAP

LinkAP (GitHub: pfraze / link-ap, License: MIT) by Paul Frazee is a client-side application platform based around web workers and services. It actually prevents the use of what the author considers dangerous APIs, including XMLHttpRequest – one of the LinkAP design goals is to create an architecture for safely coordinating untrusted programs within an HTML document. The design document also addresses sessions:

In LinkAP, sessions are automatically created on the first request from an agent program to a domain. Each session must be approved by the environment. If a destination responds with a 401 WWW-Authenticate, the environment must decide whether to provide the needed credentials for a subsequent request attempt.

To build a project with LinkAP, check out the LinkAP repository and then run make. This will create a fresh project to work with. It expects to be hosted by a web server – you can’t just open the index.html page locally. It comes with Bootstrap, so you get some fairly clean CSS to work with out of the box.

typed

typed (GitHub: alexlawrence / typed, License: MIT, npm: typed) by Alex Lawrence is a static typing library. It can be used with Node and browsers. The project’s homepage has live examples that can be experimented with.

A function is provided called typed which can be used to create constructors that bestow runtime static type checking on both native types and prototype classes. There are two ways to declare types: comment parsing and suffix parsing:

// The 'greeting' argument must be a string
var Greeter = typed(function(greeting /*:String*/) {
  this.greeting = greeting;
});

// This version uses suffix parsing
var Greeter = typed(function(greeting_String) {
  this.greeting = greeting_String;
});

The library can be turned off if desired by using typed.active = false – this could be useful for turning it off in production environments.

The author has included a build script and tests.

SCXML Simulation

“Check out this cool thing I built using d3: http://goo.gl/wG5cq,” says Jacob Beard. That does look cool, but what is it? It’s a visual representation of a state chart, based on SCXML. Jacob has written two libraries for working with SCXML.

We previously wrote about the SCION project in Physijs, SCION, mmd, Sorting.

Node Daemon Architecture

15 Nov 2012 | By Alex Young | Comments | Tags node unix daemons

I’ve been researching the architecture of application layer server implementations in Node. I’m talking SMTP, IMAP, NNTP, Telnet, XMPP, and all that good stuff.

Node has always seemed like the perfect way to write network-oriented daemons. If you’re a competent JavaScript developer, it unlocks powerful asynchronous I/O features. In The Architecture of Open Source Applications: nginx by Andrew Alexeev, the author explains nginx’s design in detail – in case you don’t know, nginx is an HTTP daemon that’s famous for solid performance. Andrew’s review states the following:

It was actually inspired by the ongoing development of advanced event-based mechanisms in a variety of operating systems. What resulted is a modular, event-driven, asynchronous, single-threaded, non-blocking architecture which became the foundation of nginx code.

Furthermore,

Connections are processed in a highly efficient run-loop in a limited number of single-threaded processes called workers. Within each worker nginx can handle many thousands of concurrent connections and requests per second.

Highly efficient run-loop and event-based mechanisms? That sounds exactly like a Node program! In fact, Node comes with several built-in features that make dealing with such an architecture a snap.

Events

If you read DailyJS you probably know all about EventEmitter. If not, then this is the heart of Node’s event-based APIs. Learn EventEmitter and the Stream API and you’ll be able to easily learn Node’s other APIs very quickly.

EventEmitter is the nexus of Node’s APIs. You’ll see it underlying the network APIs, including the HTTP and HTTPS servers. You can happily stuff it into your own classes with util.inherits – and you should! At this point, many popular third-party Node modules use EventEmitter or one of its descendants as a base class.

If you’re designing a server of some kind, it would be wise to consider basing it around EventEmitter. And once you realise how common this is, you’ll find all kinds of ways to improve the design of everything from daemons to web applications. For example, if I need to notify disparate entities within an Express application that something has happened, knowing that Express mixes EventEmitter into the app object means I can do things like app.on and app.emit rather than requiring access to a global app object.
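
To make the inheritance route concrete, here’s a minimal sketch using util.inherits (the Daemon class is purely illustrative):

var EventEmitter = require('events').EventEmitter;
var util = require('util');

function Daemon() {
  EventEmitter.call(this);
}

// Daemon instances now have on(), emit(), and the rest of the EventEmitter API
util.inherits(Daemon, EventEmitter);

Daemon.prototype.start = function() {
  this.emit('started');
};

var daemon = new Daemon();
daemon.on('started', function() {
  console.log('daemon is running');
});
daemon.start();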

Process

Guess what else is an instance of EventEmitter? The process global object. It can be used to manage the current process – including events for signals.
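
For example, a daemon can react to signals with the same on() API used everywhere else:

process.on('SIGINT', function() {
  console.log('Caught SIGINT, cleaning up before exit');
  process.exit(0);
});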

Domain

Domains can be used to group I/O operations – that means working with errors in nested callbacks is a little bit less painful:

If any of the event emitters or callbacks registered to a domain emit an error event, or throw an error, then the domain object will be notified, rather than losing the context of the error in the process.on('uncaughtException') handler, or causing the program to exit with an error code.

Domains are currently experimental, but from my own experiences writing long-running daemons with Node, they definitely bring a level of sanity to my spaghetti code.
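
As a rough sketch of what that looks like in practice (the error handling here is deliberately simplistic):

var domain = require('domain');
var fs = require('fs');

var d = domain.create();

d.on('error', function(err) {
  // Errors thrown or emitted by I/O started inside d.run() end up here,
  // instead of crashing the process via uncaughtException.
  console.error('Caught by domain:', err.message);
});

d.run(function() {
  fs.readFile('/does/not/exist', function(err, data) {
    if (err) throw err; // routed to the domain's error listener
  });
});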

Cluster

The Cluster module is also experimental, but makes it easier to spawn multiple Node processes that share server ports. These processes, or workers, can communicate using IPC (Inter-process communication) – all using the EventEmitter-based API you know and love.
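
A minimal sketch looks something like this – each worker ends up sharing the same port:

var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU
  os.cpus().forEach(function() {
    cluster.fork();
  });

  cluster.on('exit', function(worker) {
    console.log('Worker ' + worker.id + ' exited');
  });
} else {
  http.createServer(function(req, res) {
    res.end('Handled by worker ' + cluster.worker.id + '\n');
  }).listen(8080);
}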

In the Wild

I’ve already mentioned that Express “mixes in” EventEmitter. This is in contrast to the inheritance-based approach detailed in Node’s documentation. It’s incorrect to say Express does this because it’s actually done by Connect, in connect.js:

function createServer() {
  function app(req, res){ app.handle(req, res); }
  utils.merge(app, proto);
  utils.merge(app, EventEmitter.prototype);
  app.route = '/';
  app.stack = [];
  for (var i = 0; i < arguments.length; ++i) {
    app.use(arguments[i]);
  }
  return app;
};

The utils.merge method copies properties from one object to another:

exports.merge = function(a, b){
  if (a && b) {
    for (var key in b) {
      a[key] = b[key];
    }
  }
  return a;
};

There’s also a unit test that confirms that the authors intended to mix in EventEmitter.

An extremely popular way to daemonize (never demonize, which means to “portray as wicked and threatening”) a program is to use the forever module. It can be used as a command-line script or as a module, and is built on some modules that are useful for creating Node daemons, like forever-monitor and winston.

However, what I’m really interested in is the architecture of modules that provide services rather than utility modules for managing daemons. One such example is statsd from Etsy. It’s a network daemon for collecting statistics. The core server code, stats.js, uses net.createServer and a switch statement to execute commands based on the server’s protocol. Notable uses of EventEmitter include backendEvents for asynchronously communicating with the data storage layer, and automatic configuration file reloading. I particularly like the fact the configuration file is reloaded – it’s a good use of Node’s built-in features.

James Halliday’s smtp-protocol can be used to implement SMTP servers (it isn’t itself an SMTP server). The server part of the module is based around a protocol parser, ServerParser – a prototype class and a class for representing clients (Client). Servers are created using net.createServer, much like the other projects I’ve already mentioned.

This module is useful because it demonstrates how to separate low-level implementation details from the high-level concerns of implementing a real production-ready server. Completely different SMTP servers could be built using smtp-protocol as the foundation. Real SMTP servers need to deal with things like relaying messages, logging, and managing settings, so James has separated that out whilst retaining a useful level of functionality for his module.

I’ve also been reading through the telnet module, which like smtp-protocol can be used to implement a telnet server.

At the moment there seems to be a void between these reusable server modules and daemons that can be installed on production servers. Node makes asynchronous I/O more accessible, which will lead to novel server implementations like Etsy’s stats server. If you’ve got an idea for a niche application layer server, then why not build it with Node?

Node Roundup: Knockout, bignumber.js, r...e, Mongoose-Filter-Denormalize

14 Nov 2012 | By Alex Young | Comments | Tags node number mongo mongoose
You can send in your Node projects for review through our contact form or @dailyjs.

Node.js Knockout

Node.js Knockout site

Node.js Knockout is currently being judged, the winners will be announced on the 20th of November. The site is actually a small game in itself – click around to move your character and type to say something.

bignumber.js

bignumber.js (License: MIT Expat, npm: bignumber.js) by Michael Mclaughlin is an arbitrary-precision arithmetic module. It works in Node and browsers, and is available as an AMD module. It comes with both tests and benchmarks, which is useful because one of Mike’s goals was to write something faster and easier to use than JavaScript versions of Java’s BigDecimal.

Objects created with BigNumber behave like the built-in Number type in that they have toExponential, toFixed, toPrecision, and toString methods.
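
A quick usage sketch – the plus method is part of BigNumber’s arithmetic API, and the output comments are illustrative:

var BigNumber = require('bignumber.js');

new BigNumber('0.1').plus('0.2').toString(); // '0.3'
0.1 + 0.2;                                   // 0.30000000000000004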

Mike found the only other serious arbitrary-precision library for decimal arithmetic on npm is bigdecimal, which originates from the Google GWT project. Mike has written some examples of bigdecimal’s problems to illustrate bugs he found while working with it, and offers bignumber.js as an alternative.

r…e

r…e (License: MIT, npm: r…e) by Veselin Todorov is a module for manipulating range expressions. Ranges are specified as separate arguments or strings, and a suitable array will be returned:

range(1, 3).toArray();
range('1..3').toArray();
// [1, 2, 3]

range('a', 'c').toArray();
// ['a', 'b', 'c']

Stepped ranges are supported (0, 10, 5) as well as convenience methods like range(1, 3).include(2), map, join, sum, and so on. It works in browsers, and includes Mocha tests.
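
Based on the methods mentioned above, usage looks roughly like this (the step being the third argument is an assumption on my part):

range(0, 10, 5).toArray(); // [0, 5, 10]
range(1, 3).include(2);    // true
range(1, 3).sum();         // 6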

Mongoose-Filter-Denormalize

Mongoose-Filter-Denormalize (License: MIT) by Samuel Reed is a filtering and denormalization plugin for Mongoose – it essentially provides a way of preventing Mongoose from accidentally exposing sensitive data:

UserSchema.plugin(filter, {
  readFilter: {
    'owner' : ['name', 'address', 'fb.id', 'fb.name', 'readOnlyField'],
    'public': ['name', 'fb.name']
  },
  writeFilter: {
    'owner' : ['name', 'address', 'fb.id', 'writeOnlyField']
  },
  // 'nofilter' is a built-in filter that does no processing, be careful with this
  defaultFilterRole: 'nofilter',
  sanitize: true // Escape HTML in strings
});

Now when passing the result of a findOne or other query to, say, res.send in your Express app, fields can be restricted based on user:

User.findOne({ name: 'Foo Bar' }, User.getReadFilterKeys('public'), function(err, user) {
  if (err) return next(err);
  res.send({ success: true, users: [user] });
});

jQuery Roundup: SliderShock, printThis, readMore

13 Nov 2012 | By Alex Young | Comments | Tags jquery slideshow printing truncation sponsored-content
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

jQuery SliderShock

SliderShock example

jQuery SliderShock is a responsive slider plugin that can be dropped into WordPress sites. There’s full documentation for regular websites and WordPress sites. The WordPress version has some additional features, like built-in support for sliders from external data sources like RSS. The plugin supports various options, including up to 31 transition effects, control over delay time and positioning, and easy styling.

Clicking Build Your Own on the SliderShock homepage displays all of the available options:

SliderShock options

SliderShock can display text captions within slides, and comes bundled with lots of CSS3-based effects. The free version, licensed for use in personal projects, has 10 effects, and the premium version has 31 effects and 39 skins. A license for a single site costs $19, while a license covering multiple sites costs $29. There’s also a developer license for $99 that allows resale. “Combo” licenses are available if you wish to buy both the WordPress and jQuery versions at once; otherwise you’ll have to license them separately.

jquery.printThis

jquery.printThis (License: MIT/GPL) by Jason Day is a fork of permanenttourist / jquery.jqprint, and based on Ben Nadel’s Ask Ben: Print Part Of A Web Page With jQuery post – a combination of several things to solve the problem of printing a given element on a page. Jason’s changes can optionally include page styles, import additional stylesheets, and avoids using document.write.

It’s interesting to see how this works internally – the plugin is concise but it takes a bit of iframe manipulation to get the desired results.

jQuery.readMore

Dealing with dynamically truncating content is tricky, and there are many solutions out there. Stephan Ahlf sent in his solution, jQuery.readMore (GitHub: s-a / jQuery.readMore, License: MIT/GPL) which uses a method called $.isOverflowed to add text until there isn’t room for any more.

Previously covered related plugins include jQuery.smarttruncation, and jQuery.textFit.

How Does Watch.js Work?

12 Nov 2012 | By Alex Young | Comments | Tags observer code-review

Last week saw a lot of interest in Watch.js (License: MIT) by Gil Lopes Bueno, so I thought it would be interesting to take a look at how it works. It allows changes to properties on objects to be observed – whenever a change is made, a callback receives the new and old values, as well as the property name:

var ex1 = {
  attr1: 'initial value of attr1'
, attr2: 'initial value of attr2'
};

ex1.watch('attr1', function() {
  alert('attr1 changed');
});

The watch method can also accept an array of property names, and omitting it will cause the callback to run whenever any property changes. The unwatch method will remove a watcher.
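
Based on that description, watching several properties and then removing a watcher looks roughly like this – the callback’s argument order is a guess from the paragraph above:

ex1.watch(['attr1', 'attr2'], function(prop, newValue, oldValue) {
  console.log(prop + ' changed from', oldValue, 'to', newValue);
});

// With no property names, any change triggers the callback
ex1.watch(function() {
  console.log('something on ex1 changed');
});

ex1.unwatch('attr1');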

First, let’s get the elephant out of the room: this implementation modifies Object.prototype – that’s where the watch method comes from in the previous example. The author is planning to change the API to avoid modifying Object.prototype.

Second, this method is polymorphic in that it behaves differently based on the supplied arguments. This is quite common in client-side code. It’s implemented by looking at the number of arguments without requiring too much type checking, in watch:

WatchJS.defineProp(Object.prototype, "watch", function() {

    if (arguments.length == 1) 
        this.watchAll.apply(this, arguments);
    else if (WatchJS.isArray(arguments[0])) 
        this.watchMany.apply(this, arguments);
    else
        this.watchOne.apply(this, arguments);

});

You’re probably wondering what WatchJS.defineProp is. It’s actually a convenience method to use ES5’s Object.defineProperty in browsers that support it:

defineProp: function(obj, propName, value){
    try{
        Object.defineProperty(obj, propName, {
                enumerable: false
            , configurable: true
            , writable: false
            , value: value
            });
    }catch(error){
        obj[propName] = value;
    }
}

The watchMany method uses a utility method, WatchJS.isArray, to determine how to loop over the supplied arguments, calling watchOne on each in turn. The watchAll method calls watchMany, so there’s a lot of internal code reuse.

Most of the work gets carried out by watchOne. This calls WatchJS.defineGetAndSet(obj, prop, getter, setter) with a custom getter and setter to wrap around values so they can be watched. However, watching values change has a few complications.

For one thing, arrays have mutator methods like push and pop. Therefore, watchFunctions is called to wrap each of these methods with a suitable call to WatchJS.defineProp. Also, when a property is set to a new value, all of these wrapped methods will be lost. To get around this, the custom setter calls obj.watchFunctions(prop) again.

When a value has changed, callWatchers is called. An internal list of watcher callbacks indexed on property names is maintained and called by a for-in loop. It’s important to note that these callbacks only run if the values are actually different. This is tested by calling JSON.stringify(oldval) != JSON.stringify(newval) – presumably the author used this approach because it’s an easy way to compare the value of objects. I’d consider benchmarking this against other solutions.

Finally, we get to WatchJS.defineGetAndSet, which attempts to use Object.defineProperty if available, otherwise Object.prototype.__defineGetter__.call and Object.prototype.__defineSetter__.call are used.

Conclusion

Rather than using exceptions to track browser support, it might be better to use capability testing to check for Object.defineProperty and Object.prototype.__defineGetter__.call when the first call is made, then cache the result. As noted in the project’s issues, Object.observe should provide a more efficient approach in the future – this is an accepted Harmony proposal.
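
As an illustration of that suggestion, the checks could be run once and cached – this is a sketch, not the library’s actual code:

var canDefineProperty = (function() {
  try {
    Object.defineProperty({}, 'x', { value: 1 });
    return true;
  } catch (e) {
    return false;
  }
})();

var canUseLegacyAccessors = typeof Object.prototype.__defineGetter__ === 'function';

function defineGetAndSet(obj, prop, getter, setter) {
  if (canDefineProperty) {
    Object.defineProperty(obj, prop, { get: getter, set: setter, enumerable: true, configurable: true });
  } else if (canUseLegacyAccessors) {
    obj.__defineGetter__(prop, getter);
    obj.__defineSetter__(prop, setter);
  }
}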

To really mature this project, changing the API to avoid modifying Object.prototype would be a good idea, and adding benchmarks and unit tests would be useful as well.

This article was based on watch.js commit dc9aac6a6e.

Ditto, sudo.js, dry-observer

09 Nov 2012 | By Alex Young | Comments | Tags mvc backbone.js dojo libraries

Ditto

Ditto (GitHub: bitpshr / ditto, License: WTFPL) by Paul Bouchon is a tool for resolving a project’s Dojo dependencies. A project’s zip file can be dropped onto the site, and it will analyse it and extract all of the required AMD modules. There’s an options tab that has an option for the legacy dojo.require syntax, and paths can be ignored as well.

sudo.js

sudo.js (License: MIT, npm: sudoclass) by Rob Robbins (sent in by Yee Lee) is a library that features inheritance helpers, view and view controllers, and data binding support. The API is documented in the sudo.js wiki.

The author has included a test suite, compatibility polyfills for things like the console object and String.prototype.trim, and an optional build with a small templating library.

dry-observer

dry-observer (License: MIT) by Austin Bales is a small library for centralising the binding and unbinding of events, while encouraging consistent handler naming. It requires Backbone.js, or at least a Backbone.Events-compatible object.

Here’s a quick example by the author:

# Observe a Model by passing a hash…
@observe model,
  'song:change'   : @onSongChange
  'volume:change' : @onVolumeChange
  'focus'         : @onFocus

# …or a String or Array.
# Observation will camelCase and prefix your events.
@observe model, 'song:change volume:change focus'

# Stop observing and dereference your model…
@stopObserving model

# …or stop observing /everything/
@stopObserving()

AngularJS: Common Pitfalls

08 Nov 2012 | By Alex Young | Comments | Tags mvc angularjs backbone.js

I noticed this commit to AngularJS by Braden Shepherdson that updates the AngularJS FAQ with a list of common pitfalls to avoid. Some of these pieces of sage advice can be applied to other MVC frameworks like Backbone.js, or at least act as some inspiration:

Stop trying to use jQuery to modify the DOM in controllers. Really. That includes adding elements, removing elements, retrieving their contents, showing and hiding them.

Similarly, the Backbone.js documentation advises the use of scoped DOM queries. Also, Backbone patterns suggests ensuring event handlers are correctly abstracted rather than using jQuery event handlers outside of Backbone.js code.

If you’re struggling to break the habit, consider removing jQuery from your app.

This might seem drastic, but AngularJS packs in enough functionality to get by without jQuery for the most part. The FAQ suggests using jQLite for binding to events.

There’s a good chance that your app isn’t the first to require certain functionality. There are a few pieces of Angular that are particularly likely to be reimplemented out of old habits.

After this the author describes how to take advantage of ng-repeat, ng-show, and ng-class. It’s the kind of practical knowledge that comes from hard-won experience, but it’s all explained quite clearly here. For example, ng-class can accept objects, including conditional expressions:

<div ng-class="{ errorClass: isError, warningClass: isWarning, okClass: !isError && !isWarning }">...</div>

Although AngularJS, Backbone.js, and Knockout all have great documentation, I feel like the learning curve once the basics have been mastered is pretty tough. It’s good to see these low-level tips cropping up to clarify how the authors intend the project to be used.

Node Roundup: Fastworks.js, Probability.js, Colony

You can send in your Node projects for review through our contact form or @dailyjs.

Fastworks.js

Fastworks.js (License: GPL3, npm: fastworks) by Robee Shepherd is an alternative to Connect. It includes “stacks” for organising middleware, and middleware for routing, static files, compression, cookies, query strings, bodies in various formats (including JSON), and a lot more. It can also work with Connect modules.

StaticFile serves things like images, style sheets and javascript files, using the pretty nifty Lactate node module. According to the author’s benchmarks, it can handle more than twice the requests per second that Connect’s Send module can.

That Lactate module sounds promising. On the subject of performance, one motivation for developing Fastworks.js was speed, but as of yet it doesn’t include benchmarks or tests. Hopefully the author will include these at a later date so we can see how it shapes up against Connect.

Probability.js

Probability.js (License: MIT) by Florian Schäfer is a fascinating little project that helps call functions based on probabilities. Functions are paired alongside a probability so they’ll only be called some of the time.

That doesn’t sound useful on the surface, but the author suggests it could come in handy in game development – although if you’ve played the recent XCOM game, you may be disillusioned by randomness in games, which is a well-trodden topic in the game development community. Analysis: Games, Randomness And The Problem With Being Human by Mitu Khandaker is an interesting analysis of games and chance.

Colony

Colony (GitHub: hughsk / colony, License: MIT, npm: colony) by Hugh Kennedy displays network graphs of links between Node code and its dependencies, using D3.js.

The network can be navigated around by clicking on files – the relevant source will be displayed in a panel. Files are coloured in groups based on dependencies, so it’s an intuitive way to navigate complex projects.

jQuery Roundup: Airbnb Style Guide, jPanelMenu, Crumble, imgLiquid

06 Nov 2012 | By Alex Young | Comments | Tags jquery style-guides images on-page guidance mobile
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

Airbnb JavaScript Style Guide

Airbnb JavaScript Style Guide is a pretty detailed JavaScript style guide that includes suggestions for jQuery. As a bonus, the Resources section includes links to other style guides (and DailyJS, thanks Airbnb!)

I’ve been using Google’s Style Guide for a few projects, although I don’t necessarily have a preference for any style guide. The code style used in examples on DailyJS came from trying to use horizontal space effectively (the blog has a fixed-width narrow design), but also by trying to make things explicit for beginners. Even these humble goals are subjective enough to cause endless arguments. The best advice I could give on the matter is to pick a style and be consistent.

jPanelMenu

jPanelMenu screenshot

jPanelMenu (GitHub: acolangelo / jPanelMenu) by Anthony Colangelo creates a navigation panel similar to those found in recent mobile applications. It can be configured to use selectors for the menu and an element that opens the menu:

var jPM = $.jPanelMenu({
  menu: '#custom-menu-selector'
, trigger: '.custom-menu-trigger-selector'
});

jPM.on();

This creates two div elements that contain the menu and the corresponding panel for the content.

jPanelMenu is well documented, and the documentation itself is built using the plugin so it doubles as a detailed example.

Crumble

Crumble (GitHub: tommoor / crumble, License: MIT) by Tom Moor is an interactive tour built using grumble.js. Similar to the plugins mentioned by Oded Ben-David in Introduction to On-Page Guidance, Crumble shows help bubbles that draw attention to elements on a page.

imgLiquid

imgLiquid (GitHub: karacas / imgLiquid, License: MIT/GPL) by Alejandro Emparan is a plugin for resizing images to fit a given container. It can either fill the container or shrink the image to fit, using CSS:

$(selector).imgLiquid({ fill: true });

There’s also a fadeInTime option that’ll trigger a fadeTo animation.
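
For example, the two options mentioned here could presumably be combined like this – the selector and duration are placeholders, and fill: false is assumed to switch to shrinking the image rather than filling the container:

// Shrink the image to fit its container, then fade it in over 250ms
$('.thumbnail img').imgLiquid({ fill: false, fadeInTime: 250 });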

State of the Art: Web MIDI API

05 Nov 2012 | By Alex Young | Comments | Tags standards sota audio browser

In a previous life I wrote a lot of music, so I have a certain amount of familiarity with MIDI. Mentioning MIDI to non-musicians usually results in either disbelief that MIDI still exists, or nostalgia for 90s PC games (I’m a big LucasArts fan). If you’re not familiar with MIDI, it’s not just about bad-sounding music files on GeoCities: it’s a specification for networking musical instruments, computers, and a wide variety of wonderful hardware.

Yamaha's Tenori-On can act as a slightly more unusual way to control MIDI devices.

The Web MIDI API was published as a working draft on the 25th of October, with input from Google’s Chris Wilson. It outlines an API that supports the MIDI protocol, so future web applications will be able to enumerate and select MIDI input and output devices, and also send and receive MIDI messages. This is distinct from playing back MIDI files – the HTML5 <audio> element should take care of that.

JavaScript-powered MIDI strikes me as of particular interest to both musicians and makers – hackers building art installations or novel control schemes for music projects.

Code

The navigator object, typically used to query user agent information, provides interfaces to a few new APIs like geolocation. The MIDI API uses it to expose MIDI access, and then available devices can be iterated over and inspected further:

// Request access to the MIDI subsystem; the callback receives a MIDIAccess object
navigator.getMIDIAccess(function(MIDIAccess) {
  // Lists of the available MIDI input and output devices
  var inputs = MIDIAccess.enumerateInputs()
    , outputs = MIDIAccess.enumerateOutputs()
    ;
});

MIDI messages can then be sent and received. There’s a test suite by Jussi Kalliokoski with more JavaScript examples here: web-midi-test-suite.

Implementation Progress

There’s currently a low-level MIDI plugin called Jazz-Plugin. For tracking browser support and other audio-related topics, the most accessible source is probably the HTML5 Audio Blog written by Jory K. Prum.

Introduction to On-Page Guidance

02 Nov 2012 | By Oded Ben-David | Comments | Tags on-screen guidance on-page guidance iridize tutorials

My name is Oded Ben-David, and I’m a co-founder at Iridize – a service for enhancing website user-experience through on-screen guidance. Documentation is important to new users of a software product, yet it’s typically added as an afterthought. Worse, users don’t always want to spend time reading documentation. On-screen guidance turns this on its head by showing help where and when it’s needed.

On-screen guidance is becoming more commonplace on the web. It is used by prominent companies including Facebook, Yahoo, and Google. Additionally, several JavaScript libraries for website on-screen guidance were published recently, most of which are mentioned below.

On-screen guidance seeks to solve two issues: presenting help when and where it is needed. It should propel users to their goals and keep them focused on the task at hand, not stop them in their tracks and point their attention elsewhere, to external help texts and videos. Also, guidance should only be there when the user actually wants it. Permanently cluttering a web page with too much help will end up confusing users.

To actually implement this, on-screen guidance systems use an interactive layer over a web page that is activated either on-demand, or automatically based on business logic.

Guiding users step by step through a task keeps them focused on the task at hand. It also gives first-time users an interactive site tour that illustrates key features.

These tools aren’t just for help content. Newly added features, or key but underused ones, also benefit from being pointed out in a dedicated layer. That layer can increase conversions by inviting users to sign up or subscribe, and increase engagement by enticing them to try out a service or feature. It can easily be coupled with A/B testing services to further optimize conversions.

Tutorial: Open Source On-Screen Guidance

There are several JavaScript libraries available for adding on-screen guidance to your project:

  1. king_tour is probably one of the earliest available libraries. Tours are defined as markup using a simple structure based around containers and title attributes. Global options and tour activation are handled by JavaScript.
  2. Joyride is a more modern jQuery plugin that takes a similar approach.
  3. A different approach is taken by the very nice bootstrap-tour (reviewed in DailyJS), where steps are defined using a factory method rather than in markup. bootstrap-tour also manages some simple state and allows for multi-page capabilities.
  4. Guiders.js, by the good people at Optimizely, is another library where guides can be started automatically using a special hash parameter, and an optional overlay can highlight the element a guider refers to on the page.

Let’s take a look at how to create a guide using the Guiders.js library. For the purpose of this short demonstration I have created a jsFiddle, where you can see the final result. The first thing we need for our demo is some page markup: I chose the markup for a bug-tracking form from the free gallery at Wufoo. Next, we need to embed jQuery and the Guiders.js JavaScript and CSS files. jQuery is readily available via jsFiddle’s default libraries. For this demo I am serving the Guiders.js sources from our own CDN (so please do not hotlink to it).

<script type="text/javascript" src="https://static-iridize.netdna-ssl.com/static/guiders/guiders-1.2.8.js"></script>
<link rel="stylesheet" type="text/css" href="https://static-iridize.netdna-ssl.com/static/guiders/guiders-1.2.8.css">

Now that we are all set, we can start adding guiders. For starters, let’s add an introductory ‘splash’ tip with an overlay. Note that this bit of code should be executed on the DOM ready event. This is the JavaScript for creating the first tip:

guiders.createGuider({
  buttons: [{ name: 'Next' }]
, description: 'Introducing Guiders.js for the DailyJS readers. I am the first tip. Click my Next button to get started.'
, id: 'first'
, next: 'second'
, overlay: true
, title: 'Demoing Guiders.js!'
}).show();

There’s quite a bit going on here, so I’ll break it down. First, as mentioned before, guiders are created using the createGuider method, which accepts a configuration object containing the guider options. Guider ids are used to chain guiders together via the next property: this guider’s id is first, and the next guider will be the one with the id second. Setting the overlay property to true displays an overlay mask that blocks out the rest of the page.

The buttons property sets which buttons should be displayed at the bottom of the tip. This property accepts an array of button definition objects. You can define custom buttons and custom button behaviors, but the three default buttons available are Next, Close, and Prev. For this first tip we set only a single Next button, used for going forward.

You may have noticed that there’s another method call chained to the createGuider call, namely .show(). Guiders are created hidden by default, so the show() method is called to show the first tip automatically. Of course, we could have saved a reference to the created guider in a variable instead, and issued the show call at a later time.
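
As a sketch, the deferred version might look something like this – the help link element is hypothetical:

// Create the guider but don't show it immediately
var firstGuider = guiders.createGuider({
  buttons: [{ name: 'Next' }]
, description: 'Introducing Guiders.js for the DailyJS readers.'
, id: 'first'
, next: 'second'
, overlay: true
, title: 'Demoing Guiders.js!'
});

// Show it later, for example when a (hypothetical) help link is clicked
$('#show-help').on('click', function() {
  firstGuider.show();
});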

Now that the first guider is ready, a second can be added:

guiders.createGuider({
  attachTo: '#foli0'
, buttons: [{name: 'Close'}, {name: 'Next'}]
, description: 'Please write down the full URL of the page where you experienced a bug. You can copy paste it from the address bar.'
, id: 'second'
, next: 'third'
, position: 3
, title: 'Oh Where?'
});

This guider has two new properties: attachTo and position. The attachTo property accepts a selector for the element next to which the tip should be positioned, and position specifies where the tip appears relative to that element. The number stands for a ‘clock position’, so specifying 3 makes the tip appear to the right of the element. This tip also lets the user close the guide, thanks to the Close button added to the buttons array. Note that show() was not called for this guider, as it should only appear when the Next button on the previous tip is clicked.

The rest of the tips in this short demo are more of the same. You can find the full reference of guider options on the Guiders.js repository page.
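
For instance, a final guider with the id third, which the second guider points to, might look something like this (the attachTo selector and copy are purely illustrative):

guiders.createGuider({
  attachTo: '#foli1'
, buttons: [{ name: 'Close' }]
, description: 'Describe what happened step by step so the bug can be reproduced.'
, id: 'third'
, position: 3
, title: 'What Happened?'
});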

Iridize

Video showing one of the demos available at iridize.com

Our vision for Iridize was that the developer should not be required to create or maintain guidance material. Creating and embedding on-screen guidance should be at least as easy as creating a presentation. This should leave developers with more time to code instead of writing help guides.

Iridize is the realization of that vision: on-page guidance and site tours as a service.

We created a fully visual editor which runs in the browser. This means that there is absolutely no installation required to get started. Other than embedding a generic JavaScript snippet there is no setup necessary even for deployment (similar to embedding Google Analytics). All JavaScript and guidance content is loaded from our servers, and delivered through a CDN – with SSL if necessary. Our service includes a full management system, complete with an editing and publishing cycle that supports collaboration for guide creation.

True to the spirit of providing guidance where and when it is needed, our guides can be launched by the end user from a side panel, or automatically based on the page path, parameters, or a URL hash fragment. We provide an API for even tighter control over how guides are started.

We are now in a closed beta phase, and we would like to invite you to take a look and register for a beta account. Moreover, as we are keen on giving back to the wonderful JavaScript community that built so many of the great tools we are using, we would like to offer free service to select open-source projects for the long term. Please contact us if you are a member of the core team of such a project that could benefit from our services (even if just for your public website).

Conclusion

On-screen guidance can be a powerful tool for enhancing user experience and improving website usability. Providing guidance where and when it is needed can alleviate the pains of first-time and occasional website users, as well as expose new features to the experienced crowd. Guiding users step by step keeps them focused and can ensure they don’t miss or forget steps along the way.

Whether you choose to use Iridize to implement on-screen guidance in your projects or decide to take the DIY route with one of the libraries I listed, we are certain that your users will thank you for helping them out!