The JavaScript blog.


commonjs modules amd js101 terminology basics

Terminology: Modules

Posted on .

Learning modern modular frameworks like Backbone.js and AngularJS involves mastering a large amount of terminology, even just to understand a Hello, World application. With that in mind, I wanted to take a break from higher-level libraries to answer the question: what is a module?

The Background Story

Client-side development has always been rife with techniques for patching missing behaviour in browsers. Even the humble <script> tag has been cajoled and beaten into submission to give us alternative ways to load scripts.

It all started with concatenation. Rather than loading many scripts on a page, they are instead joined together to form a single file, and perhaps minimised. One school of thought was that this is more efficient, because one large HTTP request will ultimately perform better than many smaller ones.

That makes a lot of sense when loading libraries -- things that you want to be globally available. However, when writing your own code it somehow feels wrong to place objects and functions at the top level (the global scope).

If you're working with jQuery, you might organise your own code like this:

$(function() {
  function MyConstructor() {
  }

  MyConstructor.prototype = {
    myMethod: function() {
    }
  };

  var instance = new MyConstructor();
});

That neatly tucks everything away while also only running the code when the DOM is ready. That's great for a few weeks, until the file is bustling with dozens of objects and functions. That's when it seems like this monolithic file would benefit from being split up into multiple files.

To avoid the pitfalls caused by large files, we can split them up, then load them with <script> tags. The scripts can be placed at the end of the document, causing them to be loaded after the majority of the document has been parsed.

At this point we're back to the original problem: we're loading perhaps dozens of <script> tags inefficiently. Also, scripts are unable to express dependencies between each other. If dependencies between scripts can be expressed, then they can be shared between projects and loaded on demand more intelligently.

Loading, Optimising, and Dependencies

The <script> tag itself has an async attribute. This helps indicate which scripts can be loaded asynchronously, potentially decreasing the time the browser blocks when loading resources. If we're going to use an API to somehow express dependencies between scripts and load them quickly, then it should load scripts asynchronously when possible.

Five years ago this was surprisingly complicated, mainly due to legacy browsers. Then solutions like RequireJS appeared. Not only did RequireJS allow scripts to be loaded programmatically, but it also had an optimiser that could concatenate and minimise files. The lines between loading scripts, managing dependencies, and file optimisation are inherently blurred.


The problem with loading scripts is that it's asynchronous: there's no way to write load('/script.js') and have code that uses script.js directly afterwards. The CommonJS Modules/AsynchronousDefinition proposal, which became AMD (Asynchronous Module Definition), was designed to get around this. Rather than trying to create the illusion that scripts can be loaded synchronously, all scripts are wrapped in a function called define. This is a global function inserted by a suitable AMD implementation, like RequireJS.

The define function can be used to safely namespace code, express dependencies, and give the module a name (id) so it can be registered and loaded. Module names are "resolved" to script names using a well-defined format.
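To illustrate the idea, here's a toy sketch of define that only registers modules and resolves their dependencies. A real AMD loader like RequireJS also fetches the scripts asynchronously, and the module names below are made up for the example:

```javascript
// Toy AMD loader sketch: define() registers a module under an id,
// resolving its dependencies from modules registered earlier.
var registry = {};

function define(id, dependencies, factory) {
  var resolved = dependencies.map(function(dep) {
    return registry[dep];
  });
  registry[id] = factory.apply(null, resolved);
}

// A module with no dependencies that exports an object:
define('greeter', [], function() {
  return {
    greet: function(name) { return 'Hello, ' + name; }
  };
});

// A module that depends on 'greeter'; the factory receives its export:
define('app', ['greeter'], function(greeter) {
  return {
    run: function() { return greeter.greet('World'); }
  };
});

registry['app'].run(); // 'Hello, World'
```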

Although this means every module you write must be wrapped in a call to define, the authors of RequireJS realised it meant that build tools could easily interpret dependencies and generate optimised builds. So your development code can use RequireJS's client-side library to load the necessary scripts, then your production version can preload all scripts in one go, without having to change your HTML templates (r.js is used to do this in practice).


Meanwhile, Node was becoming popular. Node's module system is characterised by calling the require function to return a value that contains the module:

var User = require('models/user');  

Can you imagine if every Node module had to be wrapped in a call to define? It might seem like an acceptable trade-off in client-side code, but it would feel like too much boilerplate in server-side scripting when compared to languages like Python.

There have been many projects to make this work in browsers. Most use a build tool to load all of the modules referenced by require up front -- they're stored in memory so require can simply return them, creating the illusion that scripts are being loaded synchronously.

Whenever you see require and exports you're looking at CommonJS Modules/1.1. You'll see this referred to as "CommonJS".

Now you've seen CommonJS modules, AMD, and where they came from, how are they being used by modern frameworks?

Modules in the Wild

Dojo uses AMD internally and for creating your own modules. It didn't originally -- it had its own module system -- but it adopted AMD early on.

AngularJS uses its own module system that looks a lot like AMD, but with adaptations to support dependency injection.

RequireJS supports AMD, but it can load scripts and other resources without wrapping them in define. For example, a dependency between your own well-defined modules and a jQuery plugin that doesn't use AMD can be defined by using suitable configuration options when setting up RequireJS.
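One way to express that kind of configuration is RequireJS's shim feature (introduced in RequireJS 2.0); the plugin name and paths here are hypothetical:

```javascript
requirejs.config({
  paths: {
    jquery: 'lib/jquery',
    'jquery.tooltip': 'lib/jquery.tooltip'
  },
  shim: {
    // The plugin doesn't call define(), so tell RequireJS it must be
    // loaded after jQuery.
    'jquery.tooltip': ['jquery']
  }
});

// Well-defined modules can then depend on the shimmed plugin as usual:
define(['jquery', 'jquery.tooltip'], function($) {
  // $('...').tooltip() is now safe to call here
});
```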

There's still a disparity between development and production builds. Even though RequireJS can be used to create serverless single page applications, most people still use a lightweight development server that serves raw JavaScript files, before deploying concatenated and minimised production builds.

The need for script loading and building, and tailoring for various environments (typically development, test, and production) has resulted in a new class of projects. Yeoman is a good example of this: it uses Grunt for managing builds and running a development server, Bower for defining the source of dependencies so they can be fetched, and then RequireJS for loading and managing dependencies in the browser. Yeoman generates skeleton projects that set up development and build environments so you can focus on writing code.

Hopefully now you know all about client-side modules, so the next time you hear RequireJS, AMD, or CommonJS, you know what people are talking about!


libraries oo touch commonjs

Caress, cjs2web, zoe.js

Posted on .


Caress (GitHub: ekryski / caress-server, License: MIT, npm: caress-server) by Eric Kryski converts TUIO events to browser events (W3C Touch Events version 2), allowing desktop browsers to be driven by multitouch devices. This includes Apple's Magic Trackpad, Android, and iOS devices.

Caress uses Node and Socket.IO to send TUIO messages to the browser -- this is usually done using UDP, but Node and Socket.IO seem to work well. Since Caress can be used with the Magic Trackpad, it might work well as a shortcut for testing touch-based interfaces during development.


cjs2web (GitHub: cjs2web, License: MIT, npm: cjs2web) by Alex Lawrence is a CommonJS module-to-browser translation tool. It currently supports mapping local modules, and the exports object (including module.exports). It doesn't support Node's process and global objects, so it's useful for lightweight porting of browser-friendly code. This is in contrast to something like OneJS that actually aims to create a Node-like environment in the browser.

Jasmine specs are included, and a Grunt build script is used to run them.


zoe.js (GitHub: zestjs / zoe, License: MIT, npm: zoe, component: zestjs/zoe) by Guy Bedford is a Node/AMD/browser library for working with multiple inheritance:

The basic principle is that inheritance is a form of object extension. A core object is extended with a number of implemented definitions. When that object is extended, a new object is created implementing the core definitions as well as any new definitions. This is the inheritance system of zoe.create.

Objects created this way can be configured with "function chains", which allows the library to support asynchronous code and various forms of the observer pattern.

Basic object extension uses zoe.create:

var baseClass = {
  hello: 'world'
};

var derivedClass = zoe.create([baseClass], {
  another: 'property'
});

If an _extend property is supplied, zoe will use it to apply various rules. In this example, chaining is used which will cause both greet methods to run:

var greetClass = {
  _extend: {
    greet: 'CHAIN'
  },
  greet: function() {
    return 'howdy';
  }
};

var myClass = zoe.create([greetClass], {
  greet: function() {
    return 'hello'; // runs in addition to greetClass's greet
  }
});



oo commonjs modules backbone.js kinect

Chaplin, KinectJS, Inject, Minion

Posted on .


Chaplin (License: MIT) from Moviepilot and 9elements is an example architecture using Backbone.js. It features lazy loading through RequireJS, module inter-communication using the mediator and publish/subscribe patterns, controllers for managing UI views, Rails-style routes, and strict memory management and object disposal.

While developing web applications like moviepilot.com and salon.io, we felt the need for conventions on how to structure Backbone applications. While Backbone is fine [...], it's not a framework for single-page applications.

All Chaplin needs is a web server to serve the client-side code. The example app, "Facebook Likes Browser", even includes client-side OAuth 2.0, thereby demonstrating client-side authentication.


KinectJS (License: MIT) aims to bring Kinect controls to HTML5. The author has created some KinectJS demos and KinectJS YouTube videos, so with the required hardware it should be possible to try it out.

The client-side JavaScript is only one part of the implementation -- the other is an Adobe AIR application that provides the bridge to the Kinect drivers. The AIR application isn't currently open source, but the JavaScript code is MIT licensed.


Inject (License: Apache 2.0, GitHub: linkedin / inject) from LinkedIn is a library agnostic dependency manager. It adds CommonJS support to the browser, and the authors have created an Inject/CommonJS compatibility document to show exactly what is supported. Resources can be loaded cross-domain, and the AMD API is also supported. Inject also has a lot of unit tests, written with QUnit.

Once Inject is loaded, it'll find and load dependencies automatically:

<script type="text/javascript" src="/js/inject.js"></script>
<script type="text/javascript">
  require.run('program');
</script>

Now if program.js looked like this:

var hello = require('hello').hello;  

Then Inject will load the hello module.

Inject finds the dependencies automatically and loads them asynchronously. If a developer changes some downstream dependency - for example, changes hello.js to depend on new-name-module.js instead of name.js - your code will keep working because Inject will automatically find and download the new dependencies on the next page load.


MinionJS (License: MIT/X11, npm: minion) by Taka Kojima is a small library that provides classical inheritance for Node and browsers:

minion.require('example.Example', function(Example) {
  var instance = new Example();
});

Minion also includes a publish/subscribe implementation. All classes have subscribe and publish methods which can be used to bind callbacks to notifications.

Finally, Minion has a build script that can be used to package classes for client-side deployment.


tutorials frameworks testing lmaf commonjs

Let's Make a Framework: More Tests

Posted on .

Welcome to part 43 of Let's Make a Framework, the ongoing series about
building a JavaScript framework.

If you haven't been following along, these articles are tagged with
lmaf. The project we're creating is called Turing.

Over the last few weeks we've built a test framework based on the
CommonJS assert module, a suitable test runner, and started converting
Turing's tests to use it. The test framework is called Turing Test.

JavaScript Testing

I get a lot of questions about JavaScript tests, so before continuing
converting Turing's tests I'm going to just explain a little bit about
the basics behind JavaScript testing.

If you're primarily a front-end developer, a lot of the innovative new
work in JavaScript testing might appear confusing. The needs of
server-side and client-side testing diverge, so it's perfectly
acceptable to use a different testing framework for each part of your
application.

According to our 2010 JavaScript survey, Qunit is the most popular test
framework. So you could use Qunit for your browser tests, and the
server-side developers could use something like nodeunit or jasmine.

There are different types of test libraries. The one we've been
developing in this tutorial series is a unit testing framework. The
assert module defined in CommonJS's Unit Testing/1.0 specification
covers a large part of what a unit testing framework needs.

Libraries like Jasmine are Behavior Driven Development (BDD) libraries.
Some frameworks, like Cucumber, take this to another extreme, where
tests are written as executable plain-text specifications.

There are more flavours of test libraries, and some become fashionable
and widely used. Because tests can be useful documentation for another
developer, if you're writing an open source project it might be a good
idea to use what your community uses. If you're writing a jQuery plugin,
why not use Qunit?

If you're working on your own project or in a small team, the choice of
test libraries can seem bewildering. You could fall back on the CommonJS
modules and treat that as the "standard" way to write tests, as it's
likely that other people will be able to understand your tests. The
point is to actually write tests in a way that makes you feel productive
-- some people get bogged down with the abstraction of BDD libraries, other people find that style of testing can help communicate business
logic to clients or managers.

Test Conversion

I've converted the core tests to Turing Test.
The enumerable tests demonstrated where using assert.equal
or assert.deepEqual are appropriate:

exports.testEnumerable = {
  'test array iteration with each': function() {
    var a = [1, 2, 3, 4, 5],
        count = 0;
    turing.enumerable.each(a, function(n) { count += 1; });
    assert.equal(5, count, 'count should have been iterated');
  },

  'test array iteration with map': function() {
    var a = [1, 2, 3, 4, 5],
        b = turing.enumerable.map(a, function(n) { return n + 1; });
    assert.deepEqual([2, 3, 4, 5, 6], b, 'map should iterate');
  }
};

The second test here needs deepEqual to check the arrays are
the same. It's important to remember which type of equal to use when
writing CommonJS assertions.

The OO tests had to be reworked slightly to suit CommonJS module
constraints in the browser.

Running All Tests

I've made run.html run all of the tests in the browser. It was actually
very easy to do; I just had to list all of the unit test files in
script tags.
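The relevant part of run.html might look something like this (the file names are illustrative):

```html
<script src="turing-test.js"></script>
<script src="enumerable.test.js"></script>
<script src="dom.test.js"></script>
<script src="events.test.js"></script>
```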

Future Updates

When I started writing the Turing Test tutorials I mentioned I'd like to
be able to run all tests, in a similar way to Qunit. This kind of works
now, but isn't presented particularly well. The test runner we've built
doesn't summarise all of the tests, and the layout isn't quite clear

Turing Test also needs to display benchmarks, and according to our
survey 12% of people benchmarking code use unit tests as an indicator of
performance.

I'm pleased with the results now everything has been converted to
CommonJS tests. As usual, more next week!



tutorials frameworks testing lmaf commonjs

Let's Make a Framework: DOM and Events Tests

Posted on .

Welcome to part 42 of Let's Make a Framework, the ongoing series about
building a JavaScript framework.

If you haven't been following along, these articles are tagged with
lmaf. The project we're creating is called Turing.

In this part I'll discuss using our new CommonJS-based testing library,
turing-test.js, to test our framework.

Animation Tests

The old animation tests were fairly basic. Once I'd rewritten them I
noticed something interesting:

exports.testAnimations = {
  'test hex colour converts to RGB': function() {
    assert.equal('rgb(255, 0, 255)', turing.anim.parseColour('#ff00ff').toString());
  },

  'test RGB colours are left alone': function() {
    assert.equal('rgb(255, 255, 255)', turing.anim.parseColour('rgb(255, 255, 255)').toString());
  },

  'test chained animations': function() {
    var box = document.getElementById('box');
    turing.anim.chain(box)
      .move(100, { x: '100px', y: '100px', easing: 'ease-in-out' })
      .animate(250, { width: '1000px' })
      .animate(250, { width: '20px' });

    // New:
    setTimeout(function() { assert.equal(box.style.top, '100px'); }, 350);
    setTimeout(function() { assert.equal(box.style.width, '20px'); }, 2000);
  }
};

The chained calls at the end didn't have any tests before, so I tried
adding some with setTimeout to test a few steps that would
be executed as part of the chain. Another way to test time-based events
like this would be to put the asserts inside any callbacks the API

With the current version of the test library, exceptions will be logged
for the failed tests rather than printing them to the test reports.
There'd need to be a mechanism for waiting for asserts in
setTimeout to handle this correctly (maybe an async test
runner).

DOM Tests

The DOM tests can be pretty much translated directly from the originals:

exports.testDOM = {
  'test tokenization': function() {
    assert.equal('class', turing.dom.tokenize('.link').finders(), 'return class for .link');
    assert.equal('name and class', turing.dom.tokenize('a.link').finders(), 'return class and name for a.link');
  },

  'test selector finders': function() {
    assert.equal('dom-test', turing.dom.get('#dom-test')[0].id, 'find with id');
    assert.equal('dom-test', turing.dom.get('div#dom-test')[0].id, 'find with id and name');
    assert.equal('Example Link', turing.dom.get('a')[0].innerHTML, 'find with tag name');
  }

  // etc.
};

It's a good idea to have a message for each assertion to make it easier
to track down failing tests.

Events Tests

The events tests look a little bit different because my other testing
library will run functions when they're passed:

given('a delegate handler', function() {
  var clicks = 0;
  turing.events.delegate(document, '#events-test a', 'click', function(e) {
    clicks++;
  });

  should('run the handler when the right selector is matched', function() {
    turing.events.fire(turing.dom.get('#events-test a')[0], 'click');
    return clicks;
  });

  should('only run when expected', function() {
    turing.events.fire(turing.dom.get('p')[0], 'click');
    return clicks;
  });
});

This is the old test which binds a counter, clicks, then
triggers events with handlers that change clicks. Because
assert.equal won't run functions, it needs to be rewritten
like this:

exports.testEvents = {
  'test delegate handlers': function() {
    var clicks = 0;
    turing.events.delegate(document, '#events-test a', 'click', function(e) {
      clicks++;
    });

    turing.events.fire(turing.dom.get('#events-test a')[0], 'click');
    assert.equal(1, clicks, 'run the handler when the right selector is matched');

    turing.events.fire(turing.dom.get('p')[0], 'click');
    assert.equal(1, clicks, 'only run when expected');
  }
};

Being acutely aware of closure binding is the key to testing events.
I've also ported the other event tests and they run fine with the
assertion module.


These browser-based tests were straightforward to port from the
BDD-style library, Riot.js, to
CommonJS. So far it's only really been a cosmetic, syntax shift.

I feel like these CommonJS-based tests read well, but I've worked with
unit testing libraries more intensely than more modern BDD libraries.
There's no reason why BDD libraries can't be built on CommonJS
assertions though, which is what many open source developers have been
working on.

This code is available in commit