Let's Make a Framework: Submodules and Browser CommonJS Modules

2010-12-09 00:00:00 +0000 by Alex R. Young

Welcome to part 41 of Let's Make a Framework, the ongoing series about
building a JavaScript framework.

If you haven't been following along, these articles are tagged with
lmaf. The project we're creating is called Turing.

The last few parts have been concerned with building a test framework
based on CommonJS.

In this part I'll show you how to use external libraries through git
submodules, and how to simulate CommonJS modules in a browser (in a
limited fashion).


Writing good reusable code isn't just about well-written code, it also
involves project management. I don't mean a pointy-haired boss telling
you what to do, just practical techniques for managing projects and
sharing code between them.

I've already touched on the issues around packaging JavaScript
frameworks, and we've looked at libraries like jQuery and Prototype to
see how they package their code. Now that
turing-test.js is in a usable state we need a way of distributing it with Turing, without
having to manually update it every time we make changes.

One way Real Open Source Projects™ do this is using git
submodules. From Pro Git:

It often happens that while working on one project, you need to use another project from within it. Perhaps it's a library that a third party developed or that you're developing separately and using in multiple parent projects.

Adding Submodules

Adding a submodule is easy:

git submodule add git@github.com:alexyoung/turing-test.js.git turing-test
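This records the submodule in a .gitmodules file at the project root, which gets committed along with a pointer to the submodule's current revision. Based on the command above, the file would look something like this (a sketch, not the actual committed file):

```
[submodule "turing-test"]
    path = turing-test
    url = git@github.com:alexyoung/turing-test.js.git
```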

Getting and Updating Submodules

Git can't do everything, so it's important to communicate the fact that
we're using submodules in our README. I've put a note in Turing's README
with instructions on how to get the required submodules. Of course,
users don't really need to worry about this; it's only for people who
want to run the tests.

If you check Turing out from Git, you'll need to do this:

git submodule init
git submodule update

If the submodule has been updated, you'll need to run git submodule
update again to get the latest version.
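The whole round trip can be tried locally with throwaway repositories. This is just a sketch: the repository names are made up, a local path stands in for the GitHub URL, and the git identity settings exist only so the demo runs in a clean environment:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"

# Wrap git with throwaway identity and file-protocol settings for the demo.
g() {
  git -c user.email=you@example.com -c user.name=you \
      -c protocol.file.allow=always "$@"
}

g init -q lib                              # stands in for turing-test.js
(cd lib && g commit -q --allow-empty -m 'initial commit')

g init -q app                              # stands in for turing
(cd app && g submodule --quiet add "$tmp/lib" turing-test \
        && g commit -q -m 'Add turing-test submodule')

# A fresh clone leaves turing-test/ empty until init and update run:
g clone -q app clone
(cd clone && g submodule init && g submodule update)
test -e "$tmp/clone/turing-test/.git" && echo 'submodule checked out'
```

The important part is the last three lines: cloning the parent project alone does not populate the submodule directory.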

Integrating Turing Test

Now that we have a process for sharing Turing Test with Turing, we need
to actually use the library to test something. For now I've put the test
library submodule in test/turing-test/ because it makes the
path handling easier between Node and the web-based file tests.

The goal of Turing Test was to make tests use CommonJS assertions and a
CommonJS test runner, which could work in a browser and server-side
JavaScript. We've got most of what we need so far, except for one thing:

var module = require('library').property;

Supporting modules in browsers isn't easy because it's incredibly
awkward to block execution until something has loaded. Turing Test
addressed this initially by mocking out require and forcing
the user to load scripts with script tags in an HTML stub.

This is a simple solution, but can we go even further?

Giving Browsers CommonJS Module Support

I decided to see how close I could get a browser-based test to resemble
a CommonJS test. This will run using Turing Test in both environments:


turing = require('../turing.core.js').turing;
var test = require('test'),
    assert = require('assert'),
    $t = require('../turing.alias.js');

exports.testAlias = {
  'test turing is present': function() {
    assert.ok(turing, 'turing.core should have loaded');
  },

  'test alias exists': function() {
    assert.ok($t, 'the $t alias should be available');
  }
};

test.run(exports);

This test,
alias_test.js, will work in both a browser and Node. How? Well, the browser has an HTML
test harness file (an "Alias Test" page) that performs some setup before
the test is run.
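I won't reproduce the harness verbatim here, but a minimal page along these lines would do the job: load turing-test.js first so require gets patched, then the test file itself (the file names are assumptions based on the paths used above):

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Alias Test</title>
  <script type="text/javascript" src="turing-test/turing-test.js"></script>
  <script type="text/javascript" src="alias_test.js"></script>
</head>
<body>
</body>
</html>
```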

The file turing-test.js referenced here is where some
browser patching occurs to provide require. Now
require in the browser loads scripts by inserting a script
tag and watching for the script to finish loading. Certain parts of the
library poll with setTimeout until the scripts have
finished loading.

function addEvent(obj, type, fn) {
  if (obj.attachEvent) {
    obj['e' + type + fn] = fn;
    obj[type + fn] = function() { obj['e' + type + fn](window.event); };
    obj.attachEvent('on' + type, obj[type + fn]);
  } else {
    obj.addEventListener(type, fn, false);
  }
}

var scriptTag = document.createElement('script'),
    head = document.getElementsByTagName('head')[0];
addEvent(scriptTag, 'load', tt.doneLoading);
scriptTag.setAttribute('type', 'text/javascript');
scriptTag.setAttribute('src', script);
head.insertBefore(scriptTag, head.firstChild);

The first call to require causes Turing Test to install
some browser-patching code. The final call to test.run at
the end of the tests will actually hit a fake test runner that polls
with setTimeout until the scripts have loaded:

fakeTest: {
  run: function(tests) {
    if (tt.isLoading) {
      setTimeout(function() { tt.fakeTest.run(tests); }, 10);
    } else {
      return tt.testRunner.run(tests);
    }
  }
}

As long as the scripts are still loading, our tests won't run, so they
never hit the empty objects returned by our hacked version of require.
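Pulling the pieces together, the browser-side require can be sketched like this. The structure is my reconstruction, not Turing Test's actual code: the loadScript callback would insert a script tag as shown earlier, but it's injectable here so the bookkeeping runs anywhere:

```javascript
var tt = { isLoading: 0, modules: {} };

// makeRequire returns a require() that hands back an empty placeholder
// object immediately and counts outstanding script loads.
function makeRequire(loadScript) {
  return function require(path) {
    if (!tt.modules[path]) {
      tt.modules[path] = {};                // filled in once the script runs
      tt.isLoading++;
      loadScript(path, function() { tt.isLoading--; });
    }
    return tt.modules[path];
  };
}

// In a browser, loadScript would insert a script tag; here a fake
// synchronous loader stands in so the sketch is runnable:
var require_ = makeRequire(function(path, done) { done(); });
var mod = require_('../turing.core.js');
console.log(tt.isLoading); // 0 once the fake loader has finished
```

A fakeTest.run like the one above then only has to poll tt.isLoading before handing the tests to the real runner.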


At this point we've got a test framework that looks 100% CommonJS in the
browser and console, with a very simple format for the HTML loading
stubs. That doesn't mean tests that depend on the DOM will work, and
module loading doesn't really work very well outside of this limited
setup.
In fact, the hacks I had to pull to get this working were quite
ridiculous, and took hours of work. Because it was a stint from 6pm
until 11pm I haven't had a chance to fully explore it yet, but hopefully
it gets people thinking about 'write once, run anywhere' testing.

The code I checked in is commit b16caa90a and commit a7be08a5.