WebGL Hobbit, Zombies, Debugging and Profiling Tools

22 Nov 2013 | By Alex Young | Comments | Tags games webgl

WebGL Hobbit, Zombies

There’s a Chrome Experiment called The Hobbit: The Desolation of Smaug that has some pretty fancy effects. While I was playing with it I wondered what open source WebGL stuff people had been making, which is when I found this simple zombie game.

This ain't The Walking Dead, but where's your zombie game?

The source is here: Goobuzz / NavMesh-Project, and there’s a reddit thread which I think the author started.

WebGL Debugging and Profiling Tools

WebGL Debugging and Profiling Tools by Patrick Cozzi has a whole load of resources for working with WebGL. He covers a Firefox WebGL shader editor, WebGL Inspector, Chrome Canvas Inspector, Google Web Tracing Framework, and more.

He even includes useful performance tips:

Depending on how many frames the GPU is behind, a better practice would be to do all the texSubImage2D calls, followed by all the reprojection draw calls, or even move the reprojection draw calls to the end of the frame with the scene draw calls. The idea here is to ensure that the texture upload is complete by the time the reprojection draw call is executed. This trades the latency of completing any one for the throughput of computing many. I have not tried it in this case so I can’t say for certain if the driver lagging behind isn’t already enough time to cover the upload.

And is glad to see browsers including developer tools for WebGL:

Building WebGL tools, such as the Firefox Shader Editor and Chrome Canvas Inspector, directly into the browser developer tools is the right direction. It makes the barrier to entry low, especially for projects with limited time or developers. It helps more developers use the tools and encourages using them more often, for the same reason that unit tests that run in the blink of an eye are then used frequently.

JavaScript Developer Survey 2013: RFC

21 Nov 2013 | By Alex Young | Comments | Tags community surveys

Every year I like to run a survey for the readers of DailyJS. It helps me figure out what I should write about, but I also share the results with the community so you can use the data however you wish.

This year I’ve decided to change the approach. A draft of the survey questions can be found on GitHub, here: alexyoung / dailyjs-survey. You can fork it and send pull requests for questions you’d like to add or change.

After a week or so I’ll compile the changes into a Google Drive form and announce the survey has gone live so people can submit their responses.

I’d really appreciate input on the survey before publishing it, because it helps us get a better idea about what’s going on in the world of client-side and server-side JavaScript development.

Node Roundup: Fowl, grunt-ec2, connect-body-rewrite

20 Nov 2013 | By Alex Young | Comments | Tags node modules grunt amazon foundationdb express


Fowl (GitHub: OptimalBits / fowl, License: MIT, npm: fowl) by Manuel Astudillo is a document and query layer for FoundationDB. It provides a similar API to NoSQL databases like MongoDB, but has support for multidocument transactions:

Transaction support is an incredibly powerful feature that simplifies server logic and helps avoid difficult-to-solve race conditions.

Fowl provides a low-level API based on keypaths for describing documents and their properties, following CRUD semantics.

It includes tests and each API method is documented in the readme file. Basic usage looks like this:

// Open a FoundationDB database

// Create a document (if _id is not specified, a GUID will be generated)
var john = fowl.create('people', {
  _id: 'john',
  name: 'John',
  lastname: 'Smith',
  balance: 100
});

// Use transactions to transfer money from one account to another
var tr = fowl.transaction();

tr.get(['people', 'john', 'balance']).then(function(johnBalance) {
  tr.put(['people', 'john', 'balance'], johnBalance - 10);
});

grunt-ec2 (GitHub: bevacqua / grunt-ec2, License: MIT, npm: grunt-ec2) by Nicolas Bevacqua is a set of Grunt tasks for creating, terminating, and deploying Node applications to AWS EC2 instances.

The deployed Node applications are served from behind an Nginx proxy. The task reference explains what each task does – there are quite a few.

It supports most of the things you want to do when setting up Node applications, including SSL, SSH keys for each instance, rsync support for fast and painless uploads, and hot code swaps.


There are times when the logic of my Node web applications has seemed to need the response body to be rewritten, but in the middleware rather than the main route logic. The connect-body-rewrite module (GitHub: rubenv / connect-body-rewrite, License: MIT, npm: connect-body-rewrite) by Ruben Vermeersch makes this possible. The examples use regular expressions to replace text, based on the request headers:

app.use(bodyRewrite({
  accept: function (res) {
    return res.getHeader('content-type').match(/text\/html/);
  },
  rewrite: function (body) {
    return body.replace(/<\/body>/, "Copyright 2013 </body>");
  }
}));

I like the way it’s designed to use an accept callback, because it makes it easy to see what the rewriter actually does by keeping the logic close together.

Chained, Doors

19 Nov 2013 | By Alex Young | Comments | Tags jquery es6 components libraries


Chained (GitHub: vzaccaria / chained, License: MIT) is another ES6 experiment. It allows APIs that return promises to be mixed with functions that take parameters and return mutated objects. In Vittorio’s example he mixes jQuery’s network methods with Underscore.js to download JSON and then filter it:

getUser = (user) ->
    .filter(-> /package/.test(arguments[0]))
    .map(-> "https://npmjs.org#{arguments[0]}")


In this CoffeeScript example, methods that use promises (get) are mixed with functions that take objects as the first argument (filter, map), using a consistent chainable API. To make this work, Vittorio has used ES6’s introspection features.
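
The underlying idea can be sketched in plain JavaScript. This is only an illustration of the concept, not Chained’s actual implementation, and the chain function here is invented for the example:

```javascript
// Wrap a promise so that synchronous array transforms (filter,
// map) can be chained onto an asynchronous result. Each call
// defers the transform until the promise resolves.
function chain(promise) {
  return {
    filter: function(fn) {
      return chain(promise.then(function(arr) { return arr.filter(fn); }));
    },
    map: function(fn) {
      return chain(promise.then(function(arr) { return arr.map(fn); }));
    },
    then: function(fn) { return promise.then(fn); }
  };
}
```

With this wrapper, chain(getJSON(url)).filter(...).map(...) reads like a normal synchronous chain even though the data arrives later.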

The project has detailed notes in the readme about how this works. He mentions that the library came about after trying to create DSLs with JavaScript.


Olivier Wietrich sent in Doors (GitHub: bredele / doors, License: MIT, component: bredele/doors), a module for conditionally gating behaviour: an open event is only triggered once every lock has been unlocked.

[State machines and promises] both have something missing. A transition occurs when one condition is triggered. Things are not so simple in real life. You will probably have more than one condition to do something, but one condition is sufficient to not do it. Think about a door with multiple locks: you can’t open the door until all locks are unlocked.

Looking at the documentation, it seems like the author wants to use it to restrict access to an API until certain authentication preconditions are met. There’s a simple HTML example that uses a graphical door, and two locks. You can toggle locks and add more.

Moonjs, jQuery Removes Sourcemap Comments

18 Nov 2013 | By Alex Young | Comments | Tags jquery space simulation


Moonjs (GitHub: siravan / moonjs, License: GPL) by Shahriar Iravanian is a port of the Apollo Guidance Computer using Emscripten.

AGC was the main computer system of the Apollo program that successfully landed 12 astronauts on the Moon. There was one AGC on each of the Apollo Command Modules and another one on each Lunar Module. There was also a second backup computer system called the Abort Guidance System (AGS) on the Lunar Modules, which is simulated by Virtual AGC, but not the current version of Moonjs.

Recent advances in the JavaScript language - such as optimized engines, ahead-of-time (AOT) compilation, and asm.js - make it possible to write computationally extensive applications in JavaScript. My previous experience with online JavaScript-based simulation (svtsim and hemosim) was very positive and convinced me of the suitability of the HTML5/JavaScript combination in writing portable, easy-to-use simulators.

I was going to try figuring it out, but it reminded me of Kerbal Space Program and I got distracted…

jQuery 1.11.0/2.1.0

jQuery 1.11.0/2.1.0 Beta 2 were released last week. The beta includes AMD support, which is still the headline feature.

Something that I found interesting was the removal of the sourcemap comment:

One of the changes we’ve made in this beta is to remove the sourcemap comment. Sourcemaps have proven to be a very problematic and puzzling thing to developers, generating scores of confused questions on forums like StackOverflow and causing users to think jQuery itself was broken.

We’ll still be generating and distributing sourcemaps, but you will need to add the appropriate sourcemap comment at the end of the minified file if the browser does not support manually associating map files (currently, none do). If you generate your own jQuery file using the custom build process, the sourcemap comment will be present in the minified file and the map is generated; you can either leave it in and use sourcemaps or edit it out and ignore the map file entirely.
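
For reference, the comment in question is the standard sourcemap pragma on the last line of the minified file (the map filename here is illustrative):

```javascript
//# sourceMappingURL=jquery.min.map
```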

The fact that sourcemaps generate so much confusion is worth thinking about, because they’re one of those things people cite as making compile-to-JavaScript languages easier to work with.

Negative Array Indexes

15 Nov 2013 | By Alex Young | Comments | Tags node modules es6 code-review

Sindre Sorhus sent in negative-array, a module for supporting negative array indexes. It’s built using ES6’s Proxy.

Proxies allow you to intercept operations on objects, such as property access, which means things like profilers become easier to implement. They’re created with var proxy = Proxy(target, handler), where target is an object that will be wrapped with the proxy, and handler is an object that implements the proxy API.

The handler can include methods like has, defineProperty, getPrototypeOf, and more, for controlling access to an object. For more details on how this works, see the Direct Proxies page on the ECMAScript Harmony Wiki.
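
As a small, concrete illustration (the object and property names here are invented, and note that the API as eventually standardized requires new Proxy rather than a plain function call), a has trap lets you observe every in check made against an object:

```javascript
// Record every property name tested with the `in` operator.
var checked = [];
var target = { name: 'mary' };

var proxy = new Proxy(target, {
  has: function(t, prop) {
    checked.push(prop);
    return prop in t;
  }
});

'name' in proxy; // true; 'name' is recorded in checked
'age' in proxy;  // false; 'age' is recorded in checked
```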

Sindre’s module allows you to do this:

var negativeArray = require('negative-array');

// adds negative array index support to any passed array
var unicorn = negativeArray(['pony', 'cake', 'rainbow']);

// get the last item by using a negative index
console.log(unicorn[-1]);
// => 'rainbow'

It’ll work in Node 0.8+ with the --harmony flag, and Chrome with Harmony enabled. Visit chrome://flags/#enable-javascript-harmony to set it up.

The implementation is what will probably become a classic pattern: Proxy is used to wrap the array instance with get and set methods that dynamically map the requested array index to something native JavaScript can handle.

Proxy(arr, {
  get: function (target, name) {
    var i = +name;
    return target[i < 0 ? target.length + i : i];
  },
  set: function (target, name, val) {
    var i = +name;
    return target[i < 0 ? target.length + i : i] = val;
  }
});

I like this example because it adds new functionality that feels like a language feature without changing built-in prototypes. It’s clean and fairly easy to understand once you know what Proxy does. If you wanted to learn about proxies but couldn’t find any good examples, then check out the source on GitHub.

Assertion Counting in Mocha

14 Nov 2013 | By Alex Young | Comments | Tags node libraries testing mocha

A few weeks ago I wrote about node-tap, in Why Don’t You Use Tap?. I usually use Mocha for my tests, and one thing I liked about node-tap was the idea of test plans:

test('Check out my plan', function(t) {
  t.plan(1);
  t.ok(true, "It's ok to plan, and also end.  Watch.");
});

Test plans help in situations where you want to put assertions inside asynchronous events. For example, if you’re testing a web application and it makes HTTP requests in the test cases, but you also intercept events that indicate various lifecycle events. These could be things like ensuring a notification email was sent when a user signs up, and also checking that their account was saved to the database.

In Mocha, such a test might look like this:

describe('Account creation', function() {
  it('should allow users to sign up', function(done) {
    app.on('notify:accounts:create', function(account) {
      assert(account, 'expected a user account object');
      done();
    });

    request(app).post('/accounts').send(userDetails).expect(200, done);
  });
});

That’s OK, but it has some problems: done will be called twice – once when the web request finishes, and again when the email event is sent (notify:accounts:create).

We could fix this by counting how many assertions have been called:

describe('Account creation', function() {
  it('should allow users to sign up', function(done) {
    var expected = 2;

    function checkDone() {
      expected--;
      if (expected === 0) {
        done();
      }
    }

    app.on('notify:accounts:create', function(account) {
      assert(account, 'expected a user account object');
      checkDone();
    });

    request(app).post('/accounts').send(userDetails).expect(200, checkDone);
  });
});

Seeing checkDone all over my tests made me create something more generic. In the following example I use instances of an object called Plan that allows assertions to be counted, and done to only get called once the specified number of assertions have passed. This example can be run with the mocha command-line script.

var assert = require('assert');

function Plan(count, done) {
  this.done = done;
  this.count = count;
}

Plan.prototype.ok = function(expression) {
  assert(expression);

  if (this.count === 0) {
    assert(false, 'Too many assertions called');
  } else {
    this.count--;
  }

  if (this.count === 0) {
    this.done();
  }
};

describe('Asynchronous example', function() {
  it('should run two asynchronous methods', function(done) {
    var plan = new Plan(2, done);

    setTimeout(function() {
      plan.ok(true);
    }, 50);

    setTimeout(function() {
      plan.ok(true);
    }, 25);
  });
});

This code could be expanded to tie in with Mocha’s timeouts to display the number of missed assertions, if any. Otherwise it does the job: if any of the asynchronous callbacks don’t fire, Mocha will raise a timeout error. It also protects against calling assertions too many times, which can actually happen if you’re making your own asynchronous APIs: I’ve had cases where I’ve triggered callbacks twice by mistake. And, it ensures done is only called when needed.

The question of ensuring assertions were actually called was brought up in this issue for Chai.js: Asserting that assertions were made. And, the Mocha wiki has this assertion counting snippet: Assertion counting.

Outside of Mocha, I found QUnit has asyncTest which allows assertions to be planned like TAP-style tests. With this approach we don’t need to broker calls to done because it uses start instead.

I’ve never quite found the perfect solution to this problem, however. How do you ensure assertions are triggered in Mocha, and how do you handle calling done in tests where there are multiple asynchronous operations?

Node Roundup: 0.10.22, genome.js, Bellhop

13 Nov 2013 | By Alex Young | Comments | Tags node modules biology streams

Node 0.10.22

Node 0.10.22 was released this week. This version has fixes for process, the debugger and the repl, and a memory leak on closed handles.

There’s also a fix for Mac OS 10.9: apparently “Not Responding” was displayed in Activity Monitor for Node processes.


DNA card

genome.js is an interesting project that proclaims “Welcome to the OpenDNA movement”:

genome.js is a fully open source platform built on Node.js that utilizes streams for high-performance analysis of DNA SNPs.

There are currently several related repositories for the overall project on GitHub.

Why is this useful? Perhaps you’ve had your genome sequenced by a site like 23andMe, and want to do something with the data. Apparently the cutting edge in web-based DNA browsing isn’t particularly great, so there may be room for innovation.


Bellhop (GitHub: mscdex / bellhop, License: MIT, npm: bellhop) by Brian White is a stream for Pub/Sub and RPC. It can serialize data types that aren’t supported by JSON. Bellhop streams should work over any transport, from HTTP to TCP sockets.

The project has tests, and the readme includes API examples.

Recreating core.async with ES6 Generators, JSON Mask

12 Nov 2013 | By Alex Young | Comments | Tags tutorials node json es6 generators clojure

Recreating core.async with ES6 Generators

Recreating core.async with ES6 Generators is a post by Andrey Popp inspired by Clojure’s core.async. He uses Browserify with a transpiler module that allows ES6 generators to be used in browsers:

function *listen(el, evType) {
  while (true) {
    yield function(cb) {
      var fire = function(ev) {
        el.removeEventListener(evType, fire);
        cb(null, ev);
      };
      el.addEventListener(evType, fire);
    };
  }
}

The tutorial goes on to use JSONP to fetch data from the Wikipedia API. The full source is available as a gist: andreypopp / index.html.


JSON Mask (GitHub: nemtsov / json-mask, License: MIT, npm: json-mask) by Yuriy Nemtsov is a module that provides a DSL for filtering JSON. It works in browsers, includes unit tests, and there’s also Express middleware that can help you filter responses in your Express applications.

One use-case is when you have an HTTP service that returns {"name": "mary", "age": 25} (for example), and you have two clients: (1) that needs all of that information, and (2) that just needs the name.

Now, if the server uses JSON Mask, it would accept a ?fields= query-string. The (2)nd client would then add the following query-string to the request: ?fields=name; after which the server would respond with {"name": "mary"}, and filter out the rest of the information.

This saves bandwidth and improves performance, especially on large objects.

Although trivial in the simplest cases, the task of filtering objects quickly becomes difficult. This is why JSON Mask is actually a tiny (and highly optimized) language that is loosely based on XPath, with support for filtering parts of objects and arrays, wild-card filtering, and combinations of the three.
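
To make the filtering idea concrete, here is a minimal sketch in plain JavaScript. It only handles top-level fields and is not JSON Mask’s implementation, which supports nested paths and wildcards:

```javascript
// Keep only the requested top-level fields of an object,
// mimicking what a server would do with a ?fields= query-string.
function filterFields(obj, fields) {
  var out = {};
  fields.split(',').forEach(function(key) {
    if (obj.hasOwnProperty(key)) {
      out[key] = obj[key];
    }
  });
  return out;
}
```

filterFields({ name: 'mary', age: 25 }, 'name') returns { name: 'mary' }, which is the behaviour described above.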

Getting Started with Hoodie

11 Nov 2013 | By Alex Young | Comments | Tags tutorials node couchdb
Fast ship?

Hoodie (GitHub: hoodiehq / hoodie.js, License: Apache 2.0, npm: hoodie) is a noBackend framework that uses Node, with a focus on client-side development.

noBackend is an approach to decouple apps from backends, by abstracting backend tasks with frontend code. This allows frontend developers to focus on user experience and gives backend developers more flexibility on the implementation side.

Hoodie applications are based around documents, and are backed by CouchDB. It embraces event-based APIs, JSON across the whole stack, and uses npm for plugins. It’s also designed to be easy to deploy to Nodejitsu.

Applications are instances of Hoodie objects. This provides the entry point to most functionality, including user accounts – Hoodie comes with a baked-in account system. Data is stored per-user through hoodie.store, which is accessible from client-side code. In this respect it’s reminiscent of Meteor.

Data lifecycle stages trigger events, so you can easily see when data is changed in some way:

hoodie.store.on('add:task', function(event, changedObject) {
  // Update the view with the changedObject
});

The API is chainable and will remind client-side developers of jQuery.

As a teaser, here are some of the account handling method calls:

hoodie.account.signUp('joe@example.com', 'secret');
hoodie.account.changeUsername('currentpassword', 'newusername');

To start your own project you’ll need CouchDB installed locally. Then you can npm install -g hoodie-cli, which will allow you to run hoodie new to create a new project. The process is straightforward if you’re already using Node and don’t mind running CouchDB, and it’s fully documented on Hoodie’s website.

The documentation for Hoodie is solid, and I’ve found it easy to follow so far. Although I only learned about it at the Great British Node Conference, the developers have been working on it for over a year. It has sponsors, and is looking for more. It’s a well-presented, open source project, with momentum behind it.

Talking to Sven Lito at the Great British Node Conference I got the impression that the team are treating it as a commercial project, while believing in the open source model. This made me want to get on board and use it myself, so I suggest you give it a try.

Socket.IO Debugging

08 Nov 2013 | By Alex Young | Comments | Tags npm node websockets

How do you debug WebSockets, other than inserting console.log all over the place? One technique is to look at the Frames tab, under Network, in WebKit Inspector.

The Frames tab in WebKit Inspector.

The term “frames” refers to the data that is sent in a WebSocket connection. Unfortunately, I’ve been doing some work with WebSockets inside a desktop application, so I can’t easily see WebKit inspector.

Diego Costantino sent in ThreePin (GitHub: dieguitoweb / ThreePin, License: MIT, npm: threepin, bower: dieguitoweb/ThreePin), a tool for developing and testing software that uses Socket.IO and Node:

ThreePinJS is a stress-free test environment for Socket.IO that allows you to test your WebSocket server code before you write the client code.

It uses a configuration file called threepin.json that sets up a server, and a list of events to listen for and send. The readme has a full example.

Once the server is configured, you can use any local HTTP server you want to serve a simple HTML file that connects and runs through the event list. It means you can focus on the server-side logic before worrying about the client-side code.

I’m still looking for options to help test and debug Socket.IO projects, but I thought ThreePin was an interesting stab at the problem.

Should You Share Client-Side Projects on npm?

07 Nov 2013 | By Alex Young | Comments | Tags npm node browser

Should you share your client-side projects with npm? Or, as a client-side developer, should you be using npm and a package.json to organise your project’s dependencies?

People are already using npm for distributing client-side code for many types of projects:

  1. Generic JavaScript that is sometimes used on the server but often used in the client (Underscore.js, Backbone.js)
  2. Client-side libraries that are so popular they’ve ended up on npm out of convenience (jQuery)
  3. Client-side plugins for Backbone.js, AngularJS, jQuery (check out peer dependencies if you want to do this in a clean way)
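
For the third case, peer dependencies let a plugin declare which host library it must be installed alongside without bundling its own copy. A sketch of the relevant part of a plugin’s package.json (the name and version range are illustrative):

```json
{
  "name": "backbone-example-plugin",
  "version": "0.1.0",
  "peerDependencies": {
    "backbone": "1.x"
  }
}
```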

If you decide to share a module using npm, you don’t necessarily need to wrap it in Node’s module system, but I would argue that you should, and you should also supply tests that can be run with Node. In an ideal world all modules on npm would include tests, and I believe this extends to client-side projects.

A recent client-side workflow pattern that has emerged, perhaps most notably from Substack, is to use Node in the browser, using something like Browserify. This means you can use require, so you can organise client-side code with Node’s module system. What I think is amazing about this is you can use Node’s core modules, so if you’re used to using EventEmitter and streams then you can bring these patterns over to the browser.

However, as Substack notes in this post on reddit, require is incompatible with RequireJS. It might seem surprising if you’re a Node developer, but the naming of require and RequireJS causes serious confusion to client-side developers who are just discovering module systems.

The Question

So, the question is, should you use npm for sharing client-side projects, or something like Bower? Do you prefer using Browserify or similar pre-processors, or is using AMD good enough?

Node Roundup: Mongovi, hoquet

06 Nov 2013 | By Alex Young | Comments | Tags node modules templating clojure mongodb


I’ve been working on a project that uses MongoDB, and one of the problems I have is with Mongo’s REPL. For one thing, I keep hitting CTRL-C because I expect it to cancel the current line rather than exit the whole REPL, but a bigger problem for me is that it has switched to linenoise. I’m used to Vim’s shortcuts, which readline can provide. When dealing with programs that have non-readline REPLs I often invoke rlwrap (or write an alias that uses rlwrap), but when it comes to Mongo a better solution might be Tim Kuijsten’s Mongovi.

Mongovi (GitHub: timkuijsten / node-mongovi, License: MIT, npm: mongovi) is a REPL for MongoDB with Vi keys. It uses readline-vim and node-mongodb-native, so it isn’t a wrapper around the command-line mongo tool but instead a reimplementation in Node.

Several high-level commands work: show dbs lists databases, use db switches to a different database, and the usual commands like c.collectionName.find, update, and insert work. The author has included Mocha tests, and documentation can be found in the readme.


I had a brief love affair with Clojure. It was a romance that lasted a few months, but work got in the way and we had to break up. However, thanks to Tom Brennan I can relive those days with hoquet (GitHub: tjb1982 / hoquet, License: MIT, npm: hoquet). This is a templating library based on Clojure’s Hiccup. It uses a structured language based on arrays for generating HTML:

var http = require('http'),
    h = require('hoquet');

function layout(c) {
  var out =
    ['html',
     ['head',
      ['title', c.title],
      c.head],
     ['body', {'ng-app':'MyApp'}, c.body]];

  return out;
}

var index = layout({
  title: 'My Page',
  body: ['div', {'ng-view':''},
         ['h1', 'Hello world']],
  head: [['meta', {'name':'description',
                   'content':'An example page'}]] // content value assumed
});

http.createServer(function(q,s) {
  s.writeHead(200, {'Content-Type': 'text/html'});
  s.end( h.doc('html5', index) );
}).listen(8080); // port chosen for the example

Tom also notes that hoquet can be used in browsers, because the underlying implementation is plain ol’ JavaScript:

You create your own functions/literals to pass in whatever you want and call render, which stringifies it. You can also render inner portions at any time and insert them as Strings so you don’t have to worry about when render is called.
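
The array-to-HTML idea is simple enough to sketch in a few lines. This is an illustration of the Hiccup-style approach, not hoquet’s actual render implementation:

```javascript
// Render a nested array structure to an HTML string: arrays are
// elements, a plain object in second position supplies attributes,
// and strings pass through as text.
function render(node) {
  if (typeof node === 'string') return node;
  var tag = node[0];
  var i = 1;
  var attrs = '';
  if (typeof node[1] === 'object' && !Array.isArray(node[1])) {
    Object.keys(node[1]).forEach(function(k) {
      attrs += ' ' + k + '="' + node[1][k] + '"';
    });
    i = 2;
  }
  var inner = node.slice(i).map(render).join('');
  return '<' + tag + attrs + '>' + inner + '</' + tag + '>';
}
```

For instance, render(['div', {'id': 'x'}, ['h1', 'Hello']]) produces a div containing an h1, as a plain string.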

stickUp, Backbone.js Guide

05 Nov 2013 | By Alex Young | Comments | Tags books backbonejs jquery plugins ui


stickUp (GitHub: LiranCohen / stickUp, License: LGPL) by Liran Cohen is a plugin for handling navigation bars that stick at the top of the page when it’s scrolled:

stickUp is a simple plugin that “sticks” an element to the top of the browser window while scrolling past it, always keeping it in view. This plugin works on multi-page sites, but has additional features for one-pager layouts.

It can detect the vertical position of the element and automatically fix it to the right position:

// the selector is illustrative
$('.navbar-wrapper').stickUp({
  marginTop: 'auto'
});

Backbone.js Guide

Julio Cesar Ody’s Backbone.js Guide is a freely available and somewhat opinionated book about Backbone that he’s currently in the process of writing. There are five chapters so far, and two more should be coming soon. The source is on GitHub at juliocesar / backbone-book.

The rule of thumb is get the hell away from the DOM. You won’t read from it ever (e.g.: getting an element’s class name, or the length of a list counts as that), because your data layer knows what has what value and in what state anything is at any given time. You’ll write to the DOM only by rendering views. If you like this approach, it’s totally ok to use just regular JS or jQuery.

Grasp, Optimizing AngularJS

04 Nov 2013 | By Alex Young | Comments | Tags angularjs search productivity optimisation


George Zahariev sent in his latest project, Grasp (GitHub: gkz / grasp, License: MIT, npm: grasp). Grasp allows you to search and replace JavaScript code based on its abstract syntax tree.

Unlike programs such as “grep” or “sed”, it searches the structure behind your code, rather than simply the text you’ve written - this allows for much more powerful searches.

It uses a language inspired by CSS selectors for creating intuitive search expressions. For example, given this code:

var obj = {
  toEven: function(x) {
    if (isEven(x)) {
      return x;
    } else {
      return x + 1;
    }
  }
};

assert.equal(false, isEven(7));

Searching for obj.props func! #isEven would match the call to isEven inside the property toEven on the object obj. References outside, like the assertion at the bottom, will not be matched.

My preferred solution for searching code is The Silver Searcher, but I like the idea of combining an AST with search and replace – it seems like a potentially fertile ground for experimentation.

Optimizing AngularJS: 1200ms to 35ms

In Optimizing AngularJS: 1200ms to 35ms, optimisations for AngularJS are discussed. It’s actually a lot deeper than you might expect: it goes from caching DOM elements to bypassing watchers for hidden elements, and deferring element creation:

A straightforward AngularJS implementation of the log view took 1.2 seconds to advance to the next page, but with some careful optimizations we were able to reduce that to 35 milliseconds. These optimizations proved to be useful in other parts of the application, and fit in well with the AngularJS philosophy, though we had to break a few rules to implement them. In this article, we’ll discuss the techniques we used.

The conventional wisdom for AngularJS says that you should keep the number of data-bound elements below 200. With an element per word, we were far above that level.

Using Chrome’s JavaScript profiler, we quickly identified two sources of lag. First, each update spent a lot of time creating and destroying DOM elements. Second, each word had its own change watcher, which AngularJS would invoke on every mouse click. This was causing the lag on unrelated actions like the navigation dropdown.

The post got a huge amount of interest, so the authors are working on open sourcing their AngularJS directives.

WDS2013, flyLabel.js

01 Nov 2013 | By Alex Young | Comments | Tags animation jquery webgl

Web Directions South 2013 Opening Titles


Hugh Kennedy sent in the Web Directions South 2013 Opening Titles (GitHub: smallmultiples / south.im, MIT, MPL and Creative Commons). This is an extremely impressive WebGL animation complete with a cool soundtrack. It’s partly demoscene inspired, but also reminds me of Darwinia.

The code’s open source, mostly MIT with a couple of files under the Mozilla Public License and assets under Creative Commons.

Done in collaboration with Small Multiples’ (http://small.mu) Jack Zhao and François Robichet, it’s pushing some of the newer browser APIs quite a bit - WebGL, IndexedDB, Web Audio, Web Workers, etc.


flyLabel.js (GitHub: athaeryn / flyLabel.js, License: MIT) by Mike Anderson is a jQuery plugin that adds fancy animations to form labels. It uses CSS animations, and has a simple JavaScript API. Mike’s example uses Modernizr:

if (Modernizr.input.placeholder) {
  // the selector is illustrative
  $('.fly-label').flyLabel();
}

New Features in npm: Part 2

31 Oct 2013 | By Robert Kowalski | Comments | Tags node npm

Node has one of the best package managers around: npm. Every month a lot of features are added to npm. Here are some new features as of 1.3.12.

Default Homepage URLs

Many projects use the same URL for their homepage and GitHub page. As of 1.3.12, you do not have to define the homepage property in your package.json – if you have a GitHub URL as the repository field the homepage will default to the GitHub page.

For example, this:

"repository": {
  "type": "git",
  "url": "git@github.com:robertkowalski/npm-registry-mock.git"
}

will result in the homepage field being set to http://github.com/robertkowalski/npm-registry-mock.

Outdated Update

npm outdated shows the latest compatible version of each module according to your package.json definition. The updated version of this command, which you can use with npm 1.3.12, will also show the very latest version, even if it is not compatible with your package.json definitions.

These will show as latest:

underscore node_modules/underscore current=1.3.1 wanted=1.3.3 latest=1.5.1


With new versions of npm (and also Node) a ton of features and bug fixes arrive, so you should always be up to date. Fortunately, you just have to get the latest Node version each time, as the latest npm versions are bundled into each Node release.

You can find me on Twitter @robinson_k and GitHub at robertkowalski.

Node Roundup: 0.11.8, tabby, Nixt

30 Oct 2013 | By Alex Young | Comments | Tags node modules substack node-web testing command-line

Node 0.11.8

The core developers are cranking the handle again and firing out releases. Today Node 0.11.8 was released, which upgrades uv and V8. There’s a new buf.toArrayBuffer API, debugger improvements, and core module fixes.

I saw 0.11.8 announced on Twitter, and the author hinted at more frequent unstable releases.

tabby
If it’s not TJ Holowaychuk it’s Substack. I don’t believe it’s possible to write a weekly column about Node without mentioning one of them at least once. Substack just released a client-side project called tabby (GitHub: substack / tabby, License: MIT, npm: tabby), a module for creating web applications with tabs using progressive enhancement techniques.

Why is this notable? Well, Substack has been ranting about Node’s module system and client-side code. He wants you to stop mucking about and require(). Tabby blends the browser and Node by using WebSockets, and makes judicious use of Node’s core modules and streams.

For something focused on client-side code, it uses an interesting cocktail of modules. The inherits module provides browser-friendly inheritance, whilst still being compatible with util.inherits. And trumpet is used for parsing and transforming streaming HTML using CSS selectors. In the documentation, Substack recommends using hyperspace for rendering.

Looking through the examples, it definitely feels like idiomatic Node. I can imagine this style scaling up well to a larger web application.


Nixt
Test code readability can bring huge gains when maintaining projects. Tests should communicate intent, and the longer they don’t need heavy modification the happier everyone will be. I generally find myself writing DSL-like methods for tests that reduce code duplication, particularly in the set-up phase.

With that in mind, it’s interesting to see Nixt (GitHub: vesln / nixt, License: MIT, npm: nixt) by Veselin Todorov. This is a module for testing command-line applications. It’s based around expectations, which reminds me of good old Expect.

Nixt works well asynchronously because it supports middleware for ordering execution. It can be used with a test harness like Mocha, and allows you to define custom expectations, which is where your application-specific DSL-like test helpers come in.

Although I’m probably guilty of using Node for command-line scripts that could be done with a shell script (I write my share of shell script too though), Nixt seems like a great way to test them, particularly as people often forget to test such scripts.

Script Roundup: jWebAudio, Scrolling Component

29 Oct 2013 | By Alex Young | Comments | Tags jquery plugins browser audio scrolling
Note: You can send your scripts and articles in for review through our contact form.

jWebAudio
jWebAudio (GitHub: 01org / jWebAudio, License: Apache 2.0) by Wenli Zhang and published by Intel’s Open Source Technology Center is an audio library focused on games:

Web Audio seeks to process and synthesize audio in web applications. jWebAudio keeps the technical details of Web Audio under the hood and makes it easier to control your audio.

It has a jQuery API and also a framework-agnostic JavaScript API. Playing a set of sounds looks like this:

$('.sound').each(function() {
  var $this = $(this);
  var url = $this.data('sound');
  $(this).jWebAudio('addSoundSource', {
    url: url,
    preLoad: true,
    callback: function() {
      // the sound has loaded and is ready to play
    }
  });
});
jWebAudio also supports synthesis and effects:

Sound effects include telephonize and cathedral currently. And you may create new sound effects using the combination of LOWPASS, HIGHPASS, BANDPASS, LOWSHELF, HIGHSHELF, PEAKING, NOTCH, ALLPASS.

There’s a demo here: jWebAudio demo.

Wenli also writes about JavaScript. Here’s a post about the gruesome details of Number, parseFloat, and parseInt: Converting To Numbers In JavaScript.

Scrolling.js Component

Guille Paz sent in Scrolling.js Component (GitHub: pazguille / scrolling, License: MIT, component: pazguille/scrolling). It allows you to decouple scrolling from callbacks to avoid generating too many scroll events (the old debouncing issue):

var scrolling = require('scrolling');

scrolling(document.querySelector('#box'), callback);
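The decoupling the component offers usually boils down to debouncing: collapsing a burst of scroll events into a single callback once things go quiet. A standalone sketch of the idea (not Scrolling.js's actual implementation, which can also lean on requestAnimationFrame):

```javascript
// Collapse a burst of calls into one invocation after `wait` ms of quiet.
function debounce(fn, wait) {
  var timer = null;
  return function() {
    var args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function() {
      fn.apply(null, args);
    }, wait);
  };
}

// In a browser this would be wired up as:
// window.addEventListener('scroll', debounce(onScroll, 100));
```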

The project is distributed as a component, and has a demo on the homepage.

HiDPI Canvas Polyfill, formatter.js

28 Oct 2013 | By Alex Young | Comments | Tags browser canvas polyfills

HiDPI Canvas Polyfill

Don’t you just hate it when a cool canvas animation suddenly goes blurry? HiDPI Canvas Polyfill by Jonathan Johnson scales the canvas so the PPI is right, avoiding unsightly blurring. Jonathan notes that Safari is currently the only browser that does this properly.

I don’t know if the author was inspired by How do I fix blurry text in my HTML5 canvas? on Stack Overflow, but the solution looks similar to MyNameIsKo’s answer.
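The general technique is to make the canvas backing store larger than its CSS size by the ratio of the device pixel ratio to the browser's backing-store ratio, then scale the drawing context to match. A sketch of the arithmetic (the general approach, not the polyfill's exact code):

```javascript
// Compute the backing-store size for a crisp canvas on high-DPI displays.
function hidpiSize(cssWidth, cssHeight, devicePixelRatio, backingStoreRatio) {
  var ratio = devicePixelRatio / backingStoreRatio;
  return {
    width: cssWidth * ratio,   // set as the canvas.width attribute
    height: cssHeight * ratio, // set as the canvas.height attribute
    scale: ratio               // pass to ctx.scale(ratio, ratio)
  };
}

// On a 2x display with no backing-store scaling, a 300x150 CSS-pixel canvas
// needs a 600x300 backing store, drawn with ctx.scale(2, 2).
var size = hidpiSize(300, 150, 2, 1);
console.log(size.width, size.height, size.scale); // 600 300 2
```

The CSS width and height stay at the original values, so the canvas occupies the same layout space but renders at the display's native resolution.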

Jonathan has gone further by including tests, and he’s included a Grunt build script as well.

Jonathan also sent in BubbleChart (GitHub: jondavidjohn / bubblechart, License: Apache 2.0, npm: bubblechart) which is an interactive visualisation for two dimensional data.

formatter.js
formatter.js (GitHub: firstopinion / formatter.js, License: MIT) by Jarid Margolin helps you define custom formatting for form fields. The examples given are a credit card form and a telephone number entry. The script can insert text as the user types, so the credit card form inserts hyphens, and the telephone number uses the US-style format with brackets and a single hyphen.

The API looks like this, but there’s a jQuery wrapper as well:

new Formatter(document.getElementById('credit-input'), {
  pattern: '9999-9999-9999-9999'
});

The project includes tests that can be run with npm, and there are examples here: formatter.js demos.
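The pattern idea itself is simple: a `9` consumes a digit from the input, and any other pattern character is inserted literally. A naive sketch of that behaviour (an illustration of the concept, not formatter.js's implementation, which also has to handle deletions and cursor position):

```javascript
// '9' in the pattern consumes one digit; anything else is a literal.
function formatPattern(pattern, digits) {
  var out = '';
  var i = 0;
  for (var p = 0; p < pattern.length && i < digits.length; p++) {
    if (pattern[p] === '9') {
      out += digits[i++];
    } else {
      out += pattern[p];
    }
  }
  return out;
}

console.log(formatPattern('9999-9999-9999-9999', '4111111111111111'));
// -> 4111-1111-1111-1111
```

The hard part formatter.js takes on is applying this live while the user types, keeping the caret in a sensible place as literals are inserted.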