Node Roundup: 0.11.16, io.js 1.1.0, Faster async, HTTP Tests

04 Feb 2015 | By Alex Young | Comments | Tags node iojs modules libraries

Node 0.11.16 and io.js 1.1.0

Node 0.11.16 is out, bringing updates to OpenSSL, npm, and some core modules: url, assert, and net. The util.inspect patch is interesting: errors generated by the assert module are now formatted with util.inspect instead of JSON.stringify. The idea behind this patch was to avoid errors caused by circular references.

io.js 1.1.0 also has this patch, but there are again lots of commits that are unique to io.js and aren’t in Node. Using my limited Git forensics skills, I tried to compare commits between the two projects, but I haven’t found a good way to do it yet. This is what I tried:

git clone https://github.com/joyent/node.git
cd node
git remote add iojs https://github.com/iojs/io.js.git
git fetch iojs
git checkout iojs/v1.x -b iojs
git branch --contains df48faf
git cherry -v origin/master iojs

I noticed that git branch showed a commit I know is only in io.js as present in the iojs branch I created from iojs/io.js’s v1.x branch. However, git cherry lists commits that are in both Node and io.js, so I’m still not sure of the best way to track the commits unique to each project.
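One approach worth trying is git log over the symmetric difference: --left-right marks which side of the range each commit belongs to, and --cherry-pick drops commits whose patches exist on both sides. A self-contained sketch on a throwaway repository (branch names are illustrative stand-ins for origin/master and iojs/v1.x):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name demo
echo base > file && git add file && git commit -qm 'shared commit'
git branch -m node            # stand-in for origin/master
git checkout -qb iojs
echo a > a && git add a && git commit -qm 'iojs-only commit'
git checkout -q node
echo b > b && git add b && git commit -qm 'node-only commit'
# '<' marks commits only on node, '>' commits only on iojs; commits whose
# patches appear on both sides are dropped by --cherry-pick:
git log --oneline --left-right --cherry-pick node...iojs
```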


I use the async module all the time without really worrying too much about performance, but Shunsuke Hakamata sent in neo-async (GitHub: suguru03/neo-async, License: MIT, npm: neo-async), a drop-in replacement for async created by Suguru Motegi which aims to improve performance.

The benchmarks were run on major browsers, Node (stable), and io.js. Not every method is implemented, but the results the author has generated look compelling. If you take a look at the async.waterfall benchmarks, async appears to get slower as the number of tasks grows, while neo-async stays flat.

HTTP Testing with Node

Jani Hartikainen sent in an article about HTTP testing with Node. It covers Mocha, Sinon, and uses stubs for the built-in http module:

Let’s start by adding a test that validates the get request handling. When the code runs, it calls http.request. Since we stubbed it, the idea is we’ll use the stub to control what happens when http.request is used, so that we can recreate all the scenarios without a real HTTP request ever happening.

To make sure the expected data is written into it, we need a way to check write was called. This calls for another test double, in this case a spy. Just like real secret agents, a spy will give us information about its target – in this case, the write function.

You could use this as the basis for faking requests to third-party APIs in your tests.
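A spy is ultimately just a function that records its calls. This hand-rolled sketch (plain Node, not Sinon's actual implementation) shows the idea behind the article's write check:

```javascript
// A minimal spy: a function that stores the arguments of every call so a
// test can assert on them afterwards.
function spy() {
  function fn() {
    fn.calls.push(Array.prototype.slice.call(arguments));
  }
  fn.calls = [];
  return fn;
}

// Pretend this is the stubbed request object from the article.
var fakeRequest = { write: spy(), end: spy() };

// Code under test writes to the "request":
fakeRequest.write('expected body');
fakeRequest.end();

console.log(fakeRequest.write.calls.length); // 1
console.log(fakeRequest.write.calls[0][0]);  // expected body
```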


03 Feb 2015 | By Alex Young | Comments | Tags data cache rest localStorage

Asaf Katz sent in JSData (GitHub: js-data/js-data, License: MIT, npm: js-data, Bower: js-data) by Jason Dobry, a model layer for caching data. It supports adapters for sending data to persistence layers, so you can install DSHttpAdapter (npm/Bower: js-data-http) to send data to a REST server or DSLocalStorageAdapter (npm/Bower: js-data-localstorage) for browser-based storage. There are even adapters for RethinkDB, Firebase, and Redis.

One nice thing about JSData is it uses Object.observe to watch for changes to data, so it doesn’t need any special getters or setters. It also supports promises, so you can use then after finding or inserting data.

It also has more advanced ORM-style features like relations, computed properties, lifecycle methods, and data validation. The schema and validation API isn’t based on JSON-Schema, but it has the kinds of features you’ve probably seen in other ORMs and ODMs like Mongoose:

var store = new JSData.DS();

var User = store.defineResource({
  name: 'user',
  schema: {
    id: {
      type: 'string'
    },
    name: {
      type: 'string',
      maxLength: 255
    },
    email: {
      type: 'string',
      nullable: false
    }
  }
});

User.create({
  name: 'John'
}).catch(function(err) {
  // Email validation errors will be in err
});

This is an example I adapted from the Schemata and Validation documentation.

JSData was originally written for AngularJS, but is now framework-agnostic. However, there is an Angular integration guide which shows how it can be used with Angular applications.

Because JSData can be used with Node and browsers, you could use it to define reusable models for single-page apps that sync when the server is available. It’ll also work well if you’re used to Rails-inspired frameworks like Ember, with fat models that include data validation and lifecycle methods.

Syphon, json-schema-benchmark

02 Feb 2015 | By Alex Young | Comments | Tags node modules libraries react json json-schema


Syphon (GitHub: scttnlsn/syphon, License: MIT, npm: syphon) by Scott Nelson is an implementation of the Flux architectural pattern for React. It helps you to structure applications around a single immutable state value, and implements the dispatching of actions to various state-transitioning handler functions.

The application’s state is stored in an immutable data structure that you create with the atom method. To access a value, you deref it, and you can also modify it by using swap:

var state = syphon.atom({ foo: 'bar' });

state.swap(function(current) {
  return current.set('foo', 'baz');
});

// => { foo: 'baz' }

Responding to state changes involves writing handlers, which take the form of functions with two arguments: value and currentState. The application’s state is available in the second argument.

Syphon’s author recommends using the multimethod module to deal with handlers that branch on multiple values.
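A multimethod is essentially dispatch on a value. As a rough illustration (plain JavaScript, not the multimethod module's actual API), a handler that branches on the dispatched action might look like this:

```javascript
// Illustrative dispatch table: pick a state-transitioning handler based on
// the action name, in the spirit of multimethod-style branching.
var handlers = {
  increment: function(value, currentState) { return currentState + value; },
  decrement: function(value, currentState) { return currentState - value; }
};

function dispatch(action, value, currentState) {
  return handlers[action](value, currentState);
}

console.log(dispatch('increment', 2, 10)); // 12
console.log(dispatch('decrement', 3, 10)); // 7
```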

Syphon includes the root method for mounting your component hierarchy, and a mixin for adding helpers to components for accessing application data and dispatch values.


I may need to create some JSON schemas in the near future, so it was fortunate that Allan Ebdrup sent in json-schema-benchmark, a benchmark for JSON schema validators that covers the most popular modules.

It also shows test failures when run against the official JSON Schema Test Suite. One cool addition is a summary of tests that caused side-effects, so you can see which validators altered values.

Material Design Icons, rdljs, PhotoSwipe

30 Jan 2015 | By Alex Young | Comments | Tags ui icons graphics microsoft images

Material Design Icons for Angular

Google’s Material Design stuff is amazing, and their recent UI and animation libraries are useful for those of us who don’t want to spend too much time developing a totally new UI for every project. However, these tools have limitations. Urmil Parikh found that the official Material Design Icons were hard to recolour without patching the SVG files.


To work around this issue, Urmil created Angular Material Icons v0.3.0 (GitHub: klarsys/angular-material-icons, License: MIT, Bower: angular-material-icons, npm: angular-material-icons). By including this project, you get a new Angular directive called ng-md-icon which supports options for style and size. It also optionally supports SVG-Morpheus – this allows you to morph between icons which might work well in animation-heavy material design projects.

This library works by hard-coding the SVG paths in an object called shapes. The paths can be embedded in svg elements with the required size and style attributes.
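The approach can be sketched like this; the shapes object and path data below are illustrative, not the library's actual contents:

```javascript
// Map icon names to SVG path data, then wrap the path in an <svg> element
// with the requested size.
var shapes = {
  check: 'M9 16.2L4.8 12l-1.4 1.4L9 19 21 7l-1.4-1.4z' // example path data
};

function iconSvg(name, size) {
  return '<svg width="' + size + '" height="' + size + '" viewBox="0 0 24 24">' +
         '<path d="' + shapes[name] + '"/></svg>';
}

var svg = iconSvg('check', 24);
console.log(svg.indexOf('width="24"') !== -1); // true
```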


André Vale sent in rdljs, a library for Microsoft RDL/RDLC reporting. This technology is completely new to me, so I had to ask him for clarification. Apparently, RDL stands for Report Definition Language, and is used with Microsoft SQL Server Reporting Services. RDL files are XML schemas for designing reports, so André’s library allows you to take RDLC files and render them in a browser.

The library comes with a demo, but you’ll need to run a web server to try it out. It’s intended to be used with a server-side application that sends the XML using Ajax requests, and it uses D3 and jQuery. The source that does the XML parsing looks extremely involved, so it would be interesting to see what people with RDL experience can do with it.


PhotoSwipe (GitHub: dimsemenov/PhotoSwipe, License: MIT, Bower: photoswipe) by Dmitry Semenov is a mobile-friendly image gallery. It works through instances of the PhotoSwipe object, and there are methods for things like going to the next/previous slide, and skipping to a specific slide. The API also supports events.

There’s full documentation in website/documentation/ which explains how to add slides dynamically, so you can load slide data asynchronously if required.

One thing I liked about the PhotoSwipe was the mouse gestures: if you click and swipe it works in a similar way to swiping on a touchscreen. It also feels fast and lightweight.

Deploy Apps with Flightplan

29 Jan 2015 | By Alex Young | Comments | Tags node modules libraries deployment

If you saw my post about Shipit yesterday, then you might be interested in trying out Patrick Stadler’s module, Flightplan (GitHub: pstadler/flightplan, License: MIT, npm: flightplan). Flightplan is a deployment library that is more influenced by Python’s Fabric than Ruby’s Capistrano. It can also be used for general administration tasks as well.

If you want to use it, you’ll need a flightplan.js file. This defines the deployment targets, like staging and production, and contains the local and remote commands to run. I like the syntax of the plans because they’re based on simple JavaScript callbacks with locally scoped variables rather than any kind of magic globals:

plan.remote(function(remote) {
  remote.log('Move folder to web root');
  remote.sudo('cp -R /tmp/' + tmpDir + ' ~', {user: 'www'});
  remote.rm('-rf /tmp/' + tmpDir);

  remote.log('Install dependencies');
  remote.sudo('npm --production --prefix ~/' + tmpDir
                            + ' install ~/' + tmpDir, {user: 'www'});

  remote.log('Reload application');
  remote.sudo('ln -snf ~/' + tmpDir + ' ~/example-com', {user: 'www'});
  remote.sudo('pm2 reload example-com', {user: 'www'});
});
Groups of commands like this are called “flights”. You can run subsets of flights, which are known as tasks:

plan.local(['deploy', 'build'], function(transport) {});
plan.remote(['deploy', 'build'], function(transport) {});

The transport argument for flights contains a runtime property that lets you query the target, hosts, and options. You could use this to print extra debugging information based on options passed to the task. This design means you could split flights into separate modules, which might be useful if you need to orchestrate the deployment of a collection of separate apps or microservices.

Transports also have some handy utility methods. You can rsync files with the transfer method, like this: local.transfer(['file.txt'], '/tmp/foo'). If you want to ask questions during a script you can use transport.prompt – this might be useful for restricting deployment to specific employees without having to store passwords in your repository. There’s also a waitFor method that can be used to wrap asynchronous operations.

The fact it supports synchronous, blocking operations means you can write lists of commands more like the way shell scripts operate. I assume this is how most devops and sysadmins prefer to work, so Flightplan might make more sense to them rather than dealing with asynchronous APIs.

To make this possible Flightplan uses fibers. Here’s a snippet from lib/flight/remote.js:

var execute = function(transport) {
  var future = new Future();
  new Fiber(function() {
    var t = process.hrtime();
    logger.info('Executing remote task on ' + host);
    fn(transport);
    logger.info('Remote task on ' + host +
                ' finished after ' + prettyTime(process.hrtime(t)));
    return future.return();
  }).run();
  return future;
};

Patrick notes that Flightplan now supports cloud providers (including Amazon EC2 and DigitalOcean), so you can easily define remote hosts that may change frequently on large projects. In a way I think this makes Flightplan a little bit like Puppet and Chef. If you still haven’t found your perfect deployment solution, or if you have to maintain remote servers, then you should give it a try.

Node Roundup: 0.10.36, io.js 1.0.4, Object.observe, Shipit

28 Jan 2015 | By Alex Young | Comments | Tags node modules libraries iojs deployment

Node 0.10.36, io.js 1.0.4

The latest stable release of Node, 0.10.36, came out earlier this week. The changelog is short with just three bullets, but one thing that stands out is “v8: don’t busy loop in cpu profiler thread”. The commit for this is 6ebd85:

Before this commit, the thread would effectively busy loop and consume 100% CPU time. By forcing a one nanosecond sleep period rounded up to the task scheduler’s granularity (about 50 us on Linux), CPU usage for the processor thread now hovers around 10-20% for a busy application.

In io.js, 1.0.4 has been released. I noticed in the changelog that there’s a patch for V8 and ARMv6, so io.js should work again on platforms like Raspberry Pi.


It seems like I’ve been writing about a lot of frameworks that use ES6/7, but sometimes you just want a specific polyfill rather than an entire framework. A new polyfill that Massimo Artizzu sent me is for Object.observe (GitHub: MaxArt2501/object-observe, License: MIT, npm: object.observe, Bower: object.observe), which you can use with Node and browsers.

The readme explains the basic usage, but there’s also more detailed documentation. This also explains the two versions of the API:

If you don’t need to check for “reconfigure”, “preventExtensions” and “setPrototype” events, and you are confident that your observed objects don’t have to do with accessor properties or changes in their descriptors, then go for the light version, which should perform reasonably better on older and/or slower environments.

In Node 0.10.x, you can use it like this:

if (!Object.observe) require('object.observe');

This will avoid loading it when your version of Node gets Object.observe in the future.



Greg Bergé sent in Shipit (GitHub: shipitjs/shipit, License: MIT, npm: shipit-cli), a deployment tool. It can use SSH to sign in to a server and run scripts, and it handles multiple environments so you can create settings that will work for production, staging, CI, and so on.

It has an event-based API with tasks, in a style that reminds me of Grunt and Gulp. It uses plain old SSH and rsync under the hood.

Shipit recently got lots of attention because Ghost uses it. I haven’t been able to find an exact reference to Shipit in Ghost’s source, so it’s presumably used for deploying their commercial hosted service. However, I think Shipit could also be useful as part of Ghost’s open source stack, because it would help people who download Ghost and deploy it to their own servers. It would be interesting to learn more about the Ghost connection.

The Aurelia Client Framework

27 Jan 2015 | By Alex Young | Comments | Tags frameworks libraries browser


Vildan Softic wrote in to tell me about Aurelia, a new client-side framework. It got a lot of press recently, so I’ve been reading through the documentation and source to learn more about it.

Aurelia was created by Rob Eisenberg and is the successor to an older framework called Durandal 2.x. It’s built from several smaller libraries, each providing a distinct set of features.

Aurelia uses ECMAScript 6 features like modules, so it relies on jspm as the client-side package manager. One of jspm’s main features is that it can load any module format.

View models are managed using dependency injection containers, and it combines this with ES6 classes to make the syntax relatively lightweight. This example uses DI to provide a lazy resolver for HttpClient:

import {Lazy} from 'aurelia-framework';
import {HttpClient} from 'aurelia-http-client';

export class CustomerDetail {
  static inject() { return [Lazy.of(HttpClient)]; }
  constructor(getHTTP) {
    this.getHTTP = getHTTP;
  }
}
In theory you could now swap the implementation of HttpClient, which would be ideal for writing tests against a mocked server.

Aurelia doesn’t really depend on many Node modules – it’s mainly build tools like Gulp and Karma for testing. The build scripts allow it to provide ES6 features using the 6to5 transpiler, so it’s an important part of the framework. However, this entire layer is really a giant polyfill that allows you to focus on writing modern JavaScript.

It has features that are similar to other frameworks, particularly AngularJS. The dependency injection approach is different to Angular’s; it reminds me more of some of the .NET and Java desktop projects that I’ve seen over the last few years. Also, features like two-way binding and custom HTML elements will appeal to people who are trying to make reusable UI components that can be shared across projects.

I realise that many readers suffer from framework fatigue, but Aurelia’s early adoption of ES6 features to solve some of the problems we now face in client-side development is extremely promising.


Lo-Dash v3, g5-knockout.js

26 Jan 2015 | By Alex Young | Comments | Tags lo-dash libraries articles knockout

Harder, Better, Faster, Stronger Lo-Dash v3

I’ve noticed that Lo-Dash is a popular module on npm, and it’s recently had a major release for version 3.0. This version brings some new features and interesting internal changes.

Gajus Kuizinas sent in Harder, Better, Faster, Stronger Lo-Dash v3, a post that explores version 3’s features, such as lazy evaluation, pipeline computing, deferred execution, and newly available methods.

Version 3 is the biggest update to Lo-Dash ever. With an ever-increasing user base, a modular code base, and cross-browser compatibility, Lo-Dash is the go-to utility library for years to come. With that in mind, there is an ongoing competition with lazy.js and underscore.js, both of which are in active development.

Knockout/Browserify Base App with Events

Working with Knockout and Browserify is a pretty solid choice for many client-side projects, but because these are libraries rather than frameworks you need to know what you’re doing before getting started. Greg Babula has created g5-knockout.js which you can use as a foundation for your next MVVM project. It includes an event bus for decoupling, and this is based on Node’s events module, which is made possible thanks to Browserify. I also noticed Greg uses Lo-Dash.

Greg included a blog post in his submission which explains the concept and decisions behind the main dependencies. You can look at the source to see things like how view models are made.

EvenNode, Mozaik, Marko

23 Jan 2015 | By Alex Young | Comments | Tags node modules libraries sponsored-content hosting


EvenNode is a new hosting service that is dedicated to Node.js. You can get a free app instance which even includes MongoDB, so it’s great for quickly deploying MEAN apps. The paid tiers start at €6, which gets you 1GB storage and 256MB RAM. One interesting thing about the paid tiers is they all get unlimited custom domain names, so it’s easy to use multiple domain names for each application.


EvenNode supports multiple versions of Node, starting at 0.8.6. You can deploy with Git, and WebSockets are supported out of the box. For more technical details, take a look at the documentation. Sign up only requires an email address and password, so you don’t even have to hand over credit card details to try it out!



Mozaik (GitHub: plouc/mozaik, License: MIT, Demo) by Raphaël Benitte is a Node-based web-app for showing dashboards. It includes some widgets for CI and monitoring, but you can add more. It also comes with five themes. Of course, you can create your own widgets and themes.

Other than the very clear and attractive design, one feature that I liked was rotating dashboard layouts. This would be cool if you’ve got multiple teams in the same office that perform different roles. You could rotate between support information, developer tickets/CI, and server monitoring.

Mozaik is built with React and Express.


Marko (GitHub: raptorjs/marko, License: Apache 2.0, npm: marko) by Patrick Steele-Idem is a templating language for Node and browsers that uses HTML with custom tags. It’s being used at eBay as part of their Node stack, and it has been covered by some cool blogs like the StrongLoop blog.

There’s a live demo where you can try out the syntax. The markup uses attributes for data binding and iteration, so you can say if="notEmpty(data.colors)" and repeat elements with for="color in data.colors". I haven’t seen this syntax before, but I think it would be easy to learn if you’re used to declarative template systems.
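Based on the attributes described above, a template might look roughly like this (a sketch of the syntax, not verified against a particular Marko version; the `${color}` interpolation is an assumption):

```html
<ul if="notEmpty(data.colors)">
    <li for="color in data.colors">
        ${color}
    </li>
</ul>
```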

One thing about Marko that makes a huge amount of sense to me is the fact it’s asynchronous. You can stream output with Node’s HTTP response objects, which will fit in extremely well with frameworks like Express. The design philosophy statement for Marko says “it should be possible to render HTML out-of-order, but the output HTML should be streamed out in the correct order”, and I think this is extremely useful.

The documentation is great and you can extend it with custom tag libraries – why not try it out now?

Terminal.js: A Terminal Emulator Library

22 Jan 2015 | By Alex Young | Comments | Tags client-side node libraries modules


These days the terms VT100 and terminal are synonymous with either text-based interfaces or a shell. But there was a time when terminal referred to a “computer terminal” – a machine that allows you to input commands into a computing system and view the results. This includes CRTs and teletype printers. The VT100 terminal was based on an Intel 8080 and, if I remember my computer history correctly, represents a transitory stage between the era of remote mainframes and local minicomputer systems.

The thing that’s interesting about the VT100 is it implemented some of the ANSI escape code sequences that we still use to annoy people on IRC and make quirky command-line tools. So when you see a terminal emulator set to VT100, it means it supports the features the VT100 had for cursor movement, clearing the display, and so on.
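Those escape sequences are easy to demo. This snippet uses raw ANSI codes (no libraries) to colour a string the way a VT100-compatible terminal would render it:

```javascript
// \u001b is ESC; '[31m' selects a red foreground, '[0m' resets attributes.
var red = '\u001b[31m';
var reset = '\u001b[0m';

var message = red + 'error:' + reset + ' something failed';
console.log(message); // shows 'error:' in red on an ANSI-capable terminal

// The escape characters really are in the string:
console.log(message.charCodeAt(0)); // 27
```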

Terminal.js (GitHub: Gottox/terminal.js, License: MIT/X Consortium License, npm: terminal.js) by Enno Boland is a VT100 emulator that works in browsers and Node. Enno has also published node-webterm, which you can use to try out the project in a browser.

The API allows you to create a terminal object, write data to it, and get data back out. This demo uses the colors module:

var colors = require('colors');
var Terminal = require('terminal.js');
var terminal = new Terminal({ columns: 20, rows: 2 });

terminal.write('Terminal.js in rainbows'.rainbow);

Internally, the source is split into handler, input, and output modules. The handlers are basically JSON maps between characters and internal state. For example, handler/esc.js has the escape code handlers for things like moving the cursor down a column (ESC D).
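The handler-table design can be sketched like this (illustrative, not Terminal.js's actual internals):

```javascript
// Map the final character of an escape sequence to a state transition on
// the terminal's cursor position.
var escHandlers = {
  D: function(state) { state.y += 1; }, // ESC D: index (cursor down one line)
  M: function(state) { state.y -= 1; }  // ESC M: reverse index (cursor up)
};

var state = { x: 0, y: 0 };
escHandlers.D(state);
escHandlers.D(state);
escHandlers.M(state);
console.log(state.y); // 1
```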

You could use this module to provide a command-line interface to web-based developer tools, or maybe even make your own Chrome OS/iOS-friendly IDE. The node-webterm example demonstrates how to connect the terminal up with WebSockets, so it actually feels pretty low latency. Check out node-webterm/index.html to see how that can be implemented!

Node Roundup: Node 0.11.15, PublicSuffixList, Robe

21 Jan 2015 | By Alex Young | Comments | Tags node modules libraries mongodb

Node 0.11.15

Node 0.11.15 is out. This is a substantial release that updates v8 to 3.28.73, npm to 2.1.6, and uv to 1.0.2. There are patches for the build system and core modules as well. I looked over the changes to the core modules and it looks like it’s mostly bug fixes, but some modules (url and readline) have had performance improvements as well.

Meanwhile, io.js was updated to 1.0.3. This upgrades v8 from 3.31 to 4.1, which the changelog notes is not a major release because “4.1” is derived from Chrome 41. There’s also better platform support, including Windows XP, Windows 2003, and improved FreeBSD support.

If you look through the commits in both projects it should be obvious that some commits are shared, but some are unique to each fork. There was a popular post on Hacker News about performance comparisons between Node and io.js, so it’ll be interesting to see how the projects diverge and converge over the next few months.


Matthias Thoemmes sent in PublicSuffixList (GitHub: cmtt/publicsuffixlist, License: MIT, npm: publicsuffixlist), a validator for domain names and top level domains. It uses data from Mozilla’s project:

A “public suffix” is one under which Internet users can directly register names. Some examples of public suffixes are .com, .co.uk and pvt.k12.ma.us. The Public Suffix List is a list of all known public suffixes.

The module provides methods like validateTLD and validate(domainString), and you can also look up a domain to destructure it:

var result = psl.lookup('www.domain.com');

/* result === {
  domain: 'domain',
  tld: 'com',
  subdomain: 'www'
} */


Robe (GitHub: hiddentao/robe, License: MIT, npm: robe) by Ramesh Nair is a MongoDB ODM that uses ES6 generators. The author said he couldn’t quite find the MongoDB API that he wanted, although Mongorito came close. So he used Monk to build Robe.

Because it supports generators you can yield results in the synchronous style:

var result = yield collection.findOne({
  score: {
    $gt: 20
  }
}, {
  fields: ['name'],
  sort: { name: 1 },
  skip: 1
});
It also supports lifecycle hooks, so you can run callbacks after or before data is inserted, removed, and so on.

Robe will validate data based on your schema, so running yield collection.insert() with an invalid record will cause an exception to be raised. Finally, it also supports streams, so you can use an event-based API to handle large sets of data.

PWS Tabs, wheelnav.js

20 Jan 2015 | By Alex Young | Comments | Tags jquery tabs ui libraries animation

PWS Tabs

PWS Tabs (GitHub: alexchizhovcom/pwstabs, License: MIT) by Alex Chizhov is a jQuery plugin that helps you create modern, flat tabs, with CSS3 animations and Font Awesome Icons.

It uses data- attributes to set the tab’s title and icon. The $(selector).pwstabs method accepts options for the tab position (horizontal or vertical) and the animation effect name.

The plugin source itself is fairly lightweight, and it comes with examples that include CSS. When I reviewed this plugin I had to download the examples because I couldn’t access Alex’s site, but he does link to a live demo in the readme.


wheelnav.js (GitHub: softwaretailoring/wheelnav) by Gábor Berkesi is an SVG-based library that you can use for a centerpiece animated navigation component. This might work well for a marketing website, which is the example Gábor uses, but I think it would also be really cool on a child-friendly site with lots of bright colours.

The licensing is mixed – there are free versions but also more advanced paid versions.

A good example of the library can be found on the download page, which ties a spinning component in with some tabbed navigation.

angular-oauth2, angular-websocket


angular-oauth2 (GitHub: seegno/angular-oauth2, License: MIT, npm: angular-oauth2, Bower: angular-oauth2) by Rui Silva at Seegno is an AngularJS OAuth2 client module that uses ES6 internally. The author has kindly released it on npm and Bower, so it should be easy to quickly try it out.

To use it, you have to configure OAuthProvider with the server’s base URL, and then the client ID and secret. Then you can request access and refresh tokens as required. The error handling is nice because you can hook into error events with $rootScope.$on('oauth:error', ...).

I’ve written a few browser extensions that use OAuth, and this seems ideal for that kind of work. It will also be useful for your Angular single page web apps.


PatrickJS (Patrick Stapleton) sent in angular-websocket (GitHub: gdi2290/angular-websocket, License: MIT, npm: angular-websocket, Bower: angular-websocket), which is a WebSocket library for AngularJS 1.x. It works by providing a factory, $websocket, that you can use to stream data between the server and your Angular app.

The API is modeled around the standard WebSocket API:

angular.module('YOUR_APP', [
  'ngWebSocket' // you may also use 'angular-websocket' if you prefer
])
.factory('MyData', function($websocket) {
  // Open a WebSocket connection
  var dataStream = $websocket('wss://website.com/data');

  var collection = [];

  dataStream.onMessage(function(message) {
    collection.push(JSON.parse(message.data));
  });

  var methods = {
    collection: collection,
    get: function() {
      dataStream.send(JSON.stringify({ action: 'get' }));
    }
  };

  return methods;
})
.controller('SomeController', function($scope, MyData) {
  $scope.MyData = MyData;
});
In this example you would use MyData in the corresponding markup. Patrick’s example uses ng-repeat to iterate over the data.

The project includes browser tests and a build script. It doesn’t have any fallback for browsers that don’t support WebSockets, so it’s a pure WebSocket module rather than something like Socket.IO.

Gifshot, Cquence.js

16 Jan 2015 | By Alex Young | Comments | Tags libraries animations



Gifshot (GitHub: yahoo/gifshot, License: MIT) by Chase West and Greg Franko at Yahoo Sports is a library for creating animated GIFs from video streams or images. It’s client-side and uses Web Workers, so you can use it with existing sites without too much server-side work. To generate the GIFs it uses lots of modern APIs: WebRTC, FileSystem, Video, Canvas, Web Workers, and (everyone’s favourite!) Typed Arrays.

Gifshot is ideal for adding overlays to animations, or for extracting thumbnails from videos. You could capture footage from the webcam and share it to users as part of a chat service or game, for example. It comes with a demo that uses Node, so you can easily see the kinds of options it supports.

The basic API is just gifshot.createGIF, which accepts an options object that specifies the type of content. For example, generating a GIF from a video stream can be done with gifshot.createGIF({ video: ['example.mp4', 'example.ogv'] }).


Cquence.js by Ramon Gebben (GitHub: RamonGebben/Cquence, License: MIT) is a small animation library. The author developed it for advertising banners, and it has an interesting compositional style:

var render = combine(
    linear('frame3', 10000, { left: -900 }, { left: 300 }),
    easeOut('frame1', 2000, { left: -1000 }, { left: 120 })
    // ...
);

It goes against that “globals are bad” API style, but it would be possible to repackage it as a CommonJS module. It uses requestAnimationFrame and has older IE support for opacity.

There’s a demo that demonstrates the kind of sequential animations you can build up with it.

Mercury: A Modular Front-end Framework

15 Jan 2015 | By Alex Young | Comments | Tags libraries browser frameworks

Michael J. Ryan alerted me to a front-end framework called mercury. It’s written by Raynos (Jake Verbaten), who has created several useful JavaScript libraries, and mercury seems to have attracted a lot of interest (it’s almost at 1,000 stars on GitHub).

You can find it at Raynos/mercury on GitHub (License: MIT, npm: mercury). I think you’ll like it because it’s definitely a modern JavaScript programmer’s client-side framework. It has some things in common with React – the virtual DOM, state management, and render methods – but the big difference is you can almost completely forget about the DOM. It has a streamlined markup API, so there are no tags mixed in with JavaScript.

It’s still small, though, and it plays well with other libraries. The readme has examples that show mercury alongside Backbone and JSX. There are some demos as well that include things like todomvc. Here’s the basic example from the readme:

'use strict';

var document = require('global/document');
var hg = require('mercury');
var h = require('mercury').h;

function App() {
  return hg.state({
    value: hg.value(0),
    channels: {
      clicks: incrementCounter
    }
  });
}

function incrementCounter(state) {
  state.value.set(state.value() + 1);
}

App.render = function render(state) {
  return h('div.counter', [
    'The state ', h('code', 'clickCount'),
    ' has value: ' + state.value + '.', h('input.button', {
      type: 'button',
      value: 'Click me!',
      'ev-click': hg.send(state.channels.clicks)
    })
  ]);
};

hg.app(document.body, App(), App.render);

Mercury encourages immutable state, so the render methods are pure in that they take state and generate markup. However, some of mercury’s features are not immutable. Widgets are more complex and allow low-level DOM manipulation.

This idea of mixing functional and imperative programming techniques reminds me of Clojure, and perhaps even how F# feels less “pure” than Haskell because it has to interface with .NET. Raynos identifies the mixed-paradigm approach in the documentation:

Mercury is also a hybrid of functional and imperative, the rendering logic is functional but the updating logic has an imperative updating interface (backed by a stream of immutable objects under the hood).

If you want to read more about mercury, the documentation is split into several Markdown files that you can view on GitHub in mercury/docs. There’s nothing specifically Node-centric about mercury, but because it can be installed with npm and is made of small CommonJS modules, it feels like it would work well for Browserify/CommonJS projects.

Node Roundup: io.js 1.0.1, jsop, HAP-NodeJS

14 Jan 2015 | By Alex Young | Comments | Tags node modules libraries forks

io.js 1.0.1

io.js has now released versions 1.0.0 and 1.0.1. There’s a changelog with a summary that explains what’s changed from Node 0.10.35 to 1.0.0. Things are complicated slightly by the fact that changes from Node’s unstable branch are merged in as well:

The io.js codebase inherits the majority of the changes found in the v0.11 branch of the joyent/node repository and therefore can be seen as an extension to v0.11.

There are some important ES6 changes in io.js, partly because V8 has been updated, but also because you don’t need to use the --harmony flag for many useful ES6 features:

All harmony features are now logically split into three groups for shipping, staged and in progress features: All shipping features, the ones that V8 has considered stable, like generators, templates, new string methods and many others are turned on by default on io.js and do NOT require any kind of runtime flag.
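For instance, generators run in io.js 1.x without any flags, where Node 0.11 would have required --harmony. A quick illustrative sketch (not from the changelog):

```javascript
// Generators are in the "shipping" group, so no runtime flag is needed.
function* fibonacci() {
  var a = 0, b = 1;
  while (true) {
    yield a;
    var next = a + b;
    a = b;
    b = next;
  }
}

var seq = fibonacci();
var first = [];
for (var i = 0; i < 5; i++) {
  first.push(seq.next().value);
}
console.log(first); // [ 0, 1, 1, 2, 3 ]
```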

The other binary libraries (C-Ares, http_parser, libuv, npm, openssl, punycode) and Node’s core modules have also been updated. There are even some new core modules: see smalloc and v8. The smalloc module can be found in Node 0.11.x, but v8 was reintroduced by Ben Noordhuis to io.js:

I introduced this module over a year ago in a pull request as the v8 module but it was quickly subsumed by the tracing module.

The tracing module was recently removed again and that is why this commit introduces the v8 module again, including the new features it picked up in commits d23ac0e and f8076c4.

I realise that the existence of io.js is confusing and potentially worrying for Node developers. If this is new to you, take a look at the io.js Project Governance and this comment by duncanawoods on Hacker News:

In their words, io.js is “A spork of Node.js with an open governance model”.

Sporking: Creating a fork that you would really like to become the next main-line version, but you kinda have to prove it’s awesome first (sporks are pretty awesome)


With the release of io.js 1.0.0 (and Node 0.11.13), typicode has created jsop (GitHub: typicode/jsop, License: MIT, npm: jsop), a one-way data-binding library for JSON files. It’s built with Object.observe, and lets you bind an object to a file like this:

var jsop = require('jsop')

var config = jsop('config.json')
config.foo = 'bar' // the key is illustrative; assignments are persisted

This will actually write the change to the JSON file. There’s a before and after example in the readme, so you can see how much syntax this module saves. This module is based on observed, which provides Object.observe with nested object support.

The same author also sent in homerun, which is a module that takes advantage of npm 2’s support for script arguments. This project basically allows you to get a command-line interface for free, once you’ve run npm link or npm publish.
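The npm 2 feature homerun builds on is argument passing: anything after `--` in `npm run <script> -- <args>` is forwarded to the script, where it shows up on process.argv. A minimal sketch (the file and script names are hypothetical):

```javascript
// cli.js: with "scripts": { "greet": "node cli.js" } in package.json,
// running `npm run greet -- --name=world` makes '--name=world' appear here.
var args = process.argv.slice(2);
console.log('received arguments:', args);
```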


Node and HomeKit

Watches aside, the smartphone and tablet vendors are also trying to push the idea of the connected home. I was looking into ways to script my Wi-Fi lightbulbs and ran into HAP-NodeJS:

HAP-NodeJS is a Node.js implementation of HomeKit Accessory Server. With this project, you should be able to create your own HomeKit Accessory on Raspberry Pi, Intel Edison or any other platform that can run Node.js :) The implementation may not 100% follow the HAP MFi Specification since MFi program doesn’t allow individual developer to join.

HomeKit is Apple’s iOS 8 framework for controlling electronics in the home. The author notes that the project isn’t necessarily 100% compatible with the HAP MFi specification, but there’s a short video of it working with an iPhone. That means you should be able to get an iOS device to connect to anything that is capable of running the HAP-NodeJS server.

ngTestHarness, call-n-times, modern.IE Automation

13 Jan 2015 | By Alex Young | Comments | Tags testing libraries scripts angular


David Posin sent in ngTestHarness, a test harness for Angular scopes, controllers, and providers. It helps to reduce the amount of boilerplate needed for dependency injection during testing.

By default it loads the ng, ngMock, and ngSanitize modules. By using the ngHarness API, tests can be as simple as:

describe('Test the note-editor directive', function() {
  var harness = new ngHarness(['noteEditor']);

  it('Adds the container div', function() {
    expect(harness.compileElement('<my-note></my-note>').html()).toBe('<div class="editor-container"></div>');
  });
});

The ngHarness object manages the required dependency injections for each test context. The API documentation covers the ngTestHarness class and each method that it implements.

David notes that the project was the result of work by Team Titan at Gaikai, which is owned by Sony Entertainment.


call-n-times by Shahar Or (GitHub: mightyiam/call-n-times, License: MIT, npm: call-n-times) came about when the author was trying to make test code cleaner. Given a function, this module will synchronously run it the specified number of times:

var assert = require('assert');
var call = require('call-n-times');

function logAndReturn() {
  return 'foo';
}

var returns = call(logAndReturn, 3);
assert.equal(returns.length, 3);
assert.equal(returns[0], 'foo');

Shahar wondered if there’s a built-in JavaScript way to do this, something closer to how it would work if you used an array and forEach. Does anyone know?
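One candidate (my aside, not from the submission) is ES6’s Array.from, which accepts an array-like length and a map function:

```javascript
function logAndReturn() {
  return 'foo';
}

// Array.from calls the map function once per index and collects the results.
var returns = Array.from({ length: 3 }, logAndReturn);
console.log(returns); // [ 'foo', 'foo', 'foo' ]
```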

modern.IE, VirtualBox, Selenium

I expect many readers use modern.IE for testing. Wouldn’t it be nice if you could quickly create new instances so you can trash old ones, or use them as part of a test suite? Denis Suckau sent in a shell script (License: MIT) for automatically creating VirtualBox virtual machines from modern.IE Windows images. It supports various configuration options, like the Selenium and Java environment variables, VM memory usage, and so on.

The readme has details on how to configure it, and the background behind the project:

As the refuses to run more than 30-90 Days (at least for more than an hour) we remove the machines on a regular basis and recreate the original Appliance with all changes needed to run Selenium.

Use it with your favored test runner (maybe Karma or Nightwatch.js) to automate JavaScript tests in real browsers on your own Selenium Grid. Other WebDriver language bindings (Python, Java) should work as well.

If you wanted to install IE 6, you could run it with the argument VMs/IE6\ -\ WinXP.ova. It also supports deleting a VM, so you can delete and recreate IE 6 with VMs/IE6\ -\ WinXP.ova --delete "IE6 - WinXP".

Z3d, is-progressive, Japanese Translations, textlint

12 Jan 2015 | By Alex Young | Comments | Tags webgl graphics libraries japanese translations jpeg



Z3d (GitHub: NathanEpstein/Z3d, License: MIT, Bower: z3d) by Nathan Epstein is a library that generates 3D plots. It gives you a Three.js scene, so you can further manipulate the graph with the Three.js API.

One of the useful things about Z3d is it has sensible built-in defaults, so you can throw data at it and get something cool out. The basic example is just plot3d(x, y, z), where the arguments are arrays of numbers to plot in 3D space.

The plot3d function also accepts a configuration argument, which allows things like colours and labels to be defined.
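A sketch of what feeding it data might look like (browser-only: plot3d comes from the Z3d script, so the call is guarded here, and the data is arbitrary sample points):

```javascript
// Generate sample points, with z as a function of x and y.
var x = [], y = [], z = [];
for (var i = 0; i < 100; i++) {
  x.push(Math.random() * 10);
  y.push(Math.random() * 10);
  z.push(Math.sin(x[i]) * Math.cos(y[i]));
}

// plot3d returns a Three.js scene for further manipulation.
if (typeof plot3d === 'function') {
  var scene = plot3d(x, y, z);
}
```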


If you’re running through a checklist of website performance improvements and see image optimisation, then you might want some automated tools to do the job for you. Sindre Sorhus suggests using progressive JPEG images, and sent in a tool that can check if a JPEG is progressive or not.

It’s called is-progressive (GitHub: is-progressive, License: MIT, npm: is-progressive), and it can be used as a Node module or on the command-line. On the command-line, you can either run is-progressive filename or redirect data into it.

The Node module supports streams and has a synchronous API. That means you can call isProgressive.fileSync, or pipe an HTTP response through it with res.pipe.
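The underlying check is small: progressive JPEGs contain an SOF2 (progressive DCT) start-of-frame marker, the byte pair 0xFF 0xC2, where baseline images use SOF0 (0xFF 0xC0). A rough sketch of the idea, not is-progressive’s actual source:

```javascript
function isProgressiveJpeg(buf) {
  // Scan for the first start-of-frame marker in the byte stream.
  for (var i = 0; i < buf.length - 1; i++) {
    if (buf[i] !== 0xFF) continue;
    if (buf[i + 1] === 0xC2) return true;  // SOF2: progressive DCT
    if (buf[i + 1] === 0xC0) return false; // SOF0: baseline DCT
  }
  return false;
}

// JPEG SOI marker (0xFF 0xD8) followed by a progressive SOF marker:
console.log(isProgressiveJpeg(Buffer.from([0xFF, 0xD8, 0xFF, 0xC2]))); // true
console.log(isProgressiveJpeg(Buffer.from([0xFF, 0xD8, 0xFF, 0xC0]))); // false
```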

DailyJS Japanese Translation and textlint

Hideharu Sakai wrote in to say he’s been translating DailyJS into Japanese.

He also mentioned a nice project for linting text: textlint (License: MIT, npm: textlint). You can use it to check plain text and Markdown files against customisable rules. The example rule ensures to-do lists are written in a certain way.

You can use textlint as a command-line tool or a Node module, so you could put it into a build script to validate your project’s documentation.

Many thanks to Hideharu!

After Many Years of Desktop Programming...

09 Jan 2015 | By Alex Young | Comments | Tags webgl graphics

If you’re interested in WebGL development, then you might enjoy My forays into JavaScript by Stavros Papadopoulos. It’s basically a paper, WebGL GUI library, and terrain rendering engine all rolled into one fascinating project. The GUI toolkit is desktop and Android inspired, but it’s rendered with WebGL. There’s a live demo that loads chunks of terrain data as you fly around – the terrain is rendered offline by a Delphi program.

Project Windstorm

There are a few things that I found interesting about this project. The author is a desktop developer who is learning JavaScript, so the demo itself should be encouraging to anyone in the same position. The screenshots are incredible, and it runs fairly well (I get 30fps on a 2013 MacBook Air).

The section in the article about fetching tiles will be of interest to anyone starting out in WebGL game development. If you’ve got server-side experience then designing the API for this is an early problem that you’ll have to tackle:

A very simple and efficient way to determine the potentially visible tiles is by performing a simple intersection test against the bounding box of the viewing frustum’s projection on the Y = 0 (horizontal) plane (see first image below).

A slightly more involved but potentially better way (as long as the viewpoint remains above and relatively close to the ground) is to construct our bounding box from the points of the polygon that results from intersecting the frustum with the horizontal plane, combined with the projected points of the frustum’s near plane (got that?).

I find the GUI particularly impressive: it’s actually pretty usable, and I haven’t managed to break it yet. It would be nice if Stavros released it as a library. I haven’t spent much time taking the JavaScript apart yet, so I don’t know if he’s used any libraries. In fact, I entered flight mode in the demo and ended up wasting all of my DailyJS writing time for the day!

All in all, the GUI weighs in at ~12KLOC and spans 24 files, which is quite a bit more than the terrain engine itself!

It sounds like WebGL GUIs can be complicated, depending on your requirements. In this case I think it was well worth the effort.

From AS3 to TypeScript

08 Jan 2015 | By Alex Young | Comments | Tags as3 typescript parsers

If you’ve got existing ActionScript assets but want to migrate them to another language, what do you do? One possibility is to convert AS3 to TypeScript. They share similar language features, but are different enough that the process isn’t trivial.

There’s a project called AS3toTypeScript by Richard Davey that runs through lots of simple replacements, like converting Boolean to bool. It’s written in PHP and can be used with a web interface.

In From AS3 to TypeScript by Jesse Freeman, the author converts a game to TypeScript, which seems like a typical task for an ActionScript developer. He points out that TypeScript works well because of the Visual Studio support, so it makes sense if you’re a Microsoft-based developer.

François de Campredon recently sent me as3-to-typescript (License: Apache, npm: as3-to-typescript). This is a Node-based project that includes the AS3Parser from Adobe. That means the conversion process actually parses ActionScript and attempts to output syntactically correct TypeScript.

The tests are fairly minimal given the goals of the project, but they do show what the tool currently supports. I’m not particularly for or against ActionScript or TypeScript, but as3-to-typescript is a very interesting and useful combination of technologies that might help you find new life for old game code.