npm-stat

07 Mar 2014 | By Alex Young | Comments | Tags npm node

Recently npm added back download stats, which means you can see how many downloads a package has had. The announcement includes the note that Mikito Takada submitted a pull request for the D3 graphs – it’s things like this that make me glad npm’s website is open source.

npm-stat

There’s a public API for the statistics, which is written using hapi.
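
If you want to experiment with the raw numbers, here's a quick sketch of querying the stats from Node. The endpoint path is an assumption based on npm's download-counts documentation, so treat it as illustrative rather than definitive:

var https = require('https');

// Fetch last month's download count for a package (assumed endpoint format).
https.get('https://api.npmjs.org/downloads/point/last-month/express', function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    console.log(JSON.parse(body));
  });
});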

Paul Vorbach sent in npm-stat (GitHub: pvorb / npm-stat.com, License: MIT), which generates another set of views on npm’s stats. It displays downloads per day, week, month, and year, and there are graphs for authors as well. Pages for certain authors that I won’t link to directly naturally take a while to generate, but it’s generally fairly responsive.

I’m interested in seeing what people build with npm-www and the stats public API, but so far it seems like they’ve made a big improvement over the older versions.

Book Review: Quality Code: Software Testing Principles, Practices, and Patterns

06 Mar 2014 | By Alex Young | Comments | Tags books testing jquery

Quality Code

Quality Code: Software Testing Principles, Practices, and Patterns ($44.99, eBook: $35.99, Addison-Wesley Professional) by Stephen Vance is a book about testing. It uses examples from several languages – Java is the most prominent, but there are JavaScript examples as well. The most significant part for DailyJS readers is a long practical exercise that involves testing an open source jQuery plugin, but there is a lot of general software design advice that you will find useful.

The book introduces automated testing, but also discusses how tests can be managed in real teams. One of the main points here is how the same best practices that you use for production code should go into automated tests – if you use certain object oriented patterns, small methods, SOLID principles, and so on, then these techniques should be used for test code as well.

This leads into the practice of writing maintainable test code: the relationship between engineering and craftsmanship.

Civil engineers may supervise and inspect the building of bridges or buildings, but they spend little time driving rivets, pouring concrete, or stringing suspension cables. Probably the closest to software engineers’ total immersion might be the handful of test pilots who are also aeronautical engineers, in that they participate in design, construction, inspection, and verification of the craft they fly.

There are JavaScript examples for code coverage issues, dynamic dispatch, scope, asynchronous computation and promises, and Jasmine:

Dynamic languages like JavaScript are less tied to an explicit interface, although usage still defines a de-facto interface. Jasmine’s spyOn functionality provides the full range of test-double variations by substituting a test-instrumented recording object for the function being replaced and letting you define how it behaves when invoked.

What I learned most from, though, was the higher-level advice, like “test the error paths”:

Many people stop before testing the error handling of their software. Unfortunately, much of the perception of software quality is forged not by whether the software fails, because it eventually will, but by how it handles those failures.

And the following point reinforced the way I work, mixing “spec”-like tests with integration and unit tests:

I prefer to test each defect, at least at the unit level.

Stephen sometimes talks about how most of our programming languages and tools aren’t designed specifically to support testing. One idea that runs through the book is about how to design code to be testable, and writing decoupled tests is part of this. Balancing encapsulation with access to internal state for testing is something that I think most of us struggle with.

As we have seen, verification of those internal representations sometimes occurs through interface access. Where null safety is either guaranteed by the representation or ignored by the test, we see code like A.getB().getC().getD(). Despite the blatant violation of the Principle of Least Knowledge, we frequently find code like this – of course we do not write it ourselves! – in tests and production.

Chapter 12, “Use Existing Seams”, left an impression on me: it’s about finding places in code that allow you to take control of that code so you can bring it under test. Since reading that chapter I seem to have found more convenient places to grapple Express applications and test them more thoroughly.

If you write tests, but find they become unmaintainable over time, then this book may guide you to create less entangled tests. It mixes material for dynamic languages like JavaScript with statically typed languages such as C++ and Java. I find this useful as someone who writes a lot of JavaScript but works alongside Objective-C and .NET developers.

Stephen has combined years of experience into a rare, testing-focused book that relates the principles we use to write well-designed code to the problems inherent in automated testing.

Node Roundup: npm Trademark, Cha

05 Mar 2014 | By Alex Young | Comments | Tags node modules npm

Charlie Robbins and the npm Trademark

Charlie Robbins, who you may know as indexzero, recently published An open letter to the Node community:

Being part of a community means listening to it. After listening to the deep concern that has been voiced over our application to register the npm trademark we have decided to withdraw the application from the USPTO. I want to apologize for the way that our message came across. We hastily reacted to something that clearly needed more thought behind it.

Nodejitsu previously announced its intention to register the npm trademark, and although it seems to have been done with good intentions, the confusion that arose was understandable.

Charlie signs off the post by saying the Node community needs a non-profit “foundation” that helps manage Node:

There is little beyond GitHub issues and discussions as to the questions like roadmap and long term plans. A non-profit organization could get more of this tedious work done by having more dedicated resources instead of relying on individual community members to go it alone.

Many of us have seen something similar happen in companies we’ve worked at: we use GitHub issues and small, informal groups to manage things quite happily until the business grows and management mistakes become more dangerous.

Recently we’ve seen the arrival of npm, Inc. and TJ Fontaine taking over Node, so things are changing. I’m not sure how a non-profit Node Foundation fits into this, but as someone who depends on Node for his career I think Charlie has raised some important questions that need addressing.

Cha

Cha

Cha (GitHub: chajs / cha, License: MIT, npm: cha) is a module for defining tasks and chaining them together. It can be used to define build scripts, or whatever else you’d like to automate, and the author shows how to tie them to npm scripts as well.

This is what the basic API looks like:

var cha = require('../')  // require('cha') when installed from npm

// Set a watcher.
cha.watch = require('./tasks/watch')

// Register named tasks so they can be referenced by later chains and expressions.
cha.in('read', require('./tasks/read'))
   .in('cat', require('./tasks/cat'))
   .in('coffee', require('./tasks/coffee'))
   .in('write', require('./tasks/write'))
   .in('uglifyjs', require('./tasks/uglifyjs'))
   .in('copy', require('./tasks/copy'))

There is a specification for tasks, and it allows text-based “expressions” to be defined that can glob files and do other cool stuff with less syntax:

cha(['glob:./fixtures/js/*.js', 'request:http://underscorejs.org/underscore-min.js'])

8 Bit Procedural Sound Generation, Flappy Bird 2

04 Mar 2014 | By Alex Young | Comments | Tags games audio

8 Bit Procedural Sound Generation

8 Bit Procedural Sound

8 Bit Procedural Sound Generation by Jerome Etienne is a post about generating sounds using jsfx. Jerome’s demo shows visualisations for sounds that might be useful in a game.

He also introduces the webaudiox WebAudio API helpers, which includes methods for converting from byte arrays to floating point numbers.

Flappy Bird 2

Thomas Palef sent in part 2 of his Flappy Bird tutorial:

In the last HTML5 tutorial we did a simple Flappy Bird clone. It was nice, but quite boring to play. We will see in this post how to add animations and sounds to our Flappy Bird clone. These won’t change the game’s mechanics, but the game will feel a lot more interesting.

There’s also an article about his experiences on the IndieGames.com blog.

I think games are an interesting way of teaching full stack development – if you can hook a game like this up to a server-side Node project that stores player details and scores, and perhaps adds multiplayer, then it covers a wide range of skills.

Some Atom-related Node Packages

03 Mar 2014 | By Alex Young | Comments | Tags node editors tools

Atom

Hugh Kennedy sent in npm-install, an Atom package that automatically installs and saves the npm modules required by the current file.

To use it, you just need to open the Command Palette and type npm install. The Command Palette can be opened with cmd-shift-p.

There’s another npm-related Atom package as well: npm-docs by Jonathan Clem. This allows you to use the Command Palette to easily look up a module’s readme or homepage. This is the kind of thing I do all the time when I write about Node on DailyJS.

Tyler Benziger kindly sent me the Atom invitation, but I’ve only used it for a few small things so far. I’ve been trying to figure out how it fits in with the Node community, and whether or not it’ll be popular with DailyJS readers.

If you look at the screenshot in this post you might notice that I’ve got a folder open with lots of items. That’s DailyJS’s 1127 posts, which Atom handles without any trouble.

The Atom Editor

28 Feb 2014 | By Alex Young | Comments | Tags node editors tools

Atom

“Alex, you love talking about text editors, why don’t you write about that GitHub Atom project?”

Ah, text editors. Arguably our most important tools, yet we’re more fickle about them than our choice of programming language, web framework, and preferred caffeinated beverage. Atom is made by GitHub. It’s built using dozens of related open source projects, and some of these include “packages” that extend the editor.

All of the packages seem to be written with CoffeeScript, but before you get your pitchforks out, take a look at this thread:

You can use plain JS to develop packages.

Phew. The reason I wanted to write about Atom on DailyJS was that it’s built using Node and a web view. The fact that it embraces Node means it should be easier for us to extend it. It also claims to have TextMate support, and can use native extensions through Node C and C++ modules.

Parts of Atom are native as well, so it should feel desktop-like rather than web-based:

Atom is a desktop application based on web technologies. Like other desktop apps, it has its own icon in the dock, native menus and dialogs, and full access to the file system.

I’ve seen a few generations of desktop text editors come and go: BBEdit, TextMate, and Sublime Text. I expect the excitement around Atom to follow a similar pattern. I’m going to write about interesting Atom packages if I think they’re of interest to DailyJS readers (please send them in), but you’ll still find me happily plodding on with Vim. And vin (rouge), but that’s another story.

Nodyn: No Dice

27 Feb 2014 | By Alex Young | Comments | Tags node java

Nodyn (GitHub: projectodd / nodyn, License: Apache 2.0) is a Node API-compatible JVM-based project. That means you can technically use Java libraries from within Node programs.

I’ve been using it on my Mac, running Mavericks. Here’s what I had to do to get it to work:

brew install maven
git clone https://github.com/projectodd/nodyn.git
cd nodyn
export JAVA_HOME=`/usr/libexec/java_home`
mvn install -Dmaven.test.skip=true
cd nodyn-standalone/target
java -jar nodyn-standalone.jar --console

It took me a while to figure all of this out. I already had Homebrew installed, but I didn’t have Maven. I’m an amateur Android developer, so I only ever really write Java through Google’s recommended IDE tools.

Maven installed without too much trouble, except I found it used the wrong version of Java. The export JAVA_HOME line makes Maven use the right version. I’m not sure why this is required because java -version showed 1.7, but for some reason Maven was building Nodyn with 1.6, which generated a long and inscrutable error message.

The mvn install -Dmaven.test.skip=true line builds Nodyn, and skips tests. I wanted to skip the tests because they seemed to hang on this line:

Starting test: src/test/resources/os|os_test.js|testFreemem

Once I built it, I ran a small program that reads its own source and prints it to stdout:

var fs = require('fs');

console.log('I can print my own code');

fs.readFile('test.js', 'utf8', function(err, text) {
  if (err) console.error(err);
  console.log(text);
  console.log('When I work correctly');
});

This printed the following output, which is incorrect:

log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I can print my own code

The expected output is this:

I can print my own code
var fs = require('fs');

console.log('I can print my own code');

fs.readFile('test.js', 'utf8', function(err, text) {
  if (err) console.error(err);
  console.log(text);
  console.log('When I work correctly');
});

When I work correctly

It seems like Node API compatibility isn’t quite there yet. I also noticed it takes much longer than Node to start up, but I seem to remember JRuby developers complaining about startup time, so that might be something to do with how Java works. It probably doesn’t really matter for long-running server processes, but I quite like the fact that Node programs start up quickly.

If you’re a Java programmer Nodyn might seem cool, but so far I’ve struggled with it. Despite my Maven issues, the project looks neatly organised and carefully written, so I’m going to keep watching it.

Node Roundup: No More Force Publish, Counterpart, mock-fs

26 Feb 2014 | By Alex Young | Comments | Tags node modules npm testing internationalisation

No More Force Publish

Isaac Z. Schlueter wrote on npm’s blog that publish -f will no longer work:

If you publish foo@1.2.3, you can still un-publish foo@1.2.3. But then, you will not be able to publish something else to that same package identifier and version. Ever.

The common wisdom is that changing the code a given version number describes is dangerous, so it’s better to publish a new version. If you’re a module author, you may find this frustrating – what if you just released something with a dangerous security flaw? In cases like this it may be best to remove the broken version and publish a new, fixed one.

Counterpart

Counterpart (GitHub: martinandert / counterpart, License: MIT, npm: counterpart) by Martin Andert is an internationalisation module based on Ruby’s I18n gem:

translate('damals.about_x_hours_ago.one')          // => 'about one hour ago'
translate(['damals', 'about_x_hours_ago', 'one'])  // => 'about one hour ago'
translate(['damals', 'about_x_hours_ago.one'])     // => 'about one hour ago'

You can write translation documents using JSON. Features include interpolation, pluralisation, and default fallbacks.
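
Here’s a minimal sketch of registering a translation document and switching locales; the registerTranslations and setLocale calls reflect the project’s README, but treat the exact names as assumptions:

var translate = require('counterpart');

// Register a JSON-style translation document for German.
translate.registerTranslations('de', {
  damals: {
    about_x_hours_ago: {
      one: 'vor einer Stunde'
    }
  }
});

translate.setLocale('de');
translate('damals.about_x_hours_ago.one');  // => 'vor einer Stunde'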

mock-fs

mock-fs (GitHub: tschaub / mock-fs, License: MIT, npm: mock-fs) by Tim Schaub is an API-compatible version of Node’s fs module that essentially allows you to temporarily use an in-memory filesystem.

It provides a mock function that accepts a specification of the files you want to mock:

mock({
  'path/to/fake/dir': {
    'some-file.txt': 'file content here',
    'empty-dir': {/** empty directory */}
  },
  'path/to/some.png': new Buffer([8, 6, 7, 5, 3, 0, 9]),
  'some/other/path': {/** another empty directory */}
});

You might find this useful if you want to write tests that avoid touching real files.
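
For example, a test can stub out a file, read it back through the normal fs API, and then put the real filesystem back. A minimal sketch:

var mock = require('mock-fs');
var fs = require('fs');

// Swap the real filesystem for an in-memory one.
mock({
  'notes.txt': 'file content here'
});

fs.readFile('notes.txt', 'utf8', function(err, text) {
  if (err) throw err;
  console.log(text);  // 'file content here', served from memory
  mock.restore();     // restore the real filesystem bindings
});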

Matter.js

25 Feb 2014 | By Alex Young | Comments | Tags webgl html5 physics

The Matter.js Wrecking Ball demo.

Matter.js (GitHub: liabru / matter-js, License: MIT) by Liam Brummitt is a stable and flexible rigid body physics engine for browsers. The author describes it as an alpha project that came about as a result of learning game programming.

If you’re interested in reading more about physics for game programming, Liam has collected some useful resources in Game physics for beginners.

Matter.js uses time-corrected Verlet integration, adaptive grid broad-phase detection, AABB mid-phase detection, SAT narrow-phase detection, and other algorithms for managing collisions and physical simulation. More well-known engines like Box2D support these features, but if you take a look at some of the classes Liam has written you’ll see how clean and readable his version is.

I’ve been looking at the source to see how to use it, and the API seems friendly to me:

var Bodies = Matter.Bodies;
var Engine = Matter.Engine;
var World = Matter.World;

// container is a DOM element to render into; options configures the engine.
var engine = Engine.create(container, options);

// A static, slightly tilted platform and a low-friction box that lands on it.
World.addBody(engine.world, Bodies.rectangle(300, 180, 700, 20, { isStatic: true, angle: Math.PI * 0.06 }));
World.addBody(engine.world, Bodies.rectangle(300, 70, 40, 40, { friction: 0.001 }));

The demo is cool, so try it out if you want to experiment!

Angular Selection Model, Normalized Particle Swarm Optimization

24 Feb 2014 | By Alex Young | Comments | Tags graphics optimisation angularjs

Angular Selection Model

Angular Selection Model (GitHub: jtrussell / angular-selection-model, License: MIT) by Justin Russell is an AngularJS directive for managing selections of items in lists and tables. It’s indifferent to how data is presented, and only tracks what items are selected.

This example allows a text input to filter a list of items, and also allows the user to select items from the list:

<input type="text" ng-model="fancyfilter" />

<table>
  <thead>
    <tr>
      <th></th>
      <th>#</th>
      <th>Label</th>
      <th>Value</th>
    </tr>
  </thead>
  <tr ng-repeat="item in fancy.bag | filter:fancyfilter"
      selection-model
      selection-model-type="checkbox"
      selection-model-mode="multiple-additive"
      selection-model-selected-class="foobar">
    <td><input type="checkbox"></td>
    <td>1</td>
    <td></td>
    <td></td>
  </tr>
</table>

The directive does a lot of things behind the scenes to make this work naturally. An internal read-only list is used to represent selected items, and there’s a provider for setting things like the selected attribute and class name assigned to selected items at a global level. Checkboxes are automatically managed, including support for multiple selection.
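
As a hedged sketch of that global configuration, something along these lines should work; the provider and option names here are assumptions based on the project’s documentation, so double-check them before relying on this:

angular.module('myApp', ['selectionModel'])
  .config(function(selectionModelOptionsProvider) {
    // Assumed option names: set application-wide defaults once.
    selectionModelOptionsProvider.set({
      selectedAttribute: 'selected',
      selectedClass: 'selected',
      type: 'checkbox',
      mode: 'multiple-additive'
    });
  });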

Justin has included tests, documentation, and examples.

Normalized Particle Swarm Optimization

Swarm optimisation

Adrian Seeley sent in this gist: JavaScript Normalized Particle Swarm Optimization Implementation. If you want to try it, just click “Download Gist” then open the HTML file locally.

The reason I wanted to write about it was he decided to license it as “Abandoned”, so rather than letting it languish I thought I’d share it in case someone finds it useful.

Here’s how Adrian described the project:

Particle swarm optimization is an incredibly viable machine learning structure, but is often implemented using database oriented designs splayed across multiple files in c++ or java making it very inaccessible to newcomers. I present a simple, unoptimized, and easy to follow javascript implementation of normalized particle swarm optimization, making use of full descriptive variable names entirely encapsulated in a single inlined function.

ViziCities, Flappy Bird in HTML5

21 Feb 2014 | By Alex Young | Comments | Tags webgl maps games html5

ViziCities

ViziCities

ViziCities (Demo, GitHub: robhawkes / vizicities, License: MIT) by Robin Hawkes and Peter Smart is a WebGL 3D city and data visualisation platform. It uses OpenStreetMap, and aims to overlay animated data views with 3D city layouts.

The developers have created some visualisations of social data, traffic simulation, and public transport.

It uses Three.js, D3, Grunt, and some stalwarts like Moment.js and Underscore.js.

Flappy Bird in HTML5 with Phaser

Thomas Palef, who has been making one HTML5 game per week, has created a tutorial for making Flappy Bird in HTML5 and Phaser. The cool thing about the tutorial is that he reduces Flappy Bird to its basic parts – collision detection, scoring, and the player controls. Instead of worrying about whether or not the graphics are stolen from Mario, you can just follow along and learn how a game like this works.

JavaScript Promises ... In Wicked Detail

20 Feb 2014 | By Matt Greer | Comments | Tags promises tutorial
This post is by Matt Greer. You can find the original here: mattgreer.org/articles/promises-in-wicked-detail/

I’ve been using Promises in my JavaScript code for a while now. They can be a little brain bending at first. I now use them pretty effectively, but when it came down to it, I didn’t fully understand how they work. This article is my resolution to that. If you stick around until the end, you should understand Promises well too.

We will be incrementally creating a Promise implementation that by the end will mostly meet the Promise/A+ spec, and understand how promises meet the needs of asynchronous programming along the way. This article assumes you already have some familiarity with Promises. If you don’t, promisejs.org is a good site to check out.

Why?

Why bother to understand Promises to this level of detail? Really understanding how something works can increase your ability to take advantage of it, and debug it more successfully when things go wrong. I was inspired to write this article when a coworker and I got stumped on a tricky Promise scenario. Had I known then what I know now, we wouldn’t have gotten stumped.

The Simplest Use Case

Let’s begin our Promise implementation as simple as can be. We want to go from this

doSomething(function(value) {
  console.log('Got a value:', value);
});

to this

doSomething().then(function(value) {
  console.log('Got a value:', value);
});

To do this, we just need to change doSomething() from this

function doSomething(callback) {
  var value = 42;
  callback(value);
}

to this “Promise” based solution

function doSomething() {
  return {
    then: function(callback) {
      var value = 42;
      callback(value);
    }
  };
}
fiddle

This is just a little sugar for the callback pattern. It’s pretty pointless sugar so far. But it’s a start, and we’ve already hit upon a core idea behind Promises

Promises capture the notion of an eventual value into an object

This is the main reason Promises are so interesting. Once the concept of eventuality is captured like this, we can begin to do some very powerful things. We’ll explore this more later on.

Defining the Promise type

This simple object literal isn’t going to hold up. Let’s define an actual Promise type that we’ll be able to expand upon

function Promise(fn) {
  var callback = null;
  this.then = function(cb) {
    callback = cb;
  };

  function resolve(value) {
    callback(value);
  }

  fn(resolve);
}

and reimplement doSomething() to use it

function doSomething() {
  return new Promise(function(resolve) {
    var value = 42;
    resolve(value);
  });
}

There is a problem here. If you trace through the execution, you’ll see that resolve() gets called before then(), which means callback will be null. Let’s hide this problem in a little hack involving setTimeout

function Promise(fn) {
  var callback = null;
  this.then = function(cb) {
    callback = cb;
  };

  function resolve(value) {
    // force callback to be called in the next
    // iteration of the event loop, giving
    // callback a chance to be set by then()
    setTimeout(function() {
      callback(value);
    }, 1);
  }

  fn(resolve);
}
fiddle

With the hack in place, this code now works … sort of.

This Code is Brittle and Bad

Our naive, poor Promise implementation must use asynchronicity to work. It’s easy to make it fail again: just call then() asynchronously and we are right back to the callback being null. Why am I setting you up for failure so soon? Because the above implementation has the advantage of being pretty easy to wrap your head around. then() and resolve() won’t go away. They are key concepts in Promises.

Promises have State

Our brittle code above has unexpectedly revealed something: Promises have state. We need to know what state they are in before proceeding, and make sure we move through the states correctly. Doing so gets rid of the brittleness.

  • A Promise can be pending waiting for a value, or resolved with a value.
  • Once a Promise resolves to a value, it will always remain at that value and never resolve again.

(A Promise can also be rejected, but we’ll get to error handling later)

Let’s explicitly track the state inside of our implementation, which will allow us to do away with our hack

function Promise(fn) {
  var state = 'pending';
  var value;
  var deferred;

  function resolve(newValue) {
    value = newValue;
    state = 'resolved';

    if(deferred) {
      handle(deferred);
    }
  }

  function handle(onResolved) {
    if(state === 'pending') {
      deferred = onResolved;
      return;
    }

    onResolved(value);
  }

  this.then = function(onResolved) {
    handle(onResolved);
  };

  fn(resolve);
}
fiddle

It’s getting more complicated, but the caller can invoke then() whenever they want, and the callee can invoke resolve() whenever they want. It fully works with synchronous or asynchronous code.

This is because of the state flag. Both then() and resolve() hand off to the new method handle(), which will do one of two things depending on the situation (a short demonstration follows the list):

  • The caller has called then() before the callee calls resolve(), that means there is no value ready to hand back. In this case the state will be pending, and so we hold onto the caller’s callback to use later. Later when resolve() gets called, we can then invoke the callback and send the value on its way.
  • The callee calls resolve() before the caller calls then(): In this case we hold onto the resulting value. Once then() gets called, we are ready to hand back the value.
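
To make this concrete, here’s a short demonstration using the implementation above; both orderings log their value:

// Case 1: then() is called while the Promise is still pending.
new Promise(function(resolve) {
  setTimeout(function() { resolve('async value'); }, 50);
}).then(function(value) {
  console.log(value);  // 'async value'
});

// Case 2: resolve() runs before then() is called.
new Promise(function(resolve) {
  resolve('sync value');
}).then(function(value) {
  console.log(value);  // 'sync value'
});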

Notice setTimeout went away? That’s temporary, it will be coming back. But one thing at a time.

With Promises, the order in which we work with them doesn't matter. We are free to call then() and resolve() whenever they suit our purposes. This is one of the powerful advantages of capturing the notion of eventual results into an object

We still have quite a few more things in the spec to implement, but our Promises are already pretty powerful. This system allows us to call then() as many times as we want; we will always get the same value back

var promise = doSomething();

promise.then(function(value) {
  console.log('Got a value:', value);
});

promise.then(function(value) {
  console.log('Got the same value again:', value);
});

This is not completely true for the Promise implementation in this article. If the opposite happens, i.e. the caller calls then() multiple times before resolve() is called, only the last call to then() will be honored. The fix is to keep a running list of deferreds inside the Promise instead of just one. I decided not to do that to keep the article simpler; it's long enough as it is :)
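
For the curious, here’s a minimal sketch of that fix applied to the earlier, pre-chaining implementation: keep an array of deferred callbacks instead of a single one (this isn’t folded into the rest of the article’s code):

function Promise(fn) {
  var state = 'pending';
  var value;
  var deferreds = [];

  function resolve(newValue) {
    value = newValue;
    state = 'resolved';
    // Flush every callback registered while we were pending.
    deferreds.forEach(handle);
    deferreds = [];
  }

  function handle(onResolved) {
    if(state === 'pending') {
      deferreds.push(onResolved);
      return;
    }
    onResolved(value);
  }

  this.then = function(onResolved) {
    handle(onResolved);
  };

  fn(resolve);
}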

Chaining Promises

Since Promises capture the notion of asynchronicity in an object, we can chain them, map them, and run them in parallel or sequentially – all kinds of useful things. Code like the following is very common with Promises

getSomeData()
.then(filterTheData)
.then(processTheData)
.then(displayTheData);

getSomeData is returning a Promise, as evidenced by the call to then(), but the result of that first then must also be a Promise, as we call then() again (and yet again!). That’s exactly what happens: if we can convince then() to return a Promise, things get more interesting.

then() always returns a Promise

Here is our Promise type with chaining added in

function Promise(fn) {
  var state = 'pending';
  var value;
  var deferred = null;

  function resolve(newValue) {
    value = newValue;
    state = 'resolved';

    if(deferred) {
      handle(deferred);
    }
  }

  function handle(handler) {
    if(state === 'pending') {
      deferred = handler;
      return;
    }

    if(!handler.onResolved) {
      handler.resolve(value);
      return;
    }

    var ret = handler.onResolved(value);
    handler.resolve(ret);
  }

  this.then = function(onResolved) {
    return new Promise(function(resolve) {
      handle({
        onResolved: onResolved,
        resolve: resolve
      });
    });
  };

  fn(resolve);
}
fiddle

Hoo, it’s getting a little squirrelly. Aren’t you glad we’re building this up slowly? The real key here is that then() is returning a new Promise.

Since then() always returns a new Promise object, there will always be at least one Promise object that gets created, resolved and then ignored. Which can be seen as wasteful. The callback approach does not have this problem. Another ding against Promises. You can start to appreciate why some in the JavaScript community have shunned them.

What value does the second Promise resolve to? It receives the return value of the first promise. This is happening at the bottom of handle(): the handler object carries around both an onResolved callback and a reference to resolve(). There is more than one copy of resolve() floating around; each Promise gets its own copy of this function, and a closure for it to run within. This is the bridge from the first Promise to the second. We are concluding the first Promise at this line:

var ret = handler.onResolved(value);

In the examples I’ve been using here, handler.onResolved is

function(value) {
  console.log("Got a value:", value);
}

in other words, it’s what was passed into the first call to then(). The return value of that first handler is used to resolve the second Promise. Thus chaining is accomplished

doSomething().then(function(result) {
  console.log('first result', result);
  return 88;
}).then(function(secondResult) {
  console.log('second result', secondResult);
});

// the output is
//
// first result 42
// second result 88


doSomething().then(function(result) {
  console.log('first result', result);
  // not explicitly returning anything
}).then(function(secondResult) {
  console.log('second result', secondResult);
});

// now the output is
//
// first result 42
// second result undefined

Since then() always returns a new Promise, this chaining can go as deep as we like

doSomething().then(function(result) {
  console.log('first result', result);
  return 88;
}).then(function(secondResult) {
  console.log('second result', secondResult);
  return 99;
}).then(function(thirdResult) {
  console.log('third result', thirdResult);
  return 200;
}).then(function(fourthResult) {
  // on and on...
});

What if, in the above example, we wanted all the results at the end? With chaining, we would need to manually build up the result ourselves

doSomething().then(function(result) {
  var results = [result];
  results.push(88);
  return results;
}).then(function(results) {
  results.push(99);
  return results;
}).then(function(results) {
  console.log(results.join(', '));
});

// the output is
//
// 42, 88, 99

Promises always resolve to one value. If you need to pass more than one value along, you need to create a multi-value in some fashion (an array, an object, concatenated strings, etc.)

A potentially better way is to use a Promise library’s all() method or any number of other utility methods that increase the usefulness of Promises, which I’ll leave to you to go and discover.

The Callback is Optional

The callback to then() is not strictly required. If you leave it off, the Promise resolves to the same value as the previous Promise

doSomething().then().then(function(result) {
  console.log('got a result', result);
});

// the output is
//
// got a result 42

You can see this inside of handle(), where if there is no callback, it simply resolves the Promise and exits. value is still the value of the previous Promise.

if(!handler.onResolved) {
  handler.resolve(value);
  return;
}

Returning Promises Inside the Chain

Our chaining implementation is a bit naive. It’s blindly passing the resolved values down the line. What if one of the resolved values is a Promise? For example

doSomething().then(function(result) {
  // doSomethingElse returns a Promise
  return doSomethingElse(result);
}).then(function(finalResult) {
  console.log("the final result is", finalResult);
});

As it stands now, the above won’t do what we want. finalResult won’t actually be a fully resolved value, it will instead be a Promise. To get the intended result, we’d need to do

doSomething().then(function(result) {
  // doSomethingElse returns a Promise
  return doSomethingElse(result);
}).then(function(anotherPromise) {
  anotherPromise.then(function(finalResult) {
    console.log("the final result is", finalResult);
  });
});

Who wants that crud in their code? Let’s have the Promise implementation seamlessly handle this for us. This is simple to do, inside of resolve() just add a special case if the resolved value is a Promise

function resolve(newValue) {
  if(newValue && typeof newValue.then === 'function') {
    newValue.then(resolve);
    return;
  }
  state = 'resolved';
  value = newValue;

  if(deferred) {
    handle(deferred);
  }
}
fiddle

We’ll keep calling resolve() recursively as long as we get a Promise back. Once it’s no longer a Promise, then proceed as before.

It is possible for this to be an infinite loop. The Promise/A+ spec recommends implementations detect infinite loops, but it's not required.
Also worth pointing out, this implementation does not meet the spec. Nor will we fully meet the spec in this regard in the article. For the more curious, I recommend reading the Promise resolution procedure.

Notice how loose the check is to see if newValue is a Promise? We are only looking for a then() method. This duck typing is intentional: it allows different Promise implementations to interoperate with each other. It’s actually quite common for Promise libraries to intermingle, as the different third-party libraries you use can each rely on different Promise implementations.

Different Promise implementations can interoperate with each other, as long as they all follow the spec properly.

With chaining in place, our implementation is pretty complete. But we’ve completely ignored error handling.

Rejecting Promises

When something goes wrong during the course of a Promise, it needs to be rejected with a reason. How does the caller know when this happens? They can find out by passing in a second callback to then()

doSomething().then(function(value) {
  console.log('Success!', value);
}, function(error) {
  console.log('Uh oh', error);
});

As mentioned earlier, the Promise will transition from pending to either resolved or rejected, never both. In other words, only one of the above callbacks ever gets called.

Promises enable rejection by means of reject(), the evil twin of resolve(). Here is doSomething() with error handling support added

function doSomething() {
  return new Promise(function(resolve, reject) {
    var result = somehowGetTheValue(); 
    if(result.error) {
      reject(result.error);
    } else {
      resolve(result.value);
    }
  });
}

Inside the Promise implementation, we need to account for rejection. As soon as a Promise is rejected, all downstream Promises from it also need to be rejected.

Let’s see the full Promise implementation again, this time with rejection support added

function Promise(fn) {
  var state = 'pending';
  var value;
  var deferred = null;

  function resolve(newValue) {
    if(newValue && typeof newValue.then === 'function') {
      newValue.then(resolve, reject);
      return;
    }
    state = 'resolved';
    value = newValue;

    if(deferred) {
      handle(deferred);
    }
  }

  function reject(reason) {
    state = 'rejected';
    value = reason;

    if(deferred) {
      handle(deferred);
    }
  }

  function handle(handler) {
    if(state === 'pending') {
      deferred = handler;
      return;
    }

    var handlerCallback;

    if(state === 'resolved') {
      handlerCallback = handler.onResolved;
    } else {
      handlerCallback = handler.onRejected;
    }

    if(!handlerCallback) {
      if(state === 'resolved') {
        handler.resolve(value);
      } else {
        handler.reject(value);
      }

      return;
    }

    var ret = handlerCallback(value);
    handler.resolve(ret);
  }

  this.then = function(onResolved, onRejected) {
    return new Promise(function(resolve, reject) {
      handle({
        onResolved: onResolved,
        onRejected: onRejected,
        resolve: resolve,
        reject: reject
      });
    });
  };

  fn(resolve, reject);
}
fiddle

Other than the addition of reject() itself, handle() also has to be aware of rejection. Within handle(), either the rejection path or the resolve path will be taken depending on the value of state. This value of state gets pushed into the next Promise, because calling the next Promise’s resolve() or reject() sets its state value accordingly.
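
For example, with this implementation a rejection at the top of a chain skips the success callbacks that follow and surfaces at the next error callback:

doSomething().then(function(value) {
  // Skipped entirely when doSomething() rejects.
  console.log('Success!', value);
}).then(null, function(error) {
  // The rejection from doSomething() lands here instead.
  console.log('Uh oh', error);
});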

When using Promises, it's very easy to omit the error callback. But if you do, you'll never get any indication something went wrong. At the very least, the final Promise in your chain should have an error callback. See the section further down about swallowed errors for more info.

Unexpected Errors Should Also Lead to Rejection

So far our error handling only accounts for known errors. It’s possible an unhandled exception will happen, completely ruining everything. It’s essential that the Promise implementation catch these exceptions and reject accordingly.

This means that resolve() should get wrapped in a try/catch block

function resolve(newValue) {
  try {
    // ... as before
  } catch(e) {
    reject(e);
  }
}

It’s also important to make sure the callbacks given to us by the caller don’t throw unhandled exceptions. These callbacks are called in handle(), so we end up with

function handle(handler) {
  // ... as before

  var ret;
  try {
    ret = handlerCallback(value);
  } catch(e) {
    handler.reject(e);
    return;
  }

  handler.resolve(ret);
}

Promises can Swallow Errors!

It's possible for a misunderstanding of Promises to lead to completely swallowed errors! This trips people up a lot

Consider this example

function getSomeJson() {
  return new Promise(function(resolve, reject) {
    var badJson = "<div>uh oh, this is not JSON at all!</div>";
    resolve(badJson);
  });
}

getSomeJson().then(function(json) {
  var obj = JSON.parse(json);
  console.log(obj);
}, function(error) {
  console.log('uh oh', error);
});
fiddle

What is going to happen here? Our callback inside then() is expecting some valid JSON. So it naively tries to parse it, which leads to an exception. But we have an error callback, so we’re good, right?

Nope. That error callback will not be invoked! If you run this example via the above fiddle, you will get no output at all. No errors, no nothing. Pure chilling silence.

Why is this? Since the unhandled exception took place in our callback to then(), it is being caught inside of handle(). This causes handle() to reject the Promise that then() returned, not the Promise we are already responding to, as that Promise has already properly resolved.

Always remember, inside of then()'s callback, the Promise you are responding to has already resolved. The result of your callback will have no influence on this Promise

If you want to capture the above error, you need an error callback further downstream

getSomeJson().then(function(json) {
  var obj = JSON.parse(json);
  console.log(obj);
}).then(null, function(error) {
  console.log("an error occured: ", error);
});

Now we will properly log the error.

In my experience, this is the biggest pitfall of Promises. Read on to the next section for a potentially better solution

done() to the Rescue

Most (but not all) Promise libraries have a done() method. It’s very similar to then(), except it avoids the above pitfalls of then().

done() can be called whenever then() can. The key differences are it does not return a Promise, and any unhandled exception inside of done() is not captured by the Promise implementation. In other words, done() represents when the entire Promise chain has fully resolved. Our getSomeJson() example can be more robust using done()

getSomeJson().done(function(json) {
  // when this throws, it won't be swallowed
  var obj = JSON.parse(json);
  console.log(obj);
});

done() also takes an error callback, done(callback, errback), just like then() does, and since the entire Promise resolution is, well, done, you are assured of being informed of any errors that erupted.

done() is not part of the Promise/A+ spec (at least not yet), so your Promise library of choice might not have it.

Promise Resolution Needs to be Async

Early in the article we cheated a bit by using setTimeout. Once we fixed that hack, we’ve not used setTimeout since. But the truth is the Promise/A+ spec requires that Promise resolution happen asynchronously. Meeting this requirement is simple, we simply need to wrap most of handle()’s implementation inside of a setTimeout call

function handle(handler) {
  if(state === 'pending') {
    deferred = handler;
    return;
  }
  setTimeout(function() {
    // ... as before
  }, 1);
}

This is all that is needed. In truth, real Promise libraries don’t tend to use setTimeout. If the library is Node.js oriented it will possibly use process.nextTick; for browsers it might use the new setImmediate or a setImmediate shim (so far only IE supports setImmediate), or perhaps an asynchronous library such as Kris Kowal’s asap (Kris Kowal also wrote Q, a popular Promise library)
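
As a rough sketch of that idea, an implementation might pick whichever scheduler is available and fall back to setTimeout; the feature tests here are illustrative and not taken from any particular library:

var defer;
if (typeof setImmediate === 'function') {
  // IE 10+ and Node provide setImmediate.
  defer = function(fn) { setImmediate(fn); };
} else if (typeof process !== 'undefined' && process.nextTick) {
  // Node fallback.
  defer = function(fn) { process.nextTick(fn); };
} else {
  // Everything else: the familiar setTimeout trick.
  defer = function(fn) { setTimeout(fn, 0); };
}

// handle() can then call defer() instead of using setTimeout directly.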

Why Is This Async Requirement in the Spec?

It allows for consistency and reliable execution flow. Consider this contrived example

var promise = doAnOperation();
invokeSomething();
promise.then(wrapItAllUp);
invokeSomethingElse();

What is the call flow here? Based on the naming you’d probably guess it is invokeSomething() -> invokeSomethingElse() -> wrapItAllUp(). But this all depends on whether the promise resolves synchronously or asynchronously in our current implementation. If doAnOperation() works asynchronously, then that is the call flow. But if it works synchronously, the call flow is actually invokeSomething() -> wrapItAllUp() -> invokeSomethingElse(), which is probably bad.

To get around this, Promises always resolve asynchronously, even if they don’t have to. It reduces surprise and allows people to use Promises without having to take into consideration asynchronicity when reasoning about their code.

Promises always require at least one more iteration of the event loop to resolve. This is not necessarily true of the standard callback approach.

Before We Wrap Up … then/promise

There are many full-featured Promise libraries out there. The then organization’s promise library takes a simpler approach: it is meant to be a simple implementation that meets the spec and nothing more. If you take a look at their implementation, it should look quite familiar. then/promise was the basis of the code for this article; we’ve almost built up the same Promise implementation. Thanks to Nathan Zadoks and Forbes Lindsay for their great library and work on JavaScript Promises. Forbes Lindsay is also the guy behind the promisejs.org site mentioned at the start.

There are some differences between the real implementation and what is in this article. That is because there are more details in the Promise/A+ spec that I have not addressed. I recommend reading the spec; it is short and pretty straightforward.

Conclusion

If you made it this far, then thanks for reading! We’ve covered the core of Promises, which is the only thing the spec addresses. Most implementations offer much more functionality, such as all(), spread(), race(), denodeify() and much more. I recommend browsing the API docs for Bluebird to see what all is possible with Promises.

Once I came to understand how Promises worked and their caveats, I came to really like them. They have led to very clean and elegant code in my projects. There’s so much more to talk about too; this article is just the beginning!

If you enjoyed this, you should follow me on Twitter to find out when I write another guide like this.

Further Reading

More great articles on Promises

Found a mistake? if I made an error and you want to let me know, please email me or file an issue. Thanks!

Node Roundup: 0.10.26, DozerJS, Keybase

19 Feb 2014 | By Alex Young | Comments | Tags node modules

Node 0.10.26

Node 0.10.26 is out. It includes updates for V8, npm, and uv, and fixes for several core modules, including crypto, fs, and net.

DozerJS

DozerJS

DozerJS (GitHub: DozerJS / dozerjs, License: MIT, npm: dozerjs) is an Express-based project that aims to make it easier to develop MVC-style REST-based applications.

It looks like the focus is on simplifying the server-side implementation so you can focus on the UI. The conventions used for the server-side structure seem to follow the popular wisdom: route separation, simple models with validation, and HTTP verbs for CRUD operations.

Keybase

Keybase

Keybase (GitHub: keybase / node-installer) is a public key sharing tool that you can install with npm: npm install -g keybase-installer. It allows you to associate several keys with a single identity:

In one command, Keybase has acquired maria’s public key, her keybase username, and her public identities, and confirmed they’re all her, using GnuPG to review a signed tweet and gist she posted.

I think it’s an extremely interesting project – the website is clear, and I like the idea of being able to confirm identities for collaborating with people online. Using npm to distribute the client seems like a smart approach.

AngularJS Infinite Scroll, Bindable.js

18 Feb 2014 | By Alex Young | Comments | Tags dom angularjs data-binding

AngularJS Infinite Scroll

This project made me wonder if AngularJS modules are the new jQuery plugins: lrInfiniteScroll (GitHub: lorenzofox3 / lrInfiniteScroll, License: MIT), by Laurent Renard. It’s a small and highly reusable library that is specifically tailored to work well with Angular’s API.

It attaches an event handler to an element that fires when the element has been scrolled to the bottom. You can use it to automatically load items on demand, Angular style:

<ul lr-infinite-scroll="myEventHandler" scroll-threshold="200" time-threshold="600">
  <li ng-repeat="item in myCollection">
</ul>
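
The controller side might look something like this hedged sketch; the module name, the handler wiring, and the /items endpoint are assumptions for illustration:

angular.module('myApp', ['lrInfiniteScroll'])
  .controller('ListCtrl', function($scope, $http) {
    $scope.myCollection = [];

    // Invoked by the directive when the list is scrolled to the bottom.
    $scope.myEventHandler = function() {
      $http.get('/items?offset=' + $scope.myCollection.length)
        .success(function(items) {
          $scope.myCollection = $scope.myCollection.concat(items);
        });
    };
  });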

Bindable.js

Data binding libraries are often coupled to view objects. Bindable.js (GitHub: classdojo / bindable.js, License: MIT) from ClassDojo (and Craig Condon) is a more generic bidirectional data binding library. Bindable objects are constructed, and then properties can be bound to callbacks:

var person = new bindable.Object({
  name: 'craig',
  last: 'condon',
  location: {
    city: 'San Francisco'
  }
});

person.bind('location.zip', function(value) {
  // 94102
}).now();

// Triggers the binding
person.set('location.zip', '94102'); 

Bindable objects emit other events (change, watching, dispose), and there are methods for introspection (bindable.has), context (bindable.context), and triggering callbacks after they’re defined (.now).

Backbone.React.Component, backbone-dom-view

17 Feb 2014 | By Alex Young | Comments | Tags backbone dom views react

Backbone.React.Component

If you like Facebook’s React library and Backbone.js, then take a look at José Magalhães’ Backbone.React.Component (GitHub: magalhas / backbone-react-component, License: MIT, Bower: backbone-react-component). It acts as a bridge so you can bind models, collections, and components on both the client and server.

The author has made a blog example that you can run locally. The server uses Express, and keeps collections updated with data both on the server and in the browser.

backbone-dom-view

backbone-dom-view (GitHub: redexp / backbone-dom-view, License: MIT, Bower: backbone-dom-view) by Sergii Kliuchnyk is a view class for Backbone that allows selectors to be bound to helper methods using a shorthand notation that supports binding model fields, view events, and calculations.

Sergii’s example is a to-do model:

View = Backbone.DOMView.extend
  template:
    '.title':
      html: '@title'
    '.state':
      class:
        'done': '@is_done'

It has RequireJS support, tests, and documentation in the readme.

JS-Git Progress, jide.js, Val

14 Feb 2014 | By Alex Young | Comments | Tags git browser ui

JS-Git Progress

Khalid Khan sent in an email to say that Tim Caswell’s JS-Git project has been seeing a lot of activity recently. It seems like this new branch has changed a lot compared to the old branch.

I’d file this under “Captain’s Log: Supplemental”, but let’s see what happens over the next few weeks. If you’re interested in this project, it might be a good time to start following Tim on Twitter.

jide.js

Patrick Gotthardt recently wrote two articles about jide.js. One includes jide.js benchmarks:

Since the next release of jide.js is supposed to introduce massive performance improvements, I thought it might be a good idea to see how it holds up against this benchmark. I used a modified version from vue.js which seems to include a few more nice frameworks.

There’s also a nicely presented introduction to jide.js:

jide.js is a new toolkit for creating modern web applications. It consists of a collection of useful controls and all the tools you need to create your own, application specific, components. jide.js fully embraces AMD (require.js) to allow you to pick only those parts of it that you truly need. Starting with version 1.0.0-beta3, you’ll also be able to use it with Browserify.

At its core, jide.js is built around observable values, event emitters, and data binding. It utilizes the features of modern browsers (IE9+) to create a solid cross platform experience and to leverage the features of the current JavaScript language instead of clinging to the past.

Val

Mark Steve Samson has created a Valentine card generator (GitHub: marksteve / val, License: MIT). If you’ve been desperately searching for a tweenHeart function, then you’re in luck!

Pageres: Responsive Screenshots

13 Feb 2014 | By Alex Young | Comments | Tags node apps design

Pageres

Sindre Sorhus sent in pageres (GitHub: sindresorhus / pageres, License: MIT, npm: pageres), a command-line tool for generating screenshots of websites.

You can install it with npm install --global pageres, and then run it with a URL and a size:

pageres http://dailyjs.com/2014/02/12/node-roundup/ --sizes 800x600

It’s based on webshot, which is another small module that wraps around PhantomJS. There are other modules like pageres, but what I like about it is the focus on sizes: you could script it to generate screenshots of responsive websites. The command-line options allow you to specify many dimensions at once, so it’s easy to generate results for a responsive site.

This will work well if you’ve got marketing materials that include screenshots, or if your designers want to see useful outputs from a CI server.

Another cool feature is the Node API:

var pageres = require('pageres');

pageres(['todomvc.com'], ['1366x768', '1600x900'], function () {
  console.log('done');
});

I’ve made a few lightweight wrappers around PhantomJS before – there’s an HTML-to-PDF invoice generator I made last year that’s been ticking over nicely in production. However, I like the focus on dimensions in pageres, and the command-line interface is very friendly.

Node Roundup: Multiple Node Instances, to-locals, pipes-and-filters

12 Feb 2014 | By Alex Young | Comments | Tags node modules apps

Running Multiple Instances in a Single Process

StrongLoop has a useful blog with lots of posts about Node. Yesterday Ben Noordhuis posted Running Multiple Instances in a Single Process:

Imagine a node-webkit application where each window runs its own Node instance that is isolated from all other windows. Or Node embedded in a phone or network switch where it is performing routing logic for multiple connections, but in a single process.

It’s a pretty tall order because Node started out as – and still is – a single-threaded application, built around the concept of a single event loop, with hundreds of global variables that store various bits of state.

The post goes on to show how add-on authors can use contexts, with NODE_MODULE_CONTEXT_AWARE.

to-locals

eyy sent in to-locals (GitHub: eyy / to-locals, License: MIT, npm: to-locals), a module that transforms callback functions into Connect middleware that automatically sets res.locals for use in views. This example shows how easy it is to expose Mongoose models:

var users = toLocals(mongoose.model('users'), 'find', 'users');
app.get('/users', users, function(req, res, next) {
  // res.locals now has the loaded users
});

It’s a small project, but even so Mocha tests have been included, and the documentation highlights the main features.

Pipes and Filters

Pipes and Filters (GitHub: slashdotdash / node-pipes-and-filters, License: MIT, npm: pipes-and-filters) by Ben Smith is a module for composing sets of asynchronous operations:

Pipeline.create('order processing')
  .use(decrypt)
  .use(authenticate)
  .use(deDuplicate)
  .breakIf(function(input) { return input.exists; })
  .execute(message, function completed(err, result) {
    // error or success handler
  });
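
Each filter in the pipeline is an asynchronous function. Here’s a hedged sketch of what one might look like, assuming a Node-style (input, next) signature and a hypothetical decryption helper:

function decrypt(input, next) {
  try {
    // someDecryptFunction is a hypothetical helper used for illustration.
    var plain = someDecryptFunction(input);
    next(null, plain);  // pass the transformed input to the next filter
  } catch (err) {
    next(err);          // abort the pipeline with an error
  }
}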

Ben wrote a blog post that introduces the module with detailed examples: Pipes and Filters to cure Node.js async woes.

How can we perform complex processing on an input data structure, while maintaining independence and flexibility?

The Pipes and Filters pattern provides one possible solution. Described in the Enterprise Integration Patterns book as a method to “divide a larger processing task into a sequence of smaller, independent processing steps (Filters) that are connected by channels (Pipes).”

jQuery UI 1.10.4, jqModal, floatThead

11 Feb 2014 | By Alex Young | Comments | Tags jquery plugins ui

jQuery UI 1.10.4

jQuery UI 1.10.4 is out:

The fourth maintenance release for jQuery UI 1.10 is out. This update brings bug fixes for Widget Factory, Position, Droppable, Resizable, Accordion, Autocomplete, Button, Datepicker, Dialog, Menu, Slider, Spinner, Tabs, and the CSS Framework. For the full list of changes, see the changelog.

jQuery UI 1.10.3 was released last May, so it’s been quite a while since the last release!

jqModal

jqModal (GitHub: briceburg / jqModal, License: MIT, GPL) by Brice Burgess is a plugin for showing modals, popups, and notices. To use it, you just need a suitable container element with dialog content, and then to call $('#dialog').jqm().

It can load content using Ajax, and allows dialogs to be nested. External content can also be loaded using iframes.
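
A minimal sketch, assuming a hidden #dialog element and a trigger link; jqm() initialises the modal and jqmShow() opens it programmatically:

$(function() {
  $('#dialog').jqm();

  $('#open-dialog').click(function() {
    $('#dialog').jqmShow();
    return false;
  });
});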

jquery.floatThead

jquery.floatThead (GitHub: mkoryak / floatThead, License: CC BY-SA 4.0) by Misha Koryak is a plugin for floating table headers. Headers can be floated inside elements with overflow scrolling, or relative to the entire window.

Overflow scrolling requires that the “scroll container” is specified:

var $table = $('table.demo');
$table.floatThead({
  scrollContainer: function($table) {
    return $table.closest('.wrapper');
  }
});
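
Floating relative to the whole window is the simpler case; as far as I can tell it needs no options at all:

$('table.demo').floatThead();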

Kettle.js, Backbone.SuperModel, Mem.js

10 Feb 2014 | By Alex Young | Comments | Tags mvvm mvc backbone

Kettle.js

Kettle.js (GitHub: smelnikov / kettle, License: MIT) by Sergey Melnikov is an alternative approach to Backbone views that uses a declarative syntax to define elements, their bindings, and their associated events.

It supports two-way data binding, sub views, and can be extended to support custom Kettle elements.

Mem.js

Mem.js (GitHub: artyomtrityak / mem.js, License: MIT) by Artyom Trityak is a memory management library for Backbone. It allows you to save, retrieve, and destroy instances of Backbone classes:

var View = Backbone.View({});

// On set returns new stored function instance or object
var headerViewIns = Mem.set('headerView', View, { el: 'body' });

It can remove and recreate instances with Mem.reset, and remove outdated objects with Mem.manage.

Backbone.SuperModel

Backbone.SuperModel (GitHub: laoshanlung/backbone.supermodel, License: MIT, npm: backbone.supermodel) by Tan Nguyen is a model class that offers support for nested collections. It supports dot notation for getters and setters, and an updated toJSON that reflects the nested structure. Relationships can be defined between models as well.

var wallet = {
  money: {
    amount: 4000,
    currency: 'euro'
  },
  name: 'Tan Nguyen'
};

var myStuff = new Backbone.Model();
myStuff.set('wallet', wallet);
myStuff.get('wallet').money.amount; // 4000

The project includes tests and benchmarks, and examples can be found in the readme.