Generators and Suspend

31 May 2013 | By Alex Young | Comments | Tags node modules es6

ECMAScript 6 generators are at the draft stage, and available in Node 0.11 when Node is run with --harmony or --harmony-generators. Generators are “first-class coroutines” – think of functions that can be suspended and resumed.

Generators are denoted with function*, and produce values by calling yield. The value isn’t really returned in the usual sense: yield could be placed inside a loop, and then next is called on the generator to fetch each yielded value. The generator is said to be an iterator – it can be provided as the expression to an iteration statement like for...of:

function* generator() {
  for (;;) {
    yield someValue;
  }
}

for (var value of generator()) {
  // Do something with `value`,
  // then `break` when enough values have been yielded
}

The ECMAScript 6 wiki has a Fibonacci sequence example, but generators don’t really hit their conceptual stride until you start hooking generators up to other generators. The classic example of this is consumer-producer relationships: generators that produce values, and then consumers that use them. The two generators are said to be symmetric – a continuous evaluation where coroutines yield to each other, rather than two functions that call each other.
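To make the producer-consumer idea concrete, here’s a small sketch (written in plain ES6 syntax rather than the --harmony builds of the time; the generator names are made up): one generator produces an endless stream of values, and a consumer generator filters them, only doing work when the loop asks for the next value.

```javascript
// Producer: yields an infinite stream of numbers, one per request.
function* naturals() {
  var n = 0;
  for (;;) {
    yield n++;
  }
}

// Consumer: pulls values from the producer and yields only the even ones.
function* evens(source) {
  for (var n of source) {
    if (n % 2 === 0) yield n;
  }
}

var collected = [];
for (var value of evens(naturals())) {
  if (value > 8) break;
  collected.push(value);
}

console.log(collected); // [ 0, 2, 4, 6, 8 ]
```

Nothing runs until the for...of loop requests a value, which is what makes the producer’s infinite loop safe.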

Jeremy Martin sent in a small but novel module based on generators called suspend (GitHub: jmar777 / suspend, License: MIT, npm: suspend). As it needs Node 0.11 and for Node to be run with --harmony, let’s just say it’s academically interesting for now.

You can think of suspend as an early example of generators that feature an idiomatic Node API:

// async without suspend
async.map(['file1', 'file2', 'file3'], fs.stat, function(err, results) {
  // results is now an array of stats for each file
});

// async with suspend
var res = yield async.map(['file1', 'file2', 'file3'], fs.stat, resume);

Here the async module has been modified to use suspend, resulting in more concise code.

suspend is “red light, green light” for asynchronous code execution. yield means stop, and resume means go.

If this sounds familiar, that’s because it’s not semantically too different to node-fibers. The node-fibers documentation includes a comparison between the ES6 generators example and its own syntax.

This is the entire source to suspend:

var suspend = module.exports = function suspend(generator, opts) {
  opts || (opts = {});

  return function start() {
    var iterator = generator.call(this, function resume(err) {
      if (opts.throw) {
        if (err) return iterator.throw(err);
        iterator.send(Array.prototype.slice.call(arguments, 1));
      } else {
        iterator.send(Array.prototype.slice.call(arguments));
      }
    });
    iterator.next();
  };
};

The suspend function accepts a generator and returns a function. The callback supplied to suspend will be passed the resume function, which accepts an error argument to fit Node’s callback API style. The user-supplied callback can then call yield on an asynchronous function that accepts resume as its callback, allowing Node’s core modules (or any other asynchronous methods) to be used in a synchronous style:

suspend(function* (resume) {
  var data = yield fs.readFile(__filename, resume);
})();
I liked this twist on generators, and I think modules like this will start to become more important in the JavaScript community over the next few years.

AngularJS: Adding Dependencies

30 May 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds bower grunt

Adding Dependencies with Bower

This tutorial is really about Yeoman, Bower, and Grunt, because I still feel like it’s worth exploring the build system that I introduced for this AngularJS project. I appreciate that the number of files installed by Yeoman is a little bit bewildering, so we’re going to take a step back from AngularJS and look at how dependencies work and how to add new dependencies to a project.

Although Yeoman helps get a new project off the ground, it takes a fair amount of digging to figure out how everything is laid out. For example: let’s say we want to add sass-bootstrap to djsreader – how exactly do we do this?

Yeoman uses Bower for managing dependencies, and Bower uses component.json (or bower.json by default in newer versions). To add sass-bootstrap to the project, open component.json and add "sass-bootstrap": "2.3.x" to the dependencies property:

{
  "name": "djsreader",
  "version": "0.0.0",
  "dependencies": {
    "angular": "~1.0.5",
    "json3": "~3.2.4",
    "es5-shim": "~2.0.8",
    "angular-resource": "~1.0.5",
    "angular-cookies": "~1.0.5",
    "angular-sanitize": "~1.0.5",
    "sass-bootstrap": "2.3.x"
  },
  "devDependencies": {
    "angular-mocks": "~1.0.5",
    "angular-scenario": "~1.0.5"
  }
}

Next run bower install to install the dependencies to app/components. If you look inside app/components you should see sass-bootstrap in there.

Now that the package is installed, how do we actually use it in our project? The easiest way is to create a suitable Grunt task.


Grunt runs the djsreader development server and compiles production builds that can be dropped onto a web server. Gruntfile.js is mostly configuration – it has the various settings needed to drive Grunt tasks so they can build our project. One task is compass – if you search the file for compass you should see a property that defines some options for compiling Sass files.

The convention for Grunt task configuration is taskName: { argument: options }. We want to add a new argument to the compass task for building the Bootstrap Sass files. We know the files are in app/components/sass-bootstrap, so we just need to tell it to compile the files in there.

Add a new property to compass called bootstrap. It should be on line 143:

compass: {
  // ...existing options, dist, and server targets...
  bootstrap: {
    options: {
      sassDir: '<%= %>/components/sass-bootstrap/lib',
      cssDir: '.tmp/styles'
    }
  }
}

Near the bottom of the file add an entry for compass:bootstrap to grunt.registerTask('server', [ and grunt.registerTask('build', [:

grunt.registerTask('server', [
  'compass:bootstrap', /* This one! */
  // ...the rest of the existing server tasks
]);

This causes the Bootstrap .scss files to be compiled whenever a server is started.

Now open app/index.html and add styles/bootstrap.css:

<link rel="stylesheet" href="styles/bootstrap.css">
<link rel="stylesheet" href="styles/main.css">



The settings files Yeoman created for us make managing dependencies easy – there’s a world of cool things you can find with bower search and try out.

This week’s code is in commit 005d1be.

Node Roundup: 0.10.8, msfnode, vnc-over-gif

29 May 2013 | By Alex Young | Comments | Tags node modules security gif vnc
You can send in your Node projects for review through our contact form.

Node 0.10.8

Node 0.10.8 was released last week. v8, uv, and npm were all upgraded, and there are fixes and improvements to be found in the http, buffer, and crypto modules. This is the third stable release so far in May.



msfnode (GitHub: eviltik / msfnode, License: GPL 3, npm: msfnode) by Michel Soisson is a Metasploit API client for Node. Metasploit is a hugely popular penetration testing framework. This module allows you to use Node to script Metasploit. The Metasploit API supports things like managing jobs, loading plugins, and interacting with open sessions to compromised systems.

The module provides a metasploitClient constructor, which can be passed an object that contains the Metasploit server’s details, including login and password. The client is event-based, and the project’s readme has an example of how to get a login token and make a request against a server.


Andrey Sidorov sent in vnc-over-gif (GitHub: sidorares / vnc-over-gif, License: MIT, npm: vnc-over-gif), a VNC viewer that uses animated gifs as the data transport. It currently has no client-side JavaScript, so it acts purely as a means of viewing a VNC session. The author is interested in expanding it further with an Ajax-based UI. There’s a good background to the project in the vnc-over-gif FAQ.

Although it isn’t interactive, it’s a great hack – the code is currently only around 50 lines, which Andrey claims only took 30 minutes to write.

jQuery Roundup: 1.10.0, 2.0.1, AopJS, Backbone.Cache and Backbone.Cleanup

28 May 2013 | By Alex Young | Comments | Tags jquery plugins backbone.js aspect-oriented
Note: You can send your plugins and articles in for review through our contact form.

1.10.0 and 2.0.1

jQuery 1.10.0 and 2.0.1 were released last week:

Our main goal with these two releases is to synchronize the features and behavior of the 1.x and 2.x lines, as we pledged a year ago when jQuery 2.0 was announced. Going forward, we’ll try to keep the two in sync so that 1.11 and 2.1 are feature-equivalent for example.

Even though these newer jQuery releases have shed the legacy IE support baggage, there are still IE-specific fixes: IE9 focus of death.


AopJS (GitHub: victorcastroamigo / aopjs, License: MIT, jQuery: aop) by Víctor Castro Amigo is a minimal aspect oriented library for JavaScript, with a jQuery plugin. It has a chainable API that can be used to define advice, with various types: before, after, afterReturning, afterThrowing, and around.

The author has included unit tests, and the readme has plenty of examples. The jQuery portion of the project doesn’t add any specific aspect-oriented enhancements to jQuery itself, it just binds to $.aop.aspect.

Backbone.Cache and Backbone.Cleanup

Naor Ye sent in some of his Backbone.js plugins. Backbone.Cache allows you to define a cache object that models and collections can use. You’ll need to make your models and collections inherit from the right classes: Backbone.CachedModel and Backbone.CachedCollection provide a cacheObject property that can be used to point to a suitable object to use as a cache.

Backbone.Cleanup offers parent classes for views and Backbone.Router to help you clean and reuse nested views. A method called markCurrentView is used to set the current view, so when the view is no longer active its cleanup method will be triggered.

Cytoscape.js
27 May 2013 | By Alex Young | Comments | Tags libraries node maths


Cytoscape.js (GitHub: cytoscape / cytoscape.js, License: LGPL, npm: cytoscape), developed at the Donnelly Centre at the University of Toronto by Max Franz, is a graph library that works with Node and browsers. This library is for working with “graphs” in the mathematical sense – interconnected sets of nodes connected by edges.

The API uses lots of sensible JavaScript idioms: it’s event-based, functions return objects so calls can be chained, JSON definitions of elements can be used, and nodes can be selected with selectors that are modelled on CSS selectors and jQuery’s API. That means you can query a graph with something like this: cy.elements('node:locked, edge:selected').

Styling graphs is also handled in a natural manner:

Graph style and data should be separate, and the library should provide core functionality with extensions adding functionality on top of the library.

Max and several contributors have been working on the project for two years now, so it’s quite mature at this point. The project comes with detailed documentation, a build script, and a test suite written with QUnit.

Node Hosting with Modulus

25 May 2013 | By Alex Young | Comments | Tags node hosting

Modulus is a new hosting platform dedicated to Node. Why “platform”? Well, Modulus provides a complete stack for web application development: MongoDB is used for the database and file storage, and WebSockets are supported out of the box. Applications running on the Modulus stack get metrics – requests are logged and analysed in real-time. Horizontal scaling is supported by running multiple instances of your application.

Pricing is determined by the number of instances (servos) that you run, and storage used. The Modulus pricing page has some sliders, allowing you to see how much it’ll cost to run your application per-month.

I asked Modulus about using different versions of Node, as Heroku supports 0.4 to 0.10. However, at the time of writing only Node 0.8.15 is supported. Ghuffran Ali from Modulus said that they’re working on supporting multiple Node versions as soon as Monday (27th May), so keep an eye on the Modulus blog for details on that.

It’s easy to get started with Modulus – there’s a sample project, plus you can sign in with GitHub so it doesn’t take too much effort to get a basic application running. They’re also offering $15 free credit, so you could run something more substantial there to see how everything works.

Modulus uses a web-based interface for managing projects that allows various settings to be changed, like environment variables and a global SSL redirect. There’s also a command-line client – if you sign in with GitHub, make sure you use modulus login with -g so you can sign in with your GitHub account.

On a related note, IrisCouch has joined Nodejitsu. That means CouchDB and Redis are now both supported by Nodejitsu:

This means that our users will be able to deploy their applications and databases from the same set of tools all backed by node.js. If you’re an existing IrisCouch user you will be notified and given ample time to migrate your IrisCouch account into a Nodejitsu account.

It’s impressive to see so much innovation in the Node hosting/PaaS space!

AngularJS: More on Dependency Injection

23 May 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds

In the AngularJS tutorials I’ve been writing, you might have noticed the use of dependency injection. In this article I’m going to explain how dependency injection works, and how it relates to the small tutorial project we’ve created.

Dependency injection is a software design pattern. The motivation for using it in Angular is to make it easier to transparently load mocked objects in tests. The $http module is a great example of this: when writing tests you don’t want to make real network calls, but defer the work to a fake object that responds with fixture data.

The earlier tutorials used dependency injection for this exact use case: in the main controller, MainCtrl is set up to load the $http module, which can then be transparently replaced during testing.

  .controller('MainCtrl', function($scope, $http, $timeout) {

Now forget everything I just said about dependency injection, and look at the callback that has been passed to .controller in the previous example. The $http and $timeout modules have been added by me because I want to use the $http service and the $timeout service. These are built-in “services” (an Angular term), but they’re not standard arguments. In fact, I could have specified these arguments in any order:

  .controller('MainCtrl', function($scope, $timeout, $http) {

This is possible because Angular looks at the function argument names to load dependencies. Before you run away screaming about magic, it’s important to realise that this is just one way to load dependencies in Angular projects. For example, this is equivalent:

  .controller('MainCtrl', ['$scope', '$http', '$timeout', function($scope, $http, $timeout) {

The array-based style is more like AMD, and requires a little bit of syntactical overhead. I call the first style “introspective dependency injection”. The array-based syntax allows us to use different names for the dependencies, which can be useful sometimes.

This raises the question: how does introspective dependency injection cope with minimisers, where variables are renamed to shorter values? Well, it doesn’t cope with it at all. In fact, minimisers need help to translate the first style to the second.
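To see why, here’s a simplified sketch of the kind of introspection Angular performs (the real implementation is Angular’s annotate function, which handles many more edge cases):

```javascript
// Simplified Angular-style argument introspection: read the parameter
// names back out of the function's own source code.
function annotate(fn) {
  var src = fn.toString();
  var args = src.slice(src.indexOf('(') + 1, src.indexOf(')'));
  return args.split(',').map(function(s) { return s.trim(); }).filter(Boolean);
}

function MainCtrl($scope, $http, $timeout) {}
console.log(annotate(MainCtrl)); // [ '$scope', '$http', '$timeout' ]

// After minification the parameter names are mangled, so the dependency
// names are lost:
function minifiedCtrl(e, r, t) {}
console.log(annotate(minifiedCtrl)); // [ 'e', 'r', 't' ]
```

The second call shows the problem: once a minimiser renames the arguments, there’s nothing left to introspect.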

Yeoman and ngmin

One reason I built the tutorial series with Yeoman was because the Angular generator includes grunt-ngmin. This is a Grunt task that uses ngmin – an Angular-aware “pre-minifier”. It allows you to use the shorter, introspective dependency injection syntax, while still generating valid minimised production builds.

Therefore, building a production version of djsreader with grunt build will correctly generate a deployable version of the project.

Why is it that almost all of Angular’s documentation and tutorials use the potentially dangerous introspective dependency injection syntax? I’m not sure, and I haven’t looked into it. I’d be happier if the only valid solution was the array-based approach, which looks more like AMD – a style most of us are already comfortable with anyway.

Just to prove I’m not making things up, here is the minimised source for djsreader:

"use strict";angular.module("djsreaderApp",[]).config(["$routeProvider",function(e){e.when("/",{templateUrl:"views/main.html",controller:"MainCtrl"}).otherwise({redirectTo:"/"})}]),angular.module("djsreaderApp").controller("MainCtrl",["$scope","$http","$timeout",function(e,r,t){e.refreshInterval=60,e.feeds=[{url:""}],e.fetchFeed=function(n){n.items=[];var o="*%20from%20xml%20where%20url%3D'";o+=encodeURIComponent(n.url),o+="'%20and%20itemPath%3D'feed.entry'&format=json&diagnostics=true&callback=JSON_CALLBACK",r.jsonp(o).success(function(e){e.query.results&&(n.items=e.query.results.entry)}).error(function(e){console.error("Error fetching feed:",e)}),t(function(){e.fetchFeed(n)},1e3*e.refreshInterval)},e.addFeed=function(r){e.feeds.push(r),e.fetchFeed(r),e.newFeed={}},e.deleteFeed=function(r){e.feeds.splice(e.feeds.indexOf(r),1)},e.fetchFeed(e.feeds[0])}]);

The demangled version shows that we’re using the array-based syntax, thanks to ngmin:

angular.module("djsreaderApp").controller("MainCtrl", ["$scope", "$http", "$timeout",


In case you’re wondering how the introspective dependency injection style works, look no further than annotate(fn). This function uses Function.prototype.toString to extract the argument names from the JavaScript source code. The results are effectively cached, so even though this sounds horrible it doesn’t perform as badly as it could.


Nothing I’ve said here is new – while researching this post I found The Magic Behind Dependency Injection by Alex Rothenberg, which covers the same topic, and the Angular Dependency Injection documentation outlines the issues caused by the introspective approach and suggests that it should only be used for pretotyping.

However, I felt like it was worth writing an overview of the matter, because although Yeoman is great for a quick start to a project, you really need to understand what’s going on behind the scenes!

Node Roundup: 0.10.7, JSON Editor, puid, node-mac

22 May 2013 | By Alex Young | Comments | Tags node modules mac windows json uuid
You can send in your Node projects for review through our contact form.

Node 0.10.7

Node 0.10.7 was released last week. This version includes fixes for the buffer and crypto modules, and timers. The buffer/crypto fix relates to encoding issues that could crash Node: #5482.

JSON Editor Online

JSON Editor Online

JSON Editor Online (GitHub: josdejong / jsoneditor, License: Apache 2.0, npm: jsoneditor, bower: jsoneditor) by Jos de Jong is a web-based JSON editor. It uses Node for building the project, but it’s actually 100% web-based. It uses the Ace editor, and includes features for searching and sorting JSON.

It’s installable with Bower, so you could technically use it as a component and embed it into another project.


Azer Koçulu sent in a bunch of new modules again, and one I picked out this time was english-time (GitHub: azer / english-time, License: BSD, npm: english-time). He’s using it with some of the CLI tools he’s written, so rather than specifying a date in an ISO format users can express durations in English.

The module currently supports milliseconds, seconds, minutes, hours, days, weeks, and shortened expressions based on combinations of these. For example, 3 weeks, 5d 6h would work.
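english-time’s parser is more sophisticated than this, but the core idea can be sketched like so (a hypothetical reimplementation for illustration, not the module’s actual code):

```javascript
// Hypothetical sketch of English-duration parsing: scan number/unit pairs
// and sum them up in milliseconds.
var UNITS = {
  ms: 1, millisecond: 1, milliseconds: 1,
  s: 1000, second: 1000, seconds: 1000,
  m: 60000, minute: 60000, minutes: 60000,
  h: 3600000, hour: 3600000, hours: 3600000,
  d: 86400000, day: 86400000, days: 86400000,
  w: 604800000, week: 604800000, weeks: 604800000
};

function englishTime(str) {
  var total = 0;
  var re = /(\d+(?:\.\d+)?)\s*([a-z]+)/gi;
  var match;
  while ((match = re.exec(str))) {
    var unit = UNITS[match[2].toLowerCase()];
    if (unit) total += parseFloat(match[1]) * unit;
  }
  return total;
}

console.log(englishTime('3 weeks, 5d 6h')); // 2268000000 (milliseconds)
```

A millisecond value like this is convenient because it can be passed straight to setTimeout or date arithmetic.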


puid (GitHub: pid / puid, License: MIT, npm: puid) by Sascha Droste can generate unique IDs suitable for use in a distributed system. The IDs are based on time, machine, and process, and can be 24, 14, or 12 characters long.

Each ID is comprised of an encoded timestamp, machine ID, process ID, and a counter. The counter is based on nanoseconds, and the machine ID is based on the network interface ID or the machine’s hostname.


node-windows provides integration for Windows-specific services, like creating daemons and writing to eventlog. The creator of node-windows, Corey Butler, has also released node-mac (GitHub: coreybutler / node-mac, License: MIT, npm: node-mac). This supports Mac-friendly daemonisation and logging.

Services can be created using an event-based API:

var Service = require('node-mac').Service;

// Create a new service object
var svc = new Service({
  name: 'Hello World',
  description: 'The example web server.',
  script: '/path/to/helloworld.js'
});

// Listen for the "install" event, which indicates the
// process is available as a service.
svc.on('install', function() {
  svc.start();
});

svc.install();


It also supports service removal, and event logging.

jQuery Roundup: Anchorify.js, Minimalect

21 May 2013 | By Alex Young | Comments | Tags jquery plugins select
Note: You can send your plugins and articles in for review through our contact form.


Anchorify.js (GitHub: willdurand / anchorify.js, License: MIT) by William Durand automatically inserts unique anchored headings. The default markup is an anchor with a pilcrow sign, but this can be overridden if desired.

Even though the plugin is relatively simple, William has included QUnit tests and put the project up on jQuery’s new plugin site.


Minimalect (GitHub: groenroos / minimalect, License: MIT) by Oskari Groenroos is a select element replacement that supports optgroups, searching, keyboard navigation, and themes. It comes with two themes that are intentionally simple, allowing you to easily customise them using CSS, and no images are required by default.

Options include placeholder text, a message when no search results are found, class name overrides, and lifecycle callbacks.

Terminology: Modules

20 May 2013 | By Alex Young | Comments | Tags modules commonjs amd terminology basics js101

Learning modern modular frameworks like Backbone.js and AngularJS involves mastering a large amount of terminology, even just to understand a Hello, World application. With that in mind, I wanted to take a break from higher-level libraries to answer the question: what is a module?

The Background Story

Client-side development has always been rife with techniques for patching missing behaviour in browsers. Even the humble <script> tag has been cajoled and beaten into submission to give us alternative ways to load scripts.

It all started with concatenation. Rather than loading many scripts on a page, they are instead joined together to form a single file, and perhaps minimised. One school of thought was that this is more efficient, because a single large HTTP request will ultimately perform better than many smaller requests.

That makes a lot of sense when loading libraries – things that you want to be globally available. However, when writing your own code it somehow feels wrong to place objects and functions at the top level (the global scope).

If you’re working with jQuery, you might organise your own code like this:

$(function() {
  function MyConstructor() {
  }

  MyConstructor.prototype = {
    myMethod: function() {
    }
  };

  var instance = new MyConstructor();
});
That neatly tucks everything away while also only running the code when the DOM is ready. That’s great for a few weeks, until the file is bustling with dozens of objects and functions. That’s when it seems like this monolithic file would benefit from being split up into multiple files.

To avoid the pitfalls caused by large files, we can split them up, then load them with <script> tags. The scripts can be placed at the end of the document, causing them to be loaded after the majority of the document has been parsed.

At this point we’re back to the original problem: we’re loading perhaps dozens of <script> tags inefficiently. Also, scripts are unable to express dependencies between each other. If dependencies between scripts can be expressed, then they can be shared between projects and loaded on demand more intelligently.

Loading, Optimising, and Dependencies

The <script> tag itself has an async attribute. This helps indicate which scripts can be loaded asynchronously, potentially decreasing the time the browser blocks when loading resources. If we’re going to use an API to somehow express dependencies between scripts and load them quickly, then it should load scripts asynchronously when possible.

Five years ago this was surprisingly complicated, mainly due to legacy browsers. Then solutions like RequireJS appeared. Not only did RequireJS allow scripts to be loaded programmatically, but it also had an optimiser that could concatenate and minimise files. The lines between loading scripts, managing dependencies, and file optimisation are inherently blurred.


The problem with loading scripts is it’s asynchronous: there’s no way to say load('/script.js') and have code that uses script.js directly afterwards. The CommonJS Modules/AsynchronousDefinition, which became AMD (Asynchronous Module Definition), was designed to get around this. Rather than trying to create the illusion that scripts can be loaded synchronously, all scripts are wrapped in a function called define. This is a global function inserted by a suitable AMD implementation, like RequireJS.

The define function can be used to safely namespace code, express dependencies, and give the module a name (id) so it can be registered and loaded. Module names are “resolved” to script names using a well-defined format.
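To illustrate the shape of define, here’s a toy module registry (real AMD loaders like RequireJS resolve ids to script URLs and load them asynchronously; the module names below are made up):

```javascript
// Toy AMD-style registry: define stores a factory and its dependency ids,
// and requireModule resolves dependencies before running the factory.
var registry = {};

function define(id, deps, factory) {
  registry[id] = { deps: deps, factory: factory };
}

function requireModule(id) {
  var mod = registry[id];
  return mod.factory.apply(null, mod.deps.map(requireModule));
}

define('lib/greeting', [], function() {
  return { hello: function(name) { return 'Hello, ' + name; } };
});

define('app/main', ['lib/greeting'], function(greeting) {
  return greeting.hello('AMD');
});

console.log(requireModule('app/main')); // 'Hello, AMD'
```

Because each module names its dependencies up front, a build tool can walk the registry and work out a safe load order without running any code.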

Although this means every module you write must be wrapped in a call to define, the authors of RequireJS realised it meant that build tools could easily interpret dependencies and generate optimised builds. So your development code can use RequireJS’s client-side library to load the necessary scripts, then your production version can preload all scripts in one go, without having to change your HTML templates (r.js is used to do this in practice).


Meanwhile, Node was becoming popular. Node’s module system is characterised by using the require statement to return a value that contains the module:

var User = require('models/user');

Can you imagine if every Node module had to be wrapped in a call to define? It might seem like an acceptable trade-off in client-side code, but it would feel like too much boilerplate in server-side scripting when compared to languages like Python.

There have been many projects to make this work in browsers. Most use a build tool to load all of the modules referenced by require up front – they’re stored in memory so require can simply return them, creating the illusion that scripts are being loaded synchronously.

Whenever you see require and exports you’re looking at CommonJS Modules/1.1. You’ll see this referred to as “CommonJS”.

Now you’ve seen CommonJS modules, AMD, and where they came from, how are they being used by modern frameworks?

Modules in the Wild

Dojo uses AMD internally and for creating your own modules. That wasn’t always the case – it originally had its own module system, but adopted AMD early on.

AngularJS uses its own module system that looks a lot like AMD, but with adaptations to support dependency injection.

RequireJS supports AMD, but it can load scripts and other resources without wrapping them in define. For example, a dependency between your own well-defined modules and a jQuery plugin that doesn’t use AMD can be defined by using suitable configuration options when setting up RequireJS.
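For example, a shim configuration like the following (the paths and plugin name are made up) lets a non-AMD jQuery plugin declare its dependency on jQuery:

```javascript
require.config({
  paths: {
    jquery: 'components/jquery/jquery'
  },
  shim: {
    // jquery.myplugin.js doesn't call define, so describe it here:
    // load jQuery first, and expose the plugin's global as the module value.
    'jquery.myplugin': {
      deps: ['jquery'],
      exports: 'jQuery.fn.myplugin'
    }
  }
});
```

With that in place, other modules can list 'jquery.myplugin' as a dependency as if it were a well-behaved AMD module.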

There’s still a disparity between development and production builds. Even though RequireJS can be used to create serverless single page applications, most people still use a lightweight development server that serves raw JavaScript files, before deploying concatenated and minimised production builds.

The need for script loading and building, and tailoring for various environments (typically development, test, and production) has resulted in a new class of projects. Yeoman is a good example of this: it uses Grunt for managing builds and running a development server, Bower for defining the source of dependencies so they can be fetched, and then RequireJS for loading and managing dependencies in the browser. Yeoman generates skeleton projects that set up development and build environments so you can focus on writing code.

Hopefully now you know all about client-side modules, so the next time you hear RequireJS, AMD, or CommonJS, you know what people are talking about!

Impossible Mission, Full-Text Indexing, Backbone.Projections

17 May 2013 | By Alex Young | Comments | Tags games backbone.js node pdf

Impossible Mission

Impossible Mission

Impossible Mission by Krisztián Tóth is a JavaScript remake of the C64 classic. You can view the source to see how it all works, if things like this.dieByZapFrames and this.searchedFurniture sound appealing to you.

Krisztián previously sent in Boulder Dash and Wizard of Wor which were similar remakes written using the same approach.

Client-Side Full-Text Indexing

Gary Sieling sent in a post he wrote about full-text indexing with client-side JavaScript, in which he looks at PDF.js and Lunr: Building a Full-Text Index in JavaScript. I briefly mentioned Lunr by Oliver Nightingale back in March.

One great thing about this type of index is that the work can be done in parallel and then combined as a map-reduce job. Only three entries from the above object need to be combined, as “fields” and “pipeline” are static.


In relational databases, a projection is a subset of available data. Backbone.Projections (GitHub: andreypopp / backbone.projections, License: MIT, npm: backbone.projections) by Andrey Popp is the equivalent for Backbone collections – they allow a transformed subset of the values in a collection to be represented and synced.

The supported projections are Capped and Filtered. Capped collections are limited based on a size and function – the function will be used to order the results prior to truncating them. Filtered projections filter out results based on a function that returns a boolean.

Projections can be composed by passing one projection to another. This example creates a Filtered projection, and then passes it to a Capped projection to limit and order the results:

var todaysPosts = new Filtered(posts, {
  filter: function(post) {
    return post.get('date').isToday();
  }
});

var topTodaysPosts = new Capped(todaysPosts, {
  cap: 5,
  comparator: function(post) {
    return post.get('likes');
  }
});
The author has written unit tests with Mocha, and documentation is available in the readme.

AngularJS: Tests

16 May 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds


In the last part we changed the app to support multiple feeds.

This week you’ll learn how to write a short unit test to test the app’s main controller. This will involve mocking data.

If you get stuck at any part of this tutorial, check out the full source here: commit 7b4bda.

Neat and Tidy Tests

The goal of this tutorial is to demonstrate one method for writing neat and tidy tests. Ideally mocked data should be stored in separate files and loaded when required. What we absolutely don’t want is global variables littering memory.

To run tests with the Yeoman-generated app we’ve been working on, type grunt test. It’ll use Karma and Jasmine to run tests through Chrome using WebSockets. The workflow in the console is effortless, despite Chrome appearing and disappearing in the background (it won’t trample on your existing Chrome session, it’ll make a separate process). It doesn’t steal focus away, which means you can invoke tests and continue working on code without getting interrupted.

My workflow: mock, controller, test, and a terminal for running tests

The basic approach is to use $httpBackend.whenJSONP to tell AngularJS to return some mock data when the tests are run, instead of fetching the real feed data from Yahoo. That sounds simple enough, but there’s a slight complication: leaving mock data in the test sucks. So, what do we do about this? The karma.conf.js file that was created for us by the Yeoman generator contains a line for loading files from a mocks directory: 'test/mock/**/*.js'. These will be loaded before the tests, so let’s dump some JSON in there.

Interestingly, if you run grunt test right now it’ll fail, because the app makes a JSONP request, and the angular-mocks library will flag this as an error. Using $httpBackend.whenJSONP will fix this.

JSON Mocks

Open a file called test/mock/feed.js (you’ll need to mkdir test/mock first), then add this:

'use strict';

angular.module('mockedFeed', [])
  .value('defaultJSON', {
    query: {
      count: 2,
      created: '2013-05-16T15:01:31Z',
      lang: 'en-US',
      results: {
        entry: [
          {
            title: 'Node Roundup: 0.11.2, 0.10.6, subscribe, Omelette',
            link: { href: '' },
            updated: '2013-05-15T00:00:00+01:00',
            id: '',
            content: { type: 'html', content: 'example' }
          },
          {
            title: 'jQuery Roundup: 1.10, jquery-markup, zelect',
            link: { href: '' },
            updated: '2013-05-14T00:00:00+01:00',
            id: '',
            content: { type: 'html', content: 'example 2' }
          }
        ]
      }
    }
  });

This uses angular.module().value to set a value that contains some JSON. I derived this JSON from Yahoo’s API by running the app and looking at the network traffic in WebKit Inspector, then edited out the content properties because they were huge (DailyJS has full articles in feeds).

Loading the Mocked Value

Open test/spec/controllers/main.js and change the first beforeEach to load mockedFeed:

beforeEach(module('djsreaderApp', 'mockedFeed'));

The beforeEach method is provided by Jasmine, and will make the specified function run before each test. Now the defaultJSON value can be injected, along with the HTTP backend:

var MainCtrl, scope, mockedFeed, httpBackend;

// Initialize the controller and a mock scope
beforeEach(inject(function($controller, $rootScope, $httpBackend, defaultJSON) {
  // Set up the expected feed data
  httpBackend = $httpBackend;
  httpBackend.whenJSONP(/query\.yahooapis\.com/).respond(defaultJSON);

  scope = $rootScope.$new();
  MainCtrl = $controller('MainCtrl', {
    $scope: scope
  });
}));

You should be able to guess what's happening with $httpBackend.whenJSONP – whenever the app tries to contact Yahoo's service, it'll trigger our mocked HTTP backend and return the defaultJSON value instead. Cool!

The Test

The actual test is quite a comedown after all that mock wrangling:

it('should have a list of feeds', function() {
  httpBackend.flush();
  expect(scope.feeds[0].items[0].title).toBe('Node Roundup: 0.11.2, 0.10.6, subscribe, Omelette');
});

The test checks $scope has the expected data. httpBackend.flush will make sure the (fake) HTTP request has finished first. The scope.feeds value is the one that MainCtrl from last week derives from the raw JSON returned by Yahoo.


You should now be able to run grunt test and see some passing tests (just like in my screenshot). If not, check out djsreader on GitHub to see what’s different.

Most of the work for this part can be found in commit 7b4bda.

Node Roundup: 0.11.2, 0.10.6, subscribe, Omelette

15 May 2013 | By Alex Young | Comments | Tags node modules cli events pubsub
You can send in your Node projects for review through our contact form.

Node 0.11.2 and 0.10.6

Clearly the Node core developers have had an early summer holiday, and are now back to unleash new releases. In the space of a few days 0.11.2 and 0.10.6 were released. I was intrigued by the Readable.prototype.wrap update, which makes it support objectMode for streams that emit objects rather than strings or other data.

The 0.11.2 release has an update that guarantees the order of 'finish' events, and another that adds some new methods: cork and uncork. Corking basically forces buffering of all writes – data will be flushed when uncork is called or when end is called.

There is a detailed discussion about cork and the related _writev method on the Node Google Group: Streams writev API. There are some interesting comments about a similar earlier implementation by Ryan Dahl, the validity of which Isaac questions due to Node’s code changing significantly since then.

If you want to read about writev, check out the write(2) (man 2 write) manual page:

Write() attempts to write nbyte of data to the object referenced by the descriptor fildes from the buffer pointed to by buf. Writev() performs the same action, but gathers the output data from the iovcnt buffers specified by the members of the iov array…


Azer Koçulu has been working on a suite of modules for subscribing to and observing changes on objects:

Now he’s also released a module for subscribing to multiple pub/sub objects:

var subscribe = require('subscribe');
var a = pubsub();
var b = pubsub();
var c = pubsub();

subscribe(a, b, c, function(updates) {
  // => a.onUpdate
  // => 3, 4
  // => c.onUpdate
  // => 5, 6
});

a.publish(3, 4);
c.publish(5, 6);

This brings a compositional style of working to Azer’s other modules, allowing subscriptions to multiple lists and objects at the same time. The next example uses subscribe to combine the new-list module with new-object:

var fruits = newList('apple', 'banana', 'grapes');
var people = newObject({ smith: 21, joe: 23 });

subscribe(fruits, people, function(updates) {
  // => melon
  // => { alex: 25 }
});

fruits.push('melon');
people('alex', '25');


Omelette (GitHub: f / omelette, License: MIT, npm: omelette) by Fatih Kadir Akın is a template-based autocompletion module.

Programs and their arguments are defined using an event-based completion API, and then they can generate the zsh or Bash completion rules. There’s an animated gif in the readme that illustrates how it works in practice.

jQuery Roundup: 1.10, jquery-markup, zelect

14 May 2013 | By Alex Young | Comments | Tags jquery plugins markup templating select
Note: You can send your plugins and articles in for review through our contact form.

jQuery 1.10

A new 1.x branch of jQuery has been released, jQuery 1.10. This builds on the work in the 1.9 line:

It’s our goal to keep the 1.x and 2.x lines in sync functionally so that 1.10 and 2.0 are equal, followed by 1.11 and 2.1, then 1.12 and 2.2…

A few of the included fixes were things originally planned for 1.9.x, and others are new to this branch. As always, the announcement blog post contains links to full details of each change.


jquery-markup (GitHub: rse / jquery-markup, License: MIT) by Ralf S. Engelschall is a markup generator that works with several template engines (including Jade and Handlebars). By adding a tag, <markup>, $(selector).markup can be used to render templates interpolated with values.

Ralf said this about the plugin:

I wanted to use template languages like Handlebars but instead of having to store each fragment into its own file I still wanted to assemble all fragments together. Even more: I wanted to logically group and nest them to still understand the view markup code as a whole.

The <markup> tag can include a type attribute that is used to determine the templating language – this means you can use multiple template languages in the same document.


zelect (GitHub: mtkopone / zelect, License: WTFPL) by Mikko Koponen is a <select> component. It’s unit tested, and has built-in support for asynchronous pagination.

Unlike Chosen, it doesn’t come with any CSS, but that might be a good thing because it keeps the project simple. Mikko has provided an example with suitable CSS that you can use to get started.

If Chosen seems too large or inflexible for your project, then zelect might be a better choice.

Unix: It's Alive!

13 May 2013 | By Alex Young | Comments | Tags node modules unix

On a philosophical level, Node developers love Unix. I like to think that’s why Node’s core modules are relatively lightweight compared to other standard libraries (is an FTP library really necessary?) – Node’s modules quietly get out of the way, allowing the community to provide solutions to higher-level problems.

As someone who sits inside tmux/Vim/ssh all day, I’m preoccupied with command-line tools and ways to work more efficiently in the shell. That’s why I was intrigued to find bashful (GitHub: substack / bashful, License: MIT, npm: bashful) by substack. It allows Bash to be parsed and executed. To use it, hook it up with some streams:

var bash = require('bashful')(process.env);
bash.on('command', require('child_process').spawn);

var s = bash.createStream();
process.stdin.pipe(s).pipe(process.stdout);

After installing bashful, running this example with node sh.js will allow you to issue shell commands. Not all of Bash’s built-in commands are supported yet (there’s a list and to-do in the readme), but you should be able to execute commands and run true and false, then get the last exit status with echo $?.

How does this work? Well, the bashful module basically parses each line, character-by-character, to tokenise the input. It then checks anything that looks like a command against the list of built-in commands, and runs it. It mixes Node streams with a JavaScript bash parser to create a Bash-like layer that you can reuse with other streams.
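Here's a toy tokeniser of my own – not bashful's actual code – that gives a feel for the character-by-character approach:

```javascript
// Split a command line into words, treating quoted spans as single tokens.
function tokenise(line) {
  var tokens = [], current = '', quote = null;
  for (var i = 0; i < line.length; i++) {
    var ch = line[i];
    if (quote) {
      if (ch === quote) quote = null; // closing quote ends the quoted span
      else current += ch;
    } else if (ch === '"' || ch === "'") {
      quote = ch;                     // opening quote starts a quoted span
    } else if (ch === ' ') {
      if (current) { tokens.push(current); current = ''; }
    } else {
      current += ch;
    }
  }
  if (current) tokens.push(current);
  return tokens;
}

console.log(tokenise('echo "hello world" done'));
// [ 'echo', 'hello world', 'done' ]
```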

This module depends on shell-quote, which correctly escapes those gnarly quotes in shell commands. I expect substack will make a few more shell-related modules as he continues work on bashful.

ShellJS (GitHub: arturadib / shelljs, npm: shelljs) by Artur Adib has been around for a while, but still receives regular updates. This module gives you shell-like commands in Node:


require('shelljs/global');

mkdir('-p', 'out/Release');
cp('-R', 'stuff/*', 'out/Release');

It can even mimic Make, so you could write your build scripts with it. This would make sense if you’re sharing code with Windows-based developers.

There are plenty of other interesting Unix-related modules that are alive and regularly updated. One I was looking at recently is suppose (GitHub: jprichardson / node-suppose, License: MIT, npm: suppose) by JP Richardson, which is an expect(1) clone:

var suppose = require('suppose')
  , fs = require('fs')
  , assert = require('assert')

suppose('npm', ['init'])
  .on(/name\: \([\w|\-]+\)[\s]*/).respond('awesome_package\n')
  .on('version: (0.0.0) ').respond('0.0.1\n')
  // ...

It uses a chainable API to allow expect-like expressions to capture and react to the output of other programs.

Unix in the Node community is alive and well, but I'm sure there's also lots of Windows-related fun to be had – assuming you can figure out how to use Windows 8 with a keyboard and mouse that is…

P, EasyWebWorker, OpenPGP.js

10 May 2013 | By Alex Young | Comments | Tags node modules crypto p2p webworkers


P (GitHub: oztu / p, License: Apache 2, npm: onramp, bower: p) by Ozan Turgut is a client-side library with a WebSocket server for creating P2P networks by allowing browser-to-browser connections.

The onramp Node module is used to establish connections, but after that it isn’t necessary for communication between clients. The author has written up documentation with diagrams to explain how it works. Like other similar projects, the underlying technology is WebRTC, so it only works in Chrome or Firefox Nightly.


EasyWebWorker (GitHub: ramesaliyev / EasyWebWorker, License: MIT) by Rameş Aliyev is a wrapper for web workers which allows functions to be executed directly, and can execute global functions in the worker.

A fallback is provided for older browsers:

# Create web worker fallback if browser doesnt support Web Workers.
if this.document isnt undefined and !window.Worker and !window._WorkerPrepared
  window.Worker = _WorkerFallback

The _WorkerFallback class is provided, and uses XMLHttpRequest or ActiveXObject.

The source code is nicely commented if you want to look at what it does in more detail.


Jeremy Darling sent in OpenPGP.js (GitHub: openpgpjs / openpgpjs, License: LGPL), which is an OpenPGP implementation for JavaScript:

This is a JavaScript implementation of OpenPGP with the ability to generate public and private keys. Key generation can be a bit slow but you can also import your own keys.

Jeremy found that OpenPGP.js is used by Mailvelope, which is a browser extension that brings OpenPGP to webmail services like Gmail. That means Mailvelope can encrypt messages without having to upload a private key to a server.

AngularJS: Managing Feeds

09 May 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds


In the last part we looked at fetching and parsing feeds with YQL using Angular’s $http service. The commit for that part was 2eae54e.

This week you’ll learn more about Angular’s data binding by adding some input fields to allow feeds to be added and removed.

If you get stuck at any part of this tutorial, check out the full source here: commit c9f9d06.

Modeling Multiple Feeds

The previous part mapped one feed to a view by using the $scope.feed object. Now we want to support multiple feeds, so we’ll need a way of modeling ordered collections of feeds.

The easiest way to do this is simply by using an array. Feed objects that contain the post items and feed URLs can be pushed onto the array:

$scope.feeds = [{
  url: '',
  items: [ /* Blog posts go here */ ]
}];

Rendering Multiple Feeds

The view now needs to be changed to use multiple feeds instead of a single feed. This can be achieved by using the ng-repeat directive to iterate over each one (app/views/main.html):

<div ng-repeat="feed in feeds">
  <ul>
    <li ng-repeat="item in feed.items"><a href="">{{item.title}}</a></li>
  </ul>
  URL: <input size="80" ng-model="feed.url">
  <button ng-click="fetchFeed(feed)">Update</button>
  <button ng-click="deleteFeed(feed)">Delete</button>
  <hr />
</div>

The fetchFeed and deleteFeed methods should be added to $scope in the controller, but we’ll deal with those later. First let’s add a form to create new feeds.

Adding Feeds

The view for adding feeds needs to use an ng-model directive to bind a value so the controller can access the URL of the new feed:

  URL: <input size="80" ng-model="newFeed.url">
  <button ng-click="addFeed(newFeed)">Add Feed</button>

The addFeed method will be triggered when the button is clicked. All we need to do is push newFeed onto $scope.feeds, then clear newFeed so the form reverts to its previous state. The addFeed method is also added to $scope in the controller (app/scripts/controllers/main.js), like this:

$scope.addFeed = function(feed) {
  $scope.feeds.push(feed);
  $scope.newFeed = {};
};

This example could be written to use $scope.newFeed instead of the feed argument, but don’t you think it’s cool that arguments can be passed from the view just by adding it to the directive?

Fetching Feeds

The original $http code should be wrapped up as a method so it can be called by the ng-click directive on the button:

$scope.fetchFeed = function(feed) {
  feed.items = [];

  var apiUrl = "*%20from%20xml%20where%20url%3D'";
  apiUrl += encodeURIComponent(feed.url);
  apiUrl += "'%20and%20itemPath%3D'feed.entry'&format=json&diagnostics=true&callback=JSON_CALLBACK";

  $http.jsonp(apiUrl).
    success(function(data, status, headers, config) {
      if (data.query.results) {
        feed.items = data.query.results.entry;
      }
    }).
    error(function(data, status, headers, config) {
      console.error('Error fetching feed:', data);
    });
};

The feed argument will be the same as the one in the $scope.feeds array, so by clearing the current set of items using feed.items = []; the user will see instant feedback when “Update” is clicked. That makes it easier to see what’s happening if the feed URL is changed to another URL.

I’ve used encodeURIComponent to encode the feed’s URL so it can be safely inserted as a query parameter for Yahoo’s service.
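As a sketch of why that matters (the feed URL here is made up):

```javascript
// Characters like ':', '/', '?' and '=' would break the YQL query string,
// so they must be percent-encoded before being interpolated.
var feedUrl = 'http://example.com/atom.xml?alt=json';
console.log(encodeURIComponent(feedUrl));
// http%3A%2F%2Fexample.com%2Fatom.xml%3Falt%3Djson
```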

Deleting Feeds

The controller also needs a method to delete feeds. Since we’re working with an array we can just splice off the desired item:

$scope.deleteFeed = function(feed) {
  $scope.feeds.splice($scope.feeds.indexOf(feed), 1);
};
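Outside of Angular, the indexOf/splice pattern works like this:

```javascript
// Remove one element from an array by identity with indexOf + splice.
var feeds = [{ url: 'a' }, { url: 'b' }, { url: 'c' }];
var doomed = feeds[1];

feeds.splice(feeds.indexOf(doomed), 1);
console.log(feeds.map(function(f) { return f.url; })); // [ 'a', 'c' ]
```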

Periodic Updates

Automatically refreshing feeds is an interesting case in AngularJS because it can be implemented using the $timeout service. It’s just a wrapper around setTimeout, but it also delegates exceptions to $exceptionHandler.

To use it, add it to the list of arguments in the controller and set a sensible default value:

  .controller('MainCtrl', function($scope, $http, $timeout) {
    $scope.refreshInterval = 60;

Now make fetchFeed call itself, at the end of the method:

$timeout(function() { $scope.fetchFeed(feed); }, $scope.refreshInterval * 1000);

I’ve multiplied the value by 1000 so it converts seconds into milliseconds, which makes the view easier to understand:

<p>Refresh (seconds): <input ng-model="refreshInterval"></p>

The finished result


Now you can add more feeds to the reader, it’s starting to feel more like a real web application. Over the next few weeks I’ll add tests and a better interface.

The code for this tutorial can be found in commit c9f9d06.

Node Roundup: Node-sass, TowTruck, peer-vnc

08 May 2013 | By Alex Young | Comments | Tags node modules sass css vnc mozilla
You can send in your Node projects for review through our contact form.


Node-sass (GitHub: andrew / node-sass, License: MIT, npm: node-sass) by Andrew Nesbitt is a Node binding for libsass. It comes with some pre-compiled binaries, so it should be easy to get it running.

It has both synchronous and asynchronous APIs, and there’s an example app built with Connect so you can see how the middleware works: andrew / node-sass-example.

var sass = require('node-sass');
// Async
sass.render(scss_content, callback [, options]);

// Sync
var css = sass.renderSync(scss_content [, options]);

The project includes Mocha tests and more usage information in the readme.


C. Scott Ananian sent in TowTruck (GitHub: mozilla / towtruck, License: MPL) from Mozilla, which is an open source web service for collaboration:

Using TowTruck two people can interact on the same page, seeing each other’s cursors, edits, and browsing a site together. The TowTruck service is included by the web site owner, and a web site can customize and configure aspects of TowTruck’s behavior on the site.

It’s not currently distributed as a module on npm, so you’ll need to follow the instructions in the readme to install it. There’s also a bookmarklet for adding TowTruck to any page, and a Firefox add-on as well.


peer-vnc (GitHub: InstantWebP2P / peer-vnc, License: MIT, npm: peer-vnc) by Tom Zhou is a web VNC client. It uses his other project, node-httpp, which is a P2P web service module.

I had trouble installing node-httpp on a Mac, so YMMV, but I like the idea of a P2P noVNC project.

jQuery Roundup: UI 1.10.3, simplePagination.js, jQuery Async

07 May 2013 | By Alex Young | Comments | Tags jquery plugins async pagination jquery-ui ui
Note: You can send your plugins and articles in for review through our contact form.

jQuery UI 1.10.3

jQuery UI 1.10.3 was released last week. This is a maintenance release that has fixes for Draggable, Sortable, Accordion, Autocomplete, Button, Datepicker, Menu, and Progressbar.



simplePagination.js (GitHub: flaviusmatis / simplePagination.js, License: MIT) by Flavius Matis is a pagination plugin that supports Bootstrap. It has options for configuring the page links, next and previous text, style attributes, onclick events, and the initialisation event.

There’s an API for selecting pages, and the author has included three themes. When selecting a page, the truncated pages will shift, so it’s easy to skip between sets of pages.

jQuery Async


jQuery Async (GitHub: acavailhez / jquery-async, License: Apache 2) by Arnaud Cavailhez is a UI plugin for animating things while asynchronous operations take place. It depends on Bootstrap, and makes it easy to animate a button that triggers a long-running process.

The documentation has some clever examples that help visualise how the plugin works – two buttons are displayed so you can trigger the 'success' and 'error' events by hand. It’s built using $.Deferred, so it’ll work with the built-in Ajax API without much effort.

Swarm, Dookie, AngularJS Book

06 May 2013 | By Alex Young | Comments | Tags node modules books css


Swarm (GitHub: gritzko / swarm, License: MIT) by Victor Grishchenko is a data synchronisation library that can synchronise objects on clients and servers.

Swarm is built around its concise four-method interface that expresses the core function of the library: synchronizing distributed object replicas. The interface is essentially a combination of two well recognized conventions, namely get/set and on/off method pairs, also available as getField/setField and addListener/removeListener calls respectively.

var obj = swarmPeer.on(id, callbackFn); // also addListener()
obj.on('field', fieldCallbackFn);
obj.off('field', fieldCallbackFn);
swarmPeer.off(id, callbackFn);  // also removeListener()

The author has defined an addressing protocol that uses tokens to describe various aspects of an event and object. For more details, see Swarm: specifying events.


Dookie (GitHub: voronianski / dookie-css, License: MIT, npm: dookie-css) by Dmitri Voronianski is a CSS library that’s built on Stylus. It provides several useful mixins and components:

  • Reset mixins: reset(), normalize(), and fields-reset()
  • Helpers: Shortcuts for working with display, fonts, images and high pixel ratio images
  • Sprites
  • Vendor prefixes
  • Easings and gradients

Dmitri has also included Mocha/PhantomJS tests for checking the generated output and visualising it.

Developing an AngularJS Edge

Developing an AngularJS Edge by Christopher Hiller is a new book about AngularJS aimed at existing JavaScript developers who just want to learn AngularJS. The author has posted examples on GitHub here: angularjsedge / examples, and there’s a sample chapter (ePub).

The book is being sold through Gumroad for $14.95, reduced from $19.95. The author notes that it is also available through Amazon and Safari Books Online.