Yeoman Configstore, Debug.js, Sublime JavaScript Refactoring

26 Apr 2013 | By Alex Young | Comments | Tags yeoman libraries browser node debugging editors


Sindre Sorhus sent in configstore (GitHub: yeoman / configstore, License: BSD, npm: configstore), a small module for storing configuration variables without worrying about where and how. The underlying data file is YAML, stored in $XDG_CONFIG_HOME.

Configstore instances are used with a simple API for getting, setting, and deleting values:

var Configstore = require('configstore');
var packageName = require('./package').name;

var conf = new Configstore(packageName, { foo: 'bar' });

conf.set('awesome', true);
console.log(conf.get('awesome'));  // true
console.log(conf.get('foo'));      // bar

conf.del('awesome');
console.log(conf.get('awesome'));  // undefined

The Yeoman repository on GitHub has many more interesting server-side and client-side modules – currently most projects are related to client-side workflow, but given the discussions on Yeoman’s Google+ account I expect there will be an increasing number of server-side modules too.


Jerome Etienne has appeared on DailyJS a few times with his WebGL libraries and tutorials. He recently released debug.js (GitHub: jeromeetienne / debug.js, License: MIT), which is a set of tools for browser and Node JavaScript debugging.

The tutorial focuses on global leak detection, which is able to display a trace that shows where the leak originated. Another major feature is strong type checking for properties and function arguments.

Methods can also be marked as deprecated, allowing debug.js to generate notifications when such methods are accessed.

More details can be found on the debug.js project page.

Sublime Text Refactoring Plugin

Stephan Ahlf sent in his Sublime Text Refactoring Plugin (GitHub: s-a / sublime-text-refactor, License: MIT/GPL). The main features are method extraction, variable and function definition navigation, and renaming based on scope.

The plugin uses Node, and has some unit tests written in Mocha. The author is planning to add more features (the readme has a to-do list).

AngularJS: Rendering Feeds

25 Apr 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds


In last week’s part I introduced Yeoman and we created a template project that included AngularJS. You can get the source at alexyoung / djsreader. The commit was 2e15d97.


The workflow with Yeoman is based around Grunt. Prior to Yeoman, many developers had adopted a similar approach – a lightweight web server was started up using Node and Connect, and a filesystem watcher was used to rebuild the client-side assets whenever a file was edited.

Yeoman bundles all of this up for you so you don’t need to reinvent it. When working on a Yeoman project, type grunt server to start a web server in development mode.

This should open a browser window at http://localhost:9000/#/ with a welcome message. Now that the web server is running, you can edit files under app/, and Grunt will rebuild your project as required.

Key Components: Controllers and Views

The goal of this tutorial is to make something that can download a feed and render it – all using client-side code. AngularJS can do all of this, with the help of YQL for mapping an RSS/Atom feed to JSON.

This example is an excellent “soft” introduction to AngularJS because it involves several of the key components:

  • Controllers, for combining the data and views
  • Views, for rendering the articles returned by the service
  • Services, for fetching the JSON data

The Yeoman template project already contains a view and controller. The controller can be found in app/scripts/controllers/main.js, and the view is in app/views/main.html.

If you take a look at these files, it’s pretty obvious what’s going on: the controller sets some values that are then used by the template. The template is able to iterate over the values that are set by using the ng-repeat directive.

Directives and Data Binding

Directives can be used to transform the DOM, so the main.html file is a dynamic template that is interpolated at runtime.

The way in which data is bound to a template is through scopes. The $scope object, which is passed to the controller, will cause the template to be updated when it is changed. This is actually asynchronous:

Scope is the glue between application controller and the view. During the template linking phase the directives set up $watch expressions on the scope. The $watch allows the directives to be notified of property changes, which allows the directive to render the updated value to the DOM.

Notice how the view is updated when properties change. That means property assignments to $scope in the controller will be reflected in the template.
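The mechanism the quote describes can be sketched in a few lines of plain JavaScript. This is a toy model of $watch and the digest pass, not Angular's actual implementation:

```javascript
// Toy scope: directives register watchers, and a digest pass notifies
// them when a watched property has changed since last time.
function Scope() {
  this.watchers = [];
}

Scope.prototype.$watch = function (prop, listener) {
  // Remember the value at registration time so changes can be detected.
  this.watchers.push({ prop: prop, last: this[prop], listener: listener });
};

Scope.prototype.$digest = function () {
  var self = this;
  this.watchers.forEach(function (w) {
    var current = self[w.prop];
    if (current !== w.last) {
      // In Angular this is where a directive renders the new value to the DOM.
      w.listener(current, w.last);
      w.last = current;
    }
  });
};
```

In real AngularJS the digest is triggered for you (after event handlers, $http responses, and so on), which is why simply assigning to $scope appears to update the view automatically.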

If you’re of an inquisitive nature, you’re probably wondering how the controller gets instantiated and associated with the view. There’s a missing piece of the story here that I haven’t mentioned yet: routing.

Router Providers

The MainCtrl (main controller) is bound to views/main.html in app/scripts/app.js:

angular.module('djsreaderApp', [])
  .config(function($routeProvider) {
    $routeProvider
      .when('/', {
        templateUrl: 'views/main.html',
        controller: 'MainCtrl'
      })
      .otherwise({
        redirectTo: '/'
      });
  });

The $routeProvider uses a fluent, chainable API for mapping URLs to controllers and templates. This file is a centralised configuration file that sets up the application.

The line that reads angular.module sets up a new “module” called djsreaderApp. This isn’t technically the same as a Node module or RequireJS module, but it’s very similar – modules are registered in a global namespace so they can be referenced throughout an application. That includes third-party modules as well.

Fetching Feeds

To load feeds, we can use the $http service. But even better… it supports JSONP, which is how the Yahoo! API provides cross-domain access to the data we want to fetch. Open app/scripts/controllers/main.js and change it to load the (extremely long) YQL URL:

angular.module('djsreaderApp')
  .controller('MainCtrl', function($scope, $http) {
    var url = "*%20from%20xml%20where%20url%3D''%20and%20itemPath%3D'feed.entry'&format=json&diagnostics=true&callback=JSON_CALLBACK";

    $http.jsonp(url).
      success(function(data, status, headers, config) {
        $scope.feed = {
          title: 'DailyJS',
          items: data.query.results.entry
        };
      }).
      error(function(data, status, headers, config) {
        console.error('Error fetching feed:', data);
      });
  });

The second line has changed to include a reference to $http – this allows us to access Angular’s built-in HTTP module.

The $scope is now updated with the result of the JSONP request. When $scope.feed is set, AngularJS will automatically update the view with the new values.

Now the view needs to be updated to display the feed items.

Rendering Feed Items

To render the feed items, open app/views/main.html and use the ng-repeat directive to iterate over each item and display it:

  <li ng-repeat="item in feed.items">{{item.title}}</li>

This will now render the title of each feed entry. If you’re running grunt server you should have found that whenever a file was saved it caused the browser window to refresh. That means your changes should be visible, and you should see the recent stories from DailyJS.

[Screenshot: AngularJS feed rendering. What you should see...]


In this brief tutorial you’ve seen Angular controllers, views, directives, data binding, and even routing. If you’ve written much Backbone.js or Knockout before then you should be starting to see how AngularJS implements similar concepts. It takes a different approach – I found $scope a little bit confusing at first for example, but the initial learning curve is mainly down to learning terminology.

If you’ve had trouble getting any of this working, try checking out my source on GitHub. The commit for this tutorial was 73af554.

Node Roundup: 0.10.5, Node Task, cap

24 Apr 2013 | By Alex Young | Comments | Tags node modules grunt network security streams
You can send in your Node projects for review through our contact form.

Node 0.10.5

Node 0.10.5 is out. Apparently it now builds under Visual Studio 2012.

One small change I noticed was added by Ryan Doenges, where the assert module now puts information into the message property:

4716dc6 made assert.equal() and related functions work better by generating a better toString() from the expected, actual, and operator values passed to fail(). Unfortunately, this was accomplished by putting the generated message into the error’s name property. When you passed in a custom error message, the error would put the custom error into name and message, resulting in helpful string representations like "AssertionError: Oh no: Oh no".

The pull request for this is nice to read (apparently Ryan is only 17, so he got his dad to sign the Contributor License Agreement document).

Node Task

Node Task, sent in by Khalid Khan, is a specification for a promise-based API that wraps around JavaScript tasks. The idea is that tasks used with projects like Grunt should be compatible, and able to be processed through an arbitrary pipeline:

Eventually, it is hoped that popular JS libraries will maintain their own node-task modules (think jshint, stylus, handlebars, etc). If/when this happens, it will be trivial to pass files through an arbitrary pipeline of interactions and transformations utilizing libraries across the entire npm ecosystem.

After reading through each specification, it seems like an interesting attempt to standardise Grunt-like tasks. The API seems streams-inspired, as it’s based around EventEmitter2 with various additional methods that are left for implementors to fill in.


Brian White sent in his cross-platform packet capturing library, “cap” (GitHub: mscdex / cap, License: MIT, npm: cap). It’s built using WinPcap for Windows and libpcap and libpcap-dev for Unix-like operating systems.

It’s time to write your vulnerability scanning tools with Node!

Brian also sent in “dicer” (GitHub: mscdex / dicer, License: MIT, npm: dicer), which is a streaming multipart parser. It uses the streams2 base classes and readable-stream for Node 0.8 support.

jQuery 2.0 Released

23 Apr 2013 | By Alex Young | Comments | Tags jquery plugins
Note: You can send your plugins and articles in for review through our contact form.

jQuery 2.0 has been released. The most significant, headline-grabbing change is the removal of support for legacy browsers, including IE 6, 7, and 8. The 1.x branch will continue to be supported, so it’s safe to keep using it if you need broad IE support.

The jquery-migrate plugin can be used to help you migrate away from legacy APIs. jQuery 2.0 is “API compatible” with 1.9, which means migration shouldn’t be as painful as it could be. They’ve been pushing jquery-migrate for a while now, so hopefully this stuff isn’t new to anyone who likes to keep current with jQuery.

The announcement blog post has more details on IE support, the next release of jQuery, and the benefits of upgrading to 2.0.

Upgrade Planning

If you’re interested in upgrading, the jQuery documentation has notes on each API method that is deprecated. It also documents features that can be used to mitigate API changes – for example, if you’re using a plugin that requires an earlier version of jQuery, you could technically run multiple versions on a page by using jQuery.noConflict:

<script type="text/javascript" src="other_lib.js"></script>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript">
  $.noConflict();
  // Code that uses another library's $ can follow here.
</script>

Plugins that are listed on the jQuery Plugin Registry should list the required jQuery version in the package manifest file. That means you can easily see what version of jQuery a plugin depends on. Many already do depend on jQuery 1.9 or above, so they should be safe to use with jQuery 2.0.
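For illustration, a plugin's manifest file might declare its jQuery requirement like this (hypothetical plugin name, trimmed to the relevant fields):

```json
{
  "name": "myplugin",
  "version": "0.1.0",
  "dependencies": {
    "jquery": ">=1.9"
  }
}
```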

As always, well-tested projects should be easier to migrate. So get those QUnit tests out and see what happens!

Object.observe Shim, Behave, Snap.js

22 Apr 2013 | By Alex Young | Comments | Tags shims textarea mobile ui

Object.observe Shim

It’s encouraging to see Harmony taking on board influences from databinding frameworks, given how important they’re proving to be to front-end development. The proposed API aims to improve the way we respond to changes in objects:

Today, JavaScript framework which provide databinding typically create objects wrapping the real data, or require objects being databound to be modified to buy in to databinding. The first case leads to increased working set and more complex user model, and the second leads to siloing of databinding frameworks. A solution to this is to provide a runtime capability to observe changes to an object

François de Campredon sent in KapIT’s Object.observe shim (GitHub: KapIT / observe-shim, License: Apache), which implements the algorithm described by the proposal. It’s compatible with ECMAScript 5 browsers, and depends on a method called ObserveUtils.defineObservableProperties for setting up the properties you’re interested in observing. The readme has more documentation and build instructions.


Behave.js (GitHub: jakiestfu / Behave.js, License: MIT) by Jacob Kelley is a library for adding IDE-like behaviour to a textarea. It doesn’t have any dependencies, and has impressive browser support. Key features include hard and soft tabs, bracket insertion, and automatic and multiline indentation.

Basic usage is simply:

var editor = new Behave({
  textarea: document.getElementById('myTextarea')
});

There is also a function for binding events to Behave – the events are all documented in the readme, and include things like key presses and lifecycle events.


Snap.js (GitHub: jakiestfu / Snap.js, License: MIT) also by Jacob is another dependency-free UI component. This one is for creating mobile-style navigation menus that appear when clicking a button or dragging the entire view. It uses CSS3 transitions, and has an event-based API so it’s easy to hook it into existing interfaces.

The drag events use ‘slide intent’, which allows an integer to be specified (slideIntent) to control the angle at which a gesture is considered horizontal. Jacob has included helpful documentation on how to structure and style a suitable layout for the plugin:

Two absolute elements, one to represent all the content, and another to represent all the drawers. The content has a higher z-index than the drawers. Within the drawers element, it’s direct children should represent the containers for the drawers, these should be fixed or absolute.

He has lots of other useful client-side projects on GitHub/jakiestfu.

LevelDB and Node: What is LevelDB Anyway?

19 Apr 2013 | By Rod Vagg | Comments | Tags node leveldb databases

This is the first article in a three-part series on LevelDB and how it can be used in Node.

This article will cover the LevelDB basics and internals to provide a foundation for the next two articles. The second and third articles will cover the core LevelDB Node libraries: LevelUP, LevelDOWN and the rest of the LevelDB ecosystem that’s appearing in Node-land.


What is LevelDB?

LevelDB is an open-source, dependency-free, embedded key/value data store. It was developed in 2011 by Jeff Dean and Sanjay Ghemawat, researchers from Google. It’s written in C++, although it has third-party bindings for most common programming languages, including JavaScript/Node.js of course.

LevelDB is based on ideas in Google’s BigTable, but does not share code with BigTable; this allows it to be licensed for open source release. Dean and Ghemawat developed LevelDB as a replacement for SQLite as the backing-store for Chrome’s IndexedDB implementation.

It has since seen very wide adoption across the industry: it serves as the back-end to a number of new databases, and is now the recommended storage back-end for Riak.


  • Arbitrary byte arrays: both keys and values are treated as simple arrays of bytes, so content can be anything from ASCII strings to binary blobs.
  • Sorted by keys: by default, LevelDB stores entries lexicographically sorted by keys. The sorting is one of the main distinguishing features of LevelDB amongst similar embedded data storage libraries and comes in very useful for querying as we’ll see later.
  • Compressed storage: Google’s Snappy compression library is an optional dependency that can decrease the on-disk size of LevelDB stores with minimal sacrifice of speed. Snappy is highly optimised for fast compression and therefore does not provide particularly high compression ratios on common data.
  • Basic operations: Get(), Put(), Del(), Batch()

Basic architecture

Log Structured Merge (LSM) tree


All writes to a LevelDB store go straight into a log and a “memtable”. The log is regularly flushed into sorted string table files (SST) where the data has a more permanent home.

Reads on a data store merge these two distinct data structures, the log and the SST files. The SST files represent mature data and the log represents new data, including delete-operations.

A configurable cache is used to speed up common reads. The cache can potentially be large enough to fit an entire active working set in memory, depending on the application.

Sorted String Table files (SST)

Each SST file is limited to ~2MB, so a large LevelDB store will have many of these files. The SST file is divided internally into 4K blocks, each of which can be read in a single operation. The final block is an index that points to the start of each data block, along with the key of the entry at the start of that block. A Bloom filter is used to speed up lookups, allowing a quick scan of the index to find the block that may contain the desired entry.
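To see why a Bloom filter helps, here is a toy implementation in JavaScript. It answers “definitely absent” or “possibly present”, which is exactly enough to skip blocks that cannot contain a key. This is illustrative only, not LevelDB’s actual filter:

```javascript
// Toy Bloom filter: a bit array plus k seeded hash functions.
function BloomFilter(bits, hashes) {
  this.bits = new Array(bits).fill(false);
  this.hashes = hashes;
}

// Simple string hash, varied by seed so each "hash function" differs.
BloomFilter.prototype.hash = function (key, seed) {
  var h = seed;
  for (var i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) % this.bits.length;
  }
  return h;
};

BloomFilter.prototype.add = function (key) {
  for (var s = 0; s < this.hashes; s++) {
    this.bits[this.hash(key, s)] = true;
  }
};

// false means the key is definitely not present; true means "maybe".
BloomFilter.prototype.mayContain = function (key) {
  for (var s = 0; s < this.hashes; s++) {
    if (!this.bits[this.hash(key, s)]) return false;
  }
  return true;
};
```

False positives are possible (the reader then checks the block and finds nothing), but false negatives are not, so the filter can never cause a missed entry.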

Keys can have shared prefixes within blocks. Any common prefix for keys within a block will be stored once, with subsequent entries storing just the unique suffix. After a fixed number of entries within a block, the shared prefix is “reset”, much like a keyframe in a video codec. Shared prefixes mean that verbose namespacing of keys does not lead to excessive storage requirements.
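The shared-prefix idea can be sketched like this. It is an illustration of the technique only, not LevelDB’s actual on-disk block format:

```javascript
// For each key, record how many leading characters it shares with the
// previous key, and store only the remaining suffix.
function compressKeys(keys) {
  var prev = '';
  return keys.map(function (key) {
    var shared = 0;
    while (shared < prev.length && prev[shared] === key[shared]) {
      shared++;
    }
    prev = key;
    return { shared: shared, suffix: key.slice(shared) };
  });
}
```

For sorted keys like `app:user:1`, `app:user:2`, `app:video:1`, the long namespace prefix is stored once and subsequent entries carry only a count and a short suffix.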

Table file hierarchy

The table files are not stored in a simple sequence, rather, they are organised into a series of levels. This is the “Level” in LevelDB.

Entries that come straight from the log are organised in to Level 0, a set of up to 4 files. When additional entries force Level 0 above the maximum of 4 files, one of the SST files is chosen and merged with the SST files that make up Level 1, which is a set of up to 10MB of files. This process continues, with levels overflowing and one file at a time being merged with the (up to 3) overlapping SST files in the next level. Each level beyond Level 1 is 10 times the size of the previous level.

Log: Max size of 4MB (configurable), then flushed into a set of Level 0 SST files
Level 0: Max of 4 SST files, then one file compacted into Level 1
Level 1: Max total size of 10MB, then one file compacted into Level 2
Level 2: Max total size of 100MB, then one file compacted into Level 3
Level 3+: Max total size of 10 x previous level, then one file compacted into next level

0 ↠ 4 SST, 1 ↠ 10M, 2 ↠ 100M, 3 ↠ 1G, 4 ↠ 10G, 5 ↠ 100G, 6 ↠ 1T, 7 ↠ 10T


This organisation into levels minimises the reorganisation that must take place as new entries are inserted into the middle of a range of keys. Each reorganisation, or “compaction”, is restricted to just a small section of the data store. The hierarchical structure generally leads to data in the higher levels being the most mature, with fresher data being stored in the log and the initial levels. Since the initial levels are relatively small, overwriting and removing entries incurs less cost than it does in the higher levels. This matches the typical database, where you have a large set of mature data and a more volatile set of fresh data (of course this is not always the case, so performance will vary for different write and retrieval patterns).

A lookup operation must also traverse the levels to find the required entry. A read operation that requests a given key must first look in the log, if it is not found there it looks in Level 0, moving up to Level 1 and so forth. In this way, a lookup operation incurs a minimum of one read per level that must be searched before finding the required entry. A lookup for a key that does not exist must search every level before a definitive “NotFound” can be returned (unless a Del operation is recorded for that key in the log).
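The read path just described might be sketched like this, with plain Maps standing in for the log and the levels (a hypothetical model, not LevelDB’s code):

```javascript
// Marker recorded in the log by a Del operation.
var TOMBSTONE = {};

// Consult the log first (which may record deletes), then each level in turn.
function lookup(key, log, levels) {
  if (log.has(key)) {
    var value = log.get(key);
    return value === TOMBSTONE ? undefined : value; // deleted in the log
  }
  for (var i = 0; i < levels.length; i++) {
    if (levels[i].has(key)) return levels[i].get(key); // one read per level
  }
  return undefined; // "NotFound": every level was searched
}
```

Note how a missing key is the worst case: every level is consulted before the lookup can give up, unless the log short-circuits it with a tombstone.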

Advanced features

  • Batch operations: provide a collection of Put and/or Del operations that are atomic; that is, the whole collection of operations succeed or fail in a single Batch operation.
  • Bi-directional iterators: iterators can start at any key in a LevelDB store (even if that key does not exist, it will simply jump to the next lexical key) and can move forward and backwards through the store.
  • Snapshots: a snapshot provides a reference to the state of the database at a point in time. Read-queries (Get and iterators) can be made against specific snapshots to retrieve entries as they existed at the time the snapshot was created. Each iterator creates an implicit snapshot (unless it is requested against an explicitly created snapshot). This means that regardless of how long an iterator is alive and active, the data set it operates upon will always be the same as at the time the iterator was created.
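The all-or-nothing behaviour of batch operations can be illustrated with a toy sketch. Nothing here is LevelDB’s real implementation; it just shows the atomicity idea:

```javascript
// Apply every operation to a copy of the store; if any operation throws,
// the original store is untouched. The caller swaps in the copy on success.
function applyBatch(store, ops) {
  var copy = new Map(store);
  ops.forEach(function (op) {
    if (op.type === 'put') copy.set(op.key, op.value);
    else if (op.type === 'del') copy.delete(op.key);
    else throw new Error('unknown operation: ' + op.type);
  });
  return copy; // replace the old store in one step
}
```

A real store does this far more efficiently (batches are written to the log as a unit), but the observable contract is the same: either every Put and Del lands, or none do.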

Some details on these advanced features will be covered in the next two articles, when we turn to look at how LevelDB can be used to simplify data management in your Node application.

If you’re keen to learn more and can’t wait for the next article, see the LevelUP project on GitHub as this is the focus of much of the LevelDB activity in the Node community at the moment.

AngularJS: Let's Make a Feed Reader

18 Apr 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds

I’m looking forward to seeing what services appear in Google Reader’s wake. Reeder and Press are my favourite RSS apps, which I use to curate my sources for upcoming DailyJS content. It sounds like Reeder will support Feedbin, so hopefully Press and other apps will as well. I’ve also used Newsblur in the past, but I’m not sure if we’ll see Newsblur support in Reeder…

With that in mind, I thought it would be pretty cool to use a feed reader as the AngularJS tutorial series theme. A Bootstrap styled, AngularJS-powered feed reader would look and feel friendly and fast. The main question, however, is how exactly do we download feeds? Atom and RSS feeds aren’t exactly friendly to client-side developers. What we need is JSON!


The now standard way to fetch feeds in client-side code is to use JSONP. That’s where a remote resource is fetched, usually by inserting a script tag, and the server returns the JSON wrapped in a call to a callback function that the client can run when ready.

I remember reading a post by John Resig many years ago that explained how to use this technique with RSS specifically: RSS to JSON Convertor. Ironically, a popular commercial solution for this was provided through Google Reader. Fortunately there’s another way to do it, this time by Yahoo! – the Yahoo! Query Language.


The YQL service (terms of use) is basically SQL for the web. It can be used to fetch and interpret all kinds of resources, including feeds. It has usage limits, so if you want to take this tutorial series to build something more commercially viable then you’ll want to check those out in detail. Even though the endpoints we’ll use are “public”, Yahoo! will still rate limit them if they go over 2,000 requests per hour. To support higher volume users, API keys can be created.
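For a sense of what YQL looks like, the statement we’ll use (URL-encoded later in this series) reads roughly like this in plain YQL; the feed URL here is a placeholder:

```sql
select * from xml
where url = 'http://example.com/atom.xml'
  and itemPath = 'feed.entry'
```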

If you visit this link you’ll see a runnable example that converts the DailyJS Atom feed into JSON, wrapped in a callback. The result looks like this:

cb({ "query": { /* loads of JSON! */ } });

The cb method will be run from within our fancy AngularJS/Bootstrap client-side code. I wrote about how to build client-side JSONP implementations in Let’s Make a Framework: Ajax Part 2, so check that out if you’re interested in that area.
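Stripped of the network, the whole JSONP mechanism fits in a few lines. This toy simulation evaluates a canned response string instead of injecting a script tag:

```javascript
// The client defines a global callback...
var result;

function cb(data) {
  result = data;
}

// ...and the "response" from the server is just JavaScript that calls it
// with the JSON payload. In a browser this would arrive via an injected
// <script> tag; here we evaluate the response text directly.
var responseText = "cb({ query: { count: 1, results: {} } })";
eval(responseText);

console.log(result.query.count); // 1
```

AngularJS hides this plumbing behind $http.jsonp, replacing the `JSON_CALLBACK` placeholder in the URL with a generated callback name.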

As far as feed processing goes, YQL will give us the JSON we need to make a little feed reader.


Before you press “next unread” in your own feed reader, let’s jump-start our application with Yeoman. First, install it and Grunt. I assume you already have a recent version of Node; if not, get a 0.10.x copy installed and then run the following:

npm install -g yo grunt-cli bower generator-angular generator-karma

Yeoman is based around generators, which are separate modules that you can install using npm. The previous command installed the AngularJS generator, generator-angular.

Next you’ll need to create a directory for the application to live in:

mkdir djsreader
cd djsreader

You should also run the angular generator:

yo angular

It will install a lot of stuff, but fortunately most of the modules are ones I’d use anyway so I’m cool with that. Answer Y to each question, apart from the one about Compass (I don’t think I have Compass installed, so I didn’t want that option).

Run grunt server to see the freshly minted AngularJS-powered app!

Hello, AngularJS

You may have noticed some “karma” files have appeared. That’s Karma, the AngularJS test framework. If you type grunt test, Grunt will happily trudge through some basic tests that are in test/spec/controllers/main.js.


Welcome to the world of Yeoman, AngularJS, and… Yahoo!, apparently. The repository for this project is at alexyoung / djsreader. Come back in a week for the next part!

Node Roundup: 0.10.4, Papercut, rsz, sz

17 Apr 2013 | By Alex Young | Comments | Tags node modules graphics images uploads
You can send in your Node projects for review through our contact form.

Node 0.10.4

Node 0.10.4 was released last week. There are bug fixes for some core modules, and I also noticed this:

v8: Avoid excessive memory growth in JSON.parse (Fedor Indutny)

Another interesting patch was added to the stream module, to ensure write callbacks run before end:

stream: call write cb before finish event

The Node blog was quietly updated to change the latest 0.8 to read “legacy” instead of “stable”. I don’t recall previous stable releases being referred to in this way before, so I thought it was worth mentioning here.


Papercut (GitHub: Rafe / papercut, License: MIT, npm: papercut) by Jimmy Chao is an image uploading module that supports Amazon S3, with resizing and cropping through node-imagemagick.

Uploaders can be created according to a schema, allowing them to be used to manage different aspects of your application’s image handling requirements:

AvatarUploader = papercut.Schema(function(schema){
  schema.version({
      name: 'avatar'
    , size: '200x200'
    , process: 'crop'
  });

  schema.version({
      name: 'small'
    , size: '50x50'
    , process: 'crop'
  });
});

Papercut also supports configuration using NODE_ENV, so it’s easy to configure to work sensibly in various deployment environments.


rsz (GitHub: rvagg / node-rsz, License: MIT, npm: rsz) by Rod Vagg is a module for resizing images based on LearnBoost’s node-canvas. The API is based around a single method which accepts various signatures. The basic usage is rsz(src, width, height, function (err, buf) { /* */ }).


sz (GitHub: rvagg / node-sz, License: MIT, npm: sz), also by Rod, is another image-related module. This one can determine the size of an image. It should be noted that both of these modules work with image files and Buffer objects.

var fs = require('fs');
var sz = require('sz');

var buf = fs.readFileSync('image.gif');

sz(buf, function(err, size) {
  // `size` may look like: { height: 280, width: 400 }
});

jQuery Roundup: TyranoScript, Sly, FPSMeter

16 Apr 2013 | By Alex Young | Comments | Tags jquery plugins graphics animations games
Note: You can send your plugins and articles in for review through our contact form.



Evan Burchard sent in TyranoScript (GitHub: ShikemokuMK / tyranoscript, License: MIT), a jQuery-powered HTML5 interactive fiction game engine:

The game engine was only in Japanese, so I spent the last week making it available in English. As far as what it does, it sits somewhere between an interactive fiction scripting utility and a presentation library like impress.js. It has built in functions (tags) for things like making text and characters pop up, saving the game, changing scenery and shaking the screen. But it supplies interfaces for arbitrary HTML, CSS and JavaScript to be run as well, so conceivably one could use it for presentations or other types of applications. One of the sample games on the project website demonstrates this with a compelling YouTube API integration. The games created with TyranoScript can run on modern browsers, Android, iPad and iPhone.

Evan’s English version is at EvanBurchard / tyranoscript. For a sample game, check out the delightfully nutty Jurassic Heart – a game where you date a dinosaur (of course)!


Sly (GitHub: Darsain / sly, License: MIT, Bower: sly) by Darsain is a library for scrolling – it can be used where you need to replace scrollbars, or where you want to build your own navigation solutions.

The author has paid particular attention to performance:

Sly has a custom high-performance animation rendering based around the Animation Timing Interface written directly for its needs. This provides an optimized 60 FPS rendering, and is designed to still accept easing functions from jQuery Easing Plugin, so you won’t event notice that Sly’s animations have nothing to do with jQuery :).

Sly’s site has a few examples – check out the infinite scrolling and parallax demos.



Sly’s author also sent in FPSMeter. When working on graphically-oriented projects, it’s sometimes useful to display the frames-per-second of animations. FPSMeter (GitHub: Darsain / fpsmeter, Bower: fpsmeter) measures FPS using WindowAnimationTiming, with a polyfill to support most browsers, including IE7+.

FPSMeter can measure FPS, milliseconds between frames, and the number of milliseconds it takes to render one frame. It can also cope with multiple instances on a page, and has show/hide methods that will pause rendering. It also supports theming, so you should be able to get it to sit in nicely in your existing interface.

The State of Node and Relational Databases

15 Apr 2013 | By Alex Young | Comments | Tags databases node modules sql

Recently I started work on a Node project that was built using Sequelize with MySQL. It was chosen to ease the transition from an earlier version written with Ruby on Rails. The original’s ActiveRecord models mapped quite closely to their Sequelize equivalents, which got things started smoothly enough.

Although Sequelize had some API quirks that didn’t feel very idiomatic alongside other Node code, the developers have hungrily accepted pull requests and it’s emerging as a reasonable ORM solution. However, like many others in the Node community I feel uncomfortable with ORM.

Why? Well, some of us have learned how to use relational databases correctly. Joining an established project that uses ORM only to find there’s no referential integrity or sensible indexes is to be expected these days, as programmers have moved away from caring about databases to application-level schemas. I’ve had my head down in MongoDB/Mongoose and Redis code for the last few years, but relational databases aren’t going away any time soon so either programmers need to get the hang of them or we need better database modules.

This all prompted me to look at alternative solutions to relational databases in Node. First, I broke down the problem into separate areas:

  • Driver: The module that manages database connections, sends queries, and responds with data
  • Abstraction layer: Provides tools for escaping queries to avoid SQL injection attacks, and wraps multiple drivers so it’s easy to port applications between MySQL/PostgreSQL/SQLite
  • Validator: Validates data against a schema prior to sending it to the database, and aids with the generation of human-readable error messages
  • Query generator: Generates SQL queries based on a more JavaScript-programmer-friendly API
  • Schema management: Keeps the schema up-to-date when fields are added or removed

Some projects won’t need to support all of these areas – you can mix and match them as needed. I prefer to create simple veneer-like “model” classes that wrap more low-level database operations. This works well in web applications, where it makes sense to decouple the HTTP layer from the database.

Database Driver

The mysql and pg modules are actively maintained, and are usually required by “abstraction layer” modules and ORM solutions.

A note about configuration: when it comes to connecting to the database, I strongly prefer modules that support connection URIs. It makes it a lot easier to deploy web applications to services like Heroku, because a single environment variable can be set that contains the connection details for your production database.

Abstraction Layer

This level sits above the driver layer, and should offer lightweight JavaScript sugar. There are many examples of this, but a good one is transactions. Transactions are particularly useful in JavaScript because they can help create APIs that are less dependent on heavily nested callbacks. For example, it makes sense to model transactions as an EventEmitter descendent that allows operations to be pushed to an internal stack.

The author of the pg module, Brian Carlson, who occasionally stops by the #dailyjs IRC channel on Freenode, recently mentioned his new relational project that aims to provide a saner approach to ORM in Node. This module feels more like an abstraction layer API, but it’s gunning to be a formidable new ORM solution.

There are some popular libraries that tackle the abstraction layer, including any-db and massive.

Validation

I usually find myself dealing with errors in web forms, so anything that makes error handling easier is an advantage. Validation and schemas are closely related, which is why ORM libraries usually combine them.

It’s possible to treat them separately, and in the JavaScript community we have solutions based on or inspired by JSON Schema. The jayschema module by Nate Silva is one such project. It’s really aimed at validating JSON, but it could be used to validate JavaScript objects spat out by a database driver.

The validator module has some simple tools for validating data types, but it also has optional Express middleware that makes it easy to drop into a web application. Another similar project is conform by Oleg Korobenko.

Query Generator

The sql module by Brian Carlson is an SQL builder – it has a chainable API that turns JavaScript into SQL:


He’s using this to build the previously mentioned relational module as well.

Schema Management

Sequelize has an API for managing database migrations. It can migrate to a given version and back, and it can also “sync” a model’s schema to the database (creating the table if it doesn’t exist).

There are also dedicated migration modules, like db-migrate by Jeff Kunkle.

Conclusion

The Node community has created a rich set of modules for working with relational databases, and although there’s a strong anti-ORM sentiment, interesting projects like relational are appearing.

Although these modules don’t address my concerns about the way in which ORM gets used with apathy towards best practices, it’s promising to see lower-level modules that can be used as building blocks for more application-specific solutions.

All of this has come at a time when relational database projects are adapting, changing, and even growing in popularity despite the recent attention the NoSQL movement has received. PostgreSQL is going from strength to strength, and Heroku provides it by default. MariaDB is a drop-in replacement for MySQL that has a non-blocking Node module. SQLite is probably growing in usage too, as it backs Core Data in iCloud applications, and Android developers also use it.

Let other readers know how you deal with SQL-backed Node projects in the comments!

ZestJS, backbone-pageable, Marionette and Chaplin

12 Apr 2013 | By Alex Young | Comments | Tags frameworks libraries backbone.js mvc

ZestJS

Øyvind Smestad sent in ZestJS (GitHub: zestjs, License: Apache 2.0), which offers an interesting take on client and server-side modularity:

ZestJS provides client and server rendering for static and dynamic HTML widgets (Render Components) written as AMD modules providing low-level application modularity and portability.

It treats widgets as AMD components, with separate files for markup, JavaScript, and CSS. It can then render the results on either the server or client. The server-side renderer, zest-server, is a small Node project that is capable of rendering views and serving static files. It also handles routing, essentially mapping HTTP routes to what Zest calls “Render Components”.

Some aspects of ZestJS remind me of Backbone.js – the data loading can be performed on the initial page load, but remote APIs are easy to integrate as well. It also uses r.js for building and optimizing single page web apps, which is similar to the workflow of many Backbone.js developers.

ZestJS was created by Guy Bedford, and there are client and server quick start guides.

backbone-pageable

backbone-pageable (GitHub: wyuenho / backbone-pageable, License: MIT) by Jimmy Yuen Ho Wong is a drop-in replacement for Backbone.Collection that adds support for server and client-side pagination. It includes options for sorting, infinite paging, and caching:

It comes with reasonable defaults and works well with existing server APIs. Besides being really good at pagination and sorting, it is also really smart at syncing changes across pages while paginating on the client-side. It is also extremely lightweight - only 4k minified and gzipped

It supports query parameters, so you can easily set up your pagination links with variables to meet your server’s requirements (or perhaps to allow multiple pagination controls on a page).

var Book = Backbone.Model.extend({});

var Books = Backbone.PageableCollection.extend({
  model: Book,
  url: '',

  state: {
    firstPage: 0,
    currentPage: 2,
    totalRecords: 200
  },

  queryParams: {
    currentPage: 'current_page',
    pageSize: 'page_size'
  }
});
The project comes with some serious test coverage, including a test suite geared up for Zepto which is a nice touch.

The author is currently working on releasing a new version that adds Backbone 1.0 support. It should be 1.2.2, so keep a lookout for that if you’re already on Backbone 1.0.

Comparison of Marionette and Chaplin

Mathias Schäfer, who is one of the creators of Chaplin.js, sent in a comparison of Marionette and Chaplin. Both libraries attempt to address various limitations of Backbone.js – Chaplin.js adds better support for CoffeeScript class hierarchies, stricter memory management, lazy-loading modules, and publish/subscribe for cross-module communication.

The comparison is detailed and reveals some of the thinking that went into Chaplin.js in the first place:

Compared to Marionette, Chaplin acts more like a framework. It’s more opinionated and has stronger conventions in several areas. It took ideas from server-side MVC frameworks like Ruby on Rails which follow the convention over configuration principle. The goal of Chaplin is to provide well-proven guidelines and a convenient developing environment.

The post already has interesting comments – Mathias seems to be following up questions with some well thought out responses.

Google, Twitter, and AngularJS

11 Apr 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds


My behemoth of a Backbone.js tutorial series has run its course, so I wanted to follow it up with some posts about AngularJS. One thing that intrigues me about AngularJS is the emerging relationship between Google and Twitter. Or between prominent Google and Twitter developers. I don’t think there’s an overarching plan at the management level to create an open source partnership, just a set of coincidences as projects have aligned along certain vectors that are pushing front-end web development forward.

Most of what I’m going to talk about here has been packaged up into Yeoman, which has some Google employees behind it (Paul Irish, Addy Osmani) and includes technology from Twitter (Bower). The rest of this post will break down the emerging next generation Google/Twitter open source stack.

Data Binding, Views, Routes

AngularJS from Google is the obvious choice for data binding. It’s definitely growing in popularity, and Yeoman includes a generator for it.

UI: Bootstrap

Bootstrap is also supported by Yeoman, and offers a fine set of extensible UI widgets. Although seeing vanilla Bootstrap sites has become massively clichéd, it doesn’t take much effort to customise it.

Build/Preview: Grunt

In my Backbone.js tutorial, we used Node to create a small web server with RequireJS for local development. I included some details on Grunt purely because I use GNU Make for my own projects, so I wanted to look at Grunt more seriously. The developers behind Yeoman have selected Grunt as their build/preview tool.

Test: Karma, Mocha

Yeoman bundles in Mocha, and Karma can be used to script browsers. It’s used to test AngularJS, so that’s where the connection comes from, and there’s karma-mocha – an adapter for Karma to use Mocha.

Package Management: Bower

Bower, from Twitter, is a lightweight package manager. I’ve talked a bit about it on DailyJS before, and I try to include links to Bower package names when featuring front-end modules. Yeoman comes with Bower.

Data Sync

I covered Backbone.sync a fair bit in the Backbone.js tutorial series because it’s so flexible I was able to do some cool things with it like sync data with Google’s JavaScript APIs. This is interesting when you consider that Backbone is configured to talk to a Rails-style REST server out of the box.

So, what about this brave new world of Google/Twitter open source projects? What data syncing solutions are there? To my knowledge there isn’t anything generic, yet, but there is the Yeoman Express Stack:

A proof of concept end-to-end stack for development using Yeoman 0.9.6, Express and AngularJS. Note: This experimental branch is providing for testing purposes only (at present) and does not provide any guarantees.

This is a small project that builds on Yeoman and AngularJS to sync data with a Node/Express server. This came from a weekend hack project that involved Addy Osmani, who has contributed to many of the projects mentioned here.

There is also Angular Seed, which persists data to a Node/Express server using Socket.IO.

Also significant is a proposal, Entity-Driven Tooling from the Yeoman developers about adding CRUD generator support:

tl;dr: what if one command could scaffold out the CRUD models/views for your client and server side code, with baked in offline support. Would this help you? Would this solve a pain point of yours? Are there better ways to do this than what’s described below?

Although I’ve often wished for something like this, the post seems to imply a localStorage-based syncing solution would be developed. This would allow the browser-based portion of the project to behave like a client, making data available in localStorage for offline use.

However, syncing data can be difficult, so providing a generator that can do this would be more involved than Backbone.sync. Perhaps basing it around CouchDB’s eventual consistency model would work? Locally available records would have a version parameter which would be used to safely sync concurrently with the server. This would leave conflict resolution up to the developer of the application – some servers might store the latest version of a record, and others might throw an error, perhaps causing the client to display a conflict resolution dialog.

There may be a localStorage sync project the Yeoman developers have in mind.

I can’t help feeling that there’s more to Google open source than Closure Library. If you’ve used Android since Jelly Bean, Chrome OS, or Google Plus, then you know Google’s designers have been pushing things far beyond where the company was just a few years ago. Although Closure Library is a formidable set of tools, the widgets don’t fit in well with the Yeoman generation’s open source projects, and I’m eager to see what a next generation version of Bootstrap would look like.

But for now I’m looking at AngularJS, Grunt, Bootstrap, Bower, Mocha for my next tutorial series. I’ll have to find something interesting to sync data with, because I enjoyed figuring out the Google Tasks API.

Node Roundup: 0.8.23, indev, compressjs

10 Apr 2013 | By Alex Young | Comments | Tags node modules build compression
You can send in your Node projects for review through our contact form.

Node 0.8.23

In case you haven’t switched to 0.10 yet, Node 0.8.23 was released yesterday. This version adds bug fixes for the http, tls, child_process, and crypto modules.

indev

indev (GitHub: azer / indev, License: BSD, npm: indev) by Azer Koçulu is a lightweight alternative to Makefiles. It supports “Devfiles”, which can be written in either CoffeeScript or JavaScript, and includes shortcuts for lots of shell commands through ShellJS.

The inclusion of ShellJS makes it feel closer to make than Grunt, so if Grunt isn’t quite what you want then indev might be what you’re looking for.

compressjs

compressjs (GitHub: cscott / compressjs, License: GPLv2, npm: compressjs) by C. Scott Ananian features several compression algorithms, implemented in pure JavaScript. It can run in browsers, and includes bzip2, LZP3, a modified LZJB, PPM-D, and an implementation of Dynamic Markov Compression.

The readme includes benchmarks for each algorithm, and a script is included so you can use it to compress things on the command-line.

jQuery Roundup: 2.0 Beta 3, betterToggle, Cavendish.js

09 Apr 2013 | By Alex Young | Comments | Tags jquery plugins css slideshows
Note: You can send your plugins and articles in for review through our contact form.

jQuery 2.0 Beta 3

jQuery 2.0 Beta 3 is out:

… we really need your help in finding and fixing any bugs that may be hiding in the nooks and crannies of jQuery 2.0. We want to get all the problems ironed out before this version ships, and the only way to do that is to find out whether it runs with your code.

This release introduces Node compatibility, so you can now load it with require(). It also makes it work in Windows 8 Store apps.

betterToggle

betterToggle (GitHub: kanakiyajay / betterToggle, License: GPLv2) by Jay Kanakiya is a plugin for toggling elements with CSS3 transforms. As an added bonus it allows multiple elements to be toggled.

Usage is similar to .toggle: $(selector).betterToggle(), and the project’s homepage has plenty of demos.

Cavendish.js

Cavendish.js (GitHub: michaek / cavendish.js, License: MIT) by Michael Hellein is a slide manager plugin aimed at front-end developers well-versed in CSS. It has a plugin-based API that allows it to support different styles for displaying and navigating slides:

var cavendish = $('.cavendish').cavendish('cavendish');

The bundled plugins include a simple player that pauses on hover, a pager, previous and next arrows, and a parallax scrolling effect. The API also exposes the events used, so you can add listeners to see when Cavendish has been initialised and after a slide has been transitioned.

LungoJS, Math.js, Collage

08 Apr 2013 | By Alex Young | Comments | Tags mobile maths frameworks libraries ui


LungoJS

LungoJS (GitHub: TapQuo / Lungo.js, License: GPLv3) from TapQuo is a framework for HTML5 apps that aims to be cross-device. It supports mobile, desktop, and TV devices. The JavaScript API has support for working with the DOM, localStorage, caching, navigation routing, remote services, and views.

There’s a designer-focused tutorial that explains how to create an application with Lungo, and a Google group (which currently requires permission to join).

Math.js

Math.js (GitHub: josdejong / mathjs, License: Apache 2.0, npm: mathjs, bower: mathjs) by Jos de Jong is a maths library for client-side JavaScript and Node. It supports complex numbers, units, strings, arrays, and matrices, built-in functions and constants, as well as mathematical expression parsing.

It has no dependencies and is compatible with the built-in Math library. One feature I particularly like is the expression parser:

var parser = math.parser();
parser.eval('1.2 / (2.3 + 0.7)'); // 0.4
parser.eval('a = 5.08 cm');
parser.eval('a in inch');         // 2 inch
parser.eval('sin(45 deg) ^ 2');   // 0.5

This opens up some interesting possibilities for storing mathematical expressions in databases then safely evaluating them later on.

The project includes unit tests, and detailed documentation can be found in the readme file.

Collage

Collage (GitHub: ozanturgut / collage, License: Apache 2.0) by Ozan Turgut is a framework for creating interactive collages. It can knit together remote APIs then present media in a two-dimensional space.

This example demonstrates some of the APIs that are supported as standard:

var collage = Collage.create(document.getElementById('PopcornCollage'));
collage.load('popcorn media', {
  flickr: [{ tags: 'popcorn' }],
  googleNews: ['popcorn'],
  twitter: [{ query: 'popcorn' }]
});
collage.start('popcorn media');

AngularCollection, datamock.js, store.js

05 Apr 2013 | By Alex Young | Comments | Tags mvc angularjs testing databases localStorage

AngularCollection

AngularCollection (GitHub: tomkuk / angular-collection, License: MIT, bower: angular-collection) by Tomasz Kuklis is a collection module for AngularJS. It allows objects to be added, removed, updated, and fetched at a specific index. It also has sort and last methods.

It comes with a Grunt build script and some unit tests.

datamock.js

datamock.js (GitHub: marksteve / datamock.js, License: MIT) by Mark Steve Samson is a small library for generating sample data for mockups. Data attributes can be used to bind mocked data, like this: <p data-mock="lorem">Lorem ipsum here...</p>.

It includes some other value types, like names and emails. The author has included a bookmarklet task in the build script, so you can generate a bookmarklet that fills a page with sample data.

store.js

store.js (GitHub: nbubna / store, License: MIT) by Nathan Bubna is a friendlier API for localStorage and sessionStorage. Basic usage is store(key, data), but it has other functions like store.setAll, store.getAll, and support for namespaces.

The sessionStorage API is accessed through store.session, for example: store.session(key, 'value'). The project includes a Grunt build script and PhantomJS-powered tests.

Backbone.js Tutorial: jQuery Plugins and Moving Tasks

04 Apr 2013 | By Alex Young | Comments | Tags backbone.js mvc node backgoog bootstrap jquery


Before starting this tutorial, you’ll need the following:

  • alexyoung / dailyjs-backbone-tutorial at commit 705bcb4
  • The API key from part 2
  • The “Client ID” key from part 2
  • Update app/js/config.js with your keys (if you’ve checked out my source)

To check out the source, run the following commands (or use a suitable Git GUI tool):

git clone
cd dailyjs-backbone-tutorial
git reset --hard 705bcb4

Using Backbone with jQuery Plugins

Although Backbone doesn’t need to be used with jQuery specifically, a lot of people use it with jQuery (and RequireJS) to get access to the diverse plugins made by the jQuery community. In this tutorial I’ll explain how to use jQuery plugins with Backbone projects, and how to find ones that will work well.

The example I’ve used is integrating a drag-and-drop “sortable” plugin to allow tasks to be reordered.

HTML5 Sortable

The plugin I’ve used for drag-and-drop is the HTML5 Sortable Plugin by Ali Farhadi. The reason I used this particular plugin is it has a simple event-based API that allows the plugin to be unloaded and sort events to be captured and responded to. It just needs a container element and the child elements that need to be sorted. The unordered list of tasks in this project directly translates to the expected markup.

Sometimes it’s easier to just write out data attributes to elements rather than trying to create relationships between the DOM nodes used by plugins and models. HTML5 Sortable emits a 'sortupdate' event when a node has been dragged and dropped, and it’ll pass the relevant element to the listener callback. From this we need to figure out which model has changed, then translate that into something Google’s API can understand.

Loading Plugins with RequireJS

In an earlier tutorial I demonstrated how to load non-AMD libraries using RequireJS. If you want a recap, just check out app/js/main.js and look at the shim property in the RequireJS configuration:

require.config({
  baseUrl: 'js',

  paths: {
    text: 'lib/text'
  },

  shim: {
    'lib/underscore-min': {
      exports: '_'
    },
    'lib/backbone': {
      deps: ['lib/underscore-min']
    , exports: 'Backbone'
    },
    'app': {
      deps: ['lib/underscore-min', 'lib/backbone', 'lib/jquery.sortable']
    }
  }
});

The 'app' property expresses a dependency between the main Backbone application file and lib/jquery.sortable, which means /lib/jquery.sortable.js will get automatically loaded (or compiled in by r.js when creating a production build of the app).

Google Tasks Ordering API

It would be too easy if HTML5 Sortable’s API was a one-to-one match with the Google Tasks ordering API. Google’s API has a specific method for moving tasks, and it’s based around moving one task to occupy the position of another one:

gapi.client.tasks.tasks.move({ tasklist: listId, task: id, previous: previousId });

Moving a task to the top of the list is handled by passing null for previous.

Next I’ll explain how to create some simple interface elements for the draggable handle, and then we’ll look at how to persist moved tasks by translating Google’s API into Backbone model and collection code.

Implementation: Views and Templates

I added a little handle by using a Bootstrap icon and an anchor element in app/js/templates/tasks/task.html:

<a href="#" class="handle pull-right"><i class="icon-move"></i></a>

Next I added the code that maps between the Backbone view and the jQuery HTML5 Sortable plugin to app/js/views/tasks/index.js:

makeSortable: function() {
  var $el = this.$el.find('#task-list');
  if (this.collection.length) {
    $el.sortable({ handle: '.handle' }).bind('sortupdate', _.bind(this.saveTaskOrder, this));
  }
},

saveTaskOrder: function(e, o) {
  var id = $(o.item).find('.check-task').data('taskId')
    , previous = $(o.item).prev()
    , previousId = previous.length ? $(previous).find('.check-task').data('taskId') : null
    , request;

  this.collection.move(id, previousId, this.model);
}

The makeSortable method makes an element that appears within TasksIndexView “sortable” – that is, HTML5 Sortable has been wrapped around it. The plugin’s 'sortupdate' event is then bound to saveTaskOrder.

The saveTaskOrder method gets the current task’s ID by looking at the checkbox, because I’d already added a data attribute to that element in the template. This ID is then passed to the collection with the previous task’s ID. In this case, the previous task is the one adjacent to it, which Google’s API needs to figure out how to move the task.

The collection property in this view is a Tasks collection, so let’s take a look at how to implement the move method that causes the changes to be persisted.

Implementation: Models and Collections

Open app/js/collections/tasks.js and add a new method called move:

move: function(id, previousId, list) {
  var model = this.get(id)
    , toModel = this.get(previousId)
    , index = this.indexOf(toModel) + 1;

  this.remove(model, { silent: true });
  this.add(model, { at: index, silent: true });

  // Persist the change
  list.moveTask({ task: id, previous: previousId });
}

This method just exists to trigger remove and add calls on the collection to cause the objects to be reshuffled internally. It then calls moveTask on the TaskList model (in app/js/models/tasklist.js):

moveTask: function(options) {
  options['tasklist'] = this.get('id');
  var request = gapi.client.tasks.tasks.move(options);

  Backbone.gapiRequest(request, 'update', this, options);
}

The gapiRequest method forms the basis for the custom Backbone.sync method used in this project, which I’ve talked about in previous tutorials. I wasn’t able to figure out how to make Backbone.sync cope with moving items in a way that made sense given how gapi.client.tasks.tasks.move works, but I was able to at least reuse some of the syncing functionality by creating a request and calling the “standard” request handler.

Summary

When you can’t find a suitable Backbone plugin for something and want to use a jQuery plugin, my advice is to look for plugins that have event-based APIs and can be cleanly unloaded. That will make them easy to hook into your Backbone views.

The full source for this tutorial can be found in alexyoung / dailyjs-backbone-tutorial, commit e9edfa3.

Node Roundup: 0.11.0, Dependo, Mashape OAuth, node-windows

03 Apr 2013 | By Alex Young | Comments | Tags node modules dependencies oauth authentication windows
You can send in your Node projects for review through our contact form.

0.11.0, 0.10.2

Node 0.11.0 has been released, which is the latest unstable branch of Node. Node 0.10.2 was also released, which includes some fixes for the stream module, an update for the internal uv library, and various other fixes for cryptographic modules and child_process.


Dependo

Dependo (GitHub: auchenberg / dependo, License: MIT, npm: dependo) by Kenneth Auchenberg is a visualisation tool for generating force directed graphs of JavaScript dependencies. It can interpret CommonJS or AMD dependencies, and uses MaDGe to generate the raw dependency graph. D3.js is used for drawing the results.

Mashape OAuth

Mashape OAuth (GitHub: Mashape / mashape-oauth, License: MIT, npm: mashape-oauth) by Nijiko Yonskai is a set of modules for OAuth and OAuth2. It has been designed to work with lots of variations of OAuth implementations, and includes some lengthy Mocha unit tests.

The authors have also written a document called The OAuth Bible that explains the main concepts behind each supported OAuth variation, which is useful because the OAuth terminology isn’t exactly easy to get to grips with.

node-windows

node-windows (GitHub: coreybutler / node-windows, License: MIT/BSD, npm: node-windows) by Corey Butler is a module designed to help write long-running Windows services with Node. It supports event logging and process management without requiring Visual Studio or the .NET Framework.

Using native node modules on Windows can suck. Most native modules are not distributed in a binary format. Instead, these modules rely on npm to build the project, utilizing node-gyp. This means developers need to have Visual Studio (and potentially other software) installed on the system, just to install a native module. This is portable, but painful… mostly because Visual Studio itself is over 2GB.

node-windows does not use native modules. There are some binary/exe utilities, but everything needed to run more complex tasks is packaged and distributed in a readily usable format. So, no need for Visual Studio… at least not for this module.

jQuery Roundup: Sidr, Huey, Backbone.Advice

02 Apr 2013 | By Alex Young | Comments | Tags jquery plugins backbone.js graphics components
Note: You can send your plugins and articles in for review through our contact form.


Sidr

Sidr (GitHub: artberri / sidr, License: MIT) by Alberto Varela creates menus that look like the sidebars found in recent iOS apps. It can cope with multiple menus on a page, and can load content remotely. It’s also responsive, so it should work well in mobile projects.

The author has written up documentation complete with demos, and has included a Grunt build script. It seems like the exact sort of UI component that the next great web-based RSS reader might use…

Huey

Huey (GitHub: michaelrhodes / huey, License: MIT) by Michael Rhodes will find the dominant colour of an image and return it as an RGB array. This is all performed client-side, and doesn’t even depend on jQuery. It could be used to create the kind of effect seen in iTunes, where the background colour changes to suit the selected album art.

Backbone.Advice

Backbone.Advice (GitHub: rhysbrettbowen / Backbone.Advice, License: MIT) by Rhys Brett-Bowen is a Backbone plugin based on Angus Croll’s advice pattern. It basically adds functional mixins to Backbone objects, and can be wrapped like this:
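As a sketch of the underlying pattern (Angus Croll’s functional mixins with “after” advice; this is illustrative code, not Backbone.Advice’s actual API):

```javascript
// Functional mixin sketch of the advice pattern: an existing method is
// wrapped so extra behaviour runs after it. Illustrative only; the
// withAdvice and Counter names are mine, not Backbone.Advice's.
var withAdvice = {
  after: function(method, fn) {
    var base = this[method];
    this[method] = function() {
      var result = base.apply(this, arguments);
      fn.apply(this, arguments); // advice runs after the original
      return result;
    };
  }
};

function Counter() { this.count = 0; }
Counter.prototype.increment = function() { this.count += 1; };

var counter = new Counter();
counter.after = withAdvice.after;
counter.after('increment', function() {
  console.log('count is now', this.count);
});
counter.increment(); // logs: count is now 1
```

Backbone.Advice applies the same wrapping idea to Backbone views, models, and collections.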


Rhys sent in a whole bunch of other Backbone-related projects, including Backbone.ComponentView and Backbone.ModelRegistry. Backbone.ComponentView is based on goog.ui.component from Closure Library, and also works with Backbone.Advice.

Five Minute Guide to Streams2

01 Apr 2013 | By Alex Young | Comments | Tags streams streams2 node 5min

Node 0.10 is the latest stable branch of Node. It’s the branch you should be using for Real Work™. The most significant API changes can be found in the stream module. This is a quick guide to streams2 to get you up to speed.

The Base Classes

There are now five base classes for creating your own streams: Readable, Writable, Duplex, Transform, and PassThrough. These base classes inherit from EventEmitter so you can attach listeners and emit events as you normally would. It’s perfectly acceptable to emit custom events – this might make sense, for example, if you’re writing a streaming parser. The parser could emit events like 'headers' to indicate the headers have been parsed, perhaps for a CSV file.

To make your own Readable stream class, inherit from stream.Readable and implement the _read(size) method. The size argument is “advisory” – a lot of Readable implementations can safely ignore it. Once your _read method has collected data from an underlying I/O source, it can send it by calling this.push(chunk) – internally data will be placed into a queue so “clients” of your class can deal with it when they’re ready.

The Writable class should also be inherited from, but this time a _write(chunk, encoding, callback) method should be implemented. Once you’ve written data to the underlying I/O source, callback can be called, passing an error if required.

The Duplex class is like a Readable and Writable stream in one – it allows data sources that transmit and receive data to be modelled. This makes sense when you think about it – TCP network sockets transmit and receive data. To implement a Duplex stream, inherit from stream.Duplex and implement both the _read and _write methods.

The Transform class is useful for implementing parsers, like the CSV example I mentioned earlier. In general, streams that change data in some way should be implemented using stream.Transform. Although Transform sounds a bit like a Duplex stream, this time you’ll need to implement a _transform(chunk, encoding, callback) method. I’ve noticed several projects in the wild that use Duplex streams with a stubbed _read method, and I wondered if these would be better served by using a Transform class instead.

Finally, the PassThrough stream inherits from Transform to do… nothing. It relays the input to the output. That makes it ideal for sitting inside a pipe chain to spy on streams, and people have been using this to write tests or instrument streams in some way.

Pipes

Pipes must follow this pattern: readable.pipe(writable). As Duplex and Transform streams can both read and write, they can be placed in either position in the chain. For example, I’ve been using process.stdin.pipe(csvParser).pipe(process.stdout) where csvParser is a Transform stream.


The general pattern for inheriting from the base classes is as follows:

  1. Create a constructor function that calls the base class using, options)
  2. Correctly inherit from the base class using Object.create or util.inherits
  3. Implement the required underscored method, whether it’s _read, _write, or _transform

Here’s a quick stream.Writable example:

var stream = require('stream');

GreenStream.prototype = Object.create(stream.Writable.prototype, {
  constructor: { value: GreenStream }
});

function GreenStream(options) {, options);
}

GreenStream.prototype._write = function(chunk, encoding, callback) {
  process.stdout.write('\u001b[32m' + chunk + '\u001b[39m');
  callback();
};

process.stdin.pipe(new GreenStream());

Forwards Compatibility

If you want to use streams2 with Node 0.8 projects, then readable-stream provides access to the newer APIs in an npm-installable module. Since the stream core module is implemented in JavaScript, it makes sense that the newer API can be used in Node 0.8.

Some open source module authors are including readable-stream as a dependency and then conditionally loading it:

var PassThrough = require('stream').PassThrough;

if (!PassThrough) {
  PassThrough = require('readable-stream/passthrough');
}
This example is taken from until-stream.

Streams2 in the Wild

There are some interesting open source projects that use the new streaming API that I’ve been collecting on GitHub. multiparser by Jesse Tane is a stream.Writable HTML form parser. until-stream by Evan Oxfeld will pause a stream when a certain signature is reached.

Hiccup by naomik uses the new streams API to simulate sporadic throughput, and the same author has also released bun which can help combine pipes into composable units, and Burro which can package objects into length-prefixed JSON byte streams. Conrad Pankoff used Burro to write Pillion, which is an RPC system for object streams.

There are also less esoteric modules, like csv-streamify which is a CSV parser.