AngularJS: More on Dependency Injection

23 May 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds

In the AngularJS tutorials I’ve been writing, you might have noticed the use of dependency injection. In this article I’m going to explain how dependency injection works, and how it relates to the small tutorial project we’ve created.

Dependency injection is a software design pattern. The motivation for using it in Angular is to make it easier to transparently load mocked objects in tests. The $http module is a great example of this: when writing tests you don’t want to make real network calls, but defer the work to a fake object that responds with fixture data.

The earlier tutorials used dependency injection for this exact use case: in the main controller, MainCtrl is set up to load the $http service, which can then be transparently replaced during testing.

  .controller('MainCtrl', function($scope, $http, $timeout) {

Now forget everything I just said about dependency injection, and look at the callback that has been passed to .controller in the previous example. I added the $http and $timeout arguments because I want to use the $http and $timeout services. These are built-in “services” (an Angular term), but they’re not standard arguments. In fact, I could have specified these arguments in any order:

  .controller('MainCtrl', function($scope, $timeout, $http) {

This is possible because Angular looks at the function argument names to load dependencies. Before you run away screaming about magic, it’s important to realise that this is just one way to load dependencies in Angular projects. For example, this is equivalent:

  .controller('MainCtrl', ['$scope', '$http', '$timeout', function($scope, $http, $timeout) {

The array-based style is more like AMD, and requires a little bit of syntactical overhead. I call the first style “introspective dependency injection”. The array-based syntax also allows the function arguments to use different names from the dependencies, which can sometimes be useful.

This raises the question: how does introspective dependency injection cope with minimisers, which rename variables to shorter values? Well, it doesn’t cope at all. In fact, minimisers need help to translate the first style into the second.

Yeoman and ngmin

One reason I built the tutorial series with Yeoman was because the Angular generator includes grunt-ngmin. This is a Grunt task that uses ngmin – an Angular-aware “pre-minifier”. It allows you to use the shorter, introspective dependency injection syntax, while still generating valid minimised production builds.

Therefore, building a production version of djsreader with grunt build will correctly generate a deployable version of the project.

Why is it that almost all of Angular’s documentation and tutorials use the potentially dangerous introspective dependency injection syntax? I’m not sure, and I haven’t looked into it. I’d be happier if the only valid solution was the array-based approach, which most of us are already comfortable with from AMD anyway.

Just to prove I’m not making things up, here is the minimised source for djsreader:

"use strict";angular.module("djsreaderApp",[]).config(["$routeProvider",function(e){e.when("/",{templateUrl:"views/main.html",controller:"MainCtrl"}).otherwise({redirectTo:"/"})}]),angular.module("djsreaderApp").controller("MainCtrl",["$scope","$http","$timeout",function(e,r,t){e.refreshInterval=60,e.feeds=[{url:""}],e.fetchFeed=function(n){n.items=[];var o="*%20from%20xml%20where%20url%3D'";o+=encodeURIComponent(n.url),o+="'%20and%20itemPath%3D'feed.entry'&format=json&diagnostics=true&callback=JSON_CALLBACK",r.jsonp(o).success(function(e){e.query.results&&(n.items=e.query.results.entry)}).error(function(e){console.error("Error fetching feed:",e)}),t(function(){e.fetchFeed(n)},1e3*e.refreshInterval)},e.addFeed=function(r){e.feeds.push(r),e.fetchFeed(r),e.newFeed={}},e.deleteFeed=function(r){e.feeds.splice(e.feeds.indexOf(r),1)},e.fetchFeed(e.feeds[0])}]);

The demangled version shows that we’re using the array-based syntax, thanks to ngmin:

angular.module("djsreaderApp").controller("MainCtrl", ["$scope", "$http", "$timeout",


In case you’re wondering how the introspective dependency injection style works, then look no further than annotate(fn). This function uses Function.prototype.toString to extract the argument names from the JavaScript source code. The results are effectively cached, so even though this sounds horrible it doesn’t perform as badly as it could.
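As a sketch of the idea (this is not Angular’s actual implementation, which also strips comments and handles the array and $inject annotation styles), argument names can be pulled out of a function’s source like this:

```javascript
// Simplified annotate-style argument extraction: read the function's
// own source text and capture the parameter list.
var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;

function annotate(fn) {
  var match = fn.toString().match(FN_ARGS);
  return match[1].split(',').map(function(arg) {
    return arg.trim();
  }).filter(function(arg) {
    return arg.length > 0;
  });
}

console.log(annotate(function($scope, $http, $timeout) {}));
// => [ '$scope', '$http', '$timeout' ]
```

Since the extracted names are looked up once and cached, the string matching cost is paid rarely.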


Nothing I’ve said here is new – while researching this post I found The Magic Behind Dependency Injection by Alex Rothenberg, which covers the same topic, and the Angular Dependency Injection documentation outlines the issues caused by the introspective approach and suggests that it should only be used for pretotyping.

However, I felt like it was worth writing an overview of the matter, because although Yeoman is great for a quick start to a project, you really need to understand what’s going on behind the scenes!

Node Roundup: 0.10.7, JSON Editor, puid, node-mac

22 May 2013 | By Alex Young | Comments | Tags node modules mac windows json uuid
You can send in your Node projects for review through our contact form.

Node 0.10.7

Node 0.10.7 was released last week. This version includes fixes for the buffer and crypto modules, and timers. The buffer/crypto fix relates to encoding issues that could crash Node: #5482.

JSON Editor Online


JSON Editor Online (GitHub: josdejong / jsoneditor, License: Apache 2.0, npm: jsoneditor, bower: jsoneditor) by Jos de Jong is a web-based JSON editor. It uses Node for building the project, but it’s actually 100% web-based. It uses the Ace editor, and includes features for searching and sorting JSON.

It’s installable with Bower, so you could technically use it as a component and embed it into another project.


Azer Koçulu sent in a bunch of new modules again, and one I picked out this time was english-time (GitHub: azer / english-time, License: BSD, npm: english-time). He’s using it with some of the CLI tools he’s written, so rather than specifying a date in an ISO format users can express durations in English.

The module currently supports milliseconds, seconds, minutes, hours, days, weeks, and shortened expressions based on combinations of these. For example, 3 weeks, 5d 6h would work.
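To give a flavour of the idea (this is a sketch, not english-time’s actual implementation), a minimal parser for these expressions could look like:

```javascript
// A sketch of English-style duration parsing - illustrative only,
// not english-time's real code. Returns a total in milliseconds.
var UNITS = {
  ms: 1, millisecond: 1, milliseconds: 1,
  s: 1000, second: 1000, seconds: 1000,
  m: 60000, minute: 60000, minutes: 60000,
  h: 3600000, hour: 3600000, hours: 3600000,
  d: 86400000, day: 86400000, days: 86400000,
  w: 604800000, week: 604800000, weeks: 604800000
};

function parseDuration(expression) {
  var total = 0;
  var pattern = /(\d+)\s*([a-z]+)/g;
  var match;
  while ((match = pattern.exec(expression.toLowerCase()))) {
    var unit = UNITS[match[2]];
    if (unit) total += parseInt(match[1], 10) * unit;
  }
  return total;
}

console.log(parseDuration('3 weeks, 5d 6h')); // => 2268000000
```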


puid (GitHub: pid / puid, License: MIT, npm: puid) by Sascha Droste can generate unique IDs suitable for use in a distributed system. The IDs are based on time, machine, and process, and can be 24, 14, or 12 characters long.

Each ID is comprised of an encoded timestamp, machine ID, process ID, and a counter. The counter is based on nanoseconds, and the machine ID is based on the network interface ID or the machine’s hostname.


node-windows provides integration for Windows-specific services, like creating daemons and writing to eventlog. The creator of node-windows, Corey Butler, has also released node-mac (GitHub: coreybutler / node-mac, License: MIT, npm: node-mac). This supports Mac-friendly daemonisation and logging.

Services can be created using an event-based API:

var Service = require('node-mac').Service;

// Create a new service object
var svc = new Service({
  name: 'Hello World',
  description: 'The example web server.',
  script: '/path/to/helloworld.js'
});

// Listen for the "install" event, which indicates the
// process is available as a service.
svc.on('install', function() {
  svc.start();
});

svc.install();


It also supports service removal, and event logging.

jQuery Roundup: Anchorify.js, Minimalect

21 May 2013 | By Alex Young | Comments | Tags jquery plugins select
Note: You can send your plugins and articles in for review through our contact form.


Anchorify.js (GitHub: willdurand / anchorify.js, License: MIT) by William Durand automatically inserts unique anchored headings. The default markup is an anchor with a pilcrow sign, but this can be overridden if desired.

Even though the plugin is relatively simple, William has included QUnit tests and put the project up on jQuery’s new plugin site.


Minimalect (GitHub: groenroos / minimalect, License: MIT) by Oskari Groenroos is a select element replacement that supports optgroups, searching, keyboard navigation, and themes. It comes with two themes that are intentionally simple, allowing you to easily customise them using CSS, and no images are required by default.

Options include placeholder text, a message when no search results are found, class name overrides, and lifecycle callbacks.

Terminology: Modules

20 May 2013 | By Alex Young | Comments | Tags modules commonjs amd terminology basics js101

Learning modern modular frameworks like Backbone.js and AngularJS involves mastering a large amount of terminology, even just to understand a Hello, World application. With that in mind, I wanted to take a break from higher-level libraries to answer the question: what is a module?

The Background Story

Client-side development has always been rife with techniques for patching missing behaviour in browsers. Even the humble <script> tag has been cajoled and beaten into submission to give us alternative ways to load scripts.

It all started with concatenation. Rather than loading many scripts on a page, they are instead joined together to form a single file, and perhaps minimised. One school of thought was that this is more efficient, because a single large HTTP request will ultimately perform better than many smaller requests.

That makes a lot of sense when loading libraries – things that you want to be globally available. However, when writing your own code it somehow feels wrong to place objects and functions at the top level (the global scope).

If you’re working with jQuery, you might organise your own code like this:

$(function() {
  function MyConstructor() {
  }

  MyConstructor.prototype = {
    myMethod: function() {
    }
  };

  var instance = new MyConstructor();
});

That neatly tucks everything away while also only running the code when the DOM is ready. That’s great for a few weeks, until the file is bustling with dozens of objects and functions. That’s when it seems like this monolithic file would benefit from being split up into multiple files.

To avoid the pitfalls caused by large files, we can split them up, then load them with <script> tags. The scripts can be placed at the end of the document, causing them to be loaded after the majority of the document has been parsed.

At this point we’re back to the original problem: we’re loading perhaps dozens of <script> tags inefficiently. Also, scripts are unable to express dependencies between each other. If dependencies between scripts can be expressed, then they can be shared between projects and loaded on demand more intelligently.

Loading, Optimising, and Dependencies

The <script> tag itself has an async attribute. This helps indicate which scripts can be loaded asynchronously, potentially decreasing the time the browser blocks when loading resources. If we’re going to use an API to somehow express dependencies between scripts and load them quickly, then it should load scripts asynchronously when possible.

Five years ago this was surprisingly complicated, mainly due to legacy browsers. Then solutions like RequireJS appeared. Not only did RequireJS allow scripts to be loaded programmatically, but it also had an optimiser that could concatenate and minimise files. The lines between loading scripts, managing dependencies, and file optimisation are inherently blurred.


The problem with loading scripts is that it’s asynchronous: there’s no way to write load('/script.js') and have code that uses script.js run directly afterwards. The CommonJS Modules/AsynchronousDefinition proposal, which became AMD (Asynchronous Module Definition), was designed to get around this. Rather than trying to create the illusion that scripts can be loaded synchronously, all scripts are wrapped in a function called define. This is a global function inserted by a suitable AMD implementation, like RequireJS.

The define function can be used to safely namespace code, express dependencies, and give the module a name (id) so it can be registered and loaded. Module names are “resolved” to script names using a well-defined format.

Although this means every module you write must be wrapped in a call to define, the authors of RequireJS realised it meant that build tools could easily interpret dependencies and generate optimised builds. So your development code can use RequireJS’s client-side library to load the necessary scripts, then your production version can preload all scripts in one go, without having to change your HTML templates (r.js is used to do this in practice).
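The define contract can be pictured with a toy registry (this is not RequireJS itself; real loaders fetch dependency scripts asynchronously, and the load helper here is purely illustrative):

```javascript
// A toy module registry showing how AMD ids, dependency lists, and
// factory functions fit together. Not RequireJS's actual code.
var registry = {};

function define(id, deps, factory) {
  registry[id] = { deps: deps, factory: factory, exports: null };
}

function load(id) {
  var mod = registry[id];
  if (mod.exports === null) {
    // Resolve dependencies first, then run the factory.
    mod.exports = mod.factory.apply(null, mod.deps.map(load));
  }
  return mod.exports;
}

define('greeting', [], function() {
  return 'Hello';
});

define('app', ['greeting'], function(greeting) {
  return greeting + ', World';
});

console.log(load('app')); // => 'Hello, World'
```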


Meanwhile, Node was becoming popular. Node’s module system is characterised by the require function, which returns a value that contains the module:

var User = require('models/user');

Can you imagine if every Node module had to be wrapped in a call to define? It might seem like an acceptable trade-off in client-side code, but it would feel like too much boilerplate in server-side scripting when compared to languages like Python.

There have been many projects to make this work in browsers. Most use a build tool to load all of the modules referenced by require up front – they’re stored in memory so require can simply return them, creating the illusion that scripts are being loaded synchronously.

Whenever you see require and exports you’re looking at CommonJS Modules/1.1. You’ll see this referred to as “CommonJS”.

Now you’ve seen CommonJS modules, AMD, and where they came from, how are they being used by modern frameworks?

Modules in the Wild

Dojo uses AMD internally and for creating your own modules. It didn’t originally – it had its own module system – but it adopted AMD early on.

AngularJS uses its own module system that looks a lot like AMD, but with adaptations to support dependency injection.

RequireJS supports AMD, but it can load scripts and other resources without wrapping them in define. For example, a dependency between your own well-defined modules and a jQuery plugin that doesn’t use AMD can be defined by using suitable configuration options when setting up RequireJS.

There’s still a disparity between development and production builds. Even though RequireJS can be used to create serverless single page applications, most people still use a lightweight development server that serves raw JavaScript files, before deploying concatenated and minimised production builds.

The need for script loading and building, and tailoring for various environments (typically development, test, and production) has resulted in a new class of projects. Yeoman is a good example of this: it uses Grunt for managing builds and running a development server, Bower for defining the source of dependencies so they can be fetched, and then RequireJS for loading and managing dependencies in the browser. Yeoman generates skeleton projects that set up development and build environments so you can focus on writing code.

Hopefully now you know all about client-side modules, so the next time you hear RequireJS, AMD, or CommonJS, you know what people are talking about!

Impossible Mission, Full-Text Indexing, Backbone.Projections

17 May 2013 | By Alex Young | Comments | Tags games backbone.js node pdf

Impossible Mission


Impossible Mission by Krisztián Tóth is a JavaScript remake of the C64 classic. You can view the source to see how it all works, if things like this.dieByZapFrames and this.searchedFurniture sound appealing to you.

Krisztián previously sent in Boulder Dash and Wizard of Wor which were similar remakes written using the same approach.

Client-Side Full-Text Indexing

Gary Sieling sent in a post he wrote about full-text indexing with client-side JavaScript, in which he looks at PDF.js and Lunr: Building a Full-Text Index in JavaScript. I briefly mentioned Lunr by Oliver Nightingale back in March.

One great thing about this type of index is that the work can be done in parallel and then combined as a map-reduce job. Only three entries from the above object need to be combined, as “fields” and “pipeline” are static.
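The combining step can be pictured as a simple merge of partial inverted indexes (an illustrative sketch, not Lunr’s actual data structures):

```javascript
// Sketch: combining two partial inverted indexes, map-reduce style.
// Each index maps a term to the documents that contain it.
function mergeIndexes(a, b) {
  var merged = {};
  [a, b].forEach(function(index) {
    Object.keys(index).forEach(function(term) {
      merged[term] = (merged[term] || []).concat(index[term]);
    });
  });
  return merged;
}

var shard1 = { javascript: ['doc1'], pdf: ['doc1'] };
var shard2 = { javascript: ['doc2'] };

console.log(mergeIndexes(shard1, shard2));
// => { javascript: [ 'doc1', 'doc2' ], pdf: [ 'doc1' ] }
```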


In relational databases, a projection is a subset of available data. Backbone.Projections (GitHub: andreypopp / backbone.projections, License: MIT, npm: backbone.projections) by Andrey Popp is the equivalent for Backbone collections – it allows a transformed subset of the values in a collection to be represented and synced.

The supported projections are Capped and Filtered. Capped collections are limited based on a size and function – the function will be used to order the results prior to truncating them. Filtered projections filter out results based on a function that returns a boolean.

Projections can be composed by passing one projection to another. This example creates a Filtered projection, then passes it to a Capped projection to limit and order the results:

var todaysPosts = new Filtered(posts, {
  filter: function(post) {
    return post.get('date').isToday();
  }
});

var topTodaysPosts = new Capped(todaysPosts, {
  cap: 5,
  comparator: function(post) {
    return post.get('likes');
  }
});

The author has written unit tests with Mocha, and documentation is available in the readme.

AngularJS: Tests

16 May 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds


In the last part we changed the app to support multiple feeds.

This week you’ll learn how to write a short unit test to test the app’s main controller. This will involve mocking data.

If you get stuck at any part of this tutorial, check out the full source here: commit 7b4bda.

Neat and Tidy Tests

The goal of this tutorial is to demonstrate one method for writing neat and tidy tests. Ideally mocked data should be stored in separate files and loaded when required. What we absolutely don’t want is global variables littering memory.

To run tests with the Yeoman-generated app we’ve been working on, type grunt test. It’ll use Karma and Jasmine to run tests through Chrome using WebSockets. The workflow in the console is effortless, despite Chrome appearing and disappearing in the background (it won’t trample on your existing Chrome session, it’ll make a separate process). It doesn’t steal focus away, which means you can invoke tests and continue working on code without getting interrupted.

My workflow: mock, controller, test, and a terminal for running tests

The basic approach is to use $httpBackend.whenJSONP to tell AngularJS to return some mock data when the tests are run, instead of fetching the real feed data from Yahoo. That sounds simple enough, but there’s a slight complication: leaving mock data in the test sucks. So, what do we do about this? The karma.conf.js file that was created for us by the Yeoman generator contains a line for loading files from a mocks directory: 'test/mock/**/*.js'. These will be loaded before the tests, so let’s dump some JSON in there.

Interestingly, if you run grunt test right now it’ll fail, because the app makes a JSONP request, and the angular-mocks library will flag this as an error. Using $httpBackend.whenJSONP will fix this.

JSON Mocks

Open a file called test/mock/feed.js (you’ll need to mkdir test/mock first), then add this:

'use strict';

angular.module('mockedFeed', [])
  .value('defaultJSON', {
    query: {
      count: 2,
      created: '2013-05-16T15:01:31Z',
      lang: 'en-US',
      results: {
        entry: [{
          title: 'Node Roundup: 0.11.2, 0.10.6, subscribe, Omelette',
          link: { href: '' },
          updated: '2013-05-15T00:00:00+01:00',
          id: '',
          content: { type: 'html', content: 'example' }
        }, {
          title: 'jQuery Roundup: 1.10, jquery-markup, zelect',
          link: { href: '' },
          updated: '2013-05-14T00:00:00+01:00',
          id: '',
          content: { type: 'html', content: 'example 2' }
        }]
      }
    }
  });

This uses angular.module().value to set a value that contains some JSON. I derived this JSON from Yahoo’s API by running the app and looking at the network traffic in WebKit Inspector, then edited out the content properties because they were huge (DailyJS has full articles in feeds).

Loading the Mocked Value

Open test/spec/controllers/main.js and change the first beforeEach to load mockedFeed:

beforeEach(module('djsreaderApp', 'mockedFeed'));

The beforeEach method is provided by Jasmine, and will make the specified function run before each test. Now the defaultJSON value can be injected, along with the HTTP backend:

var MainCtrl, scope, mockedFeed, httpBackend;

// Initialize the controller and a mock scope
beforeEach(inject(function($controller, $rootScope, $httpBackend, defaultJSON) {
  // Set up the expected feed data
  httpBackend = $httpBackend;
  httpBackend.whenJSONP(/query\.yahooapis\.com/).respond(defaultJSON);

  scope = $rootScope.$new();
  MainCtrl = $controller('MainCtrl', {
    $scope: scope
  });
}));
You should be able to guess what’s happening with $httpBackend.whenJSONP – whenever the app tries to contact Yahoo’s service, it’ll trigger our mocked HTTP backend and return the defaultJSON value instead. Cool!

The Test

The actual test is quite a comedown after all that mock wrangling:

it('should have a list of feeds', function() {
  httpBackend.flush();

  expect(scope.feeds[0].items[0].title).toBe('Node Roundup: 0.11.2, 0.10.6, subscribe, Omelette');
});

The test checks $scope has the expected data. httpBackend.flush will make sure the (fake) HTTP request has finished first. The scope.feeds value is the one that MainCtrl from last week derives from the raw JSON returned by Yahoo.


You should now be able to run grunt test and see some passing tests (just like in my screenshot). If not, check out djsreader on GitHub to see what’s different.

Most of the work for this part can be found in commit 7b4bda.

Node Roundup: 0.11.2, 0.10.6, subscribe, Omelette

15 May 2013 | By Alex Young | Comments | Tags node modules cli events pubsub
You can send in your Node projects for review through our contact form.

Node 0.11.2 and 0.10.6

Clearly the Node core developers have had an early summer holiday, and are now back to unleash new releases. In the space of a few days 0.11.2 and 0.10.6 were released. I was intrigued by the Readable.prototype.wrap update, which makes it support objectMode for streams that emit objects rather than strings or other data.

The 0.11.2 release has an update that guarantees the order of 'finish' events, and another that adds some new methods: cork and uncork. Corking basically forces buffering of all writes – data will be flushed when uncork is called or when end is called.

There is a detailed discussion about cork and the related _writev method on the Node Google Group: Streams writev API. There are some interesting comments about a similar earlier implementation by Ryan Dahl, the validity of which Isaac questions due to Node’s code changing significantly since then.

If you want to read about writev, check out the write(2) (man 2 write) manual page:

Write() attempts to write nbyte of data to the object referenced by the descriptor fildes from the buffer pointed to by buf. Writev() performs the same action, but gathers the output data from the iovcnt buffers specified by the members of the iov array…


Azer Koçulu has been working on a suite of modules for subscribing to and observing changes on objects.

Now he’s also released a module for subscribing to multiple pub/sub objects:

var subscribe = require('subscribe');
var a = pubsub();
var b = pubsub();
var c = pubsub();

subscribe(a, b, c, function(updates) {
  // => a.onUpdate
  // => 3, 4
  // => c.onUpdate
  // => 5, 6
});

a.publish(3, 4);
c.publish(5, 6);

This brings a compositional style of working to Azer’s other modules, allowing subscriptions to multiple lists and objects at the same time. The next example uses subscribe to combine the new-list module with new-object:

var newList = require('new-list');
var newObject = require('new-object');

var fruits = newList('apple', 'banana', 'grapes');
var people = newObject({ smith: 21, joe: 23 });

subscribe(fruits, people, function(updates) {
  // => melon

  // => { alex: 25 }
});

fruits.push('melon');
people('alex', '25');


Omelette (GitHub: f / omelette, License: MIT, npm: omelette) by Fatih Kadir Akın is a template-based autocompletion module.

Programs and their arguments are defined using an event-based completion API, and then they can generate the zsh or Bash completion rules. There’s an animated gif in the readme that illustrates how it works in practice.

jQuery Roundup: 1.10, jquery-markup, zelect

14 May 2013 | By Alex Young | Comments | Tags jquery plugins markup templating select
Note: You can send your plugins and articles in for review through our contact form.

jQuery 1.10

A new 1.x branch of jQuery has been released, jQuery 1.10. This builds on the work in the 1.9 line:

It’s our goal to keep the 1.x and 2.x lines in sync functionally so that 1.10 and 2.0 are equal, followed by 1.11 and 2.1, then 1.12 and 2.2…

A few of the included fixes were things originally planned for 1.9.x, and others are new to this branch. As always, the announcement blog post contains links to full details of each change.


jquery-markup (GitHub: rse / jquery-markup, License: MIT) by Ralf S. Engelschall is a markup generator that works with several template engines (including Jade and Handlebars). By adding a <markup> tag, $(selector).markup can be used to render templates interpolated with values.

Ralf said this about the plugin:

I wanted to use template languages like Handlebars, but instead of having to store each fragment in its own file I still wanted to assemble all fragments together. Even more: I wanted to logically group and nest them to still understand the view markup code as a whole.

The <markup> tag can include a type attribute that is used to determine the templating language – this means you can use multiple template languages in the same document.


zelect (GitHub: mtkopone / zelect, License: WTFPL) by Mikko Koponen is a <select> component. It’s unit tested, and has built-in support for asynchronous pagination.

Unlike Chosen, it doesn’t come with any CSS, but that might be a good thing because it keeps the project simple. Mikko has provided an example with suitable CSS that you can use to get started.

If Chosen seems too large or inflexible for your project, then zelect might be a better choice.

Unix: It's Alive!

13 May 2013 | By Alex Young | Comments | Tags node modules unix

On a philosophical level, Node developers love Unix. I like to think that’s why Node’s core modules are relatively lightweight compared to other standard libraries (is an FTP library really necessary?) – Node’s modules quietly get out of the way, allowing the community to provide solutions to higher-level problems.

As someone who sits inside tmux/Vim/ssh all day, I’m preoccupied with command-line tools and ways to work more efficiently in the shell. That’s why I was intrigued to find bashful (GitHub: substack / bashful, License: MIT, npm: bashful) by substack. It allows Bash to be parsed and executed. To use it, hook it up with some streams:

var bash = require('bashful')(process.env);
bash.on('command', require('child_process').spawn);

var s = bash.createStream();
process.stdin.pipe(s).pipe(process.stdout);

After installing bashful, running this example with node sh.js will allow you to issue shell commands. Not all of Bash’s built-in commands are supported yet (there’s a list and to-do in the readme), but you should be able to execute commands and run true and false, then get the last exit status with echo $?.

How does this work? Well, the bashful module basically parses each line, character-by-character, to tokenise the input. It then checks anything that looks like a command against the list of built-in commands, and runs it. It mixes Node streams with a JavaScript bash parser to create a Bash-like layer that you can reuse with other streams.
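To illustrate the flavour of this kind of parsing (a toy sketch, not bashful’s actual tokeniser, which also handles operators, expansions, and much more):

```javascript
// A toy shell tokeniser: walk the line character by character,
// splitting on spaces while respecting quoted regions.
function tokenise(line) {
  var tokens = [];
  var current = '';
  var quote = null;
  for (var i = 0; i < line.length; i++) {
    var ch = line[i];
    if (quote) {
      if (ch === quote) quote = null;
      else current += ch;
    } else if (ch === '"' || ch === "'") {
      quote = ch;
    } else if (ch === ' ') {
      if (current) tokens.push(current);
      current = '';
    } else {
      current += ch;
    }
  }
  if (current) tokens.push(current);
  return tokens;
}

console.log(tokenise('echo "hello world" done'));
// => [ 'echo', 'hello world', 'done' ]
```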

This module depends on shell-quote, which correctly escapes those gnarly quotes in shell commands. I expect substack will make a few more shell-related modules as he continues work on bashful.

ShellJS (GitHub: arturadib / shelljs, npm: shelljs) by Artur Adib has been around for a while, but still receives regular updates. This module gives you shell-like commands in Node:


require('shelljs/global');

mkdir('-p', 'out/Release');
cp('-R', 'stuff/*', 'out/Release');

It can even mimic Make, so you could write your build scripts with it. This would make sense if you’re sharing code with Windows-based developers.

There are plenty of other interesting Unix-related modules that are alive and regularly updated. One I was looking at recently is suppose (GitHub: jprichardson / node-suppose, License: MIT, npm: suppose) by JP Richardson, which is an expect(1) clone:

var suppose = require('suppose')
  , fs = require('fs')
  , assert = require('assert')

suppose('npm', ['init'])
  .on(/name\: \([\w|\-]+\)[\s]*/).respond('awesome_package\n')
  .on('version: (0.0.0) ').respond('0.0.1\n')
  // ...

It uses a chainable API to allow expect-like expressions to capture and react to the output of other programs.

Unix in the Node community is alive and well, but I’m sure there’s also lots of Windows-related fun to be had – assuming you can figure out how to use Windows 9 with a keyboard and mouse that is…

P, EasyWebWorker, OpenPGP.js

10 May 2013 | By Alex Young | Comments | Tags node modules crypto p2p webworkers


P (GitHub: oztu / p, License: Apache 2, npm: onramp, bower: p) by Ozan Turgut is a client-side library with a WebSocket server for creating P2P networks by allowing browser-to-browser connections.

The onramp Node module is used to establish connections, but after that it isn’t necessary for communication between clients. The author has written up documentation with diagrams to explain how it works. Like other similar projects, the underlying technology is WebRTC, so it only works in Chrome or Firefox Nightly.


EasyWebWorker (GitHub: ramesaliyev / EasyWebWorker, License: MIT) by Rameş Aliyev is a wrapper for web workers which allows functions to be executed directly, and can execute global functions in the worker.

A fallback is provided for older browsers:

# Create web worker fallback if browser doesnt support Web Workers.
if this.document isnt undefined and !window.Worker and !window._WorkerPrepared
  window.Worker = _WorkerFallback

The _WorkerFallback class is provided, and uses XMLHttpRequest or ActiveXObject.

The source code is nicely commented if you want to look at what it does in more detail.


Jeremy Darling sent in OpenPGP.js (GitHub: openpgpjs / openpgpjs, License: LGPL), which is an OpenPGP implementation for JavaScript:

This is a JavaScript implementation of OpenPGP with the ability to generate public and private keys. Key generation can be a bit slow but you can also import your own keys.

Jeremy found that OpenPGP.js is used by Mailvelope, which is a browser extension that brings OpenPGP to webmail services like Gmail. That means Mailvelope can encrypt messages without having to upload a private key to a server.

AngularJS: Managing Feeds

09 May 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds


In the last part we looked at fetching and parsing feeds with YQL using Angular’s $http service. The commit for that part was 2eae54e.

This week you’ll learn more about Angular’s data binding by adding some input fields to allow feeds to be added and removed.

If you get stuck at any part of this tutorial, check out the full source here: commit c9f9d06.

Modeling Multiple Feeds

The previous part mapped one feed to a view by using the $scope.feed object. Now we want to support multiple feeds, so we’ll need a way of modeling ordered collections of feeds.

The easiest way to do this is simply by using an array. Feed objects that contain the post items and feed URLs can be pushed onto the array:

$scope.feeds = [{
  url: '',
  items: [ /* Blog posts go here */ ]
}];

Rendering Multiple Feeds

The view now needs to be changed to use multiple feeds instead of a single feed. This can be achieved by using the ng-repeat directive to iterate over each one (app/views/main.html):

<div ng-repeat="feed in feeds">
  <ul>
    <li ng-repeat="item in feed.items"><a href="{{item.link.href}}">{{item.title}}</a></li>
  </ul>
  URL: <input size="80" ng-model="feed.url">
  <button ng-click="fetchFeed(feed)">Update</button>
  <button ng-click="deleteFeed(feed)">Delete</button>
  <hr />
</div>

The fetchFeed and deleteFeed methods should be added to $scope in the controller, but we’ll deal with those later. First let’s add a form to create new feeds.

Adding Feeds

The view for adding feeds needs to use an ng-model directive to bind a value so the controller can access the URL of the new feed:

  URL: <input size="80" ng-model="newFeed.url">
  <button ng-click="addFeed(newFeed)">Add Feed</button>

The addFeed method will be triggered when the button is clicked. All we need to do is push newFeed onto $scope.feeds, then clear newFeed so the form reverts to its previous state. The addFeed method is also added to $scope in the controller (app/scripts/controllers/main.js), like this:

$scope.addFeed = function(feed) {
  $scope.feeds.push(feed);
  $scope.newFeed = {};
};

This example could be written to use $scope.newFeed instead of the feed argument, but don’t you think it’s cool that arguments can be passed from the view just by adding them to the directive?

Fetching Feeds

The original $http code should be wrapped up as a method so it can be called by the ng-click directive on the button:

$scope.fetchFeed = function(feed) {
  feed.items = [];

  var apiUrl = "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20xml%20where%20url%3D'";
  apiUrl += encodeURIComponent(feed.url);
  apiUrl += "'%20and%20itemPath%3D'feed.entry'&format=json&diagnostics=true&callback=JSON_CALLBACK";

  $http.jsonp(apiUrl).
    success(function(data, status, headers, config) {
      if (data.query.results) {
        feed.items = data.query.results.entry;
      }
    }).
    error(function(data, status, headers, config) {
      console.error('Error fetching feed:', data);
    });
};

The feed argument will be the same as the one in the $scope.feeds array, so by clearing the current set of items using feed.items = []; the user will see instant feedback when “Update” is clicked. That makes it easier to see what’s happening if the feed URL is changed to another URL.

I’ve used encodeURIComponent to encode the feed’s URL so it can be safely inserted as a query parameter for Yahoo’s service.
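To see what the encoding does, here’s a small standalone sketch (the feed URL is just an example):

```javascript
// Percent-encoding a feed URL before embedding it in the YQL query string.
var feedUrl = 'http://example.com/feed?alt=rss&x=1';
var encoded = encodeURIComponent(feedUrl);

console.log(encoded);
// => http%3A%2F%2Fexample.com%2Ffeed%3Falt%3Drss%26x%3D1
// ':', '/', '?', '=' and '&' are all escaped, so the embedded URL
// can't terminate the surrounding YQL query early.
```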

Deleting Feeds

The controller also needs a method to delete feeds. Since we’re working with an array we can just splice off the desired item:

$scope.deleteFeed = function(feed) {
  $scope.feeds.splice($scope.feeds.indexOf(feed), 1);
};
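This works because ng-click passes the exact object stored in $scope.feeds, so indexOf can match it by identity. A standalone sketch of the same removal:

```javascript
// indexOf matches by object identity, so the feed passed in from
// the view is the one removed from the array.
var feeds = [{ url: 'a' }, { url: 'b' }, { url: 'c' }];
var doomed = feeds[1];

feeds.splice(feeds.indexOf(doomed), 1);

console.log(feeds.map(function (f) { return f.url; }));
// => [ 'a', 'c' ]
```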

Periodic Updates

Automatically refreshing feeds is an interesting case in AngularJS because it can be implemented using the $timeout service. It’s just a wrapper around setTimeout, but it also delegates exceptions to $exceptionHandler.

To use it, add it to the list of arguments in the controller and set a sensible default value:

  .controller('MainCtrl', function($scope, $http, $timeout) {
    $scope.refreshInterval = 60;

Now make fetchFeed call itself at the end of the method:

$timeout(function() { $scope.fetchFeed(feed); }, $scope.refreshInterval * 1000);

I’ve multiplied the value by 1000 so it converts seconds into milliseconds, which makes the view easier to understand:

<p>Refresh (seconds): <input ng-model="refreshInterval"></p>

The finished result


Now that you can add more feeds to the reader, it’s starting to feel more like a real web application. Over the next few weeks I’ll add tests and a better interface.

The code for this tutorial can be found in commit c9f9d06.

Node Roundup: Node-sass, TowTruck, peer-vnc

08 May 2013 | By Alex Young | Comments | Tags node modules sass css vnc mozilla
You can send in your Node projects for review through our contact form.


Node-sass (GitHub: andrew / node-sass, License: MIT, npm: node-sass) by Andrew Nesbitt is a Node binding for libsass. It comes with some pre-compiled binaries, so it should be easy to get it running.

It has both synchronous and asynchronous APIs, and there’s an example app built with Connect so you can see how the middleware works: andrew / node-sass-example.

var sass = require('node-sass');
// Async
sass.render(scss_content, callback [, options]);

// Sync
var css = sass.renderSync(scss_content [, options]);

The project includes Mocha tests and more usage information in the readme.


C. Scott Ananian sent in TowTruck (GitHub: mozilla / towtruck, License: MPL) from Mozilla, which is an open source web service for collaboration:

Using TowTruck two people can interact on the same page, seeing each other’s cursors, edits, and browsing a site together. The TowTruck service is included by the web site owner, and a web site can customize and configure aspects of TowTruck’s behavior on the site.

It’s not currently distributed as a module on npm, so you’ll need to follow the instructions in the readme to install it. There’s also a bookmarklet for adding TowTruck to any page, and a Firefox add-on as well.


peer-vnc (GitHub: InstantWebP2P / peer-vnc, License: MIT, npm: peer-vnc) by Tom Zhou is a web VNC client. It uses his other project, node-httpp, which is a P2P web service module.

I had trouble installing node-httpp on a Mac, so YMMV, but I like the idea of a P2P noVNC project.

jQuery Roundup: UI 1.10.3, simplePagination.js, jQuery Async

07 May 2013 | By Alex Young | Comments | Tags jquery plugins async pagination jquery-ui ui
Note: You can send your plugins and articles in for review through our contact form.

jQuery UI 1.10.3

jQuery UI 1.10.3 was released last week. This is a maintenance release that has fixes for Draggable, Sortable, Accordion, Autocomplete, Button, Datepicker, Menu, and Progressbar.



simplePagination.js (GitHub: flaviusmatis / simplePagination.js, License: MIT) by Flavius Matis is a pagination plugin that supports Bootstrap. It has options for configuring the page links, next and previous text, style attributes, onclick events, and the initialisation event.

There’s an API for selecting pages, and the author has included three themes. When selecting a page, the truncated pages will shift, so it’s easy to skip between sets of pages.

jQuery Async


jQuery Async (GitHub: acavailhez / jquery-async, License: Apache 2) by Arnaud Cavailhez is a UI plugin for animating things while asynchronous operations take place. It depends on Bootstrap, and makes it easy to animate a button that triggers a long-running process.

The documentation has some clever examples that help visualise how the plugin works – two buttons are displayed so you can trigger the 'success' and 'error' events by hand. It’s built using $.Deferred, so it’ll work with the built-in Ajax API without much effort.

Swarm, Dookie, AngularJS Book

06 May 2013 | By Alex Young | Comments | Tags node modules books css


Swarm (GitHub: gritzko / swarm, License: MIT) by Victor Grishchenko is a data synchronisation library that can synchronise objects on clients and servers.

Swarm is built around its concise four-method interface that expresses the core function of the library: synchronizing distributed object replicas. The interface is essentially a combination of two well recognized conventions, namely get/set and on/off method pairs, also available as getField/setField and addListener/removeListener calls respectively.

var obj = swarmPeer.on(id, callbackFn); // also addListener()
obj.on('field', fieldCallbackFn);

obj.off('field', fieldCallbackFn);
swarmPeer.off(id, callbackFn);  // also removeListener()

The author has defined an addressing protocol that uses tokens to describe various aspects of an event and object. For more details, see Swarm: specifying events.


Dookie (GitHub: voronianski / dookie-css, License: MIT, npm: dookie-css) by Dmitri Voronianski is a CSS library that’s built on Stylus. It provides several useful mixins and components:

  • Reset mixins: reset(), normalize(), and fields-reset()
  • Helpers: Shortcuts for working with display, fonts, images and high pixel ratio images
  • Sprites
  • Vendor prefixes
  • Easings and gradients

Dmitri has also included Mocha/PhantomJS tests for checking the generated output and visualising it.

Developing an AngularJS Edge

Developing an AngularJS Edge by Christopher Hiller is a new book about AngularJS aimed at existing JavaScript developers who just want to learn AngularJS. The author has posted examples on GitHub here: angularjsedge / examples, and there’s a sample chapter (ePub).

The book is being sold through Gumroad for $14.95, reduced from $19.95. The author notes that it is also available through Amazon and Safari Books Online.

LevelDB and Node: Getting Up and Running

03 May 2013 | By Rod Vagg | Comments | Tags node leveldb databases

This is the second article in a three-part series on LevelDB and how it can be used in Node.

Our first article covered the basics of LevelDB and its internals. If you haven’t already read it you are encouraged to do so as we will be building upon this knowledge as we introduce the Node interface in this article.


There are two primary libraries for using LevelDB in Node, LevelDOWN and LevelUP.

LevelDOWN is a pure C++ interface between Node.js and LevelDB. Its API provides limited sugar and is mostly a straight-forward mapping of LevelDB’s operations into JavaScript. All I/O operations in LevelDOWN are asynchronous and take advantage of LevelDB’s thread-safe nature to parallelise reads and writes.

LevelUP is the library that the majority of people will use to interface with LevelDB in Node. It wraps LevelDOWN to provide a more Node.js-style interface. Its API provides more sugar than LevelDOWN, with features such as optional arguments and deferred-till-open operations (i.e. if you begin operating on a database that is in the process of being opened, the operations will be queued until the open is complete).

LevelUP exposes iterators as Node.js-style object streams. A LevelUP ReadStream can be used to read sequential entries, forward or reverse, to and from any key.

LevelUP handles JSON and other encoding types for you. For example, when operating on a LevelUP instance with JSON value-encoding, you simply pass in your objects for writes and they are serialised for you. Likewise, when you read them, they are deserialised and passed back in their original form.

A simple LevelUP example

var levelup = require('levelup')

// open a data store
var db = levelup('/tmp/dprk.db')

// a simple Put operation
db.put('name', 'Kim Jong-un', function (err) {

  // a Batch operation made up of 3 Puts
  db.batch([
      { type: 'put', key: 'spouse', value: 'Ri Sol-ju' }
    , { type: 'put', key: 'dob', value: '8 January 1983' }
    , { type: 'put', key: 'occupation', value: 'Clown' }
  ], function (err) {

    // read the whole store as a stream and print each entry to stdout
    db.createReadStream()
      .on('data', console.log)
      .on('close', function () {
        db.close()
      })
  })
})

Execute this application and you’ll end up with this output:

{ key: 'dob', value: '8 January 1983' }
{ key: 'name', value: 'Kim Jong-un' }
{ key: 'occupation', value: 'Clown' }
{ key: 'spouse', value: 'Ri Sol-ju' }

Basic operations


There are two ways to create a new LevelDB store, or open an existing one:

levelup('/path/to/database', function (err, db) {
  /* use `db` */
})

// or

var db = levelup('/path/to/database')
/* use `db` */

The first version is a more standard Node-style async instantiation. You only start using the db when LevelDB is set up and ready.

The second version is a little more opaque. It looks like a synchronous operation, but the actual open call is still asynchronous, although you get a LevelUP object back immediately to use. Any calls you make on that object that need to operate on the underlying LevelDB store are queued until the store is ready to accept calls. The actual open operation is very quick, so the initial delay is generally not noticeable.
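LevelUP’s real queueing is more involved, but the idea can be sketched in a few lines (DeferredStore and its put method are illustrative names, not LevelUP’s API):

```javascript
// Sketch of deferred-till-open: operations made before the store is
// ready are queued and replayed once the open callback fires.
function DeferredStore(openAsync) {
  var self = this;
  this.ready = false;
  this.queue = [];
  this.data = {};
  openAsync(function () {
    self.ready = true;
    self.queue.forEach(function (op) { op(); });
    self.queue = [];
  });
}

DeferredStore.prototype.put = function (key, value, cb) {
  var self = this;
  if (!this.ready) {
    this.queue.push(function () { self.put(key, value, cb); });
    return;
  }
  this.data[key] = value; // a real store would hit LevelDB here
  cb(null);
};

// put() before the open completes is queued, then replayed:
var store = new DeferredStore(function (ready) { setImmediate(ready); });
store.put('name', 'LevelDB', function (err) {
  console.log(store.data.name); // => LevelDB
});
```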


To close a LevelDB store, simply call close() and your callback will be called when the underlying store is completely closed:

// close to clean up
db.close(function (err) { /* ... */ })

Read, write and delete

Reading and writing are what you would expect for asynchronous Node methods:

db.put('key', 'value', function (err) { /* ... */ })

db.del('key', function (err) { /* ... */ })

db.get('key', function (err, value) { /* ... */ })


As mentioned in the first article, LevelDB has a batch operation that performs atomic writes. These writes can be either put or delete operations.

LevelUP takes an array to perform a batch; each element of the array is either a 'put' or a 'del':

var operations = [
    { type: 'put', key: 'Franciscus', value: 'Jorge Mario Bergoglio' }
  , { type: 'del', key: 'Benedictus XVI' }
]

db.batch(operations, function (err) { /* ... */ })


LevelUP turns LevelDB’s Iterators into Node’s readable streams, making them surprisingly powerful as a query mechanism.

LevelUP’s ReadStreams share all the same characteristics as standard Node readable object streams, such as being able to pipe() to other streams. They also emit all of the expected events.

var rs = db.createReadStream()

// our new stream will emit a 'data' event for every entry in the store

rs.on('data' , function (data) { /* data.key & data.value */ })
rs.on('error', function (err) { /* handle err */ })
rs.on('close', function () { /* stream finished & cleaned up */ })

But it’s the various options for createReadStream(), combined with the fact that LevelDB sorts by keys that makes it a powerful abstraction:

var rs = db.createReadStream({
    start     : 'somewheretostart'
  , end       : 'endkey'
  , limit     : 100           // maximum number of entries to read
  , reverse   : true          // flip direction
  , keys      : true          // see db.createKeyStream()
  , values    : true          // see db.createValueStream()
})

'start' and 'end' point to keys in the store. These don’t even need to exist as actual keys, because LevelDB will simply jump to the next existing key in lexicographical order. We’ll see later why this is helpful when we explore namespacing and range queries.

LevelUP also provides a WriteStream which maps write() operations to Puts or Batches.

Since ReadStream and WriteStream follow standard Node.js stream patterns, a copy database operation is simply a pipe() call:

function copy (srcdb, destdb, callback) {
  srcdb.createReadStream()
    .pipe(destdb.createWriteStream())
    .on('error', callback)
    .on('close', callback)
}


LevelUP will accept most kinds of JavaScript objects, including Node’s Buffers, as both keys and values for all its operations. LevelDB stores everything as simple byte arrays so most objects need to be encoded and decoded as they go in and come out of the store.

You can specify the encoding of a LevelUP instance and you can also specify the encoding of individual operations. This means that you can easily store text and binary data in the same store.

'utf8' is the default encoding but you can change that to any of the standard Node Buffer encodings. You can also use the special 'json' encoding:

var db = levelup('/tmp/dprk.db', { valueEncoding: 'json' })

db.put(
    'dprk'
  , {
        name       : 'Kim Jong-un'
      , spouse     : 'Ri Sol-ju'
      , dob        : '8 January 1983'
      , occupation : 'Clown'
    }
  , function (err) {
      db.get('dprk', function (err, value) {
        console.log('dprk:', value)
      })
    }
)

Gives you the following output:

dprk: { name: 'Kim Jong-un',
  spouse: 'Ri Sol-ju',
  dob: '8 January 1983',
  occupation: 'Clown' }

Advanced example

In this example we assume the data store contains numeric data, where ranges of data are stored with prefixes on the keys. Our example function takes a LevelUP instance and a range key prefix and uses a ReadStream to calculate the variance of the values in that range using an online algorithm:

function variance (db, prefix, callback) {
  var n = 0, m2 = 0, mean = 0

  db.createReadStream({
        start : prefix          // jump to first key with the prefix
      , end   : prefix + '\xFF' // stop at the last key with the prefix
    })
    .on('data', function (data) {
      var delta = data.value - mean
      mean += delta / ++n
      m2 = m2 + delta * (data.value - mean)
    })
    .on('error', callback)
    .on('close', function () {
      callback(null, m2 / (n - 1))
    })
}

Let’s say you were collecting temperature data and storing your keys in the form location~timestamp. Sampling approximately every 5 seconds and collecting temperatures in Celsius, we may have data that looks like this:

au_nsw_southcoast~1367487282112 = 18.23
au_nsw_southcoast~1367487287114 = 18.22
au_nsw_southcoast~1367487292118 = 18.23
au_nsw_southcoast~1367487297120 = 18.23
au_nsw_southcoast~1367487302124 = 18.24
au_nsw_southcoast~1367487307127 = 18.24

To calculate the variance, we can stream the values efficiently from the store simply by calling our function:

variance(db, 'au_nsw_southcoast~', function (err, v) {
  /* v = variance */
})
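The update inside the 'data' handler is Welford’s online algorithm. Extracted into a plain function (the name is mine, not part of LevelUP), it can be sanity-checked against the sample data above:

```javascript
// Welford's online algorithm, as used in the 'data' handler above,
// extracted so it can be run over a plain array.
function onlineVariance(values) {
  var n = 0, mean = 0, m2 = 0
  values.forEach(function (x) {
    var delta = x - mean
    mean += delta / ++n
    m2 = m2 + delta * (x - mean)
  })
  return m2 / (n - 1) // sample variance
}

var temps = [18.23, 18.22, 18.23, 18.23, 18.24, 18.24]
console.log(onlineVariance(temps)) // roughly 5.67e-5
```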


The concept of namespacing keys will probably be familiar if you’re used to using a key/value store of some kind. By separating keys by prefixes we create discrete buckets, much like a table in a traditional relational database is used to separate different kinds of data.

It may be tempting to create separate LevelDB stores for different buckets of data but you can take better advantage of LevelDB’s caching mechanisms if you can keep the data organised in a single store.

Because LevelDB is sorted, choosing a namespace separator character can have an impact on the order of your entries. A separator commonly used in NoSQL databases is ':'. However, this character lands in the middle of the list of printable ASCII characters (character code 58), so your entries may not end up being sorted in a useful order.

Imagine you’re implementing a web server session store with LevelDB and you’re prefixing keys with usernames. You may have entries that look like this:

rod.vagg:last_login    = 1367487479499
rod.vagg:default_theme = psychedelic 
rod1977:last_login     = 1367434022300
rod1977:default_theme  = disco
rod:last_login         = 1367488445080
rod:default_theme      = funky
roderick:last_login    = 1367400900133
roderick:default_theme = whoa

Note that these entries are sorted and that '.' (character code 46) and '1' (character code 49) come before ':'. This may or may not matter for your particular application, but there are better ways to approach namespacing.

At the beginning of the printable ASCII character range is '!' (character code 33), and at the end we find '~' (character code 126). Using '!' as the delimiter we find the following sorting for our keys:

rod!default_theme      = funky
rod!last_login         = 1367488445080
rod.vagg!default_theme = psychedelic
rod.vagg!last_login    = 1367487479499
rod1977!default_theme  = disco
rod1977!last_login     = 1367434022300
roderick!default_theme = whoa
roderick!last_login    = 1367400900133
But why stick to the printable range? We can go right to the edges of the single-byte character range and use '\x00' (null) or '\xff' (ÿ).

For best sorting of your entries, choose '\x00' (or '!' if you really can’t stomach it). But whatever delimiter you choose, you’re still going to need to control the characters that can be used as keys. Allowing user-input to determine your keys and not stripping out your delimiter character could result in the NoSQL equivalent of an SQL Injection Attack (e.g. consider the unintended consequences that may arise with the dataset above with a delimiter of '!' and allowing a user to have that character in their username).
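You can verify the effect of the separator with a plain string sort, since JavaScript compares strings by character code much as LevelDB’s default comparator compares bytes:

```javascript
// With ':' (58) as the separator, '.' (46) and '1' (49) sort first,
// so 'rod' lands after 'rod.vagg' and 'rod1977'.
var colonKeys = ['rod:a', 'rod.vagg:a', 'rod1977:a', 'roderick:a'].sort()
console.log(colonKeys)
// => [ 'rod.vagg:a', 'rod1977:a', 'rod:a', 'roderick:a' ]

// With '\x00' as the separator, the shortest username sorts first
// and each user's keys stay contiguous.
var nullKeys = ['rod\x00a', 'rod.vagg\x00a', 'rod1977\x00a', 'roderick\x00a'].sort()
console.log(nullKeys)
// => [ 'rod\x00a', 'rod.vagg\x00a', 'rod1977\x00a', 'roderick\x00a' ]
```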

Range queries

LevelUP’s ReadStream is the perfect range query mechanism. By combining 'start' and 'end', which just need to be approximations of actual keys, you can pluck out exactly the entries you want.

Using our namespaced dataset above, with '\x00' as the delimiter, we can fetch all entries for just a single user by crafting a ReadStream range query:

var entries = []

db.createReadStream({ start: 'rod\x00', end: 'rod\x00\xff' })
  .on('data', function (entry) { entries.push(entry) })
  .on('close', function () { console.log(entries) })

Would give us:

[ { key: 'rod\x00last_login', value: '1367488445080' },
  { key: 'rod\x00default_theme', value: 'funky' } ]

The '\xff' comes in handy here because we can use it to include every string of characters preceding it, so any of our user session keys will be included, as long as they don’t start with '\xff'. So again, you need to control the allowable characters in your keys in order to avoid surprises.
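The same comparison rules show why prefix and prefix + '\xff' bracket exactly the keys beginning with that prefix (assuming, as above, that '\xff' can’t appear in keys):

```javascript
// Every key beginning with the prefix compares >= prefix and
// <= prefix + '\xff'; keys for other users fall outside the range.
var prefix = 'rod\x00'
var keys = [
  'rod\x00last_login',
  'rod\x00default_theme',
  'rod.vagg\x00last_login',
  'roderick\x00last_login'
]

var inRange = keys.filter(function (k) {
  return k >= prefix && k <= prefix + '\xff'
})
console.log(inRange)
// => [ 'rod\x00last_login', 'rod\x00default_theme' ]
```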

Namespacing and range queries are heavily used by many of the libraries that extend LevelUP. In the final article in this series we’ll be exploring some of the amazing ways that developers are extending LevelUP to provide additional features, applications and complete databases.

If you want to jump ahead, visit the Modules page on the LevelUP wiki.

Book Review: The Meteor Book

02 May 2013 | By Alex Young | Comments | Tags meteor reviews books
Discover Meteor, by Sacha Greif and Tom Coleman

Sacha Greif sent me a copy of The Meteor Book, a new book all about Meteor that he’s writing with Tom Coleman, and will be released on May 7th. He was also kind enough to offer a 20% discount to DailyJS readers, which you can redeem at (available when the book is released).

The book itself is currently being distributed as a web application that allows the authors to collect early feedback. Each chapter is displayed as a page, with chapter navigation along the left-hand-side of the page and Disqus comments. There will also be downloadable formats like PDF, ePub, and Kindle.

The authors teach Meteor by structuring the book around building a web application called Microscope, based on the open source Meteor app Telescope. Each chapter is presented as a long form tutorial: a list of chapter goals is given, and then you’re walked through adding a particular feature to the app. Each code listing is visible on the web through a specific instance of the app, and every sample has a Git tag so it’s easy to look up the full source at any point. I really liked this aspect of the design of the book, because it makes it easier for readers to recover from mistakes when following along with the tutorial themselves (something many DailyJS readers contact me about).

Accessing a specific code sample is easy in The Meteor Book

There are also “sidebar chapters”, which are used to dive into technical topics in more detail. For example, the Deploying chapter is a sidebar, and explains Meteor deployment issues in general rather than anything specific to the Microscope app.

Although I work with Node, Express, Backbone.js (and increasingly AngularJS), I’m not exactly an expert on Meteor. The book is pitched at intermediate developers, so you’ll fly through it if you’re a JavaScript developer looking to learn about Meteor.

Something that appealed to me specifically was how the authors picked out points where Meteor is similar to other projects – there’s a section called Comparing Deps which compares Meteor’s data-binding system to AngularJS:

We’ve seen that Meteor’s model uses blocks of code, called computations, that are tracked by special ‘reactive’ functions that invalidate the computations when appropriate. Computations can do what they like when they are invalidated, but normally simply re-run. In this way, a fine level of control over reactivity is achieved.

They also explain the practical implications of some of Meteor’s design. For example, how hot code reloading relates to deployment and sessions, and how data can be limited to specific users for security reasons:

So we can see in this example how a local collection is a secure subset of the real database. The logged-in user only sees enough of the real dataset to get the job done (in this case, signing in). This is a useful pattern to learn from as you’ll see later on.

If you’ve heard about Meteor, then this book serves as a solid introduction. I like the fact a chapter can be digested in a single sitting, and it’s presented with some excellent diagrams and cleverly managed code listings. It’s not always easy to get started with a new web framework, given the sheer amount of disparate technologies involved, but this book makes learning Meteor fun and accessible.

Node Roundup: Caterpillar, squel, mongoose-currency

01 May 2013 | By Alex Young | Comments | Tags node modules sql databases mongo mongoose
You can send in your Node projects for review through our contact form.


Benjamin Lupton sent in Caterpillar (GitHub: bevry / caterpillar, License: MIT, npm: caterpillar), which is a logging system. It supports RFC-standard log levels, but the main reason I thought it was interesting is it’s based around the streams2 API.

By piping a Caterpillar stream through a suitable instance of stream.Transform, you can do all kinds of cool things. For example, caterpillar-filter can filter out unwanted log levels, and caterpillar-human adds fancy colours.


I was impressed by Brian Carlson’s sql module, and Ramesh Nair sent in squel (GitHub: hiddentao / squel, License: BSD, npm: squel) which is a similar project. This SQL builder module supports non-standard queries, and has good test coverage with Mocha.

Ramesh has included some client-side examples as well, which sounds dangerous but may find uses, perhaps by generating SQL fragments to be used by an API that safely escapes them, or for generating documentation examples.


mongoose-currency (GitHub: catalystmediastudios / mongoose-currency, License: MIT, npm: mongoose-currency) by Paul Smith adds currency support to Mongoose. Money values are stored as an integer that represents the lowest unit of currency (pence, cents). Input can be a string that contains a currency symbol, commas, and a decimal.

The Currency type works by stripping non-numerical characters. I’m not sure if this will work for regions where numbers use periods or spaces to separate groups of digits – it seems like this module would require localisation support to safely support anything other than dollars.
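The stripping approach can be sketched like this (an illustration of the idea rather than mongoose-currency’s actual code, and it assumes '.' is the decimal separator):

```javascript
// Store money as an integer count of the smallest unit (cents, pence)
// by stripping everything except digits, '.' and '-'.
function toSmallestUnit(input) {
  var cleaned = String(input).replace(/[^0-9.\-]/g, '');
  return Math.round(parseFloat(cleaned) * 100);
}

console.log(toSmallestUnit('$1,234.56')); // => 123456
// A locale that writes '1.234,56' would be misread, which is
// the localisation problem described above.
```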

Paul has included unit tests written with Mocha, so it could be extended to support localised representations of currencies.

jQuery Roundup: Sco.js, Datepicker Skins, LocationHandler

30 Apr 2013 | By Alex Young | Comments | Tags jquery plugins jquery-ui datepicker bootstrap history
Note: You can send your plugins and articles in for review through our contact form.


Sco.js (GitHub: terebentina / sco.js, License: Apache 2.0) by Dan Caragea is a collection of Bootstrap components. They can be dropped into an existing Bootstrap project, or used separately as well.

Some of the plugins are replacements of the Bootstrap equivalents, but prefixed with $.scojs_. There are also a few plugins that are unique to Sco.js, like $.scojs_valid for validating forms, and $.scojs_countdown for displaying a stopwatch-style timer.

In cases where Sco.js replaces Bootstrap plugins, the author has been motivated by simplifying the underlying markup and reducing the reliance on IDs.

Dan has included tests, and full documentation for each plugin.

jQuery Datepicker Skins

jQuery datepicker skins

Artan Sinani sent in these jQuery datepicker skins (GitHub: rtsinani / jquery-datepicker-skins). They’re tested with jQuery UI v1.10.1 and jQuery 1.9.1, so they should work with new projects quite nicely.


LocationHandler (GitHub: slv / LocationHandler) by Michele Salvini is a plugin for managing pushState and onpopstate. It emits events for various stages of the history change life cycle. Each supported state is documented in the readme, but the basic usage looks like this:

$(document).ready(function() {
  var locationHandler = new LocationHandler({
    locationWillChange: function(change) {
      // the location is about to change
    },
    locationDidChange: function(change) {
      // the location has changed
    }
  });
});
The change object has properties for the from/to URLs, and page titles as well.

Packery, Gumba, watch-array

29 Apr 2013 | By Alex Young | Comments | Tags browser libraries ui arrays events



Victor sent in Packery (GitHub: metafizzy / packery, License: MIT/Commercial, bower: packery) from Metafizzy, which is a bin packing library. It organises elements to fit around the space available. Certain elements can be “stamped” into a specific position, fit an ideal spot, or be draggable.

Packery can be configured in JavaScript using the Packery constructor function, or purely in HTML using a class and data attributes. jQuery is not required, but the project does have some dependencies, so the authors recommend installation with Bower.

The project can be used under the MIT license, but commercial projects require a license that starts at $25.


Gumba (GitHub: welldan97 / gumba, License: MIT, npm: gumba) by Dmitry Yakimov is CoffeeScript for the command-line:

$ echo '1234567' | gumba 'toNumber().numberFormat()'

It’s a bit like Awk or sed, but for the chainable text operations supported by CoffeeScript and Underscore.string.


watch-array (GitHub: azer / watch-array, License: BSD, npm: watch-array) by Azer Koçulu causes arrays to emit events when mutator methods are used. Usage is simple – just call watchArray on an array, and pass it a callback that will be triggered when the array changes:

var watchArray = require('watch-array');
var people = ['Joe', 'Smith'];

watchArray(people, function(update) {
  console.log(update.add);
  // => { 1: Taylor, 2: Brown }

  console.log(update.remove);
  // => [0]
});

people.shift();
people.push('Taylor', 'Brown');

In a way this is like a micro version of what data binding frameworks implement. The author has included tests written with his fox test framework.

Yeoman Configstore, Debug.js, Sublime JavaScript Refactoring

26 Apr 2013 | By Alex Young | Comments | Tags yeoman libraries browser node debugging editors


Sindre Sorhus sent in configstore (GitHub: yeoman / configstore, License: BSD, npm: configstore), a small module for storing configuration variables without worrying about where and how. The underlying data file is YAML, and stored in $XDG_CONFIG_HOME.

Configstore instances are used with a simple API for getting, setting, and deleting values:

var Configstore = require('configstore');
var packageName = require('./package').name;

var conf = new Configstore(packageName, { foo: 'bar' });

conf.set('awesome', true);
console.log(conf.get('awesome'));  // true
console.log(conf.get('foo'));      // bar

conf.del('awesome');
console.log(conf.get('awesome'));  // undefined

The Yeoman repository on GitHub has many more interesting server-side and client-side modules – currently most projects are related to client-side workflow, but given the discussions on Yeoman’s Google+ account I expect there will be an increasing number of server-side modules too.


Jerome Etienne has appeared on DailyJS a few times with his WebGL libraries and tutorials. He recently released debug.js (GitHub: jeromeetienne / debug.js, License: MIT), which is a set of tools for browser and Node JavaScript debugging.

The tutorial focuses on global leak detection, which is able to display a trace that shows where the leak originated. Another major feature is strong type checking for properties and function arguments.

Methods can also be marked as deprecated, allowing debug.js to generate notifications when such methods are accessed.

More details can be found on the debug.js project page.

Sublime Text Refactoring Plugin

Stephan Ahlf sent in his Sublime Text Refactoring Plugin (GitHub: s-a / sublime-text-refactor, License: MIT/GPL). The main features are method extraction, variable and function definition navigation, and renaming based on scope.

The plugin uses Node, and has some unit tests written in Mocha. The author is planning to add more features (the readme has a to-do list).