Dynamic Invocation Part 1: Methods

26 Oct 2012 | By Justin Naifeh | Comments | Tags reflection dynamic invocation method

Today we will explore dynamic invocation, which is a clever way of saying “invoke a method using the string method name.” Many people, including myself, refer to this as reflection, which is terminology borrowed from Java’s Reflection library. Because we don’t want to be sued, especially on a Friday, this will be known as dynamic invocation in the article, although it probably goes by other names.

JavaScript benefits from dynamic invocation because the capability is intrinsic to the language; no library is required. This means we can leverage dynamic invocation in all JavaScript environments! The question is…how?

The Interview Question

This is one of many interview questions I created at my company to test a developer’s knowledge of the JavaScript language.

We start with a basic domain class called Employee.


In case you want to float code in your head or test it yourself, here’s the (optional) reference implementation.

/**
 * Constructor for the Employee class.
 * @param String name The full name of the employee.
 * @param String position The employee's position.
 */
var Employee = function(name, position) {
  this._name = name;
  this._position = position;
};

Employee.prototype = {
  /**
   * @return String The Employee's name.
   */
  getName: function() {
    return this._name;
  },
  /**
   * @return String The Employee's position.
   */
  getPosition: function() {
    return this._position;
  },
  /**
   * Promotes an Employee to a new position.
   * @param String The Employee's new position.
   */
  promote: function(newPosition) {
    this._position = newPosition;
  }
};

We assume the following instance of Employee for the question:

var emp = new Employee("Tom Anders", "junior developer");

After years of hard work, Tom Anders has been promoted to “senior developer,” which must be evaluated by the code. The application in which this is hosted, due to its architecture, does not have direct knowledge of the promote() method. Instead, only the method name is known as a string. Given the following code setup, how does one promote Tom to senior developer?

var emp = new Employee("Tom Anders", "junior developer");

// ...

var method = "promote";
var newPosition = "senior developer";

// promote emp [Tom] to the position of newPosition
// ...???

How would you execute promote() in the context of emp?

There are two answers, both of which are correct, although one is more dynamic and powerful than the other.

The first answer casts the mantra eval is evil aside in the name of convenience. By concatenating the variable name with the method name and input, a developer can promote the employee.

eval("emp." + method + "('" + newPosition + "')");

console.log(emp.getPosition()); // "senior developer"

This works…for now. But keep in mind that not only is the arbitrary execution of code insecure, new variables cannot be created by eval() in ES5’s strict mode. The solution will not scale over time.

Without eval(), how could we possibly invoke the method? Remember that methods are like any other properties: once attached to an object or its prototype, they become accessible via bracket notation.

console.log(emp["promote"] === emp[method]); // true
console.log(emp[method] === emp.promote); // true
console.log(emp[method] instanceof Function); // true

Knowing this we can dynamically invoke the method with the arguments we want.

emp[method](newPosition);

console.log(emp.getPosition()); // "senior developer"

This style of invocation is effective but vanilla. It assumes the number of arguments is known and that the this context is the object referencing the method. This is only the tip of the iceberg, for invocation can be much more dynamic when the arguments and this context are unknown beforehand.

Before advancing, note that each object has an implicit property that references its defining class’s prototype, thus solidifying the relationship between a class and its instances.

console.log(emp.promote === Employee.prototype.promote); // true
console.log(emp.__proto__ === Employee.prototype); // true
console.log(emp.__proto__.promote === Employee.prototype.promote); // true

With this in mind, we can combine class, prototype, and object references with the string method to achieve truly dynamic invocations. This will help us demystify calls ubiquitous in framework and library code (e.g., Array.prototype.slice.call(arguments, 0, arguments.length)).
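That slice call is the pattern in miniature: a method looked up on a prototype, then executed against a different this context. A small sketch (the toArray helper is invented here for illustration):

```javascript
// Borrow Array.prototype.slice to turn the array-like `arguments`
// object into a real array. The method is resolved statically from
// Array.prototype, then invoked with `arguments` as its context.
function toArray() {
  return Array.prototype.slice.call(arguments, 0, arguments.length);
}

var copied = toArray('a', 'b', 'c');
console.log(Array.isArray(copied)); // true
console.log(copied.join('-'));      // "a-b-c"
```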

call() and apply()

Just as the class Employee defines promote(), the class Function defines the methods call() and apply(), which are methods on methods. In other words, methods (instance/member functions) are first-class objects in JavaScript, a testament to the language's functional nature!

The methods call() and apply() take the same first argument: the this context of execution. After that, call() takes any number of arguments in explicit order, while apply() takes a single array whose elements are passed along in order as if call() had been used. Using the class's prototype we can statically access methods yet execute them with a specific context.

// call
Employee.prototype[method].call(emp, newPosition);

// apply
Employee.prototype[method].apply(emp, [newPosition]);

console.log(emp.getPosition()); // "senior developer"

This syntax can also be used with the object itself because emp[method] === Employee.prototype[method].

// call
emp[method].call(emp, newPosition);

// apply
emp[method].apply(emp, [newPosition]);

console.log(emp.getPosition()); // "senior developer"

It might seem redundant to specify the context as emp when invoking promote() on that object, but this is the required syntax, and it allows for variable arguments.
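To see why that matters, suppose the arguments arrive at runtime as an array of unknown length; apply() handles this directly (Employee here is a pared-down version of the class defined above):

```javascript
// Pared-down Employee, as defined earlier in the article.
var Employee = function(name, position) {
  this._name = name;
  this._position = position;
};
Employee.prototype.getPosition = function() { return this._position; };
Employee.prototype.promote = function(newPosition) {
  this._position = newPosition;
};

var emp = new Employee("Tom Anders", "junior developer");

// Neither the method name nor its arguments are hardcoded:
var method = "promote";
var args = ["senior developer"]; // built at runtime, any length

emp[method].apply(emp, args);
console.log(emp.getPosition()); // "senior developer"
```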

Regardless of the syntax used, dynamic invocation is extremely flexible and introspective. Once the class, prototype, object, and property relationships are understood, a developer can manipulate the language and its constructs in clever ways. Unsolvable problems become solvable.


Dynamic invocation is useful when generating JavaScript code from the server or when matching aspect-oriented rules. And sometimes it's just helpful for avoiding the rightly discouraged eval().

In part 2 I will go into detail about dynamic invocation and constructors.

Giveaway: Node Cookbook

25 Oct 2012 | By Alex Young | Comments | Tags node giveaways books

This competition has now closed.

Packt Publishing has five copies of Node Cookbook by David Mark Clements to give away. Node Cookbook is a recipe-style book that aims to teach you everything you need to be able to publish modules with npm, work with databases, and handle streams, and it also includes plenty of coverage of web development with Express.

If you’re a client-side developer looking to learn more about Node, then it’s a great jumping off point.

The Rules

  • Post a message using our general contact form. Winners will be chosen at random.
  • Winners will be contacted by email.
  • The competition has ended.

Node Roundup: Express 3.0, Declare, Sourcery

24 Oct 2012 | By Alex Young | Comments | Tags node modules http frameworks express object-oriented
You can send in your Node projects for review through our contact form or @dailyjs.

Express 3.0

Express 3.0 (GitHub: visionmedia / express, License: MIT, npm: express) has been released, announced by TJ in a blog post with more details on Express 3.0 and Connect 2.

There’s a 2.x to 3.x migration guide and a list of Express 3.x features. In particular, express() now returns a Function that can be used with Node’s http.createServer, you’ll need to update the way helpers work, and there are a few changes to the request and response objects.

The Express website now also includes a list of applications powered by Express, which is useful for those of us evaluating frameworks for use in a new project.


Declare

Declare (GitHub: doug-martin / declare.js, License: MIT, npm: declare.js) by Doug Martin is an attempt to help write object-oriented code that runs on both the client and server. The resulting classes can be used with RequireJS, and support features like mixins, super methods, static methods, and getters and setters.

The author has included some simple tests, and detailed usage examples.


Sourcery

Sourcery (License: MIT, npm: sourcery) by Veselin Todorov is designed for creating RESTful API clients. It’s influenced by ActiveResource, so it should work well with many CRUD-oriented REST APIs. It supports basic authentication as well:

var BasicAuth = require('sourcery').BasicAuth;

var Base = Resource.extend({
  host: 'http://example.com/api/v1'
, auth: {
    type:  BasicAuth
  , user: 'replace-with-real-user'
  , pass: 'replace-with-real-pass'
  }
});
It includes Mocha tests, and is built using the popular request HTTP client library by Mikeal Rogers.

jQuery Roundup: Fuel UX, uiji.js, QuoJS

23 Oct 2012 | By Alex Young | Comments | Tags jquery jquery-ui bootstrap mobile
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

Fuel UX

Fuel UX

Fuel UX (GitHub: ExactTarget / fuelux, License: MIT) from ExactTarget is a lightweight library that extends Twitter Bootstrap with some additional JavaScript controls, a Grunt build, and AMD compatibility. At launch, the following controls are included:

  • Combobox - combines input and dropdown for easy and flexible data selection
  • Datagrid - renders data in a table with paging, sorting, and searching
  • Pillbox - manages selected items with color-coded text labels
  • Search - combines input and button for integrated search interaction
  • Spinner - provides convenient numeric input with increment and decrement buttons

The project is well-documented, covered in unit tests, and outside contributions are welcome and encouraged.

Contributed by Adam Alexander



uiji.js

uiji.js (GitHub: aakilfernandes / uiji) by Aakil Fernandes is a clever hack that inverts jQuery by allowing CSS selectors to create elements. This will create a paragraph with the class greeting that contains the text Hello World!:

$('#helloWorld .output').uiji('p.greeting"Hello World!"')

Callbacks can be used to create hierarchy, and the API is chainable because the plugin returns $(this) once it has processed the input.


QuoJS

QuoJS (GitHub: soyjavi / QuoJS, License: MIT) by Javier Jiménez is a small library for mobile development. It supports HTML traversal and abstractions for touch-based gestures. It doesn’t require jQuery, but has a similar API:

// Subscribe to a tap event with a callback
$$('p').tap(function() {
  // Affects "span" children/grandchildren
  $$('span', this).style('color', 'red');
});

The same author has written a few other ambitious projects, including Monocle (GitHub: soyjavi / monocle, License: MIT), which is an MVC framework for CoffeeScript application development.

Worth Starring: Components

22 Oct 2012 | By Alex Young | Comments | Tags libraries client-side starred

Back in July, TJ Holowaychuk wrote a post entitled Components in which he outlined an approach to JavaScript project distribution. In the post he discusses CSS and the relationship with modernisation, asset bundling and packaging, require fragmentation and package distribution, and the potential move away from libraries like jQuery in the future.

An example of a calendar component.

Almost four months later I found myself wondering what TJ and his collaborators had done since then. I was surprised to find 123 public repositories in the component GitHub account. There are repositories with updates as recent as a day ago.

It’s impressive how many of the repositories are truly generic, given the number of them. For example, enumerable helps iterate over sets of objects – a very focused slice of Underscore’s functionality. Other repositories are reminiscent of popular jQuery plugins, and come bundled with cut-down CSS and structural markup, as promised by TJ’s post. The menu component is a good example of this.

There’s also a search-api repository for the component registry.

TJ attempted to formalise the structure of components in the Component Spec. The specification includes details on a JSON manifest file, project directory structure, and submission to the registry.

It would be entirely possible to use these repositories alongside jQuery-powered client-side apps. Quite a few will work well in Node. I haven’t seen many projects using these components yet, but lots of them are worth adding to your GitHub starred repositories.

Command-Query Separation in Practice

19 Oct 2012 | By Justin Naifeh | Comments | Tags software design command query separation

Let’s end the week by learning about a software design principle that is easy to follow, yet a violation of it can cause bugs and coincidental correctness.

Command-query Separation (CQS) is the principle by which a method is defined to perform a query, execute a command, but never both. A query returns information about the state of the system, which can include anything from primitive data to objects, or even complex aggregation results. A command, on the other hand, changes the state of the system, usually by writing (persisting) modified objects or data. If we are working with a domain to manage Person objects a query method to find a person by name could be PersonQuery.find(name), and a command to persist an instance could be Person.apply().
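As a minimal sketch of that split (the in-memory store and class shapes here are invented for illustration):

```javascript
// Query side: reports state, never changes it.
function PersonQuery(store) {
  this._store = store;
}
PersonQuery.prototype.find = function(name) {
  return this._store[name] || null;
};

// Command side: changes state, returns nothing.
function Person(name, store) {
  this.name = name;
  this._store = store;
}
Person.prototype.apply = function() {
  this._store[this.name] = this; // persist into the (in-memory) store
};

var store = {};
var query = new PersonQuery(store);

console.log(query.find("ada"));      // null -- the query has no side effect
new Person("ada", store).apply();    // the command persists
console.log(query.find("ada").name); // "ada"
```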

Normal Violations

While CQS is a sensible principle, it is violated in everyday practice. This is not as troublesome as it sounds, depending on the type of violation. Strict adherence to CQS makes some methods impossible, such as a query method that counts its own invocations, thus allowing a developer to query the number of queries.

/**
 * Finds a user by name and increments the
 * query counter.
 * @return the found Person or null if one was not found
 */
PersonQuery.prototype.find = function(name) {
  this._queriesPerformed++;

  // get the Person from the datasource or null if not found
  var person = this.dataSource.find("Person", name);
  return person;
};

/**
 * @return the number of queries performed.
 */
PersonQuery.prototype.getQueriesPerformed = function() {
  return this._queriesPerformed;
};

We could create an incrementQueriesPerformed() method instead of incrementing the counter within find(), but that would be excessive and burden the calling code.

// now the developer must synchronize the state of PersonQuery. Yikes!
var person = personQuery.find("someuser");
personQuery.incrementQueriesPerformed();

In practice, CQS is rarely taken in its pedantic form because doing so would make certain types of batched/transactional calls cumbersome. Also, multi-threaded and asynchronous tasks would be difficult to coordinate.

Two classic violations are the Stack.pop() and HashMap.remove() methods, the latter of which appeared in the LinkedHashMap article (inspired by Java’s Map interface).

var map = new LinkedHashMap();
map.put('myNum', 5);

// ...

var val = map.remove('myNum');
alert(val); // prints "5"

The remove() method is both a command and query because it removes the value associated with the key (command) and returns the value, if any, that was associated with the key (query). While the violation is subtle, the same behavior could have been implemented with a remove():void method.

var map = new LinkedHashMap();
map.put('myNum', 5);

// ...

var val = map.get('myNum');
map.remove('myNum');
alert(val); // prints "5"

As with many instances, the code to avoid CQS violation is clunky and overkill.

Although there are often violations of CQS at the method scale, many libraries violate CQS as an integral approach to their API design by combining both getter and setter behavior within the same method.

var person = new Person("someuser");

// one possibility
person.enabled(true); // set enabled to true
var enabled = person.enabled(); // return true

// another possibility
person.value('enabled', true); // set enabled to true
var enabled = person.value('enabled'); // return true

I personally do not prefer this flavor of behavior overloading–instead recommending getEnabled() and setEnabled(Boolean)–but as long as the API is intuitive and documented then it’s generally not a problem.

Dangerous Violations

There are, however, instances where CQS violations cause side effects that can be difficult to debug. This usually happens when code tries to be too clever or helpful by merging command and query behavior.

Consider the following code to query and create a Person:

var person = personQuery.find("someuser");
if (person === null) {
  person = new Person("someuser");
  person.apply(); // persisted
  console.log("Created the Person ["+person+"]");
} else {
  console.log("Person ["+person+"] already exists.");
}

The code attempts to find a Person and create one if it does not exist; further calls to find() will correctly return the Person. The logic is trivial and easy to understand, but imagine that a developer decides to reduce the effort by changing find() to create a Person if a result is not found.

PersonQuery.prototype.find = function(name) {
  var person = this.dataSource.find("Person", name);

  if (person === null) {
    person = new Person(name);
    person.apply(); // persisted
  }
  return person;
};

It can be argued that by combining both query and command behavior into find(), calling code can remove the if/else checks for null, thus making it simpler.

var person = personQuery.find("someuser");
console.log("Person ["+person+"] exists.");

The amount of code has been reduced, but we must consider other circumstances in which find() will be called.

As an example, when a user is creating a new Person via a standard HTML form, the system calls find() to ensure the name is not already taken. When find() was just a query method it could be called any number of times without affecting the system. But now it creates the Person it is attempting to find so a uniqueness constraint violation will occur.

// request one
var person = personQuery.find("newuser"); // returns newuser
if (person !== null) {
  alert("The name is already taken.");
}

// ... however, in a second request

// request two
var person = personQuery.find("newuser"); // Error (duplicate name)
if (person !== null) {
  // we can never reach this
  alert("The name is already taken.");
}

By adding command behavior to find(), the developer has accidentally broken the code. The original post-condition, the condition known to the code at the time, changed from returns null if no match was found to creates a Person and returns it if no match was found. Now the developer must spend time to refactor the original code, but maybe it’s legacy or outside of his or her control. Care must be taken when changing method conditions.

One solution is to name the method more aptly to describe its behavior: PersonQuery.findOrCreate(). Then the method find() could be added as a pure query alternative, and calling code can choose which one to invoke under the circumstances.
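A sketch of that naming, with find() left as a pure query (the in-memory persistence is a stand-in for the article's hypothetical data source):

```javascript
// Hypothetical in-memory stand-ins for the article's domain.
var db = {};
function Person(name) { this.name = name; }
Person.prototype.apply = function() { db[this.name] = this; };

function PersonQuery() {}

// Pure query: no side effects, safe to call any number of times.
PersonQuery.prototype.find = function(name) {
  return db[name] || null;
};

// Command + query combined, and the name says so.
PersonQuery.prototype.findOrCreate = function(name) {
  var person = this.find(name);
  if (person === null) {
    person = new Person(name);
    person.apply(); // persisted
  }
  return person;
};

var q = new PersonQuery();
console.log(q.find("someuser"));              // null, no matter how often it's called
console.log(q.findOrCreate("someuser").name); // "someuser"
console.log(q.find("someuser").name);         // "someuser"
```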

Another solution is to not change find() at all. It worked before, and it will work in the future.


Strict adherence to CQS is nearly impossible to achieve in normal application programming, although some special applications might require such an architecture.

What should be kept in mind is that a looser definition of CQS is valid and useful in practice. A method should stick to being a command or query, but if both behaviors are needed the method name should reflect its dual nature and a non-query/command alternative should be offered.

Totally Testacular

18 Oct 2012 | By Alex Young | Comments | Tags libraries testing node browser

Vojta Jina created Testacular (GitHub: vojtajina / testacular, License: MIT, npm: testacular) to write better tests for AngularJS, Google’s client-side library that’s rapidly gaining support from the wider web development community.

While established projects like Selenium are currently more complete than Testacular, the goal of bringing 100% of Angular’s tests to Testacular is an important milestone that should bring more attention to the project.

Testacular diagram

The design of Testacular itself is actually fairly lightweight. It uses a built-in Node-powered web server to send a test harness to a browser, and then Socket.IO is used for communication between the browser and the server. A special page, /context.html, is used to contain tests, and a Node file watcher automatically runs tests when the associated test files change.

A configuration file is used to pull all of this together. It’s basically JavaScript, but uses special variables to set configuration options. For example, files is an array that holds a list of files to load in the browser, and reporters controls how test results are displayed.
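Pulling those pieces together, a Jasmine-flavoured testacular.conf.js might look roughly like this (the paths are placeholders; JASMINE and JASMINE_ADAPTER are among the special variables the configuration file understands):

```javascript
// testacular.conf.js -- illustrative values only
basePath = '';

// files to load in the browser
files = [
  JASMINE,
  JASMINE_ADAPTER,
  'src/*.js',
  'test/*.js'
];

// how test results are displayed
reporters = ['progress'];

// browsers to launch
browsers = ['Chrome'];

// re-run tests whenever a watched file changes
autoWatch = true;
```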

Testacular automatically runs a browser instance when testacular start is invoked on the command-line. The entire workflow is optimised around development, so you shouldn’t have to leave your editor to see results. Also, a new instance of a browser is loaded – if you set up Chrome to run tests then your existing Chrome session won’t be interfered with.

Browsers are scripted through “launcher” files. The Chrome launcher includes command-line invocations suitable for Linux, Windows, and Mac OS. It’s also possible to use Internet Explorer and PhantomJS.

Testing cross-platform is also supported through custom launcher files. I found a custom browser example by digging through the project’s documentation. This is intended for invoking browsers through a virtual machine on a Continuous Integration (CI) server, but this could be adapted for local development. In fact, supporting Travis CI was one of Vojta’s initial goals for the project.

Trying Testacular

The Testacular readme has some details on how to get started writing Testacular tests. It’ll work with both Jasmine and Mocha. Let’s say you’re using Jasmine to test an existing project, and have added testacular to your package.json or installed it. All you need to do is run testacular init to create a configuration file, edit it with suitable values for your project, and then start up the Testacular server with testacular start.

Assuming you’ve created a Jasmine Testacular config file, change the files argument as follows:

files = [
  JASMINE,
  JASMINE_ADAPTER,
  'test/*.js'
];

And then add a test file to test/, anything will do:

describe('Array', function(){
  describe('#indexOf()', function(){
    it('should return -1 when the value is not present', function(){
      expect([1, 2, 3].indexOf(5)).toEqual(-1);
    });
  });
});

Now running testacular start will start up the tests. You should see something like the following screenshot – if not, check your basePath.

Testacular running on a Mac with Chrome


It’s good to see interesting spin-offs coming from MVC projects. Vojta has been working on Testacular for a while now, and it seems like his original plans are coming to fruition. If you’re frustrated with Selenium and work with Node, it might be a natural fit.

P.S. Writing this article without accidentally typing "testicular" was a formidable challenge.

Node Roundup: 0.8.12, Node CSV, Memoize

17 Oct 2012 | By Alex Young | Comments | Tags node modules csv functional
You can send in your Node projects for review through our contact form or @dailyjs.

Node 0.8.12

Node 0.8.12 is out, which has some fixes for Windows, the Buffer and HTTP modules, and the REPL. I upgraded as soon as this release came out, and it’s running all of my stuff fine as far as I can tell.

Node CSV

Node CSV (GitHub: wdavidw / node-csv-parser, License: New BSD, npm: csv) by David Worms is a streaming CSV parser. By implementing stream readers and writers, this module can parse CSV with lower memory overhead than approaches that read the entire file into memory.

It can be used with streams created by fs.createReadStream, and David has added a more convenient property-based API as well:

csv()
  .from.path('data.csv')
  .to.string(function(data) { console.log(data); });

To set options, from.options({ option: 'value' }) can be used. This supports the usual CSV parser settings, like field delimiters and quoting. The project has Mocha tests, and there’s a growing list of contributors.


Memoize

Memoize (License: MIT, npm: memoize) by Mariusz Nowak is a memoize module that implements pretty much everything related to memoization that I can think of. It works in both Node and browsers, works with function arguments without serialising arguments, supports asynchronous functions, and has cache management features.

The “primitive mode” is interesting because it’s optimised for large amounts of data. In fact, it made me think back to Map and WeakMap from Monday’s article on ES6 for Node. It seems like caching would get a boost from these ES6 features.
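The core trick behind any memoizer is easy to sketch, though it's worth stressing that this toy, with its JSON-serialised keys, is precisely what the module improves upon:

```javascript
// Toy memoizer: caches results keyed on the serialised argument list.
// The memoize module avoids this serialisation and adds async support
// and cache management; this only illustrates the basic idea.
function memoizeSimple(fn) {
  var cache = {};
  return function() {
    var key = JSON.stringify(Array.prototype.slice.call(arguments));
    if (!(key in cache)) {
      cache[key] = fn.apply(this, arguments);
    }
    return cache[key];
  };
}

var calls = 0;
var square = memoizeSimple(function(n) { calls++; return n * n; });

console.log(square(4)); // 16
console.log(square(4)); // 16 -- cached, the underlying function ran once
console.log(calls);     // 1
```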

jQuery Roundup: jq-tiles, plusTabs, Kwicks

16 Oct 2012 | By Alex Young | Comments | Tags jquery jquery-ui tabs plugins slideshow
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.



jq-tiles

jq-tiles (GitHub: elclanrs / jq-tiles, License: MIT) is a slideshow plugin that breaks images up into tiles and uses CSS3-based effects. The number of tiles can be changed, and the transition and animation speeds can be configured.

To use the plugin, call $('.slider').tilesSlider(options) on an element that contains a set of images. Events are used to stop and start the slideshow: $('.slider').trigger('start').


plusTabs

plusTabs compared with standard tabs

plusTabs (GitHub: jasonday / plusTabs, License: MIT/GPL) by Jason Day groups jQuery UI tabs under a tab with a menu. Jason’s example is scaled to a slim resolution that might be found on a smartphone, showing how jQuery UI tabs become cluttered and messy in such circumstances.


Kwicks

Kwicks (GitHub: jmar777 / kwicks, License: MIT) by Jeremy Martin is a sliding panel plugin. It can display vertical or horizontal panels, and grow or shrink them on hover. It can also be used to create a slideshow.

Kwicks works with nested elements like an unordered list, but it’ll actually work with any tag, so <li> isn’t hardwired.

ES6 for Node

15 Oct 2012 | By Alex Young | Comments | Tags node ES6 tutorials

In Harmony of Dreams Come True, Brendan Eich discusses the “new-in-ES6 stuff” that is starting to come to fruition. Although his discussion mostly focuses on Mozilla-based implementations, he does relate upcoming language features to a wide range of JavaScript projects, including games. This is relevant to Node developers because ECMAScript 6 is happening, and changes are already present in V8 itself.

Let’s look at some of these changes in a moment. For now you might be wondering how to track such changes as they become available in Node. When new builds of Node are released, the version of V8 is usually mentioned if it has changed. You can also view the commit history on GitHub for a given release tag to see what version of V8 has been used, or take a look at the value of process.versions:

~  node -e 'console.log(process.versions)'
{ http_parser: '1.0',
  node: '0.8.12',
  v8: '',
  ares: '1.7.5-DEV',
  uv: '0.8',
  zlib: '1.2.3',
  openssl: '1.0.0f' }

Once you’ve got the V8 version, you can take a look at the V8 ChangeLog to see what has been included. Just searching that text for “Harmony” shows the following for Node 0.8.12:

  • Block scoping
  • Harmony semantics for typeof
  • let and const
  • Map and WeakMap
  • Module declaration
  • The Proxy prototype

Running Node with Harmony Options

Typing node --v8-options shows all of the available V8 options:

  • --harmony_typeof: Enable harmony semantics for typeof
  • --harmony_scoping: Enable harmony block scoping
  • --harmony_modules: Enable harmony modules (implies block scoping)
  • --harmony_proxies: Enable harmony proxies
  • --harmony_collections: Enable harmony collections (sets, maps, and weak maps)
  • --harmony: Enable all harmony features (except typeof)

To actually use one of these options, just include it when running a script:

node --harmony script.js

Example: typeof

The --harmony_typeof option is special because it isn’t included with --harmony; this is most likely because the proposal was rejected: harmony:typeof_null. The possibility of a proposal being rejected is part of working with cutting-edge language features – if you’re unsure about the status of a given feature, the best thing to do is search the ECMAScript DokuWiki.

With this option enabled, typeof null === "null" is true.

Example: Type Checking

Standard Node 0.8 without the --harmony flag supports Number.isNaN and Number.isFinite. However, Number.toInteger and Number.isInteger don’t seem to be supported yet.

var assert = require('assert');

assert(Number.isNaN(NaN));
assert(Number.isFinite(10));

Example: Block Scoping

Strict mode helps fix a major JavaScript design flaw: a missing var statement makes a variable globally visible. ES6 goes a step further by introducing let which can be used to create block-local variables. The following example must be run with node --use-strict --harmony:

for (let i = 0; i < 3; i++) {
  console.log('i:', i);
}

console.log(i);

The final statement, console.log(i), will cause a ReferenceError to be raised. The variable i is out of scope. Great, but doesn’t that mean forgetting let will just create a global? No, because in that case strict mode causes a ReferenceError to be raised.

The advantages of let are paired with const – by declaring a constant in global code the semantics are clear, and leaking uninitialised properties into the global object is avoided.
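On a modern engine the pairing behaves like this (on 0.8 it needs node --use-strict --harmony, where the const semantics were still settling):

```javascript
'use strict';

// const pairs with let: block-scoped, and a constant cannot be
// reassigned after initialisation.
const LIMIT = 3;

let total = 0;
for (let i = 0; i < LIMIT; i++) {
  total += i;
}
console.log(total); // 0 + 1 + 2 = 3

try {
  LIMIT = 4; // assigning to a constant throws in strict mode
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```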

Example: Collections

ES6 adds new APIs for dealing with groups of values: Map, Set, and WeakMap. The Map constructor allows any object or primitive value to be mapped to another value. This is confusing because it sounds similar to plain old objects, but that’s only because we often use objects to implement what maps are designed to solve more efficiently.

var assert = require('assert')
  , m = new Map()
  , key = { a: 'Test' }
  , value = 'a test value'

m.set(key, value);

assert.equal(m.get(key), value);

This example shows that map keys don’t need to be converted to strings, unlike with objects.

Node also currently has Set when running with --harmony, but instantiation with an array doesn’t seem to work yet, and neither does Set.prototype.size.

var assert = require('assert')
  , s = new Set();

s.add('alex');
assert(s.has('alex'));


Finally, WeakMap is a form of map with weak references. Because WeakMap holds weak references to objects, the keys are not enumerable. The advantage of this is the garbage collector can remove entries when they’re no-longer in use. To justify the relevance of WeakMap, Brendan mentioned the Ephemeron:

Ephemerons solve a problem which is commonly found when trying to “attach” properties to objects by using a registry. When some property should be attached to an object, the property should (in terms of GC behavior) typically have the life-time that an instance variable of this object would have.

So the WeakMap API should give us a memory-efficient and faster-than-O(n) key/value map.
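The registry idea from that quote can be sketched with the parts of the API V8 does implement (set, get, has); the DOM element here is simulated with a plain object:

```javascript
// WeakMap as a side table: attach data to an object without
// modifying the object, and without preventing garbage collection.
var registry = new WeakMap();

var element = { id: 'profile-box' }; // stands in for a DOM node

registry.set(element, { clicks: 0 });
registry.get(element).clicks++;

console.log(registry.get(element).clicks); // 1
console.log(registry.has(element));        // true
console.log(registry.has({}));             // false -- keyed by identity

// When `element` becomes unreachable, its entry is eligible for
// collection automatically; no manual cleanup pass is required.
```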

There’s a post from last year by Andy E called ES6 – a quick look at Weak Maps that relates WeakMap to jQuery’s expando property:

Weak maps come in here because they can do the job much better. They cut out the need for the expando property entirely, along with the requirement of handling JS objects differently to DOM objects. They also expand on jQuery’s ability to allow garbage collection when DOM elements are removed by its own methods, by automatically allowing garbage collection when DOM elements no longer reachable after they’ve been removed by any method.

I tried creating some instances of WeakMap with circular references and forcing the garbage collector to run by using node --harmony --expose_gc and calling gc(), but it’s difficult to tell if the object is actually being removed yet:

We can’t tell, however: there’s no way to enumerate a WeakMap, as doing so could expose the GC schedule (in browsers, you can’t call gc() to force a collection). Nor can we use wm.has to probe for entries, since we have nulled our objkey references!


Example: Proxies

The current version of Node seems to include the old Proxy API, so I don’t think it’s worth exploring here. The newer Proxy API doesn’t seem to work as expected, and I can’t find specific mention of a change to the new API style in the V8 issues or developer mailing list.

Generators, Classes, and Macros

Generators, classes, and macros are not currently supported by V8. These are still hotly debated areas, which you can read more about on the ECMAScript DokuWiki:

Andreas Rossberg said the V8 developers are aware of generators, but there aren’t any concrete plans for supporting them yet.

Destructuring has been added to the draft ECMAScript 6 specification.

If you’re desperate to try macros in Node now, Mozilla released sweet.js (GitHub: mozilla / sweet.js, License: BSD, npm: sweet.js) a few weeks ago. It’s a command-line tool that “compiles” scripts, in a similar way to CoffeeScript. This isn’t specifically an ES6 shim, although there are plenty of those out there. Some new features like WeakMap seem like they can be supported using shims, but a complete implementation isn’t always possible in older versions of ECMAScript.


Red Dwarf, Stately.js, ansi_up

12 Oct 2012 | By Alex Young | Comments | Tags github fsm console

Red Dwarf

Red Dwarf (GitHub: rviscomi / red-dwarf, License: MIT) by Rick Viscomi is a heat map visualisation of GitHub repository stars. It can display stars for a specific repository, so the joyent/node heat map is pretty interesting given the sheer amount of stars it has.

Google Maps is used for geocoding and displaying the map, and GitHub supplies the raw data. Both of these APIs are accessible with client-side JavaScript, so the whole thing can work purely in-browser. The visualisation itself is drawn using Heatmap Layer, provided by Google Maps.


Stately.js logo

Stately.js (License: MIT) by Florian Schäfer is a finite-state automaton engine, suitable for use in client-side projects. Given that most of us are used to working with events, state machines work quite naturally in JavaScript. Stately.js allows transitions to be tracked using notifications, and handlers can be registered and removed as required.

Florian’s documentation is detailed, and the “door” example is an easy one to follow if you’re confused about how the project can be used. Some simple tests have also been included, with a small HTML test harness.


ansiup example

I still hang out in IRC, and I still like using Mutt for email. There’s something reassuring about the glare of colourful text-based interfaces that no GUI will ever replace. If you’re a fellow console hacker, then you may find a use for ansi_up (License: MIT, npm: ansi_up) by Dru Nelson. It converts text with ANSI terminal colour commands into HTML, so you can take your FIGlet-powered nonsense to the web and annoy people with it there.

Dru says this project has been used “in production” since early 2012 – I wonder what it’s being used for?

Brain Training Node

11 Oct 2012 | By Alex Young | Comments | Tags node tutorials statistics

Game scraper

The other day a friend asked me about the validity of video game review scores. There was an accusation of payola against a well-known games magazine, and the gaming community was trying to work out how accurate the magazine’s scores were. My programmer’s brain immediately thought up ways to solve this – would a naive Bayesian classifier be sufficient to predict review scores given enough reviews?

The answer to that particular question is beyond the scope of this article. If you're interested in statistical tests for detecting fraudulent data, then Benford's law is a better starting point.

Anyway, I couldn’t help myself from writing some Bayes experiments in Node, and the result is this brief tutorial.

This tutorial introduces naive Bayes classifiers through the classifier module by Heather Arthur, and uses it to classify article text from the web through the power of scraping. It’s purely educational rather than genuinely useful, but if you write something interesting based on it let me know in the comments and I’ll check it out!


To complete this tutorial, the following things are required:

  • A working installation of Node
  • Basic Node and npm knowledge
  • Redis


Completing this tutorial will teach you:

  • The basics of Bayesian classification
  • How to use the classifier module
  • Web scraping

Getting Started

Like all Node projects, this one needs a package.json. Nothing fancy, but enough to express the project’s dependencies:

  "author": "Alex R. Young"
, "name": "brain-training"
, "version": "0.0.1"
, "private": true
, "dependencies": {
    "classifier": "latest"
  , "request": "latest"
  , "cheerio": "latest"
, "devDependencies": {
    "mocha": "latest"
  "engines": {
    "node": "0.8.8"

The cheerio module implements a subset of jQuery, and a small DOM model. It’s a handy way to parse web pages where accuracy isn’t required. If you need a more accurate DOM simulation, the popular choice is JSDOM.

Core Module

The classifier module has an extremely simple API. It can work with in-memory data, but I wanted to persist data with Redis. To centralise this so we don’t have to keep redefining the Redis configuration, the classifier module can be wrapped up like this:

var classifier = require('classifier')
  , bayes

bayes = new classifier.Bayesian({
  backend: {
    type: 'Redis'
  , options: {
      hostname: 'localhost'
    , port: 6379
    , name: 'gamescores'
    }
  }
});

module.exports = {
  bayes: bayes
};

Now other scripts can load this file, and run train or classify as required. I called it core.js.

Naive Bayes Classifiers

The classifier itself implements a naive Bayes classifier. Such algorithms have been used as the core of many spam filtering solutions since the mid-1990s. Recently a book about Bayesian statistics, Think Bayes, was featured on Hacker News and garnered a lot of praise from the development community. It’s a free book by Allen Downey and makes a difficult subject relatively digestible.

The spam filtering example is probably the easiest way to get started with Bayes. It works by assigning each word in an email a probability of being ham or spam. When a mail is marked as spam, each word will be weighted accordingly – this process is known as training. When a new email arrives, the filter can add up the probabilities of each word, and if a certain threshold is reached then the mail will be marked as spam. This is known as classification.

What makes this type of filtering naive is that each word is considered an independent “event”, but in reality the position of a word is important due to the grammatical rules of the language. Even with this arguably flawed assumption, naive classifiers perform well enough to help with a wide range of problems.

The Wikipedia page for Bayesian spam filtering goes into more detail, relating spam filtering algorithms to the formulas required to calculate probabilities.
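To make the training/classification arithmetic concrete, here is a minimal hand-rolled sketch of the idea. This is not the classifier module's implementation, just a toy illustration with made-up training text and Laplace smoothing:

```javascript
// Minimal naive Bayes: count word occurrences per category during
// training, then score new text by summing log-probabilities.
function train(model, text, category) {
  model[category] = model[category] || { total: 0, words: {} };
  text.toLowerCase().split(/\W+/).forEach(function(word) {
    if (!word) return;
    model[category].words[word] = (model[category].words[word] || 0) + 1;
    model[category].total++;
  });
}

function classify(model, text) {
  var best = null, bestScore = -Infinity;
  Object.keys(model).forEach(function(category) {
    var cat = model[category], score = 0;
    text.toLowerCase().split(/\W+/).forEach(function(word) {
      if (!word) return;
      // Laplace smoothing so unseen words don't zero the product
      score += Math.log(((cat.words[word] || 0) + 1) / (cat.total + 1));
    });
    if (score > bestScore) { bestScore = score; best = category; }
  });
  return best;
}

var model = {};
train(model, 'free money win a prize now', 'spam');
train(model, 'meeting notes and project schedule', 'ham');
console.log(classify(model, 'win free money')); // spam
```

Summing logarithms instead of multiplying raw probabilities avoids floating-point underflow when documents get long, which is the standard trick in real implementations too.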


Create a new file called train.js as follows:

var cheerio = require('cheerio')
  , request = require('request')
  , bayes = require('./core').bayes

function parseReview(html) {
  var $ = cheerio.load(html)
    , score
    , article;

  article = $('.copy .section p').text();
  score = $('[typeof="v:Rating"] [property="v:value"]').text();
  score = parseInt(score, 10);

  return { score: score, article: article };
}

function fetch(i) {
  var trained = 0;

  request('http://www.eurogamer.net/ajax.php?action=frontpage&page=' + i + '&type=review', function(err, response, body) {
    var $ = cheerio.load(body)
      , links = [];

    $('.article a').each(function(i, a) {
      var url;
      if (a.attribs) {
        url = 'http://www.eurogamer.net/' + a.attribs.href.split('#')[0];
        if (links.indexOf(url) === -1) {
          links.push(url);
        }
      }
    });

    var left = links.length;

    links.forEach(function(link) {
      console.log('Fetching:', link);
      request(link, function(err, response, body) {
        var review = parseReview(body)
          , category;

        if (review.score > 0 && review.score <= 5) {
          category = 'bad';
        } else if (review.score > 5 && review.score <= 10) {
          category = 'good';
        }

        if (category) {
          console.log(category + ':', review.score);
          bayes.train(review.article, category);
          trained++;
        }

        left--;
        if (left === 0) {
          console.log('Trained:', trained);
        }
      });
    });
  });
}

fetch(1);

This code is tailored for Eurogamer. If I wanted to write a production version, I’d separate out the scraping code from the training code. Here I just want to illustrate how to scrape and train the classifier.

The parseReview function uses the cheerio module to pull out the review’s paragraph tags and extract the text. This is pretty easy because cheerio automatically operates on arrays of nodes, so $('.copy .section p').text() will return a block of text for each paragraph without any extra effort.

The fetch function could be adapted to call Eurogamer’s article paginator recursively, but I thought if I put that in there they’d get angry if enough readers tried it out! In this example, fetch will download each article from the first page. I’ve tried to ensure unique links are requested by creating an array of links and then calling Array.prototype.indexOf to see if the link is already in the array. It also strips out links with hash URLs, because Eurogamer includes an extra #comments link.
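In isolation, the dedup-and-strip step looks roughly like this (the hrefs below are hypothetical, for illustration):

```javascript
// Strip #fragment suffixes, then drop duplicates with indexOf.
function uniqueLinks(hrefs, base) {
  var links = [];
  hrefs.forEach(function(href) {
    var url = base + href.split('#')[0];
    if (links.indexOf(url) === -1) {
      links.push(url);
    }
  });
  return links;
}

var links = uniqueLinks(
  ['/articles/1', '/articles/1#comments', '/articles/2'],
  'http://www.eurogamer.net'
);
console.log(links.length); // 2
```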

Once the unique list of links has been generated, each one is downloaded. It’s worth noting that I use Mikeal Rogers’ request module here to simplify HTTP requests – Node’s built-in HTTP client library is fine, but Mikeal’s module cuts down a bit of boilerplate code. I use it in a lot of projects, from web scrapers to crawlers, and interacting with RESTful APIs.

The parseReview function pulls the score out of the HTML; back in fetch, a score between 1 and 5 results in the article being categorised as 'bad', and anything higher as 'good'.


To actually classify other text, we need to find some other text and then call bayes.classify on it. This code expects review URLs from Edge magazine. For example: Torchlight II review.

var request = require('request')
  , cheerio = require('cheerio')
  , bayes = require('./core').bayes

request(process.argv[2], function(err, request, body) {
  if (err) {
    console.error('Unable to fetch:', process.argv[2]);
  } else {
    var $ = cheerio.load(body)
      , text = $('.post-page p').text();

    bayes.classify(text, function(category) {
      console.log('category:', category);
    });
  }
});

Again, cheerio is used to pull out article text, and then it’s handed off to bayes.classify. Notice that the call to classify looks asynchronous – I quite like the idea of building a simple reusable asynchronous Node Bayes classifier service using Redis.

This script can be run like this:

node classify.js http://www.edge-online.com/review/liberation-maiden-review/


I’ve combined my interest in computer and video games with Node to attempt to use a naive Bayes classifier to determine if text about a given game is good or bad. Of course, this is a lot more subjective than the question of ham or spam, so the value is limited. However, hopefully you can see how easy the classifier module makes Bayesian statistics, and you should be able to adapt this code to work with other websites or plain text files.

Heather Arthur has also written brain, which is a neural network library. We've featured this module before on DailyJS, but as there are only three dependents on npm I thought it was worth bringing it up again.

Node Roundup: MongloDB, parseq.js, node-netpbm

10 Oct 2012 | By Alex Young | Comments | Tags node modules databases mongo markdown documentation async
You can send in your Node projects for review through our contact form or @dailyjs.


MongloDB Logo

MongloDB (GitHub: onglo / MongloDB, License: MIT) by Christian Sullivan is a database written with JavaScript that’s compatible with MongoDB’s queries. It has a plugin system for persistence, and a datastore for Titanium Mobile – this effectively allows a form of MongoDB to be used within iOS and Android applications.

Monglo has a DataStore API that can be used to persist data locally or remotely. It’s based around an object that implements each CRUD operation:

var monglo = require('./index').Monglo
  , db = monglo('DemoDB')

function DemoStore(){
  return {
     insert: function() {}
   , update: function() {}
   , open: function() {}
   , remove: function() {}
   , all: function() {}
  };
}
db.use('store', new DemoStore());


parseq.js (GitHub: sutoiku / parseq, License: MIT, npm: parseq) from Sutoiku, Inc. is a flow control library for organising parallel and sequential operations. To manage asynchronous operations, this() can be passed as the callback. If several calls are made, the next function will receive an array that contains the results in the order they were called.

The same author also recently released jsdox, which is another JSDoc to Markdown generator.


netpbm (GitHub: punkave / node-netpbm, License: MIT, npm: netpbm) by Tom Boutell scales and converts images using the netpbm toolkit, which is a venerable set of graphics programs found on many Unix systems.

This library is a wrapper around the netpbm binaries, and takes advantage of the fact that most netpbm programs only read one row of pixels at a time into memory to keep memory usage low.

jQuery Roundup: jQuery UI 1.9.0, Delta Theme, jQuery.textFit

09 Oct 2012 | By Alex Young | Comments | Tags jquery jquery-ui plugins truncation themes
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

jQuery UI 1.9.0

jQuery UI 1.9.0 site

jQuery UI 1.9.0 is out, which adds new widgets, API refinements, improved accessibility, and hundreds of bug fixes. The new widgets are as follows:

  • Menu: A navigation menu with support for hierarchical pop-up submenus
  • Spinner: A “number stepper” for input fields (rather than a rotating progress indicator)
  • Tooltip: A pop-up message

There’s a detailed jQuery UI 1.9 Upgrade Guide which lists deprecations. Oh, and the jQuery UI site has been refreshed as well!

Delta: jQuery UI Theme

jQuery UI Delta Theme

Delta (GitHub: kiandra / Delta-jQuery-UI-Theme, License: MIT/GPL) is a jQuery UI theme by Tait Brown, who created the hugely popular Aristo port.

This theme has a metallic finish that reminds me of iOS 6, and includes light and dark variations. It's also dubbed as Retina ready – CSS3 gradients and high-resolution images have been used.


jQuery.textFit (GitHub: STRML / jquery.textFit, License: MIT) by Samuel Reed can scale text to fit its container. It also correctly detects multiline strings with break tags.

To find the best font size, a binary search is performed. The demo on jQuery.textFit's site is slowed down so you can actually see how the algorithm works; in reality it seems to run very quickly.
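The binary search idea can be sketched without the DOM: given a hypothetical fits(size) predicate standing in for real text measurement, halve the interval until the largest fitting size is found. This illustrates the technique only, not jQuery.textFit's actual code:

```javascript
// Find the largest font size in [min, max] for which fits(size) is true,
// assuming fits is monotonic (true for small sizes, false for large ones).
function bestFontSize(min, max, fits) {
  var best = min;
  while (min <= max) {
    var mid = Math.floor((min + max) / 2);
    if (fits(mid)) {
      best = mid;      // mid fits; try something larger
      min = mid + 1;
    } else {
      max = mid - 1;   // mid overflows; try smaller
    }
  }
  return best;
}

// Pretend the container can hold text up to 37px:
var size = bestFontSize(6, 80, function(px) { return px <= 37; });
console.log(size); // 37
```

A linear scan from the minimum size would need one measurement per candidate size; the binary search gets there in O(log n) measurements, which matters because each measurement forces a layout pass in the browser.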

Vertical alignment and centred text are both supported, as are custom fonts.

Decorating Your JavaScript

08 Oct 2012 | By Justin Naifeh | Comments | Tags pattern object-oriented decorator

The decorator pattern, also known as a wrapper, is a mechanism by which to extend the run-time behavior of an object, a process known as decorating. The pattern is often overlooked because its simplicity belies its object-oriented benefits when writing scalable code. Decorating objects is also neglected in JavaScript because the dynamic nature of the language allows developers to abuse the malleability of objects, but just because you can doesn’t mean you should.

Before delving into the decorator pattern, let’s examine a realistic coding problem that can be solved with other solutions. The decorator is best understood after the shortcomings of other common solutions have been explored.

The Problem

You are writing a simple archiving tool that manages the display and lifecycle of publications and their authors. An important feature is the ability to list the contributing authors, which may be a subset of all authors. The default is to show the first three authors of any publication. The initial domain model is basic:


Using plain JavaScript we implement the read-only classes as follows:

/**
 * Author constructor.
 * @param String firstName
 * @param String lastName
 */
var Author = function(firstName, lastName) {
  this._firstName = firstName;
  this._lastName = lastName;
};

Author.prototype = {

  /**
   * @return String The author's first name.
   */
  getFirstName: function() {
    return this._firstName;
  },

  /**
   * @return String The author's last name.
   */
  getLastName: function() {
    return this._lastName;
  },

  /**
   * @return String The representation of the Author.
   */
  toString: function() {
    return this.getFirstName()+' '+this.getLastName();
  }
};

/**
 * Publication constructor.
 * @param String title
 * @param Author[] authors
 * @param String type
 */
var Publication = function(title, authors, type) {
  this._title = title;
  this._authors = authors;
  this._type = type;
};

Publication.prototype = {

  /**
   * @return String The publication title.
   */
  getTitle: function() {
    return this._title;
  },

  /**
   * @return Author[] All authors.
   */
  getAuthors: function() {
    return this._authors;
  },

  /**
   * @return String The publication type.
   */
  getType: function() {
    return this._type;
  },

  /**
   * A publication might have several authors, but we
   * are only interested in the first three for a standard publication.
   * @return Author[] The significant contributors.
   */
  contributingAuthors: function() {
    return this._authors.slice(0, 3);
  },

  /**
   * @return String the representation of the Publication.
   */
  toString: function() {
    return '['+this.getType()+'] "'+this.getTitle()+'" by '+this.contributingAuthors().join(', ');
  }
};
The API is straightforward. Consider the following invocations:

var pub = new Publication('The Shining', 
  [new Author('Stephen', 'King')],
  'horror');

// rely on the default toString() to print: [horror] "The Shining" by Stephen King
console.log(pub.toString());

// ...

var pub2 = new Publication('Design Patterns: Elements of Reusable Object-Oriented Software', [
  new Author('Erich', 'Gamma'),
  new Author('Richard', 'Helm'),
  new Author('Ralph', 'Johnson'),
  new Author('John', 'Vlissides')
], 'programming');

console.log(pub2.toString());
// prints: [programming] "Design Patterns: Elements of Reusable Object-Oriented Software" by Erich Gamma, Richard Helm, Ralph Johnson

The design is simple and reliable…at least until the client specifies a new requirement:

In accordance with the convention for medical publications, only list the first (primary) and last (supervisor) authors if multiple authors exist.

This means that if Publication.getType() returns “medical” we must perform special logic to list the contributing authors. All other types (e.g., horror, romance, computer, etc) will use the default behavior.


There are many solutions to satisfy the new requirement, but some have disadvantages that are not readily apparent. Let’s explore a few of these and see why they are not ideal even though they are commonplace.

Overwrite Behavior

// overwrite the contributingAuthors definition
if (pub.getType() === 'medical') {
  pub.contributingAuthors = function() {
    var authors = this.getAuthors();

    // return the first and last authors if possible
    if (authors.length > 1) {
      return authors.slice(0, 1).concat(authors.slice(-1));
    } else {
      return authors.slice(0, 1);
    }
  };
}

This, one could argue, can be the most abused feature of the language: the ability to arbitrarily overwrite properties and behavior at run-time. Now the if/else condition must be maintained and expanded if more requirements are added to specify contributing authors. Furthermore, it is debatable whether or not pub is still an instance of Publication. A quick instanceof check will confirm that it is, but a class defines a set of state and behavior. In this case we have modified select instances and the calling code can no longer trust the consistency of Publication objects.

Change the Calling Code

var listing;
if (pub.getType() === 'medical') {
  var contribs = pub.getAuthors();

  // return the first and last authors if possible
  if (contribs.length > 1) {
    contribs = contribs.slice(0, 1).concat(contribs.slice(-1));
  } else {
    contribs = contribs.slice(0, 1);
  }

  listing = '['+pub.getType()+'] "'+pub.getTitle()+'" by '+contribs.join(', ');
} else {
  listing = pub.toString();
}


This solution violates encapsulation by forcing calling code to understand the internal implementation of Publication.toString() and recreate it outside of the class. A good design should not burden calling code.

Subclass the Component


One of the most common solutions is to create a MedicalPublication class that extends Publication, with a contributingAuthors() override to provide custom behavior. While this approach is arguably less flawed than the first two, it pushes the limits of clean inheritance. We should always favor composition over inheritance to avoid over-reliance on base class internals (a rabbit hole for the developer masochists).

Subclassing also fails as a viable strategy when more than one customization might occur or when there is an unknown combination of customizations. An often cited example is a program to model a coffee shop where customers can customize their cup of coffee, thus affecting the price. A developer could create subclasses that reflect the myriad combinations such as CoffeeWithCream and CoffeeWithoutCreamExtraSugar that override Coffee.getPrice(), but it is easy to see that the design will not scale.

Modify the Source Code

contributingAuthors: function() {
  if (this.getType() === 'medical') {
    var authors = this.getAuthors();

    if (authors.length > 1) {
      return authors.slice(0, 1).concat(authors.slice(-1));
    } else {
      return authors.slice(0, 1);
    }
  } else {
    return this._authors.slice(0, 3);
  }
}

This is somewhat of a hack, but in a small project where you control the source code it might suffice. A clear disadvantage is that the if/else condition must grow with every custom behavior, making it a potential maintenance nightmare.

Another thing to note is that you should never, ever modify source code outside of your control. Even the mention of such an idea should leave a taste in your mouth worse than drinking orange juice after brushing your teeth. Doing so will inextricably couple your code to that revision of the API. The cases where this is a valid option are so few and far between that it is usually an architectural issue in the application, not in the outside code.

The Decorator

These solutions fulfill the requirement at the cost of jeopardizing maintainability and scalability. As a developer you must pick what is right for your application, but there is one more option to examine before making a decision.

I recommend using a decorator, a flexible pattern by which to extend the behavior of your existing objects. The following UML represents an abstract implementation of the pattern:

decorator uml

The ConcreteComponent and Decorator classes implement the same Component interface (or extend Component if it’s a superclass). The Decorator keeps a reference to a Component for delegation except in the case where we “decorate” by customizing the behavior.

By adhering to the Component contract, we are guaranteeing a consistent API and guarding against implementation internals because calling code will not and should not know if the object is a ConcreteComponent or Decorator. Programming to the interface is the cornerstone of good object-oriented design.

Some argue that JavaScript is not object-oriented, and while it supports prototypal inheritance instead of classical, objects are still innate to the language. The language supports polymorphism, and the fact that all objects extend Object is sufficient to argue that the language is object-oriented as well as functional.

The Implementation

Our solution will use a slight variant of the decorator pattern because JavaScript does not have some classical inheritance concepts such as interfaces or abstract classes. There are many libraries that simulate such constructs, which is beneficial for certain applications, but here we will use the language's basics.


The classes MedicalPublication and Publication implicitly implement PublicationIF. In this case MedicalPublication acts as the decorator, listing the first and last authors as contributors while leaving all other behavior unchanged.

Note that MedicalPublication references PublicationIF, and not Publication. By referencing the interface instead of a specific implementation we can arbitrarily nest decorators within one another! (In the coffee shop problem we can create decorators such as WithCream, WithoutCream, and ExtraSugar–these can be nested to handle any complex order.)
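A minimal sketch of that coffee example, with hypothetical Coffee, WithCream, and ExtraSugar classes following the same pattern:

```javascript
// Component: anything with getPrice(). Each decorator holds a reference
// to another component and adjusts the delegated result.
var Coffee = function() {};
Coffee.prototype.getPrice = function() { return 2.00; };

var WithCream = function(coffee) { this._coffee = coffee; };
WithCream.prototype.getPrice = function() {
  return this._coffee.getPrice() + 0.50;
};

var ExtraSugar = function(coffee) { this._coffee = coffee; };
ExtraSugar.prototype.getPrice = function() {
  return this._coffee.getPrice() + 0.25;
};

// Decorators nest arbitrarily because each expects the same interface:
var order = new ExtraSugar(new WithCream(new Coffee()));
console.log(order.getPrice()); // 2.75
```

Any combination of customizations becomes a chain of small objects instead of a combinatorial explosion of subclasses.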

The MedicalPublication class delegates for all standard operations and overrides contributingAuthors() to provide the “decorated” behavior.

/**
 * MedicalPublication constructor.
 * @param PublicationIF The publication to decorate.
 */
var MedicalPublication = function(publication) {
  this._publication = publication;
};

MedicalPublication.prototype = {

  /**
   * @return String The publication title.
   */
  getTitle: function() {
    return this._publication.getTitle();
  },

  /**
   * @return Author[] All authors.
   */
  getAuthors: function() {
    return this._publication.getAuthors();
  },

  /**
   * @return String The publication type.
   */
  getType: function() {
    return this._publication.getType();
  },

  /**
   * Returns the first and last authors if multiple authors exist.
   * Otherwise, the first author is returned. This is a convention in the
   * medical publication domain.
   * @return Author[] The significant contributors.
   */
  contributingAuthors: function() {
    var authors = this.getAuthors();

    if (authors.length > 1) {
      // fetch the first and last contributors
      return authors.slice(0, 1).concat(authors.slice(-1));
    } else {
      // zero or one contributors
      return authors.slice(0, 1);
    }
  },

  /**
   * @return String the representation of the Publication.
   */
  toString: function() {
    return 'Decorated - ['+this.getType()+'] "'+this.getTitle()+'" by '+this.contributingAuthors().join(', ');
  }
};

/**
 * Factory method to instantiate the appropriate PublicationIF implementation.
 * @param String The publication title.
 * @param Author[] The publication's authors.
 * @param String The discriminating type on which to select an implementation.
 * @return PublicationIF The created object.
 */
var publicationFactory = function(title, authors, type) {
  if (type === 'medical') {
    return new MedicalPublication(new Publication(title, authors, type));
  } else {
    return new Publication(title, authors, type);
  }
};
By using the factory method we can safely create an instance of PublicationIF.

var title = 'Pancreatic Extracts as a Treatment for Diabetes';
var authors = [new Author('Adam', 'Thompson'), 
  new Author('Robert', 'Grace'), 
  new Author('Sarah', 'Townsend')];
var type = 'medical';

var pub = publicationFactory(title, authors, type);
console.log(pub.toString());

// prints: Decorated - [medical] "Pancreatic Extracts as a Treatment for Diabetes" by Adam Thompson, Sarah Townsend

In these examples we are using toString() for brevity and debugging, but now we can create utility classes and methods to print PublicationIF objects for application display.


Once the application is modified to expect PublicationIF objects we can accommodate further requirements to handle what constitutes a contributing author by adding new decorators. Also, the design is now open for any PublicationIF implementations beyond decorators to fulfill other requirements, which greatly increases the flexibility of the code.


One criticism is that the decorator must be maintained to adhere to its interface. All code, regardless of design, must be maintained to a degree, but it can be argued that maintaining a design with a clearly stated contract and pre- and post-conditions is much simpler than searching if/else conditions for run-time state and behavior modifications. More importantly, the decorator pattern safeguards calling code written by other developers (or even yourself) by leveraging object-oriented principles.

Another criticism is that decorators must implement all operations defined by a contract to enforce a consistent API. While this can be tedious at times, there are libraries and methodologies that can be used with JavaScript’s dynamic nature to expedite coding. Reflection-like invocation can be used to allay concerns when dealing with a changing API.

/**
 * Invoke the target method and rely on its pre- and post-conditions.
 */
Decorator.prototype.someOperation = function() {
  return this._decorated.someOperation.apply(this._decorated, arguments);
};
// ... or a helper library can automatically wrap the function

/**
 * Dynamic invocation.
 * @param Class The class defining the function.
 * @param String The func to execute.
 * @param Object The *this* execution context.
 */
function wrapper(klass, func, context) {
  return function() {
    return klass.prototype[func].apply(context, arguments);
  };
}
The details are up to the developer, but even the most primitive decorator pattern is extremely powerful. The overhead and maintenance for the pattern itself is minimal, especially when compared to that of the opposing solutions.


The decorator pattern is not flashy, despite its name, nor does it give the developer bragging rights in the “Look at what I did!” department. What the decorator does do, however, is correctly encapsulate and modularize your code to make it scalable for future changes. When a new requirement states that a certain publication type must list all authors as contributors, regardless of ordinal rank, you won’t fret about having to refactor hundreds of lines of code. Instead, you’ll write a new decorator, drop it into the factory method, and take an extra long lunch because you’ve earned it.

Tutorial: Writing LispyScript Macros

05 Oct 2012 | By Santosh Rajan | Comments | Tags LispyScript tutorials lisp
This tutorial is by Santosh Rajan (@santoshrajan), the creator of LispyScript (GitHub: santoshrajan / lispyscript).

Writing LispyScript Macros

Macros are a powerful feature of LispyScript. They are much more powerful than C #define macros. While C #define macros do string substitution, LispyScript macros are code generators.

Functions take values as arguments and return a value. Macros take code as arguments, and then return code. Understanding this difference and its ramifications is the key to writing proper macros.

Functions get evaluated at runtime. Macros get evaluated at compile time, or pre-compile time to be more precise.

So, when should macros be used? When you cannot use a function! There is more to this answer than what is apparent. Consider this piece of code:

(* 2 2)

And elsewhere in the program we find this.

(* 4 4)

There is a pattern emerging here. In both cases we have the same code *, and a variable – a number that changes in each instance of the pattern. So we reuse this pattern by writing a function:

(var square
  (function (x)
    (* x x)))

Therefore, to reuse a repeating code pattern as a function, the code pattern must meet two conditions:

  1. The code must remain the same across every instance of the code pattern.
  2. It is only the data that can change across every instance of the code pattern.

Using functions to reuse repeated code patterns has its limitations. You cannot use a function if it is the code part that changes in a repeated code pattern.

Consider the two functions below (str is an expression that concatenates the given strings):

(var greet
  (function (username)
    (str "Welcome " username)))

(var link
  (function (href text)
    (str "<a href=\"" href "\">" text "</a>")))

There is a repeating code pattern here. Given below is the pattern with the parts that change in capitals:

(var NAME
  (function ARGUMENTS
    (str TEMPLATE_STRINGS)))

We cannot use a function to reuse this code pattern, because the parts that change are parts of the code.

Functions are about reusing code patterns, where it is only the data that changes.

Macros are about reusing code patterns, where the code can also change.

In LispyScript, we can write a macro to reuse this code pattern. The macro needs a name, let’s call it template as it happens to be a template compiler:

(macro template (name arguments rest...)
  (var ~name
    (function ~arguments
      (str ~rest...))))

Now compare this with the meta code pattern in the previous example. The arguments to this macro are the parts of the code that change – NAME, ARGUMENTS, TEMPLATE_STRINGS – and they correspond to name, arguments, and rest... in the macro definition.

Arguments can be dereferenced in the generated code by adding a ~ to the argument name. rest... is a special argument that represents the rest of the arguments to the macro after the named arguments.

This macro can be used by making a call to template:

(template link (href text) "<a href=\"" href "\">" text "</a>")

This code will expand as follows:

(var link
  (function (href text)
    (str "<a href=\"" href "\">" text "</a>")))

This expansion happens just before the expanded code is compiled. This is known as the macro expansion phase of the compiler.
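Assuming str compiles down to string concatenation, as described earlier, the generated link function behaves roughly like this plain JavaScript:

```javascript
// Rough JavaScript equivalent of the expanded (var link ...) form.
var link = function (href, text) {
  return '<a href="' + href + '">' + text + '</a>';
};

link('http://dailyjs.com', 'DailyJS');
// '<a href="http://dailyjs.com">DailyJS</a>'
```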

Now let’s try another example. We will write a benchmark macro, which benchmarks a line of code. But first we’ll write a benchmark function to get a couple of related issues out of the way.

(var benchmark
  (function ()
    (var start (new Date))
    (+ 1 1)
    (var end (new Date))
    (console.log (- end start))))

This example does not always work well: JavaScript's Date only resolves time down to milliseconds, but benchmarking a single integer + operation calls for nanosecond resolution.
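The resolution limit is easy to demonstrate in plain JavaScript:

```javascript
// Date has millisecond resolution, so timing a single fast operation
// almost always reports an elapsed time of 0 ms.
var start = new Date();
var sum = 1 + 1;
var end = new Date();
var elapsed = end - start; // typically 0
```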

Furthermore, the function does not scale. We need to benchmark various operations and expressions, and since this involves changes to the above code we need to write a macro. In the macro we print the result of the operation along with the elapsed time:

(macro benchmark (code)
    (var start (new Date))
    (var result ~code)
    (var end (new Date))
    (console.log "Result: %d, Elapsed: %d" result (- end start)))

It can be used like this:

(var a 1)
(var b 2)
(benchmark (+ a b))

The result printed to the console should look like the following:

Result: 3, Elapsed: 0

Elapsed is 0 due to the millisecond resolution, but the example seems to run correctly… until one day someone attempts to do this:

(var start 1)
(var b 2)
(benchmark (+ start b))

Running this gives confusing results:

Result: NaN, Elapsed: 1

The result is NaN, so something has gone wrong since 3 was expected. To figure out what’s going on, let’s look at the macro expansion:

(var start 1)
(var b 2)
  (var start (new Date))
  (var result (+ start b))
  (var end (new Date))
  (console.log "Result: %d, Elapsed: %d" result (- end start))

The user has created a variable start. It so happens that the macro also creates a variable called start. The macro argument code is dereferenced inside the macro's scope, so when (+ start b) was executed, the start it referenced was the Date variable created by the macro, not the user's number. This problem is known as variable capture.
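Because macro expansion is purely textual, the same failure can be reproduced in plain JavaScript (the coercion details differ slightly from LispyScript, but the capture is identical):

```javascript
var start = 1;
var b = 2;

// -- what the expansion of (benchmark (+ start b)) effectively executes --
var start = new Date();  // re-declares and captures the user's start
var result = start + b;  // Date + number coerces to a string, not the expected 3
var end = new Date();
// printing result with the %d format specifier would yield NaN
```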

When writing macros, you have to be very careful when creating a variable inside a macro. In our template macro example we were not concerned about this problem, because the template macro does not create its own variables.

In LispyScript we get around this problem by following two rules which are specified in the “guidelines” section of the document:

  1. When writing a LispyScript program, creating a variable name that starts with three underscores is NOT allowed, for example: ___varname.
  2. When writing a macro you MUST start a variable name with three underscores if you want to avoid variable capture. There are cases where you want variable capture to happen, in which case you do not need to use the three underscores. For example, when you want the passed code to use a variable defined in the macro.

The benchmark macro should be refactored using three underscores:

(macro benchmark (code)
    (var ___start (new Date))
    (var ___result ~code)
    (var ___end (new Date))
    (console.log "Result: %d, Elapsed: %d" ___result (- ___end ___start)))


Macros are a very powerful feature of LispyScript. They let you do some nifty programming that is otherwise not possible with functions. At the same time, you have to be careful when using them. Following the LispyScript macro guidelines will ensure your macros behave as expected.

Enyo Tutorial: Part 2

04 Oct 2012 | By Robert Kowalski | Comments | Tags tutorials enyo enyo-kowalski frameworks mobile

In my introduction to Enyo, I promised that Enyo is “very modularized, reusable, and encapsulated”. Today we’ll create a reusable component from our monolithic and minimalistic application by refactoring the tip calculator. Afterwards we will style the application to make it ready for app stores and the web.

As mentioned in the previous part of the tutorial, the Enyo style guide suggests using double quotes instead of single quotes. Enyo also uses tabs for indentation. Although I prefer two spaces and single quotes, I will follow these rules during this tutorial.

This tutorial builds on the previous part.

Loading Mechanism

Enyo uses files called package.js to load dependencies. If you look into the source/ folder, which contains the core of the application, you'll find a package.js from the bootplate project. Everything listed in this file is loaded when the application starts up. Let's create a file named calc.percent.js and add it to the end of package.js:
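For reference, the resulting package.js looks something like this (the existing entries shown here are assumptions based on a typical bootplate project; only the last line is our addition):

```javascript
// source/package.js -- a sketch; your project's existing entries may differ.
enyo.depends(
	"App.css",
	"App.js",
	"calc.percent.js"	// our new component
);
```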



Component objects use events to communicate with their parent kinds. As described in the first part, components can nest other components. It would be nice to split the app into a reusable percent-calculator kind that could be used in other projects.

Published Properties

The calc.percent.js file should look like the following example – I’ll explain it in detail below.

enyo.kind({
	name: "PercentCalculator",
	kind: enyo.Component,
	published: {
		sum: 0, // optional default values
		percent: 0
	},
	events: {
		onCalculated: ""
	},
	create: function() {
		this.inherited(arguments);
	},
	calculate: function() {
		var result;

		result = (this.sum * this.percent) / 100;

		this.doCalculated({percentValue: result});
	}
});
Like the previous kind, this component has a name: PercentCalculator. This time the kind is not a control - we have chosen a component with kind: enyo.Component.

The next lines are the published properties of our kind. They can – but need not – have a default value; it's 0 in this example. Enyo automatically creates setters and getters for our exposed properties. We will use the setters later, but within this file we access the values directly as this.sum and this.percent.
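If you are curious what those generated accessors amount to, the published-property machinery boils down to something like this simplified plain-JavaScript sketch (my own illustration, not Enyo's actual implementation):

```javascript
// Simplified sketch of what Enyo does for each published property:
// capitalize the name and attach setFoo/getFoo accessors.
function publish(obj, name, defaultValue) {
	var cap = name.charAt(0).toUpperCase() + name.slice(1);
	obj[name] = defaultValue;
	obj['set' + cap] = function (value) { this[name] = value; };
	obj['get' + cap] = function () { return this[name]; };
}

var calc = {};
publish(calc, 'sum', 0);
publish(calc, 'percent', 0);

calc.setSum(50);
calc.setPercent(15);
calc.getSum(); // 50
```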

I mentioned previously that components communicate through events. This example registers onCalculated, which is exposed to the public. It can be triggered with this.doCalculated({percentValue: result}); in the calculate method, which communicates the result to the parent kind.

Refactoring and Integration

In order to use our kind we have to add the component to our first kind from the file App.js.

{ kind: "PercentCalculator", name: "percentCalculator", onCalculated: "updateControls" }

Every time the onCalculated event is fired, the method updateControls is called. This method simply reads the value and sets it on the corresponding DOM node. Here is the snippet:

updateControls: function(inSource, inEvent) {
	this.$.tipAmount.setContent(inEvent.percentValue);

	return true; // stop bubbling
}

Notice the result is available as a property of the second argument: inEvent.percentValue.

The app, however, is not working yet. We have to give the values from the input fields to the component so it’s able to calculate and pass back the result. I deleted the old calculate method and introduced the method calculateWithComponent. Also, please don’t forget to update the ontap handler of the button. Here is the method:

calculateWithComponent: function(inSource, inEvent) {
	var sum = this.$.sumControl.hasNode().value;
	var percent = this.$.percentControl.hasNode().value;

	this.$.percentCalculator.setSum(sum);
	this.$.percentCalculator.setPercent(percent);
	this.$.percentCalculator.calculate();
}



As before, the kind is accessed with this.$ and its name. The automatically generated setters are used for the published properties, and afterwards calculate can be called on our kind. At this point the component passes the calculated result back. There are also change handlers available for reacting to property changes, but we do not use them here.

Here is the updated kind in full:

enyo.kind({
	name: "App",
	kind: enyo.Control,
	style: "",
	classes: "onyx",
	components: [
		{kind: "onyx.InputDecorator", components: [
			{kind: "onyx.Input", name: "sumControl", placeholder: "Enter sum"}
		]},
		{kind: "onyx.InputDecorator", components: [
			{kind: "onyx.Input", name: "percentControl", placeholder: "Enter percent"}
		]},
		{kind: "onyx.Button", content: "Calculate tip", ontap: "calculateWithComponent"},
		{tag: "div", name: "tipAmount"},
		{kind: "PercentCalculator", name: "percentCalculator", onCalculated: "updateControls"}
	],
	create: function() {
		this.inherited(arguments);
	},
	updateControls: function(inSource, inEvent) {
		this.$.tipAmount.setContent(inEvent.percentValue);

		return true; // stop bubbling
	},
	calculateWithComponent: function(inSource, inEvent) {
		var sum = this.$.sumControl.hasNode().value;
		var percent = this.$.percentControl.hasNode().value;

		this.$.percentCalculator.setSum(sum);
		this.$.percentCalculator.setPercent(percent);
		this.$.percentCalculator.calculate();
	}
});
The commit is 8f931.


Styling

I reduced the styles in App.css to a simple background-color: #c6c6c6;, and one CSS class:

.center {
	text-align: center;
}

Then I changed the kind in our App.js from the basic enyo.Control to enyo.FittableRows. A basic control was a nice choice for showing the basics of Enyo and kinds, but now we want to use a more complex kind provided by the framework.

In commit 8bb19 I’ve added an onyx.Toolbar as the first child of the components block:

{kind: "onyx.Toolbar", classes: "center", content: "Tip calculator"},

This will display a bar across the top of the screen (or page), in a similar fashion to the UINavigationBar used in iOS applications. The end result looks something like this:

Enyo Tip Calc

Production Build

You can run deploy.sh in the tools/ folder to start a deploy. It will minify and merge the source files of the project. The result will be saved to deploy/, and can be used with Cordova or simply uploaded to a web server.


Summary

You should now have learned the core concepts of Enyo and built a small application. Here is a short summary:

Part 1

  • Concept of kinds
  • Controls, and how and when to use them
  • Events
  • Getters and setters
  • Constructors and destructors

Part 2

  • Components
  • Loading mechanism
  • Published properties
  • More on getters and setters
  • Production builds


Node Roundup: otr, matches.js, mariasql

03 Oct 2012 | By Alex Young | Comments | Tags node modules security cryptography mysql
You can send in your Node projects for review through our contact form or @dailyjs.

Off-the-Record Messaging Protocol

otr (License: LGPL, npm: otr) by Arlo Breault is an implementation of an Off-the Record Messaging Protocol:

Off-the-Record Messaging, commonly referred to as OTR, is a cryptographic protocol that provides strong encryption for instant messaging conversations. OTR uses a combination of the AES symmetric-key algorithm, the Diffie–Hellman key exchange, and the SHA-1 hash function. In addition to authentication and encryption, OTR provides perfect forward secrecy and malleable encryption.

It’s designed to be used in browsers, but can also be used with Node. The readme has details on how to get started with otr, and the author notes that the project has been used by Cryptocat.


matches.js

matches.js (License: MIT, npm: matches) by Nathan Faubion is a pattern matching shorthand library that can create new objects with a convenient wrapper:

var myfn = pattern({
  // Null
  'null' : function () {...},

  // Undefined
  'undefined' : function () {...},

  // Numbers
  '42'    : function () { ... },
  '12.6'  : function () { ... },
  '1e+42' : function () { ... },

  // Strings
  '"foo"' : function () { ... },

  // Escape sequences must be double escaped.
  '"This string \\n matches \\n newlines."' : function () { ... }
});

The author has used this library to create adt.js, which is a library for making pseudo-algebraic types and immutable structures:

… I say pseudo because it just generates classes with boilerplate that make them look and work like types in functional languages like Haskell or Scala. It works in the browser or on the server.


mariasql

mariasql (License: MIT, npm: mariasql) by Brian White is a high performance, single-threaded, asynchronous, cross-platform MySQL driver. It’s based on libmariadbclient, and the author notes that it works more like a typical Node module:

This module strives to keep with the “node way” by never buffering incoming rows. Also, to keep things simple, all column values are returned as strings (except MySQL NULLs are casted to JavaScript nulls).

Brian has posted benchmarks that compare various SQL operations across several client libraries, including C and PHP-based samples: MySQL client library benchmarks.

jQuery Roundup: jQuery UI 1.8.24, HTML5 Google Authenticator, pXY.js

02 Oct 2012 | By Alex Young | Comments | Tags jquery jquery-ui plugins Canvas security
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

jQuery UI 1.8.24

jQuery UI 1.8.24 is out, which is a maintenance release:

This update brings bug fixes for Datepicker, Draggable, Droppable and Sortable, as well as adding support for jQuery 1.8.2. This is likely to be the last release in the 1.8 family; you can expect 1.9.0 very soon. For the full list of changes, see the changelog.

The jQuery UI 1.9 release candidates have been around for a while now. Check out the 1.9 RC tags on GitHub for more.

HTML5 Google Authenticator

I use Google Authenticator, which is a two-step verification implementation. Google have released corresponding mobile apps which support multiple credentials. This means third-party services can plug into Google Authenticator, so users only need one app to manage all of their credentials. This works because Google Authenticator is built on open standards, and uses the Time-based One-time Password algorithm.

The TOTP algorithm was implemented in JavaScript back in 2011 by Russ Sayers:

Turns out the algorithm used to generate the OTPs is an open standard. When you set-up an account in the smartphone app you are storing a key that’s used to create a HMAC of the current time.

This has now been ported to a polished HTML5 Google Authenticator project, built with jQuery Mobile by Gerard Braad. He’s also deployed a demo version at gauth.apps.gbraad.nl.

Cryptography and security in client-side code will always be a tricky subject, but hopefully this kind of project will help demystify two-factor authentication and encourage more web application authors to offer it to those of us who are interested in it.


pXY.js

pXY.js (GitHub: leeoniya / pXY.js, License: MIT) by Leon Sorokin is an API for analysing the pixels in a Canvas element. The author suggests using it as an algorithm visualisation tool for problems relating to OCR segmentation and document feature extraction.

The documentation has runnable examples of the major API features. For example, the Scanning pXY documentation shows how images can be scanned using the eight possible bidirectional scan patterns.

JavaScript for Node Part 1: Enumeration

JavaScript developers have been accustomed to a very scattered and incoherent API (the DOM) for some time. As a result, some of JavaScript’s most common patterns are pretty weird and unnecessary when programming for a unified and coherent API like Node. It can be easy to forget that the entire ES5 specification is available to you, but there are some standard patterns that deserve to be rethought because of ES5’s newer features.

Objects in ES5

Since no object in JavaScript can have duplicate keys at the same level, all objects can be thought of as hash tables. Indeed, V8 implements a hash function for object keys. This concept did not go unnoticed in the ES5 draft, and so Object.keys was created: it returns an object's own enumerable property names as a JavaScript Array. In layman's terms, this means that Object.keys returns only the keys that belong to that object and NOT any properties that it may have inherited. This is a powerful and useful construct that can be utilized in Node when enumerating over an object.
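A quick demonstration: Object.keys ignores everything on the prototype chain:

```javascript
// Object.keys returns own enumerable keys only -- inherited
// properties never show up.
var proto = { inherited: true };
var obj = Object.create(proto);
obj.own = 1;

var keys = Object.keys(obj); // ['own']
// 'inherited' is reachable via the prototype chain ('inherited' in obj
// is true), but it is not an own key.
```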

The Old Way

Chances are you have run into the following looping pattern:

var key;
for (key in obj) {
  if (obj.hasOwnProperty(key)) {
    // do something with obj[key]
  }
}

This was the only way to traverse an object in ES3 without going up an object’s prototype chain.

A Better Way

In ES5 there is a better approach. Given that we can simply get the keys of an object and put them into an array, we can loop over an object, but only at the cost of looping over an array. First consider the following:

var keys = Object.keys(obj), i, l;

for (i = 0, l = keys.length; i < l; i++) {
  // do something with obj[keys[i]]
}

This is usually the fastest way of looping over an object in ES5 (at least in V8). However, this method has some drawbacks. If new variables are needed to make calculations, this approach starts to feel overly verbose. Consider the following:

function calculateAngularDistanceOfObject(obj) {
  if (typeof obj !== 'object') return;

  var keys = Object.keys(obj)
    , EARTH_RADIUS = 3959
    , RADIAN_CONST = Math.PI / 180
    , deltaLat
    , deltaLng
    , halfTheSquareChord
    , angularDistanceRad
    , temp
    , a, b, i, l;

  for (i = 0, l = keys.length; i < l; i++) {
    temp = obj[keys[i]];
    a = temp.a;
    b = temp.b;
    deltaLat = a.subLat(b) * RADIAN_CONST;
    deltaLng = a.subLng(b) * RADIAN_CONST;
    halfTheSquareChord = Math.pow(Math.sin(deltaLat / 2), 2) + Math.pow(Math.sin(deltaLng / 2), 2) * Math.cos(a.lat * RADIAN_CONST) * Math.cos(b.lat * RADIAN_CONST);
    obj[keys[i]].angularDistance = 2 * Math.atan2(Math.sqrt(halfTheSquareChord), Math.sqrt(1 - halfTheSquareChord));
  }
}

An Even Better Way

In situations like this, looping over the array of keys with Array's native forEach method allows us to create a new scope for the variables we are working with. This lets us do our processing in a more encapsulated manner:

function calculateAngularDistanceOfObject(obj) {
  if (typeof obj !== 'object') return;

  var EARTH_RADIUS = 3959
    , RADIAN_CONST = Math.PI / 180;

  Object.keys(obj).forEach(function(key) {
    var temp = obj[key]
      , a = temp.a
      , b = temp.b
      , deltaLat = a.subLat(b) * RADIAN_CONST
      , deltaLng = a.subLng(b) * RADIAN_CONST
      , halfTheSquareChord;

    halfTheSquareChord = Math.pow(Math.sin(deltaLat / 2), 2) + Math.pow(Math.sin(deltaLng / 2), 2) * Math.cos(a.lat * RADIAN_CONST) * Math.cos(b.lat * RADIAN_CONST);
    obj[key].angularDistance = 2 * Math.atan2(Math.sqrt(halfTheSquareChord), Math.sqrt(1 - halfTheSquareChord));
  });
}


Choosing the right pattern depends on balancing maintainability with performance. Of the two patterns, forEach is generally considered more readable. Iterating over large objects will usually perform worse with forEach (although still better than the old ES3 way), but it's important to benchmark code correctly before making a decision.

One popular solution for Node is node-bench (npm: bench), written by Isaac Schlueter. After installing it, here is something to start with:

var bench = require('bench')
  , obj = { zero: 0, one: 1, two: 2, three: 3, four: 4, five: 5, six: 6, seven: 7, eight: 8, nine: 9 };

// This is to simulate the object having non-enumerable properties
Object.defineProperty(obj, 'z', { value: 26, enumerable: false });

exports.compare = {
  'old way': function() {
    for (var name in obj) {
      if (obj.hasOwnProperty(name))
        obj[name]; // touch the value
    }
  },

  'loop array': function() {
    var keys = Object.keys(obj)
      , i
      , l;

    for (i = 0, l = keys.length; i < l; i++)
      obj[keys[i]]; // touch the value
  },

  'foreach loop': function() {
    Object.keys(obj).forEach(function(key) {
      obj[key]; // touch the value
    });
  }
};

// This is the number of iterations we want each test to run
bench.COMPARE_COUNT = 8;

bench.runMain();