Node Roundup: mongo-lite, smog, sshfs-node

19 Sep 2012 | By Alex Young | Comments | Tags node modules libraries filesystem mongo
You can send in your Node projects for review through our contact form or @dailyjs.

mongo-lite

mongo-lite (GitHub: alexeypetrushin / mongo-lite, License: MIT, npm: mongo-lite) by Alexey Petrushin aims to simplify MongoDB by removing the need for most callbacks, adding reasonable defaults like safe updates, and offering optional compact IDs.

The chainable API looks more like MongoDB’s command-line interface:

var db = require('mongo-lite').connect('mongodb://localhost/test', ['posts', 'comments']);
db.posts.insert({ title: 'first' }, function(err, post) {
  // Use post
});

There’s also a Fiber-based API, so it can be used in a synchronous fashion.

smog

smog (License: MIT, npm: smog) from Fractal is a web-based MongoDB interface. It displays collections, and allows them to be sorted and edited. It also supports administration features, like shutting down servers, CPU/bandwidth usage graphs, and replica set management.

It’s built with Connect, and there’s an experimental GTK+ desktop interface made with the pane module by the same authors.

sshfs-node

sshfs-node (License: MIT, npm: sshfs-node) by Charles Bourasseau allows remote filesystems to be mounted using SSH. It uses sshfs and requires keys for authentication, rather than passwords.
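
Usage is roughly along these lines – note that the method names and signatures below are assumptions for illustration rather than something lifted from the module’s readme, so double-check them before relying on this sketch:

var sshfs = require('sshfs-node');

// Mount a remote path locally over SSH (signature assumed; key-based
// authentication is expected to already be configured for the host)
sshfs.mount('user@example.com:/var/data', '/mnt/remote', function(err) {
  if (err) throw err;

  // ... use /mnt/remote like any local directory ...

  // Release the mount again when finished (method name assumed)
  sshfs.unmount('/mnt/remote', function(err) {
    if (err) throw err;
  });
});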

It comes with Vows tests, and the same author has also released fs2http.

jQuery Roundup: equalize.js, jQuery Builder, Gridster.js

18 Sep 2012 | By Alex Young | Comments | Tags jquery plugins design ui layout grid
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

equalize.js

This plugin comes from the “should I just use a table?” department of design technicalities that we still have to deal with in 2012: equalize.js (GitHub: tsvensen / equalize.js, License: MIT/GPL). Created by Tim Svensen, this plugin equalizes the height of a set of child elements – or any other dimension supported by jQuery Dimensions.

It works by calling a single method on the parent selector:

// Height is the default
$('#height-example').equalize();

$('.parent').equalize('outerHeight');
$('.parent').equalize('innerHeight');
$('.parent').equalize('width');
$('.parent').equalize('outerWidth');
$('.parent').equalize('innerWidth');

The documentation has full examples.

jQuery Builder

jQuery Builder (GitHub: jgallen23 / jquery-builder, License: MIT, npm: jquery-builder) by Greg Allen is a web-based tool for building a custom version of jQuery 1.8.1. As jQuery has evolved it’s got a lot easier to include only the components necessary for a given project. This particular solution has been made using Node, and is installable with npm.

Gridster.js

Gridster.js (GitHub: ducksboard / gridster.js, License: MIT) from Ducksboard is a grid plugin that allows layouts to be designed by drag and drop. Elements can span multiple columns, and be dynamically added and removed. Any element can be used because Gridster is based around data attributes.

Gridster is distributed with suitable CSS, and supports IE 9+, Firefox, Chrome, Safari, and Opera.

Encapsulation Breaking

17 Sep 2012 | By Justin Naifeh | Comments | Tags encapsulation tutorial

Encapsulation is the process by which an object’s internal components and behavioral details are hidden from calling code. Only that which should be exposed is exposed, making objects self-contained black boxes to the outside world. Many languages support encapsulation by supplying visibility modifiers (e.g., private) and constructs such as inner classes.

Unfortunately, JavaScript offers very little in the encapsulation department. While there are certain tricks that can wrap protected code in closures (see Module Pattern), many have disadvantages that compromise code flexibility and extensibility.

Standard Convention

Instead of using closure-based encapsulation, which often makes object-oriented inheritance difficult, many libraries and codebases opt to mark private properties and functions with an underscore prefix. This convention makes inspecting the properties and functions easy within browser debuggers.

 
var Person = function(first, last){
  // private properties _first, _last, and _id
  this._first = first;
  this._last = last;
  this._id = this._generateId();
};

Person.prototype = {
  getId : function(){
    return this._id;
  },
  getFirstName: function(){
    return this._first;
  },
  getLastName : function(){
    return this._last;
  },
  // private function to generate an id for this object
  _generateId : function(){
    return new Date().getTime().toString();
  }
};

This convention is commonplace, similar to naming constants in uppercase. The downside is that private properties and functions can still be accessed, thus breaking encapsulation because of careless coding.

Encapsulation Breaking

The dynamic nature of JavaScript allows for a free-for-all environment where a developer can do whatever he or she wants. Consider the following:

 
var person = new Person("Bob", "Someguy");
console.log(person._first); // logs "Bob"

This appears all fine and well, but now we’ve coupled our code to the Person implementation. Any change to the internals – the category of change encapsulation should protect us against – could break calling code.

 
var Person = function(first, last){
  this._firstName = first; // property change
  this._lastName = last; // property change
  this._id = this._generateId();
};

// ... 
var person = new Person("Bob", "Someguy");
console.log(person._first); // logs "undefined"

These bugs can be difficult to track, especially since the application code may not have changed…an updated external library or resource, in which Person may be defined, is all that it takes. The best defense against such couplings is to avoid breaking encapsulation. If a property or method is marked as private, do not access, modify, or invoke it. The overhead in rethinking the architecture and design is almost always less than the cost of dealing with the consequences of breaking encapsulation.

In other words: “Developers don’t let developers break encapsulation.”

Method Stealing

As troublesome as accessing private properties can be in JavaScript, there is another much more insidious practice that seems commonly accepted: method stealing.

The methods Function.prototype.call and Function.prototype.apply are integral to modern libraries and codebases by allowing a method to inject a custom context (this reference) into the execution scope. Without these capabilities the reliance on closures to achieve the same effect would be too cumbersome.
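
As a quick reminder of what that injection looks like:

function fullName() {
  return this._first + ' ' + this._last;
}

var context = { _first: 'Bob', _last: 'Someguy' };

fullName.call(context);      // "Bob Someguy"
fullName.apply(context, []); // the same, but any arguments are supplied as an array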

Just as some properties should be hidden, so too should some methods. In our earlier example, Person.prototype._generateId() is an inadequate UUID generator – it only returns a timestamp. A clever developer notices that another available method, Book.prototype._setUUID(), sets a this._id property on all Book objects whose value is much more unique across space and time than anything Person.prototype._generateId() produces.

 
var Book = function(title, author){
  this._title = title;
  this._author = author;
  this._id = null;
  this._setUUID();
}
Book.prototype = {
  getId : function(){
    return this._id;
  },
  getTitle : function(){
    return this._title;
  },
  getAuthor : function(){
    return this._author;
  },
  _setUUID : function(){
    var result = '';
    for(var i=0; i<32; i++)
    {
      result += Math.floor(Math.random()*16).toString(16);
    } 
    this._id = result;
  }
};

In turn, the developer has chosen to “steal” this behavior from Book for Person and modify Person.prototype._generateId() to invoke Book.prototype._setUUID().

 
var Person = function(first, last){
  // private properties _first, _last, and _id
  this._first = first;
  this._last = last;
  this._id = null;
  this._generateId();
};

Person.prototype = {
  getId : function(){
    return this._id;
  },
  getFirstName: function(){
    return this._first;
  },
  getLastName : function(){
    return this._last;
  },
  // private function to generate an id for this object
  _generateId : function(){
    // sets this._id within _setUUID()
    Book.prototype._setUUID.call(this);
  }
};

Again this works…ostensibly so, by having this._id set by Book.prototype._setUUID(). The design is brittle, however, because Book’s internals can be refactored unbeknownst to Person, thus breaking Person objects. If Book.prototype._setUUID() is refactored to set this._uuid rather than this._id then all Person.prototype.getId() invocations will return undefined. With one myopic decision we broke our application because it was easier to break encapsulation rather than rethink the design.
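
To see how little it takes, here’s that refactoring applied to the earlier Book code, with Person left untouched:

// Book's internals change: the private method now sets this._uuid
Book.prototype._setUUID = function() {
  var result = '';
  for (var i = 0; i < 32; i++) {
    result += Math.floor(Math.random() * 16).toString(16);
  }
  this._uuid = result; // renamed from this._id
};

// Person still steals the method, so this._id is never assigned
var person = new Person('Bob', 'Someguy');
person.getId(); // undefined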

Conclusion

There is not much of a conclusion except do not break encapsulation. In fact, apply the Golden Rule while coding: treat other code as you wish yours would be treated. Any API deficiencies should be brought to the original author, not hacked apart to make it usable for one use case or instance. The maintenance headache down the road is just not worth it.

Functional Programming in JavaScript

14 Sep 2012 | By Nathaniel Smith | Comments | Tags functional tutorial

JavaScript has two parents: Scheme and Self. We can thank Self for all of the object-orientedness of JavaScript and indeed we do in our code and our tutorials. However, Scheme played just as important a role in the language’s design, and we would do ourselves ill to overlook JavaScript’s functional heritage.

What exactly does it mean for JavaScript to be functional? “Functional” merely describes a collection of traits a given language may or may not have. A language like Haskell has all of them: immutable variables, pattern matching, first class functions, and others. Some languages hardly have any, like C. While JavaScript certainly doesn’t have immutable variables or pattern matching it does have a strong emphasis on first class functions; mutating, combining, and using these function objects for cleaner and more succinct code is the purpose of this tutorial.

Partial Application

Partial application is a technique for taking a function f and binding it against one or more arguments to produce a new function g with those arguments applied. We’ll demonstrate this operation by adding a helper function p to Function’s prototype.

Function.prototype.p = function() {
  // capture the bound arguments
  var args = Array.prototype.slice.call(arguments);
  var f = this;
  // construct a new function
  return function() {
    // prepend argument list with the closed arguments from above
    var inner_args = Array.prototype.slice.call(arguments);
    return f.apply(this, args.concat(inner_args))
  };
};

var plus_two = function(x,y) { return x+y; };
var add_three = plus_two.p(3);
add_three(4); // 7

Composition

Composition is an operation that produces a new function z by nesting functions f and g. You can think of it in this way: z(x) == f(g(x)). Let’s add a helper like we did for partial application.

Function.prototype.c = function(g) {
  // preserve f
  var f = this;
  // construct function z
  return function() {
    var args = Array.prototype.slice.call(arguments);
    // when called, nest g's return in a call to f
    return f.call(this, g.apply(this, args));
  };
};

var greet = function(s) { return 'hi, ' + s; };
var exclaim = function(s) { return s + '!'; };
var excited_greeting = greet.c(exclaim);
excited_greeting('Pickman') // hi, Pickman!

Flipping

Flipping at first seems like a scary and arbitrary thing to do to a poor Function. However, it is useful when one desires to use partial application to bind arguments other than the first. To perform a flip we take function f which takes parameters (a,b) and construct a function g which takes parameters (b,a).

Function.prototype.f = function() {
  // preserve f
  var f = this;
  // construct g
  return function() {
    var args = Array.prototype.slice.call(arguments);
    // flip arguments when called
    return f.apply(this, args.reverse());
  };
};

var div = function(x,y) { return x / y; };
div(1, 2) // 0.5
div.f()(1,2) // 2

Point-Free Style

Point-free programming is a style of coding that one doesn’t see much outside of languages like Haskell or OCaml. However, it can help drastically reduce the use of the rather verbose function declaration syntax omnipresent in JavaScript code. Programming in a point-free style is made possible by our helpers above, and we’ll combine them to illustrate this concept.

// We'll start by solving the following problem in a non point-free way.
// Produce a function which, given a list, returns the same list with
// every number made negative.

// First, declare some helpers:
var negate = function(x) { return -1 * x; };
var abs = function(x) { return Math.abs(x); };
var map = function(a, f) { return a.map(f); };
var numbers = [-1, 2, 0, -2, 3, 4, -6]

var negate_all = function(array) { return map(array, function(x) { return negate(abs(x)); }); };
negate_all(numbers); // [-1, -2, 0, -2, -3, -4, -6]

// That solves it; but we can do better:

var negate_all = map.f().p(negate.c(abs));
negate_all(numbers); // [-1, -2, 0, -2, -3, -4, -6]

What did we do here? First, we flipped map’s signature to be (f,a); this allows us to then partially apply a function to map and turn it into a function that takes only a single parameter: the array we wish to negate. But what function do we want to bind to our map? The result of negate.c(abs), which represents a function that does negate(abs(x)). We’ve produced the same function in the end and solved our problem. In the former attempt, we declare a new function to imperatively do what we wish; in the latter we construct a new function based on functions we already have.
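
Breaking the construction into intermediate steps may make it easier to follow:

var flipped_map = map.f();                  // takes (f, a) instead of (a, f)
var negate_abs = negate.c(abs);             // negate_abs(x) === negate(abs(x))
var negate_all = flipped_map.p(negate_abs); // bind f, leaving a function of the array

negate_all([1, -2, 3]); // [-1, -2, -3]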

What makes this point-free? Note the redundancy of the array argument in the former declaration. We already have a function that knows how to produce a new array from an existing one; why not convert that function into a new one to do what we want? We cut characters by 37% and, for many, achieve better readability.

Conclusions

In the end, functional programming is a matter of taste. For some it is a thing of subtle beauty and for others a wild nest of parentheses. This tutorial is a suggestion of what is possible, and is in no way a ‘Functional is better’ argument. If it has piqued your interest, there is plenty of further reading on functional JavaScript and languages like Haskell and OCaml to explore.

Express 3 Tutorial: Contact Forms with CSRF

13 Sep 2012 | By Alex Young | Comments | Tags express tutorials bootstrap

The contact form

This tutorial is a hands-on, practical introduction to writing Express 3 applications complete with CSRF protection. As a bonus, it should be fairly easy to deploy to Heroku.

Prerequisites

A working Node installation is assumed, along with basic knowledge of Node and the command line.

Getting Started

Create a new directory, then create a new file called package.json that looks like this:

{
  "author": "Alex R. Young"
, "name": "dailyjs-contact-example"
, "version": "0.0.1"
, "private": true
, "dependencies": {
    "express": "3.0"
  , "jade": "0.27.2"
  , "validator": "0.4.11"
  , "sendgrid": "latest"
  }
, "devDependencies": {
    "mocha": "latest"
  },
  "engines": {
    "node": "0.8.9"
  }
}

Express has a built-in app generator, but I want to explain all the gory details. If you want to try it out, try typing express myapp in the terminal.

Back to the package.json file. The author and name can be changed as required. The private flag is set so we don’t accidentally publish this module to npmjs.org. The dependencies are as follows:

  • express: The web framework we’re using, version 3 has been specified
  • jade: The template language; you could convert this project to ejs or something else if desired
  • validator: The validator library will be used to validate user input
  • sendgrid: SendGrid is a commercial email provider that’s easy to use with Heroku

The engines section has been included because it’s a good idea to be specific about Node versions when deploying to Heroku.

Configuration

Although I typically encourage breaking up Express projects into multiple files, this project will use a single JavaScript file for brevity.

First, the modules are loaded, and an Express app is instantiated. Users of Express 2.x will notice that there is no longer a createServer() method call:

var express = require('express')
  , app = express()
  , SendGrid = require('sendgrid').SendGrid
  , Validator = require('validator').Validator
  , sendgrid // assigned below in the environment-specific configuration
  ;

The Validator object is just one way to work with the node-validator module. The author has also provided Express middleware for directly validating data in requests. I didn’t use it here because I was concerned it might not work with Express 3, and I’m writing to a deadline, but it’s worth taking a look at it. In general, I like to avoid tying too much code into Express in case I want to migrate to another framework, so that’s worth considering as well.

The next few lines are application configuration:

app.configure(function() {
  app.set('views', __dirname + '/views');
  app.set('view engine', 'jade');
  app.use(express.cookieParser());
  app.use(express.session({ secret: 'secret goes here' }));
  app.use(express.bodyParser());
  app.use(express.csrf());
  app.use(app.router);
  app.use(express.static(__dirname + '/public'));
});

When you’re writing Express configuration, avoid copying and pasting lines from examples without fully understanding what each line does – it will get you into trouble later! You should understand what every single line does here, because changing the order of app.use lines can impact the way requests are processed and result in frustrating errors.

With that in mind, here’s what each line does:

  • app.set('views', __dirname + '/views'): Use ./views as the default path for the server-side templates
  • app.set('view engine', 'jade'): Automatically load index.jade files just by passing index
  • app.use(express.cookieParser()): Parse the HTTP Cookie header and create an object in req.cookies with properties for each cookie
  • app.use(express.session...: Use a session store – this is needed for the CSRF middleware
  • app.use(express.bodyParser()): Parse the request body when forms are submitted with application/x-www-form-urlencoded (it also supports application/json and multipart/form-data)
  • app.use(express.csrf()): The CSRF protection middleware – it needs the session and body parsing middleware above it, and must come before the router so that routes are actually protected
  • app.use(app.router): Use the actual router provided by Express
  • app.use(express.static(__dirname + '/public')): Serve static files in the ./public directory

Next follows configuration for development and production environments:

app.configure('development', function() {
  app.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
  app.locals.pretty = true;
  sendgrid = {
    send: function(opts, cb) {
      console.log('Email:', opts);
      cb(true, opts);
    }
  };
});

app.configure('production', function() {
  app.use(express.errorHandler());
  sendgrid = new SendGrid(process.env.SENDGRID_USERNAME, process.env.SENDGRID_PASSWORD);
});

The app.locals.pretty = true line causes Jade to render templates with indentation and newlines; otherwise it spits out a single line of HTML. Notice that app.use is being called outside of app.configure – this is perfectly fine, and app.use can actually be called anywhere. There was some discussion about removing app.configure from Express 3.x, and it isn’t technically required.

I’ve made a mock sendgrid object for development mode that just prints out the email and then runs a callback. The production configuration block uses environmental variables (process.env.SENDGRID_USERNAME) to set the SendGrid username and password. It’s a good idea to use environmental variables for passwords, because it means you can keep them out of your source code repository. Since only specific developers should have access to the deployment environment, it’s potentially safer to store variables there. Heroku allows such variables to be set with heroku config:add SENDGRID_USERNAME=example.

Helpers

The next few lines are new to Express 3:

app.locals.errors = {};
app.locals.message = {};

The app.locals object is passed to all templates, and it’s how helpers are defined in Express 3 applications. I’ve set these properties up front so I can write templates without first checking if the objects exist – otherwise a ReferenceError would be raised.

Middleware Callbacks: CSRF Protection

I’ve mentioned CSRF but haven’t fully explained it yet. It stands for “Cross-Site Request Forgery”, and is a class of exploits in web applications where an attacker forces another user to execute unwanted actions on a web site. A contact form isn’t a particularly valuable target, but it’s good practice to guard against CSRF attacks in production web apps. The Open Web Application Security Project has a good article on CSRF, which includes example attacks.

function csrf(req, res, next) {
  res.locals.token = req.session._csrf;
  next();
}

The Connect CSRF middleware automatically generates the req.session._csrf token, and this function maps it to res.locals.token so it will be available to templates. Any route that needs CSRF protection now just needs to include the middleware callback:

app.get('/', csrf, function(req, res) {
  res.render('index');
});

The form in views/index.jade has a hidden input:

form(action='/contact', method='post')
  input(type='hidden', name='_csrf', value=token)

The token variable is the one set by the middleware callback in res.locals.token.

Validating Data

The contact form must be validated before an email is sent. Seeing as database storage isn’t necessary for this project, we can use the node-validator module to verify user input. I’ve put this in a function to abstract it from the corresponding route:

function validate(message) {
  var v = new Validator()
    , errors = []
    ;

  v.error = function(msg) {
    errors.push(msg);
  };

  v.check(message.name, 'Please enter your name').len(1, 100);
  v.check(message.email, 'Please enter a valid email address').isEmail();
  v.check(message.message, 'Please enter a valid message').len(1, 1000);

  return errors;
}

An instance of a Validator is created, and I’ve set a custom error handling function. This error handling function collects the errors into an array, but there are many other solutions supported by node-validator’s API.

Each message property is checked against a single validation, but several could be chained together.
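
For example, inside validate the email check could combine a length constraint with the format check, since node-validator’s checks are chainable:

v.check(message.email, 'Please enter a valid email address')
  .len(6, 64)
  .isEmail();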

The validate function itself expects a message object which will come from the posted form later.

Sending Email

Emails are sent with SendGrid. Again, I’ve made a function for this to keep it out of the corresponding Express routes:

function sendEmail(message, fn) {
  sendgrid.send({
    to: process.env.EMAIL_RECIPIENT
  , from: message.email
  , subject: 'Contact Message'
  , text: message.message
  }, fn);
}

I’ve made it accept a callback so the Express route can handle cases where sending the mail fails.

Posting the Form

Here is the Express route that handles the form post:

app.post('/contact', csrf, function(req, res) {
  var message = req.body.message
    , errors = validate(message)
    , locals = {}
    ;

  function render() {
    res.render('index', locals);
  }

  if (errors.length === 0) {
    sendEmail(message, function(success) {
      if (!success) {
        locals.error = 'Error sending message';
        locals.message = message;
      } else {
        locals.notice = 'Your message has been sent.';
      }
      render();
    });
  } else {
    locals.error = 'Your message has errors:';
    locals.errors = errors;
    locals.message = message;
    render();
  }
});

It uses the csrf middleware callback to expose the token again. This is required because the contact form will always be rerendered. The form data can be found in req.body.message – I’ve used form field names like message[email], so these will get translated into a JavaScript object with corresponding properties.

When there are invalid fields, or sending the email fails, the contact form will be rendered again with the original message. To make the form retain the values, the value property of each field must be set:

form(action='/contact', method='post')
  input(type='hidden', name='_csrf', value=token)
  .control-group
    label.control-label(for='message_name') Your Name
    .controls
      input#message_name.input-xxlarge(type='text', placeholder='Name', name='message[name]', value=message.name)
  .control-group
    label.control-label(for='message_email') Email
    .controls
      input#message_email.input-xxlarge(type='text', placeholder='Email', name='message[email]', value=message.email)
  .control-group
    label.control-label(for='message_message') Message
    .controls
      textarea#message_message.input-xxlarge(placeholder='Enter message', rows='6', name='message[message]')=message.message
  button.btn(type='submit') Send Message

This is quite a chunk of Jade, but the extra markup is there because I’ve used Bootstrap to style the project.

The locals object I’ve used gets passed to the res.render call and contains the form data when required.

Download

The full source is available here: alexyoung / dailyjs-contact-form-tutorial.

Node Roundup: 0.8.9, xmlson, Mubsub, Book on libuv

12 Sep 2012 | By Alex Young | Comments | Tags node modules libraries books xml json pubsub
You can send in your Node projects for review through our contact form or @dailyjs.

Node 0.8.9

Node 0.8.9 is out, and this looks like a significant release judging by the long changelog. v8, npm, and GYP have all been updated, and there are quite a few platform-specific bug fixes relating to memory.

xmlson

xmlson (License: MIT, npm: xmlson) by the developers at Fractal is a libexpat-based XML/JSON conversion module:

var xmlson = require('xmlson');

xmlson.toJSON('<p><h1 title="Details">Title</h1></p>', function(err, obj) {
  // Do something with obj
  console.log(obj.p.h1)
});

In the previous example, [ { '@title': 'Details', text: 'Title' } ] will be printed, so attributes are included when converting to JSON. There’s also a synchronous API. Installing xmlson with npm will compile the necessary dependencies with gyp.

This module is a fairly lightweight wrapper around ltx, which is worth checking out.

Mubsub

Mubsub (License: MIT, npm: mubsub) by Scott Nelson is a publish–subscribe implementation that uses MongoDB:

It utilizes Mongo’s capped collections and tailable cursors to notify subscribers of inserted documents that match a given query.

To use it, a channel must be created and then subscribed to. It can work with MongoDB connection URLs, so it’s fairly easy to drop into an existing MongoDB-based Node project. It comes with Mocha/Sinon.JS tests.
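
The flow looks roughly like this – the method names below are from memory and should be treated as assumptions, so check the readme for the current API:

var mubsub = require('mubsub');

// Connect with a MongoDB URL and create a channel (a capped collection)
var client = mubsub('mongodb://localhost:27017/example');
var channel = client.channel('events');

// Subscribers are notified of inserted documents matching a query
channel.subscribe({ type: 'signup' }, function(doc) {
  console.log('New signup:', doc);
});

// Publishing inserts a document into the capped collection
channel.publish({ type: 'signup', user: 'alex' });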

Book: An Introduction to libuv

An Introduction to libuv by Nikhil Marathe is a guide to libuv. It covers streams, threads, processes, event loops, and utilities. If you’re trying to understand what makes Node different, and how its asynchronous and event-based design works, then this is actually a great guide. Try looking at Basics of libuv: Event loops as an example.

jQuery Roundup: jQuery License Change, FileUploader, Raphaël Tutorial

11 Sep 2012 | By Alex Young | Comments | Tags jquery plugins raphael file
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

jQuery Now MIT Licensed

jQuery was previously dual licensed under the MIT and GPL. This shouldn’t technically change anything, because the work could be relicensed under the GPL if required:

Having just one license option makes things easier for the Foundation to manage and eliminates confusion that existed about the Foundation’s previous dual-licensing policy. However, this doesn’t affect your ability to use any of the Foundation’s projects. You are still free to take a jQuery Foundation project, make changes, and re-license it under the GPL if your situation makes that desirable.

Contributors are being asked to sign a license agreement to ensure everything published under the jQuery Foundation has the necessary legal background. The license agreement has been modeled on the Contributor Agreements for copyright assignment, published by the Civic Commons Community.

FileUploader

FileUploader (License: MIT/GPL2/LGPL2) by Andrew Valums and Ray Nicholus is a File API wrapper. It can handle multiple uploads by using XMLHttpRequest, and will fall back to an iframe-based solution in older browsers. The API looks like this:

var uploader = new qq.FileUploader({
  // pass the dom node (ex. $(selector)[0] for jQuery users)
  element: document.getElementById('file-uploader'),

  // path to server-side upload script
  action: '/server/upload'
});

It doesn’t have any external dependencies, and has many advanced features, including drag-and-drop file selection, multiple uploads, and keyboard support.

Raphaël Tutorial

Making a Simple Drawing Application using RaphaëlJS is a tutorial by Callum Macrae that uses Raphaël and jQuery to create a simple paint program. It includes a basic introduction to Raphaël, and uses jQuery-based event handling to create the mouse-driven drawing interface.

Mastering Node Streams: Part 1

10 Sep 2012 | By Roly Fentanes | Comments | Tags tutorials node streams

Streams are one of the most underused data types in Node. If you’re deep into Node, you’ve probably heard this before. But seeing several new modules pop up that are not taking advantage of streams or using them to their full potential, I feel the need to reiterate it.

The common pattern I see in modules which require input is this:

var foo = require('foo');

foo('/path/to/myfile', function onResult(err, results) {
  // do something with results
});

By only letting your module’s entry point accept a path to a file, you are limiting users to a readable file stream that they have no control over.

You might think that it’s very common for your module to read from a file, but this overlooks the fact that a stream is not only a file stream. A stream could be several things: a parser, an HTTP request, or a child process, among other possibilities.

Only supporting file paths limits developers – any other kind of stream would have to be written to the file system and then read back later, which is less efficient. For one thing, it takes extra storage to persist the stream; it also takes longer to write the data to disk and then read back what the user needs.

To avoid this, the above foo module’s API should be written this way:

var fs = require('fs');

var stream = fs.createReadStream('/path/to/myfile');
foo(stream, function onResult(err, result) {
  // do something with results
});

Now foo can take in any type of stream, including a file stream. This is perhaps too long-winded when a file stream is passed; the fs module has to be required, and then a suitable stream must be created.

The solution is to allow both a stream and a file path as arguments.

foo(streamOrPath, function onResult(err, results) {
  // ...
});

Inside foo, it checks the type of streamOrPath, and will create a stream if needed.

var fs = require('fs');

module.exports = function foo(streamOrPath, callback) {
  var stream;
  if (typeof streamOrPath === 'string') {
    stream = fs.createReadStream(streamOrPath);
  } else if (streamOrPath.pipe && streamOrPath.readable) {
    stream = streamOrPath;
  } else {
    throw new TypeError('foo can only be called with a stream or a file path');
  }

  // do whatever with `stream`
};

There you have it, really simple right? So simple I’ve created a module just for this common use case, called streamin.

var streamin = require('streamin');

module.exports = function foo(streamOrPath, callback) {
  var stream = streamin(streamOrPath);
  
  // do whatever with `stream`
};

Don’t be fooled by its name, streamin works with writable streams too.

In the next part, I’ll show you how modules like request return streams synchronously even when they’re not immediately available.

HexGL, one.color, fs.js

07 Sep 2012 | By Alex Young | Comments | Tags webgl libraries html5 filesystem

HexGL

HexGL is a WebGL-powered racing game similar in style to WipEout, developed by Thibaut Despoulain. It’s built using three.js, and is a pretty solid and fun game. One aspect that impressed me is there’s a selector for changing the quality, based on settings tailored for “Mobile”, “Mainstream”, and “Ultra” – the author suggests that the game should always run at 60fps.

Thibaut is planning on open sourcing the game, and his blog has a feed so you can stay up to date that way or by following @BKcore on Twitter.

one.color

one.color (License: BSD, npm: onecolor) is a browser and Node colour manipulation library. Morgan Roderick suggested this library on Twitter after seeing our jQuery Color coverage, and also pointed out that one of the creators has posted a video about it: Peter Müller: One-color.js.

This library has a chainable API, supports alpha channels and colour names, and has Vows tests to back it all up.

fs.js

fs.js (License: MIT, npm: fs.js) by Manuel Astudillo is a wrapper for the HTML5 File API, based on Node’s fs module. It’s got some Mocha unit tests, and supports the use of prefixed file systems:

var sizeInBytes = 1024 * 1024
  , prefix = 'filetest';

FSFactory.create(sizeInBytes, 'testfs', function(err, fs) {
  fs.read('foo', function(err, data){
    // data contains file contents.
  });
});

AngularJS: About Those Custom Attributes...

06 Sep 2012 | By Alex Young | Comments | Tags mvc tutorials angularjs

The first thing I noticed on the AngularJS homepage was the use of a non-standard attribute, ng-app:

<div ng-app>
  <div>
    <label>Name:</label>
    <input type="text" ng-model="yourName" placeholder="Enter a name here">
    <hr>
    <h1>Hello {{yourName}}!</h1>
  </div>
</div>

Suspicious as I am, I wanted to look into this further. Running a more complete HTML5 example through the w3.org validator shows errors for each ng- attribute:

  • Attribute ng-app not allowed on element div at this point.
  • Attribute ng-model not allowed on element div at this point.

Earlier HTML specifications state that unrecognised attributes should be ignored, so this should be safe enough – clients will generally ignore the unrecognised attribute and JavaScript can handle it as required by AngularJS.

The AngularJS developers have gone a step further to quell fears of rogue attributes causing unexpected issues: it now transparently supports data- prefixed attributes. That means the previous example could be written with data-ng-app and it would still work. I tried it out and found that it even copes with mixed attribute styles.

Knockout

Unlike AngularJS, Knockout embraced data- attributes from the beginning. The documentation even clarifies the use of data attributes:

The data-bind attribute isn’t native to HTML, though it is perfectly OK (it’s strictly compliant in HTML 5, and causes no problems with HTML 4 even though a validator will point out that it’s an unrecognized attribute). But since the browser doesn’t know what it means, you need to activate Knockout to make it take effect.

Although AngularJS now fully supports this approach, using custom attributes may have hurt early adoption.

Directives

The underlying mechanism that AngularJS uses to support multiple attribute prefixes is Directives, which according to the documentation turns HTML into a “declarative domain specific language”. You may have noticed that AngularJS templates are HTML – this contrasts with many other frameworks that use a string-based template system. Since templates are HTML, the entire page can be loaded and parsed by the browser. The resulting DOM is traversed by AngularJS’s compiler to find directives. The resulting set of directives is associated with DOM elements and prioritised. Each directive has a compile method, which can modify the DOM, and generates a link function.
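
To make the compile/link split concrete, here’s a minimal, made-up directive using the AngularJS 1.x API – the highlight directive itself is purely illustrative and not from the framework’s source:

var app = angular.module('example', []);

app.directive('highlight', function() {
  return {
    compile: function(templateElement, templateAttrs) {
      // compile runs once against the template element and may transform the DOM

      return function link(scope, element, attrs) {
        // link runs for each cloned instance, binding it to a scope
        element.css('background-color', attrs.highlight || 'yellow');
      };
    }
  };
});

In markup the directive is then used as an attribute, for example highlight="pink" on any element.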

Links are live bindings, and splitting compilation into stages like this means AngularJS can do a certain amount of work before repeatedly rendering sets of elements. The example in the documentation is rendering lots of list elements:

The result of the li element compilation is a linking function which contains all of the directives contained in the li element, ready to be attached to a specific clone of the li element.

Conclusion

Although AngularJS may have been treated with some trepidation due to the adoption of non-standard HTML attributes, the authors have identified this and it’s possible to write applications that will validate. The “declarative domain specific language” concept is definitely interesting, and the two-stage compilation process has some advantages over other schemes that I’ve seen.

Node Roundup: redis-stream, DataGen, Cushion

05 Sep 2012 | By Alex Young | Comments | Tags node modules libraries streams redis couchdb
You can send in your Node projects for review through our contact form or @dailyjs.

redis-stream

redis-stream (License: MIT, npm: redis-stream) by Thomas Blobaum is a stream-based wrapper around the Redis protocol. It’s actually an extremely lightweight module, but the author has included tests and some interesting examples. The standard Node stream methods work, so data can be piped:

var Redis = require('redis-stream')
  , client = new Redis(6379, 'localhost', 0)
  , rpop = client.stream('rpop');

rpop.pipe(process.stdout);
rpop.write('my-list-key');

This doesn’t just apply to rpop, other Redis commands will also work in a similar way.

DataGen

DataGen (GitHub: cliffano / datagen, License: MIT, npm: datagen) by Cliffano Subagio is a multi-process test data file generator. It can be used to generate files in various formats, including CSV and JSON, based on template files that describe the output. Random numbers, dates, and strings can be generated.

The underlying random data generation is based on the Faker library, and Mocha tests are included.

Cushion

Cushion (GitHub: Zoddy / cushion, License: MIT, npm: cushion) by André Kussmann is a CouchDB API. It has Node-friendly asynchronous wrappers around the usual CouchDB API methods, and it also supports low-level requests by calling cushion.request. Fetching documents returns a document object that can be modified and saved like this:

var doc = db.document('id');
doc.load(function(err, document) {
  document.body({ name: 'Quincy' });
  document.save();
});

Designs and users can also be fetched and manipulated.

RoyalSlider: Tutorial and Code Review

04 Sep 2012 | By Alex Young | Comments | Tags libraries browser plugins jquery sponsored-content

There are a lot of carousel-style plugins out there, and they all have various strengths and weaknesses. However, RoyalSlider (License: Commercial, CodeCanyon: RoyalSlider, Price: $12) by Dmitry Semenov is a responsive, touch-enabled, jQuery image gallery and content slider plugin, and is one of the slickest I’ve seen. The author has worked hard to ensure it’s fast and efficient – it features smart lazy loading, hardware accelerated CSS3 transitions, and a memory management algorithm that ensures only visible slides are in the DOM at any one time.

The plugin is actively maintained, and has seen over a dozen updates since its release in August 2011. It’s distributed exclusively through CodeCanyon, but Dmitry’s site also has documentation and details on WordPress integration. Purchasing RoyalSlider gives access to a set of RoyalSlider templates that includes several types of galleries that should slot right in to your projects.

Since the plugin was originally released it has received extremely positive feedback (which is partly why it was chosen for a Featured Content post) – Dmitry has sold over 4,500 licenses, and it’s earned a 5 star rating based on 378 reviews.

Browser Support

RoyalSlider has been tested on IE7+, iOS, Opera Mobile, Android 2.0+, Windows Phone 7+, and BlackBerry OS.

Download and Setup

RoyalSlider’s build tool

RoyalSlider can be downloaded as either a development archive (that contains the original, unminified source), or a customised build can be created using Dmitry’s web-based build tool (access is granted once a license has been purchased).

To add RoyalSlider to a page, ensure you’ve included jQuery 1.7 or above, and then include the stylesheet and JavaScript:

<link rel="stylesheet" href="royalslider/royalslider.css">
<script src="royalslider/jquery.royalslider.min.js"></script>

The plugin expects a container element with the royalSlider class. Each child element will be considered a slide:

<div class="royalSlider rsDefault">
  <!-- simple image slide -->
  <img class="rsImg" src="image.jpg" alt="image desc" />

  <!-- lazy loaded image slide -->
  <a class="rsImg" href="image.jpg">image desc</a>

  <!-- image and content -->
  <div>
    <img class="rsImg" src="image.jpg" data-rsVideo="https://vimeo.com/44878206" />
    <p>Some content after...</p>
  </div>
</div>

Then all you need to do is run $.fn.royalSlider:

$(function() {
  $('.royalSlider').royalSlider();
});

At this point options can be provided, and believe me there are a lot of options!

Examples

RoyalSlider example

The templates distributed alongside RoyalSlider include full examples with JavaScript, CSS, and HTML. The example above is suitable for a gallery, and it includes quite a few interesting features:

  • Scrolling thumbnail navigation
  • Fullscreen mode
  • Automatically loads higher quality images in fullscreen mode
  • Responsive images using media queries
  • Keyboard arrow navigation

To set up a gallery like this, all that’s required is suitable images and $.fn.royalSlider with the options along these lines:

$('#gallery-1').royalSlider({
  fullscreen: {
    enabled: true
  , nativeFS: true
  }
, controlNavigation: 'thumbnails'
, autoScaleSlider: true
, autoScaleSliderWidth: 960
, autoScaleSliderHeight: 850
, loop: false
, numImagesToPreload: 4
, arrowsNavAutoHide: true
, arrowsNavHideOnTouch: true
, keyboardNavEnabled: true
});

The option names are fairly verbose so it’s easy to tell what they do, but I’ll go over the main ones below.

  • autoScaleSlider: This automatically updates the slider height based on the width, most of the examples use this option
  • numImagesToPreload: Sets the number of images to load relative to the current image
  • arrowsNavAutoHide: Hide the navigation arrows when the user isn’t interacting with the plugin

Mobile Support

RoyalSlider running on Android and iOS

RoyalSlider includes several ways to support touchscreen devices. Swipe gestures work as expected, and there are a couple of relevant options:

  • arrowsNavHideOnTouch: Always hide arrows on touchscreen devices
  • sliderTouch: Allows the slider to work using touch-based gestures

There are also events for dealing with gestures, which you can hook into like this:

sliderInstance.ev.on('rsDragStart', function() {
  // mouse/touch drag start
});

sliderInstance.ev.on('rsDragRelease', function() {
  // mouse/touch drag end
});

I tested the plugin using several examples on iOS and Android 4.1 and was generally impressed by the performance.

Code Review

When I look at jQuery plugins I usually run through the advice found in the jQuery Plugin Authoring Guide. I’d like to only write about plugins that are well-written, and you’d be surprised how many are not, given that the jQuery team has worked hard to document exactly how to write a plugin. With that in mind, I took a look at RoyalSlider’s source to see how it stacks up.

RoyalSlider is split up into separate files using a modular approach. That enables the build tool to only include what’s necessary, so it’s actually pretty trivial to make a build directly suited to a given project. The code is also consistently formatted, so I strongly recommend downloading the development version just in case you’ve got questions that aren’t answered by the documentation – the code is easy enough to understand for an intermediate jQuery developer.

All of these modules and the main source file are wrapped in closures, so RoyalSlider doesn’t introduce any messy globals.

Most of the plugin’s code is based around a standard JavaScript constructor, which also adds to its readability. This made me wonder if the author intends to port it to other JavaScript frameworks, because it seems like large portions of functionality are neatly encapsulated from jQuery’s API.

In terms of low-level DOM coding and animation performance, it has Paul Irish and Tino Zijdel’s requestAnimationFrame fixes, and uses CSS vendor prefixing where required.

Namespacing

RoyalSlider adds these methods and objects to $:

  • $.rsProto
  • $.rsCSS3Easing
  • $.rsModules
  • $.fn.royalSlider

In general plugins should limit how many things they add to $, but I felt like the author has been careful here and only exposed what’s necessary.

  • Namespaces events and CSS classes, example: keydown.rskb
  • Correctly tracks state using a royalSlider .data() attribute

Other Notes

Most jQuery plugin authors seem to miss the section on using $.extend to handle options, but I was pleased to see Dmitry has done this. The main jQuery method also returns this, so calls after .royalSlider can be chained as expected.
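
For readers unfamiliar with the pattern, a generic sketch looks like this – this is not RoyalSlider’s actual source, and the plugin name and option are made up:

(function($) {
  $.fn.myPlugin = function(options) {
    // Merge caller-supplied options over the defaults
    var settings = $.extend({
      colour: 'red' // hypothetical default option
    }, options);

    // Returning `this` keeps the jQuery chain intact
    return this.each(function() {
      $(this).css('color', settings.colour);
    });
  };
}(jQuery));

// Chaining still works afterwards:
// $('.item').myPlugin({ colour: 'blue' }).fadeIn();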

Support and Community

RoyalSlider has its own Tender-powered support site, and the author also talks to users through his Twitter account: @dimsemenov.

WebSpecter, cerebral.js, Mobify.js

03 Sep 2012 | By Alex Young | Comments | Tags testing frameworks libraries backbone.js mobile

WebSpecter

WebSpecter (License: MIT) by Juliusz Gonera is an acceptance test framework built using PhantomJS. The author’s examples are written with CoffeeScript, but it can be used with JavaScript as well.

The tests use a BDD-style syntax, based around “features” and CSS selectors:

feature "GitHub search", (context, browser, $) ->
  before (done) -> browser.visit 'https://github.com/search', done

  it "finds WebSpecter", (done) ->
    $('input[name=q]').fill 'webspecter'
    $(button: 'Search').click ->
      $(link: "jgonera / webspecter").present.should.be.true
      done()

  it "looks only for users when asked to", (done) ->
    $('input[name=q]').fill 'webspecter'
    $(field: 'Search for').select 'Users'
    $(button: 'Search').click ->
      $(link: "jgonera / webspecter").present.should.be.false
      done()

The browser object is a wrapper around Phantom’s WebPage. A $ function is also present which is jQuery-like but not implemented using jQuery.

cerebral.js

cerebral.js (GitHub: gorillatron / cerebral) by Andre Tangen extends Backbone.js to provide a module system and a publish/subscribe application core. It uses RequireJS for modules and module loading, and modules are restricted to a “sandbox” designed to limit the elements the module has access to.

The main motivation behind cerebral.js is to encourage loosely coupled applications. When I’m working on my own Backbone.js applications I usually adopt a similar approach, so it’s reassuring to see the same ideas in a framework.

Mobify.js

Mobify.js (GitHub: mobify / mobifyjs, License: MIT, npm: mobify-client) is a new client-side web framework that aims to make it easier to adapt sites to any device. This includes responsive design techniques, but it can also be backed by a cloud service called Mobify Cloud that includes automatic image resizing, JavaScript concatenation, and a CDN. Mobify.js projects are built with Zepto and Dust.js.

The Mobify.js authors have also been building MIT-licensed Mobify.js modules; at the moment there’s a carousel and an accordion.

js13kGames, simplex-noise.js, Media Chooser, User Message Queue

31 Aug 2012 | By Alex Young | Comments | Tags games competitions services node

js13kGames

js13kGames is an HTML5 and JavaScript game development competition. It’s currently open for entries, and the competition will close on 13 September 2012. The basic rule states that entries must be less than 13 KB, but please read through all of the rules before entering.

The judges include Michal Budzynski (Firefox OS developer) and Rob Hawkes (Mozilla), and the competition was organised by Andrzej Mazur.

simplex-noise.js

simplex-noise.js (npm: simplex-noise) by Jonas Wagner is a simplex noise implementation, which is often used to generate noise for graphics. The author has posted a plasma demo to jsFiddle.

Media Chooser

Media Chooser (GitHub: chute / media-chooser) from Chute is a client-side library for working with Chute’s media API. Files can be uploaded or selected from social networks like Facebook and Instagram. It’s an extremely simple way of accepting file uploads in a single page application without the traditional server-side requirements.

User Message Queue

User Message Queue (License: MIT) by Robert Murray is a FIFO message queue. It allows messages to be pushed to a queue that will be displayed one after another in a suitable container element. It’s simple and lightweight, so it might work well in combination with a client-side toolkit like Bootstrap.

Optimistic Server Interactions

30 Aug 2012 | By Alex Kessinger | Comments | Tags mobile async
Alex Kessinger is a programmer who lives in the Bay Area. He strives to make websites, cook, and write a little bit better each day. You can find more from Alex at his blog, on App.net, and Google+.

At PicPlz we built a hybrid mobile app. We had a native container written for iOS and Android that hosted a web version of our code. PicPlz was the first time I had worked on a fully mobile website. My operating paradigm was that I was building a website for a small screen.

One day, our iOS developer asked me why our follow button didn’t just react when a user touched it. He pointed out that most iOS apps work that way. It was a glaring reminder that our app was something other than native. I genuinely had never thought about doing it any other way. I was building a web app, and when building web apps there is network IO. For things to be consistent, you need to wait until the network IO has finished. The other engineer persisted though, claiming that it doesn’t have to work that way.

In order to make it feel more native I wrote the code so that the button would activate and change state immediately. If there was an error, which was infrequent, the button would flip back to inform the user. In the other 99.99% of the time the user would feel as if the interaction happened immediately.

Since implementing these interactions in PicPlz I have found out what they are called: Optimistic server interactions. While it is how things work in most mobile applications, it’s not how most things work in web applications. Why? Well, we all know exactly what’s going on when we make a request to a server – nothing is certain unless a response is received. When we see a spinner or a loading bar we understand, but does a user? Do they understand that your web page is making HTTP requests on their behalf, or are they about to click away from your website because it feels slow?

I’m sure some of you are worried that this approach feels strange from a user experience point of view. Yes, it’s weird, but how often will this happen? If your code is that fragile, then you might have a bigger problem.

Coding Style

There are times when optimistic server interactions are awkward to write. For example, building a chain of such interactions will result in highly indented callbacks.

Despite this, most cases shouldn’t be more complex than the following pseudo-code example:

$('body').on('click', '.favorite', function() {
  var button = $(this);
  button.addClass('active');
  $.post('/follow', { 'favorite': true }).fail(function() {
    // flip favorite button to inactive
    button.removeClass('active');
    // inform user action failed.
  });
});

Another criticism is that if this happens too often, users will begin to question whether their actions are actually doing anything. This is a valid concern, but as I said earlier if your code really is failing this often then you probably have larger problems.

To be fair, I haven’t really tried this on any major piece of code. This is a trick I use mostly for small interactions like follow or favorite buttons. Web apps like Google Docs are clearly using this type of interaction all the time. Still, this technique is slowly working its way into larger interactions. If you do client-side rendering, then you’re 90% there. You can capture user input and update the interface immediately.

I’d like to thank Mark Storus for providing counter arguments.

Node Roundup: Stream Handbook, Screenshot as a Service, captchagen, Suppose

29 Aug 2012 | By Alex Young | Comments | Tags node modules libraries security unix streams
You can send in your Node projects for review through our contact form or @dailyjs.

Stream Handbook

Stream Handbook by the venerable James Halliday is a guide to streams, a commonly overlooked feature of Node that’s only just starting to get the attention it deserves.

So far James has written a solid introduction to streams, and he’s working on adding more detailed coverage based on Node’s related API methods and objects.

Screenshot as a Service

Screenshot as a Service (GitHub: fzaninotto / screenshot-as-a-service, License: MIT) by Francois Zaninotto is a fork of TJ Holowaychuk’s screenshot-app, which is running at screenshot.etf1.fr. Since forking the app, Francois has worked on making it more robust. It can be used synchronously or asynchronously:

# Take a screenshot
GET /?url=www.google.com

# Asynchronous call
GET /?url=www.google.com&callback=http://www.myservice.com/screenshot/google

captchagen

captchagen (License: MIT, npm: captchagen) from the team at Fractal is a CAPTCHA image generator. It can generate both a PNG and the corresponding audio through eSpeak.

Images are generated based on a custom algorithm and the Canvas module. Mocha tests have been included.

Suppose

Suppose (GitHub: jprichardson / node-suppose, License: MIT, npm: suppose) by JP Richardson is a JavaScript version of Expect (man expect). It has a chainable API, so it’s easy to create complex expectations with a familiar syntax:

suppose('npm', ['init'])
  .debug(fs.createWriteStream('/tmp/debug.txt'))
  .on(/name\: \([\w|\-]+\)[\s]*/).respond('awesome_package\n')
  .on('version: (0.0.0) ').respond('0.0.1\n')
  .on('description: ').respond("It's an awesome package man!\n")
  .on('entry point: (index.js) ').respond("\n")
  .on('test command: ').respond('npm test\n')
  .on('git repository: ').respond("\n")
  .on('keywords: ').respond('awesome, cool\n')
  .on('author: ').respond('JP Richardson\n')
  .on('license: (BSD) ').respond('MIT\n')
  .on('ok? (yes) ').respond('yes\n')
  .error(function(err) {
    console.log(err.message);
  })
  .end(function(code) {
    console.log('npm init exited with code ' + code);
  });

The author has included Mocha tests and examples in the readme file.

jQuery Roundup: jQuery Color 2.1.0, jQuery UI 1.9 RC, Avgrund Modal

28 Aug 2012 | By Alex Young | Comments | Tags jquery plugins jqueryui effects
Note: You can send your plugins and articles in for review through our contact form or @dailyjs.

jQuery Color 2.1.0

jQuery Color 2.1.0 (GitHub: jquery / jquery-color, License: MIT) has been released. This plugin includes lots of methods for defining, parsing, and otherwise manipulating and animating colours. Version 2 includes new API methods that allow colours to be created and modified, and this includes support for RGBA and HSLA colours and animations.

Here are some examples of the plugin in use:

// String colour parsing
$.Color('#abcdef');
$.Color('rgba(100,200,255,0.5)');
$.Color('aqua');

// RGB
$.Color(255, 0, 0);

// RGBA
$.Color(255, 0, 0, 0.8);

// Objects work as well
$.Color({ red: red, green: green, blue: blue, alpha: alpha });

// Getters and setters
$.Color(255, 100, 130)
  .green(101)
  .green(); // 101

// Conversion
$.Color(255, 100, 130).toRgbaString(); // 'rgb(255,100,130)'

jQuery UI 1.9 RC

jQuery UI 1.9 RC has been released, and updates jQuery to 1.8 and jQuery Color to the 2.0 series. The jQuery UI team are also working on upgrading the project’s infrastructure:

We’re working on a new web site, new download builder, and new documentation site to accompany the new release.

Avgrund Modal

Avgrund Modal (GitHub: voronianski / jquery.avgrund.js, License: MIT) by Dmitri Voronianski is a modal plugin that attempts to create the impression of depth as the modal appears on the page. The main content zooms out as the modal appears – the overall effect is surprisingly slick. Basic usage is $(selector).avgrund(), but the plugin has lots of options:

$('element').avgrund({
  width: 380
, height: 280
, showClose: false
, showCloseText: ''
, holderClass: ''
, overlayClass: ''
, enableStackAnimation: false
, template: 'Your content goes here..'
});

This plugin is based on the Avgrund concept by Hakim El Hattab.

JS101: Equality

27 Aug 2012 | By Alex Young | Comments | Tags js101 tutorials language beginner

There are four equality operators in JavaScript:

  • Equals: ==
  • Not equal: !=
  • Strict equal: ===
  • Strict not equal: !==

In JavaScript: The Good Parts, Douglas Crockford advises against using == and !=:

My advice is to never use the evil twins. Instead, always use === and !==.

The result of the equals operator is calculated based on The Abstract Equality Comparison Algorithm. This can lead to confusing results, and these examples are often cited:

'' == '0'           // false
0 == ''             // true
0 == '0'            // true

false == undefined  // false
false == null       // false
null == undefined   // true

Fortunately, we can look at the algorithm to better understand these results. The first example is false due to this rule:

If Type(x) is String, then return true if x and y are exactly the same sequence of characters (same length and same characters in corresponding positions). Otherwise, return false.

In other words, the two strings are not the same sequence of characters. In the second example, the types differ, so this rule applies:

If Type(x) is Number and Type(y) is String, return the result of the comparison x == ToNumber(y).

This is where the behaviour of the == operator starts to get seriously gnarly: behind the scenes, values and objects are converted to different types. The equality operator always tries to compare primitive values, whereas the strict equality operator will return false if the two values are not the same type. For reference, the underlying mechanism used by the strict equality operator is documented in The Strict Equality Comparison Algorithm section of the ECMAScript Specification.
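
To make the conversion concrete, here's roughly how the earlier examples play out under that rule (a simplified sketch of the specification's steps, not actual engine code):

// Type(x) is Number and Type(y) is String, so y is converted with ToNumber:
0 == '';   // ToNumber('') is 0,  so this becomes 0 == 0, which is true
0 == '0';  // ToNumber('0') is 0, so this becomes 0 == 0, which is true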

Strict Equality Examples

Using the same example with the strict equality operator shows an arguably more intuitive result:

'' === '0'           // false
0 === ''             // false
0 === '0'            // false

false === undefined  // false
false === null       // false
null === undefined   // false

Is this really how professional JavaScript developers write code? And if so, does === get used that often? Take a look at ajax.js from jQuery’s source:

executeOnly = ( structure === prefilters );
if ( typeof selection === "string" ) {
} else if ( params && typeof params === "object" ) {

The strict equality operator is used almost everywhere, apart from here:

if ( s.crossDomain == null ) {

In this case, == treats undefined and null as equal, which is one situation where it's often used in preference to the stricter equivalent:

if ( s.crossDomain === null || s.crossDomain === undefined ) {
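
A minimal sketch of why the single == null check is enough (crossDomain here is just an illustrative property on a plain object):

var s = {};             // crossDomain was never set
s.crossDomain == null;  // true: undefined == null
s.crossDomain = false;
s.crossDomain == null;  // false: only null and undefined match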

Assertions

One place where the difference between equality and strict equality becomes apparent is in JavaScript unit tests. Most assertion libraries include a way to check 'shallow' and 'deep' equality. In CommonJS Unit Testing, these are known as assert.equal and assert.deepEqual.

In the case of deepEqual, there’s specific handling for dates and arrays:

equivalence is determined by having the same number of owned properties (as verified with Object.prototype.hasOwnProperty.call), the same set of keys (although not necessarily the same order), equivalent values for every corresponding key, and an identical “prototype” property
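
A quick illustration using Node's built-in assert module (assert.strictEqual is included for comparison):

var assert = require('assert');

assert.equal(1, '1');                  // passes: compares with ==
assert.strictEqual(1, 1);              // passes: compares with ===
assert.deepEqual({ a: 1 }, { a: 1 });  // passes: same keys, equivalent values

assert.strictEqual(1, '1');            // throws an AssertionError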

Conclusion

To understand how equality and strict equality work in JavaScript, primitive values and JavaScript’s implicit type conversion behaviour must be understood. In general, experienced developers advocate using ===, and this is good practice for beginners.

Because these operators cause so much confusion, there's a significant amount of documentation on the topic; see, for example, Comparison Operators in Mozilla's JavaScript Reference.

Minecraft Character WebGL, OpenSceneGraph, BroadStreet, Bootstrap

24 Aug 2012 | By Alex Young | Comments | Tags webgl threejs bootstrap backbone.js

Minecraft Character in WebGL

Minecraft Items demo

In Minecraft Character in WebGL, Jerome Etienne demonstrates how to render and animate Minecraft characters using his tQuery library. This was inspired by the Minecraft Items Chrome Experiment.

OpenSceneGraph

Mickey point cloud

OpenSceneGraph (GitHub: cedricpinson / osgjs, License: LGPL) by Cedric Pinson is a WebGL framework based on OpenSceneGraph – a 3D API typically used in C++ OpenGL applications. This means it’s possible for developers experienced with OpenSceneGraph to bring their projects across to a familiar environment that runs in modern browsers thanks to WebGL.

BroadStreet

BroadStreet (GitHub: DarrenHurst / BroadStreet, License: MIT) by Darren Hurst is a set of controls for Backbone.js. It includes a list selector, iOS-style toggles and alerts, SVG icons, and labels.

Each control is created with Backbone.View.extend, so the API looks like a standard Backbone view:

var toggle = new Toggle('controls', this).render();
toggle.setTitle('Example title');

The author recommends testing the project with a web server to avoid security restrictions caused when running the examples locally.

Bootstrap 2.1.0

Bootstrap 2.1.0 is out:

New docs, affix plugin, submenus on dropdowns, block buttons, image styles, fluid grid offsets, new navbar, increased font-size and line-height, 120+ closed bugs, and more. Go get it.

The Bootstrap homepage showcases the new features and has a slight redesign. Hopefully it’ll inspire Bootstrap users to customise their projects a little bit instead of using the same black gradient navigation bar on every single project!

How Ender Bundles Libraries for the Browser

23 Aug 2012 | By Rod Vagg | Comments | Tags ender frameworks modules libraries tutorials
This is a contributed post by Rod Vagg. This work is licensed under a Creative Commons Attribution 3.0 Unported License.

I was asked an interesting Ender question on IRC (#enderjs on Freenode) and as I was answering it, it occurred to me that the subject would be an ideal way to explain how Ender’s multi-library bundling works. So here is that explanation!

The original question went something like this:

When a browser first visits my page, it's only served Bonzo (a DOM manipulation library) as a stand-alone library, but on return visits it's also served Qwery (a selector engine), Bean (an event manager), and a few other modules in an Ender build. Can I integrate Bonzo into the Ender build in the browser for repeat visitors?

What’s Ender?

Let’s step back a bit and start with some basics. The way I generally explain Ender to people is that it’s two different things:

  1. It’s a build tool, for bundling JavaScript libraries together into a single file. The resulting file constitutes a new “framework” based around the jQuery-style DOM element collection pattern: $('selector').method(). The constituent libraries provide the functionality for the methods and may also provide the selector engine functionality.
  2. It’s an ecosystem of JavaScript libraries. Ender promotes a small collection of libraries as a base, called The Jeesh, which together provide a large portion of the functionality normally required of a JavaScript framework, but there are many more libraries compatible with Ender that add extra functionality. Many of the libraries available for Ender are also usable outside of Ender as stand-alone libraries.

The Jeesh is made up of the following libraries, each of these also works as a stand-alone library:

  • domReady: detects when the DOM is ready for manipulation. Provides $.domReady(callback) and $.ready(callback) methods.
  • Qwery: a small and fast CSS3-compatible selector engine. Does the work of looking up DOM elements when you call $('selector') and also provides $(elements).find('selector'), $(elements).and(elements) and $(elements).is('selector').
  • Bonzo: a DOM manipulation library, providing some of the most commonly used methods, such as $(elements).css('property', 'value'), $(elements).empty(), $(elements).after(elements||html), and many more.
  • Bean: an event manager, provides jQuery-style $(elements).bind('event', callback) and others.

The Jeesh gives you the features of these four libraries bundled into a neat package for only 11.7 kB minified and gzipped.

The Basics: Bonzo

Bonzo is a great way to start getting your head around Ender because it’s so useful by itself. Let’s include it in a page and do some really simple DOM manipulation with it.

<!DOCTYPE HTML>
<html lang="en-us">
<head>
  <meta http-equiv="Content-type" content="text/html; charset=utf-8">
  <title>Example 1</title>
</head>
<body>
  <script src="bonzo.js"></script>
  <script id="scr">
    // the contents of *this* script,
    var scr = document.getElementById('scr').innerHTML

    // create a <pre></pre>
    var pre = bonzo.create('<pre>')

    // fill it with the script text, append it to body and style it
    bonzo(pre)
      .text(scr)
      .css({
        fontWeight: 'bold',
        border: 'solid 1px red',
        margin: 10,
        padding: 10
      })
      .appendTo(document.body);

  </script>
</body>
</html>

You can run this as example1, also available in my GitHub repository for this article.

This should look relatively familiar to a jQuery user – you can see that Bonzo is providing some of the important utilities you need for modifying the DOM.

Bonzo Inside Ender

Let’s see what happens when we use a simple Ender build that includes Bonzo. We’ll also include Qwery so we can skip the document.getElementById() noise, and we’ll also use Bean to demonstrate how neatly the libraries can mesh together.

This is done on the command line with: ender build qwery bean bonzo. A file named ender.js will be created that can be loaded on a suitable HTML page.

Our script becomes:

$('<pre>')
  .text($('#scr').text())
  .css({
    fontWeight: 'bold',
    border: 'solid 1px red',
    margin: 10,
    padding: 10
  })
  .bind('click', function () {
    alert('Clickety clack');
  })
  .appendTo('body');

You can run this as example2, also available in my GitHub repository for this article.

Bonzo performs most of the work here but it’s bundled up nicely into the $ object (also available as ender). The previous example can be summarised as follows:

  • bonzo.create() is now working when HTML is passed to $().
  • Qwery does the work when $() is called with anything else, in this case $('#scr') is used as a selector for the script element.
  • We’re using the no-argument variant of bonzo.text() to fetch the innerHTML of the script element.
  • Bean makes a showing with the .bind() call, but the important point is that it’s integrated into our call-chain even though it’s a separate library. This is where Ender’s bundling magic shines.
  • bonzo.appendTo() takes the selector argument which is in turn passed to Qwery to fetch the selected element from the DOM (document.body).

Also important here, though we haven't demonstrated it, is that we can do all of this on multiple elements in the same collection. The first line could be changed to $('<pre></pre><pre></pre>') and we'd end up with two blocks, both responding to the click event.

Removing Bonzo

It’s possible to pull Bonzo out of the Ender build and manually stitch it back together again. Just like we used to do with our toys when we were children! (Or was that just me?)

First, our Ender build is now created with: ender build qwery bean (or we could run ender remove bonzo to remove Bonzo from the previous example’s ender.js file). The new ender.js file will contain the selector engine goodness from Qwery, and event management from Bean, but not much else.

Bonzo can be loaded separately, but we’ll need some special glue to do this. In Ender parlance, this glue is called an Ender Bridge.

The Ender Bridge

Ender follows the basic CommonJS Module pattern – it sets up a simple module registry and gives each module a module.exports object and a require() method that can be used to fetch any other modules in the build. It also uses a provide('name', module.exports) method to insert exports into the registry with the name of your module. The exact details here aren't important, and I'll cover how you can build your own Ender module in a later article; for now we just need a basic understanding of the module registry system.
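
As a rough sketch of that registry in action: provide() and require() are the registry functions described above, while the 'shouter' module and its shout() method are made up for illustration, so this only makes sense inside a page where an Ender build is loaded.

// how a bundled module effectively ends up in the registry
!function () {
  var module = { exports: {} };
  module.exports.shout = function (s) { return s.toUpperCase() + '!'; };
  provide('shouter', module.exports);
}();

// any other module in the build (or your own page code) can then fetch it
var shouter = require('shouter');
shouter.shout('hello'); // 'HELLO!'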

Using our Qwery, Bean and Bonzo build, the file looks something like this:

|========================================|
| Ender initialisation & module registry |
| (we call this the 'client library')    |
|========================================|
| 'module.exports' setup                 |
|----------------------------------------|
| Qwery source                           |
|----------------------------------------|
| provide('qwery', module.exports)       |
|----------------------------------------|
| Qwery bridge                           |
==========================================
| 'module.exports' setup                 |
|----------------------------------------|
| Bean source                            |
|----------------------------------------|
| provide('bean', module.exports)        |
|----------------------------------------|
| Bean bridge                            |
==========================================
| 'module.exports' setup                 |
|----------------------------------------|
| Bonzo source                           |
|----------------------------------------|
| provide('bonzo', module.exports)       |
|----------------------------------------|
| Bonzo bridge                           |
==========================================

To be a useful Ender library, the code should adhere to the CommonJS Module pattern when a module.exports or exports object exists. Many libraries already do this so they can operate both in the browser and in a CommonJS environment such as Node. Consider Underscore.js, for example: it detects the existence of exports and inserts itself onto that object if it exists, otherwise it inserts itself into the global (i.e. window) object. This is how Ender-compatible libraries that can also be used as stand-alone libraries work too.
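
The detection pattern looks roughly like this (a simplified sketch rather than Underscore's actual source; mylib and its each() method are placeholders):

(function (root) {
  var mylib = {
    each: function (list, fn) {
      for (var i = 0; i < list.length; i++) fn(list[i], i);
    }
  };

  if (typeof module !== 'undefined' && module.exports) {
    // CommonJS environment (Node, or an Ender build): register on module.exports
    module.exports = mylib;
  } else {
    // plain browser script: attach to the global object
    root.mylib = mylib;
  }
})(this);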

So, skipping over the complexities here, our libraries are registered within Ender and then we encounter the Bridge. Technically the bridge is just an arbitrary piece of code that Ender-compatible libraries are allowed to provide to the Ender CLI tool; it could be anything. The intention, though, is to use it as glue to bind the library into the core ender / $ object. A bridge isn't necessary and can be omitted – in this case everything found on module.exports is automatically bound to the ender / $ object. Underscore.js doesn't need a bridge because it conforms to the standard CommonJS pattern and its methods are utilities that logically belong on $ – for example, $.each(list, callback). If a module needs to operate on $('selector') collections then it needs a special binding for its methods. Many modules also require quite complex bindings to make them work nicely inside the Ender environment.

Bonzo has one of the most complex bridges that you'll find in the Endersphere, so we won't be looking into it here. If you're interested in digging deeper, a simpler bridge with some interesting features can be found in Morpheus, an animation framework for Ender. Morpheus adds a $.tween() method, a $('selector').animate() method, and some additional helpers.

The simplest form of Ender bridge is one that lifts the module.exports methods to a new namespace. Consider Moment.js, the popular date and time library. When used in a CommonJS environment it adds all of its methods to module.exports. Without a bridge, when added to an Ender build you’d end up with $.utc(), $.unix(), $.add(), $.subtract() and other methods that don’t have very meaningful names outside of Moment.js. They are also likely to conflict with other libraries that you may want to add to your Ender build. The logical solution is to lift them up to $.moment.utc() etc., then you also get to use the exported main function as $.moment(Date|String|Number). To achieve this, Moment.js’ bridge looks like this:

$.ender({ moment: require('moment') })

The $.ender() method is the way a bridge can add methods to the global ender / $ object; it takes an optional boolean argument to indicate whether the methods should operate on DOM element collections, i.e. $('selector').method().
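
As a sketch of that second form: highlight() is a made-up method for illustration, not part of any real bridge, and this assumes (as is the case in Ender builds) that this inside a collection method is the array-like element collection.

$.ender({
  highlight: function () {
    for (var i = 0; i < this.length; i++) {
      this[i].style.backgroundColor = 'yellow';
    }
    return this; // return the collection so the chain continues
  }
}, true);

// usable as $('selector').highlight()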

Bonzo in Parts

Back to what we were originally trying to achieve: we’re loading Bonzo as a stand-alone library and we want to integrate it into an Ender build in the browser. There are two important things we need to do to achieve this: (1) load Bonzo’s bridge so it can wire Bonzo into Ender, and (2) make Ender aware of Bonzo so a require('bonzo') will do the right thing because this is how the bridge fetches Bonzo.

Let’s first do this the easy way. With an Ender build that just contains Qwery and Bean and Bonzo’s bridge in a separate file named bonzo-ender-bridge.js, we can do the following:

<!-- the order of the first two doesn't matter -->
<script src="ender.js"></script>
<script src="bonzo.js"></script>
<script>
  provide('bonzo', bonzo)
</script>
<script src="bonzo-ender-bridge.js"></script>

If you look at the diagram of the Ender file structure above you'll see that we're replicating it with our <script> tags but replacing provide('bonzo', module.exports) with provide('bonzo', bonzo), as Bonzo has detected that it's not operating inside a CommonJS environment with module.exports available. Instead, it's attached itself to the global (window) object. Both provide() and require() are available on the global object and can be used outside of Ender (for example, to extract Bean out of an integrated build you could simply run var bean = require('bean')).

We can now continue to use exactly the same script as in our fully integrated Ender build example:

$('<pre>')
  .text($('#scr').text())
  .css({
    fontWeight: 'bold',
    border: 'solid 1px red',
    margin: 10,
    padding: 10
  })
  .bind('click', function () {
    alert('Clickety clack');
  })
  .appendTo('body');

You can run this as example3, also available in my GitHub repository for this article.

Reducing <script> Tags

The main problem with the last example is that we have three <script> tags in our page with files loading (synchronously) from our server. We can trim that down to just two, and if bonzo.js is already cached in the browser then it’ll just be loading one script.

We could achieve this by hacking our ender.js file to include the needed code, or we could create our own Ender package containing that code, so it persists even after the Ender CLI tool has rebuilt the file.

First we make a new directory to contain our package. We’ll include the Bonzo bridge as a separate file and also create a file for our provide() statement. Finally, a basic package.json file points to our provide() file as the source (“main”) of the package and the Bonzo bridge as our bridge (“ender”) file:

{
  "name": "fake-bonzo",
  "version": "0.0.0",
  "description": "Fake Bonzo",
  "main": "main.js",
  "ender": "bonzo-ender-bridge.js"
}
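
The main.js file referenced above isn't shown here; a minimal sketch, assuming it only needs to register the globally loaded Bonzo under the name the bridge expects (mirroring the inline script from example 3), might be:

// main.js: register the window-global bonzo with Ender's module registry
provide('bonzo', bonzo);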

We then point the Ender CLI to this directory: ender build qwery bean ./fake-bonzo/ (or we could run ender add ./fake-bonzo/ to add it to the ender.js created in the above example).

The completed page now looks like this:

<!DOCTYPE HTML>
<html lang="en-us">
<head>
  <meta http-equiv="Content-type" content="text/html; charset=utf-8">
  <title>Example 4</title>
</head>
<body>
  <script src="bonzo.js"></script>
  <script src="ender.js"></script>
  <script id="scr">
    $('<pre>')
      .text($('#scr').text())
      .css({
        fontWeight: 'bold',
        border: 'solid 1px red',
        margin: 10,
        padding: 10
      })
      .bind('click', function () {
        alert('Clickety clack');
      })
      .appendTo('body');

  </script>
</body>
</html>

You can dig further into this and run it as example4, also available in my GitHub repository for this article.

Conclusion

Hopefully this has helped demystify the way that Ender packages libraries together; it’s really not magic. If you want to dig deeper then a good place to start would be to examine the client library that appears at the top of each Ender build—it’s relatively straightforward and fairly short.