Validatr, pointeraccuracy.js, Testacular Updates

Validatr

Validatr logo

Validatr (GitHub: jaymorrow / validatr, License: MIT, jquery: validatr) by Jay Morrow is a cross-browser HTML5 form validation library. That means it provides validators for the new native widgets, like color and date. The Validatr fields documentation lists all of the supported inputs and also features demos.

The library requires jQuery, and usage looks like this:

$('form').validatr('addTest', 'example', function(element) {});

Jay has also included QUnit tests for Validatr.

pointeraccuracy.js

pointeraccuracy.js (GitHub: n-fuse / pointeraccuracy.js, License: MIT) by Thomas Hoppe is a polyfill for the media query level 4 property “pointer”. This API heuristically determines the pointer accuracy, returning coarse or fine depending on the input device’s accuracy:

This media feature does not indicate that the user will never be able to click accurately, only that it is inconvenient for him to do so. Authors are expected to react to a value of coarse by designing pages that do not rely on accurate clicking to be operated.
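The polyfill targets the standard media feature, which you can query like this in browsers with native support (a hedged sketch of the native form; the polyfill's own API may expose its heuristic result differently):

```javascript
// Query the Media Queries Level 4 "pointer" feature. Returns 'unknown'
// outside the browser or where matchMedia is unavailable.
function pointerAccuracy() {
  if (typeof window === 'undefined' || !window.matchMedia) return 'unknown';
  if (window.matchMedia('(pointer: coarse)').matches) return 'coarse';
  if (window.matchMedia('(pointer: fine)').matches) return 'fine';
  return 'unknown';
}
```

A `coarse` result suggests designing larger hit targets, per the specification's advice above.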

Testacular Updates

Testacular, used by the AngularJS team, has been updated. The new version includes code coverage, Growl and TeamCity reporters, and an adapter for QUnit.

Vojta Jína posts about the project to his Google+ account: +Vojta Jína.

Backbone.js Tutorial: Routes

07 Mar 2013 | By Alex Young | Comments | Tags backbone.js mvc node backgoog

Preparation

Before starting this tutorial, you’ll need the following:

  • alexyoung / dailyjs-backbone-tutorial at commit 0c6de32
  • The API key from part 2
  • The “Client ID” key from part 2
  • Update app/js/config.js with your keys (if you’ve checked out my source)

To check out the source, run the following commands (or use a suitable Git GUI tool):

git clone git@github.com:alexyoung/dailyjs-backbone-tutorial.git
cd dailyjs-backbone-tutorial
git reset --hard 0c6de32

Routes

So far we’ve implemented basic list and task management, but working with multiple lists is tricky because lists can’t be referenced by the URL. If the page is reloaded, the current list isn’t remembered, and lists can’t be bookmarked.

Fortunately, Backbone provides a solution for both of these issues: Backbone.Router. This provides a neat wrapper around hash URLs and history.pushState.

When to Use Hash URLs

I’ll admit I find hash URLs annoying, and this sentiment seems to have been perpetuated by Twitter’s implementation of them. However, there is a good side to hash URLs: they require less work to build and are backwards compatible with older browsers.

Using history.pushState means the browser can potentially display any URL you want. Rather than /#lists/id, the prettier /lists/id can be displayed. However, without a suitable server-side setup, visiting /lists/id before the main application has loaded will fail while the hash URL version will work.

If you’re making a fairly simple and self-contained single page application, then you may wish to avoid pushState and use hash URLs instead.

Either way, Backbone makes it easy to switch between both schemes. Hash URLs are the default, and history.pushState will be used when specified with Backbone.history.start({ pushState: true }).

The Routes File

It’s generally a good idea to keep routes separate from the rest of the application. Create a new file called app/js/routes.js and extend Backbone’s router:

define(function() {
  return Backbone.Router.extend({
    routes: {
      'lists/:id': 'openList'
    },

    initialize: function() {
    },

    openList: function(id) {
    }
  });
});

This code defines the route. This application will just need one for now: lists/:id. The :id part is a parameter, which will be extracted by Backbone.Router and sent as an argument to openList.
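Under the hood, Backbone compiles route strings like lists/:id into regular expressions. A simplified sketch of the idea (not Backbone's exact implementation):

```javascript
// Convert a route pattern into a RegExp that captures each :param segment
function routeToRegExp(route) {
  return new RegExp('^' + route.replace(/:\w+/g, '([^/]+)') + '$');
}

var match = 'lists/42'.match(routeToRegExp('lists/:id'));
match[1]; // '42', which Backbone would pass to openList as the id argument
```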

Load the Router

The centralised App class is as good a place as any to load the routes and set them up. Open app/js/app.js and change define to include 'routes':

define([
  'gapi'
, 'routes'
, 'views/app'
, 'views/auth'
, 'views/lists/menu'
, 'collections/tasklists'
, 'collections/tasks'
],

function(ApiManager, Routes, AppView, AuthView, ListMenuView, TaskLists, Tasks) {
  var App = function() {
    this.routes = new Routes();

Now, move down to around line 35 where there’s a callback that runs when the API manager is ready. This is where Backbone.history.start should be called:

App.prototype = {
  views: {},
  collections: {},
  connectGapi: function() {
    var self = this;
    this.apiManager = new ApiManager(this);
    this.apiManager.on('ready', function() {
      self.collections.lists.fetch({ data: { userId: '@me' }, success: function(collection, res, req) {
        self.views.listMenu.render();
        Backbone.history.start();
      }});
    });
  }
};

It’s technically safe to call Backbone.history.start as soon as the routes have been loaded, but the openList route handler requires some lists to exist, so it’s better to wait until the API is ready.

The purpose of the start method is to begin monitoring hashchange events – whenever the browser address bar’s URL changes the router will be invoked.

Opening Lists Using Events

To write decoupled Backbone applications, you need to think in terms of the full Backbone stack: models, collections, and views. When someone visits a list URL from a bookmark that refers to a specific model, the route handler should be able to find the associated model.

Backbone’s documentation is quite clear about the power of custom events, and that’s basically how openList in app/js/routes.js should work:

openList: function(id) {
  if (bTask.collections.lists && bTask.collections.lists.length) {
    var list = bTask.collections.lists.get(id);
    if (list) {
      list.trigger('select');
    } else {
      console.error('List not found:', id);
    }
  }
}

I’ve been strict about checking for the existence of the lists collection, and even when fetching a given list model from the collection. The main reason for this was to be able to show sensible error messages, but for now there’s just a console.error to help track issues loading data.

The final piece of the puzzle is the view code that has the responsibility of opening lists. Open app/js/views/lists/menuitem.js and make the following changes:

  1. Add this.model.on('select', this.open, this); to the initialize method
  2. Add bTask.routes.navigate('lists/' + this.model.get('id')); to the render method

The first line binds the custom event, select, from the view’s model (which represents the list). The second line causes the browser’s URL to be updated – you’ll find yourself using routes.navigate quite a lot in more complicated applications.
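The pattern at work here is simple: the router triggers an event on the model, and whichever view bound that event responds. A plain-JavaScript stand-in for Backbone's event system shows the idea:

```javascript
// Minimal on/trigger implementation (Backbone's Events module does far more)
function Events() { this.handlers = {}; }
Events.prototype.on = function(name, fn, ctx) {
  (this.handlers[name] = this.handlers[name] || []).push({ fn: fn, ctx: ctx });
};
Events.prototype.trigger = function(name) {
  (this.handlers[name] || []).forEach(function(h) { h.fn.call(h.ctx); });
};

var list = new Events(); // stands in for a list model
var opened = false;
list.on('select', function() { opened = true; }); // the view binds 'select'
list.trigger('select'); // the router fires it; the view opens the list
```

Neither side needs a direct reference to the other, which is what keeps routes and views decoupled.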

Summary

Combining events and routes is the secret to writing decoupled Backbone applications. It can be difficult to do this well – lazy solutions often result in spaghetti code. To avoid situations like this, think in terms of models, collections, views, and their relationships.

The full source for this tutorial can be found in alexyoung / dailyjs-backbone-tutorial, commit 85c358.

Node Roundup: 0.9.11, clinch, review

06 Mar 2013 | By Alex Young | Comments | Tags node modules streams build graphics testing stats
You can send in your Node projects for review through our contact form.

Node 0.9.11

Node 0.9.11 was released last week, which has more changes for the new streams API. The _read method no longer takes a callback, which means your current _read methods need to be updated to call push() instead. If you’re unsure about the new API, Node’s tests (in test/simple) are useful for figuring out what to do.

These updates were pulled into isaacs/readable-stream, which is a module that allows you to use streams2 in Node 0.8. If you haven’t tried it yet, I recommend attempting to migrate your pre-0.10 custom streams to it, at least to prepare for Node 0.10.

clinch

clinch (GitHub: Meettya / clinch, License: MIT, npm: clinch) by Meettya is a build system that packages CommonJS modules for the browser. It has a lot of features, some of which streamline working with CoffeeScript and Jade:

  • Builds are generated by analysing the source code
  • A fake global require function is not used, which means the client-side overheads are relatively low
  • It supports pre-compilation of Jade templates
  • And, it emulates Node globals, like process

The author has provided tests, but at the moment the documentation could do with some work – English isn’t the author’s native language, so perhaps someone could help him improve it?

review

Julian Gruber sent in a whole bunch of modules, but the one I thought was cool was review (GitHub: juliangruber / review, License: MIT, npm: review). This module can be used to generate screenshots of sites according to various parameters – you could use it to check designs at various resolutions, for example.

Julian also sent in jilla which is a command-line client for JIRA, and statsc, a browser library for graphite/statsd servers.

jQuery Roundup: 2.0 Beta 2, Alpha Image, jqTree

05 Mar 2013 | By Alex Young | Comments | Tags jquery plugins tree widgets images Canvas
Note: You can send your plugins and articles in for review through our contact form.

jQuery 2.0 Beta 2

jQuery 2.0 Beta 2 is out, which has fixes for most of the major parts of the framework. This version also includes a Grunt build script, so you can build custom versions of jQuery more easily. The announcement post suggests swapping out Sizzle for another selector engine.

jQuery 2.0 removes compatibility for IE before version 9, so you’ll have to use 1.9 if you want to support legacy browsers.

jQuery Alpha Image

jQuery Alpha Image (GitHub: Sly777 / Jquery-Alpha-Image, License: MIT/GPL, bower: Jquery-Alpha-Image) by İlker Güller uses a temporary Canvas to make a colour in an image transparent. It supports RGB and hex colours and has a callback that runs when the process has finished:

$('.example3').alphaimage({
  colour: '#9CDAF0',
  onlyData: true,
  onComplete: function(result) {
    console.log(result);
  }
});

jqTree

jqTree (GitHub: mbraak / jqTree, License: Apache 2.0) by Marco Braak is a widget that creates trees using unordered lists based on JSON data. It supports drag and drop for reordering items or moving them between parents, and supports IE7+.

The author has included tests, and the documentation is detailed, with lots of examples. The events API is thoughtful as well – you can even track when items are added to the tree with onCreateLi:

$('#tree1').tree({
  data: data,
  onCreateLi: function(node, $li) {
    // Add 'icon' span before title
    $li.find('.jqtree-title').before('<span class="icon"></span>');
  }
});

Layer, Thorax, Emacs.js, Chrome OS Dev

04 Mar 2013 | By Alex Young | Comments | Tags libraries node proxies editors backbone.js chrome-os

Layer

Layer (GitHub: lovebear / layer, License: BSD, npm: layer) by “lovebear” is a module for creating proxies without changing existing code. Given a function, it can augment it with another function that will run first. The returned values from the new function will be supplied as arguments to the original function.

The author’s example looks like this:

// add a simple proxy without modifying any existing code!
var addBig = function(x, y) {
  x = x * 100;
  y = y * 100;
  return [x, y];
};
layer.set(null, add, addBig);

// existing code...
function add(x, y) {
  return x + y;
}

add(2, 2); // 400

The layer.set method is used to invoke addBig before add, and passes the arguments to add. If you look at the source you’ll notice that the author has left a note saying “what if the return value isn’t an arrray?”, but it seems like the proxy functions should always map return values to the original function’s arguments.
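The underlying idea, a "before" hook whose return value becomes the target's arguments, can be sketched without the library. Note this is a hypothetical helper, not Layer's implementation: layer.set patches the existing function so call sites don't change, whereas this sketch returns a new function:

```javascript
// Wrap fn so that pre runs first; pre's returned array becomes fn's arguments
function before(fn, pre) {
  return function() {
    var args = pre.apply(this, arguments);
    return fn.apply(this, args);
  };
}

function add(x, y) { return x + y; }
var addBig = before(add, function(x, y) { return [x * 100, y * 100]; });

addBig(2, 2); // 400
```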

Layer works in both Node and browsers.

Thorax

Thorax (GitHub: walmartlabs / thorax, License: MIT, npm: thorax) by Ryan Eastridge and Kevin Decker is another Walmart Labs project that brings together Backbone and Handlebars to provide an “opinionated, battle tested framework for building large scale web applications”.

Thorax has some sugar to help with tasks like data binding and events. Descendants of Thorax.View automatically have their properties mapped to template variables, and binding a model will cause the model’s attributes to be bound. Models and collections can also trigger view events, and events are inheritable.

Collections can also be rendered using the collection keyword in the template, and the views will stay current as models in the collection change.

It seems like Thorax fits in with my particular way of working with Backbone, and it looks like it might cut down a lot of the boilerplate I find myself writing so I’m going to give it a go.

emacs.js

If you’re looking for an easy to use solution for JavaScript development in Emacs, then take a look at emacs.js (GitHub: azer / emacs.js) by Azer Koculu. It comes with tools for npm, syntax checking and highlighting for JavaScript and CoffeeScript, and some snippets and autocompletion.

Azer made a screencast about it: emacs.js screencast.

Chrome OS Development Follow Up

After I wrote about the Chromebook Pixel last week, I noticed a post in the Google+ Chromebook community about creating a Chrome OS development environment:

We really need to design and implement a proper application, extension, and theme development environment.

It could be implemented as a web ui page, as chrome://extensions/ is today, or even cooler, it could conceivably be its own packaged application.

My opinion on this is they should find a Chrome OS “native” way of making our existing development tools more accessible. I think being able to run Node and Vim/Emacs natively and sensibly on Chrome OS would be huge. I’m actually doing that in developer mode – Chrome OS is my GUI and I do development in a shell with Vim and tmux.

localStorage DOS, Lunr.js, Vlug

01 Mar 2013 | By Alex Young | Comments | Tags security libraries search benchmarking node

localStorage DOS

Even though the Web Storage specification says user agents should limit the amount of space used to store data, a new exploit uses it to store gigabytes of junk. The exploit is based around storing data per-subdomain, which gets around the limits most browsers have already implemented. Users testing it found Chrome would crash when run in incognito mode, but Firefox was immune to the attack.

Other security researchers have raised concerns about localStorage in the past. Joey Tyson talked about storing malicious code in localStorage, and Todd Anglin wrote about some of the more obscure facts about localStorage which touches on security.

Lunr.js

Oliver Nightingale from New Bamboo sent in his extremely well-presented browser-based full-text search library (GitHub: olivernn / lunr.js, License: MIT), which indexes JSON documents using some of the core techniques of larger server-side full-text search engines: tokenising, stemming, and stop word removal.

By removing the need of extra server side processes, search can be a feature on sites or apps that otherwise would not have warranted the extra complexity.

A trie is used to map tokens to matching documents, so if you’re interested in JavaScript implementations of data structures then take a look at the source. The source includes tests and benchmarks, and a build script so you can generate your own builds.
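The pipeline concepts mentioned above can be illustrated in a few lines of plain JavaScript (a toy sketch; lunr.js's tokeniser, stemmer, and stop word list are far more complete):

```javascript
var stopWords = ['the', 'a', 'of', 'and'];

function tokenise(text) {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

function pipeline(text) {
  return tokenise(text)
    .filter(function(t) { return stopWords.indexOf(t) === -1; }) // stop words
    .map(function(t) { return t.replace(/(ing|ed|s)$/, ''); });  // toy stemmer
}

pipeline('The cats chased a ball'); // ['cat', 'chas', 'ball']
```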

Vlug

Vlug (GitHub: pllee / vlug, License: MIT, npm: vlug) by Patrick Lee is a small instrumentation library for benchmarking code without manually adding log statements. The Vlug.Interceptor object takes a specification of things to log, which will dynamically invoke calls to console.time and console.timeEnd to collect benchmarks.

Patrick has tested it with browsers and Node, and has included Vlug.Runner for running iterations on functions. The readme and homepage both have documentation and examples.

Upgrading to Grunt 0.4

28 Feb 2013 | By Alex Young | Comments | Tags backbone.js node backgoog

I was working on dailyjs-backbone-tutorial and I noticed issue #5, where “fiture” was unable to run the build script. That tutorial uses Grunt to invoke r.js from RequireJS, and it turned out I forgot to specify the version of Grunt in the project’s package.json file, which meant newcomers were getting an incompatible version of Grunt.

I changed the project to first specify the version of Grunt, and then renamed the grunt file to Gruntfile.js, and it pretty much worked. You can see these changes in commit 0f98f7.

So, what’s the big deal? Why is Grunt breaking projects and how can this be avoided in the future?

Global vs. Local

If you’re a client-side developer, npm is probably just part of your toolkit and you don’t really care about how it works. It gets things like Grunt for you so you can work more efficiently. However, us server-side developers like to obsess about things like dependency management, and to us it’s important to be careful about specifying the version of a given module.

Previous versions of Grunt kind of broke this whole idea, because Grunt’s documentation assumed you wanted to install Grunt “globally”. I’ve never liked doing that, as I’ve experienced first-hand why it’s a bad idea through the Ruby side projects I’ve been involved with. What I’ve always preferred to do with Node is write a package.json for every project, and specify the version of each dependency. I either specify the exact version, or the minor version if the project uses semantic versioning.

For example, with Grunt I might write this:

 , "grunt": "0.3.x"

This causes the grunt command-line tool to appear in ./node_modules/.bin/grunt, which probably isn’t in your $PATH. Therefore, when you’re ready to build the project and you type grunt, the command won’t be found.

Knowing this, I usually add node_modules/.bin/grunt as a “script” to package.json which allows grunt to be invoked through the npm command. This works in Unix and Windows, which was partly the reason I used Grunt instead of make anyway.
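Put together, the relevant parts of such a package.json look like this (versions and script name illustrative):

```json
{
  "devDependencies": {
    "grunt": "0.3.x"
  },
  "scripts": {
    "build": "node_modules/.bin/grunt"
  }
}
```

With this in place, `npm run-script build` invokes the project-local Grunt without touching `$PATH`.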

There were problems with this approach, however. Grunt comes with a load of built-in tasks, so when the developers updated one of these smaller sub-modules they had to release a whole new version of Grunt. This is dangerous when a module is installed globally – what happens if an updated task has an API breaking change? Now all of your projects that use it need to be updated.

To fix this, the Grunt developers have pulled out the command-line part of Grunt from the base package, and they’ve also removed the tasks and released those as plugins. That means you can now write this:

 , "grunt": "0.4.x"

And install the command-line tool globally:

npm install -g grunt-cli

Since grunt-cli is a very simple module it’s safer to install it globally, while the part that we want to manage more carefully is locked down to a version range that shouldn’t break our project.

Built-in Tasks: Gone

The built-in tasks have been removed in Grunt 0.4. I prefer this approach because Grunt was getting extremely large, so it seems natural to move them out into plugins. You’ll need to add them back as devDependencies to your package.json file.
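For example, the old built-in concat and uglify tasks now come from plugins (versions illustrative):

```json
"devDependencies": {
  "grunt": "0.4.x",
  "grunt-contrib-concat": "0.1.x",
  "grunt-contrib-uglify": "0.1.x"
}
```

Each plugin is then loaded in the Gruntfile with `grunt.loadNpmTasks('grunt-contrib-concat')`.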

If you’re having trouble finding the old plugins, they’ve been flagged on the Grunt plugin site with stars.

Uninstall Grunt

Before switching to the latest version of Grunt, be sure to uninstall the old one if you installed it globally.

Other Changes

There are other changes in 0.4 that you may run into that didn’t affect my little Backbone project. Fortunately, the Grunt developers have written up a migration guide which explains everything in detail.

Also worth reading is Tearing Grunt Apart in which Tyler Kellen and Ben Alman explain why Grunt has been changed, and what to look forward to in 0.5.

Peer Dependencies

If you write Grunt plugins, then I recommend reading Peer Dependencies on the Node blog by Domenic Denicola. As a plugin author, you can now take advantage of the peerDependencies property in package.json for defining the version of Grunt that your plugin is compatible with.

Take a look at grunt-contrib-requirejs/package.json to see how this is used in practice. The authors have locked the plugin to Grunt 0.4.x.
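The relevant fragment looks something like this (abbreviated):

```json
"peerDependencies": {
  "grunt": "~0.4.0"
}
```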

Node Roundup: 0.8.21, Node Redis Pubsub, node-version-assets

27 Feb 2013 | By Alex Young | Comments | Tags node modules redis databases grunt

Node 0.8.21

Node 0.8.21 is out. There are fixes for the http and zlib modules, so it’s safe and sensible to update.

Node Redis Pubsub

Louis Chatriot sent in NRP (Node Redis Pubsub) (GitHub: louischatriot / node-redis-pubsub, License: MIT, npm: node-redis-pubsub) which provides a Node-friendly API to Redis’ pub/sub functionality. The API looks a lot like EventEmitter, so if you need to communicate between separate processes and you’re using Redis then this might be a good solution.

Louis says he’s using it at tl;dr in production, and the project comes with tests.

node-version-assets

node-version-assets (GitHub: techjacker / node-version-assets, License: MIT, npm: node-version-assets) by Andrew Griffiths is a module for hashing assets and placing the hash in the filename. For example, styles.css would become styles.7d47723e723251c776ce9deb5e23062b.css. This is implemented using Node’s file system streams, and the author has provided a Grunt example in case you want to invoke it that way.

jQuery Roundup: jQuery.IO, Animated Table Sorter, jQuery-ui-pic

26 Feb 2013 | By Alex Young | Comments | Tags jquery plugins forms json icons

jQuery.IO

jQuery.IO (GitHub: sporto / jquery_io.js, License: MIT) by Sebastian Porto can be used to convert between form data, query strings, and JSON strings. It uses JSON.parse, and comes with tests and a Grunt build script.

Converting a form to a JavaScript object is just $.io.form($('form')).object(), and the output uses form field names as keys rather than the array of name/value pairs that .serializeArray() returns.
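The difference is easy to see in plain JavaScript: .serializeArray() yields name/value pairs, and the plugin folds them into an object (a sketch of the transformation, not the plugin's internals):

```javascript
// [{ name: 'title', value: 'Hello' }] → { title: 'Hello' }
function toObject(pairs) {
  return pairs.reduce(function(obj, pair) {
    obj[pair.name] = pair.value;
    return obj;
  }, {});
}

toObject([{ name: 'title', value: 'Hello' }, { name: 'tags', value: 'js' }]);
// { title: 'Hello', tags: 'js' }
```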

Animated Table Sorter

Animated Table Sorter (GitHub: matanhershberg / animated_table_sorter, License: MIT, jquery: AnimatedTableSorter) by Matan Hershberg is a table sorting plugin that moves rows using .animate when they’re reordered.

All you need to do is call .tableSort() on a table. CSS and images have been provided for styling the selected column and sort direction.

jQuery-ui-pic

jQuery-ui-pic (GitHub: rtsinani / jQuery-ui-pic) by Artan Sinani provides an extracted version of the icons from Bootstrap and the image sprites from jQuery UI.

In this version the CSS classes are all prefixed with pic-, so you can use them like this: <i class="pic-trash"></i>. This might prove useful if you’re looking for a quick way to reuse jQuery UI’s icons without using the rest of jQuery UI. I licensed Glyphicons Pro myself because I find myself using them so much.

Mobile Testing on the Chromebook Pixel

25 Feb 2013 | By Alex Young | Comments | Tags testing laptops touchscreen mobile chrome-os

The Pixel.

Last week I was invited to a “secret workshop” at one of the Google campuses in London. Knowing that Addy Osmani works there I expected something to do with Yeoman or AngularJS, but it turned out to be a small launch event for the Chromebook Pixel. I ended up walking out of there with my very own Pixel – no, I didn’t slip one into my backpack and run off wearing a balaclava, each attendee was given one to test.

Looking at this slick metallic slab of Google-designed hardware I was left wondering what to do with it as a JavaScript hacker. It runs Chrome OS and has a high density multi-touch display. That made me wonder: how useful is the Pixel as a multi-touch, “retina” testing device? My personal workflow for client-side development is to preview sites on my desktop or laptop, then switch to testing with mobile devices towards the end of the project. I may occasionally have a tablet or phone close by for experimental work or feasibility studies, but generally I leave the device testing until later.

By using a Pixel early in development the potential is there to work “touch natively” – focusing on touch as a primary input mode rather than a secondary option.

If you’re working on mobile sites, responsive designs, or browser-based games, then how well does the Pixel function as a testing machine? With one device you get several features that are useful for testing these kinds of sites:

  1. The touchscreen. Rather than struggling with a tablet or phone during early development you get a touchscreen and the standard inputs we’re more used to.
  2. The screen’s high resolution is useful for testing sites that optionally support high density displays.
  3. Chrome’s developer tools make it easy to override the browser’s reported size and user agent which is useful for testing mobile and responsive designs.

I’ve been using the Pixel with several mobile frameworks and well-known mobile-friendly sites to see how well these points play out in practice.

Testing Mobile Sites with Chrome

Pressing ctrl-shift-j opens the JavaScript console on a Chromebook. Once you’re in there, selecting the settings icon opens up some options that can be used to simulate a mobile browser. The ‘Overrides’ tab has a user agent switcher which has some useful built-in browsers like Firefox Mobile and Android 4.x. There’s also an option for changing the resolution, which changes the resolution reported to the DOM rather than resizing the window.

The Screen and Multi-Touch

The screen itself is 2560x1700 and 3:2 (239 ppi). It’s sharp, far better than my battle-worn last gen MacBook Air. One of the Google employees at the event said the unusual aspect ratio was because websites are usually tall rather than wide, so they optimised for that. In practice I haven’t noticed it – the high pixels per inch is the most significant thing about it.

I tried Multitouch Test and it was able to see 10 unique points – I’m not sure what the limit is and I can’t find it documented in Google’s technical specs for the Pixel.

The Touchpad

This article isn’t intended to be a review of the Pixel. However, I really love the touchpad. I’ve struggled with non-Apple trackpads on laptops before, but the Pixel’s touchpad is accurate and ignores accidental touches fairly well. The click action feels right – it doesn’t take too much pressure but has a satisfying click. I also like the soft finish, which makes it feel comfortable to use even with my sweaty hands.

Testing Mobile Frameworks

I tested some well-known mobile frameworks and sites purely using the touchscreen. I set Chrome to send the Android browser user agent, and I also tried Chrome for Android and Firefox Mobile.

The Google employees at the event were quick to point out that everything works using touch. It seems like a lot of effort has been put into translating touches into events that allow UI elements to behave as expected.

That made me wonder if sites optimised for other touch devices would fail to interpret touch events and gestures on the Pixel – perhaps reading them as mouse events instead – but all of the widgets and gestures I tried seemed to work as intended. I even ran the touch event reporting in Enyo and Sencha Touch to confirm the gestures were being reported correctly.

During the event, I opened The Verge on my phone just to check what the press was saying about the Pixel. There was mention of touchscreen interface lag, and Gruber picked this up on Daring Fireball. I don’t have any way of measuring the lag scientifically myself (I hope to see a Digital Foundry-style analysis of the device), but in practice it feels like a modern tablet so I haven’t had a problem with it. I’m not sure where Gruber gets “janky” from, but as the Pixel will be sold in high street stores across the UK and US you should be able to try it out in person.

jQuery Mobile worked using the touchscreen for standard navigation, and also recognised swipes and dragging.

jQuery Mobile's widgets worked with touch-based gestures.

Enyo also seemed to recognise the expected gestures.

Enyo worked as it would on a touchscreen phone or tablet.

The Sencha Touch demos behaved as they would on a mobile device.

Sencha Touch, showing the event viewer.

Bootstrap’s responsive design seemed to cope with different sizes and touch gestures.

Bootstrap on the Pixel.

Testing Mobile Sites

The Guardian's mobile site running on the Pixel.

I tested some sites that I know work well on mobile devices, and used the touchscreen to interact with them. Again, this was with Chrome’s user agent changed to several mobile browsers.

Development Test

The way I write both client-side projects and server-side code is with Vim, tmux, and command-line tools. This doesn’t translate well to Chrome OS – it can be done by switching the machine into developer mode, but this requires some Linux experience. The Pixel supports dual booting, and Crouton seems worth checking out if you’re a Chromebook user.

I wrote this article primarily for client-side developers, so I imagine you’d prefer to use the OS as it was intended rather than installing Linux. With that in mind, I tried making some small projects using jQuery Mobile and Cloud9 IDE. Cloud9 worked well for the most part – I had the occasional crashed tab, but I managed to get a project running.

Cloud9 IDE with its HTML preview panel.

One quirk I found: the jQuery Mobile CDN assets I used are served over HTTP, whereas Cloud9 is always served over SSL. When I tried to preview my HTML files the CDN assets were blocked by Chrome, and only a small shield icon in the address bar indicated this, so it wasn’t immediately obvious.

Also, Cloud9 might not fit into your existing workflow. While it supports GitHub, Bitbucket, SSH, and FTP, it takes a bit of effort to get an existing project running with it.

If you were sold on using the Pixel as a high DPI touchscreen testing device, then the fact you can at least get some kind of JavaScript-friendly development going is useful. However, prepare to make some compromises.

Other Notes

Chrome syncs quickly. Try signing in with Chrome on multiple computers and installing apps or changing themes to see what I mean. The upshot of this is the Chromebook is reliable when syncing with Google’s services. You lose this somewhat with other services depending on how they’re built. Cloud9 IDE, for example, has an offline mode, but I haven’t tested it well enough to see how resilient it is at syncing the data back again.

Switching accounts on a Chromebook isn’t much fun. Chrome OS doesn’t support anything like fast user switching, and I use a ridiculously long password stored in a password manager, so I’ll do anything to avoid typing it in. Also, 1Password doesn’t have an extension for Chrome OS – you can use the HTML version (1Password Anywhere), but that is limited and isn’t particularly friendly. Last Pass works though.

Conclusion

I love the look of the Pixel, it exudes luxury, and the OS is incredibly low maintenance. As for a mobile development testing rig – it does the job, but you may find Chrome’s remote debugging tools and a cheap tablet to work well enough. Being able to dip into Chrome’s developer tools on a local device and use a keyboard and mouse is natural and convenient: it makes mobile testing feel like cheating!

Chromebooks are designed to sync constantly, which means you technically don’t have to worry about losing data if yours gets damaged or stolen. As it stands it’s a trade-off: you lose the ability to install your standard development tools but gain a lower maintenance and potentially more secure OS.

While I respect Cloud9 IDE, I feel like there are people clamouring for a product close to the Pixel that better supports developers. Perhaps Native Client will make this possible. We are the ultimate early adopters, so sell us machines we can code on with our preferred tools!

Discussion: What Do You Want From Components?

22 Feb 2013 | By Alex Young | Comments | Tags component bower

Yo, Grunt, Bower... Component!

Assume for a second that TJ Holowaychuk’s Component project isn’t the future. And then let’s say that, due to support from Twitter and Google (through Yeoman), Bower becomes the de facto tool for managing and installing client-side dependencies. Whether you’re using Yeoman or Bower with another build tool, you’re still left with a gap where reusable UI widgets should be.

While Yeoman improves workflow, Component also tackles the notion of sharing “widgets” that contain markup, stylesheets, and code. If you read TJ’s tutorials he pushes the idea of structural markup – stripping away unnecessary markup to leave behind a vanilla slice of templates that can easily be reskinned with CSS.

With more advanced client-side workflows provided by libraries like Backbone.js, RequireJS, and tools like Yeoman and Bower, I feel like moving away from monolithic UI projects is necessary. While I like to jump start projects with jQuery UI, Bootstrap, Closure Library, or perhaps even Dojo’s Dijit and DojoX, these projects are more monolithic than the modular dream promised by the Component project.

I believe there’s a piece missing from the Bower/Yeoman future: something to support the notion of reusable widgets. Packaging chunks of markup, styles, and JavaScript is nothing new – but what is being done to solidify this goal outside of the Component project and older monolithic approaches?

Standards

The component idea is being formalised in Introduction to Web Components, which lists Dominic Cooney and Dimitri Glazkov from Google as its editors. The concept is therefore being standardised, but this particular vision of components seems very different from what TJ and other developers have envisioned.

Accessibility

Should a widget/component library enforce ARIA, or provide tools for making accessible components? jQuery UI went through many iterations to improve its accessibility, and Dojo has documentation on creating accessible widgets.

Data Binding APIs

What about data binding or MVC style development? How would UI components fit in? If you’re shipping JavaScript inside a component, how would the API provide hooks that can work with Knockout, AngularJS, and other libraries without manually plugging them in?

It feels like we’re settling on binding using data- attributes, so this might be relatively trivial in practice. Perhaps the “ultimate” component library would address this, or perhaps it’s unnecessary.

What Do You Want?

Let’s say you settle on Yeoman as your workflow tool for client-side development. What do you think reusable client-side widgets should look like? What would be the perfect fit alongside Yeoman, Bower, Grunt, and a data binding library?

Cloudinary Tutorial

Cloudinary

Cloudinary

This tutorial introduces Cloudinary, and demonstrates how to build a gallery application using Express and Node. Cloudinary is a service for streamlining asset management – if you’re tired of optimising images and manually uploading them to an asset server or CDN, then Cloudinary might be what you’re looking for.

One reason Cloudinary is useful to us as Node developers is the Cloudinary Node module (GitHub: cloudinary / cloudinary_npm, License: MIT, npm: cloudinary). It can be used to easily generate optimised images, thumbnails, and automatically upload them to Cloudinary. Let’s drop it into an Express application to see what happens!

The full source for this tutorial is available here: alexyoung / dailyjs-cloudinary-gallery.

Step 1: Create a Cloudinary Account

Register for a free account at Cloudinary. That’ll give you 500 MB of storage and a gigabyte of monthly bandwidth. Paid plans start at $39 a month, and that increases the storage to 10 GB and adds 40 GB a month of bandwidth.

Once you’ve created your account, sign in and take a look at the right-hand panel that reads “Account Details”.

The Cloudinary management interface.

To follow this tutorial, you’ll need the api_key and api_secret, so make a note of those.

Step 2: The Express App

Assuming you’ve already installed Node, open a terminal and run npm install -g express. Once that’s done, run express dailyjs-cloudinary-gallery to create a new Express project.

Open package.json and add the cloudinary dependency:

{
  "name": "application-name",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node app"
  },
  "dependencies": {
    "express": "3.0.3",
    "jade": "*",
    "cloudinary": "*"
  }
}

Once you’ve done that, save the file and run npm install. This will install the project’s dependencies, including the Cloudinary module.

Step 3: Image Uploads

Cloudinary can be used for every aspect of an image gallery:

  • Uploading images
  • Creating and serving thumbnails
  • Fetching a list of images to display

Go to the top of app.js and add the following require statements:

var cloudinary = require('cloudinary')
  , fs = require('fs')

Now add a new route to handle uploads:

app.post('/upload', function(req, res){
  var imageStream = fs.createReadStream(req.files.image.path, { encoding: 'binary' })
    , cloudStream = cloudinary.uploader.upload_stream(function() { res.redirect('/'); });

  imageStream.on('data', cloudStream.write).on('end', cloudStream.end);
});

Add your Cloudinary configuration to app.configure:

app.configure('development', function(){
  app.use(express.errorHandler());
  cloudinary.config({ cloud_name: 'yours', api_key: 'yours', api_secret: 'yours' });
});

app.locals.api_key = cloudinary.config().api_key;
app.locals.cloud_name = cloudinary.config().cloud_name;

This allows you to use different Cloudinary accounts for development and production purposes, depending on your requirements.

The last two lines make the values available to the templates. The secret is not meant to be accessible from outside the server, but the api_key and cloud_name options can be used by client-side scripts.

Change views/index.jade to show an upload form:

extends layout

block content
  h1= title
  p Welcome to #{title}

  form(action="/upload", method="post", enctype="multipart/form-data")
    input(type="file", name="image")
    input(type="submit", value="Upload Image")

  - if (images && images.length)
    - images.forEach(function(image){
      img(src=image.url)
    - })

Before you try out the app, change the / route in app.js to load a set of images from Cloudinary:

app.get('/', function(req, res, next){
  cloudinary.api.resources(function(items){
    res.render('index', { images: items.resources, title: 'Gallery' });
  });
});

The cloudinary.api.resources method fetches all of the images in your account. Note that this is rate limited to 500 requests per hour – I’ve used it here to keep the tutorial simple, but in production you should cache the results or store them in a database.
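To sketch what caching might look like, here’s a minimal in-memory wrapper. The `cacheFor` helper is hypothetical (it’s not part of the Cloudinary module) – it simply reuses the last result for a given number of milliseconds instead of hitting the rate-limited API on every request:

```javascript
// Hypothetical helper: wrap an async lookup so repeat calls within `ttl`
// milliseconds reuse the cached result instead of calling the API again.
function cacheFor(ttl, fetch) {
  var cached = null
    , fetchedAt = 0;

  return function(callback) {
    if (cached && Date.now() - fetchedAt < ttl) return callback(cached);
    fetch(function(result) {
      cached = result;
      fetchedAt = Date.now();
      callback(result);
    });
  };
}

// Usage sketch: cache the resource listing for five minutes
// var getImages = cacheFor(5 * 60 * 1000, cloudinary.api.resources);
// app.get('/', function(req, res) {
//   getImages(function(items) {
//     res.render('index', { images: items.resources, title: 'Gallery' });
//   });
// });
```

A database-backed cache would survive restarts, but even this keeps a busy gallery well under the hourly limit.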

At this point, image uploads should work if you start up the app with npm start and navigate to http://localhost:3000, but the output won’t look great without thumbnails.

Step 4: Thumbnails

Cloudinary supports a wide range of image transformations. The API is based around generating a URL that includes parameters to change the image in some way. All we need for the gallery is a simple crop, which is supported through the cloudinary.url method. Change the / route to pass a reference to the cloudinary object to the template:

app.get('/', function(req, res, next){
  cloudinary.api.resources(function(items){
    res.render('index', { images: items.resources, title: 'Gallery', cloudinary: cloudinary });
  });
});

Now, open views/index.jade so that it calls cloudinary.url with some options to get the desired effect:

- images.forEach(function(image){
  a(href=image.url)
    img(src=cloudinary.url(image.public_id + '.' + image.format, { width: 100, height: 100, crop: 'fill', version: image.version }))
- })

The important part here is this:

cloudinary.url(image.public_id + '.' + image.format, { width: 100, height: 100, crop: 'fill', version: image.version })

The cloudinary.url method generates a URL that includes the width, height, and crop options:

http://res.cloudinary.com/your_account/image/upload/c_fill,h_100,w_100/v1234/file.jpg

Because the API is based around URLs, you could easily use this from browser-based JavaScript, utilising Cloudinary to add behaviour that would typically be associated with server-side web development.
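To illustrate how readable the URL scheme is, the transformation segment is just the options serialised as comma-separated `key_value` pairs. This sketch is illustrative only – it is not the library’s code, and `transformSegment` is a made-up name:

```javascript
// Illustrative only: build the transformation segment of a Cloudinary URL
// (e.g. 'c_fill,h_100,w_100') from an options object.
function transformSegment(options) {
  var map = { crop: 'c', height: 'h', width: 'w' };
  return Object.keys(options).sort().map(function(key) {
    return map[key] + '_' + options[key];
  }).join(',');
}

console.log(transformSegment({ width: 100, height: 100, crop: 'fill' }));
// → 'c_fill,h_100,w_100'
```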

I’ve also included the version property, which is recommended by Cloudinary when overriding the public_id. It’s returned by both the upload API and the admin API.

Step 5: Effects

The previous example can easily be adapted to generate lots of interesting effects. This change makes it generate images that feature a vignette lens effect:

- images.forEach(function(image){
  a(href=image.url)
    img(src=cloudinary.url(image.public_id + '.' + image.format, { width: 100, height: 100, crop: 'fill', effect: 'vignette', version: image.version }))
- })

The effect parameter can be one of the following effects:

  • grayscale
  • blackwhite
  • vignette
  • sepia
  • brightness
  • saturation
  • contrast
  • hue
  • pixelate
  • blur
  • sharpen

Some effects take an argument, and this is simply prefixed with a colon. For example, brightness:40.

Face detection, rounded corners, overlays, and other transformations are also available. Check the image transformation documentation for full details.

The vignette effect applied to several images.

jQuery Uploads

Cloudinary has a CORS API for file uploads which degrades to an iframe in legacy browsers. This means you can do image uploads with no server-side code at all! The cloudinary_js repository has a jQuery plugin, with jQuery UI support, which can be used to upload images.

Download the JavaScript files from the cloudinary/cloudinary_js repository and add them to the public/ folder. Then edit views/layout.jade to load jQuery and the other files in this order:

  script(src='http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js')
  script(src='/jquery.ui.widget.js')
  script(src='/jquery.iframe-transport.js')
  script(src='/jquery.fileupload.js')
  script(src='/jquery.cloudinary.js')
  block scripts

The block scripts part at the end is some Jade jiggery-pokery to allow HTML to be appended to this template from another template. Open views/index.jade and add this markup:

  h2 jQuery Uploads

  .preview

  form(enctype="multipart/form-data")!=cloudinary.uploader.image_upload_tag('image')

block scripts
  script(type="text/javascript")
    // Configure Cloudinary
    $.cloudinary.config({ api_key: '!{api_key}', cloud_name: '!{cloud_name}' });

    $('.cloudinary-fileupload').bind('fileuploadstart', function(e){
      $('.preview').html('Upload started...');
    });

    // Upload finished
    $('.cloudinary-fileupload').bind('cloudinarydone', function(e, data){
      $('.preview').html(
        $.cloudinary.image(data.result.public_id, { format: data.result.format, version: data.result.version, crop: 'scale', width: 100, height: 100 })
      );
      return true;
    });

This displays a form with a file input, generated by the cloudinary.uploader.image_upload_tag helper. That keeps the markup lightweight by doing all of the signing and other things Cloudinary’s API needs behind the scenes.

The client-side JavaScript at the end of the template will display a message when an image is being uploaded, and then display it once it’s finished uploading. The other event which I haven’t used here is fileuploadfail, which is, of course, useful for displaying errors when file uploads fail.

If you want to read more about Cloudinary and jQuery, check out these articles: Direct image uploads from the browser to the cloud with jQuery, and Upload Images: Remote Uploads.

Summary

The completed gallery.

In this tutorial you’ve seen how to integrate both Node and client-side projects with Cloudinary. If you’d like more details on the service, visit Cloudinary.com.

This gallery example could be easily expanded using features from Cloudinary’s API to do a lot of practical and cool stuff:

  • Pagination could be added
  • The effects API could be used for editing photos
  • Face detection could be used to tag people in photos

The full source for my example Express app is available here: https://github.com/alexyoung/dailyjs-cloudinary-gallery.

Node Roundup: 0.8.20, 0.9.10, continuation.js, selenium-node-webdriver

20 Feb 2013 | By Alex Young | Comments | Tags node modules testing functional
You can send in your Node projects for review through our contact form.

Node 0.8.20, 0.9.10

Node 0.8.20 was released last week. The most significant updates in this version are fixes for the HTTP core module, so if you’re on 0.8.19 then I can’t see any reason not to upgrade.

Node 0.9.10 meanwhile has several stream-related updates. The default options for WriteStream have been updated to improve performance, and empty strings and buffers no longer signal EOF.

continuation.js

continuation.js (GitHub: dai-shi / continuation.js, License: BSD, npm: continuation.js) by Daishi Kato automatically adds tail call optimisation to modules loaded with require. It’s written using esprima and escodegen to parse and generate a new version of existing code. It does this by using trampolined functions, which is also how tail recursive functions are implemented in functional languages like Lisp.
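The trampoline idea can be sketched in a few lines. This is not the code continuation.js generates – just a minimal illustration of the technique: the recursive function returns a thunk instead of calling itself, and a driver loop keeps invoking thunks until a plain value comes back, so the stack never grows:

```javascript
// Minimal trampoline sketch: run "bounced" tail calls in a loop.
function trampoline(fn) {
  return function() {
    var result = fn.apply(this, arguments);
    while (typeof result === 'function') {
      result = result(); // one bounce at a time, constant stack depth
    }
    return result;
  };
}

// A tail-recursive sum written in bounced style
var sum = trampoline(function bounce(n, acc) {
  acc = acc || 0;
  if (n === 0) return acc;
  return function() { return bounce(n - 1, acc + n); };
});

console.log(sum(100000)); // would overflow the stack with naive recursion
```

Each bounce allocates a closure, which is part of why trampolined code isn’t always faster than the original.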

The author has included benchmarks that show where the module improves performance. There are cases where it won’t be faster due to how trampolining is handled – there are also some interesting posts by Guillaume Lathoud about implementing tail call optimisation without trampolining.

selenium-node-webdriver

selenium-node-webdriver (GitHub: WaterfallEngineering / selenium-node-webdriver, License: Apache 2, npm: selenium-node-webdriver) by Lon Ingram packages a prebuilt WebDriver client so it’s easier to get started writing tests that use WebDriver. Lon notes that it was designed to work with PhantomJS, but it could be used with any WebDriver server.

jQuery Roundup: Durandal, Version.js, Navi.js

19 Feb 2013 | By Alex Young | Comments | Tags jquery plugins frameworks libraries testing navigation
Note: You can send your plugins and articles in for review through our contact form.

Durandal

Durandal (GitHub: BlueSpire / Durandal, License: MIT) combines jQuery, Knockout, and RequireJS with some of its own code to create a framework for developing single page applications. Durandal apps are built using AMD-based modules, and it also supports the notion of a widget.

One interesting feature is application-wide messaging – the main app object can handle events, so it can be used as a universal message bus to help keep functionality nicely decoupled.

The project includes Jasmine/PhantomJS tests in the test/ directory, but the documentation itself doesn’t mention tests and the application skeletons don’t include them either. That seems like an oversight to me, given that the project claims to be “single page apps done right”.

Version.js

Justin Stayton sent in Version.js (GitHub: jstayton / version.js, License: MIT, bower: version.js), which he developed while testing scripts against multiple versions of jQuery. It works by using attributes to specify the required versions of libraries:

<script src="version.js" data-url="google" data-lib="jquery" data-ver="1.7.2"></script>

This will cause jQuery 1.7.2 to be loaded from Google’s CDN as the default. If another version is required, the versionjs GET parameter can be used. This makes it easy to switch between versions of a dependency, which might be useful in tests or during local development.

Navi.js

Navi.js (GitHub: tgrant54 / Navi.js, License: MIT) by Tyler Grant makes a single page behave like a full site using hash routing. It has breadcrumb support, and can be called multiple times. The jQuery plugin method takes a hash option so you could embed multiple menus on a page, each using a different hash to distinguish between them:

$('#naviMenu').navi({
  hash: '#!/'
, content: $('#naviContent')
});

The project’s homepage has markup samples and demos.

memdiff, numerizerJS, Obfuscate.js

18 Feb 2013 | By Alex Young | Comments | Tags testing debugging memory node modules parsing text

memdiff

memdiff (GitHub: azer / memdiff, License: WTFPL, npm: memdiff) by Azer Koculu is a BDD-style memory leak tool based on memwatch. It can either be used by writing scripts with describe and it, and then running them with memdiff:

function SimpleClass(){}
var leaks = [];

describe('SimpleClass', function() {
  it('is leaking', function() {
    leaks.push(new SimpleClass);
  });

  it('is not leaking', function() {
    new SimpleClass;
  });
});

Or by loading memdiff with require and passing a callback to memdiff. The memwatch module itself has an event-based API, and includes a native module – so both of these projects are tied to Node and won’t work in a browser.

numerizerJS

numerizerJS (GitHub: bolgovr / numerizerJS, License: MIT, npm: numerizer) by Roman Bolgov is a library for parsing English language string representations of numbers:

var numerizer = require('numerizer');
numerizer('forty two'); // '42'

It’s currently very simple, and doesn’t support browsers out of the box, but I like the fact the author has included Mocha tests. It’d work well alongside other libraries like Moment.js for providing intuitive text-based interfaces.

Obfuscate.js

Obfuscate.js (GitHub: miohtama / obfuscate.js, License: MIT) by Mikko Ohtamaa is a client-side script that replaces text on pages with nonsense, which may be preferable to exposing private information. Mikko suggests this might be useful for taking screenshots, so post-processing isn’t required to blur out personal information. The obfuscate function takes an optional selector, so either the entire body of a document can be obfuscated, or just the contents of a given selector.

It walks through each child node looking for text nodes, so it’s lightweight and doesn’t have any dependencies. It also tries to make the text look similar (at a glance) to the original text.

HHHHold, w2ui, Event Spy

15 Feb 2013 | By Alex Young | Comments | Tags node testing ui events jquery

HHHHold

HHHHold (GitHub: ThisIsJohnBrown / hhhhold-js, License: MIT) is a library for faking user generated content with hhhhold!:

Drop hhhhold URLs into your code for quick access to safe-for-work, attributed images from ffffound. Simulate real user content in your project.

It can be included as a client-side script for automatically generating random images whenever an image element has hhhhold.js/ in the src attribute. This allows various parameters to be passed to hhhhold, like the size of the image, or other options such as image brightness.

w2ui

w2ui

w2ui (GitHub: vitmalina / w2ui, License: MIT) by Vitali Malinouski is a UI library that is designed to be used with jQuery. The site has demos which use Bootstrap, but it doesn’t actually depend on Bootstrap as such – the project’s CSS files have been designed to work alongside other CSS libraries. There’s also a w2ui demo page that shows what the various widgets look like without Bootstrap.

So, what’s included? There are some widgets I find myself needing for a lot of projects that don’t come with Bootstrap, like sidebars and the data grid. There are also utility functions for validating values and for base64 encoding and decoding.

The JavaScript is all namespaced in w2utils and w2ui, and the CSS styles all have a w2ui- prefix, so it should be easy to drop it into a project to see what the widgets look like alongside existing functionality.

Event Spy

Event Spy (Google Code: event-spy, License: New BSD, Chrome Web Store: Event Spy) by Johan Laursen is a Chrome extension that adds event tracking to the developer tools. Only events with a listener will be displayed, and the target will be highlighted on the page.

The Chrome Web Store page for the project has a video that demonstrates the plugin in action, along with screenshots.

Backbone.js Tutorial: Testing with Mocks

14 Feb 2013 | By Alex Young | Comments | Tags backbone.js mvc node backgoog testing

Preparation

Before starting this tutorial, you’ll need the following:

  • alexyoung / dailyjs-backbone-tutorial at commit 5b0a529
  • The API key from part 2
  • The “Client ID” key from part 2
  • Update app/js/config.js with your keys (if you’ve checked out my source)

To check out the source, run the following commands (or use a suitable Git GUI tool):

git clone git@github.com:alexyoung/dailyjs-backbone-tutorial.git
cd dailyjs-backbone-tutorial
git reset --hard 5b0a529

Mocks

Last week I wrote about testing a custom Backbone.sync implementation using Sinon’s spies. This worked well in our situation where the transport layer isn’t necessarily pinned down – Sinon includes Fake XMLHttpRequest, but this won’t work with Google’s API as far as I know. This week I want to introduce another testing concept that Sinon provides: mocks.

Mocks are fake methods that allow expectations to be registered. Historically, you’ll find mocks used in unit tests where I/O occurs. If you’re testing business logic, you don’t need to check if a file was written or a network call was made; it’s often preferable to attach an expectation to make sure the appropriate API would have been called.

In Sinon, creating a mock returns an object that can be decorated with expectations. The API is chainable, so it’s low on boilerplate and high on readability. What you’re aiming to do is state “whenever this method is called, ensure it was called with these parameters”. This can be done through mocks by setting up expectations using matchers.

Matchers are similar to assertions – they can be used to check that arguments are everything from primitive types to instances of a constructor, or even literal values.

Last week we used spies to ensure Google’s API was accessed in the expected way. Mocks could be used for this as well. We don’t really care about the request so much as the fact a particular CRUD operation was requested. The signature for Backbone.gapiRequest is request, method, model, options – the method argument is generally what we’re interested in. Therefore, to set up an expectation that saving an existing task caused update to fire, we can use a mock with sinon.match.object:

var mock = sinon.mock(Backbone);
mock.expects('gapiRequest').once().withArgs(sinon.match.object, 'update');

// Do UI stuff to cause the task to be edited and the form to be submitted
mock.verify();

Mocks Compared to Spies

The previous example looked a lot like last week’s spies, and using spies for the same thing used less code. So, when should we use mocks, and when should we use spies? Mocks give you a fine-grained control over the order and behaviour of method calls. Spies have a different API which focuses on checking how callbacks or methods are used. If you were testing a method that accepts a callback, you could pass in a spy to see how the callback gets used. With a mock, the callback would be from the system under test, and you’d set up expectations on it.

When it comes to UI testing (triggering interface actions to invoke code), I find it’s easier to treat the entire Backbone stack as a whole and use spies to ensure the expected behaviour occurs. Rather than writing a test for each model, view, and collection, it makes more sense to drive the UI and hook into model or sync operations to verify the outcome.

In last week’s tests where lists were being tested, I probably wouldn’t use mocks because mocks should have a closer relationship to a given method under test. The kinds of tests we’re writing involve more than one method, so spies and assertions on the DOM make more sense.

Mock Example

A good place to use mocks is for testing app/js/gapi.js. Let’s say we’re interested in making sure gapiRequest gets called by Backbone.sync. We could use mocks:

test('gapiRequest is called by Backbone.sync', function() {
  var mock = sinon.mock(Backbone);
  mock.expects('gapiRequest').once();
  Backbone.sync('update', model, {});
  mock.verify();
});

This calls Backbone.sync to cause gapiRequest to be called once. This test doesn’t verify the behaviour of gapiRequest itself, just the fact it gets called.

One quirk of the custom Backbone.sync API is that Task.prototype.get is called twice: once to fetch the task’s ID, and again to get the list’s ID. We could test this with mocks if it was deemed important:

test('Ensure Task.prototype.get is called twice', function() {
  var mock = sinon.mock(model);

  mock.expects('get').twice().returns(model.id);
  Backbone.sync('update', model);
  mock.verify();
});

This uses the twice expectation with another mock.

Hopefully you’re starting to understand how mocks and spies differ. There’s another major part of Sinon, though, and that’s the stub API.

Stubs

Digging further into Backbone.gapiRequest, requests are expected to have an execute method which gets called to send data to Google’s API. Both spies and stubs can be used to test this using the yieldsTo method:

test('gapiRequest causes the execute callback to fire', function() {
  var spy = sinon.spy();
  sinon.stub(Backbone, 'gapiRequest').yieldsTo('execute', spy);
  Backbone.sync('update', model);

  assert.ok(spy.calledOnce);
  Backbone.gapiRequest.restore();
});

This test causes the following chain of events to occur:

  1. Backbone.sync calls Backbone.gapiRequest
  2. Backbone.gapiRequest receives an object with an execute property, which we’ve replaced with a spy
  3. Backbone.gapiRequest calls this execute method, therefore satisfying assert.ok(spy.calledOnce)

Putting these ideas together can be used to make sure the right success or error callbacks are triggered after a request has completed:

test('Errors get called', function() {
  var spy = sinon.spy()
    , options = { error: spy }
    ;

  // Stub the internal update method that would usually come from Google
  sinon.stub(gapi.client.tasks.tasks, 'update').returns({
    execute: sinon.stub().yields(options)
  });

  // Invoke a sync with a fake model and the options with the error callback
  Backbone.sync('update', model, options);

  assert.ok(spy.calledOnce);
  gapi.client.tasks.tasks.update.restore();
});

This test makes sure error gets called by using a spy, and it also stubs out gapi.client.tasks.tasks.update with our own object. This object has an execute property which causes the callback inside gapiRequest to run, and ultimately call error.

Clearing Up

I’ve written a test suite for tasks. It’s based on last week’s tests so there isn’t really anything new, apart from the teardown method:

setup(function() {
  // ...

  spyUpdate = sinon.spy(gapi.client.tasks.tasks, 'update');
  spyCreate = sinon.spy(gapi.client.tasks.tasks, 'insert');
  spyDelete = sinon.spy(gapi.client.tasks.tasks, 'delete');

  // ...
});

teardown(function() {
  gapi.client.tasks.tasks.update.restore();
  gapi.client.tasks.tasks.insert.restore();
  gapi.client.tasks.tasks.delete.restore();
});

I’ve found this pattern is better than calling reset, because it’s easy to attempt to wrap objects more than once when multiple test files are loaded.

Writing Good Tests with Sinon

Sinon might look like a small library that you can drop into Mocha, Jasmine, or QUnit, but there’s an art to writing good Sinon tests. Sinon’s documentation has some explanation of when exactly spies, mocks, and stubs are useful, but there is a subjective factor at play particularly when it comes to deciding whether a test is best written with a mock or a stub.

A few tips I’ve found useful are:

  • Spies are great for the times when you want to test the entire application, rather than a specific class or method
  • Stubs come in handy when there are methods you don’t want run or want to force execution down a given path
  • Mocks are good for testing specific methods
  • A single mock per test case should be used
  • You should call restore() after using spies and stubs; it’s easy to forget, and forgetting causes “double wrap” errors

Summary

Stylistically, spies, stubs, and mocks are very different, but they can seem vexingly similar until you’ve had some practice with Sinon. There have been mock vs. stub discussions on the Sinon.JS Google Group, so it’s probably best to ask Christian on that group if you’re struggling to get Sinon to do what you want.

The source for this tutorial can be found in alexyoung / dailyjs-backbone-tutorial, commit 0c6de32.

Node Roundup: 0.8.19, 0.9.9, Peer Dependencies, hapi, node-treap

13 Feb 2013 | By Alex Young | Comments | Tags node modules frameworks web

Node 0.8.19, 0.9.9, and Peer Dependencies

Node 0.8.19 was released last week and this version includes an update for npm that supports peer dependencies. I’m excited about this feature, and I’ll be interested to see how it pans out over time. Basically, you can now specify dependencies for “plugins”: think jQuery plugins, or, in a Node project, Grunt plugins.

This will require plugin authors to update their package.json files with a peerDependencies property, but it should make managing things like Express middleware and Grunt easier in the future. I already find npm’s dependency management relatively stress-free, and this seems like a step in the right direction.
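To illustrate, a plugin’s package.json might declare its host package like this (the plugin name and version range here are purely illustrative):

```json
{
  "name": "grunt-example-plugin",
  "version": "0.1.0",
  "peerDependencies": {
    "grunt": "~0.4.0"
  }
}
```

With this in place, npm installs the plugin alongside a compatible version of grunt rather than nesting a private copy, and warns when the versions conflict.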

Node 0.9.9 was also released last week; it features a streams2-powered tls module.

hapi

hapi

hapi (GitHub: walmartlabs / hapi, License: LICENSE, npm: hapi) from Walmart Labs is a framework for building RESTful API services. There are already a few solid RESTful API modules for Node (like restify), so hapi looks to be building on that concept rather than being an MVC web framework.

There’s a basic example that provides an overview of the API:

var Hapi = require('hapi');

// Create a server with a host, port, and options
var server = new Hapi.Server('localhost', 8080);

// Define the route
var hello = {
  handler: function(request) {
    request.reply({ greeting: 'hello world' });
  }
};

// Add the route
server.addRoute({
  method : 'GET',
  path : '/hello',
  config : hello
});

// Start the server
server.start();

Gone is the req, res pattern (which comes from Node’s core modules, not Connect). The hapi API documentation is extremely detailed, and includes examples for the main features. The routing API does seem extremely flexible, but it’s hard to judge it without seeing a large hapi application.

hapi, a Prologue by Eran Hammer is a detailed post that compares hapi to Express, which is useful if you’re familiar with Express. Eran writes:

We also had some bad experience with Express’ lack of true extensibility model. Express was a pleasure and easy to use 2 years ago with a limited set of middleware and very little interdependencies among them. But with a long list of chained middleware, we found hard to debug problems when we simply changed the order in which middleware modules were being loaded.

I’ve always thought the answer to this was to make smaller, interconnected services. Rather than a large Express application with complex middleware, shouldn’t we be using multiple Express applications that communicate with each other? Technically Express could be a veneer on top of a more complex architecture.

Eran brings up other points as well, but it’s difficult to say how well hapi or Express satisfy the goals of building large production web applications because people building things at that level don’t release gory details about their solutions to architectural problems. I occasionally run into a friend who works on a Rails project with thousands of models. It doesn’t work very well. But where are concrete details on real solutions to the problem of scaling business logic?

node-treap

node-treap (GitHub: brenden / node-treap, License: MIT, npm: treap) by Brenden Kokoszka is a treap implementation. A treap is a randomised, self-balancing binary search tree: each node is assigned a random priority, and the tree keeps keys in binary search order while keeping priorities in heap order. Once a tree has been created, keys can be inserted with data objects:

var treap = require('treap');
var t = treap.create();

// Insert some keys, augmented with data fields
t.insert(5, { foo: 12 });
t.insert(2, { foo: 8 });
t.insert(7, { foo: 1000 });

Then elements can be fetched and removed:

var a = t.find(5);
t.remove(a); // by reference to node
t.remove(7); // by key

Brenden has included tests, and each API method has documentation in the readme. He’s included some notes on what treaps are, so you don’t need to be fresh off the back of a computer science degree to figure out what the module does.
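To illustrate the idea behind the data structure (not node-treap's actual internals), here's a minimal treap sketch: each node gets a random priority on insertion, and rotations restore the heap property while preserving key order:

```javascript
// A minimal treap sketch: BST order on keys, heap order on random
// priorities, with rotations on insert. Illustrative only -- this is
// not node-treap's implementation.
function node(key, data) {
  return { key: key, data: data, priority: Math.random(), left: null, right: null };
}

function rotateRight(n) {
  var l = n.left;
  n.left = l.right;
  l.right = n;
  return l;
}

function rotateLeft(n) {
  var r = n.right;
  n.right = r.left;
  r.left = n;
  return r;
}

function insert(root, key, data) {
  if (!root) return node(key, data);
  if (key < root.key) {
    root.left = insert(root.left, key, data);
    if (root.left.priority > root.priority) root = rotateRight(root);
  } else {
    root.right = insert(root.right, key, data);
    if (root.right.priority > root.priority) root = rotateLeft(root);
  }
  return root;
}

function find(root, key) {
  if (!root) return null;
  if (key === root.key) return root;
  return find(key < root.key ? root.left : root.right, key);
}

var root = null;
root = insert(root, 5, { foo: 12 });
root = insert(root, 2, { foo: 8 });
root = insert(root, 7, { foo: 1000 });
```

Because priorities are random, the tree stays balanced in expectation without the bookkeeping of red-black or AVL trees.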

jQuery Roundup: Formwin, Three Sixty Slider, slideToucher

12 Feb 2013 | By Alex Young | Comments | Tags jquery plugins animations forms ui
Note: You can send your plugins and articles in for review through our contact form.

Formwin

Formwin (GitHub: rocco / formwin, License: MIT) by Rocco Georgi started as a fork of Uniform, but is now very different. It drops legacy browser support (IE8 is the oldest supported version) and relies on CSS for details like rounded corners.

The required markup is documented in the project’s readme. In general it relies on labels and spans:

<label class="formwintexts">
  <span>Label Text</span>
  <input type="text" name="yourinput">
</label>

It must be initialised to be used on a page, either with $.formwin.init(); or by setting $.formwinSettings. The init method accepts several options for configuring which CSS classes get used for things like active elements and hovering. This is similar to Uniform, and makes it extremely easy to drop into an existing project.

Three Sixty Slider

360

Three Sixty Slider (GitHub: creativeaura / threesixty-slider, License: MIT/GPL) by Gaurav Jassal cycles through a sequence of images to create the illusion of rotating a product through 360 degrees. It features smooth animations, mouse and touchscreen support, and has a lot of tweakable options.

Basic usage is like this:

$('.product1').ThreeSixty({
  totalFrames: 72, // Total number of images for the 360 slider
  endFrame: 72, // end frame for the auto spin animation
  currentFrame: 1, // This is the start frame for the auto spin
  imgList: '.threesixty_images', // selector for image list
  progress: '.spinner', // selector to show the loading progress
  imagePath:'/assets/product1/', // path of the image assets
  filePrefix: 'ipod-', // file prefix if any
  ext: '.jpg', // extension for the assets
  height: 265,
  width: 400,
  navigation: true
});

It’ll figure out the image names based on the settings, so the markup doesn’t need to include lots of img tags. There’s a live demo on creativeaura.github.com/threesixty-slider/.
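The derivation is simple enough to sketch: each frame's URL is built from the path, prefix, frame number, and extension. This is an illustrative helper, not the plugin's own code, and the plugin's exact numbering (e.g. zero-padding) may differ:

```javascript
// How frame image URLs can be derived from the settings above
// (an illustrative sketch, not Three Sixty Slider's own code).
function frameUrls(opts) {
  var urls = [];
  for (var i = 1; i <= opts.totalFrames; i++) {
    urls.push(opts.imagePath + opts.filePrefix + i + opts.ext);
  }
  return urls;
}

var urls = frameUrls({
  totalFrames: 72,
  imagePath: '/assets/product1/',
  filePrefix: 'ipod-',
  ext: '.jpg'
});
// urls[0] is '/assets/product1/ipod-1.jpg'
```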

slideToucher

slideToucher (GitHub: Yuripetusko / slideToucher, License: MIT) by Yuri Petusko is a swipe gesture plugin designed for high performance. It supports horizontal and vertical swipes, and uses translate3d to produce smooth animations where available.

It expects markup with the slide and row classes, and is invoked with $(selector).slideToucher({ vertical: true, horizontal: true });. The author has posted a demo here: yuripetusko.github.com/slideToucher/.

Numeric JavaScript, howler.js, depot.js

11 Feb 2013 | By Alex Young | Comments | Tags localStorage html5 mathematics audio

Numeric JavaScript

Numeric JavaScript (GitHub: sloisel / numeric, License: MIT) by Sébastien Loisel is a library that provides tools for matrix and vector calculations, convex optimisation, and linear programming. This library was sent in by Emil Bay, who uses it for computationally intensive tasks like genetic programming and AI. Emil says it’s extremely fast, and the Numeric author has some detailed benchmarks of Numeric with comparisons against Closure and Sylvester.
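To give a sense of what a linear solve involves, here's the 2×2 case worked out in plain JavaScript with Cramer's rule. This illustrates the kind of computation Numeric performs (and generalises to larger systems); it is not the library's implementation:

```javascript
// Solving A * x = b for a 2x2 system via Cramer's rule -- a plain
// JavaScript illustration of what a linear solver computes, not
// Numeric's implementation.
function solve2x2(A, b) {
  var det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
  if (det === 0) throw new Error('singular matrix');
  return [
    (b[0] * A[1][1] - b[1] * A[0][1]) / det,
    (b[1] * A[0][0] - b[0] * A[1][0]) / det
  ];
}

// Solve: x + 2y = 5, 3x + 4y = 11
var x = solve2x2([[1, 2], [3, 4]], [5, 11]);
// x is [1, 2]
```

For anything beyond toy sizes, a library like Numeric uses LU decomposition rather than Cramer's rule, which scales far better.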

howler.js

howler.js

howler.js (GitHub: goldfire / howler.js, License: MIT) by James Simpson and GoldFire Studios is an audio library that works with Web Audio and HTML5 Audio. Like similar libraries, it can automatically load the right file format for a given browser, but it also comes with a bevy of other features. It has an event-based API, and methods like fadeIn for handling some of the basic tasks you’ll face when working with audio.

It implements a cache pool and automatically fetches the audio files, which explains why it seemed so fast when I played around with the examples. It’s implemented without any dependencies, and I noticed the source was consistently formatted and easy to follow.

depot.js

depot.js (GitHub: mkuklis / depot.js, License: MIT, bower: depot) by Michal Kuklis is a localStorage wrapper that can be used with CommonJS or AMD, but also works with plain-old script tags. To use it, define a store and then call methods on the store’s instance:

var todoStore = depot('todos');

todoStore.save({ title: 'todo1' });
todoStore.updateAll({ completed: false });

// Fetch all:
todoStore.all();

It comes with Mocha tests, which can be run with PhantomJS.
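The general pattern behind a wrapper like this is to serialise records to JSON under a namespaced key. Here's a minimal sketch of that idea, with a plain object standing in for localStorage so it runs outside the browser; the method names mirror the example above but the internals are hypothetical, not depot.js's actual code:

```javascript
// A minimal sketch of the localStorage-wrapper pattern: records are
// serialised to JSON under a namespaced key. A plain object stands in
// for localStorage here; this is not depot.js's implementation.
var storage = {};

function depot(ns) {
  return {
    save: function(record) {
      var records = this.all();
      records.push(record);
      storage[ns] = JSON.stringify(records);
    },
    updateAll: function(attrs) {
      var records = this.all().map(function(r) {
        for (var k in attrs) r[k] = attrs[k];
        return r;
      });
      storage[ns] = JSON.stringify(records);
    },
    all: function() {
      return storage[ns] ? JSON.parse(storage[ns]) : [];
    }
  };
}

var todoStore = depot('todos');
todoStore.save({ title: 'todo1' });
todoStore.updateAll({ completed: false });
// todoStore.all() is [{ title: 'todo1', completed: false }]
```

Namespacing the key means several stores can share one localStorage instance without colliding.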