AngularJS: Managing Feeds

09 May 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds

Previously

In the last part we looked at fetching and parsing feeds with YQL using Angular’s $http service. The commit for that part was 2eae54e.

This week you’ll learn more about Angular’s data binding by adding some input fields to allow feeds to be added and removed.

If you get stuck at any part of this tutorial, check out the full source here: commit c9f9d06.

Modeling Multiple Feeds

The previous part mapped one feed to a view by using the $scope.feed object. Now we want to support multiple feeds, so we’ll need a way of modeling ordered collections of feeds.

The easiest way to do this is simply by using an array. Feed objects that contain the post items and feed URLs can be pushed onto the array:

$scope.feeds = [{
  url: 'http://dailyjs.com/atom.xml',
  items: [ /* Blog posts go here */ ]
}];

Rendering Multiple Feeds

The view now needs to be changed to use multiple feeds instead of a single feed. This can be achieved by using the ng-repeat directive to iterate over each one (app/views/main.html):

<div ng-repeat="feed in feeds">
  <ul>
    <li ng-repeat="item in feed.items"><a href="{{item.link.href}}">{{item.title}}</a></li>
  </ul>
  URL: <input size="80" ng-model="feed.url">
  <button ng-click="fetchFeed(feed)">Update</button>
  <button ng-click="deleteFeed(feed)">Delete</button>
  <hr />
</div>

The fetchFeed and deleteFeed methods should be added to $scope in the controller, but we’ll deal with those later. First let’s add a form to create new feeds.

Adding Feeds

The view for adding feeds needs to use an ng-model directive to bind a value so the controller can access the URL of the new feed:

<div>
  URL: <input size="80" ng-model="newFeed.url">
  <button ng-click="addFeed(newFeed)">Add Feed</button>
</div>

The addFeed method will be triggered when the button is clicked. All we need to do is push newFeed onto $scope.feeds, fetch it, then reset newFeed so the form is cleared. The addFeed method is also added to $scope in the controller (app/scripts/controllers/main.js), like this:

$scope.addFeed = function(feed) {
  $scope.feeds.push(feed);
  $scope.fetchFeed(feed);
  $scope.newFeed = {};
};

This example could be written to use $scope.newFeed instead of the feed argument, but don’t you think it’s cool that arguments can be passed from the view just by adding them to the directive?

Fetching Feeds

The original $http code should be wrapped up as a method so it can be called by the ng-click directive on the button:

$scope.fetchFeed = function(feed) {
  feed.items = [];

  var apiUrl = "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20xml%20where%20url%3D'";
  apiUrl += encodeURIComponent(feed.url);
  apiUrl += "'%20and%20itemPath%3D'feed.entry'&format=json&diagnostics=true&callback=JSON_CALLBACK";

  $http.jsonp(apiUrl).
    success(function(data, status, headers, config) {
      if (data.query.results) {
        feed.items = data.query.results.entry;
      }
    }).
    error(function(data, status, headers, config) {
      console.error('Error fetching feed:', data);
    });
};

The feed argument will be the same object as the one in the $scope.feeds array, so clearing the current set of items with feed.items = [] gives the user instant feedback when “Update” is clicked. That makes it easier to see what’s happening if the feed URL is changed to another URL.

I’ve used encodeURIComponent to encode the feed’s URL so it can be safely inserted as a query parameter for Yahoo’s service.

Deleting Feeds

The controller also needs a method to delete feeds. Since we’re working with an array we can just splice off the desired item:

$scope.deleteFeed = function(feed) {
  $scope.feeds.splice($scope.feeds.indexOf(feed), 1);
};

Periodic Updates

Automatically refreshing feeds is an interesting case in AngularJS because it can be implemented using the $timeout service. It’s just a wrapper around setTimeout, but it also delegates exceptions to $exceptionHandler.

To use it, add it to the list of arguments in the controller and set a sensible default value:

angular.module('djsreaderApp')
  .controller('MainCtrl', function($scope, $http, $timeout) {
    $scope.refreshInterval = 60;

Now make fetchFeed schedule itself at the end of the method:

$timeout(function() { $scope.fetchFeed(feed); }, $scope.refreshInterval * 1000);

I’ve multiplied the value by 1000 so it converts seconds into milliseconds, which makes the view easier to understand:

<p>Refresh (seconds): <input ng-model="refreshInterval"></p>


Conclusion

Now that you can add more feeds to the reader, it’s starting to feel more like a real web application. Over the next few weeks I’ll add tests and a better interface.

The code for this tutorial can be found in commit c9f9d06.

Node Roundup: Node-sass, TowTruck, peer-vnc

08 May 2013 | By Alex Young | Comments | Tags node modules sass css vnc mozilla
You can send in your Node projects for review through our contact form.

Node-sass

Node-sass (GitHub: andrew / node-sass, License: MIT, npm: node-sass) by Andrew Nesbitt is a Node binding for libsass. It comes with some pre-compiled binaries, so it should be easy to get it running.

It has both synchronous and asynchronous APIs, and there’s an example app built with Connect so you can see how the middleware works: andrew / node-sass-example.

var sass = require('node-sass');
// Async
sass.render(scss_content, callback [, options]);

// Sync
var css = sass.renderSync(scss_content [, options]);

The project includes Mocha tests and more usage information in the readme.

TowTruck

C. Scott Ananian sent in TowTruck (GitHub: mozilla / towtruck, License: MPL) from Mozilla, which is an open source web service for collaboration:

Using TowTruck two people can interact on the same page, seeing each other’s cursors, edits, and browsing a site together. The TowTruck service is included by the web site owner, and a web site can customize and configure aspects of TowTruck’s behavior on the site.

It’s not currently distributed as a module on npm, so you’ll need to follow the instructions in the readme to install it. There’s also a bookmarklet for adding TowTruck to any page, and a Firefox add-on.

peer-vnc

peer-vnc (GitHub: InstantWebP2P / peer-vnc, License: MIT, npm: peer-vnc) by Tom Zhou is a web VNC client. It uses his other project, iWebPP.io, which is a P2P web service module.

I had trouble installing node-httpp on a Mac, so YMMV, but I like the idea of a P2P noVNC project.

jQuery Roundup: UI 1.10.3, simplePagination.js, jQuery Async

07 May 2013 | By Alex Young | Comments | Tags jquery plugins async pagination jquery-ui ui
Note: You can send your plugins and articles in for review through our contact form.

jQuery UI 1.10.3

jQuery UI 1.10.3 was released last week. This is a maintenance release that has fixes for Draggable, Sortable, Accordion, Autocomplete, Button, Datepicker, Menu, and Progressbar.

simplePagination.js


simplePagination.js (GitHub: flaviusmatis / simplePagination.js, License: MIT) by Flavius Matis is a pagination plugin that supports Bootstrap. It has options for configuring the page links, next and previous text, style attributes, onclick events, and the initialisation event.

There’s an API for selecting pages, and the author has included three themes. When selecting a page, the truncated pages will shift, so it’s easy to skip between sets of pages.
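
Initialising the plugin looks something like this – a sketch based on the documented options, with a placeholder selector and illustrative values:

$('#pagination').pagination({
  items: 100,              // total number of items
  itemsOnPage: 10,         // items shown per page
  cssStyle: 'light-theme', // one of the bundled themes
  onPageClick: function(pageNumber) {
    // fetch and render the requested page here
  }
});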

jQuery Async


jQuery Async (GitHub: acavailhez / jquery-async, License: Apache 2) by Arnaud Cavailhez is a UI plugin for animating things while asynchronous operations take place. It depends on Bootstrap, and makes it easy to animate a button that triggers a long-running process.

The documentation has some clever examples that help visualise how the plugin works – two buttons are displayed so you can trigger the 'success' and 'error' events by hand. It’s built using $.Deferred, so it’ll work with the built-in Ajax API without much effort.

Swarm, Dookie, AngularJS Book

06 May 2013 | By Alex Young | Comments | Tags node modules books css

Swarm

Swarm (GitHub: gritzko / swarm, License: MIT) by Victor Grishchenko is a data synchronisation library that can synchronise objects on clients and servers.

Swarm is built around its concise four-method interface that expresses the core function of the library: synchronizing distributed object replicas. The interface is essentially a combination of two well recognized conventions, namely get/set and on/off method pairs, also available as getField/setField and addListener/removeListener calls respectively.

var obj = swarmPeer.on(id, callbackFn); // also addListener()
obj.set('field',value);
obj.getField()===obj.get('field')===obj.field;
obj.on('field', fieldCallbackFn);
obj.off('field', fieldCallbackFn);
swarmPeer.off(id, callbackFn);  // also removeListener()

The author has defined an addressing protocol that uses tokens to describe various aspects of an event and object. For more details, see Swarm: specifying events.

Dookie

Dookie (GitHub: voronianski / dookie-css, License: MIT, npm: dookie-css) by Dmitri Voronianski is a CSS library that’s built on Stylus. It provides several useful mixins and components:

  • Reset mixins: reset(), normalize(), and fields-reset()
  • Helpers: Shortcuts for working with display, fonts, images and high pixel ratio images
  • Sprites
  • Vendor prefixes
  • Easings and gradients

Dmitri has also included Mocha/PhantomJS tests for checking the generated output and visualising it.

Developing an AngularJS Edge

Developing an AngularJS Edge by Christopher Hiller is a new book about AngularJS aimed at existing JavaScript developers who just want to learn AngularJS. The author has posted examples on GitHub here: angularjsedge / examples, and there’s a sample chapter (ePub).

The book is being sold through Gumroad for $14.95, reduced from $19.95. The author notes that it is also available through Amazon and Safari Books Online.

LevelDB and Node: Getting Up and Running

03 May 2013 | By Rod Vagg | Comments | Tags node leveldb databases

This is the second article in a three-part series on LevelDB and how it can be used in Node.

Our first article covered the basics of LevelDB and its internals. If you haven’t already read it you are encouraged to do so as we will be building upon this knowledge as we introduce the Node interface in this article.

LevelDB

There are two primary libraries for using LevelDB in Node, LevelDOWN and LevelUP.

LevelDOWN is a pure C++ interface between Node.js and LevelDB. Its API provides limited sugar and is mostly a straight-forward mapping of LevelDB’s operations into JavaScript. All I/O operations in LevelDOWN are asynchronous and take advantage of LevelDB’s thread-safe nature to parallelise reads and writes.
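
To give a flavour of the lower level, here’s a minimal sketch of LevelDOWN usage (call signatures follow the LevelDOWN readme; error handling is omitted for brevity):

var leveldown = require('leveldown')
var db = leveldown('/tmp/example.db')

db.open(function (err) {
  // a straight-forward mapping of LevelDB's Put and Get
  db.put('key', 'value', function (err) {
    db.get('key', { asBuffer: false }, function (err, value) {
      console.log(value) // 'value'
      db.close(function () {})
    })
  })
})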

LevelUP is the library that the majority of people will use to interface with LevelDB in Node. It wraps LevelDOWN to provide a more Node.js-style interface. Its API provides more sugar than LevelDOWN, with features such as optional arguments and deferred-till-open operations (i.e. if you begin operating on a database that is in the process of being opened, the operations will be queued until the open is complete).

LevelUP exposes iterators as Node.js-style object streams. A LevelUP ReadStream can be used to read sequential entries, forward or reverse, to and from any key.

LevelUP handles JSON and other encoding types for you. For example, when operating on a LevelUP instance with JSON value-encoding, you simply pass in your objects for writes and they are serialised for you. Likewise, when you read them, they are deserialised and passed back in their original form.

A simple LevelUP example

var levelup = require('levelup')

// open a data store
var db = levelup('/tmp/dprk.db')

// a simple Put operation
db.put('name', 'Kim Jong-un', function (err) {

  // a Batch operation made up of 3 Puts
  db.batch([
      { type: 'put', key: 'spouse', value: 'Ri Sol-ju' }
    , { type: 'put', key: 'dob', value: '8 January 1983' }
    , { type: 'put', key: 'occupation', value: 'Clown' }
  ], function (err) {

    // read the whole store as a stream and print each entry to stdout
    db.createReadStream()
      .on('data', console.log)
      .on('close', function () {
        db.close()
      })
  })
})

Execute this application and you’ll end up with this output:

{ key: 'dob', value: '8 January 1983' }
{ key: 'name', value: 'Kim Jong-un' }
{ key: 'occupation', value: 'Clown' }
{ key: 'spouse', value: 'Ri Sol-ju' }

Basic operations

Open

There are two ways to create a new LevelDB store, or open an existing one:

levelup('/path/to/database', function (err, db) {
  /* use `db` */
})

// or

var db = levelup('/path/to/database')
/* use `db` */

The first version is a more standard Node-style async instantiation. You only start using the db when LevelDB is set up and ready.

The second version is a little more opaque. It looks like a synchronous operation, but the actual open call is still asynchronous; you just get a LevelUP object back immediately to use. Any calls you make on that object that need to operate on the underlying LevelDB store are queued until the store is ready to accept calls. The actual open operation is very quick, so the initial delay is generally not noticeable.

Close

To close a LevelDB store, simply call close() and your callback will be called when the underlying store is completely closed:

// close to clean up
db.close(function (err) { /* ... */ })

Read, write and delete

Reading and writing are what you would expect for asynchronous Node methods:

db.put('key', 'value', function (err) { /* ... */ })

db.del('key', function (err) { /* ... */ })

db.get('key', function (err, value) { /* ... */ })

Batch

As mentioned in the first article, LevelDB has a batch operation that performs atomic writes. These writes can be either put or delete operations.

LevelUP takes an array to perform a batch; each element of the array is either a 'put' or a 'del':

var operations = [
    { type: 'put', key: 'Franciscus', value: 'Jorge Mario Bergoglio' }
  , { type: 'del', key: 'Benedictus XVI' }
]

db.batch(operations, function (err) { /* ... */ })

Streams!

LevelUP turns LevelDB’s Iterators into Node’s readable streams, making them surprisingly powerful as a query mechanism.

LevelUP’s ReadStreams share all the same characteristics as standard Node readable object streams, such as being able to pipe() to other streams. They also emit all of the expected events.

var rs = db.createReadStream()

// our new stream will emit a 'data' event for every entry in the store

rs.on('data' , function (data) { /* data.key & data.value */ })
rs.on('error', function (err) { /* handle err */ })
rs.on('close', function () { /* stream finished & cleaned up */ })

But it’s the various options for createReadStream(), combined with the fact that LevelDB sorts by key, that make ReadStreams a powerful query abstraction:

db.createReadStream({
    start     : 'somewheretostart'
  , end       : 'endkey'
  , limit     : 100           // maximum number of entries to read
  , reverse   : true          // flip direction
  , keys      : true          // see db.createKeyStream()
  , values    : true          // see db.createValueStream()
})

'start' and 'end' point to keys in the store. These don’t even need to exist as actual keys because LevelDB will simply jump to the next existing key in lexicographical order. We’ll see later why this is helpful when we explore namespacing and range queries.

LevelUP also provides a WriteStream which maps write() operations to Puts or Batches.
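
A WriteStream can also be used directly – a sketch, assuming the write()/end() API described in the LevelUP readme of the time:

var ws = db.createWriteStream()

ws.on('error', function (err) { /* handle err */ })
ws.on('close', function () { /* all writes flushed */ })

// each write() becomes a Put (writes may be grouped into Batches internally)
ws.write({ key: 'name', value: 'Kim Jong-un' })
ws.write({ key: 'spouse', value: 'Ri Sol-ju' })
ws.end()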

Since ReadStream and WriteStream follow standard Node.js stream patterns, a copy database operation is simply a pipe() call:

function copy (srcdb, destdb, callback) {
  srcdb.createReadStream()
    .pipe(destdb.createWriteStream())
    .on('error', callback)
    .on('close', callback)
}

Encoding

LevelUP will accept most kinds of JavaScript objects, including Node’s Buffers, as both keys and values for all its operations. LevelDB stores everything as simple byte arrays so most objects need to be encoded and decoded as they go in and come out of the store.

You can specify the encoding of a LevelUP instance and you can also specify the encoding of individual operations. This means that you can easily store text and binary data in the same store.

'utf8' is the default encoding but you can change that to any of the standard Node Buffer encodings. You can also use the special 'json' encoding:

var db = levelup('/tmp/dprk.db', { valueEncoding: 'json' })

db.put(
    'dprk'
  , {
        name       : 'Kim Jong-un'
      , spouse     : 'Ri Sol-ju'
      , dob        : '8 January 1983'
      , occupation : 'Clown'
    }
  , function (err) {
      db.get('dprk', function (err, value) {
        console.log('dprk:', value)
        db.close()
      })
    }
)

Gives you the following output:

dprk: { name: 'Kim Jong-un',
  spouse: 'Ri Sol-ju',
  dob: '8 January 1983',
  occupation: 'Clown' }

Advanced example

In this example we assume the data store contains numeric data, where ranges of data are stored with prefixes on the keys. Our example function takes a LevelUP instance and a range key prefix and uses a ReadStream to calculate the variance of the values in that range using an online algorithm:

function variance (db, prefix, callback) {
  var n = 0, m2 = 0, mean = 0

  db.createReadStream({
        start : prefix          // jump to first key with the prefix
      , end   : prefix + '\xFF' // stop at the last key with the prefix
    })
    .on('data', function (data) {
      var delta = data.value - mean
      mean += delta / ++n
      m2 = m2 + delta * (data.value - mean)
    })
    .on('error', callback)
    .on('close', function () {
      callback(null, m2 / (n - 1))
    })
}

Let’s say you were collecting temperature data and you stored your keys in the form: location~timestamp. Sampling approximately every 5 seconds and collecting temperatures in Celsius, we may have data that looks like this:

au_nsw_southcoast~1367487282112 = 18.23
au_nsw_southcoast~1367487287114 = 18.22
au_nsw_southcoast~1367487292118 = 18.23
au_nsw_southcoast~1367487297120 = 18.23
au_nsw_southcoast~1367487302124 = 18.24
au_nsw_southcoast~1367487307127 = 18.24
...

To calculate the variance, we can efficiently stream values from our store through our function by simply calling:

variance(db, 'au_nsw_southcoast~', function (err, v) {
  /* v = variance */
})

Namespacing

The concept of namespacing keys will probably be familiar if you’re used to using a key/value store of some kind. By separating keys by prefixes we create discrete buckets, much like a table in a traditional relational database is used to separate different kinds of data.

It may be tempting to create separate LevelDB stores for different buckets of data but you can take better advantage of LevelDB’s caching mechanisms if you can keep the data organised in a single store.

Because LevelDB is sorted, choosing a namespace separator character can have an impact on the order of your entries. A separator commonly used in NoSQL databases is ':'. However, this character lands in the middle of the list of printable ASCII characters (character code 58), so your entries may not end up being sorted in a useful order.

Imagine you’re implementing a web server session store with LevelDB and you’re prefixing keys with usernames. You may have entries that look like this:

rod.vagg:last_login    = 1367487479499
rod.vagg:default_theme = psychedelic 
rod1977:last_login     = 1367434022300
rod1977:default_theme  = disco
rod:last_login         = 1367488445080
rod:default_theme      = funky
roderick:last_login    = 1367400900133
roderick:default_theme = whoa

Note that these entries are sorted and that '.' (character code 46) and '1' (character code 49) come before ':'. This may or may not matter for your particular application, but there are better ways to approach namespacing.

At the beginning of the printable ASCII character range is '!' (character code 33), and at the end we find '~' (character code 126). Using these characters as a delimiter we find the following sorting for our keys:

rod!...
rod.vagg!...
rod1977!...
roderick!...

or

rod.vagg~...
rod1977~...
roderick~...
rod~...

But why stick to the printable range? We can go right to the edges of the single-byte character range and use '\x00' (null) or '\xff' (ÿ).

For best sorting of your entries, choose '\x00' (or '!' if you really can’t stomach it). But whatever delimiter you choose, you’re still going to need to control the characters that can be used as keys. Allowing user input to determine your keys and not stripping out your delimiter character could result in the NoSQL equivalent of an SQL Injection Attack (e.g. consider the unintended consequences that may arise with the dataset above with a delimiter of '!' and allowing a user to have that character in their username).
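
A minimal sketch of that kind of control – the makeKey helper is made up, but the idea is simply to strip the delimiter from untrusted input before composing keys:

// remove the delimiter from user-supplied input before it reaches a key
function makeKey (username, field) {
  return String(username).replace(/\x00/g, '') + '\x00' + field
}

db.put(makeKey(username, 'last_login'), Date.now(), function (err) { /* ... */ })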

Range queries

LevelUP’s ReadStream is the perfect range query mechanism. By combining 'start' and 'end', which just need to be approximations of actual keys, you can pluck out exactly the entries you want.

Using our namespaced dataset above, with '\x00' as the delimiter, we can fetch all entries for just a single user by crafting a ReadStream range query:

var entries = []

db.createReadStream({ start: 'rod\x00', end: 'rod\x00\xff' })
  .on('data', function (entry) { entries.push(entry) })
  .on('close', function () { console.log(entries) })

Would give us:

[ { key: 'rod\x00default_theme', value: 'funky' },
  { key: 'rod\x00last_login', value: '1367488445080' } ]

The '\xff' comes in handy here because we can use it to include every string of characters preceding it, so any of our user session keys will be included, as long as they don’t start with '\xff'. So again, you need to control the allowable characters in your keys in order to avoid surprises.

Namespacing and range queries are heavily used by many of the libraries that extend LevelUP. In the final article in this series we’ll be exploring some of the amazing ways that developers are extending LevelUP to provide additional features, applications and complete databases.

If you want to jump ahead, visit the Modules page on the LevelUP wiki.

Book Review: The Meteor Book

02 May 2013 | By Alex Young | Comments | Tags meteor reviews books
Discover Meteor, by Sacha Greif and Tom Coleman

Sacha Greif sent me a copy of The Meteor Book, a new book all about Meteor that he’s writing with Tom Coleman, which will be released on May 7th. He was also kind enough to offer a 20% discount to DailyJS readers, which you can redeem at discovermeteor.com/dailyjs (available when the book is released).

The book itself is currently being distributed as a web application that allows the authors to collect early feedback. Each chapter is displayed as a page, with chapter navigation along the left-hand-side of the page and Disqus comments. There will also be downloadable formats like PDF, ePub, and Kindle.

The authors teach Meteor by structuring the book around building a web application called Microscope, based on the open source Meteor app Telescope. Each chapter is presented as a long form tutorial: a list of chapter goals is given, and then you’re walked through adding a particular feature to the app. Each code listing is visible on the web through a specific instance of the app, and every sample has a Git tag so it’s easy to look up the full source at any point. I really liked this aspect of the design of the book, because it makes it easier for readers to recover from mistakes when following along with the tutorial themselves (something many DailyJS readers contact me about).

Accessing a specific code sample is easy in The Meteor Book

There are also “sidebar chapters”, which are used to dive into technical topics in more detail. For example, the Deploying chapter is a sidebar, and explains Meteor deployment issues in general rather than anything specific to the Microscope app.

Although I work with Node, Express, Backbone.js (and increasingly AngularJS), I’m not exactly an expert on Meteor. The book is pitched at intermediate developers, so you’ll fly through it if you’re a JavaScript developer looking to learn about Meteor.

Something that appealed to me specifically was how the authors picked out points where Meteor is similar to other projects – there’s a section called Comparing Deps which compares Meteor’s data-binding system to AngularJS:

We’ve seen that Meteor’s model uses blocks of code, called computations, that are tracked by special ‘reactive’ functions that invalidate the computations when appropriate. Computations can do what they like when they are invalidated, but normally simply re-run. In this way, a fine level of control over reactivity is achieved.

They also explain the practical implications of some of Meteor’s design. For example, how hot code reloading relates to deployment and sessions, and how data can be limited to specific users for security reasons:

So we can see in this example how a local collection is a secure subset of the real database. The logged-in user only sees enough of the real dataset to get the job done (in this case, signing in). This is a useful pattern to learn from as you’ll see later on.

If you’ve heard about Meteor, then this book serves as a solid introduction. I like the fact a chapter can be digested in a single sitting, and it’s presented with some excellent diagrams and cleverly managed code listings. It’s not always easy to get started with a new web framework, given the sheer amount of disparate technologies involved, but this book makes learning Meteor fun and accessible.

Node Roundup: Caterpillar, squel, mongoose-currency

01 May 2013 | By Alex Young | Comments | Tags node modules sql databases mongo mongoose
You can send in your Node projects for review through our contact form.

Caterpillar

Benjamin Lupton sent in Caterpillar (GitHub: bevry / caterpillar, License: MIT, npm: caterpillar), which is a logging system. It supports RFC-standard log levels, but the main reason I thought it was interesting is that it’s based around the streams2 API.

By piping a Caterpillar stream through a suitable instance of stream.Transform, you can do all kinds of cool things. For example, caterpillar-filter can filter out unwanted log levels, and caterpillar-human adds fancy colours.
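
The piping idea looks roughly like this – treat the class names as approximate, since they’re recalled from the project readmes rather than verified against the current releases:

var logger = new (require('caterpillar').Logger)();
var filter = new (require('caterpillar-filter').Filter)();
var human = new (require('caterpillar-human').Human)();

// only entries that pass the filter reach the colourised human formatter
logger.pipe(filter).pipe(human).pipe(process.stdout);

logger.log('info', 'hello world');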

squel

I was impressed by Brian Carlson’s sql module, and Ramesh Nair sent in squel (GitHub: hiddentao / squel, License: BSD, npm: squel) which is a similar project. This SQL builder module supports non-standard queries, and has good test coverage with Mocha.
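
A quick illustration of the builder style (the table and column names are made up):

var squel = require('squel');

var query = squel.select()
  .from('users')
  .where("name = 'Fred'")
  .order('id', false) // false = descending
  .toString();

console.log(query);
// SELECT * FROM users WHERE (name = 'Fred') ORDER BY id DESC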

Ramesh has included some client-side examples as well, which sounds dangerous but may find uses, perhaps by generating SQL fragments to be used by an API that safely escapes them, or for generating documentation examples.

mongoose-currency

mongoose-currency (GitHub: catalystmediastudios / mongoose-currency, License: MIT, npm: mongoose-currency) by Paul Smith adds currency support to Mongoose. Money values are stored as an integer that represents the lowest unit of currency (pence, cents). Input can be a string that contains a currency symbol, commas, and a decimal.
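
Usage is along these lines – a sketch assuming the loadType() registration described in the readme:

var mongoose = require('mongoose');
require('mongoose-currency').loadType(mongoose);

var ProductSchema = new mongoose.Schema({
  price: { type: mongoose.Types.Currency }
});

// a value like '$1,234.99' would be stored as the integer 123499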

The Currency type works by stripping non-numerical characters. I’m not sure if this will work for regions where numbers use periods or spaces to separate groups of digits – it seems like this module would require localisation support to safely support anything other than dollars.

Paul has included unit tests written with Mocha, so it could be extended to support localised representations of currencies.

jQuery Roundup: Sco.js, Datepicker Skins, LocationHandler

30 Apr 2013 | By Alex Young | Comments | Tags jquery plugins jquery-ui datepicker bootstrap history
Note: You can send your plugins and articles in for review through our contact form.

Sco.js

Sco.js (GitHub: terebentina / sco.js, License: Apache 2.0) by Dan Caragea is a collection of Bootstrap components. They can be dropped into an existing Bootstrap project, or used separately as well.

Some of the plugins are replacements of the Bootstrap equivalents, but prefixed with $.scojs_. There are also a few plugins that are unique to Sco.js, like $.scojs_valid for validating forms, and $.scojs_countdown for displaying a stopwatch-style timer.

In cases where Sco.js replaces Bootstrap plugins, the author’s motivation was to simplify the underlying markup and reduce the reliance on IDs.

Dan has included tests, and full documentation for each plugin.

jQuery Datepicker Skins


Artan Sinani sent in these jQuery datepicker skins (GitHub: rtsinani / jquery-datepicker-skins). They’re tested with jQuery UI v1.10.1 and jQuery 1.9.1, so they should work with new projects quite nicely.

LocationHandler

LocationHandler (GitHub: slv / LocationHandler) by Michele Salvini is a plugin for managing pushState and onpopstate. It emits events for various stages of the history change life cycle. Each supported state is documented in the readme, but the basic usage looks like this:

$(document).ready(function() {
  var locationHandler = new LocationHandler({
    locationWillChange: function(change) {
    },
    locationDidChange: function(change) {
    }
  });
});

The change object has properties for the from/to URLs, and page titles as well.

Packery, Gumba, watch-array

29 Apr 2013 | By Alex Young | Comments | Tags browser libraries ui arrays events

Packery


Victor sent in Packery (GitHub: metafizzy / packery, License: MIT/Commercial, bower: packery) from Metafizzy, which is a bin packing library. It organises elements to fit around the space available. Certain elements can be “stamped” into a specific position, fitted to an ideal spot, or made draggable.

Packery can be configured in JavaScript using the Packery constructor function, or purely in HTML using a class and data attributes. jQuery is not required, but the project does have some dependencies, so the authors recommend installation with Bower.
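
In JavaScript it boils down to this kind of initialisation (the selectors and values are placeholders):

var container = document.querySelector('#container');

var pckry = new Packery(container, {
  itemSelector: '.item', // which children to lay out
  gutter: 10             // space between items, in pixels
});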

The project can be used under the MIT license, but commercial projects require a license that starts at $25.

Gumba

Gumba (GitHub: welldan97 / gumba, License: MIT, npm: gumba) by Dmitry Yakimov is CoffeeScript for the command-line:

$ echo '1234567' | gumba 'toNumber().numberFormat()'
1,234,567

It’s a bit like Awk or sed, but for the chainable text operations supported by CoffeeScript and Underscore.string.

watch-array

watch-array (GitHub: azer / watch-array, License: BSD, npm: watch-array) by Azer Koçulu causes arrays to emit events when mutator methods are used. Usage is simple – just call watchArray on an array, and pass it a callback that will be triggered when the array changes:

var watchArray = require('watch-array');
var people = ['Joe', 'Smith'];

watchArray(people, function(update) {
  console.log(update.add);
  // => { 1: 'Taylor', 2: 'Brown' }

  console.log(update.remove);
  // => [0]
});

people.shift();
people.push('Taylor', 'Brown');

In a way this is like a micro version of what data binding frameworks implement. The author has included tests written with his fox test framework.

Yeoman Configstore, Debug.js, Sublime JavaScript Refactoring

26 Apr 2013 | By Alex Young | Comments | Tags yeoman libraries browser node debugging editors

Configstore

Sindre Sorhus sent in configstore (GitHub: yeoman / configstore, License: BSD, npm: configstore), a small module for storing configuration variables without worrying about where and how. The underlying data file is YAML, and stored in $XDG_CONFIG_HOME.

Configstore instances are used with a simple API for getting, setting, and deleting values:

var Configstore = require('configstore');
var packageName = require('./package').name;

var conf = new Configstore(packageName, { foo: 'bar' });

conf.set('awesome', true);
console.log(conf.get('awesome'));  // true
console.log(conf.get('foo'));      // bar

conf.del('awesome');
console.log(conf.get('awesome'));  // undefined

The Yeoman repository on GitHub has many more interesting server-side and client-side modules – currently most projects are related to client-side workflow, but given the discussions on Yeoman’s Google+ account I expect there will be an increasing number of server-side modules too.

Debug.js

Jerome Etienne has appeared on DailyJS a few times with his WebGL libraries and tutorials. He recently released debug.js (GitHub: jeromeetienne / debug.js, License: MIT), which is a set of tools for browser and Node JavaScript debugging.

The tutorial focuses on global leak detection, which is able to display a trace that shows where the leak originated. Another major feature is strong type checking for properties and function arguments.

Methods can also be marked as deprecated, allowing debug.js to generate notifications when such methods are accessed.

More details can be found on the debug.js project page.

Sublime Text Refactoring Plugin

Stephan Ahlf sent in his Sublime Text Refactoring Plugin (GitHub: s-a / sublime-text-refactor, License: MIT/GPL). The main features are method extraction, variable and function definition navigation, and renaming based on scope.

The plugin uses Node, and has some unit tests written in Mocha. The author is planning to add more features (the readme has a to-do list).

AngularJS: Rendering Feeds

25 Apr 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds

Previously

In last week’s part I introduced Yeoman and we created a template project that included AngularJS. You can get the source at alexyoung / djsreader. The commit was 2e15d97.

Workflow

The workflow with Yeoman is based around Grunt. Prior to Yeoman, many developers had adopted a similar approach – a lightweight web server was started up using Node and Connect, and a filesystem watcher was used to rebuild the client-side assets whenever a file was edited.

Yeoman bundles all of this up for you so you don’t need to reinvent it. When working on a Yeoman project, type grunt server to start a web server in development mode.

This should open a browser window at http://localhost:9000/#/ with a welcome message. Now that the web server is running, you can edit files under app/, and Grunt will rebuild your project as required.

Key Components: Controllers and Views

The goal of this tutorial is to make something that can download a feed and render it – all using client-side code. AngularJS can do all of this, with the help of YQL for mapping an RSS/Atom feed to JSON.

This example is an excellent “soft” introduction to AngularJS because it involves several of the key components:

  • Controllers, for combining the data and views
  • Views, for rendering the articles returned by the service
  • Services, for fetching the JSON data

The Yeoman template project already contains a view and controller. The controller can be found in app/scripts/controllers/main.js, and the view is in app/views/main.html.

If you take a look at these files, it’s pretty obvious what’s going on: the controller sets some values that are then used by the template. The template is able to iterate over the values that are set by using the ng-repeat directive.
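
The generated code boils down to something like this (paraphrased from the Yeoman template, so the details may differ slightly):

// app/scripts/controllers/main.js
angular.module('djsreaderApp')
  .controller('MainCtrl', function($scope) {
    $scope.awesomeThings = ['HTML5 Boilerplate', 'AngularJS', 'Karma'];
  });

<!-- app/views/main.html -->
<ul>
  <li ng-repeat="thing in awesomeThings">{{thing}}</li>
</ul>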

Directives and Data Binding

Directives can be used to transform the DOM, so the main.html file is a dynamic template that is interpolated at runtime.

The way in which data is bound to a template is through scopes. The $scope object, which is passed to the controller, will cause the template to be updated when it is changed. This is actually asynchronous:

Scope is the glue between application controller and the view. During the template linking phase the directives set up $watch expressions on the scope. The $watch allows the directives to be notified of property changes, which allows the directive to render the updated value to the DOM.

Notice how the view is updated when properties change. That means property assignments to $scope in the controller will be reflected by the template.

If you’re of an inquisitive nature, you’re probably wondering how the controller gets instantiated and associated with the view. There’s a missing piece of the story here that I haven’t mentioned yet: routing.

Router Providers

The MainCtrl (main controller) is bound to views/main.html in app/scripts/app.js:

angular.module('djsreaderApp', [])
  .config(function($routeProvider) {
    $routeProvider
      .when('/', {
        templateUrl: 'views/main.html',
        controller: 'MainCtrl'
      })
      .otherwise({
        redirectTo: '/'
      });
  });

The $routeProvider uses a fluent, chainable API for mapping URLs to controllers and templates. This file is a centralised configuration file that sets up the application.

The line that reads angular.module sets up a new “module” called djsreaderApp. This isn’t technically the same as a Node module or RequireJS module, but it’s very similar – modules are registered in a global namespace so they can be referenced throughout an application. That includes third-party modules as well.

Fetching Feeds

To load feeds, we can use the $http service. But even better… it supports JSONP, which is how the Yahoo! API provides cross-domain access to the data we want to fetch. Open app/scripts/controllers/main.js and change it to load the (extremely long) YQL URL:

angular.module('djsreaderApp')
  .controller('MainCtrl', function($scope, $http) {
    var url = "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20xml%20where%20url%3D'http%3A%2F%2Fdailyjs.com%2Fatom.xml'%20and%20itemPath%3D'feed.entry'&format=json&diagnostics=true&callback=JSON_CALLBACK";

    $http.jsonp(url).
      success(function(data, status, headers, config) {
        $scope.feed = {
          title: 'DailyJS',
          items: data.query.results.entry
        };
      }).
      error(function(data, status, headers, config) {
        console.error('Error fetching feed:', data);
      });
  });

The second line has changed to include a reference to $http – this allows us to access Angular’s built-in HTTP module.

The $scope is now updated with the result of the JSONP request. When $scope.feed is set, AngularJS will automatically update the view with the new values.

Now the view needs to be updated to display the feed items.

Rendering Feed Items

To render the feed items, open app/views/main.html and use the ng-repeat directive to iterate over each item and display it:

<h1>{{feed.title}}</h1>
<ul>
  <li ng-repeat="item in feed.items">{{item.title}}</li>
</ul>

This will now render the title of each feed entry. If you’re running grunt server you should have found that whenever a file was saved it caused the browser window to refresh. That means your changes should be visible, and you should see the recent stories from DailyJS.

What you should see...

Conclusion

In this brief tutorial you’ve seen Angular controllers, views, directives, data binding, and even routing. If you’ve written much Backbone.js or Knockout before then you should be starting to see how AngularJS implements similar concepts. It takes a different approach – I found $scope a little bit confusing at first for example, but the initial learning curve is mainly down to learning terminology.

If you’ve had trouble getting any of this working, try checking out my source on GitHub. The commit for this tutorial was 73af554.

Node Roundup: 0.10.5, Node Task, cap

24 Apr 2013 | By Alex Young | Comments | Tags node modules grunt network security streams
You can send in your Node projects for review through our contact form.

Node 0.10.5

Node 0.10.5 is out. Apparently it now builds under Visual Studio 2012.

One small change I noticed was added by Ryan Doenges, where the assert module now puts information into the message property:

4716dc6 made assert.equal() and related functions work better by generating a better toString() from the expected, actual, and operator values passed to fail(). Unfortunately, this was accomplished by putting the generated message into the error’s name property. When you passed in a custom error message, the error would put the custom error into name and message, resulting in helpful string representations like "AssertionError: Oh no: Oh no".

The pull request for this is nice to read (apparently Ryan is only 17, so he got his dad to sign the Contributor License Agreement document).

Node Task

Node Task, sent in by Khalid Khan, is a specification for a promise-based API that wraps around JavaScript tasks. The idea is that tasks used with projects like Grunt should be compatible, and able to be processed through an arbitrary pipeline:

Eventually, it is hoped that popular JS libraries will maintain their own node-task modules (think jshint, stylus, handlebars, etc). If/when this happens, it will be trivial to pass files through an arbitrary pipeline of interactions and transformations utilizing libraries across the entire npm ecosystem.

After reading through each specification, it seems like an interesting attempt to standardise Grunt-like tasks. The API seems streams-inspired, as it’s based around EventEmitter2 with various additional methods that are left for implementors to fill in.

cap

Brian White sent in his cross-platform packet capturing library, “cap” (GitHub: mscdex / cap, License: MIT, npm: cap). It’s built using WinPcap for Windows and libpcap and libpcap-dev for Unix-like operating systems.

It’s time to write your vulnerability scanning tools with Node!

Brian also sent in “dicer” (GitHub: mscdex / dicer, License: MIT, npm: dicer), which is a streaming multipart parser. It uses the streams2 base classes and readable-stream for Node 0.8 support.

jQuery 2.0 Released

23 Apr 2013 | By Alex Young | Comments | Tags jquery plugins
Note: You can send your plugins and articles in for review through our contact form.

jQuery 2.0 has been released. The most significant, headline-grabbing change is the removal of support for legacy browsers, including IE 6, 7, and 8. The 1.x branch will continue to be supported, so it’s safe to keep using it if you need broad IE support.

The jquery-migrate plugin can be used to help you migrate away from legacy APIs. jQuery 2.0 is “API compatible” with 1.9, which means migration shouldn’t be as painful as it could be. They’ve been pushing jquery-migrate for a while now, so hopefully this stuff isn’t new to anyone who likes to keep current with jQuery.

The announcement blog post has more details on IE support, the next release of jQuery, and the benefits of upgrading to 2.0.

Upgrade Planning

If you’re interested in upgrading, the jQuery documentation has notes on each API method that is deprecated. It also documents features that can be used to mitigate API changes – for example, if you’re using a plugin that requires an earlier version of jQuery, you could technically run multiple versions on a page by using jQuery.noConflict:

<script type="text/javascript" src="other_lib.js"></script>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript">
  $.noConflict();
  // Code that uses another library's $ can follow here.
</script>

Plugins that are listed on the jQuery Plugin Registry should list the required jQuery version in the package manifest file. That means you can easily see what version of jQuery a plugin depends on. Many already do depend on jQuery 1.9 or above, so they should be safe to use with jQuery 2.0.

As always, well-tested projects should be easier to migrate. So get those QUnit tests out and see what happens!

Object.observe Shim, Behave, Snap.js

22 Apr 2013 | By Alex Young | Comments | Tags shims textarea mobile ui

Object.observe Shim

It’s encouraging to see Harmony taking on board influences from databinding frameworks, given how important they’re proving to be to front-end development. The proposed Object.observe API aims to improve the way frameworks respond to changes in objects:

Today, JavaScript frameworks which provide databinding typically create objects wrapping the real data, or require objects being databound to be modified to buy in to databinding. The first case leads to increased working set and more complex user model, and the second leads to siloing of databinding frameworks. A solution to this is to provide a runtime capability to observe changes to an object.

François de Campredon sent in KapIT’s Object.observe shim (GitHub: KapIT / observe-shim, License: Apache), which implements the algorithm described by the proposal. It’s compatible with ECMAScript 5 browsers, and depends on a method called ObserveUtils.defineObservableProperties for setting up the properties you’re interested in observing. The readme has more documentation and build instructions.
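
Usage looks roughly like this – a sketch based on the readme’s description, so the exact change-record fields may differ:

var obj = {};

// make the properties we care about observable
ObserveUtils.defineObservableProperties(obj, 'name', 'age');

Object.observe(obj, function(changes) {
  changes.forEach(function(change) {
    console.log(change.type, change.name, change.oldValue);
  });
});

obj.name = 'Fred'; // observers are notified asynchronously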

Behave.js

Behave.js (GitHub: jakiestfu / Behave.js, License: MIT) by Jacob Kelley is a library for adding IDE-like behaviour to a textarea. It doesn’t have any dependencies, and has impressive browser support. Key features include hard and soft tabs, bracket insertion, and automatic and multiline indentation.

Basic usage is simply:

var editor = new Behave({
  textarea: document.getElementById('myTextarea')
});

There is also a function for binding events to Behave – the events are all documented in the readme, and include things like key presses and lifecycle events.

Snap.js

Snap.js (GitHub: jakiestfu / Snap.js, License: MIT) also by Jacob is another dependency-free UI component. This one is for creating mobile-style navigation menus that appear when clicking a button or dragging the entire view. It uses CSS3 transitions, and has an event-based API so it’s easy to hook it into existing interfaces.

The drag events use ‘slide intent’, which allows an integer to be specified (slideIntent) to control the angle at which a gesture is considered horizontal. Jacob has included helpful documentation on how to structure and style a suitable layout for the plugin:

Two absolute elements, one to represent all the content, and another to represent all the drawers. The content has a higher z-index than the drawers. Within the drawers element, its direct children should represent the containers for the drawers, these should be fixed or absolute.
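
Getting started is then a one-liner plus the markup above – a sketch assuming the constructor options from the readme:

var snapper = new Snap({
  element: document.getElementById('content'),
  slideIntent: 40 // degrees from horizontal that still count as a drag
});

// open the left drawer programmatically
snapper.open('left');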

He has lots of other useful client-side projects on GitHub/jakiestfu.

LevelDB and Node: What is LevelDB Anyway?

19 Apr 2013 | By Rod Vagg | Comments | Tags node leveldb databases

This is the first article in a three-part series on LevelDB and how it can be used in Node.

This article will cover the LevelDB basics and internals to provide a foundation for the next two articles. The second and third articles will cover the core LevelDB Node libraries: LevelUP, LevelDOWN and the rest of the LevelDB ecosystem that’s appearing in Node-land.

LevelDB

What is LevelDB?

LevelDB is an open-source, dependency-free, embedded key/value data store. It was developed in 2011 by Jeff Dean and Sanjay Ghemawat, researchers from Google. It’s written in C++, although it has third-party bindings for most common programming languages, including JavaScript / Node.js of course.

LevelDB is based on ideas in Google’s BigTable, but does not share code with BigTable; this allows it to be licensed for open source release. Dean and Ghemawat developed LevelDB as a replacement for SQLite as the backing-store for Chrome’s IndexedDB implementation.

It has since seen very wide adoption across the industry, serving as the back-end to a number of new databases, and is now the recommended storage back-end for Riak.

Features

  • Arbitrary byte arrays: both keys and values are treated as simple arrays of bytes, so content can be anything from ASCII strings to binary blobs.
  • Sorted by keys: by default, LevelDB stores entries lexicographically sorted by keys. The sorting is one of the main distinguishing features of LevelDB amongst similar embedded data storage libraries and comes in very useful for querying as we’ll see later.
  • Compressed storage: Google’s Snappy compression library is an optional dependency that can decrease the on-disk size of LevelDB stores with minimal sacrifice of speed. Snappy is highly optimised for fast compression and therefore does not provide particularly high compression ratios on common data.
  • Basic operations: Get(), Put(), Del(), Batch()

Basic architecture

Log Structured Merge (LSM) tree


All writes to a LevelDB store go straight into a log and a “memtable”. The log is regularly flushed into sorted string table files (SST) where the data has a more permanent home.

Reads on a data store merge these two distinct data structures, the log and the SST files. The SST files represent mature data and the log represents new data, including delete-operations.

A configurable cache is used to speed up common reads. The cache can potentially be large enough to fit an entire active working set in memory, depending on the application.

Sorted String Table files (SST)

Each SST file is limited to ~2MB, so a large LevelDB store will have many of these files. The SST file is divided internally into 4K blocks, each of which can be read in a single operation. The final block is an index that points to the start of each data block and records the key of the entry at the start of that block. A Bloom filter is used to speed up lookups, allowing a quick scan of an index to find the block that may contain the desired entry.

Keys can have shared prefixes within blocks. Any common prefix for keys within a block will be stored once, with subsequent entries storing just the unique suffix. After a fixed number of entries within a block, the shared prefix is “reset”, much like a keyframe in a video codec. Shared prefixes mean that verbose namespacing of keys does not lead to excessive storage requirements.

Table file hierarchy

The table files are not stored in a simple sequence; rather, they are organised into a series of levels. This is the “Level” in LevelDB.

Entries that come straight from the log are organised into Level 0, a set of up to 4 files. When additional entries force Level 0 above the maximum of 4 files, one of the SST files is chosen and merged with the SST files that make up Level 1, which is a set of up to 10MB of files. This process continues, with levels overflowing and one file at a time being merged with the (up to 3) overlapping SST files in the next level. Each level beyond Level 1 is 10 times the size of the previous level.

Log: Max size of 4MB (configurable), then flushed into a set of Level 0 SST files
Level 0: Max of 4 SST files, then one file compacted into Level 1
Level 1: Max total size of 10MB, then one file compacted into Level 2
Level 2: Max total size of 100MB, then one file compacted into Level 3
Level 3+: Max total size of 10 x previous level, then one file compacted into next level

0 ↠ 4 SST, 1 ↠ 10M, 2 ↠ 100M, 3 ↠ 1G, 4 ↠ 10G, 5 ↠ 100G, 6 ↠ 1T, 7 ↠ 10T


This organisation into levels minimises the reorganisation that must take place as new entries are inserted into the middle of a range of keys. Each reorganisation, or “compaction”, is restricted to just a small section of the data store. The hierarchical structure generally leads to data in the higher levels being the most mature data, with the fresher data being stored in the log and the initial levels. Since the initial levels are relatively small, overwriting and removing entries incurs less cost than when it occurs in the higher levels, but this matches the typical database where you have a large set of mature data and a more volatile set of fresh data (of course this is not always the case, so performance will vary for different data write and retrieve patterns).

A lookup operation must also traverse the levels to find the required entry. A read operation that requests a given key must first look in the log; if it is not found there, it looks in Level 0, moving up to Level 1 and so forth. In this way, a lookup operation incurs a minimum of one read per level that must be searched before finding the required entry. A lookup for a key that does not exist must search every level before a definitive “NotFound” can be returned (unless a Del operation is recorded for that key in the log).

Advanced features

  • Batch operations: provide a collection of Put and/or Del operations that are atomic; that is, the whole collection of operations succeeds or fails in a single Batch operation.
  • Bi-directional iterators: iterators can start at any key in a LevelDB store (even if that key does not exist, it will simply jump to the next lexical key) and can move forward and backwards through the store.
  • Snapshots: a snapshot provides a reference to the state of the database at a point in time. Read-queries (Get and iterators) can be made against specific snapshots to retrieve entries as they existed at the time the snapshot was created. Each iterator creates an implicit snapshot (unless it is requested against an explicitly created snapshot). This means that regardless of how long an iterator is alive and active, the data set it operates upon will always be the same as at the time the iterator was created.

Some details on these advanced features will be covered in the next two articles, when we turn to look at how LevelDB can be used to simplify data management in your Node application.

If you’re keen to learn more and can’t wait for the next article, see the LevelUP project on GitHub as this is the focus of much of the LevelDB activity in the Node community at the moment.

AngularJS: Let's Make a Feed Reader

18 Apr 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds

I’m looking forward to seeing what services appear in Google Reader’s wake. Reeder and Press are my favourite RSS apps, which I use to curate my sources for upcoming DailyJS content. It sounds like Reeder will support Feedbin, so hopefully Press and other apps will as well. I’ve also used Newsblur in the past, but I’m not sure if we’ll see Newsblur support in Reeder…

With that in mind, I thought it would be pretty cool to use a feed reader as the theme for this AngularJS tutorial series. A Bootstrap-styled, AngularJS-powered feed reader would look and feel friendly and fast. The main question, however, is how exactly do we download feeds? Atom and RSS feeds aren’t exactly friendly to client-side developers. What we need is JSON!

JSONP

The now standard way to fetch feeds in client-side code is to use JSONP. That’s where a remote resource is fetched, usually by inserting a script tag, and the server returns JavaScript wrapped in a callback that the client can run when ready.

I remember reading a post by John Resig many years ago that explained how to use this technique with RSS specifically: RSS to JSON Convertor. Ironically, a popular commercial solution for this was provided through Google Reader. Fortunately there’s another way to do it, this time by Yahoo! – the Yahoo! Query Language.

YQL

The YQL service (terms of use) is basically SQL for the web. It can be used to fetch and interpret all kinds of resources, including feeds. It has usage limits, so if you want to build on this tutorial series to make something more commercially viable, you’ll want to check those out in detail. Even though the endpoints we’ll use are “public”, Yahoo! will still rate limit them if they go over 2,000 requests per hour. To support higher volume users, API keys can be created.

If you visit this link you’ll see a runnable example that converts the DailyJS Atom feed into JSON, wrapped in a callback. The result looks like this:

cb({ "query": { /* loads of JSON! */ } });

The cb method will be run from within our fancy AngularJS/Bootstrap client-side code. I wrote about how to build client-side JSONP implementations in Let’s Make a Framework: Ajax Part 2, so check that out if you’re interested in that area.

As far as feed processing goes, YQL will give us the JSON we need to make a little feed reader.

Yo!

Before you press “next unread” in your own feed reader, let’s jump-start our application with Yeoman. First, install it and Grunt. I assume you already have a recent version of Node; if not, get a 0.10.x copy installed and then run the following:

npm install -g yo grunt-cli bower generator-angular generator-karma

Yeoman is based around generators, which are separate modules that you can install using npm. The previous command installed the AngularJS generator, generator-angular.

Next you’ll need to create a directory for the application to live in:

mkdir djsreader
cd djsreader

You should also run the angular generator:

yo angular

It will install a lot of stuff, but fortunately most of the modules are ones I’d use anyway so I’m cool with that. Answer Y to each question, apart from the one about Compass (I don’t think I have Compass installed, so I didn’t want that option).

Run grunt server to see the freshly minted AngularJS-powered app!

Hello, AngularJS

You may have noticed some “karma” files have appeared. Karma is the test runner used by AngularJS projects, and you can read about it at karma-runner.github.io. If you type grunt test, Grunt will happily trudge through the basic tests in test/spec/controllers/main.js.
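The scaffolded spec is a Jasmine test which, at the time of writing, looks something like this:

describe('Controller: MainCtrl', function() {
  // Load the application's module
  beforeEach(module('djsreaderApp'));

  var MainCtrl, scope;

  // Create a fresh scope and controller instance for each test
  beforeEach(inject(function($controller, $rootScope) {
    scope = $rootScope.$new();
    MainCtrl = $controller('MainCtrl', { $scope: scope });
  }));

  it('should attach a list of awesomeThings to the scope', function() {
    expect(scope.awesomeThings.length).toBe(3);
  });
});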

Summary

Welcome to the world of Yeoman, AngularJS, and… Yahoo!, apparently. The repository for this project is at alexyoung / djsreader. Come back in a week for the next part!

Node Roundup: 0.10.4, Papercut, rsz, sz

17 Apr 2013 | By Alex Young | Comments | Tags node modules graphics images uploads
You can send in your Node projects for review through our contact form.

Node 0.10.4

Node 0.10.4 was released last week. There are bug fixes for some core modules, and I also noticed this:

v8: Avoid excessive memory growth in JSON.parse (Fedor Indutny)

Another interesting patch was added to the stream module, to ensure write callbacks run before end:

stream: call write cb before finish event

The Node blog was quietly updated so the latest 0.8 release reads “legacy” instead of “stable”. I don’t recall previous stable releases being referred to this way, so I thought it was worth mentioning here.

Papercut

Papercut (GitHub: Rafe / papercut, License: MIT, npm: papercut) by Jimmy Chao is an image uploading module that supports Amazon S3, with resizing and cropping handled by node-imagemagick.

Uploaders can be created according to a schema, making it easy to manage the different image sizes and formats your application needs:

var papercut = require('papercut');

var AvatarUploader = papercut.Schema(function(schema) {
  schema.version({
    name: 'avatar'
  , size: '200x200'
  , process: 'crop'
  });

  schema.version({
    name: 'small'
  , size: '50x50'
  , process: 'crop'
  });
});

Papercut also supports configuration using NODE_ENV, so it’s easy to configure to work sensibly in various deployment environments.

rsz

rsz (GitHub: rvagg / node-rsz, License: MIT, npm: rsz) by Rod Vagg is a module, based on LearnBoost’s node-canvas, for resizing images. The API is based around a single method that accepts various signatures. The basic usage is rsz(src, width, height, function (err, buf) { /* */ }).
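Here’s a minimal sketch based on that signature – the filenames are made up:

// Resize image.png to 100x100 and write the result to a new file
var fs = require('fs');
var rsz = require('rsz');

rsz('image.png', 100, 100, function(err, buf) {
  if (err) throw err;
  fs.writeFileSync('image-100x100.png', buf);
});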

sz

sz (GitHub: rvagg / node-sz, License: MIT, npm: sz), also by Rod, is another image-related module – this one determines the size of an image. Note that both of these modules work with image files and Buffer objects:

var fs = require('fs');
var sz = require('sz');

var buf = fs.readFileSync('image.gif');

sz(buf, function(err, size) {
  // `size` may look like: { height: 280, width: 400 }
});

jQuery Roundup: TyranoScript, Sly, FPSMeter

16 Apr 2013 | By Alex Young | Comments | Tags jquery plugins graphics animations games
Note: You can send your plugins and articles in for review through our contact form.

TyranoScript

Evan Burchard sent in TyranoScript (GitHub: ShikemokuMK / tyranoscript, License: MIT), a jQuery-powered HTML5 interactive fiction game engine:

The game engine was only in Japanese, so I spent the last week making it available in English. As far as what it does, it sits somewhere between an interactive fiction scripting utility and a presentation library like impress.js. It has built-in functions (tags) for things like making text and characters pop up, saving the game, changing scenery and shaking the screen. But it supplies interfaces for arbitrary HTML, CSS and JavaScript to be run as well, so conceivably one could use it for presentations or other types of applications. One of the sample games on the project website demonstrates this with a compelling YouTube API integration. The games created with TyranoScript can run on modern browsers, Android, iPad and iPhone.

Evan’s English version is at EvanBurchard / tyranoscript. For a sample game, check out the delightfully nutty Jurassic Heart – a game where you date a dinosaur (of course)!

Sly

Sly (GitHub: Darsain / sly, License: MIT, Bower: sly) by Darsain is a library for scrolling – it can be used where you need to replace scrollbars, or where you want to build your own navigation solutions.

The author has paid particular attention to performance:

Sly has a custom high-performance animation rendering based around the Animation Timing Interface written directly for its needs. This provides an optimized 60 FPS rendering, and is designed to still accept easing functions from jQuery Easing Plugin, so you won’t even notice that Sly’s animations have nothing to do with jQuery :).

Sly’s site has a few examples – check out the infinite scrolling and parallax demos.
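Getting started looks roughly like this – treat the option names as illustrative, as they may change between releases:

// Wrap a scrollable frame element and activate item-based navigation
var frame = new Sly('#frame', {
  horizontal: 1,
  itemNav: 'basic',
  smart: 1,
  mouseDragging: 1
}).init();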

FPSMeter

Sly’s author also sent in FPSMeter. When working on graphically-oriented projects, it’s sometimes useful to display the frames-per-second of animations. FPSMeter (GitHub: Darsain / fpsmeter, Bower: fpsmeter) measures FPS using WindowAnimationTiming, with a polyfill that extends support to most browsers, including IE7+.

FPSMeter can measure FPS, the number of milliseconds between frames, and how long it takes to render one frame. It copes with multiple instances on a page, and has show/hide methods that pause rendering. It also supports theming, so you should be able to get it to sit nicely in your existing interface.
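Usage follows the pattern in the project’s README – wrap your render loop with tickStart and tick:

var meter = new FPSMeter();

function render() {
  meter.tickStart();
  // ... draw the frame here ...
  meter.tick();
  requestAnimationFrame(render);
}

requestAnimationFrame(render);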

The State of Node and Relational Databases

15 Apr 2013 | By Alex Young | Comments | Tags databases node modules sql

Recently I started work on a Node project that was built using Sequelize with MySQL. It was chosen to ease the transition from an earlier version written with Ruby on Rails. The original’s ActiveRecord models mapped quite closely to their Sequelize equivalents, which got things started smoothly enough.

Although Sequelize had some API quirks that didn’t feel very idiomatic alongside other Node code, the developers have hungrily accepted pull requests and it’s emerging as a reasonable ORM solution. However, like many others in the Node community I feel uncomfortable with ORM.

Why? Well, some of us have learned how to use relational databases correctly. These days, joining an established project that uses ORM only to find there’s no referential integrity or sensible indexing is to be expected, as programmers have shifted their attention from the database to application-level schemas. I’ve had my head down in MongoDB/Mongoose and Redis code for the last few years, but relational databases aren’t going away any time soon, so either programmers need to get the hang of them or we need better database modules.

This all prompted me to look at alternative solutions to relational databases in Node. First, I broke down the problem into separate areas:

  • Driver: The module that manages database connections, sends queries, and returns results
  • Abstraction layer: Provides tools for escaping queries to avoid SQL injection attacks, and wraps multiple drivers so applications can be ported between MySQL/PostgreSQL/SQLite
  • Validator: Validates data against a schema before it’s sent to the database, and helps generate human-readable error messages
  • Query generator: Generates SQL queries from a more JavaScript-programmer-friendly API
  • Schema management: Keeps the schema up to date as fields are added or removed

Some projects won’t need to support all of these areas – you can mix and match them as needed. I prefer to create simple veneer-like “model” classes that wrap more low-level database operations, as in the sketch below. This works well in a web application, where it can make sense to decouple the HTTP layer from the database.
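As a sketch of what I mean, here’s a hypothetical Post model wrapping the mysql driver – the table, columns, and connection details are all made up:

var mysql = require('mysql');

var connection = mysql.createConnection({
  host: 'localhost',
  user: 'blog',
  password: 'secret',
  database: 'blog'
});

function Post(row) {
  this.id = row.id;
  this.title = row.title;
}

// The rest of the application only sees Post, never raw SQL
Post.find = function(id, cb) {
  connection.query('SELECT * FROM posts WHERE id = ?', [id], function(err, rows) {
    if (err) return cb(err);
    cb(null, rows.length ? new Post(rows[0]) : null);
  });
};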

Database Driver

The mysql and pg modules are actively maintained, and are usually required by “abstraction layer” modules and ORM solutions.

A note about configuration: when it comes to connecting to the database, I strongly prefer modules that support connection URIs. This makes it much easier to deploy web applications to services like Heroku, because a single environment variable can contain all the connection details for your production database.
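For example, the pg module accepts a connection string directly, so configuration collapses to a single DATABASE_URL variable (as used by Heroku):

var pg = require('pg');

// DATABASE_URL might look like: postgres://user:pass@host:5432/mydb
var client = new pg.Client(process.env.DATABASE_URL);

client.connect(function(err) {
  if (err) throw err;
  client.query('SELECT NOW() AS server_time', function(err, result) {
    if (err) throw err;
    console.log(result.rows[0].server_time);
    client.end();
  });
});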

Abstraction Layer

This level sits above the driver layer, and should offer lightweight JavaScript sugar. There are many examples of this, but a good one is transactions. Transactions are particularly useful in JavaScript because they can help create APIs that are less dependent on heavily nested callbacks. For example, it makes sense to model a transaction as an EventEmitter descendant that allows operations to be pushed onto an internal queue.
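To make that concrete, here’s a hypothetical design – not any real module’s API – where a transaction queues statements and emits events as it runs them. It assumes client is a driver connection with a query(sql, params, cb) method, which both mysql and pg provide:

var EventEmitter = require('events').EventEmitter;
var util = require('util');

function Transaction(client) {
  EventEmitter.call(this);
  this.client = client;
  this.queue = [];
}

util.inherits(Transaction, EventEmitter);

// Chainable, so statements can be added without nesting callbacks
Transaction.prototype.add = function(sql, params) {
  this.queue.push({ sql: sql, params: params || [] });
  return this;
};

Transaction.prototype.commit = function() {
  var self = this;
  var statements = [{ sql: 'BEGIN', params: [] }]
    .concat(this.queue, [{ sql: 'COMMIT', params: [] }]);

  // Run each statement in order, emitting 'committed' or 'error' at the end
  (function next(i) {
    if (i === statements.length) return self.emit('committed');
    self.client.query(statements[i].sql, statements[i].params, function(err) {
      if (err) return self.emit('error', err);
      next(i + 1);
    });
  })(0);
};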

The author of the pg module, Brian Carlson, who occasionally stops by the #dailyjs IRC channel on Freenode, recently mentioned his new relational project that aims to provide a saner approach to ORM in Node. This module feels more like an abstraction layer API, but it’s gunning to be a formidable new ORM solution.

There are some popular libraries that tackle the abstraction layer, including any-db and massive.

Validator

I usually find myself dealing with errors in web forms, so anything that makes error handling easier is an advantage. Validation and schemas are closely related, which is why ORM libraries usually combine them.

It’s possible to treat them separately, and in the JavaScript community we have solutions based on or inspired by JSON Schema. The jayschema module by Nate Silva is one such project. It’s really aimed at validating JSON, but it could be used to validate JavaScript objects spat out by a database driver.
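Here’s a quick sketch – the schema and object are invented, but the asynchronous validate callback matches jayschema’s documented API:

var JaySchema = require('jayschema');
var js = new JaySchema();

var schema = {
  type: 'object',
  properties: {
    email: { type: 'string' },
    age: { type: 'integer', minimum: 0 }
  },
  required: ['email']
};

// The callback receives an array of errors, or nothing when the data is valid
js.validate({ email: 'user@example.com', age: 30 }, schema, function(errs) {
  if (errs) return console.error(errs);
  console.log('valid!');
});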

Validator has some simple tools for validating data types, but it also has optional Express middleware that makes it easy to drop into a web application. Another similar project is conform by Oleg Korobenko.

Query Generator

The sql module by Brian Carlson is an SQL builder – it has a chainable API that turns JavaScript into SQL:

var sql = require('sql');

// Define the table so the builder knows the available columns
var user = sql.define({
  name: 'user',
  columns: ['id', 'name']
});

user
  .select(user.id)
  .from(user)
  .where(
    user.name.equals('boom').and(user.id.equals(1))
  ).or(
    user.name.equals('bang').and(user.id.equals(2))
  ).toQuery();

He’s using this to build the previously mentioned relational module as well.

Schema Management

Sequelize has an API for managing database migrations. It can migrate to a given version and back, and it can also “sync” a model’s schema to the database (creating the table if it doesn’t exist).
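As a sketch, syncing a model with the 1.x-era API looks something like this:

var Sequelize = require('sequelize');
var sequelize = new Sequelize('blog', 'user', 'password');

var User = sequelize.define('User', {
  name: Sequelize.STRING
});

// Create the underlying table if it doesn't already exist
sequelize.sync().success(function() {
  console.log('Schema is up to date');
});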

There are also dedicated migration modules, like db-migrate by Jeff Kunkle.

Conclusion

The Node community has created a rich set of modules for working with relational databases, and although there’s a strong anti-ORM sentiment, interesting projects like relational are appearing.

Although these modules don’t address my concerns about ORM being used with apathy towards database best practices, it’s promising to see lower-level modules that can serve as building blocks for more application-specific solutions.

All of this comes at a time when relational database projects are adapting, changing, and even growing in popularity, despite the recent attention given to the NoSQL movement. PostgreSQL is going from strength to strength, and Heroku provides it by default. MariaDB is a drop-in replacement for MySQL that has a non-blocking Node module. SQLite usage is probably growing too, since it backs Core Data in iCloud applications – Android developers also rely on it.

Let other readers know how you deal with SQL-backed Node projects in the comments!

ZestJS, backbone-pageable, Marionette and Chaplin

12 Apr 2013 | By Alex Young | Comments | Tags frameworks libraries backbone.js mvc

ZestJS

Øyvind Smestad sent in ZestJS (GitHub: zestjs, License: Apache 2.0), which offers an interesting take on client and server-side modularity:

ZestJS provides client and server rendering for static and dynamic HTML widgets (Render Components) written as AMD modules providing low-level application modularity and portability.

It treats widgets as AMD components, with separate files for markup, JavaScript, and CSS. It can then render the results on either the server or client. The server-side renderer, zest-server, is a small Node project that is capable of rendering views and serving static files. It also handles routing, essentially mapping HTTP routes to what Zest calls “Render Components”.

Some aspects of ZestJS remind me of Backbone.js – the data loading can be performed on the initial page load, but remote APIs are easy to integrate as well. It also uses r.js for building and optimizing single page web apps, which is similar to the workflow of many Backbone.js developers.

ZestJS was created by Guy Bedford, and there are client and server quick start guides.

backbone-pageable

backbone-pageable (GitHub: wyuenho / backbone-pageable, License: MIT) by Jimmy Yuen Ho Wong is a drop-in replacement for Backbone.Collection that adds support for server and client-side pagination. It includes options for sorting, infinite paging, and caching:

It comes with reasonable defaults and works well with existing server APIs. Besides being really good at pagination and sorting, it is also really smart at syncing changes across pages while paginating on the client-side. It is also extremely lightweight – only 4k minified and gzipped.

It supports query parameters, so you can easily set up your pagination links with variables to meet your server’s requirements (or perhaps to allow multiple pagination controls on a page).

var Book = Backbone.Model.extend({});

var Books = Backbone.PageableCollection.extend({
  model: Book,
  url: 'api.mybookstore.com/books',

  state: {
    firstPage: 0,
    currentPage: 2,
    totalRecords: 200
  },

  queryParams: {
    currentPage: 'current_page',
    pageSize: 'page_size'
  }
});
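Fetching data is then a matter of calling the paging methods, which delegate to fetch under the hood – these names come from the project’s documentation, so check them against the version you install:

var books = new Books();

// Each call issues a request using the configured query parameters
books.getFirstPage();

// Later: books.getNextPage(), books.getPreviousPage(), books.getPage(3)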

The project comes with some serious test coverage, including a test suite geared up for Zepto, which is a nice touch.

The author is currently working on a new release that adds Backbone 1.0 support. It should be version 1.2.2, so keep an eye out for it if you’re already on Backbone 1.0.

Comparison of Marionette and Chaplin

Mathias Schäfer, who is one of the creators of Chaplin.js, sent in a comparison of Marionette and Chaplin. Both libraries attempt to address various limitations of Backbone.js – Chaplin.js adds better support for CoffeeScript class hierarchies, stricter memory management, lazy-loading modules, and publish/subscribe for cross-module communication.

The comparison is detailed and reveals some of the thinking that went into Chaplin.js in the first place:

Compared to Marionette, Chaplin acts more like a framework. It’s more opinionated and has stronger conventions in several areas. It took ideas from server-side MVC frameworks like Ruby on Rails which follow the convention over configuration principle. The goal of Chaplin is to provide well-proven guidelines and a convenient development environment.

The post already has some interesting comments – Mathias seems to be following up on questions with well-thought-out responses.