LevelDB and Node: What is LevelDB Anyway?

19 Apr 2013 | By Rod Vagg | Comments | Tags node leveldb databases

This is the first article in a three-part series on LevelDB and how it can be used in Node.

This article will cover the LevelDB basics and internals to provide a foundation for the next two articles. The second and third articles will cover the core LevelDB Node libraries: LevelUP, LevelDOWN and the rest of the LevelDB ecosystem that’s appearing in Node-land.


What is LevelDB?

LevelDB is an open-source, dependency-free, embedded key/value data store. It was developed in 2011 by Jeff Dean and Sanjay Ghemawat, researchers from Google. It’s written in C++, although it has third-party bindings for most common programming languages, including JavaScript / Node.js of course.

LevelDB is based on ideas in Google’s BigTable but does not share code with BigTable; this allows it to be licensed for open source release. Dean and Ghemawat developed LevelDB as a replacement for SQLite as the backing-store for Chrome’s IndexedDB implementation.

It has since seen very wide adoption across the industry: it serves as the back-end to a number of new databases and is now the recommended storage back-end for Riak.


  • Arbitrary byte arrays: both keys and values are treated as simple arrays of bytes, so content can be anything from ASCII strings to binary blobs.
  • Sorted by keys: by default, LevelDB stores entries lexicographically sorted by keys. The sorting is one of the main distinguishing features of LevelDB amongst similar embedded data storage libraries and comes in very useful for querying as we’ll see later.
  • Compressed storage: Google’s Snappy compression library is an optional dependency that can decrease the on-disk size of LevelDB stores with minimal sacrifice of speed. Snappy is highly optimised for fast compression and therefore does not provide particularly high compression ratios on common data.
  • Basic operations: Get(), Put(), Del(), Batch()
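Since LevelDB’s default comparator is a simple byte-wise comparison, the effect of the sorted-by-keys property can be sketched with a plain Array#sort in JavaScript. This is an illustration only, not LevelDB itself; it also shows the classic gotcha that unpadded numeric ids don’t sort “naturally”:

```javascript
// LevelDB compares keys byte-by-byte by default, which for ASCII strings
// is the same ordering Array#sort gives. Note how numeric ids need
// zero-padding to sort in numeric order.
var keys = ['user~9', 'user~10', 'user~2'];
keys.sort(); // ['user~10', 'user~2', 'user~9'] -- '1' < '2' < '9'

var padded = ['user~09', 'user~10', 'user~02'];
padded.sort(); // ['user~02', 'user~09', 'user~10']
```

This matters when designing key schemes for range queries, which is where the sorted storage really pays off.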

Basic architecture

Log Structured Merge (LSM) tree


All writes to a LevelDB store go straight into a log and a “memtable”. The log is regularly flushed into sorted string table files (SST) where the data has a more permanent home.

Reads on a data store merge these two distinct data structures, the log and the SST files. The SST files represent mature data and the log represents new data, including delete-operations.

A configurable cache is used to speed up common reads. The cache can potentially be large enough to fit an entire active working set in memory, depending on the application.
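The read path described above can be sketched as a toy model in plain JavaScript (this is an illustration of the idea, not LevelDB’s implementation): the memtable holds the freshest data, including tombstones for deletes, and shadows the mature SST data on reads.

```javascript
// Toy model of the LSM read path: the freshest structure wins.
// `TOMBSTONE` stands in for a recorded delete-operation.
var TOMBSTONE = {};
var memtable = { b: '2', c: TOMBSTONE };   // new data, still in the log
var sstFiles = { a: '1', b: '1', c: '1' }; // mature data

function get(key) {
  if (key in memtable) {
    return memtable[key] === TOMBSTONE ? undefined : memtable[key];
  }
  return sstFiles[key];
}

get('a'); // '1' -- only present in the SSTs
get('b'); // '2' -- the memtable shadows the older SST value
get('c'); // undefined -- deleted in the log, though still in an SST
```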

Sorted String Table files (SST)

Each SST file is limited to ~2MB, so a large LevelDB store will have many of these files. The SST file is divided internally into 4K blocks, each of which can be read in a single operation. The final block is an index that points to the start of each data block and records the key of the entry at the start of that block. A Bloom filter is used to speed up lookups, allowing a quick scan of an index to find the block that may contain the desired entry.

Keys can have shared prefixes within blocks. Any common prefix for keys within a block will be stored once, with subsequent entries storing just the unique suffix. After a fixed number of entries within a block, the shared prefix is “reset”; much like a keyframe in a video codec. Shared prefixes mean that verbose namespacing of keys does not lead to excessive storage requirements.
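The prefix-compression idea can be sketched in a few lines of plain JavaScript (a toy illustration, not LevelDB’s actual block encoder): each entry records how many leading bytes it shares with the previous key, plus the remaining suffix.

```javascript
// Toy version of block-level key prefix compression.
function compressKeys(keys) {
  var prev = '';
  return keys.map(function (key) {
    var shared = 0;
    while (shared < prev.length && key[shared] === prev[shared]) shared++;
    prev = key;
    return { shared: shared, suffix: key.slice(shared) };
  });
}

compressKeys(['user~alice', 'user~anna', 'user~bob']);
// [ { shared: 0, suffix: 'user~alice' },
//   { shared: 6, suffix: 'nna' },
//   { shared: 5, suffix: 'bob' } ]
```

The verbose `user~` namespace costs almost nothing after the first entry, which is why namespaced keys are cheap in practice.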

Table file hierarchy

The table files are not stored in a simple sequence, rather, they are organised into a series of levels. This is the “Level” in LevelDB.

Entries that come straight from the log are organised into Level 0, a set of up to 4 files. When additional entries push Level 0 above its maximum of 4 files, one of the SST files is chosen and merged with the SST files that make up Level 1, which is a set of up to 10MB of files. This process continues, with levels overflowing and one file at a time being merged with the (up to 3) overlapping SST files in the next level. Each level beyond Level 1 is 10 times the size of the previous level.

Log: Max size of 4MB (configurable), then flushed into a set of Level 0 SST files
Level 0: Max of 4 SST files, then one file compacted into Level 1
Level 1: Max total size of 10MB, then one file compacted into Level 2
Level 2: Max total size of 100MB, then one file compacted into Level 3
Level 3+: Max total size of 10 x previous level, then one file compacted into next level

0 ↠ 4 SST, 1 ↠ 10M, 2 ↠ 100M, 3 ↠ 1G, 4 ↠ 10G, 5 ↠ 100G, 6 ↠ 1T, 7 ↠ 10T


This organisation into levels minimises the reorganisation that must take place as new entries are inserted into the middle of a range of keys. Each reorganisation, or “compaction”, is restricted to just a small section of the data store. The hierarchical structure generally leads to the most mature data living in the higher levels, with fresher data stored in the log and the initial levels. Since the initial levels are relatively small, overwriting and removing entries there incurs less cost than it does in the higher levels. This matches the typical database, where you have a large set of mature data and a more volatile set of fresh data (of course this is not always the case, so performance will vary for different write and retrieval patterns).

A lookup operation must also traverse the levels to find the required entry. A read operation that requests a given key must first look in the log; if the key is not found there, it looks in Level 0, moving on to Level 1 and so forth. In this way, a lookup incurs a minimum of one read per level that must be searched before the required entry is found. A lookup for a key that does not exist must search every level before a definitive “NotFound” can be returned (unless a Del operation is recorded for that key in the log).
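The traversal just described can be sketched as follows (a toy model in plain JavaScript; the real store also consults block indexes and Bloom filters within each level):

```javascript
// Lookup order: the log first, then Level 0 upwards through the levels.
function lookup(log, levels, key) {
  if (key in log) return log[key];
  for (var i = 0; i < levels.length; i++) {
    if (key in levels[i]) return levels[i][key];
  }
  return 'NotFound';
}

var log = { k1: 'fresh' };
var levels = [{ k2: 'young' }, { k1: 'stale', k3: 'mature' }];

lookup(log, levels, 'k1'); // 'fresh' -- the log shadows the Level 1 copy
lookup(log, levels, 'k3'); // 'mature' -- found after searching two levels
lookup(log, levels, 'k9'); // 'NotFound' -- every level was searched
```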

Advanced features

  • Batch operations: provide a collection of Put and/or Del operations that are atomic; that is, the whole collection of operations succeed or fail in a single Batch operation.
  • Bi-directional iterators: iterators can start at any key in a LevelDB store (even if that key does not exist, it will simply jump to the next lexical key) and can move forward and backwards through the store.
  • Snapshots: a snapshot provides a reference to the state of the database at a point in time. Read-queries (Get and iterators) can be made against specific snapshots to retrieve entries as they existed at the time the snapshot was created. Each iterator creates an implicit snapshot (unless it is requested against an explicitly created snapshot). This means that regardless of how long an iterator is alive and active, the data set it operates upon will always be the same as at the time the iterator was created.
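The all-or-nothing batch guarantee can be modelled in a few lines (a toy sketch in plain JavaScript, not LevelDB’s actual WriteBatch): build the new state off to the side and only publish it once every operation has succeeded.

```javascript
// Toy model of an atomic batch of Put/Del operations.
function applyBatch(store, ops) {
  var next = {};
  Object.keys(store).forEach(function (k) { next[k] = store[k]; });
  ops.forEach(function (op) {
    if (op.type === 'put') next[op.key] = op.value;
    else if (op.type === 'del') delete next[op.key];
    else throw new Error('unknown operation: ' + op.type);
  });
  return next; // only returned once every op has succeeded
}

var store = { a: '1' };
var updated = applyBatch(store, [
  { type: 'put', key: 'b', value: '2' },
  { type: 'del', key: 'a' }
]);
// updated is { b: '2' }; a failing op would have left `store` untouched
```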

Some details on these advanced features will be covered in the next two articles, when we turn to look at how LevelDB can be used to simplify data management in your Node application.

If you’re keen to learn more and can’t wait for the next article, see the LevelUP project on GitHub as this is the focus of much of the LevelDB activity in the Node community at the moment.

AngularJS: Let's Make a Feed Reader

18 Apr 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds

I’m looking forward to seeing what services appear in Google Reader’s wake. Reeder and Press are my favourite RSS apps, which I use to curate my sources for upcoming DailyJS content. It sounds like Reeder will support Feedbin, so hopefully Press and other apps will as well. I’ve also used Newsblur in the past, but I’m not sure if we’ll see Newsblur support in Reeder…

With that in mind, I thought it would be pretty cool to use a feed reader as the AngularJS tutorial series theme. A Bootstrap styled, AngularJS-powered feed reader would look and feel friendly and fast. The main question, however, is how exactly do we download feeds? Atom and RSS feeds aren’t exactly friendly to client-side developers. What we need is JSON!


The now standard way to fetch feeds in client-side code is to use JSONP. That’s where a remote resource is fetched, usually by inserting a script tag, and the server returns JavaScript wrapped in a callback that the client can run when ready.

I remember reading a post by John Resig many years ago that explained how to use this technique with RSS specifically: RSS to JSON Convertor. Ironically, a popular commercial solution for this was provided through Google Reader. Fortunately there’s another way to do it, this time by Yahoo! – the Yahoo! Query Language.


The YQL service (terms of use) is basically SQL for the web. It can be used to fetch and interpret all kinds of resources, including feeds. It has usage limits, so if you want to take this tutorial series to build something more commercially viable then you’ll want to check those out in detail. Even though the endpoints we’ll use are “public”, Yahoo! will still rate limit them if they go over 2,000 requests per hour. To support higher volume users, API keys can be created.

If you visit this link you’ll see a runnable example that converts the DailyJS Atom feed into JSON, wrapped in a callback. The result looks like this:

cb({ "query": { /* loads of JSON! */ } });

The cb method will be run from within our fancy AngularJS/Bootstrap client-side code. I wrote about how to build client-side JSONP implementations in Let’s Make a Framework: Ajax Part 2, so check that out if you’re interested in that area.

As far as feed processing goes, YQL will give us the JSON we need to make a little feed reader.
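A YQL request URL can be put together by hand like this. The endpoint and query shape below reflect Yahoo!’s public YQL service as used in this series; treat the exact details as an assumption if you’re following along later:

```javascript
// Build a YQL request URL that fetches a feed as JSON wrapped in a callback.
function yqlUrl(feedUrl, callbackName) {
  var query = 'select * from xml where url="' + feedUrl + '"';
  return 'http://query.yahooapis.com/v1/public/yql' +
    '?q=' + encodeURIComponent(query) +
    '&format=json' +
    '&callback=' + encodeURIComponent(callbackName);
}

var url = yqlUrl('http://dailyjs.com/atom.xml', 'cb');
// url ends with '&format=json&callback=cb'
```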


Before you press “next unread” in your own feed reader, let’s jump-start our application with Yeoman. First, install it along with Grunt. I assume you already have a recent version of Node; if not, install a 0.10.x release, then run the following:

npm install -g yo grunt-cli bower generator-angular generator-karma

Yeoman is based around generators, which are separate modules that you can install using npm. The previous command installed the AngularJS generator, generator-angular.

Next you’ll need to create a directory for the application to live in:

mkdir djsreader
cd djsreader

You should also run the angular generator:

yo angular

It will install a lot of stuff, but fortunately most of the modules are ones I’d use anyway so I’m cool with that. Answer Y to each question, apart from the one about Compass (I don’t think I have Compass installed, so I didn’t want that option).

Run grunt server to see the freshly minted AngularJS-powered app!

Hello, AngularJS

You may have noticed some “karma” files have appeared. That’s the AngularJS test framework, which you can read about at karma-runner.github.io. If you type grunt test, Grunt will happily trudge through some basic tests that are in test/spec/controllers/main.js.


Welcome to the world of Yeoman, AngularJS, and… Yahoo!, apparently. The repository for this project is at alexyoung / djsreader. Come back in a week for the next part!

Node Roundup: 0.10.4, Papercut, rsz, sz

17 Apr 2013 | By Alex Young | Comments | Tags node modules graphics images uploads
You can send in your Node projects for review through our contact form.

Node 0.10.4

Node 0.10.4 was released last week. There are bug fixes for some core modules, and I also noticed this:

v8: Avoid excessive memory growth in JSON.parse (Fedor Indutny)

Another interesting patch was added to the stream module, to ensure write callbacks run before end:

stream: call write cb before finish event

The Node blog was quietly updated to change the latest 0.8 to read “legacy” instead of “stable”. I don’t recall previous stable releases being referred to in this way before, so I thought it was worth mentioning here.


Papercut

Papercut (GitHub: Rafe / papercut, License: MIT, npm: papercut) by Jimmy Chao is an image uploading module that supports Amazon S3, with resizing and cropping through node-imagemagick.

Uploaders can be created according to a schema, allowing them to be used to manage different aspects of your application’s image handling requirements:

AvatarUploader = papercut.Schema(function(schema) {
  schema.version({
    name: 'avatar',
    size: '200x200',
    process: 'crop'
  });

  schema.version({
    name: 'small',
    size: '50x50',
    process: 'crop'
  });
});
Papercut also supports configuration using NODE_ENV, so it’s easy to configure to work sensibly in various deployment environments.


rsz

rsz (GitHub: rvagg / node-rsz, License: MIT, npm: rsz) by Rod Vagg is a module for resizing images based on LearnBoost’s node-canvas. The API is based around a single method which accepts various signatures. The basic usage is rsz(src, width, height, function (err, buf) { /* */ }).


sz

sz (GitHub: rvagg / node-sz, License: MIT, npm: sz), also by Rod, is another image-related module. This one can determine the size of an image. It should be noted that both of these modules work with image files and Buffer objects.

var fs = require('fs');
var sz = require('sz');

var buf = fs.readFileSync('image.gif');

sz(buf, function(err, size) {
  // where `size` may look like: { height: 280, width: 400 }
});

jQuery Roundup: TyranoScript, Sly, FPSMeter

16 Apr 2013 | By Alex Young | Comments | Tags jquery plugins graphics animations games
Note: You can send your plugins and articles in for review through our contact form.



Evan Burchard sent in TyranoScript (GitHub: ShikemokuMK / tyranoscript, License: MIT), a jQuery-powered HTML5 interactive fiction game engine:

The game engine was only in Japanese, so I spent the last week making it available in English. As far as what it does, it sits somewhere between an interactive fiction scripting utility and a presentation library like impress.js. It has built in functions (tags) for things like making text and characters pop up, saving the game, changing scenery and shaking the screen. But it supplies interfaces for arbitrary HTML, CSS and JavaScript to be run as well, so conceivably one could use it for presentations or other types of applications. One of the sample games on the project website demonstrates this with a compelling YouTube API integration. The games created with TyranoScript can run on modern browsers, Android, iPad and iPhone.

Evan’s English version is at EvanBurchard / tyranoscript. For a sample game, check out the delightfully nutty Jurassic Heart – a game where you date a dinosaur (of course)!


Sly (GitHub: Darsain / sly, License: MIT, Bower: sly) by Darsain is a library for scrolling – it can be used where you need to replace scrollbars, or where you want to build your own navigation solutions.

The author has paid particular attention to performance:

Sly has a custom high-performance animation rendering based around the Animation Timing Interface written directly for its needs. This provides an optimized 60 FPS rendering, and is designed to still accept easing functions from jQuery Easing Plugin, so you won’t even notice that Sly’s animations have nothing to do with jQuery :).

Sly’s site has a few examples – check out the infinite scrolling and parallax demos.



Sly’s author also sent in FPSMeter. When working on graphically-oriented projects, it’s sometimes useful to display the frames-per-second of animations. FPSMeter (GitHub: Darsain / fpsmeter, Bower: fpsmeter) measures FPS using WindowAnimationTiming, with a polyfill that extends support to most browsers, including IE7+.

FPSMeter can measure FPS, milliseconds between frames, and the number of milliseconds it takes to render one frame. It can also cope with multiple instances on a page, and has show/hide methods that will pause rendering. It also supports theming, so you should be able to get it to sit nicely in your existing interface.

The State of Node and Relational Databases

15 Apr 2013 | By Alex Young | Comments | Tags databases node modules sql

Recently I started work on a Node project that was built using Sequelize with MySQL. It was chosen to ease the transition from an earlier version written with Ruby on Rails. The original’s ActiveRecord models mapped quite closely to their Sequelize equivalents, which got things started smoothly enough.

Although Sequelize had some API quirks that didn’t feel very idiomatic alongside other Node code, the developers have hungrily accepted pull requests and it’s emerging as a reasonable ORM solution. However, like many others in the Node community I feel uncomfortable with ORM.

Why? Well, some of us have learned how to use relational databases correctly. Joining an established project that uses ORM only to find there’s no referential integrity or sensible indexes is to be expected these days, as programmers have moved away from caring about databases to application-level schemas. I’ve had my head down in MongoDB/Mongoose and Redis code for the last few years, but relational databases aren’t going away any time soon so either programmers need to get the hang of them or we need better database modules.

This all prompted me to look at alternative solutions to relational databases in Node. First, I broke down the problem into separate areas:

  • Driver: the module that manages database connections, sends queries, and responds with data
  • Abstraction layer: provides tools for escaping queries to avoid SQL injection attacks, and wraps multiple drivers so it’s easy to port applications between MySQL/PostgreSQL/SQLite
  • Validator: validates data against a schema prior to sending it to the database, and aids with the generation of human-readable error messages
  • Query generator: generates SQL queries based on a more JavaScript-programmer-friendly API
  • Schema management: keeps the schema up-to-date when fields are added or removed

Some projects won’t need to support all of these areas – you can mix and match them as needed. I prefer to create simple veneer-like “model” classes that wrap more low-level database operations. This works well in a web application, where it can make sense to decouple the HTTP layer from the database.

Database Driver

The mysql and pg modules are actively maintained, and are usually required by “abstraction layer” modules and ORM solutions.

A note about configuration: when it comes to connecting to the database, I strongly prefer modules that support connection URIs. It makes it a lot easier to deploy web applications to services like Heroku, because a single environmental variable can be set that contains the connection details for your production database.

Abstraction Layer

This level sits above the driver layer, and should offer lightweight JavaScript sugar. There are many examples of this, but a good one is transactions. Transactions are particularly useful in JavaScript because they can help create APIs that are less dependent on heavily nested callbacks. For example, it makes sense to model transactions as an EventEmitter descendent that allows operations to be pushed to an internal stack.

The author of the pg module, Brian Carlson, who occasionally stops by the #dailyjs IRC channel on Freenode, recently mentioned his new relational project that aims to provide a saner approach to ORM in Node. This module feels more like an abstraction layer API, but it’s gunning to be a formidable new ORM solution.

There are some popular libraries that tackle the abstraction layer, including any-db and massive.


Validation

I usually find myself dealing with errors in web forms, so anything that makes error handling easier is an advantage. Validation and schemas are closely related, which is why ORM libraries usually combine them.

It’s possible to treat them separately, and in the JavaScript community we have solutions based on or inspired by JSON Schema. The jayschema module by Nate Silva is one such project. It’s really aimed at validating JSON, but it could be used to validate JavaScript objects spat out by a database driver.

Validator has some simple tools for validating data types, but it also has optional Express middleware that makes it easy to drop into a web application. Another similar project is conform by Oleg Korobenko.

Query Generator

The sql module by Brian Carlson is an SQL builder – it has a chainable API that turns JavaScript into SQL:
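To give a flavour of what a chainable builder does, here’s a toy sketch – emphatically not node-sql’s actual API, just an illustration of how chained calls can accumulate query parts before rendering SQL:

```javascript
// A minimal chainable SQL builder: each call records a clause and
// returns the same API object, so calls can be chained.
function select(columns) {
  var parts = { columns: columns, table: null, where: null };
  var api = {
    from: function (table) { parts.table = table; return api; },
    where: function (clause) { parts.where = clause; return api; },
    toString: function () {
      var out = 'SELECT ' + parts.columns.join(', ') + ' FROM ' + parts.table;
      return parts.where ? out + ' WHERE ' + parts.where : out;
    }
  };
  return api;
}

var q = select(['id', 'name']).from('users').where('name = $1').toString();
// "SELECT id, name FROM users WHERE name = $1"
```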


He’s using this to build the previously mentioned relational module as well.

Schema Management

Sequelize has an API for managing database migrations. It can migrate to a given version and back, and it can also “sync” a model’s schema to the database (creating the table if it doesn’t exist).

There are also dedicated migration modules, like db-migrate by Jeff Kunkle.


The Node community has created a rich set of modules for working with relational databases, and although there’s a strong anti-ORM sentiment interesting projects like relational are appearing.

Although these modules don’t address my concerns about the way in which ORM gets used with apathy towards best practices, it’s promising to see lower-level modules that can be used as building blocks for more application-specific solutions.

All of this has come at a time when relational database projects are adapting, changing, and even growing in popularity despite the recent attention the NoSQL movement has been given. PostgreSQL is going from strength to strength, and Heroku provides it by default. MariaDB is a drop-in replacement for MySQL that has a non-blocking Node module. SQLite is probably growing in usage too, as it backs Core Data in iCloud applications – Android developers also use SQLite.

Let other readers know how you deal with SQL-backed Node projects in the comments!

ZestJS, backbone-pageable, Marionette and Chaplin

12 Apr 2013 | By Alex Young | Comments | Tags frameworks libraries backbone.js mvc


Øyvind Smestad sent in ZestJS (GitHub: zestjs, License: Apache 2.0), which offers an interesting take on client and server-side modularity:

ZestJS provides client and server rendering for static and dynamic HTML widgets (Render Components) written as AMD modules providing low-level application modularity and portability.

It treats widgets as AMD components, with separate files for markup, JavaScript, and CSS. It can then render the results on either the server or client. The server-side renderer, zest-server, is a small Node project that is capable of rendering views and serving static files. It also handles routing, essentially mapping HTTP routes to what Zest calls “Render Components”.

Some aspects of ZestJS remind me of Backbone.js – the data loading can be performed on the initial page load, but remote APIs are easy to integrate as well. It also uses r.js for building and optimizing single page web apps, which is similar to the workflow of many Backbone.js developers.

ZestJS was created by Guy Bedford, and there are client and server quick start guides.


backbone-pageable (GitHub: wyuenho / backbone-pageable, License: MIT) by Jimmy Yuen Ho Wong is a drop-in replacement for Backbone.Collection that adds support for server and client-side pagination. It includes options for sorting, infinite paging, and caching:

It comes with reasonable defaults and works well with existing server APIs. Besides being really good at pagination and sorting, it is also really smart as syncing changes across pages while paginating on the client-side. It is also extremely lightweight - only 4k minified and gzipped

It supports query parameters, so you can easily set up your pagination links with variables to meet your server’s requirements (or perhaps to allow multiple pagination controls on a page).

var Book = Backbone.Model.extend({});

var Books = Backbone.PageableCollection.extend({
  model: Book,
  url: 'api.mybookstore.com/books',

  state: {
    firstPage: 0,
    currentPage: 2,
    totalRecords: 200
  },

  queryParams: {
    currentPage: 'current_page',
    pageSize: 'page_size'
  }
});
The project comes with some serious test coverage, including a test suite geared up for Zepto which is a nice touch.

The author is currently working on releasing a new version that adds Backbone 1.0 support. It should be 1.2.2, so keep a lookout for that if you’re already on Backbone 1.0.

Comparison of Marionette and Chaplin

Mathias Schäfer, who is one of the creators of Chaplin.js, sent in a comparison of Marionette and Chaplin. Both libraries attempt to address various limitations of Backbone.js – Chaplin.js adds better support for CoffeeScript class hierarchies, stricter memory management, lazy-loading modules, and publish/subscribe for cross-module communication.

The comparison is detailed and reveals some of the thinking that went into Chaplin.js in the first place:

Compared to Marionette, Chaplin acts more like a framework. It’s more opinionated and has stronger conventions in several areas. It took ideas from server-side MVC frameworks like Ruby on Rails which follow the convention over configuration principle. The goal of Chaplin is to provide well-proven guidelines and a convenient developing environment.

The post already has interesting comments – Mathias seems to be following up questions with some well thought out responses.

Google, Twitter, and AngularJS

11 Apr 2013 | By Alex Young | Comments | Tags angularjs mvc angularfeeds


My behemoth of a Backbone.js tutorial series has run its course, so I wanted to follow it up with some posts about AngularJS. One thing that intrigues me about AngularJS is the emerging relationship between Google and Twitter. Or between prominent Google and Twitter developers. I don’t think there’s an overarching plan at the management level to create an open source partnership, just a set of coincidences as projects have aligned along certain vectors that are pushing front-end web development forward.

Most of what I’m going to talk about here has been packaged up into Yeoman, which has some Google employees behind it (Paul Irish, Addy Osmani) and includes technology from Twitter (Bower). The rest of this post will break down the emerging next generation Google/Twitter open source stack.

Data Binding, Views, Routes

AngularJS from Google is the obvious choice for data binding. It’s definitely growing in popularity, and Yeoman includes a generator for it.


UI: Bootstrap

Bootstrap is also supported by Yeoman, and offers a fine set of extensible UI widgets. Although seeing vanilla Bootstrap sites has become massively clichéd, it doesn’t take much effort to customise it.

Build/Preview: Grunt

In my Backbone.js tutorial, we used Node to create a small web server with RequireJS for local development. I included some details on Grunt purely because I use GNU Make for my own projects, so I wanted to look at Grunt more seriously. The developers behind Yeoman have selected Grunt as their build/preview tool.

Test: Karma, Mocha

Yeoman bundles in Mocha, and Karma can be used to script browsers. It’s used to test AngularJS, so that’s where the connection comes from, and there’s karma-mocha – an adapter for Karma to use Mocha.

Package Management: Bower

Bower, from Twitter, is a lightweight package manager. I’ve talked a bit about it on DailyJS before, and I try to include links to Bower package names when featuring front-end modules. Yeoman comes with Bower.

Data Syncing

I covered Backbone.sync a fair bit in the Backbone.js tutorial series because it’s so flexible: I was able to do some cool things with it, like syncing data with Google’s JavaScript APIs. This is interesting when you consider that Backbone is configured to talk to a Rails-style REST server out of the box.

So, what about this brave new world of Google/Twitter open source projects? What data syncing solutions are there? To my knowledge there isn’t anything generic, yet, but there is the Yeoman Express Stack:

A proof of concept end-to-end stack for development using Yeoman 0.9.6, Express and AngularJS. Note: This experimental branch is providing for testing purposes only (at present) and does not provide any guarantees.

This is a small project that builds on Yeoman and AngularJS to sync data with a Node/Express server. This came from a weekend hack project that involved Addy Osmani, who has contributed to many of the projects mentioned here.

There is also Angular Socket.io Seed, which persists data to a Node/Express server using Socket.IO.

Also significant is a proposal, Entity-Driven Tooling from the Yeoman developers about adding CRUD generator support:

tl;dr: what if one command could scaffold out the CRUD models/views for your client and server side code, with baked in offline support. Would this help you? Would this solve a pain point of yours? Are there better ways to do this than what’s described below?

Although I’ve often wished for something like this, the post seems to imply a localStorage-based syncing solution would be developed. This would allow the browser-based portion of the project to behave like a client, making data available in localStorage for offline use.

However, syncing data can be difficult, so providing a generator that can do this would be more involved than Backbone.sync. Perhaps basing it around CouchDB’s eventual consistency model would work? Locally available records would have a version parameter which would be used to safely sync concurrently with the server. This would leave conflict resolution up to the developer of the application – some servers might store the latest version of a record, and others might throw an error, perhaps causing the client to display a conflict resolution dialog.

There may be a localStorage sync project the Yeoman developers have in mind.

I can’t help feeling that there’s more to Google open source than Closure Library. If you’ve used Android since Jelly Bean, Chrome OS, or Google Plus, then you know Google’s designers have been pushing things far beyond where the company was just a few years ago. Although Closure Library is a formidable set of tools, the widgets don’t fit in well with the Yeoman generation’s open source projects, and I’m eager to see what a next generation version of Bootstrap would look like.

But for now I’m looking at AngularJS, Grunt, Bootstrap, Bower, Mocha for my next tutorial series. I’ll have to find something interesting to sync data with, because I enjoyed figuring out the Google Tasks API.

Node Roundup: 0.8.23, indev, compressjs

10 Apr 2013 | By Alex Young | Comments | Tags node modules build compression

Node 0.8.23

In case you haven’t switched to 0.10 yet, Node 0.8.23 was released yesterday. This version adds bug fixes for the http, tls, child_process, and crypto modules.


indev

indev (GitHub: azer / indev, License: BSD, npm: indev) by Azer Koçulu is a lightweight alternative to Makefiles. It supports “Devfiles”, which can be written in either CoffeeScript or JavaScript, and includes shortcuts for lots of shell commands through ShellJS.

The inclusion of ShellJS makes it feel closer to make than Grunt, so if Grunt isn’t quite what you want then indev might be what you’re looking for.


compressjs

compressjs (GitHub: cscott / compressjs, License: GPLv2, npm: compressjs) by C. Scott Ananian features several compression algorithms, implemented in pure JavaScript. It can run in browsers, and includes bzip2, LZP3, a modified LZJB, PPM-D, and an implementation of Dynamic Markov Compression.

The readme includes benchmarks for each algorithm, and a script is included so you can use it to compress things on the command-line.

jQuery Roundup: 2.0 Beta 3, betterToggle, Cavendish.js

09 Apr 2013 | By Alex Young | Comments | Tags jquery plugins css slideshows

jQuery 2.0 Beta 3

jQuery 2.0 Beta 3 is out:

… we really need your help in finding and fixing any bugs that may be hiding in the nooks and crannies of jQuery 2.0. We want to get all the problems ironed out before this version ships, and the only way to do that is to find out whether it runs with your code.

This release introduces Node compatibility, so you can now load it with require(). It can also be used in Windows 8 Store apps.


betterToggle (GitHub: kanakiyajay / betterToggle, License: GPLv2) by Jay Kanakiya is a plugin for toggling elements with CSS3 transforms. As an added bonus it allows multiple elements to be toggled.

Usage is similar to .toggle: $(selector).betterToggle(), and the project’s homepage has plenty of demos.


Cavendish.js (GitHub: michaek / cavendish.js, License: MIT) by Michael Hellein is a slide manager plugin aimed at front-end developers well-versed in CSS. It has a plugin-based API that allows it to support different styles for displaying and navigating slides:

var cavendish = $('.cavendish').cavendish('cavendish');

The bundled plugins include a simple player that pauses on hover, a pager, previous and next arrows, and a parallax scrolling effect. The API also exposes the events used, so you can add listeners to see when Cavendish has been initialised and after a slide has been transitioned.

LungoJS, Math.js, Collage

08 Apr 2013 | By Alex Young | Comments | Tags mobile maths frameworks libraries ui



LungoJS (GitHub: TapQuo / Lungo.js, License: GPLv3) from TapQuo is a framework for HTML5 apps that aims to be cross-device. It supports mobile, desktop, and TV devices. The JavaScript API has support for working with the DOM, localStorage, caching, navigation routing, remote services, and views.

There’s a designer-focused tutorial that explains how to create an application with Lungo, and a Google group (which currently requires permission to join).


Math.js (GitHub: josdejong / mathjs, License: Apache 2.0, npm: mathjs, bower: mathjs) by Jos de Jong is a maths library for client-side JavaScript and Node. It supports complex numbers, units, strings, arrays, and matrices, built-in functions and constants, as well as mathematical expression parsing.

It has no dependencies and is compatible with the built-in Math library. One feature I particularly like is the expression parser:

var parser = math.parser();
parser.eval('1.2 / (2.3 + 0.7)'); // 0.4
parser.eval('a = 5.08 cm');
parser.eval('a in inch');         // 2 inch
parser.eval('sin(45 deg) ^ 2');   // 0.5

This opens up some interesting possibilities for storing mathematical expressions in databases then safely evaluating them later on.

The project includes unit tests, and detailed documentation can be found in the readme file.


Collage (GitHub: ozanturgut / collage, License: Apache 2.0) by Ozan Turgut is a framework for creating interactive collages. It can knit together remote APIs then present media in a two-dimensional space.

This example demonstrates some of the APIs that are supported as standard:

var collage = Collage.create(document.getElementById('PopcornCollage'));
collage.load('popcorn media', {
  flickr: [{ tags: 'popcorn' }],
  googleNews: ['popcorn'],
  twitter: [{ query: 'popcorn' }]
});
collage.start('popcorn media');

AngularCollection, datamock.js, store.js

05 Apr 2013 | By Alex Young | Comments | Tags mvc angularjs testing databases localStorage


AngularCollection (GitHub: tomkuk / angular-collection, License: MIT, bower: angular-collection) by Tomasz Kuklis is a collection module for AngularJS. It allows objects to be added, removed, updated, and fetched at a specific index. It also provides sort and last methods.

It comes with a Grunt build script and some unit tests.


datamock.js (GitHub: marksteve / datamock.js, License: MIT) by Mark Steve Samson is a small library for generating sample data for mockups. Data attributes can be used to bind mocked data, like this: <p data-mock="lorem">Lorem ipsum here...</p>.

It includes some other value types, like names and emails. The author has included a bookmarklet task in the build script so you can generate a bookmarklet that fills a page in with test sample data.


store.js (GitHub: nbubna / store, License: MIT) by Nathan Bubna is a friendlier API for localStorage and sessionStorage. Basic usage is store(key, data), but it has other functions like store.setAll, store.getAll, and support for namespaces.

The sessionStorage API is accessed through store.session, for example: store.session(key, 'value'). The project includes a Grunt build script and PhantomJS-powered tests.

Backbone.js Tutorial: jQuery Plugins and Moving Tasks

04 Apr 2013 | By Alex Young | Comments | Tags backbone.js mvc node backgoog bootstrap jquery


Before starting this tutorial, you’ll need the following:

  • alexyoung / dailyjs-backbone-tutorial at commit 705bcb4
  • The API key from part 2
  • The “Client ID” key from part 2
  • Update app/js/config.js with your keys (if you’ve checked out my source)

To check out the source, run the following commands (or use a suitable Git GUI tool):

git clone git@github.com:alexyoung/dailyjs-backbone-tutorial.git
cd dailyjs-backbone-tutorial
git reset --hard 705bcb4

Using Backbone with jQuery Plugins

Although Backbone doesn’t need to be used with jQuery specifically, a lot of people use it with jQuery (and RequireJS) to get access to the diverse plugins made by the jQuery community. In this tutorial I’ll explain how to use jQuery plugins with Backbone projects, and how to find ones that will work well.

The example I’ve used is integrating a drag-and-drop “sortable” plugin to allow tasks to be reordered.

HTML5 Sortable

The plugin I’ve used for drag-and-drop is the HTML5 Sortable Plugin by Ali Farhadi. The reason I used this particular plugin is it has a simple event-based API that allows the plugin to be unloaded and sort events to be captured and responded to. It just needs a container element and the child elements that need to be sorted. The unordered list of tasks in this project directly translates to the expected markup.

Sometimes it’s easier to just write out data attributes to elements rather than trying to create relationships between the DOM nodes used by plugins and models. HTML5 Sortable emits a 'sortupdate' event when a node has been dragged and dropped, and it’ll pass the relevant element to the listener callback. From this we need to figure out which model has changed, then translate that into something Google’s API can understand.

Loading Plugins with RequireJS

In an earlier tutorial I demonstrated how to load non-AMD libraries using RequireJS. If you want a recap, just check out app/js/main.js and look at the shim property in the RequireJS configuration:

  baseUrl: 'js',

  paths: {
    text: 'lib/text'
  },

  shim: {
    'lib/underscore-min': {
      exports: '_'
    },
    'lib/backbone': {
      deps: ['lib/underscore-min']
    , exports: 'Backbone'
    },
    'app': {
      deps: ['lib/underscore-min', 'lib/backbone', 'lib/jquery.sortable']
    }
  }

The 'app' property expresses a dependency between the main Backbone application file and lib/jquery.sortable, which means /lib/jquery.sortable.js will get automatically loaded (or compiled in by r.js when creating a production build of the app).

Google Tasks Ordering API

It would be too easy if HTML5 Sortable’s API were a one-to-one match with Google Tasks’ ordering API. Google’s API has a specific method for moving tasks, and it’s based around moving one task to occupy the position of another one:

gapi.client.tasks.tasks.move({ tasklist: listId, task: id, previous: previousId });

Moving a task to the top of the list is handled by passing null for previous.

Next I’ll explain how to create some simple interface elements for the draggable handle, and then we’ll look at how to persist moved tasks by translating Google’s API into Backbone model and collection code.

Implementation: Views and Templates

I added a little handle by using a Bootstrap icon and an anchor element in app/js/templates/tasks/task.html:

<a href="#" class="handle pull-right"><i class="icon-move"></i></a>

Next I added the code that maps between the Backbone view and the jQuery HTML5 Sortable plugin to app/js/views/tasks/index.js:

makeSortable: function() {
  var $el = this.$el.find('#task-list');
  if (this.collection.length) {
    $el.sortable({ handle: '.handle' }).bind('sortupdate', _.bind(this.saveTaskOrder, this));
  }
},

saveTaskOrder: function(e, o) {
  var id = $(o.item).find('.check-task').data('taskId')
    , previous = $(o.item).prev()
    , previousId = previous.length ? $(previous).find('.check-task').data('taskId') : null;

  this.collection.move(id, previousId, this.model);
}

The makeSortable method makes an element that appears within TasksIndexView “sortable” – that is, HTML5 Sortable has been wrapped around it. The plugin’s 'sortupdate' event is then bound to saveTaskOrder.

The saveTaskOrder method gets the current task’s ID by looking at the checkbox, because I’d already added a data attribute to that element in the template. This ID is then passed to the collection with the previous task’s ID. In this case, the previous task is the one adjacent to it, which Google’s API needs to figure out how to move the task.

The collection property in this view is a Tasks collection, so let’s take a look at how to implement the move method that causes the changes to be persisted.

Implementation: Models and Collections

Open app/js/collections/tasks.js and add a new method called move:

move: function(id, previousId, list) {
  var model = this.get(id)
    , toModel = this.get(previousId)
    , index = this.indexOf(toModel) + 1;

  this.remove(model, { silent: true });
  this.add(model, { at: index, silent: true });

  // Persist the change
  list.moveTask({ task: id, previous: previousId });
}

This method just exists to trigger remove and add calls on the collection to cause the objects to be reshuffled internally. It then calls moveTask on the TaskList model (in app/js/models/tasklist.js):

moveTask: function(options) {
  options['tasklist'] = this.get('id');
  var request = gapi.client.tasks.tasks.move(options);

  Backbone.gapiRequest(request, 'update', this, options);
}

The gapiRequest method forms the basis for the custom Backbone.sync method used in this project, which I’ve talked about in previous tutorials. I wasn’t able to figure out how to make Backbone.sync cope with moving items in a way that made sense given how gapi.client.tasks.tasks.move works, but I was able to at least reuse some of the syncing functionality by creating a request and calling the “standard” request handler.


When you can’t find a suitable Backbone plugin for something and want to use a jQuery plugin, my advice is to look for plugins that have event-based APIs and can be cleanly unloaded. That will make them easy to hook into your Backbone views.

The full source for this tutorial can be found in alexyoung / dailyjs-backbone-tutorial, commit e9edfa3.

Node Roundup: 0.11.0, Dependo, Mashape OAuth, node-windows

03 Apr 2013 | By Alex Young | Comments | Tags node modules dependencies oauth authentication windows
You can send in your Node projects for review through our contact form.

0.11.0, 0.10.2

Node 0.11.0 has been released, which is the latest unstable branch of Node. Node 0.10.2 was also released, which includes some fixes for the stream module, an update for the internal uv library, and various other fixes for cryptographic modules and child_process.



Dependo (GitHub: auchenberg / dependo, License: MIT, npm: dependo) by Kenneth Auchenberg is a visualisation tool for generating force directed graphs of JavaScript dependencies. It can interpret CommonJS or AMD dependencies, and uses MaDGe to generate the raw dependency graph. D3.js is used for drawing the results.

Mashape OAuth

Mashape OAuth (GitHub: Mashape / mashape-oauth, License: MIT, npm: mashape-oauth) by Nijiko Yonskai is a set of modules for OAuth and OAuth2. It has been designed to work with lots of variations of OAuth implementations, and includes some lengthy Mocha unit tests.

The authors have also written a document called The OAuth Bible that explains the main concepts behind each supported OAuth variation, which is useful because the OAuth terminology isn’t exactly easy to get to grips with.


node-windows (GitHub: coreybutler / node-windows, License: MIT/BSD, npm: node-windows) by Corey Butler is a module designed to help write long-running Windows services with Node. It supports event logging and process management without requiring Visual Studio or the .NET Framework.

Using native node modules on Windows can suck. Most native modules are not distributed in a binary format. Instead, these modules rely on npm to build the project, utilizing node-gyp. This means developers need to have Visual Studio (and potentially other software) installed on the system, just to install a native module. This is portable, but painful… mostly because Visual Studio itself is over 2GB.

node-windows does not use native modules. There are some binary/exe utilities, but everything needed to run more complex tasks is packaged and distributed in a readily usable format. So, no need for Visual Studio… at least not for this module.

jQuery Roundup: Sidr, Huey, Backbone.Advice

02 Apr 2013 | By Alex Young | Comments | Tags jquery plugins backbone.js graphics components
Note: You can send your plugins and articles in for review through our contact form.



Sidr (GitHub: artberri / sidr, License: MIT) by Alberto Varela creates menus that look like the sidebars found in recent iOS apps. It can cope with multiple menus on a page, and can load content remotely. It’s also responsive, so it should work well in mobile projects.

The author has written up documentation complete with demos, and has included a Grunt build script. It seems like the exact sort of UI component that the next great web-based RSS reader might use…


Huey (GitHub: michaelrhodes / huey, License: MIT) by Michael Rhodes will find the dominant colour of an image and return it as an RGB array. This is all performed client-side, and doesn’t even depend on jQuery. It could be used to create the kind of effect seen in iTunes, where the background colour changes to suit the selected album art.


Backbone.Advice (GitHub: rhysbrettbowen / Backbone.Advice, License: MIT) by Rhys Brett-Bowen is a Backbone plugin based on Angus Croll’s advice pattern, which wraps existing methods with before, after, and around hooks. It basically adds functional mixins to Backbone objects.


Rhys sent in a whole bunch of other Backbone-related projects, including Backbone.ComponentView and Backbone.ModelRegistry. Backbone.ComponentView is based on goog.ui.component from Closure Library, and also works with Backbone.Advice.

Five Minute Guide to Streams2

01 Apr 2013 | By Alex Young | Comments | Tags streams streams2 node 5min

Node 0.10 is the latest stable branch of Node. It’s the branch you should be using for Real Work™. The most significant API changes can be found in the stream module. This is a quick guide to streams2 to get you up to speed.

The Base Classes

There are now five base classes for creating your own streams: Readable, Writable, Duplex, Transform, and PassThrough. These base classes inherit from EventEmitter so you can attach listeners and emit events as you normally would. It’s perfectly acceptable to emit custom events – this might make sense, for example, if you’re writing a streaming parser. The parser could emit events like 'headers' to indicate the headers have been parsed, perhaps for a CSV file.

To make your own Readable stream class, inherit from stream.Readable and implement the _read(size) method. The size argument is “advisory” – a lot of Readable implementations can safely ignore it. Once your _read method has collected data from an underlying I/O source, it can send it by calling this.push(chunk) – internally data will be placed into a queue so “clients” of your class can deal with it when they’re ready.

The Writable class should also be inherited from, but this time a _write(chunk, encoding, callback) method should be implemented. Once you’ve written data to the underlying I/O source, callback can be called, passing an error if required.

The Duplex class is like a Readable and Writable stream in one – it allows data sources that transmit and receive data to be modelled. This makes sense when you think about it – TCP network sockets transmit and receive data. To implement a Duplex stream, inherit from stream.Duplex and implement both the _read and _write methods.

The Transform class is useful for implementing parsers, like the CSV example I mentioned earlier. In general, streams that change data in some way should be implemented using stream.Transform. Although Transform sounds a bit like a Duplex stream, this time you’ll need to implement a _transform(chunk, encoding, callback) method. I’ve noticed several projects in the wild that use Duplex streams with a stubbed _read method, and I wondered if these would be better served by using a Transform class instead.

Finally, the PassThrough stream inherits from Transform to do… nothing. It relays the input to the output. That makes it ideal for sitting inside a pipe chain to spy on streams, and people have been using this to write tests or instrument streams in some way.


Pipes must follow this pattern: readable.pipe(writable). As Duplex and Transform streams can both read and write, they can be placed in either position in the chain. For example, I’ve been using process.stdin.pipe(csvParser).pipe(process.stdout) where csvParser is a Transform stream.


The general pattern for inheriting from the base classes is as follows:

  1. Create a constructor function that calls the base class using baseClass.call(this, options)
  2. Correctly inherit from the base class using Object.create or util.inherits
  3. Implement the required underscored method, whether it’s _read, _write, or _transform

Here’s a quick stream.Writable example:

var stream = require('stream');

function GreenStream(options) {
  stream.Writable.call(this, options);
}

GreenStream.prototype = Object.create(stream.Writable.prototype, {
  constructor: { value: GreenStream }
});

GreenStream.prototype._write = function(chunk, encoding, callback) {
  process.stdout.write('\u001b[32m' + chunk + '\u001b[39m');
  callback();
};

process.stdin.pipe(new GreenStream());

Forwards Compatibility

If you want to use streams2 with Node 0.8 projects, then readable-stream provides access to the newer APIs in an npm-installable module. Since the stream core module is implemented in JavaScript, it makes sense that the newer API can be used in Node 0.8.

Some open source module authors are including readable-stream as a dependency and then conditionally loading it:

var PassThrough = require('stream').PassThrough;

if (!PassThrough) {
  PassThrough = require('readable-stream/passthrough');
}

This example is taken from until-stream.

Streams2 in the Wild

There are some interesting open source projects that use the new streaming API that I’ve been collecting on GitHub. multiparser by Jesse Tane is a stream.Writable HTML form parser. until-stream by Evan Oxfeld will pause a stream when a certain signature is reached.

Hiccup by naomik uses the new streams API to simulate sporadic throughput, and the same author has also released bun which can help combine pipes into composable units, and Burro which can package objects into length-prefixed JSON byte streams. Conrad Pankoff used Burro to write Pillion, which is an RPC system for object streams.

There are also less esoteric modules, like csv-streamify which is a CSV parser.

Edge.js, Bespoke.js, Barcode39

29 Mar 2013 | By Alex Young | Comments | Tags Canvas node Microsoft presentations libraries


Edge.js (GitHub: tjanczuk / edge, License: Apache 2, npm: edge) by Tomasz Janczuk is an in-process interoperability layer between .NET and Node. This allows things like CPU-bound operations to be processed by .NET, or Node to access the Win32 APIs through C#.

The .NET code can be executed asynchronously and may be passed as a multiline comment or a string. A basic example looks like this:

var edge = require('edge');

var helloWorld = edge.func('async (input) => { return ".NET Welcomes " + input.ToString(); }');

helloWorld('JavaScript', function(err, result) {
  if (err) throw err;
  console.log(result);
});

Running this example would display “.NET Welcomes JavaScript”.

Other CLR languages can be supported, should you be interested in playing with F# for example.

This project requires Windows, and needs Visual Studio 2012, Python 2.7, and node-gyp to build.



Bespoke.js (GitHub: markdalgleish / bespoke.js, License: MIT, bower: bespoke.js) by Mark Dalgleish is a small but slick presentation library. It works with keyboard and touch events, and is intended to be used with CSS transitions.

It’s built using ECMAScript 5, so you’ll want to run your presentations on a compatible browser.

Creating presentations involves wrapping HTML slide content in <section> containers. Horizontal and vertical deck styles are supported, and Mark has documented the CSS classes in the project’s readme so you can hook into the provided JavaScript and styles.


Barcode39 (GitHub: erik5388 / barcode-39.js, License: MIT) by Erik Zettersten is a Code 39 implementation – it basically generates barcodes that almost all barcode readers can cope with. It can output data URIs and supports Canvas for drawing.

The JavaScript API is new Barcode39(elementId, type, delimeter), but it will also look for an element with the default ID of barcode and read the element’s content for the barcode’s source text.

Backbone.js Tutorial: Updates for 1.0, Clear Complete

28 Mar 2013 | By Alex Young | Comments | Tags backbone.js mvc node backgoog bootstrap


Before starting this tutorial, you’ll need the following:

  • alexyoung / dailyjs-backbone-tutorial at commit 711c9f6
  • The API key from part 2
  • The “Client ID” key from part 2
  • Update app/js/config.js with your keys (if you’ve checked out my source)

To check out the source, run the following commands (or use a suitable Git GUI tool):

git clone git@github.com:alexyoung/dailyjs-backbone-tutorial.git
cd dailyjs-backbone-tutorial
git reset --hard 711c9f6

Updating to Backbone 1.0

I updated bTask to work with Backbone 1.0, which required two small changes. The first was a change to the behaviour of callbacks in Backbone.sync – the internal call to the success callback now only needs one argument, which is the response data. I think I’ve mentioned that on DailyJS before, but you shouldn’t need to worry about this in your own Backbone projects unless you’ve written a custom Backbone.sync implementation.

The second change was the collection add events were firing when the views called fetch. I fixed this by passing reset: true to the fetch options. Details on this have been included in Backbone’s documentation under “Upgrading to 1.0”:

If you want to smartly update the contents of a Collection, adding new models, removing missing ones, and merging those already present, you now call set (previously named “update”), a similar operation to calling set on a Model. This is now the default when you call fetch on a collection. To get the old behavior, pass {reset: true}.

Adding “Clear Complete”

When a task in Google Tasks is marked as done, it will appear with strike-through and hang around in the list until it is cleared or deleted. Most Google Tasks clients will have a button that says “Clear Complete”, so I added one to bTask.

I added a method called clear to the Tasks collection which calls the .clear method from the Google Tasks API (rather than going through Backbone.sync):

define(['models/task'], function(Task) {
  var Tasks = Backbone.Collection.extend({
    model: Task,
    url: 'tasks',

    clear: function(tasklist, options) {
      var success = options.success || function() {}
        , request
        , self = this;

      options.success = function() {
        self.remove(self.filter(function(task) {
          return task.get('status') === 'completed';
        }));
        success.apply(this, arguments);
      };

      request = gapi.client.tasks.tasks.clear({ tasklist: tasklist });
      Backbone.gapiRequest(request, 'update', this, options);
    }
  });

  return Tasks;

I also added a button (using Bootstrap’s built-in icons) to app/js/templates/app.html, and added an event to AppView (in app/js/views/app.js):

var AppView = Backbone.View.extend({
  // ...
  events: {
    'click .clear-complete': 'clearComplete'
  },

  // ...
  clearComplete: function() {
    var list = bTask.views.activeListMenuItem.model;
    bTask.collections.tasks.clear(list.get('id'), { success: function() {
      // Show some kind of user feedback
    }});
    return false;
  }
});
I had to change app/js/views/lists/menuitem.js to set the current collection in the open method to make this work.


Because I’ve been reviewing Backbone’s evolution as it progressed to 1.0 for DailyJS, updating this project wasn’t too much effort. In general the 1.0 release is backwards compatible, so you should definitely consider upgrading your own projects. Also, now bTask has ‘Clear Complete’, I feel like it does enough of the standard Google Tasks features for me to actually use it regularly.

Remember that you can try it out for yourself at todo.dailyjs.com.

The full source for this tutorial can be found in alexyoung / dailyjs-backbone-tutorial, commit 705bcb4.

Node Roundup: wish, Vow, shell-jobs

27 Mar 2013 | By Alex Young | Comments | Tags node modules testing promises async time daemons unix
You can send in your Node projects for review through our contact form.


wish (GitHub: EvanBurchard / wish, License: MIT, npm: wish) by Evan Burchard is an assertion module designed to raise meaningful, human-readable errors. When assertions fail, it parses the original source to generate a useful error message, which means the standard comparison operators can be used.

For example, if wish(a === 5) failed an error like this would be displayed:

  Expected "a" to be equal(===) to "5".

If assert(a === 5) had been used instead, AssertionError: false == true would have been raised. A fairer comparison would be assert.equal, which would produce AssertionError: 4 == 5, but it’s interesting that wish is able to introspect the variable name and add that to the error.


Vow (GitHub: dfilatov / jspromise, License: MIT/GPL, npm: vow) by Filatov Dmitry is a Promises/A+ implementation. Promises can be created, fulfilled, and rejected – you should be able to get the hang of it if you’ve used libraries with then methods elsewhere, but there are some differences to Promises/A which feels like it actually simplifies some of the potentially messier parts of the original CommonJS specification.

Here’s an example of the Vow API:

var promise1 = Vow.promise(),
    promise2 = Vow.promise();

Vow.all([promise1, promise2, 3])
  .then(function(value) {
    // value is [1, 2, 3]
  });

promise1.fulfill(1);
promise2.fulfill(2);


The author has written some pretty solid looking tests, and benchmarks are included as well. The project performs favorably when compared to other popular promise libraries:



I like seeing daemons made in Node, and Azer Koçulu recently sent in a cron-inspired daemon called shell-jobs (GitHub: azer / shell-jobs, License: MIT, npm: shell-jobs). It uses .jobs files that are intended to be human readable. All you need to do is write a shell command followed by a # => and then a time:

cowsay "Hello" > /tmp/jobs.log # => 2 minutes

The shell-jobs script will then parse this file and output the following:

  jobs Starting "cowsay "Hello" > /tmp/jobs.log" [2 minutes] +2ms

After two minutes have passed the job will be executed:

  exec 1. Running cowsay "Hello" > /tmp/jobs.log. +0ms

jQuery Roundup: Individual Memberships, Bootstrap Tag Autocomplete, CDNJS

26 Mar 2013 | By Alex Young | Comments | Tags jquery plugins bootstrap cdn
Note: You can send your plugins and articles in for review through our contact form.

jQuery Foundation Individual Memberships

The jQuery Foundation has allowed corporations to become members for a year now, and they’ve just opened up the programme to individuals. If you’re interested in effectively sponsoring the jQuery Foundation, the jquery.com/join page has details on pricing and rewards.

Each pricing tier includes a gift, starting with a t-shirt, and the top $400 tier also includes “access to individual members only benefits at jQuery Foundation events”. I’m not sure what these individual benefits are, but where I come from $400 gets you a lot of benefits for your buck, so consider me cautiously intrigued.

Bootstrap Tag Autocomplete

When you’re writing Bootstrap-based projects, including any old jQuery plugin sometimes requires a bit of extra work to tailor the required markup and CSS to fit in with Bootstrap’s defaults. That means Bootstrap plugins are in demand from developers and designers. Nada Aldahleh recently sent in Bootstrap Tag Autocomplete (GitHub: Sandglaz / bootstrap-tagautocomplete, License: Apache 2.0), which is a UI component for Bootstrap and jQuery that creates Twitter-like autocomplete interfaces.

It’s built on Bootstrap’s Typeahead library, and includes its own caret position library for getting and setting the caret position.

QUnit tests have been included, and the project’s website includes documentation and code samples.


Ryan Kirkman sent in CDNJS, which is an open source CDN. They’re looking for feedback on which libraries should be included – there are currently 325 listed. The code that runs the project is available on GitHub at cdnjs / cdnjs, and it’s based on Node.

Scripts can be added to the CDN by forking the GitHub project and following the instructions in the readme file. The general rule of thumb is that projects must have over 100 watchers on GitHub, but as long as sufficient popularity can be demonstrated the authors will consider including a new project. That means the list of libraries on cdnjs.com is useful for finding high quality scripts.

Dependent Types for JavaScript, radioactive.js, Minimail

25 Mar 2013 | By Alex Young | Comments | Tags computer-science education email apps

Dependent Types for JavaScript

Dependent Types for JavaScript, published on Lambda the Ultimate, covers Dependent JavaScript (DJS), a statically-typed dialect of JavaScript by Ravi Chugh, David Herman, and Ranjit Jhala (pdf):

DJS supports the particularly challenging features such as run-time type-tests, higher-order functions, extensible objects, prototype inheritance, and arrays through a combination of nested refinement types, strong updates to the heap, and heap unrolling to precisely track prototype hierarchies

The paper has a summary of related work that will be interesting to those of you who are experimenting with dialects of JavaScript with different type models.


radioactive.js (GitHub: reinpk / radioactive, License: MIT) by Peter Reinhardt is a library for modelling nuclear physics. Its intended use is for creating interactive demonstrations of radioactive decay:

One of the biggest problems I’ve encountered in writing about nuclear reactors is that people don’t understand radioactive decay. This is a huge problem because it means that 99% of the population is totally unqualified to decide anything about nuclear energy.

Suppose I have 1 kg of Cesium-134, with a half-life of 30 years. And 1 kg of Uranium-238, with a half-life of 4.5 billion years. I’m going to give you one of the blocks, and you have to sleep with it tonight like a teddy bear. Which one do you want?

If you guessed Cesium-134, you’re dead.

So, if you often find yourself presented with various radioactive isotopes and don’t want to die, Peter’s library may be of use to you. Or else you’re creating presentations or simulations using D3.js that you want to have some level of accuracy.


Minimail (GitHub: emailbox / minimail_mobileapp, License: BSD3) by Nicholas Reed is a mobile and server-side project to create a developer-friendly email client:

It is at the alpha stage, which means it kinda, sorta runs on Android and iOS, and is usable as a replacement mobile client (with changes synced to your Gmail web interface). I made it because there currently are no mobile email clients that are built with common frontend web languages. I’d like to see anyone able to run their own custom email client that fits their workflow.

It’s built using PhoneGap, and the server is Node with MongoDB.