Rendering Large Terrain in WebGL

14 Mar 2014 | By Alex Young | Comments | Tags webgl graphics

Rendering large terrains by Felix Palmer is a tutorial that demonstrates how to render terrain with a variable level of detail. There’s a demo and the source is on GitHub. It’s built with three.js, and is based on a paper on level-of-detail distribution.

Terrain

A simple way to do better is to split our plane into tiles of differing sizes, but of constant vertex count. So, for example, each tile contains 64×64 vertices, but sometimes those vertices are stretched over a large, distant area, while for nearby terrain the tiles are smaller.
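
To make the idea concrete, here’s a minimal sketch of constant-vertex-count tiles in three.js. This is not the tutorial’s actual code; scene and terrainMaterial are assumed to already exist:

var SEGMENTS = 64;

// Every tile has the same vertex count, but distant tiles cover a
// larger area, so vertex density falls off with distance.
function makeTile(x, z, size) {
  var geometry = new THREE.PlaneGeometry(size, size, SEGMENTS, SEGMENTS);
  var tile = new THREE.Mesh(geometry, terrainMaterial);
  tile.position.set(x, 0, z);
  return tile;
}

scene.add(makeTile(0, 0, 1));  // small tile near the viewer
scene.add(makeTile(0, -8, 8)); // large tile in the distance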

The code uses GLSL, but the main app/ JavaScript code is all neatly organised with RequireJS, so it’s surprisingly easy to navigate and understand. The tutorial blog post also makes some of these difficult ideas more accessible, so I thoroughly recommend checking it out.

Multiline strings in JavaScript

13 Mar 2014 | By Alex Young | Comments | Tags es6 hacks

Multiline (GitHub: sindresorhus / multiline, License: MIT, npm: multiline) by Sindre Sorhus is a clever hack that allows you to write multiline strings by using a callback to wrap around a comment:

var str = multiline(function(){/*
<!doctype html>
<html>
    <body>
        <h1>❤ unicorns</h1>
    </body>
</html>
*/});

This works by calling .toString() on the callback, then running a regular expression to extract the comment: /\/\*!?(?:\@preserve)?\s*(?:\r\n|\n)([\s\S]*?)(?:\r\n|\n)\s*\*\//.
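
The trick in miniature looks like this, using a simplified version of the regular expression above:

function f() {/*
hello
*/}

// Function toString() returns the source, comment included
var src = f.toString();
var match = /\/\*\s*(?:\r\n|\n)([\s\S]*?)(?:\r\n|\n)\s*\*\//.exec(src);
console.log(match[1]); // 'hello'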

Although this is a hack, I hadn’t thought about it before. Sindre notes that this has a performance impact, but that sometimes it might be worth writing things this way for the added clarity it brings.

ECMAScript 6 will introduce template strings, which can be used for multiline strings and interpolation with ${}:

A template string uses backticks instead of double quotes or single quotes. The template string can contain placeholders, which use the ${ } syntax. The value of the expressions in the placeholders, as well as the text between them, gets passed to a function. This function is determined by the expression before the template string. If there is no expression before the template string, the default template string behaviour is used.
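
For comparison, here’s the multiline example rewritten as a template string; this is a sketch that needs an ES6 environment or a transpiler to run today:

var name = 'unicorns';

var str = `<!doctype html>
<html>
    <body>
        <h1>❤ ${name}</h1>
    </body>
</html>`;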

Node Roundup: 0.11.12, Metalsmith, Promises and Error Handling

12 Mar 2014 | By Alex Young | Comments | Tags node modules npm promises generators

Node 0.11.12

Node 0.11.12 is out. It updates uv, some of Node’s C++ code in src/, and several core modules including cluster and net.

One commit that caught my attention was buffer: allow toString to accept Infinity for end by Brian White. He said he sometimes sets INSPECT_MAX_BYTES to Infinity, allowing the buffer’s contents to be printed for debugging purposes. I think it’s interesting that this works, even though it’s not something I’d usually want to do.
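
If I’ve read the commit correctly, that means something like this now works in 0.11.12, where previously end had to be a finite index:

var buf = new Buffer('hello, world');

// Infinity is clamped to the buffer's length
console.log(buf.toString('utf8', 0, Infinity)); // 'hello, world'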

Metalsmith

Ian Storm Taylor sent in Metalsmith, a really cool static site generator by Segment.io. Why is it cool? Well, they had me at the entirely plugin-based API that uses chainable calls:

var Metalsmith = require('metalsmith');
var drafts = require('metalsmith-drafts');
var markdown = require('metalsmith-markdown');
var permalinks = require('metalsmith-permalinks');
var templates = require('metalsmith-templates');

Metalsmith(__dirname)
  .use(drafts())
  .use(markdown())
  .use(permalinks('posts/:title'))
  .use(templates('handlebars'))
  .build();
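
Plugins themselves are just functions over a map of files. Here’s a sketch of a hypothetical plugin, based on the (files, metalsmith, done) signature that use() expects:

// A made-up plugin: upper-cases the title in each file's front matter
function uppercaseTitles(options) {
  return function(files, metalsmith, done) {
    Object.keys(files).forEach(function(path) {
      var file = files[path];
      if (file.title) file.title = file.title.toUpperCase();
    });
    done();
  };
}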

Promises and Error Handling

Promises and Error Handling by Jon Merrifield is all about error handling with promises in Node. It has guidelines for using promises safely, including the idea that you should reject rather than throw and how to terminate chains early and safely.

Changing the then in the above code to done means that there will be no outer promise returned from this, and the error will result in an asynchronous uncaught exception, which will bring down the Node process. In theory this makes it unlikely that any such problem would make it into production, given how loudly and clearly it would fail during development and testing.
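
To make the advice concrete, here’s a sketch of the pattern as I understand it, assuming a library such as Q that provides done() (getUser, loadAccount, and render are hypothetical functions):

getUser(id)
  .then(loadAccount)
  .then(render)
  .done(); // an unhandled rejection now throws loudly instead of vanishing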

Gremlins.js, Backbone.CustomSync

11 Mar 2014 | By Alex Young | Comments | Tags testing backbone

Gremlins.js

Gremlins.js (GitHub: marmelab / gremlins.js, License: MIT) from marmelab is a monkey testing library. According to the authors, it can be used to unleash a horde of undisciplined gremlins at a web application.

If that sounds weird, there’s a handy gif in the readme that illustrates how it works: it basically throws events at your HTML, and is able to report back when something goes wrong:

Mogwais only monitor the activity of the application and record it on the logger. For instance, the “fps” mogwai monitors the number of frames per second, every 500ms. Mogwais also report when gremlins break the application. For instance, if the number of frames per second drops below 10, the fps mogwai will log an error.

There are various kinds of gremlins that try to uncover issues. This includes a form filler, clicker, and scroller. You can manually create hordes using a chainable API:

gremlins.createHorde()
  .gremlin(gremlins.species.formFiller())
  .gremlin(gremlins.species.clicker().clickTypes(['click']))
  .gremlin(gremlins.species.scroller())
  .gremlin(function() {
    window.$ = function() {};
  })
  .unleash();
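
Mogwais are attached to a horde in the same way. A sketch using the fps mogwai quoted above (method names taken from the readme):

gremlins.createHorde()
  .mogwai(gremlins.mogwais.fps())
  .unleash();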

Backbone.CustomSync

Garrett Murphey sent in Backbone.CustomSync (GitHub: gmurphey / backbone.customsync, License: MIT, npm: backbone.customsync), a more pragmatic solution for defining Backbone.sync implementations that allows you to avoid giant switch statements:

To use the extension, all you have to do is use Backbone.CustomSync.Model or Backbone.CustomSync.Collection in place of Backbone.Model and Backbone.Collection, respectively. If you don’t define one of these sync methods - createSync, for example - and Backbone attempts to save a new model, the options.error callback will be invoked automatically. Backbone.CustomSync will only perform the operations you define.
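
Here’s a sketch of the idea. The method names come from the readme, but the argument list is an assumption on my part:

var Task = Backbone.CustomSync.Model.extend({
  // called instead of Backbone.sync for 'read' operations
  readSync: function(model, options) {
    // load the data however you like, then invoke the callbacks
    // Backbone passed in (assumed signature)
    options.success({ id: model.id, title: 'A task' });
  }
});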

Introducing Web Components to Control Authors

10 Mar 2014 | By Matthew Phillips | Comments | Tags components tutorials
This is a guest post by Matthew Phillips, from Bitovi. You can find him on Twitter: @matthewcp and GitHub: matthewp.

At this point, unless you’ve been living under a rock, you’ve likely heard at least a little about web components, a collection of emerging standards created with the intent of making web development more declarative. Among other things, web components allow for custom elements, an easier way to encapsulate your widgets. If you’ve been developing with MVC frameworks there is a learning curve to using components, but once things start to click you’ll want to use them everywhere in your application. Who hasn’t wanted an easy way to insert a random cat picture?

<randomcat width="200" height="300"></randomcat>

Creating well-encapsulated widgets is something we do on a daily basis as JavaScript engineers. In this article I’ll explain where traditional control-based MVC has fallen short of that goal, and how web components can resurrect the ease of development from the web’s early roots.

MVC’s Brick Wall

Traditional MVC frameworks encourage you to organize your view code by creating constructor functions called Controls or Views. If you’ve developed in this way you probably recognize some of the problems you encounter with this approach.

Tracking Down Instantiation

Since Controls are implemented as constructor functions that operate on a template, any sub view must be manually instantiated within one of the control’s lifecycle methods, usually either init or render. Consider the following in Backbone:

var Widget = Backbone.View.extend({
  initialize: function() {
    this.subView = new InnerWidget({ el: $('#inner') });
  },

  render: function() {
    this.$el.html(this.template());
    this.subView.render();
  }
});

That’s a lot of code that will become more complex as you add additional subviews or conditions for rendering.

(Lack Of) External API

When creating a widget, it’s common to give it an external API to interact with. With traditional MVC controls there is no standard way to do this, so it ends up being ad hoc, at the whim of the author. For example, here’s an accordion containing Panel subviews:

var PanelView = Backbone.View.extend({
  template: _.template($('#panel-tmpl').html()),

  render: function() {
    this.$el.html(this.template(this.model.toJSON()));
    return this.$el;
  }
});

var AccordionView = Backbone.View.extend({
  template: _.template($('#acc-tmpl').html()),

  addPanel: function(panel) {
    if (panel instanceof PanelView) {
      this.$el.find('.panels').append(panel.render());
    }
    }
  }
});

And then to use this:

var accordion = new AccordionView();
var panel = new PanelView({ model: data });

accordion.addPanel(panel);

You’ll want to abstract some of these pain points to create your own “standard API” for controls. You’ll probably create some base classes with stub functions for common tasks. Pretty soon you’ve created your own mini-framework. We’ve learned to put up with these little things and they don’t bother us day-to-day, but when you discover a better way it will change how you architect your application.

Hidden Model

Similarly, widgets commonly need to interact with external model data. Most MVC frameworks provide some way to pass data into a control, so a lot of people have established a pattern of passing in a “model” property to fill this hole. Having a clearer way of setting the model for a control opens a lot of doors in terms of composability. With the traditional control pattern you usually wind up with an ad hoc ViewModel, created from some properties passed into the constructor and some created by the control’s own logic.

Enter Web Components

Web Components are a W3C spec for an HTML and JavaScript construct that, at its heart, is a way to define custom HTML elements. The spec includes:

  • Native templates (spec)
  • A way to load them (spec)
  • A method to define custom elements and extend existing ones (spec)
  • Hooks to run functions when an element is inserted in the page.
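
Of these, defining custom elements is the part you touch most directly. Here’s a sketch using the registration API from the custom elements spec as it stood at the time (document.registerElement, with a made-up element name):

var proto = Object.create(HTMLElement.prototype);

// runs when an <x-hello> element is created
proto.createdCallback = function() {
  this.textContent = 'Hello!';
};

document.registerElement('x-hello', { prototype: proto });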

A custom element can be as complex as the main router for a single-page application, a widget to display Markdown inline:

<x-markdown table-of-contents-depth="2">
# Heading

## Sub-heading

* List item
   - Sub list item
</x-markdown>

or as simple as a way to embed a YouTube video.

<rick-roll></rick-roll>

Web Components provide an alternate way to define traditional MVC based controls, with several key advantages.

Maximal Encapsulation

The power of web components is their ability to easily encapsulate the details of a widget while still allowing widgets to be composed together, thanks to the legacy of HTML’s document-oriented roots. Because they are layout-based, web components can be neatly organized in templates, solving the instantiation problem that a control-based workflow suffers from. Applications can become more template-driven, rather than instantiating controls with complex event handler logic. Say you were A/B testing a feature: your template (in Mustache) could look something like this:

{{#if randomlyPicked}}
  <rick-roll></rick-roll>
{{/if}}

Obvious API layer

The API layer for web components is easy to understand. Data is passed to components through HTML attributes.

<x-injector href="/some-page.html" />

Layout is passed through the element’s inner content, as shown in the markdown example above.

Models and Templates

The web components spec includes templates for the first time in the web’s history. This means no more script tag or hidden div hacks to include templates on a page. Templates allow you to create fragments of markup to be used by your components later. They are parsed, but not rendered until inserted into the page.
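
A small sketch of that behaviour, using script only so it’s clear nothing is rendered until the content is cloned in (supporting browsers only, at the time of writing):

var tmpl = document.createElement('template');
tmpl.innerHTML = '<img src="cat.png" alt="A random cat">';

// The content is parsed but inert: the image hasn't been requested yet.
// Inserting a clone of the content makes it live:
document.body.appendChild(document.importNode(tmpl.content, true));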

Models can be bound to templates and, through the power of Object.observe, changes in the model result in the relevant parts of the template being re-rendered automatically. If you’ve used an MVC framework with template binding and observable models, you’re probably already familiar with this.

Models are passed into components the same way as all types of data, through attributes.

With CanJS’ can.Component you can pass the name of your model through the attributes and get live binding today, without waiting for standardization to flesh out that part of the spec. This brings me to my last point…

Using Web Components today

Despite this being early days for Web Components, there are already a few options if you are interested in getting started. Polymer and X-Tags are two projects started by Google and Mozilla engineers working on the Web Components spec. These projects are bleeding-edge and break compatibility often. Additionally they don’t attempt to extend the spec with functionality that won’t eventually end up in it. What they do offer you is the ability to start using components the way they will be used when browsers have fully implemented the specifications. can.Component, on the other hand, is an early implementation of Web Components that provides additional functionality that is beyond the scope of custom elements.

can.Component adds two-way binding between a component’s model (which is not yet part of the web component spec) and its form elements. Additionally, it adds declarative binding to a component’s templates through its scope object. The scope is an object used to render the template, but with the addition of declarative binding it does much more than that: it can be used to update the state of a component and respond to events. Here’s a typical example of can.Component that shows off all of these features:

Usage:

<color-selection></color-selection>

Implementation:

<script id="color-section-template" type="text/mustache">
  <form>
    <span class="validations">{{validationMessage}}</span>
    <input name="name" type="text" can-value="color" can-change="validate">
  </form>
</script>

This is the component’s JavaScript:

can.Component.extend({
  tag: "color-selection",
  template: can.view("#color-selection-template"),
  scope: {
    color: "blue",
    validate: function() {
      if (!isAColor(this.attr("color"))) {
        this.attr("validationMessage", "Not a valid color");
      } else {
        this.removeAttr("validationMessage");
      }
    }
  }
});

We Still Need Libraries

I hope I’ve demonstrated the way in which web components breaks some of the boundaries we’ve hit with traditional control-based MVC. At the same time the specification is intentionally low level and leaves room for libraries to improve upon the experience, as can.Component is doing today.

As a consequence of Web Components’ inherently declarative nature, your code will become more condensed, with far less boilerplate. We’re truly approaching a paradigm where separation of concerns is achievable. But you can’t appreciate how web components change the way you write applications until you try them yourself. So I encourage you to choose a library and start on your first widget today.

Further (required) Reading

npm-stat

07 Mar 2014 | By Alex Young | Comments | Tags npm node

Recently npm added back download stats, which means you can see how many downloads a package has had. The announcement includes the note that Mikito Takada submitted a pull request for the D3 graphs – it’s things like this that make me glad npm’s website is open source.

There’s a public API for the statistics, which is written using hapi.

Paul Vorbach sent in npm-stat (GitHub: pvorb / npm-stat.com, License: MIT), which generates another set of views on npm’s stats. It displays downloads per day, week, month, and year, and there are graphs for authors as well. Pages for certain prolific authors (which I won’t link to directly) naturally take a while to generate, but the site is generally fairly responsive.

I’m interested in seeing what people build with npm-www and the stats public API, but so far it seems like they’ve made a big improvement over the older versions.

Book Review: Quality Code: Software Testing Principles, Practices, and Patterns

06 Mar 2014 | By Alex Young | Comments | Tags books testing jquery

Quality Code: Software Testing Principles, Practices, and Patterns ($44.99, eBook: $35.99, Addison-Wesley Professional) by Stephen Vance is a book about testing. It uses examples from several languages – Java is the most prominent, but there are JavaScript examples as well. The most significant part for DailyJS readers is a long practical exercise that involves testing an open source jQuery plugin, but there is a lot of general software design advice that you will find useful.

The book introduces automated testing, but also discusses how tests can be managed in real teams. One of the main points here is how the same best practices that you use for production code should go into automated tests – if you use certain object oriented patterns, small methods, SOLID principles, and so on, then these techniques should be used for test code as well.

This leads into the practice of writing maintainable test code: the relationship between engineering and craftsmanship.

Civil engineers may supervise and inspect the building of bridges or buildings, but they spend little time driving rivets, pouring concrete, or stringing suspension cables. Probably the closest to software engineers’ total immersion might be the handful of test pilots who are also aeronautical engineers, in that they participate in design, construction, inspection, and verification of the craft they fly.

There are JavaScript examples for code coverage issues, dynamic dispatch, scope, asynchronous computation and promises, and Jasmine:

Dynamic languages like JavaScript are less tied to an explicit interface, although usage still defines a de-facto interface. Jasmine’s spyOn functionality provides the full range of test-double variations by substituting a test-instrumented recording object for the function being replaced and letting you define how it behaves when invoked.

What I learned most from, though, was the higher-level advice, like “test the error paths”:

Many people stop before testing the error handling of their software. Unfortunately, much of the perception of software quality is forged not by whether the software fails, because it eventually will, but by how it handles those failures.

And the following point reinforced the way I work, mixing “spec”-like tests with integration and unit tests:

I prefer to test each defect, at least at the unit level.

Stephen sometimes talks about how most of our programming languages and tools aren’t designed specifically to support testing. One idea that runs through the book is about how to design code to be testable, and writing decoupled tests is part of this. Balancing encapsulation with access to internal state for testing is something that I think most of us struggle with.

As we have seen, verification of those internal representations sometimes occurs through interface access. Where null safety is either guaranteed by the representation or ignored by the test, we see code like A.getB().getC().getD(). Despite the blatant violation of the Principle of Least Knowledge, we frequently find code like this (of course we do not write it ourselves!) in tests and production.

Chapter 12, “Use Existing Seams”, left an impression on me: it’s about finding places in code that allow you to take control of that code so you can bring it under test. Since reading that chapter I seem to have found more convenient places to grapple Express applications and test them more thoroughly.

If you write tests, but find they become unmaintainable over time, then this book may guide you to create less entangled tests. It mixes material for dynamic languages like JavaScript with statically typed languages such as C++ and Java. I find this useful as someone who writes a lot of JavaScript but works alongside Objective-C and .NET developers.

Stephen has combined years of experience into a rare, testing-focused book, that relates principles that we use to write well-designed code to the problems inherent in automated testing.

Node Roundup: npm Trademark, Cha

05 Mar 2014 | By Alex Young | Comments | Tags node modules npm

Charlie Robbins and the npm Trademark

Charlie Robbins, who you may know as indexzero, recently published An open letter to the Node community:

Being part of a community means listening to it. After listening to the deep concern that has been voiced over our application to register the npm trademark we have decided to withdraw the application from the USPTO. I want to apologize for the way that our message came across. We hastily reacted to something that clearly needed more thought behind it.

Nodejitsu previously announced its plan to register the npm trademark, and although it seems to have been done with the best intentions, the confusion that arose was understandable.

Charlie signs off the post by saying the Node community needs a non-profit “foundation” that helps manage Node:

There is little beyond GitHub issues and discussions as to the questions like roadmap and long term plans. A non-profit organization could get more of this tedious work done by having more dedicated resources instead of relying on individual community members to go it alone.

Many of us have seen something similar happen in companies we’ve worked at: we use GitHub issues and small, informal groups to manage things quite happily until the business grows and management mistakes become more dangerous.

Recently we’ve seen the arrival of npm, Inc. and TJ Fontaine taking over Node, so things are changing. I’m not sure how a non-profit Node Foundation fits into this, but as someone who depends on Node for his career I think Charlie has raised some important questions that need addressing.

Cha

Cha (GitHub: chajs / cha, License: MIT, npm: cha) is a module for defining tasks and chaining them together. It can be used to define build scripts, or whatever else you’d like to automate, and the author shows how to tie them to npm scripts as well.

This is what the basic API looks like:

var cha = require('cha')

// Set a watcher.
cha.watch = require('./tasks/watch')

cha.in('read', require('./tasks/read'))
   .in('cat', require('./tasks/cat'))
   .in('coffee', require('./tasks/coffee'))
   .in('write', require('./tasks/write'))
   .in('uglifyjs', require('./tasks/uglifyjs'))
   .in('copy', require('./tasks/copy'))

There is a specification for tasks, and it allows text-based “expressions” to be defined that can glob files and do other cool stuff with less syntax:

cha(['glob:./fixtures/js/*.js', 'request:http://underscorejs.org/underscore-min.js'])

8 Bit Procedural Sound Generation, Flappy Bird 2

04 Mar 2014 | By Alex Young | Comments | Tags games audio

8 Bit Procedural Sound Generation

8 Bit Procedural Sound Generation by Jerome Etienne is a post about generating sounds using jsfx. Jerome’s demo shows visualisations for sounds that might be useful in a game.

He also introduces the webaudiox WebAudio API helpers, which includes methods for converting from byte arrays to floating point numbers.

Flappy Bird 2

Thomas Palef sent in part 2 of his Flappy Bird tutorial:

In the last HTML5 tutorial we did a simple Flappy Bird clone. It was nice, but quite boring to play. We will see in this post how to add animations and sounds to our Flappy Bird clone. These won’t change the game’s mechanics, but the game will feel a lot more interesting.

There’s also an article about his experiences on the IndieGames.com blog.

I think games are an interesting way of teaching full stack development – if you can hook a game like this up to a server-side Node project that stores player details, scores, and perhaps multiplayer, then it covers a wide range of skills.

Some Atom-related Node Packages

03 Mar 2014 | By Alex Young | Comments | Tags node editors tools

Hugh Kennedy sent in npm-install, an Atom package that automatically installs and saves the npm modules required by the current file.

To use it, you just need to open the Command Palette and type npm install. The Command Palette can be opened with cmd-shift-p.

There’s another npm-related Atom package as well: npm-docs by Jonathan Clem. This allows you to use the Command Palette to easily look up a module’s readme or homepage. This is the kind of thing I do all the time when I write about Node on DailyJS.

Tyler Benziger kindly sent me the Atom invitation, but I’ve only used it for a few small things so far. I’ve been trying to figure out how it fits in with the Node community, and whether or not it’ll be popular with DailyJS readers.

If you look at the screenshot in this post you might notice that I’ve got a folder open with lots of items. That’s DailyJS’s 1127 posts, which Atom handles without any trouble.

The Atom Editor

28 Feb 2014 | By Alex Young | Comments | Tags node editors tools

“Alex, you love talking about text editors, why don’t you write about that GitHub Atom project?”

Ah, text editors. Arguably our most important tools, yet we’re more fickle about them than our choice of programming language, web framework, and preferred caffeinated beverage. Atom is made by GitHub. It’s built using dozens of related open source projects, and some of these include “packages” that extend the editor.

All of the packages seem to be written with CoffeeScript, but before you get your pitchforks out, take a look at this thread:

You can use plain JS to develop packages.

Phew. The reason I wanted to write about Atom on DailyJS is that it’s built using Node and a web view. The fact that it embraces Node means it should be easier for us to extend. It also claims to have TextMate support, and can use native extensions through Node C and C++ modules.

Parts of Atom are native as well, so it should feel desktop-like rather than web-based:

Atom is a desktop application based on web technologies. Like other desktop apps, it has its own icon in the dock, native menus and dialogs, and full access to the file system.

I’ve seen a few generations of desktop text editors come and go: BBEdit, TextMate, and Sublime Text. I expect the excitement around Atom to follow a similar pattern. I’m going to write about interesting Atom packages if I think they’re of interest to DailyJS readers (please send them in), but you’ll still find me happily plodding on with Vim. And vin (rouge), but that’s another story.

Nodyn: No Dice

27 Feb 2014 | By Alex Young | Comments | Tags node java

Nodyn (GitHub: projectodd / nodyn, License: Apache 2.0) is a Node API-compatible JVM-based project. That means you can technically use Java libraries from within Node programs.

I’ve been using it on my Mac, running Mavericks. Here’s what I had to do to get it to work:

brew install maven
git clone https://github.com/projectodd/nodyn.git
cd nodyn
export JAVA_HOME=`/usr/libexec/java_home`
mvn install -Dmaven.test.skip=true
cd nodyn-standalone/target
java -jar nodyn-standalone.jar --console

It took me a while to figure all of this out. I already had Homebrew installed, but I didn’t have Maven. I’m an amateur Android developer, so I only ever really write Java through Google’s recommended IDE tools.

Maven installed without too much trouble, except I found it used the wrong version of Java. The export JAVA_HOME line makes Maven use the right version. I’m not sure why this is required because java -version showed 1.7, but for some reason Maven was building Nodyn with 1.6, which generated a long and inscrutable error message.

The mvn install -Dmaven.test.skip=true line builds Nodyn, and skips tests. I wanted to skip the tests because they seemed to hang on this line:

Starting test: src/test/resources/os|os_test.js|testFreemem

Once I built it, I ran a small program that reads its own source and prints it to stdout:

var fs = require('fs');

console.log('I can print my own code');

fs.readFile('test.js', 'utf8', function(err, text) {
  if (err) console.error(err);
  console.log(text);
  console.log('When I work correctly');
});

This printed the following output, which is incorrect:

log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I can print my own code

The expected output is this:

I can print my own code
var fs = require('fs');

console.log('I can print my own code');

fs.readFile('test.js', 'utf8', function(err, text) {
  if (err) console.error(err);
  console.log(text);
  console.log('When I work correctly');
});

When I work correctly

It seems like Node API compatibility isn’t quite there yet. I also noticed it takes much longer than Node to start up, but I seem to remember JRuby developers complaining about startup time, so that might be down to how the JVM starts. It probably doesn’t matter much for long-running server processes, but I quite like the fact that Node programs start up quickly.

If you’re a Java programmer Nodyn might seem cool, but so far I’ve struggled with it. Despite my Maven issues, the project looks neatly organised and carefully written, so I’m going to keep watching it.

Node Roundup: No More Force Publish, Counterpart, mock-fs

26 Feb 2014 | By Alex Young | Comments | Tags node modules npm testing internationalisation

No More Force Publish

Isaac Z. Schlueter wrote on npm’s blog that publish -f will no longer work:

If you publish foo@1.2.3, you can still un-publish foo@1.2.3. But then, you will not be able to publish something else to that same package identifier and version. Ever.

The common wisdom is that changing the code a version number refers to is dangerous, so it’s better to publish a new version. If you’re a module author you may find this frustrating: what if you just released something with a dangerous security flaw? In cases like this it may be best to unpublish the broken version and publish a new, fixed one.

Counterpart

Counterpart (GitHub: martinandert / counterpart, License: MIT, npm: counterpart) by Martin Andert is an internationalisation module based on Ruby’s I18n gem:

translate('damals.about_x_hours_ago.one')          // => 'about one hour ago'
translate(['damals', 'about_x_hours_ago', 'one'])  // => 'about one hour ago'
translate(['damals', 'about_x_hours_ago.one'])     // => 'about one hour ago'

You can write translation documents using JSON. Features include interpolation, pluralisation, and default fallbacks.
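
Here’s a sketch of basic usage based on my reading of the readme (the key and values are made up):

var translate = require('counterpart');

translate.registerTranslations('en', {
  greeting: 'Hello, %(name)s!'
});

translate('greeting', { name: 'Martin' }); // => 'Hello, Martin!'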

mock-fs

mock-fs (GitHub: tschaub / mock-fs, License: MIT, npm: mock-fs) by Tim Schaub is an API-compatible version of Node’s fs module that essentially allows you to temporarily use an in-memory filesystem.

It provides a mock function that accepts a specification of the files you want to mock:

var mock = require('mock-fs');

mock({
  'path/to/fake/dir': {
    'some-file.txt': 'file content here',
    'empty-dir': {/** empty directory */}
  },
  'path/to/some.png': new Buffer([8, 6, 7, 5, 3, 0, 9]),
  'some/other/path': {/** another empty directory */}
});

You might find this useful if you want to write tests that avoid touching real files.
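
For example, while the mock above is in effect, ordinary fs calls see the fake files until you restore the real filesystem:

var fs = require('fs');

console.log(fs.readFileSync('path/to/fake/dir/some-file.txt', 'utf8'));
// => 'file content here'

mock.restore(); // put the real filesystem back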

Matter.js

25 Feb 2014 | By Alex Young | Comments | Tags webgl html5 physics

The Matter.js Wrecking Ball demo.

Matter.js (GitHub: liabru / matter-js, License: MIT) by Liam Brummitt is a stable and flexible rigid body physics engine for browsers. The author describes it as an alpha project that came about as a result of learning game programming.

If you’re interested in reading more about physics for game programming, Liam has collected some useful resources in Game physics for beginners.

Matter.js uses time-corrected Verlet integration, adaptive grid broad-phase detection, AABB mid-phase detection, SAT narrow-phase detection, and other algorithms for managing collisions and physical simulation. More well-known engines like Box2D support these features, but if you take a look at some of the classes Liam has written you’ll see how clean and readable his version is.

I’ve been looking at the source to see how to use it, and the API seems friendly to me:

var Bodies = Matter.Bodies;
var Engine = Matter.Engine;
var World = Matter.World;

// container is the DOM element to render into
var engine = Engine.create(container, options);

World.addBody(engine.world, Bodies.rectangle(300, 180, 700, 20, { isStatic: true, angle: Math.PI * 0.06 }));
World.addBody(engine.world, Bodies.rectangle(300, 70, 40, 40, { friction: 0.001 }));
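
Presumably you’d then start the simulation; reading the source suggests something like the following, though treat the method name as an assumption:

Engine.run(engine);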

The demo is cool, so try it out if you want to experiment!

Angular Selection Model, Normalized Particle Swarm Optimization

24 Feb 2014 | By Alex Young | Comments | Tags graphics optimisation angularjs

Angular Selection Model

Angular Selection Model (GitHub: jtrussell / angular-selection-model, License: MIT) by Justin Russell is an AngularJS directive for managing selections of items in lists and tables. It’s indifferent to how data is presented, and only tracks what items are selected.

This example allows a text input to filter a list of items, and also allow the user to select items from the list:

<input type="text" ng-model="fancyfilter" />

<table>
  <thead>
    <tr>
      <th></th>
      <th>#</th>
      <th>Label</th>
      <th>Value</th>
    </tr>
  </thead>
  <tr ng-repeat="item in fancy.bag | filter:fancyfilter"
      selection-model
      selection-model-type="checkbox"
      selection-model-mode="multiple-additive"
      selection-model-selected-class="foobar">
    <td><input type="checkbox"></td>
    <td>1</td>
    <td>{{item.label}}</td>
    <td>{{item.value}}</td>
  </tr>
</table>

The directive does a lot of things behind the scenes to make this work naturally. An internal read-only list is used to represent selected items, and there’s a provider for setting things like the selected attribute and class name assigned to selected items at a global level. Checkboxes are automatically managed, including support for multiple selection.

Justin has included tests, documentation, and examples.

Normalized Particle Swarm Optimization

Swarm optimisation

Adrian Seeley sent in this gist: JavaScript Normalized Particle Swarm Optimization Implementation. If you want to try it, just click “Download Gist” then open the HTML file locally.

The reason I wanted to write about it was he decided to license it as “Abandoned”, so rather than letting it languish I thought I’d share it in case someone finds it useful.

Here’s how Adrian described the project:

Particle swarm optimization is an incredibly viable machine learning structure, but is often implemented using database oriented designs splayed across multiple files in c++ or java making it very inaccessible to newcomers. I present a simple, unoptimized, and easy to follow javascript implementation of normalized particle swarm optimization, making use of full descriptive variable names entirely encapsulated in a single inlined function.

ViziCities, Flappy Bird in HTML5

21 Feb 2014 | By Alex Young | Comments | Tags webgl maps games html5

ViziCities

ViziCities (Demo, GitHub: robhawkes / vizicities, License: MIT) by Robin Hawkes and Peter Smart is a WebGL 3D city and data visualisation platform. It uses OpenStreetMap, and aims to overlay animated data views with 3D city layouts.

The developers have created some visualisations of social data, traffic simulation, and public transport.

It uses Three.js, D3, Grunt, and some stalwarts like Moment.js and Underscore.js.

Flappy Bird in HTML5 with Phaser

Thomas Palef, who has been making one HTML5 game per week, has created a tutorial for making Flappy Bird in HTML5 and Phaser. The cool thing about the tutorial is that he reduces Flappy Bird to its basic parts: collision detection, scoring, and the player controls. Instead of worrying about whether or not the graphics are stolen from Mario, you can just follow along and learn how a game like this works.

JavaScript Promises ... In Wicked Detail

20 Feb 2014 | By Matt Greer | Comments | Tags promises tutorial
This post is by Matt Greer. You can find the original here: mattgreer.org/articles/promises-in-wicked-detail/

I’ve been using Promises in my JavaScript code for a while now. They can be a little brain bending at first. I now use them pretty effectively, but when it came down to it, I didn’t fully understand how they work. This article is my resolution to that. If you stick around until the end, you should understand Promises well too.

We will be incrementally creating a Promise implementation that by the end will mostly meet the Promise/A+ spec, and understand how promises meet the needs of asynchronous programming along the way. This article assumes you already have some familiarity with Promises. If you don’t, promisejs.org is a good site to check out.

Why?

Why bother to understand Promises to this level of detail? Really understanding how something works can increase your ability to take advantage of it, and debug it more successfully when things go wrong. I was inspired to write this article when a coworker and I got stumped on a tricky Promise scenario. Had I known then what I know now, we wouldn’t have gotten stumped.

The Simplest Use Case

Let’s begin our Promise implementation as simple as can be. We want to go from this

doSomething(function(value) {
  console.log('Got a value:', value);
});

to this

doSomething().then(function(value) {
  console.log('Got a value:', value);
});

To do this, we just need to change doSomething() from this

function doSomething(callback) {
  var value = 42;
  callback(value);
}

to this “Promise” based solution

function doSomething() {
  return {
    then: function(callback) {
      var value = 42;
      callback(value);
    }
  };
}
fiddle

This is just a little sugar for the callback pattern, and pretty pointless sugar so far. But it’s a start, and we’ve already hit upon a core idea behind Promises

Promises capture the notion of an eventual value into an object

This is the main reason Promises are so interesting. Once the concept of eventuality is captured like this, we can begin to do some very powerful things. We’ll explore this more later on.

Defining the Promise type

This simple object literal isn’t going to hold up. Let’s define an actual Promise type that we’ll be able to expand upon

function Promise(fn) {
  var callback = null;
  this.then = function(cb) {
    callback = cb;
  };

  function resolve(value) {
    callback(value);
  }

  fn(resolve);
}

and reimplement doSomething() to use it

function doSomething() {
  return new Promise(function(resolve) {
    var value = 42;
    resolve(value);
  });
}

There is a problem here. If you trace through the execution, you’ll see that resolve() gets called before then(), which means callback will be null. Let’s hide this problem in a little hack involving setTimeout

function Promise(fn) {
  var callback = null;
  this.then = function(cb) {
    callback = cb;
  };

  function resolve(value) {
    // force callback to be called in the next
    // iteration of the event loop, giving
    // callback a chance to be set by then()
    setTimeout(function() {
      callback(value);
    }, 1);
  }

  fn(resolve);
}
fiddle

With the hack in place, this code now works … sort of.

This Code is Brittle and Bad

Our naive, poor Promise implementation must use asynchronicity to work. It’s easy to make it fail again: just call then() asynchronously and we’re right back to the callback being null. Why am I setting you up for failure so soon? Because the above implementation has the advantage of being pretty easy to wrap your head around. then() and resolve() won’t go away. They are key concepts in Promises.

Promises have State

Our brittle code above revealed something unexpected: Promises have state. We need to know what state they are in before proceeding, and make sure we move through the states correctly. Doing so gets rid of the brittleness.

  • A Promise can be pending waiting for a value, or resolved with a value.
  • Once a Promise resolves to a value, it will always remain at that value and never resolve again.

(A Promise can also be rejected, but we’ll get to error handling later)

Let’s explicitly track the state inside of our implementation, which will allow us to do away with our hack

function Promise(fn) {
  var state = 'pending';
  var value;
  var deferred;

  function resolve(newValue) {
    value = newValue;
    state = 'resolved';

    if(deferred) {
      handle(deferred);
    }
  }

  function handle(onResolved) {
    if(state === 'pending') {
      deferred = onResolved;
      return;
    }

    onResolved(value);
  }

  this.then = function(onResolved) {
    handle(onResolved);
  };

  fn(resolve);
}
fiddle

It’s getting more complicated, but the caller can invoke then() whenever they want, and the callee can invoke resolve() whenever they want. It fully works with synchronous or asynchronous code.

This is because of the state flag. Both then() and resolve() hand off to the new method handle(), which will do one of two things depending on the situation:

  • The caller calls then() before the callee calls resolve(): there is no value ready to hand back, so the state will be pending and we hold onto the caller’s callback. Later, when resolve() gets called, we invoke the callback and send the value on its way.
  • The callee calls resolve() before the caller calls then(): in this case we hold onto the resulting value, and once then() gets called we hand it back.

Notice setTimeout went away? That’s temporary; it will be coming back. But one thing at a time.

With Promises, the order in which we work with them doesn't matter. We are free to call then() and resolve() whenever they suit our purposes. This is one of the powerful advantages of capturing the notion of eventual results into an object

We still have quite a few more things in the spec to implement, but our Promises are already pretty powerful. This system allows us to call then() as many times as we want; we will always get the same value back

var promise = doSomething();

promise.then(function(value) {
  console.log('Got a value:', value);
});

promise.then(function(value) {
  console.log('Got the same value again:', value);
});

This is not completely true for the Promise implementation in this article. If the opposite happens, i.e. the caller calls then() multiple times before resolve() is called, only the last call to then() will be honored. The fix is to keep a running list of deferreds inside the Promise instead of just one. I decided not to do that to keep the article simpler; it's long enough as it is :)
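
For the curious, that fix would look something like this sketch, adapted from the implementation above:

var deferreds = [];

function resolve(newValue) {
  value = newValue;
  state = 'resolved';

  deferreds.forEach(handle);
  deferreds = [];
}

function handle(onResolved) {
  if(state === 'pending') {
    deferreds.push(onResolved);
    return;
  }

  onResolved(value);
}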

Chaining Promises

Since Promises capture the notion of asynchronicity in an object, we can chain them, map them, and run them in parallel or sequentially, among other useful things. Code like the following is very common with Promises

getSomeData()
.then(filterTheData)
.then(processTheData)
.then(displayTheData);

getSomeData() is returning a Promise, as evidenced by the call to then(), but the result of that first then() must also be a Promise, as we call then() again (and yet again!). That’s exactly what happens: if we can convince then() to return a Promise, things get more interesting.

then() always returns a Promise

Here is our Promise type with chaining added in

function Promise(fn) {
  var state = 'pending';
  var value;
  var deferred = null;

  function resolve(newValue) {
    value = newValue;
    state = 'resolved';

    if(deferred) {
      handle(deferred);
    }
  }

  function handle(handler) {
    if(state === 'pending') {
      deferred = handler;
      return;
    }

    if(!handler.onResolved) {
      handler.resolve(value);
      return;
    }

    var ret = handler.onResolved(value);
    handler.resolve(ret);
  }

  this.then = function(onResolved) {
    return new Promise(function(resolve) {
      handle({
        onResolved: onResolved,
        resolve: resolve
      });
    });
  };

  fn(resolve);
}
fiddle

Hoo, it’s getting a little squirrelly. Aren’t you glad we’re building this up slowly? The real key here is that then() is returning a new Promise.

Since then() always returns a new Promise object, there will always be at least one Promise object that gets created, resolved and then ignored, which can be seen as wasteful. The callback approach does not have this problem. Another ding against Promises. You can start to appreciate why some in the JavaScript community have shunned them.

What value does the second Promise resolve to? It receives the return value of the first. This happens at the bottom of handle(): the handler object carries around both an onResolved callback and a reference to resolve(). There is more than one copy of resolve() floating around; each Promise gets its own copy, and a closure for it to run within. This is the bridge from the first Promise to the second. We are concluding the first Promise at this line:

var ret = handler.onResolved(value);

In the examples I’ve been using here, handler.onResolved is

function(value) {
  console.log("Got a value:", value);
}

in other words, it’s what was passed into the first call to then(). The return value of that first handler is used to resolve the second Promise. Thus chaining is accomplished

doSomething().then(function(result) {
  console.log('first result', result);
  return 88;
}).then(function(secondResult) {
  console.log('second result', secondResult);
});

// the output is
//
// first result 42
// second result 88


doSomething().then(function(result) {
  console.log('first result', result);
  // not explicitly returning anything
}).then(function(secondResult) {
  console.log('second result', secondResult);
});

// now the output is
//
// first result 42
// second result undefined

Since then() always returns a new Promise, this chaining can go as deep as we like

doSomething().then(function(result) {
  console.log('first result', result);
  return 88;
}).then(function(secondResult) {
  console.log('second result', secondResult);
  return 99;
}).then(function(thirdResult) {
  console.log('third result', thirdResult);
  return 200;
}).then(function(fourthResult) {
  // on and on...
});

What if in the above example we wanted all the results in the end? With chaining, we would need to manually build up the result ourselves

doSomething().then(function(result) {
  var results = [result];
  results.push(88);
  return results;
}).then(function(results) {
  results.push(99);
  return results;
}).then(function(results) {
  console.log(results.join(', '));
});

// the output is
//
// 42, 88, 99

Promises always resolve to one value. If you need to pass more than one value along, you need to create a multi-value in some fashion (an array, an object, concatenated strings, etc.)

A potentially better way is to use a Promise library’s all() method or any number of other utility methods that increase the usefulness of Promises, which I’ll leave to you to go and discover.

The Callback is Optional

The callback to then() is not strictly required. If you leave it off, the Promise resolves to the same value as the previous Promise

doSomething().then().then(function(result) {
  console.log('got a result', result);
});

// the output is
//
// got a result 42

You can see this inside of handle(), where if there is no callback, it simply resolves the Promise and exits. value is still the value of the previous Promise.

if(!handler.onResolved) {
  handler.resolve(value);
  return;
}

Returning Promises Inside the Chain

Our chaining implementation is a bit naive. It’s blindly passing the resolved values down the line. What if one of the resolved values is a Promise? For example

doSomething().then(function(result) {
  // doSomethingElse returns a Promise
  return doSomethingElse(result);
}).then(function(finalResult) {
  console.log("the final result is", finalResult);
});

As it stands now, the above won’t do what we want. finalResult won’t actually be a fully resolved value; it will instead be a Promise. To get the intended result, we’d need to do

doSomething().then(function(result) {
  // doSomethingElse returns a Promise
  return doSomethingElse(result);
}).then(function(anotherPromise) {
  anotherPromise.then(function(finalResult) {
    console.log("the final result is", finalResult);
  });
});

Who wants that crud in their code? Let’s have the Promise implementation seamlessly handle this for us. This is simple to do: inside resolve(), just add a special case if the resolved value is a Promise

function resolve(newValue) {
  if(newValue && typeof newValue.then === 'function') {
    newValue.then(resolve);
    return;
  }
  state = 'resolved';
  value = newValue;

  if(deferred) {
    handle(deferred);
  }
}
fiddle

We’ll keep calling resolve() recursively as long as we get a Promise back. Once it’s no longer a Promise, we proceed as before.

It is possible for this to be an infinite loop. The Promise/A+ spec recommends implementations detect infinite loops, but it's not required.
Also worth pointing out, this implementation does not meet the spec. Nor will we fully meet the spec in this regard in the article. For the more curious, I recommend reading the Promise resolution procedure.

Notice how loose the check is to see if newValue is a Promise? We are only looking for a then() method. This duck typing is intentional; it allows different Promise implementations to interoperate with each other. It’s actually quite common for Promise libraries to intermingle, as the third-party libraries you use can each rely on different Promise implementations.

Different Promise implementations can interoperate with each other, as long as they all follow the spec properly.

With chaining in place, our implementation is pretty complete. But we’ve completely ignored error handling.

Rejecting Promises

When something goes wrong during the course of a Promise, it needs to be rejected with a reason. How does the caller know when this happens? They can find out by passing in a second callback to then()

doSomething().then(function(value) {
  console.log('Success!', value);
}, function(error) {
  console.log('Uh oh', error);
});

As mentioned earlier, the Promise will transition from pending to either resolved or rejected, never both. In other words, only one of the above callbacks ever gets called.

Promises enable rejection by means of reject(), the evil twin of resolve(). Here is doSomething() with error handling support added

function doSomething() {
  return new Promise(function(resolve, reject) {
    var result = somehowGetTheValue(); 
    if(result.error) {
      reject(result.error);
    } else {
      resolve(result.value);
    }
  });
}

Inside the Promise implementation, we need to account for rejection. As soon as a Promise is rejected, all downstream Promises from it also need to be rejected.

Let’s see the full Promise implementation again, this time with rejection support added

function Promise(fn) {
  var state = 'pending';
  var value;
  var deferred = null;

  function resolve(newValue) {
    if(newValue && typeof newValue.then === 'function') {
      newValue.then(resolve, reject);
      return;
    }
    state = 'resolved';
    value = newValue;

    if(deferred) {
      handle(deferred);
    }
  }

  function reject(reason) {
    state = 'rejected';
    value = reason;

    if(deferred) {
      handle(deferred);
    }
  }

  function handle(handler) {
    if(state === 'pending') {
      deferred = handler;
      return;
    }

    var handlerCallback;

    if(state === 'resolved') {
      handlerCallback = handler.onResolved;
    } else {
      handlerCallback = handler.onRejected;
    }

    if(!handlerCallback) {
      if(state === 'resolved') {
        handler.resolve(value);
      } else {
        handler.reject(value);
      }

      return;
    }

    var ret = handlerCallback(value);
    handler.resolve(ret);
  }

  this.then = function(onResolved, onRejected) {
    return new Promise(function(resolve, reject) {
      handle({
        onResolved: onResolved,
        onRejected: onRejected,
        resolve: resolve,
        reject: reject
      });
    });
  };

  fn(resolve, reject);
}
fiddle

Other than the addition of reject() itself, handle() also has to be aware of rejection. Within handle(), either the rejection path or the resolution path is taken depending on the value of state. This value of state gets pushed into the next Promise, because calling the next Promise’s resolve() or reject() sets its state accordingly.

When using Promises, it's very easy to omit the error callback. But if you do, you'll never get any indication something went wrong. At the very least, the final Promise in your chain should have an error callback. See the section further down about swallowed errors for more info.

Unexpected Errors Should Also Lead to Rejection

So far our error handling only accounts for known errors. It’s possible an unhandled exception will happen, completely ruining everything. It’s essential that the Promise implementation catch these exceptions and reject accordingly.

This means that resolve() should get wrapped in a try/catch block

function resolve(newValue) {
  try {
    // ... as before
  } catch(e) {
    reject(e);
  }
}

It’s also important to make sure the callbacks given to us by the caller don’t throw unhandled exceptions. These callbacks are called in handle(), so we end up with

function handle(handler) {
  // ... as before

  var ret;
  try {
    ret = handlerCallback(value);
  } catch(e) {
    handler.reject(e);
    return;
  }

  handler.resolve(ret);
}

Promises can Swallow Errors!

It's possible for a misunderstanding of Promises to lead to completely swallowed errors! This trips people up a lot

Consider this example

function getSomeJson() {
  return new Promise(function(resolve, reject) {
    var badJson = "<div>uh oh, this is not JSON at all!</div>";
    resolve(badJson);
  });
}

getSomeJson().then(function(json) {
  var obj = JSON.parse(json);
  console.log(obj);
}, function(error) {
  console.log('uh oh', error);
});
fiddle

What is going to happen here? Our callback inside then() is expecting some valid JSON. So it naively tries to parse it, which leads to an exception. But we have an error callback, so we’re good, right?

Nope. That error callback will not be invoked! If you run this example via the above fiddle, you will get no output at all. No errors, no nothing. Pure chilling silence.

Why is this? Since the unhandled exception took place in our callback to then(), it is being caught inside of handle(). This causes handle() to reject the Promise that then() returned, not the Promise we are already responding to, as that Promise has already properly resolved.

Always remember, inside of then()'s callback, the Promise you are responding to has already resolved. The result of your callback will have no influence on this Promise

If you want to capture the above error, you need an error callback further downstream

getSomeJson().then(function(json) {
  var obj = JSON.parse(json);
  console.log(obj);
}).then(null, function(error) {
  console.log("an error occured: ", error);
});

Now we will properly log the error.

In my experience, this is the biggest pitfall of Promises. Read onto the next section for a potentially better solution

done() to the Rescue

Most (but not all) Promise libraries have a done() method. It’s very similar to then(), except it avoids the above pitfalls of then().

done() can be called whenever then() can. The key differences are that it does not return a Promise, and any unhandled exception inside done() is not captured by the Promise implementation. In other words, done() represents the point when the entire Promise chain has fully resolved. Our getSomeJson() example can be made more robust using done()

getSomeJson().done(function(json) {
  // when this throws, it won't be swallowed
  var obj = JSON.parse(json);
  console.log(obj);
});

done() also takes an error callback, done(callback, errback), just like then() does, and since the entire Promise resolution is, well, done, you are assured of being informed of any errors that erupted.

done() is not part of the Promise/A+ spec (at least not yet), so your Promise library of choice might not have it.

Promise Resolution Needs to be Async

Early in the article we cheated a bit by using setTimeout. Once we fixed that hack, we’ve not used setTimeout since. But the truth is the Promise/A+ spec requires that Promise resolution happen asynchronously. Meeting this requirement is simple: we just need to wrap most of handle()’s implementation inside a setTimeout call

function handle(handler) {
  if(state === 'pending') {
    deferred = handler;
    return;
  }
  setTimeout(function() {
    // ... as before
  }, 1);
}

This is all that is needed. In truth, real Promise libraries don’t tend to use setTimeout: a Node-oriented library will probably use process.nextTick, a browser-oriented one might use the new setImmediate or a setImmediate shim (so far only IE supports setImmediate natively), or an asynchronous library such as Kris Kowal’s asap (Kris Kowal also wrote Q, a popular Promise library).
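A rough sketch of that kind of environment detection (illustrative only; a real library such as asap handles many more edge cases):

var defer;
if (typeof process !== 'undefined' && typeof process.nextTick === 'function') {
  // Node: run the function on the next tick of the event loop
  defer = function(fn) { process.nextTick(fn); };
} else if (typeof setImmediate === 'function') {
  // IE 10+, or a setImmediate shim
  defer = setImmediate;
} else {
  // everywhere else, fall back to setTimeout
  defer = function(fn) { setTimeout(fn, 0); };
}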

Why Is This Async Requirement in the Spec?

It allows for consistency and reliable execution flow. Consider this contrived example:

var promise = doAnOperation();
invokeSomething();
promise.then(wrapItAllUp);
invokeSomethingElse();

What is the call flow here? Based on the naming you’d probably guess it is invokeSomething() -> invokeSomethingElse() -> wrapItAllUp(). But this all depends on whether the promise resolves synchronously or asynchronously in our current implementation. If doAnOperation() works asynchronously, then that is the call flow. But if it works synchronously, the call flow is actually invokeSomething() -> wrapItAllUp() -> invokeSomethingElse(), which is probably bad.

To get around this, Promises always resolve asynchronously, even if they don’t have to. It reduces surprise and allows people to use Promises without having to take into consideration asynchronicity when reasoning about their code.

Promises always require at least one more iteration of the event loop to resolve. This is not necessarily true of the standard callback approach.
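You can see the guarantee directly with any spec-compliant implementation, including the one we’ve built:

var promise = new Promise(function(resolve) {
  resolve('done'); // resolved synchronously...
});

promise.then(function(value) {
  console.log('second: ' + value);
});

console.log('first'); // ...yet this logs first, then the callback runs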

Before We Wrap Up … then/promise

There are many full-featured Promise libraries out there. The then organization’s promise library takes a simpler approach: it is meant to be a minimal implementation that meets the spec and nothing more. If you take a look at their implementation, it should look quite familiar: then/promise was the basis of the code for this article, and we’ve almost built up the same Promise implementation. Thanks to Nathan Zadoks and Forbes Lindsay for their great library and work on JavaScript Promises. Forbes Lindsay is also the guy behind the promisejs.org site mentioned at the start.

There are some differences between the real implementation and what is in this article, because there are more details in the Promises/A+ spec that I have not addressed. I recommend reading the spec; it is short and pretty straightforward.

Conclusion

If you made it this far, then thanks for reading! We’ve covered the core of Promises, which is the only thing the spec addresses. Most implementations offer much more functionality, such as all(), spread(), race(), denodeify(), and more. I recommend browsing the API docs for Bluebird to see everything that is possible with Promises.
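For instance, all() (part of the ES6 Promise API as well as Bluebird) waits on several Promises at once:

Promise.all([getSomeJson(), getSomeJson()]).then(function(results) {
  // resolves with an array of results once every Promise has resolved
  console.log(results.length); // 2
}, function(error) {
  // rejects as soon as any one of the Promises rejects
  console.log('one of them failed: ', error);
});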

Once I came to understand how Promises worked, along with their caveats, I came to really like them. They have led to very clean and elegant code in my projects. There’s so much more to talk about, too; this article is just the beginning!

If you enjoyed this, you should follow me on Twitter to find out when I write another guide like this.

Further Reading

More great articles on Promises:

Found a mistake? If I made an error and you want to let me know, please email me or file an issue. Thanks!

Node Roundup: 0.10.26, DozerJS, Keybase

19 Feb 2014 | By Alex Young | Comments | Tags node modules

Node 0.10.26

Node 0.10.26 is out. It includes updates for V8, npm, and uv, and fixes for several core modules, including crypto, fs, and net.

DozerJS

DozerJS (GitHub: DozerJS / dozerjs, License: MIT, npm: dozerjs) is an Express-based project that aims to make it easier to develop MVC-style REST-based applications.

It looks like the focus is on simplifying the server-side implementation so you can focus on the UI. The conventions used for the server-side structure seem to follow the popular wisdom: route separation, simple models with validation, and HTTP verbs for CRUD operations.

Keybase

Keybase (GitHub: keybase / node-installer) is a public key sharing tool that you can install with npm: npm install -g keybase-installer. It allows you to associate several keys with a single identity:

In one command, Keybase has acquired maria’s public key, her keybase username, and her public identities, and confirmed they’re all her, using GnuPG to review a signed tweet and gist she posted.

I think it’s an extremely interesting project – the website is clear, and I like the idea of being able to confirm identities for collaborating with people online. Using npm to distribute the client seems like a smart approach.

AngularJS Infinite Scroll, Bindable.js

18 Feb 2014 | By Alex Young | Comments | Tags dom angularjs data-binding

AngularJS Infinite Scroll

This project made me wonder if AngularJS modules are the new jQuery plugins: lrInfiniteScroll (GitHub: lorenzofox3 / lrInfiniteScroll, License: MIT), by Laurent Renard. It’s a small and highly reusable library that is specifically tailored to work well with Angular’s API.

It attaches an event handler to an element that fires when the element has been scrolled to the bottom. You can use it to automatically load items on demand, Angular style:

<ul lr-infinite-scroll="myEventHandler" scroll-threshold="200" time-threshold="600">
  <li ng-repeat="item in myCollection">
</ul>
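A rough sketch of the scope side might look like this (the names and endpoint are illustrative, and I’m assuming the expression given to lr-infinite-scroll is invoked when the bottom is reached; check the project’s readme for the exact contract):

angular.module('app', ['lrInfiniteScroll'])
  .controller('ListCtrl', ['$scope', '$http', function($scope, $http) {
    $scope.myCollection = [];
    var page = 0;

    // invoked when the list scrolls near the bottom: fetch the next
    // page from a hypothetical /items endpoint and append it
    $scope.myEventHandler = function() {
      $http.get('/items?page=' + page++).then(function(res) {
        $scope.myCollection = $scope.myCollection.concat(res.data);
      });
    };
  }]);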

Bindable.js

Data binding libraries are often coupled to view objects. Bindable.js (GitHub: classdojo / bindable.js, License: MIT) from ClassDojo (and Craig Condon) is a more generic bidirectional data binding library. Bindable objects are constructed, and then properties can be bound to callbacks:

var person = new bindable.Object({
  name: 'craig',
  last: 'condon',
  location: {
    city: 'San Francisco'
  }
});

person.bind('location.zip', function(value) {
  // 94102
}).now();

// Triggers the binding
person.set('location.zip', '94102'); 

Bindable objects emit other events (change, watching, dispose), and there are methods for introspection (bindable.has), context (bindable.context), and triggering callbacks after they’re defined (.now).

Backbone.React.Component, backbone-dom-view

17 Feb 2014 | By Alex Young | Comments | Tags backbone dom views react

Backbone.React.Component

If you like Facebook’s React library and Backbone.js, then take a look at José Magalhães’ Backbone.React.Component (GitHub: magalhas / backbone-react-component, License: MIT, Bower: backbone-react-component). It acts as a bridge so you can bind models, collections, and components on both the client and server.

The author has made a blog example that you can run locally. The server uses Express, and keeps collections updated with data both on the server and in the browser.

backbone-dom-view

backbone-dom-view (GitHub: redexp / backbone-dom-view, License: MIT, Bower: backbone-dom-view) by Sergii Kliuchnyk is a view class for Backbone that allows selectors to be bound to helper methods using a shorthand notation that supports binding model fields, view events, and calculations.

Sergii’s example is a to-do model:

View = Backbone.DOMView.extend
  template:
    '.title':
      html: '@title'
    '.state':
      class:
        'done': '@is_done'

It has RequireJS support, tests, and documentation in the readme.