There's a post on the npm blog about npm and front-end packaging. It directly addresses something that lots of people are confused about: is it a good idea to use npm for front-end development, even if the project doesn't use CommonJS modules?
It's useful to see the npm team's thoughts on this, because some front-end developers seem to use npm out of convenience, while others use it because their modules work well in both Node and browsers. Writing code for browsers has different requirements from Node, and the post highlights the major ones before discussing potential solutions.
One future feature that will help front-end developers is ecosystems: a way to create subsets of packages around a common base. In theory you could place React, jQuery, Gulp, and Grunt packages into separate sub-repositories.
Another recommendation in the article is adding extra metadata to the package.json file. I've seen lots of packages do this already, so it seems to be increasingly popular.
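For example, bundlers like Browserify read a browser field that points at a browser-specific entry point, and some CSS tools read a style field. A sketch of what that metadata can look like (the package name and filenames here are made up):

```json
{
  "name": "my-widget",
  "version": "1.0.0",
  "main": "index.js",
  "browser": "browser.js",
  "style": "widget.css",
  "keywords": ["browser", "widget"]
}
```

Tools that don't understand a given field simply ignore it, which is what makes package.json a convenient place for this kind of front-end metadata.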
My preferred approach to front-end development is npm with npm scripts and Browserify, so it's encouraging to see that mentioned in the post:
We also think browserify is amazing and under-utilized, and an end-to-end solution that worked with it automatically at install time is a really, really interesting idea (check out browserify’s sweet handbook for really great docs on how to use it effectively).
Building dependencies from ./node_modules is a pain because every module seems to have a different entry point filename, so it would be really nice if more front-end developers used the main property in package.json.
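With main set, both require() and build tools know where a package starts. A minimal example (the package name is made up):

```json
{
  "name": "cool-module",
  "version": "1.0.0",
  "main": "lib/index.js"
}
```

With that in place, require('cool-module') resolves to ./node_modules/cool-module/lib/index.js, and build scripts don't have to guess the entry point filename.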
Recently a web service called Joker has been in the technology press. It's a web application that downloads torrents based on magnet URIs, and allows users to stream video content and fast forward to any point in the video. Of course, it's already been taken down, and it didn't always work as well as my description sells it, but it was an interesting experiment all the same.
node-webkit-builder makes it easy to build cross-platform desktop apps with node-webkit.
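A minimal usage sketch, assuming node-webkit-builder is installed locally (the ./app path and platform list are illustrative, and the module's API may have changed since):

```javascript
// Sketch only: builds a node-webkit app for several platforms.
// Assumes `npm install node-webkit-builder` has been run, and that
// ./app contains your app's files (including its package.json).
var NwBuilder = require('node-webkit-builder');

var nw = new NwBuilder({
  files: './app/**/**',
  platforms: ['osx', 'win', 'linux64']
});

nw.build().then(function () {
  console.log('Build complete: see the ./build directory');
}, function (err) {
  console.error(err);
});
```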
peerflix will stream torrents from a magnet link to an HTTP server that video players like VLC can connect to. It's based on torrent-stream, a stream-based Node torrent client with a friendly API.
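Command-line usage looks roughly like this (a sketch: the magnet URI is a placeholder, and --vlc assumes peerflix's option for launching VLC automatically):

```shell
# Install peerflix globally, then stream a magnet link.
# The magnet URI below is a placeholder, not a real torrent.
npm install -g peerflix
peerflix "magnet:?xt=urn:btih:..." --vlc
```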
torrent-stream uses lots of small modules to do the job. For example, magnet-uri can parse magnet URIs, and peer-wire-swarm is a swarm implementation.
Reading through these modules is like a showcase of Node's stream API. Academically they're fascinating, despite the obvious grey market connotations.
Which brings me to the TV/movie/music "PVR"-like applications. Media cataloguing doesn't have to be for pirated content: I have lots of DRM-free music, video, and books that could be presented in a better way. Combining my music purchases from Amazon, Apple, and Google into a cool desktop media browser powered by Node with a friendly RESTful API would be really fun and useful.
There's actually a node-webkit apps list, but I haven't yet found my perfect Node-powered media browser. Let me know if you've made your own Node media browser (or anything similar) and I'll check it out!
Enjoying Syro? I'll admit I was a little bit too obsessed with the cover art, which contains a list of Aphex Twin's expenses and a list of the audio hardware used on the album. So I thought it was pretty cool to see Spectroface by Daniel Rapp. This project uses the Web Audio API to recreate Aphex Twin's spectrogram face that was hidden in the Windowlicker b-side.
Daniel's website explains how spectrograms work, and the source code is heavily commented, so with a little bit of effort you should be able to follow it.
A spectrogram is a visual representation of the frequencies which make up a sound. Say you whistle a pure "middle C", then a spectrogram would light up right at 261.6 Hz, which is the corresponding frequency for that tone. Likewise, the "A" note makes the spectrogram turn bright white at 440 Hz.
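Those frequencies come from equal temperament, where each semitone multiplies the frequency by the twelfth root of two. A quick sketch (the noteToFrequency name is mine, not from the project):

```javascript
// Equal-temperament frequency for a MIDI note number:
// f = 440 * 2^((n - 69) / 12), where 69 is the MIDI number of A4.
function noteToFrequency(midiNote) {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

console.log(noteToFrequency(69));            // A4: 440 Hz
console.log(noteToFrequency(60).toFixed(1)); // middle C: '261.6'
```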
If you hover over "middle C" and "A" on the original page (http://danielrapp.github.io/spectroface) it'll actually play the notes, which is a nice touch.
I tried out Daniel's examples and found they work best in Chrome with the webcam and mic active. You should play the sounds from the speaker rather than headphones to see the image encoding effect in the spectrogram visualisations.
The source is concise, and amazingly doesn't require too much hacking to get the audio values translated into pixels. For example, only a few lines are needed to determine the shade of grey to use for position x, y in the image.
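The idea amounts to averaging a pixel's colour channels. Here's a sketch of that, rather than Daniel's actual code, assuming the flat RGBA array returned by a canvas context's getImageData:

```javascript
// Sketch: derive a grey value for pixel (x, y) from canvas image data.
// imageData is assumed to be the flat RGBA array from
// ctx.getImageData(...).data, and width is the image width in pixels.
function greyAt(imageData, width, x, y) {
  var i = (y * width + x) * 4; // 4 bytes per pixel: R, G, B, A
  var r = imageData[i];
  var g = imageData[i + 1];
  var b = imageData[i + 2];
  return (r + g + b) / 3;      // average the channels
}

// A 2x1 image: one white pixel, one black pixel.
var data = [255, 255, 255, 255, 0, 0, 0, 255];
console.log(greyAt(data, 2, 0, 0)); // → 255
console.log(greyAt(data, 2, 1, 0)); // → 0
```

That grey value can then drive the gain of the oscillator responsible for the corresponding frequency band.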
The last example treats the video as an instrument, so you can wave things in front of the camera to produce different sounds. Although it sounds extremely loud and strange, it's very interesting and comes from a surprisingly small amount of code.
It starts acting like an instrument! Notice that if you place your finger over the webcam, the sound is muted. You can produce low-pass or high-pass filters by covering only part of the webcam.
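Covering part of the lens works because each region of the image drives one frequency band, and a dark region silences its band. A rough sketch of that mapping (mine, not the project's code):

```javascript
// Sketch: map per-row brightness values (0-255) to per-band gains (0-1).
// Dark rows (a covered lens) mute their frequency band; bright rows let
// it through, which is why partially covering the webcam acts like a filter.
function rowGains(rowBrightness) {
  return rowBrightness.map(function (b) {
    return b / 255;
  });
}

console.log(rowGains([0, 255])); // → [ 0, 1 ]
```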
This project allows video streams to be combined using a Node server. It uses WebRTC, the W3C standard for browser video, audio, and P2P. Google recently switched Google Hangouts over to WebRTC, which you can try in developer builds of Chrome:
Google+ Hangouts no longer requires a separate plugin to be installed in Chrome for video and voice chat to work. Using the Web Real-Time Communication API (WebRTC) and Native Client (NaCl) Google is able to provide a native video chat experience out of the box in Chrome.
The Node project uses Express and Socket.IO. It's currently a single monolithic file with no declared dependencies, so it expects you to have Express and Socket.IO installed globally. Refactoring it into a more modular Express application might be a nice exercise for someone looking to contribute to an open source project...
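A first step would be declaring the dependencies locally instead of relying on global installs. A package.json along these lines (the name and version ranges are illustrative) would let contributors just run npm install:

```json
{
  "name": "webrtc-video-mixer",
  "private": true,
  "dependencies": {
    "express": "^4.0.0",
    "socket.io": "^1.0.0"
  }
}
```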