DailyJS

The JavaScript blog.


Tag: music

Tags: talks, music, audio

Free Online Training, React Sound Player


Free Online JavaScript Training

Bitovi is hosting free weekly training sessions for intermediate to advanced JavaScript programmers. The content covers core concepts like closures, prototypes, and DOM manipulation.

This training is designed to make people proficient at JavaScript application development from the ground up. A fundamental understanding of how JavaScript, the DOM, and other technologies work is, in the long run, essential to making good choices in large-scale application development.

The sessions are held over Google Hangouts on Air, and the videos are posted to YouTube. The material is based on content that was used for a Frontend Masters course.

If you want to join in, go to the page about the workshop and get the Google Calendar link.

React Sound Player

Dmitri Voronianski sent in a very nicely designed React library for SoundCloud (GitHub: soundblogs/react-soundplayer, License: MIT, npm: react-soundplayer). It has a high-level component (SoundPlayerContainer), and there are lots of other components relating to audio playback. This includes buttons like PlayButton and NextButton, but also play position (Progress, Timer), and album art (Cover).

My screenshot is from the example near the bottom of the page, which is a full-blown audio player that uses most of the features of the library. It looks very nice, and is all done through HTML5, React, and the SoundCloud API.


Tags: music, audio, node, modules

Substack's Musical Node Modules


Prerequisites: Install gnuplot and SoX to follow along.

James Halliday, otherwise known as "substack", has been making what he calls computer generated beepstep using two new modules: baudio (npm: baudio, License: MIT) and plucky (npm: plucky, License: MIT).

The baudio module returns a readable stream that generates raw audio data. It requires SoX to play or record audio, which should be installable from your package manager (Debian has it, and so does Homebrew).

The callback passed to the baudio function receives two arguments: t and i -- the time in seconds, and a counter. The callback will be run using process.nextTick to generate a stream of audio data. The audio data will be passed to SoX for playback or recording using child_process.spawn.
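The callback contract is easy to emulate without baudio itself. Here's a hedged sketch of how the arguments relate (the 44,000 rate matches baudio's default, and the sine formula is the one used in the examples later in this article; this is an illustration, not baudio's actual implementation):

```javascript
// Sketch: emulate baudio's callback contract without baudio itself.
// The rate and the sine formula mirror this article's examples; this is
// an illustration, not baudio's actual implementation.
var rate = 44000; // baudio's default sample rate

function sample(t, i) {
  return Math.sin(Math.PI * t * 261.626); // the formula used below
}

// Call the callback the way baudio would: once per sample, with t
// advancing by 1/rate each step while i counts samples.
function generate(n) {
  var samples = [];
  for (var i = 0; i < n; i++) {
    samples.push(sample(i / rate, i));
  }
  return samples;
}
```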

The audio data itself is where things get interesting. The baudio stream is sent directly into SoX through SoX's "pipeline" mode, in which audio data is read from standard input, using the "s16" format -- it's streams all the way down! Internally, baudio converts the float values returned from the callback into integers, which are written to a buffer using Node's buf.writeInt16LE method.
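The float-to-integer step can be sketched like this (the clamping and scaling here are my assumptions about the conversion, not baudio's actual source):

```javascript
// Sketch of converting a float sample in [-1.0, 1.0] to a little-endian
// signed 16-bit integer, the "s16" format SoX reads on standard input.
// The clamping and scaling are assumptions, not baudio's actual source.
function floatToInt16(value) {
  var clamped = Math.max(-1, Math.min(1, value)); // keep within range
  return Math.round(clamped * 32767);             // scale to the int16 range
}

var buf = Buffer.alloc(2);              // two bytes per 16-bit sample
buf.writeInt16LE(floatToInt16(0.5), 0); // write one sample at offset 0
```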

Your callback should generate floating point values between -1.0 and 1.0. By default, baudio is set to use a frequency of 44 kHz -- that's 44,000 values a second. This is close to CD quality (44.1 kHz).

Unless you're well-versed in audio programming, generating sounds with baudio is going to be hard work. To help you understand what's going on, I've written a small example that uses gnuplot to visualise the output.

First, install baudio:

npm install baudio  

And then create a file called baudio-simple.js:

var baudio = require('baudio')  
  , out = process.stdout
  , note = 1.0
  ;

var b = baudio(function(t, i) {  
  var value = Math.sin(Math.PI * t * 261.626);
  out.write(value.toString() + '\n');
  return value;
});

setTimeout(function() {  
  b.end();
}, 1000);

b.play();  

This file can be run with node baudio-simple.js > audio.dat (the redirection is important for generating the graphs), and if you've got SoX installed you'll get a sound.

Now you're going to write a bit of gnuplot. Create a file called baudio-plot with this script:

#!/usr/bin/env gnuplot

set terminal png size 530,420  
set output "baudio.png"  
plot "audio.dat" using 0:1 with lines  

Now make it executable, and run it:

chmod 700 baudio-plot  
./baudio-plot

It should generate a file called baudio.png with a second's worth of audio data plotted.

A graph of baudio's output using one second of audio.

If you're looking at Math.sin in the example code and wondering why there isn't a beautiful sweeping sine wave, then the reason is simple: there's too much data. Let's try ending the output earlier:

var baudio = require('baudio')  
  , out = process.stdout
  , note = 1.0
  ;

var b = baudio(function(t, i) {  
  if (i > 100) b.end();
  var value = Math.sin(Math.PI * t * 261.626);
  out.write(value.toString() + '\n');
  return value;
});

b.play();  

Now your graph should look like a wave, but you won't hear much sound:

A much shorter sample shows the output really is a sine wave.
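If you'd rather keep the full second of audio data, another option is to zoom in with gnuplot's xrange setting instead of ending the stream early (the 200-sample window is an arbitrary choice; adjust it to taste):

```gnuplot
#!/usr/bin/env gnuplot

set terminal png size 530,420
set output "baudio-zoom.png"
# only plot the first 200 samples of the full capture
set xrange [0:200]
plot "audio.dat" using 0:1 with lines
```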

Wave Period and Amplitude

The simple example I've used above is focused on controlling the "wave period", or the pitch of the output. To demonstrate this, try changing 261.626 to 60.0:

Some sub-bass.

Now the output is a lower pitch, and the graph makes this clear because you can see there are fewer cycles in the same amount of time. So we've mastered pitch, but what about volume?

It's actually easy once you know what the wave equation is doing. The generalised equation is A sin(t - K) + b (from Amplitude on Wikipedia). In programmer-speak, this equation can be written as A * sin(t - K) + b, where A is the "peak amplitude of the wave", t is time (which we already know baudio gives us), and K and b are offsets for the wave (which I'm not going to talk about here).

That gives rise to the following example that allows volume to be controlled by adding a variable, vol, for A:

var baudio = require('baudio')  
  , out = process.stdout
  , note = 1.0
  , vol = 0.1
  ;

var b = baudio(function(t, i) {  
  if (i > 100) b.end();
  var value = vol * Math.sin(Math.PI * t * 261.626);
  out.write(value.toString() + '\n');
  return value;
});

b.play();  

The graph is now appropriately smaller:

A quieter audio sample, plotted with set yrange [-1.0:1.0] to correct the axis.

More Fun Stuff

If you've managed to get gnuplot and SoX installed and played around with these examples, then there's more! First, try taking a look at plucky, which can be used to make "arrangements" of callbacks that generate different channels of audio. Also, James wrote beepstep.js which is a much more involved example than anything I've talked about here.

And, James tweets about this stuff, so follow @substack if you're into streams and audio hacking.


Tags: jquery, ui, plugins, music, amd, translation

jQuery Roundup: AMD-Utils, jquery.mentionsInput, Echo Nest API Plugin, i18next


Note: You can send your plugins and articles in for review through our [contact form](/contact.html) or [@dailyjs](http://twitter.com/dailyjs).

AMD-Utils

AMD-Utils (GitHub: millermedeiros / amd-utils, License:
MIT) by Miller Medeiros is a set of modular utilities written using the AMD API.

All code is library-agnostic and consists mostly of helper methods that aren't directly related to the DOM. The purpose of this library isn't to replace Dojo, jQuery, YUI, MooTools, etc., but to provide modular solutions for common problems that aren't solved by most of them.

The modules written so far include some of the things covered on my
Let's Make a Framework tutorial series, like a functional style Array module, and other
commonly required number and string utility functions often added by
large frameworks.

jquery.mentionsInput

jquery.mentionsInput (GitHub: podio / jquery-mentions-input,
License: MIT) by Kenneth Auchenberg and the Podio Dev Team is a UI
component for handling @mentions. It'll display an autocomplete list
of matching names and allow one to be selected. Once a name is
selected, it changes colour in the text box.

The authors have packaged it with CSS, so it's easy to get started right
away. It does expect some data, rather than automatically searching an
API like Twitter's, so the expected format looks like this:

$('textarea.mention').mentionsInput({
  onDataRequest:function (mode, query, callback) {
    var data = [
      { id:1, name:'Kenneth Auchenberg', 'avatar':'http://cdn0.4dots.com/i/customavatars/avatar7112_1.gif', 'type':'contact' },
      { id:2, name:'Jon Froda', 'avatar':'http://cdn0.4dots.com/i/customavatars/avatar7112_1.gif', 'type':'contact' },
      { id:3, name:'Anders Pollas', 'avatar':'http://cdn0.4dots.com/i/customavatars/avatar7112_1.gif', 'type':'contact' },
      { id:4, name:'Kasper Hulthin', 'avatar':'http://cdn0.4dots.com/i/customavatars/avatar7112_1.gif', 'type':'contact' },
      { id:5, name:'Andreas Haugstrup', 'avatar':'http://cdn0.4dots.com/i/customavatars/avatar7112_1.gif', 'type':'contact' },
      { id:6, name:'Pete Lacey', 'avatar':'http://cdn0.4dots.com/i/customavatars/avatar7112_1.gif', 'type':'contact' }
    ];

    data = _.filter(data, function(item) { return item.name.toLowerCase().indexOf(query.toLowerCase()) > -1 });

    callback.call(this, data);
  }
});

Echo Nest API Plugin

Echonest-jQuery-Plugin by Samuel Richardson is a plugin for The Echo
Nest, a real-time API for accessing music data. Song data and audio
fingerprinting are just some of the cool things that this API provides.

Let's say I needed to get a set of album images. All I'd have to do is
this:

var echonest = new EchoNest('YOUR_API_KEY');
echonest.artist('Radiohead').images(function(imageCollection) {
  $('body').prepend(imageCollection.to_html(''));
});

Combined with a templating system, this makes working with music data
extremely convenient. The only issue with this approach is the API key
is exposed. Echo Nest uses the API key for enforcing rate limits, so
it's not safe to expose it publicly. As it stands, I'd probably use
client-side Echo Nest API code as a way of rapidly prototyping a music
service built on this platform, then create my own server-side bridge to
obscure the API key.

i18next

i18next (GitHub: jamuhl / i18next, License: MIT) by Jan
Mühlemann is a client-side translation plugin with lots of features,
including: plurals, localStorage, namespaces, and
variables. JSON resource files can be used to store translations, and
then i18next will load them as required.

Given a suitable file at /locales/en-US/translation.json:

{
  "app": {
    "name": "i18n"
  },
  "creator": {
    "firstname": "Jan",
    "lastname": "Mühlemann"
  }
}

Then $.i18n.init can be used to load the resource and
access translations:

$.i18n.init({}, function(t) { // will init i18n with default settings and set language from navigator
  var appName = t('app.name'); // -> i18n
  var creator = t('creator.firstname') + ' ' + t('creator.lastname'); // -> Jan Mühlemann
});

The i18next documentation contains
more examples and shows how to change languages, organise translations
with nesting, and use variables.


Tags: music, node, modules, natural-language

Node Roundup: natural, node-beatport, node-schema-org


natural

I was recently doing some work that involved stemming. It wasn't very
fun. Anyway, I found natural by
Chris Umbel
(License) which offers tools for stemming, classification, phonetics and
inflection:

var natural = require('natural'),
    classifier = new natural.BayesClassifier();

natural.PorterStemmer.stem('reading');
// 'read'

classifier.train(
  [{classification: 'happy', text: 'I love pizza'},
   {classification: 'happy', text: 'javascript is awesome'},
   {classification: 'sad', text: 'I hate tax'}
]);

classifier.classify('how does alex feel about pizza?');
// 'happy'
classifier.classify('how does alex feel about tax?');
// 'sad'

node-beatport

node-beatport is a client library for the Beatport API.
Beatport is a service for buying and downloading electronic music, so the API provides a way of querying
track, artist, chart or genre details.

var Beatport = require('beatport')

// initialize client
var bp = Beatport()

// resources (i.e: featured/releases) as methods (camelCased, i.e: featuredReleases)
bp.releases({
  facets: [ 'genreName:Trance', 'performerName:Above&Beyond' ]
, perPage: 5
, page: 1
}, function(err, results, metadata) {
  // do something
})

bp.labelDetail({ id: 804 }, function(err, results, metadata) {
  // do something
})

I found it surprising that Beatport has an API -- I don't think I've
seen other music retailers offering this kind of service.

node-schema-org

node-schema-org by Charlie Robbins is a library for parsing the
microdata schemas found on schema.org.

At the moment it will just output all of the schemas as JSON files:

$ npm install -g schema-org
$ read-schema-org
warn:   Removing all schemas in /usr/local/lib/node_modules/schema-org/schemas
info:   Spawning: node /usr/local/lib/node_modules/schema-org/list-schemas.js
info:   Contacting: http://schema.org/docs/full.html

This actually spawns lots of background jobs and totally owned my
machine, so be wary of using it in its current state.


Tags: graphics, music

Max/MSP and JavaScript


In a previous life I was heavily into digital music production. One
popular tool in that area is Max/MSP -- a visual programming language
for music and graphics. It's not just used by hackers; many musicians
and artists also use it. In some ways it's more accessible than
Processing, and is more adept at audio.

Max/MSP allows you to draw networks of audio processing units and
manipulate them in real time. You can interact with MIDI hardware as
well.

What's interesting about Max/MSP is that in recent years it has gained a
JavaScript API. The API uses globally available functions and objects,
so it feels a bit like Processing. The company that makes Max/MSP,
Cycling '74, has a set of tutorials called JavaScript in Max.

You can use JavaScript to create UIs with OpenGL, so you could create
interesting animations as well as scripted audio processing.

If you'd like to see some example patches, try searching for JavaScript
in the Max Objects Database. Max is
actually commercial software (it starts at $250), but there's a 30-day
demo if you're interested in experimenting. If you've got a Mac you can
load Quartz Composer to see a similar type of tool which is focused on
video.