Article / Note
2015/10/07

On the path to the Web Audio API

I'm interested here in the Web Audio API, a W3C working draft for a "high-level JavaScript API for processing and synthesizing audio in web applications". My immediate interest in this API is that I need to play sound samples with precise start timing, for track synchronization. The goal is to build a sample mixer able to start a track not exactly when asked, but at the next instant when it will be in sync with the other tracks.

Audience: knowledge of JavaScript programming.
Web Audio API version: 07 October 2015

At the moment I write this article there is a widely supported way to play sound in the browser: the HTML Audio element. Although it may be sufficient when sounds are not closely time-tied, it becomes limiting to rely on the JavaScript timing API (window.setTimeout and such) as the source of synchronization. In that context you never know exactly when your callback will be executed; it depends on how hard your browser is working at a given time. You may be called exactly when you asked (within a 1 ms range), but you may as well be called some 100 ms later. Not practicable.
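To make the problem concrete, here is a minimal sketch (not part of the demos) that measures how late a setTimeout callback actually fires; the drift it reports depends entirely on how busy the browser is at that moment.

// minimal sketch: measure how late a setTimeout callback actually fires
// this drift is why setTimeout alone cannot keep tracks in sync
var requestedDelay = 100; // ms
var requestedAt = performance.now();
window.setTimeout(function() {
    var actualDelay = performance.now() - requestedAt;
    console.log("asked for " + requestedDelay + " ms, got " + actualDelay.toFixed(1) + " ms");
}, requestedDelay);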

On the other hand we have the Web Audio API, under specification since 2013, which is said to be supported by recent browsers, with the exception of Internet Explorer and the Android browser (all versions).

Fig 1. Usage-weighted Web Audio API support as of September 2015, from caniuse.com.

By choosing this API we have to accept that IE and Android Browser users will be left aside. Let's say we are ready for that for the rest of this discussion; in our projects we will try to offer them a decent downgraded version based on the Audio element.

Is your browser capable?

Our first test will be to check whether the browser you are currently using actually implements the API, by testing whether the AudioContext interface is present:

window.AudioContext = window.AudioContext || window.webkitAudioContext; // maybe prefixed
if (window.AudioContext) {
    console.log("Your browser implements the Web Audio API");
} else {
    console.log("Your browser does NOT implement the Web Audio API");
}


If your browser is not compatible, the samples presented in this article will not work for you, sorry.

Hello world

Let's start with a small goal: load a simple sound file and play it.

// creating our instance of AudioContext, there is usually only one instance needed
// let's suppose we have our data ready (instance of ArrayBuffer)
var context = new AudioContext();
context.decodeAudioData(
    data, // ArrayBuffer
    function(audioBuffer) {
        // great, we get an AudioBuffer to work with
        var node = context.createBufferSource(); // node producing data in the graph
        node.buffer = audioBuffer;
        node.connect(context.destination);   // the produced data is routed to the hardware output
        node.onended = function(evt) {
            // cleaning everything
            context.close();
            context = null;
        };
        node.start(0); // play now!
    },
    function(error) {
        // sadly, decoding failed
    }
);

We suppose here you already have an ArrayBuffer containing the bytes of your sound file. I recommend this article by Henry Algus to learn how to obtain such a buffer using XHR. One can alternatively use an Audio element as a source, as pointed out on the MDN website. According to the spec you can use every file format/codec supported by the Audio element, probably with the same limitations (for instance Ogg Vorbis not being supported by Microsoft and Apple products). Let's use MP3 files for this demo.
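For reference, a minimal XHR sketch could look like the following; the file name "drums.mp3" and the onDataReady callback are only placeholders for your own resources.

// minimal sketch: fetch a sound file as an ArrayBuffer with XHR
// "drums.mp3" and onDataReady are placeholders
var xhr = new XMLHttpRequest();
xhr.open("GET", "drums.mp3", true);
xhr.responseType = "arraybuffer"; // ask the browser for raw bytes
xhr.onload = function() {
    onDataReady(xhr.response); // an ArrayBuffer, ready for decodeAudioData
};
xhr.send();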

The first thing to do is to create the context, then decode your ArrayBuffer.

The promise-based version of AudioContext.decodeAudioData is not yet supported (at least by Chrome), so we do things the old way here: passing callbacks to the function.
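For the record, in browsers that do implement it, the promise-based form would look like this sketch:

// sketch of the promise-based form, for browsers that support it
context.decodeAudioData(data).then(function(audioBuffer) {
    // same processing graph as above
}).catch(function(error) {
    // decoding failed
});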

Things can go wrong here (bad encoding, for instance). Once you get your AudioBuffer object you still have to create your elementary processing graph. It is composed of a source node producing data and a final destination node sending the sound wave to your hardware. Optionally, place a listener to be called when the sound has finished playing.

The last thing to do is to invoke start() on the source node to launch the whole process.

More sound samples, looping and playing with the volume

We will now load two sound files, one for the drums and one for the melody, play them at the same time in loop mode and change the volume of the mix. The drums last 2 seconds, the melody 4 seconds, and the tempo is 120 bpm, so these loops should fit together if started at the same time. We should check that the two loops stay synchronized even if our browser (or the whole host system) is doing something heavy.

// we suppose here you already have a context and two source nodes, as explained in the previous section
var gainNode = context.createGain();
// "50% volume"
gainNode.gain.value = 0.5;
// the gain node is plugged into the output
gainNode.connect(context.destination);
sourceNode1.connect(gainNode);
sourceNode2.connect(gainNode);
// both sources loop
sourceNode1.loop = true;
sourceNode2.loop = true;
// start both now
sourceNode1.start(0);
sourceNode2.start(0);

Here we plugged the two sources into the same gain node, but we could have created one gain node per source to control their volumes independently, the two gain nodes being connected to the destination. The complexity of the graph is not a priori limited, but can you have cycles in your graph? I will not test that here.
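For instance, a sketch of that independent-volume variant could be:

// sketch: one gain node per source, to control each volume independently
var gain1 = context.createGain();
var gain2 = context.createGain();
gain1.gain.value = 0.8; // drums volume (illustrative values)
gain2.gain.value = 0.3; // melody volume
gain1.connect(context.destination);
gain2.connect(context.destination);
sourceNode1.connect(gain1);
sourceNode2.connect(gain2);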

Right on time

This is the moment when we need our sounds to be rendered exactly when we want them to be. We'll start the guitar in loop mode and then explicitly ask the drums to play. We want them to start not exactly when requested, but as soon as possible given the needed synchronization between the two tracks.

In other words, the guitar sample is a 4-second 120 bpm loop that starts at time \( t_0 \); we will allow the drum sample (2 s, 120 bpm) to start at time \( t_0 + 2k \) seconds, \( k \) being the smallest integer such that \( t_0 + 2k \geq t_{now} \). For instance, with \( t_0 = 0 \) and \( t_{now} = 5.3 \) s, \( k = 3 \) and the drums start at 6 s. This may seem obscure but the implementation is trivial.

The next demo shows this in action, with a visualization canvas to help see what happens. It leads naturally to using the window.requestAnimationFrame method to schedule when to play the drum track, the alternative being the window.setTimeout approach.


Now for the critical parts of the code. Canvas drawing is excluded here; refer to this excellent MDN documentation on the subject if needed.

// prerequisites
// let's suppose again you got a context and loaded your sound buffers already
var context, buffer1, buffer2;
// this 'constant' tells at which seconds sound 2 is allowed to start
var SOUND2_START_BASE = 2; // sound2 can only start at second 0, 2, 4, 6 ...

// setting up the source node for the background track
var node1 = null, node2 = null;
node1 = context.createBufferSource();
node1.buffer = buffer1;
node1.loop = true;
node1.connect(context.destination); // otherwise no sound will output
// node2 will be created only when play is requested (you can't start a node twice)

// Some state vars
var doScheduleSound2 = false; // true => request to start sound 2 asap

// function to be called when the user schedules the playing of the 2nd sound
function scheduleSound2() {
    if (null !== node2) return; // do nothing while node2 is still playing
    // creating node2
    node2 = context.createBufferSource();
    node2.buffer = buffer2;
    node2.onended = function() {
        node2 = null;
        // you can now allow user to schedule sound2 again
    }
    node2.connect(context.destination);
    doScheduleSound2 = true;
}

// called each time it's time to draw
function animate(ts) {
    // we will not use the given timestamp here but will rather trust the audio system clock
    ts = context.currentTime; // in seconds (currentTime is a property, not a method)
    if (doScheduleSound2) {
        var when = SOUND2_START_BASE * Math.ceil(ts / SOUND2_START_BASE); // next time we can start node2
        node2.start(when);
        doScheduleSound2 = false; // avoid starting it again next frame (+ would raise an exception)
    }
    // now draw whatever you need to draw on your canvas ...
    window.requestAnimationFrame(animate); // keep the animation loop running
}

// bootstrap animation loop
window.requestAnimationFrame(animate);

// play sound 1 in the background
node1.start();

OK, that's nice: with these small samples we catch a glimpse of what this audio API can offer a web application. The whole code of the demos can be downloaded here.

Thanks for reading, and see you for my next article, maybe on the same subject.

References and good articles
