From f93ef19d66b0d692ff171d7bdcb82d98f741544a Mon Sep 17 00:00:00 2001 From: logic-finder <83723320+logic-finder@users.noreply.github.com> Date: Sun, 15 Aug 2021 23:57:52 +0900 Subject: [ko] Work done for 'Web Audio API' article. (#1609) * Work done for 'Web Audio API' article. * fix a hyperlink * small fixes and documents added Co-authored-by: hochan Lee --- .../web_audio_api/advanced_techniques/index.html | 586 +++++++++++++++++++++ .../advanced_techniques/sequencer.png | Bin 0 -> 9782 bytes files/ko/web/api/web_audio_api/audio-context_.png | Bin 0 -> 29346 bytes .../api/web_audio_api/best_practices/index.html | 97 ++++ .../customsourcenode-as-splitter.svg | 1 + .../index.html | 284 ++++++++++ files/ko/web/api/web_audio_api/index.html | 499 ++++++------------ .../migrating_from_webkitaudiocontext/index.html | 381 ++++++++++++++ files/ko/web/api/web_audio_api/tools/index.html | 41 ++ .../web_audio_api/using_audioworklet/index.html | 325 ++++++++++++ .../using_iir_filters/iir-filter-demo.png | Bin 0 -> 6824 bytes .../api/web_audio_api/using_iir_filters/index.html | 198 +++++++ .../bar-graph.png | Bin 0 -> 2221 bytes .../visualizations_with_web_audio_api/index.html | 189 +++++++ .../visualizations_with_web_audio_api/wave.png | Bin 0 -> 4433 bytes .../web_audio_spatialization_basics/index.html | 467 ++++++++++++++++ .../web-audio-spatialization.png | Bin 0 -> 26452 bytes 17 files changed, 2724 insertions(+), 344 deletions(-) create mode 100644 files/ko/web/api/web_audio_api/advanced_techniques/index.html create mode 100644 files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png create mode 100644 files/ko/web/api/web_audio_api/audio-context_.png create mode 100644 files/ko/web/api/web_audio_api/best_practices/index.html create mode 100644 files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg create mode 100644 files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html create mode 100644 files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html create mode 100644 files/ko/web/api/web_audio_api/tools/index.html create mode 100644 files/ko/web/api/web_audio_api/using_audioworklet/index.html create mode 100644 files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png create mode 100644 files/ko/web/api/web_audio_api/using_iir_filters/index.html create mode 100644 files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png create mode 100644 files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html create mode 100644 files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png create mode 100644 files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html create mode 100644 files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png (limited to 'files/ko') diff --git a/files/ko/web/api/web_audio_api/advanced_techniques/index.html b/files/ko/web/api/web_audio_api/advanced_techniques/index.html new file mode 100644 index 0000000000..d3ce7cd56d --- /dev/null +++ b/files/ko/web/api/web_audio_api/advanced_techniques/index.html @@ -0,0 +1,586 @@ +--- +title: 'Advanced techniques: Creating and sequencing audio' +slug: Web/API/Web_Audio_API/Advanced_techniques +tags: + - API + - Advanced + - Audio + - Guide + - Reference + - Web Audio API + - sequencer +--- +
{{DefaultAPISidebar("Web Audio API")}}
+ +

In this tutorial, we're going to cover sound creation and modification, as well as timing and scheduling. We're going to introduce sample loading, envelopes, filters, wavetables, and frequency modulation. If you're familiar with these terms and you're looking for an introduction to their application with the Web Audio API, you've come to the right place.

+ +

Demo

+ +

We're going to be looking at a very simple step sequencer:

+ +

A sound sequencer application featuring play and BPM master controls, and 4 different voices with controls for each.
+  

+ +

In practice this is easier to do with a library — the Web Audio API was built to be built upon. If you are about to embark on building something more complex, tone.js would be a good place to start. However, we want to demonstrate how to build such a demo from first principles, as a learning exercise.

+ +
+

Note: You can find the source code on GitHub as step-sequencer; see the step-sequencer running live also.

+
+ +

The interface consists of master controls, which allow us to play/stop the sequencer, and adjust the BPM (beats per minute) to speed up or slow down the "music".

+ +

There are four different sounds, or voices, which can be played. Each voice has four buttons, which represent four beats in one bar of music. When they are enabled the note will sound. When the instrument plays, it will move across this set of beats and loop the bar.

+ +

Each voice also has local controls, which allow you to manipulate the effects or parameters particular to each technique we are using to create those voices. The techniques we are using are:

Name of voice | Technique | Associated Web Audio API feature
"Sweep" | Oscillator, periodic wave | {{domxref("OscillatorNode")}}, {{domxref("PeriodicWave")}}
"Pulse" | Multiple oscillators | {{domxref("OscillatorNode")}}
"Noise" | Random noise buffer, Biquad filter | {{domxref("AudioBuffer")}}, {{domxref("AudioBufferSourceNode")}}, {{domxref("BiquadFilterNode")}}
"Dial up" | Loading a sound sample to play | {{domxref("BaseAudioContext/decodeAudioData")}}, {{domxref("AudioBufferSourceNode")}}
+ +
+

Note: This instrument was not created to sound good; it was created to provide demonstration code, and represents a very simplified version of such an instrument. The sounds are based on a dial-up modem. If you are unaware of how one sounds you can listen to one here.

+
+ +

Creating an audio context

+ +

As you should be used to by now, each Web Audio API app starts with an audio context:

+ +
// for cross browser compatibility
+const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();
+ +

The "sweep" — oscillators, periodic waves, and envelopes

+ +

For what we will call the "sweep" sound, that first noise you hear when you dial up, we're going to create an oscillator to generate the sound.

+ +

The {{domxref("OscillatorNode")}} comes with basic waveforms out of the box — sine, square, triangle or sawtooth. However, instead of using the standard waves that come by default, we're going to create our own using the {{domxref("PeriodicWave")}} interface and values set in a wavetable. We can use the {{domxref("BaseAudioContext.createPeriodicWave")}} method to use this custom wave with an oscillator.

+ +

The periodic wave

+ +

First of all, we'll create our periodic wave. To do so, we need to pass real and imaginary values into the {{domxref("BaseAudioContext.createPeriodicWave()")}} method:

+ +
const wave = audioCtx.createPeriodicWave(wavetable.real, wavetable.imag);
+
+ +
+

Note: In our example the wavetable is held in a separate JavaScript file (wavetable.js), because there are so many values. It is taken from a repository of wavetables, which can be found in the Web Audio API examples from Google Chrome Labs.

+
+ +

The Oscillator

+ +

Now we can create an {{domxref("OscillatorNode")}} and set its wave to the one we've created:

+ +
function playSweep(time) {
+     const osc = audioCtx.createOscillator();
+     osc.setPeriodicWave(wave);
+     osc.frequency.value = 440;
+     osc.connect(audioCtx.destination);
+     osc.start(time);
+     osc.stop(time + 1);
+}
+ +

We pass in a time parameter to the function here, which we'll use later to schedule the sweep.
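For example, to play the sweep straight away you could call it with the context's current time:

playSweep(audioCtx.currentTime);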

+ +

Controlling amplitude

+ +

This is great, but wouldn't it be nice if we had an amplitude envelope to go with it? Let's create a simple one so we get used to the methods we need to create an envelope with the Web Audio API.

+ +

Let's say our envelope has attack and release. We can allow the user to control these using range inputs on the interface:

+ +
<label for="attack">Attack</label>
+<input name="attack" id="attack" type="range" min="0" max="1" value="0.2" step="0.1" />
+
+<label for="release">Release</label>
+<input name="release" id="release" type="range" min="0" max="1" value="0.5" step="0.1" />
+ +

Now we can create some variables over in JavaScript and have them change when the input values are updated:

+ +
let attackTime = 0.2;
+const attackControl = document.querySelector('#attack');
+attackControl.addEventListener('input', function() {
+    attackTime = Number(this.value);
+}, false);
+
+let releaseTime = 0.5;
+const releaseControl = document.querySelector('#release');
+releaseControl.addEventListener('input', function() {
+    releaseTime = Number(this.value);
+}, false);
+ +

The final playSweep() function

+ +

Now we can expand our playSweep() function. We need to add a {{domxref("GainNode")}} and connect that through our audio graph to actually apply amplitude variations to our sound. The gain node has one property: gain, which is of type {{domxref("AudioParam")}}.

+ +

This is really useful — now we can start to harness the power of the audio param methods on the gain value. We can set a value at a certain time, or we can change it over time with methods such as {{domxref("AudioParam.linearRampToValueAtTime")}}.

+ +

For our attack and release, we'll use the linearRampToValueAtTime method as mentioned above. It takes two parameters — the value you want to set the parameter you are changing to (in this case the gain) and when you want to do this. In our case when is controlled by our inputs. So in the example below the gain is being increased to 1, at a linear rate, over the time the attack range input has been set to. Similarly, for our release, the gain is being set to 0, at a linear rate, over the time the release input has been set to.

+ +
let sweepLength = 2;
+function playSweep(time) {
+    let osc = audioCtx.createOscillator();
+    osc.setPeriodicWave(wave);
+    osc.frequency.value = 440;
+
+    let sweepEnv = audioCtx.createGain();
+    sweepEnv.gain.cancelScheduledValues(time);
+    sweepEnv.gain.setValueAtTime(0, time);
+    // set our attack
+    sweepEnv.gain.linearRampToValueAtTime(1, time + attackTime);
+    // set our release
+    sweepEnv.gain.linearRampToValueAtTime(0, time + sweepLength - releaseTime);
+
+    osc.connect(sweepEnv).connect(audioCtx.destination);
+    osc.start(time);
+    osc.stop(time + sweepLength);
+}
+ +

The "pulse" — low frequency oscillator modulation

+ +

Great, now we've got our sweep! Let's move on and take a look at that nice pulse sound. We can achieve this with a basic oscillator, modulated with a second oscillator.

+ +

Initial oscillator

+ +

We'll set up our first {{domxref("OscillatorNode")}} the same way as our sweep sound, except we won't use a wavetable to set a bespoke wave — we'll just use the default sine wave:

+ +
const osc = audioCtx.createOscillator();
+osc.type = 'sine';
+osc.frequency.value = 880;
+ +

Now we're going to create a {{domxref("GainNode")}}, as it's the gain value that we will oscillate with our second, low frequency oscillator:

+ +
const amp = audioCtx.createGain();
+amp.gain.setValueAtTime(1, audioCtx.currentTime);
+ +

Creating the second, low frequency, oscillator

+ +

We'll now create a second — square — wave (or pulse) oscillator, to alter the amplification of our first sine wave:

+ +
const lfo = audioCtx.createOscillator();
+lfo.type = 'square';
+lfo.frequency.value = 30;
+ +

Connecting the graph

+ +

The key here is connecting the graph correctly, and also starting both oscillators:

+ +
lfo.connect(amp.gain);
+osc.connect(amp).connect(audioCtx.destination);
+lfo.start();
+osc.start(time);
+osc.stop(time + pulseTime);
+ +
+

Note: We also don't have to use the default wave types for either of these oscillators we're creating — we could use a wavetable and the periodic wave method as we did before. There is a multitude of possibilities with just a minimum of nodes.

+
+ +

Pulse user controls

+ +

For the UI controls, let's expose both frequencies of our oscillators, allowing them to be controlled via range inputs. One will change the tone and the other will change how the pulse modulates the first wave:

+ +
<label for="hz">Hz</label>
+<input name="hz" id="hz" type="range" min="660" max="1320" value="880" step="1" />
+<label for="lfo">LFO</label>
+<input name="lfo" id="lfo" type="range" min="20" max="40" value="30" step="1" />
+ +

As before, we'll vary the parameters when the range input values are changed by the user.

+ +
let pulseHz = 880;
+const hzControl = document.querySelector('#hz');
+hzControl.addEventListener('input', function() {
+    pulseHz = Number(this.value);
+}, false);
+
+let lfoHz = 30;
+const lfoControl = document.querySelector('#lfo');
+lfoControl.addEventListener('input', function() {
+    lfoHz = Number(this.value);
+}, false);
+ +

The final playPulse() function

+ +

Here's the entire playPulse() function:

+ +
let pulseTime = 1;
+function playPulse(time) {
+    let osc = audioCtx.createOscillator();
+    osc.type = 'sine';
+    osc.frequency.value = pulseHz;
+
+    let amp = audioCtx.createGain();
+    amp.gain.value = 1;
+
+    let lfo = audioCtx.createOscillator();
+    lfo.type = 'square';
+    lfo.frequency.value = lfoHz;
+
+    lfo.connect(amp.gain);
+    osc.connect(amp).connect(audioCtx.destination);
+    lfo.start();
+    osc.start(time);
+    osc.stop(time + pulseTime);
+}
+ +

The "noise" — random noise buffer with biquad filter

+ +

Now we need to make some noise! All modems have noise. When it comes to audio data, noise is just random numbers, so it's a relatively straightforward thing to create with code.

+ +

Creating an audio buffer

+ +

We need to create an empty container to put these numbers into, however, one that the Web Audio API understands. This is where {{domxref("AudioBuffer")}} objects come in. You can fetch a file and decode it into a buffer (we'll get to that later on in the tutorial), or you can create an empty buffer and fill it with your own data.

+ +

For noise, let's do the latter. We first need to calculate the size of our buffer, to create it. We can use the {{domxref("BaseAudioContext.sampleRate")}} property for this:

+ +
const bufferSize = audioCtx.sampleRate * noiseLength;
+const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);
+ +

Now we can fill it with random numbers between -1 and 1:

+ +
let data = buffer.getChannelData(0); // get data
+
+// fill the buffer with noise
+for (let i = 0; i < bufferSize; i++) {
+    data[i] = Math.random() * 2 - 1;
+}
+ +
+

Note: Why -1 to 1? When outputting sound to a file or speakers we need to have a number to represent 0 dB full scale — the numerical limit of the fixed-point media or DAC. In floating point audio, 1 is a convenient number to map to "full scale" for mathematical operations on signals, so oscillators, noise generators and other sound sources typically output bipolar signals in the range -1 to 1. A browser will clamp values outside this range.

+
+ +

Creating a buffer source

+ +

Now we have the audio buffer and have filled it with data, we need a node to add to our graph that can use the buffer as a source. We'll create a {{domxref("AudioBufferSourceNode")}} for this, and pass in the data we've created:

+ +
let noise = audioCtx.createBufferSource();
+noise.buffer = buffer;
+ +

If we connect this through our audio graph and play it —

+ +
noise.connect(audioCtx.destination);
+noise.start();
+ +

you'll notice that it's pretty hissy or tinny. We've created white noise, that's how it should be. Our values are running from -1 to 1, which means we have peaks of all frequencies, which in turn is actually quite dramatic and piercing. We could modify the function to run values from 0.5 to -0.5 or similar to take the peaks off and reduce the discomfort, however, where's the fun in that? Let's route the noise we've created through a filter.

+ +

Adding a biquad filter to the mix

+ +

We want something in the range of pink or brown noise. We want to cut off those high frequencies and possibly some of the lower ones. Let's pick a bandpass biquad filter for the job.

+ +
+

Note: The Web Audio API comes with two types of filter nodes: {{domxref("BiquadFilterNode")}} and {{domxref("IIRFilterNode")}}. For the most part a biquad filter will be good enough — it comes with different types such as lowpass, highpass, and bandpass. If you're looking to do something more bespoke, however, the IIR filter might be a good option — see Using IIR filters for more information.

+
+ +

Wiring this up is the same as we've seen before. We create the {{domxref("BiquadFilterNode")}}, configure the properties we want for it and connect it through our graph. Different types of biquad filters have different properties — for instance setting the frequency on a bandpass type adjusts the middle frequency, however on a lowpass it would set the top frequency.

+ +
let bandpass = audioCtx.createBiquadFilter();
+bandpass.type = 'bandpass';
+bandpass.frequency.value = 1000;
+
+// connect our graph
+noise.connect(bandpass).connect(audioCtx.destination);
+ +

Noise user controls

+ +

On the UI we'll expose the noise duration and the frequency we want to band, allowing the user to adjust them via range inputs and event handlers just like in previous sections:

+ +
<label for="duration">Duration</label>
+<input name="duration" id="duration" type="range" min="0" max="2" value="1" step="0.1" />
+
+<label for="band">Band</label>
+<input name="band" id="band" type="range" min="400" max="1200" value="1000" step="5" />
+
+ +
let noiseDuration = 1;
+const durControl = document.querySelector('#duration');
+durControl.addEventListener('input', function() {
+    noiseDuration = Number(this.value);
+}, false);
+
+let bandHz = 1000;
+const bandControl = document.querySelector('#band');
+bandControl.addEventListener('input', function() {
+    bandHz = Number(this.value);
+}, false);
+ +

The final playNoise() function

+ +

Here's the entire playNoise() function:

+ +
function playNoise(time) {
+    const bufferSize = audioCtx.sampleRate * noiseDuration; // set the time of the note
+    const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate); // create an empty buffer
+    let data = buffer.getChannelData(0); // get data
+
+    // fill the buffer with noise
+    for (let i = 0; i < bufferSize; i++) {
+        data[i] = Math.random() * 2 - 1;
+    }
+
+    // create a buffer source for our created data
+    let noise = audioCtx.createBufferSource();
+    noise.buffer = buffer;
+
+    let bandpass = audioCtx.createBiquadFilter();
+    bandpass.type = 'bandpass';
+    bandpass.frequency.value = bandHz;
+
+    // connect our graph
+    noise.connect(bandpass).connect(audioCtx.destination);
+    noise.start(time);
+}
+ +

"Dial up" — loading a sound sample

+ +

It's straightforward enough to emulate phone dial (DTMF) sounds, by playing a couple of oscillators together using the methods we've already looked at, however, in this section, we'll load in a sample file instead so we can take a look at what's involved.
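For reference, a sketch of that oscillator approach might look like the function below. The playDtmfDigit() helper is hypothetical and not part of the demo; the 697 Hz and 1209 Hz pair is the standard DTMF encoding of the digit "1".

function playDtmfDigit(time) {
    // a DTMF digit is two sine waves played together
    const low = audioCtx.createOscillator();
    low.frequency.value = 697;   // low-group frequency for "1"

    const high = audioCtx.createOscillator();
    high.frequency.value = 1209; // high-group frequency for "1"

    const dtmfGain = audioCtx.createGain();
    dtmfGain.gain.value = 0.5;   // keep the summed signal below full scale

    low.connect(dtmfGain);
    high.connect(dtmfGain);
    dtmfGain.connect(audioCtx.destination);

    low.start(time);
    high.start(time);
    low.stop(time + 0.2);
    high.stop(time + 0.2);
}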

+ +

Loading the sample

+ +

We want to make sure our file has loaded and been decoded into a buffer before we use it, so let's create an async function to allow us to do this:

+ +
async function getFile(audioContext, filepath) {
+  const response = await fetch(filepath);
+  const arrayBuffer = await response.arrayBuffer();
+  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
+  return audioBuffer;
+}
+ +

We can then use the await operator when calling this function, which ensures that we can only run subsequent code when it has finished executing.

+ +

Let's create another async function to set up the sample — we can combine the two async functions in a nice promise pattern to perform further actions when this file is loaded and buffered:

+ +
async function setupSample() {
+    const filePath = 'dtmf.mp3';
+    const sample = await getFile(audioCtx, filePath);
+    return sample;
+}
+ +
+

Note: You can easily modify the above function to take an array of files and loop over them to load more than one sample. This would be very handy for more complex instruments, or gaming.
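As a rough sketch, such a variation (a hypothetical setupSamples() helper with made-up file names) could reuse getFile() and load everything in parallel:

async function setupSamples(paths) {
    // decode all of the files in parallel, reusing getFile() from above
    const audioBuffers = await Promise.all(
        paths.map((path) => getFile(audioCtx, path))
    );
    return audioBuffers;
}

// e.g. setupSamples(['dtmf.mp3', 'beep.mp3']).then((samples) => { /* ... */ });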

+
+ +

We can now use setupSample() like so:

+ +
setupSample()
+    .then((sample) => {
+        // sample is our buffered file
+        // ...
+});
+ +

When the sample is ready to play, the program sets up the UI so it is ready to go.

+ +

Playing the sample

+ +

Let's create a playSample() function in a similar manner to how we did with the other sounds. This time it will create an {{domxref("AudioBufferSourceNode")}}, and put the buffer data we've fetched and decoded into it, and play it:

+ +
function playSample(audioContext, audioBuffer, time) {
+    const sampleSource = audioContext.createBufferSource();
+    sampleSource.buffer = audioBuffer;
+    sampleSource.connect(audioContext.destination)
+    sampleSource.start(time);
+    return sampleSource;
+}
+ +
+

Note: We can call stop() on an {{domxref("AudioBufferSourceNode")}}, however, this will happen automatically when the sample has finished playing.

+
+ +

Dial-up user controls

+ +

The {{domxref("AudioBufferSourceNode")}} comes with a playbackRate property. Let's expose that to our UI, so we can speed up and slow down our sample. We'll do that in the same sort of way as before:

+ +
<label for="rate">Rate</label>
+<input name="rate" id="rate" type="range" min="0.1" max="2" value="1" step="0.1" />
+ +
let playbackRate = 1;
+const rateControl = document.querySelector('#rate');
+rateControl.addEventListener('input', function() {
+    playbackRate = Number(this.value);
+}, false);
+ +

The final playSample() function

+ +

We'll then add a line to update the playbackRate property to our playSample() function. The final version looks like this:

+ +
function playSample(audioContext, audioBuffer, time) {
+    const sampleSource = audioContext.createBufferSource();
+    sampleSource.buffer = audioBuffer;
+    sampleSource.playbackRate.value = playbackRate;
+    sampleSource.connect(audioContext.destination)
+    sampleSource.start(time);
+    return sampleSource;
+}
+ +
+

Note: The sound file was sourced from soundbible.com.

+
+ +

Playing the audio in time

+ +

A common problem with digital audio applications is getting the sounds to play in time so that the beat remains consistent, and things do not slip out of time.

+ +

We could schedule our voices to play within a for loop, however the biggest problem with this is updating whilst it is playing, and we've already implemented UI controls to do so. Also, it would be really nice to consider an instrument-wide BPM control. The best way to get our voices to play on the beat is to create a scheduling system, whereby we look ahead at when the notes are going to play and push them into a queue. We can start them at a precise time with the currentTime property and also take into account any changes.

+ +
+

Note: This is a much stripped down version of Chris Wilson's A Tale Of Two Clocks article, which goes into this method in much more detail. There's no point repeating it all here, but it's highly recommended to read this article and use this method. Much of the code here is taken from his metronome example, which he references in the article.

+
+ +

Let's start by setting up our default BPM (beats per minute), which will also be user-controllable via — you guessed it — another range input.

+ +
let tempo = 60.0;
+const bpmControl = document.querySelector('#bpm');
+bpmControl.addEventListener('input', function() {
+    tempo = Number(this.value);
+}, false);
+ +

Then we'll create variables to define how far ahead we want to look, and how far ahead we want to schedule:

+ +
const lookahead = 25.0; // How frequently to call scheduling function (in milliseconds)
+const scheduleAheadTime = 0.1; // How far ahead to schedule audio (sec)
+ +

Let's create a function that moves the note forwards by one beat, and loops back to the first when it reaches the 4th (last) one:

+ +
let currentNote = 0;
+let nextNoteTime = 0.0; // when the next note is due.
+
+function nextNote() {
+    const secondsPerBeat = 60.0 / tempo;
+
+    nextNoteTime += secondsPerBeat; // Add beat length to last beat time
+
+    // Advance the beat number, wrap to zero
+    currentNote++;
+    if (currentNote === 4) {
+            currentNote = 0;
+    }
+}
+ +

We want to create a reference queue for the notes that are to be played, and the functionality to play them using the functions we've previously created:

+ +
const notesInQueue = [];
+
+function scheduleNote(beatNumber, time) {
+
+    // push the note on the queue, even if we're not playing.
+    notesInQueue.push({ note: beatNumber, time: time });
+
+    if (pads[0].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+        playSweep(time)
+    }
+    if (pads[1].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+        playPulse(time)
+    }
+    if (pads[2].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+        playNoise(time)
+    }
+    if (pads[3].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+        playSample(audioCtx, dtmf, time);
+    }
+}
+ +

Here we look at the current time and compare it to the time for the next note; when the two match it will call the previous two functions.

+ +

{{domxref("AudioContext")}} object instances have a currentTime property, which allows us to retrieve the number of seconds after we first created the context. This is what we shall use for timing within our step sequencer — It's extremely accurate, returning a float value accurate to about 15 decimal places.

+ +
function scheduler() {
+    // while there are notes that will need to play before the next interval, schedule them and advance the pointer.
+    while (nextNoteTime < audioCtx.currentTime + scheduleAheadTime ) {
+        scheduleNote(currentNote, nextNoteTime);
+        nextNote();
+    }
+    timerID = window.setTimeout(scheduler, lookahead);
+}
+ +

We also need a draw function to update the UI, so we can see when the beat progresses.

+ +
let lastNoteDrawn = 3;
+
+function draw() {
+    let drawNote = lastNoteDrawn;
+    let currentTime = audioCtx.currentTime;
+
+    while (notesInQueue.length && notesInQueue[0].time < currentTime) {
+        drawNote = notesInQueue[0].note;
+        notesInQueue.splice(0,1);   // remove note from queue
+    }
+
+    // We only need to draw if the note has moved.
+    if (lastNoteDrawn != drawNote) {
+        pads.forEach(function(el, i) {
+            el.children[lastNoteDrawn].style.borderColor = 'hsla(0, 0%, 10%, 1)';
+            el.children[drawNote].style.borderColor = 'hsla(49, 99%, 50%, 1)';
+        });
+
+        lastNoteDrawn = drawNote;
+    }
+    // set up to draw again
+    requestAnimationFrame(draw);
+}
+ +

Putting it all together

+ +

Now all that's left to do is make sure we've loaded the sample before we are able to play the instrument. We'll add a loading screen that disappears when the file has been fetched and decoded, then we can allow the scheduler to start using the play button click event.

+ +
// when the sample has loaded allow play
+let loadingEl = document.querySelector('.loading');
+const playButton = document.querySelector('[data-playing]');
+let isPlaying = false;
+setupSample()
+    .then((sample) => {
+        loadingEl.style.display = 'none'; // remove loading screen
+
+        dtmf = sample; // to be used in our playSample function
+
+        playButton.addEventListener('click', function() {
+            isPlaying = !isPlaying;
+
+            if (isPlaying) { // start playing
+
+                // check if context is in suspended state (autoplay policy)
+                if (audioCtx.state === 'suspended') {
+                    audioCtx.resume();
+                }
+
+                currentNote = 0;
+                nextNoteTime = audioCtx.currentTime;
+                scheduler(); // kick off scheduling
+                requestAnimationFrame(draw); // start the drawing loop.
+                this.dataset.playing = 'true';
+
+            } else {
+
+                window.clearTimeout(timerID);
+                this.dataset.playing = 'false';
+
+            }
+        })
+    });
+ +

Summary

+ +

We've now got an instrument inside our browser! Keep playing and experimenting — you can expand on any of these techniques to create something much more elaborate.

diff --git a/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png b/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png new file mode 100644 index 0000000000..63de8cb0de Binary files /dev/null and b/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png differ diff --git a/files/ko/web/api/web_audio_api/audio-context_.png b/files/ko/web/api/web_audio_api/audio-context_.png new file mode 100644 index 0000000000..36d0190052 Binary files /dev/null and b/files/ko/web/api/web_audio_api/audio-context_.png differ diff --git a/files/ko/web/api/web_audio_api/best_practices/index.html b/files/ko/web/api/web_audio_api/best_practices/index.html new file mode 100644 index 0000000000..784b3f1f3c --- /dev/null +++ b/files/ko/web/api/web_audio_api/best_practices/index.html @@ -0,0 +1,97 @@ +--- +title: Web Audio API best practices +slug: Web/API/Web_Audio_API/Best_practices +tags: + - Audio + - Best practices + - Guide + - Web Audio API +--- +
{{apiref("Web Audio API")}}
+ +

There's no strict right or wrong way when writing creative code. As long as you consider security, performance, and accessibility, you can adapt to your own style. In this article, we'll share a number of best practices — guidelines, tips, and tricks for working with the Web Audio API.

+ +

Loading sounds/files

+ +

There are four main ways to load sound with the Web Audio API and it can be a little confusing as to which one you should use.

+ +

When working with files, you are looking at either grabbing the file from an {{domxref("HTMLMediaElement")}} (i.e. an {{htmlelement("audio")}} or {{htmlelement("video")}} element), or fetching the file and decoding it into a buffer. Both are legitimate ways of working; however, it's more common to use the former when you are working with full-length tracks, and the latter when working with shorter, more sample-like tracks.

+ +

Media elements have streaming support out of the box. The audio will start playing when the browser determines it can load the rest of the file before playing finishes. You can see an example of how to use this with the Web Audio API in the Using the Web Audio API tutorial.
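As a minimal sketch, assuming an audio element with the id "track" and an existing audioCtx context, hooking a media element into the audio graph looks like this:

const audioElement = document.querySelector('#track');
const trackSource = audioCtx.createMediaElementSource(audioElement);
trackSource.connect(audioCtx.destination);

// playback is still controlled through the element itself
audioElement.play();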

+ +

You will, however, have more control if you use a buffer node. You have to request the file and wait for it to load (this section of our advanced article shows a good way to do it), but then you have access to the data directly, which means more precision, and more precise manipulation.
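In outline, that request-and-decode step is a fetch() followed by decodeAudioData(); something like this sketch, where the file name is only a placeholder:

async function loadBuffer(url) {
    const response = await fetch(url);
    const arrayBuffer = await response.arrayBuffer();
    return audioCtx.decodeAudioData(arrayBuffer);
}

loadBuffer('sample.mp3').then((audioBuffer) => {
    const source = audioCtx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioCtx.destination);
    source.start();
});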

+ +

If you're looking to work with audio from the user's camera or microphone you can access it via the Media Stream API and the {{domxref("MediaStreamAudioSourceNode")}} interface. This is good for WebRTC and situations where you might want to record or possibly analyze audio.
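A sketch of the microphone case, assuming the user grants permission and an audioCtx context already exists:

navigator.mediaDevices.getUserMedia({ audio: true })
    .then((stream) => {
        const micSource = audioCtx.createMediaStreamSource(stream);
        // connect to an analyser, effects chain, or recorder from here
        micSource.connect(audioCtx.destination);
    })
    .catch((err) => {
        console.error('Could not access the microphone:', err);
    });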

+ +

The last way is to generate your own sound, which can be done with either an {{domxref("OscillatorNode")}} or by creating a buffer and populating it with your own data. Check out the tutorial here for creating your own instrument for information on creating sounds with oscillators and buffers.
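The oscillator route can be as small as this sketch:

const osc = audioCtx.createOscillator();
osc.type = 'sawtooth';
osc.frequency.value = 220; // A3
osc.connect(audioCtx.destination);
osc.start();
osc.stop(audioCtx.currentTime + 1); // play for one second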

+ +

Cross browser & legacy support

+ +

The Web Audio API specification is constantly evolving and like most things on the web, there are some issues with it working consistently across browsers. Here we'll look at options for getting around cross-browser problems.

+ +

There's the standardised-audio-context npm package, which creates API functionality consistently across browsers, filling holes as they are found. It's constantly in development and endeavours to keep up with the current specification.

+ +

There is also the option of libraries, of which there are a few depending on your use case. For a good all-rounder, howler.js is a good choice. It has cross-browser support and provides a useful subset of functionality. Although it doesn't harness the full gamut of filters and other effects the Web Audio API comes with, you can do most of what you'd want to do.

+ +

If you are looking for sound creation or a more instrument-based option, tone.js is a great library. It provides advanced scheduling capabilities, synths, and effects, and intuitive musical abstractions built on top of the Web Audio API.

+ +

R-audio, from the BBC's Research & Development department, is a library of React components aiming to provide a "more intuitive, declarative interface to Web Audio". If you're used to writing JSX it might be worth looking at.

+ +

Autoplay policy

+ +

Browsers have started to implement an autoplay policy, which in general can be summed up as:

+ +
+

"Create or resume context from inside a user gesture".

+
+ +

But what does that mean in practice? A user gesture has been interpreted to mean a user-initiated event, normally a click event. Browser vendors decided that Web Audio contexts should not be allowed to automatically play audio; they should instead be started by a user. This is because autoplaying audio can be really annoying and obtrusive. But how do we handle this?

+ +

When you create an audio context (either offline or online) it is created with a state, which can be suspended, running, or closed.

+ +

When working with an {{domxref("AudioContext")}}, if you create the audio context from inside a click event the state should automatically be set to running. Here is a simple example of creating the context from inside a click event:

+ +
const button = document.querySelector('button');
+button.addEventListener('click', function() {
+    const audioCtx = new AudioContext();
+}, false);
+
+ +

If, however, you create the context outside of a user gesture, its state will be set to suspended and it will need to be started after user interaction. We can use the same click event example here, test for the state of the context and start it, if it is suspended, using the resume() method.

+ +
const audioCtx = new AudioContext();
+const button = document.querySelector('button');
+
+button.addEventListener('click', function() {
+      // check if context is in suspended state (autoplay policy)
+    if (audioCtx.state === 'suspended') {
+        audioCtx.resume();
+    }
+}, false);
+
+ +

You might instead be working with an {{domxref("OfflineAudioContext")}}, in which case you can resume the suspended audio context with the startRendering() method.

+ +

User control

+ +

If your website or application contains sound, you should allow the user control over it, otherwise again, it will become annoying. This can be achieved by play/stop and volume/mute controls. The Using the Web Audio API tutorial goes over how to do this.

+ +

If you have buttons that switch audio on and off, using the ARIA role="switch" attribute on them is a good option for signalling to assistive technology what the button's exact purpose is, and therefore making the app more accessible. There's a demo of how to use it here.
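A sketch of wiring that up; the button id and the mute logic here are illustrative only:

// markup: <button id="mute" role="switch" aria-checked="false">Sound off</button>
const muteButton = document.querySelector('#mute');

muteButton.addEventListener('click', function() {
    const isOn = this.getAttribute('aria-checked') === 'true';
    this.setAttribute('aria-checked', isOn ? 'false' : 'true');
    // toggle a GainNode's gain, or suspend/resume the context, here
});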

+ +

As you work with a lot of changing values within the Web Audio API and will want to provide users with control over these, the range input is often a good choice of control to use. It's a good option as you can set minimum and maximum values, as well as increments with the step attribute.

+ +

Setting AudioParam values

+ +

There are two ways to manipulate {{domxref("AudioNode")}} values, which are themselves objects of the {{domxref("AudioParam")}} type. The first is to set the value directly via the property. So, for instance, if we want to change the gain value of a {{domxref("GainNode")}} we would do so like this:

+ +
gainNode.gain.value = 0.5;
+
+ +

This will set our volume to half. However, if you're using any of the AudioParam's defined methods to set these values, they will take precedence over the above property setting. If for example, you want the gain value to be raised to 1 in 2 seconds time, you can do this:

+ +
gainNode.gain.setValueAtTime(1, audioCtx.currentTime + 2);
+
+ +

It will override the previous example (as it should), even if it were to come later in your code.

+ +

Bearing this in mind, if your website or application requires timing and scheduling, it's best to stick with the {{domxref("AudioParam")}} methods for setting values. If you're sure it doesn't, setting it with the value property is fine.

diff --git a/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg new file mode 100644 index 0000000000..0490cddbe5 --- /dev/null +++ b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg @@ -0,0 +1 @@ +ConstantSourceNodeGainNodeGainNodeStereoPannerNodegainpangaininput = Noutput = Noutput = Noutput = N \ No newline at end of file diff --git a/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html new file mode 100644 index 0000000000..5fdd188213 --- /dev/null +++ b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html @@ -0,0 +1,284 @@ +--- +title: Controlling multiple parameters with ConstantSourceNode +slug: Web/API/Web_Audio_API/Controlling_multiple_parameters_with_ConstantSourceNode +tags: + - Audio + - Example + - Guide + - Intermediate + - Media + - Tutorial + - Web Audio + - Web Audio API +--- +
{{APIRef("Web Audio API")}}
+ +

This article demonstrates how to use a {{domxref("ConstantSourceNode")}} to link multiple parameters together so they share the same value, which can be changed by setting the value of the {{domxref("ConstantSourceNode.offset")}} parameter.

+ +

You may have times when you want to have multiple audio parameters be linked so they share the same value even while being changed in some way. For example, perhaps you have a set of oscillators, and two of them need to share the same, configurable volume, or you have a filter that's been applied to certain inputs but not to all of them. You could use a loop and change the value of each affected {{domxref("AudioParam")}} one at a time, but there are two drawbacks to doing it that way: first, that's extra code that, as you're about to see, you don't have to write; and second, that loop uses valuable CPU time on your thread (likely the main thread), and there's a way to offload all that work to the audio rendering thread, which is optimized for this kind of work and may run at a more appropriate priority level than your code.
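For comparison, the loop-based approach described above might look like this sketch, where linkedParams is a hypothetical array of the {{domxref("AudioParam")}}s you want to keep in sync:

// the manual approach: update every linked parameter yourself
const linkedParams = [gainNode2.gain, gainNode3.gain, pannerNode.pan];

function setLinkedValue(value) {
    for (const param of linkedParams) {
        param.value = value;
    }
}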

+ +

The solution is simple, and it involves using an audio node type which, at first glance, doesn't look all that useful: {{domxref("ConstantSourceNode")}}.

+ +

The technique

+ +

This is actually a really easy way to do something that sounds like it might be hard to do. You need to create a {{domxref("ConstantSourceNode")}} and connect it to all of the {{domxref("AudioParam")}}s whose values should be linked to always match each other. Since ConstantSourceNode's {{domxref("ConstantSourceNode.offset", "offset")}} value is sent straight through to all of its outputs, it acts as a splitter for that value, sending it to each connected parameter.

+ +

The diagram below shows how this works; an input value, N, is set as the value of the {{domxref("ConstantSourceNode.offset")}} property. The ConstantSourceNode can have as many outputs as necessary; in this case, we've connected it to three nodes: two {{domxref("GainNode")}}s and a {{domxref("StereoPannerNode")}}. So N becomes the value of the specified parameter ({{domxref("GainNode.gain", "gain")}} for the {{domxref("GainNode")}}s and pan for the {{domxref("StereoPannerNode")}}.

+ +

Diagram in SVG showing how ConstantSourceNode can be used to split an input parameter to share it with multiple nodes.

+ +

As a result, every time you change N (the value of the input {{domxref("AudioParam")}}), the values of the two GainNodes' gain properties and the value of the StereoPannerNode's pan property are all set to N as well.
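Boiled down, the technique itself is only a few lines. Here is a sketch that mirrors the diagram, using made-up node names (context is an existing {{domxref("AudioContext")}}):

const constantNode = context.createConstantSource();
constantNode.offset.value = 0.5;       // the shared value, N

constantNode.connect(gainNode2.gain);  // link as many AudioParams as needed
constantNode.connect(gainNode3.gain);
constantNode.connect(pannerNode.pan);
constantNode.start();

// later, a single assignment updates every linked parameter at once
constantNode.offset.value = 0.25;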

+ +

Example

+ +

Let's take a look at this technique in action. In this simple example, we create three {{domxref("OscillatorNode")}}s. Two of them have adjustable gain, controlled using a shared input control. The other oscillator has a fixed volume.

+ +

HTML

+ +

The HTML content for this example is primarily a button to toggle the oscillator tones on and off and an {{HTMLElement("input")}} element of type range to control the volume of two of the three oscillators.

+ +
<div class="controls">
+  <div class="left">
+    <div id="playButton" class="button">
+      ▶️
+    </div>
+  </div>
+  <div class="right">
+    <span>Volume: </span>
+    <input type="range" min="0.0" max="1.0" step="0.01"
+        value="0.8" name="volume" id="volumeControl">
+  </div>
+</div>
+
+<p>Use the button above to start and stop the tones, and the volume control to
+change the volume of the notes E and G in the chord.</p>
+ + + +

JavaScript

+ +

Now let's take a look at the JavaScript code, a piece at a time.

+ +

Setting up

+ +

Let's start by looking at the global variable initialization.

+ +
let context = null;
+
+let playButton = null;
+let volumeControl = null;
+
+let oscNode1 = null;
+let oscNode2 = null;
+let oscNode3 = null;
+let constantNode = null;
+let gainNode1 = null;
+let gainNode2 = null;
+let gainNode3 = null;
+
+let playing = false;
+ +

These variables are:

+ +
+
context
+
The {{domxref("AudioContext")}} in which all the audio nodes live.
+
playButton and volumeControl
+
References to the play button and volume control elements.
+
oscNode1, oscNode2, and oscNode3
+
The three {{domxref("OscillatorNode")}}s used to generate the chord.
+
gainNode1, gainNode2, and gainNode3
+
The three {{domxref("GainNode")}} instances which provide the volume levels for each of the three oscillators. gainNode2 and gainNode3 will be linked together to have the same, adjustable, value using the {{domxref("ConstantSourceNode")}}.
+
constantNode
+
The {{domxref("ConstantSourceNode")}} used to control the values of gainNode2 and gainNode3 together.
+
playing
+
A {{jsxref("Boolean")}} that we'll use to keep track of whether or not we're currently playing the tones.
+
+ +

Now let's look at the setup() function, which is our handler for the window's {{event("load")}} event; it handles all the initialization tasks that require the DOM to be in place.

+ +
function setup() {
+  context = new (window.AudioContext || window.webkitAudioContext)();
+
+  playButton = document.querySelector("#playButton");
+  volumeControl = document.querySelector("#volumeControl");
+
+  playButton.addEventListener("click", togglePlay, false);
+  volumeControl.addEventListener("input", changeVolume, false);
+
+  gainNode1 = context.createGain();
+  gainNode1.gain.value = 0.5;
+
+  gainNode2 = context.createGain();
+  gainNode3 = context.createGain();
+  gainNode2.gain.value = gainNode1.gain.value;
+  gainNode3.gain.value = gainNode1.gain.value;
+  volumeControl.value = gainNode1.gain.value;
+
+  constantNode = context.createConstantSource();
+  constantNode.connect(gainNode2.gain);
+  constantNode.connect(gainNode3.gain);
+  constantNode.start();
+
+  gainNode1.connect(context.destination);
+  gainNode2.connect(context.destination);
+  gainNode3.connect(context.destination);
+}
+
+window.addEventListener("load", setup, false);
+
+ +

First, we get access to the window's {{domxref("AudioContext")}}, stashing the reference in context. Then we get references to the control widgets, setting playButton to reference the play button and volumeControl to reference the slider control that the user will use to adjust the gain on the linked pair of oscillators.

+ +

Then we assign a handler for the play button's {{event("click")}} event (see {{anch("Toggling the oscillators on and off")}} for more on the togglePlay() method), and for the volume slider's {{event("input")}} event (see {{anch("Controlling the linked oscillators")}} to see the very short changeVolume() method).

+ +

Next, the {{domxref("GainNode")}} gainNode1 is created to handle the volume for the non-linked oscillator (oscNode1). We set that gain to 0.5. We also create gainNode2 and gainNode3, setting their values to match gainNode1, then set the value of the volume slider to the same value, so it is synchronized with the gain level it controls.

+ +

Once all the gain nodes are created, we create the {{domxref("ConstantSourceNode")}}, constantNode. We connect its output to the gain {{domxref("AudioParam")}} on both gainNode2 and gainNode3, and we start the constant node running by calling its {{domxref("AudioScheduledSourceNode/start", "start()")}} method; now it's sending the value 0.5 to the two gain nodes' values, and any change to {{domxref("ConstantSourceNode.offset", "constantNode.offset")}} will automatically set the gain of both gainNode2 and gainNode3 (affecting their audio inputs as expected).

+ +

Finally, we connect all the gain nodes to the {{domxref("AudioContext")}}'s {{domxref("BaseAudioContext/destination", "destination")}}, so that any sound delivered to the gain nodes will reach the output, whether that output be speakers, headphones, a recording stream, or any other destination type.

+ +

After setting the window's {{event("load")}} event handler to be the setup() function, the stage is set. Let's see how the action plays out.

+ +

Toggling the oscillators on and off

+ +

Because {{domxref("OscillatorNode")}} doesn't support the notion of being in a paused state, we have to simulate it by terminating the oscillators and starting them again when the play button is clicked again to toggle them back on. Let's look at the code.

+ +
function togglePlay(event) {
+  if (playing) {
+    playButton.textContent = "▶️";
+    stopOscillators();
+  } else {
+    playButton.textContent = "⏸";
+    startOscillators();
+  }
+}
+ +

If the playing variable indicates we're already playing the oscillators, we change the playButton's content to be the Unicode character "right-pointing triangle" (▶️) and call stopOscillators() to shut down the oscillators. See {{anch("Stopping the oscillators")}} below for that code.

+ +

If playing is false, indicating that we're currently paused, we change the play button's content to be the Unicode character "pause symbol" (⏸) and call startOscillators() to start the oscillators playing their tones. That code is covered under {{anch("Starting the oscillators")}} below.

+ +

Controlling the linked oscillators

+ +

The changeVolume() function—the event handler for the slider control for the gain on the linked oscillator pair—looks like this:

+ +
function changeVolume(event) {
+  constantNode.offset.value = volumeControl.value;
+}
+ +

That simple function controls the gain on both nodes. All we have to do is set the value of the {{domxref("ConstantSourceNode")}}'s {{domxref("ConstantSourceNode.offset", "offset")}} parameter. That value becomes the node's constant output value, which is fed into all of its outputs, which are, as set above, gainNode2 and gainNode3.

+ +

While this is an extremely simple example, imagine having a 32-oscillator synthesizer with multiple linked parameters in play across a number of patched nodes. Being able to shorten the number of operations needed to adjust them all will prove invaluable for both code size and performance.

+ +

Starting the oscillators

+ +

When the user clicks the play/pause toggle button while the oscillators aren't playing, the startOscillators() function gets called.

+ +
function startOscillators() {
+  oscNode1 = context.createOscillator();
+  oscNode1.type = "sine";
+  oscNode1.frequency.value = 261.625565300598634; // middle C
+  oscNode1.connect(gainNode1);
+
+  oscNode2 = context.createOscillator();
+  oscNode2.type = "sine";
+  oscNode2.frequency.value = 329.627556912869929; // E
+  oscNode2.connect(gainNode2);
+
+  oscNode3 = context.createOscillator();
+  oscNode3.type = "sine";
+  oscNode3.frequency.value = 391.995435981749294 // G
+  oscNode3.connect(gainNode3);
+
+  oscNode1.start();
+  oscNode2.start();
+  oscNode3.start();
+
+  playing = true;
+}
+ +

Each of the three oscillators is set up the same way:

+ +
1. Create the {{domxref("OscillatorNode")}} by calling {{domxref("BaseAudioContext.createOscillator")}}.
2. Set the oscillator's type to "sine" to use a sine wave as the audio waveform.
3. Set the oscillator's frequency to the desired value; in this case, oscNode1 is set to a middle C, while oscNode2 and oscNode3 round out the chord by playing the E and G notes.
4. Connect the new oscillator to the corresponding gain node.
+ +

Once all three oscillators have been created, they're started by calling each one's {{domxref("AudioScheduledSourceNode.start", "start()")}} method in turn, and playing is set to true to track that the tones are playing.

+ +

Stopping the oscillators

+ +

Stopping the oscillators when the user toggles the play state to pause the tones is as simple as stopping each node.

+ +
function stopOscillators() {
+  oscNode1.stop();
+  oscNode2.stop();
+  oscNode3.stop();
+  playing = false;
+}
+ +

Each node is stopped by calling its {{domxref("AudioScheduledSourceNode.stop", "stop()")}} method, then playing is set to false.

+ +

Result

+ +

{{ EmbedLiveSample('Example', 600, 200) }}

+ +

See also

+ + diff --git a/files/ko/web/api/web_audio_api/index.html b/files/ko/web/api/web_audio_api/index.html index a6f2a443d1..1ccd2526b3 100644 --- a/files/ko/web/api/web_audio_api/index.html +++ b/files/ko/web/api/web_audio_api/index.html @@ -3,11 +3,11 @@ title: Web Audio API slug: Web/API/Web_Audio_API translation_of: Web/API/Web_Audio_API --- -
+
{{DefaultAPISidebar("Web Audio API")}}
+

Web Audio API는 웹에서 오디오를 제어하기 위한 강력하고 다양한 기능을 제공합니다. Web Audio API를 이용하면 오디오 소스를 선택할 수 있도록 하거나, 오디오에 이펙트를 추가하거나, 오디오를 시각화하거나, 패닝과 같은 공간 이펙트를 적용시키는 등의 작업이 가능합니다.

-
-

Web audio의 개념과 사용법

+

Web audio의 개념과 사용법

Web Audio API는 오디오 컨텍스트 내부의 오디오 조작을 핸들링하는 것을 포함하며, 모듈러 라우팅을 허용하도록 설계되어 있습니다. 기본적인 오디오 연산은 오디오 노드를 통해 수행되며, 오디오 노드는 서로 연결되어 오디오 라우팅 그래프를 형성합니다. 서로 다른 타입의 채널 레이아웃을 포함한 다수의 오디오 소스는 단일 컨텍스트 내에서도 지원됩니다. 이 모듈식 설계는 역동적이고 복합적인 오디오 기능 생성을 위한 유연성을 제공합니다.

@@ -18,24 +18,24 @@ translation_of: Web/API/Web_Audio_API

웹 오디오의 간단하고 일반적인 작업 흐름은 다음과 같습니다 :

    -
  1. 오디오 컨텍스트를 생성합니다.
  2. -
  3. 컨텍스트 내에 소스를 생성합니다.(ex - <audio>, 발진기, 스트림)
  4. -
  5. 이펙트 노드를 생성합니다. (ex - 잔향 효과,  바이쿼드 필터, 패너, 컴프레서 등)
  6. -
  7. 오디오의 최종 목적지를 선택합니다. (ex - 시스템 스피커)
  8. -
  9. 사운드를 이펙트에 연결하고, 이펙트를 목적지에 연결합니다.
  10. +
  11. 오디오 컨텍스트를 생성합니다.
  12. +
  13. 컨텍스트 내에 소스를 생성합니다.(ex - <audio>, 발진기, 스트림)
  14. +
  15. 이펙트 노드를 생성합니다. (ex - 잔향 효과,  바이쿼드 필터, 패너, 컴프레서 등)
  16. +
  17. 오디오의 최종 목적지를 선택합니다. (ex - 시스템 스피커)
  18. +
  19. 사운드를 이펙트에 연결하고, 이펙트를 목적지에 연결합니다.
-

A simple box diagram with an outer box labeled Audio context, and three inner boxes labeled Sources, Effects and Destination. The three inner boxes have arrow between them pointing from left to right, indicating the flow of audio information.

+

오디오 컨텍스트라고 쓰여진 외부 박스와, 소스, 이펙트, 목적지라고 쓰여진 세 개의 내부 박스를 가진 간단한 박스 다이어그램. 세 개의 내부 박스는 사이에 좌에서 우를 가리키는 화살표를 가지고 있는데, 이는 오디오 정보의 흐름을 나타냅니다.

높은 정확도와 적은 지연시간을 가진 타이밍 계산 덕분에, 개발자는 높은 샘플 레이트에서도 특정 샘플을 대상으로 이벤트에 정확하게 응답하는 코드를 작성할 수 있습니다. 따라서 드럼 머신이나 시퀀서 등의 어플리케이션은 충분히 구현 가능합니다.

Web Audio API는 오디오가 어떻게 공간화될지 컨트롤할 수 있도록 합니다. 소스-리스너 모델을 기반으로 하는 시스템을 사용하면 패닝 모델거리-유도 감쇄 혹은 움직이는 소스(혹은 움직이는 청자)를 통해 유발된 도플러 시프트 컨트롤이 가능합니다.

-

Basic concepts behind Web Audio API 아티클에서 Web Audio API 이론에 대한 더 자세한 내용을 읽을 수 있습니다.

+

Web Audio API의 기본 개념 문서에서 Web Audio API 이론에 대한 더 자세한 내용을 읽을 수 있습니다.

-

Web Audio API 타겟 사용자층

+

Web Audio API 타겟 사용자층

오디오나 음악 용어에 익숙하지 않은 사람은 Web Audio API가 막막하게 느껴질 수 있습니다. 또한 Web Audio API가 굉장히 다양한 기능을 제공하는 만큼 개발자로서는 시작하기 어렵게 느껴질 수 있습니다.

@@ -47,74 +47,80 @@ translation_of: Web/API/Web_Audio_API

코드를 작성하는 것은 카드 게임과 비슷합니다. 규칙을 배우고, 플레이합니다. 모르겠는 규칙은 다시 공부하고, 다시 새로운 판을 합니다. 마찬가지로, 이 문서와 첫 튜토리얼에서 설명하는 것만으로 부족하다고 느끼신다면 첫 튜토리얼의 내용을 보충하는 동시에 여러 테크닉을 이용하여 스텝 시퀀서를 만드는 법을 설명하는 상급자용 튜토리얼을 읽어보시는 것을 추천합니다.

-

그 외에도 이 페이지의 사이드바에서 API의 모든 기능을 설명하는 참고자료와 다양한 튜토리얼을 찾아 보실 수 있습니다.

+

그 외에도 이 페이지의 사이드바에서 API의 모든 기능을 설명하는 참고자료와 다양한 자습서를 찾아 보실 수 있습니다.

만약에 프로그래밍보다는 음악이 친숙하고, 음악 이론에 익숙하며, 악기를 만들고 싶으시다면 바로 상급자용 튜토리얼부터 시작하여 여러가지를 만들기 시작하시면 됩니다. 위의 튜토리얼은 음표를 배치하는 법, 저주파 발진기 등 맞춤형 Oscillator(발진기)와 Envelope를 설계하는 법 등을 설명하고 있으니, 이를 읽으며 사이드바의 자료를 참고하시면 될 것입니다.

프로그래밍에 전혀 익숙하지 않으시다면 자바스크립트 기초 튜토리얼을 먼저 읽고 이 문서를 다시 읽으시는 게 나을 수도 있습니다. 모질라의 자바스크립트 기초만큼 좋은 자료도 몇 없죠.

-

Web Audio API Interfaces

+

Web Audio API 인터페이스

Web Audio API는 다양한 인터페이스와 연관 이벤트를 가지고 있으며, 이는 9가지의 기능적 범주로 나뉩니다.

-

일반 오디오 그래프 정의

+

일반 오디오 그래프 정의

Web Audio API 사용범위 내에서 오디오 그래프를 형성하는 일반적인 컨테이너와 정의입니다.

-
{{domxref("AudioContext")}}
-
AudioContext 인터페이스는 오디오 모듈이 서로 연결되어 구성된 오디오 프로세싱 그래프를 표현하며, 각각의 그래프는 {{domxref("AudioNode")}}로 표현됩니다. AudioContext는 자신이 가지고 있는 노드의 생성과 오디오 프로세싱 혹은 디코딩의 실행을 제어합니다. 어떤 작업이든 시작하기 전에 AudioContext를 생성해야 합니다. 모든 작업은 컨텍스트 내에서 이루어집니다.
-
{{domxref("AudioNode")}}
-
AudioNode 인터페이스는 오디오 소스({{HTMLElement("audio")}}나 {{HTMLElement("video")}}엘리먼트), 오디오 목적지, 중간 처리 모듈({{domxref("BiquadFilterNode")}}이나 {{domxref("GainNode")}})과 같은 오디오 처리 모듈을 나타냅니다.
-
{{domxref("AudioParam")}}
-
AudioParam 인터페이스는 {{domxref("AudioNode")}}중 하나와 같은 오디오 관련 파라미터를 나타냅니다. 이는 특정 값 또는 값 변경으로 세팅되거나, 특정 시간에 발생하고 특정 패턴을 따르도록 스케쥴링할 수 있습니다.
-
The {{event("ended")}} event
-
-

ended 이벤트는 미디어의 끝에 도달하여 재생이 정지되면 호출됩니다.

-
+
{{domxref("AudioContext")}}
+
AudioContext 인터페이스는 오디오 모듈이 서로 연결되어 구성된 오디오 프로세싱 그래프를 표현하며, 각각의 그래프는 {{domxref("AudioNode")}}로 표현됩니다. AudioContext는 자신이 가지고 있는 노드의 생성과 오디오 프로세싱 혹은 디코딩의 실행을 제어합니다. 어떤 작업이든 시작하기 전에 AudioContext를 생성해야 합니다. 모든 작업은 컨텍스트 내에서 이루어집니다.
+
{{domxref("AudioNode")}}
+
AudioNode 인터페이스는 오디오 소스({{HTMLElement("audio")}}나 {{HTMLElement("video")}} 요소), 오디오 목적지, 중간 처리 모듈({{domxref("BiquadFilterNode")}}이나 {{domxref("GainNode")}})과 같은 오디오 처리 모듈을 나타냅니다.
+
{{domxref("AudioParam")}}
+
AudioParam 인터페이스는 {{domxref("AudioNode")}}중 하나와 같은 오디오 관련 파라미터를 나타냅니다. 이는 특정 값 또는 값 변경으로 세팅되거나, 특정 시간에 발생하고 특정 패턴을 따르도록 스케쥴링할 수 있습니다.
+
{{domxref("AudioParamMap")}}
+
{{domxref("AudioParam")}} 인터페이스 그룹에 maplike 인터페이스를 제공하는데, 이는 forEach(), get(), has(), keys()values() 메서드와 size 속성이 제공된다는 것을 의미합니다.
+
{{domxref("BaseAudioContext")}}
+
BaseAudioContext 인터페이스는 온라인과 오프라인 오디오 프로세싱 그래프에 대한 기본 정의로서 동작하는데, 이는 각각 {{domxref("AudioContext")}} 와 {{domxref("OfflineAudioContext")}}로 대표됩니다. BaseAudioContext는 직접 쓰여질 수 없습니다 — 이 두 가지 상속되는 인터페이스 중 하나를 통해 이것의 기능을 사용할 수 있습니다.
+
The {{event("ended")}} event
+

ended 이벤트는 미디어의 끝에 도달하여 재생이 정지되면 호출됩니다.

-

오디오 소스 정의하기

+

오디오 소스 정의하기

Web Audio API에서 사용하기 위한 오디오 소스를 정의하는 인터페이스입니다.

-
{{domxref("OscillatorNode")}}
-
OscillatorNode 인터페이스는 삼각파 또는 사인파와 같은 주기적 파형을 나타냅니다. 이것은 주어진 주파수의 파동을 생성하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.
-
{{domxref("AudioBuffer")}}
-
AudioBuffer 인터페이스는 {{ domxref("AudioContext.decodeAudioData()") }}메소드를 사용해 오디오 파일에서 생성되거나 {{ domxref("AudioContext.createBuffer()") }}를 사용해 로우 데이터로부터 생성된 메모리상에 적재되는 짧은 오디오 자원을 나타냅니다. 이 형식으로 디코딩된 오디오는 {{ domxref("AudioBufferSourceNode") }}에 삽입될 수 있습니다.
-
{{domxref("AudioBufferSourceNode")}}
-
AudioBufferSourceNode 인터페이스는 {{domxref("AudioBuffer")}}에 저장된 메모리상의 오디오 데이터로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.
-
{{domxref("MediaElementAudioSourceNode")}}
-
MediaElementAudioSourceNode 인터페이스는 {{ htmlelement("audio") }} 나 {{ htmlelement("video") }} HTML 엘리먼트로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.
-
{{domxref("MediaStreamAudioSourceNode")}}
-
MediaStreamAudioSourceNode 인터페이스는 WebRTC {{domxref("MediaStream")}}(웹캡, 마이크 혹은 원격 컴퓨터에서 전송된 스트림)으로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.
+
{{domxref("AudioScheduledSourceNode")}}
+
AudioScheduledSourceNode는 오디오 소스 노드 인터페이스의 몇 가지 유형에 대한 부모 인터페이스입니다. 이것은 {{domxref("AudioNode")}}입니다.
+
{{domxref("OscillatorNode")}}
+
OscillatorNode 인터페이스는 삼각파 또는 사인파와 같은 주기적 파형을 나타냅니다. 이것은 주어진 주파수의 파동을 생성하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.
+
{{domxref("AudioBuffer")}}
+
AudioBuffer 인터페이스는 {{ domxref("AudioContext.decodeAudioData()") }}메소드를 사용해 오디오 파일에서 생성되거나 {{ domxref("AudioContext.createBuffer()") }}를 사용해 로우 데이터로부터 생성된 메모리상에 적재되는 짧은 오디오 자원을 나타냅니다. 이 형식으로 디코딩된 오디오는 {{ domxref("AudioBufferSourceNode") }}에 삽입될 수 있습니다.
+
{{domxref("AudioBufferSourceNode")}}
+
AudioBufferSourceNode 인터페이스는 {{domxref("AudioBuffer")}}에 저장된 메모리상의 오디오 데이터로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.
+
{{domxref("MediaElementAudioSourceNode")}}
+
MediaElementAudioSourceNode 인터페이스는 {{ htmlelement("audio") }} 나 {{ htmlelement("video") }} HTML 엘리먼트로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.
+
{{domxref("MediaStreamAudioSourceNode")}}
+
MediaStreamAudioSourceNode 인터페이스는 WebRTC {{domxref("MediaStream")}}(웹캠, 마이크 혹은 원격 컴퓨터에서 전송된 스트림)으로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.
+
{{domxref("MediaStreamTrackAudioSourceNode")}}
+
{{domxref("MediaStreamTrackAudioSourceNode")}} 유형의 노드는 데이터가 {{domxref("MediaStreamTrack")}}로부터 오는 오디오 소스를 표현합니다. 이 노드를 생성하기 위해 {{domxref("AudioContext.createMediaStreamTrackSource", "createMediaStreamTrackSource()")}} 메서드를 사용하여 이 노드를 생성할 때, 여러분은 어떤 트랙을 사용할 지 명시합니다. 이것은 MediaStreamAudioSourceNode보다 더 많은 제어를 제공합니다.
-

오디오 이펙트 필터 정의하기

+

오디오 이펙트 필터 정의하기

오디오 소스에 적용할 이펙트를 정의하는 인터페이스입니다.

-
{{domxref("BiquadFilterNode")}}
-
BiquadFilterNode 인터페이스는 간단한 하위 필터를 나타냅니다. 이것은 여러 종류의 필터나 톤 제어 장치 혹은 그래픽 이퀄라이저를 나타낼 수 있는 {{domxref("AudioNode")}}입니다. BiquadFilterNode는 항상 단 하나의 입력과 출력만을 가집니다. 
-
{{domxref("ConvolverNode")}}
-
ConvolverNode 인터페이스는 주어진 {{domxref("AudioBuffer")}}에 선형 콘볼루션을 수행하는 {{domxref("AudioNode")}}이며, 리버브 이펙트를 얻기 위해 자주 사용됩니다. 
-
{{domxref("DelayNode")}}
-
DelayNode 인터페이스는 지연선을 나타냅니다. 지연선은 입력 데이터가 출력에 전달되기까지의 사이에 딜레이를 발생시키는 {{domxref("AudioNode")}} 오디오 처리 모듈입니다.
-
{{domxref("DynamicsCompressorNode")}}
-
DynamicsCompressorNode 인터페이스는 압축 이펙트를 제공합니다, 이는 신호의 가장 큰 부분의 볼륨을 낮추어 여러 사운드를 동시에 재생할 때 발생할 수 있는 클리핑 및 왜곡을 방지합니다.
-
{{domxref("GainNode")}}
-
GainNode 인터페이스는 음량의 변경을 나타냅니다. 이는 출력에 전달되기 전의 입력 데이터에 주어진 음량 조정을 적용하기 위한 {{domxref("AudioNode")}} 오디오 모듈입니다.
-
{{domxref("StereoPannerNode")}}
-
StereoPannerNode 인터페이스는 오디오 스트림을 좌우로 편향시키는데 사용될 수 있는 간단한 스테레오 패너 노드를 나타냅니다.
-
{{domxref("WaveShaperNode")}}
-
WaveShaperNode 인터페이스는 비선형 왜곡을 나타냅니다. 이는 곡선을 사용하여 신호의 파형 형성에 왜곡을 적용하는 {{domxref("AudioNode")}}입니다. 분명한 왜곡 이펙트 외에도 신호에 따뜻한 느낌을 더하는데 자주 사용됩니다.
-
{{domxref("PeriodicWave")}}
-
{{domxref("OscillatorNode")}}의 출력을 형성하는데 사용될 수 있는 주기적 파형을 설명합니다.
+
{{domxref("BiquadFilterNode")}}
+
BiquadFilterNode 인터페이스는 간단한 하위 필터를 나타냅니다. 이것은 여러 종류의 필터나 톤 제어 장치 혹은 그래픽 이퀄라이저를 나타낼 수 있는 {{domxref("AudioNode")}}입니다. BiquadFilterNode는 항상 단 하나의 입력과 출력만을 가집니다. 
+
{{domxref("ConvolverNode")}}
+
ConvolverNode 인터페이스는 주어진 {{domxref("AudioBuffer")}}에 선형 콘볼루션을 수행하는 {{domxref("AudioNode")}}이며, 리버브 이펙트를 얻기 위해 자주 사용됩니다. 
+
{{domxref("DelayNode")}}
+
DelayNode 인터페이스는 지연선을 나타냅니다. 지연선은 입력 데이터가 출력에 전달되기까지의 사이에 딜레이를 발생시키는 {{domxref("AudioNode")}} 오디오 처리 모듈입니다.
+
{{domxref("DynamicsCompressorNode")}}
+
DynamicsCompressorNode 인터페이스는 압축 이펙트를 제공하는데, 이는 신호의 가장 큰 부분의 볼륨을 낮추어 여러 사운드를 동시에 재생할 때 발생할 수 있는 클리핑 및 왜곡을 방지합니다.
+
{{domxref("GainNode")}}
+
GainNode 인터페이스는 음량의 변경을 나타냅니다. 이는 출력에 전달되기 전의 입력 데이터에 주어진 음량 조정을 적용하기 위한 {{domxref("AudioNode")}} 오디오 모듈입니다.
+
{{domxref("WaveShaperNode")}}
+
WaveShaperNode 인터페이스는 비선형 왜곡을 나타냅니다. 이는 곡선을 사용하여 신호의 파형 형성에 왜곡을 적용하는 {{domxref("AudioNode")}}입니다. 분명한 왜곡 이펙트 외에도 신호에 따뜻한 느낌을 더하는데 자주 사용됩니다.
+
{{domxref("PeriodicWave")}}
+
{{domxref("OscillatorNode")}}의 출력을 형성하는데 사용될 수 있는 주기적 파형을 설명합니다.
+
{{domxref("IIRFilterNode")}}
+
일반적인 infinite impulse response (IIR) 필터를 구현합니다; 이 유형의 필터는 음색 제어 장치와 그래픽 이퀄라이저를 구현하는 데 사용될 수 있습니다.
-

오디오 목적지 정의하기

+

오디오 목적지 정의하기

처리된 오디오를 어디에 출력할지 정의하는 인터페이스입니다.

@@ -122,347 +128,152 @@ translation_of: Web/API/Web_Audio_API
{{domxref("AudioDestinationNode")}}
AudioDestinationNode 인터페이스는 주어진 컨텍스트 내의 오디오 소스의 최종 목적지를 나타냅니다. 주로 기기의 스피커로 출력할 때 사용됩니다.
{{domxref("MediaStreamAudioDestinationNode")}}
-
MediaStreamAudioDestinationNode 인터페이스는 단일 AudioMediaStreamTrack 을 가진 WebRTC {{domxref("MediaStream")}}로 구성된 오디오 목적지를 나타내며, 이는 {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}에서 얻은 {{domxref("MediaStream")}}과 비슷한 방식으로 사용할 수 있습니다. 이것은 오디오 목적지 역할을 하는 {{domxref("AudioNode")}}입니다.
+
MediaStreamAudioDestinationNode 인터페이스는 단일 AudioMediaStreamTrack 을 가진 WebRTC {{domxref("MediaStream")}}로 구성된 오디오 목적지를 나타내며, 이는 {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}에서 얻은 {{domxref("MediaStream")}}과 비슷한 방식으로 사용할 수 있습니다. 이것은 오디오 목적지 역할을 하는 {{domxref("AudioNode")}}입니다.
-

데이터 분석 및 시각화

+

데이터 분석 및 시각화

오디오에서 재생시간이나 주파수 등의 데이터를 추출하기 위한 인터페이스입니다.

-
{{domxref("AnalyserNode")}}
-
AnalyserNode 인터페이스는 데이터를 분석하고 시각화하기 위한 실시간 주파수와 시간영역 분석 정보를 제공하는 노드를 나타냅니다.
+
{{domxref("AnalyserNode")}}
+
AnalyserNode 인터페이스는 데이터를 분석하고 시각화하기 위한 실시간 주파수와 시간영역 분석 정보를 제공하는 노드를 나타냅니다.
-

오디오 채널을 분리하고 병합하기

+

오디오 채널을 분리하고 병합하기

오디오 채널들을 분리하거나 병합하기 위한 인터페이스입니다.

-
{{domxref("ChannelSplitterNode")}}
-
ChannelSplitterNode 인터페이스는 오디오 소스의 여러 채널을 모노 출력 셋으로 분리합니다.
-
{{domxref("ChannelMergerNode")}}
-
ChannelMergerNode 인터페이스는 여러 모노 입력을 하나의 출력으로 재결합합니다. 각 입력은 출력의 채널을 채우는데 사용될 것입니다.
+
{{domxref("ChannelSplitterNode")}}
+
ChannelSplitterNode 인터페이스는 오디오 소스의 여러 채널을 모노 출력 셋으로 분리합니다.
+
{{domxref("ChannelMergerNode")}}
+
ChannelMergerNode 인터페이스는 여러 모노 입력을 하나의 출력으로 재결합합니다. 각 입력은 출력의 채널을 채우는데 사용될 것입니다.
-

오디오 공간화

+

오디오 공간화

오디오 소스에 오디오 공간화 패닝 이펙트를 추가하는 인터페이스입니다.

-
{{domxref("AudioListener")}}
-
AudioListener 인터페이스는 오디오 공간화에 사용되는 오디오 장면을 청취하는 고유한 시청자의 위치와 방향을 나타냅니다.
-
{{domxref("PannerNode")}}
-
PannerNode 인터페이스는 공간 내의 신호 양식을 나타냅니다. 이것은 자신의 오른손 직교 좌표 내의 포지션과, 속도 벡터를 이용한 움직임과, 방향성 원뿔을 이용한 방향을 서술하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.
-
- -

자바스크립트에서 오디오 처리하기

- -

자바스크립트에서 오디오 데이터를 처리하기 위한 코드를 작성할 수 있습니다. 이렇게 하려면 아래에 나열된 인터페이스와 이벤트를 사용하세요.

- -
-

이것은 Web Audio API 2014년 8월 29일의 스펙입니다. 이 기능은 지원이 중단되고 {{ anch("Audio_Workers") }}로 대체될 예정입니다.

-
- -
-
{{domxref("ScriptProcessorNode")}}
-
ScriptProcessorNode 인터페이스는 자바스크립트를 이용한 오디오 생성, 처리, 분석 기능을 제공합니다. 이것은 현재 입력 버퍼와 출력 버퍼, 총 두 개의 버퍼에 연결되는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다. {{domxref("AudioProcessingEvent")}}인터페이스를 구현하는 이벤트는 입력 버퍼에 새로운 데이터가 들어올 때마다 객체로 전달되고, 출력 버퍼가 데이터로 채워지면 이벤트 핸들러가 종료됩니다.
-
{{event("audioprocess")}} (event)
-
audioprocess 이벤트는 Web Audio API {{domxref("ScriptProcessorNode")}}의 입력 버퍼가 처리될 준비가 되었을 때 발생합니다.
-
{{domxref("AudioProcessingEvent")}}
-
Web Audio API AudioProcessingEvent 는 {{domxref("ScriptProcessorNode")}} 입력 버퍼가 처리될 준비가 되었을 때 발생하는 이벤트를 나타냅니다.
+
{{domxref("AudioListener")}}
+
AudioListener 인터페이스는 오디오 공간화에 사용되는 오디오 장면을 청취하는 고유한 시청자의 위치와 방향을 나타냅니다.
+
{{domxref("PannerNode")}}
+
PannerNode 인터페이스는 공간 내의 신호의 동작을 나타냅니다. 이것은 자신의 오른손 직교 좌표 내의 포지션과, 속도 벡터를 이용한 움직임과, 방향성 원뿔을 이용한 방향을 서술하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.
+
{{domxref("StereoPannerNode")}}
+
StereoPannerNode 인터페이스는 오디오 스트림을 좌우로 편향시키는데 사용될 수 있는 간단한 스테레오 패너 노드를 나타냅니다.
-

오프라인/백그라운드 오디오 처리하기

+

JavaScript에서의 오디오 프로세싱

-

다음을 이용해 백그라운드(장치의 스피커가 아닌 {{domxref("AudioBuffer")}}으로 렌더링)에서 오디오 그래프를 신속하게 처리/렌더링 할수 있습니다.

+

오디오 worklet을 사용하여, 여러분은 JavaScript 또는 WebAssembly로 작성된 사용자 정의 오디오 노드를 정의할 수 있습니다. 오디오 worklet은 {{domxref("Worklet")}} 인터페이스를 구현하는데, 이는 {{domxref("Worker")}} 인터페이스의 가벼운 버전입니다.

-
{{domxref("OfflineAudioContext")}}
-
OfflineAudioContext 인터페이스는 {{domxref("AudioNode")}}로 연결되어 구성된 오디오 프로세싱 그래프를 나타내는 {{domxref("AudioContext")}} 인터페이스입니다. 표준 AudioContext 와 대조적으로, OfflineAudioContext 는 실제로 오디오를 렌더링하지 않고 가능한 빨리 버퍼 내에서 생성합니다. 
-
{{event("complete")}} (event)
-
complete 이벤트는 {{domxref("OfflineAudioContext")}}의 렌더링이 종료될때 발생합니다.
-
{{domxref("OfflineAudioCompletionEvent")}}
-
OfflineAudioCompletionEvent 이벤트는 {{domxref("OfflineAudioContext")}} 의 처리가 종료될 때 발생하는 이벤트를 나타냅니다. {{event("complete")}} 이벤트는 이 이벤트를 구현합니다.
+
{{domxref("AudioWorklet")}}
+
AudioWorklet 인터페이스는 {{domxref("AudioContext")}} 객체의 {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}}을 통하여 사용 가능하고, 메인 스레드 밖에서 실행될 오디오 worklet에 모듈을 추가할 수 있게 합니다.
+
{{domxref("AudioWorkletNode")}}
+
AudioWorkletNode 인터페이스는 오디오 그래프에 임베드된 {{domxref("AudioNode")}}을 나타내고 해당하는 AudioWorkletProcessor에 메시지를 전달할 수 있습니다.
+
{{domxref("AudioWorkletProcessor")}}
+
AudioWorkletProcessor 인터페이스는 오디오를 직접 생성하거나, 처리하거나, 또는 분석하는 AudioWorkletGlobalScope에서 실행되는 오디오 프로세싱 코드를 나타내고, 해당하는 AudioWorkletNode에 메시지를 전달할 수 있습니다.
+
{{domxref("AudioWorkletGlobalScope")}}
+
AudioWorkletGlobalScope 인터페이스는 WorkletGlobalScope에서 파생된 객체로, 오디오 프로세싱 스크립트가 실행되는 워커 컨텍스트를 나타냅니다. 이것은 메인 스레드가 아닌 worklet 스레드에서 JavaScript를 사용하여 직접적으로 오디오 데이터의 생성, 처리, 분석을 가능하게 하도록 설계되었습니다.
-

오디오 워커

+

안 쓰임: 스크립트 프로세서 노드

-

오디오 워커는 web worker 컨텍스트 내에서 스크립팅된 오디오 처리를 관리하기 위한 기능을 제공하며, 두어가지 인터페이스로 정의되어 있습니다(2014년 8월 29일 새로운 기능이 추가되었습니다). 이는 아직 모든 브라우저에서 구현되지 않았습니다. 구현된 브라우저에서는 Audio processing in JavaScript에서 설명된 {{domxref("ScriptProcessorNode")}}를 포함한 다른 기능을 대체합니다.

+

오디오 worklet이 정의되기 전에, Web Audio API는 JavaScript 기반의 오디오 프로세싱을 위해 ScriptProcessorNode를 사용했습니다. 코드가 메인 스레드에서 실행되기 때문에 성능이 좋지 않았습니다. ScriptProcessorNode는 역사적인 이유로 유지되지만 deprecated되었습니다.

-
{{domxref("AudioWorkerNode")}}
-
AudioWorkerNode 인터페이스는 워커 쓰레드와 상호작용하여 오디오를 직접 생성, 처리, 분석하는 {{domxref("AudioNode")}}를 나타냅니다. 
-
{{domxref("AudioWorkerGlobalScope")}}
-
AudioWorkerGlobalScope 인터페이스는 DedicatedWorkerGlobalScope 에서 파생된 오디오 처리 스크립트가 실행되는 워커 컨텍스트를 나타내는 객체입니다. 이것은 워커 쓰레드 내에서 자바스크립트를 이용하여 직접 오디오 데이터를 생성, 처리, 분석할 수 있도록 설계되었습니다.
-
{{domxref("AudioProcessEvent")}}
-
이것은 처리를 수행하기 위해 {{domxref("AudioWorkerGlobalScope")}} 오브젝트로 전달되는 Event 오브젝트입니다.
+
{{domxref("ScriptProcessorNode")}} {{deprecated_inline}}
+
ScriptProcessorNode 인터페이스는 자바스크립트를 이용한 오디오 생성, 처리, 분석 기능을 제공합니다. 이것은 현재 입력 버퍼와 출력 버퍼, 총 두 개의 버퍼에 연결되는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다. {{domxref("AudioProcessingEvent")}} 인터페이스를 구현하는 이벤트는 입력 버퍼에 새로운 데이터가 들어올 때마다 객체로 전달되고, 출력 버퍼가 데이터로 채워지면 이벤트 핸들러가 종료됩니다.
+
{{event("audioprocess")}} (event) {{deprecated_inline}}
+
audioprocess 이벤트는 Web Audio API {{domxref("ScriptProcessorNode")}}의 입력 버퍼가 처리될 준비가 되었을 때 발생합니다.
+
{{domxref("AudioProcessingEvent")}} {{deprecated_inline}}
+
Web Audio API AudioProcessingEvent는 {{domxref("ScriptProcessorNode")}} 입력 버퍼가 처리될 준비가 되었을 때 발생하는 이벤트를 나타냅니다.
-

Obsolete interfaces

+

오프라인/백그라운드 오디오 처리하기

-

The following interfaces were defined in old versions of the Web Audio API spec, but are now obsolete and have been replaced by other interfaces.

+

다음을 이용해 백그라운드(장치의 스피커가 아닌 {{domxref("AudioBuffer")}}로 렌더링)에서 오디오 그래프를 신속하게 처리/렌더링할 수 있습니다.

-
{{domxref("JavaScriptNode")}}
-
Used for direct audio processing via JavaScript. This interface is obsolete, and has been replaced by {{domxref("ScriptProcessorNode")}}.
-
{{domxref("WaveTableNode")}}
-
Used to define a periodic waveform. This interface is obsolete, and has been replaced by {{domxref("PeriodicWave")}}.
+
{{domxref("OfflineAudioContext")}}
+
OfflineAudioContext 인터페이스는 {{domxref("AudioNode")}}로 연결되어 구성된 오디오 프로세싱 그래프를 나타내는 {{domxref("AudioContext")}} 인터페이스입니다. 표준 AudioContext 와 대조적으로, OfflineAudioContext 는 실제로 오디오를 렌더링하지 않고 가능한 빨리 버퍼 내에서 생성합니다. 
+
{{event("complete")}} (event)
+
complete 이벤트는 {{domxref("OfflineAudioContext")}}의 렌더링이 종료될때 발생합니다.
+
{{domxref("OfflineAudioCompletionEvent")}}
+
OfflineAudioCompletionEvent 이벤트는 {{domxref("OfflineAudioContext")}} 의 처리가 종료될 때 발생하는 이벤트를 나타냅니다. {{event("complete")}} 이벤트는 이 이벤트를 구현합니다.
-

Example

- -

This example shows a wide variety of Web Audio API functions being used. You can see this code in action on the Voice-change-o-matic demo (also check out the full source code at Github) — this is an experimental voice changer toy demo; keep your speakers turned down low when you use it, at least to start!

- -

The Web Audio API lines are highlighted; if you want to find out more about what the different methods, etc. do, have a search around the reference pages.

- -
var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); // define audio context
-// Webkit/blink browsers need prefix, Safari won't work without window.
-
-var voiceSelect = document.getElementById("voice"); // select box for selecting voice effect options
-var visualSelect = document.getElementById("visual"); // select box for selecting audio visualization options
-var mute = document.querySelector('.mute'); // mute button
-var drawVisual; // requestAnimationFrame
-
-var analyser = audioCtx.createAnalyser();
-var distortion = audioCtx.createWaveShaper();
-var gainNode = audioCtx.createGain();
-var biquadFilter = audioCtx.createBiquadFilter();
-
-function makeDistortionCurve(amount) { // function to make curve shape for distortion/wave shaper node to use
-  var k = typeof amount === 'number' ? amount : 50,
-    n_samples = 44100,
-    curve = new Float32Array(n_samples),
-    deg = Math.PI / 180,
-    i = 0,
-    x;
-  for ( ; i < n_samples; ++i ) {
-    x = i * 2 / n_samples - 1;
-    curve[i] = ( 3 + k ) * x * 20 * deg / ( Math.PI + k * Math.abs(x) );
-  }
-  return curve;
-};
-
-navigator.getUserMedia (
-  // constraints - only audio needed for this app
-  {
-    audio: true
-  },
-
-  // Success callback
-  function(stream) {
-    source = audioCtx.createMediaStreamSource(stream);
-    source.connect(analyser);
-    analyser.connect(distortion);
-    distortion.connect(biquadFilter);
-    biquadFilter.connect(gainNode);
-    gainNode.connect(audioCtx.destination); // connecting the different audio graph nodes together
-
-    visualize(stream);
-    voiceChange();
-
-  },
-
-  // Error callback
-  function(err) {
-    console.log('The following gUM error occured: ' + err);
-  }
-);
-
-function visualize(stream) {
-  WIDTH = canvas.width;
-  HEIGHT = canvas.height;
-
-  var visualSetting = visualSelect.value;
-  console.log(visualSetting);
-
-  if(visualSetting == "sinewave") {
-    analyser.fftSize = 2048;
-    var bufferLength = analyser.frequencyBinCount; // half the FFT value
-    var dataArray = new Uint8Array(bufferLength); // create an array to store the data
+

가이드와 자습서

-    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT); +

{{LandingPageListSubpages}}

-    function draw() { +

예제

-      drawVisual = requestAnimationFrame(draw); +

여러분은 GitHub의 webaudio-example 레포지토리에서 몇 개의 예제를 찾을 수 있습니다.

-      analyser.getByteTimeDomainData(dataArray); // get waveform data and put it into the array created above +

명세

-      canvasCtx.fillStyle = 'rgb(200, 200, 200)'; // draw wave with canvas -      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT); - -      canvasCtx.lineWidth = 2; -      canvasCtx.strokeStyle = 'rgb(0, 0, 0)'; - -      canvasCtx.beginPath(); - -      var sliceWidth = WIDTH * 1.0 / bufferLength; -      var x = 0; - -      for(var i = 0; i < bufferLength; i++) { - -        var v = dataArray[i] / 128.0; -        var y = v * HEIGHT/2; - -        if(i === 0) { -          canvasCtx.moveTo(x, y); -        } else { -          canvasCtx.lineTo(x, y); -        } - -        x += sliceWidth; -      } - -      canvasCtx.lineTo(canvas.width, canvas.height/2); -      canvasCtx.stroke(); -    }; - -    draw(); - -  } else if(visualSetting == "off") { -    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT); -    canvasCtx.fillStyle = "red"; -    canvasCtx.fillRect(0, 0, WIDTH, HEIGHT); -  } - -} - -function voiceChange() { -  distortion.curve = new Float32Array; -  biquadFilter.gain.value = 0; // reset the effects each time the voiceChange function is run - -  var voiceSetting = voiceSelect.value; -  console.log(voiceSetting); - -  if(voiceSetting == "distortion") { -    distortion.curve = makeDistortionCurve(400); // apply distortion to sound using waveshaper node -  } else if(voiceSetting == "biquad") { -    biquadFilter.type = "lowshelf"; -    biquadFilter.frequency.value = 1000; -    biquadFilter.gain.value = 25; // apply lowshelf filter to sounds using biquad -  } else if(voiceSetting == "off") { -    console.log("Voice settings turned off"); // do nothing, as off option was chosen -  } - -} - -// event listeners to change visualize and voice settings + + + + + + + + + + + + + +
SpecificationStatusComment
{{SpecName('Web Audio API')}}{{Spec2('Web Audio API')}}
-visualSelect.onchange = function() { -  window.cancelAnimationFrame(drawVisual); -  visualize(stream); -} +

브라우저 호환성

-voiceSelect.onchange = function() { -  voiceChange(); -} +
+

AudioContext

-mute.onclick = voiceMute; +
-function voiceMute() { // toggle to mute and unmute sound -  if(mute.id == "") { -    gainNode.gain.value = 0; // gain set to 0 to mute sound -    mute.id = "activated"; -    mute.innerHTML = "Unmute"; -  } else { -    gainNode.gain.value = 1; // gain set to 1 to unmute sound -    mute.id = ""; -    mute.innerHTML = "Mute"; -  } -} -
+

{{Compat("api.AudioContext", 0)}}

+ + -

Specifications

+

같이 보기

- - - - - - - - - - - - - -
SpecificationStatusComment
{{SpecName('Web Audio API')}}{{Spec2('Web Audio API')}}
+

자습서/가이드

-

Browser compatibility

+ -

{{Compat("api.AudioContext", 0)}}

- -

See also

+

라이브러리

- + diff --git a/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html b/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html new file mode 100644 index 0000000000..260a26a090 --- /dev/null +++ b/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html @@ -0,0 +1,381 @@ +--- +title: Migrating from webkitAudioContext +slug: Web/API/Web_Audio_API/Migrating_from_webkitAudioContext +tags: + - API + - Audio + - Guide + - Migrating + - Migration + - Updating + - Web Audio API + - porting + - webkitAudioContext +--- +

The Web Audio API went through many iterations before reaching its current state. It was first implemented in WebKit, and some of its older parts were not immediately removed as they were replaced in the specification, leading to many sites using non-compatible code. In this article, we cover the differences in Web Audio API since it was first implemented in WebKit and how to update your code to use the modern Web Audio API.

+ +

The Web Audio standard was first implemented in WebKit, and the implementation was built in parallel with the work on the specification of the API. As the specification evolved and changes were made to the spec, some of the old implementation pieces were not removed from the WebKit (and Blink) implementations due to backwards compatibility reasons.

+ +

New engines implementing the Web Audio spec (such as Gecko) will only implement the official, final version of the specification, which means that code using webkitAudioContext or old naming conventions in the Web Audio specification may not immediately work out of the box in a compliant Web Audio implementation.  This article attempts to summarize the areas where developers are likely to encounter these problems and provide examples on how to port such code to standards based {{domxref("AudioContext")}}, which will work across different browser engines.

+ +
+

Note: There is a library called webkitAudioContext monkeypatch, which automatically fixes some of these changes to make most code targeting webkitAudioContext to work on the standards based AudioContext out of the box, but it currently doesn't handle all of the cases below.  Please consult the README file for that library to see a list of APIs that are automatically handled by it.

+
+ +

Changes to the creator methods

+ +

Three of the creator methods on webkitAudioContext have been renamed in {{domxref("AudioContext")}}.

+ + + +

These are simple renames that were made in order to improve the consistency of these method names on {{domxref("AudioContext")}}.  If your code uses any of these names, as in the example below:

+ +
// Old method names
+var gain = context.createGainNode();
+var delay = context.createDelayNode();
+var js = context.createJavascriptNode(1024);
+
+ +

you can rename the methods to look like this:

+ +
// New method names
+var gain = context.createGain();
+var delay = context.createDelay();
+var js = context.createScriptProcessor(1024);
+
+ +

The semantics of these methods remain the same in the renamed versions.

+ +

Changes to starting and stopping nodes

+ +

In webkitAudioContext, there are two ways to start and stop {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}}: the noteOn() and noteOff() methods, and the start() and stop() methods.  ({{domxref("AudioBufferSourceNode")}} has yet another way of starting output: the noteGrainOn() method.)  The noteOn()/noteGrainOn()/noteOff() methods were the original way to start/stop output in these nodes, and in the newer versions of the specification, the noteOn() and noteGrainOn() methods were consolidated into a single start() method, and the noteOff() method was renamed to the stop() method.

+ +

In order to port your code, you can just rename the method that you're using.  For example, if you have code like the below:

+ +
var osc = context.createOscillator();
+osc.noteOn(1);
+osc.noteOff(1.5);
+
+var src = context.createBufferSource();
+src.noteGrainOn(1, 0.25);
+src.noteOff(2);
+
+ +

you can change it like this in order to port it to the standard AudioContext API:

+ +
var osc = context.createOscillator();
+osc.start(1);
+osc.stop(1.5);
+
+var src = context.createBufferSource();
+src.start(1, 0.25);
+src.stop(2);
+ +

Remove synchronous buffer creation

+ +

In the old WebKit implementation of Web Audio, there were two versions of createBuffer(), one which created an initially empty buffer, and one which took an existing {{domxref("ArrayBuffer")}} containing encoded audio, decoded it and returned the result in the form of an {{domxref("AudioBuffer")}}.  The latter version of createBuffer() was potentially expensive, because it had to decode the audio buffer synchronously, and with the buffer being arbitrarily large, it could take a lot of time for this method to complete its work, and no other part of your web page's code could execute in the mean time.

+ +

Because of these problems, this version of the createBuffer() method has been removed, and you should use the asynchronous decodeAudioData() method instead.

+ +

The example below shows old code which downloads an audio file over the network, and then decodes it using createBuffer():

+ +
var xhr = new XMLHttpRequest();
+xhr.open("GET", "/path/to/audio.ogg", true);
+xhr.responseType = "arraybuffer";
+xhr.send();
+xhr.onload = function() {
+  var decodedBuffer = context.createBuffer(xhr.response, false);
+  if (decodedBuffer) {
+    // Decoding was successful, do something useful with the audio buffer
+  } else {
+    alert("Decoding the audio buffer failed");
+  }
+};
+
+ +

Converting this code to use decodeAudioData() is relatively simple, as can be seen below:

+ +
var xhr = new XMLHttpRequest();
+xhr.open("GET", "/path/to/audio.ogg", true);
+xhr.responseType = "arraybuffer";
+xhr.send();
+xhr.onload = function() {
+  context.decodeAudioData(xhr.response, function onSuccess(decodedBuffer) {
+    // Decoding was successful, do something useful with the audio buffer
+  }, function onFailure() {
+    alert("Decoding the audio buffer failed");
+  });
+};
+ +

Note that the decodeAudioData() method is asynchronous, which means that it will return immediately, and then when the decoding finishes, one of the success or failure callback functions will get called depending on whether the audio decoding was successful.  This means that you may need to restructure your code to run the part which happened after the createBuffer() call in the success callback, as you can see in the example above.
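In browsers that support the promise-based form of decodeAudioData() defined by the current specification, the same work can be done without nested callbacks. The sketch below is illustrative only; it assumes the Fetch API is available, reuses the context variable from the examples above, and uses the same hypothetical "/path/to/audio.ogg" resource:

// Promise-based sketch of the same fetch-and-decode flow
fetch("/path/to/audio.ogg")
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(encodedBuffer) { return context.decodeAudioData(encodedBuffer); })
  .then(function(decodedBuffer) {
    // Decoding was successful, do something useful with the audio buffer
  })
  .catch(function() {
    alert("Fetching or decoding the audio buffer failed");
  });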

+ +

Renaming of AudioParam.setTargetValueAtTime

+ +

The setTargetValueAtTime() method on the {{domxref("AudioParam")}} interface has been renamed to setTargetAtTime().  This is also a simple rename to improve the understandability of the API, and the semantics of the method are the same.  If your code is using setTargetValueAtTime(), you can rename it to use setTargetAtTime(). For example, if we have code that looks like this:

+ +
  var gainNode = context.createGain();
+  gainNode.gain.setTargetValueAtTime(0.0, 10.0, 1.0);
+
+ +

you can rename the method, and be compliant with the standard, like so:

+ +
  var gainNode = context.createGain();
+  gainNode.gain.setTargetAtTime(0.0, 10.0, 1.0);
+
+ +

Enumerated values that changed

+ +

The original webkitAudioContext API used C-style number based enumerated values in the API.  Those values have since been changed to use the Web IDL based enumerated values, which should be familiar because they are similar to things like the {{domxref("HTMLInputElement")}} property {{domxref("HTMLInputElement.type", "type")}}.

+ +

OscillatorNode.type

+ +

{{domxref("OscillatorNode")}}'s type property has been changed to use Web IDL enums.  Old code using webkitAudioContext can be ported to standards based {{domxref("AudioContext")}} like below:

+ +
// Old webkitAudioContext code:
+var osc = context.createOscillator();
+osc.type = osc.SINE;     // sine waveform
+osc.type = osc.SQUARE;   // square waveform
+osc.type = osc.SAWTOOTH; // sawtooth waveform
+osc.type = osc.TRIANGLE; // triangle waveform
+osc.setWaveTable(table);
+var isCustom = (osc.type == osc.CUSTOM); // isCustom will be true
+
+// New standard AudioContext code:
+var osc = context.createOscillator();
+osc.type = "sine";       // sine waveform
+osc.type = "square";     // square waveform
+osc.type = "sawtooth";   // sawtooth waveform
+osc.type = "triangle";   // triangle waveform
+osc.setPeriodicWave(table);  // Note: setWaveTable has been renamed to setPeriodicWave!
+var isCustom = (osc.type == "custom"); // isCustom will be true
+
+ +

BiquadFilterNode.type

+ +

{{domxref("BiquadFilterNode")}}'s type property has been changed to use Web IDL enums.  Old code using webkitAudioContext can be ported to standards based {{domxref("AudioContext")}} like below:

+ +
// Old webkitAudioContext code:
+var filter = context.createBiquadFilter();
+filter.type = filter.LOWPASS;   // lowpass filter
+filter.type = filter.HIGHPASS;  // highpass filter
+filter.type = filter.BANDPASS;  // bandpass filter
+filter.type = filter.LOWSHELF;  // lowshelf filter
+filter.type = filter.HIGHSHELF; // highshelf filter
+filter.type = filter.PEAKING;   // peaking filter
+filter.type = filter.NOTCH;     // notch filter
+filter.type = filter.ALLPASS;   // allpass filter
+
+// New standard AudioContext code:
+var filter = context.createBiquadFilter();
+filter.type = "lowpass";        // lowpass filter
+filter.type = "highpass";       // highpass filter
+filter.type = "bandpass";       // bandpass filter
+filter.type = "lowshelf";       // lowshelf filter
+filter.type = "highshelf";      // highshelf filter
+filter.type = "peaking";        // peaking filter
+filter.type = "notch";          // notch filter
+filter.type = "allpass";        // allpass filter
+
+ +

PannerNode.panningModel

+ +

{{domxref("PannerNode")}}'s panningModel property has been changed to use Web IDL enums.  Old code using webkitAudioContext can be ported to standards based {{domxref("AudioContext")}} like below:

+ +
// Old webkitAudioContext code:
+var panner = context.createPanner();
+panner.panningModel = panner.EQUALPOWER;  // equalpower panning
+panner.panningModel = panner.HRTF;        // HRTF panning
+
+// New standard AudioContext code:
+var panner = context.createPanner();
+panner.panningModel = "equalpower";       // equalpower panning
+panner.panningModel = "HRTF";             // HRTF panning
+
+ +

PannerNode.distanceModel

+ +

{{domxref("PannerNode")}}'s distanceModel property has been changed to use Web IDL enums.  Old code using webkitAudioContext can be ported to standards based {{domxref("AudioContext")}} like below:

+ +
// Old webkitAudioContext code:
+var panner = context.createPanner();
+panner.distanceModel = panner.LINEAR_DISTANCE;      // linear distance model
+panner.distanceModel = panner.INVERSE_DISTANCE;     // inverse distance model
+panner.distanceModel = panner.EXPONENTIAL_DISTANCE; // exponential distance model
+
+// New standard AudioContext code:
+var panner = context.createPanner();
+panner.distanceModel = "linear";                    // linear distance model
+panner.distanceModel = "inverse";                   // inverse distance model
+panner.distanceModel = "exponential";               // exponential distance model
+
+ +

Gain control moved to its own node type

+ +

The Web Audio standard now controls all gain using the {{domxref("GainNode")}}. Instead of setting a gain property directly on an audio source, you connect the source to a gain node and then control the gain using that node's gain parameter.

+ +

AudioBufferSourceNode

+ +

The gain attribute of {{domxref("AudioBufferSourceNode")}} has been removed.  The same functionality can be achieved by connecting the {{domxref("AudioBufferSourceNode")}} to a gain node.  See the following example:

+ +
// Old webkitAudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+src.gain.value = 0.5;
+src.connect(context.destination);
+src.noteOn(0);
+
+// New standard AudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+var gain = context.createGain();
+src.connect(gain);
+gain.gain.value = 0.5;
+gain.connect(context.destination);
+src.start(0);
+
+ +

AudioBuffer

+ +

The gain attribute of {{domxref("AudioBuffer")}} has been removed.  The same functionality can be achieved by connecting the {{domxref("AudioBufferSourceNode")}} that owns the buffer to a gain node.  See the following example:

+ +
// Old webkitAudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+src.buffer.gain = 0.5;
+src.connect(context.destination);
+src.noteOn(0);
+
+// New standard AudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+var gain = context.createGain();
+src.connect(gain);
+gain.gain.value = 0.5;
+gain.connect(context.destination);
+src.start(0);
+
+ +

Removal of AudioBufferSourceNode.looping

+ +

The looping attribute of {{domxref("AudioBufferSourceNode")}} has been removed.  This attribute was an alias of the loop attribute, so you can just use the loop attribute instead. Instead of having code like this:

+ +
var source = context.createBufferSource();
+source.looping = true;
+
+ +

you can change it to respect the last version of the specification:

+ +
var source = context.createBufferSource();
+source.loop = true;
+
+ +

Note that the loopStart and loopEnd attributes are not supported in webkitAudioContext.

+ +

Changes to determining playback state

+ +

The playbackState attribute of {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}} has been removed.  Depending on why you used this attribute, you can use the following techniques to get the same information:

+ + + +
// Old webkitAudioContext code:
+var src = context.createBufferSource();
+// Some time later...
+var isFinished = (src.playbackState == src.FINISHED_STATE);
+
+// New AudioContext code:
+var src = context.createBufferSource();
+function endedHandler(event) {
+  isFinished = true;
+}
+var isFinished = false;
+src.onended = endedHandler;
+
+ +

The exact same changes have been applied to both {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}}, so you can apply the same techniques to both kinds of nodes.

+ +

Removal of AudioContext.activeSourceCount

+ +

The activeSourceCount attribute has been removed from {{domxref("AudioContext")}}.  If you need to count the number of playing source nodes, you can maintain the count by handling the ended event on the source nodes, as shown above.

+ +

Code using the activeSourceCount attribute of the {{domxref("AudioContext")}}, like this snippet:

+ +
  var src0 = context.createBufferSource();
+  var src1 = context.createBufferSource();
+  // Set buffers and other parameters...
+  src0.start(0);
+  src1.start(0);
+  // Some time later...
+  console.log(context.activeSourceCount);
+
+ +

could be rewritten like this:

+ +
  // Array to track the playing source nodes:
+  var sources = [];
+  // When starting the source, put it at the end of the array,
+  // and set a handler to make sure it gets removed when the
+  // AudioBufferSourceNode reaches its end.
+  // First argument is the AudioBufferSourceNode to start, other arguments are
+  // the argument to the |start()| method of the AudioBufferSourceNode.
+  function startSource() {
+    var src = arguments[0];
+    var startArgs = Array.prototype.slice.call(arguments, 1);
+    src.onended = function() {
+      sources.splice(sources.indexOf(src), 1);
+    }
+    sources.push(src);
+    src.start.apply(src, startArgs);
+  }
+  function activeSources() {
+    return sources.length;
+  }
+  var src0 = context.createBufferSource();
+  var src1 = context.createBufferSource();
+  // Set buffers and other parameters...
+  startSource(src0, 0);
+  startSource(src1, 0);
+  // Some time later, query the number of sources...
+  console.log(activeSources());
+
+ +

Renaming of WaveTable

+ +

The {{domxref("WaveTable")}} interface has been renamed to {{domxref("PeriodicWave")}}.  Here is how you can port old code using WaveTable to the standard AudioContext API:

+ +
// Old webkitAudioContext code:
+var osc = context.createOscillator();
+var table = context.createWaveTable(realArray, imaginaryArray);
+osc.setWaveTable(table);
+
+// New standard AudioContext code:
+var osc = context.createOscillator();
+var table = context.createPeriodicWave(realArray, imaginaryArray);
+osc.setPeriodicWave(table);
+
+ +

Removal of some of the AudioParam read-only attributes

+ +

The following read-only attributes have been removed from AudioParam: name, units, minValue, and maxValue.  These used to be informational attributes.  Here is some information on how you can get these values if you need them:

+ + + +

Removal of MediaElementAudioSourceNode.mediaElement

+ +

The mediaElement attribute of {{domxref("MediaElementAudioSourceNode")}} has been removed.  You can keep a reference to the media element used to create this node if you need to access it later.
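In practice this just means holding on to the element yourself at creation time. The following sketch is illustrative only; the audioElement and context variables are assumptions, not part of the original API:

// Keep your own reference instead of relying on the removed mediaElement attribute
var audioElement = document.querySelector("audio");
var sourceNode = context.createMediaElementSource(audioElement);

// Later, use the saved reference where you would have read sourceNode.mediaElement
audioElement.playbackRate = 0.5;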

+ +

Removal of MediaStreamAudioSourceNode.mediaStream

+ +

The mediaStream attribute of {{domxref("MediaStreamAudioSourceNode")}} has been removed.  You can keep a reference to the media stream used to create this node if you need to access it later.

diff --git a/files/ko/web/api/web_audio_api/tools/index.html b/files/ko/web/api/web_audio_api/tools/index.html new file mode 100644 index 0000000000..beee9d6fb4 --- /dev/null +++ b/files/ko/web/api/web_audio_api/tools/index.html @@ -0,0 +1,41 @@ +--- +title: Tools for analyzing Web Audio usage +slug: Web/API/Web_Audio_API/Tools +tags: + - API + - Audio + - Debugging + - Media + - Tools + - Web + - Web Audio + - Web Audio API + - sound +--- +
{{APIRef("Web Audio API")}}
+ +

While working on your Web Audio API code, you may find that you need tools to analyze the graph of nodes you create or to otherwise debug your work. This article discusses tools available to help you do that.

+ +

Chrome

+ +

A handy web audio inspector can be found in the Chrome Web Store.

+ +

Edge

+ +

Add information for developers using Microsoft Edge.

+ +

Firefox

+ +

Firefox offers a native Web Audio Editor.

+ +

Safari

+ +

Add information for developers working in Safari.

+ +

See also

+ + diff --git a/files/ko/web/api/web_audio_api/using_audioworklet/index.html b/files/ko/web/api/web_audio_api/using_audioworklet/index.html new file mode 100644 index 0000000000..b103225f09 --- /dev/null +++ b/files/ko/web/api/web_audio_api/using_audioworklet/index.html @@ -0,0 +1,325 @@ +--- +title: Background audio processing using AudioWorklet +slug: Web/API/Web_Audio_API/Using_AudioWorklet +tags: + - API + - Audio + - AudioWorklet + - Background + - Examples + - Guide + - Processing + - Web Audio + - Web Audio API + - WebAudio API + - sound +--- +

{{APIRef("Web Audio API")}}

+ +

When the Web Audio API was first introduced to browsers, it included the ability to use JavaScript code to create custom audio processors that would be invoked to perform real-time audio manipulations. The drawback to ScriptProcessorNode was simple: it ran on the main thread, thus blocking everything else going on until it completed execution. This was far less than ideal, especially for something that can be as computationally expensive as audio processing.

+ +

Enter {{domxref("AudioWorklet")}}. An audio context's audio worklet is a {{domxref("Worklet")}} which runs off the main thread, executing audio processing code added to it by calling the context's {{domxref("Worklet.addModule", "audioWorklet.addModule()")}} method. Calling addModule() loads the specified JavaScript file, which should contain the implementation of the audio processor. With the processor registered, you can create a new {{domxref("AudioWorkletNode")}} which passes the audio through the processor's code when the node is linked into the chain of audio nodes along with any other audio nodes.

+ +

The process of creating an audio processor using JavaScript, establishing it as an audio worklet processor, and then using that processor within a Web Audio application is the topic of this article.

+ +

It's worth noting that because audio processing can often involve substantial computation, your processor may benefit greatly from being built using WebAssembly, which brings near-native or fully native performance to web apps. Implementing your audio processing algorithm using WebAssembly can make it perform markedly better.

+ +

High level overview

+ +

Before we start looking at the use of AudioWorklet on a step-by-step basis, let's start with a brief high-level overview of what's involved.

+ +
    +
  1. Create a module that defines an audio worklet processor class, based on {{domxref("AudioWorkletProcessor")}}, which takes audio from one or more incoming sources, performs its operation on the data, and outputs the resulting audio data.
  2. +
  3. Access the audio context's {{domxref("AudioWorklet")}} through its {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}} property, and call the audio worklet's {{domxref("Worklet.addModule", "addModule()")}} method to install the audio worklet processor module.
  4. +
  5. As needed, create audio processing nodes by passing the processor's name (which is defined by the module) to the {{domxref("AudioWorkletNode.AudioWorkletNode", "AudioWorkletNode()")}} constructor.
  6. +
  7. Set up any audio parameters the {{domxref("AudioWorkletNode")}} needs, or that you wish to configure. These are defined in the audio worklet processor module.
  8. +
  9. Connect the created AudioWorkletNodes into your audio processing pipeline as you would any other node, then use your audio pipeline as usual.
  10. +
+ +

Throughout the remainder of this article, we'll look at these steps in more detail, with examples (including working examples you can try out on your own).

+ +

The example code found on this page is derived from this working example which is part of MDN's GitHub repository of Web Audio examples. The example creates an oscillator node and adds white noise to it using an {{domxref("AudioWorkletNode")}} before playing the resulting sound out. Slider controls are available to allow controlling the gain of both the oscillator and the audio worklet's output.

+ +

See the code

+ +

Try it live

+ +

Creating an audio worklet processor

+ +

Fundamentally, an audio worklet processor (which we'll refer to usually as either an "audio processor" or as a "processor" because otherwise this article will be about twice as long) is implemented using a JavaScript module that defines and installs the custom audio processor class.

+ +

Structure of an audio worklet processor

+ +

An audio worklet processor is a JavaScript module which consists of the following:

+ + + +

A single audio worklet processor module may define multiple processor classes, registering each of them with individual calls to registerProcessor(). As long as each has its own unique name, this will work just fine. It's also more efficient than loading multiple modules over the network or even from the user's local disk.
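For example, a single module might register two independent processors side by side. This is only a sketch; the class names and registered names here are hypothetical:

class HypotheticalGainProcessor extends AudioWorkletProcessor {
  process(inputList, outputList, parameters) {
    /* apply gain here */
    return true;
  }
}

class HypotheticalNoiseProcessor extends AudioWorkletProcessor {
  process(inputList, outputList, parameters) {
    /* generate noise here */
    return true;
  }
}

// Each class gets its own unique name within the same module
registerProcessor("hypothetical-gain-processor", HypotheticalGainProcessor);
registerProcessor("hypothetical-noise-processor", HypotheticalNoiseProcessor);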

+ +

Basic code framework

+ +

The barest framework of an audio processor class looks like this:

+ +
class MyAudioProcessor extends AudioWorkletProcessor {
+  constructor() {
+    super();
+  }
+
+  process(inputList, outputList, parameters) {
+    /* using the inputs (or not, as needed), write the output
+       into each of the outputs */
+
+    return true;
+  }
+};
+
+registerProcessor("my-audio-processor", MyAudioProcessor);
+
+ +

After the implementation of the processor comes a call to the global function {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, which is only available within the scope of the audio context's {{domxref("AudioWorklet")}}, which is the invoker of the processor script as a result of your call to {{domxref("Worklet.addModule", "audioWorklet.addModule()")}}. This call to registerProcessor() registers your class as the basis for any {{domxref("AudioWorkletProcessor")}}s created when {{domxref("AudioWorkletNode")}}s are set up.

+ +

This is the barest framework and actually has no effect until code is added into process() to do something with those inputs and outputs. Which brings us to talking about those inputs and outputs.

+ +

The input and output lists

+ +

The lists of inputs and outputs can be a little confusing at first, even though they're actually very simple once you realize what's going on.

+ +

Let's start at the inside and work our way out. Fundamentally, the audio for a single audio channel (such as the left speaker or the subwoofer, for example) is represented as a Float32Array whose values are the individual audio samples. By specification, each block of audio your process() function receives contains 128 frames (that is, 128 samples for each channel), but it is planned that this value will change in the future, and may in fact vary depending on circumstances, so you should always check the array's length rather than assuming a particular size. It is, however, guaranteed that the inputs and outputs will have the same block length.

+ +

Each input can have a number of channels. A mono input has a single channel; stereo input has two channels. Surround sound might have six or more channels. So each input is, in turn, an array of channels. That is, an array of Float32Array objects.

+ +

Then, there can be multiple inputs, so the inputList is an array of arrays of Float32Array objects. Each input may have a different number of channels, and each channel has its own array of samples.

+ +

Thus, given the input list inputList:

+ +
const numberOfInputs = inputList.length;
+const firstInput = inputList[0];
+
+const firstInputChannelCount = firstInput.length;
+const firstInputFirstChannel = firstInput[0]; // (or inputList[0][0])
+
+const firstChannelSampleCount = firstInputFirstChannel.length;
+const firstSampleOfFirstChannel = firstInputFirstChannel[0]; // (or inputList[0][0][0])
+
+ +

The output list is structured in exactly the same way; it's an array of outputs, each of which is an array of channels, each of which is an array of Float32Array objects, which contain the samples for that channel.

+ +

How you use the inputs and how you generate the outputs depends very much on your processor. If your processor is just a generator, it can ignore the inputs and just replace the contents of the outputs with the generated data. Or you can process each input independently, applying an algorithm to the incoming data on each channel of each input and writing the results into the corresponding outputs' channels (keeping in mind that the number of inputs and outputs may differ, and the channel counts on those inputs and outputs may also differ). Or you can take all the inputs and perform mixing or other computations that result in a single output being filled with data (or all the outputs being filled with the same data).
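As a minimal sketch of the generator case, a process() implementation that ignores its inputs and fills every channel of every output with white noise could look like this (the surrounding class and registration are omitted):

process(inputList, outputList, parameters) {
  // Ignore inputList entirely; write random samples in the range -1.0 to 1.0
  for (const output of outputList) {
    for (const channel of output) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1;
      }
    }
  }

  return true;
}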

+ +

It's entirely up to you. This is a very powerful tool in your audio programming toolkit.

+ +

Processing multiple inputs

+ +

Let's take a look at an implementation of process() that can process multiple inputs, with each input being used to generate the corresponding output. Any excess inputs are ignored.

+ +
process(inputList, outputList, parameters) {
+  const sourceLimit = Math.min(inputList.length, outputList.length);
+
+  for (let inputNum = 0; inputNum < sourceLimit; inputNum++) {
+    let input = inputList[inputNum];
+    let output = outputList[inputNum];
+    let channelCount = Math.min(input.length, output.length);
+
+    for (let channelNum = 0; channelNum < channelCount; channelNum++) {
+      let sampleCount = input[channelNum].length;
+
+      for (let i = 0; i < sampleCount; i++) {
+        let sample = input[channelNum][i];
+
+        /* Manipulate the sample */
+
+        output[channelNum][i] = sample;
+      }
+    }
+  };
+
+  return true;
+}
+
+ +

Note that when determining the number of sources to process and send through to the corresponding outputs, we use Math.min() to ensure that we only process as many channels as we have room for in the output list. The same check is performed when determining how many channels to process in the current input; we only process as many as there are room for in the destination output. This avoids errors due to overrunning these arrays.

+ +

Mixing inputs

+ +

Many nodes perform mixing operations, where the inputs are combined in some way into a single output. This is demonstrated in the following example.

+ +
process(inputList, outputList, parameters) {
+  const sourceLimit = Math.min(inputList.length, outputList.length);
+   for (let inputNum = 0; inputNum < sourceLimit; inputNum++) {
+     let input = inputList[inputNum];
+     let output = outputList[0];
+     let channelCount = Math.min(input.length, output.length);
+
+     for (let channelNum = 0; channelNum < channelCount; channelNum++) {
+       let sampleCount = input[channelNum].length;
+
+       for (let i = 0; i < sampleCount; i++) {
+         let sample = output[channelNum][i] + input[channelNum][i];
+
+         if (sample > 1.0) {
+           sample = 1.0;
+         } else if (sample < -1.0) {
+           sample = -1.0;
+         }
+
+         output[channelNum][i] = sample;
+       }
+     }
+   };
+
+  return true;
+}
+
+ +

This is similar code to the previous sample in many ways, but only the first output—outputList[0]—is altered. Each sample is added to the corresponding sample in the output buffer, with a simple code fragment in place to prevent the samples from exceeding the legal range of -1.0 to 1.0 by capping the values; there are other ways to avoid clipping that are perhaps less prone to distortion, but this is a simple example that's better than nothing.

+ +

Lifetime of an audio worklet processor

+ +

The only means by which you can influence the lifespan of your audio worklet processor is through the value returned by process(), which should be a Boolean value indicating whether or not to override the {{Glossary("user agent")}}'s decision-making as to whether or not your node is still in use.

+ +

In general, the lifetime policy of any audio node is simple: if the node is still considered to be actively processing audio, it will continue to be used. In the case of an {{domxref("AudioWorkletNode")}}, the node is considered to be active if its process() function returns true and the node is either generating content as a source for audio data, or is receiving data from one or more inputs.

+ +

Specifying a value of true as the result from your process() function in essence tells the Web Audio API that your processor needs to keep being called even if the API doesn't think there's anything left for you to do. In other words, true overrides the API's logic and gives you control over your processor's lifetime policy, keeping the processor's owning {{domxref("AudioWorkletNode")}} running even when it would otherwise decide to shut down the node.

+ +

Returning false from the process() method tells the API that it should follow its normal logic and shut down your processor node if it deems it appropriate to do so. If the API determines that your node is no longer needed, process() will not be called again.
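As a sketch of how a processor can make use of this (keeping in mind the Chrome caveat in the note below), a generator could stop asking to be kept alive once it has produced a fixed amount of audio. The class name and the two-second limit here are hypothetical; sampleRate is a global available inside the worklet scope:

class HypotheticalTimedNoiseProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    // Roughly two seconds' worth of 128-frame blocks
    this.blocksRemaining = Math.ceil((2 * sampleRate) / 128);
  }

  process(inputList, outputList, parameters) {
    // Fill the first output's channels with white noise
    for (const channel of outputList[0]) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1;
      }
    }

    this.blocksRemaining--;
    // Once we're done, returning false lets the API shut the node down
    return this.blocksRemaining > 0;
  }
}

registerProcessor("hypothetical-timed-noise-processor", HypotheticalTimedNoiseProcessor);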

+ +
+

Note: At this time, unfortunately, Chrome does not implement this algorithm in a manner that matches the current standard. Instead, it keeps the node alive if you return true and shuts it down if you return false. Thus for compatibility reasons you must always return true from process(), at least on Chrome. However, once this Chrome issue is fixed, you will want to change this behavior if possible as it may have a slight negative impact on performance.

+
+ +

Creating an audio processor worklet node

+ +

To create an audio node that pumps blocks of audio data through an {{domxref("AudioWorkletProcessor")}}, you need to follow these simple steps:

+ +
    +
  1. Load and install the audio processor module
  2. +
  3. Create an {{domxref("AudioWorkletNode")}}, specifying the audio processor module to use by its name
  4. +
  5. Connect inputs to the AudioWorkletNode and its outputs to appropriate destinations (either other nodes or to the {{domxref("AudioContext")}} object's {{domxref("AudioContext.destination", "destination")}} property).
  6. +
+ +

To use an audio worklet processor, you can use code similar to the following:

+ +
let audioContext = null;
+
+async function createMyAudioProcessor() {
+  if (!audioContext) {
+    try {
+      audioContext = new AudioContext();
+      await audioContext.resume();
+      await audioContext.audioWorklet.addModule("module-url/module.js");
+    } catch(e) {
+      return null;
+    }
+  }
+
+  return new AudioWorkletNode(audioContext, "processor-name");
+}
+
+ +

This createMyAudioProcessor() function creates and returns a new instance of {{domxref("AudioWorkletNode")}} configured to use your audio processor. It also handles creating the audio context if it hasn't already been done.

+ +

In order to ensure the context is usable, this starts by creating the context if it's not already available, then adds the module containing the processor to the worklet. Once that's done, it instantiates and returns a new AudioWorkletNode. Once you have that in hand, you connect it to other nodes and otherwise use it just like any other node.

+ +

You can then create a new audio processor node by doing this:

+ +
let newProcessorNode = await createMyAudioProcessor();
+ +

If the returned value, newProcessorNode, is non-null, we have a valid audio context with its hiss processor node in place and ready to use.

+ +

Supporting audio parameters

+ +

Just like any other Web Audio node, {{domxref("AudioWorkletNode")}} supports parameters, which are shared with the {{domxref("AudioWorkletProcessor")}} that does the actual work.

+ +

Adding parameter support to the processor

+ +

To add parameters to an {{domxref("AudioWorkletNode")}}, you need to define them within your {{domxref("AudioWorkletProcessor")}}-based processor class in your module. This is done by adding the static getter {{domxref("AudioWorkletProcessor.parameterDescriptors", "parameterDescriptors")}} to your class. This function should return an array of parameter descriptor objects, one for each parameter supported by the processor.

+ +

In the following implementation of parameterDescriptors(), the returned array has two parameter descriptors. The first defines gain as a value between 0 and 1, with a default value of 0.5. The second parameter is named frequency and defaults to 440.0, with a range from 27.5 to 4186.009, inclusive.

+ +
static get parameterDescriptors() {
+  return [
+   {
+      name: "gain",
+      defaultValue: 0.5,
+      minValue: 0,
+      maxValue: 1
+    },
+    {
+      name: "frequency",
+      defaultValue: 440.0,
+      minValue: 27.5,
+      maxValue: 4186.009
+    }
+  ];
+}
+ +

Accessing your processor node's parameters is as simple as looking them up in the parameters object passed into your implementation of {{domxref("AudioWorkletProcessor.process", "process()")}}. Within the parameters object are arrays, one for each of your parameters, and sharing the same names as your parameters.

+ +
+
A-rate parameters
+
For a-rate parameters—parameters whose values automatically change over time—the parameter's entry in the parameters object is an array of values (a Float32Array), one for each frame in the block being processed. These values are to be applied to the corresponding frames.
+
K-rate parameters
+
K-rate parameters, on the other hand, can only change once per block, so the parameter's array has only a single entry. Use that value for every frame in the block.
+
+ +

In the code below, we see a process() function that handles a gain parameter which can be used as either an a-rate or k-rate parameter. Our node only supports one input, so it just takes the first input in the list, applies the gain to it, and writes the resulting data to the first output's buffer.

+ +
process(inputList, outputList, parameters) {
+  const input = inputList[0];
+  const output = outputList[0];
+  const gain = parameters.gain;
+
+  for (let channelNum = 0; channelNum < input.length; channelNum++) {
+    const inputChannel = input[channelNum];
+    const outputChannel = output[channelNum];
+
+    // If gain.length is 1, it's a k-rate parameter, so apply
+    // the first entry to every frame. Otherwise, apply each
+    // entry to the corresponding frame.
+
+    if (gain.length === 1) {
+      for (let i = 0; i < inputChannel.length; i++) {
+        outputChannel[i] = inputChannel[i] * gain[0];
+      }
+    } else {
+      for (let i = 0; i < inputChannel.length; i++) {
+        outputChannel[i] = inputChannel[i] * gain[i];
+      }
+    }
+  }
+
+  return true;
+}
+
+ +

Here, if gain.length indicates that there's only a single value in the gain parameter's array of values, the first entry in the array is applied to every frame in the block. Otherwise, for each frame in the block, the corresponding entry in gain[] is applied.

+ +

Accessing parameters from the main thread script

+ +

Your main thread script can access the parameters just like it can any other node. To do so, first you need to get a reference to the parameter by calling the {{domxref("AudioWorkletNode")}}'s {{domxref("AudioWorkletNode.parameters", "parameters")}} property's {{domxref("AudioParamMap.get", "get()")}} method:

+ +
let gainParam = myAudioWorkletNode.parameters.get("gain");
+
+ +

The value returned and stored in gainParam is the {{domxref("AudioParam")}} used to store the gain parameter. You can then change its value effective at a given time using the {{domxref("AudioParam")}} method {{domxref("AudioParam.setValueAtTime", "setValueAtTime()")}}.

+ +

Here, for example, we set the value to newValue, effective immediately.

+ +
gainParam.setValueAtTime(newValue, audioContext.currentTime);
+ +

You can similarly use any of the other methods in the {{domxref("AudioParam")}} interface to apply changes over time, to cancel scheduled changes, and so forth.
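For example, a short sketch reusing the gainParam reference from above might ramp the gain down and then cancel whatever is still scheduled; both methods are part of the standard {{domxref("AudioParam")}} interface:

// Ramp the gain down to 0 over the next two seconds...
gainParam.linearRampToValueAtTime(0, audioContext.currentTime + 2);

// ...and, if needed, cancel any changes that are still scheduled
gainParam.cancelScheduledValues(audioContext.currentTime);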

+ +

Reading the value of a parameter is as simple as looking at its {{domxref("AudioParam.value", "value")}} property:

+ +
let currentGain = gainParam.value;
+ +

See also

+ + diff --git a/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png b/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png new file mode 100644 index 0000000000..0e701a2b6a Binary files /dev/null and b/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png differ diff --git a/files/ko/web/api/web_audio_api/using_iir_filters/index.html b/files/ko/web/api/web_audio_api/using_iir_filters/index.html new file mode 100644 index 0000000000..0c48b1096c --- /dev/null +++ b/files/ko/web/api/web_audio_api/using_iir_filters/index.html @@ -0,0 +1,198 @@ +--- +title: Using IIR filters +slug: Web/API/Web_Audio_API/Using_IIR_filters +tags: + - API + - Audio + - Guide + - IIRFilter + - Using + - Web Audio API +--- +
{{DefaultAPISidebar("Web Audio API")}}
+ +

The IIRFilterNode interface of the Web Audio API is an {{domxref("AudioNode")}} processor that implements a general infinite impulse response (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers, and the filter response parameters can be specified, so that it can be tuned as needed. This article looks at how to implement one, and use it in a simple example.

+ +

Demo

+ +

Our simple example for this guide provides a play/pause button that starts and pauses audio play, and a toggle that turns an IIR filter on and off, altering the tone of the sound. It also provides a canvas on which is drawn the frequency response of the audio, so you can see what effect the IIR filter has.

+ +

A demo featuring a play button, and toggle to turn a filter on and off, and a line graph showing the filter frequencies returned after the filter has been applied.

+ +

You can check out the full demo here on Codepen. Also see the source code on GitHub. It includes some different coefficient values for different lowpass frequencies — you can change the value of the filterNumber constant to a value between 0 and 3 to check out the different available effects.

+ +

Browser support

+ +

IIR filters are supported well across modern browsers, although they have been implemented more recently than some of the more longstanding Web Audio API features, like Biquad filters.

+ +

The IIRFilterNode

+ +

The Web Audio API now comes with an {{domxref("IIRFilterNode")}} interface. But what is this and how does it differ from the {{domxref("BiquadFilterNode")}} we have already?

+ +

An IIR filter is an infinite impulse response filter. It's one of the two primary types of filters used in audio and digital signal processing; the other type is the FIR — finite impulse response — filter. There's a really good overview of IIR filters and FIR filters here.

+ +

A biquad filter is actually a specific type of infinite impulse response filter. It's a commonly-used type and we already have it as a node in the Web Audio API. If you choose this node the hard work is done for you. For instance, if you want to filter lower frequencies from your sound, you can set the type to highpass and then set which frequency to filter from (or cut off). There's more information on how biquad filters work here.

+ +
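As a point of comparison, the pre-built biquad version of that highpass setup only takes a couple of lines. This is just an illustrative sketch — the cutoff frequency is an arbitrary example value, and audioCtx is assumed to be an existing {{domxref("AudioContext")}}:

const biquadFilter = audioCtx.createBiquadFilter();
+biquadFilter.type = 'highpass';       // remove frequencies below the cutoff
+biquadFilter.frequency.value = 1000;  // example cutoff frequency, in Hz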

When you are using an {{domxref("IIRFilterNode")}} instead of a {{domxref("BiquadFilterNode")}} you are creating the filter yourself, rather than just choosing a pre-programmed type. So you can create a highpass filter, or a lowpass filter, or a more bespoke one. And this is where the IIR filter node is useful — you can create your own if none of the already available settings is right for what you want. As well as this, if your audio graph needed a highpass and a bandpass filter within it, you could just use one IIR filter node in place of the two biquad filter nodes you would otherwise need for this.

+ +

With the IIRFilter node it's up to you to set what feedforward and feedback values the filter needs — this determines the characteristics of the filter. The downside is that this involves some complex maths.

+ +

If you are looking to learn more there's some information about the maths behind IIR filters here. This enters the realms of signal processing theory — don't worry if you look at it and feel like it's not for you.

+ +

If you want to play with the IIR filter node and need some values to help along the way, there's a table of already calculated values here; on pages 4 & 5 of the linked PDF, the an values refer to the feedforward coefficients and the bn values refer to the feedback coefficients. musicdsp.org is also a great resource if you want to read more about different filters and how they are implemented digitally.

+ +

With that all in mind, let's take a look at the code to create an IIR filter with the Web Audio API.

+ +

Setting our IIRFilter coefficients

+ +

When creating an IIR filter, we pass in the feedforward and feedback coefficients as options (coefficients is how we describe the values). Both of these parameters are arrays, neither of which can be larger than 20 items.

+ +

When setting our coefficients, the feedforward values can't all be set to zero, otherwise nothing would be sent to the filter. Something like this is acceptable:

+ +
let feedForward = [0.00020298, 0.0004059599, 0.00020298];
+
+ +

Our feedback values cannot start with zero, otherwise on the first pass nothing would be sent back:

+ +
let feedBackward = [1.0126964558, -1.9991880801, 0.9873035442];
+
+ +
+

Note: These values are calculated based on the lowpass filter specified in the filter characteristics of the Web Audio API specification. As this filter node gains more popularity we should be able to collate more coefficient values.

+
+ +

Using an IIRFilter in an audio graph

+ +

Let's create our context and our filter node:

+ +
const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();
+
+const iirFilter = audioCtx.createIIRFilter(feedForward, feedBackward);
+
+ +
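If you prefer the constructor form, the same coefficients can also be passed as options — a quick sketch, equivalent to the createIIRFilter() call above and using the arrays we defined earlier:

// equivalent to the createIIRFilter() call above
+const sameFilter = new IIRFilterNode(audioCtx, {
+  feedforward: feedForward,
+  feedback: feedBackward
+});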

We need a sound source to play. We set this up using a custom function, playSourceNode(), which creates a buffer source from an existing {{domxref("AudioBuffer")}}, attaches it to the default destination, starts it playing, and returns it:

+ +
function playSourceNode(audioContext, audioBuffer) {
+  const soundSource = audioContext.createBufferSource();
+  soundSource.buffer = audioBuffer;
+  soundSource.connect(audioContext.destination);
+  soundSource.start();
+  return soundSource;
+}
+ +

This function is called when the play button is pressed. The play button HTML looks like this:

+ +
<button class="button-play" role="switch" data-playing="false" aria-pressed="false">Play</button>
+ +

And the click event listener starts like so:

+ +
playButton.addEventListener('click', function() {
+    if (this.dataset.playing === 'false') {
+        srcNode = playSourceNode(audioCtx, sample);
+        ...
+}, false);
+ +

The toggle that turns the IIR filter on and off is set up in a similar way. First, the HTML:

+ +
<button class="button-filter" role="switch" data-filteron="false" aria-pressed="false" aria-describedby="label" disabled></button>
+ +

The filter button's click handler then connects the IIRFilter up to the graph, between the source and the destination:

+ +
filterButton.addEventListener('click', function() {
+    if (this.dataset.filteron === 'false') {
+        srcNode.disconnect(audioCtx.destination);
+        srcNode.connect(iirFilter).connect(audioCtx.destination);
+        ...
+}, false);
+ +

Frequency response

+ +

We only have one method available on {{domxref("IIRFilterNode")}} instances: getFrequencyResponse(). This allows us to see what is happening to the frequencies of the audio being passed into the filter.

+ +

Let's draw a frequency plot of the filter we've created with the data we get back from this method.

+ +

We need to create three arrays: one containing the frequency values for which we want to receive the magnitude and phase response, and two empty arrays to receive the data. All three of these have to be of type Float32Array and all be of the same size.

+ +
// arrays for our frequency response
+const totalArrayItems = 30;
+let myFrequencyArray = new Float32Array(totalArrayItems);
+let magResponseOutput = new Float32Array(totalArrayItems);
+let phaseResponseOutput = new Float32Array(totalArrayItems);
+
+ +

Let's fill our first array with frequency values we want data to be returned on:

+ +
myFrequencyArray = myFrequencyArray.map(function(item, index) {
+    return Math.pow(1.4, index);
+});
+
+ +

We could have gone for a linear approach, but when working with frequencies it's far better to take a logarithmic approach, so we fill our array with frequency values that grow exponentially the further along the array we go.

+ +

Now let's get our response data:

+ +
iirFilter.getFrequencyResponse(myFrequencyArray, magResponseOutput, phaseResponseOutput);
+
+ +

We can use this data to draw a filter frequency plot. We'll do so on a 2d canvas context.

+ +
// create a canvas element and append it to our dom
+const canvasContainer = document.querySelector('.filter-graph');
+const canvasEl = document.createElement('canvas');
+canvasContainer.appendChild(canvasEl);
+
+// set 2d context and set dimensions
+const canvasCtx = canvasEl.getContext('2d');
+const width = canvasContainer.offsetWidth;
+const height = canvasContainer.offsetHeight;
+canvasEl.width = width;
+canvasEl.height = height;
+
+// set background fill
+canvasCtx.fillStyle = 'white';
+canvasCtx.fillRect(0, 0, width, height);
+
+// set up some spacing based on size
+const spacing = width/16;
+const fontSize = Math.floor(spacing/1.5);
+
+// draw our axis
+canvasCtx.lineWidth = 2;
+canvasCtx.strokeStyle = 'grey';
+
+canvasCtx.beginPath();
+canvasCtx.moveTo(spacing, spacing);
+canvasCtx.lineTo(spacing, height-spacing);
+canvasCtx.lineTo(width-spacing, height-spacing);
+canvasCtx.stroke();
+
+// axis is gain by frequency -> make labels
+canvasCtx.font = fontSize+'px sans-serif';
+canvasCtx.fillStyle = 'grey';
+canvasCtx.fillText('1', spacing-fontSize, spacing+fontSize);
+canvasCtx.fillText('g', spacing-fontSize, (height-spacing+fontSize)/2);
+canvasCtx.fillText('0', spacing-fontSize, height-spacing+fontSize);
+canvasCtx.fillText('Hz', width/2, height-spacing+fontSize);
+canvasCtx.fillText('20k', width-spacing, height-spacing+fontSize);
+
+// loop over our magnitude response data and plot our filter
+
+canvasCtx.beginPath();
+
+for(let i = 0; i < magResponseOutput.length; i++) {
+
+    if (i === 0) {
+        canvasCtx.moveTo(spacing, height-(magResponseOutput[i]*100)-spacing );
+    } else {
+        canvasCtx.lineTo((width/totalArrayItems)*i, height-(magResponseOutput[i]*100)-spacing );
+    }
+
+}
+
+canvasCtx.stroke();
+
+ +

Summary

+ +

That's it for our IIRFilter demo. This should have shown you the basics of using it, and helped you understand what it's useful for and how it works.

diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png new file mode 100644 index 0000000000..a31829c5d1 Binary files /dev/null and b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png differ diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html new file mode 100644 index 0000000000..c0dd84ee68 --- /dev/null +++ b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html @@ -0,0 +1,189 @@ +--- +title: Visualizations with Web Audio API +slug: Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API +tags: + - API + - Web Audio API + - analyser + - fft + - visualisation + - visualization + - waveform +--- +
+

One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source, which can then be used to create visualizations. This article explains how, and provides a couple of basic use cases.

+
+ +
+

Note: You can find working examples of all the code snippets in our Voice-change-O-matic demo.

+
+ +

Basic concepts

+ +

To extract data from your audio source, you need an {{ domxref("AnalyserNode") }}, which is created using the {{ domxref("BaseAudioContext.createAnalyser") }} method, for example:

+ +
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+
+ +

This node is then connected to your audio source at some point between your source and your destination, for example:

+ +
source = audioCtx.createMediaStreamSource(stream);
+source.connect(analyser);
+analyser.connect(distortion);
+distortion.connect(audioCtx.destination);
+ +
+

Note: You don't need to connect the analyser's output to another node for it to work, as long as the input is connected to the source, either directly or via another node.

+
+ +

The analyser node will then capture audio data using a Fast Fourier Transform (fft) in a certain frequency domain, depending on what you specify as the {{ domxref("AnalyserNode.fftSize") }} property value (if no value is specified, the default is 2048.)

+ +
+

Note: You can also specify a minimum and maximum power value for the fft data scaling range, using {{ domxref("AnalyserNode.minDecibels") }} and {{ domxref("AnalyserNode.maxDecibels") }}, and different data averaging constants using {{ domxref("AnalyserNode.smoothingTimeConstant") }}. Read those pages to get more information on how to use them.

+
+ +
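For example — these particular numbers are just a plausible starting point rather than values taken from the demo — you could tweak the analyser like so:

analyser.minDecibels = -90;             // lower bound of the dB range used when scaling byte frequency data
+analyser.maxDecibels = -10;             // upper bound of that range
+analyser.smoothingTimeConstant = 0.85;  // 0 = no time averaging between frames, 1 = maximum smoothing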

To capture data, you need to use the methods {{ domxref("AnalyserNode.getFloatFrequencyData()") }} and {{ domxref("AnalyserNode.getByteFrequencyData()") }} to capture frequency data, and {{ domxref("AnalyserNode.getByteTimeDomainData()") }} and {{ domxref("AnalyserNode.getFloatTimeDomainData()") }} to capture waveform data.

+ +

These methods copy data into a specified array, so you need to create a new array to receive the data before invoking one. The Float methods produce 32-bit floating point numbers, and the Byte methods produce 8-bit unsigned integers, therefore a standard JavaScript array won't do — you need to use a {{ domxref("Float32Array") }} or {{ domxref("Uint8Array") }} array, depending on which methods you are using.

+ +
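If you wanted the higher-resolution float data instead, a minimal sketch (not taken from the demo) of grabbing float waveform data would look like this:

var floatTimeData = new Float32Array(analyser.fftSize);
+analyser.getFloatTimeDomainData(floatTimeData); // waveform samples, roughly in the range -1.0 to 1.0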

So for example, say we are dealing with an fft size of 2048. We retrieve the {{ domxref("AnalyserNode.frequencyBinCount") }} value, which is half the fft size, then call Uint8Array() with frequencyBinCount as its length argument — this is how many data points we will be collecting for that fft size.

+ +
analyser.fftSize = 2048;
+var bufferLength = analyser.frequencyBinCount;
+var dataArray = new Uint8Array(bufferLength);
+ +

To actually retrieve the data and copy it into our array, we then call the data collection method we want, with the array passed as its argument. For example:

+ +
analyser.getByteTimeDomainData(dataArray);
+ +

We now have the audio data for that moment in time captured in our array, and can proceed to visualize it however we like, for example by plotting it onto an HTML5 {{ htmlelement("canvas") }}.

+ +
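The snippets that follow also assume a canvas and its 2D drawing context have already been set up, along these lines (the selector and the WIDTH/HEIGHT names are our own choices for this sketch, not anything mandated by the API):

var canvas = document.querySelector('.visualizer');
+var canvasCtx = canvas.getContext('2d');
+
+var WIDTH = canvas.width;
+var HEIGHT = canvas.height;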

Let's go on to look at some specific examples.

+ +

Creating a waveform/oscilloscope

+ +

To create the oscilloscope visualisation (hat tip to Soledad Penadés for the original code in Voice-change-O-matic), we first follow the standard pattern described in the previous section to set up the buffer:

+ +
analyser.fftSize = 2048;
+var bufferLength = analyser.frequencyBinCount;
+var dataArray = new Uint8Array(bufferLength);
+ +

Next, we clear the canvas of what had been drawn on it before to get ready for the new visualization display:

+ +
canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+ +

We now define the draw() function:

+ +
function draw() {
+ +

In here, we use requestAnimationFrame() to keep looping the drawing function once it has been started:

+ +
var drawVisual = requestAnimationFrame(draw);
+ +

Next, we grab the time domain data and copy it into our array:

+ +
analyser.getByteTimeDomainData(dataArray);
+ +

Next, fill the canvas with a solid color to start:

+ +
canvasCtx.fillStyle = 'rgb(200, 200, 200)';
+canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+ +

Set a line width and stroke color for the wave we will draw, then begin drawing a path:

+ +
canvasCtx.lineWidth = 2;
+canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+canvasCtx.beginPath();
+ +

Determine the width of each segment of the line to be drawn by dividing the canvas width by the array length (equal to frequencyBinCount, as defined earlier on), then define an x variable for the position to move to when drawing each segment of the line.

+ +
var sliceWidth = WIDTH * 1.0 / bufferLength;
+var x = 0;
+ +

Now we run through a loop, defining the position of a small segment of the wave for each point in the buffer at a certain height based on the data point value from the array, then moving the line across to the place where the next wave segment should be drawn:

+ +
      for(var i = 0; i < bufferLength; i++) {
+
+        var v = dataArray[i] / 128.0;
+        var y = v * HEIGHT/2;
+
+        if(i === 0) {
+          canvasCtx.moveTo(x, y);
+        } else {
+          canvasCtx.lineTo(x, y);
+        }
+
+        x += sliceWidth;
+      }
+ +

Finally, we finish the line in the middle of the right hand side of the canvas, then draw the stroke we've defined:

+ +
      canvasCtx.lineTo(canvas.width, canvas.height/2);
+      canvasCtx.stroke();
+    };
+ +

At the end of this section of code, we invoke the draw() function to start off the whole process:

+ +
    draw();
+ +

This gives us a nice waveform display that updates several times a second:

+ +

a black oscilloscope line, showing the waveform of an audio signal

+ +

Creating a frequency bar graph

+ +

Another nice little sound visualization to create is one of those Winamp-style frequency bar graphs. We have one available in Voice-change-O-matic; let's look at how it's done.

+ +

First, we again set up our analyser and data array, then clear the current canvas display with clearRect(). The only difference from before is that we have set the fft size to be much smaller; this is so that each bar in the graph is big enough to actually look like a bar rather than a thin strand.

+ +
analyser.fftSize = 256;
+var bufferLength = analyser.frequencyBinCount;
+console.log(bufferLength);
+var dataArray = new Uint8Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+ +

Next, we start our draw() function off, again setting up a loop with requestAnimationFrame() so that the displayed data keeps updating, and clearing the display with each animation frame.

+ +
    function draw() {
+      drawVisual = requestAnimationFrame(draw);
+
+      analyser.getByteFrequencyData(dataArray);
+
+      canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+ +

Now we set our barWidth to be equal to the canvas width divided by the number of bars (the buffer length). However, we are also multiplying that width by 2.5, because most of the frequencies will come back as having no audio in them, as most of the sounds we hear every day are in a certain lower frequency range. We don't want to display loads of empty bars, so we widen the ones that will regularly display at a noticeable height so that they fill the whole canvas width.

+ +

We also set a barHeight variable, and an x variable to record how far across the screen to draw the current bar.

+ +
var barWidth = (WIDTH / bufferLength) * 2.5;
+var barHeight;
+var x = 0;
+ +

As before, we now start a for loop and cycle through each value in the dataArray. For each one, we make the barHeight equal to the array value, set a fill color based on the barHeight (taller bars are brighter), and draw a bar at x pixels across the canvas, which is barWidth wide and barHeight/2 tall (we eventually decided to cut each bar in half so they would all fit on the canvas better.)

+ +

The one value that needs explaining is the vertical offset position we are drawing each bar at: HEIGHT-barHeight/2. We do this because we want each bar to stick up from the bottom of the canvas, not down from the top, as it would if we set the vertical position to 0. Therefore, we instead set the vertical position each time to the height of the canvas minus barHeight/2, so each bar will be drawn from partway down the canvas, down to the bottom.

+ +
      for(var i = 0; i < bufferLength; i++) {
+        barHeight = dataArray[i]/2;
+
+        canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+        canvasCtx.fillRect(x,HEIGHT-barHeight/2,barWidth,barHeight);
+
+        x += barWidth + 1;
+      }
+    };
+ +

Again, at the end of the code we invoke the draw() function to set the whole process in motion.

+ +
draw();
+ +

This code gives us a result like the following:

+ +

a series of red bars in a bar graph, showing intensity of different frequencies in an audio signal

+ +
+

Note: The examples listed in this article have shown usage of {{ domxref("AnalyserNode.getByteFrequencyData()") }} and {{ domxref("AnalyserNode.getByteTimeDomainData()") }}. For working examples showing {{ domxref("AnalyserNode.getFloatFrequencyData()") }} and {{ domxref("AnalyserNode.getFloatTimeDomainData()") }}, refer to our Voice-change-O-matic-float-data demo (see the source code too) — this is exactly the same as the original Voice-change-O-matic, except that it uses Float data, not unsigned byte data.

+
diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png new file mode 100644 index 0000000000..9254829d23 Binary files /dev/null and b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png differ diff --git a/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html new file mode 100644 index 0000000000..2846d45d6c --- /dev/null +++ b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html @@ -0,0 +1,467 @@ +--- +title: Web audio spatialization basics +slug: Web/API/Web_Audio_API/Web_audio_spatialization_basics +tags: + - PannerNode + - Web Audio API + - panning +--- +
{{DefaultAPISidebar("Web Audio API")}}
+ +
+

As if its extensive variety of sound processing (and other) options wasn't enough, the Web Audio API also includes facilities to allow you to emulate the difference in sound as a listener moves around a sound source, for example panning as you move around a sound source inside a 3D game. The official term for this is spatialization, and this article will cover the basics of how to implement such a system.

+
+ +

Basics of spatialization

+ +

In Web Audio, complex 3D spatializations are created using the {{domxref("PannerNode")}}, which in layman's terms is basically a whole lotta cool maths to make audio appear in 3D space. Think sounds flying over you, creeping up behind you, moving across in front of you. That sort of thing.

+ +

It's really useful for WebXR and gaming. In 3D spaces, it's the only way to achieve realistic audio. Libraries like three.js and A-frame harness its potential when dealing with sound. It's worth noting that you don't have to move sound within a full 3D space either — you could stick with just a 2D plane, so if you were planning a 2D game, this would still be the node you were looking for.

+ +
+

Note: There's also a {{domxref("StereoPannerNode")}} designed to deal with the common use case of creating simple left and right stereo panning effects. This is much simpler to use, but obviously nowhere near as versatile. If you just want a simple stereo panning effect, our StereoPannerNode example (see source code) should give you everything you need.

+
+ +
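For reference, simple stereo panning with that node takes only a couple of lines — a sketch assuming an existing {{domxref("AudioContext")}} called audioCtx and a source node called source:

const stereoPanner = new StereoPannerNode(audioCtx, { pan: -0.5 }); // -1 is full left, 1 is full right
+source.connect(stereoPanner).connect(audioCtx.destination);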

3D boombox demo

+ +

To demonstrate 3D spatialization we've created a modified version of the boombox demo we created in our basic Using the Web Audio API guide. See the 3D spatialization demo live (and see the source code also).

+ +

A simple UI with a rotated boombox and controls to move it left and right and in and out, and rotate it.

+ +

The boombox sits inside a room (defined by the edges of the browser viewport), and in this demo, we can move and rotate it with the provided controls. When we move the boombox, the sound it produces changes accordingly, panning as it moves to the left or right of the room, or becoming quieter as it is moved away from the user or is rotated so the speakers are facing away from them, etc. This is done by setting the different properties of the PannerNode object instance in relation to that movement, to emulate spatialization.

+ +
+

Note: The experience is much better if you use headphones, or have some kind of surround sound system to plug your computer into.

+
+ +

Creating an audio listener

+ +

So let's begin! The {{domxref("BaseAudioContext")}} (the interface the {{domxref("AudioContext")}} is extended from) has a listener property that returns an {{domxref("AudioListener")}} object. This represents the listener of the scene, usually your user. You can define where they are in space and in which direction they are facing. In this demo the listener remains static, and the {{domxref("PannerNode")}} then calculates its sound position relative to the position of the listener.

+ +

Let's create our context and listener and set the listener's position to emulate a person looking into our room:

+ +
const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();
+const listener = audioCtx.listener;
+
+const posX = window.innerWidth/2;
+const posY = window.innerHeight/2;
+const posZ = 300;
+
+listener.positionX.value = posX;
+listener.positionY.value = posY;
+listener.positionZ.value = posZ-5;
+
+ +

We could move the listener left or right using positionX, up or down using positionY, or in or out of the room using positionZ. Here we are setting the listener to be in the middle of the viewport and slightly in front of our boombox. We can also set the direction the listener is facing. The default values for these work well:

+ +
listener.forwardX.value = 0;
+listener.forwardY.value = 0;
+listener.forwardZ.value = -1;
+listener.upX.value = 0;
+listener.upY.value = 1;
+listener.upZ.value = 0;
+
+ +

The forward properties represent the 3D coordinate position of the listener's forward direction (i.e. the direction they are facing in), while the up properties represent the 3D coordinate position of the top of the listener's head. These two together can nicely set the direction.

+ +

Creating a panner node

+ +

Let's create our {{domxref("PannerNode")}}. This has a whole bunch of properties associated with it. Let's take a look at each of them:

+ +

To start we can set the panningModel. This is the spatialization algorithm that's used to position the audio in 3D space. We can set this to:

+ +

equalpower — The default and the general way panning is figured out

+ +

HRTF — This stands for 'Head-related transfer function' and looks to take into account the human head when figuring out where the sound is.

+ +

Pretty clever stuff. Let's use the HRTF model!

+ +
const pannerModel = 'HRTF';
+
+ +

The coneInnerAngle and coneOuterAngle properties specify where the volume emanates from. By default, both are 360 degrees. Our boombox speakers will have smaller cones, which we can define. The inner cone is where gain (volume) is always emulated at a maximum and the outer cone is where the gain starts to drop away. The gain is reduced by the value of the coneOuterGain value. Let's create constants that store the values we'll use for these parameters later on:

+ +
const innerCone = 60;
+const outerCone = 90;
+const outerGain = 0.3;
+
+ +

The next parameter is distanceModel — this can only be set to linear, inverse, or exponential. These are different algorithms, which are used to reduce the volume of the audio source as it moves away from the listener. We'll use linear, as it is simple:

+ +
const distanceModel = 'linear';
+
+ +

We can set a maximum distance (maxDistance) between the source and the listener — once the source is further away than this point, the volume is not reduced any further. This can be useful, as you may want to emulate distance without the volume dropping out entirely. By default, it's 10,000 (a unitless relative value). We can keep it as this:

+ +
const maxDistance = 10000;
+
+ +

There's also a reference distance (refDistance), which is used by the distance models. We can keep that at the default value of 1 as well:

+ +
const refDistance = 1;
+
+ +

Then there's the roll-off factor (rolloffFactor), which determines how quickly the volume reduces as the panner moves away from the listener. The default value is 1; let's make that a bit bigger to exaggerate our movements.

+ +
const rollOff = 10;
+
+ +

Now we can start setting our position and orientation of our boombox. This is a lot like how we did it with our listener. These are also the parameters we're going to change when the controls on our interface are used.

+ +
const positionX = posX;
+const positionY = posY;
+const positionZ = posZ;
+
+const orientationX = 0.0;
+const orientationY = 0.0;
+const orientationZ = -1.0;
+
+ +

Note the minus value on our z orientation — this sets the boombox to face us. A positive value would set the sound source facing away from us.

+ +

Let's use the relevant constructor for creating our panner node and pass in all those parameters we set above:

+ +
const panner = new PannerNode(audioCtx, {
+    panningModel: pannerModel,
+    distanceModel: distanceModel,
+    positionX: positionX,
+    positionY: positionY,
+    positionZ: positionZ,
+    orientationX: orientationX,
+    orientationY: orientationY,
+    orientationZ: orientationZ,
+    refDistance: refDistance,
+    maxDistance: maxDistance,
+    rolloffFactor: rollOff,
+    coneInnerAngle: innerCone,
+    coneOuterAngle: outerCone,
+    coneOuterGain: outerGain
+})
+
+ +

Moving the boombox

+ +

Now we're going to move our boombox around our 'room'. We've got some controls set up to do this. We can move it left and right, up and down, and back and forth; we can also rotate it. The sound direction is coming from the boombox speaker at the front, so when we rotate it, we can alter the sound's direction — i.e. make it project to the back when the boombox is rotated 180 degrees and facing away from us.

+ +

We need to set up a few things for the interface. First, we'll get references to the elements we want to move, then we'll store references to the values we'll change when we set up CSS transforms to actually do the movement. Finally, we'll set some bounds so our boombox doesn't move too far in any direction:

+ +
const moveControls = document.querySelector('#move-controls').querySelectorAll('button');
+const boombox = document.querySelector('.boombox-body');
+
+// the values for our css transforms
+let transform = {
+    xAxis: 0,
+    yAxis: 0,
+    zAxis: 0.8,
+    rotateX: 0,
+    rotateY: 0
+}
+
+// set our bounds
+const topBound = -posY;
+const bottomBound = posY;
+const rightBound = posX;
+const leftBound = -posX;
+const innerBound = 0.1;
+const outerBound = 1.5;
+
+ +

Let's create a function that takes the direction we want to move as a parameter, and both modifies the CSS transform and updates the position and orientation values of our panner node properties to change the sound as appropriate.

+ +

To start with let's take a look at our left, right, up and down values as these are pretty straightforward. We'll move the boombox along these axes and update the appropriate position.

+ +
function moveBoombox(direction) {
+    switch (direction) {
+        case 'left':
+            if (transform.xAxis > leftBound) {
+                transform.xAxis -= 5;
+                panner.positionX.value -= 0.1;
+            }
+        break;
+        case 'up':
+            if (transform.yAxis > topBound) {
+                transform.yAxis -= 5;
+                panner.positionY.value -= 0.3;
+            }
+        break;
+        case 'right':
+            if (transform.xAxis < rightBound) {
+                transform.xAxis += 5;
+                panner.positionX.value += 0.1;
+            }
+        break;
+        case 'down':
+            if (transform.yAxis < bottomBound) {
+                transform.yAxis += 5;
+                panner.positionY.value += 0.3;
+            }
+        break;
+    }
+}
+
+ +

It's a similar story for our move in and out values too:

+ +
case 'back':
+    if (transform.zAxis > innerBound) {
+        transform.zAxis -= 0.01;
+        panner.positionZ.value += 40;
+    }
+break;
+case 'forward':
+    if (transform.zAxis < outerBound) {
+        transform.zAxis += 0.01;
+        panner.positionZ.value -= 40;
+    }
+break;
+
+ +

Our rotation values are a little more involved, however, as we need to move the sound around. Not only do we have to update two axis values (e.g. if you rotate an object around the x-axis, you update the y and z coordinates for that object), but we also need to do some more maths for this. The rotation is a circle and we need Math.sin and Math.cos to help us draw that circle.

+ +

Let's set up a rotation rate, which we'll convert into a radian range value for use in Math.sin and Math.cos later, when we want to figure out the new coordinates when we're rotating our boombox:

+ +
// set up rotation constants
+const rotationRate = 60; // bigger number equals slower sound rotation
+
+const q = Math.PI/rotationRate; //rotation increment in radians
+
+ +

We can also use this to work out degrees rotated, which will help with the CSS transforms we will have to create (note we need both an x and y-axis for the CSS transforms):

+ +
// get degrees for css
+const degreesX = (q * 180)/Math.PI;
+const degreesY = (q * 180)/Math.PI;
+
+ +

Let's take a look at our left rotation as an example. We need to change the x orientation and the z orientation of the panner coordinates, to move around the y-axis for our left rotation:

+ +
case 'rotate-left':
+  transform.rotateY -= degreesY;
+
+  // 'left' is rotation about y-axis with negative angle increment
+  z = panner.orientationZ.value*Math.cos(q) - panner.orientationX.value*Math.sin(q);
+  x = panner.orientationZ.value*Math.sin(q) + panner.orientationX.value*Math.cos(q);
+  y = panner.orientationY.value;
+
+  panner.orientationX.value = x;
+  panner.orientationY.value = y;
+  panner.orientationZ.value = z;
+break;
+
+ +

This is a little confusing, but what we're doing is using sin and cos to help us work out the circular motion the coordinates need for the rotation of the boombox.

+ +

We can do this for all the axes. We just need to choose the right orientations to update and whether we want a positive or negative increment.

+ +
case 'rotate-right':
+  transform.rotateY += degreesY;
+  // 'right' is rotation about y-axis with positive angle increment
+  z = panner.orientationZ.value*Math.cos(-q) - panner.orientationX.value*Math.sin(-q);
+  x = panner.orientationZ.value*Math.sin(-q) + panner.orientationX.value*Math.cos(-q);
+  y = panner.orientationY.value;
+  panner.orientationX.value = x;
+  panner.orientationY.value = y;
+  panner.orientationZ.value = z;
+break;
+case 'rotate-up':
+  transform.rotateX += degreesX;
+  // 'up' is rotation about x-axis with negative angle increment
+  z = panner.orientationZ.value*Math.cos(-q) - panner.orientationY.value*Math.sin(-q);
+  y = panner.orientationZ.value*Math.sin(-q) + panner.orientationY.value*Math.cos(-q);
+  x = panner.orientationX.value;
+  panner.orientationX.value = x;
+  panner.orientationY.value = y;
+  panner.orientationZ.value = z;
+break;
+case 'rotate-down':
+  transform.rotateX -= degreesX;
+  // 'down' is rotation about x-axis with positive angle increment
+  z = panner.orientationZ.value*Math.cos(q) - panner.orientationY.value*Math.sin(q);
+  y = panner.orientationZ.value*Math.sin(q) + panner.orientationY.value*Math.cos(q);
+  x = panner.orientationX.value;
+  panner.orientationX.value = x;
+  panner.orientationY.value = y;
+  panner.orientationZ.value = z;
+break;
+
+ +

One last thing — we need to update the CSS and keep a reference of the last move for the mouse event. Here's the final moveBoombox function.

+ +
function moveBoombox(direction, prevMove) {
+    switch (direction) {
+        case 'left':
+            if (transform.xAxis > leftBound) {
+                transform.xAxis -= 5;
+                panner.positionX.value -= 0.1;
+            }
+        break;
+        case 'up':
+            if (transform.yAxis > topBound) {
+                transform.yAxis -= 5;
+                panner.positionY.value -= 0.3;
+            }
+        break;
+        case 'right':
+            if (transform.xAxis < rightBound) {
+                transform.xAxis += 5;
+                panner.positionX.value += 0.1;
+            }
+        break;
+        case 'down':
+            if (transform.yAxis < bottomBound) {
+                transform.yAxis += 5;
+                panner.positionY.value += 0.3;
+            }
+        break;
+        case 'back':
+            if (transform.zAxis > innerBound) {
+                transform.zAxis -= 0.01;
+                panner.positionZ.value += 40;
+            }
+        break;
+        case 'forward':
+            if (transform.zAxis < outerBound) {
+                transform.zAxis += 0.01;
+                panner.positionZ.value -= 40;
+            }
+        break;
+        case 'rotate-left':
+            transform.rotateY -= degreesY;
+
+            // 'left' is rotation about y-axis with negative angle increment
+            z = panner.orientationZ.value*Math.cos(q) - panner.orientationX.value*Math.sin(q);
+            x = panner.orientationZ.value*Math.sin(q) + panner.orientationX.value*Math.cos(q);
+            y = panner.orientationY.value;
+
+            panner.orientationX.value = x;
+            panner.orientationY.value = y;
+            panner.orientationZ.value = z;
+        break;
+        case 'rotate-right':
+            transform.rotateY += degreesY;
+            // 'right' is rotation about y-axis with positive angle increment
+            z = panner.orientationZ.value*Math.cos(-q) - panner.orientationX.value*Math.sin(-q);
+            x = panner.orientationZ.value*Math.sin(-q) + panner.orientationX.value*Math.cos(-q);
+            y = panner.orientationY.value;
+            panner.orientationX.value = x;
+            panner.orientationY.value = y;
+            panner.orientationZ.value = z;
+        break;
+        case 'rotate-up':
+            transform.rotateX += degreesX;
+            // 'up' is rotation about x-axis with negative angle increment
+            z = panner.orientationZ.value*Math.cos(-q) - panner.orientationY.value*Math.sin(-q);
+            y = panner.orientationZ.value*Math.sin(-q) + panner.orientationY.value*Math.cos(-q);
+            x = panner.orientationX.value;
+            panner.orientationX.value = x;
+            panner.orientationY.value = y;
+            panner.orientationZ.value = z;
+        break;
+        case 'rotate-down':
+            transform.rotateX -= degreesX;
+            // 'down' is rotation about x-axis with positive angle increment
+            z = panner.orientationZ.value*Math.cos(q) - panner.orientationY.value*Math.sin(q);
+            y = panner.orientationZ.value*Math.sin(q) + panner.orientationY.value*Math.cos(q);
+            x = panner.orientationX.value;
+            panner.orientationX.value = x;
+            panner.orientationY.value = y;
+            panner.orientationZ.value = z;
+        break;
+    }
+
+  boombox.style.transform = 'translateX('+transform.xAxis+'px) translateY('+transform.yAxis+'px) scale('+transform.zAxis+') rotateY('+transform.rotateY+'deg) rotateX('+transform.rotateX+'deg)';
+
+  const move = prevMove || {};
+  move.frameId = requestAnimationFrame(() => moveBoombox(direction, move));
+    return move;
+}
+
+ +

Wiring up our controls

+ +

Wiring up our control buttons is comparatively simple — now we can listen for a mouse event on our controls and run this function, as well as stop it when the mouse is released:

+ +
// for each of our controls, move the boombox and change the position values
+moveControls.forEach(function(el) {
+
+    let moving;
+    el.addEventListener('mousedown', function() {
+
+        let direction = this.dataset.control;
+        if (moving && moving.frameId) {
+            window.cancelAnimationFrame(moving.frameId);
+        }
+        moving = moveBoombox(direction);
+
+    }, false);
+
+    window.addEventListener('mouseup', function() {
+        if (moving && moving.frameId) {
+            window.cancelAnimationFrame(moving.frameId);
+        }
+    }, false)
+
+})
+
+ +

Connecting our graph

+ +

Our HTML contains the audio element we want to be affected by the panner node.

+ +
<audio src="myCoolTrack.mp3"></audio>
+ +

We need to grab the source from that element and pipe it into the Web Audio API using {{domxref('AudioContext.createMediaElementSource')}}.

+ +
// get the audio element
+const audioElement = document.querySelector('audio');
+
+// pass it into the audio context
+const track = audioCtx.createMediaElementSource(audioElement);
+
+ +

Next we have to connect our audio graph. We connect our input (the track) to our modification node (the panner) to our destination (in this case the speakers).

+ +
track.connect(panner).connect(audioCtx.destination);
+
+ +

Let's create a play button, that when clicked will play or pause the audio depending on the current state.

+ +
<button data-playing="false" role="switch">Play/Pause</button>
+
+ +
// select our play button
+const playButton = document.querySelector('button');
+
+playButton.addEventListener('click', function() {
+
+  // check if context is in suspended state (autoplay policy)
+  if (audioCtx.state === 'suspended') {
+    audioCtx.resume();
+  }
+
+  // play or pause track depending on state
+  if (this.dataset.playing === 'false') {
+    audioElement.play();
+    this.dataset.playing = 'true';
+  } else if (this.dataset.playing === 'true') {
+    audioElement.pause();
+    this.dataset.playing = 'false';
+  }
+
+}, false);
+
+ +

For a more in-depth look at playing/controlling audio and audio graphs, check out Using the Web Audio API.

+ +

Summary

+ +

Hopefully, this article has given you an insight into how Web Audio spatialization works, and what each of the {{domxref("PannerNode")}} properties do (there are quite a few of them). The values can be hard to manipulate sometimes and depending on your use case it can take some time to get them right.

+ +
+

Note: There are slight differences in the way the audio spatialization sounds across different browsers. The panner node does some very involved maths under the hood; there are a number of tests here so you can keep track of the status of the inner workings of this node across different platforms.

+
+ +

Again, you can check out the final demo here, and the final source code is here. There is also a Codepen demo too.

+ +

If you are working with 3D games and/or WebXR it's a good idea to harness a 3D library to create such functionality, rather than trying to do this all yourself from first principles. We rolled our own in this article to give you an idea of how it works, but you'll save a lot of time by taking advantage of work others have done before you.

diff --git a/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png new file mode 100644 index 0000000000..18a359e5c1 Binary files /dev/null and b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png differ -- cgit v1.2.3-54-g00ecf