author    logic-finder <83723320+logic-finder@users.noreply.github.com>  2021-08-15 23:57:52 +0900
committer GitHub <noreply@github.com>  2021-08-15 23:57:52 +0900
commit    f93ef19d66b0d692ff171d7bdcb82d98f741544a (patch)
tree      30350064b1fe030463f06c2a52d9e748ec786f0c  /files/ko/web
parent    86205e9ea4dc7902a3cbbf2612b1adc77d73c4f5 (diff)
[ko] Work done for 'Web Audio API' article. (#1609)
* Work done for 'Web Audio API' article.
* fix a hyperlink
* small fixes and documents added

Co-authored-by: hochan Lee <hochan049@gmail.com>
Diffstat (limited to 'files/ko/web')
-rw-r--r--  files/ko/web/api/web_audio_api/advanced_techniques/index.html  586
-rw-r--r--  files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png  bin 0 -> 9782 bytes
-rw-r--r--  files/ko/web/api/web_audio_api/audio-context_.png  bin 0 -> 29346 bytes
-rw-r--r--  files/ko/web/api/web_audio_api/best_practices/index.html  97
-rw-r--r--  files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg  1
-rw-r--r--  files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html  284
-rw-r--r--  files/ko/web/api/web_audio_api/index.html  499
-rw-r--r--  files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html  381
-rw-r--r--  files/ko/web/api/web_audio_api/tools/index.html  41
-rw-r--r--  files/ko/web/api/web_audio_api/using_audioworklet/index.html  325
-rw-r--r--  files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png  bin 0 -> 6824 bytes
-rw-r--r--  files/ko/web/api/web_audio_api/using_iir_filters/index.html  198
-rw-r--r--  files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png  bin 0 -> 2221 bytes
-rw-r--r--  files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html  189
-rw-r--r--  files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png  bin 0 -> 4433 bytes
-rw-r--r--  files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html  467
-rw-r--r--  files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png  bin 0 -> 26452 bytes
17 files changed, 2724 insertions, 344 deletions
diff --git a/files/ko/web/api/web_audio_api/advanced_techniques/index.html b/files/ko/web/api/web_audio_api/advanced_techniques/index.html
new file mode 100644
index 0000000000..d3ce7cd56d
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/advanced_techniques/index.html
@@ -0,0 +1,586 @@
+---
+title: 'Advanced techniques: Creating and sequencing audio'
+slug: Web/API/Web_Audio_API/Advanced_techniques
+tags:
+ - API
+ - Advanced
+ - Audio
+ - Guide
+ - Reference
+ - Web Audio API
+ - sequencer
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<p class="summary">In this tutorial, we're going to cover sound creation and modification, as well as timing and scheduling. We're going to introduce sample loading, envelopes, filters, wavetables, and frequency modulation. If you're familiar with these terms and you're looking for an introduction to their application within with the Web Audio API, you've come to the right place.</p>
+
+<h2 id="Demo">Demo</h2>
+
+<p>We're going to be looking at a very simple step sequencer:</p>
+
+<p><img alt="A sound sequencer application featuring play and BPM master controls, and 4 different voices with controls for each." src="sequencer.png"><br>
+  </p>
+
+<p>In practice this is easier to do with a library — the Web Audio API was built to be built upon. If you are about to embark on building something more complex, <a href="https://tonejs.github.io/">tone.js</a> would be a good place to start. However, we want to demonstrate how to build such a demo from first principles, as a learning exercise.</p>
+
+<div class="note">
+<p><strong>Note</strong>: You can find the source code on GitHub as <a href="https://github.com/mdn/webaudio-examples/tree/master/step-sequencer">step-sequencer</a>; see the <a href="https://mdn.github.io/webaudio-examples/step-sequencer/">step-sequencer running live</a> also.</p>
+</div>
+
+<p>The interface consists of master controls, which allow us to play/stop the sequencer, and adjust the BPM (beats per minute) to speed up or slow down the "music".</p>
+
+<p>There are four different sounds, or voices, which can be played. Each voice has four buttons, which represent four beats in one bar of music. When they are enabled the note will sound. When the instrument plays, it will move across this set of beats and loop the bar.</p>
+
+<p>Each voice also has local controls, which allow you to manipulate the effects or parameters particular to each technique we are using to create those voices. The techniques we are using are:</p>
+
+<table class="standard-table">
+ <thead>
+ <tr>
+ <th scope="col">Name of voice</th>
+ <th scope="col">Technique</th>
+ <th scope="col">Associated Web Audio API feature</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>"Sweep"</td>
+ <td>Oscillator, periodic wave</td>
+ <td>{{domxref("OscillatorNode")}}, {{domxref("PeriodicWave")}}</td>
+ </tr>
+ <tr>
+ <td>"Pulse"</td>
+ <td>Multiple oscillators</td>
+ <td>{{domxref("OscillatorNode")}}</td>
+ </tr>
+ <tr>
+ <td>"Noise"</td>
+ <td>Random noise buffer, Biquad filter</td>
+ <td>{{domxref("AudioBuffer")}}, {{domxref("AudioBufferSourceNode")}}, {{domxref("BiquadFilterNode")}}</td>
+ </tr>
+ <tr>
+ <td>"Dial up"</td>
+ <td>Loading a sound sample to play</td>
+ <td>{{domxref("BaseAudioContext/decodeAudioData")}}, {{domxref("AudioBufferSourceNode")}}</td>
+ </tr>
+ </tbody>
+</table>
+
+<div class="note">
+<p><strong>Note</strong>: This instrument was not created to sound good; it was created to provide demonstration code, and represents a <em>very</em> simplified version of such an instrument. The sounds are based on a dial-up modem. If you are unaware of how one sounds, you can <a href="https://soundcloud.com/john-pemberton/modem-dialup">listen to one here</a>.</p>
+</div>
+
+<h2 id="Creating_an_audio_context">Creating an audio context</h2>
+
+<p>As you should be used to by now, each Web Audio API app starts with an audio context:</p>
+
+<pre class="brush: js">// for cross browser compatibility
+const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();</pre>
+
+<h2 id="The_sweep_—_oscillators_periodic_waves_and_envelopes">The "sweep" — oscillators, periodic waves, and envelopes</h2>
+
+<p>For what we will call the "sweep" sound, that first noise you hear when you dial up, we're going to create an oscillator to generate the sound.</p>
+
+<p>The {{domxref("OscillatorNode")}} comes with basic waveforms out of the box — sine, square, triangle or sawtooth. However, instead of using the standard waves that come by default, we're going to create our own using the {{domxref("PeriodicWave")}} interface and values set in a wavetable. We can use the {{domxref("BaseAudioContext.createPeriodicWave")}} method to use this custom wave with an oscillator.</p>
+
+<h3 id="The_periodic_wave">The periodic wave</h3>
+
+<p>First of all, we'll create our periodic wave. To do so, we need to pass real and imaginary values into the {{domxref("BaseAudioContext.createPeriodicWave()")}} method:</p>
+
+<pre class="brush: js">const wave = audioCtx.createPeriodicWave(wavetable.real, wavetable.imag);
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: In our example the wavetable is held in a separate JavaScript file (<code>wavetable.js</code>), because there are <em>so</em> many values. It is taken from a <a href="https://github.com/GoogleChromeLabs/web-audio-samples/tree/main/archive/demos/wave-tables">repository of wavetables</a>, which can be found in the <a href="https://github.com/GoogleChromeLabs/web-audio-samples/">Web Audio API examples from Google Chrome Labs</a>.</p>
+</div>
+
+<h3 id="The_Oscillator">The Oscillator</h3>
+
+<p>Now we can create an {{domxref("OscillatorNode")}} and set its wave to the one we've created:</p>
+
+<pre class="brush: js">function playSweep(time) {
+ const osc = audioCtx.createOscillator();
+ osc.setPeriodicWave(wave);
+ osc.frequency.value = 440;
+ osc.connect(audioCtx.destination);
+ osc.start(time);
+ osc.stop(time + 1);
+}</pre>
+
+<p>We pass in a time parameter to the function here, which we'll use later to schedule the sweep.</p>
+
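+<p>For example, a call like <code>playSweep(audioCtx.currentTime)</code> (a hypothetical direct call, purely to illustrate the parameter) would play the sweep immediately, while the scheduler we build later on passes in times slightly in the future.</p>
+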
+<h3 id="Controlling_amplitude">Controlling amplitude</h3>
+
+<p>This is great, but wouldn't it be nice if we had an amplitude envelope to go with it? Let's create a simple one so we get used to the methods we need to create an envelope with the Web Audio API.</p>
+
+<p>Let's say our envelope has attack and release. We can allow the user to control these using <a href="/en-US/docs/Web/HTML/Element/input/range">range inputs</a> on the interface:</p>
+
+<pre class="brush: html">&lt;label for="attack"&gt;Attack&lt;/label&gt;
+&lt;input name="attack" id="attack" type="range" min="0" max="1" value="0.2" step="0.1" /&gt;
+
+&lt;label for="release"&gt;Release&lt;/label&gt;
+&lt;input name="release" id="release" type="range" min="0" max="1" value="0.5" step="0.1" /&gt;</pre>
+
+<p>Now we can create some variables over in JavaScript and have them change when the input values are updated:</p>
+
+<pre class="brush: js">let attackTime = 0.2;
+const attackControl = document.querySelector('#attack');
+attackControl.addEventListener('input', function() {
+ attackTime = Number(this.value);
+}, false);
+
+let releaseTime = 0.5;
+const releaseControl = document.querySelector('#release');
+releaseControl.addEventListener('input', function() {
+ releaseTime = Number(this.value);
+}, false);</pre>
+
+<h3 id="The_final_playSweep_function">The final playSweep() function</h3>
+
+<p>Now we can expand our <code>playSweep()</code> function. We need to add a {{domxref("GainNode")}} and connect that through our audio graph to actually apply amplitude variations to our sound. The gain node has one property: <code>gain</code>, which is of type {{domxref("AudioParam")}}.</p>
+
+<p>This is really useful — now we can start to harness the power of the audio param methods on the gain value. We can set a value at a certain time, or we can change it <em>over</em> time with methods such as {{domxref("AudioParam.linearRampToValueAtTime")}}.</p>
+
+<p>For our attack and release, we'll use the <code>linearRampToValueAtTime</code> method as mentioned above. It takes two parameters — the value you want the parameter you are changing (in this case the gain) to reach, and when you want it to reach that value. In our case <em>when</em> is controlled by our inputs. So in the example below the gain is being increased to 1, at a linear rate, over the time the attack range input has been set to. Similarly, for our release, the gain is being set to 0, at a linear rate, over the time the release input has been set to.</p>
+
+<pre class="brush: js">let sweepLength = 2;
+function playSweep(time) {
+ let osc = audioCtx.createOscillator();
+ osc.setPeriodicWave(wave);
+ osc.frequency.value = 440;
+
+ let sweepEnv = audioCtx.createGain();
+ sweepEnv.gain.cancelScheduledValues(time);
+ sweepEnv.gain.setValueAtTime(0, time);
+ // set our attack
+ sweepEnv.gain.linearRampToValueAtTime(1, time + attackTime);
+ // set our release
+ sweepEnv.gain.linearRampToValueAtTime(0, time + sweepLength - releaseTime);
+
+ osc.connect(sweepEnv).connect(audioCtx.destination);
+ osc.start(time);
+ osc.stop(time + sweepLength);
+}</pre>
+
+<h2 id="The_pulse_—_low_frequency_oscillator_modulation">The "pulse" — low frequency oscillator modulation</h2>
+
+<p>Great, now we've got our sweep! Let's move on and take a look at that nice pulse sound. We can achieve this with a basic oscillator, modulated with a second oscillator.</p>
+
+<h3 id="Initial_oscillator">Initial oscillator</h3>
+
+<p>We'll set up our first {{domxref("OscillatorNode")}} the same way as our sweep sound, except we won't use a wavetable to set a bespoke wave — we'll just use the default <code>sine</code> wave:</p>
+
+<pre class="brush: js">const osc = audioCtx.createOscillator();
+osc.type = 'sine';
+osc.frequency.value = 880;</pre>
+
+<p>Now we're going to create a {{domxref("GainNode")}}, as it's the <code>gain</code> value that we will oscillate with our second, low frequency oscillator:</p>
+
+<pre class="brush: js">const amp = audioCtx.createGain();
+amp.gain.setValueAtTime(1, audioCtx.currentTime);</pre>
+
+<h3 id="Creating_the_second_low_frequency_oscillator">Creating the second, low frequency, oscillator</h3>
+
+<p>We'll now create a second — <code>square</code> — wave (or pulse) oscillator, to alter the amplification of our first sine wave:</p>
+
+<pre class="brush: js">const lfo = audioCtx.createOscillator();
+lfo.type = 'square';
+lfo.frequency.value = 30;</pre>
+
+<h3 id="Connecting_the_graph">Connecting the graph</h3>
+
+<p>The key here is connecting the graph correctly, and also starting both oscillators:</p>
+
+<pre class="brush: js">lfo.connect(amp.gain);
+osc.connect(amp).connect(audioCtx.destination);
+lfo.start();
+osc.start(time);
+osc.stop(time + pulseTime);</pre>
+
+<div class="note">
+<p><strong>Note</strong>: We also don't have to use the default wave types for either of these oscillators we're creating — we could use a wavetable and the periodic wave method as we did before. There is a multitude of possibilities with just a minimum of nodes.</p>
+</div>
+
+<h3 id="Pulse_user_controls">Pulse user controls</h3>
+
+<p>For the UI controls, let's expose both frequencies of our oscillators, allowing them to be controlled via range inputs. One will change the tone and the other will change how the pulse modulates the first wave:</p>
+
+<pre class="brush: html">&lt;label for="hz"&gt;Hz&lt;/label&gt;
+&lt;input name="hz" id="hz" type="range" min="660" max="1320" value="880" step="1" /&gt;
+&lt;label for="lfo"&gt;LFO&lt;/label&gt;
+&lt;input name="lfo" id="lfo" type="range" min="20" max="40" value="30" step="1" /&gt;</pre>
+
+<p>As before, we'll vary the parameters when the range input values are changed by the user.</p>
+
+<pre class="brush: js">let pulseHz = 880;
+const hzControl = document.querySelector('#hz');
+hzControl.addEventListener('input', function() {
+ pulseHz = Number(this.value);
+}, false);
+
+let lfoHz = 30;
+const lfoControl = document.querySelector('#lfo');
+lfoControl.addEventListener('input', function() {
+ lfoHz = Number(this.value);
+}, false);</pre>
+
+<h3 id="The_final_playPulse_function">The final playPulse() function</h3>
+
+<p>Here's the entire <code>playPulse()</code> function:</p>
+
+<pre class="brush: js">let pulseTime = 1;
+function playPulse(time) {
+ let osc = audioCtx.createOscillator();
+ osc.type = 'sine';
+ osc.frequency.value = pulseHz;
+
+ let amp = audioCtx.createGain();
+ amp.gain.value = 1;
+
+ let lfo = audioCtx.createOscillator();
+ lfo.type = 'square';
+ lfo.frequency.value = lfoHz;
+
+ lfo.connect(amp.gain);
+ osc.connect(amp).connect(audioCtx.destination);
+ lfo.start();
+ osc.start(time);
+ osc.stop(time + pulseTime);
+}</pre>
+
+<h2 id="The_noise_—_random_noise_buffer_with_biquad_filter">The "noise" — random noise buffer with biquad filter</h2>
+
+<p>Now we need to make some noise! All modems have noise. Noise is just random numbers when it comes to audio data, so it is a relatively straightforward thing to create with code.</p>
+
+<h3 id="Creating_an_audio_buffer">Creating an audio buffer</h3>
+
+<p>We need to create an empty container to put these numbers into; however, it needs to be one that the Web Audio API understands. This is where {{domxref("AudioBuffer")}} objects come in. You can fetch a file and decode it into a buffer (we'll get to that later on in the tutorial), or you can create an empty buffer and fill it with your own data.</p>
+
+<p>For noise, let's do the latter. We first need to calculate the size of our buffer in order to create it. We can use the {{domxref("BaseAudioContext.sampleRate")}} property for this:</p>
+
+<pre class="brush: js">const bufferSize = audioCtx.sampleRate * noiseLength;
+const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);</pre>
+
+<p>Now we can fill it with random numbers between -1 and 1:</p>
+
+<pre class="brush: js">let data = buffer.getChannelData(0); // get data
+
+// fill the buffer with noise
+for (let i = 0; i &lt; bufferSize; i++) {
+ data[i] = Math.random() * 2 - 1;
+}</pre>
+
+<div class="note">
+<p><strong>Note</strong>: Why -1 to 1? When outputting sound to a file or speakers we need to have a number to represent 0 dB full scale — the numerical limit of the fixed-point media or DAC. In floating-point audio, 1 is a convenient number to map to "full scale" for mathematical operations on signals, so oscillators, noise generators and other sound sources typically output bipolar signals in the range -1 to 1. A browser will clamp values outside this range.</p>
+</div>
+
+<h3 id="Creating_a_buffer_source">Creating a buffer source</h3>
+
+<p>Now that we have an audio buffer filled with data, we need a node we can add to our graph that uses the buffer as a source. We'll create an {{domxref("AudioBufferSourceNode")}} for this, and pass in the data we've created:</p>
+
+<pre class="brush: js">let noise = audioCtx.createBufferSource();
+noise.buffer = buffer;</pre>
+
+<p>If we connect this through our audio graph and play it —</p>
+
+<pre class="brush: js">noise.connect(audioCtx.destination);
+noise.start();</pre>
+
+<p>you'll notice that it's pretty hissy or tinny. We've created white noise; that's how it should be. Our values are running from -1 to 1, which means we have peaks of all frequencies, which in turn is actually quite dramatic and piercing. We <em>could</em> modify the function to run values from 0.5 to -0.5 or similar to take the peaks off and reduce the discomfort; however, where's the fun in that? Let's route the noise we've created through a filter.</p>
+
+<h3 id="Adding_a_biquad_filter_to_the_mix">Adding a biquad filter to the mix</h3>
+
+<p>We want something in the range of pink or brown noise. We want to cut off those high frequencies and possibly some of the lower ones. Let's pick a bandpass biquad filter for the job.</p>
+
+<div class="note">
+<p><strong>Note</strong>: The Web Audio API comes with two types of filter nodes: {{domxref("BiquadFilterNode")}} and {{domxref("IIRFilterNode")}}. For the most part a biquad filter will be good enough — it comes with different types such as lowpass, highpass, and bandpass. If you're looking to do something more bespoke, however, the IIR filter might be a good option — see <a href="/en-US/docs/Web/API/Web_Audio_API/Using_IIR_filters">Using IIR filters</a> for more information.</p>
+</div>
+
+<p>Wiring this up is the same as we've seen before. We create the {{domxref("BiquadFilterNode")}}, configure the properties we want for it, and connect it through our graph. Different types of biquad filters have different properties — for instance, setting the frequency on a bandpass type adjusts the middle frequency, whereas on a lowpass it would set the top frequency.</p>
+
+<pre class="brush: js">let bandpass = audioCtx.createBiquadFilter();
+bandpass.type = 'bandpass';
+bandpass.frequency.value = 1000;
+
+// connect our graph
+noise.connect(bandpass).connect(audioCtx.destination);</pre>
+
+<h3 id="Noise_user_controls">Noise user controls</h3>
+
+<p>On the UI we'll expose the noise duration and the frequency we want to band, allowing the user to adjust them via range inputs and event handlers just like in previous sections:</p>
+
+<pre class="brush: html">&lt;label for="duration"&gt;Duration&lt;/label&gt;
+&lt;input name="duration" id="duration" type="range" min="0" max="2" value="1" step="0.1" /&gt;
+
+&lt;label for="band"&gt;Band&lt;/label&gt;
+&lt;input name="band" id="band" type="range" min="400" max="1200" value="1000" step="5" /&gt;
+</pre>
+
+<pre class="brush: js">let noiseDuration = 1;
+const durControl = document.querySelector('#duration');
+durControl.addEventListener('input', function() {
+ noiseDuration = Number(this.value);
+}, false);
+
+let bandHz = 1000;
+const bandControl = document.querySelector('#band');
+bandControl.addEventListener('input', function() {
+ bandHz = Number(this.value);
+}, false);</pre>
+
+<h3 id="The_final_playNoise_function">The final playNoise() function</h3>
+
+<p>Here's the entire <code>playNoise()</code> function:</p>
+
+<pre class="brush: js">function playNoise(time) {
+ const bufferSize = audioCtx.sampleRate * noiseDuration; // set the time of the note
+ const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate); // create an empty buffer
+ let data = buffer.getChannelData(0); // get data
+
+ // fill the buffer with noise
+ for (let i = 0; i &lt; bufferSize; i++) {
+ data[i] = Math.random() * 2 - 1;
+ }
+
+ // create a buffer source for our created data
+ let noise = audioCtx.createBufferSource();
+ noise.buffer = buffer;
+
+ let bandpass = audioCtx.createBiquadFilter();
+ bandpass.type = 'bandpass';
+ bandpass.frequency.value = bandHz;
+
+ // connect our graph
+ noise.connect(bandpass).connect(audioCtx.destination);
+ noise.start(time);
+}</pre>
+
+<h2 id="Dial_up_—_loading_a_sound_sample">"Dial up" — loading a sound sample</h2>
+
+<p>It's straightforward enough to emulate phone dial (DTMF) sounds by playing a couple of oscillators together using the methods we've already looked at. However, in this section we'll load in a sample file instead, so we can take a look at what's involved.</p>
+
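+<p>For reference, each DTMF digit is just two sine tones played at once; the digit "1", for example, combines tones of roughly 697 Hz and 1209 Hz. A minimal sketch of that oscillator-only approach (not part of this demo; the function name is hypothetical) might look like this:</p>
+
+<pre class="brush: js">// Hypothetical sketch: play the DTMF tone for the digit "1" for 0.2 seconds
+function playDtmfDigitOne(time) {
+  [697, 1209].forEach((freq) =&gt; {
+    const osc = audioCtx.createOscillator();
+    osc.type = 'sine';
+    osc.frequency.value = freq;
+    osc.connect(audioCtx.destination);
+    osc.start(time);
+    osc.stop(time + 0.2);
+  });
+}</pre>
+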
+<h3 id="Loading_the_sample">Loading the sample</h3>
+
+<p>We want to make sure our file has loaded and been decoded into a buffer before we use it, so let's create an <code><a href="/en-US/docs/Web/JavaScript/Reference/Statements/async_function">async</a></code> function to allow us to do this:</p>
+
+<pre class="brush: js">async function getFile(audioContext, filepath) {
+ const response = await fetch(filepath);
+ const arrayBuffer = await response.arrayBuffer();
+ const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
+ return audioBuffer;
+}</pre>
+
+<p>We can then use the <code><a href="/en-US/docs/Web/JavaScript/Reference/Operators/await">await</a></code> operator when calling this function, which ensures that we can only run subsequent code when it has finished executing.</p>
+
+<p>Let's create another <code>async</code> function to set up the sample — we can combine the two async functions in a nice promise pattern to perform further actions when this file is loaded and buffered:</p>
+
+<pre class="brush: js">async function setupSample() {
+ const filePath = 'dtmf.mp3';
+ const sample = await getFile(audioCtx, filePath);
+ return sample;
+}</pre>
+
+<div class="note">
+<p><strong>Note</strong>: You can easily modify the above function to take an array of files and loop over them to load more than one sample. This would be very handy for more complex instruments, or gaming.</p>
+</div>
+
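+<p>As a sketch of what that note suggests (assuming a hypothetical <code>filePaths</code> array, and reusing the <code>getFile()</code> function above), loading several samples in parallel could look like this:</p>
+
+<pre class="brush: js">// Hypothetical variant: load and decode several samples at once
+async function setupSamples(filePaths) {
+  // getFile() fetches and decodes each file into an AudioBuffer
+  const samples = await Promise.all(
+    filePaths.map((filePath) =&gt; getFile(audioCtx, filePath))
+  );
+  return samples; // an array of AudioBuffers, in the same order as filePaths
+}</pre>
+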
+<p>We can now use <code>setupSample()</code> like so:</p>
+
+<pre class="brush: js">setupSample()
+ .then((sample) =&gt; {
+ // sample is our buffered file
+ // ...
+});</pre>
+
+<p>When the sample is ready to play, the program sets up the UI so it is ready to go.</p>
+
+<h3 id="Playing_the_sample">Playing the sample</h3>
+
+<p>Let's create a <code>playSample()</code> function in a similar manner to how we did with the other sounds. This time it will create an {{domxref("AudioBufferSourceNode")}}, put the buffer data we've fetched and decoded into it, and play it:</p>
+
+<pre class="brush: js">function playSample(audioContext, audioBuffer, time) {
+ const sampleSource = audioContext.createBufferSource();
+ sampleSource.buffer = audioBuffer;
+ sampleSource.connect(audioContext.destination);
+ sampleSource.start(time);
+ return sampleSource;
+}</pre>
+
+<div class="note">
+<p><strong>Note</strong>: We can call <code>stop()</code> on an {{domxref("AudioBufferSourceNode")}}; however, it will stop automatically when the sample has finished playing.</p>
+</div>
+
+<h3 id="Dial-up_user_controls">Dial-up user controls</h3>
+
+<p>The {{domxref("AudioBufferSourceNode")}} comes with a <code><a href="/en-US/docs/Web/API/AudioBufferSourceNode/playbackRate">playbackRate</a></code> property. Let's expose that to our UI, so we can speed up and slow down our sample. We'll do that in the same sort of way as before:</p>
+
+<pre class="brush: html">&lt;label for="rate"&gt;Rate&lt;/label&gt;
+&lt;input name="rate" id="rate" type="range" min="0.1" max="2" value="1" step="0.1" /&gt;</pre>
+
+<pre class="brush: js">let playbackRate = 1;
+const rateControl = document.querySelector('#rate');
+rateControl.addEventListener('input', function() {
+ playbackRate = Number(this.value);
+}, false);</pre>
+
+<h3 id="The_final_playSample_function">The final playSample() function</h3>
+
+<p>We'll then add a line to our <code>playSample()</code> function that updates the <code>playbackRate</code> property. The final version looks like this:</p>
+
+<pre class="brush: js">function playSample(audioContext, audioBuffer, time) {
+ const sampleSource = audioContext.createBufferSource();
+ sampleSource.buffer = audioBuffer;
+ sampleSource.playbackRate.value = playbackRate;
+ sampleSource.connect(audioContext.destination);
+ sampleSource.start(time);
+ return sampleSource;
+}</pre>
+
+<div class="note">
+<p><strong>Note</strong>: The sound file was <a href="http://soundbible.com/1573-DTMF-Tones.html">sourced from soundbible.com</a>.</p>
+</div>
+
+<h2 id="Playing_the_audio_in_time">Playing the audio in time</h2>
+
+<p>A common problem with digital audio applications is getting the sounds to play in time so that the beat remains consistent, and things do not slip out of time.</p>
+
+<p>We could schedule our voices to play within a <code>for</code> loop; however, the biggest problem with this is updating the settings whilst the sequence is playing, and we've already implemented UI controls to do exactly that. Also, it would be really nice to have an instrument-wide BPM control. The best way to get our voices to play on the beat is to create a scheduling system, whereby we look ahead at when the notes are going to play and push them into a queue. We can then start them at a precise time using the <code>currentTime</code> property and also take any changes into account.</p>
+
+<div class="note">
+<p><strong>Note</strong>: This is a much stripped down version of <a href="https://www.html5rocks.com/en/tutorials/audio/scheduling/">Chris Wilson's A Tale Of Two Clocks</a> article, which goes into this method in much more detail. There's no point repeating it all here, but it's highly recommended to read this article and use this method. Much of the code here is taken from his <a href="https://github.com/cwilso/metronome/blob/master/js/metronome.js">metronome example</a>, which he references in the article.</p>
+</div>
+
+<p>Let's start by setting up our default BPM (beats per minute), which will also be user-controllable via — you guessed it — another range input.</p>
+
+<pre class="brush: js">let tempo = 60.0;
+const bpmControl = document.querySelector('#bpm');
+bpmControl.addEventListener('input', function() {
+ tempo = Number(this.value);
+}, false);</pre>
+
+<p>Then we'll create variables to define how far ahead we want to look, and how far ahead we want to schedule:</p>
+
+<pre class="brush: js">const lookahead = 25.0; // How frequently to call scheduling function (in milliseconds)
+const scheduleAheadTime = 0.1; // How far ahead to schedule audio (sec)</pre>
+
+<p>Let's create a function that moves the note forwards by one beat, and loops back to the first when it reaches the 4th (last) one:</p>
+
+<pre class="brush: js">let currentNote = 0;
+let nextNoteTime = 0.0; // when the next note is due.
+
+function nextNote() {
+ const secondsPerBeat = 60.0 / tempo;
+
+ nextNoteTime += secondsPerBeat; // Add beat length to last beat time
+
+ // Advance the beat number, wrap to zero
+ currentNote++;
+ if (currentNote === 4) {
+ currentNote = 0;
+ }
+}</pre>
+
+<p>We want to create a reference queue for the notes that are to be played, and the functionality to play them using the functions we've previously created:</p>
+
+<pre class="brush: js">const notesInQueue = [];
+
+function scheduleNote(beatNumber, time) {
+
+ // push the note on the queue, even if we're not playing.
+ notesInQueue.push({ note: beatNumber, time: time });
+
+ if (pads[0].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+ playSweep(time)
+ }
+ if (pads[1].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+ playPulse(time)
+ }
+ if (pads[2].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+ playNoise(time)
+ }
+ if (pads[3].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+ playSample(audioCtx, dtmf, time);
+ }
+}</pre>
+
+<p>Here we look at the current time and compare it to the time for the next note; when the two match it will call the previous two functions.</p>
+
+<p>{{domxref("AudioContext")}} object instances have a <code><a href="/en-US/docs/Web/API/BaseAudioContext/currentTime">currentTime</a></code> property, which allows us to retrieve the number of seconds after we first created the context. This is what we shall use for timing within our step sequencer — It's extremely accurate, returning a float value accurate to about 15 decimal places.</p>
+
+<pre class="brush: js">function scheduler() {
+ // while there are notes that will need to play before the next interval, schedule them and advance the pointer.
+ while (nextNoteTime &lt; audioCtx.currentTime + scheduleAheadTime ) {
+ scheduleNote(currentNote, nextNoteTime);
+ nextNote();
+ }
+ timerID = window.setTimeout(scheduler, lookahead);
+}</pre>
+
+<p>We also need a draw function to update the UI, so we can see when the beat progresses.</p>
+
+<pre class="brush: js">let lastNoteDrawn = 3;
+
+function draw() {
+ let drawNote = lastNoteDrawn;
+ let currentTime = audioCtx.currentTime;
+
+ while (notesInQueue.length &amp;&amp; notesInQueue[0].time &lt; currentTime) {
+ drawNote = notesInQueue[0].note;
+ notesInQueue.splice(0,1); // remove note from queue
+ }
+
+ // We only need to draw if the note has moved.
+ if (lastNoteDrawn != drawNote) {
+ pads.forEach(function(el, i) {
+ el.children[lastNoteDrawn].style.borderColor = 'hsla(0, 0%, 10%, 1)';
+ el.children[drawNote].style.borderColor = 'hsla(49, 99%, 50%, 1)';
+ });
+
+ lastNoteDrawn = drawNote;
+ }
+ // set up to draw again
+ requestAnimationFrame(draw);
+}</pre>
+
+<h2 id="Putting_it_all_together">Putting it all together</h2>
+
+<p>Now all that's left to do is make sure we've loaded the sample before we are able to <em>play</em> the instrument. We'll add a loading screen that disappears when the file has been fetched and decoded, then we can allow the scheduler to start using the play button click event.</p>
+
+<pre class="brush: js">// when the sample has loaded allow play
+let loadingEl = document.querySelector('.loading');
+const playButton = document.querySelector('[data-playing]');
+let isPlaying = false;
+setupSample()
+ .then((sample) =&gt; {
+ loadingEl.style.display = 'none'; // remove loading screen
+
+ dtmf = sample; // to be used in our playSample function
+
+ playButton.addEventListener('click', function() {
+ isPlaying = !isPlaying;
+
+ if (isPlaying) { // start playing
+
+ // check if context is in suspended state (autoplay policy)
+ if (audioCtx.state === 'suspended') {
+ audioCtx.resume();
+ }
+
+ currentNote = 0;
+ nextNoteTime = audioCtx.currentTime;
+ scheduler(); // kick off scheduling
+ requestAnimationFrame(draw); // start the drawing loop.
+ this.dataset.playing = 'true';
+
+ } else {
+
+ window.clearTimeout(timerID);
+ this.dataset.playing = 'false';
+
+ }
+ })
+ });</pre>
+
+<h2 id="Summary">Summary</h2>
+
+<p>We've now got an instrument inside our browser! Keep playing and experimenting — you can expand on any of these techniques to create something much more elaborate.</p>
diff --git a/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png b/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png
new file mode 100644
index 0000000000..63de8cb0de
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/audio-context_.png b/files/ko/web/api/web_audio_api/audio-context_.png
new file mode 100644
index 0000000000..36d0190052
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/audio-context_.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/best_practices/index.html b/files/ko/web/api/web_audio_api/best_practices/index.html
new file mode 100644
index 0000000000..784b3f1f3c
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/best_practices/index.html
@@ -0,0 +1,97 @@
+---
+title: Web Audio API best practices
+slug: Web/API/Web_Audio_API/Best_practices
+tags:
+ - Audio
+ - Best practices
+ - Guide
+ - Web Audio API
+---
+<div>{{apiref("Web Audio API")}}</div>
+
+<p class="summary">There's no strict right or wrong way when writing creative code. As long as you consider security, performance, and accessibility, you can adapt to your own style. In this article, we'll share a number of <em>best practices</em> — guidelines, tips, and tricks for working with the Web Audio API.</p>
+
+<h2 id="Loading_soundsfiles">Loading sounds/files</h2>
+
+<p>There are four main ways to load sound with the Web Audio API and it can be a little confusing as to which one you should use.</p>
+
+<p>When working with files, you are looking at either grabbing the file from an {{domxref("HTMLMediaElement")}} (i.e. an {{htmlelement("audio")}} or {{htmlelement("video")}} element), or fetching the file and decoding it into a buffer. Both are legitimate ways of working; however, it's more common to use the former when you are working with full-length tracks, and the latter when working with shorter, more sample-like tracks.</p>
+
+<p>Media elements have streaming support out of the box. The audio will start playing when the browser determines it can load the rest of the file before playing finishes. You can see an example of how to use this with the Web Audio API in the <a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API tutorial</a>.</p>
+
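+<p>As a minimal sketch of the media element approach (assuming an existing <code>&lt;audio&gt;</code> element on the page, and leaving out the autoplay-policy handling discussed below), you might wire it into a context like this:</p>
+
+<pre class="brush: js">// Sketch: use an &lt;audio&gt; element as a Web Audio source
+const audioCtx = new AudioContext();
+const audioElement = document.querySelector('audio');
+const track = audioCtx.createMediaElementSource(audioElement);
+track.connect(audioCtx.destination);</pre>
+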
+<p>You will, however, have more control if you use a buffer node. You have to request the file and wait for it to load (<a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques#Dial_up_%E2%80%94_loading_a_sound_sample">this section of our advanced article</a> shows a good way to do it), but then you have access to the data directly, which means more precision and finer-grained manipulation.</p>
+
+<p>If you're looking to work with audio from the user's camera or microphone you can access it via the <a href="/en-US/docs/Web/API/Media_Streams_API">Media Stream API</a> and the {{domxref("MediaStreamAudioSourceNode")}} interface. This is good for WebRTC and situations where you might want to record or possibly analyze audio.</p>
+
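+<p>A brief sketch of that approach (assuming a typical setup, with error handling omitted) might look like this:</p>
+
+<pre class="brush: js">// Sketch: route microphone input into an AudioContext
+const audioCtx = new AudioContext();
+navigator.mediaDevices.getUserMedia({ audio: true })
+  .then((stream) =&gt; {
+    const source = audioCtx.createMediaStreamSource(stream);
+    source.connect(audioCtx.destination); // or into an AnalyserNode, a recorder, etc.
+  });</pre>
+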
+<p>The last way is to generate your own sound, which can be done with either an {{domxref("OscillatorNode")}} or by creating a buffer and populating it with your own data. Check out the <a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques">tutorial here for creating your own instrument</a> for information on creating sounds with oscillators and buffers.</p>
+
+<h2 id="Cross_browser_legacy_support">Cross browser &amp; legacy support</h2>
+
+<p>The Web Audio API specification is constantly evolving and like most things on the web, there are some issues with it working consistently across browsers. Here we'll look at options for getting around cross-browser problems.</p>
+
+<p>There's the <a href="https://github.com/chrisguttandin/standardized-audio-context"><code>standardized-audio-context</code></a> npm package, which creates API functionality consistently across browsers, filling holes as they are found. It's constantly in development and endeavours to keep up with the current specification.</p>
+
+<p>There is also the option of libraries, of which there are a few depending on your use case. For a good all-rounder, <a href="https://howlerjs.com/">howler.js</a> is a good choice. It has cross-browser support and provides a useful subset of functionality. Although it doesn't harness the full gamut of filters and other effects the Web Audio API comes with, you can do most of what you'd want to do.</p>
+
+<p>If you are looking for sound creation or a more instrument-based option, <a href="https://tonejs.github.io/">tone.js</a> is a great library. It provides advanced scheduling capabilities, synths, and effects, and intuitive musical abstractions built on top of the Web Audio API.</p>
+
+<p><a href="https://github.com/bbc/r-audio">R-audio</a>, from the <a href="https://medium.com/bbc-design-engineering/r-audio-declarative-reactive-and-flexible-web-audio-graphs-in-react-102c44a1c69c">BBC's Research &amp; Development department</a>, is a library of React components aiming to provide a "more intuitive, declarative interface to Web Audio". If you're used to writing JSX it might be worth looking at.</p>
+
+<h2 id="Autoplay_policy">Autoplay policy</h2>
+
+<p>Browsers have started to implement an autoplay policy, which in general can be summed up as:</p>
+
+<blockquote>
+<p>"Create or resume context from inside a user gesture".</p>
+</blockquote>
+
+<p>But what does that mean in practice? A user gesture has been interpreted to mean a user-initiated event, normally a <code>click</code> event. Browser vendors decided that Web Audio contexts should not be allowed to automatically play audio; they should instead be started by a user. This is because autoplaying audio can be really annoying and obtrusive. But how do we handle this?</p>
+
+<p>When you create an audio context (either offline or online) it is created with a <code>state</code>, which can be <code>suspended</code>, <code>running</code>, or <code>closed</code>.</p>
+
+<p>When working with an {{domxref("AudioContext")}}, if you create the audio context from inside a <code>click</code> event the state should automatically be set to <code>running</code>. Here is a simple example of creating the context from inside a <code>click</code> event:</p>
+
+<pre class="brush: js">const button = document.querySelector('button');
+button.addEventListener('click', function() {
+ const audioCtx = new AudioContext();
+}, false);
+</pre>
+
+<p>If, however, you create the context outside of a user gesture, its state will be set to <code>suspended</code> and it will need to be started after user interaction. We can use the same click event example here: test for the state of the context and, if it is suspended, start it using the <a href="/en-US/docs/Web/API/BaseAudioContext/resume"><code>resume()</code></a> method.</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+const button = document.querySelector('button');
+
+button.addEventListener('click', function() {
+ // check if context is in suspended state (autoplay policy)
+ if (audioCtx.state === 'suspended') {
+ audioCtx.resume();
+ }
+}, false);
+</pre>
+
+<p>You might instead be working with an {{domxref("OfflineAudioContext")}}, in which case you can resume the suspended audio context with the <a href="/en-US/docs/Web/API/OfflineAudioContext/startRendering"><code>startRendering()</code></a> method.</p>
+
+<h2 id="User_control">User control</h2>
+
+<p>If your website or application contains sound, you should allow the user to control it; otherwise, it can quickly become annoying. This can be achieved by play/stop and volume/mute controls. The <a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a> tutorial goes over how to do this.</p>
+
+<p>If you have buttons that switch audio on and off, using the ARIA <a href="/en-US/docs/Web/Accessibility/ARIA/Roles/Switch_role"><code>role="switch"</code></a> attribute on them is a good option for signalling to assistive technology what the button's exact purpose is, and therefore making the app more accessible. There's a <a href="https://codepen.io/Wilto/pen/ZoGoQm?editors=1100">demo of how to use it here</a>.</p>
+
+<p>As you work with a lot of changing values within the Web Audio API and will want to provide users with control over these, the <a href="/en-US/docs/Web/HTML/Element/input/range"><code>range input</code></a> is often a good choice of control to use. It's a good option as you can set minimum and maximum values, as well as increments with the <a href="/en-US/docs/Web/HTML/Element/input#attr-step"><code>step</code></a> attribute.</p>
+
+<h2 id="Setting_AudioParam_values">Setting AudioParam values</h2>
+
+<p>There are two ways to manipulate {{domxref("AudioNode")}} values, which are themselves objects of the {{domxref("AudioParam")}} interface. The first is to set the value directly via the property. So, for instance, if we want to change the <code>gain</code> value of a {{domxref("GainNode")}} we would do so like this:</p>
+
+<pre class="brush: js">gainNode.gain.value = 0.5;
+</pre>
+
+<p>This will set our volume to half. However, if you're using any of the <code>AudioParam</code>'s defined methods to set these values, they will take precedence over the above property setting. If, for example, you want the <code>gain</code> value to be raised to 1 in 2 seconds' time, you can do this:</p>
+
+<pre class="brush: js">gainNode.gain.setValueAtTime(1, audioCtx.currentTime + 2);
+</pre>
+
+<p>It will override the previous example (as it should), even if it were to come later in your code.</p>
+
+<p>Bearing this in mind, if your website or application requires timing and scheduling, it's best to stick with the {{domxref("AudioParam")}} methods for setting values. If you're sure it doesn't, setting it with the <code>value</code> property is fine.</p>
diff --git a/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg
new file mode 100644
index 0000000000..0490cddbe5
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="7 7 580 346" width="580pt" height="346pt"><defs><marker orient="auto" overflow="visible" markerUnits="strokeWidth" id="a" viewBox="-1 -3 7 6" markerWidth="7" markerHeight="6" color="#000"><path d="M4.8 0 0-1.8v3.6z" fill="currentColor" stroke="currentColor"/></marker></defs><g fill="none"><path fill="#867fff" d="M207 99h180v45H207z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M207 99h180v45H207z"/><text transform="translate(212 113)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x="9.388" y="14" textLength="151.225">ConstantSourceNode</tspan></text><path fill="#867fff" d="M9 216h180v45H9z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M9 216h180v45H9z"/><text transform="translate(14 230)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x="51.395" y="14" textLength="67.211">GainNode</tspan></text><path fill="#867fff" d="M405 216h180v45H405z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M405 216h180v45H405z"/><text transform="translate(410 230)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x="51.395" y="14" textLength="67.211">GainNode</tspan></text><path fill="#867fff" d="M207 216h180v45H207z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M207 216h180v45H207z"/><text transform="translate(212 230)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x="17.789" y="14" textLength="134.422">StereoPannerNode</tspan></text><path d="M252 144v27H99v32.1M297 144v59.1m45-59.1v27h153v32.1" marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2"/><text transform="translate(55.876 192.447)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x=".197" y="14" textLength="33.605">gain</tspan></text><text transform="translate(258.64 192.854)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x=".398" y="14" textLength="25.204">pan</tspan></text><text transform="translate(504.37 193.347)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x=".197" y="14" textLength="33.605">gain</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M297 54v32.1"/><path d="M243 9h144l-36 45H207z" fill="#fff"/><path d="M243 9h144l-36 45H207z" stroke="#000" stroke-linecap="round" stroke-linejoin="round"/><text transform="translate(248 22)" fill="#000"><tspan font-family="Arial" font-size="16" font-weight="500" x="17.734" y="15" textLength="52.93">input = </tspan><tspan font-family="Courier" font-size="16" font-style="italic" font-weight="500" x="70.664" y="15" textLength="9.602">N</tspan></text><path d="M243 306h144l-36 45H207z" fill="#fff"/><path d="M243 306h144l-36 45H207z" stroke="#000" stroke-linecap="round" stroke-linejoin="round"/><text transform="translate(248 319)" fill="#000"><tspan font-family="Arial" font-size="16" font-weight="500" x="12.84" y="15" textLength="62.719">output = </tspan><tspan font-family="Courier" font-size="16" font-style="italic" font-weight="500" x="75.559" y="15" textLength="9.602">N</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="m296.5 261 .357 32.101"/><path d="M441 306h144l-36 45H405z" fill="#fff"/><path d="M441 306h144l-36 45H405z" stroke="#000" stroke-linecap="round" 
stroke-linejoin="round"/><text transform="translate(446 319)" fill="#000"><tspan font-family="Arial" font-size="16" font-weight="500" x="12.84" y="15" textLength="62.719">output = </tspan><tspan font-family="Courier" font-size="16" font-style="italic" font-weight="500" x="75.559" y="15" textLength="9.602">N</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M495 261v32.1"/><path d="M45 306h144l-36 45H9z" fill="#fff"/><path d="M45 306h144l-36 45H9z" stroke="#000" stroke-linecap="round" stroke-linejoin="round"/><text transform="translate(50 319)" fill="#000"><tspan font-family="Arial" font-size="16" font-weight="500" x="12.84" y="15" textLength="62.719">output = </tspan><tspan font-family="Courier" font-size="16" font-style="italic" font-weight="500" x="75.559" y="15" textLength="9.602">N</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M99 261v32.1"/></g></svg> \ No newline at end of file
diff --git a/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html
new file mode 100644
index 0000000000..5fdd188213
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html
@@ -0,0 +1,284 @@
+---
+title: Controlling multiple parameters with ConstantSourceNode
+slug: Web/API/Web_Audio_API/Controlling_multiple_parameters_with_ConstantSourceNode
+tags:
+ - Audio
+ - Example
+ - Guide
+ - Intermediate
+ - Media
+ - Tutorial
+ - Web Audio
+ - Web Audio API
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p><span class="seoSummary">This article demonstrates how to use a {{domxref("ConstantSourceNode")}} to link multiple parameters together so they share the same value, which can be changed by setting the value of the {{domxref("ConstantSourceNode.offset")}} parameter.</span></p>
+
+<p>You may have times when you want to have multiple audio parameters be linked so they share the same value even while being changed in some way. For example, perhaps you have a set of oscillators, and two of them need to share the same, configurable volume, or you have a filter that's been applied to certain inputs but not to all of them. You could use a loop and change the value of each affected {{domxref("AudioParam")}} one at a time, but there are two drawbacks to doing it that way: first, that's extra code that, as you're about to see, you don't have to write; and second, that loop uses valuable CPU time on your thread (likely the main thread), and there's a way to offload all that work to the audio rendering thread, which is optimized for this kind of work and may run at a more appropriate priority level than your code.</p>
+
+<p>The solution is simple, and it involves using an audio node type which, at first glance, doesn't look all that useful: {{domxref("ConstantSourceNode")}}.</p>
+
+<h2 id="The_technique">The technique</h2>
+
+<p>This is actually a really easy way to do something that sounds like it might be hard to do. You need to create a {{domxref("ConstantSourceNode")}} and connect it to all of the {{domxref("AudioParam")}}s whose values should be linked to always match each other. Since <code>ConstantSourceNode</code>'s {{domxref("ConstantSourceNode.offset", "offset")}} value is sent straight through to all of its outputs, it acts as a splitter for that value, sending it to each connected parameter.</p>
+
+<p>The diagram below shows how this works; an input value, <code>N</code>, is set as the value of the {{domxref("ConstantSourceNode.offset")}} property. The <code>ConstantSourceNode</code> can have as many outputs as necessary; in this case, we've connected it to three nodes: two {{domxref("GainNode")}}s and a {{domxref("StereoPannerNode")}}. So <code>N</code> becomes the value of the specified parameter ({{domxref("GainNode.gain", "gain")}} for the {{domxref("GainNode")}}s and pan for the {{domxref("StereoPannerNode")}}).</p>
+
+<p><img alt="Dagram in SVG showing how ConstantSourceNode can be used to split an input parameter to share it with multiple nodes." src="customsourcenode-as-splitter.svg"></p>
+
+<p>As a result, every time you change <code>N</code> (the value of the input {{domxref("AudioParam")}}), the values of the two <code>GainNode</code>s' <code>gain</code> properties and the value of the <code>StereoPannerNode</code>'s <code>pan</code> property are all set to <code>N</code> as well.</p>
+
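+<p>Stripped to its essentials, the technique looks like the following sketch (assuming an existing <code>AudioContext</code> named <code>context</code> and two hypothetical gain nodes); the complete, working version is built up in the example below:</p>
+
+<pre class="brush: js">// Sketch only; the full example follows
+const constantNode = context.createConstantSource();
+constantNode.offset.value = 0.5;      // the shared value, N
+constantNode.connect(gainNodeA.gain); // every connected AudioParam
+constantNode.connect(gainNodeB.gain); // now follows the offset value
+constantNode.start();</pre>
+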
+<h2 id="Example">Example</h2>
+
+<p>Let's take a look at this technique in action. In this simple example, we create three {{domxref("OscillatorNode")}}s. Two of them have adjustable gain, controlled using a shared input control. The other oscillator has a fixed volume.</p>
+
+<h3 id="HTML">HTML</h3>
+
+<p>The HTML content for this example is primarily a button to toggle the oscillator tones on and off and an {{HTMLElement("input")}} element of type <code>range</code> to control the volume of two of the three oscillators.</p>
+
+<pre class="brush: html">&lt;div class="controls"&gt;
+ &lt;div class="left"&gt;
+ &lt;div id="playButton" class="button"&gt;
+ ▶️
+ &lt;/div&gt;
+ &lt;/div&gt;
+ &lt;div class="right"&gt;
+ &lt;span&gt;Volume: &lt;/span&gt;
+ &lt;input type="range" min="0.0" max="1.0" step="0.01"
+ value="0.8" name="volume" id="volumeControl"&gt;
+ &lt;/div&gt;
+&lt;/div&gt;
+
+&lt;p&gt;Use the button above to start and stop the tones, and the volume control to
+change the volume of the notes E and G in the chord.&lt;/p&gt;</pre>
+
+<div class="hidden">
+<h3 id="CSS">CSS</h3>
+
+<pre class="brush: css">.controls {
+ width: 400px;
+ position: relative;
+ vertical-align: middle;
+ height: 44px;
+}
+
+.button {
+ font-size: 32px;
+ cursor: pointer;
+ user-select: none;
+ -moz-user-select: none;
+ -webkit-user-select: none;
+ -ms-user-select: none;
+ -o-user-select: none;
+}
+
+.right {
+ width: 50%;
+ font: 14px "Open Sans", "Lucida Grande", "Arial", sans-serif;
+ position: absolute;
+ right: 0;
+ display: table-cell;
+ vertical-align: middle;
+}
+
+.right span {
+ vertical-align: middle;
+}
+
+.right input {
+ vertical-align: baseline;
+}
+
+.left {
+ width: 50%;
+ position: absolute;
+ left: 0;
+ display: table-cell;
+ vertical-align: middle;
+}
+
+.left span, .left input {
+ vertical-align: middle;
+}</pre>
+</div>
+
+<h3 id="JavaScript">JavaScript</h3>
+
+<p>Now let's take a look at the JavaScript code, a piece at a time.</p>
+
+<h4 id="Setting_up">Setting up</h4>
+
+<p>Let's start by looking at the global variable initialization.</p>
+
+<pre class="brush: js">let context = null;
+
+let playButton = null;
+let volumeControl = null;
+
+let oscNode1 = null;
+let oscNode2 = null;
+let oscNode3 = null;
+let constantNode = null;
+let gainNode1 = null;
+let gainNode2 = null;
+let gainNode3 = null;
+
+let playing = false;</pre>
+
+<p>These variables are:</p>
+
+<dl>
+ <dt><code>context</code></dt>
+ <dd>The {{domxref("AudioContext")}} in which all the audio nodes live.</dd>
+ <dt><code>playButton</code> and <code>volumeControl</code></dt>
+ <dd>References to the play button and volume control elements.</dd>
+ <dt><code>oscNode1</code>, <code>oscNode2</code>, and <code>oscNode3</code></dt>
+ <dd>The three {{domxref("OscillatorNode")}}s used to generate the chord.</dd>
+ <dt><code>gainNode1</code>, <code>gainNode2</code>, and <code>gainNode3</code></dt>
+ <dd>The three {{domxref("GainNode")}} instances which provide the volume levels for each of the three oscillators. <code>gainNode2</code> and <code>gainNode3</code> will be linked together to have the same, adjustable, value using the {{domxref("ConstantSourceNode")}}.</dd>
+ <dt><code>constantNode</code></dt>
+ <dd>The {{domxref("ConstantSourceNode")}} used to control the values of <code>gainNode2</code> and <code>gainNode3</code> together.</dd>
+ <dt><code>playing</code></dt>
+ <dd>A {{jsxref("Boolean")}} that we'll use to keep track of whether or not we're currently playing the tones.</dd>
+</dl>
+
+<p>Now let's look at the <code>setup()</code> function, which is our handler for the window's {{event("load")}} event; it handles all the initialization tasks that require the DOM to be in place.</p>
+
+<pre class="brush: js">function setup() {
+ context = new (window.AudioContext || window.webkitAudioContext)();
+
+ playButton = document.querySelector("#playButton");
+ volumeControl = document.querySelector("#volumeControl");
+
+ playButton.addEventListener("click", togglePlay, false);
+ volumeControl.addEventListener("input", changeVolume, false);
+
+ gainNode1 = context.createGain();
+ gainNode1.gain.value = 0.5;
+
+ gainNode2 = context.createGain();
+ gainNode3 = context.createGain();
+ gainNode2.gain.value = gainNode1.gain.value;
+ gainNode3.gain.value = gainNode1.gain.value;
+ volumeControl.value = gainNode1.gain.value;
+
+ constantNode = context.createConstantSource();
+ constantNode.connect(gainNode2.gain);
+ constantNode.connect(gainNode3.gain);
+ constantNode.start();
+
+ gainNode1.connect(context.destination);
+ gainNode2.connect(context.destination);
+ gainNode3.connect(context.destination);
+}
+
+window.addEventListener("load", setup, false);
+</pre>
+
+<p>First, we get access to the window's {{domxref("AudioContext")}}, stashing the reference in <code>context</code>. Then we get references to the control widgets, setting <code>playButton</code> to reference the play button and <code>volumeControl</code> to reference the slider control that the user will use to adjust the gain on the linked pair of oscillators.</p>
+
+<p>Then we assign a handler for the play button's {{event("click")}} event (see {{anch("Toggling the oscillators on and off")}} for more on the <code>togglePlay()</code> method), and for the volume slider's {{event("input")}} event (see {{anch("Controlling the linked oscillators")}} to see the very short <code>changeVolume()</code> method).</p>
+
+<p>Next, the {{domxref("GainNode")}} <code>gainNode1</code> is created to handle the volume for the non-linked oscillator (<code>oscNode1</code>). We set that gain to 0.5. We also create <code>gainNode2</code> and <code>gainNode3</code>, setting their values to match <code>gainNode1</code>, then set the value of the volume slider to the same value, so it is synchronized with the gain level it controls.</p>
+
+<p>Once all the gain nodes are created, we create the {{domxref("ConstantSourceNode")}}, <code>constantNode</code>. We connect its output to the <code>gain</code> {{domxref("AudioParam")}} on both <code>gainNode2</code> and <code>gainNode3</code>, and we start the constant node running by calling its {{domxref("AudioScheduledSourceNode/start", "start()")}} method; now it's sending the value 0.5 to the two gain nodes' values, and any change to {{domxref("ConstantSourceNode.offset", "constantNode.offset")}} will automatically set the gain of both <code>gainNode2</code> and <code>gainNode3</code> (affecting their audio inputs as expected).</p>
+
+<p>Finally, we connect all the gain nodes to the {{domxref("AudioContext")}}'s {{domxref("BaseAudioContext/destination", "destination")}}, so that any sound delivered to the gain nodes will reach the output, whether that output be speakers, headphones, a recording stream, or any other destination type.</p>
+
+<p>After setting the window's {{event("load")}} event handler to be the <code>setup()</code> function, the stage is set. Let's see how the action plays out.</p>
+
+<h4 id="Toggling_the_oscillators_on_and_off">Toggling the oscillators on and off</h4>
+
+<p>Because {{domxref("OscillatorNode")}} doesn't support the notion of being in a paused state, we have to simulate it by terminating the oscillators and starting them again when the play button is clicked again to toggle them back on. Let's look at the code.</p>
+
+<pre class="brush: js">function togglePlay(event) {
+ if (playing) {
+ playButton.textContent = "▶️";
+ stopOscillators();
+ } else {
+ playButton.textContent = "⏸";
+ startOscillators();
+ }
+}</pre>
+
+<p>If the <code>playing</code> variable indicates we're already playing the oscillators, we change the <code>playButton</code>'s content to be the Unicode character "right-pointing triangle" (▶️) and call <code>stopOscillators()</code> to shut down the oscillators. See {{anch("Stopping the oscillators")}} below for that code.</p>
+
+<p>If <code>playing</code> is false, indicating that we're currently paused, we change the play button's content to be the Unicode character "pause symbol" (⏸) and call <code>startOscillators()</code> to start the oscillators playing their tones. That code is covered under {{anch("Starting the oscillators")}} below.</p>
+
+<h4 id="Controlling_the_linked_oscillators">Controlling the linked oscillators</h4>
+
+<p>The <code>changeVolume()</code> function—the event handler for the slider control for the gain on the linked oscillator pair—looks like this:</p>
+
+<pre class="brush: js">function changeVolume(event) {
+ constantNode.offset.value = volumeControl.value;
+}</pre>
+
+<p>That simple function controls the gain on both nodes. All we have to do is set the value of the {{domxref("ConstantSourceNode")}}'s {{domxref("ConstantSourceNode.offset", "offset")}} parameter. That value becomes the node's constant output value, which is fed into all of its outputs, which are, as set above, <code>gainNode2</code> and <code>gainNode3</code>.</p>
+
+<p>While this is an extremely simple example, imagine having a 32-oscillator synthesizer with multiple linked parameters in play across a number of patched nodes. Being able to adjust them all with a single operation is invaluable for both code size and performance.</p>
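+
+<p>As a rough sketch (not part of the example above, and using hypothetical variable names), a single {{domxref("ConstantSourceNode")}} could drive the level of an arbitrary number of voices at once:</p>
+
+<pre class="brush: js">// Hypothetical sketch: one shared control node feeding many gain parameters
+const sharedLevel = context.createConstantSource();
+sharedLevel.offset.value = 0.25;
+
+const voiceGains = [];
+
+for (let i = 0; i &lt; 32; i++) {
+  const voiceGain = context.createGain();
+  voiceGain.gain.value = 0;               // the level comes entirely from sharedLevel
+  sharedLevel.connect(voiceGain.gain);    // link this voice to the shared control
+  voiceGain.connect(context.destination);
+  voiceGains.push(voiceGain);
+}
+
+sharedLevel.start();
+
+// Later, a single assignment adjusts all 32 voices at once:
+sharedLevel.offset.value = 0.1;
+</pre>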
+
+<h4 id="Starting_the_oscillators">Starting the oscillators</h4>
+
+<p>When the user clicks the play/pause toggle button while the oscillators aren't playing, the <code>startOscillators()</code> function gets called.</p>
+
+<pre class="brush: js">function startOscillators() {
+ oscNode1 = context.createOscillator();
+ oscNode1.type = "sine";
+ oscNode1.frequency.value = 261.625565300598634; // middle C
+ oscNode1.connect(gainNode1);
+
+ oscNode2 = context.createOscillator();
+ oscNode2.type = "sine";
+ oscNode2.frequency.value = 329.627556912869929; // E
+ oscNode2.connect(gainNode2);
+
+ oscNode3 = context.createOscillator();
+ oscNode3.type = "sine";
+ oscNode3.frequency.value = 391.995435981749294; // G
+ oscNode3.connect(gainNode3);
+
+ oscNode1.start();
+ oscNode2.start();
+ oscNode3.start();
+
+ playing = true;
+}</pre>
+
+<p>Each of the three oscillators is set up the same way:</p>
+
+<ol>
+ <li>Create the {{domxref("OscillatorNode")}} by calling {{domxref("BaseAudioContext.createOscillator")}}.</li>
+ <li>Set the oscillator's type to <code>"sine"</code> to use a sine wave as the audio waveform.</li>
+ <li>Set the oscillator's frequency to the desired value; in this case, <code>oscNode1</code> is set to a middle C, while <code>oscNode2</code> and <code>oscNode3</code> round out the chord by playing the E and G notes.</li>
+ <li>Connect the new oscillator to the corresponding gain node.</li>
+</ol>
+
+<p>Once all three oscillators have been created, they're started by calling each one's {{domxref("AudioScheduledSourceNode.start", "start()")}} method in turn, and <code>playing</code> is set to <code>true</code> to track that the tones are playing.</p>
+
+<h4 id="Stopping_the_oscillators">Stopping the oscillators</h4>
+
+<p>Stopping the oscillators when the user toggles the play state to pause the tones is as simple as stopping each node.</p>
+
+<pre class="brush: js">function stopOscillators() {
+ oscNode1.stop();
+ oscNode2.stop();
+ oscNode3.stop();
+ playing = false;
+}</pre>
+
+<p>Each node is stopped by calling its {{domxref("AudioScheduledSourceNode.stop", "stop()")}} method, then <code>playing</code> is set to <code>false</code>.</p>
+
+<h3 id="Result">Result</h3>
+
+<p>{{ EmbedLiveSample('Example', 600, 200) }}</p>
+
+<h2 id="See_also">See also</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Simple_synth">Simple synth keyboard</a> (example)</li>
+ <li>{{domxref("OscillatorNode")}}</li>
+ <li>{{domxref("ConstantSourceNode")}}</li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/index.html b/files/ko/web/api/web_audio_api/index.html
index a6f2a443d1..1ccd2526b3 100644
--- a/files/ko/web/api/web_audio_api/index.html
+++ b/files/ko/web/api/web_audio_api/index.html
@@ -3,11 +3,11 @@ title: Web Audio API
slug: Web/API/Web_Audio_API
translation_of: Web/API/Web_Audio_API
---
-<div>
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
<p>Web Audio API는 웹에서 오디오를 제어하기 위한 강력하고 다양한 기능을 제공합니다. Web Audio API를 이용하면 오디오 소스를 선택할 수 있도록 하거나, 오디오에 이펙트를 추가하거나, 오디오를 시각화하거나, 패닝과 같은 공간 이펙트를 적용시키는 등의 작업이 가능합니다.</p>
-</div>
-<h2 id="Web_audio의_개념과_사용법">Web audio의 개념과 사용법</h2>
+<h2 id="Web_audio_concepts_and_usage">Web audio의 개념과 사용법</h2>
<p>Web Audio API는 <strong>오디오 컨텍스트</strong> 내부의 오디오 조작을 핸들링하는 것을 포함하며, <strong>모듈러 라우팅</strong>을 허용하도록 설계되어 있습니다. 기본적인 오디오 연산은 <strong>오디오 노드</strong>를 통해 수행되며, <strong>오디오 노드</strong>는 서로 연결되어 <strong>오디오 라우팅 그래프</strong>를 형성합니다. 서로 다른 타입의 채널 레이아웃을 포함한 다수의 오디오 소스는 단일 컨텍스트 내에서도 지원됩니다. 이 모듈식 설계는 역동적이고 복합적인 오디오 기능 생성을 위한 유연성을 제공합니다.</p>
@@ -18,24 +18,24 @@ translation_of: Web/API/Web_Audio_API
<p>웹 오디오의 간단하고 일반적인 작업 흐름은 다음과 같습니다 :</p>
<ol>
- <li>오디오 컨텍스트를 생성합니다.</li>
- <li>컨텍스트 내에 소스를 생성합니다.(ex - &lt;audio&gt;, 발진기, 스트림)</li>
- <li>이펙트 노드를 생성합니다. (ex - 잔향 효과,  바이쿼드 필터, 패너, 컴프레서 등)</li>
- <li>오디오의 최종 목적지를 선택합니다. (ex - 시스템 스피커)</li>
- <li>사운드를 이펙트에 연결하고, 이펙트를 목적지에 연결합니다.</li>
+ <li>오디오 컨텍스트를 생성합니다.</li>
+ <li>컨텍스트 내에 소스를 생성합니다. (ex - &lt;audio&gt;, 발진기, 스트림)</li>
+ <li>이펙트 노드를 생성합니다. (ex - 잔향 효과, 바이쿼드 필터, 패너, 컴프레서 등)</li>
+ <li>오디오의 최종 목적지를 선택합니다. (ex - 시스템 스피커)</li>
+ <li>사운드를 이펙트에 연결하고, 이펙트를 목적지에 연결합니다.</li>
</ol>
-<p><img alt="A simple box diagram with an outer box labeled Audio context, and three inner boxes labeled Sources, Effects and Destination. The three inner boxes have arrow between them pointing from left to right, indicating the flow of audio information." src="https://mdn.mozillademos.org/files/12241/webaudioAPI_en.svg" style="display: block; height: 143px; margin: 0px auto; width: 643px;"></p>
+<p><img alt="오디오 컨텍스트라고 쓰여진 외부 박스와, 소스, 이펙트, 목적지라고 쓰여진 세 개의 내부 박스를 가진 간단한 박스 다이어그램. 세 개의 내부 박스는 사이에 좌에서 우를 가리키는 화살표를 가지고 있는데, 이는 오디오 정보의 흐름을 나타냅니다." src="audio-context_.png"></p>
<p>높은 정확도와 적은 지연시간을 가진 타이밍 계산 덕분에, 개발자는 높은 샘플 레이트에서도 특정 샘플을 대상으로 이벤트에 정확하게 응답하는 코드를 작성할 수 있습니다. 따라서 드럼 머신이나 시퀀서 등의 어플리케이션은 충분히 구현 가능합니다.</p>
<p>Web Audio API는 오디오가 어떻게 <em>공간화</em>될지 컨트롤할 수 있도록 합니다. <em>소스-리스너 모델</em>을 기반으로 하는 시스템을 사용하면 <em>패닝 모델</em>과 <em>거리-유도 감쇄</em> 혹은 움직이는 소스(혹은 움직이는 청자)를 통해 유발된 <em>도플러 시프트</em> 컨트롤이 가능합니다.</p>
<div class="note">
-<p><a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a> 아티클에서 Web Audio API 이론에 대한 더 자세한 내용을 읽을 수 있습니다.</p>
+<p><a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Web Audio API의 기본 개념</a> 문서에서 Web Audio API 이론에 대한 더 자세한 내용을 읽을 수 있습니다.</p>
</div>
-<h2 id="Web_Audio_API_타겟_사용자층">Web Audio API 타겟 사용자층</h2>
+<h2 id="Web_Audio_API_target_audience">Web Audio API 타겟 사용자층</h2>
<p>오디오나 음악 용어에 익숙하지 않은 사람은 Web Audio API가 막막하게 느껴질 수 있습니다. 또한 Web Audio API가 굉장히 다양한 기능을 제공하는 만큼 개발자로서는 시작하기 어렵게 느껴질 수 있습니다.</p>
@@ -47,74 +47,80 @@ translation_of: Web/API/Web_Audio_API
<p>코드를 작성하는 것은 카드 게임과 비슷합니다. 규칙을 배우고, 플레이합니다. 모르겠는 규칙은 다시 공부하고, 다시 새로운 판을 합니다. 마찬가지로, 이 문서와 첫 튜토리얼에서 설명하는 것만으로 부족하다고 느끼신다면 첫 튜토리얼의 내용을 보충하는 동시에 여러 테크닉을 이용하여 스텝 시퀀서를 만드는 법을 설명하는 <a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques">상급자용 튜토리얼</a>을 읽어보시는 것을 추천합니다.</p>
-<p>그 외에도 이 페이지의 사이드바에서 API의 모든 기능을 설명하는 참고자료와 다양한 튜토리얼을 찾아 보실 수 있습니다.</p>
+<p>그 외에도 이 페이지의 사이드바에서 API의 모든 기능을 설명하는 참고자료와 다양한 자습서를 찾아 보실 수 있습니다.</p>
<p>만약에 프로그래밍보다는 음악이 친숙하고, 음악 이론에 익숙하며, 악기를 만들고 싶으시다면 바로 상급자용 튜토리얼부터 시작하여 여러가지를 만들기 시작하시면 됩니다. 위의 튜토리얼은 음표를 배치하는 법, 저주파 발진기 등 맞춤형 Oscillator(발진기)와 Envelope를 설계하는 법 등을 설명하고 있으니, 이를 읽으며 사이드바의 자료를 참고하시면 될 것입니다.</p>
<p>프로그래밍에 전혀 익숙하지 않으시다면 자바스크립트 기초 튜토리얼을 먼저 읽고 이 문서를 다시 읽으시는 게 나을 수도 있습니다. 모질라의 <a href="/en-US/docs/Learn/JavaScript">자바스크립트 기초</a>만큼 좋은 자료도 몇 없죠.</p>
-<h2 id="Web_Audio_API_Interfaces">Web Audio API Interfaces</h2>
+<h2 id="Web_Audio_API_Interfaces">Web Audio API 인터페이스</h2>
<p>Web Audio API는 다양한 인터페이스와 연관 이벤트를 가지고 있으며, 이는 9가지의 기능적 범주로 나뉩니다.</p>
-<h3 id="일반_오디오_그래프_정의">일반 오디오 그래프 정의</h3>
+<h3 id="General_audio_graph_definition">일반 오디오 그래프 정의</h3>
<p>Web Audio API 사용범위 내에서 오디오 그래프를 형성하는 일반적인 컨테이너와 정의입니다.</p>
<dl>
- <dt>{{domxref("AudioContext")}}</dt>
- <dd><strong><code>AudioContext</code></strong> 인터페이스는 오디오 모듈이 서로 연결되어 구성된 오디오 프로세싱 그래프를 표현하며, 각각의 그래프는 {{domxref("AudioNode")}}로 표현됩니다. <code>AudioContext</code>는 자신이 가지고 있는 노드의 생성과 오디오 프로세싱 혹은 디코딩의 실행을 제어합니다. 어떤 작업이든 시작하기 전에 <code>AudioContext</code>를 생성해야 합니다. 모든 작업은 컨텍스트 내에서 이루어집니다.</dd>
- <dt>{{domxref("AudioNode")}}</dt>
- <dd><strong><code>AudioNode</code></strong><strong> </strong>인터페이스는 오디오 소스({{HTMLElement("audio")}}나 {{HTMLElement("video")}}엘리먼트), 오디오 목적지, 중간 처리 모듈({{domxref("BiquadFilterNode")}}이나 {{domxref("GainNode")}})과 같은 오디오 처리 모듈을 나타냅니다.</dd>
- <dt>{{domxref("AudioParam")}}</dt>
- <dd><strong><code>AudioParam</code></strong> 인터페이스는 {{domxref("AudioNode")}}중 하나와 같은 오디오 관련 파라미터를 나타냅니다. 이는 특정 값 또는 값 변경으로 세팅되거나, 특정 시간에 발생하고 특정 패턴을 따르도록 스케쥴링할 수 있습니다.</dd>
- <dt>The {{event("ended")}} event</dt>
- <dd>
- <p><strong><code>ended</code></strong> 이벤트는 미디어의 끝에 도달하여 재생이 정지되면 호출됩니다.</p>
- </dd>
+ <dt>{{domxref("AudioContext")}}</dt>
+ <dd><strong><code>AudioContext</code></strong> 인터페이스는 오디오 모듈이 서로 연결되어 구성된 오디오 프로세싱 그래프를 표현하며, 각각의 그래프는 {{domxref("AudioNode")}}로 표현됩니다. <code>AudioContext</code>는 자신이 가지고 있는 노드의 생성과 오디오 프로세싱 혹은 디코딩의 실행을 제어합니다. 어떤 작업이든 시작하기 전에 <code>AudioContext</code>를 생성해야 합니다. 모든 작업은 컨텍스트 내에서 이루어집니다.</dd>
+ <dt>{{domxref("AudioNode")}}</dt>
+ <dd><strong><code>AudioNode</code></strong> 인터페이스는 오디오 소스({{HTMLElement("audio")}}나 {{HTMLElement("video")}} 요소), 오디오 목적지, 중간 처리 모듈({{domxref("BiquadFilterNode")}}이나 {{domxref("GainNode")}})과 같은 오디오 처리 모듈을 나타냅니다.</dd>
+ <dt>{{domxref("AudioParam")}}</dt>
+ <dd><strong><code>AudioParam</code></strong> 인터페이스는 {{domxref("AudioNode")}}중 하나와 같은 오디오 관련 파라미터를 나타냅니다. 이는 특정 값 또는 값 변경으로 세팅되거나, 특정 시간에 발생하고 특정 패턴을 따르도록 스케쥴링할 수 있습니다.</dd>
+ <dt>{{domxref("AudioParamMap")}}</dt>
+ <dd>{{domxref("AudioParam")}} 인터페이스 그룹에 maplike 인터페이스를 제공하는데, 이는 <code>forEach()</code>, <code>get()</code>, <code>has()</code>, <code>keys()</code>, <code>values()</code> 메서드와 <code>size</code> 속성이 제공된다는 것을 의미합니다.</dd>
+ <dt>{{domxref("BaseAudioContext")}}</dt>
+ <dd><strong><code>BaseAudioContext</code></strong> 인터페이스는 온라인과 오프라인 오디오 프로세싱 그래프에 대한 기본 정의로서 동작하는데, 이는 각각 {{domxref("AudioContext")}} 와 {{domxref("OfflineAudioContext")}}로 대표됩니다. <code>BaseAudioContext</code>는 직접 쓰여질 수 없습니다 — 이 두 가지 상속되는 인터페이스 중 하나를 통해 이것의 기능을 사용할 수 있습니다.</dd>
+ <dt>The {{event("ended")}} event</dt>
+ <dd><p><strong><code>ended</code></strong> 이벤트는 미디어의 끝에 도달하여 재생이 정지되면 호출됩니다.</p></dd>
</dl>
-<h3 id="오디오_소스_정의하기">오디오 소스 정의하기</h3>
+<h3 id="Defining_audio_sources">오디오 소스 정의하기</h3>
<p>Web Audio API에서 사용하기 위한 오디오 소스를 정의하는 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("OscillatorNode")}}</dt>
- <dd><strong><code style="font-size: 14px;">OscillatorNode</code></strong> 인터페이스는 삼각파 또는 사인파와 같은 주기적 파형을 나타냅니다. 이것은 주어진 주파수의 파동을 생성하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.</dd>
- <dt>{{domxref("AudioBuffer")}}</dt>
- <dd><strong><code>AudioBuffer</code></strong> 인터페이스는 {{ domxref("AudioContext.decodeAudioData()") }}메소드를 사용해 오디오 파일에서 생성되거나 {{ domxref("AudioContext.createBuffer()") }}를 사용해 로우 데이터로부터 생성된 메모리상에 적재되는 짧은 오디오 자원을 나타냅니다. 이 형식으로 디코딩된 오디오는 {{ domxref("AudioBufferSourceNode") }}에 삽입될 수 있습니다.</dd>
- <dt>{{domxref("AudioBufferSourceNode")}}</dt>
- <dd><strong><code>AudioBufferSourceNode</code></strong> 인터페이스는 {{domxref("AudioBuffer")}}에 저장된 메모리상의 오디오 데이터로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
- <dt>{{domxref("MediaElementAudioSourceNode")}}</dt>
- <dd><code><strong>MediaElementAudio</strong></code><strong><code>SourceNode</code></strong> 인터페이스는 {{ htmlelement("audio") }} 나 {{ htmlelement("video") }} HTML 엘리먼트로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
- <dt>{{domxref("MediaStreamAudioSourceNode")}}</dt>
- <dd><code><strong>MediaStreamAudio</strong></code><strong><code>SourceNode</code></strong> 인터페이스는 <a href="/en-US/docs/WebRTC" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}}(웹캡, 마이크 혹은 원격 컴퓨터에서 전송된 스트림)으로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("AudioScheduledSourceNode")}}</dt>
+ <dd><strong><code>AudioScheduledSourceNode</code></strong>는 오디오 소스 노드 인터페이스의 몇 가지 유형에 대한 부모 인터페이스입니다. 이것은 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("OscillatorNode")}}</dt>
+ <dd><strong><code>OscillatorNode</code></strong> 인터페이스는 삼각파 또는 사인파와 같은 주기적 파형을 나타냅니다. 이것은 주어진 주파수의 파동을 생성하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.</dd>
+ <dt>{{domxref("AudioBuffer")}}</dt>
+ <dd><strong><code>AudioBuffer</code></strong> 인터페이스는 {{ domxref("AudioContext.decodeAudioData()") }}메소드를 사용해 오디오 파일에서 생성되거나 {{ domxref("AudioContext.createBuffer()") }}를 사용해 로우 데이터로부터 생성된 메모리상에 적재되는 짧은 오디오 자원을 나타냅니다. 이 형식으로 디코딩된 오디오는 {{ domxref("AudioBufferSourceNode") }}에 삽입될 수 있습니다.</dd>
+ <dt>{{domxref("AudioBufferSourceNode")}}</dt>
+ <dd><strong><code>AudioBufferSourceNode</code></strong> 인터페이스는 {{domxref("AudioBuffer")}}에 저장된 메모리상의 오디오 데이터로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("MediaElementAudioSourceNode")}}</dt>
+ <dd><strong><code>MediaElementAudioSourceNode</code></strong> 인터페이스는 {{ htmlelement("audio") }} 나 {{ htmlelement("video") }} HTML 엘리먼트로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("MediaStreamAudioSourceNode")}}</dt>
+ <dd><strong><code>MediaStreamAudioSourceNode</code></strong> 인터페이스는 <a href="/en-US/docs/Web/API/WebRTC_API" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}}(웹캠, 마이크 혹은 원격 컴퓨터에서 전송된 스트림)으로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("MediaStreamTrackAudioSourceNode")}}</dt>
+ <dd>{{domxref("MediaStreamTrackAudioSourceNode")}} 유형의 노드는 데이터가 {{domxref("MediaStreamTrack")}}로부터 오는 오디오 소스를 표현합니다. 이 노드를 생성하기 위해 {{domxref("AudioContext.createMediaStreamTrackSource", "createMediaStreamTrackSource()")}} 메서드를 사용하여 이 노드를 생성할 때, 여러분은 어떤 트랙을 사용할 지 명시합니다. 이것은 <code>MediaStreamAudioSourceNode</code>보다 더 많은 제어를 제공합니다.</dd>
</dl>
-<h3 id="오디오_이펙트_필터_정의하기">오디오 이펙트 필터 정의하기</h3>
+<h3 id="Defining_audio_effects_filters">오디오 이펙트 필터 정의하기</h3>
<p>오디오 소스에 적용할 이펙트를 정의하는 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("BiquadFilterNode")}}</dt>
- <dd><strong><code>BiquadFilterNode</code></strong> 인터페이스는 간단한 하위 필터를 나타냅니다. 이것은 여러 종류의 필터나 톤 제어 장치 혹은 그래픽 이퀄라이저를 나타낼 수 있는 {{domxref("AudioNode")}}입니다. <code>BiquadFilterNode</code>는 항상 단 하나의 입력과 출력만을 가집니다. </dd>
- <dt>{{domxref("ConvolverNode")}}</dt>
- <dd><code><strong>Convolver</strong></code><strong><code>Node</code></strong><span style="line-height: 1.5;"> 인터페이스는 주어진 {{domxref("AudioBuffer")}}에 선형 콘볼루션을 수행하는 {{domxref("AudioNode")}}이며, 리버브 이펙트를 얻기 위해 자주 사용됩니다. </span></dd>
- <dt>{{domxref("DelayNode")}}</dt>
- <dd><strong><code>DelayNode</code></strong> 인터페이스는 지연선을 나타냅니다. 지연선은 입력 데이터가 출력에 전달되기까지의 사이에 딜레이를 발생시키는 {{domxref("AudioNode")}} 오디오 처리 모듈입니다.</dd>
- <dt>{{domxref("DynamicsCompressorNode")}}</dt>
- <dd><strong><code>DynamicsCompressorNode</code></strong> 인터페이스는 압축 이펙트를 제공합니다, 이는 신호의 가장 큰 부분의 볼륨을 낮추어 여러 사운드를 동시에 재생할 때 발생할 수 있는 클리핑 및 왜곡을 방지합니다.</dd>
- <dt>{{domxref("GainNode")}}</dt>
- <dd><strong><code>GainNode</code></strong> 인터페이스는 음량의 변경을 나타냅니다. 이는 출력에 전달되기 전의 입력 데이터에 주어진 음량 조정을 적용하기 위한 {{domxref("AudioNode")}} 오디오 모듈입니다.</dd>
- <dt>{{domxref("StereoPannerNode")}}</dt>
- <dd><code><strong>StereoPannerNode</strong></code> 인터페이스는 오디오 스트림을 좌우로 편향시키는데 사용될 수 있는 간단한 스테레오 패너 노드를 나타냅니다.</dd>
- <dt>{{domxref("WaveShaperNode")}}</dt>
- <dd><strong><code>WaveShaperNode</code></strong> 인터페이스는 비선형 왜곡을 나타냅니다. 이는 곡선을 사용하여 신호의 파형 형성에 왜곡을 적용하는 {{domxref("AudioNode")}}입니다. 분명한 왜곡 이펙트 외에도 신호에 따뜻한 느낌을 더하는데 자주 사용됩니다.</dd>
- <dt>{{domxref("PeriodicWave")}}</dt>
- <dd>{{domxref("OscillatorNode")}}의 출력을 형성하는데 사용될 수 있는 주기적 파형을 설명합니다.</dd>
+ <dt>{{domxref("BiquadFilterNode")}}</dt>
+ <dd><strong><code>BiquadFilterNode</code></strong> 인터페이스는 간단한 하위 필터를 나타냅니다. 이것은 여러 종류의 필터나 톤 제어 장치 혹은 그래픽 이퀄라이저를 나타낼 수 있는 {{domxref("AudioNode")}}입니다. <code>BiquadFilterNode</code>는 항상 단 하나의 입력과 출력만을 가집니다. </dd>
+ <dt>{{domxref("ConvolverNode")}}</dt>
+ <dd><strong><code>ConvolverNode</code></strong> 인터페이스는 주어진 {{domxref("AudioBuffer")}}에 선형 콘볼루션을 수행하는 {{domxref("AudioNode")}}이며, 리버브 이펙트를 얻기 위해 자주 사용됩니다.</dd>
+ <dt>{{domxref("DelayNode")}}</dt>
+ <dd><strong><code>DelayNode</code></strong> 인터페이스는 지연선을 나타냅니다. 지연선은 입력 데이터가 출력에 전달되기까지의 사이에 딜레이를 발생시키는 {{domxref("AudioNode")}} 오디오 처리 모듈입니다.</dd>
+ <dt>{{domxref("DynamicsCompressorNode")}}</dt>
+ <dd><strong><code>DynamicsCompressorNode</code></strong> 인터페이스는 압축 이펙트를 제공합니다, 이는 신호의 가장 큰 부분의 볼륨을 낮추어 여러 사운드를 동시에 재생할 때 발생할 수 있는 클리핑 및 왜곡을 방지합니다.</dd>
+ <dt>{{domxref("GainNode")}}</dt>
+ <dd><strong><code>GainNode</code></strong> 인터페이스는 음량의 변경을 나타냅니다. 이는 출력에 전달되기 전의 입력 데이터에 주어진 음량 조정을 적용하기 위한 {{domxref("AudioNode")}} 오디오 모듈입니다.</dd>
+ <dt>{{domxref("WaveShaperNode")}}</dt>
+ <dd><strong><code>WaveShaperNode</code></strong> 인터페이스는 비선형 왜곡을 나타냅니다. 이는 곡선을 사용하여 신호의 파형 형성에 왜곡을 적용하는 {{domxref("AudioNode")}}입니다. 분명한 왜곡 이펙트 외에도 신호에 따뜻한 느낌을 더하는데 자주 사용됩니다.</dd>
+ <dt>{{domxref("PeriodicWave")}}</dt>
+ <dd>{{domxref("OscillatorNode")}}의 출력을 형성하는데 사용될 수 있는 주기적 파형을 설명합니다.</dd>
+ <dt>{{domxref("IIRFilterNode")}}</dt>
+ <dd>일반적인 <strong><a class="external external-icon" href="https://en.wikipedia.org/wiki/infinite%20impulse%20response" title="infinite impulse response">infinite impulse response</a></strong> (IIR) 필터를 구현합니다; 이 유형의 필터는 음색 제어 장치와 그래픽 이퀄라이저를 구현하는 데 사용될 수 있습니다.</dd>
</dl>
-<h3 id="오디오_목적지_정의하기">오디오 목적지 정의하기</h3>
+<h3 id="Defining_audio_destinations">오디오 목적지 정의하기</h3>
<p>처리된 오디오를 어디에 출력할지 정의하는 인터페이스입니다.</p>
@@ -122,347 +128,152 @@ translation_of: Web/API/Web_Audio_API
<dt>{{domxref("AudioDestinationNode")}}</dt>
<dd><strong><code>AudioDestinationNode</code></strong> 인터페이스는 주어진 컨텍스트 내의 오디오 소스의 최종 목적지를 나타냅니다. 주로 기기의 스피커로 출력할 때 사용됩니다.</dd>
<dt>{{domxref("MediaStreamAudioDestinationNode")}}</dt>
- <dd><code><strong>MediaStreamAudio</strong></code><strong><code>DestinationNode</code></strong> 인터페이스는 단일 <code>AudioMediaStreamTrack</code> 을 가진 <a href="/en-US/docs/WebRTC" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}}로 구성된 오디오 목적지를 나타내며, 이는 {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}에서 얻은 {{domxref("MediaStream")}}과 비슷한 방식으로 사용할 수 있습니다. 이것은 오디오 목적지 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dd><strong><code>MediaStreamAudioDestinationNode</code></strong> 인터페이스는 단일 <code>AudioMediaStreamTrack</code>을 가진 <a href="/en-US/docs/Web/API/WebRTC_API" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}}로 구성된 오디오 목적지를 나타내며, 이는 {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}에서 얻은 {{domxref("MediaStream")}}과 비슷한 방식으로 사용할 수 있습니다. 이것은 오디오 목적지 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
</dl>
-<h3 id="데이터_분석_및_시각화">데이터 분석 및 시각화</h3>
+<h3 id="Data_analysis_and_visualization">데이터 분석 및 시각화</h3>
<p>오디오에서 재생시간이나 주파수 등의 데이터를 추출하기 위한 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("AnalyserNode")}}</dt>
- <dd><strong><code>AnalyserNode</code></strong> 인터페이스는 데이터를 분석하고 시각화하기 위한 실시간 주파수와 시간영역 분석 정보를 제공하는 노드를 나타냅니다.</dd>
+ <dt>{{domxref("AnalyserNode")}}</dt>
+ <dd><strong><code>AnalyserNode</code></strong> 인터페이스는 데이터를 분석하고 시각화하기 위한 실시간 주파수와 시간영역 분석 정보를 제공하는 노드를 나타냅니다.</dd>
</dl>
-<h3 id="오디오_채널을_분리하고_병합하기">오디오 채널을 분리하고 병합하기</h3>
+<h3 id="Splitting_and_merging_audio_channels">오디오 채널을 분리하고 병합하기</h3>
<p>오디오 채널들을 분리하거나 병합하기 위한 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("ChannelSplitterNode")}}</dt>
- <dd><code><strong>ChannelSplitterNode</strong></code> 인터페이스는 오디오 소스의 여러 채널을 모노 출력 셋으로 분리합니다.</dd>
- <dt>{{domxref("ChannelMergerNode")}}</dt>
- <dd><code><strong>ChannelMergerNode</strong></code> 인터페이스는 여러 모노 입력을 하나의 출력으로 재결합합니다. 각 입력은 출력의 채널을 채우는데 사용될 것입니다.</dd>
+ <dt>{{domxref("ChannelSplitterNode")}}</dt>
+ <dd><code><strong>ChannelSplitterNode</strong></code> 인터페이스는 오디오 소스의 여러 채널을 모노 출력 셋으로 분리합니다.</dd>
+ <dt>{{domxref("ChannelMergerNode")}}</dt>
+ <dd><code><strong>ChannelMergerNode</strong></code> 인터페이스는 여러 모노 입력을 하나의 출력으로 재결합합니다. 각 입력은 출력의 채널을 채우는데 사용될 것입니다.</dd>
</dl>
-<h3 id="오디오_공간화">오디오 공간화</h3>
+<h3 id="Audio_spatialization">오디오 공간화</h3>
<p>오디오 소스에 오디오 공간화 패닝 이펙트를 추가하는 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("AudioListener")}}</dt>
- <dd><strong><code>AudioListener</code></strong> 인터페이스는 오디오 공간화에 사용되는 오디오 장면을 청취하는 고유한 시청자의 위치와 방향을 나타냅니다.</dd>
- <dt>{{domxref("PannerNode")}}</dt>
- <dd><strong><code>PannerNode</code></strong> 인터페이스는 공간 내의 신호 양식을 나타냅니다. 이것은 자신의 오른손 직교 좌표 내의 포지션과, 속도 벡터를 이용한 움직임과, 방향성 원뿔을 이용한 방향을 서술하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.</dd>
-</dl>
-
-<h3 id="자바스크립트에서_오디오_처리하기">자바스크립트에서 오디오 처리하기</h3>
-
-<p>자바스크립트에서 오디오 데이터를 처리하기 위한 코드를 작성할 수 있습니다. 이렇게 하려면 아래에 나열된 인터페이스와 이벤트를 사용하세요.</p>
-
-<div class="note">
-<p>이것은 Web Audio API 2014년 8월 29일의 스펙입니다. 이 기능은 지원이 중단되고 {{ anch("Audio_Workers") }}로 대체될 예정입니다.</p>
-</div>
-
-<dl>
- <dt>{{domxref("ScriptProcessorNode")}}</dt>
- <dd><strong><code>ScriptProcessorNode</code></strong> 인터페이스는 자바스크립트를 이용한 오디오 생성, 처리, 분석 기능을 제공합니다. 이것은 현재 입력 버퍼와 출력 버퍼, 총 두 개의 버퍼에 연결되는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다. {{domxref("AudioProcessingEvent")}}인터페이스를 구현하는 이벤트는 입력 버퍼에 새로운 데이터가 들어올 때마다 객체로 전달되고, 출력 버퍼가 데이터로 채워지면 이벤트 핸들러가 종료됩니다.</dd>
- <dt>{{event("audioprocess")}} (event)</dt>
- <dd><strong><code>audioprocess</code></strong> 이벤트는 Web Audio API {{domxref("ScriptProcessorNode")}}의 입력 버퍼가 처리될 준비가 되었을 때 발생합니다.</dd>
- <dt>{{domxref("AudioProcessingEvent")}}</dt>
- <dd><a href="/en-US/docs/Web_Audio_API" title="/en-US/docs/Web_Audio_API">Web Audio API</a> <strong><code>AudioProcessingEvent</code></strong> 는 {{domxref("ScriptProcessorNode")}} 입력 버퍼가 처리될 준비가 되었을 때 발생하는 이벤트를 나타냅니다.</dd>
+ <dt>{{domxref("AudioListener")}}</dt>
+ <dd><strong><code>AudioListener</code></strong> 인터페이스는 오디오 공간화에 사용되는 오디오 장면을 청취하는 고유한 시청자의 위치와 방향을 나타냅니다.</dd>
+ <dt>{{domxref("PannerNode")}}</dt>
+ <dd><strong><code>PannerNode</code></strong> 인터페이스는 공간 내의 신호 양식을 나타냅니다. 이것은 자신의 오른손 직교 좌표 내의 포지션과, 속도 벡터를 이용한 움직임과, 방향성 원뿔을 이용한 방향을 서술하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.</dd>
+ <dt>{{domxref("StereoPannerNode")}}</dt>
+ <dd><code><strong>StereoPannerNode</strong></code> 인터페이스는 오디오 스트림을 좌우로 편향시키는데 사용될 수 있는 간단한 스테레오 패너 노드를 나타냅니다.</dd>
</dl>
-<h3 id="오프라인백그라운드_오디오_처리하기">오프라인/백그라운드 오디오 처리하기</h3>
+<h3 id="Audio_processing_in_JavaScript">JavaScript에서의 오디오 프로세싱</h3>
-<p>다음을 이용해 백그라운드(장치의 스피커가 아닌 {{domxref("AudioBuffer")}}으로 렌더링)에서 오디오 그래프를 신속하게 처리/렌더링 할수 있습니다.</p>
+<p>오디오 worklet을 사용하여, 여러분은 JavaScript 또는 <a href="/en-US/docs/WebAssembly">WebAssembly</a>로 작성된 사용자 정의 오디오 노드를 정의할 수 있습니다. 오디오 worklet은 {{domxref("Worklet")}} 인터페이스를 구현하는데, 이는 {{domxref("Worker")}} 인터페이스의 가벼운 버전입니다.</p>
<dl>
- <dt>{{domxref("OfflineAudioContext")}}</dt>
- <dd><strong><code>OfflineAudioContext</code></strong> 인터페이스는 {{domxref("AudioNode")}}로 연결되어 구성된 오디오 프로세싱 그래프를 나타내는 {{domxref("AudioContext")}} 인터페이스입니다. 표준 <strong><code>AudioContext</code></strong> 와 대조적으로, <strong><code>OfflineAudioContext</code></strong> 는 실제로 오디오를 렌더링하지 않고 가능한 빨리 버퍼 내에서 생성합니다. </dd>
- <dt>{{event("complete")}} (event)</dt>
- <dd><strong><code>complete</code></strong> 이벤트는 {{domxref("OfflineAudioContext")}}의 렌더링이 종료될때 발생합니다.</dd>
- <dt>{{domxref("OfflineAudioCompletionEvent")}}</dt>
- <dd><strong><code>OfflineAudioCompletionEvent</code></strong> 이벤트는 {{domxref("OfflineAudioContext")}} 의 처리가 종료될 때 발생하는 이벤트를 나타냅니다. {{event("complete")}} 이벤트는 이 이벤트를 구현합니다.</dd>
+ <dt>{{domxref("AudioWorklet")}}</dt>
+ <dd><code>AudioWorklet</code> 인터페이스는 {{domxref("AudioContext")}} 객체의 {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}}을 통하여 사용 가능하고, 메인 스레드 밖에서 실행될 모듈을 오디오 worklet에 추가할 수 있게 합니다.</dd>
+ <dt>{{domxref("AudioWorkletNode")}}</dt>
+ <dd><code>AudioWorkletNode</code> 인터페이스는 오디오 그래프에 임베드된 {{domxref("AudioNode")}}을 나타내고 해당하는 <code>AudioWorkletProcessor</code>에 메시지를 전달할 수 있습니다.</dd>
+ <dt>{{domxref("AudioWorkletProcessor")}}</dt>
+ <dd><code>AudioWorkletProcessor</code> 인터페이스는 오디오를 직접 생성하거나, 처리하거나, 또는 분석하는 <code>AudioWorkletGlobalScope</code>에서 실행되는 오디오 프로세싱 코드를 나타내고, 해당하는 <code>AudioWorkletNode</code>에 메시지를 전달할 수 있습니다.</dd>
+ <dt>{{domxref("AudioWorkletGlobalScope")}}</dt>
+ <dd><code>AudioWorkletGlobalScope</code> 인터페이스는 <code>WorkletGlobalScope</code>에서 파생된 객체로, 오디오 프로세싱 스크립트가 실행되는 워커 컨텍스트를 나타냅니다. 이것은 메인 스레드가 아닌 worklet 스레드에서 JavaScript를 사용하여 직접적으로 오디오 데이터의 생성, 처리, 분석을 가능하게 하도록 설계되었습니다.</dd>
</dl>
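+
+<p>다음은 오디오 worklet 사용의 간단한 스케치입니다. 모듈 경로(<code>gain-processor.js</code>)와 프로세서 이름(<code>'gain-processor'</code>)은 설명을 위해 가정한 것입니다.</p>
+
+<pre class="brush: js">// 메인 스레드: worklet 모듈을 추가하고, 노드를 오디오 그래프에 연결합니다.
+const context = new AudioContext();
+
+context.audioWorklet.addModule('gain-processor.js').then(() => {
+  const workletNode = new AudioWorkletNode(context, 'gain-processor');
+  workletNode.connect(context.destination);
+});
+</pre>
+
+<p><code>gain-processor.js</code> 모듈 내부에서는 {{domxref("AudioWorkletProcessor")}}를 상속하는 클래스를 정의한 뒤 <code>registerProcessor()</code>로 등록합니다.</p>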
-<h3 id="Audio_Workers" name="Audio_Workers">오디오 워커</h3>
+<h4 id="Obsolete_script_processor_nodes">안 쓰임: 스크립트 프로세서 노드</h4>
-<p>오디오 워커는 <a href="/en-US/docs/Web/Guide/Performance/Using_web_workers">web worker</a> 컨텍스트 내에서 스크립팅된 오디오 처리를 관리하기 위한 기능을 제공하며, 두어가지 인터페이스로 정의되어 있습니다(2014년 8월 29일 새로운 기능이 추가되었습니다). 이는 아직 모든 브라우저에서 구현되지 않았습니다. 구현된 브라우저에서는 <a href="#Audio_processing_via_JavaScript">Audio processing in JavaScript</a>에서 설명된 {{domxref("ScriptProcessorNode")}}를 포함한 다른 기능을 대체합니다.</p>
+<p>오디오 worklet이 정의되기 전에, Web Audio API는 JavaScript 기반의 오디오 프로세싱을 위해 <code>ScriptProcessorNode</code>를 사용했습니다. 이 코드는 메인 스레드에서 실행되기 때문에 성능이 좋지 않았습니다. <code>ScriptProcessorNode</code>는 역사적인 이유로 유지되고 있지만, 이제는 사용이 권장되지 않습니다(deprecated).</p>
<dl>
- <dt>{{domxref("AudioWorkerNode")}}</dt>
- <dd><strong><code>AudioWorkerNode</code></strong> 인터페이스는 워커 쓰레드와 상호작용하여 오디오를 직접 생성, 처리, 분석하는 {{domxref("AudioNode")}}를 나타냅니다. </dd>
- <dt>{{domxref("AudioWorkerGlobalScope")}}</dt>
- <dd><strong><code>AudioWorkerGlobalScope</code></strong> 인터페이스는 <strong><code>DedicatedWorkerGlobalScope</code></strong> 에서 파생된 오디오 처리 스크립트가 실행되는 워커 컨텍스트를 나타내는 객체입니다. 이것은 워커 쓰레드 내에서 자바스크립트를 이용하여 직접 오디오 데이터를 생성, 처리, 분석할 수 있도록 설계되었습니다.</dd>
- <dt>{{domxref("AudioProcessEvent")}}</dt>
- <dd>이것은 처리를 수행하기 위해 {{domxref("AudioWorkerGlobalScope")}} 오브젝트로 전달되는 <code>Event</code> 오브젝트입니다.</dd>
+ <dt>{{domxref("ScriptProcessorNode")}} {{deprecated_inline}}</dt>
+ <dd><strong><code>ScriptProcessorNode</code></strong> 인터페이스는 자바스크립트를 이용한 오디오 생성, 처리, 분석 기능을 제공합니다. 이것은 현재 입력 버퍼와 출력 버퍼, 총 두 개의 버퍼에 연결되는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다. {{domxref("AudioProcessingEvent")}} 인터페이스를 구현하는 이벤트는 입력 버퍼에 새로운 데이터가 들어올 때마다 객체로 전달되고, 출력 버퍼가 데이터로 채워지면 이벤트 핸들러가 종료됩니다.</dd>
+ <dt>{{event("audioprocess")}} (event) {{deprecated_inline}}</dt>
+ <dd><code>audioprocess</code> 이벤트는 Web Audio API {{domxref("ScriptProcessorNode")}}의 입력 버퍼가 처리될 준비가 되었을 때 발생합니다.</dd>
+ <dt>{{domxref("AudioProcessingEvent")}} {{deprecated_inline}}</dt>
+ <dd><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a> <code>AudioProcessingEvent</code>는 {{domxref("ScriptProcessorNode")}} 입력 버퍼가 처리될 준비가 되었을 때 발생하는 이벤트를 나타냅니다.</dd>
</dl>
-<h2 id="Example" name="Example">Obsolete interfaces</h2>
+<h3 id="Offlinebackground_audio_processing">오프라인/백그라운드 오디오 처리하기</h3>
-<p>The following interfaces were defined in old versions of the Web Audio API spec, but are now obsolete and have been replaced by other interfaces.</p>
+<p>다음을 이용해 백그라운드(장치의 스피커가 아닌 {{domxref("AudioBuffer")}}으로 렌더링)에서 오디오 그래프를 신속하게 처리/렌더링 할수 있습니다.</p>
<dl>
- <dt>{{domxref("JavaScriptNode")}}</dt>
- <dd>Used for direct audio processing via JavaScript. This interface is obsolete, and has been replaced by {{domxref("ScriptProcessorNode")}}.</dd>
- <dt>{{domxref("WaveTableNode")}}</dt>
- <dd>Used to define a periodic waveform. This interface is obsolete, and has been replaced by {{domxref("PeriodicWave")}}.</dd>
+ <dt>{{domxref("OfflineAudioContext")}}</dt>
+ <dd><strong><code>OfflineAudioContext</code></strong> 인터페이스는 {{domxref("AudioNode")}}로 연결되어 구성된 오디오 프로세싱 그래프를 나타내는 {{domxref("AudioContext")}} 인터페이스입니다. 표준 <strong><code>AudioContext</code></strong> 와 대조적으로, <strong><code>OfflineAudioContext</code></strong> 는 실제로 오디오를 렌더링하지 않고 가능한 빨리 버퍼 내에서 생성합니다. </dd>
+ <dt>{{event("complete")}} (event)</dt>
+ <dd><strong><code>complete</code></strong> 이벤트는 {{domxref("OfflineAudioContext")}}의 렌더링이 종료될때 발생합니다.</dd>
+ <dt>{{domxref("OfflineAudioCompletionEvent")}}</dt>
+ <dd><strong><code>OfflineAudioCompletionEvent</code></strong> 이벤트는 {{domxref("OfflineAudioContext")}} 의 처리가 종료될 때 발생하는 이벤트를 나타냅니다. {{event("complete")}} 이벤트는 이 이벤트를 구현합니다.</dd>
</dl>
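+
+<p>다음은 오디오 그래프를 스피커 대신 버퍼로 렌더링하는 간단한 스케치입니다(44100Hz 샘플 레이트의 2초짜리 스테레오 버퍼를 가정합니다).</p>
+
+<pre class="brush: js">// 스피커가 아닌 AudioBuffer로 렌더링하는 오프라인 컨텍스트를 생성합니다.
+const offlineCtx = new OfflineAudioContext(2, 44100 * 2, 44100);
+
+const osc = offlineCtx.createOscillator();
+osc.connect(offlineCtx.destination);
+osc.start();
+
+// 렌더링이 끝나면 결과 AudioBuffer를 받습니다.
+offlineCtx.startRendering().then((renderedBuffer) => {
+  console.log('렌더링된 길이(초): ' + renderedBuffer.duration);
+});
+</pre>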
-<h2 id="Example" name="Example">Example</h2>
-
-<p>This example shows a wide variety of Web Audio API functions being used. You can see this code in action on the <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-o-matic</a> demo (also check out the <a href="https://github.com/mdn/voice-change-o-matic">full source code at Github</a>) — this is an experimental voice changer toy demo; keep your speakers turned down low when you use it, at least to start!</p>
-
-<p>The Web Audio API lines are highlighted; if you want to find out more about what the different methods, etc. do, have a search around the reference pages.</p>
-
-<pre class="brush: js; highlight:[1,2,9,10,11,12,36,37,38,39,40,41,62,63,72,114,115,121,123,124,125,147,151] notranslate">var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); // define audio context
-// Webkit/blink browsers need prefix, Safari won't work without window.
-
-var voiceSelect = document.getElementById("voice"); // select box for selecting voice effect options
-var visualSelect = document.getElementById("visual"); // select box for selecting audio visualization options
-var mute = document.querySelector('.mute'); // mute button
-var drawVisual; // requestAnimationFrame
-
-var analyser = audioCtx.createAnalyser();
-var distortion = audioCtx.createWaveShaper();
-var gainNode = audioCtx.createGain();
-var biquadFilter = audioCtx.createBiquadFilter();
-
-function makeDistortionCurve(amount) { // function to make curve shape for distortion/wave shaper node to use
-  var k = typeof amount === 'number' ? amount : 50,
-    n_samples = 44100,
-    curve = new Float32Array(n_samples),
-    deg = Math.PI / 180,
-    i = 0,
-    x;
-  for ( ; i &lt; n_samples; ++i ) {
-    x = i * 2 / n_samples - 1;
-    curve[i] = ( 3 + k ) * x * 20 * deg / ( Math.PI + k * Math.abs(x) );
-  }
-  return curve;
-};
-
-navigator.getUserMedia (
-  // constraints - only audio needed for this app
-  {
-    audio: true
-  },
-
-  // Success callback
-  function(stream) {
-    source = audioCtx.createMediaStreamSource(stream);
-    source.connect(analyser);
-    analyser.connect(distortion);
-    distortion.connect(biquadFilter);
-    biquadFilter.connect(gainNode);
-    gainNode.connect(audioCtx.destination); // connecting the different audio graph nodes together
-
-    visualize(stream);
-    voiceChange();
-
-  },
-
-  // Error callback
-  function(err) {
-    console.log('The following gUM error occured: ' + err);
-  }
-);
-
-function visualize(stream) {
-  WIDTH = canvas.width;
-  HEIGHT = canvas.height;
-
-  var visualSetting = visualSelect.value;
-  console.log(visualSetting);
-
-  if(visualSetting == "sinewave") {
-    analyser.fftSize = 2048;
-    var bufferLength = analyser.frequencyBinCount; // half the FFT value
-    var dataArray = new Uint8Array(bufferLength); // create an array to store the data
+<h2 id="Guides_and_tutorials">가이드와 자습서</h2>
-    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+<p>{{LandingPageListSubpages}}</p>
-    function draw() {
+<h2 id="Examples">예제</h2>
-      drawVisual = requestAnimationFrame(draw);
+<p>여러분은 GitHub의 <a href="https://github.com/mdn/webaudio-examples/">webaudio-example 레포지토리</a>에서 몇 개의 예제를 찾을 수 있습니다.</p>
-      analyser.getByteTimeDomainData(dataArray); // get waveform data and put it into the array created above
+<h2 id="Specifications">명세</h2>
-      canvasCtx.fillStyle = 'rgb(200, 200, 200)'; // draw wave with canvas
-      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
-
-      canvasCtx.lineWidth = 2;
-      canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
-
-      canvasCtx.beginPath();
-
-      var sliceWidth = WIDTH * 1.0 / bufferLength;
-      var x = 0;
-
-      for(var i = 0; i &lt; bufferLength; i++) {
-
-        var v = dataArray[i] / 128.0;
-        var y = v * HEIGHT/2;
-
-        if(i === 0) {
-          canvasCtx.moveTo(x, y);
-        } else {
-          canvasCtx.lineTo(x, y);
-        }
-
-        x += sliceWidth;
-      }
-
-      canvasCtx.lineTo(canvas.width, canvas.height/2);
-      canvasCtx.stroke();
-    };
-
-    draw();
-
-  } else if(visualSetting == "off") {
-    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
-    canvasCtx.fillStyle = "red";
-    canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
-  }
-
-}
-
-function voiceChange() {
-  distortion.curve = new Float32Array;
-  biquadFilter.gain.value = 0; // reset the effects each time the voiceChange function is run
-
-  var voiceSetting = voiceSelect.value;
-  console.log(voiceSetting);
-
-  if(voiceSetting == "distortion") {
-    distortion.curve = makeDistortionCurve(400); // apply distortion to sound using waveshaper node
-  } else if(voiceSetting == "biquad") {
-    biquadFilter.type = "lowshelf";
-    biquadFilter.frequency.value = 1000;
-    biquadFilter.gain.value = 25; // apply lowshelf filter to sounds using biquad
-  } else if(voiceSetting == "off") {
-    console.log("Voice settings turned off"); // do nothing, as off option was chosen
-  }
-
-}
-
-// event listeners to change visualize and voice settings
+<table class="standard-table">
+ <tbody>
+ <tr>
+ <th scope="col">Specification</th>
+ <th scope="col">Status</th>
+ <th scope="col">Comment</th>
+ </tr>
+ <tr>
+ <td>{{SpecName('Web Audio API')}}</td>
+ <td>{{Spec2('Web Audio API')}}</td>
+ <td></td>
+ </tr>
+ </tbody>
+</table>
-visualSelect.onchange = function() {
-  window.cancelAnimationFrame(drawVisual);
-  visualize(stream);
-}
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
-voiceSelect.onchange = function() {
-  voiceChange();
-}
+<div>
+<h3 id="AudioContext">AudioContext</h3>
-mute.onclick = voiceMute;
+<div>
-function voiceMute() { // toggle to mute and unmute sound
-  if(mute.id == "") {
-    gainNode.gain.value = 0; // gain set to 0 to mute sound
-    mute.id = "activated";
-    mute.innerHTML = "Unmute";
-  } else {
-    gainNode.gain.value = 1; // gain set to 1 to unmute sound
-    mute.id = "";
-    mute.innerHTML = "Mute";
-  }
-}
-</pre>
+<p>{{Compat("api.AudioContext", 0)}}</p>
+</div>
+</div>
-<h2 id="Specifications">Specifications</h2>
+<h2 id="See_also">같이 보기</h2>
-<table class="standard-table">
- <tbody>
- <tr>
- <th scope="col">Specification</th>
- <th scope="col">Status</th>
- <th scope="col">Comment</th>
- </tr>
- <tr>
- <td>{{SpecName('Web Audio API')}}</td>
- <td>{{Spec2('Web Audio API')}}</td>
- <td></td>
- </tr>
- </tbody>
-</table>
+<h3 id="Tutorialsguides">자습서/가이드</h3>
-<h2 id="Browser_compatibility">Browser compatibility</h2>
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Web Audio API의 기본 개념</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques">고급 기술: 소리 생성, 시퀸싱, 타이밍, 스케쥴링</a></li>
+ <li><a href="/en-US/docs/Web/Media/Autoplay_guide">미디어와 Web Audio API에 대한 자동 재생 가이드</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_IIR_filters">IIR 필터 사용하기</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Web Audio API 시각화</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio 공간화 기초</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Controlling_multiple_parameters_with_ConstantSourceNode">ConstantSourceNode로 다수의 매개변수 제어하기</a></li>
+ <li><a href="https://www.html5rocks.com/tutorials/webaudio/positional_audio/">positional audio와 WebGL 같이 사용하기</a></li>
+ <li><a href="https://www.html5rocks.com/tutorials/webaudio/games/">Web Audio API로 게임 오디오 개발하기</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Migrating_from_webkitAudioContext">webkitAudioContext 코드를 AudioContext 기반 표준에 포팅하기</a></li>
+</ul>
-<p>{{Compat("api.AudioContext", 0)}}</p>
-
-<h2 id="See_also">See also</h2>
+<h3 id="Libraries">라이브러리</h3>
<ul>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a></li>
- <li><a href="http://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic example</a></li>
- <li><a href="http://mdn.github.io/violent-theremin/">Violent Theremin example</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialisation_basics">Web audio spatialisation basics</a></li>
- <li><a href="http://www.html5rocks.com/tutorials/webaudio/positional_audio/">Mixing Positional Audio and WebGL</a></li>
- <li><a href="http://www.html5rocks.com/tutorials/webaudio/games/">Developing Game Audio with the Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext" title="/en-US/docs/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext">Porting webkitAudioContext code to standards based AudioContext</a></li>
- <li><a href="https://github.com/bit101/tones">Tones</a>: a simple library for playing specific tones/notes using the Web Audio API.</li>
- <li><a href="https://github.com/goldfire/howler.js/">howler.js</a>: a JS audio library that defaults to <a href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">Web Audio API</a> and falls back to <a href="http://www.whatwg.org/specs/web-apps/current-work/#the-audio-element">HTML5 Audio</a>, as well as providing other useful features.</li>
- <li><a href="https://github.com/mattlima/mooog">Mooog</a>: jQuery-style chaining of AudioNodes, mixer-style sends/returns, and more.</li>
+ <li><a href="https://github.com/bit101/tones">Tones</a>: Web Audio API를 사용하여 특정한 음색/음을 재생하는 간단한 라이브러리</li>
+ <li><a href="https://tonejs.github.io/">Tone.js</a>: 브라우저에서 상호작용을 하는 음악을 생성하기 위한 프레임워크</li>
+ <li><a href="https://github.com/goldfire/howler.js/">howler.js</a>: 다른 유용한 기능들을 제공할 뿐만 아니라, <a href="https://webaudio.github.io/web-audio-api/">Web Audio API</a>을 기본으로 하고 <a href="https://www.whatwg.org/specs/web-apps/current-work/#the-audio-element">HTML5 Audio</a>에 대안을 제공하는 JS 오디오 라이브러리</li>
+ <li><a href="https://github.com/mattlima/mooog">Mooog</a>: jQuery 스타일의 AudioNode 체이닝, mixer 스타일의 전송/반환, 등등</li>
+ <li><a href="https://korilakkuma.github.io/XSound/">XSound</a>: 신시사이저, 이펙트, 시각화, 레코딩 등을 위한 Web Audio API 라이브러리</li>
+ <li><a class="external external-icon" href="https://github.com/chrisjohndigital/OpenLang">OpenLang</a>: 다른 소스로부터 하나의 파일에 비디오와 오디오를 레코드하고 결합시키기 위한 Web Audio API를 사용하는 HTML5 비디오 language lab 웹 애플리케이션 (<a class="external external-icon" href="https://github.com/chrisjohndigital/OpenLang">GitHub에 있는 소스</a>)</li>
+ <li><a href="https://ptsjs.org/">Pts.js</a>: 웹 오디오 시각화를 단순화합니다 (<a href="https://ptsjs.org/guide/sound-0800">가이드</a>)</li>
</ul>
-<section id="Quick_Links">
-<h3 id="Quicklinks">Quicklinks</h3>
+<h3 id="Related_topics">관련 주제</h3>
-<ol>
- <li data-default-state="open"><strong><a href="#">Guides</a></strong>
-
- <ol>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio spatialization basics</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext" title="/en-US/docs/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext">Porting webkitAudioContext code to standards based AudioContext</a></li>
- </ol>
- </li>
- <li data-default-state="open"><strong><a href="#">Examples</a></strong>
- <ol>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Simple_synth">Simple synth keyboard</a></li>
- <li><a href="http://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a></li>
- <li><a href="http://mdn.github.io/violent-theremin/">Violent Theremin</a></li>
- </ol>
- </li>
- <li data-default-state="open"><strong><a href="#">Interfaces</a></strong>
- <ol>
- <li>{{domxref("AnalyserNode")}}</li>
- <li>{{domxref("AudioBuffer")}}</li>
- <li>{{domxref("AudioBufferSourceNode")}}</li>
- <li>{{domxref("AudioContext")}}</li>
- <li>{{domxref("AudioDestinationNode")}}</li>
- <li>{{domxref("AudioListener")}}</li>
- <li>{{domxref("AudioNode")}}</li>
- <li>{{domxref("AudioParam")}}</li>
- <li>{{event("audioprocess")}} (event)</li>
- <li>{{domxref("AudioProcessingEvent")}}</li>
- <li>{{domxref("BiquadFilterNode")}}</li>
- <li>{{domxref("ChannelMergerNode")}}</li>
- <li>{{domxref("ChannelSplitterNode")}}</li>
- <li>{{event("complete")}} (event)</li>
- <li>{{domxref("ConvolverNode")}}</li>
- <li>{{domxref("DelayNode")}}</li>
- <li>{{domxref("DynamicsCompressorNode")}}</li>
- <li>{{event("ended_(Web_Audio)", "ended")}} (event)</li>
- <li>{{domxref("GainNode")}}</li>
- <li>{{domxref("MediaElementAudioSourceNode")}}</li>
- <li>{{domxref("MediaStreamAudioDestinationNode")}}</li>
- <li>{{domxref("MediaStreamAudioSourceNode")}}</li>
- <li>{{domxref("OfflineAudioCompletionEvent")}}</li>
- <li>{{domxref("OfflineAudioContext")}}</li>
- <li>{{domxref("OscillatorNode")}}</li>
- <li>{{domxref("PannerNode")}}</li>
- <li>{{domxref("PeriodicWave")}}</li>
- <li>{{domxref("ScriptProcessorNode")}}</li>
- <li>{{domxref("WaveShaperNode")}}</li>
- </ol>
- </li>
-</ol>
-</section>
+<ul>
+ <li><a href="/en-US/docs/Web/Media">웹 미디어 기술</a></li>
+ <li><a href="/en-US/docs/Web/Media/Formats">웹에서의 미디어 타입과 포맷에 대한 가이드</a></li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html b/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html
new file mode 100644
index 0000000000..260a26a090
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html
@@ -0,0 +1,381 @@
+---
+title: Migrating from webkitAudioContext
+slug: Web/API/Web_Audio_API/Migrating_from_webkitAudioContext
+tags:
+ - API
+ - Audio
+ - Guide
+ - Migrating
+ - Migration
+ - Updating
+ - Web Audio API
+ - porting
+ - webkitAudioContext
+---
+<p>The Web Audio API went through many iterations before reaching its current state. It was first implemented in WebKit, and some of its older parts were not immediately removed as they were replaced in the specification, leading to many sites using non-compatible code. <span class="seoSummary">In this article, we cover the differences in Web Audio API since it was first implemented in WebKit and how to update your code to use the modern Web Audio API.</span></p>
+
+<p>The Web Audio standard was first implemented in <a href="http://webkit.org/">WebKit</a>, and the implementation was built in parallel with the work on the <a href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">specification</a> of the API. As the specification evolved and changes were made to the spec, some of the old implementation pieces were not removed from the WebKit (and Blink) implementations due to backwards compatibility reasons.</p>
+
+<p>New engines implementing the Web Audio spec (such as Gecko) will only implement the official, final version of the specification, which means that code using <code>webkitAudioContext</code> or old naming conventions in the Web Audio specification may not immediately work out of the box in a compliant Web Audio implementation.  This article attempts to summarize the areas where developers are likely to encounter these problems and provide examples on how to port such code to standards based {{domxref("AudioContext")}}, which will work across different browser engines.</p>
+
+<div class="note">
+<p><strong>Note</strong>: There is a library called <a href="https://github.com/cwilso/webkitAudioContext-MonkeyPatch">webkitAudioContext monkeypatch</a>, which automatically fixes some of these changes so that most code targeting <code>webkitAudioContext</code> works on the standards-based <code>AudioContext</code> out of the box, but it currently doesn't handle all of the cases below.  Please consult the <a href="https://github.com/cwilso/webkitAudioContext-MonkeyPatch/blob/gh-pages/README.md">README file</a> for that library to see a list of APIs that are automatically handled by it.</p>
+</div>
+
+<h2 id="Changes_to_the_creator_methods">Changes to the creator methods</h2>
+
+<p>Three of the creator methods on <code>webkitAudioContext</code> have been renamed in {{domxref("AudioContext")}}.</p>
+
+<ul>
+ <li><code>createGainNode()</code> has been renamed to {{domxref("createGain")}}.</li>
+ <li><code>createDelayNode()</code> has been renamed to {{domxref("createDelay")}}.</li>
+ <li><code>createJavaScriptNode()</code> has been renamed to {{domxref("createScriptProcessor")}}.</li>
+</ul>
+
+<p>These are simple renames that were made in order to improve the consistency of these method names on {{domxref("AudioContext")}}.  If your code uses any of these names, like in the example below:</p>
+
+<pre class="brush: js">// Old method names
+var gain = context.createGainNode();
+var delay = context.createDelayNode();
+var js = context.createJavaScriptNode(1024);
+</pre>
+
+<p>you can rename the methods to look like this:</p>
+
+<pre class="brush: js">// New method names
+var gain = context.createGain();
+var delay = context.createDelay();
+var js = context.createScriptProcessor(1024);
+</pre>
+
+<p>The semantics of these methods remain the same in the renamed versions.</p>
+
+<h2 id="Changes_to_starting_and_stopping_nodes">Changes to starting and stopping nodes</h2>
+
+<p>In <code>webkitAudioContext</code>, there are two ways to start and stop {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}}: the <code>noteOn()</code> and <code>noteOff()</code> methods, and the <code>start()</code> and <code>stop()</code> methods.  ({{domxref("AudioBufferSourceNode")}} has yet another way of starting output: the <code>noteGrainOn()</code> method.)  The <code>noteOn()</code>/<code>noteGrainOn()</code>/<code>noteOff()</code> methods were the original way to start/stop output in these nodes, and in the newer versions of the specification, the <code>noteOn()</code> and <code>noteGrainOn()</code> methods were consolidated into a single <code>start()</code> method, and the <code>noteOff()</code> method was renamed to the <code>stop()</code> method.</p>
+
+<p>In order to port your code, you can just rename the method that you're using.  For example, if you have code like the following:</p>
+
+<pre class="brush: js">var osc = context.createOscillator();
+osc.noteOn(1);
+osc.noteOff(1.5);
+
+var src = context.createBufferSource();
+src.noteGrainOn(1, 0.25);
+src.noteOff(2);
+</pre>
+
+<p>you can change it like this in order to port it to the standard AudioContext API:</p>
+
+<pre class="brush: js">var osc = context.createOscillator();
+osc.start(1);
+osc.stop(1.5);
+
+var src = context.createBufferSource();
+src.start(1, 0.25);
+src.stop(2);</pre>
+
+<h2 id="Remove_synchronous_buffer_creation">Remove synchronous buffer creation</h2>
+
+<p>In the old WebKit implementation of Web Audio, there were two versions of <code>createBuffer()</code>: one which created an initially empty buffer, and one which took an existing {{domxref("ArrayBuffer")}} containing encoded audio, decoded it and returned the result in the form of an {{domxref("AudioBuffer")}}.  The latter version of <code>createBuffer()</code> was potentially expensive, because it had to decode the audio buffer synchronously, and because the buffer could be arbitrarily large, this method could take a long time to complete its work, and no other part of your web page's code could execute in the meantime.</p>
+
+<p>Because of these problems, this version of the <code>createBuffer()</code> method has been removed, and you should use the asynchronous <code>decodeAudioData()</code> method instead.</p>
+
+<p>The example below shows old code which downloads an audio file over the network, and then decodes it using <code>createBuffer()</code>:</p>
+
+<pre class="brush: js">var xhr = new XMLHttpRequest();
+xhr.open("GET", "/path/to/audio.ogg", true);
+xhr.responseType = "arraybuffer";
+xhr.send();
+xhr.onload = function() {
+ var decodedBuffer = context.createBuffer(xhr.response, false);
+ if (decodedBuffer) {
+ // Decoding was successful, do something useful with the audio buffer
+ } else {
+ alert("Decoding the audio buffer failed");
+ }
+};
+</pre>
+
+<p>Converting this code to use <code>decodeAudioData()</code> is relatively simple, as can be seen below:</p>
+
+<pre class="brush: js">var xhr = new XMLHttpRequest();
+xhr.open("GET", "/path/to/audio.ogg", true);
+xhr.responseType = "arraybuffer";
+xhr.send();
+xhr.onload = function() {
+ context.decodeAudioData(xhr.response, function onSuccess(decodedBuffer) {
+ // Decoding was successful, do something useful with the audio buffer
+ }, function onFailure() {
+ alert("Decoding the audio buffer failed");
+ });
+};</pre>
+
+<p>Note that the <code>decodeAudioData()</code> method is asynchronous, which means that it will return immediately, and then when the decoding finishes, one of the success or failure callback functions will get called depending on whether the audio decoding was successful.  This means that you may need to restructure your code to run the part which happened after the <code>createBuffer()</code> call in the success callback, as you can see in the example above.</p>
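+
+<p>In current browsers, <code>decodeAudioData()</code> also returns a {{jsxref("Promise")}}. As a rough sketch (reusing the same <code>/path/to/audio.ogg</code> resource and <code>context</code> from the examples above), the download-and-decode step can therefore be written with <code>fetch()</code> instead of <code>XMLHttpRequest</code>:</p>
+
+<pre class="brush: js">// Promise-based sketch: fetch the file, then decode it asynchronously
+fetch("/path/to/audio.ogg")
+  .then(function(response) { return response.arrayBuffer(); })
+  .then(function(arrayBuffer) { return context.decodeAudioData(arrayBuffer); })
+  .then(function(decodedBuffer) {
+    // Decoding was successful, do something useful with the audio buffer
+  })
+  .catch(function() {
+    alert("Decoding the audio buffer failed");
+  });
+</pre>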
+
+<h2 id="Renaming_of_AudioParam.setTargetValueAtTime">Renaming of AudioParam.setTargetValueAtTime</h2>
+
+<p>The <code>setTargetValueAtTime()</code> method on the {{domxref("AudioParam")}} interface has been renamed to <code>setTargetAtTime()</code>.  This is also a simple rename to improve the understandability of the API, and the semantics of the method are the same.  If your code is using <code>setTargetValueAtTime()</code>, you can rename it to use <code>setTargetAtTime()</code>. For example, if we have code that looks like this:</p>
+
+<pre class="brush: js"> var gainNode = context.createGain();
+ gainNode.gain.setTargetValueAtTime(0.0, 10.0, 1.0);
+</pre>
+
+<p>you can rename the method, and be compliant with the standard, like so:</p>
+
+<pre class="brush: js"> var gainNode = context.createGain();
+ gainNode.gain.setTargetAtTime(0.0, 10.0, 1.0);
+</pre>
+
+<h2 id="Enumerated_values_that_changed">Enumerated values that changed</h2>
+
+<p>The original <code>webkitAudioContext</code> API used C-style number based enumerated values in the API.  Those values have since been changed to use the Web IDL based enumerated values, which should be familiar because they are similar to things like the {{domxref("HTMLInputElement")}} property {{domxref("HTMLInputElement.type", "type")}}.</p>
+
+<h3 id="OscillatorNode.type">OscillatorNode.type</h3>
+
+<p>{{domxref("OscillatorNode")}}'s type property has been changed to use Web IDL enums.  Old code using <code>webkitAudioContext</code> can be ported to standards based {{domxref("AudioContext")}} like below:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var osc = context.createOscillator();
+osc.type = osc.SINE; // sine waveform
+osc.type = osc.SQUARE; // square waveform
+osc.type = osc.SAWTOOTH; // sawtooth waveform
+osc.type = osc.TRIANGLE; // triangle waveform
+osc.setWaveTable(table);
+var isCustom = (osc.type == osc.CUSTOM); // isCustom will be true
+
+// New standard AudioContext code:
+var osc = context.createOscillator();
+osc.type = "sine"; // sine waveform
+osc.type = "square"; // square waveform
+osc.type = "sawtooth"; // sawtooth waveform
+osc.type = "triangle"; // triangle waveform
+osc.setPeriodicWave(table); // Note: setWaveTable has been renamed to setPeriodicWave!
+var isCustom = (osc.type == "custom"); // isCustom will be true
+</pre>
+
+<h3 id="BiquadFilterNode.type">BiquadFilterNode.type</h3>
+
+<p>{{domxref("BiquadFilterNode")}}'s type property has been changed to use Web IDL enums.  Old code using <code>webkitAudioContext</code> can be ported to standards based {{domxref("AudioContext")}} like below:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var filter = context.createBiquadFilter();
+filter.type = filter.LOWPASS; // lowpass filter
+filter.type = filter.HIGHPASS; // highpass filter
+filter.type = filter.BANDPASS; // bandpass filter
+filter.type = filter.LOWSHELF; // lowshelf filter
+filter.type = filter.HIGHSHELF; // highshelf filter
+filter.type = filter.PEAKING; // peaking filter
+filter.type = filter.NOTCH; // notch filter
+filter.type = filter.ALLPASS; // allpass filter
+
+// New standard AudioContext code:
+var filter = context.createBiquadFilter();
+filter.type = "lowpass"; // lowpass filter
+filter.type = "highpass"; // highpass filter
+filter.type = "bandpass"; // bandpass filter
+filter.type = "lowshelf"; // lowshelf filter
+filter.type = "highshelf"; // highshelf filter
+filter.type = "peaking"; // peaking filter
+filter.type = "notch"; // notch filter
+filter.type = "allpass"; // allpass filter
+</pre>
+
+<h3 id="PannerNode.panningModel">PannerNode.panningModel</h3>
+
+<p>{{domxref("PannerNode")}}'s panningModel property has been changed to use Web IDL enums.  Old code using <code>webkitAudioContext</code> can be ported to standards based {{domxref("AudioContext")}} like below:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var panner = context.createPanner();
+panner.panningModel = panner.EQUALPOWER; // equalpower panning
+panner.panningModel = panner.HRTF; // HRTF panning
+
+// New standard AudioContext code:
+var panner = context.createPanner();
+panner.panningModel = "equalpower"; // equalpower panning
+panner.panningModel = "HRTF"; // HRTF panning
+</pre>
+
+<h3 id="PannerNode.distanceModel">PannerNode.distanceModel</h3>
+
+<p>{{domxref("PannerNode")}}'s <code>distanceModel</code> property has been changed to use Web IDL enums.  Old code using <code>webkitAudioContext</code> can be ported to standards based {{domxref("AudioContext")}} like below:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var panner = context.createPanner();
+panner.distanceModel = panner.LINEAR_DISTANCE; // linear distance model
+panner.distanceModel = panner.INVERSE_DISTANCE; // inverse distance model
+panner.distanceModel = panner.EXPONENTIAL_DISTANCE; // exponential distance model
+
+// New standard AudioContext code:
+var panner = context.createPanner();
+panner.distanceModel = "linear"; // linear distance model
+panner.distanceModel = "inverse"; // inverse distance model
+panner.distanceModel = "exponential"; // exponential distance model
+</pre>
+
+<h2 id="Gain_control_moved_to_its_own_node_type">Gain control moved to its own node type</h2>
+
+<p>The Web Audio standard now controls all gain using the {{domxref("GainNode")}}. Instead of setting a <code>gain</code> property directly on an audio source, you connect the source to a gain node and then control the gain using that node's <code>gain</code> parameter.</p>
+
+<h3 id="AudioBufferSourceNode">AudioBufferSourceNode</h3>
+
+<p>The <code>gain</code> attribute of {{domxref("AudioBufferSourceNode")}} has been removed.  The same functionality can be achieved by connecting the {{domxref("AudioBufferSourceNode")}} to a gain node.  See the following example:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+src.gain.value = 0.5;
+src.connect(context.destination);
+src.noteOn(0);
+
+// New standard AudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+var gain = context.createGain();
+src.connect(gain);
+gain.gain.value = 0.5;
+gain.connect(context.destination);
+src.start(0);
+</pre>
+
+<h3 id="AudioBuffer">AudioBuffer</h3>
+
+<p>The <code>gain</code> attribute of {{domxref("AudioBuffer")}} has been removed.  The same functionality can be achieved by connecting the {{domxref("AudioBufferSourceNode")}} that owns the buffer to a gain node.  See the following example:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+src.buffer.gain = 0.5;
+src.connect(context.destination);
+src.noteOn(0);
+
+// New standard AudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+var gain = context.createGain();
+src.connect(gain);
+gain.gain.value = 0.5;
+gain.connect(context.destination);
+src.start(0);
+</pre>
+
+<h2 id="Removal_of_AudioBufferSourceNode.looping">Removal of AudioBufferSourceNode.looping</h2>
+
+<p>The <code>looping</code> attribute of {{domxref("AudioBufferSourceNode")}} has been removed.  This attribute was an alias of the <code>loop</code> attribute, so you can just use the <code>loop</code> attribute instead. Instead of having code like this:</p>
+
+<pre class="brush: js">var source = context.createBufferSource();
+source.looping = true;
+</pre>
+
+<p>you can change it to respect the last version of the specification:</p>
+
+<pre class="brush: js">var source = context.createBufferSource();
+source.loop = true;
+</pre>
+
+<p>Note, the <code>loopStart</code> and <code>loopEnd</code> attributes are not supported in <code>webkitAudioContext</code>.</p>
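+
+<p>For reference, here is a minimal sketch of using these attributes with the standard API (reusing the <code>someBuffer</code> {{domxref("AudioBuffer")}} from the earlier examples); there is no <code>webkitAudioContext</code> equivalent:</p>
+
+<pre class="brush: js">var source = context.createBufferSource();
+source.buffer = someBuffer;
+source.loop = true;
+source.loopStart = 1.0; // loop from one second in...
+source.loopEnd = 2.5;   // ...until two and a half seconds in
+source.connect(context.destination);
+source.start(0);
+</pre>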
+
+<h2 id="Changes_to_determining_playback_state">Changes to determining playback state</h2>
+
+<p>The <code>playbackState</code> attribute of {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}} has been removed.  Depending on why you used this attribute, you can use the following techniques to get the same information:</p>
+
+<ul>
+ <li>If you need to compare this attribute to <code>UNSCHEDULED_STATE</code>, you can basically remember whether you've called <code>start()</code> on the node or not.</li>
+ <li>If you need to compare this attribute to <code>SCHEDULED_STATE</code>, you can remember whether you've called <code>start()</code> on the node, and compare the value of {{domxref("AudioContext.currentTime")}} to the first argument passed to <code>start()</code> to know whether playback has started yet.</li>
+ <li>If you need to compare this attribute to <code>PLAYING_STATE</code>, you can compare the value of {{domxref("AudioContext.currentTime")}} to the first argument passed to <code>start()</code> to know whether playback has started or not.</li>
+ <li>If you need to know when playback of the node is finished (which is the most significant use case of <code>playbackState</code>), there is a new ended event which you can use to know when playback is finished.  Please see this code example:</li>
+</ul>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var src = context.createBufferSource();
+// Some time later...
+var isFinished = (src.playbackState == src.FINISHED_STATE);
+
+// New AudioContext code:
+var src = context.createBufferSource();
+function endedHandler(event) {
+ isFinished = true;
+}
+var isFinished = false;
+src.onended = endedHandler;
+</pre>
+
+<p>The exact same changes have been applied to both {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}}, so you can apply the same techniques to both kinds of nodes.</p>
+
+<h2 id="Removal_of_AudioContext.activeSourceCount">Removal of AudioContext.activeSourceCount</h2>
+
+<p>The <code>activeSourceCount</code> attribute has been removed from {{domxref("AudioContext")}}.  If you need to count the number of playing source nodes, you can maintain the count by handling the ended event on the source nodes, as shown above.</p>
+
+<p>Code using the <code>activeSourceCount</code> attribute of the {{domxref("AudioContext")}}, like this snippet:</p>
+
+<pre class="brush: js"> var src0 = context.createBufferSource();
+ var src1 = context.createBufferSource();
+ // Set buffers and other parameters...
+ src0.start(0);
+ src1.start(0);
+ // Some time later...
+ console.log(context.activeSourceCount);
+</pre>
+
+<p>could be rewritten like this:</p>
+
+<pre class="brush: js"> // Array to track the playing source nodes:
+ var sources = [];
+ // When starting the source, put it at the end of the array,
+ // and set a handler to make sure it gets removed when the
+ // AudioBufferSourceNode reaches its end.
+ // First argument is the AudioBufferSourceNode to start, other arguments are
+ // the arguments to the |start()| method of the AudioBufferSourceNode.
+ function startSource() {
+ var src = arguments[0];
+ var startArgs = Array.prototype.slice.call(arguments, 1);
+ src.onended = function() {
+ sources.splice(sources.indexOf(src), 1);
+ }
+ sources.push(src);
+ src.start.apply(src, startArgs);
+ }
+ function activeSources() {
+ return sources.length;
+ }
+ var src0 = context.createBufferSource();
+ var src1 = context.createBufferSource();
+ // Set buffers and other parameters...
+ startSource(src0, 0);
+ startSource(src1, 0);
+ // Some time later, query the number of sources...
+ console.log(activeSources());
+</pre>
+
+<h2 id="Renaming_of_WaveTable">Renaming of WaveTable</h2>
+
+<p>The {{domxref("WaveTable")}} interface has been renamed to {{domxref("PeriodicWave")}}.  Here is how you can port old code using <code>WaveTable</code> to the standard AudioContext API:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var osc = context.createOscillator();
+var table = context.createWaveTable(realArray, imaginaryArray);
+osc.setWaveTable(table);
+
+// New standard AudioContext code:
+var osc = context.createOscillator();
+var table = context.createPeriodicWave(realArray, imaginaryArray);
+osc.setPeriodicWave(table);
+</pre>
+
+<h2 id="Removal_of_some_of_the_AudioParam_read-only_attributes">Removal of some of the AudioParam read-only attributes</h2>
+
+<p>The following read-only attributes have been removed from AudioParam: <code>name</code>, <code>units</code>, <code>minValue</code>, and <code>maxValue</code>.  These used to be informational attributes.  Here is some information on how you can get these values if you need them:</p>
+
+<ul>
+ <li>The <code>name</code> attribute is a string representing the name of the {{domxref("AudioParam")}} object.  For example, the name of {{domxref("GainNode.gain")}} is <code>"gain"</code>.  You can track where the {{domxref("AudioParam")}} object is coming from in your code if you need this information.</li>
+ <li>The <code>minValue</code> and <code>maxValue</code> attributes are read-only values representing the nominal range for the {{domxref("AudioParam")}}.  For example, for {{domxref("GainNode") }}, these values are 0 and 1, respectively.  Note that these bounds are not enforced by the engine, and are merely used for informational purposes.  As an example, it's perfectly valid to set a gain value to 2, or even -1.  In order to find out these nominal values, you can consult the <a href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">specification</a>.</li>
+ <li>The <code>units</code> attribute as implemented in <code>webkitAudioContext</code> implementations is unused, and always returns 0.  There is no reason why you should need this attribute.</li>
+</ul>
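+
+<p>If your code relied on these attributes, one approach — sketched below with a hypothetical <code>describeParam()</code> helper — is to carry the information alongside each {{domxref("AudioParam")}} yourself, using the nominal ranges given in the specification:</p>
+
+<pre class="brush: js">// Keep our own metadata for each AudioParam we care about.
+var paramInfo = new Map();
+
+function describeParam(param, name, minValue, maxValue) {
+  paramInfo.set(param, { name: name, minValue: minValue, maxValue: maxValue });
+  return param;
+}
+
+var gainNode = context.createGain();
+// Nominal range for GainNode.gain, as noted above.
+describeParam(gainNode.gain, "gain", 0, 1);
+</pre>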
+
+<h2 id="Removal_of_MediaElementAudioSourceNode.mediaElement">Removal of MediaElementAudioSourceNode.mediaElement</h2>
+
+<p>The <code>mediaElement</code> attribute of {{domxref("MediaElementAudioSourceNode")}} has been removed.  You can keep a reference to the media element used to create this node if you need to access it later.</p>
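+
+<p>For example, a minimal sketch of keeping that reference yourself:</p>
+
+<pre class="brush: js">var audioElement = document.querySelector("audio");
+var srcNode = context.createMediaElementSource(audioElement);
+// Later on, use the saved |audioElement| variable directly
+// instead of reading srcNode.mediaElement.
+</pre>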
+
+<h2 id="Removal_of_MediaStreamAudioSourceNode.mediaStream">Removal of MediaStreamAudioSourceNode.mediaStream</h2>
+
+<p>The <code>mediaStream</code> attribute of {{domxref("MediaStreamAudioSourceNode")}} has been removed.  You can keep a reference to the media stream used to create this node if you need to access it later.</p>
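+
+<p>Likewise, a minimal sketch for the media stream case (here using <code>getUserMedia()</code>):</p>
+
+<pre class="brush: js">navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
+  var srcNode = context.createMediaStreamSource(stream);
+  // Later on, use the saved |stream| variable directly
+  // instead of reading srcNode.mediaStream.
+});
+</pre>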
diff --git a/files/ko/web/api/web_audio_api/tools/index.html b/files/ko/web/api/web_audio_api/tools/index.html
new file mode 100644
index 0000000000..beee9d6fb4
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/tools/index.html
@@ -0,0 +1,41 @@
+---
+title: Tools for analyzing Web Audio usage
+slug: Web/API/Web_Audio_API/Tools
+tags:
+ - API
+ - Audio
+ - Debugging
+ - Media
+ - Tools
+ - Web
+ - Web Audio
+ - Web Audio API
+ - sound
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p>While working on your Web Audio API code, you may find that you need tools to analyze the graph of nodes you create or to otherwise debug your work. This article discusses tools available to help you do that.</p>
+
+<h2 id="Chrome">Chrome</h2>
+
+<p>A handy web audio inspector can be found in the <a href="https://chrome.google.com/webstore/detail/web-audio-inspector/cmhomipkklckpomafalojobppmmidlgl">Chrome Web Store</a>.</p>
+
+<h2 id="Edge">Edge</h2>
+
+<p><em>Add information for developers using Microsoft Edge.</em></p>
+
+<h2 id="Firefox">Firefox</h2>
+
+<p>Firefox offers a native <a href="/en-US/docs/Tools/Web_Audio_Editor">Web Audio Editor</a>.</p>
+
+<h2 id="Safari">Safari</h2>
+
+<p><em>Add information for developers working in Safari.</em></p>
+
+<h2 id="See_also">See also</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/Apps/Fundamentals/Audio_and_video_delivery/Web_Audio_API_cross_browser">Writing Web Audio API code that works in every browser</a></li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/using_audioworklet/index.html b/files/ko/web/api/web_audio_api/using_audioworklet/index.html
new file mode 100644
index 0000000000..b103225f09
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/using_audioworklet/index.html
@@ -0,0 +1,325 @@
+---
+title: Background audio processing using AudioWorklet
+slug: Web/API/Web_Audio_API/Using_AudioWorklet
+tags:
+ - API
+ - Audio
+ - AudioWorklet
+ - Background
+ - Examples
+ - Guide
+ - Processing
+ - Web Audio
+ - Web Audio API
+ - WebAudio API
+ - sound
+---
+<p>{{APIRef("Web Audio API")}}</p>
+
+<p>When the Web Audio API was first introduced to browsers, it included the ability to use JavaScript code to create custom audio processors, via {{domxref("ScriptProcessorNode")}}, that would be invoked to perform real-time audio manipulations. The drawback to <code>ScriptProcessorNode</code> was simple: it ran on the main thread, thus blocking everything else going on until it completed execution. This was far less than ideal, especially for something that can be as computationally expensive as audio processing.</p>
+
+<p>Enter {{domxref("AudioWorklet")}}. An audio context's audio worklet is a {{domxref("Worklet")}} which runs off the main thread, executing audio processing code added to it by calling the context's {{domxref("Worklet.addModule", "audioWorklet.addModule()")}} method. Calling <code>addModule()</code> loads the specified JavaScript file, which should contain the implementation of the audio processor. With the processor registered, you can create a new {{domxref("AudioWorkletNode")}} which passes the audio through the processor's code when the node is linked into the chain of audio nodes along with any other audio nodes.</p>
+
+<p><span class="seoSummary">The process of creating an audio processor using JavaScript, establishing it as an audio worklet processor, and then using that processor within a Web Audio application is the topic of this article.</span></p>
+
+<p>It's worth noting that because audio processing can often involve substantial computation, your processor may benefit greatly from being built using <a href="/en-US/docs/WebAssembly">WebAssembly</a>, which brings near-native or fully native performance to web apps. Implementing your audio processing algorithm using WebAssembly can make it perform markedly better.</p>
+
+<h2 id="High_level_overview">High level overview</h2>
+
+<p>Before we start looking at the use of AudioWorklet on a step-by-step basis, let's start with a brief high-level overview of what's involved.</p>
+
+<ol>
+ <li>Create a module that defines an audio worklet processor class, based on {{domxref("AudioWorkletProcessor")}}, which takes audio from one or more incoming sources, performs its operation on the data, and outputs the resulting audio data.</li>
+ <li>Access the audio context's {{domxref("AudioWorklet")}} through its {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}} property, and call the audio worklet's {{domxref("Worklet.addModule", "addModule()")}} method to install the audio worklet processor module.</li>
+ <li>As needed, create audio processing nodes by passing the processor's name (which is defined by the module) to the {{domxref("AudioWorkletNode.AudioWorkletNode", "AudioWorkletNode()")}} constructor.</li>
+ <li>Set up any audio parameters the {{domxref("AudioWorkletNode")}} needs, or that you wish to configure. These are defined in the audio worklet processor module.</li>
+ <li>Connect the created <code>AudioWorkletNode</code>s into your audio processing pipeline as you would any other node, then use your audio pipeline as usual.</li>
+</ol>
+
+<p>Throughout the remainder of this article, we'll look at these steps in more detail, with examples (including working examples you can try out on your own).</p>
+
+<p>The example code found on this page is derived from <a href="https://mdn.github.io/webaudio-examples/audioworklet/">this working example</a> which is part of MDN's <a href="https://github.com/mdn/webaudio-examples/">GitHub repository of Web Audio examples</a>. The example creates an oscillator node and adds white noise to it using an {{domxref("AudioWorkletNode")}} before playing the resulting sound out. Slider controls are available to allow controlling the gain of both the oscillator and the audio worklet's output.</p>
+
+<p><a href="https://github.com/mdn/webaudio-examples/tree/master/audioworklet"><strong>See the code</strong></a></p>
+
+<p><a href="https://mdn.github.io/webaudio-examples/audioworklet/"><strong>Try it live</strong></a></p>
+
+<h2 id="Creating_an_audio_worklet_processor">Creating an audio worklet processor</h2>
+
+<p>Fundamentally, an audio worklet processor (which we'll refer to usually as either an "audio processor" or as a "processor" because otherwise this article will be about twice as long) is implemented using a JavaScript module that defines and installs the custom audio processor class.</p>
+
+<h3 id="Structure_of_an_audio_worklet_processor">Structure of an audio worklet processor</h3>
+
+<p>An audio worklet processor is a JavaScript module which consists of the following:</p>
+
+<ul>
+ <li>A JavaScript class which defines the audio processor. This class extends the {{domxref("AudioWorkletProcessor")}} class.</li>
+ <li>The audio processor class must implement a {{domxref("AudioWorkletProcessor.process", "process()")}} method, which receives incoming audio data and writes back out the data as manipulated by the processor.</li>
+ <li>The module installs the new audio worklet processor class by calling {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, specifying a name for the audio processor and the class that defines the processor.</li>
+</ul>
+
+<p>A single audio worklet processor module may define multiple processor classes, registering each of them with individual calls to <code>registerProcessor()</code>. As long as each has its own unique name, this will work just fine. It's also more efficient than loading multiple modules over the network or even from the user's local disk.</p>
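+
+<p>For example, here is a hedged sketch (with made-up processor names, and assuming the code lives inside an audio worklet module) of one module registering two processors — the structure of each class is explained in the next section:</p>
+
+<pre class="brush: js">class SilenceProcessor extends AudioWorkletProcessor {
+  process(inputList, outputList, parameters) {
+    // Leave every output untouched (silence) and keep the node alive.
+    return true;
+  }
+}
+
+class CopyProcessor extends AudioWorkletProcessor {
+  process(inputList, outputList, parameters) {
+    // Copy the first input's channels straight across to the first output.
+    const input = inputList[0];
+    const output = outputList[0];
+    const channelCount = Math.min(input.length, output.length);
+    for (let channelNum = 0; channelNum &lt; channelCount; channelNum++) {
+      output[channelNum].set(input[channelNum]);
+    }
+    return true;
+  }
+}
+
+registerProcessor("silence-processor", SilenceProcessor);
+registerProcessor("copy-processor", CopyProcessor);
+</pre>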
+
+<h3 id="Basic_code_framework">Basic code framework</h3>
+
+<p>The barest framework of an audio processor class looks like this:</p>
+
+<pre class="brush: js">class MyAudioProcessor extends AudioWorkletProcessor {
+  constructor() {
+  super();
+  }
+
+  process(inputList, outputList, parameters) {
+  /* using the inputs (or not, as needed), write the output
+  into each of the outputs */
+
+  return true;
+  }
+};
+
+registerProcessor("my-audio-processor", MyAudioProcessor);
+</pre>
+
+<p>After the implementation of the processor comes a call to the global function {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, which is only available within the scope of the audio context's {{domxref("AudioWorklet")}}, which is the invoker of the processor script as a result of your call to {{domxref("Worklet.addModule", "audioWorklet.addModule()")}}. This call to <code>registerProcessor()</code> registers your class as the basis for any {{domxref("AudioWorkletProcessor")}}s created when {{domxref("AudioWorkletNode")}}s are set up.</p>
+
+<p>This is the barest framework and actually has no effect until code is added into <code>process()</code> to do something with those inputs and outputs. Which brings us to talking about those inputs and outputs.</p>
+
+<h3 id="The_input_and_output_lists">The input and output lists</h3>
+
+<p>The lists of inputs and outputs can be a little confusing at first, even though they're actually very simple once you realize what's going on.</p>
+
+<p>Let's start at the inside and work our way out. Fundamentally, the audio for a single audio channel (such as the left speaker or the subwoofer, for example) is represented as a <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array">Float32Array</a></code> whose values are the individual audio samples. By specification, each block of audio your <code>process()</code> function receives contains 128 frames (that is, 128 samples for each channel), but it is planned that <em>this value will change in the future</em>, and may in fact vary depending on circumstances, so you should <em>always</em> check the array's <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray/length">length</a></code> rather than assuming a particular size. It is, however, guaranteed that the inputs and outputs will have the same block length.</p>
+
+<p>Each input can have a number of channels. A mono input has a single channel; stereo input has two channels. Surround sound might have six or more channels. So each input is, in turn, an array of channels. That is, an array of <code>Float32Array</code> objects.</p>
+
+<p>Then, there can be multiple inputs, so the <code>inputList</code> is an array of arrays of <code>Float32Array</code> objects. Each input may have a different number of channels, and each channel has its own array of samples.</p>
+
+<p>Thus, given the input list <code>inputList</code>:</p>
+
+<pre class="brush: js">const numberOfInputs = inputList.length;
+const firstInput = inputList[0];
+
+const firstInputChannelCount = firstInput.length;
+const firstInputFirstChannel = firstInput[0]; // (or inputList[0][0])
+
+const firstChannelSampleCount = firstInputFirstChannel.length;
+const firstSampleOfFirstChannel = firstInputFirstChannel[0]; // (or inputList[0][0][0])
+</pre>
+
+<p>The output list is structured in exactly the same way; it's an array of outputs, each of which is an array of channels, each of which is an array of <code>Float32Array</code> objects, which contain the samples for that channel.</p>
+
+<p>How you use the inputs and how you generate the outputs depends very much on your processor. If your processor is just a generator, it can ignore the inputs and just replace the contents of the outputs with the generated data. Or you can process each input independently, applying an algorithm to the incoming data on each channel of each input and writing the results into the corresponding outputs' channels (keeping in mind that the number of inputs and outputs may differ, and the channel counts on those inputs and outputs may also differ). Or you can take all the inputs and perform mixing or other computations that result in a single output being filled with data (or all the outputs being filled with the same data).</p>
+
+<p>It's entirely up to you. This is a very powerful tool in your audio programming toolkit.</p>
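+
+<p>As a taste of the "just a generator" case mentioned above, here is a hedged sketch of a <code>process()</code> implementation that ignores its inputs entirely and fills every output channel with white noise:</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  // Ignore inputList completely; write random samples into every output.
+  for (const output of outputList) {
+    for (const channel of output) {
+      for (let i = 0; i &lt; channel.length; i++) {
+        channel[i] = Math.random() * 2 - 1; // random sample in the range [-1, 1)
+      }
+    }
+  }
+
+  return true;
+}
+</pre>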
+
+<h3 id="Processing_multiple_inputs">Processing multiple inputs</h3>
+
+<p>Let's take a look at an implementation of <code>process()</code> that can process multiple inputs, with each input being used to generate the corresponding output. Any excess inputs are ignored.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const sourceLimit = Math.min(inputList.length, outputList.length);
+
+  for (let inputNum = 0; inputNum &lt; sourceLimit; inputNum++) {
+    let input = inputList[inputNum];
+    let output = outputList[inputNum];
+    let channelCount = Math.min(input.length, output.length);
+
+    for (let channelNum = 0; channelNum &lt; channelCount; channelNum++) {
+      let sampleCount = input[channelNum].length;
+
+      for (let i = 0; i &lt; sampleCount; i++) {
+        let sample = input[channelNum][i];
+
+        /* Manipulate the sample */
+
+        output[channelNum][i] = sample;
+      }
+    }
+  };
+
+  return true;
+}
+</pre>
+
+<p>Note that when determining the number of sources to process and send through to the corresponding outputs, we use <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/min">Math.min()</a></code> to ensure that we only process as many channels as we have room for in the output list. The same check is performed when determining how many channels to process in the current input; we only process as many as there are room for in the destination output. This avoids errors due to overrunning these arrays.</p>
+
+<h3 id="Mixing_inputs">Mixing inputs</h3>
+
+<p>Many nodes perform <strong>mixing</strong> operations, where the inputs are combined in some way into a single output. This is demonstrated in the following example.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const sourceLimit = Math.min(inputList.length, outputList.length);
+   for (let inputNum = 0; inputNum &lt; sourceLimit; inputNum++) {
+     let input = inputList[inputNum];
+     let output = outputList[0];
+     let channelCount = Math.min(input.length, output.length);
+
+     for (let channelNum = 0; channelNum &lt; channelCount; channelNum++) {
+       let sampleCount = input[channelNum].length;
+
+       for (let i = 0; i &lt; sampleCount; i++) {
+         let sample = output[channelNum][i] + input[channelNum][i];
+
+         if (sample &gt; 1.0) {
+           sample = 1.0;
+         } else if (sample &lt; -1.0) {
+           sample = -1.0;
+         }
+
+         output[channelNum][i] = sample;
+       }
+     }
+   };
+
+  return true;
+}
+</pre>
+
+<p>This is similar code to the previous sample in many ways, but only the first output—<code>outputList[0]</code>—is altered. Each sample is added to the corresponding sample in the output buffer, with a simple code fragment in place to prevent the samples from exceeding the legal range of -1.0 to 1.0 by capping the values; there are other ways to avoid clipping that are perhaps less prone to distortion, but this is a simple example that's better than nothing.</p>
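+
+<p>As one example of a gentler alternative (still just a sketch), the hard cap in the inner loop could be swapped for a soft clip using <code>Math.tanh()</code>, which squashes peaks smoothly instead of flattening them:</p>
+
+<pre class="brush: js">// Soft clip: tanh() maps any value into the range (-1, 1).
+let sample = Math.tanh(output[channelNum][i] + input[channelNum][i]);
+output[channelNum][i] = sample;
+</pre>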
+
+<h2 id="Lifetime_of_an_audio_worklet_processor">Lifetime of an audio worklet processor</h2>
+
+<p>The only means by which you can influence the lifespan of your audio worklet processor is through the value returned by <code>process()</code>, which should be a Boolean value indicating whether or not to override the {{Glossary("user agent")}}'s decision-making as to whether or not your node is still in use.</p>
+
+<p>In general, the lifetime policy of any audio node is simple: if the node is still considered to be actively processing audio, it will continue to be used. In the case of an {{domxref("AudioWorkletNode")}}, the node is considered to be active if its <code>process()</code> function returns <code>true</code> <em>and</em> the node is either generating content as a source for audio data, or is receiving data from one or more inputs.</p>
+
+<p>Specifying a value of <code>true</code> as the result from your <code>process()</code> function in essence tells the Web Audio API that your processor needs to keep being called even if the API doesn't think there's anything left for you to do. In other words, <code>true</code> overrides the API's logic and gives you control over your processor's lifetime policy, keeping the processor's owning {{domxref("AudioWorkletNode")}} running even when it would otherwise decide to shut down the node.</p>
+
+<p>Returning <code>false</code> from the <code>process()</code> method tells the API that it should follow its normal logic and shut down your processor node if it deems it appropriate to do so. If the API determines that your node is no longer needed, <code>process()</code> will not be called again.</p>
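+
+<p>As an illustration, here is a hedged sketch of a generator processor that keeps itself alive only while it still has samples left to emit, using a hypothetical <code>samplesRemaining</code> counter set up in its constructor:</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const output = outputList[0];
+  const frameCount = output[0].length;
+
+  /* ...fill the output channels here... */
+
+  this.samplesRemaining -= frameCount;
+
+  // Once we're out of samples, returning false lets the browser
+  // shut the node down when it sees fit.
+  return this.samplesRemaining &gt; 0;
+}
+</pre>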
+
+<div class="notecard note">
+<p><strong>Note:</strong> At this time, unfortunately, Chrome does not implement this algorithm in a manner that matches the current standard. Instead, it keeps the node alive if you return <code>true</code> and shuts it down if you return <code>false</code>. Thus for compatibility reasons you must always return <code>true</code> from <code>process()</code>, at least on Chrome. However, once <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=921354">this Chrome issue</a> is fixed, you will want to change this behavior if possible as it may have a slight negative impact on performance.</p>
+</div>
+
+<h2 id="Creating_an_audio_processor_worklet_node">Creating an audio processor worklet node</h2>
+
+<p>To create an audio node that pumps blocks of audio data through an {{domxref("AudioWorkletProcessor")}}, you need to follow these simple steps:</p>
+
+<ol>
+ <li>Load and install the audio processor module</li>
+ <li>Create an {{domxref("AudioWorkletNode")}}, specifying the audio processor module to use by its name</li>
+ <li>Connect inputs to the <code>AudioWorkletNode</code> and its outputs to appropriate destinations (either other nodes or to the {{domxref("AudioContext")}} object's {{domxref("AudioContext.destination", "destination")}} property).</li>
+</ol>
+
+<p>To use an audio worklet processor, you can use code similar to the following:</p>
+
+<pre class="brush: js">let audioContext = null;
+
+async function createMyAudioProcessor() {
+  if (!audioContext) {
+  try {
+   audioContext = new AudioContext();
+  await audioContext.resume();
+   await audioContext.audioWorklet.addModule("module-url/module.js");
+  } catch(e) {
+  return null;
+  }
+ }
+
+  return new AudioWorkletNode(audioContext, "processor-name");
+}
+</pre>
+
+<p>This <code>createMyAudioProcessor()</code> function creates and returns a new instance of {{domxref("AudioWorkletNode")}} configured to use your audio processor. It also handles creating the audio context if it hasn't already been done.</p>
+
+<p>In order to ensure the context is usable, this starts by creating the context if it's not already available, then adds the module containing the processor to the worklet. Once that's done, it instantiates and returns a new <code>AudioWorkletNode</code>. Once you have that in hand, you connect it to other nodes and otherwise use it just like any other node.</p>
+
+<p>You can then create a new audio processor node by doing this:</p>
+
+<pre class="brush: js">let newProcessorNode = createMyAudioProcessor();</pre>
+
+<p>If the returned value, <code>newProcessorNode</code>, is non-<code>null</code>, we have a valid audio context with its hiss processor node in place and ready to use.</p>
+
+<h2 id="Supporting_audio_parameters">Supporting audio parameters</h2>
+
+<p>Just like any other Web Audio node, {{domxref("AudioWorkletNode")}} supports parameters, which are shared with the {{domxref("AudioWorkletProcessor")}} that does the actual work.</p>
+
+<h3 id="Adding_parameter_support_to_the_processor">Adding parameter support to the processor</h3>
+
+<p>To add parameters to an {{domxref("AudioWorkletNode")}}, you need to define them within your {{domxref("AudioWorkletProcessor")}}-based processor class in your module. This is done by adding the static getter {{domxref("AudioWorkletProcessor.parameterDescriptors", "parameterDescriptors")}} to your class. This function should return an array of {{domxref("AudioParam")}} objects, one for each parameter supported by the processor.</p>
+
+<p>In the following implementation of <code>parameterDescriptors()</code>, the returned array has two <code>AudioParam</code> objects. The first defines <code>gain</code> as a value between 0 and 1, with a default value of 0.5. The second parameter is named <code>frequency</code> and defaults to 440.0, with a range from 27.5 to 4186.009, inclusively.</p>
+
+<pre class="brush: js">static get parameterDescriptors() {
+ return [
+ {
+ name: "gain",
+ defaultValue: 0.5,
+ minValue: 0,
+ maxValue: 1
+ },
+  {
+  name: "frequency",
+  defaultValue: 440.0;
+  minValue: 27.5,
+  maxValue: 4186.009
+  }
+ ];
+}</pre>
+
+<p>Accessing your processor node's parameters is as simple as looking them up in the <code>parameters</code> object passed into your implementation of {{domxref("AudioWorkletProcessor.process", "process()")}}. Within the <code>parameters</code> object are arrays, one for each of your parameters, and sharing the same names as your parameters.</p>
+
+<dl>
+ <dt>A-rate parameters</dt>
+ <dd>For a-rate parameters—parameters whose values automatically change over time—the parameter's entry in the <code>parameters</code> object is an array of values, one for each frame in the block being processed. These values are to be applied to the corresponding frames.</dd>
+ <dt>K-rate parameters</dt>
+ <dd>K-rate parameters, on the other hand, can only change once per block, so the parameter's array has only a single entry. Use that value for every frame in the block.</dd>
+</dl>
+
+<p>In the code below, we see a <code>process()</code> function that handles a <code>gain</code> parameter which can be used as either an a-rate or k-rate parameter. Our node only supports one input, so it just takes the first input in the list, applies the gain to it, and writes the resulting data to the first output's buffer.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const input = inputList[0];
+  const output = outputList[0];
+  const gain = parameters.gain;
+
+  for (let channelNum = 0; channelNum &lt; input.length; channel++) {
+  const inputChannel = input[channel];
+  const outputChannel = output[channel];
+
+ // If gain.length is 1, it's a k-rate parameter, so apply
+ // the first entry to every frame. Otherwise, apply each
+ // entry to the corresponding frame.
+
+  if (gain.length === 1) {
+  for (let i = 0; i &lt; inputChannel.length; i++) {
+  outputChannel[i] = inputChannel[i] * gain[0];
+  }
+  } else {
+  for (let i = 0; i &lt; inputChannel.length; i++) {
+  outputChannel[i] = inputChannel[i] * gain[i];
+  }
+  }
+  }
+
+  return true;
+}
+</pre>
+
+<p>Here, if <code>gain.length</code> indicates that there's only a single value in the <code>gain</code> parameter's array of values, the first entry in the array is applied to every frame in the block. Otherwise, for each frame in the block, the corresponding entry in <code>gain[]</code> is applied.</p>
+
+<h3 id="Accessing_parameters_from_the_main_thread_script">Accessing parameters from the main thread script</h3>
+
+<p>Your main thread script can access the parameters just like it can any other node. To do so, first you need to get a reference to the parameter by calling the {{domxref("AudioWorkletNode")}}'s {{domxref("AudioWorkletNode.parameters", "parameters")}} property's {{domxref("AudioParamMap.get", "get()")}} method:</p>
+
+<pre class="brush: js">let gainParam = myAudioWorkletNode.parameters.get("gain");
+</pre>
+
+<p>The value returned and stored in <code>gainParam</code> is the {{domxref("AudioParam")}} used to store the <code>gain</code> parameter. You can then change its value effective at a given time using the {{domxref("AudioParam")}} method {{domxref("AudioParam.setValueAtTime", "setValueAtTime()")}}.</p>
+
+<p>Here, for example, we set the value to <code>newValue</code>, effective immediately.</p>
+
+<pre class="brush: js">gainParam.setValueAtTime(newValue, audioContext.currentTime);</pre>
+
+<p>You can similarly use any of the other methods in the {{domxref("AudioParam")}} interface to apply changes over time, to cancel scheduled changes, and so forth.</p>
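+
+<p>For example, a minimal sketch of fading the gain down over two seconds, or throwing away automation that has already been scheduled:</p>
+
+<pre class="brush: js">// Ramp the gain down to 0 over the next two seconds...
+gainParam.linearRampToValueAtTime(0, audioContext.currentTime + 2);
+
+// ...or cancel any changes that are scheduled but not yet applied.
+gainParam.cancelScheduledValues(audioContext.currentTime);
+</pre>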
+
+<p>Reading the value of a parameter is as simple as looking at its {{domxref("AudioParam.value", "value")}} property:</p>
+
+<pre class="brush: js">let currentGain = gainParam.value;</pre>
+
+<h2 id="See_also">See also</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a></li>
+ <li><a href="https://developers.google.com/web/updates/2017/12/audio-worklet">Enter Audio Worklet</a> (Google Developers blog)</li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png b/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png
new file mode 100644
index 0000000000..0e701a2b6a
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/using_iir_filters/index.html b/files/ko/web/api/web_audio_api/using_iir_filters/index.html
new file mode 100644
index 0000000000..0c48b1096c
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/using_iir_filters/index.html
@@ -0,0 +1,198 @@
+---
+title: Using IIR filters
+slug: Web/API/Web_Audio_API/Using_IIR_filters
+tags:
+ - API
+ - Audio
+ - Guide
+ - IIRFilter
+ - Using
+ - Web Audio API
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<p class="summary">The <strong><code>IIRFilterNode</code></strong> interface of the <a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a> is an {{domxref("AudioNode")}} processor that implements a general <a href="https://en.wikipedia.org/wiki/infinite%20impulse%20response">infinite impulse response</a> (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers, and the filter response parameters can be specified, so that it can be tuned as needed. This article looks at how to implement one, and use it in a simple example.</p>
+
+<h2 id="Demo">Demo</h2>
+
+<p>Our simple example for this guide provides a play/pause button that starts and pauses audio play, and a toggle that turns an IIR filter on and off, altering the tone of the sound. It also provides a canvas on which is drawn the frequency response of the audio, so you can see what effect the IIR filter has.</p>
+
+<p><img alt="A demo featuring a play button, and toggle to turn a filter on and off, and a line graph showing the filter frequencies returned after the filter has been applied." src="iir-filter-demo.png"></p>
+
+<p>You can check out the <a href="https://codepen.io/Rumyra/pen/oPxvYB/">full demo here on Codepen</a>. Also see the <a href="https://github.com/mdn/webaudio-examples/tree/master/iirfilter-node">source code on GitHub</a>. It includes some different coefficient values for different lowpass frequencies — you can change the value of the <code>filterNumber</code> constant to a value between 0 and 3 to check out the different available effects.</p>
+
+<h2 id="Browser_support">Browser support</h2>
+
+<p><a href="/en-US/docs/Web/API/IIRFilterNode">IIR filters</a> are supported well across modern browsers, although they have been implemented more recently than some of the more longstanding Web Audio API features, like <a href="/en-US/docs/Web/API/BiquadFilterNode">Biquad filters</a>.</p>
+
+<h2 id="The_IIRFilterNode">The IIRFilterNode</h2>
+
+<p>The Web Audio API now comes with an {{domxref("IIRFilterNode")}} interface. But what is this and how does it differ from the {{domxref("BiquadFilterNode")}} we have already?</p>
+
+<p>An IIR filter is an <strong>infinite impulse response filter</strong>. It's one of two primary types of filters used in audio and digital signal processing. The other type is FIR — <strong>finite impulse response filter</strong>. There's a really good overview of <a href="https://dspguru.com/dsp/faqs/iir/basics/">IIR filters and FIR filters here</a>.</p>
+
+<p>A biquad filter is actually a <em>specific type</em> of infinite impulse response filter. It's a commonly-used type and we already have it as a node in the Web Audio API. If you choose this node the hard work is done for you. For instance, if you want to filter lower frequencies from your sound, you can set the <a href="/en-US/docs/Web/API/BiquadFilterNode/type">type</a> to <code>highpass</code> and then set which frequency to filter from (or cut off). <a href="http://www.earlevel.com/main/2003/02/28/biquads/">There's more information on how biquad filters work here</a>.</p>
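+
+<p>For example, a quick sketch of that pre-programmed route (assuming an existing audio context called <code>audioCtx</code>, as created later in this article):</p>
+
+<pre class="brush: js">const biquadFilter = audioCtx.createBiquadFilter();
+biquadFilter.type = 'highpass';
+biquadFilter.frequency.value = 1000; // let through everything above roughly 1kHz
+</pre>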
+
+<p>When you are using an {{domxref("IIRFilterNode")}} instead of a {{domxref("BiquadFilterNode")}} you are creating the filter yourself, rather than just choosing a pre-programmed type. So you can create a highpass filter, or a lowpass filter, or a more bespoke one. And this is where the IIR filter node is useful — you can create your own if none of the alaready available settings is right for what you want. As well as this, if your audio graph needed a highpass and a bandpass filter within it, you could just use one IIR filter node in place of the two biquad filter nodes you would otherwise need for this.</p>
+
+<p>With the IIRFilter node it's up to you to set what <code>feedforward</code> and <code>feedback</code> values the filter needs — this determines the characteristics of the filter. The downside is that this involves some complex maths.</p>
+
+<p>If you are looking to learn more there's some <a href="http://ece.uccs.edu/~mwickert/ece2610/lecture_notes/ece2610_chap8.pdf">information about the maths behind IIR filters here</a>. This enters the realms of signal processing theory — don't worry if you look at it and feel like it's not for you.</p>
+
+<p>If you want to play with the IIR filter node and need some values to help along the way, there's <a href="http://www.dspguide.com/CH20.PDF">a table of already calculated values here</a>; on pages 4 &amp; 5 of the linked PDF the a<em>n</em> values refer to the <code>feedForward</code> values and the b<em>n</em> values refer to the <code>feedback</code>. <a href="http://musicdsp.org/">musicdsp.org</a> is also a great resource if you want to read more about different filters and how they are implemented digitally.</p>
+
+<p>With that all in mind, let's take a look at the code to create an IIR filter with the Web Audio API.</p>
+
+<h2 id="Setting_our_IIRFilter_co-efficients">Setting our IIRFilter co-efficients</h2>
+
+<p>When creating an IIR filter, we pass in the <code>feedforward</code> and <code>feedback</code> coefficients as options (coefficients is how we describe the values). Both of these parameters are arrays, neither of which can be larger than 20 items.</p>
+
+<p>When setting our coefficients, the <code>feedforward</code> values can't all be set to zero, otherwise nothing would be sent to the filter. Something like this is acceptable:</p>
+
+<pre class="brush: js">let feedForward = [0.00020298, 0.0004059599, 0.00020298];
+</pre>
+
+<p>Our <code>feedback</code> values cannot start with zero, otherwise on the first pass nothing would be sent back:</p>
+
+<pre class="brush: js">let feedBackward = [1.0126964558, -1.9991880801, 0.9873035442];
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: These values are calculated based on the lowpass filter specified in the <a href="https://webaudio.github.io/web-audio-api/#filters-characteristics">filter characteristics of the Web Audio API specification</a>. As this filter node gains more popularity we should be able to collate more coefficient values.</p>
+</div>
+
+<h2 id="Using_an_IIRFilter_in_an_audio_graph">Using an IIRFilter in an audio graph</h2>
+
+<p>Let's create our context and our filter node:</p>
+
+<pre class="brush: js">const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();
+
+const iirFilter = audioCtx.createIIRFilter(feedForward, feedBackward);
+</pre>
+
+<p>We need a sound source to play. We set this up using a custom function, <code>playSourceNode()</code>, which <a href="/en-US/docs/Web/API/BaseAudioContext/createBufferSource">creates a buffer source</a> from an existing {{domxref("AudioBuffer")}}, attaches it to the default destination, starts it playing, and returns it:</p>
+
+<pre class="brush: js">function playSourceNode(audioContext, audioBuffer) {
+ const soundSource = audioContext.createBufferSource();
+ soundSource.buffer = audioBuffer;
+ soundSource.connect(audioContext.destination);
+ soundSource.start();
+ return soundSource;
+}</pre>
+
+<p>This function is called when the play button is pressed. The play button HTML looks like this:</p>
+
+<pre class="brush: html">&lt;button class="button-play" role="switch" data-playing="false" aria-pressed="false"&gt;Play&lt;/button&gt;</pre>
+
+<p>And the <code>click</code> event listener starts like so:</p>
+
+<pre class="brush: js">playButton.addEventListener('click', function() {
+ if (this.dataset.playing === 'false') {
+ srcNode = playSourceNode(audioCtx, sample);
+ ...
+}, false);</pre>
+
+<p>The toggle that turns the IIR filter on and off is set up in a similar way. First, the HTML:</p>
+
+<pre>&lt;button class="button-filter" role="switch" data-filteron="false" aria-pressed="false" aria-describedby="label" disabled&gt;&lt;/button&gt;</pre>
+
+<p>The filter button's <code>click</code> handler then connects the <code>IIRFilter</code> up to the graph, between the source and the destination:</p>
+
+<pre class="brush: js">filterButton.addEventListener('click', function() {
+ if (this.dataset.filteron === 'false') {
+ srcNode.disconnect(audioCtx.destination);
+ srcNode.connect(iirFilter).connect(audioCtx.destination);
+ ...
+}, false);</pre>
+
+<h3 id="Frequency_response">Frequency response</h3>
+
+<p>We only have one method available on {{domxref("IIRFilterNode")}} instances, <code>getFrequencyResponse()</code>; this allows us to see what is happening to the frequencies of the audio being passed into the filter.</p>
+
+<p>Let's draw a frequency plot of the filter we've created with the data we get back from this method.</p>
+
+<p>We need to create three arrays: one of the frequency values for which we want to receive the magnitude response and phase response, and two empty arrays to receive the data. All three of these have to be of type <a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array"><code>Float32Array</code></a> and all be of the same size.</p>
+
+<pre class="brush: js">// arrays for our frequency response
+const totalArrayItems = 30;
+let myFrequencyArray = new Float32Array(totalArrayItems);
+let magResponseOutput = new Float32Array(totalArrayItems);
+let phaseResponseOutput = new Float32Array(totalArrayItems);
+</pre>
+
+<p>Let's fill our first array with frequency values we want data to be returned on:</p>
+
+<pre class="brush: js">myFrequencyArray = myFrequencyArray.map(function(item, index) {
+ return Math.pow(1.4, index);
+});
+</pre>
+
+<p>We could go for a linear approach, but it's far better when working with frequencies to take a log approach, so let's fill our array with frequency values that get larger further on in the array items.</p>
+
+<p>Now let's get our response data:</p>
+
+<pre class="brush: js">iirFilter.getFrequencyResponse(myFrequencyArray, magResponseOutput, phaseResponseOutput);
+</pre>
+
+<p>We can use this data to draw a filter frequency plot. We'll do so on a 2d canvas context.</p>
+
+<pre class="brush: js">// create a canvas element and append it to our dom
+const canvasContainer = document.querySelector('.filter-graph');
+const canvasEl = document.createElement('canvas');
+canvasContainer.appendChild(canvasEl);
+
+// set 2d context and set dimensions
+const canvasCtx = canvasEl.getContext('2d');
+const width = canvasContainer.offsetWidth;
+const height = canvasContainer.offsetHeight;
+canvasEl.width = width;
+canvasEl.height = height;
+
+// set background fill
+canvasCtx.fillStyle = 'white';
+canvasCtx.fillRect(0, 0, width, height);
+
+// set up some spacing based on size
+const spacing = width/16;
+const fontSize = Math.floor(spacing/1.5);
+
+// draw our axis
+canvasCtx.lineWidth = 2;
+canvasCtx.strokeStyle = 'grey';
+
+canvasCtx.beginPath();
+canvasCtx.moveTo(spacing, spacing);
+canvasCtx.lineTo(spacing, height-spacing);
+canvasCtx.lineTo(width-spacing, height-spacing);
+canvasCtx.stroke();
+
+// axis is gain by frequency -&gt; make labels
+canvasCtx.font = fontSize+'px sans-serif';
+canvasCtx.fillStyle = 'grey';
+canvasCtx.fillText('1', spacing-fontSize, spacing+fontSize);
+canvasCtx.fillText('g', spacing-fontSize, (height-spacing+fontSize)/2);
+canvasCtx.fillText('0', spacing-fontSize, height-spacing+fontSize);
+canvasCtx.fillText('Hz', width/2, height-spacing+fontSize);
+canvasCtx.fillText('20k', width-spacing, height-spacing+fontSize);
+
+// loop over our magnitude response data and plot our filter
+
+canvasCtx.beginPath();
+
+for(let i = 0; i &lt; magResponseOutput.length; i++) {
+
+ if (i === 0) {
+ canvasCtx.moveTo(spacing, height-(magResponseOutput[i]*100)-spacing );
+ } else {
+ canvasCtx.lineTo((width/totalArrayItems)*i, height-(magResponseOutput[i]*100)-spacing );
+ }
+
+}
+
+canvasCtx.stroke();
+</pre>
+
+<h2 id="Summary">Summary</h2>
+
+<p>That's it for our IIRFilter demo. This should have shown you how to use the basics, and helped you to understand what it's useful for and how it works.</p>
diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png
new file mode 100644
index 0000000000..a31829c5d1
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html
new file mode 100644
index 0000000000..c0dd84ee68
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html
@@ -0,0 +1,189 @@
+---
+title: Visualizations with Web Audio API
+slug: Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API
+tags:
+ - API
+ - Web Audio API
+ - analyser
+ - fft
+ - visualisation
+ - visualization
+ - waveform
+---
+<div class="summary">
+<p>One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source, which can then be used to create visualizations. This article explains how, and provides a couple of basic use cases.</p>
+</div>
+
+<div class="note">
+<p><strong>Note</strong>: You can find working examples of all the code snippets in our <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> demo.</p>
+</div>
+
+<h2 id="Basic_concepts">Basic concepts</h2>
+
+<p>To extract data from your audio source, you need an {{ domxref("AnalyserNode") }}, which is created using the {{ domxref("BaseAudioContext.createAnalyser") }} method, for example:</p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+</pre>
+
+<p>This node is then connected to your audio source at some point between your source and your destination, for example:</p>
+
+<pre class="brush: js">source = audioCtx.createMediaStreamSource(stream);
+source.connect(analyser);
+analyser.connect(distortion);
+distortion.connect(audioCtx.destination);</pre>
+
+<div class="note">
+<p><strong>Note</strong>: you don't need to connect the analyser's output to another node for it to work, as long as its input is connected to the audio source, either directly or via another node.</p>
+</div>
+
+<p>The analyser node will then capture audio data using a Fast Fourier Transform (fft) in a certain frequency domain, depending on what you specify as the {{ domxref("AnalyserNode.fftSize") }} property value (if no value is specified, the default is 2048.)</p>
+
+<div class="note">
+<p><strong>Note</strong>: You can also specify a minimum and maximum power value for the fft data scaling range, using {{ domxref("AnalyserNode.minDecibels") }} and {{ domxref("AnalyserNode.maxDecibels") }}, and different data averaging constants using {{ domxref("AnalyserNode.smoothingTimeConstant") }}. Read those pages to get more information on how to use them.</p>
+</div>
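+
+<p>For instance, a minimal sketch of adjusting them (the exact values here are only illustrative):</p>
+
+<pre class="brush: js">analyser.minDecibels = -90;
+analyser.maxDecibels = -10;
+analyser.smoothingTimeConstant = 0.85;
+</pre>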
+
+<p>To capture data, you need to use the methods {{ domxref("AnalyserNode.getFloatFrequencyData()") }} and {{ domxref("AnalyserNode.getByteFrequencyData()") }} to capture frequency data, and {{ domxref("AnalyserNode.getByteTimeDomainData()") }} and {{ domxref("AnalyserNode.getFloatTimeDomainData()") }} to capture waveform data.</p>
+
+<p>These methods copy data into a specified array, so you need to create a new array to receive the data before invoking one. The <code>Float</code> methods produce 32-bit floating point numbers, and the <code>Byte</code> methods produce 8-bit unsigned integers; therefore a standard JavaScript array won't do — you need to use a {{ domxref("Float32Array") }} or {{ domxref("Uint8Array") }} array, depending on what data you are handling.</p>
+
+<p>So for example, say we are dealing with an fft size of 2048. We return the {{ domxref("AnalyserNode.frequencyBinCount") }} value, which is half the fft size, then call <code>Uint8Array()</code> with the <code>frequencyBinCount</code> as its length argument — this is how many data points we will be collecting, for that fft size.</p>
+
+<pre class="brush: js">analyser.fftSize = 2048;
+var bufferLength = analyser.frequencyBinCount;
+var dataArray = new Uint8Array(bufferLength);</pre>
+
+<p>To actually retrieve the data and copy it into our array, we then call the data collection method we want, with the array passed as its argument. For example:</p>
+
+<pre class="brush: js">analyser.getByteTimeDomainData(dataArray);</pre>
+
+<p>We now have the audio data for that moment in time captured in our array, and can proceed to visualize it however we like, for example by plotting it onto an HTML5 {{ htmlelement("canvas") }}.</p>
+
+<p>Let's go on to look at some specific examples.</p>
+
+<h2 id="Creating_a_waveformoscilloscope">Creating a waveform/oscilloscope</h2>
+
+<p>To create the oscilloscope visualisation (hat tip to <a href="http://soledadpenades.com/">Soledad Penadés</a> for the original code in <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L123-L167">Voice-change-O-matic</a>), we first follow the standard pattern described in the previous section to set up the buffer:</p>
+
+<pre class="brush: js">analyser.fftSize = 2048;
+var bufferLength = analyser.frequencyBinCount;
+var dataArray = new Uint8Array(bufferLength);</pre>
+
+<p>Next, we clear the canvas of what had been drawn on it before to get ready for the new visualization display:</p>
+
+<pre class="brush: js">canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);</pre>
+
+<p>We now define the <code>draw()</code> function:</p>
+
+<pre class="brush: js">function draw() {</pre>
+
+<p>In here, we use <code>requestAnimationFrame()</code> to keep looping the drawing function once it has been started:</p>
+
+<pre class="brush: js">var drawVisual = requestAnimationFrame(draw);</pre>
+
+<p>Next, we grab the time domain data and copy it into our array</p>
+
+<pre class="brush: js">analyser.getByteTimeDomainData(dataArray);</pre>
+
+<p>Next, fill the canvas with a solid color to start</p>
+
+<pre class="brush: js">canvasCtx.fillStyle = 'rgb(200, 200, 200)';
+canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);</pre>
+
+<p>Set a line width and stroke color for the wave we will draw, then begin drawing a path</p>
+
+<pre class="brush: js">canvasCtx.lineWidth = 2;
+canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+canvasCtx.beginPath();</pre>
+
+<p>Determine the width of each segment of the line to be drawn by dividing the canvas width by the array length (equal to the <code>frequencyBinCount</code>, as defined earlier on), then define an x variable to track the position to move to for drawing each segment of the line.</p>
+
+<pre class="brush: js">var sliceWidth = WIDTH * 1.0 / bufferLength;
+var x = 0;</pre>
+
+<p>Now we run through a loop, defining the position of a small segment of the wave for each point in the buffer at a certain height based on the data point value from the array, then moving the line across to the place where the next wave segment should be drawn:</p>
+
+<pre class="brush: js"> for(var i = 0; i &lt; bufferLength; i++) {
+
+ var v = dataArray[i] / 128.0;
+ var y = v * HEIGHT/2;
+
+ if(i === 0) {
+ canvasCtx.moveTo(x, y);
+ } else {
+ canvasCtx.lineTo(x, y);
+ }
+
+ x += sliceWidth;
+ }</pre>
+
+<p>Finally, we finish the line in the middle of the right hand side of the canvas, then draw the stroke we've defined:</p>
+
+<pre class="brush: js"> canvasCtx.lineTo(canvas.width, canvas.height/2);
+ canvasCtx.stroke();
+ };</pre>
+
+<p>At the end of this section of code, we invoke the <code>draw()</code> function to start off the whole process:</p>
+
+<pre class="brush: js"> draw();</pre>
+
+<p>This gives us a nice waveform display that updates several times a second:</p>
+
+<p><img alt="a black oscilloscope line, showing the waveform of an audio signal" src="wave.png"></p>
+
+<h2 id="Creating_a_frequency_bar_graph">Creating a frequency bar graph</h2>
+
+<p>Another nice little sound visualization to create is one of those Winamp-style frequency bar graphs. We have one available in Voice-change-O-matic; let's look at how it's done.</p>
+
+<p>First, we again set up our analyser and data array, then clear the current canvas display with <code>clearRect()</code>. The only difference from before is that we have set the fft size to be much smaller; this is so that each bar in the graph is big enough to actually look like a bar rather than a thin strand.</p>
+
+<pre class="brush: js">analyser.fftSize = 256;
+var bufferLength = analyser.frequencyBinCount;
+console.log(bufferLength);
+var dataArray = new Uint8Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);</pre>
+
+<p>Next, we start our <code>draw()</code> function off, again setting up a loop with <code>requestAnimationFrame()</code> so that the displayed data keeps updating, and clearing the display with each animation frame.</p>
+
+<pre class="brush: js"> function draw() {
+ drawVisual = requestAnimationFrame(draw);
+
+ analyser.getByteFrequencyData(dataArray);
+
+ canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+ canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);</pre>
+
+<p>Now we set our <code>barWidth</code> to be equal to the canvas width divided by the number of bars (the buffer length). However, we are also multiplying that width by 2.5, because most of the frequencies will come back as having no audio in them, as most of the sounds we hear every day are in a certain lower frequency range. We don't want to display loads of empty bars, therefore we shift the ones that will display regularly at a noticeable height across so they fill the canvas display.</p>
+
+<p>We also set a <code>barHeight</code> variable, and an <code>x</code> variable to record how far across the screen to draw the current bar.</p>
+
+<pre class="brush: js">var barWidth = (WIDTH / bufferLength) * 2.5;
+var barHeight;
+var x = 0;</pre>
+
+<p>As before, we now start a for loop and cycle through each value in the <code>dataArray</code>. For each one, we make the <code>barHeight</code> equal to the array value, set a fill color based on the <code>barHeight</code> (taller bars are brighter), and draw a bar at <code>x</code> pixels across the canvas, which is <code>barWidth</code> wide and <code>barHeight/2</code> tall (we eventually decided to cut each bar in half so they would all fit on the canvas better.)</p>
+
+<p>The one value that needs explaining is the vertical offset position we are drawing each bar at: <code>HEIGHT-barHeight/2</code>. I am doing this because I want each bar to stick up from the bottom of the canvas, not down from the top, as it would if we set the vertical position to 0. Therefore, we instead set the vertical position each time to the height of the canvas minus <code>barHeight/2</code>, so each bar will be drawn from partway down the canvas, down to the bottom.</p>
+
+<pre class="brush: js"> for(var i = 0; i &lt; bufferLength; i++) {
+ barHeight = dataArray[i]/2;
+
+ canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+ canvasCtx.fillRect(x,HEIGHT-barHeight/2,barWidth,barHeight);
+
+ x += barWidth + 1;
+ }
+ };</pre>
+
+<p>Again, at the end of the code we invoke the <code>draw()</code> function to set the whole process in motion.</p>
+
+<pre class="brush: js">draw();</pre>
+
+<p>This code gives us a result like the following:</p>
+
+<p><img alt="a series of red bars in a bar graph, showing intensity of different frequencies in an audio signal" src="bar-graph.png"></p>
+
+<div class="note">
+<p><strong>Note</strong>: The examples listed in this article have shown usage of {{ domxref("AnalyserNode.getByteFrequencyData()") }} and {{ domxref("AnalyserNode.getByteTimeDomainData()") }}. For working examples showing {{ domxref("AnalyserNode.getFloatFrequencyData()") }} and {{ domxref("AnalyserNode.getFloatTimeDomainData()") }}, refer to our <a href="https://mdn.github.io/voice-change-o-matic-float-data/">Voice-change-O-matic-float-data</a> demo (see the <a href="https://github.com/mdn/voice-change-o-matic-float-data">source code</a> too) — this is exactly the same as the original <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a>, except that it uses Float data, not unsigned byte data.</p>
+</div>
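+
+<p>If you want a quick taste of the float-based methods without digging into those demos, here is a minimal sketch (reusing the <code>analyser</code> set up above; the <code>logLoudestBin()</code> name is just for illustration) showing how <code>getFloatFrequencyData()</code> can be read. Note that the float version reports values in decibels rather than unsigned bytes:</p>
+
+<pre class="brush: js">// a minimal sketch: reading float frequency data from the same analyser
+const floatData = new Float32Array(analyser.frequencyBinCount);
+
+function logLoudestBin() {
+  analyser.getFloatFrequencyData(floatData);
+  // values are decibel levels, roughly between analyser.minDecibels and analyser.maxDecibels
+  let loudest = -Infinity;
+  for (let i = 0; i &lt; floatData.length; i++) {
+    if (floatData[i] &gt; loudest) {
+      loudest = floatData[i];
+    }
+  }
+  console.log('loudest bin: ' + loudest + ' dB');
+}</pre>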
diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png
new file mode 100644
index 0000000000..9254829d23
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html
new file mode 100644
index 0000000000..2846d45d6c
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html
@@ -0,0 +1,467 @@
+---
+title: Web audio spatialization basics
+slug: Web/API/Web_Audio_API/Web_audio_spatialization_basics
+tags:
+ - PannerNode
+ - Web Audio API
+ - panning
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<div class="summary">
+<p><span class="seoSummary">As if its extensive variety of sound processing (and other) options wasn't enough, the Web Audio API also includes facilities to allow you to emulate the difference in sound as a listener moves around a sound source, for example panning as you move around it inside a 3D game. The official term for this is <strong>spatialization</strong>, and this article will cover the basics of how to implement such a system.</span></p>
+</div>
+
+<h2 id="Basics_of_spatialization">Basics of spatialization</h2>
+
+<p>In Web Audio, complex 3D spatializations are created using the {{domxref("PannerNode")}}, which in layman's terms is basically a whole lotta cool maths to make audio appear in 3D space. Think sounds flying over you, creeping up behind you, moving across in front of you. That sort of thing.</p>
+
+<p>It's really useful for WebXR and gaming. In 3D spaces, it's the only way to achieve realistic audio. Libraries like <a href="https://threejs.org/">three.js</a> and <a href="https://aframe.io/">A-frame</a> harness its potential when dealing with sound. It's worth noting that you don't <em>have</em> to move sound within a full 3D space either — you could stick with just a 2D plane, so if you were planning a 2D game, this would still be the node you were looking for.</p>
+
+<div class="note">
+<p><strong>Note</strong>: There's also a {{domxref("StereoPannerNode")}} designed to deal with the common use case of creating simple left and right stereo panning effects. This is much simpler to use, but obviously nowhere near as versatile. If you just want a simple stereo panning effect, our <a href="https://mdn.github.io/webaudio-examples/stereo-panner-node/">StereoPannerNode example</a> (<a href="https://github.com/mdn/webaudio-examples/tree/master/stereo-panner-node">see source code</a>) should give you everything you need.</p>
+</div>
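+
+<p>For reference, a simple stereo pan with a {{domxref("StereoPannerNode")}} can be as small as the following sketch (the <code>audioCtx</code> and <code>track</code> names here stand in for whatever audio context and source node you already have):</p>
+
+<pre class="brush: js">// a minimal sketch: hard-pan a source to the left with a StereoPannerNode
+const stereoPanner = new StereoPannerNode(audioCtx, { pan: -1 }); // -1 is full left, 1 is full right
+track.connect(stereoPanner).connect(audioCtx.destination);</pre>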
+
+<h2 id="3D_boombox_demo">3D boombox demo</h2>
+
+<p>To demonstrate 3D spatialization we've created a modified version of the boombox demo we created in our basic <a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a> guide. See the <a href="https://mdn.github.io/webaudio-examples/spacialization/">3D spatialization demo live</a> (and see the <a href="https://github.com/mdn/webaudio-examples/tree/master/spacialization">source code</a> too).</p>
+
+<p><img alt="A simple UI with a rotated boombox and controls to move it left and right and in and out, and rotate it." src="web-audio-spatialization.png"></p>
+
+<p>The boombox sits inside a room (defined by the edges of the browser viewport), and in this demo, we can move and rotate it with the provided controls. When we move the boombox, the sound it produces changes accordingly, panning as it moves to the left or right of the room, or becoming quieter as it is moved away from the user or is rotated so the speakers are facing away from them, etc. This is done by setting the different properties of the <code>PannerNode</code> object instance in relation to that movement, to emulate spatialization.</p>
+
+<div class="note">
+<p><strong>Note</strong>: The experience is much better if you use headphones, or have some kind of surround sound system to plug your computer into.</p>
+</div>
+
+<h2 id="Creating_an_audio_listener">Creating an audio listener</h2>
+
+<p>So let's begin! The {{domxref("BaseAudioContext")}} (the interface the {{domxref("AudioContext")}} is extended from) has a <code><a href="/en-US/docs/Web/API/BaseAudioContext/listener">listener</a></code> property that returns an {{domxref("AudioListener")}} object. This represents the listener of the scene, usually your user. You can define where they are in space and in which direction they are facing. In this demo the listener remains static; the <code>PannerNode</code> can then calculate its sound position relative to the position of the listener.</p>
+
+<p>Let's create our context and listener and set the listener's position to emulate a person looking into our room:</p>
+
+<pre class="brush: js">const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();
+const listener = audioCtx.listener;
+
+const posX = window.innerWidth/2;
+const posY = window.innerHeight/2;
+const posZ = 300;
+
+listener.positionX.value = posX;
+listener.positionY.value = posY;
+listener.positionZ.value = posZ-5;
+</pre>
+
+<p>We could move the listener left or right using <code>positionX</code>, up or down using <code>positionY</code>, or in or out of the room using <code>positionZ</code>. Here we are setting the listener to be in the middle of the viewport and slightly in front of our boombox. We can also set the direction the listener is facing. The default values for these work well:</p>
+
+<pre class="brush: js">listener.forwardX.value = 0;
+listener.forwardY.value = 0;
+listener.forwardZ.value = -1;
+listener.upX.value = 0;
+listener.upY.value = 1;
+listener.upZ.value = 0;
+</pre>
+
+<p>The forward properties represent the 3D coordinate position of the listener's forward direction (i.e. the direction they are facing in), while the up properties represent the 3D coordinate position of the top of the listener's head. These two together can nicely set the direction.</p>
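+
+<p>Be aware that some browsers have historically not exposed these listener values as {{domxref("AudioParam")}}s. If you need to cover one of those, a fallback along the following lines (just a sketch, using the older <code>setPosition()</code> and <code>setOrientation()</code> methods) should do the job:</p>
+
+<pre class="brush: js">// a sketch of a fallback for browsers without the listener AudioParams
+if (listener.forwardX) {
+  listener.positionX.value = posX;
+  listener.positionY.value = posY;
+  listener.positionZ.value = posZ - 5;
+  listener.forwardX.value = 0;
+  listener.forwardY.value = 0;
+  listener.forwardZ.value = -1;
+  listener.upX.value = 0;
+  listener.upY.value = 1;
+  listener.upZ.value = 0;
+} else {
+  // deprecated, but still needed as a fallback in some browsers
+  listener.setPosition(posX, posY, posZ - 5);
+  listener.setOrientation(0, 0, -1, 0, 1, 0);
+}</pre>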
+
+<h2 id="Creating_a_panner_node">Creating a panner node</h2>
+
+<p>Let's create our {{domxref("PannerNode")}}. This has a whole bunch of properties associated with it. Let's take a look at each of them:</p>
+
+<p>To start we can set the <a href="/en-US/docs/Web/API/PannerNode/panningModel"><code>panningModel</code></a>. This is the spacialization algorithm that's used to position the audio in 3D space. We can set this to:</p>
+
+<ul>
+ <li><code>equalpower</code> — the default, and the general way panning is figured out.</li>
+ <li><code>HRTF</code> — this stands for 'Head-related transfer function' and looks to take into account the human head when figuring out where the sound is.</li>
+</ul>
+
+<p>Pretty clever stuff. Let's use the <code>HRTF</code> model!</p>
+
+<pre class="brush: js">const pannerModel = 'HRTF';
+</pre>
+
+<p>The <a href="/en-US/docs/Web/API/PannerNode/coneInnerAngle"><code>coneInnerAngle</code></a> and <a href="/en-US/docs/Web/API/PannerNode/coneOuterAngle"><code>coneOuterAngle</code></a> properties specify where the volume emanates from. By default, both are 360 degrees. Our boombox speakers will have smaller cones, which we can define. The inner cone is where gain (volume) is always at its maximum, and the gain starts to drop away in the outer cone, reaching the <a href="/en-US/docs/Web/API/PannerNode/coneOuterGain"><code>coneOuterGain</code></a> value outside it. Let's create constants that store the values we'll use for these parameters later on:</p>
+
+<pre class="brush: js">const innerCone = 60;
+const outerCone = 90;
+const outerGain = 0.3;
+</pre>
+
+<p>The next parameter is <a href="/en-US/docs/Web/API/PannerNode/distanceModel"><code>distanceModel</code></a> — this can only be set to <code>linear</code>, <code>inverse</code>, or <code>exponential</code>. These are different algorithms, which are used to reduce the volume of the audio source as it moves away from the listener. We'll use <code>linear</code>, as it is simple:</p>
+
+<pre class="brush: js">const distanceModel = 'linear';
+</pre>
+
+<p>We can set a maximum distance (<a href="/en-US/docs/Web/API/PannerNode/maxDistance"><code>maxDistance</code></a>) between the source and the listener — the volume will not be reduced any further if the source moves beyond this point. This can be useful: you may want to emulate distance without having the volume drop out entirely once the source is far enough away. By default, it's 10,000 (a unitless relative value). We can keep it as this:</p>
+
+<pre class="brush: js">const maxDistance = 10000;
+</pre>
+
+<p>There's also a reference distance (<code><a href="/en-US/docs/Web/API/PannerNode/refDistance">refDistance</a></code>), which is used by the distance models. We can keep that at the default value of <code>1</code> as well:</p>
+
+<pre class="brush: js">const refDistance = 1;
+</pre>
+
+<p>Then there's the roll-off factor (<a href="/en-US/docs/Web/API/PannerNode/rolloffFactor"><code>rolloffFactor</code></a>) — this determines how quickly the volume reduces as the source moves away from the listener. The default value is 1; let's make that a bit bigger to exaggerate our movements.</p>
+
+<pre class="brush: js">const rollOff = 10;
+</pre>
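+
+<p>To get a feel for how <code>refDistance</code>, <code>maxDistance</code>, and <code>rolloffFactor</code> interact, here is a rough sketch of the general shape of the <code>linear</code> distance model's gain curve. This is an illustration only (the browser applies the real formula internally, and may clamp the inputs), not code the demo needs:</p>
+
+<pre class="brush: js">// an illustration of the general shape of the linear distance model
+function linearDistanceGain(distance, refDistance, maxDistance, rolloffFactor) {
+  // no further reduction closer than refDistance or beyond maxDistance
+  const d = Math.max(refDistance, Math.min(distance, maxDistance));
+  return 1 - rolloffFactor * (d - refDistance) / (maxDistance - refDistance);
+}
+
+console.log(linearDistanceGain(1, 1, 10000, 1)); // 1, i.e. full volume at the reference distance</pre>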
+
+<p>Now we can set the position and orientation of our boombox. This is a lot like how we did it with our listener. These are also the parameters we're going to change when the controls on our interface are used.</p>
+
+<pre class="brush: js">const positionX = posX;
+const positionY = posY;
+const positionZ = posZ;
+
+const orientationX = 0.0;
+const orientationY = 0.0;
+const orientationZ = -1.0;
+</pre>
+
+<p>Note the minus value on our z orientation — this sets the boombox to face us. A positive value would set the sound source facing away from us.</p>
+
+<p>Let's use the relevant constructor for creating our panner node and pass in all those parameters we set above:</p>
+
+<pre class="brush: js">const panner = new PannerNode(audioCtx, {
+ panningModel: pannerModel,
+ distanceModel: distanceModel,
+ positionX: positionX,
+ positionY: positionY,
+ positionZ: positionZ,
+ orientationX: orientationX,
+ orientationY: orientationY,
+ orientationZ: orientationZ,
+ refDistance: refDistance,
+ maxDistance: maxDistance,
+ rolloffFactor: rollOff,
+ coneInnerAngle: innerCone,
+ coneOuterAngle: outerCone,
+ coneOuterGain: outerGain
+});
+</pre>
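+
+<p>The same node can also be created with the factory method and configured property by property, which may be handy in older browsers that lack the <code>PannerNode()</code> constructor. A sketch of the equivalent setup might look like this:</p>
+
+<pre class="brush: js">// a sketch of the same setup using the factory method
+const panner = audioCtx.createPanner();
+panner.panningModel = pannerModel;
+panner.distanceModel = distanceModel;
+panner.refDistance = refDistance;
+panner.maxDistance = maxDistance;
+panner.rolloffFactor = rollOff;
+panner.coneInnerAngle = innerCone;
+panner.coneOuterAngle = outerCone;
+panner.coneOuterGain = outerGain;
+panner.positionX.value = positionX;
+panner.positionY.value = positionY;
+panner.positionZ.value = positionZ;
+panner.orientationX.value = orientationX;
+panner.orientationY.value = orientationY;
+panner.orientationZ.value = orientationZ;</pre>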
+
+<h2 id="Moving_the_boombox">Moving the boombox</h2>
+
+<p>Now we're going to move our boombox around our 'room'. We've got some controls set up to do this. We can move it left and right, up and down, and back and forth; we can also rotate it. The sound direction is coming from the boombox speaker at the front, so when we rotate it, we can alter the sound's direction — i.e. make it project to the back when the boombox is rotated 180 degrees and facing away from us.</p>
+
+<p>We need to set up a few things for the interface. First, we'll get references to the elements we want to move, then we'll store references to the values we'll change when we set up <a href="/en-US/docs/Web/CSS/CSS_Transforms">CSS transforms</a> to actually do the movement. Finally, we'll set some bounds so our boombox doesn't move too far in any direction:</p>
+
+<pre class="brush: js">const moveControls = document.querySelector('#move-controls').querySelectorAll('button');
+const boombox = document.querySelector('.boombox-body');
+
+// the values for our css transforms
+let transform = {
+ xAxis: 0,
+ yAxis: 0,
+ zAxis: 0.8,
+ rotateX: 0,
+ rotateY: 0
+}
+
+// set our bounds
+const topBound = -posY;
+const bottomBound = posY;
+const rightBound = posX;
+const leftBound = -posX;
+const innerBound = 0.1;
+const outerBound = 1.5;
+</pre>
+
+<p>Let's create a function that takes the direction we want to move as a parameter, and both modifies the CSS transform and updates the position and orientation values of our panner node properties to change the sound as appropriate.</p>
+
+<p>To start with, let's take a look at our left, right, up, and down values, as these are pretty straightforward. We'll move the boombox along these axes and update the appropriate position.</p>
+
+<pre class="brush: js">function moveBoombox(direction) {
+ switch (direction) {
+ case 'left':
+ if (transform.xAxis &gt; leftBound) {
+ transform.xAxis -= 5;
+ panner.positionX.value -= 0.1;
+ }
+ break;
+ case 'up':
+ if (transform.yAxis &gt; topBound) {
+ transform.yAxis -= 5;
+ panner.positionY.value -= 0.3;
+ }
+ break;
+ case 'right':
+ if (transform.xAxis &lt; rightBound) {
+ transform.xAxis += 5;
+ panner.positionX.value += 0.1;
+ }
+ break;
+ case 'down':
+ if (transform.yAxis &lt; bottomBound) {
+ transform.yAxis += 5;
+ panner.positionY.value += 0.3;
+ }
+ break;
+ }
+}
+</pre>
+
+<p>It's a similar story for our move in and out values too:</p>
+
+<pre class="brush: js">case 'back':
+ if (transform.zAxis &gt; innerBound) {
+ transform.zAxis -= 0.01;
+ panner.positionZ.value += 40;
+ }
+break;
+case 'forward':
+ if (transform.zAxis &lt; outerBound) {
+ transform.zAxis += 0.01;
+ panner.positionZ.value -= 40;
+ }
+break;
+</pre>
+
+<p>Our rotation values are a little more involved, however, as we need to move the sound <em>around</em>. Not only do we have to update two axis values (e.g. if you rotate an object around the x-axis, you update the y and z coordinates for that object), but we also need to do some more maths for this. The rotation is a circle and we need <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/sin">Math.sin</a></code> and <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/cos">Math.cos</a></code> to help us draw that circle.</p>
+
+<p>Let's set up a rotation rate, which we'll convert into a radian increment for use in <code>Math.sin</code> and <code>Math.cos</code> later, when we want to figure out the new coordinates as we rotate our boombox:</p>
+
+<pre class="brush: js">// set up rotation constants
+const rotationRate = 60; // bigger number equals slower sound rotation
+
+const q = Math.PI/rotationRate; //rotation increment in radians
+</pre>
+
+<p>We can also use this to work out the degrees rotated per step, which we'll need for the CSS transforms (note we need values for both the x-axis and the y-axis). With a <code>rotationRate</code> of 60, this works out at 3 degrees per step:</p>
+
+<pre class="brush: js">// get degrees for css
+const degreesX = (q * 180)/Math.PI;
+const degreesY = (q * 180)/Math.PI;
+</pre>
+
+<p>Let's take a look at our left rotation as an example. We need to change the x orientation and the z orientation of the panner coordinates, to move around the y-axis for our left rotation:</p>
+
+<pre class="brush: js">case 'rotate-left':
+ transform.rotateY -= degreesY;
+
+ // 'left' is rotation about y-axis with negative angle increment
+ z = panner.orientationZ.value*Math.cos(q) - panner.orientationX.value*Math.sin(q);
+ x = panner.orientationZ.value*Math.sin(q) + panner.orientationX.value*Math.cos(q);
+ y = panner.orientationY.value;
+
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+break;
+</pre>
+
+<p>This <em>is</em> a little confusing, but what we're doing is using sin and cos to help us work out the circular motion the coordinates need for the rotation of the boombox.</p>
+
+<p>We can do this for all the axes. We just need to choose the right orientations to update and whether we want a positive or negative increment.</p>
+
+<pre class="brush: js">case 'rotate-right':
+ transform.rotateY += degreesY;
+ // 'right' is rotation about y-axis with positive angle increment
+ z = panner.orientationZ.value*Math.cos(-q) - panner.orientationX.value*Math.sin(-q);
+ x = panner.orientationZ.value*Math.sin(-q) + panner.orientationX.value*Math.cos(-q);
+ y = panner.orientationY.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+break;
+case 'rotate-up':
+ transform.rotateX += degreesX;
+ // 'up' is rotation about x-axis with negative angle increment
+ z = panner.orientationZ.value*Math.cos(-q) - panner.orientationY.value*Math.sin(-q);
+ y = panner.orientationZ.value*Math.sin(-q) + panner.orientationY.value*Math.cos(-q);
+ x = panner.orientationX.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+break;
+case 'rotate-down':
+ transform.rotateX -= degreesX;
+ // 'down' is rotation about x-axis with positive angle increment
+ z = panner.orientationZ.value*Math.cos(q) - panner.orientationY.value*Math.sin(q);
+ y = panner.orientationZ.value*Math.sin(q) + panner.orientationY.value*Math.cos(q);
+ x = panner.orientationX.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+break;
+</pre>
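+
+<p>If the four near-identical cases bother you, the rotation maths could be factored into a small helper. This is just a sketch (the <code>rotateOrientation()</code> function is not part of the demo's source), but it shows the same 2D rotation applied to a pair of orientation params:</p>
+
+<pre class="brush: js">// a sketch of a reusable helper rotating a pair of orientation AudioParams
+function rotateOrientation(paramA, paramB, angle) {
+  // standard 2D rotation of the (paramA, paramB) pair by `angle` radians
+  const a = paramA.value * Math.cos(angle) - paramB.value * Math.sin(angle);
+  const b = paramA.value * Math.sin(angle) + paramB.value * Math.cos(angle);
+  paramA.value = a;
+  paramB.value = b;
+}
+
+// e.g. the 'rotate-left' case above becomes:
+rotateOrientation(panner.orientationZ, panner.orientationX, q);</pre>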
+
+<p>One last thing — we need to update the CSS transform and keep a reference to the animation frame we request, so that the mouse events can cancel the movement. Here's the final <code>moveBoombox</code> function.</p>
+
+<pre class="brush: js">function moveBoombox(direction, prevMove) {
+ switch (direction) {
+ case 'left':
+ if (transform.xAxis &gt; leftBound) {
+ transform.xAxis -= 5;
+ panner.positionX.value -= 0.1;
+ }
+ break;
+ case 'up':
+ if (transform.yAxis &gt; topBound) {
+ transform.yAxis -= 5;
+ panner.positionY.value -= 0.3;
+ }
+ break;
+ case 'right':
+ if (transform.xAxis &lt; rightBound) {
+ transform.xAxis += 5;
+ panner.positionX.value += 0.1;
+ }
+ break;
+ case 'down':
+ if (transform.yAxis &lt; bottomBound) {
+ transform.yAxis += 5;
+ panner.positionY.value += 0.3;
+ }
+ break;
+ case 'back':
+ if (transform.zAxis &gt; innerBound) {
+ transform.zAxis -= 0.01;
+ panner.positionZ.value += 40;
+ }
+ break;
+ case 'forward':
+ if (transform.zAxis &lt; outerBound) {
+ transform.zAxis += 0.01;
+ panner.positionZ.value -= 40;
+ }
+ break;
+ case 'rotate-left':
+ transform.rotateY -= degreesY;
+
+ // 'left' is rotation about y-axis with negative angle increment
+ z = panner.orientationZ.value*Math.cos(q) - panner.orientationX.value*Math.sin(q);
+ x = panner.orientationZ.value*Math.sin(q) + panner.orientationX.value*Math.cos(q);
+ y = panner.orientationY.value;
+
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+ break;
+ case 'rotate-right':
+ transform.rotateY += degreesY;
+ // 'right' is rotation about y-axis with positive angle increment
+ z = panner.orientationZ.value*Math.cos(-q) - panner.orientationX.value*Math.sin(-q);
+ x = panner.orientationZ.value*Math.sin(-q) + panner.orientationX.value*Math.cos(-q);
+ y = panner.orientationY.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+ break;
+ case 'rotate-up':
+ transform.rotateX += degreesX;
+ // 'up' is rotation about x-axis with negative angle increment
+ z = panner.orientationZ.value*Math.cos(-q) - panner.orientationY.value*Math.sin(-q);
+ y = panner.orientationZ.value*Math.sin(-q) + panner.orientationY.value*Math.cos(-q);
+ x = panner.orientationX.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+ break;
+ case 'rotate-down':
+ transform.rotateX -= degreesX;
+ // 'down' is rotation about x-axis with positive angle increment
+ z = panner.orientationZ.value*Math.cos(q) - panner.orientationY.value*Math.sin(q);
+ y = panner.orientationZ.value*Math.sin(q) + panner.orientationY.value*Math.cos(q);
+ x = panner.orientationX.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+ break;
+ }
+
+ boombox.style.transform = 'translateX('+transform.xAxis+'px) translateY('+transform.yAxis+'px) scale('+transform.zAxis+') rotateY('+transform.rotateY+'deg) rotateX('+transform.rotateX+'deg)';
+
+ const move = prevMove || {};
+ move.frameId = requestAnimationFrame(() =&gt; moveBoombox(direction, move));
+ return move;
+}
+</pre>
+
+<h2 id="Wiring_up_our_controls">Wiring up our controls</h2>
+
+<p>Wiring up our control buttons is comparatively simple — now we can listen for a mouse event on our controls and run this function, as well as stop it when the mouse is released:</p>
+
+<pre class="brush: js">// for each of our controls, move the boombox and change the position values
+moveControls.forEach(function(el) {
+
+  let moving;
+  el.addEventListener('mousedown', function() {
+
+    let direction = this.dataset.control;
+    if (moving &amp;&amp; moving.frameId) {
+      window.cancelAnimationFrame(moving.frameId);
+    }
+    moving = moveBoombox(direction);
+
+  }, false);
+
+  window.addEventListener('mouseup', function() {
+    if (moving &amp;&amp; moving.frameId) {
+      window.cancelAnimationFrame(moving.frameId);
+    }
+  }, false);
+
+});
+</pre>
+
+<h2 id="Connecting_Our_Graph">Connecting Our Graph</h2>
+
+<p>Our HTML contains the audio element we want to be affected by the panner node.</p>
+
+<pre class="brush: html">&lt;audio src="myCoolTrack.mp3"&gt;&lt;/audio&gt;</pre>
+
+<p>We need to grab the source from that element and pipe it into the Web Audio API using {{domxref('AudioContext.createMediaElementSource')}}.</p>
+
+<pre class="brush: js">// get the audio element
+const audioElement = document.querySelector('audio');
+
+// pass it into the audio context we created earlier
+const track = audioCtx.createMediaElementSource(audioElement);
+</pre>
+
+<p>Next we have to connect our audio graph. We connect our input (the track) to our modification node (the panner) to our destination (in this case the speakers).</p>
+
+<pre class="brush: js">track.connect(panner).connect(audioCtx.destination);
+</pre>
+
+<p>Let's create a play button that, when clicked, will play or pause the audio depending on the current state.</p>
+
+<pre class="brush: html">&lt;button data-playing="false" role="switch"&gt;Play/Pause&lt;/button&gt;
+</pre>
+
+<pre class="brush: js">// select our play button
+const playButton = document.querySelector('button');
+
+playButton.addEventListener('click', function() {
+
+  // check if context is in suspended state (autoplay policy)
+  if (audioCtx.state === 'suspended') {
+    audioCtx.resume();
+  }
+
+  // play or pause track depending on state
+  if (this.dataset.playing === 'false') {
+    audioElement.play();
+    this.dataset.playing = 'true';
+  } else if (this.dataset.playing === 'true') {
+    audioElement.pause();
+    this.dataset.playing = 'false';
+  }
+
+}, false);
+</pre>
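+
+<p>One small addition you may want (not part of the snippet above) is to reset the button when the track finishes playing, by listening for the media element's <code>ended</code> event:</p>
+
+<pre class="brush: js">// reset the play button state when the track ends
+audioElement.addEventListener('ended', function() {
+  playButton.dataset.playing = 'false';
+}, false);</pre>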
+
+<p>For a more in-depth look at playing and controlling audio and audio graphs, check out <a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a>.</p>
+
+<h2 id="Summary">Summary</h2>
+
+<p>Hopefully, this article has given you an insight into how Web Audio spatialization works, and what each of the {{domxref("PannerNode")}} properties does (there are quite a few of them). The values can be hard to manipulate, and depending on your use case it can take some time to get them right.</p>
+
+<div class="note">
+<p><strong>Note</strong>: There are slight differences in the way the audio spatialization sounds across different browsers. The panner node does some very involved maths under the hood; there are a <a href="https://wpt.fyi/results/webaudio/the-audio-api/the-pannernode-interface?label=stable&amp;aligned=true">number of tests here</a> so you can keep track of the status of the inner workings of this node across different platforms.</p>
+</div>
+
+<p>Again, you can <a href="https://mdn.github.io/webaudio-examples/spacialization/">check out the final demo here</a>, and the <a href="https://github.com/mdn/webaudio-examples/tree/master/spacialization">final source code is here</a>. There is also a <a href="https://codepen.io/Rumyra/pen/MqayoK?editors=0100">Codepen demo</a>.</p>
+
+<p>If you are working with 3D games and/or WebXR it's a good idea to harness a 3D library to create such functionality, rather than trying to do this all yourself from first principles. We rolled our own in this article to give you an idea of how it works, but you'll save a lot of time by taking advantage of work others have done before you.</p>
diff --git a/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png
new file mode 100644
index 0000000000..18a359e5c1
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png
Binary files differ