authorPeter Bengtsson <mail@peterbe.com>2020-12-08 14:42:17 -0500
committerPeter Bengtsson <mail@peterbe.com>2020-12-08 14:42:17 -0500
commitda78a9e329e272dedb2400b79a3bdeebff387d47 (patch)
treee6ef8aa7c43556f55ddfe031a01cf0a8fa271bfe /files/ko/web/api/web_audio_api
parent1109132f09d75da9a28b649c7677bb6ce07c40c0 (diff)
initial commit
Diffstat (limited to 'files/ko/web/api/web_audio_api')
-rw-r--r--files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html354
-rw-r--r--files/ko/web/api/web_audio_api/index.html523
-rw-r--r--files/ko/web/api/web_audio_api/using_web_audio_api/index.html238
3 files changed, 1115 insertions, 0 deletions
diff --git a/files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html b/files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html
new file mode 100644
index 0000000000..571c15684e
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html
@@ -0,0 +1,354 @@
+---
+title: Basic concepts behind Web Audio API
+slug: Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API
+tags:
+ - Guide
+ - Media
+ - Audio
+ - Web Audio API
+ - Web Audio API theory
+ - Theory
+ - Concepts
+translation_of: Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API
+---
+<div class="summary">
+<p><span class="seoSummary">This article explains some of the audio theory behind how the features of the Web Audio API work. It won't make you a master sound engineer, but it gives you enough background to understand why the Web Audio API works the way it does, so that you can make better decisions while developing.</span></p>
+</div>
+
+<h2 id="Audio_graphs">Audio graphs</h2>
+
+<p>The Web Audio API involves handling audio operations inside an <strong>audio context</strong>, and has been designed to allow <strong>modular routing</strong>. Basic audio operations are performed with <strong>audio nodes</strong>, which are linked together to form an <strong>audio routing graph</strong>. Several sources — with different types of channel layout — are supported even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.</p>
+
+<p>Audio nodes are linked via their inputs and outputs, forming a chain that starts with one or more sources, goes through one or more nodes, then ends up at a destination. You don't have to provide a destination, however, if you, say, just want to visualize some audio data. A simple, typical workflow for web audio looks something like this (a minimal code sketch follows the diagram below):</p>
+
+<ol>
+ <li>Create audio context.</li>
+ <li>Inside the context, create sources — such as <code>&lt;audio&gt;</code>, oscillator, stream.</li>
+ <li>Create effects nodes, such as reverb, biquad filter, panner, compressor.</li>
+ <li>Choose final destination of audio, for example your system speakers.</li>
+ <li>Connect the sources up to the effects, and the effects to the destination.</li>
+</ol>
+
+<p><img alt="A simple box diagram with an outer box labeled Audio context, and three inner boxes labeled Sources, Effects and Destination. The three inner boxes have arrow between them pointing from left to right, indicating the flow of audio information." src="https://mdn.mozillademos.org/files/12237/webaudioAPI_en.svg" style="display: block; height: 143px; margin: 0px auto; width: 643px;"></p>
+
+<p>Each input or output is composed of several <strong>channels</strong>, which represent a specific audio layout. Any discrete channel structure is supported, including <em>mono</em>, <em>stereo</em>, <em>quad</em>, <em>5.1</em>, and so on.</p>
+
+<p><img alt="Show the ability of AudioNodes to connect via their inputs and outputs and the channels inside these inputs/outputs." src="https://mdn.mozillademos.org/files/14179/mdn.png" style="display: block; height: 360px; margin: 0px auto; width: 630px;"></p>
+
+<p>Audio sources can come from a variety of places:</p>
+
+<ul>
+ <li>Generated directly in JavaScript by an audio node (such as an oscillator).</li>
+ <li>Created from raw PCM data (the audio context has methods to decode supported audio formats).</li>
+ <li>Taken from HTML media elements (such as {{HTMLElement("video")}} or {{HTMLElement("audio")}}).</li>
+ <li>Taken directly from a <a href="/en-US/docs/WebRTC" title="WebRTC">WebRTC</a> {{domxref("MediaStream")}} (such as a webcam or microphone).</li>
+</ul>
+
+<h2 id="Audio_data_what's_in_a_sample">Audio data: what's in a sample</h2>
+
+<p>When an audio signal is processed, <strong>sampling</strong> means the conversion of a <a href="https://en.wikipedia.org/wiki/Continuous_signal" title="Continuous signal">continuous signal</a> to a <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Discrete_signal" title="Discrete signal">discrete signal</a>; or put another way, a continuous sound wave, such as a band playing live, is converted to a sequence of samples (a discrete-time signal) that allow a computer to handle the audio in distinct blocks.</p>
+
+<p>A lot more information can be found on the Wikipedia page <a href="https://en.wikipedia.org/wiki/Sampling_%28signal_processing%29">Sampling (signal processing)</a>.</p>
+
+<h2 id="Audio_buffers_frames_samples_and_channels">Audio buffers: frames, samples and channels</h2>
+
+<p>An {{ domxref("AudioBuffer") }} takes as its parameters a number of channels (1 for mono, 2 for stereo, etc), a length, meaning the number of sample frames inside the buffer, and a sample rate, which is the number of sample frames played per second.</p>
+
+<p>A sample is a single float32 value that represents the value of the audio stream at a specific point in time, in a specific channel (left or right, in the case of stereo). A frame, or sample frame, is the set of all values for all channels that will play at a specific point in time: all the samples of all the channels that play at the same time (two for a stereo sound, six for 5.1, etc.)</p>
+
+<p>The sample rate is the number of those samples (or frames, since all samples of a frame play at the same time) that will play in one second, measured in Hz. The higher the sample rate, the better the sound quality.</p>
+
+<p>Let's look at a Mono and a Stereo audio buffer, each one second long and playing at 44100 Hz:</p>
+
+<ul>
+ <li>The Mono buffer will have 44100 samples, and 44100 frames. The <code>length</code> property will be 44100.</li>
+ <li>The Stereo buffer will have 88200 samples, but still 44100 frames. The <code>length</code> property will still be 44100 since it's equal to the number of frames.</li>
+</ul>
+
+<p><img alt="A diagram showing several frames in an audio buffer in a long line, each one containing two samples, as the buffer has two channels, it is stereo." src="https://mdn.mozillademos.org/files/14801/sampleframe-english.png" style="height: 150px; width: 853px;"></p>
+
+<p>When a buffer plays, you will hear the leftmost sample frame first, then the one right next to it, and so on. In the case of stereo, you will hear both channels at the same time. Sample frames are very convenient, because they are independent of the number of channels, and they represent time in a way that is ideal for precise audio manipulation.</p>
+
+<div class="note">
+<p><strong>Note</strong>: To get a time in seconds from a frame count, simply divide the number of frames by the sample rate. To get a number of frames from a number of samples, simply divide by the channel count.</p>
+</div>
+
+<p>Here are a couple of trivial examples:</p>
+
+<pre class="brush: js">var context = new AudioContext();
+var buffer = context.createBuffer(2, 22050, 44100);</pre>
+
+<div class="note">
+<p><strong>Note</strong>: In <a href="https://en.wikipedia.org/wiki/Digital_audio" title="Digital audio">digital audio</a>, <strong>44,100 <a href="https://en.wikipedia.org/wiki/Hertz" title="Hertz">Hz</a></strong> (alternately represented as <strong>44.1 kHz</strong>) is a common <a href="https://en.wikipedia.org/wiki/Sampling_frequency" title="Sampling frequency">sampling frequency</a>. Why 44.1kHz? <br>
+ <br>
+ Firstly, because the <a href="https://en.wikipedia.org/wiki/Hearing_range" title="Hearing range">hearing range</a> of human ears is roughly 20 Hz to 20,000 Hz. Via the <a href="https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem" title="Nyquist–Shannon sampling theorem">Nyquist–Shannon sampling theorem</a>, the sampling frequency must be greater than twice the maximum frequency one wishes to reproduce. Therefore, the sampling rate has to be greater than 40 kHz.<br>
+ <br>
+ Secondly, signals must be <a href="https://en.wikipedia.org/wiki/Low-pass_filter" title="Low-pass filter">low-pass filtered</a> before sampling, otherwise <a href="https://en.wikipedia.org/wiki/Aliasing" title="Aliasing">aliasing</a> occurs. While an ideal low-pass filter would perfectly pass frequencies below 20 kHz (without attenuating them) and perfectly cut off frequencies above 20 kHz, in practice a <a href="https://en.wikipedia.org/wiki/Transition_band" title="Transition band">transition band</a> is necessary, where frequencies are partly attenuated. The wider this transition band is, the easier and more economical it is to make an <a href="https://en.wikipedia.org/wiki/Anti-aliasing_filter" title="Anti-aliasing filter">anti-aliasing filter</a>. The 44.1 kHz sampling frequency allows for a 2.05 kHz transition band.</p>
+</div>
+
+<p>If you use the call above, you will get a stereo buffer with two channels that, when played back on an AudioContext running at 44100 Hz (very common; most normal sound cards run at this rate), will last for 0.5 seconds: 22050 frames / 44100 Hz = 0.5 seconds.</p>
+
+<pre class="brush: js">var context = new AudioContext();
+var buffer = context.createBuffer(1, 22050, 22050);</pre>
+
+<p>If you use this call, you will get a mono buffer (with just one channel) that, when played back on an AudioContext running at 44100 Hz, will be automatically <em>resampled</em> to 44100 Hz (and therefore yield 44100 frames), and will last for 1.0 second: 44100 frames / 44100 Hz = 1 second.</p>
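+
+<p>The frame, sample, and duration arithmetic above is easy to check in code. A small sketch, reusing the <code>context</code> from the snippets above:</p>
+
+<pre class="brush: js">var stereoBuffer = context.createBuffer(2, 22050, 44100);
+
+// frames to seconds: divide the frame count by the sample rate
+console.log(stereoBuffer.length / stereoBuffer.sampleRate); // 0.5 (seconds)
+console.log(stereoBuffer.duration);                         // 0.5, the same thing
+
+// samples to frames: divide the sample count by the channel count
+var totalSamples = stereoBuffer.length * stereoBuffer.numberOfChannels; // 44100 samples
+console.log(totalSamples / stereoBuffer.numberOfChannels);              // 22050 frames</pre>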
+
+<div class="note">
+<p><strong>Note</strong>: Audio resampling is very similar to image resizing. Say you've got a 16 x 16 image but you want it to fill a 32 x 32 area: you resize (or resample) it. The result has less quality (it can be blurry or edgy, depending on the resizing algorithm), but it works, and the original 16 x 16 image takes up less space than a true 32 x 32 image would. Resampled audio is exactly the same: you save space, but in practice you will be unable to properly reproduce high-frequency content, or treble sound.</p>
+</div>
+
+<h3 id="Planar_versus_interleaved_buffers">Planar versus interleaved buffers</h3>
+
+<p>The Web Audio API uses a planar buffer format. The left and right channels are stored like this:</p>
+
+<pre>LLLLLLLLLLLLLLLLRRRRRRRRRRRRRRRR (for a buffer of 16 frames)</pre>
+
+<p>This is very common in audio processing: it makes it easy to process each channel independently.</p>
+
+<p>The alternative is to use an interleaved buffer format:</p>
+
+<pre>LRLRLRLRLRLRLRLRLRLRLRLRLRLRLRLR (for a buffer of 16 frames)</pre>
+
+<p>This format is very common for storing and playing back audio without much processing, for example a decoded MP3 stream.<br>
+ <br>
+ The Web Audio API exposes <strong>only</strong> planar buffers, because it's made for processing. It works with planar buffers internally, but converts the audio to an interleaved format when it is sent to the sound card for playback. Conversely, when an MP3 is decoded, it starts off in interleaved format, but is converted to planar for processing.</p>
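+
+<p>If you ever need to hand planar Web Audio data to code that expects interleaved samples, the conversion is just an index shuffle. A minimal sketch for the stereo case, assuming an existing stereo {{domxref("AudioBuffer")}} called <code>buffer</code>:</p>
+
+<pre class="brush: js">// Interleave two planar channels (L and R) into a single LRLR... array
+function interleave(left, right) {
+  var interleaved = new Float32Array(left.length + right.length);
+  for (var i = 0; i &lt; left.length; i++) {
+    interleaved[2 * i] = left[i];      // even indices: left channel
+    interleaved[2 * i + 1] = right[i]; // odd indices: right channel
+  }
+  return interleaved;
+}
+
+// planar data is what AudioBuffer.getChannelData() returns
+var interleaved = interleave(buffer.getChannelData(0), buffer.getChannelData(1));</pre>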
+
+<h2 id="Audio_channels">Audio channels</h2>
+
+<p>Different audio buffers contain different numbers of channels: from the more basic mono (only one channel) and stereo (left and right channels), to more complex sets like quad and 5.1, which have different sound samples contained in each channel, leading to a richer sound experience. The channels are usually represented by standard abbreviations detailed in the table below:</p>
+
+<table class="standard-table">
+ <tbody>
+ <tr>
+ <td><em>Mono</em></td>
+ <td><code>0: M: mono</code></td>
+ </tr>
+ <tr>
+ <td><em>Stereo</em></td>
+ <td><code>0: L: left<br>
+ 1: R: right</code></td>
+ </tr>
+ <tr>
+ <td><em>Quad</em></td>
+ <td><code>0: L: left<br>
+ 1: R: right<br>
+ 2: SL: surround left<br>
+ 3: SR: surround right</code></td>
+ </tr>
+ <tr>
+ <td><em>5.1</em></td>
+ <td><code>0: L: left<br>
+ 1: R: right<br>
+ 2: C: center<br>
+ 3: LFE: subwoofer<br>
+ 4: SL: surround left<br>
+ 5: SR: surround right</code></td>
+ </tr>
+ </tbody>
+</table>
+
+<h3 id="Up-mixing_and_down-mixing">Up-mixing and down-mixing</h3>
+
+<p>When the number of channels doesn't match between an input and an output, up- or down-mixing happens according to the following rules. This can be somewhat controlled by setting the {{domxref("AudioNode.channelInterpretation")}} property to <code>speakers</code> or <code>discrete</code>:</p>
+
+<table class="standard-table">
+ <thead>
+ <tr>
+ <th scope="row">Interpretation</th>
+ <th scope="col">Input channels</th>
+ <th scope="col">Output channels</th>
+ <th scope="col">Mixing rules</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th colspan="1" rowspan="13" scope="row"><code>speakers</code></th>
+ <td><code>1</code> <em>(Mono)</em></td>
+ <td><code>2</code> <em>(Stereo)</em></td>
+ <td><em>Up-mix from mono to stereo</em>.<br>
+ The <code>M</code> input channel is used for both output channels (<code>L</code> and <code>R</code>).<br>
+ <code>output.L = input.M<br>
+ output.R = input.M</code></td>
+ </tr>
+ <tr>
+ <td><code>1</code> <em>(Mono)</em></td>
+ <td><code>4</code> <em>(Quad)</em></td>
+ <td><em>Up-mix from mono to quad.</em><br>
+ The <code>M</code> input channel is used for non-surround output channels (<code>L</code> and <code>R</code>). Surround output channels (<code>SL</code> and <code>SR</code>) are silent.<br>
+ <code>output.L = input.M<br>
+ output.R = input.M<br>
+ output.SL = 0<br>
+ output.SR = 0</code></td>
+ </tr>
+ <tr>
+ <td><code>1</code> <em>(Mono)</em></td>
+ <td><code>6</code> <em>(5.1)</em></td>
+ <td><em>Up-mix from mono to 5.1.</em><br>
+ The <code>M</code> input channel is used for the center output channel (<code>C</code>). All the others (<code>L</code>, <code>R</code>, <code>LFE</code>, <code>SL</code>, and <code>SR</code>) are silent.<br>
+ <code>output.L = 0<br>
+ output.R = 0</code><br>
+ <code>output.C = input.M<br>
+ output.LFE = 0<br>
+ output.SL = 0<br>
+ output.SR = 0</code></td>
+ </tr>
+ <tr>
+ <td><code>2</code> <em>(Stereo)</em></td>
+ <td><code>1</code> <em>(Mono)</em></td>
+ <td><em>Down-mix from stereo to mono</em>.<br>
+ Both input channels (<code>L</code> and <code>R</code>) are equally combined to produce the unique output channel (<code>M</code>).<br>
+ <code>output.M = 0.5 * (input.L + input.R)</code></td>
+ </tr>
+ <tr>
+ <td><code>2</code> <em>(Stereo)</em></td>
+ <td><code>4</code> <em>(Quad)</em></td>
+ <td><em>Up-mix from stereo to quad.</em><br>
+ The <code>L</code> and <code>R </code>input channels are used for their non-surround respective output channels (<code>L</code> and <code>R</code>). Surround output channels (<code>SL</code> and <code>SR</code>) are silent.<br>
+ <code>output.L = input.L<br>
+ output.R = input.R<br>
+ output.SL = 0<br>
+ output.SR = 0</code></td>
+ </tr>
+ <tr>
+ <td><code>2</code> <em>(Stereo)</em></td>
+ <td><code>6</code> <em>(5.1)</em></td>
+ <td><em>Up-mix from stereo to 5.1.</em><br>
+ The <code>L</code> and <code>R </code>input channels are used for their non-surround respective output channels (<code>L</code> and <code>R</code>). Surround output channels (<code>SL</code> and <code>SR</code>), as well as the center (<code>C</code>) and subwoofer (<code>LFE</code>) channels, are left silent.<br>
+ <code>output.L = input.L<br>
+ output.R = input.R<br>
+ output.C = 0<br>
+ output.LFE = 0<br>
+ output.SL = 0<br>
+ output.SR = 0</code></td>
+ </tr>
+ <tr>
+ <td><code>4</code> <em>(Quad)</em></td>
+ <td><code>1</code> <em>(Mono)</em></td>
+ <td><em>Down-mix from quad to mono</em>.<br>
+ All four input channels (<code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code>) are equally combined to produce the unique output channel (<code>M</code>).<br>
+ <code>output.M = 0.25 * (input.L + input.R + </code><code>input.SL + input.SR</code><code>)</code></td>
+ </tr>
+ <tr>
+ <td><code>4</code> <em>(Quad)</em></td>
+ <td><code>2</code> <em>(Stereo)</em></td>
+ <td><em>Down-mix from quad to stereo</em>.<br>
+ Both left input channels (<code>L</code> and <code>SL</code>) are equally combined to produce the unique left output channel (<code>L</code>). And similarly, both right input channels (<code>R</code> and <code>SR</code>) are equally combined to produce the unique right output channel (<code>R</code>).<br>
+ <code>output.L = 0.5 * (input.L + input.SL</code><code>)</code><br>
+ <code>output.R = 0.5 * (input.R + input.SR</code><code>)</code></td>
+ </tr>
+ <tr>
+ <td><code>4</code> <em>(Quad)</em></td>
+ <td><code>6</code> <em>(5.1)</em></td>
+ <td><em>Up-mix from quad to 5.1.</em><br>
+ The <code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code> input channels are used for their respective output channels (<code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code>). The center (<code>C</code>) and subwoofer (<code>LFE</code>) channels are left silent.<br>
+ <code>output.L = input.L<br>
+ output.R = input.R<br>
+ output.C = 0<br>
+ output.LFE = 0<br>
+ output.SL = input.SL<br>
+ output.SR = input.SR</code></td>
+ </tr>
+ <tr>
+ <td><code>6</code> <em>(5.1)</em></td>
+ <td><code>1</code> <em>(Mono)</em></td>
+ <td><em>Down-mix from 5.1 to mono.</em><br>
+ The left (<code>L</code> and <code>SL</code>), right (<code>R</code> and <code>SR</code>) and center channels are all mixed together. The main lateral channels are power-compensated by multiplying them by <code>√2/2</code>, the surround channels are attenuated to half, and the subwoofer (<code>LFE</code>) channel is lost.<br>
+ <code>output.M = 0.7071 * (input.L + input.R) + input.C + 0.5 * (input.SL + input.SR)</code></td>
+ </tr>
+ <tr>
+ <td><code>6</code> <em>(5.1)</em></td>
+ <td><code>2</code> <em>(Stereo)</em></td>
+ <td><em>Down-mix from 5.1 to stereo.</em><br>
+ The central channel (<code>C</code>) is summed with each lateral surround channel (<code>SL</code> or <code>SR</code>) and mixed to each lateral channel. As it is mixed down to two channels, it is mixed at a lower power: in each case it is multiplied by <code>√2/2</code>. The subwoofer (<code>LFE</code>) channel is lost.<br>
+ <code>output.L = input.L + 0.7071 * (input.C + input.SL)<br>
+ output.R = input.R </code><code>+ 0.7071 * (input.C + input.SR)</code></td>
+ </tr>
+ <tr>
+ <td><code>6</code> <em>(5.1)</em></td>
+ <td><code>4</code> <em>(Quad)</em></td>
+ <td><em>Down-mix from 5.1 to quad.</em><br>
+ The center channel (<code>C</code>) is mixed into the lateral non-surround channels (<code>L</code> and <code>R</code>). As it is mixed down into two channels, it is mixed at a lower power: in each case it is multiplied by <code>√2/2</code>. The surround channels are passed through unchanged. The subwoofer (<code>LFE</code>) channel is lost.<br>
+ <code>output.L = input.L + 0.7071 * input.C<br>
+ output.R = input.R + 0.7071 * input.C<br>
+ output.SL = input.SL<br>
+ output.SR = input.SR</code></td>
+ </tr>
+ <tr>
+ <td colspan="2" rowspan="1">Other, non-standard layouts</td>
+ <td>Non-standard channel layouts are handled as if <code>channelInterpretation</code> is set to <code>discrete</code>.<br>
+ The specification explicitly allows the future definition of new speaker layouts. This fallback is therefore not future proof as the behavior of the browsers for a specific number of channels may change in the future.</td>
+ </tr>
+ <tr>
+ <th colspan="1" rowspan="2" scope="row"><code>discrete</code></th>
+ <td rowspan="1">any (<code>x</code>)</td>
+ <td rowspan="1">any (<code>y</code>) where <code>x&lt;y</code></td>
+ <td><em>Up-mix discrete channels.</em><br>
+ Fill each output channel with its input counterpart, that is the input channel with the same index. Channels with no corresponding input channels are left silent.</td>
+ </tr>
+ <tr>
+ <td rowspan="1">any (<code>x</code>)</td>
+ <td rowspan="1">any (<code>y</code>) where <code>x&gt;y</code></td>
+ <td><em>Down-mix discrete channels.</em><br>
+ Fill each output channel with its input counterpart, that is the input channel with the same index. Input channels with no corresponding output channels are dropped.</td>
+ </tr>
+ </tbody>
+</table>
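+
+<p>Whether the <code>speakers</code> or <code>discrete</code> rules above are applied can be configured per node. A minimal sketch of the three related {{domxref("AudioNode")}} properties, assuming an existing <code>AudioContext</code> called <code>audioCtx</code>:</p>
+
+<pre class="brush: js">var gainNode = audioCtx.createGain();
+
+// how many channels the node mixes to internally...
+gainNode.channelCount = 2;
+// ...and whether that count is a 'max', 'clamped-max' or 'explicit' limit
+gainNode.channelCountMode = 'explicit';
+// 'speakers' uses the mixing rules in the table above; 'discrete' matches channels by index
+gainNode.channelInterpretation = 'speakers';</pre>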
+
+<h2 id="Visualizations">Visualizations</h2>
+
+<p>In general, audio visualizations are achieved by accessing an output of audio data over time, usually gain or frequency data, and then using a graphical technology to turn that into a visual output, such as a graph. The Web Audio API has an {{domxref("AnalyserNode")}} available that doesn't alter the audio signal passing through it. Instead it outputs audio data that can be passed to a visualization technology such as {{htmlelement("canvas")}}.</p>
+
+<p><img alt="Without modifying the audio stream, the node allows to get the frequency and time-domain data associated to it, using a FFT." src="https://mdn.mozillademos.org/files/12521/fttaudiodata_en.svg" style="height: 206px; width: 693px;"></p>
+
+<p>You can grab data using the following methods:</p>
+
+<dl>
+ <dt>{{domxref("AnalyserNode.getFloatFrequencyData()")}}</dt>
+ <dd>Copies the current frequency data into a {{domxref("Float32Array")}} array passed into it.</dd>
+ <dt>{{domxref("AnalyserNode.getByteFrequencyData()")}}</dt>
+ <dd>Copies the current frequency data into a {{domxref("Uint8Array")}} (unsigned byte array) passed into it.</dd>
+ <dt>{{domxref("AnalyserNode.getFloatTimeDomainData()")}}</dt>
+ <dd>Copies the current waveform, or time-domain, data into a {{domxref("Float32Array")}} array passed into it.</dd>
+ <dt>{{domxref("AnalyserNode.getByteTimeDomainData()")}}</dt>
+ <dd>Copies the current waveform, or time-domain, data into a {{domxref("Uint8Array")}} (unsigned byte array) passed into it.</dd>
+</dl>
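+
+<p>A minimal sketch of the typical pattern, assuming existing <code>audioCtx</code> and <code>source</code> objects: create the analyser, allocate an array of the right size, and copy fresh data into it on every animation frame:</p>
+
+<pre class="brush: js">var analyser = audioCtx.createAnalyser();
+analyser.fftSize = 2048;
+
+source.connect(analyser); // the signal passes through unchanged
+
+var dataArray = new Uint8Array(analyser.frequencyBinCount); // fftSize / 2 bins
+
+function draw() {
+  requestAnimationFrame(draw);
+  analyser.getByteFrequencyData(dataArray); // or getByteTimeDomainData(dataArray)
+  // ...plot dataArray onto a &lt;canvas&gt; here...
+}
+draw();</pre>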
+
+<div class="note">
+<p><strong>Note</strong>: For more information, see our <a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a> article.</p>
+</div>
+
+<h2 id="Spatialisations">Spatialisations</h2>
+
+<div>
+<p>An audio spatialisation (handled by the {{domxref("PannerNode")}} and {{domxref("AudioListener")}} nodes in the Web Audio API) allows us to model the position and behavior of an audio signal at a certain point in space, and the listener hearing that audio.</p>
+
+<p>The panner's position is described with right-hand Cartesian coordinates; its movement using a velocity vector, necessary for creating Doppler effects; and its directionality using a directionality cone. The cone can be very large, e.g. for omnidirectional sources.</p>
+</div>
+
+<p><img alt="The PannerNode brings a spatial position and velocity and a directionality for a given signal." src="https://mdn.mozillademos.org/files/12511/pannernode_en.svg" style="height: 340px; width: 799px;"></p>
+
+<div>
+<p>The listener's position is described using right-hand Cartesian coordinates; its movement using a velocity vector and the direction the listener's head is pointing using two direction vectors: up and front. These respectively define the direction of the top of the listener's head, and the direction the listener's nose is pointing, and are at right angles to one another.</p>
+</div>
+
+<p><img alt="The PannerNode brings a spatial position and velocity and a directionality for a given signal." src="https://mdn.mozillademos.org/files/12513/listener.svg" style="height: 249px; width: 720px;"></p>
+
+<div class="note">
+<p><strong>Note</strong>: For more information, see our <a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio spatialization basics</a> article.</p>
+</div>
+
+<h2 id="Fan-in_and_Fan-out">Fan-in and Fan-out</h2>
+
+<p>In audio terms, <strong>fan-in</strong> describes the process by which a {{domxref("ChannelMergerNode")}} takes a series of mono input sources and outputs a single multi-channel signal:</p>
+
+<p><img alt="" src="https://mdn.mozillademos.org/files/12517/fanin.svg" style="height: 258px; width: 325px;"></p>
+
+<p><strong>Fan-out</strong> describes the opposite process, whereby a {{domxref("ChannelSplitterNode")}} takes a multi-channel input source and outputs multiple mono output signals:</p>
+
+<p><img alt="" src="https://mdn.mozillademos.org/files/12515/fanout.svg" style="height: 258px; width: 325px;"></p>
diff --git a/files/ko/web/api/web_audio_api/index.html b/files/ko/web/api/web_audio_api/index.html
new file mode 100644
index 0000000000..714ccdb2af
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/index.html
@@ -0,0 +1,523 @@
+---
+title: Web Audio API
+slug: Web/API/Web_Audio_API
+translation_of: Web/API/Web_Audio_API
+---
+<div>
+<p>The Web Audio API provides a powerful and versatile system for controlling audio on the web. It allows you to choose audio sources, add effects to audio, create audio visualizations, apply spatial effects such as panning, and much more.</p>
+</div>
+
+<h2 id="Web_audio의_개념과_사용법">Web audio의 개념과 사용법</h2>
+
+<p>Web Audio API는 <strong>오디오 컨텍스트</strong> 내부의 오디오 조작을 핸들링하는 것을 포함하며, <strong>모듈러 라우팅</strong>을 허용하도록 설계되어 있습니다. 기본적인 오디오 연산은 <strong>오디오 노드</strong>를 통해 수행되며, <strong>오디오 노드</strong>는 서로 연결되어 <strong>오디오 라우팅 그래프</strong>를 형성합니다. 서로 다른 타입의 채널 레이아웃을 포함한 다수의 오디오 소스는 단일 컨텍스트 내에서도 지원됩니다. 이 모듈식 설계는 역동적이고 복합적인 오디오 기능 생성을 위한 유연성을 제공합니다.</p>
+
+<p>Audio nodes are linked into chains and simple webs by their inputs and outputs. They typically start with one or more sources. Sources provide arrays of sound intensities (samples) at very small timeslices, often tens of thousands of them per second. These could be computed mathematically (such as an {{domxref("OscillatorNode")}}), or they can be recordings from sound/video files (like an {{domxref("AudioBufferSourceNode")}} or a {{domxref("MediaElementAudioSourceNode")}}), or audio streams (a {{domxref("MediaStreamAudioSourceNode")}}). In fact, sound files are just recordings of sound intensities themselves, which come in from microphones or electric instruments and get mixed down into a single, complicated wave.</p>
+
+<p>Outputs of these nodes can be linked to inputs of others, which mix or modify these streams of sound samples into different streams. A common modification is multiplying the samples by a value to make them louder or quieter (as is the case with a {{domxref("GainNode")}}). Once the sound has been sufficiently processed for the intended effect, it can be linked to the input of a destination ({{domxref("AudioContext.destination")}}), which sends the sound to the speakers or headphones. This last connection is only necessary if the user is supposed to hear the audio.</p>
+
+<p>A simple, typical workflow for web audio looks like this (a minimal code sketch follows the diagram below):</p>
+
+<ol>
+ <li>Create an audio context.</li>
+ <li>Inside the context, create sources, such as <code>&lt;audio&gt;</code>, an oscillator, or a stream.</li>
+ <li>Create effect nodes, such as reverb, biquad filter, panner, or compressor.</li>
+ <li>Choose the final destination of the audio, for example your system speakers.</li>
+ <li>Connect the sources up to the effects, and the effects to the destination.</li>
+</ol>
+
+<p><img alt="A simple box diagram with an outer box labeled Audio context, and three inner boxes labeled Sources, Effects and Destination. The three inner boxes have arrow between them pointing from left to right, indicating the flow of audio information." src="https://mdn.mozillademos.org/files/12241/webaudioAPI_en.svg" style="display: block; height: 143px; margin: 0px auto; width: 643px;"></p>
+
+<p>Timing is controlled with high precision and low latency, allowing developers to write code that responds accurately to events and is able to target specific samples, even at a high sample rate. So applications such as drum machines and sequencers are well within reach.</p>
+
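+<p>For example, instead of relying on JavaScript timers, notes can be scheduled ahead of time against the context's own clock. A minimal sketch, assuming an existing <code>AudioContext</code> called <code>audioCtx</code>:</p>
+
+<pre class="brush: js">// schedule four short beeps, one every 0.5 seconds, sample-accurately
+var now = audioCtx.currentTime;
+for (var i = 0; i &lt; 4; i++) {
+  var osc = audioCtx.createOscillator();
+  osc.connect(audioCtx.destination);
+  osc.start(now + i * 0.5);      // start exactly on the beat
+  osc.stop(now + i * 0.5 + 0.1); // each beep lasts 100 ms
+}</pre>
+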
+<p>The Web Audio API also allows us to control how audio is <em>spatialized</em>. Using a system based on a <em>source-listener model</em>, it allows control of the <em>panning model</em> and deals with <em>distance-induced attenuation</em> or <em>doppler shift</em> induced by a moving source (or a moving listener).</p>
+
+<div class="note">
+<p>You can read about the theory of the Web Audio API in a lot more detail in our article <a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a>.</p>
+</div>
+
+<h2 id="Web_Audio_API_타겟_사용자층">Web Audio API 타겟 사용자층</h2>
+
+<p>오디오나 음악 용어에 익숙하지 않은 사람은 Web Audio API가 막막하게 느껴질 수 있습니다. 또한 Web Audio API가 굉장히 다양한 기능을 제공하는 만큼 개발자로서는 시작하기 어렵게 느껴질 수 있습니다.</p>
+
+<p>Web Audio API는 <a href="https://www.futurelibrary.no/">futurelibrary.no</a>에서와 같이 배경 음악을 깔거나, <a href="https://css-tricks.com/form-validation-web-audio/">작성된 폼에 대한 피드백을 제공</a>하는 등, 웹사이트에 간단한 오디오 기능을 제공하는 데에 사용될 수 있습니다. 그리고 물론 상호작용 가능한 상급자용 악기 기능을 만드는 데에도 사용할 수 있습니다. 따라서 Web Audio API는 개발자와 뮤지션 모두가 사용 가능합니다.</p>
+
+<p>프로그래밍에는 익숙하지만 각종 용어나 API의 구조에 대해 공부하고 싶으신 분들을 위한 <a href="https://wiki.developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">간단한 튜토리얼</a>이 준비되어 있습니다.</p>
+
+<p><a href="https://wiki.developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Web Audio API의 원리</a>에는 API 내에서 디지털 오디오가 어떻게 동작하는지 나와 있습니다. 해당 문서에는 API가 어떤 원리를 이용해 작성되었는지에 대한 설명도 잘 되어 있습니다.</p>
+
+<p>코드를 작성하는 것은 카드 게임과 비슷합니다. 규칙을 배우고, 플레이합니다. 모르겠는 규칙은 다시 공부하고, 다시 새로운 판을 합니다. 마찬가지로, 이 문서와 첫 튜토리얼에서 설명하는 것만으로 부족하다고 느끼신다면 첫 튜토리얼의 내용을 보충하는 동시에 여러 테크닉을 이용하여 스텝 시퀀서를 만드는 법을 설명하는 <a href="https://wiki.developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques">상급자용 튜토리얼</a>을 읽어보시는 것을 추천합니다.</p>
+
+<p>그 외에도 이 페이지의 사이드바에서 API의 모든 기능을 설명하는 참고자료와 다양한 튜토리얼을 찾아 보실 수 있습니다.</p>
+
+<p>만약에 프로그래밍보다는 음악이 친숙하고, 음악 이론에 익숙하며, 악기를 만들고 싶으시다면 바로 상급자용 튜토리얼부터 시작하여 여러가지를 만들기 시작하시면 됩니다. 위의 튜토리얼은 음표를 배치하는 법, 저주파 발진기 등 맞춤형 Oscillator(발진기)와 Envelope를 설계하는 법 등을 설명하고 있으니, 이를 읽으며 사이드바의 자료를 참고하시면 될 것입니다.</p>
+
+<p>프로그래밍에 전혀 익숙하지 않으시다면 자바스크립트 기초 튜토리얼을 먼저 읽고 이 문서를 다시 읽으시는 게 나을 수도 있습니다. 모질라의 <a href="https://wiki.developer.mozilla.org/en-US/docs/Learn/JavaScript">자바스크립트 기초</a>만큼 좋은 자료도 몇 없죠.</p>
+
+<h2 id="Web_Audio_API_Interfaces">Web Audio API Interfaces</h2>
+
+<p>The Web Audio API has a number of interfaces and associated events, which are split up into nine categories of functionality.</p>
+
+<h3 id="General_audio_graph_definition">General audio graph definition</h3>
+
+<p>General containers and definitions that shape audio graphs when using the Web Audio API.</p>
+
+<dl>
+ <dt>{{domxref("AudioContext")}}</dt>
+ <dd>The <strong><code>AudioContext</code></strong> interface represents an audio-processing graph built from audio modules linked together, each represented by an {{domxref("AudioNode")}}. An <code>AudioContext</code> controls the creation of the nodes it contains and the execution of the audio processing or decoding. You need to create an <code>AudioContext</code> before you do anything else, as everything happens inside a context.</dd>
+ <dt>{{domxref("AudioNode")}}</dt>
+ <dd>The <strong><code>AudioNode</code></strong> interface represents an audio-processing module, such as an audio source (e.g. an HTML {{HTMLElement("audio")}} or {{HTMLElement("video")}} element), an audio destination, or an intermediate processing module (e.g. a filter like {{domxref("BiquadFilterNode")}} or a volume control like {{domxref("GainNode")}}).</dd>
+ <dt>{{domxref("AudioParam")}}</dt>
+ <dd>The <strong><code>AudioParam</code></strong> interface represents an audio-related parameter, such as one of an {{domxref("AudioNode")}}. It can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and follow a specific pattern.</dd>
+ <dt>The {{event("ended")}} event</dt>
+ <dd>
+ <p>The <strong><code>ended</code></strong> event is fired when playback has stopped because the end of the media was reached.</p>
+ </dd>
+</dl>
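+
+<p>As a small illustration of the above, an {{domxref("AudioParam")}} such as a gain can be scheduled rather than set directly. A minimal sketch, assuming an existing <code>AudioContext</code> called <code>audioCtx</code>:</p>
+
+<pre class="brush: js">var gainNode = audioCtx.createGain();
+var now = audioCtx.currentTime;
+
+// hold the volume at 1 now, then fade it out over the next two seconds
+gainNode.gain.setValueAtTime(1, now);
+gainNode.gain.linearRampToValueAtTime(0, now + 2);</pre>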
+
+<h3 id="오디오_소스_정의하기">오디오 소스 정의하기</h3>
+
+<p>Web Audio API에서 사용하기 위한 오디오 소스를 정의하는 인터페이스입니다.</p>
+
+<dl>
+ <dt>{{domxref("OscillatorNode")}}</dt>
+ <dd>The <strong><code>OscillatorNode</code></strong> interface represents a periodic waveform, such as a triangle or sine wave. It is an {{domxref("AudioNode")}} audio-processing module that generates a wave of a given frequency.</dd>
+ <dt>{{domxref("AudioBuffer")}}</dt>
+ <dd>The <strong><code>AudioBuffer</code></strong> interface represents a short audio asset residing in memory, created from an audio file using the {{ domxref("AudioContext.decodeAudioData()") }} method, or from raw data using {{ domxref("AudioContext.createBuffer()") }}. Once decoded into this form, the audio can be put into an {{ domxref("AudioBufferSourceNode") }}.</dd>
+ <dt>{{domxref("AudioBufferSourceNode")}}</dt>
+ <dd>The <strong><code>AudioBufferSourceNode</code></strong> interface represents an audio source consisting of in-memory audio data stored in an {{domxref("AudioBuffer")}}. It is an {{domxref("AudioNode")}} that acts as an audio source.</dd>
+ <dt>{{domxref("MediaElementAudioSourceNode")}}</dt>
+ <dd>The <strong><code>MediaElementAudioSourceNode</code></strong> interface represents an audio source consisting of an {{ htmlelement("audio") }} or {{ htmlelement("video") }} HTML element. It is an {{domxref("AudioNode")}} that acts as an audio source.</dd>
+ <dt>{{domxref("MediaStreamAudioSourceNode")}}</dt>
+ <dd>The <strong><code>MediaStreamAudioSourceNode</code></strong> interface represents an audio source consisting of a <a href="/en-US/docs/WebRTC" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}} (such as a webcam, a microphone, or a stream sent from a remote computer). It is an {{domxref("AudioNode")}} that acts as an audio source.</dd>
+</dl>
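+
+<p>As an example of the buffer-based sources above, the following minimal sketch fetches a file, decodes it, and plays it once. The URL is a placeholder, and an existing <code>AudioContext</code> called <code>audioCtx</code> is assumed:</p>
+
+<pre class="brush: js">fetch('sample.mp3') // placeholder URL
+  .then(function(response) { return response.arrayBuffer(); })
+  .then(function(arrayBuffer) { return audioCtx.decodeAudioData(arrayBuffer); })
+  .then(function(audioBuffer) {
+    var source = audioCtx.createBufferSource();
+    source.buffer = audioBuffer;
+    source.connect(audioCtx.destination);
+    source.start();
+  });</pre>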
+
+<h3 id="오디오_이펙트_필터_정의하기">오디오 이펙트 필터 정의하기</h3>
+
+<p>오디오 소스에 적용할 이펙트를 정의하는 인터페이스입니다.</p>
+
+<dl>
+ <dt>{{domxref("BiquadFilterNode")}}</dt>
+ <dd>The <strong><code>BiquadFilterNode</code></strong> interface represents a simple low-order filter. It is an {{domxref("AudioNode")}} that can represent different kinds of filters, tone control devices, or graphic equalizers. A <code>BiquadFilterNode</code> always has exactly one input and one output.</dd>
+ <dt>{{domxref("ConvolverNode")}}</dt>
+ <dd>The <strong><code>ConvolverNode</code></strong> interface is an {{domxref("AudioNode")}} that performs a linear convolution on a given {{domxref("AudioBuffer")}}, and is often used to achieve a reverb effect.</dd>
+ <dt>{{domxref("DelayNode")}}</dt>
+ <dd>The <strong><code>DelayNode</code></strong> interface represents a delay line; an {{domxref("AudioNode")}} audio-processing module that causes a delay between the arrival of input data and its propagation to the output.</dd>
+ <dt>{{domxref("DynamicsCompressorNode")}}</dt>
+ <dd>The <strong><code>DynamicsCompressorNode</code></strong> interface provides a compression effect, which lowers the volume of the loudest parts of the signal in order to help prevent the clipping and distortion that can occur when multiple sounds are played at the same time.</dd>
+ <dt>{{domxref("GainNode")}}</dt>
+ <dd>The <strong><code>GainNode</code></strong> interface represents a change in volume. It is an {{domxref("AudioNode")}} audio module that applies a given gain to the input data before it is propagated to the output.</dd>
+ <dt>{{domxref("StereoPannerNode")}}</dt>
+ <dd>The <strong><code>StereoPannerNode</code></strong> interface represents a simple stereo panner node that can be used to pan an audio stream left or right.</dd>
+ <dt>{{domxref("WaveShaperNode")}}</dt>
+ <dd>The <strong><code>WaveShaperNode</code></strong> interface represents a non-linear distorter. It is an {{domxref("AudioNode")}} that applies a waveshaping distortion to the signal using a curve. Besides obvious distortion effects, it is often used to add a warm feeling to the signal.</dd>
+ <dt>{{domxref("PeriodicWave")}}</dt>
+ <dd>Describes a periodic waveform that can be used to shape the output of an {{domxref("OscillatorNode")}}.</dd>
+</dl>
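+
+<p>A minimal sketch chaining a few of these effect nodes between a source and the destination, assuming existing <code>audioCtx</code> and <code>source</code> objects:</p>
+
+<pre class="brush: js">var filter = audioCtx.createBiquadFilter();
+filter.type = 'lowpass';
+filter.frequency.value = 800; // attenuate everything above ~800 Hz
+
+var panner = audioCtx.createStereoPanner();
+panner.pan.value = -0.5;      // pan halfway to the left
+
+var gainNode = audioCtx.createGain();
+gainNode.gain.value = 0.7;    // lower the overall volume a little
+
+source.connect(filter);
+filter.connect(panner);
+panner.connect(gainNode);
+gainNode.connect(audioCtx.destination);</pre>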
+
+<h3 id="오디오_목적지_정의하기">오디오 목적지 정의하기</h3>
+
+<p>처리된 오디오를 어디에 출력할지 정의하는 인터페이스입니다.</p>
+
+<dl>
+ <dt>{{domxref("AudioDestinationNode")}}</dt>
+ <dd>The <strong><code>AudioDestinationNode</code></strong> interface represents the end destination of an audio source in a given context, usually the speakers of your device.</dd>
+ <dt>{{domxref("MediaStreamAudioDestinationNode")}}</dt>
+ <dd>The <strong><code>MediaStreamAudioDestinationNode</code></strong> interface represents an audio destination consisting of a <a href="/en-US/docs/WebRTC" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}} with a single <code>AudioMediaStreamTrack</code>, which can be used in a similar way to a {{domxref("MediaStream")}} obtained from {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}. It is an {{domxref("AudioNode")}} that acts as an audio destination.</dd>
+</dl>
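+
+<p>For example, a graph can be routed into a {{domxref("MediaStream")}} instead of the speakers. A minimal sketch, assuming existing <code>audioCtx</code> and <code>source</code> objects:</p>
+
+<pre class="brush: js">var streamDestination = audioCtx.createMediaStreamDestination();
+source.connect(streamDestination);
+
+// streamDestination.stream is a MediaStream: it can be recorded,
+// or sent to a remote peer over WebRTC
+var recorder = new MediaRecorder(streamDestination.stream);
+recorder.start();</pre>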
+
+<h3 id="데이터_분석_및_시각화">데이터 분석 및 시각화</h3>
+
+<p>오디오에서 재생시간이나 주파수 등의 데이터를 추출하기 위한 인터페이스입니다.</p>
+
+<dl>
+ <dt>{{domxref("AnalyserNode")}}</dt>
+ <dd>The <strong><code>AnalyserNode</code></strong> interface represents a node that provides real-time frequency and time-domain analysis information, for the purposes of data analysis and visualization.</dd>
+</dl>
+
+<h3 id="오디오_채널을_분리하고_병합하기">오디오 채널을 분리하고 병합하기</h3>
+
+<p>오디오 채널들을 분리하거나 병합하기 위한 인터페이스입니다.</p>
+
+<dl>
+ <dt>{{domxref("ChannelSplitterNode")}}</dt>
+ <dd>The <strong><code>ChannelSplitterNode</code></strong> interface separates the different channels of an audio source into a set of mono outputs.</dd>
+ <dt>{{domxref("ChannelMergerNode")}}</dt>
+ <dd>The <strong><code>ChannelMergerNode</code></strong> interface reunites several mono inputs into a single output. Each input will be used to fill a channel of the output.</dd>
+</dl>
+
+<h3 id="오디오_공간화">오디오 공간화</h3>
+
+<p>오디오 소스에 오디오 공간화 패닝 이펙트를 추가하는 인터페이스입니다.</p>
+
+<dl>
+ <dt>{{domxref("AudioListener")}}</dt>
+ <dd>The <strong><code>AudioListener</code></strong> interface represents the position and orientation of the unique person listening to the audio scene, as used in audio spatialization.</dd>
+ <dt>{{domxref("PannerNode")}}</dt>
+ <dd>The <strong><code>PannerNode</code></strong> interface represents the behavior of a signal in space. It is an {{domxref("AudioNode")}} audio-processing module that describes its position with right-hand Cartesian coordinates, its movement using a velocity vector, and its directionality using a directionality cone.</dd>
+</dl>
+
+<h3 id="자바스크립트에서_오디오_처리하기">자바스크립트에서 오디오 처리하기</h3>
+
+<p>자바스크립트에서 오디오 데이터를 처리하기 위한 코드를 작성할 수 있습니다. 이렇게 하려면 아래에 나열된 인터페이스와 이벤트를 사용하세요.</p>
+
+<div class="note">
+<p>This is as of the 29 August 2014 version of the Web Audio API spec. These features are deprecated and scheduled to be replaced by {{ anch("Audio_Workers") }}.</p>
+</div>
+
+<dl>
+ <dt>{{domxref("ScriptProcessorNode")}}</dt>
+ <dd>The <strong><code>ScriptProcessorNode</code></strong> interface allows the generation, processing, or analysis of audio using JavaScript. It is an {{domxref("AudioNode")}} audio-processing module that is linked to two buffers, one containing the current input and one containing the output. An event implementing the {{domxref("AudioProcessingEvent")}} interface is sent to the object each time the input buffer contains new data, and the event handler terminates when it has filled the output buffer with data.</dd>
+ <dt>{{event("audioprocess")}} (event)</dt>
+ <dd>The <strong><code>audioprocess</code></strong> event is fired when the input buffer of a Web Audio API {{domxref("ScriptProcessorNode")}} is ready to be processed.</dd>
+ <dt>{{domxref("AudioProcessingEvent")}}</dt>
+ <dd>The <a href="/en-US/docs/Web_Audio_API" title="/en-US/docs/Web_Audio_API">Web Audio API</a> <strong><code>AudioProcessingEvent</code></strong> represents the event that occurs when a {{domxref("ScriptProcessorNode")}} input buffer is ready to be processed.</dd>
+</dl>
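+
+<p>A minimal sketch of the (deprecated) <code>ScriptProcessorNode</code> pattern, copying the input to the output at half the volume; existing <code>audioCtx</code> and <code>source</code> objects are assumed:</p>
+
+<pre class="brush: js">// buffer size 4096, 1 input channel, 1 output channel
+var processor = audioCtx.createScriptProcessor(4096, 1, 1);
+
+processor.onaudioprocess = function(event) {
+  var input = event.inputBuffer.getChannelData(0);
+  var output = event.outputBuffer.getChannelData(0);
+  for (var i = 0; i &lt; input.length; i++) {
+    output[i] = input[i] * 0.5; // halve the volume
+  }
+};
+
+source.connect(processor);
+processor.connect(audioCtx.destination);</pre>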
+
+<h3 id="오프라인백그라운드_오디오_처리하기">오프라인/백그라운드 오디오 처리하기</h3>
+
+<p>다음을 이용해 백그라운드(장치의 스피커가 아닌 {{domxref("AudioBuffer")}}으로 렌더링)에서 오디오 그래프를 신속하게 처리/렌더링 할수 있습니다.</p>
+
+<dl>
+ <dt>{{domxref("OfflineAudioContext")}}</dt>
+ <dd>The <strong><code>OfflineAudioContext</code></strong> interface is an {{domxref("AudioContext")}} interface representing an audio-processing graph built from linked {{domxref("AudioNode")}}s. In contrast with a standard <code>AudioContext</code>, an <code>OfflineAudioContext</code> doesn't really render the audio but rather generates it, as fast as it can, into a buffer.</dd>
+ <dt>{{event("complete")}} (event)</dt>
+ <dd>The <strong><code>complete</code></strong> event is fired when the rendering of an {{domxref("OfflineAudioContext")}} is terminated.</dd>
+ <dt>{{domxref("OfflineAudioCompletionEvent")}}</dt>
+ <dd>The <strong><code>OfflineAudioCompletionEvent</code></strong> represents the event that occurs when the processing of an {{domxref("OfflineAudioContext")}} is terminated. The {{event("complete")}} event implements this interface.</dd>
+</dl>
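+
+<p>A minimal sketch: render two seconds of a 440 Hz tone into an {{domxref("AudioBuffer")}} without playing it through the speakers.</p>
+
+<pre class="brush: js">// 2 channels, 2 seconds long, at a 44100 Hz sample rate
+var offlineCtx = new OfflineAudioContext(2, 44100 * 2, 44100);
+
+var osc = offlineCtx.createOscillator();
+osc.frequency.value = 440;
+osc.connect(offlineCtx.destination);
+osc.start();
+
+offlineCtx.startRendering().then(function(renderedBuffer) {
+  console.log('Rendering complete: ' + renderedBuffer.duration + ' seconds');
+});</pre>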
+
+<h3 id="Audio_Workers" name="Audio_Workers">오디오 워커</h3>
+
+<p>오디오 워커는 <a href="/en-US/docs/Web/Guide/Performance/Using_web_workers">web worker</a> 컨텍스트 내에서 스크립팅된 오디오 처리를 관리하기 위한 기능을 제공하며, 두어가지 인터페이스로 정의되어 있습니다(2014년 8월 29일 새로운 기능이 추가되었습니다). 이는 아직 모든 브라우저에서 구현되지 않았습니다. 구현된 브라우저에서는 <a href="#Audio_processing_via_JavaScript">Audio processing in JavaScript</a>에서 설명된 {{domxref("ScriptProcessorNode")}}를 포함한 다른 기능을 대체합니다.</p>
+
+<dl>
+ <dt>{{domxref("AudioWorkerNode")}}</dt>
+ <dd>The <strong><code>AudioWorkerNode</code></strong> interface represents an {{domxref("AudioNode")}} that interacts with a worker thread to generate, process, or analyze audio directly.</dd>
+ <dt>{{domxref("AudioWorkerGlobalScope")}}</dt>
+ <dd>The <strong><code>AudioWorkerGlobalScope</code></strong> interface is a <strong><code>DedicatedWorkerGlobalScope</code></strong>-derived object representing the worker context in which an audio processing script runs; it is designed to enable the generation, processing, and analysis of audio data directly using JavaScript in a worker thread.</dd>
+ <dt>{{domxref("AudioProcessEvent")}}</dt>
+ <dd>This is an <code>Event</code> object that is dispatched to {{domxref("AudioWorkerGlobalScope")}} objects to perform processing.</dd>
+</dl>
+
+<h2 id="Example" name="Example">Obsolete interfaces</h2>
+
+<p>The following interfaces were defined in old versions of the Web Audio API spec, but are now obsolete and have been replaced by other interfaces.</p>
+
+<dl>
+ <dt>{{domxref("JavaScriptNode")}}</dt>
+ <dd>Used for direct audio processing via JavaScript. This interface is obsolete, and has been replaced by {{domxref("ScriptProcessorNode")}}.</dd>
+ <dt>{{domxref("WaveTableNode")}}</dt>
+ <dd>Used to define a periodic waveform. This interface is obsolete, and has been replaced by {{domxref("PeriodicWave")}}.</dd>
+</dl>
+
+<h2 id="Example" name="Example">Example</h2>
+
+<p>This example shows a wide variety of Web Audio API functions being used. You can see this code in action on the <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-o-matic</a> demo (also check out the <a href="https://github.com/mdn/voice-change-o-matic">full source code at Github</a>) — this is an experimental voice changer toy demo; keep your speakers turned down low when you use it, at least to start!</p>
+
+<p>The Web Audio API lines are highlighted; if you want to find out more about what the different methods, etc. do, have a search around the reference pages.</p>
+
+<pre class="brush: js; highlight:[1,2,9,10,11,12,36,37,38,39,40,41,62,63,72,114,115,121,123,124,125,147,151] notranslate">var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); // define audio context
+// Webkit/blink browsers need prefix, Safari won't work without window.
+
+var voiceSelect = document.getElementById("voice"); // select box for selecting voice effect options
+var visualSelect = document.getElementById("visual"); // select box for selecting audio visualization options
+var mute = document.querySelector('.mute'); // mute button
+var drawVisual; // requestAnimationFrame
+
+var analyser = audioCtx.createAnalyser();
+var distortion = audioCtx.createWaveShaper();
+var gainNode = audioCtx.createGain();
+var biquadFilter = audioCtx.createBiquadFilter();
+
+function makeDistortionCurve(amount) { // function to make curve shape for distortion/wave shaper node to use
+  var k = typeof amount === 'number' ? amount : 50,
+    n_samples = 44100,
+    curve = new Float32Array(n_samples),
+    deg = Math.PI / 180,
+    i = 0,
+    x;
+  for ( ; i &lt; n_samples; ++i ) {
+    x = i * 2 / n_samples - 1;
+    curve[i] = ( 3 + k ) * x * 20 * deg / ( Math.PI + k * Math.abs(x) );
+  }
+  return curve;
+};
+
+navigator.getUserMedia (
+  // constraints - only audio needed for this app
+  {
+    audio: true
+  },
+
+  // Success callback
+  function(stream) {
+    source = audioCtx.createMediaStreamSource(stream);
+    source.connect(analyser);
+    analyser.connect(distortion);
+    distortion.connect(biquadFilter);
+    biquadFilter.connect(gainNode);
+    gainNode.connect(audioCtx.destination); // connecting the different audio graph nodes together
+
+    visualize(stream);
+    voiceChange();
+
+  },
+
+  // Error callback
+  function(err) {
+    console.log('The following gUM error occured: ' + err);
+  }
+);
+
+function visualize(stream) {
+  WIDTH = canvas.width;
+  HEIGHT = canvas.height;
+
+  var visualSetting = visualSelect.value;
+  console.log(visualSetting);
+
+  if(visualSetting == "sinewave") {
+    analyser.fftSize = 2048;
+    var bufferLength = analyser.frequencyBinCount; // half the FFT value
+    var dataArray = new Uint8Array(bufferLength); // create an array to store the data
+
+    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+
+    function draw() {
+
+      drawVisual = requestAnimationFrame(draw);
+
+      analyser.getByteTimeDomainData(dataArray); // get waveform data and put it into the array created above
+
+      canvasCtx.fillStyle = 'rgb(200, 200, 200)'; // draw wave with canvas
+      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+      canvasCtx.lineWidth = 2;
+      canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+
+      canvasCtx.beginPath();
+
+      var sliceWidth = WIDTH * 1.0 / bufferLength;
+      var x = 0;
+
+      for(var i = 0; i &lt; bufferLength; i++) {
+
+        var v = dataArray[i] / 128.0;
+        var y = v * HEIGHT/2;
+
+        if(i === 0) {
+          canvasCtx.moveTo(x, y);
+        } else {
+          canvasCtx.lineTo(x, y);
+        }
+
+        x += sliceWidth;
+      }
+
+      canvasCtx.lineTo(canvas.width, canvas.height/2);
+      canvasCtx.stroke();
+    };
+
+    draw();
+
+  } else if(visualSetting == "off") {
+    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+    canvasCtx.fillStyle = "red";
+    canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+  }
+
+}
+
+function voiceChange() {
+  distortion.curve = new Float32Array;
+  biquadFilter.gain.value = 0; // reset the effects each time the voiceChange function is run
+
+  var voiceSetting = voiceSelect.value;
+  console.log(voiceSetting);
+
+  if(voiceSetting == "distortion") {
+    distortion.curve = makeDistortionCurve(400); // apply distortion to sound using waveshaper node
+  } else if(voiceSetting == "biquad") {
+    biquadFilter.type = "lowshelf";
+    biquadFilter.frequency.value = 1000;
+    biquadFilter.gain.value = 25; // apply lowshelf filter to sounds using biquad
+  } else if(voiceSetting == "off") {
+    console.log("Voice settings turned off"); // do nothing, as off option was chosen
+  }
+
+}
+
+// event listeners to change visualize and voice settings
+
+visualSelect.onchange = function() {
+  window.cancelAnimationFrame(drawVisual);
+  visualize(stream);
+}
+
+voiceSelect.onchange = function() {
+  voiceChange();
+}
+
+mute.onclick = voiceMute;
+
+function voiceMute() { // toggle to mute and unmute sound
+  if(mute.id == "") {
+    gainNode.gain.value = 0; // gain set to 0 to mute sound
+    mute.id = "activated";
+    mute.innerHTML = "Unmute";
+  } else {
+    gainNode.gain.value = 1; // gain set to 1 to unmute sound
+    mute.id = "";
+    mute.innerHTML = "Mute";
+  }
+}
+</pre>
+
+<h2 id="Specifications">Specifications</h2>
+
+<table class="standard-table">
+ <tbody>
+ <tr>
+ <th scope="col">Specification</th>
+ <th scope="col">Status</th>
+ <th scope="col">Comment</th>
+ </tr>
+ <tr>
+ <td>{{SpecName('Web Audio API')}}</td>
+ <td>{{Spec2('Web Audio API')}}</td>
+ <td></td>
+ </tr>
+ </tbody>
+</table>
+
+<h2 id="Browser_compatibility">Browser compatibility</h2>
+
+<div>{{CompatibilityTable}}</div>
+
+<div id="compat-desktop">
+<table class="compat-table">
+ <tbody>
+ <tr>
+ <th>Feature</th>
+ <th>Chrome</th>
+ <th>Edge</th>
+ <th>Firefox (Gecko)</th>
+ <th>Internet Explorer</th>
+ <th>Opera</th>
+ <th>Safari (WebKit)</th>
+ </tr>
+ <tr>
+ <td>Basic support</td>
+ <td>14 {{property_prefix("webkit")}}</td>
+ <td>{{CompatVersionUnknown}}</td>
+ <td>23</td>
+ <td>{{CompatNo}}</td>
+ <td>15 {{property_prefix("webkit")}}<br>
+ 22 (unprefixed)</td>
+ <td>6 {{property_prefix("webkit")}}</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+
+<div id="compat-mobile">
+<table class="compat-table">
+ <tbody>
+ <tr>
+ <th>Feature</th>
+ <th>Android</th>
+ <th>Chrome</th>
+ <th>Edge</th>
+ <th>Firefox Mobile (Gecko)</th>
+ <th>Firefox OS</th>
+ <th>IE Phone</th>
+ <th>Opera Mobile</th>
+ <th>Safari Mobile</th>
+ </tr>
+ <tr>
+ <td>Basic support</td>
+ <td>{{CompatNo}}</td>
+ <td>28 {{property_prefix("webkit")}}</td>
+ <td>{{CompatVersionUnknown}}</td>
+ <td>25</td>
+ <td>1.2</td>
+ <td>{{CompatNo}}</td>
+ <td>{{CompatNo}}</td>
+ <td>6 {{property_prefix("webkit")}}</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+
+<h2 id="See_also">See also</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a></li>
+ <li><a href="http://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic example</a></li>
+ <li><a href="http://mdn.github.io/violent-theremin/">Violent Theremin example</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialisation_basics">Web audio spatialisation basics</a></li>
+ <li><a href="http://www.html5rocks.com/tutorials/webaudio/positional_audio/" title="http://www.html5rocks.com/tutorials/webaudio/positional_audio/">Mixing Positional Audio and WebGL</a></li>
+ <li><a href="http://www.html5rocks.com/tutorials/webaudio/games/" title="http://www.html5rocks.com/tutorials/webaudio/games/">Developing Game Audio with the Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext" title="/en-US/docs/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext">Porting webkitAudioContext code to standards based AudioContext</a></li>
+ <li><a href="https://github.com/bit101/tones">Tones</a>: a simple library for playing specific tones/notes using the Web Audio API.</li>
+ <li><a href="https://github.com/goldfire/howler.js/">howler.js</a>: a JS audio library that defaults to <a href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">Web Audio API</a> and falls back to <a href="http://www.whatwg.org/specs/web-apps/current-work/#the-audio-element">HTML5 Audio</a>, as well as providing other useful features.</li>
+ <li><a href="https://github.com/mattlima/mooog">Mooog</a>: jQuery-style chaining of AudioNodes, mixer-style sends/returns, and more.</li>
+</ul>
+
+<section id="Quick_Links">
+<h3 id="Quicklinks">Quicklinks</h3>
+
+<ol>
+ <li data-default-state="open"><strong><a href="#">Guides</a></strong>
+
+ <ol>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio spatialization basics</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext" title="/en-US/docs/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext">Porting webkitAudioContext code to standards based AudioContext</a></li>
+ </ol>
+ </li>
+ <li data-default-state="open"><strong><a href="#">Examples</a></strong>
+ <ol>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Simple_synth">Simple synth keyboard</a></li>
+ <li><a href="http://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a></li>
+ <li><a href="http://mdn.github.io/violent-theremin/">Violent Theremin</a></li>
+ </ol>
+ </li>
+ <li data-default-state="open"><strong><a href="#">Interfaces</a></strong>
+ <ol>
+ <li>{{domxref("AnalyserNode")}}</li>
+ <li>{{domxref("AudioBuffer")}}</li>
+ <li>{{domxref("AudioBufferSourceNode")}}</li>
+ <li>{{domxref("AudioContext")}}</li>
+ <li>{{domxref("AudioDestinationNode")}}</li>
+ <li>{{domxref("AudioListener")}}</li>
+ <li>{{domxref("AudioNode")}}</li>
+ <li>{{domxref("AudioParam")}}</li>
+ <li>{{event("audioprocess")}} (event)</li>
+ <li>{{domxref("AudioProcessingEvent")}}</li>
+ <li>{{domxref("BiquadFilterNode")}}</li>
+ <li>{{domxref("ChannelMergerNode")}}</li>
+ <li>{{domxref("ChannelSplitterNode")}}</li>
+ <li>{{event("complete")}} (event)</li>
+ <li>{{domxref("ConvolverNode")}}</li>
+ <li>{{domxref("DelayNode")}}</li>
+ <li>{{domxref("DynamicsCompressorNode")}}</li>
+ <li>{{event("ended_(Web_Audio)", "ended")}} (event)</li>
+ <li>{{domxref("GainNode")}}</li>
+ <li>{{domxref("MediaElementAudioSourceNode")}}</li>
+ <li>{{domxref("MediaStreamAudioDestinationNode")}}</li>
+ <li>{{domxref("MediaStreamAudioSourceNode")}}</li>
+ <li>{{domxref("OfflineAudioCompletionEvent")}}</li>
+ <li>{{domxref("OfflineAudioContext")}}</li>
+ <li>{{domxref("OscillatorNode")}}</li>
+ <li>{{domxref("PannerNode")}}</li>
+ <li>{{domxref("PeriodicWave")}}</li>
+ <li>{{domxref("ScriptProcessorNode")}}</li>
+ <li>{{domxref("WaveShaperNode")}}</li>
+ </ol>
+ </li>
+</ol>
+</section>
diff --git a/files/ko/web/api/web_audio_api/using_web_audio_api/index.html b/files/ko/web/api/web_audio_api/using_web_audio_api/index.html
new file mode 100644
index 0000000000..3b64b5809c
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/using_web_audio_api/index.html
@@ -0,0 +1,238 @@
+---
+title: Using the Web Audio API
+slug: Web/API/Web_Audio_API/Using_Web_Audio_API
+translation_of: Web/API/Web_Audio_API/Using_Web_Audio_API
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<p class="summary" id="webaudioapibasics"><span class="seoSummary">Let's take a look at getting started with the <a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a>. We'll briefly look at some concepts, then study a simple boombox example that allows us to load an audio track, play and pause it, and change its volume and stereo panning.</span></p>
+
+<p>The Web Audio API does not replace the {{HTMLElement("audio")}} media element, but rather complements it, just like {{HTMLElement("canvas")}} coexists alongside the {{HTMLElement("img")}} element. Your use case will determine what tools you use to implement audio. If you simply want to control playback of an audio track, the <code>&lt;audio&gt;</code> media element provides a better, quicker solution than the Web Audio API. If you want to carry out more complex audio processing, as well as playback, the Web Audio API provides much more power and control.</p>
+
+<p>A powerful feature of the Web Audio API is that it does not have a strict "sound call limitation". For example, there is no ceiling of 32 or 64 sound calls at one time. Some processors may be capable of playing more than 1,000 simultaneous sounds without stuttering.</p>
+
+<h2 id="Example_code">Example code</h2>
+
+<p>Our boombox looks like this:</p>
+
+<p><img alt="A boombox with play, pan, and volume controls" src="https://mdn.mozillademos.org/files/16197/boombox.png" style="border-style: solid; border-width: 1px; height: 646px; width: 1200px;"></p>
+
+<p>It is a retro cassette deck with a play button, a volume control, and a slider for stereo panning. We could make the cassette deck more complicated than this, but at this stage it is plenty for learning with.</p>
+
+<p><a href="https://codepen.io/Rumyra/pen/qyMzqN/">Check out the final demo here on Codepen</a>, or see the <a href="https://github.com/mdn/webaudio-examples/tree/master/audio-basics">source code on GitHub</a>.</p>
+
+<h2 id="Browser_support">Browser support</h2>
+
+<p>Modern browsers have good support for most features of the Web Audio API. There are a lot of features in the API, so for more exact information, check the browser compatibility tables at the bottom of each reference page.</p>
+
+<h2 id="Audio_graphs">Audio graphs</h2>
+
+<p>Everything within the Web Audio API is based around the concept of an audio graph, which is made up of nodes.</p>
+
+<p>The Web Audio API handles audio operations inside an <strong>audio context</strong>, and has been designed to allow <strong>modular routing</strong>. Basic audio operations are performed with <strong>audio nodes</strong>, which are linked together to form an <strong>audio routing graph</strong>. You have input nodes, which are the source of the sounds you are manipulating, modification nodes that change those sounds as desired, and output nodes (destinations), which allow you to save or hear those sounds.</p>
+
+<p>Several audio sources with different channel layouts are supported, even within a single context. Because of this modular design, you can create complex audio functions with dynamic effects.</p>
+
+<h2 id="Audio_context">Audio context</h2>
+
+<p>To be able to do anything with the Web Audio API, we need to create an instance of the audio context. This then gives us access to all the features and functionality of the API.</p>
+
+<pre class="brush: js">// for legacy browsers
+const AudioContext = window.AudioContext || window.webkitAudioContext;
+
+const audioContext = new AudioContext();
+</pre>
+
+<p>So what's going on when we do this? A {{domxref("BaseAudioContext")}} is created for us automatically and extended to an online audio context. We'll want this because we're looking to play live sound.</p>
+
+<div class="note">
+<p><strong>Note</strong>: If you just want to process audio data, for instance, buffer and stream it but not play it, you might want to look into creating an {{domxref("OfflineAudioContext")}}.</p>
+</div>
+
+<h2 id="Loading_sound">Loading sound</h2>
+
+<p>Now, the audio context we've created needs some sound to play through it. There are a few ways to do this with the API. Let's begin with a simple method — as we have a boombox, we most likely want to play a full song track. Also, for accessibility, it's nice to expose that track in the DOM. We'll expose the song on the page using an {{htmlelement("audio")}} element.</p>
+
+<pre class="brush: html">&lt;audio src="myCoolTrack.mp3" type="audio/mpeg"&gt;&lt;/audio&gt;
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: If the sound file you're loading is held on a different domain you will need to use the <code>crossorigin</code> attribute; see <a href="/en-US/docs/Web/HTTP/CORS">Cross Origin Resource Sharing (CORS)</a>  for more information.</p>
+</div>
+
+<p>To use all the nice things we get with the Web Audio API, we need to grab the source from this element and <em>pipe</em> it into the context we have created. Lucky for us there's a method that allows us to do just that — {{domxref("AudioContext.createMediaElementSource")}}:</p>
+
+<pre class="brush: js">// get the audio element
+const audioElement = document.querySelector('audio');
+
+// pass it into the audio context
+const track = audioContext.createMediaElementSource(audioElement);
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: The <code>&lt;audio&gt;</code> element above is represented in the DOM by an object of type {{domxref("HTMLMediaElement")}}, which comes with its own set of functionality. All of this has stayed intact; we are merely allowing the sound to be available to the Web Audio API.</p>
+</div>
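+<p>As mentioned above, there are other ways to get sound into the API. For comparison, another common approach (not used in this boombox example) is to fetch the file yourself and decode it into an {{domxref("AudioBuffer")}}. Here is a rough sketch, assuming a hypothetical file URL:</p>
+
+<pre class="brush: js">// a sketch of the fetch/decode approach; 'anotherTrack.mp3' is a placeholder URL
+fetch('anotherTrack.mp3')
+  .then((response) =&gt; response.arrayBuffer())
+  .then((arrayBuffer) =&gt; audioContext.decodeAudioData(arrayBuffer))
+  .then((audioBuffer) =&gt; {
+    // play the decoded buffer through an AudioBufferSourceNode
+    const source = audioContext.createBufferSource();
+    source.buffer = audioBuffer;
+    source.connect(audioContext.destination);
+    source.start();
+  });
+</pre>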
+
+<h2 id="Controlling_sound">Controlling sound</h2>
+
+<p>When playing sound on the web, it's important to allow the user to control it. Depending on the use case, there's a myriad of options, but we'll provide functionality to play/pause the sound, alter the track's volume, and pan it from left to right.</p>
+
+<p>Controlling sound programmatically from JavaScript code is covered by browsers' autoplay support policies, and as such it is likely to be blocked without permission being granted by the user (or a whitelist). Autoplay policies typically require either explicit permission or a user engagement with the page before scripts can trigger audio to play.</p>
+
+<p>These special requirements are in place essentially because unexpected sounds can be annoying and intrusive, and can cause accessibility problems. You can learn more about this in our article <a href="/en-US/docs/Web/Media/Autoplay_guide">Autoplay guide for media and Web Audio APIs</a>.</p>
+
+<p>Since our scripts are playing audio in response to a user input event (a click on a play button, for instance), we're in good shape and should have no problems from autoplay blocking. So, let's start by taking a look at our play and pause functionality. We have a play button that changes to a pause button when the track is playing:</p>
+
+<pre class="brush: html">&lt;button data-playing="false" role="switch" aria-checked="false"&gt;
+ &lt;span&gt;Play/Pause&lt;/span&gt;
+&lt;/button&gt;
+</pre>
+
+<p>Before we can play our track we need to connect our audio graph from the audio source/input node to the destination.</p>
+
+<p>We've already created an input node by passing our audio element into the API. For the most part, you don't need to create an output node; you can just connect your other nodes to {{domxref("BaseAudioContext.destination")}}, which handles the situation for you:</p>
+
+<pre class="brush: js">track.connect(audioContext.destination);
+</pre>
+
+<p>A good way to understand what is happening here is to draw the nodes out as an audio graph diagram. This is what our current audio graph looks like:</p>
+
+<p><img alt="an audio graph with an audio element source connected to the default destination" src="https://mdn.mozillademos.org/files/16195/graph1.jpg" style="border-style: solid; border-width: 1px; height: 486px; width: 1426px;"></p>
+
+<p>Now we can add the play and pause functionality.</p>
+
+<pre class="brush: js">// select our play button
+const playButton = document.querySelector('button');
+
+playButton.addEventListener('click', function() {
+
+ // check if context is in suspended state (autoplay policy)
+ if (audioContext.state === 'suspended') {
+ audioContext.resume();
+ }
+
+ // play or pause track depending on state
+ if (this.dataset.playing === 'false') {
+ audioElement.play();
+ this.dataset.playing = 'true';
+ } else if (this.dataset.playing === 'true') {
+ audioElement.pause();
+ this.dataset.playing = 'false';
+ }
+
+}, false);
+</pre>
+
+<p>We also need to take into account what to do when the track finishes playing. Our <code>HTMLMediaElement</code> fires an <code>ended</code> event once it's finished playing, so we can listen for that and run code accordingly:</p>
+
+<pre class="brush: js">audioElement.addEventListener('ended', () =&gt; {
+ playButton.dataset.playing = 'false';
+}, false);
+</pre>
+
+<h2 id="Modifying_sound">Modifying sound</h2>
+
+<p>Let's delve into some basic modification nodes, to change the sound that we have. This is where the Web Audio API really starts to come in handy. First of all, let's change the volume. This can be done using a {{domxref("GainNode")}}, which represents how big our sound wave is.</p>
+
+<p>There are two ways you can create nodes with the Web Audio API: you can use the factory method on the context itself (e.g. <code>audioContext.createGain()</code>), or you can use a constructor of the node (e.g. <code>new GainNode()</code>). We'll use the factory method in our code:</p>
+
+<pre class="brush: js">const gainNode = audioContext.createGain();
+</pre>
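+<p>For comparison, the constructor form of the same node would look something like this (the variable name is just illustrative; we'll stick with the factory method in this article):</p>
+
+<pre class="brush: js">// the equivalent of audioContext.createGain(), written with the constructor form
+const gainNodeViaConstructor = new GainNode(audioContext);
+</pre>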
+
+<p>Now we have to update our audio graph from before, so the input is connected to the gain, then the gain node is connected to the destination:</p>
+
+<pre class="brush: js">track.connect(gainNode).connect(audioContext.destination);
+</pre>
+
+<p>This will make our audio graph look like this:</p>
+
+<p><img alt="an audio graph with an audio element source, connected to a gain node that modifies the audio source, and then going to the default destination" src="https://mdn.mozillademos.org/files/16196/graph2.jpg" style="border-style: solid; border-width: 1px; height: 550px; width: 1774px;"></p>
+
+<p>The default value for gain is 1; this keeps the current volume the same. Gain can technically be set anywhere within the range of a single-precision float (roughly -3.4e38 to 3.4e38), but such extremes are rarely useful. Here we'll allow the boombox to move the gain up to 2 (double the original volume) and down to 0 (this will effectively mute our sound).</p>
+
+<p>Let's give the user control to do this — we'll use a <a href="/en-US/docs/Web/HTML/Element/input/range">range input</a>:</p>
+
+<pre class="brush: html">&lt;input type="range" id="volume" min="0" max="2" value="1" step="0.01"&gt;
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: Range inputs are a really handy input type for updating values on audio nodes. You can specify a range's values and use them directly with the audio node's parameters.</p>
+</div>
+
+<p>So let's grab this input's value and update the gain value when the input node has its value changed by the user:</p>
+
+<pre class="brush: js">const volumeControl = document.querySelector('#volume');
+
+volumeControl.addEventListener('input', function() {
+ gainNode.gain.value = this.value;
+}, false);
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: The values of node objects (e.g. <code>GainNode.gain</code>) are not simple values; they are actually objects of type {{domxref("AudioParam")}}, known as parameters. This is why we have to set <code>GainNode.gain</code>'s <code>value</code> property, rather than just setting the value on <code>gain</code> directly. Parameters are much more flexible than plain values; for example, you can pass a parameter a specific set of values to change between over a set period of time.</p>
+</div>
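+<p>For instance, instead of setting <code>gain.value</code> directly, you could schedule a smooth change using the {{domxref("AudioParam")}} automation methods. Here is a quick sketch of fading the gain down to silence over two seconds:</p>
+
+<pre class="brush: js">// fade the gain from its current value down to 0 over 2 seconds
+const now = audioContext.currentTime;
+gainNode.gain.setValueAtTime(gainNode.gain.value, now);
+gainNode.gain.linearRampToValueAtTime(0, now + 2);
+</pre>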
+
+<p>Great, now the user can update the track's volume! The gain node is the perfect node to use if you want to add mute functionality.</p>
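+<p>For example, a mute toggle could be sketched out like this, assuming a hypothetical <code>&lt;button id="mute"&gt;</code> element that isn't part of the example markup above:</p>
+
+<pre class="brush: js">// a sketch of mute functionality, assuming a &lt;button id="mute"&gt; exists in the page
+const muteButton = document.querySelector('#mute');
+let previousGain = 1;
+
+muteButton.addEventListener('click', function() {
+  if (gainNode.gain.value &gt; 0) {
+    previousGain = gainNode.gain.value; // remember the current volume
+    gainNode.gain.value = 0;            // mute
+  } else {
+    gainNode.gain.value = previousGain; // restore the previous volume
+  }
+}, false);
+</pre>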
+
+<h2 id="Adding_stereo_panning_to_our_app">Adding stereo panning to our app</h2>
+
+<p>Let's add another modification node to practise what we've just learnt.</p>
+
+<p>There's a {{domxref("StereoPannerNode")}} node, which changes the balance of the sound between the left and right speakers, if the user has stereo capabilities.</p>
+
+<div class="note">
+<p><strong>Note</strong>: The <code>StereoPannerNode</code> is for simple cases in which you just want stereo panning from left to right. There is also a {{domxref("PannerNode")}}, which allows for a great deal of control over 3D space, or sound <em>spatialisation</em>, for creating more complex effects. This is used in games and 3D apps to create birds flying overhead, or sound coming from behind the user, for instance.</p>
+</div>
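+<p>Just to illustrate the difference (our boombox sticks with the simpler <code>StereoPannerNode</code>), positioning a sound in 3D space with a <code>PannerNode</code> might look roughly like this; the coordinates are arbitrary example values and the node isn't connected into our graph:</p>
+
+<pre class="brush: js">// a rough sketch of 3D panning, not part of the boombox graph
+const panner3d = audioContext.createPanner();
+panner3d.positionX.value = 2;  // two units to the listener's right
+panner3d.positionZ.value = -1; // slightly in front of the listener (forward is -Z)
+</pre>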
+
+<p>To visualise our stereo panner node, we will be making our audio graph look like this:</p>
+
+<p><img alt="An image showing the audio graph showing an input node, two modification nodes (a gain node and a stereo panner node) and a destination node." src="https://mdn.mozillademos.org/files/16229/graphPan.jpg" style="border-style: solid; border-width: 1px; height: 532px; width: 2236px;"></p>
+
+<p>Let's use the constructor method of creating a node this time. When we do it this way, we have to pass in the context and any options that that particular node may take:</p>
+
+<pre class="brush: js">const pannerOptions = { pan: 0 };
+const panner = new StereoPannerNode(audioContext, pannerOptions);
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: The constructor method of creating nodes is not supported by all browsers at this time. The older factory methods are supported more widely.</p>
+</div>
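+<p>One possible fallback pattern, as a sketch in place of the constructor call above, is to test for the constructor and fall back to the factory method when it isn't available:</p>
+
+<pre class="brush: js">// fall back to the older factory method if the constructor isn't supported
+let panner;
+if (typeof StereoPannerNode === 'function') {
+  panner = new StereoPannerNode(audioContext, pannerOptions);
+} else {
+  panner = audioContext.createStereoPanner();
+  panner.pan.value = pannerOptions.pan;
+}
+</pre>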
+
+<p>Here our values range from -1 (far left) to 1 (far right). Again let's use a range type input to vary this parameter:</p>
+
+<pre class="brush: html">&lt;input type="range" id="panner" min="-1" max="1" value="0" step="0.01"&gt;
+</pre>
+
+<p>We use the values from that input to adjust our panner values in the same way as we did before:</p>
+
+<pre class="brush: js">const pannerControl = document.querySelector('#panner');
+
+pannerControl.addEventListener('input', function() {
+ panner.pan.value = this.value;
+}, false);
+</pre>
+
+<p>Let's adjust our audio graph again, to connect all the nodes together:</p>
+
+<pre class="brush: js">track.connect(gainNode).connect(panner).connect(audioContext.destination);
+</pre>
+
+<p>The only thing left to do is give the app a try: <a href="https://codepen.io/Rumyra/pen/qyMzqN/">Check out the final demo here on Codepen</a>.</p>
+
+<h2 id="Summary">Summary</h2>
+
+<p>Great! We have a boombox that plays our 'tape', and we can adjust the volume and stereo panning, giving us a fairly basic working audio graph.</p>
+
+<p>This covers quite a few of the basics that you need in order to start adding audio to your website or web app. There's a lot more functionality in the Web Audio API, but once you've grasped the concept of nodes and how to put your audio graph together, you can move on to looking at more complex functionality.</p>
+
+<h2 id="More_examples">More examples</h2>
+
+<p>There are other examples available to learn more about the Web Audio API.</p>
+
+<p>The <a href="https://github.com/mdn/voice-change-o-matic">Voice-change-O-matic</a> is a fun voice manipulator and sound visualization web app that allows you to choose different effects and visualizations. The application is fairly rudimentary, but it demonstrates the simultaneous use of multiple Web Audio API features. (<a href="https://mdn.github.io/voice-change-o-matic/">run the Voice-change-O-matic live</a>).</p>
+
+<p><img alt="A UI with a sound wave being shown, and options for choosing voice effects and visualizations." src="https://mdn.mozillademos.org/files/7921/voice-change-o-matic.png" style="border-style: solid; border-width: 1px; display: block; height: 500px; margin: 0px auto; width: 640px;"></p>
+
+<p>Another application developed specifically to demonstrate the Web Audio API is the <a href="http://mdn.github.io/violent-theremin/">Violent Theremin</a>, a simple web application that allows you to change pitch and volume by moving your mouse pointer. It also provides a psychedelic lightshow (<a href="https://github.com/mdn/violent-theremin">see Violent Theremin source code</a>).</p>
+
+<p><img alt="A page full of rainbow colours, with two buttons labeled Clear screen and mute. " src="https://mdn.mozillademos.org/files/7919/violent-theremin.png" style="border-style: solid; border-width: 1px; display: block; height: 458px; margin: 0px auto; width: 640px;"></p>
+
+<p>Also see our <a href="https://github.com/mdn/webaudio-examples">webaudio-examples repo</a> for more examples.</p>