Diffstat (limited to 'files/ko/web/api')
-rw-r--r-- files/ko/web/api/analysernode/analysernode/index.html | 59
-rw-r--r-- files/ko/web/api/analysernode/fftsize/index.html | 96
-rw-r--r-- files/ko/web/api/analysernode/frequencybincount/index.html | 82
-rw-r--r-- files/ko/web/api/analysernode/fttaudiodata_en.svg | 1
-rw-r--r-- files/ko/web/api/analysernode/getbytefrequencydata/index.html | 102
-rw-r--r-- files/ko/web/api/analysernode/getbytetimedomaindata/index.html | 98
-rw-r--r-- files/ko/web/api/analysernode/getfloatfrequencydata/index.html | 129
-rw-r--r-- files/ko/web/api/analysernode/getfloattimedomaindata/index.html | 104
-rw-r--r-- files/ko/web/api/analysernode/index.html | 178
-rw-r--r-- files/ko/web/api/analysernode/maxdecibels/index.html | 85
-rw-r--r-- files/ko/web/api/analysernode/mindecibels/index.html | 87
-rw-r--r-- files/ko/web/api/analysernode/smoothingtimeconstant/index.html | 92
-rw-r--r-- files/ko/web/api/baseaudiocontext/createperiodicwave/index.html | 8
-rw-r--r-- files/ko/web/api/baseaudiocontext/index.html | 122
-rw-r--r-- files/ko/web/api/history/state/index.html | 12
-rw-r--r-- files/ko/web/api/offlineaudiocontext/index.html | 148
-rw-r--r-- files/ko/web/api/periodicwave/index.html | 55
-rw-r--r-- files/ko/web/api/periodicwave/periodicwave/index.html | 70
-rw-r--r-- files/ko/web/api/streams_api/index.html | 4
-rw-r--r-- files/ko/web/api/web_audio_api/advanced_techniques/index.html | 586
-rw-r--r-- files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png | bin 0 -> 9782 bytes
-rw-r--r-- files/ko/web/api/web_audio_api/audio-context_.png | bin 0 -> 29346 bytes
-rw-r--r-- files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html | 241
-rw-r--r-- files/ko/web/api/web_audio_api/best_practices/index.html | 97
-rw-r--r-- files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg | 1
-rw-r--r-- files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html | 284
-rw-r--r-- files/ko/web/api/web_audio_api/index.html | 499
-rw-r--r-- files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html | 381
-rw-r--r-- files/ko/web/api/web_audio_api/simple_synth/index.html | 578
-rw-r--r-- files/ko/web/api/web_audio_api/tools/index.html | 41
-rw-r--r-- files/ko/web/api/web_audio_api/using_audioworklet/index.html | 325
-rw-r--r-- files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png | bin 0 -> 6824 bytes
-rw-r--r-- files/ko/web/api/web_audio_api/using_iir_filters/index.html | 198
-rw-r--r-- files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png | bin 0 -> 2221 bytes
-rw-r--r-- files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html | 189
-rw-r--r-- files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png | bin 0 -> 4433 bytes
-rw-r--r-- files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html | 467
-rw-r--r-- files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png | bin 0 -> 26452 bytes
38 files changed, 4857 insertions, 562 deletions
diff --git a/files/ko/web/api/analysernode/analysernode/index.html b/files/ko/web/api/analysernode/analysernode/index.html
new file mode 100644
index 0000000000..dbec1b677e
--- /dev/null
+++ b/files/ko/web/api/analysernode/analysernode/index.html
@@ -0,0 +1,59 @@
+---
+title: AnalyserNode()
+slug: Web/API/AnalyserNode/AnalyserNode
+tags:
+ - API
+ - AnalyserNode
+ - Audio
+ - Constructor
+ - Media
+ - Reference
+ - Web Audio API
+browser-compat: api.AnalyserNode.AnalyserNode
+---
+<p>{{APIRef("'Web Audio API'")}}</p>
+
+<p class="summary"><a href="/ko/docs/Web/API/Web_Audio_API">Web Audio API</a>의 <strong><code>AnalyserNode()</code></strong> 생성자는 새로운 {{domxref("AnalyserNode")}} 객체 인스턴스를 생성합니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var <var>analyserNode</var> = new AnalyserNode(<var>context</var>, ?<var>options</var>);</pre>
+
+<h3 id="Parameters">매개변수</h3>
+
+<p><em>{{domxref("AudioNodeOptions")}} dictionary로부터 매개변수를 상속받습니다.</em></p>
+
+<dl>
+ <dt><em>context</em></dt>
+ <dd>{{domxref("AudioContext")}} 또는 {{domxref("OfflineAudioContext")}}에의 참조.</dd>
+ <dt><em>options</em> {{optional_inline}}</dt>
+ <dd>
+ <ul>
+ <li><strong><code>fftSize</code></strong>: <a href="https://en.wikipedia.org/wiki/Frequency_domain">주파수 영역</a> 분석에 대한 <a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform">FFT</a>의 원하는 초기 사이즈. <br>
+ 기본값은 <code>2048</code>입니다.</li>
+ <li><strong><code>maxDecibels</code></strong>: FFT 분석에 대한 <a href="https://en.wikipedia.org/wiki/Decibel">dB</a>단위로의 원하는 초기 최대 power.<br>
+ 기본값은 <code>-30</code>입니다.</li>
+ <li><strong><code>minDecibels</code></strong>: FFT 분석에 대한 dB단위로의 원하는 초기 최소 power.<br>
+ 기본값은 <code>-100</code>입니다.</li>
+ <li><strong><code>smoothingTimeConstant</code></strong>: FFT 분석에 대한 원하는 초기 smoothing 상수. 기본값은 <code>0.8</code>입니다.</li>
+ </ul>
+ </dd>
+</dl>
+
+<h3 id="Return_value">반환 값</h3>
+
+<p>새로운 {{domxref("AnalyserNode")}} 객체 인스턴스.</p>
+
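+<h2 id="Example">예제</h2>
+
+<p>다음은 위에서 문서화된 옵션으로 <code>AnalyserNode</code>를 생성하는 최소한의 스케치입니다 (여기서 선택한 옵션 값들은 단지 예시를 위한 가정입니다).</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+
+// 옵션과 함께 AnalyserNode를 생성합니다
+const analyserNode = new AnalyserNode(audioCtx, {
+  fftSize: 2048,
+  maxDecibels: -25,
+  minDecibels: -60,
+  smoothingTimeConstant: 0.5,
+});</pre>
+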
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li>{{domxref("BaseAudioContext.createAnalyser()")}}, 동등한 팩토리 메서드</li>
+</ul>
diff --git a/files/ko/web/api/analysernode/fftsize/index.html b/files/ko/web/api/analysernode/fftsize/index.html
new file mode 100644
index 0000000000..6033ba3892
--- /dev/null
+++ b/files/ko/web/api/analysernode/fftsize/index.html
@@ -0,0 +1,96 @@
+---
+title: AnalyserNode.fftSize
+slug: Web/API/AnalyserNode/fftSize
+tags:
+ - API
+ - AnalyserNode
+ - Property
+ - Reference
+ - Web Audio API
+ - fftSize
+browser-compat: api.AnalyserNode.fftSize
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p class="summary">{{domxref("AnalyserNode")}} 인터페이스의 <strong><code>fftSize</code></strong> 속성은 unsigned long 값이고 주파수 영역 데이터를 얻기 위해 <a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform">고속 푸리에 변환</a>(FFT)을 수행할 때 사용될 샘플에서의 window 사이즈를 나타냅니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var <em>curValue</em> = <em>analyserNode</em>.fftSize;
+<em>analyserNode</em>.fftSize = <em>newValue</em>;
+</pre>
+
+<h3 id="Value">값</h3>
+
+<p>FFT의 window 사이즈를 나타내는 샘플의 수로 주어지는 unsigned 정수입니다. 값이 높을수록 주파수 영역은 더 자세해지지만, 시간 영역은 덜 자세해집니다.</p>
+
+<p>반드시 <math><semantics><msup><mn>2</mn><mn>5</mn></msup><annotation encoding="TeX">2^5</annotation></semantics></math>와 <math><semantics><msup><mn>2</mn><mn>15</mn></msup><annotation encoding="TeX">2^15</annotation></semantics></math> 사이의 2의 거듭제곱이어야만 합니다. 즉 다음 중 하나여야 합니다: <code>32</code>, <code>64</code>, <code>128</code>, <code>256</code>, <code>512</code>, <code>1024</code>, <code>2048</code>, <code>4096</code>, <code>8192</code>, <code>16384</code>, 그리고 <code>32768</code>. 기본값은 <code>2048</code>입니다.</p>
+
+<p class="note"><strong>참고</strong>: 만약 값이 2의 제곱이 아니거나 이 명시된 범위의 바깥에 있다면, <code>IndexSizeError</code>라는 이름의 {{domxref("DOMException")}}이 발생합니다.</p>
+
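+<p>위의 참고에서 언급된 예외는 다음과 같이 확인해 볼 수 있습니다 (가정에 기반한 짧은 스케치입니다):</p>
+
+<pre class="brush: js">const analyser = new AudioContext().createAnalyser();
+
+try {
+  analyser.fftSize = 1000; // 2의 거듭제곱이 아닌 값
+} catch (e) {
+  console.log(e.name); // "IndexSizeError"
+}</pre>
+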
+<h2 id="Example">예제</h2>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 시간 영역의 데이터를 수집하고 현재 오디오 입력의 "오실로스코프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> 데모를 확인하세요 (관련된 코드를 보려면 <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js 라인 128–205</a>를 참고하세요).</p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+
+ ...
+
+analyser.fftSize = 2048;
+var bufferLength = analyser.frequencyBinCount;
+var dataArray = new Uint8Array(bufferLength);
+analyser.getByteTimeDomainData(dataArray);
+
+// 현재 오디오 소스의 오실로스코프를 그립니다
+
+function draw() {
+
+ drawVisual = requestAnimationFrame(draw);
+
+ analyser.getByteTimeDomainData(dataArray);
+
+ canvasCtx.fillStyle = 'rgb(200, 200, 200)';
+ canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+ canvasCtx.lineWidth = 2;
+ canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+
+ canvasCtx.beginPath();
+
+ var sliceWidth = WIDTH * 1.0 / bufferLength;
+ var x = 0;
+
+ for(var i = 0; i &lt; bufferLength; i++) {
+
+ var v = dataArray[i] / 128.0;
+ var y = v * HEIGHT/2;
+
+ if(i === 0) {
+ canvasCtx.moveTo(x, y);
+ } else {
+ canvasCtx.lineTo(x, y);
+ }
+
+ x += sliceWidth;
+ }
+
+ canvasCtx.lineTo(canvas.width, canvas.height/2);
+ canvasCtx.stroke();
+ };
+
+ draw();</pre>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/analysernode/frequencybincount/index.html b/files/ko/web/api/analysernode/frequencybincount/index.html
new file mode 100644
index 0000000000..cd23d8edda
--- /dev/null
+++ b/files/ko/web/api/analysernode/frequencybincount/index.html
@@ -0,0 +1,82 @@
+---
+title: AnalyserNode.frequencyBinCount
+slug: Web/API/AnalyserNode/frequencyBinCount
+tags:
+ - API
+ - AnalyserNode
+ - Property
+ - Reference
+ - Web Audio API
+ - frequencyBinCount
+browser-compat: api.AnalyserNode.frequencyBinCount
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p class="summary">{{domxref("AnalyserNode")}} 인터페이스의 <strong><code>frequencyBinCount</code></strong> 읽기 전용 속성은 {{domxref("AnalyserNode.fftSize")}} 값의 절반인 unsigned 정수입니다. 이것은 일반적으로 시각화를 위해 사용할 데이터 값의 수와 동일시됩니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var <em>arrayLength</em> = <em>analyserNode</em>.frequencyBinCount;
+</pre>
+
+<h3 id="Value">값</h3>
+
+<p>{{domxref("AnalyserNode.getByteFrequencyData()")}}와 {{domxref("AnalyserNode.getFloatFrequencyData()")}}가 제공된 <code>TypedArray</code>내로 복사하는 값의 수와 동일한 unsigned 정수.</p>
+
+<p><a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform">고속 푸리에 변환</a>이 정의된 방법에 관계된 기술적인 이유로, 이것은 언제나 {{domxref("AnalyserNode.fftSize")}} 값의 절반입니다. 그러므로, 이것은 다음 중 하나입니다: <code>16</code>, <code>32</code>, <code>64</code>, <code>128</code>, <code>256</code>, <code>512</code>, <code>1024</code>, <code>2048</code>, <code>4096</code>, <code>8192</code>, 그리고 <code>16384</code>.</p>
+
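+<p>이 관계는 다음의 짧은 스케치로 확인해 볼 수 있습니다:</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+const analyser = audioCtx.createAnalyser();
+
+analyser.fftSize = 2048;
+console.log(analyser.frequencyBinCount); // 1024 — 언제나 fftSize의 절반입니다</pre>
+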
+<h2 id="Example">예제</h2>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 주파수 데이터를 수집하고 현재 오디오 입력의 "winamp 막대그래프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> 데모를 확인하세요 (관련된 코드를 보려면 <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js 라인 128–205</a>를 참고하세요).</p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+analyser.minDecibels = -90;
+analyser.maxDecibels = -10;
+
+ ...
+
+analyser.fftSize = 256;
+var bufferLength = analyser.frequencyBinCount;
+console.log(bufferLength);
+var dataArray = new Uint8Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+
+function draw() {
+ drawVisual = requestAnimationFrame(draw);
+
+ analyser.getByteFrequencyData(dataArray);
+
+ canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+ canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+ var barWidth = (WIDTH / bufferLength) * 2.5 - 1;
+ var barHeight;
+ var x = 0;
+
+ for(var i = 0; i &lt; bufferLength; i++) {
+ barHeight = dataArray[i];
+
+ canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+ canvasCtx.fillRect(x,HEIGHT-barHeight/2,barWidth,barHeight/2);
+
+ x += barWidth;
+ }
+};
+
+draw();</pre>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/analysernode/fttaudiodata_en.svg b/files/ko/web/api/analysernode/fttaudiodata_en.svg
new file mode 100644
index 0000000000..b1c40a3868
--- /dev/null
+++ b/files/ko/web/api/analysernode/fttaudiodata_en.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="692.929" height="206.323"><path fill="none" stroke="#010101" d="M25.556 31.667v59.458"/><path fill="#010101" d="M19.722 32.667l5.834-16.839 5.5 16.839zm210.915 51.914l16.839 5.834-16.839 5.5z"/><path fill="none" stroke="#010101" stroke-miterlimit="10" d="M25.722 53.167h36.667s4.167-14.333 9-11c0 0 2.333.417 7.333 14 0 0 2.917 10.583 8 8.167 0 0 3.333-.417 6.667-14.167 0 0 3.333-11.917 8.5-7.333 0 0 2.667 1.833 6.5 13.333 0 0 4 12 8.5 7.5 0 0 3.333-2.666 6.167-13.5 0 0 3.167-12.667 9-7.667 0 0 2.292.562 5.667 13.5 0 0 4.167 13.083 9.5 7.667 0 0 2.188-1.729 5-13.5 0 0 3.25-12.667 8.5-7.667 0 0 2.938 3.25 6.667 13.667 0 0 5.021 12.333 8.833 7.667 0 0 3.812-4.646 4.667-10.561h30"/><text transform="translate(252.055 94.834)" font-family="'ArialMT'" font-size="14">t</text><text transform="translate(23.222 106.333)"><tspan x="0" y="0" font-family="'ArialMT'" font-size="14">0</tspan><tspan x="7.786" y="0" font-family="'ArialMT'" font-size="14" letter-spacing="24"> </tspan><tspan x="36" y="0" font-family="'ArialMT'" font-size="14">1</tspan><tspan x="43.786" y="0" font-family="'ArialMT'" font-size="14" letter-spacing="24"> </tspan><tspan x="72" y="0" font-family="'ArialMT'" font-size="14">2</tspan><tspan x="79.786" y="0" font-family="'ArialMT'" font-size="14" letter-spacing="24"> </tspan><tspan x="108" y="0" font-family="'ArialMT'" font-size="14">3</tspan><tspan x="115.787" y="0" font-family="'ArialMT'" font-size="14" letter-spacing="24"> </tspan><tspan x="144" y="0" font-family="'ArialMT'" font-size="14">4</tspan></text><path fill="none" stroke="#010101" stroke-miterlimit="10" d="M25.556 90.667h205.081"/><path fill="none" stroke="#010101" d="M431.556 31.667v59.458"/><path fill="#010101" d="M425.722 32.667l5.834-16.839 5.5 16.839zm210.914 51.914l16.84 5.834-16.84 5.5z"/><path fill="none" stroke="#010101" stroke-miterlimit="10" d="M431.722 53.167h36.666s4.167-14.333 9-11c0 0 2.334.417 7.334 14 0 0 2.916 10.583 8 8.167 0 0 3.334-.417 6.666-14.167 0 0 3.334-11.917 8.5-7.333 0 0 2.667 1.833 6.5 13.333 0 0 4 12 8.5 7.5 0 0 3.334-2.666 6.168-13.5 0 0 3.166-12.667 9-7.667 0 0 2.291.562 5.666 13.5 0 0 4.167 13.083 9.5 7.667 0 0 2.188-1.729 5-13.5 0 0 3.25-12.667 8.5-7.667 0 0 2.938 3.25 6.667 13.667 0 0 5.021 12.333 8.833 7.667 0 0 3.812-4.646 4.667-10.561h30"/><text transform="translate(658.055 94.834)" font-family="'ArialMT'" font-size="14">t</text><text transform="translate(429.222 106.333)"><tspan x="0" y="0" font-family="'ArialMT'" font-size="14">0</tspan><tspan x="7.786" y="0" font-family="'ArialMT'" font-size="14" letter-spacing="24"> </tspan><tspan x="36" y="0" font-family="'ArialMT'" font-size="14">1</tspan><tspan x="43.786" y="0" font-family="'ArialMT'" font-size="14" letter-spacing="24"> </tspan><tspan x="72" y="0" font-family="'ArialMT'" font-size="14">2</tspan><tspan x="79.786" y="0" font-family="'ArialMT'" font-size="14" letter-spacing="24"> </tspan><tspan x="108" y="0" font-family="'ArialMT'" font-size="14">3</tspan><tspan x="115.787" y="0" font-family="'ArialMT'" font-size="14" letter-spacing="24"> </tspan><tspan x="144" y="0" font-family="'ArialMT'" font-size="14">4</tspan></text><path fill="none" stroke="#010101" stroke-miterlimit="10" d="M431.556 90.667h205.08"/><path fill="#010101" d="M401.636 47.489l16.84 5.834-16.84 5.5z"/><path fill="none" stroke="#010101" stroke-miterlimit="10" d="M273.555 53.576h128.081"/><path fill="#010101" d="M347.889 148.454l-5.834 16.84-5.5-16.84z"/><path fill="#719FD0" stroke="#010101" d="M299.222 
35h86v96.5h-86z"/><text transform="translate(304.223 56.823)" font-family="'ArialMT'" font-size="11">AnalyserNode</text><path fill="none" stroke="#010101" stroke-miterlimit="10" d="M341.803 118v30.454"/><text transform="translate(331.889 106.333)" font-family="'Arial-BoldMT'" font-size="11">FFT</text><path fill="none" stroke="#2C2C76" stroke-miterlimit="10" d="M321.889 86.667h41l-8 29.333h-25.333z"/><g font-family="'ArialMT'" font-size="11"><text transform="translate(484.89 131.5)">unchanged output</text><text transform="translate(302.223 176.167)">frequency data</text></g></svg> \ No newline at end of file
diff --git a/files/ko/web/api/analysernode/getbytefrequencydata/index.html b/files/ko/web/api/analysernode/getbytefrequencydata/index.html
new file mode 100644
index 0000000000..3d85f75ca5
--- /dev/null
+++ b/files/ko/web/api/analysernode/getbytefrequencydata/index.html
@@ -0,0 +1,102 @@
+---
+title: AnalyserNode.getByteFrequencyData()
+slug: Web/API/AnalyserNode/getByteFrequencyData
+tags:
+ - API
+ - AnalyserNode
+ - Method
+ - Reference
+ - Web Audio API
+browser-compat: api.AnalyserNode.getByteFrequencyData
+---
+<p>{{ APIRef("Web Audio API") }}</p>
+
+<p>{{ domxref("AnalyserNode") }} 인터페이스의 <strong><code>getByteFrequencyData()</code></strong> 메서드는 전달된 {{domxref("Uint8Array")}} (unsigned byte array) 내로 현재 주파수 데이터를 복사합니다.</p>
+
+<p>주파수 데이터는 0에서 255 스케일의 정수로 구성되어 있습니다.</p>
+
+<p>배열 내의 각 원소는 특정한 주파수에 대한 데시벨 값을 나타냅니다. 주파수들은 0에서 샘플 레이트의 1/2까지 선형적으로 퍼져 있습니다. 예를 들자면, <code>48000</code> Hz 샘플 레이트에 대해서, 배열의 마지막 원소는 <code>24000</code> Hz에 대한 데시벨 값을 나타냅니다.</p>
+
+<p>만약 배열이 {{domxref("AnalyserNode.frequencyBinCount")}}보다 더 적은 요소를 가지고 있다면, 초과한 요소는 탈락됩니다. 만약 이것이 필요한 것보다 더 많은 요소를 가지고 있다면, 초과한 요소는 무시됩니다.</p>
+
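+<p>각 원소가 나타내는 주파수는 위의 설명으로부터 계산될 수 있습니다. 다음은 가정에 기반한 스케치로, <code>binToFrequency</code>는 설명을 위해 도입된 가상의 보조 함수입니다.</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+const analyser = audioCtx.createAnalyser();
+
+// 각 배열 원소(bin)가 차지하는 주파수 폭 (Hz)
+const binWidth = audioCtx.sampleRate / analyser.fftSize;
+
+// i번째 원소가 나타내는 대략의 주파수를 반환합니다
+function binToFrequency(i) {
+  return i * binWidth;
+}
+
+// 마지막 원소는 샘플 레이트의 1/2에 가까운 주파수를 나타냅니다
+console.log(binToFrequency(analyser.frequencyBinCount - 1));</pre>
+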
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var audioCtx = new AudioContext();
+var analyser = audioCtx.createAnalyser();
+var dataArray = new Uint8Array(analyser.frequencyBinCount); // Uint8Array는 frequencyBinCount와 같은 길이여야만 합니다
+
+void <em>analyser</em>.getByteFrequencyData(dataArray); // getByteFrequencyData()로부터 반환된 데이터로 Uint8Array를 채웁니다
+</pre>
+
+<h3 id="Parameters">매개변수</h3>
+
+<dl>
+ <dt><code>array</code></dt>
+ <dd>주파수 영역 데이터가 복사될 {{domxref("Uint8Array")}}. 소리가 없는 모든 샘플에 대해서, 값은 <code>-<a href="/ko/docs/Web/JavaScript/Reference/Global_Objects/Infinity">Infinity</a></code>입니다.<br>
+ 만약 배열이 {{domxref("AnalyserNode.frequencyBinCount")}}보다 더 적은 요소를 가지고 있다면, 초과한 요소는 탈락됩니다. 만약 이것이 필요한 것보다 더 많은 요소를 가지고 있다면, 초과한 요소는 무시됩니다.</dd>
+</dl>
+
+<h3 id="Return_value">반환 값</h3>
+
+<p>없음.</p>
+
+<h2 id="Example">예제</h2>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 주파수 데이터를 수집하고 현재 오디오 입력의 "winamp 막대그래프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> 데모를 확인하세요 (관련된 코드를 보려면 <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js 라인 128–205</a>를 참고하세요).</p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+
+ ...
+
+analyser.fftSize = 256;
+var bufferLength = analyser.frequencyBinCount;
+console.log(bufferLength);
+var dataArray = new Uint8Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+
+function draw() {
+  drawVisual = requestAnimationFrame(draw);
+
+  analyser.getByteFrequencyData(dataArray);
+
+  canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+  var barWidth = (WIDTH / bufferLength) * 2.5;
+  var barHeight;
+  var x = 0;
+
+  for(var i = 0; i &lt; bufferLength; i++) {
+    barHeight = dataArray[i];
+
+    canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+    canvasCtx.fillRect(x,HEIGHT-barHeight/2,barWidth,barHeight/2);
+
+    x += barWidth + 1;
+  }
+};
+
+draw();</pre>
+
+<h2 id="Parameters_2">매개변수</h2>
+
+<dl>
+ <dt>array</dt>
+ <dd>주파수 영역 데이터가 복사될 {{domxref("Uint8Array")}}.</dd>
+</dl>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/analysernode/getbytetimedomaindata/index.html b/files/ko/web/api/analysernode/getbytetimedomaindata/index.html
new file mode 100644
index 0000000000..58c38f1288
--- /dev/null
+++ b/files/ko/web/api/analysernode/getbytetimedomaindata/index.html
@@ -0,0 +1,98 @@
+---
+title: AnalyserNode.getByteTimeDomainData()
+slug: Web/API/AnalyserNode/getByteTimeDomainData
+tags:
+ - API
+ - AnalyserNode
+ - Method
+ - Reference
+ - Web Audio API
+browser-compat: api.AnalyserNode.getByteTimeDomainData
+---
+<p>{{ APIRef("Mountain View APIRef Project") }}</p>
+
+<p>{{ domxref("AnalyserNode") }} 인터페이스의 <strong><code>getByteTimeDomainData()</code></strong> 메서드는 전달된 {{domxref("Uint8Array")}} (unsigned byte array) 내로 현재 파형, 즉 시간 영역 데이터를 복사합니다.</p>
+
+<p>만약 배열이 {{domxref("AnalyserNode.fftSize")}}보다 더 적은 요소를 가지고 있다면, 초과한 요소는 탈락됩니다. 만약 이것이 필요한 것보다 더 많은 요소를 가지고 있다면, 초과한 요소는 무시됩니다.</p>
+
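+<p>반환된 byte 값에서 <code>128</code>은 무음(진폭 0)에 해당합니다. 다음은 이 데이터를 -1.0~1.0 범위의 float으로 정규화하는, 가정에 기반한 스케치입니다.</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+const analyser = audioCtx.createAnalyser();
+const data = new Uint8Array(analyser.fftSize);
+
+analyser.getByteTimeDomainData(data);
+
+// byte 샘플(0–255)을 -1.0~1.0 범위로 되돌립니다 (128이 무음에 해당)
+const normalized = Array.from(data, (v) => (v - 128) / 128);</pre>
+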
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+const analyser = audioCtx.createAnalyser();
+const dataArray = new Uint8Array(analyser.fftSize); // Uint8Array는 fftSize와 같은 길이여야만 합니다
+analyser.getByteTimeDomainData(dataArray); // getByteTimeDomainData()로부터 반환된 데이터로 Uint8Array를 채웁니다
+</pre>
+
+<h3 id="Parameters">매개변수</h3>
+
+<dl>
+ <dt><code>array</code></dt>
+ <dd>시간 영역 데이터가 복사될 {{domxref("Uint8Array")}}.<br>
+ 만약 배열이 {{domxref("AnalyserNode.fftSize")}}보다 더 적은 요소를 가지고 있다면, 초과한 요소는 탈락됩니다. 만약 이것이 필요한 것보다 더 많은 요소를 가지고 있다면, 초과한 요소는 무시됩니다.</dd>
+</dl>
+
+<h3 id="Return_value">반환 값</h3>
+
+<p>없음.</p>
+
+<h2 id="Example">예제</h2>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 시간 영역 데이터를 수집하고 현재 오디오 입력의 "오실로스코프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> 데모를 확인하세요 (관련된 코드를 보려면 <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js 라인 128–205</a>를 참고하세요).</p>
+
+<pre class="brush: js">const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+const analyser = audioCtx.createAnalyser();
+
+ ...
+
+analyser.fftSize = 2048;
+const bufferLength = analyser.fftSize;
+const dataArray = new Uint8Array(bufferLength);
+analyser.getByteTimeDomainData(dataArray);
+
+// 현재 오디오 소스의 오실로스코프를 그립니다
+function draw() {
+  drawVisual = requestAnimationFrame(draw);
+
+  analyser.getByteTimeDomainData(dataArray);
+
+  canvasCtx.fillStyle = 'rgb(200, 200, 200)';
+  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+  canvasCtx.lineWidth = 2;
+  canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+
+  const sliceWidth = WIDTH * 1.0 / bufferLength;
+  let x = 0;
+
+  canvasCtx.beginPath();
+
+  for (let i = 0; i &lt; bufferLength; i++) {
+    const v = dataArray[i] / 128.0;
+    const y = v * HEIGHT / 2;
+
+    if (i === 0) {
+      canvasCtx.moveTo(x, y);
+    } else {
+      canvasCtx.lineTo(x, y);
+    }
+
+    x += sliceWidth;
+  }
+
+  canvasCtx.lineTo(WIDTH, HEIGHT / 2);
+  canvasCtx.stroke();
+}
+
+draw();
+</pre>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/analysernode/getfloatfrequencydata/index.html b/files/ko/web/api/analysernode/getfloatfrequencydata/index.html
new file mode 100644
index 0000000000..ceef144941
--- /dev/null
+++ b/files/ko/web/api/analysernode/getfloatfrequencydata/index.html
@@ -0,0 +1,129 @@
+---
+title: AnalyserNode.getFloatFrequencyData()
+slug: Web/API/AnalyserNode/getFloatFrequencyData
+tags:
+ - API
+ - AnalyserNode
+ - Method
+ - Reference
+ - Web Audio API
+browser-compat: api.AnalyserNode.getFloatFrequencyData
+---
+<p>{{ APIRef("Web Audio API") }}</p>
+
+<p>{{domxref("AnalyserNode")}} 인터페이스의 <strong><code>getFloatFrequencyData()</code></strong> 메서드는 전달된 {{domxref("Float32Array")}} 배열 내로 현재 주파수 데이터를 복사합니다.</p>
+
+<p>배열 내의 각 원소는 특정한 주파수에 대한 데시벨 값을 나타냅니다. 주파수들은 0에서 샘플 레이트의 1/2까지 선형적으로 퍼져 있습니다. 예를 들자면, <code>48000</code> Hz 샘플 레이트에 대해서, 배열의 마지막 원소는 <code>24000</code> Hz에 대한 데시벨 값을 나타냅니다.</p>
+
+<p>만약 여러분이 더 높은 성능을 원하고 정밀성에 대해서는 상관하지 않는다면, {{domxref("AnalyserNode.getByteFrequencyData()")}}을 대신 사용할 수 있는데, 이는 {{domxref("Uint8Array")}}에서 동작합니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var audioCtx = new AudioContext();
+var analyser = audioCtx.createAnalyser();
+var dataArray = new Float32Array(analyser.frequencyBinCount); // Float32Array는 frequencyBinCount와 같은 길이여야만 합니다
+
+void <em>analyser</em>.getFloatFrequencyData(dataArray); // getFloatFrequencyData()로부터 반환된 데이터로 Float32Array를 채웁니다
+</pre>
+
+<h3 id="Parameters">매개변수</h3>
+
+<dl>
+ <dt><code>array</code></dt>
+ <dd>주파수 영역 데이터가 복사될 {{domxref("Float32Array")}}. 소리가 없는 모든 샘플에 대해서, 값은 <code>-<a href="/ko/docs/Web/JavaScript/Reference/Global_Objects/Infinity">Infinity</a></code>입니다.<br>
+ 만약 배열이 {{domxref("AnalyserNode.frequencyBinCount")}}보다 더 적은 요소를 가지고 있다면, 초과한 요소는 탈락됩니다. 만약 이것이 필요한 것보다 더 많은 요소를 가지고 있다면, 초과한 요소는 무시됩니다.</dd>
+</dl>
+
+<h3 id="Return_value">반환 값</h3>
+
+<p>없음.</p>
+
+<h2 id="Example">예제</h2>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+const analyser = audioCtx.createAnalyser();
+// Float32Array는 frequencyBinCount와 같은 길이여야만 합니다
+const myDataArray = new Float32Array(analyser.frequencyBinCount);
+// getFloatFrequencyData()로부터 반환된 데이터로 Float32Array를 채웁니다
+analyser.getFloatFrequencyData(myDataArray);
+</pre>
+
+<h3 id="Drawing_a_spectrum">스펙트럼 그리기</h3>
+
+<p>다음의 예제는 {{domxref("MediaElementAudioSourceNode")}}를 <code>AnalyserNode</code>에 연결하기 위한 {{domxref("AudioContext")}}의 기본 사용을 보여줍니다. 오디오가 재생되는 동안, 우리는 {{domxref("window.requestAnimationFrame()","requestAnimationFrame()")}}로 주파수 데이터를 반복적으로 수집하고 "winamp 막대그래프 스타일"을 {{htmlelement("canvas")}} 요소에 그립니다.</p>
+
+<p>더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic-float-data/">Voice-change-O-matic-float-data</a> 데모를 확인하세요 (<a href="https://github.com/mdn/voice-change-o-matic-float-data">소스 코드</a>도 보세요).</p>
+
+<pre class="brush: html, highlight:[15, 17, 18, 41]">&lt;!doctype html&gt;
+&lt;body&gt;
+&lt;script&gt;
+const audioCtx = new AudioContext();
+
+//오디오 소스를 생성합니다
+//여기서, 우리는 오디오 파일을 사용하나, 이것은 또한 예를 들자면 마이크 입력도 될 수 있습니다
+const audioEle = new Audio();
+audioEle.src = 'my-audio.mp3';//파일명을 여기 삽입하세요
+audioEle.autoplay = true;
+audioEle.preload = 'auto';
+const audioSourceNode = audioCtx.createMediaElementSource(audioEle);
+
+//analyser 노드를 생성합니다
+const analyserNode = audioCtx.createAnalyser();
+analyserNode.fftSize = 256;
+const bufferLength = analyserNode.frequencyBinCount;
+const dataArray = new Float32Array(bufferLength);
+
+//오디오 노드 네트워크를 설정합니다
+audioSourceNode.connect(analyserNode);
+analyserNode.connect(audioCtx.destination);
+
+//2D canvas를 생성합니다
+const canvas = document.createElement('canvas');
+canvas.style.position = 'absolute';
+canvas.style.top = 0;
+canvas.style.left = 0;
+canvas.width = window.innerWidth;
+canvas.height = window.innerHeight;
+document.body.appendChild(canvas);
+const canvasCtx = canvas.getContext('2d');
+canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
+
+function draw() {
+ //다음 draw를 예약합니다
+ requestAnimationFrame(draw);
+
+ //스펙트럼 데이터를 얻습니다
+ analyserNode.getFloatFrequencyData(dataArray);
+
+ //검은색 배경을 그립니다
+ canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+ canvasCtx.fillRect(0, 0, canvas.width, canvas.height);
+
+ //스펙트럼을 그립니다
+ const barWidth = (canvas.width / bufferLength) * 2.5;
+ let posX = 0;
+ for (let i = 0; i &lt; bufferLength; i++) {
+ const barHeight = (dataArray[i] + 140) * 2;
+ canvasCtx.fillStyle = 'rgb(' + Math.floor(barHeight + 100) + ', 50, 50)';
+ canvasCtx.fillRect(posX, canvas.height - barHeight / 2, barWidth, barHeight / 2);
+ posX += barWidth + 1;
+ }
+};
+
+draw();
+&lt;/script&gt;
+&lt;/body&gt;</pre>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/analysernode/getfloattimedomaindata/index.html b/files/ko/web/api/analysernode/getfloattimedomaindata/index.html
new file mode 100644
index 0000000000..ef85673388
--- /dev/null
+++ b/files/ko/web/api/analysernode/getfloattimedomaindata/index.html
@@ -0,0 +1,104 @@
+---
+title: AnalyserNode.getFloatTimeDomainData()
+slug: Web/API/AnalyserNode/getFloatTimeDomainData
+tags:
+ - API
+ - AnalyserNode
+ - Method
+ - Reference
+ - Web Audio API
+browser-compat: api.AnalyserNode.getFloatTimeDomainData
+---
+<p>{{ APIRef("Web Audio API") }}</p>
+
+<p>{{ domxref("AnalyserNode") }} 인터페이스의 <strong><code>getFloatTimeDomainData()</code></strong> 메서드는 전달된 {{domxref("Float32Array")}} 배열 내로 현재 파형, 즉 시간 영역 데이터를 복사합니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var audioCtx = new AudioContext();
+var analyser = audioCtx.createAnalyser();
+var dataArray = new Float32Array(analyser.fftSize); // Float32Array는 fftSize와 같은 길이일 필요가 있습니다
+analyser.getFloatTimeDomainData(dataArray); // getFloatTimeDomainData()로부터 반환된 데이터로 Float32Array를 채웁니다
+</pre>
+
+
+<h3 id="Parameters">매개변수</h3>
+
+<dl>
+ <dt><code>array</code></dt>
+ <dd>시간 영역 데이터가 복사될 {{domxref("Float32Array")}}.<br>
+ 만약 배열이 {{domxref("AnalyserNode.fftSize")}}보다 더 적은 요소를 가지고 있다면, 초과한 요소는 탈락됩니다. 만약 이것이 필요한 것보다 더 많은 요소를 가지고 있다면, 초과한 요소는 무시됩니다.</dd>
+</dl>
+
+<h3 id="Return_value">반환 값</h3>
+
+<p>없음.</p>
+
+<h2 id="Example">예제</h2>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 시간 영역 데이터를 수집하고 현재 오디오 입력의 "오실로스코프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic-float-data/">Voice-change-O-matic-float-data</a> 데모를 확인하세요 (<a href="https://github.com/mdn/voice-change-o-matic-float-data">소스 코드</a>도 보세요). </p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+
+ ...
+
+analyser.fftSize = 1024;
+var bufferLength = analyser.fftSize;
+console.log(bufferLength);
+var dataArray = new Float32Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+
+function draw() {
+ drawVisual = requestAnimationFrame(draw);
+ analyser.getFloatTimeDomainData(dataArray);
+
+ canvasCtx.fillStyle = 'rgb(200, 200, 200)';
+ canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+ canvasCtx.lineWidth = 2;
+ canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+ canvasCtx.beginPath();
+
+ var sliceWidth = WIDTH * 1.0 / bufferLength;
+ var x = 0;
+
+ for(var i = 0; i &lt; bufferLength; i++) {
+ var v = dataArray[i] * 200.0;
+ var y = HEIGHT/2 + v;
+
+ if(i === 0) {
+ canvasCtx.moveTo(x, y);
+ } else {
+ canvasCtx.lineTo(x, y);
+ }
+ x += sliceWidth;
+ }
+
+ canvasCtx.lineTo(canvas.width, canvas.height/2);
+ canvasCtx.stroke();
+};
+
+draw();</pre>
+
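+<p>다음은 수집한 시간 영역 데이터로부터 신호의 RMS(음량의 대략적인 측정치)를 계산하는, 가정에 기반한 스케치입니다.</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+const analyser = audioCtx.createAnalyser();
+const data = new Float32Array(analyser.fftSize);
+
+analyser.getFloatTimeDomainData(data);
+
+// 샘플 제곱의 평균에 대한 제곱근을 구합니다
+let sumOfSquares = 0;
+for (const sample of data) {
+  sumOfSquares += sample * sample;
+}
+const rms = Math.sqrt(sumOfSquares / data.length);
+console.log(rms);</pre>
+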
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/analysernode/index.html b/files/ko/web/api/analysernode/index.html
index dcffff0050..9f02b456bb 100644
--- a/files/ko/web/api/analysernode/index.html
+++ b/files/ko/web/api/analysernode/index.html
@@ -3,35 +3,37 @@ title: AnalyserNode
slug: Web/API/AnalyserNode
tags:
- API
+ - AnalyserNode
+ - Interface
+ - Reference
- Web Audio API
- - 오디오
-translation_of: Web/API/AnalyserNode
+browser-compat: api.AnalyserNode
---
<p>{{APIRef("Web Audio API")}}</p>
-<p><strong><code>AnalyserNode</code></strong><strong> </strong>는 시간대 별로 실시간 주파수의 정보를 표현합니다. {{domxref("AudioNode")}} 를 통해 오디오 스트림정보가 그대로 입력되어 출력이 되지만 이를 통해 당신은 새로운 형태의 데이터를 생성하거나, 가공하고 오디오를 시각화 시키는 작업을 할 수 있습니다.</p>
+<p><strong><code>AnalyserNode</code></strong> 인터페이스는 실시간 주파수와 시간 영역 분석 정보를 제공 가능한 노드를 표현합니다. 이것은 변경되지 않은 오디오 스트림을 입력에서 출력으로 전달하지만, 여러분은 생성된 데이터를 얻고, 그것을 처리하고, 오디오 시각화를 생성할 수 있습니다.</p>
-<p><code>AnalyzerNode</code> 는 하나의 입력에 하나의 출력을 가집니다. 그리고 이 노드는 출력이 명시되지 않더라도 동작을 합니다.</p>
+<p><code>AnalyserNode</code>는 정확히 하나의 입력과 하나의 출력을 가집니다. 이 노드는 출력이 연결되지 않았더라도 작동합니다.</p>
-<p><img alt="Without modifying the audio stream, the node allows to get the frequency and time-domain data associated to it, using a FFT." src="https://mdn.mozillademos.org/files/9707/WebAudioFFT.png" style="height: 174px; width: 661px;"></p>
+<p><img alt="오디오 스트림을 수정하지 않고, 이 노드는 FFT를 사용하여 이것에 관련된 주파수와 시간 영역의 데이터를 얻을 수 있게 합니다." src="fttaudiodata_en.svg"></p>
<table class="properties">
<tbody>
<tr>
- <th scope="row">Number of inputs</th>
+ <th scope="row">입력의 수</th>
<td><code>1</code></td>
</tr>
<tr>
- <th scope="row">Number of outputs</th>
- <td><code>1</code> (but may be left unconnected)</td>
+ <th scope="row">출력의 수</th>
+ <td><code>1</code> (그러나 연결되지 않은 채로 남아 있어도 됩니다)</td>
</tr>
<tr>
<th scope="row">Channel count mode</th>
- <td><code>"explicit"</code></td>
+ <td><code>"max"</code></td>
</tr>
<tr>
<th scope="row">Channel count</th>
- <td><code>1</code></td>
+ <td><code>2</code></td>
</tr>
<tr>
<th scope="row">Channel interpretation</th>
@@ -40,125 +42,129 @@ translation_of: Web/API/AnalyserNode
</tbody>
</table>
-<div class="note">
-<p><strong>Note</strong>: See the guide <a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a> for more information on creating audio visualizations.</p>
-</div>
+<h2 id="Inheritance">상속</h2>
+
+<p>이 인터페이스는 다음의 부모 인터페이스들로부터 상속받습니다:</p>
-<h2 id="Properties">Properties</h2>
+<p>{{InheritanceDiagram}}</p>
-<p><em>{{domxref("AudioNode")}}</em> 를 부모로 가지는 프로퍼티.<em> </em></p>
+<h2 id="Constructor">생성자</h2>
<dl>
- <dt><span id="cke_bm_91S" class="hidden"> </span>{{domxref("AnalyserNode.fftSize")}}</dt>
- <dd>부호가 없는(unsigned long value) 주파수 영역에서의 전체 크기의 값을 나타내기 위한 푸리에 변환의 값의 크기를 나타낸다. (대략적으로 설명을 하면 해당 주파수영역을 보는데 얼마나 세밀하게 데이터를 볼것인지를 나타낸다. 클수록 세밀하지만 시간이 오래걸리고 작으면 빨리한다.)</dd>
- <dt> </dt>
+ <dt>{{domxref("AnalyserNode.AnalyserNode", "AnalyserNode()")}}</dt>
+ <dd><code>AnalyserNode</code> 객체의 새로운 인스턴스를 생성합니다.</dd>
+</dl>
+
+<h2 id="Properties">속성</h2>
+
+<p><em>부모인 {{domxref("AudioNode")}}로부터 속성을 상속받습니다</em>.</p>
+
+<dl>
+ <dt>{{domxref("AnalyserNode.fftSize")}}</dt>
+ <dd>주파수 영역을 결정하는 데 사용될 FFT(<a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform">Fast Fourier Transform</a>)의 사이즈를 나타내는 unsigned long 값입니다.</dd>
<dt>{{domxref("AnalyserNode.frequencyBinCount")}} {{readonlyInline}}</dt>
- <dd>부호가 없는 푸리에변환 값의 절반을 나타낸다. 이 값은 일반적으로 데이터를 시각화 하기위해 사용되는 데이터의 수와 같다.</dd>
+ <dd>FFT 사이즈 값의 절반인 unsigned long 값입니다. 이것은 일반적으로 시각화를 위해 사용할 데이터 값의 수와 동일시됩니다.</dd>
<dt>{{domxref("AnalyserNode.minDecibels")}}</dt>
- <dd>double형 값으로 표현되는  FFT(푸리에 변환)로 분석된 데이터의 범위에서의 최소값을 나타낸다. 이는 부호가 없는 바이트 값으로 변환된다. 일반적으로 이 특정한 최소값은 <code>getByteFrequencyData()를 사용하여 얻은 결과값이다.</code></dd>
+ <dd>unsigned byte 값으로의 전환에 대해서, FFT 분석 데이터의 스케일링 범위에서의 최소 power 값을 나타내는 double 값입니다 — 기본적으로, 이것은 <code>getByteFrequencyData()</code>를 사용할 때 결과의 범위에 대한 최소 값을 명시합니다.</dd>
<dt>{{domxref("AnalyserNode.maxDecibels")}}</dt>
- <dd>double형 값으로 표현되는  FFT(푸리에 변환)로 분석된 데이터의 범위에서의 최대값을 나타낸다. 이는 부호가 없는 바이트 값으로 변환된다. 일반적으로 이 특정한 최대값은 <code>getByteFrequencyData()를 사용하여 얻은 결과값이다.</code></dd>
+ <dd>unsigned byte 값으로의 전환에 대해서, FFT 분석 데이터의 스케일링 범위에서의 최대 power 값을 나타내는 double 값입니다 — 기본적으로, 이것은 <code>getByteFrequencyData()</code>를 사용할 때 결과의 범위에 대한 최대 값을 명시합니다.</dd>
<dt>{{domxref("AnalyserNode.smoothingTimeConstant")}}</dt>
- <dd>double형 값으로 마지막에 분석된 프레임의 평균 정수값을 나타낸다. 일반적으로 이 값을 통해 time smoother상의 값들을  변환하는데 사용된다.</dd>
+ <dd>마지막 분석 프레임의 에버리징(averaging) 상수를 나타내는 double 값입니다 — 기본적으로, 이것은 시간에 대한 값 사이의 전환을 더 매끄럽게 만듭니다.</dd>
</dl>
-<h2 id="Methods">Methods</h2>
+<h2 id="Methods">메서드</h2>
-<p><em>{{domxref("AudioNode")}} 을 상속하는 메서드.</em></p>
+<p><em>부모인 {{domxref("AudioNode")}}로부터 메서드를 상속받습니다</em>.</p>
<dl>
<dt>{{domxref("AnalyserNode.getFloatFrequencyData()")}}</dt>
- <dd>현재의 주파수 데이터를 <span style="line-height: 1.5;"> {{domxref("Float32Array")}} 로 복사해 전달한다.</span></dd>
-</dl>
-
-<dl>
+ <dd>전달된 {{domxref("Float32Array")}} 배열 내로 현재 주파수 데이터를 복사합니다.</dd>
<dt>{{domxref("AnalyserNode.getByteFrequencyData()")}}</dt>
- <dd>현재의 주파수 데이터를 <span style="line-height: 1.5;"> </span>{{domxref("Uint8Array")}} (unsigned byte array)<span style="line-height: 1.5;"> 로 복사해 전달한다.</span></dd>
-</dl>
-
-<dl>
+ <dd>전달된 {{domxref("Uint8Array")}} (unsiged byte array) 내로 현재 주파수 데이터를 복사합니다.</dd>
<dt>{{domxref("AnalyserNode.getFloatTimeDomainData()")}}</dt>
- <dd>현재 데이터의 파형, 또는 시간기반(time-domain) 데이터를 <span style="line-height: 1.5;"> {{domxref("Float32Array")}} 배열에 전달한다.</span></dd>
+ <dd>전달된 {{domxref("Float32Array")}} 배열 내로 현재 파형, 즉 시간 영역 데이터를 복사합니다.</dd>
<dt>{{domxref("AnalyserNode.getByteTimeDomainData()")}}</dt>
- <dd>현재 데이터의 파형, 또는 시간기반(time-domain) 데이터를 {{domxref("Uint8Array")}} (unsigned byte array) 로 전달한다.</dd>
+ <dd>전달된 {{domxref("Uint8Array")}} (unsigned byte array) 내로 현재 파형, 즉 시간 영역 데이터를 복사합니다.</dd>
</dl>
-<h2 id="Example">Example</h2>
+<h2 id="Examples">예제</h2>
-<p>이 예제는  {{domxref("AudioContext")}} 를 사용해 <span style="font-family: courier new,andale mono,monospace; line-height: 1.5;">AnalyserNode를 생성하여 사용하는 방법을 보여주고, </span><span style="line-height: 1.5;"> {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}} and {{htmlelement("canvas")}} 를 통해 반복적으로 시간기반(time-domain) 의 정보를 반복적으로 수집 및 </span><span style="line-height: 1.5;"> "oscilloscope style" 를 통해 입력된 오디오 정보를 시각화하여 보여주는 예제입니다. 더 많은 정보와 예제는 </span><span style="line-height: 1.5;"> </span><a href="http://mdn.github.io/voice-change-o-matic/" style="line-height: 1.5;">Voice-change-O-matic</a><span style="line-height: 1.5;"> demo (see </span><a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205" style="line-height: 1.5;">app.js lines 128–205</a><span style="line-height: 1.5;"> for relevant code)를 확인 하세요.</span></p>
+<div class="note">
+<p><strong>참고</strong>: 오디오 시각화 생성하기에 대한 더 많은 정보를 보려면 <a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Web Audio API 시각화</a> 가이드를 참고하세요.</p>
+</div>
+
+<h3 id="Basic_usage">기본 사용</h3>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 시간 영역의 데이터를 수집하고 현재 오디오 입력의 "오실로스코프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> 데모를 확인하세요 (관련된 코드를 보려면 <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js 라인 128–205</a>를 참고하세요).</p>
+
+<pre class="brush: js">var audioCtx = new(window.AudioContext || window.webkitAudioContext)();
+
+// ...
-<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioCtx.createAnalyser();
-// 새로운 <span style="font-family: courier new,andale mono,monospace;">AnalyserNode를 생성한다.</span>
- ...
+analyser.fftSize = 2048;
+
+var bufferLength = analyser.frequencyBinCount;
+var dataArray = new Uint8Array(bufferLength);
+analyser.getByteTimeDomainData(dataArray);
+
+// 분석될 소스에 연결합니다
+source.connect(analyser);
-analyser.fftSize = 2048; // FFT의 크기를 2048로 한다.
-var bufferLength = analyser.frequencyBinCount; // 시각화를 하기 위한 데이터의 갯수
-var dataArray = new Uint8Array(bufferLength); // 데이터를 담을 bufferLength 크기의 Unit8Array의 배열을 생성
-analyser.getByteTimeDomainData(dataArray); // 시간기반의 데이터를 Unit8Array배열로 전달
+// ID "oscilloscope"로 정의된 canvas를 얻습니다
+var canvas = document.getElementById("oscilloscope");
+var canvasCtx = canvas.getContext("2d");
-// 얻어진 데이터를 기반으로 시각화 작업을 한다. 캔버스를 이용한다.
+// 현재 오디오 소스의 오실로스코프를 그립니다
function draw() {
-      drawVisual = requestAnimationFrame(draw);
+ requestAnimationFrame(draw);
-      analyser.getByteTimeDomainData(dataArray);
+ analyser.getByteTimeDomainData(dataArray);
-      canvasCtx.fillStyle = 'rgb(200, 200, 200)';
-      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+ canvasCtx.fillStyle = "rgb(200, 200, 200)";
+ canvasCtx.fillRect(0, 0, canvas.width, canvas.height);
-      canvasCtx.lineWidth = 2;
-      canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+ canvasCtx.lineWidth = 2;
+ canvasCtx.strokeStyle = "rgb(0, 0, 0)";
-      canvasCtx.beginPath();
+ canvasCtx.beginPath();
-      var sliceWidth = WIDTH * 1.0 / bufferLength;
-      var x = 0;
+ var sliceWidth = canvas.width * 1.0 / bufferLength;
+ var x = 0;
-      for(var i = 0; i &lt; bufferLength; i++) {
+ for (var i = 0; i &lt; bufferLength; i++) {
-        var v = dataArray[i] / 128.0;
-        var y = v * HEIGHT/2;
+ var v = dataArray[i] / 128.0;
+ var y = v * canvas.height / 2;
-        if(i === 0) {
-          canvasCtx.moveTo(x, y);
-        } else {
-          canvasCtx.lineTo(x, y);
-        }
+ if (i === 0) {
+ canvasCtx.moveTo(x, y);
+ } else {
+ canvasCtx.lineTo(x, y);
+ }
-        x += sliceWidth;
-      }
+ x += sliceWidth;
+ }
-      canvasCtx.lineTo(canvas.width, canvas.height/2);
-      canvasCtx.stroke();
-    };
+ canvasCtx.lineTo(canvas.width, canvas.height / 2);
+ canvasCtx.stroke();
+}
-    draw();</pre>
+draw();
+</pre>
-<h2 id="Specifications">Specifications</h2>
+<h2 id="Specifications">명세</h2>
-<table class="standard-table">
- <tbody>
- <tr>
- <th scope="col">Specification</th>
- <th scope="col">Status</th>
- <th scope="col">Comment</th>
- </tr>
- <tr>
- <td>{{SpecName('Web Audio API', '#the-analysernode-interface', 'AnalyserNode')}}</td>
- <td>{{Spec2('Web Audio API')}}</td>
- <td> </td>
- </tr>
- </tbody>
-</table>
+{{Specifications}}
-<h2 id="Browser_compatibility">Browser compatibility</h2>
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
-<p>{{Compat("api.AnalyserNode")}}</p>
+<p>{{Compat}}</p>
-<h2 id="See_also">See also</h2>
+<h2 id="See_also">같이 보기</h2>
<ul>
- <li><a href="/en-US/docs/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
</ul>
diff --git a/files/ko/web/api/analysernode/maxdecibels/index.html b/files/ko/web/api/analysernode/maxdecibels/index.html
new file mode 100644
index 0000000000..5961655b25
--- /dev/null
+++ b/files/ko/web/api/analysernode/maxdecibels/index.html
@@ -0,0 +1,85 @@
+---
+title: AnalyserNode.maxDecibels
+slug: Web/API/AnalyserNode/maxDecibels
+tags:
+ - API
+ - AnalyserNode
+ - Property
+ - Reference
+ - Web Audio API
+ - maxDecibels
+browser-compat: api.AnalyserNode.maxDecibels
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p class="summary">{{domxref("AnalyserNode")}} 인터페이스의 <strong><code>maxDecibels</code></strong> 속성은 unsigned byte 값으로의 전환에 대해서, FFT 분석 데이터의 스케일링 범위에서의 최대 power 값을 나타내는 double 값입니다 — 기본적으로, 이것은 <code>getByteFrequencyData()</code>를 사용할 때 결과의 범위에 대한 최대 값을 명시합니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var <em>curValue</em> = <em>analyserNode</em>.maxDecibels;
+<em>analyserNode</em>.maxDecibels = <em>newValue</em>;
+</pre>
+
+<h3 id="Value">값</h3>
+
+<p>FFT 분석 데이터를 스케일링하는 것에 대한 최대 <a href="https://en.wikipedia.org/wiki/Decibel" title="Decibel on Wikipedia">데시벨</a> 값을 나타내는 double인데, <code>0</code> dB는 가능한 가장 큰 소리를 나타내고, <code>-10</code> dB는 그것의 10분의 1을 나타내는 식입니다. 기본값은 <code>-30</code> dB입니다.</p>
+
+<p><code>getByteFrequencyData()</code>로부터 데이터를 얻을 때, 진폭이 <code>maxDecibels</code> 이상인 모든 주파수는 <code>255</code>로 반환됩니다.</p>
+
+<p class="note"><strong>참고</strong>: 만약 <code>AnalyserNode.minDecibels</code>보다 더 작거나 같은 값이 설정된다면, <code>IndexSizeError</code> 예외가 발생합니다.</p>
+
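+<p>다음은 <code>getByteFrequencyData()</code>가 데시벨 값을 byte 값(0–255)으로 스케일링하는 방식을 근사한, 가정에 기반한 스케치입니다 (<code>dbToByte</code>는 설명을 위한 가상의 보조 함수입니다).</p>
+
+<pre class="brush: js">// dB 값을 [minDecibels, maxDecibels] 범위에 대해 선형으로 스케일링한 후 0–255로 자릅니다
+function dbToByte(db, minDecibels, maxDecibels) {
+  const scaled = (db - minDecibels) / (maxDecibels - minDecibels);
+  return Math.max(0, Math.min(255, Math.floor(255 * scaled)));
+}
+
+console.log(dbToByte(-20, -100, -30));  // maxDecibels(-30) 이상의 진폭 → 255
+console.log(dbToByte(-120, -100, -30)); // minDecibels(-100) 이하의 진폭 → 0</pre>
+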
+<h2 id="Example">예제</h2>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 주파수 데이터를 수집하고 현재 오디오 입력의 "winamp 막대그래프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> 데모를 확인하세요 (관련된 코드를 보려면 <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js 라인 128–205</a>를 참고하세요).</p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+analyser.minDecibels = -90;
+analyser.maxDecibels = -10;
+
+ ...
+
+analyser.fftSize = 256;
+var bufferLength = analyser.frequencyBinCount;
+console.log(bufferLength);
+var dataArray = new Uint8Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+
+function draw() {
+  drawVisual = requestAnimationFrame(draw);
+
+  analyser.getByteFrequencyData(dataArray);
+
+  canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+  var barWidth = (WIDTH / bufferLength) * 2.5;
+  var barHeight;
+  var x = 0;
+
+  for(var i = 0; i &lt; bufferLength; i++) {
+    barHeight = dataArray[i];
+
+    canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+    canvasCtx.fillRect(x,HEIGHT-barHeight/2,barWidth,barHeight/2);
+
+    x += barWidth + 1;
+  }
+};
+
+draw();</pre>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/analysernode/mindecibels/index.html b/files/ko/web/api/analysernode/mindecibels/index.html
new file mode 100644
index 0000000000..95c51692e5
--- /dev/null
+++ b/files/ko/web/api/analysernode/mindecibels/index.html
@@ -0,0 +1,87 @@
+---
+title: AnalyserNode.minDecibels
+slug: Web/API/AnalyserNode/minDecibels
+tags:
+ - API
+ - AnalyserNode
+ - Property
+ - Reference
+ - Web Audio API
+ - minDecibels
+browser-compat: api.AnalyserNode.minDecibels
+---
+<p>{{ APIRef("Web Audio API") }}</p>
+
+<p class="summary">{{ domxref("AnalyserNode") }} 인터페이스의 <strong><code>minDecibels</code></strong> 속성은 unsigned byte 값으로의 전환에 대해서, FFT 분석 데이터의 스케일링 범위에서의 최소 power 값을 나타내는 double 값입니다 — 기본적으로, 이것은 <code>getByteFrequencyData()</code>를 사용할 때 결과의 범위에 대한 최소 값을 명시합니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var <em>curValue</em> = <em>analyserNode</em>.minDecibels;
+<em>analyserNode</em>.minDecibels = <em>newValue</em>;
+</pre>
+
+<h3 id="Value">값</h3>
+
+<p>FFT 분석 데이터를 스케일링하는 것에 대한 최소 <a href="https://en.wikipedia.org/wiki/Decibel" title="Decibel on Wikipedia">데시벨</a> 값을 나타내는 double인데, <code>0</code> dB는 가능한 가장 큰 소리를 나타내고, <code>-10</code> dB는 그것의 10분의 1을 나타내는 식입니다. 기본값은 <code>-100</code> dB입니다.</p>
+
+<p><code>getByteFrequencyData()</code>로부터 데이터를 얻을 때, 진폭이 <code>minDecibels</code> 이하인 모든 주파수는 <code>0</code>으로 반환됩니다.</p>
+
+<div class="note">
+<p><strong>참고</strong>: 만약 <code>AnalyserNode.maxDecibels</code>보다 더 큰 값이 설정된다면, <code>INDEX_SIZE_ERR</code> 예외가 발생합니다.</p>
+</div>
+
+<h2 id="Example">예제</h2>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 주파수 데이터를 수집하고 현재 오디오 입력의 "winamp 막대그래프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> 데모를 확인하세요 (관련된 코드를 보려면 <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js 라인 128–205</a>를 참고하세요).</p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+analyser.minDecibels = -90;
+analyser.maxDecibels = -10;
+
+ ...
+
+analyser.fftSize = 256;
+var bufferLength = analyser.frequencyBinCount;
+console.log(bufferLength);
+var dataArray = new Uint8Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+
+function draw() {
+  drawVisual = requestAnimationFrame(draw);
+
+  analyser.getByteFrequencyData(dataArray);
+
+  canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+  var barWidth = (WIDTH / bufferLength) * 2.5;
+  var barHeight;
+  var x = 0;
+
+  for(var i = 0; i &lt; bufferLength; i++) {
+    barHeight = dataArray[i];
+
+    canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+    canvasCtx.fillRect(x,HEIGHT-barHeight/2,barWidth,barHeight/2);
+
+    x += barWidth + 1;
+  }
+};
+
+draw();</pre>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/analysernode/smoothingtimeconstant/index.html b/files/ko/web/api/analysernode/smoothingtimeconstant/index.html
new file mode 100644
index 0000000000..18d643160f
--- /dev/null
+++ b/files/ko/web/api/analysernode/smoothingtimeconstant/index.html
@@ -0,0 +1,92 @@
+---
+title: AnalyserNode.smoothingTimeConstant
+slug: Web/API/AnalyserNode/smoothingTimeConstant
+tags:
+ - API
+ - AnalyserNode
+ - Property
+ - Reference
+ - Web Audio API
+ - smoothingTimeConstant
+browser-compat: api.AnalyserNode.smoothingTimeConstant
+---
+<p>{{ APIRef("Web Audio API") }}</p>
+
+<p class="summary">{{ domxref("AnalyserNode") }} 인터페이스의 <strong><code>smoothingTimeConstant</code></strong> 속성은 마지막 분석 프레임의 에버리징(averaging) 상수를 나타내는 double 값입니다. 이것은 기본적으로 현재 버퍼와 <code>AnalyserNode</code>가 처리한 마지막 버퍼 사이의 평균이고, 더욱 매끄러운 시간에 대한 값 변화의 집합을 결과로 낳습니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre class="brush: js">var <em>smoothValue</em> = <em>analyserNode</em>.smoothingTimeConstant;
+<em>analyserNode</em>.smoothingTimeConstant = <em>newValue</em>;
+</pre>
+
+<h3 id="Value">값</h3>
+
+<p><code>0</code>에서 <code>1</code>까지의 범위 내의 double (<code>0</code>은 시간 에버리징이 없음을 의미). 기본값은 <code>0.8</code>입니다.</p>
+
+<p>만약 0이 설정된다면 에버리징은 수행되지 않습니다. 반면 1의 값은 "값을 계산하는 동안 이전 버퍼와 현재 버퍼를 많이 겹치기"를 의미하는데, 이는 근본적으로 {{domxref("AnalyserNode.getFloatFrequencyData")}}/{{domxref("AnalyserNode.getByteFrequencyData")}} 호출에 걸쳐 값의 변화를 매끄럽게 합니다.</p>
+
+<p>기술적으로 말하자면, <a href="https://webaudio.github.io/web-audio-api/#blackman-window">Blackman window</a>가 적용되고 값들이 시간에 대해 매끄러워집니다. 기본값은 대부분의 경우에 적합합니다.</p>
+
+<div class="note">
+<p><strong>참고</strong>: 만약 범위 0-1 바깥의 값이 설정된다면, <code>INDEX_SIZE_ERR</code> 예외가 발생합니다.</p>
+</div>
+
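+<p>다음은 이전 분석 프레임과 현재 프레임이 이 상수(τ)로 혼합되는 방식을 근사한, 가정에 기반한 스케치입니다.</p>
+
+<pre class="brush: js">// prevFrame과 currentFrame은 각 bin의 크기(magnitude) 배열이라고 가정합니다
+function smooth(prevFrame, currentFrame, tau) {
+  return prevFrame.map((prev, i) => tau * prev + (1 - tau) * Math.abs(currentFrame[i]));
+}
+
+console.log(smooth([1, 1], [0, 0], 0.8)); // [0.8, 0.8] — 값이 천천히 0을 향해 이동합니다</pre>
+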
+<h2 id="Example">예제</h2>
+
+<p>다음의 예제는 <code>AnalyserNode</code>를 생성하기 위한 {{domxref("AudioContext")}}와 그리고 나서 반복적으로 주파수 데이터를 수집하고 현재 오디오 입력의 "winamp 막대그래프 스타일의" 출력을 그리기 위한 {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}}과 {{htmlelement("canvas")}}의 기본 사용을 보여줍니다. 더 완벽한 응용 예제/정보를 보려면 <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> 데모를 확인하세요 (관련된 코드를 보려면 <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js 라인 128–205</a>를 참고하세요).</p>
+
+<p>만약 여러분이 <code>smoothingTimeConstant</code>가 가진 영향에 대해 궁금하다면, 위의 예제를 복사해서 <code>analyser.smoothingTimeConstant = 0;</code>을 대신 설정해 보세요. 값의 변화가 훨씬 더 불규칙해지는 것을 인지하실 것입니다.</p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+analyser.minDecibels = -90;
+analyser.maxDecibels = -10;
+analyser.smoothingTimeConstant = 0.85;
+
+ ...
+
+analyser.fftSize = 256;
+var bufferLength = analyser.frequencyBinCount;
+console.log(bufferLength);
+var dataArray = new Uint8Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+
+function draw() {
+  drawVisual = requestAnimationFrame(draw);
+
+  analyser.getByteFrequencyData(dataArray);
+
+  canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+  var barWidth = (WIDTH / bufferLength) * 2.5;
+  var barHeight;
+  var x = 0;
+
+  for(var i = 0; i &lt; bufferLength; i++) {
+    barHeight = dataArray[i];
+
+    canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+    canvasCtx.fillRect(x,HEIGHT-barHeight/2,barWidth,barHeight/2);
+
+    x += barWidth + 1;
+  }
+};
+
+draw();</pre>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/baseaudiocontext/createperiodicwave/index.html b/files/ko/web/api/baseaudiocontext/createperiodicwave/index.html
index ac48d4576c..7cf934a807 100644
--- a/files/ko/web/api/baseaudiocontext/createperiodicwave/index.html
+++ b/files/ko/web/api/baseaudiocontext/createperiodicwave/index.html
@@ -17,7 +17,7 @@ browser-compat: api.BaseAudioContext.createPeriodicWave
<p>{{ domxref("BaseAudioContext") }} 인터페이스의 <code>createPeriodicWave()</code> 메서드는 {{domxref("PeriodicWave")}}를 생성하기 위해 사용되는데, 이는 {{ domxref("OscillatorNode") }}의 출력을 형성하기 위해 사용될 수 있는 주기적인 파형을 정의하기 위해 사용됩니다.</p>
-<h2 id="Syntax">문법</h2>
+<h2 id="Syntax">구문</h2>
<pre
class="brush: js">var wave = <em>AudioContext</em>.createPeriodicWave(<em>real</em>, <em>imag</em>[, <em>constraints</em>]);</pre>
@@ -137,13 +137,13 @@ osc.stop(2);</pre>
<annotation encoding="TeX">\left(a+bi\right)e^{i} , \left(c+di\right)e^{2i} ,
\left(f+gi\right)e^{3i}   </annotation>
</semantics>
- </math>etc.) 양이거나 음일 수 있습니다. 수동으로 이러한 계수들을 얻는 간단한 방법은 (최고의 방법은 아니지만) 그래프 계산기를 사용하는 것입니다.</p>
+ </math> 등) 양이거나 음일 수 있습니다. 수동으로 이러한 계수들을 얻는 간단한 방법은 (최고의 방법은 아니지만) 그래프 계산기를 사용하는 것입니다.</p>
-<h2 id="Specifications">Specifications</h2>
+<h2 id="Specifications">명세</h2>
{{Specifications}}
-<h2 id="Browser_compatibility">Browser compatibility</h2>
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
<p>{{Compat}}</p>
diff --git a/files/ko/web/api/baseaudiocontext/index.html b/files/ko/web/api/baseaudiocontext/index.html
new file mode 100644
index 0000000000..0c25a6dfd8
--- /dev/null
+++ b/files/ko/web/api/baseaudiocontext/index.html
@@ -0,0 +1,122 @@
+---
+title: BaseAudioContext
+slug: Web/API/BaseAudioContext
+tags:
+ - API
+ - Audio
+ - BaseAudioContext
+ - Context
+ - Interface
+ - Reference
+ - Web Audio API
+ - sound
+browser-compat: api.BaseAudioContext
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p class="summary"><span class="seoSummary"><a href="/ko/docs/Web/API/Web_Audio_API">Web Audio API</a>의 <code>BaseAudioContext</code> 인터페이스는 {{domxref("AudioContext")}} 와 {{domxref("OfflineAudioContext")}}에 의해 표현되는 온라인과 오프라인 오디오 프로세싱 그래프에 대한 기본 정의로 작동합니다. <code>BaseAudioContext</code>는 직접적으로 사용될 수 없습니다. 대신 위에서 언급한 두 상속 인터페이스를 통해 <code>BaseAudioContext</code>의 기능을 사용할 수 있습니다.</p>
+
+<p><code>BaseAudioContext</code>는 이벤트의 타겟이 될 수 있는데, 따라서 이것은 {{domxref("EventTarget")}} 인터페이스를 구현합니다.</p>
+
+<p>{{InheritanceDiagram}}</p>
+
+<h2 id="Properties">속성</h2>
+
+<dl>
+ <dt>{{domxref("BaseAudioContext.audioWorklet")}} {{experimental_inline}} {{readonlyInline}} {{securecontext_inline}}</dt>
+ <dd>{{domxref("AudioWorklet")}} 객체를 반환하는데, 이는 {{domxref("AudioWorkletProcessor")}} 인터페이스를 구현하는 JavaScript 코드가 오디오 데이터를 처리하기 위해 백그라운드에서 실행되는 {{domxref("AudioNode")}}들을 생성하고 관리하는 데 쓰일 수 있습니다.</dd>
+ <dt>{{domxref("BaseAudioContext.currentTime")}} {{readonlyInline}}</dt>
+ <dd>스케줄링에 사용되는, 초 단위로 계속 증가하는 하드웨어 시간을 나타내는 double을 반환합니다. 이것은 <code>0</code>에서 시작합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.destination")}} {{readonlyInline}}</dt>
+ <dd>컨텍스트 내의 모든 오디오의 최종 도착지를 나타내는 {{domxref("AudioDestinationNode")}}를 반환합니다. 이것은 오디오를 렌더링하는 장치로 생각될 수 있습니다.</dd>
+ <dt>{{domxref("BaseAudioContext.listener")}} {{readonlyInline}}</dt>
+ <dd>3D 공간화에 사용되는 {{domxref("AudioListener")}} 객체를 반환합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.sampleRate")}} {{readonlyInline}}</dt>
+ <dd>이 컨텍스트 내의 모든 노드에 의해 사용되는 샘플 레이트(초당 샘플)를 나타내는 float을 반환합니다. {{domxref("AudioContext")}}의 샘플 레이트는 변경될 수 없습니다.</dd>
+ <dt>{{domxref("BaseAudioContext.state")}} {{readonlyInline}}</dt>
+ <dd><code>AudioContext</code>의 현재 상태를 반환합니다.</dd>
+</dl>
+
+<h3 id="Event_handlers">이벤트 처리기</h3>
+
+<dl>
+ <dt>{{domxref("BaseAudioContext.onstatechange")}}</dt>
+ <dd>{{event("statechange")}} 유형의 이벤트가 발생되었을 때 실행되는 이벤트 처리기입니다. 이것은 상태 변화 메서드({{domxref("AudioContext.suspend")}}, {{domxref("AudioContext.resume")}}, 또는 {{domxref("AudioContext.close")}}) 중 하나의 호출에 기인해 <code>AudioContext</code>의 상태가 변경되었을 때 발생됩니다.</dd>
+</dl>
+
+<h2 id="Methods">메서드</h2>
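+<p>다음은 이 이벤트 처리기를 사용하는, 가정에 기반한 짧은 스케치입니다.</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+
+audioCtx.onstatechange = function() {
+  // "suspended", "running", "closed" 중 하나를 출력합니다
+  console.log(audioCtx.state);
+};</pre>
+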
+
+<p><em>또한 {{domxref("EventTarget")}} 인터페이스로부터의 메서드를 구현합니다.</em></p>
+
+<dl>
+ <dt>{{domxref("BaseAudioContext.createAnalyser()")}}</dt>
+ <dd>오디오 시간과 주파수 데이터를 드러내고 데이터 시각화를 생성하는 데 사용될 수 있는 {{domxref("AnalyserNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createBiquadFilter()")}}</dt>
+ <dd>high-pass, low-pass, band-pass와 같은 몇몇 다른 흔한 필터 유형으로 설정 가능한 2차 필터를 나타내는 {{domxref("BiquadFilterNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createBuffer()")}}</dt>
+ <dd>이후 데이터로 채워지고 {{ domxref("AudioBufferSourceNode") }}를 통해 재생될 수 있는, 새로운 빈 {{ domxref("AudioBuffer") }} 객체를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createBufferSource()")}}</dt>
+ <dd>{{ domxref("AudioBuffer") }} 객체 내부에 포함된 오디오 데이터를 재생하거나 조작하기 위해 사용될 수 있는 {{domxref("AudioBufferSourceNode")}}를 생성합니다. {{ domxref("AudioBuffer") }}들은 {{domxref("BaseAudioContext/createBuffer", "AudioContext.createBuffer()")}}를 사용해 생성되거나 성공적으로 오디오 트랙을 디코드했을 때 {{domxref("BaseAudioContext/decodeAudioData", "AudioContext.decodeAudioData()")}}에 의해 반환됩니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createConstantSource()")}}</dt>
+ <dd>샘플이 모두 같은 값을 가지고 있는 모노럴의(한 채널의) 사운드 신호를 계속적으로 출력하는 오디오 소스인 {{domxref("ConstantSourceNode")}} 객체를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createChannelMerger()")}}</dt>
+ <dd>다수의 오디오 스트림의 채널들을 하나의 오디오 스트림으로 결합하는 데 사용되는 {{domxref("ChannelMergerNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createChannelSplitter()")}}</dt>
+ <dd>오디오 스트림의 각 채널에 접근하고 별도로 그것들을 처리하기 위해 사용되는 {{domxref("ChannelSplitterNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createConvolver()")}}</dt>
+ <dd>오디오 그래프에 잔향(reverberation) 효과와 같은 콘볼루션 이펙트를 적용하기 위해 사용될 수 있는 {{domxref("ConvolverNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createDelay()")}}</dt>
+ <dd>들어오는 오디오 신호를 지연시키기 위해 사용되는 {{domxref("DelayNode")}}를 생성합니다. 이 노드는 Web Audio API 그래프에서 피드백 루프를 생성하는 데 유용합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createDynamicsCompressor()")}}</dt>
+ <dd>음향 압축(acoustic compression)을 오디오 신호에 적용하기 위해 사용될 수 있는 {{domxref("DynamicsCompressorNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createGain()")}}</dt>
+ <dd>오디오 그래프의 전반적인 볼륨을 제어하기 위해 사용될 수 있는 {{domxref("GainNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createIIRFilter()")}}</dt>
+ <dd>직접 지정한 피드포워드(feedforward)와 피드백(feedback) 계수로 구성되는 무한 임펄스 응답(IIR) 필터를 나타내는 {{domxref("IIRFilterNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createOscillator()")}}</dt>
+ <dd>주기적인 파형을 나타내는 소스인 {{domxref("OscillatorNode")}}를 생성합니다. 기본적으로 이것은 음(tone)을 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createPanner()")}}</dt>
+ <dd>들어오는 오디오 스트림을 3D 공간에서 공간화하기 위해 사용되는 {{domxref("PannerNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createPeriodicWave()")}}</dt>
+ <dd>{{ domxref("OscillatorNode") }}의 출력을 결정하기 위해 사용될 수 있는 주기적인 파형을 정의하는 데 쓰이는 {{domxref("PeriodicWave")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createScriptProcessor()")}} {{deprecated_inline}}</dt>
+ <dd>JavaScript를 통한 직접적인 오디오 프로세싱을 위해 사용될 수 있는 {{domxref("ScriptProcessorNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createStereoPanner()")}}</dt>
+ <dd>오디오 소스에 스테레오 패닝을 적용하기 위해 사용될 수 있는 {{domxref("StereoPannerNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.createWaveShaper()")}}</dt>
+ <dd>비선형 변형(non-linear distortion) 효과를 구현하기 위해 사용되는 {{domxref("WaveShaperNode")}}를 생성합니다.</dd>
+ <dt>{{domxref("BaseAudioContext.decodeAudioData()")}}</dt>
+ <dd>{{domxref("ArrayBuffer")}}에 포함된 오디오 파일 데이터를 비동기적으로 디코드합니다. 이 경우, ArrayBuffer는 보통 {{domxref("XMLHttpRequest")}}의 <code>responseType</code>을 <code>arraybuffer</code>로 설정한 후 <code>response</code> 속성으로부터 로드됩니다. 이 메서드는 완전한 파일에서만 작동하며, 오디오 파일의 조각에서는 작동하지 않습니다.</dd>
+</dl>
+
+<h2 id="Examples">예제</h2>
+
+<p>기본적인 오디오 컨텍스트 선언</p>
+
+<pre class="brush: js">const audioContext = new AudioContext();</pre>
+
+<p>크로스 브라우저를 위한 다른 형태</p>
+
+<pre class="brush: js">const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioContext = new AudioContext();
+
+const oscillatorNode = audioContext.createOscillator();
+const gainNode = audioContext.createGain();
+const finish = audioContext.destination;
+</pre>
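+
+<p>위에서 생성한 노드들은 예를 들어 다음과 같이 연결해 사용할 수 있습니다. 아래는 oscillator를 gain 노드를 거쳐 도착지에 연결하고 2초간 재생하는 간단한 스케치입니다:</p>
+
+<pre class="brush: js">oscillatorNode.connect(gainNode).connect(finish);
+
+gainNode.gain.value = 0.5; // 볼륨을 절반으로 줄입니다
+oscillatorNode.start();
+oscillatorNode.stop(audioContext.currentTime + 2); // 2초 후에 정지합니다</pre>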
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+ <li>{{domxref("AudioContext")}}</li>
+ <li>{{domxref("OfflineAudioContext")}}</li>
+</ul>
diff --git a/files/ko/web/api/history/state/index.html b/files/ko/web/api/history/state/index.html
index 0f889665c7..7aae615ba7 100644
--- a/files/ko/web/api/history/state/index.html
+++ b/files/ko/web/api/history/state/index.html
@@ -9,15 +9,15 @@ translation_of: Web/API/History/state
<p>{{event("popstate")}} 이벤트가 트리거될때가 아닌 상태에서 state값을 볼 수 있는 방법입니다.</p>
-<h2 id="문법">문법</h2>
+<h2 id="syntax">구문</h2>
<pre class="syntaxbox">const <em>currentState</em> = history.state</pre>
-<h3 id="값">값</h3>
+<h3 id="value">값</h3>
<p>현재 history 항목의 state 값입니다. 이 값은 {{domxref("History.pushState","pushState()")}} 또는 {{domxref("History.replaceState","replaceState()")}}를 사용할 때까지 {{jsxref("null")}} 값을 가집니다.</p>
-<h2 id="예제">예제</h2>
+<h2 id="examples">예제</h2>
<p><code>history.state</code>로 초기값을 보여준 후 {{domxref("History.pushState","pushState()")}}를 사용하여 state를 푸시합니다.</p>
@@ -32,7 +32,7 @@ history.pushState({name: 'Example'}, "pushState example", 'page3.html');
// Now state has a value.
console.log(`History.state after pushState: ${history.state}`);</pre>
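+
+<p>{{domxref("History.replaceState","replaceState()")}}를 사용하면 새 history 항목을 추가하지 않고 현재 state를 교체할 수 있습니다. 다음은 간단한 스케치입니다:</p>
+
+<pre class="brush: js">history.replaceState({name: 'Replaced'}, "replaceState example", 'page3.html');
+// state가 교체되었습니다.
+console.log(`History.state after replaceState: ${history.state}`);</pre>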
-<h2 id="SpecificationsE">Specifications<a class="button section-edit only-icon" href="https://developer.mozilla.org/en-US/docs/Web/API/History$edit#Specifications" rel="nofollow, noindex"><span>E</span></a></h2>
+<h2 id="Specifications">명세</h2>
<table class="standard-table">
<tbody>
@@ -54,13 +54,13 @@ console.log(`History.state after pushState: ${history.state}`);</pre>
</tbody>
</table>
-<h2 id="Browser_compatibility">Browser compatibility</h2>
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
<p>{{Compat("api.History.state")}}</p>
-<h2 id="See_also">See also</h2>
+<h2 id="See_also">같이 보기</h2>
<ul>
<li><a href="/en-US/docs/Web/API/History_API/Working_with_the_History_API">Working with the History API</a></li>
diff --git a/files/ko/web/api/offlineaudiocontext/index.html b/files/ko/web/api/offlineaudiocontext/index.html
new file mode 100644
index 0000000000..6ae837d718
--- /dev/null
+++ b/files/ko/web/api/offlineaudiocontext/index.html
@@ -0,0 +1,148 @@
+---
+title: OfflineAudioContext
+slug: Web/API/OfflineAudioContext
+tags:
+ - API
+ - Audio
+ - Interface
+ - OfflineAudioContext
+ - Reference
+ - Web Audio API
+browser-compat: api.OfflineAudioContext
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p><code>OfflineAudioContext</code> 인터페이스는 함께 연결된 {{domxref("AudioNode")}}들로부터 만들어진 오디오 프로세싱 그래프를 나타내는 {{domxref("AudioContext")}} 인터페이스입니다. 표준 {{domxref("AudioContext")}}와는 대조적으로, <code>OfflineAudioContext</code>는 오디오를 장치 하드웨어로 렌더링하지 않습니다; 대신, 이것은 가능한 한 빨리 오디오를 생성하고, 그 결과를 {{domxref("AudioBuffer")}}에 출력합니다.</p>
+
+<p>{{InheritanceDiagram}}</p>
+
+<h2 id="Constructor">생성자</h2>
+
+<dl>
+ <dt>{{domxref("OfflineAudioContext.OfflineAudioContext()")}}</dt>
+ <dd>새로운 <code>OfflineAudioContext</code> 인스턴스를 생성합니다.</dd>
+</dl>
+
+<h2 id="Properties">속성</h2>
+
+<p><em>또한 부모 인터페이스인 {{domxref("BaseAudioContext")}}로부터 속성을 상속받습니다.</em></p>
+
+<dl>
+ <dt>{{domxref('OfflineAudioContext.length')}} {{readonlyinline}}</dt>
+ <dd>버퍼의 크기를 샘플 프레임 단위로 나타내는 정수입니다.</dd>
+</dl>
+
+<h3 id="Event_handlers">이벤트 처리기</h3>
+
+<dl>
+ <dt>{{domxref("OfflineAudioContext.oncomplete")}}</dt>
+ <dd>{{domxref("OfflineAudioContext.startRendering()")}}의 이벤트 기반 버전이 사용된 이후, 프로세싱이 종료되었을 때, 즉 ({{domxref("OfflineAudioCompletionEvent")}} 유형의) {{event("complete")}} 이벤트가 발생되었을 때 호출되는 <a href="/en-US/docs/Web/Events/Event_handlers">이벤트 처리기</a>입니다.</dd>
+</dl>
+
+<h2 id="Methods">메서드</h2>
+
+<p><em>또한 부모 인터페이스인 {{domxref("BaseAudioContext")}}로부터 메서드를 상속받습니다.</em></p>
+
+<dl>
+ <dt>{{domxref("OfflineAudioContext.suspend()")}}</dt>
+ <dd>지정된 시간에 오디오 컨텍스트의 시간 진행을 일시 중단하도록 스케줄링하고, 프로미스를 반환합니다.</dd>
+ <dt>{{domxref("OfflineAudioContext.startRendering()")}}</dt>
+ <dd>현재의 연결과 현재 스케줄링된 변화를 고려하여 오디오 렌더링을 시작합니다. 이 문서는 이벤트 기반 버전과 프로미스 기반 버전 모두를 다룹니다.</dd>
+</dl>
+
+<h3 id="Deprecated_methods">더 이상 사용되지 않는 메서드</h3>
+
+<dl>
+ <dt>{{domxref("OfflineAudioContext.resume()")}}</dt>
+ <dd>이전에 연기된 오디오 컨텍스트에서의 시간 진행을 재개합니다.</dd>
+</dl>
+
+<div class="note">
+<p><strong>참고</strong>: <code>resume()</code> 메서드는 여전히 사용 가능합니다 — 이것은 이제 {{domxref("BaseAudioContext")}} 인터페이스에 정의되었고 ({{domxref("AudioContext.resume")}}을 참조하세요) 따라서 {{domxref("AudioContext")}}와 {{domxref("OfflineAudioContext")}} 인터페이스 모두에서 접근 가능합니다.</p>
+</div>
+
+<h2 id="Events">이벤트</h2>
+
+<p><code><a href="/en-US/docs/Web/API/EventTarget/addEventListener">addEventListener()</a></code>를 사용하거나 이벤트 수신기를 이 인터페이스의 <code>on<em>eventname</em></code> 속성에 부여함으로써 이 이벤트들을 수신하세요.</p>
+
+<dl>
+ <dt><code><a href="/en-US/docs/Web/API/OfflineAudioContext/complete_event">complete</a></code></dt>
+ <dd>오프라인 오디오 컨텍스트의 렌더링이 완료되었을 때 발생됩니다.<br>
+ 또한 <code><a href="/en-US/docs/Web/API/OfflineAudioContext/oncomplete">oncomplete</a></code> 이벤트 처리기 속성을 사용하여 이용 가능합니다.</dd>
+</dl>
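+
+<p>예를 들어, 다음은 <code>complete</code> 이벤트를 수신하는 간단한 스케치입니다 (<code>offlineCtx</code>는 이미 생성된 <code>OfflineAudioContext</code>라고 가정합니다):</p>
+
+<pre class="brush: js">offlineCtx.addEventListener('complete', function(event) {
+  // event.renderedBuffer는 렌더링된 AudioBuffer입니다
+  console.log('렌더링 완료', event.renderedBuffer);
+});</pre>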
+
+<h2 id="Examples">예제</h2>
+
+<p>이 간단한 예제에서, 우리는 {{domxref("AudioContext")}}와 <code>OfflineAudioContext</code> 객체를 모두 선언합니다. <code>AudioContext</code>는 XHR과 {{domxref("BaseAudioContext.decodeAudioData")}}를 통해 오디오 트랙을 로드하는 데 사용하고, 그 다음 <code>OfflineAudioContext</code>는 오디오를 {{domxref("AudioBufferSourceNode")}}에서 렌더링하는 데 사용합니다. 오프라인 오디오 그래프가 준비된 후, {{domxref("OfflineAudioContext.startRendering")}}을 사용하여 이것을 {{domxref("AudioBuffer")}}로 렌더링해야 합니다.</p>
+
+<p><code>startRendering()</code> 프로미스가 이행되면 렌더링이 완료된 것이고, 프로미스는 출력 <code>AudioBuffer</code>를 반환합니다.</p>
+
+<p>이 시점에서 우리는 다른 오디오 컨텍스트를 생성하고, 그것의 내부에 {{domxref("AudioBufferSourceNode")}}를 생성하고, 그리고 이것의 버퍼를 <code>AudioBuffer</code> 프로미스와 같게 설정합니다. 이것은 그리고 나서 간단한 표준 오디오 그래프의 일부로 재생됩니다.</p>
+
+<div class="note">
+<p><strong>참고</strong>: 작동하는 예제를 보려면 <a href="https://mdn.github.io/webaudio-examples/offline-audio-context-promise/">offline-audio-context-promise</a> GitHub 레포지토리를 참고하세요 (<a href="https://github.com/mdn/webaudio-examples/tree/master/offline-audio-context-promise">소스 코드</a>도 보세요.)</p>
+</div>
+
+<pre class="brush: js">// 온라인과 오프라인 오디오 컨텍스트를 정의합니다
+
+var audioCtx = new AudioContext();
+var offlineCtx = new OfflineAudioContext(2,44100*40,44100);
+
+var source = offlineCtx.createBufferSource();
+
+// 오디오 트랙을 로딩하기 위해 XHR를 사용하고,
+// 이것을 디코딩하기 위해 decodeAudioData를 사용하고 렌더링하기 위해 OfflineAudioContext를 사용합니다
+
+function getData() {
+ var request = new XMLHttpRequest();
+
+ request.open('GET', 'viper.ogg', true);
+
+ request.responseType = 'arraybuffer';
+
+ request.onload = function() {
+ var audioData = request.response;
+
+ audioCtx.decodeAudioData(audioData, function(buffer) {
+ var myBuffer = buffer;
+ source.buffer = myBuffer;
+ source.connect(offlineCtx.destination);
+ source.start();
+ //source.loop = true;
+ offlineCtx.startRendering().then(function(renderedBuffer) {
+ console.log('Rendering completed successfully');
+ var song = audioCtx.createBufferSource();
+ song.buffer = renderedBuffer;
+
+ song.connect(audioCtx.destination);
+
+ // 문서에 id가 'play'인 버튼이 있다고 가정합니다
+ var play = document.getElementById('play');
+ play.onclick = function() {
+ song.start();
+ }
+ }).catch(function(err) {
+ console.log('Rendering failed: ' + err);
+ // 참고: 이 프로미스는 OfflineAudioContext에서 startRendering이 두 번째로 호출되었을 때 거부되어야만 합니다
+ });
+ });
+ }
+
+ request.send();
+}
+
+// 프로세스를 시작하기 위해 getData를 실행합니다
+
+getData();</pre>
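+
+<p>참고로, 위의 XHR 기반 흐름은 fetch와 async/await로도 같은 방식으로 표현할 수 있습니다. 다음은 동일한 파일 이름과 컨텍스트 설정을 가정한 간단한 스케치입니다:</p>
+
+<pre class="brush: js">async function renderAudio() {
+  const audioCtx = new AudioContext();
+  const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);
+
+  // 오디오 파일을 가져와 디코드합니다
+  const response = await fetch('viper.ogg');
+  const arrayBuffer = await response.arrayBuffer();
+  const decoded = await audioCtx.decodeAudioData(arrayBuffer);
+
+  // 오프라인 그래프를 구성하고 렌더링합니다
+  const source = offlineCtx.createBufferSource();
+  source.buffer = decoded;
+  source.connect(offlineCtx.destination);
+  source.start();
+
+  return await offlineCtx.startRendering();
+}</pre>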
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/periodicwave/index.html b/files/ko/web/api/periodicwave/index.html
new file mode 100644
index 0000000000..b976fd48d0
--- /dev/null
+++ b/files/ko/web/api/periodicwave/index.html
@@ -0,0 +1,55 @@
+---
+title: PeriodicWave
+slug: Web/API/PeriodicWave
+tags:
+ - API
+ - Audio
+ - Interface
+ - Media
+ - PeriodicWave
+ - Reference
+ - Web Audio
+ - Web Audio API
+ - waveform
+browser-compat: api.PeriodicWave
+---
+<p>{{ APIRef("Web Audio API") }}</p>
+
+<div>
+<p><strong><code>PeriodicWave</code></strong> 인터페이스는 {{domxref("OscillatorNode")}}의 출력을 형성하는 데 사용될 수 있는 주기적인 파형을 정의합니다.</p>
+</div>
+
+<p><code>PeriodicWave</code>에는 입력도 출력도 없습니다; 이것은 {{domxref("OscillatorNode.setPeriodicWave()")}}를 호출할 때 사용자 정의 oscillator를 정의하기 위해 쓰입니다. <code>PeriodicWave</code> 그 자체는 {{domxref("BaseAudioContext.createPeriodicWave")}}에 의해 생성/반환됩니다.</p>
+
+<h2 id="Constructor">생성자</h2>
+
+<dl>
+ <dt>{{domxref("PeriodicWave.PeriodicWave()")}}</dt>
+ <dd>모든 속성에 기본값을 사용하여 새로운 <code>PeriodicWave</code> 객체 인스턴스를 생성합니다. 만약 처음에 사용자 정의 속성 값을 설정하기를 원한다면, {{domxref("BaseAudioContext.createPeriodicWave")}} 팩토리 메서드를 대신 사용하세요.</dd>
+</dl>
+
+<h2 id="Properties">속성</h2>
+
+<p><em>없습니다; 또한, <code>PeriodicWave</code>는 어떠한 속성도 상속받지 않습니다.</em></p>
+
+<h2 id="Methods">메서드</h2>
+
+<p><em>없습니다; 또한, <code>PeriodicWave</code>는 어떠한 메서드도 상속받지 않습니다.</em></p>
+
+<h2 id="Example">예제</h2>
+
+<p>간단한 사인파를 포함하는 <code>PeriodicWave</code> 객체를 어떻게 생성하는지 보여주는 간단한 예제 코드를 {{domxref("BaseAudioContext.createPeriodicWave")}}에서 확인해 보세요.</p>
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/ko/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+</ul>
diff --git a/files/ko/web/api/periodicwave/periodicwave/index.html b/files/ko/web/api/periodicwave/periodicwave/index.html
new file mode 100644
index 0000000000..edeabac774
--- /dev/null
+++ b/files/ko/web/api/periodicwave/periodicwave/index.html
@@ -0,0 +1,70 @@
+---
+title: PeriodicWave()
+slug: Web/API/PeriodicWave/PeriodicWave
+tags:
+ - API
+ - Audio
+ - Constructor
+ - PeriodicWave
+ - Reference
+ - Web Audio API
+browser-compat: api.PeriodicWave.PeriodicWave
+---
+<p>{{APIRef("Web Audio API")}}</p>
+
+<p><a
+ href="/ko/docs/Web/API/Web_Audio_API">Web Audio API</a>의 <code><strong>PeriodicWave()</strong></code> 생성자는 새로운 {{domxref("PeriodicWave")}} 객체 인스턴스를 생성합니다.</p>
+
+<h2 id="Syntax">구문</h2>
+
+<pre
+ class="brush: js">var <em>myWave</em> = new PeriodicWave(<em>context</em>, <em>options</em>);</pre>
+
+<h3 id="Parameters">매개변수</h3>
+
+<p><em>{{domxref("AudioNodeOptions")}} dictionary로부터 매개변수를 상속받습니다</em>.</p>
+
+<dl>
+ <dt><code>context</code></dt>
+ <dd>노드가 연관될 오디오 컨텍스트를 나타내는 {{domxref("BaseAudioContext")}}</dd>
+ <dt><code>options</code> {{optional_inline}}</dt>
+ <dd>여러분이 <code>PeriodicWave</code>가 가지기를 바라는 속성들을 정의하는 <code><a href="https://webaudio.github.io/web-audio-api/#idl-def-PeriodicWaveOptions">PeriodicWaveOptions</a></code> dictionary 객체 (이것은 또한 <a
+ href="https://webaudio.github.io/web-audio-api/#idl-def-PeriodicWaveConstraints">PeriodicWaveConstraints</a>
+ dictionary에 정의된 옵션들도 상속받습니다.):
+ <ul>
+ <li><code>real</code>: 여러분이 파동을 형성하기 위해 사용하기를 원하는 코사인 항을 포함하는 {{domxref("Float32Array")}} ({{domxref("BaseAudioContext.createPeriodicWave")}}의 <code>real</code> 매개변수와 동일)</li>
+ <li><code>imag</code>: 여러분이 파동을 형성하기 위해 사용하기를 원하는 사인 항을 포함하는 {{domxref("Float32Array")}} ({{domxref("BaseAudioContext.createPeriodicWave")}}의 <code>imag</code> 매개변수와 동일)</li>
+ </ul>
+ </dd>
+</dl>
+
+<h3 id="Return_value">반환 값</h3>
+
+<p>새로운 {{domxref("PeriodicWave")}} 객체 인스턴스.</p>
+
+<h2 id="Example">예제</h2>
+
+<pre class="brush: js">var real = new Float32Array(2);
+var imag = new Float32Array(2);
+var ac = new AudioContext();
+
+real[0] = 0;
+imag[0] = 0;
+real[1] = 1;
+imag[1] = 0;
+
+var options = {
+ real : real,
+ imag : imag,
+ disableNormalization : false
+}
+
+var wave = new PeriodicWave(ac, options);</pre>
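+
+<p>생성한 파형은 예를 들어 다음과 같이 oscillator에 적용할 수 있습니다 (위 예제의 <code>ac</code>와 <code>wave</code>를 그대로 사용하는 간단한 스케치입니다):</p>
+
+<pre class="brush: js">var osc = ac.createOscillator();
+osc.setPeriodicWave(wave);
+osc.connect(ac.destination);
+osc.start();
+osc.stop(2);</pre>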
+
+<h2 id="Specifications">명세</h2>
+
+{{Specifications}}
+
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
+
+<p>{{Compat}}</p>
diff --git a/files/ko/web/api/streams_api/index.html b/files/ko/web/api/streams_api/index.html
index 03bafe2e88..510d797db4 100644
--- a/files/ko/web/api/streams_api/index.html
+++ b/files/ko/web/api/streams_api/index.html
@@ -99,10 +99,10 @@ translation_of: Web/API/Streams_API
<ul>
<li><a href="http://mdn.github.io/dom-examples/streams/simple-pump/">Simple stream pump</a>: ReadableStream에서 어떻게 데이터를 읽어들여 다른 곳으로 전달하는지 보여줍니다.</li>
- <li><a href="http://mdn.github.io/dom-examples/streams/grayscale-png/">Grayscale a PNG</a>: PNG file의 ReadableStream을 통해 grayscale로 변경하는 방법을 보여줍니다..</li>
+ <li><a href="http://mdn.github.io/dom-examples/streams/grayscale-png/">Grayscale a PNG</a>: PNG file의 ReadableStream을 통해 grayscale로 변경하는 방법을 보여줍니다.</li>
<li><a href="http://mdn.github.io/dom-examples/streams/simple-random-stream/">Simple random stream</a>: 커스텀 스트림을 통해 무작위 문자열을 생성하고, 데이터 청크로 큐잉한 뒤, 다시 읽어들이는 방법에 대해 설명합니다.</li>
<li><a href="http://mdn.github.io/dom-examples/streams/simple-tee-example/">Simple tee example</a>: 이 예제는 simple random stream 예제를 확장하여, 스트림을 분할하고 각 스트림이 독립적으로 데이터를 읽는 방법을 보여줍니다.</li>
- <li><a href="http://mdn.github.io/dom-examples/streams/simple-writer/">Simple writer</a>:  Writable stream에 데이터를 쓰는 방법을 설명하고, 스트림 데이터를 디코드하여 UI로 표현하는 방법을 보옂부니다..</li>
+ <li><a href="http://mdn.github.io/dom-examples/streams/simple-writer/">Simple writer</a>: Writable stream에 데이터를 쓰는 방법을 설명하고, 스트림 데이터를 디코드하여 UI로 표현하는 방법을 보여줍니다.</li>
<li><a href="http://mdn.github.io/dom-examples/streams/png-transform-stream/">Unpack chunks of a PNG</a>: <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/pipeThrough"><code>pipeThrough()</code></a> 을 통해 PNG file을 PNG 청크 스트림으로 변환하는 방식으로 ReadableStream을 다른 데이터 타입 스트림으로 전환하는 방법을 설명합니다.</li>
</ul>
diff --git a/files/ko/web/api/web_audio_api/advanced_techniques/index.html b/files/ko/web/api/web_audio_api/advanced_techniques/index.html
new file mode 100644
index 0000000000..d3ce7cd56d
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/advanced_techniques/index.html
@@ -0,0 +1,586 @@
+---
+title: 'Advanced techniques: Creating and sequencing audio'
+slug: Web/API/Web_Audio_API/Advanced_techniques
+tags:
+ - API
+ - Advanced
+ - Audio
+ - Guide
+ - Reference
+ - Web Audio API
+ - sequencer
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<p class="summary">In this tutorial, we're going to cover sound creation and modification, as well as timing and scheduling. We're going to introduce sample loading, envelopes, filters, wavetables, and frequency modulation. If you're familiar with these terms and you're looking for an introduction to their application within with the Web Audio API, you've come to the right place.</p>
+
+<h2 id="Demo">Demo</h2>
+
+<p>We're going to be looking at a very simple step sequencer:</p>
+
+<p><img alt="A sound sequencer application featuring play and BPM master controls, and 4 different voices with controls for each." src="sequencer.png"><br>
+  </p>
+
+<p>In practice this is easier to do with a library — the Web Audio API was built to be built upon. If you are about to embark on building something more complex, <a href="https://tonejs.github.io/">tone.js</a> would be a good place to start. However, we want to demonstrate how to build such a demo from first principles, as a learning exercise.</p>
+
+<div class="note">
+<p><strong>Note</strong>: You can find the source code on GitHub as <a href="https://github.com/mdn/webaudio-examples/tree/master/step-sequencer">step-sequencer</a>; see the <a href="https://mdn.github.io/webaudio-examples/step-sequencer/">step-sequencer running live</a> also.</p>
+</div>
+
+<p>The interface consists of master controls, which allow us to play/stop the sequencer, and adjust the BPM (beats per minute) to speed up or slow down the "music".</p>
+
+<p>There are four different sounds, or voices, which can be played. Each voice has four buttons, which represent four beats in one bar of music. When they are enabled, the note will sound. When the instrument plays, it will move across this set of beats and loop the bar.</p>
+
+<p>Each voice also has local controls, which allow you to manipulate the effects or parameters particular to each technique we are using to create those voices. The techniques we are using are:</p>
+
+<table class="standard-table">
+ <thead>
+ <tr>
+ <th scope="col">Name of voice</th>
+ <th scope="col">Technique</th>
+ <th scope="col">Associated Web Audio API feature</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>"Sweep"</td>
+ <td>Oscillator, periodic wave</td>
+ <td>{{domxref("OscillatorNode")}}, {{domxref("PeriodicWave")}}</td>
+ </tr>
+ <tr>
+ <td>"Pulse"</td>
+ <td>Multiple oscillators</td>
+ <td>{{domxref("OscillatorNode")}}</td>
+ </tr>
+ <tr>
+ <td>"Noise"</td>
+ <td>Random noise buffer, Biquad filter</td>
+ <td>{{domxref("AudioBuffer")}}, {{domxref("AudioBufferSourceNode")}}, {{domxref("BiquadFilterNode")}}</td>
+ </tr>
+ <tr>
+ <td>"Dial up"</td>
+ <td>Loading a sound sample to play</td>
+ <td>{{domxref("BaseAudioContext/decodeAudioData")}}, {{domxref("AudioBufferSourceNode")}}</td>
+ </tr>
+ </tbody>
+</table>
+
+<div class="note">
+<p><strong>Note</strong>: This instrument was not created to sound good, it was created to provide demonstration code and represents a <em>very</em> simplified version of such an instrument. The sounds are based on a dial-up modem. If you are unaware of how one sounds you can <a href="https://soundcloud.com/john-pemberton/modem-dialup">listen to one here</a>.</p>
+</div>
+
+<h2 id="Creating_an_audio_context">Creating an audio context</h2>
+
+<p>As you should be used to by now, each Web Audio API app starts with an audio context:</p>
+
+<pre class="brush: js">// for cross browser compatibility
+const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();</pre>
+
+<h2 id="The_sweep_—_oscillators_periodic_waves_and_envelopes">The "sweep" — oscillators, periodic waves, and envelopes</h2>
+
+<p>For what we will call the "sweep" sound, that first noise you hear when you dial up, we're going to create an oscillator to generate the sound.</p>
+
+<p>The {{domxref("OscillatorNode")}} comes with basic waveforms out of the box — sine, square, triangle or sawtooth. However, instead of using the standard waves that come by default, we're going to create our own using the {{domxref("PeriodicWave")}} interface and values set in a wavetable. We can use the {{domxref("BaseAudioContext.createPeriodicWave")}} method to use this custom wave with an oscillator.</p>
+
+<h3 id="The_periodic_wave">The periodic wave</h3>
+
+<p>First of all, we'll create our periodic wave. To do so, we need to pass real and imaginary values into the {{domxref("BaseAudioContext.createPeriodicWave()")}} method:</p>
+
+<pre class="brush: js">const wave = audioCtx.createPeriodicWave(wavetable.real, wavetable.imag);
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: In our example the wavetable is held in a separate JavaScript file (<code>wavetable.js</code>), because there are <em>so</em> many values. It is taken from a <a href="https://github.com/GoogleChromeLabs/web-audio-samples/tree/main/archive/demos/wave-tables">repository of wavetables</a>, which can be found in the <a href="https://github.com/GoogleChromeLabs/web-audio-samples/">Web Audio API examples from Google Chrome Labs</a>.</p>
+</div>
+
+<h3 id="The_Oscillator">The Oscillator</h3>
+
+<p>Now we can create an {{domxref("OscillatorNode")}} and set its wave to the one we've created:</p>
+
+<pre class="brush: js">function playSweep(time) {
+ const osc = audioCtx.createOscillator();
+ osc.setPeriodicWave(wave);
+ osc.frequency.value = 440;
+ osc.connect(audioCtx.destination);
+ osc.start(time);
+ osc.stop(time + 1);
+}</pre>
+
+<p>We pass in a time parameter to the function here, which we'll use later to schedule the sweep.</p>
+
+<h3 id="Controlling_amplitude">Controlling amplitude</h3>
+
+<p>This is great, but wouldn't it be nice if we had an amplitude envelope to go with it? Let's create a simple one so we get used to the methods we need to create an envelope with the Web Audio API.</p>
+
+<p>Let's say our envelope has attack and release. We can allow the user to control these using <a href="/en-US/docs/Web/HTML/Element/input/range">range inputs</a> on the interface:</p>
+
+<pre class="brush: html">&lt;label for="attack"&gt;Attack&lt;/label&gt;
+&lt;input name="attack" id="attack" type="range" min="0" max="1" value="0.2" step="0.1" /&gt;
+
+&lt;label for="release"&gt;Release&lt;/label&gt;
+&lt;input name="release" id="release" type="range" min="0" max="1" value="0.5" step="0.1" /&gt;</pre>
+
+<p>Now we can create some variables over in JavaScript and have them change when the input values are updated:</p>
+
+<pre class="brush: js">let attackTime = 0.2;
+const attackControl = document.querySelector('#attack');
+attackControl.addEventListener('input', function() {
+ attackTime = Number(this.value);
+}, false);
+
+let releaseTime = 0.5;
+const releaseControl = document.querySelector('#release');
+releaseControl.addEventListener('input', function() {
+ releaseTime = Number(this.value);
+}, false);</pre>
+
+<h3 id="The_final_playSweep_function">The final playSweep() function</h3>
+
+<p>Now we can expand our <code>playSweep()</code> function. We need to add a {{domxref("GainNode")}} and connect that through our audio graph to actually apply amplitude variations to our sound. The gain node has one property: <code>gain</code>, which is of type {{domxref("AudioParam")}}.</p>
+
+<p>This is really useful — now we can start to harness the power of the audio param methods on the gain value. We can set a value at a certain time, or we can change it <em>over</em> time with methods such as {{domxref("AudioParam.linearRampToValueAtTime")}}.</p>
+
+<p>For our attack and release, we'll use the <code>linearRampToValueAtTime</code> method as mentioned above. It takes two parameters — the value to set the parameter to (in this case, the gain) and the time at which to do it. In our case <em>when</em> is controlled by our inputs. So in the example below the gain is being increased to 1, at a linear rate, over the time the attack range input has been set to. Similarly, for our release, the gain is being set to 0, at a linear rate, over the time the release input has been set to.</p>
+
+<pre class="brush: js">let sweepLength = 2;
+function playSweep(time) {
+ let osc = audioCtx.createOscillator();
+ osc.setPeriodicWave(wave);
+ osc.frequency.value = 440;
+
+ let sweepEnv = audioCtx.createGain();
+ sweepEnv.gain.cancelScheduledValues(time);
+ sweepEnv.gain.setValueAtTime(0, time);
+ // set our attack
+ sweepEnv.gain.linearRampToValueAtTime(1, time + attackTime);
+ // set our release
+ sweepEnv.gain.linearRampToValueAtTime(0, time + sweepLength - releaseTime);
+
+ osc.connect(sweepEnv).connect(audioCtx.destination);
+ osc.start(time);
+ osc.stop(time + sweepLength);
+}</pre>
+
+<h2 id="The_pulse_—_low_frequency_oscillator_modulation">The "pulse" — low frequency oscillator modulation</h2>
+
+<p>Great, now we've got our sweep! Let's move on and take a look at that nice pulse sound. We can achieve this with a basic oscillator, modulated with a second oscillator.</p>
+
+<h3 id="Initial_oscillator">Initial oscillator</h3>
+
+<p>We'll set up our first {{domxref("OscillatorNode")}} the same way as our sweep sound, except we won't use a wavetable to set a bespoke wave — we'll just use the default <code>sine</code> wave:</p>
+
+<pre class="brush: js">const osc = audioCtx.createOscillator();
+osc.type = 'sine';
+osc.frequency.value = 880;</pre>
+
+<p>Now we're going to create a {{domxref("GainNode")}}, as it's the <code>gain</code> value that we will oscillate with our second, low frequency oscillator:</p>
+
+<pre class="brush: js">const amp = audioCtx.createGain();
+amp.gain.setValueAtTime(1, audioCtx.currentTime);</pre>
+
+<h3 id="Creating_the_second_low_frequency_oscillator">Creating the second, low frequency, oscillator</h3>
+
+<p>We'll now create a second — <code>square</code> — wave (or pulse) oscillator, to alter the amplification of our first sine wave:</p>
+
+<pre class="brush: js">const lfo = audioCtx.createOscillator();
+lfo.type = 'square';
+lfo.frequency.value = 30;</pre>
+
+<h3 id="Connecting_the_graph">Connecting the graph</h3>
+
+<p>The key here is connecting the graph correctly, and also starting both oscillators:</p>
+
+<pre class="brush: js">lfo.connect(amp.gain);
+osc.connect(amp).connect(audioCtx.destination);
+lfo.start();
+osc.start(time);
+osc.stop(time + pulseTime);</pre>
+
+<div class="note">
+<p><strong>Note</strong>: We also don't have to use the default wave types for either of these oscillators we're creating — we could use a wavetable and the periodic wave method as we did before. There is a multitude of possibilities with just a minimum of nodes.</p>
+</div>
+
+<h3 id="Pulse_user_controls">Pulse user controls</h3>
+
+<p>For the UI controls, let's expose both frequencies of our oscillators, allowing them to be controlled via range inputs. One will change the tone and the other will change how the pulse modulates the first wave:</p>
+
+<pre class="brush: html">&lt;label for="hz"&gt;Hz&lt;/label&gt;
+&lt;input name="hz" id="hz" type="range" min="660" max="1320" value="880" step="1" /&gt;
+&lt;label for="lfo"&gt;LFO&lt;/label&gt;
+&lt;input name="lfo" id="lfo" type="range" min="20" max="40" value="30" step="1" /&gt;</pre>
+
+<p>As before, we'll vary the parameters when the range input values are changed by the user.</p>
+
+<pre class="brush: js">let pulseHz = 880;
+const hzControl = document.querySelector('#hz');
+hzControl.addEventListener('input', function() {
+ pulseHz = Number(this.value);
+}, false);
+
+let lfoHz = 30;
+const lfoControl = document.querySelector('#lfo');
+lfoControl.addEventListener('input', function() {
+ lfoHz = Number(this.value);
+}, false);</pre>
+
+<h3 id="The_final_playPulse_function">The final playPulse() function</h3>
+
+<p>Here's the entire <code>playPulse()</code> function:</p>
+
+<pre class="brush: js">let pulseTime = 1;
+function playPulse(time) {
+ let osc = audioCtx.createOscillator();
+ osc.type = 'sine';
+ osc.frequency.value = pulseHz;
+
+ let amp = audioCtx.createGain();
+ amp.gain.value = 1;
+
+ let lfo = audioCtx.createOscillator();
+ lfo.type = 'square';
+ lfo.frequency.value = lfoHz;
+
+ lfo.connect(amp.gain);
+ osc.connect(amp).connect(audioCtx.destination);
+ lfo.start();
+ osc.start(time);
+ osc.stop(time + pulseTime);
+}</pre>
+
+<h2 id="The_noise_—_random_noise_buffer_with_biquad_filter">The "noise" — random noise buffer with biquad filter</h2>
+
+<p>Now we need to make some noise! All modems have noise. When it comes to audio data, noise is just random numbers, so it's a relatively straightforward thing to create with code.</p>
+
+<h3 id="Creating_an_audio_buffer">Creating an audio buffer</h3>
+
+<p>We need to create an empty container to put these numbers into, one that the Web Audio API understands. This is where {{domxref("AudioBuffer")}} objects come in. You can fetch a file and decode it into a buffer (we'll get to that later on in the tutorial), or you can create an empty buffer and fill it with your own data.</p>
+
+<p>For noise, let's do the latter. To create the buffer, we first need to calculate its size. We can use the {{domxref("BaseAudioContext.sampleRate")}} property for this:</p>
+
+<pre class="brush: js">const bufferSize = audioCtx.sampleRate * noiseLength;
+const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);</pre>
+
+<p>Now we can fill it with random numbers between -1 and 1:</p>
+
+<pre class="brush: js">let data = buffer.getChannelData(0); // get data
+
+// fill the buffer with noise
+for (let i = 0; i &lt; bufferSize; i++) {
+ data[i] = Math.random() * 2 - 1;
+}</pre>
+
+<div class="note">
+<p><strong>Note</strong>: Why -1 to 1? When outputting sound to a file or speakers we need to have a number to represent 0db full scale — the numerical limit of the fixed point media or DAC. In floating point audio, 1 is a convenient number to map to "full scale" for mathematical operations on signals, so oscillators, noise generators and other sound sources typically output bipolar signals in the range -1 to 1. A browser will clamp values outside this range.</p>
+</div>
+
+<h3 id="Creating_a_buffer_source">Creating a buffer source</h3>
+
+<p>Now that we have the audio buffer and have filled it with data, we need a node to add to our graph that can use the buffer as a source. We'll create an {{domxref("AudioBufferSourceNode")}} for this, and pass in the data we've created:</p>
+
+<pre class="brush: js">let noise = audioCtx.createBufferSource();
+noise.buffer = buffer;</pre>
+
+<p>If we connect this through our audio graph and play it —</p>
+
+<pre class="brush: js">noise.connect(audioCtx.destination);
+noise.start();</pre>
+
+<p>you'll notice that it's pretty hissy or tinny. We've created white noise; that's how it should be. Our values are running from -1 to 1, which means we have peaks of all frequencies, which in turn is actually quite dramatic and piercing. We <em>could</em> modify the function to run values from 0.5 to -0.5 or similar to take the peaks off and reduce the discomfort, however, where's the fun in that? Let's route the noise we've created through a filter.</p>
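+
+<p>For reference, if you did want to tame those peaks, it really is a small change. Here is a sketch of the same fill loop, scaled down to the -0.5 to 0.5 range:</p>
+
+<pre class="brush: js">for (let i = 0; i &lt; bufferSize; i++) {
+  data[i] = Math.random() - 0.5; // quieter white noise, -0.5 to 0.5
+}</pre>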
+
+<h3 id="Adding_a_biquad_filter_to_the_mix">Adding a biquad filter to the mix</h3>
+
+<p>We want something in the range of pink or brown noise. We want to cut off those high frequencies and possibly some of the lower ones. Let's pick a bandpass biquad filter for the job.</p>
+
+<div class="note">
+<p><strong>Note</strong>: The Web Audio API comes with two types of filter nodes: {{domxref("BiquadFilterNode")}} and {{domxref("IIRFilterNode")}}. For the most part a biquad filter will be good enough — it comes with different types such as lowpass, highpass, and bandpass. If you're looking to do something more bespoke, however, the IIR filter might be a good option — see <a href="/en-US/docs/Web/API/Web_Audio_API/Using_IIR_filters">Using IIR filters</a> for more information.</p>
+</div>
+
+<p>Wiring this up is the same as we've seen before. We create the {{domxref("BiquadFilterNode")}}, configure the properties we want for it, and connect it through our graph. Different types of biquad filters have different properties — for instance, setting the frequency on a bandpass type adjusts the middle frequency; on a lowpass, it would set the top frequency.</p>
+
+<pre class="brush: js">let bandpass = audioCtx.createBiquadFilter();
+bandpass.type = 'bandpass';
+bandpass.frequency.value = 1000;
+
+// connect our graph
+noise.connect(bandpass).connect(audioCtx.destination);</pre>
+
+<h3 id="Noise_user_controls">Noise user controls</h3>
+
+<p>On the UI we'll expose the noise duration and the frequency we want to band, allowing the user to adjust them via range inputs and event handlers just like in previous sections:</p>
+
+<pre class="brush: html">&lt;label for="duration"&gt;Duration&lt;/label&gt;
+&lt;input name="duration" id="duration" type="range" min="0" max="2" value="1" step="0.1" /&gt;
+
+&lt;label for="band"&gt;Band&lt;/label&gt;
+&lt;input name="band" id="band" type="range" min="400" max="1200" value="1000" step="5" /&gt;
+</pre>
+
+<pre class="brush: js">let noiseDuration = 1;
+const durControl = document.querySelector('#duration');
+durControl.addEventListener('input', function() {
+ noiseDuration = Number(this.value);
+}, false);
+
+let bandHz = 1000;
+const bandControl = document.querySelector('#band');
+bandControl.addEventListener('input', function() {
+ bandHz = Number(this.value);
+}, false);</pre>
+
+<h3 id="The_final_playNoise_function">The final playNoise() function</h3>
+
+<p>Here's the entire <code>playNoise()</code> function:</p>
+
+<pre class="brush: js">function playNoise(time) {
+ const bufferSize = audioCtx.sampleRate * noiseDuration; // set the time of the note
+ const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate); // create an empty buffer
+ let data = buffer.getChannelData(0); // get data
+
+ // fill the buffer with noise
+ for (let i = 0; i &lt; bufferSize; i++) {
+ data[i] = Math.random() * 2 - 1;
+ }
+
+ // create a buffer source for our created data
+ let noise = audioCtx.createBufferSource();
+ noise.buffer = buffer;
+
+ let bandpass = audioCtx.createBiquadFilter();
+ bandpass.type = 'bandpass';
+ bandpass.frequency.value = bandHz;
+
+ // connect our graph
+ noise.connect(bandpass).connect(audioCtx.destination);
+ noise.start(time);
+}</pre>
+
+<h2 id="Dial_up_—_loading_a_sound_sample">"Dial up" — loading a sound sample</h2>
+
+<p>It's straightforward enough to emulate phone dial (DTMF) sounds by playing a couple of oscillators together, using the methods we've already looked at. However, in this section we'll load in a sample file instead, so we can take a look at what's involved.</p>
+
+<h3 id="Loading_the_sample">Loading the sample</h3>
+
+<p>We want to make sure our file has loaded and been decoded into a buffer before we use it, so let's create an <code><a href="/en-US/docs/Web/JavaScript/Reference/Statements/async_function">async</a></code> function to allow us to do this:</p>
+
+<pre class="brush: js">async function getFile(audioContext, filepath) {
+ const response = await fetch(filepath);
+ const arrayBuffer = await response.arrayBuffer();
+ const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
+ return audioBuffer;
+}</pre>
+
+<p>We can then use the <code><a href="/en-US/docs/Web/JavaScript/Reference/Operators/await">await</a></code> operator when calling this function, which ensures that we can only run subsequent code when it has finished executing.</p>
+
+<p>Let's create another <code>async</code> function to set up the sample — we can combine the two async functions in a nice promise pattern to perform further actions when this file is loaded and buffered:</p>
+
+<pre class="brush: js">async function setupSample() {
+ const filePath = 'dtmf.mp3';
+ const sample = await getFile(audioCtx, filePath);
+ return sample;
+}</pre>
+
+<div class="note">
+<p><strong>Note</strong>: You can easily modify the above function to take an array of files and loop over them to load more than one sample. This would be very handy for more complex instruments, or gaming.</p>
+</div>
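+
+<p>A minimal sketch of that idea, assuming a hypothetical array of file paths and reusing the <code>getFile()</code> helper from above with <code>Promise.all()</code>:</p>
+
+<pre class="brush: js">async function setupSamples(paths) {
+  // load every file in parallel and resolve when all are decoded
+  const audioBuffers = await Promise.all(
+    paths.map((path) =&gt; getFile(audioCtx, path))
+  );
+  return audioBuffers;
+}
+
+// usage (hypothetical file names):
+// setupSamples(['dtmf.mp3', 'clap.mp3']).then((buffers) =&gt; { /* ... */ });</pre>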
+
+<p>We can now use <code>setupSample()</code> like so:</p>
+
+<pre class="brush: js">setupSample()
+ .then((sample) =&gt; {
+ // sample is our buffered file
+ // ...
+});</pre>
+
+<p>When the sample is ready to play, the program sets up the UI so it is ready to go.</p>
+
+<h3 id="Playing_the_sample">Playing the sample</h3>
+
+<p>Let's create a <code>playSample()</code> function in a similar manner to how we did with the other sounds. This time it will create an {{domxref("AudioBufferSourceNode")}}, and put the buffer data we've fetched and decoded into it, and play it:</p>
+
+<pre class="brush: js">function playSample(audioContext, audioBuffer, time) {
+ const sampleSource = audioContext.createBufferSource();
+ sampleSource.buffer = audioBuffer;
+ sampleSource.connect(audioContext.destination);
+ sampleSource.start(time);
+ return sampleSource;
+}</pre>
+
+<div class="note">
+<p><strong>Note</strong>: We can call <code>stop()</code> on an {{domxref("AudioBufferSourceNode")}}, however, this will happen automatically when the sample has finished playing.</p>
+</div>
+
+<h3 id="Dial-up_user_controls">Dial-up user controls</h3>
+
+<p>The {{domxref("AudioBufferSourceNode")}} comes with a <code><a href="/en-US/docs/Web/API/AudioBufferSourceNode/playbackRate">playbackRate</a></code> property. Let's expose that to our UI, so we can speed up and slow down our sample. We'll do that in the same sort of way as before:</p>
+
+<pre class="brush: html">&lt;label for="rate"&gt;Rate&lt;/label&gt;
+&lt;input name="rate" id="rate" type="range" min="0.1" max="2" value="1" step="0.1" /&gt;</pre>
+
+<pre class="brush: js">let playbackRate = 1;
+const rateControl = document.querySelector('#rate');
+rateControl.addEventListener('input', function() {
+ playbackRate = Number(this.value);
+}, false);</pre>
+
+<h3 id="The_final_playSample_function">The final playSample() function</h3>
+
+<p>We'll then add a line to update the <code>playbackRate</code> property to our <code>playSample()</code> function. The final version looks like this:</p>
+
+<pre class="brush: js">function playSample(audioContext, audioBuffer, time) {
+ const sampleSource = audioContext.createBufferSource();
+ sampleSource.buffer = audioBuffer;
+ sampleSource.playbackRate.value = playbackRate;
+ sampleSource.connect(audioContext.destination);
+ sampleSource.start(time);
+ return sampleSource;
+}</pre>
+
+<div class="note">
+<p><strong>Note</strong>: The sound file was <a href="http://soundbible.com/1573-DTMF-Tones.html">sourced from soundbible.com</a>.</p>
+</div>
+
+<h2 id="Playing_the_audio_in_time">Playing the audio in time</h2>
+
+<p>A common problem with digital audio applications is getting the sounds to play in time so that the beat remains consistent, and things do not slip out of time.</p>
+
+<p>We could schedule our voices to play within a <code>for</code> loop; however, the biggest problem with this is updating whilst it is playing, and we've already implemented UI controls to do so. Also, it would be really nice to consider an instrument-wide BPM control. The best way to get our voices to play on the beat is to create a scheduling system, whereby we look ahead at when the notes are going to play and push them into a queue. We can start them at a precise time with the <code>currentTime</code> property and also take into account any changes.</p>
+
+<div class="note">
+<p><strong>Note</strong>: This is a much stripped down version of <a href="https://www.html5rocks.com/en/tutorials/audio/scheduling/">Chris Wilson's A Tale Of Two Clocks</a> article, which goes into this method in much more detail. There's no point repeating it all here, but it's highly recommended to read this article and use this method. Much of the code here is taken from his <a href="https://github.com/cwilso/metronome/blob/master/js/metronome.js">metronome example</a>, which he references in the article.</p>
+</div>
+
+<p>Let's start by setting up our default BPM (beats per minute), which will also be user-controllable via — you guessed it — another range input.</p>
+
+<pre class="brush: js">let tempo = 60.0;
+const bpmControl = document.querySelector('#bpm');
+bpmControl.addEventListener('input', function() {
+ tempo = Number(this.value);
+}, false);</pre>
+
+<p>Then we'll create variables to define how far ahead we want to look, and how far ahead we want to schedule:</p>
+
+<pre class="brush: js">const lookahead = 25.0; // How frequently to call scheduling function (in milliseconds)
+const scheduleAheadTime = 0.1; // How far ahead to schedule audio (sec)</pre>
+
+<p>Let's create a function that moves the note forwards by one beat, and loops back to the first when it reaches the 4th (last) one:</p>
+
+<pre class="brush: js">let currentNote = 0;
+let nextNoteTime = 0.0; // when the next note is due.
+
+function nextNote() {
+ const secondsPerBeat = 60.0 / tempo;
+
+ nextNoteTime += secondsPerBeat; // Add beat length to last beat time
+
+ // Advance the beat number, wrap to zero
+ currentNote++;
+ if (currentNote === 4) {
+ currentNote = 0;
+ }
+}</pre>
+
+<p>We want to create a reference queue for the notes that are to be played, and the functionality to play them using the functions we've previously created:</p>
+
+<pre class="brush: js">const notesInQueue = [];
+
+function scheduleNote(beatNumber, time) {
+
+ // push the note on the queue, even if we're not playing.
+ notesInQueue.push({ note: beatNumber, time: time });
+
+ if (pads[0].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+ playSweep(time)
+ }
+ if (pads[1].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+ playPulse(time)
+ }
+ if (pads[2].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+ playNoise(time)
+ }
+ if (pads[3].querySelectorAll('button')[beatNumber].getAttribute('aria-checked') === 'true') {
+ playSample(audioCtx, dtmf, time);
+ }
+}</pre>
+
+<p>Here we look at the current time and compare it to the time for the next note; when the two match it will call the previous two functions.</p>
+
+<p>{{domxref("AudioContext")}} object instances have a <code><a href="/en-US/docs/Web/API/BaseAudioContext/currentTime">currentTime</a></code> property, which allows us to retrieve the number of seconds after we first created the context. This is what we shall use for timing within our step sequencer — It's extremely accurate, returning a float value accurate to about 15 decimal places.</p>
+
+<pre class="brush: js">function scheduler() {
+ // while there are notes that will need to play before the next interval, schedule them and advance the pointer.
+ while (nextNoteTime &lt; audioCtx.currentTime + scheduleAheadTime ) {
+ scheduleNote(currentNote, nextNoteTime);
+ nextNote();
+ }
+ timerID = window.setTimeout(scheduler, lookahead);
+}</pre>
+
+<p>We also need a draw function to update the UI, so we can see when the beat progresses.</p>
+
+<pre class="brush: js">let lastNoteDrawn = 3;
+
+function draw() {
+ let drawNote = lastNoteDrawn;
+ let currentTime = audioCtx.currentTime;
+
+ while (notesInQueue.length &amp;&amp; notesInQueue[0].time &lt; currentTime) {
+ drawNote = notesInQueue[0].note;
+ notesInQueue.splice(0,1); // remove note from queue
+ }
+
+ // We only need to draw if the note has moved.
+ if (lastNoteDrawn != drawNote) {
+ pads.forEach(function(el, i) {
+ el.children[lastNoteDrawn].style.borderColor = 'hsla(0, 0%, 10%, 1)';
+ el.children[drawNote].style.borderColor = 'hsla(49, 99%, 50%, 1)';
+ });
+
+ lastNoteDrawn = drawNote;
+ }
+ // set up to draw again
+ requestAnimationFrame(draw);
+}</pre>
+
+<h2 id="Putting_it_all_together">Putting it all together</h2>
+
+<p>Now all that's left to do is make sure we've loaded the sample before we are able to <em>play</em> the instrument. We'll add a loading screen that disappears when the file has been fetched and decoded, then we can allow the scheduler to start using the play button click event.</p>
+
+<pre class="brush: js">// when the sample has loaded allow play
+let loadingEl = document.querySelector('.loading');
+const playButton = document.querySelector('[data-playing]');
+let isPlaying = false;
+let dtmf; // will hold our decoded sample once loaded
+setupSample()
+ .then((sample) =&gt; {
+ loadingEl.style.display = 'none'; // remove loading screen
+
+ dtmf = sample; // to be used in our playSample function
+
+ playButton.addEventListener('click', function() {
+ isPlaying = !isPlaying;
+
+ if (isPlaying) { // start playing
+
+ // check if context is in suspended state (autoplay policy)
+ if (audioCtx.state === 'suspended') {
+ audioCtx.resume();
+ }
+
+ currentNote = 0;
+ nextNoteTime = audioCtx.currentTime;
+ scheduler(); // kick off scheduling
+ requestAnimationFrame(draw); // start the drawing loop.
+ this.dataset.playing = 'true';
+
+ } else {
+
+ window.clearTimeout(timerID);
+ this.dataset.playing = 'false';
+
+ }
+ })
+ });</pre>
+
+<h2 id="Summary">Summary</h2>
+
+<p>We've now got an instrument inside our browser! Keep playing and experimenting — you can expand on any of these techniques to create something much more elaborate.</p>
diff --git a/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png b/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png
new file mode 100644
index 0000000000..63de8cb0de
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/advanced_techniques/sequencer.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/audio-context_.png b/files/ko/web/api/web_audio_api/audio-context_.png
new file mode 100644
index 0000000000..36d0190052
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/audio-context_.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html b/files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html
index 571c15684e..b45db93b23 100644
--- a/files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html
+++ b/files/ko/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.html
@@ -1,5 +1,5 @@
---
-title: Basic concepts behind Web Audio API
+title: Web Audio API의 기본 개념
slug: Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API
tags:
- 가이드
@@ -12,110 +12,116 @@ tags:
translation_of: Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API
---
<div class="summary">
-<p><span class="seoSummary">Web Audio API의 기능이 어떻게 동작하는지에 대한 오디오 이론에 대해서 설명합니다. 마스터 사운드 엔지니어가 될 수 는 없지만, Web Audio API가 왜 그렇게 작동하는지에 대해 이해할 수 있는 충분한 배경 지식을 제공해서 개발중에 더 나은 결정을 내릴 수 있게합니다. </span></p>
+<p><span class="seoSummary">오디오가 어떻게 여러분의 앱을 통해서 전송(route)되는지를 설계하는 동안 여러분이 적절한 결정을 내리는 것을 돕기 위해, 이 문서는 Web Audio API의 기능이 어떻게 동작하는가를 뒷받침하는 얼마간의 오디오 이론을 설명합니다. 이 문서를 읽는다고 해서 여러분이 숙련된 사운드 엔지니어가 될 수는 없지만, 왜 Web Audio API가 이렇게 동작하는지를 이해하기에 충분한 배경지식을 줄 것입니다.</span></p>
</div>
-<h2 id="Audio_graphs">Audio graphs</h2>
+<h2 id="Audio_graphs">오디오 그래프</h2>
-<p>The Web Audio API involves handling audio operations inside an <strong>audio context</strong>, and has been designed to allow <strong>modular routing</strong>. Basic audio operations are performed with <strong>audio nodes</strong>, which are linked together to form an <strong>audio routing graph</strong>. Several sources — with different types of channel layout — are supported even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.</p>
+<p><a href="/ko/docs/Web/API/Web_Audio_API">Web Audio API</a>는 <a href="/ko/docs/Web/API/AudioContext">오디오 컨텍스트</a>(audio context) 내의 오디오 연산을 다루는 것을 포함하고, <strong>모듈러 라우팅</strong>(modular routing)을 허용하도록 설계되었습니다. 기본적인 오디오 연산은 <strong>오디오 노드</strong>(audio node)와 함께 수행되는데, 이는 <strong>오디오 라우팅 그래프</strong>를 형성하기 위해 함께 연결되어 있습니다. 다른 유형의 채널 레이아웃을 가진 몇몇의 소스(source)들은 심지어 하나의 컨텍스트 내에서 지원됩니다. 이 모듈식의(modular) 디자인은 역동적인 효과를 가진 복잡한 오디오 기능을 만드는 데 있어 유연함을 제공합니다.</p>
-<p>Audio nodes are linked via their inputs and outputs, forming a chain that starts with one or more sources, goes through one or more nodes, then ends up at a destination. Although, you don't have to provide a destination if you, say, just want to visualize some audio data. A simple, typical workflow for web audio would look something like this:</p>
+<p>오디오 노드는 입력과 출력을 통해 연결되어, 하나 이상의 소스에서 시작해 하나 이상의 노드를 거쳐 도착지(destination)에서 끝나는 체인(chain)을 형성합니다. 그러나, 예를 들어 단지 오디오 데이터를 시각화하기를 원한다면 도착지를 반드시 제공할 필요는 없습니다. 웹 오디오의 단순하고 일반적인 작업 흐름은 다음과 같습니다:</p>
<ol>
- <li>Create audio context.</li>
- <li>Inside the context, create sources — such as <code>&lt;audio&gt;</code>, oscillator, stream.</li>
- <li>Create effects nodes, such as reverb, biquad filter, panner, compressor.</li>
- <li>Choose final destination of audio, for example your system speakers.</li>
- <li>Connect the sources up to the effects, and the effects to the destination.</li>
+ <li>오디오 컨텍스트를 생성합니다.</li>
+ <li>컨텍스트 내에서, 다음과 같이 소스를 생성합니다 — {{HTMLElement("audio")}}, oscillator, 또는 stream.</li>
+ <li>효과 노드를 생성하는데, 예를 들자면 reverb, biquad filter, panner, 또는 compressor가 있습니다.</li>
+ <li>사용자의 컴퓨터 스피커와 같이, 오디오의 최종 도착지를 선택합니다.</li>
+ <li>오디오 소스에서 0개 이상의 효과를 거쳐 연결(connection)을 확립하고, 마지막에는 앞서 선택한 도착지에서 끝나도록 합니다. (아래의 간단한 스케치를 참고하세요.)</li>
</ol>
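+
+<p>다음은 이 작업 흐름을 코드로 그대로 옮긴 간단한 스케치입니다 (소스로는 oscillator를, 효과로는 gain 노드를 사용한다고 가정합니다):</p>
+
+<pre class="brush: js">// 1. 오디오 컨텍스트를 생성합니다
+const audioCtx = new AudioContext();
+
+// 2. 컨텍스트 내에서 소스를 생성합니다
+const source = audioCtx.createOscillator();
+
+// 3. 효과 노드를 생성합니다
+const gainNode = audioCtx.createGain();
+
+// 4와 5. 소스를 효과에, 효과를 선택한 도착지에 연결합니다
+source.connect(gainNode).connect(audioCtx.destination);
+source.start();</pre>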
-<p><img alt="A simple box diagram with an outer box labeled Audio context, and three inner boxes labeled Sources, Effects and Destination. The three inner boxes have arrow between them pointing from left to right, indicating the flow of audio information." src="https://mdn.mozillademos.org/files/12237/webaudioAPI_en.svg" style="display: block; height: 143px; margin: 0px auto; width: 643px;"></p>
+<div class="notecard note">
+<h4>채널 표기법</h4>
-<p>Each input or output is composed of several <strong>channels, </strong>which represent a specific audio layout. Any discrete channel structure is supported, including <em>mono</em>, <em>stereo</em>, <em>quad</em>, <em>5.1</em>, and so on.</p>
+<p>한 신호에서 사용 가능한 오디오 채널의 수는 종종 2.0이나 5.1과 같은 숫자 형식으로 표현됩니다. 이것은 <a href="https://en.wikipedia.org/wiki/Surround_sound#Channel_notation">채널 표기법</a>이라고 불립니다. 첫 번째 숫자는 신호가 포함하는 전체 주파수 범위(full frequency range) 오디오 채널의 수입니다. 마침표 뒤의 숫자는 저주파 효과(LFE) 출력을 위해 예약된 채널의 수를 나타내는데, 이 채널은 종종 <strong>서브 우퍼</strong>(subwoofer)라고 불립니다.</p>
+</div>
+
+<p><img alt="오디오 컨텍스트라고 써진 외부 상자와 소스, 효과, 목적지라고 써진 세 개의 내부 상자를 가진 하나의 간단한 도표. 세 개의 내부 상자는 좌에서 우를 향하는 화살표를 사이에 가지고 있는데, 이는 오디오 정보의 흐름을 나타냅니다." src="webaudioapi_en.svg" style="display: block; margin: 0px auto;"></p>
+
+<p>각각의 입력 또는 출력은 몇몇의 <strong>채널</strong>으로 구성되어 있는데, 이는 특정한 오디오 레이아웃을 나타냅니다. <em>모노</em>, <em>스테레오</em>, <em>quad</em>, <em>5.1</em> 등등을 포함하는, 어떠한 별개의 채널 구조든 지원됩니다.</p>
-<p><img alt="Show the ability of AudioNodes to connect via their inputs and outputs and the channels inside these inputs/outputs." src="https://mdn.mozillademos.org/files/14179/mdn.png" style="display: block; height: 360px; margin: 0px auto; width: 630px;"></p>
+<p><img alt="입력과 출력 그리고 이 입력/출력 내부의 채널을 통해 연결하는 AudioNode의 능력을 보여줍니다." src="mdn.png"></p>
-<p>Audio sources can come from a variety of places:</p>
+<p>오디오 소스는 다양한 방법으로 얻어질 수 있습니다:</p>
<ul>
- <li>Generated directly in JavaScript by an audio node (such as an oscillator).</li>
- <li>Created from raw PCM data (the audio context has methods to decode supported audio formats).</li>
- <li>Taken from HTML media elements (such as {{HTMLElement("video")}} or {{HTMLElement("audio")}}).</li>
- <li>Taken directly from a <a href="/en-US/docs/WebRTC" title="WebRTC">WebRTC</a> {{domxref("MediaStream")}} (such as a webcam or microphone).</li>
+ <li>소리는 JavaScript에서 (oscillator처럼) 오디오 노드에 의해 직접적으로 생성될 수 있습니다.</li>
+ <li>가공되지 않은(raw) PCM 데이터로부터 생성될 수 있습니다 (오디오 컨텍스트는 지원되는 오디오 포맷을 디코드하는 메서드를 가지고 있습니다).</li>
+ <li>({{HTMLElement("video")}} 또는 {{HTMLElement("audio")}}처럼) HTML 미디어 요소로부터 취해질 수 있습니다.</li>
+ <li>(웹캠이나 마이크처럼) <a href="/ko/docs/Web/API/WebRTC_API">WebRTC</a> {{domxref("MediaStream")}}로부터 직접적으로 취해질 수 있습니다.</li>
</ul>
-<h2 id="Audio_data_what's_in_a_sample">Audio data: what's in a sample</h2>
+<h2 id="Audio_data_whats_in_a_sample">오디오 데이터: 무엇이 샘플 속에 들어있는가</h2>
-<p>When an audio signal is processed, <strong>sampling</strong> means the conversion of a <a href="https://en.wikipedia.org/wiki/Continuous_signal" title="Continuous signal">continuous signal</a> to a <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Discrete_signal" title="Discrete signal">discrete signal</a>; or put another way, a continuous sound wave, such as a band playing live, is converted to a sequence of samples (a discrete-time signal) that allow a computer to handle the audio in distinct blocks.</p>
+<p>오디오 신호가 처리될 때, <strong>샘플링</strong>이란 <a href="https://en.wikipedia.org/wiki/Continuous_signal" title="Continuous signal">연속 신호</a>(continuous signal)를 <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Discrete_signal" title="Discrete signal">불연속 신호</a>(discrete signal)로 전환하는 것을 의미합니다. 달리 말하면, 라이브로 연주하고 있는 밴드의 소리와 같은 연속적인 음파를, 컴퓨터가 오디오를 구별되는 블록 단위로 다룰 수 있게 해 주는 일련의 샘플들(불연속 시간 신호)로 전환하는 것입니다.</p>
-<p>A lot more information can be found on the Wikipedia page <a href="https://en.wikipedia.org/wiki/Sampling_%28signal_processing%29">Sampling (signal processing)</a>.</p>
+<p>더 많은 정보는 위키피디아 문서 <a href="https://en.wikipedia.org/wiki/Sampling_%28signal_processing%29">샘플링 (신호 처리)</a>에서 찾을 수 있습니다.</p>
-<h2 id="Audio_buffers_frames_samples_and_channels">Audio buffers: frames, samples and channels</h2>
+<h2 id="Audio_buffers_frames_samples_and_channels">오디오 버퍼: 프레임, 샘플, 그리고 채널</h2>
-<p>An {{ domxref("AudioBuffer") }} takes as its parameters a number of channels (1 for mono, 2 for stereo, etc), a length, meaning the number of sample frames inside the buffer, and a sample rate, which is the number of sample frames played per second.</p>
+<p>{{ domxref("AudioBuffer") }}는 매개변수로서 채널의 수 (1은 모노, 2는 스테레오 등), 버퍼 내부의 샘플 프레임의 수를 의미하는 길이, 그리고 초당 재생되는 샘플 프레임의 수인 샘플 레이트를 취합니다.</p>
-<p>A sample is a single float32 value that represents the value of the audio stream at each specific point in time, in a specific channel (left or right, if in the case of stereo). A frame, or sample frame, is the set of all values for all channels that will play at a specific point in time: all the samples of all the channels that play at the same time (two for a stereo sound, six for 5.1, etc.)</p>
+<p>샘플은 특정한 채널(스테레오의 경우, 왼쪽 또는 오른쪽)에서, 각각의 특정한 시점에의 오디오 스트림의 값을 표현하는 단일의 float32 값입니다. 프레임 또는 샘플 프레임은, 특정한 시점에 재생될 모든 채널의 모든 값들의 집합입니다: 즉 같은 시간에 재생되는 모든 채널의 모든 샘플 (스테레오 사운드의 경우 2개, 5.1의 경우 6개 등)입니다.</p>
-<p>The sample rate is the number of those samples (or frames, since all samples of a frame play at the same time) that will play in one second, measured in Hz. The higher the sample rate, the better the sound quality.</p>
+<p>The sample rate is the number of those samples (or frames, since all samples of a frame play at the same time) that play in one second, measured in Hz. The higher the sample rate, the better the sound quality.</p>
-<p>Let's look at a Mono and a Stereo audio buffer, each is one second long, and playing at 44100Hz:</p>
+<p>Let's look at a mono and a stereo audio buffer, each one second long and playing at 44100Hz:</p>
<ul>
- <li>The Mono buffer will have 44100 samples, and 44100 frames. The <code>length</code> property will be 44100.</li>
- <li>The Stereo buffer will have 88200 samples, but still 44100 frames. The <code>length</code> property will still be 44100 since it's equal to the number of frames.</li>
+ <li>The mono buffer will have 44100 samples and 44100 frames. Its <code>length</code> property will be 44100.</li>
+ <li>The stereo buffer will have 88200 samples, but still 44100 frames. Its <code>length</code> property will still be 44100, since it equals the number of frames.</li>
</ul>
-<p><img alt="A diagram showing several frames in an audio buffer in a long line, each one containing two samples, as the buffer has two channels, it is stereo." src="https://mdn.mozillademos.org/files/14801/sampleframe-english.png" style="height: 150px; width: 853px;"></p>
+<p><img alt="긴 줄에서 오디오 버퍼의 몇몇 프레임을 보여주는 도표인데, 각각은 두 개의 샘플을 포함하고 있습니다. 버퍼가 두 개의 채널을 가지고 있으므로, 이것은 스테레오입니다." src="sampleframe-english.png"></p>
-<p>When a buffer plays, you will hear the left most sample frame, and then the one right next to it, etc. In the case of stereo, you will hear both channels at the same time. Sample frames are very useful, because they are independent of the number of channels, and represent time, in a useful way for doing precise audio manipulation.</p>
+<p>When a buffer plays, you will hear the left-most sample frame, then the one immediately next to it, and so on. In the case of stereo, you will hear both channels at the same time. Sample frames are very useful, because they are independent of the number of channels and represent time in a way that is convenient for precise audio manipulation.</p>
<div class="note">
-<p><strong>Note</strong>: To get a time in seconds from a frame count, simply divide the number of frames by the sample rate. To get a number of frames from a number of samples, simply divide by the channel count.</p>
+<p><strong>Note</strong>: To get a time in seconds from a frame count, divide the number of frames by the sample rate. To get a number of frames from a number of samples, divide by the channel count.</p>
</div>
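+<p>As a quick sketch of those two conversions, assuming a hypothetical <code>buffer</code> variable referencing an {{domxref("AudioBuffer")}}:</p>
+<pre class="brush: js">// time in seconds from a frame count
+var durationInSeconds = buffer.length / buffer.sampleRate;
+
+// frame count from a total sample count (samples across all channels)
+var totalSamples = buffer.length * buffer.numberOfChannels;
+var frameCount = totalSamples / buffer.numberOfChannels; // equals buffer.length again</pre>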
-<p>Here's a couple of simple trivial examples:</p>
+<p>Here are a couple of simple examples:</p>
<pre class="brush: js">var context = new AudioContext();
var buffer = context.createBuffer(2, 22050, 44100);</pre>
<div class="note">
-<p><strong>Note</strong>: In <a href="https://en.wikipedia.org/wiki/Digital_audio" title="Digital audio">digital audio</a>, <strong>44,100 <a href="https://en.wikipedia.org/wiki/Hertz" title="Hertz">Hz</a></strong> (alternately represented as <strong>44.1 kHz</strong>) is a common <a href="https://en.wikipedia.org/wiki/Sampling_frequency" title="Sampling frequency">sampling frequency</a>. Why 44.1kHz? <br>
+<p><strong>Note</strong>: In <a href="https://en.wikipedia.org/wiki/Digital_audio" title="Digital audio">digital audio</a>, <strong>44,100 <a href="https://en.wikipedia.org/wiki/Hertz">Hz</a></strong> (alternately written as <strong>44.1 kHz</strong>) is a common <a href="https://en.wikipedia.org/wiki/Sampling_frequency" title="Sampling frequency">sampling frequency</a>. Why 44.1kHz? <br>
<br>
- Firstly, because the <a href="https://en.wikipedia.org/wiki/Hearing_range" title="Hearing range">hearing range</a> of human ears is roughly 20 Hz to 20,000 Hz. Via the <a href="https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem" title="Nyquist–Shannon sampling theorem">Nyquist–Shannon sampling theorem</a>, the sampling frequency must be greater than twice the maximum frequency one wishes to reproduce. Therefore, the sampling rate has to be greater than 40 kHz.<br>
+ Firstly, because the <a href="https://en.wikipedia.org/wiki/Hearing_range" title="Hearing range">hearing range</a> of human ears is roughly 20 Hz to 20,000 Hz. By the <a href="https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem" title="Nyquist–Shannon sampling theorem">Nyquist–Shannon sampling theorem</a>, the sampling frequency must be greater than twice the maximum frequency you wish to reproduce. Therefore, the sampling rate has to be greater than 40 kHz.<br>
<br>
- Secondly, signals must be <a href="https://en.wikipedia.org/wiki/Low-pass_filter" title="Low-pass filter">low-pass filtered</a> before sampling, otherwise <a href="https://en.wikipedia.org/wiki/Aliasing" title="Aliasing">aliasing</a> occurs. While an ideal low-pass filter would perfectly pass frequencies below 20 kHz (without attenuating them) and perfectly cut off frequencies above 20 kHz, in practice a <a href="https://en.wikipedia.org/wiki/Transition_band" title="Transition band">transition band</a> is necessary, where frequencies are partly attenuated. The wider this transition band is, the easier and more economical it is to make an <a href="https://en.wikipedia.org/wiki/Anti-aliasing_filter" title="Anti-aliasing filter">anti-aliasing filter</a>. The 44.1 kHz sampling frequency allows for a 2.05 kHz transition band.</p>
+ Secondly, signals must be <a href="https://en.wikipedia.org/wiki/Low-pass_filter" title="Low-pass filter">low-pass filtered</a> before sampling, otherwise <a href="https://en.wikipedia.org/wiki/Aliasing">aliasing</a> occurs. While an ideal low-pass filter would perfectly pass frequencies below 20 kHz (without attenuating them) and perfectly cut off frequencies above 20 kHz, in practice a <a href="https://en.wikipedia.org/wiki/Transition_band" title="Transition band">transition band</a> is necessary, where frequencies are partly attenuated. The wider this transition band is, the easier and more economical it is to make an <a href="https://en.wikipedia.org/wiki/Anti-aliasing_filter" title="Anti-aliasing filter">anti-aliasing filter</a>. The 44.1 kHz sampling frequency allows for a 2.05 kHz transition band.</p>
</div>
-<p>If you use this call above, you will get a stereo buffer with two channels, that when played back on an AudioContext running at 44100Hz (very common, most normal sound cards run at this rate), will last for 0.5 seconds: 22050 frames/44100Hz = 0.5 seconds.</p>
+<p>If you use the call above, you will get a stereo buffer with two channels that, when played back on an AudioContext running at 44100Hz (very common; most normal sound cards run at this rate), will last for 0.5 seconds: 22050 frames / 44100Hz = 0.5 seconds.</p>
<pre class="brush: js">var context = new AudioContext();
var buffer = context.createBuffer(1, 22050, 22050);</pre>
-<p>If you use this call, you will get a mono buffer with just one channel), that when played back on an AudioContext running at 44100Hz, will be automatically <em>resampled</em> to 44100Hz (and therefore yield 44100 frames), and last for 1.0 second: 44100 frames/44100Hz = 1 second.</p>
+<p>If you use this call, you will get a mono buffer with just one channel that, when played back on an AudioContext running at 44100Hz, will be automatically <em>resampled</em> to 44100Hz (and therefore yield 44100 frames) and last for 1.0 second: 44100 frames / 44100Hz = 1 second.</p>
<div class="note">
-<p><strong>Note</strong>: audio resampling is very similar to image resizing. Say you've got a 16 x 16 image, but you want it to fill a 32x32 area. You resize (or resample) it. The result has less quality (it can be blurry or edgy, depending on the resizing algorithm), but it works, with the resized image taking up less space. Resampled audio is exactly the same: you save space, but in practice you will be unable to properly reproduce high frequency content, or treble sound.</p>
+<p><strong>Note</strong>: Audio resampling is very similar to image resizing. Say you have a 16 x 16 image but want it to fill a 32 x 32 area: you resize (or resample) it. The result has lower quality (it can be blurry or edgy, depending on the resizing algorithm), but it works, with the resized image taking up less space. Resampled audio is exactly the same: you save space, but in practice you will be unable to properly reproduce high-frequency content, or treble sound.</p>
</div>
-<h3 id="Planar_versus_interleaved_buffers">Planar versus interleaved buffers</h3>
+<h3 id="Planar_versus_interleaved_buffers">평면(planar) 대 인터리브(interleaved) 버퍼</h3>
-<p>The Web Audio API uses a planar buffer format. The left and right channels are stored like this:</p>
+<p>The Web Audio API uses a planar buffer format. The left and right channels are stored like this:</p>
-<pre>LLLLLLLLLLLLLLLLRRRRRRRRRRRRRRRR (for a buffer of 16 frames)</pre>
+<pre>LLLLLLLLLLLLLLLLRRRRRRRRRRRRRRRR (for a buffer of 16 frames)</pre>
-<p>This is very common in audio processing: it makes it easy to process each channel independently.</p>
+<p>This is very common in audio processing: it makes it easy to process each channel independently.</p>
-<p>The alternative is to use an interleaved buffer format:</p>
+<p>The alternative is to use an interleaved buffer format:</p>
-<pre>LRLRLRLRLRLRLRLRLRLRLRLRLRLRLRLR (for a buffer of 16 frames)</pre>
+<pre>LRLRLRLRLRLRLRLRLRLRLRLRLRLRLRLR (for a buffer of 16 frames)</pre>
-<p>This format is very common for storing and playing back audio without much processing, for example a decoded MP3 stream.<br>
+<p>This format is very common for storing and playing back audio without much processing, for example a decoded MP3 stream.<br>
<br>
- The Web Audio API exposes <strong>only</strong> planar buffers, because it's made for processing. It works with planar, but converts the audio to interleaved when it is sent to the sound card for playback. Conversely, when an MP3 is decoded, it starts off in interleaved format, but is converted to planar for processing.</p>
+ The Web Audio API exposes <strong>only</strong> planar buffers, because it's made for processing. It works with planar buffers, but converts the audio to interleaved when it is sent to the sound card for playback. Conversely, when an MP3 is decoded, it starts off in interleaved format, but is converted to planar for processing.</p>
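+<p>For illustration, a minimal sketch (assuming an existing <code>context</code>) that de-interleaves a stereo <code>LRLR…</code> <code>Float32Array</code> into the planar data held by an {{domxref("AudioBuffer")}} could look like this:</p>
+<pre class="brush: js">function deinterleaveStereo(interleaved, context) {
+  var frameCount = interleaved.length / 2;
+  var buffer = context.createBuffer(2, frameCount, context.sampleRate);
+  var left = buffer.getChannelData(0);
+  var right = buffer.getChannelData(1);
+  for (var i = 0; i &lt; frameCount; i++) {
+    left[i] = interleaved[2 * i];      // even indices hold L samples
+    right[i] = interleaved[2 * i + 1]; // odd indices hold R samples
+  }
+  return buffer;
+}</pre>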
-<h2 id="Audio_channels">Audio channels</h2>
+<h2 id="Audio_channels">오디오 채널</h2>
-<p>Different audio buffers contain different numbers of channels: from the more basic mono (only one channel) and stereo (left and right channels), to more complex sets like quad and 5.1, which have different sound samples contained in each channel, leading to a richer sound experience. The channels are usually represented by standard abbreviations detailed in the table below:</p>
+<p>Different audio buffers contain different numbers of channels: from basic mono (only one channel) and stereo (left and right channels) to more complex sets like quad and 5.1, which contain different sound samples in each channel, enabling a richer sound experience. The channels are usually represented by the standard abbreviations detailed in the table below:</p>
<table class="standard-table">
<tbody>
@@ -147,34 +153,34 @@ var buffer = context.createBuffer(1, 22050, 22050);</pre>
</tbody>
</table>
-<h3 id="Up-mixing_and_down-mixing">Up-mixing and down-mixing</h3>
+<h3 id="Up-mixing_and_down-mixing">업믹싱(up-mixing)과 다운믹싱(down-mixing)</h3>
-<p>When the number of channels doesn't match between an input and an output, up- or down-mixing happens according the following rules. This can be somewhat controlled by setting the {{domxref("AudioNode.channelInterpretation")}} property to <code>speakers</code> or <code>discrete</code>:</p>
+<p>When the number of channels doesn't match between an input and an output, up- or down-mixing happens according to the following rules. This can be somewhat controlled by setting the {{domxref("AudioNode.channelInterpretation")}} property to <code>speakers</code> or <code>discrete</code>:</p>
<table class="standard-table">
<thead>
<tr>
- <th scope="row">Interpretation</th>
- <th scope="col">Input channels</th>
- <th scope="col">Output channels</th>
- <th scope="col">Mixing rules</th>
+ <th scope="row">해석</th>
+ <th scope="col">입력 채널</th>
+ <th scope="col">출력 채널</th>
+ <th scope="col">믹싱 규칙</th>
</tr>
</thead>
<tbody>
<tr>
- <th colspan="1" rowspan="13" scope="row"><code>speakers</code></th>
+ <th rowspan="13" scope="row"><code>스피커</code></th>
<td><code>1</code> <em>(Mono)</em></td>
<td><code>2</code> <em>(Stereo)</em></td>
- <td><em>Up-mix from mono to stereo</em>.<br>
- The <code>M</code> input channel is used for both output channels (<code>L</code> and <code>R</code>).<br>
+ <td><em>Up-mix from mono to stereo</em>.<br>
+ The <code>M</code> input channel is used for both output channels (<code>L</code> and <code>R</code>).<br>
<code>output.L = input.M<br>
output.R = input.M</code></td>
</tr>
<tr>
<td><code>1</code> <em>(Mono)</em></td>
<td><code>4</code> <em>(Quad)</em></td>
- <td><em>Up-mix from mono to quad.</em><br>
- The <code>M</code> input channel is used for non-surround output channels (<code>L</code> and <code>R</code>). Surround output channels (<code>SL</code> and <code>SR</code>) are silent.<br>
+ <td><em>Up-mix from mono to quad</em>.<br>
+ The <code>M</code> input channel is used for the non-surround output channels (<code>L</code> and <code>R</code>). The surround output channels (<code>SL</code> and <code>SR</code>) are silent.<br>
<code>output.L = input.M<br>
output.R = input.M<br>
output.SL = 0<br>
@@ -183,8 +189,8 @@ var buffer = context.createBuffer(1, 22050, 22050);</pre>
<tr>
<td><code>1</code> <em>(Mono)</em></td>
<td><code>6</code> <em>(5.1)</em></td>
- <td><em>Up-mix from mono to 5.1.</em><br>
- The <code>M</code> input channel is used for the center output channel (<code>C</code>). All the others (<code>L</code>, <code>R</code>, <code>LFE</code>, <code>SL</code>, and <code>SR</code>) are silent.<br>
+ <td><em>Up-mix from mono to 5.1</em>.<br>
+ The <code>M</code> input channel is used for the center output channel (<code>C</code>). All the others (<code>L</code>, <code>R</code>, <code>LFE</code>, <code>SL</code>, and <code>SR</code>) are silent.<br>
<code>output.L = 0<br>
output.R = 0</code><br>
<code>output.C = input.M<br>
@@ -195,15 +201,15 @@ var buffer = context.createBuffer(1, 22050, 22050);</pre>
<tr>
<td><code>2</code> <em>(Stereo)</em></td>
<td><code>1</code> <em>(Mono)</em></td>
- <td><em>Down-mix from stereo to mono</em>.<br>
- Both input channels (<code>L</code> and <code>R</code>) are equally combined to produce the unique output channel (<code>M</code>).<br>
+ <td><em>Down-mix from stereo to mono</em>.<br>
+ Both input channels (<code>L</code> and <code>R</code>) are equally combined to produce the unique output channel (<code>M</code>).<br>
<code>output.M = 0.5 * (input.L + input.R)</code></td>
</tr>
<tr>
<td><code>2</code> <em>(Stereo)</em></td>
<td><code>4</code> <em>(Quad)</em></td>
- <td><em>Up-mix from stereo to quad.</em><br>
- The <code>L</code> and <code>R </code>input channels are used for their non-surround respective output channels (<code>L</code> and <code>R</code>). Surround output channels (<code>SL</code> and <code>SR</code>) are silent.<br>
+ <td><em>Up-mix from stereo to quad</em>.<br>
+ The <code>L</code> and <code>R</code> input channels are used for their respective non-surround output channels (<code>L</code> and <code>R</code>). The surround output channels (<code>SL</code> and <code>SR</code>) are silent.<br>
<code>output.L = input.L<br>
output.R = input.R<br>
output.SL = 0<br>
@@ -212,8 +218,8 @@ var buffer = context.createBuffer(1, 22050, 22050);</pre>
<tr>
<td><code>2</code> <em>(Stereo)</em></td>
<td><code>6</code> <em>(5.1)</em></td>
- <td><em>Up-mix from stereo to 5.1.</em><br>
- The <code>L</code> and <code>R </code>input channels are used for their non-surround respective output channels (<code>L</code> and <code>R</code>). Surround output channels (<code>SL</code> and <code>SR</code>), as well as the center (<code>C</code>) and subwoofer (<code>LFE</code>) channels, are left silent.<br>
+ <td><em>Up-mix from stereo to 5.1</em>.<br>
+ The <code>L</code> and <code>R</code> input channels are used for their respective non-surround output channels (<code>L</code> and <code>R</code>). The surround output channels (<code>SL</code> and <code>SR</code>), as well as the center (<code>C</code>) and subwoofer (<code>LFE</code>) channels, are left silent.<br>
<code>output.L = input.L<br>
output.R = input.R<br>
output.C = 0<br>
@@ -224,23 +230,23 @@ var buffer = context.createBuffer(1, 22050, 22050);</pre>
<tr>
<td><code>4</code> <em>(Quad)</em></td>
<td><code>1</code> <em>(Mono)</em></td>
- <td><em>Down-mix from quad to mono</em>.<br>
- All four input channels (<code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code>) are equally combined to produce the unique output channel (<code>M</code>).<br>
+ <td><em>Down-mix from quad to mono</em>.<br>
+ All four input channels (<code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code>) are equally combined to produce the unique output channel (<code>M</code>).<br>
<code>output.M = 0.25 * (input.L + input.R + </code><code>input.SL + input.SR</code><code>)</code></td>
</tr>
<tr>
<td><code>4</code> <em>(Quad)</em></td>
<td><code>2</code> <em>(Stereo)</em></td>
- <td><em>Down-mix from quad to stereo</em>.<br>
- Both left input channels (<code>L</code> and <code>SL</code>) are equally combined to produce the unique left output channel (<code>L</code>). And similarly, both right input channels (<code>R</code> and <code>SR</code>) are equally combined to produce the unique right output channel (<code>R</code>).<br>
+ <td><em>Down-mix from quad to stereo</em>.<br>
+ Both left input channels (<code>L</code> and <code>SL</code>) are equally combined to produce the unique left output channel (<code>L</code>). Similarly, both right input channels (<code>R</code> and <code>SR</code>) are equally combined to produce the unique right output channel (<code>R</code>).<br>
<code>output.L = 0.5 * (input.L + input.SL</code><code>)</code><br>
<code>output.R = 0.5 * (input.R + input.SR</code><code>)</code></td>
</tr>
<tr>
<td><code>4</code> <em>(Quad)</em></td>
<td><code>6</code> <em>(5.1)</em></td>
- <td><em>Up-mix from quad to 5.1.</em><br>
- The <code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code> input channels are used for their respective output channels (<code>L</code> and <code>R</code>). Center (<code>C</code>) and subwoofer (<code>LFE</code>) channels are left silent.<br>
+ <td><em>Up-mix from quad to 5.1</em>.<br>
+ The <code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code> input channels are used for their respective output channels (<code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code>). The center (<code>C</code>) and subwoofer (<code>LFE</code>) channels are left silent.<br>
<code>output.L = input.L<br>
output.R = input.R<br>
output.C = 0<br>
@@ -251,104 +257,99 @@ var buffer = context.createBuffer(1, 22050, 22050);</pre>
<tr>
<td><code>6</code> <em>(5.1)</em></td>
<td><code>1</code> <em>(Mono)</em></td>
- <td><em>Down-mix from 5.1 to mono.</em><br>
- The left (<code>L</code> and <code>SL</code>), right (<code>R</code> and <code>SR</code>) and central channels are all mixed together. The surround channels are slightly attenuated and the regular lateral channels are power-compensated to make them count as a single channel by multiplying by <code>√2/2</code>. The subwoofer (<code>LFE</code>) channel is lost.<br>
+ <td><em>Down-mix from 5.1 to mono</em>.<br>
+ The left (<code>L</code> and <code>SL</code>), right (<code>R</code> and <code>SR</code>), and center channels are all mixed together. The surround channels are slightly attenuated, and the regular lateral channels are power-compensated to make them count as a single channel by multiplying by <code>√2/2</code>. The subwoofer (<code>LFE</code>) channel is lost.<br>
<code>output.M = 0.7071 * (input.L + input.R) + input.C + 0.5 * (input.SL + input.SR)</code></td>
</tr>
<tr>
<td><code>6</code> <em>(5.1)</em></td>
<td><code>2</code> <em>(Stereo)</em></td>
- <td><em>Down-mix from 5.1 to stereo.</em><br>
- The central channel (<code>C</code>) is summed with each lateral surround channel (<code>SL</code> or <code>SR</code>) and mixed to each lateral channel. As it is mixed down to two channels, it is mixed at a lower power: in each case it is multiplied by <code>√2/2</code>. The subwoofer (<code>LFE</code>) channel is lost.<br>
+ <td><em>Down-mix from 5.1 to stereo</em>.<br>
+ The center channel (<code>C</code>) is summed with each lateral surround channel (<code>SL</code> or <code>SR</code>) and mixed into each lateral channel. As it is mixed down to two channels, it is mixed at a lower power: in each case it is multiplied by <code>√2/2</code>. The subwoofer (<code>LFE</code>) channel is lost.<br>
<code>output.L = input.L + 0.7071 * (input.C + input.SL)<br>
output.R = input.R </code><code>+ 0.7071 * (input.C + input.SR)</code></td>
</tr>
<tr>
<td><code>6</code> <em>(5.1)</em></td>
<td><code>4</code> <em>(Quad)</em></td>
- <td><em>Down-mix from 5.1 to quad.</em><br>
- The central (<code>C</code>) is mixed with the lateral non-surround channels (<code>L</code> and <code>R</code>). As it is mixed down to two channels, it is mixed at a lower power: in each case it is multiplied by <code>√2/2</code>. The surround channels are passed unchanged. The subwoofer (<code>LFE</code>) channel is lost.<br>
+ <td><em>Down-mix from 5.1 to quad</em>.<br>
+ The center channel (<code>C</code>) is mixed with the lateral non-surround channels (<code>L</code> and <code>R</code>). As it is mixed down to two channels, it is mixed at a lower power: in each case it is multiplied by <code>√2/2</code>. The surround channels are passed through unchanged. The subwoofer (<code>LFE</code>) channel is lost.<br>
<code>output.L = input.L + 0.7071 * input.C<br>
output.R = input.R + 0.7071 * input.C<br>
- <code>output.SL = input.SL<br>
- output.SR = input.SR</code></code></td>
+ output.SL = input.SL<br>
+ output.SR = input.SR</code></td>
</tr>
<tr>
- <td colspan="2" rowspan="1">Other, non-standard layouts</td>
- <td>Non-standard channel layouts are handled as if <code>channelInterpretation</code> is set to <code>discrete</code>.<br>
- The specification explicitly allows the future definition of new speaker layouts. This fallback is therefore not future proof as the behavior of the browsers for a specific number of channels may change in the future.</td>
+ <td colspan="2">기타 비표준 레이아웃</td>
+ <td>비표준 채널 레이아웃은 <code>channelInterpretation</code>이 <code>discrete</code>로 설정된 것처럼 다뤄집니다.<br>
+ 사양(specification)은 분명히 새로운 스피커 레이아웃의 미래의 정의를 허용합니다. 특정한 수의 채널에 대한 브라우저의 행동이 미래에 달라질지도 모르므로 이 대비책은 그러므로 미래에도 사용할 수 있는 (future proof) 것이 아닙니다.</td>
</tr>
<tr>
- <th colspan="1" rowspan="2" scope="row"><code>discrete</code></th>
- <td rowspan="1">any (<code>x</code>)</td>
- <td rowspan="1">any (<code>y</code>) where <code>x&lt;y</code></td>
- <td><em>Up-mix discrete channels.</em><br>
- Fill each output channel with its input counterpart, that is the input channel with the same index. Channels with no corresponding input channels are left silent.</td>
+ <th rowspan="2" scope="row"><code>discrete</code></th>
+ <td>any (<code>x</code>)</td>
+ <td>any (<code>y</code>) where <code>x&lt;y</code></td>
+ <td><em>Up-mix discrete channels</em>.<br>
+ Fill each output channel with its input counterpart, that is, the input channel with the same index. Channels with no corresponding input channel are left silent.</td>
</tr>
<tr>
- <td rowspan="1">any (<code>x</code>)</td>
- <td rowspan="1">any (<code>y</code>) where <code>x&gt;y</code></td>
- <td><em>Down-mix discrete channels.</em><br>
- Fill each output channel with its input counterpart, that is the input channel with the same index. Input channels with no corresponding output channels are dropped.</td>
+ <td>any (<code>x</code>)</td>
+ <td>any (<code>y</code>) where <code>x&gt;y</code></td>
+ <td><em>Down-mix discrete channels</em>.<br>
+ Fill each output channel with its input counterpart, that is, the input channel with the same index. Input channels with no corresponding output channel are dropped.</td>
</tr>
</tbody>
</table>
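+<p>For example, the stereo-to-mono rule above can be written out by hand against a buffer's channel data. This is only a sketch of the arithmetic (assuming a hypothetical <code>stereoBuffer</code>), since the API normally applies these rules for you based on {{domxref("AudioNode.channelInterpretation")}}:</p>
+<pre class="brush: js">var left = stereoBuffer.getChannelData(0);
+var right = stereoBuffer.getChannelData(1);
+var mono = new Float32Array(stereoBuffer.length);
+for (var i = 0; i &lt; stereoBuffer.length; i++) {
+  mono[i] = 0.5 * (left[i] + right[i]); // output.M = 0.5 * (input.L + input.R)
+}</pre>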
-<h2 id="Visualizations">Visualizations</h2>
+<h2 id="Visualizations">시각화</h2>
-<p>In general, audio visualizations are achieved by accessing an ouput of audio data over time, usually gain or frequency data, and then using a graphical technology to turn that into a visual output, such as a graph. The Web Audio API has an {{domxref("AnalyserNode")}} available that doesn't alter the audio signal passing through it. Instead it outputs audio data that can be passed to a visualization technology such as {{htmlelement("canvas")}}.</p>
+<p>In general, audio visualizations are achieved by accessing an output of audio data over time, usually gain or frequency data, then using a graphical technology to turn that into a visual output, such as a graph. The Web Audio API has an {{domxref("AnalyserNode")}} available that doesn't alter the audio signal passing through it. Instead, it outputs audio data that can be passed to a visualization technology such as {{htmlelement("canvas")}}.</p>
-<p><img alt="Without modifying the audio stream, the node allows to get the frequency and time-domain data associated to it, using a FFT." src="https://mdn.mozillademos.org/files/12521/fttaudiodata_en.svg" style="height: 206px; width: 693px;"></p>
+<p><img alt="오디오 스트림을 수정하는 일 없이, FFT를 사용하여, 노드가 주파수와 그것에 관련된 시간 영역(time-domain) 데이터를 얻을 수 있게 허용합니다." src="fttaudiodata_en.svg"></p>
-<p>You can grab data using the following methods:</p>
+<p>You can grab data using the following methods:</p>
<dl>
<dt>{{domxref("AnalyserNode.getFloatFrequencyData()")}}</dt>
- <dd>Copies the current frequency data into a {{domxref("Float32Array")}} array passed into it.</dd>
-</dl>
-
-<dl>
+ <dd>Copies the current frequency data into a {{jsxref("Float32Array")}} array passed into it.</dd>
<dt>{{domxref("AnalyserNode.getByteFrequencyData()")}}</dt>
- <dd>Copies the current frequency data into a {{domxref("Uint8Array")}} (unsigned byte array) passed into it.</dd>
-</dl>
-
-<dl>
+ <dd>Copies the current frequency data into a {{jsxref("Uint8Array")}} (unsigned byte array) passed into it.</dd>
<dt>{{domxref("AnalyserNode.getFloatTimeDomainData()")}}</dt>
- <dd>Copies the current waveform, or time-domain, data into a {{domxref("Float32Array")}} array passed into it.</dd>
+ <dd>Copies the current waveform, or time-domain, data into a {{jsxref("Float32Array")}} array passed into it.</dd>
<dt>{{domxref("AnalyserNode.getByteTimeDomainData()")}}</dt>
- <dd>Copies the current waveform, or time-domain, data into a {{domxref("Uint8Array")}} (unsigned byte array) passed into it.</dd>
+ <dd>Copies the current waveform, or time-domain, data into a {{jsxref("Uint8Array")}} (unsigned byte array) passed into it.</dd>
</dl>
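+<p>As a brief sketch (assuming an existing, already-connected <code>analyser</code> node), pulling waveform data on each animation frame might look like this:</p>
+<pre class="brush: js">var dataArray = new Uint8Array(analyser.frequencyBinCount);
+function draw() {
+  requestAnimationFrame(draw);
+  analyser.getByteTimeDomainData(dataArray); // fills the array with waveform data
+  // ...then render dataArray onto a &lt;canvas&gt; here
+}
+draw();</pre>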
<div class="note">
-<p><strong>Note</strong>: For more information, see our <a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a> article.</p>
+<p><strong>Note</strong>: For more information, see our <a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a> article.</p>
</div>
-<h2 id="Spatialisations">Spatialisations</h2>
+<h2 id="Spatialisations">공간화</h2>
<div>
-<p>An audio spatialisation (handled by the {{domxref("PannerNode")}} and {{domxref("AudioListener")}} nodes in the Web Audio API) allows us to model the position and behavior of an audio signal at a certain point in space, and the listener hearing that audio.</p>
+<p>Audio spatialisation (handled by the {{domxref("PannerNode")}} and {{domxref("AudioListener")}} nodes in the Web Audio API) allows us to model the position and behavior of an audio signal at a certain point in space, and the listener hearing that audio.</p>
-<p>The panner's position is described with right-hand Cartesian coordinates; its movement using a velocity vector, necessary for creating Doppler effects, and its directionality using a directionality cone.The cone can be very large, e.g. for omnidirectional sources.</p>
+<p>The panner's position is described with right-hand Cartesian coordinates; its movement uses a velocity vector, necessary for creating Doppler effects, and its directionality uses a directionality cone. The cone can be very large, e.g. for omnidirectional sources.</p>
</div>
-<p><img alt="The PannerNode brings a spatial position and velocity and a directionality for a given signal." src="https://mdn.mozillademos.org/files/12511/pannernode_en.svg" style="height: 340px; width: 799px;"></p>
+<p><img alt="PannerNode는 공간 위치와 속도와 주어진 신호에 대한 방향성을 제공합니다." src="pannernode_en.svg"></p>
<div>
-<p>The listener's position is described using right-hand Cartesian coordinates; its movement using a velocity vector and the direction the listener's head is pointing using two direction vectors: up and front. These respectively define the direction of the top of the listener's head, and the direction the listener's nose is pointing, and are at right angles to one another.</p>
+<p>The listener's position is described using right-hand Cartesian coordinates; its movement uses a velocity vector, and the direction the listener's head is pointing uses two direction vectors: up and front. These respectively define the direction of the top of the listener's head and the direction the listener's nose is pointing, and are at right angles to one another.</p>
</div>
-<p><img alt="The PannerNode brings a spatial position and velocity and a directionality for a given signal." src="https://mdn.mozillademos.org/files/12513/listener.svg" style="height: 249px; width: 720px;"></p>
+<p><img alt="AudioListener의 위와 앞의 벡터 위치를 보고 있는데, 위와 앞 벡터는 서로 90°에 있습니다." src="webaudiolistenerreduced.png"></p>
<div class="note">
-<p><strong>Note</strong>: For more information, see our <a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio spatialization basics</a> article.</p>
+<p><strong>Note</strong>: For more information, see our <a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio spatialization basics</a> article.</p>
</div>
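+<p>As a rough sketch (assuming an existing <code>context</code>), placing a panner and the listener could look like this; <code>setPosition()</code> and <code>setOrientation()</code> are the older coordinate setters, with newer {{domxref("AudioParam")}}-based properties also available:</p>
+<pre class="brush: js">var panner = context.createPanner();
+panner.setPosition(1, 0, 0); // source one unit to the listener's right
+
+context.listener.setPosition(0, 0, 0);              // listener at the origin
+context.listener.setOrientation(0, 0, -1, 0, 1, 0); // front and up vectors</pre>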
-<h2 id="Fan-in_and_Fan-out">Fan-in and Fan-out</h2>
+<h2 id="Fan-in_and_Fan-out">팬 인(fan-in)과 팬 아웃(fan-out)</h2>
-<p>In audio terms, <strong>fan-in</strong> describes the process by which a {{domxref("ChannelMergerNode")}} takes a series of mono input sources and outputs a single multi-channel signal:</p>
+<p>In audio terms, <strong>fan-in</strong> describes the process by which a {{domxref("ChannelMergerNode")}} takes a series of mono input sources and outputs a single multi-channel signal:</p>
-<p><img alt="" src="https://mdn.mozillademos.org/files/12517/fanin.svg" style="height: 258px; width: 325px;"></p>
+<p><img alt="" src="fanin.svg"></p>
-<p><strong>Fan-out</strong> describes the opposite process, whereby a {{domxref("ChannelSplitterNode")}} takes a multi-channel input source and outputs multiple mono output signals:</p>
+<p><strong>Fan-out</strong> describes the opposite process, whereby a {{domxref("ChannelSplitterNode")}} takes a multi-channel input source and outputs multiple mono output signals:</p>
-<p><img alt="" src="https://mdn.mozillademos.org/files/12515/fanout.svg" style="height: 258px; width: 325px;"></p>
+<p><img alt="" src="fanout.svg"></p>
diff --git a/files/ko/web/api/web_audio_api/best_practices/index.html b/files/ko/web/api/web_audio_api/best_practices/index.html
new file mode 100644
index 0000000000..784b3f1f3c
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/best_practices/index.html
@@ -0,0 +1,97 @@
+---
+title: Web Audio API best practices
+slug: Web/API/Web_Audio_API/Best_practices
+tags:
+ - Audio
+ - Best practices
+ - Guide
+ - Web Audio API
+---
+<div>{{apiref("Web Audio API")}}</div>
+
+<p class="summary">There's no strict right or wrong way when writing creative code. As long as you consider security, performance, and accessibility, you can adapt to your own style. In this article, we'll share a number of <em>best practices</em> — guidelines, tips, and tricks for working with the Web Audio API.</p>
+
+<h2 id="Loading_soundsfiles">Loading sounds/files</h2>
+
+<p>There are four main ways to load sound with the Web Audio API, and it can be a little confusing as to which one you should use.</p>
+
+<p>When working with files, you are looking at either grabbing the file from an {{domxref("HTMLMediaElement")}} (i.e. an {{htmlelement("audio")}} or {{htmlelement("video")}} element), or fetching the file and decoding it into a buffer. Both are legitimate ways of working; however, it's more common to use the former when you are working with full-length tracks, and the latter when working with shorter, more sample-like tracks.</p>
+
+<p>Media elements have streaming support out of the box. The audio will start playing when the browser determines it can load the rest of the file before playing finishes. You can see an example of how to use this with the Web Audio API in the <a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API tutorial</a>.</p>
+
+<p>You will, however, have more control if you use a buffer node. You have to request the file and wait for it to load (<a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques#Dial_up_%E2%80%94_loading_a_sound_sample">this section of our advanced article</a> shows a good way to do it), but then you have access to the data directly, which means more precision and finer-grained manipulation.</p>
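+
+<p>A minimal sketch of the fetch-and-decode approach (the file URL here is a placeholder) might look like this:</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+
+fetch('sample.mp3') // placeholder URL
+  .then((response) =&gt; response.arrayBuffer())
+  .then((arrayBuffer) =&gt; audioCtx.decodeAudioData(arrayBuffer))
+  .then((audioBuffer) =&gt; {
+    const source = audioCtx.createBufferSource();
+    source.buffer = audioBuffer;
+    source.connect(audioCtx.destination);
+    source.start();
+  });</pre>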
+
+<p>If you're looking to work with audio from the user's camera or microphone you can access it via the <a href="/en-US/docs/Web/API/Media_Streams_API">Media Stream API</a> and the {{domxref("MediaStreamAudioSourceNode")}} interface. This is good for WebRTC and situations where you might want to record or possibly analyze audio.</p>
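+
+<p>For example, capturing microphone input could be sketched like this (error handling omitted):</p>
+
+<pre class="brush: js">navigator.mediaDevices.getUserMedia({ audio: true })
+  .then((stream) =&gt; {
+    const audioCtx = new AudioContext();
+    const source = audioCtx.createMediaStreamSource(stream);
+    source.connect(audioCtx.destination); // or into an analyser, a recorder, etc.
+  });</pre>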
+
+<p>The last way is to generate your own sound, which can be done either with an {{domxref("OscillatorNode")}} or by creating a buffer and populating it with your own data. Check out the <a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques">tutorial here for creating your own instrument</a> for information on creating sounds with oscillators and buffers.</p>
+
+<h2 id="Cross_browser_legacy_support">Cross browser &amp; legacy support</h2>
+
+<p>The Web Audio API specification is constantly evolving and like most things on the web, there are some issues with it working consistently across browsers. Here we'll look at options for getting around cross-browser problems.</p>
+
+<p>There's the <a href="https://github.com/chrisguttandin/standardized-audio-context"><code>standardized-audio-context</code></a> npm package, which creates API functionality consistently across browsers, filling holes as they are found. It's constantly in development and endeavours to keep up with the current specification.</p>
+
+<p>There is also the option of libraries, of which there are a few depending on your use case. For a good all-rounder, <a href="https://howlerjs.com/">howler.js</a> is a good choice. It has cross-browser support and provides a useful subset of functionality. Although it doesn't harness the full gamut of filters and other effects the Web Audio API comes with, you can do most of what you'd want to do.</p>
+
+<p>If you are looking for sound creation or a more instrument-based option, <a href="https://tonejs.github.io/">tone.js</a> is a great library. It provides advanced scheduling capabilities, synths, and effects, and intuitive musical abstractions built on top of the Web Audio API.</p>
+
+<p><a href="https://github.com/bbc/r-audio">R-audio</a>, from the <a href="https://medium.com/bbc-design-engineering/r-audio-declarative-reactive-and-flexible-web-audio-graphs-in-react-102c44a1c69c">BBC's Research &amp; Development department</a>, is a library of React components aiming to provide a "more intuitive, declarative interface to Web Audio". If you're used to writing JSX it might be worth looking at.</p>
+
+<h2 id="Autoplay_policy">Autoplay policy</h2>
+
+<p>Browsers have started to implement an autoplay policy, which in general can be summed up as:</p>
+
+<blockquote>
+<p>"Create or resume context from inside a user gesture".</p>
+</blockquote>
+
+<p>But what does that mean in practice? A user gesture has been interpreted to mean a user-initiated event, normally a <code>click</code> event. Browser vendors decided that Web Audio contexts should not be allowed to automatically play audio; they should instead be started by a user. This is because autoplaying audio can be really annoying and obtrusive. But how do we handle this?</p>
+
+<p>When you create an audio context (either offline or online) it is created with a <code>state</code>, which can be <code>suspended</code>, <code>running</code>, or <code>closed</code>.</p>
+
+<p>When working with an {{domxref("AudioContext")}}, if you create the audio context from inside a <code>click</code> event the state should automatically be set to <code>running</code>. Here is a simple example of creating the context from inside a <code>click</code> event:</p>
+
+<pre class="brush: js">const button = document.querySelector('button');
+button.addEventListener('click', function() {
+ const audioCtx = new AudioContext();
+}, false);
+</pre>
+
+<p>If, however, you create the context outside of a user gesture, its state will be set to <code>suspended</code> and it will need to be started after user interaction. We can use the same click event example here: test the state of the context and, if it is suspended, start it using the <a href="/en-US/docs/Web/API/BaseAudioContext/resume"><code>resume()</code></a> method.</p>
+
+<pre class="brush: js">const audioCtx = new AudioContext();
+const button = document.querySelector('button');
+
+button.addEventListener('click', function() {
+ // check if context is in suspended state (autoplay policy)
+ if (audioCtx.state === 'suspended') {
+ audioCtx.resume();
+ }
+}, false);
+</pre>
+
+<p>You might instead be working with an {{domxref("OfflineAudioContext")}}, in which case you can resume the suspended audio context with the <a href="/en-US/docs/Web/API/OfflineAudioContext/startRendering"><code>startRendering()</code></a> method.</p>
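+
+<p>As a brief sketch (with the node graph itself omitted, and the constructor arguments of 2 channels and 40 seconds at 44100Hz chosen only as example values):</p>
+
+<pre class="brush: js">const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);
+
+// ...build your node graph against offlineCtx here...
+
+offlineCtx.startRendering().then((renderedBuffer) =&gt; {
+  // renderedBuffer is an AudioBuffer holding the rendered audio
+});</pre>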
+
+<h2 id="User_control">User control</h2>
+
+<p>If your website or application contains sound, you should give the user control over it; otherwise, again, it will become annoying. This can be achieved with play/stop and volume/mute controls. The <a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a> tutorial goes over how to do this.</p>
+
+<p>If you have buttons that switch audio on and off, using the ARIA <a href="/en-US/docs/Web/Accessibility/ARIA/Roles/Switch_role"><code>role="switch"</code></a> attribute on them is a good option for signalling to assistive technology what the button's exact purpose is, and therefore making the app more accessible. There's a <a href="https://codepen.io/Wilto/pen/ZoGoQm?editors=1100">demo of how to use it here</a>.</p>
+
+<p>Because you work with a lot of changing values within the Web Audio API and will want to provide users with control over these, the <a href="/en-US/docs/Web/HTML/Element/input/range"><code>range input</code></a> is often a good choice of control: you can set minimum and maximum values, as well as increments, with the <a href="/en-US/docs/Web/HTML/Element/input#attr-step"><code>step</code></a> attribute.</p>
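+
+<p>Wiring such a slider to an audio parameter is then a one-line event handler; a sketch, assuming an existing <code>gainNode</code> and a range input with the id <code>volume</code>:</p>
+
+<pre class="brush: js">const volumeControl = document.querySelector('#volume');
+
+volumeControl.addEventListener('input', () =&gt; {
+  gainNode.gain.value = volumeControl.value;
+}, false);</pre>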
+
+<h2 id="Setting_AudioParam_values">Setting AudioParam values</h2>
+
+<p>There are two ways to manipulate the values of {{domxref("AudioNode")}} parameters, which are themselves objects of the {{domxref("AudioParam")}} interface. The first is to set the value directly via the property. So, for instance, if we want to change the <code>gain</code> value of a {{domxref("GainNode")}} we would do so thus:</p>
+
+<pre class="brush: js">gainNode.gain.value = 0.5;
+</pre>
+
+<p>This will set our volume to half. However, if you're using any of the <code>AudioParam</code>'s defined methods to set these values, they will take precedence over the above property setting. If, for example, you want the <code>gain</code> value to be raised to 1 in 2 seconds' time, you can do this:</p>
+
+<pre class="brush: js">gainNode.gain.setValueAtTime(1, audioCtx.currentTime + 2);
+</pre>
+
+<p>It will override the direct property setting above (as it should), even if the property assignment comes later in your code.</p>
+
+<p>Bearing this in mind, if your website or application requires timing and scheduling, it's best to stick with the {{domxref("AudioParam")}} methods for setting values. If you're sure it doesn't, setting it with the <code>value</code> property is fine.</p>
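+
+<p>The same scheduling methods also let you ramp a value smoothly instead of jumping to it; a sketch, reusing the <code>gainNode</code> and <code>audioCtx</code> from above:</p>
+
+<pre class="brush: js">// fade the gain from its current value up to 1 over the next 2 seconds
+gainNode.gain.setValueAtTime(gainNode.gain.value, audioCtx.currentTime);
+gainNode.gain.linearRampToValueAtTime(1, audioCtx.currentTime + 2);</pre>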
diff --git a/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg
new file mode 100644
index 0000000000..0490cddbe5
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/customsourcenode-as-splitter.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="7 7 580 346" width="580pt" height="346pt"><defs><marker orient="auto" overflow="visible" markerUnits="strokeWidth" id="a" viewBox="-1 -3 7 6" markerWidth="7" markerHeight="6" color="#000"><path d="M4.8 0 0-1.8v3.6z" fill="currentColor" stroke="currentColor"/></marker></defs><g fill="none"><path fill="#867fff" d="M207 99h180v45H207z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M207 99h180v45H207z"/><text transform="translate(212 113)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x="9.388" y="14" textLength="151.225">ConstantSourceNode</tspan></text><path fill="#867fff" d="M9 216h180v45H9z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M9 216h180v45H9z"/><text transform="translate(14 230)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x="51.395" y="14" textLength="67.211">GainNode</tspan></text><path fill="#867fff" d="M405 216h180v45H405z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M405 216h180v45H405z"/><text transform="translate(410 230)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x="51.395" y="14" textLength="67.211">GainNode</tspan></text><path fill="#867fff" d="M207 216h180v45H207z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M207 216h180v45H207z"/><text transform="translate(212 230)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x="17.789" y="14" textLength="134.422">StereoPannerNode</tspan></text><path d="M252 144v27H99v32.1M297 144v59.1m45-59.1v27h153v32.1" marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2"/><text transform="translate(55.876 192.447)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x=".197" y="14" textLength="33.605">gain</tspan></text><text transform="translate(258.64 192.854)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x=".398" y="14" textLength="25.204">pan</tspan></text><text transform="translate(504.37 193.347)" fill="#000"><tspan font-family="Courier" font-size="14" font-weight="500" x=".197" y="14" textLength="33.605">gain</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M297 54v32.1"/><path d="M243 9h144l-36 45H207z" fill="#fff"/><path d="M243 9h144l-36 45H207z" stroke="#000" stroke-linecap="round" stroke-linejoin="round"/><text transform="translate(248 22)" fill="#000"><tspan font-family="Arial" font-size="16" font-weight="500" x="17.734" y="15" textLength="52.93">input = </tspan><tspan font-family="Courier" font-size="16" font-style="italic" font-weight="500" x="70.664" y="15" textLength="9.602">N</tspan></text><path d="M243 306h144l-36 45H207z" fill="#fff"/><path d="M243 306h144l-36 45H207z" stroke="#000" stroke-linecap="round" stroke-linejoin="round"/><text transform="translate(248 319)" fill="#000"><tspan font-family="Arial" font-size="16" font-weight="500" x="12.84" y="15" textLength="62.719">output = </tspan><tspan font-family="Courier" font-size="16" font-style="italic" font-weight="500" x="75.559" y="15" textLength="9.602">N</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="m296.5 261 .357 32.101"/><path d="M441 306h144l-36 45H405z" fill="#fff"/><path d="M441 306h144l-36 45H405z" stroke="#000" stroke-linecap="round" 
stroke-linejoin="round"/><text transform="translate(446 319)" fill="#000"><tspan font-family="Arial" font-size="16" font-weight="500" x="12.84" y="15" textLength="62.719">output = </tspan><tspan font-family="Courier" font-size="16" font-style="italic" font-weight="500" x="75.559" y="15" textLength="9.602">N</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M495 261v32.1"/><path d="M45 306h144l-36 45H9z" fill="#fff"/><path d="M45 306h144l-36 45H9z" stroke="#000" stroke-linecap="round" stroke-linejoin="round"/><text transform="translate(50 319)" fill="#000"><tspan font-family="Arial" font-size="16" font-weight="500" x="12.84" y="15" textLength="62.719">output = </tspan><tspan font-family="Courier" font-size="16" font-style="italic" font-weight="500" x="75.559" y="15" textLength="9.602">N</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M99 261v32.1"/></g></svg> \ No newline at end of file
diff --git a/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html
new file mode 100644
index 0000000000..5fdd188213
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/controlling_multiple_parameters_with_constantsourcenode/index.html
@@ -0,0 +1,284 @@
+---
+title: Controlling multiple parameters with ConstantSourceNode
+slug: Web/API/Web_Audio_API/Controlling_multiple_parameters_with_ConstantSourceNode
+tags:
+ - Audio
+ - Example
+ - Guide
+ - Intermediate
+ - Media
+ - Tutorial
+ - Web Audio
+ - Web Audio API
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p><span class="seoSummary">This article demonstrates how to use a {{domxref("ConstantSourceNode")}} to link multiple parameters together so they share the same value, which can be changed by setting the value of the {{domxref("ConstantSourceNode.offset")}} parameter.</span></p>
+
+<p>There may be times when you want multiple audio parameters to be linked so they share the same value even while being changed in some way. For example, perhaps you have a set of oscillators, and two of them need to share the same, configurable volume, or you have a filter that's been applied to certain inputs but not to all of them. You could use a loop and change the value of each affected {{domxref("AudioParam")}} one at a time, but there are two drawbacks to doing it that way: first, that's extra code that, as you're about to see, you don't have to write; and second, that loop uses valuable CPU time on your thread (likely the main thread), and there's a way to offload all that work to the audio rendering thread, which is optimized for this kind of work and may run at a more appropriate priority level than your code.</p>
+
+<p>The solution is simple, and it involves using an audio node type which, at first glance, doesn't look all that useful: {{domxref("ConstantSourceNode")}}.</p>
+
+<h2 id="The_technique">The technique</h2>
+
+<p>This is actually a really easy way to do something that sounds like it might be hard to do. You need to create a {{domxref("ConstantSourceNode")}} and connect it to all of the {{domxref("AudioParam")}}s whose values should be linked to always match each other. Since <code>ConstantSourceNode</code>'s {{domxref("ConstantSourceNode.offset", "offset")}} value is sent straight through to all of its outputs, it acts as a splitter for that value, sending it to each connected parameter.</p>
+
+<p>The diagram below shows how this works; an input value, <code>N</code>, is set as the value of the {{domxref("ConstantSourceNode.offset")}} property. The <code>ConstantSourceNode</code> can have as many outputs as necessary; in this case, we've connected it to three nodes: two {{domxref("GainNode")}}s and a {{domxref("StereoPannerNode")}}. So <code>N</code> becomes the value of the specified parameter ({{domxref("GainNode.gain", "gain")}} for the {{domxref("GainNode")}}s and <code>pan</code> for the {{domxref("StereoPannerNode")}}).</p>
+
+<p><img alt="Dagram in SVG showing how ConstantSourceNode can be used to split an input parameter to share it with multiple nodes." src="customsourcenode-as-splitter.svg"></p>
+
+<p>As a result, every time you change <code>N</code> (the value of the input {{domxref("AudioParam")}}), the values of the two <code>GainNode</code>s' <code>gain</code> properties and the value of the <code>StereoPannerNode</code>'s <code>pan</code> property are all set to <code>N</code> as well.</p>
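+
+<p>Boiled down to its essentials (assuming an existing <code>context</code> plus hypothetical <code>gainNode</code> and <code>pannerNode</code> targets), the technique looks like this:</p>
+
+<pre class="brush: js">const constantNode = context.createConstantSource();
+constantNode.connect(gainNode.gain);  // link the gain parameter
+constantNode.connect(pannerNode.pan); // link the pan parameter
+constantNode.start();
+
+constantNode.offset.value = 0.25; // both parameters are now 0.25</pre>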
+
+<h2 id="Example">Example</h2>
+
+<p>Let's take a look at this technique in action. In this simple example, we create three {{domxref("OscillatorNode")}}s. Two of them have adjustable gain, controlled using a shared input control. The other oscillator has a fixed volume.</p>
+
+<h3 id="HTML">HTML</h3>
+
+<p>The HTML content for this example is primarily a button to toggle the oscillator tones on and off and an {{HTMLElement("input")}} element of type <code>range</code> to control the volume of two of the three oscillators.</p>
+
+<pre class="brush: html">&lt;div class="controls"&gt;
+ &lt;div class="left"&gt;
+ &lt;div id="playButton" class="button"&gt;
+ ▶️
+ &lt;/div&gt;
+ &lt;/div&gt;
+ &lt;div class="right"&gt;
+ &lt;span&gt;Volume: &lt;/span&gt;
+ &lt;input type="range" min="0.0" max="1.0" step="0.01"
+ value="0.8" name="volume" id="volumeControl"&gt;
+ &lt;/div&gt;
+&lt;/div&gt;
+
+&lt;p&gt;Use the button above to start and stop the tones, and the volume control to
+change the volume of the notes E and G in the chord.&lt;/p&gt;</pre>
+
+<div class="hidden">
+<h3 id="CSS">CSS</h3>
+
+<pre class="brush: css">.controls {
+ width: 400px;
+ position: relative;
+ vertical-align: middle;
+ height: 44px;
+}
+
+.button {
+ font-size: 32px;
+ cursor: pointer;
+ user-select: none;
+ -moz-user-select: none;
+ -webkit-user-select: none;
+ -ms-user-select: none;
+ -o-user-select: none;
+}
+
+.right {
+ width: 50%;
+ font: 14px "Open Sans", "Lucida Grande", "Arial", sans-serif;
+ position: absolute;
+ right: 0;
+ display: table-cell;
+ vertical-align: middle;
+}
+
+.right span {
+ vertical-align: middle;
+}
+
+.right input {
+ vertical-align: baseline;
+}
+
+.left {
+ width: 50%;
+ position: absolute;
+ left: 0;
+ display: table-cell;
+ vertical-align: middle;
+}
+
+.left span, .left input {
+ vertical-align: middle;
+}</pre>
+</div>
+
+<h3 id="JavaScript">JavaScript</h3>
+
+<p>Now let's take a look at the JavaScript code, a piece at a time.</p>
+
+<h4 id="Setting_up">Setting up</h4>
+
+<p>Let's start by looking at the global variable initialization.</p>
+
+<pre class="brush: js">let context = null;
+
+let playButton = null;
+let volumeControl = null;
+
+let oscNode1 = null;
+let oscNode2 = null;
+let oscNode3 = null;
+let constantNode = null;
+let gainNode1 = null;
+let gainNode2 = null;
+let gainNode3 = null;
+
+let playing = false;</pre>
+
+<p>These variables are:</p>
+
+<dl>
+ <dt><code>context</code></dt>
+ <dd>The {{domxref("AudioContext")}} in which all the audio nodes live.</dd>
+ <dt><code>playButton</code> and <code>volumeControl</code></dt>
+ <dd>References to the play button and volume control elements.</dd>
+ <dt><code>oscNode1</code>, <code>oscNode2</code>, and <code>oscNode3</code></dt>
+ <dd>The three {{domxref("OscillatorNode")}}s used to generate the chord.</dd>
+ <dt><code>gainNode1</code>, <code>gainNode2</code>, and <code>gainNode3</code></dt>
+ <dd>The three {{domxref("GainNode")}} instances which provide the volume levels for each of the three oscillators. <code>gainNode2</code> and <code>gainNode3</code> will be linked together to have the same, adjustable, value using the {{domxref("ConstantSourceNode")}}.</dd>
+ <dt><code>constantNode</code></dt>
+ <dd>The {{domxref("ConstantSourceNode")}} used to control the values of <code>gainNode2</code> and <code>gainNode3</code> together.</dd>
+ <dt><code>playing</code></dt>
+ <dd>A {{jsxref("Boolean")}} that we'll use to keep track of whether or not we're currently playing the tones.</dd>
+</dl>
+
+<p>Now let's look at the <code>setup()</code> function, which is our handler for the window's {{event("load")}} event; it handles all the initialization tasks that require the DOM to be in place.</p>
+
+<pre class="brush: js">function setup() {
+ context = new (window.AudioContext || window.webkitAudioContext)();
+
+ playButton = document.querySelector("#playButton");
+ volumeControl = document.querySelector("#volumeControl");
+
+ playButton.addEventListener("click", togglePlay, false);
+ volumeControl.addEventListener("input", changeVolume, false);
+
+ gainNode1 = context.createGain();
+ gainNode1.gain.value = 0.5;
+
+ gainNode2 = context.createGain();
+ gainNode3 = context.createGain();
+ gainNode2.gain.value = gainNode1.gain.value;
+ gainNode3.gain.value = gainNode1.gain.value;
+ volumeControl.value = gainNode1.gain.value;
+
+ constantNode = context.createConstantSource();
+ constantNode.connect(gainNode2.gain);
+ constantNode.connect(gainNode3.gain);
+ constantNode.start();
+
+ gainNode1.connect(context.destination);
+ gainNode2.connect(context.destination);
+ gainNode3.connect(context.destination);
+}
+
+window.addEventListener("load", setup, false);
+</pre>
+
+<p>First, we get access to the window's {{domxref("AudioContext")}}, stashing the reference in <code>context</code>. Then we get references to the control widgets, setting <code>playButton</code> to reference the play button and <code>volumeControl</code> to reference the slider control that the user will use to adjust the gain on the linked pair of oscillators.</p>
+
+<p>Then we assign a handler for the play button's {{event("click")}} event (see {{anch("Toggling the oscillators on and off")}} for more on the <code>togglePlay()</code> method), and for the volume slider's {{event("input")}} event (see {{anch("Controlling the linked oscillators")}} to see the very short <code>changeVolume()</code> method).</p>
+
+<p>Next, the {{domxref("GainNode")}} <code>gainNode1</code> is created to handle the volume for the non-linked oscillator (<code>oscNode1</code>). We set that gain to 0.5. We also create <code>gainNode2</code> and <code>gainNode3</code>, setting their values to match <code>gainNode1</code>, then set the value of the volume slider to the same value, so it is synchronized with the gain level it controls.</p>
+
+<p>Once all the gain nodes are created, we create the {{domxref("ConstantSourceNode")}}, <code>constantNode</code>. We connect its output to the <code>gain</code> {{domxref("AudioParam")}} on both <code>gainNode2</code> and <code>gainNode3</code>, and we start the constant node running by calling its {{domxref("AudioScheduledSourceNode/start", "start()")}} method; now it's sending the value 0.5 to the two gain nodes' values, and any change to {{domxref("ConstantSourceNode.offset", "constantNode.offset")}} will automatically set the gain of both <code>gainNode2</code> and <code>gainNode3</code> (affecting their audio inputs as expected).</p>
+
+<p>Finally, we connect all the gain nodes to the {{domxref("AudioContext")}}'s {{domxref("BaseAudioContext/destination", "destination")}}, so that any sound delivered to the gain nodes will reach the output, whether that output be speakers, headphones, a recording stream, or any other destination type.</p>
+
+<p>After setting the window's {{event("load")}} event handler to be the <code>setup()</code> function, the stage is set. Let's see how the action plays out.</p>
+
+<h4 id="Toggling_the_oscillators_on_and_off">Toggling the oscillators on and off</h4>
+
+<p>Because {{domxref("OscillatorNode")}} doesn't support the notion of being in a paused state, we have to simulate it by terminating the oscillators and starting them again when the play button is clicked again to toggle them back on. Let's look at the code.</p>
+
+<pre class="brush: js">function togglePlay(event) {
+ if (playing) {
+ playButton.textContent = "▶️";
+ stopOscillators();
+ } else {
+ playButton.textContent = "⏸";
+ startOscillators();
+ }
+}</pre>
+
+<p>If the <code>playing</code> variable indicates we're already playing the oscillators, we change the <code>playButton</code>'s content to be the Unicode character "right-pointing triangle" (▶️) and call <code>stopOscillators()</code> to shut down the oscillators. See {{anch("Stopping the oscillators")}} below for that code.</p>
+
+<p>If <code>playing</code> is false, indicating that we're currently paused, we change the play button's content to be the Unicode character "pause symbol" (⏸) and call <code>startOscillators()</code> to start the oscillators playing their tones. That code is covered under {{anch("Starting the oscillators")}} below.</p>
+
+<h4 id="Controlling_the_linked_oscillators">Controlling the linked oscillators</h4>
+
+<p>The <code>changeVolume()</code> function—the event handler for the slider control for the gain on the linked oscillator pair—looks like this:</p>
+
+<pre class="brush: js">function changeVolume(event) {
+ constantNode.offset.value = volumeControl.value;
+}</pre>
+
+<p>That simple function controls the gain on both nodes. All we have to do is set the value of the {{domxref("ConstantSourceNode")}}'s {{domxref("ConstantSourceNode.offset", "offset")}} parameter. That value becomes the node's constant output value, which is fed into all of its outputs, which are, as set above, <code>gainNode2</code> and <code>gainNode3</code>.</p>
+
+<p>While this is an extremely simple example, imagine having a 32-oscillator synthesizer with multiple linked parameters in play across a number of patched nodes. Being able to reduce the number of operations needed to adjust them all will prove invaluable for both code size and performance.</p>
+
+<h4 id="Starting_the_oscillators">Starting the oscillators</h4>
+
+<p>When the user clicks the play/pause toggle button while the oscillators aren't playing, the <code>startOscillators()</code> function gets called.</p>
+
+<pre class="brush: js">function startOscillators() {
+ oscNode1 = context.createOscillator();
+ oscNode1.type = "sine";
+ oscNode1.frequency.value = 261.625565300598634; // middle C
+ oscNode1.connect(gainNode1);
+
+ oscNode2 = context.createOscillator();
+ oscNode2.type = "sine";
+ oscNode2.frequency.value = 329.627556912869929; // E
+ oscNode2.connect(gainNode2);
+
+ oscNode3 = context.createOscillator();
+ oscNode3.type = "sine";
+ oscNode3.frequency.value = 391.995435981749294; // G
+ oscNode3.connect(gainNode3);
+
+ oscNode1.start();
+ oscNode2.start();
+ oscNode3.start();
+
+ playing = true;
+}</pre>
+
+<p>Each of the three oscillators is set up the same way:</p>
+
+<ol>
+ <li>Create the {{domxref("OscillatorNode")}} by calling {{domxref("BaseAudioContext.createOscillator")}}.</li>
+ <li>Set the oscillator's type to <code>"sine"</code> to use a sine wave as the audio waveform.</li>
+ <li>Set the oscillator's frequency to the desired value; in this case, <code>oscNode1</code> is set to a middle C, while <code>oscNode2</code> and <code>oscNode3</code> round out the chord by playing the E and G notes.</li>
+ <li>Connect the new oscillator to the corresponding gain node.</li>
+</ol>
+
+<p>Once all three oscillators have been created, they're started by calling each one's {{domxref("AudioScheduledSourceNode.start", "start()")}} method in turn, and <code>playing</code> is set to <code>true</code> to track that the tones are playing.</p>
+
+<h4 id="Stopping_the_oscillators">Stopping the oscillators</h4>
+
+<p>Stopping the oscillators when the user toggles the play state to pause the tones is as simple as stopping each node.</p>
+
+<pre class="brush: js">function stopOscillators() {
+ oscNode1.stop();
+ oscNode2.stop();
+ oscNode3.stop();
+ playing = false;
+}</pre>
+
+<p>Each node is stopped by calling its {{domxref("AudioScheduledSourceNode.stop", "OscillatorNode.stop()")}} method, then <code>playing</code> is set to <code>false</code>.</p>
+
+<h3 id="Result">Result</h3>
+
+<p>{{ EmbedLiveSample('Example', 600, 200) }}</p>
+
+<h2 id="See_also">See also</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Simple_synth">Simple synth keyboard</a> (example)</li>
+ <li>{{domxref("OscillatorNode")}}</li>
+ <li>{{domxref("ConstantSourceNode")}}</li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/index.html b/files/ko/web/api/web_audio_api/index.html
index a6f2a443d1..1ccd2526b3 100644
--- a/files/ko/web/api/web_audio_api/index.html
+++ b/files/ko/web/api/web_audio_api/index.html
@@ -3,11 +3,11 @@ title: Web Audio API
slug: Web/API/Web_Audio_API
translation_of: Web/API/Web_Audio_API
---
-<div>
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
 <p>The Web Audio API provides a powerful and versatile system for controlling audio on the web, allowing you to choose audio sources, add effects to audio, create audio visualizations, and apply spatial effects (such as panning).</p>
-</div>
-<h2 id="Web_audio의_개념과_사용법">Web audio concepts and usage</h2>
+<h2 id="Web_audio_concepts_and_usage">Web audio concepts and usage</h2>
 <p>The Web Audio API involves handling audio operations inside an <strong>audio context</strong>, and has been designed to allow <strong>modular routing</strong>. Basic audio operations are performed with <strong>audio nodes</strong>, which are linked together to form an <strong>audio routing graph</strong>. Several sources, with different types of channel layouts, are supported even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.</p>
@@ -18,24 +18,24 @@ translation_of: Web/API/Web_Audio_API
 <p>A simple, typical workflow for web audio looks like this (a minimal code sketch follows the list):</p>
<ol>
- <li>Create the audio context.</li>
- <li>Inside the context, create sources (e.g. &lt;audio&gt;, an oscillator, a stream).</li>
- <li>Create effect nodes (e.g. reverb, biquad filter, panner, compressor).</li>
- <li>Choose the final destination of the audio (e.g. the system speakers).</li>
- <li>Connect the sources to the effects, and the effects to the destination.</li>
+ <li>Create the audio context.</li>
+ <li>Inside the context, create sources (e.g. &lt;audio&gt;, an oscillator, a stream).</li>
+ <li>Create effect nodes (e.g. reverb, biquad filter, panner, compressor).</li>
+ <li>Choose the final destination of the audio (e.g. the system speakers).</li>
+ <li>Connect the sources to the effects, and the effects to the destination.</li>
</ol>
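+
+<p>A minimal sketch of that workflow in code; the choice of an oscillator source and a gain effect here is just illustrative:</p>
+
+<pre class="brush: js">var audioCtx = new AudioContext();         // 1. create the audio context
+var source = audioCtx.createOscillator();  // 2. create a source inside it
+var gainNode = audioCtx.createGain();      // 3. create an effect node
+gainNode.gain.value = 0.5;
+source.connect(gainNode);                  // 5. connect the source to the effect...
+gainNode.connect(audioCtx.destination);    // 4./5. ...and the effect to the destination
+source.start();</pre>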
-<p><img alt="A simple box diagram with an outer box labeled Audio context, and three inner boxes labeled Sources, Effects and Destination. The three inner boxes have arrow between them pointing from left to right, indicating the flow of audio information." src="https://mdn.mozillademos.org/files/12241/webaudioAPI_en.svg" style="display: block; height: 143px; margin: 0px auto; width: 643px;"></p>
+<p><img alt="오디오 컨텍스트라고 쓰여진 외부 박스와, 소스, 이펙트, 목적지라고 쓰여진 세 개의 내부 박스를 가진 간단한 박스 다이어그램. 세 개의 내부 박스는 사이에 좌에서 우를 가리키는 화살표를 가지고 있는데, 이는 오디오 정보의 흐름을 나타냅니다." src="audio-context_.png"></p>
<p>높은 정확도와 적은 지연시간을 가진 타이밍 계산 덕분에, 개발자는 높은 샘플 레이트에서도 특정 샘플을 대상으로 이벤트에 정확하게 응답하는 코드를 작성할 수 있습니다. 따라서 드럼 머신이나 시퀀서 등의 어플리케이션은 충분히 구현 가능합니다.</p>
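+
+<p>As a quick, illustrative sketch of that scheduling (reusing the <code>audioCtx</code> from the sketch above):</p>
+
+<pre class="brush: js">// Schedule two short beeps half a second apart, against the
+// context's own high-precision clock
+var t0 = audioCtx.currentTime;
+[0, 0.5].forEach(function(offset) {
+  var osc = audioCtx.createOscillator();
+  osc.connect(audioCtx.destination);
+  osc.start(t0 + offset);
+  osc.stop(t0 + offset + 0.1);
+});</pre>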
 <p>The Web Audio API also allows us to control how audio is <em>spatialized</em>. Using a system based on a <em>source-listener model</em>, it allows control of the <em>panning model</em> and deals with <em>distance-induced attenuation</em> or <em>doppler shift</em> induced by a moving source (or moving listener).</p>
<div class="note">
-<p>You can read more about the theory of the Web Audio API in the article <a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a>.</p>
+<p>You can read more about the theory of the Web Audio API in the article <a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a>.</p>
</div>
-<h2 id="Web_Audio_API_타겟_사용자층">Web Audio API 타겟 사용자층</h2>
+<h2 id="Web_Audio_API_target_audience">Web Audio API 타겟 사용자층</h2>
<p>오디오나 음악 용어에 익숙하지 않은 사람은 Web Audio API가 막막하게 느껴질 수 있습니다. 또한 Web Audio API가 굉장히 다양한 기능을 제공하는 만큼 개발자로서는 시작하기 어렵게 느껴질 수 있습니다.</p>
@@ -47,74 +47,80 @@ translation_of: Web/API/Web_Audio_API
 <p>Writing code is a bit like playing a card game: you learn the rules, you play, and where the rules still don't make sense you study them again and play another hand. Likewise, if what this document and the first tutorial explain isn't enough, we recommend the <a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques">advanced tutorial</a>, which supplements the first tutorial while explaining how to build a step sequencer using a number of these techniques.</p>
-<p>Beyond that, the sidebar on this page offers reference material covering every feature of the API, along with a variety of tutorials.</p>
+<p>Beyond that, the sidebar on this page offers reference material covering every feature of the API, along with a variety of tutorials.</p>
 <p>If music is more familiar to you than programming, if you're well versed in music theory and want to build instruments, you can start right in with the advanced tutorial and begin making things; the tutorial covers placing notes, designing custom oscillators (such as low-frequency oscillators) and envelopes, and more, so you can read it while consulting the reference material in the sidebar.</p>
 <p>If you're not familiar with programming at all, you may want to work through a JavaScript beginners' tutorial first and then come back here; few resources are as good for that as Mozilla's <a href="/en-US/docs/Learn/JavaScript">JavaScript basics</a>.</p>
-<h2 id="Web_Audio_API_Interfaces">Web Audio API Interfaces</h2>
+<h2 id="Web_Audio_API_Interfaces">Web Audio API 인터페이스</h2>
<p>Web Audio API는 다양한 인터페이스와 연관 이벤트를 가지고 있으며, 이는 9가지의 기능적 범주로 나뉩니다.</p>
-<h3 id="일반_오디오_그래프_정의">일반 오디오 그래프 정의</h3>
+<h3 id="General_audio_graph_definition">일반 오디오 그래프 정의</h3>
<p>Web Audio API 사용범위 내에서 오디오 그래프를 형성하는 일반적인 컨테이너와 정의입니다.</p>
<dl>
- <dt>{{domxref("AudioContext")}}</dt>
- <dd><strong><code>AudioContext</code></strong> 인터페이스는 오디오 모듈이 서로 연결되어 구성된 오디오 프로세싱 그래프를 표현하며, 각각의 그래프는 {{domxref("AudioNode")}}로 표현됩니다. <code>AudioContext</code>는 자신이 가지고 있는 노드의 생성과 오디오 프로세싱 혹은 디코딩의 실행을 제어합니다. 어떤 작업이든 시작하기 전에 <code>AudioContext</code>를 생성해야 합니다. 모든 작업은 컨텍스트 내에서 이루어집니다.</dd>
- <dt>{{domxref("AudioNode")}}</dt>
- <dd><strong><code>AudioNode</code></strong><strong> </strong>인터페이스는 오디오 소스({{HTMLElement("audio")}}나 {{HTMLElement("video")}}엘리먼트), 오디오 목적지, 중간 처리 모듈({{domxref("BiquadFilterNode")}}이나 {{domxref("GainNode")}})과 같은 오디오 처리 모듈을 나타냅니다.</dd>
- <dt>{{domxref("AudioParam")}}</dt>
- <dd><strong><code>AudioParam</code></strong> 인터페이스는 {{domxref("AudioNode")}}중 하나와 같은 오디오 관련 파라미터를 나타냅니다. 이는 특정 값 또는 값 변경으로 세팅되거나, 특정 시간에 발생하고 특정 패턴을 따르도록 스케쥴링할 수 있습니다.</dd>
- <dt>The {{event("ended")}} event</dt>
- <dd>
- <p><strong><code>ended</code></strong> 이벤트는 미디어의 끝에 도달하여 재생이 정지되면 호출됩니다.</p>
- </dd>
+ <dt>{{domxref("AudioContext")}}</dt>
+ <dd><strong><code>AudioContext</code></strong> 인터페이스는 오디오 모듈이 서로 연결되어 구성된 오디오 프로세싱 그래프를 표현하며, 각각의 그래프는 {{domxref("AudioNode")}}로 표현됩니다. <code>AudioContext</code>는 자신이 가지고 있는 노드의 생성과 오디오 프로세싱 혹은 디코딩의 실행을 제어합니다. 어떤 작업이든 시작하기 전에 <code>AudioContext</code>를 생성해야 합니다. 모든 작업은 컨텍스트 내에서 이루어집니다.</dd>
+ <dt>{{domxref("AudioNode")}}</dt>
+ <dd><strong><code>AudioNode</code></strong><strong> </strong>인터페이스는 오디오 소스({{HTMLElement("audio")}}나 {{HTMLElement("video")}} 요소), 오디오 목적지, 중간 처리 모듈({{domxref("BiquadFilterNode")}}이나 {{domxref("GainNode")}})과 같은 오디오 처리 모듈을 나타냅니다.</dd>
+ <dt>{{domxref("AudioParam")}}</dt>
+ <dd><strong><code>AudioParam</code></strong> 인터페이스는 {{domxref("AudioNode")}}중 하나와 같은 오디오 관련 파라미터를 나타냅니다. 이는 특정 값 또는 값 변경으로 세팅되거나, 특정 시간에 발생하고 특정 패턴을 따르도록 스케쥴링할 수 있습니다.</dd>
+ <dt>{{domxref("AudioParamMap")}}</dt>
+ <dd>{{domxref("AudioParam")}} 인터페이스 그룹에 maplike 인터페이스를 제공하는데, 이는 <code>forEach()</code>, <code>get()</code>, <code>has()</code>, <code>keys()</code>, <code>values()</code> 메서드와 <code>size</code> 속성이 제공된다는 것을 의미합니다.</dd>
+ <dt>{{domxref("BaseAudioContext")}}</dt>
+ <dd><strong><code>BaseAudioContext</code></strong> 인터페이스는 온라인과 오프라인 오디오 프로세싱 그래프에 대한 기본 정의로서 동작하는데, 이는 각각 {{domxref("AudioContext")}} 와 {{domxref("OfflineAudioContext")}}로 대표됩니다. <code>BaseAudioContext</code>는 직접 쓰여질 수 없습니다 — 이 두 가지 상속되는 인터페이스 중 하나를 통해 이것의 기능을 사용할 수 있습니다.</dd>
+ <dt>The {{event("ended")}} event</dt>
+ <dd><p><strong><code>ended</code></strong> 이벤트는 미디어의 끝에 도달하여 재생이 정지되면 호출됩니다.</p></dd>
</dl>
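+
+<p>For instance, the scheduling described for {{domxref("AudioParam")}} might be used like this (a sketch, assuming the <code>audioCtx</code> and <code>gainNode</code> from the earlier workflow sketch):</p>
+
+<pre class="brush: js">// Fade in: ramp the gain from 0 to 1 over two seconds
+var now = audioCtx.currentTime;
+gainNode.gain.setValueAtTime(0, now);
+gainNode.gain.linearRampToValueAtTime(1, now + 2);</pre>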
-<h3 id="오디오_소스_정의하기">오디오 소스 정의하기</h3>
+<h3 id="Defining_audio_sources">오디오 소스 정의하기</h3>
<p>Web Audio API에서 사용하기 위한 오디오 소스를 정의하는 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("OscillatorNode")}}</dt>
- <dd><strong><code style="font-size: 14px;">OscillatorNode</code></strong> 인터페이스는 삼각파 또는 사인파와 같은 주기적 파형을 나타냅니다. 이것은 주어진 주파수의 파동을 생성하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.</dd>
- <dt>{{domxref("AudioBuffer")}}</dt>
- <dd><strong><code>AudioBuffer</code></strong> 인터페이스는 {{ domxref("AudioContext.decodeAudioData()") }}메소드를 사용해 오디오 파일에서 생성되거나 {{ domxref("AudioContext.createBuffer()") }}를 사용해 로우 데이터로부터 생성된 메모리상에 적재되는 짧은 오디오 자원을 나타냅니다. 이 형식으로 디코딩된 오디오는 {{ domxref("AudioBufferSourceNode") }}에 삽입될 수 있습니다.</dd>
- <dt>{{domxref("AudioBufferSourceNode")}}</dt>
- <dd><strong><code>AudioBufferSourceNode</code></strong> 인터페이스는 {{domxref("AudioBuffer")}}에 저장된 메모리상의 오디오 데이터로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
- <dt>{{domxref("MediaElementAudioSourceNode")}}</dt>
- <dd><code><strong>MediaElementAudio</strong></code><strong><code>SourceNode</code></strong> 인터페이스는 {{ htmlelement("audio") }} 나 {{ htmlelement("video") }} HTML 엘리먼트로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
- <dt>{{domxref("MediaStreamAudioSourceNode")}}</dt>
- <dd><code><strong>MediaStreamAudio</strong></code><strong><code>SourceNode</code></strong> 인터페이스는 <a href="/en-US/docs/WebRTC" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}}(웹캡, 마이크 혹은 원격 컴퓨터에서 전송된 스트림)으로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("AudioScheduledSourceNode")}}</dt>
+ <dd><strong><code>AudioScheduledSourceNode</code></strong>는 오디오 소스 노드 인터페이스의 몇 가지 유형에 대한 부모 인터페이스입니다. 이것은 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("OscillatorNode")}}</dt>
+ <dd><strong><code style="font-size: 14px;">OscillatorNode</code></strong> 인터페이스는 삼각파 또는 사인파와 같은 주기적 파형을 나타냅니다. 이것은 주어진 주파수의 파동을 생성하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.</dd>
+ <dt>{{domxref("AudioBuffer")}}</dt>
+ <dd><strong><code>AudioBuffer</code></strong> 인터페이스는 {{ domxref("AudioContext.decodeAudioData()") }}메소드를 사용해 오디오 파일에서 생성되거나 {{ domxref("AudioContext.createBuffer()") }}를 사용해 로우 데이터로부터 생성된 메모리상에 적재되는 짧은 오디오 자원을 나타냅니다. 이 형식으로 디코딩된 오디오는 {{ domxref("AudioBufferSourceNode") }}에 삽입될 수 있습니다.</dd>
+ <dt>{{domxref("AudioBufferSourceNode")}}</dt>
+ <dd><strong><code>AudioBufferSourceNode</code></strong> 인터페이스는 {{domxref("AudioBuffer")}}에 저장된 메모리상의 오디오 데이터로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("MediaElementAudioSourceNode")}}</dt>
+ <dd><code><strong>MediaElementAudio</strong></code><strong><code>SourceNode</code></strong> 인터페이스는 {{ htmlelement("audio") }} 나 {{ htmlelement("video") }} HTML 엘리먼트로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("MediaStreamAudioSourceNode")}}</dt>
+ <dd><code><strong>MediaStreamAudio</strong></code><strong><code>SourceNode</code></strong> 인터페이스는 <a href="/en-US/docs/Web/API/WebRTC_API" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}}(웹캠, 마이크 혹은 원격 컴퓨터에서 전송된 스트림)으로 구성된 오디오 소스를 나타냅니다. 이것은 오디오 소스 역할을 하는 {{domxref("AudioNode")}}입니다.</dd>
+ <dt>{{domxref("MediaStreamTrackAudioSourceNode")}}</dt>
+ <dd>{{domxref("MediaStreamTrackAudioSourceNode")}} 유형의 노드는 데이터가 {{domxref("MediaStreamTrack")}}로부터 오는 오디오 소스를 표현합니다. 이 노드를 생성하기 위해 {{domxref("AudioContext.createMediaStreamTrackSource", "createMediaStreamTrackSource()")}} 메서드를 사용하여 이 노드를 생성할 때, 여러분은 어떤 트랙을 사용할 지 명시합니다. 이것은 <code>MediaStreamAudioSourceNode</code>보다 더 많은 제어를 제공합니다.</dd>
</dl>
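+
+<p>For example, a minimal sketch that wraps an existing {{htmlelement("audio")}} element as a source node (assuming the page contains one):</p>
+
+<pre class="brush: js">var audioElement = document.querySelector("audio");
+var track = audioCtx.createMediaElementSource(audioElement);
+track.connect(audioCtx.destination);</pre>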
-<h3 id="오디오_이펙트_필터_정의하기">오디오 이펙트 필터 정의하기</h3>
+<h3 id="Defining_audio_effects_filters">오디오 이펙트 필터 정의하기</h3>
<p>오디오 소스에 적용할 이펙트를 정의하는 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("BiquadFilterNode")}}</dt>
- <dd><strong><code>BiquadFilterNode</code></strong> 인터페이스는 간단한 하위 필터를 나타냅니다. 이것은 여러 종류의 필터나 톤 제어 장치 혹은 그래픽 이퀄라이저를 나타낼 수 있는 {{domxref("AudioNode")}}입니다. <code>BiquadFilterNode</code>는 항상 단 하나의 입력과 출력만을 가집니다. </dd>
- <dt>{{domxref("ConvolverNode")}}</dt>
- <dd><code><strong>Convolver</strong></code><strong><code>Node</code></strong><span style="line-height: 1.5;"> 인터페이스는 주어진 {{domxref("AudioBuffer")}}에 선형 콘볼루션을 수행하는 {{domxref("AudioNode")}}이며, 리버브 이펙트를 얻기 위해 자주 사용됩니다. </span></dd>
- <dt>{{domxref("DelayNode")}}</dt>
- <dd><strong><code>DelayNode</code></strong> 인터페이스는 지연선을 나타냅니다. 지연선은 입력 데이터가 출력에 전달되기까지의 사이에 딜레이를 발생시키는 {{domxref("AudioNode")}} 오디오 처리 모듈입니다.</dd>
- <dt>{{domxref("DynamicsCompressorNode")}}</dt>
- <dd><strong><code>DynamicsCompressorNode</code></strong> 인터페이스는 압축 이펙트를 제공합니다, 이는 신호의 가장 큰 부분의 볼륨을 낮추어 여러 사운드를 동시에 재생할 때 발생할 수 있는 클리핑 및 왜곡을 방지합니다.</dd>
- <dt>{{domxref("GainNode")}}</dt>
- <dd><strong><code>GainNode</code></strong> 인터페이스는 음량의 변경을 나타냅니다. 이는 출력에 전달되기 전의 입력 데이터에 주어진 음량 조정을 적용하기 위한 {{domxref("AudioNode")}} 오디오 모듈입니다.</dd>
- <dt>{{domxref("StereoPannerNode")}}</dt>
- <dd><code><strong>StereoPannerNode</strong></code> 인터페이스는 오디오 스트림을 좌우로 편향시키는데 사용될 수 있는 간단한 스테레오 패너 노드를 나타냅니다.</dd>
- <dt>{{domxref("WaveShaperNode")}}</dt>
- <dd><strong><code>WaveShaperNode</code></strong> 인터페이스는 비선형 왜곡을 나타냅니다. 이는 곡선을 사용하여 신호의 파형 형성에 왜곡을 적용하는 {{domxref("AudioNode")}}입니다. 분명한 왜곡 이펙트 외에도 신호에 따뜻한 느낌을 더하는데 자주 사용됩니다.</dd>
- <dt>{{domxref("PeriodicWave")}}</dt>
- <dd>{{domxref("OscillatorNode")}}의 출력을 형성하는데 사용될 수 있는 주기적 파형을 설명합니다.</dd>
+ <dt>{{domxref("BiquadFilterNode")}}</dt>
+ <dd><strong><code>BiquadFilterNode</code></strong> 인터페이스는 간단한 하위 필터를 나타냅니다. 이것은 여러 종류의 필터나 톤 제어 장치 혹은 그래픽 이퀄라이저를 나타낼 수 있는 {{domxref("AudioNode")}}입니다. <code>BiquadFilterNode</code>는 항상 단 하나의 입력과 출력만을 가집니다. </dd>
+ <dt>{{domxref("ConvolverNode")}}</dt>
+ <dd><code><strong>Convolver</strong></code><strong><code>Node</code></strong><span style="line-height: 1.5;"> 인터페이스는 주어진 {{domxref("AudioBuffer")}}에 선형 콘볼루션을 수행하는 {{domxref("AudioNode")}}이며, 리버브 이펙트를 얻기 위해 자주 사용됩니다. </span></dd>
+ <dt>{{domxref("DelayNode")}}</dt>
+ <dd><strong><code>DelayNode</code></strong> 인터페이스는 지연선을 나타냅니다. 지연선은 입력 데이터가 출력에 전달되기까지의 사이에 딜레이를 발생시키는 {{domxref("AudioNode")}} 오디오 처리 모듈입니다.</dd>
+ <dt>{{domxref("DynamicsCompressorNode")}}</dt>
+ <dd><strong><code>DynamicsCompressorNode</code></strong> 인터페이스는 압축 이펙트를 제공합니다, 이는 신호의 가장 큰 부분의 볼륨을 낮추어 여러 사운드를 동시에 재생할 때 발생할 수 있는 클리핑 및 왜곡을 방지합니다.</dd>
+ <dt>{{domxref("GainNode")}}</dt>
+ <dd><strong><code>GainNode</code></strong> 인터페이스는 음량의 변경을 나타냅니다. 이는 출력에 전달되기 전의 입력 데이터에 주어진 음량 조정을 적용하기 위한 {{domxref("AudioNode")}} 오디오 모듈입니다.</dd>
+ <dt>{{domxref("WaveShaperNode")}}</dt>
+ <dd><strong><code>WaveShaperNode</code></strong> 인터페이스는 비선형 왜곡을 나타냅니다. 이는 곡선을 사용하여 신호의 파형 형성에 왜곡을 적용하는 {{domxref("AudioNode")}}입니다. 분명한 왜곡 이펙트 외에도 신호에 따뜻한 느낌을 더하는데 자주 사용됩니다.</dd>
+ <dt>{{domxref("PeriodicWave")}}</dt>
+ <dd>{{domxref("OscillatorNode")}}의 출력을 형성하는데 사용될 수 있는 주기적 파형을 설명합니다.</dd>
+ <dt>{{domxref("IIRFilterNode")}}</dt>
+ <dd>일반적인 <strong><a class="external external-icon" href="https://en.wikipedia.org/wiki/infinite%20impulse%20response" title="infinite impulse response">infinite impulse response</a></strong> (IIR) 필터를 구현합니다; 이 유형의 필터는 음색 제어 장치와 그래픽 이퀄라이저를 구현하는 데 사용될 수 있습니다.</dd>
</dl>
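+
+<p>A short sketch of inserting one of these effect nodes into a graph, here a low-pass {{domxref("BiquadFilterNode")}} (the <code>source</code> node is assumed from the earlier sketches):</p>
+
+<pre class="brush: js">var filter = audioCtx.createBiquadFilter();
+filter.type = "lowpass";
+filter.frequency.value = 1000; // attenuate everything above ~1 kHz
+source.connect(filter);
+filter.connect(audioCtx.destination);</pre>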
-<h3 id="오디오_목적지_정의하기">오디오 목적지 정의하기</h3>
+<h3 id="Defining_audio_destinations">오디오 목적지 정의하기</h3>
<p>처리된 오디오를 어디에 출력할지 정의하는 인터페이스입니다.</p>
@@ -122,347 +128,152 @@ translation_of: Web/API/Web_Audio_API
<dt>{{domxref("AudioDestinationNode")}}</dt>
 <dd>The <strong><code>AudioDestinationNode</code></strong> interface represents the end destination of an audio source in a given context, usually the speakers of your device.</dd>
 <dt>{{domxref("MediaStreamAudioDestinationNode")}}</dt>
- <dd>The <code><strong>MediaStreamAudio</strong></code><strong><code>DestinationNode</code></strong> interface represents an audio destination consisting of a <a href="/en-US/docs/WebRTC" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}} with a single <code>AudioMediaStreamTrack</code>, which can be used in a similar way to a {{domxref("MediaStream")}} obtained from {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}. It is an {{domxref("AudioNode")}} that acts as an audio destination.</dd>
+ <dd>The <code><strong>MediaStreamAudio</strong></code><strong><code>DestinationNode</code></strong> interface represents an audio destination consisting of a <a href="/en-US/docs/Web/API/WebRTC_API" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}} with a single <code>AudioMediaStreamTrack</code>, which can be used in a similar way to a {{domxref("MediaStream")}} obtained from {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}. It is an {{domxref("AudioNode")}} that acts as an audio destination.</dd>
</dl>
-<h3 id="데이터_분석_및_시각화">데이터 분석 및 시각화</h3>
+<h3 id="Data_analysis_and_visualization">데이터 분석 및 시각화</h3>
<p>오디오에서 재생시간이나 주파수 등의 데이터를 추출하기 위한 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("AnalyserNode")}}</dt>
- <dd><strong><code>AnalyserNode</code></strong> 인터페이스는 데이터를 분석하고 시각화하기 위한 실시간 주파수와 시간영역 분석 정보를 제공하는 노드를 나타냅니다.</dd>
+ <dt>{{domxref("AnalyserNode")}}</dt>
+ <dd><strong><code>AnalyserNode</code></strong> 인터페이스는 데이터를 분석하고 시각화하기 위한 실시간 주파수와 시간영역 분석 정보를 제공하는 노드를 나타냅니다.</dd>
</dl>
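+
+<p>A minimal sketch of pulling time-domain data out of an {{domxref("AnalyserNode")}}; when visualizing, this is typically done once per animation frame:</p>
+
+<pre class="brush: js">var analyser = audioCtx.createAnalyser();
+analyser.fftSize = 2048;
+source.connect(analyser);
+var dataArray = new Uint8Array(analyser.frequencyBinCount);
+analyser.getByteTimeDomainData(dataArray); // dataArray now holds waveform data</pre>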
-<h3 id="오디오_채널을_분리하고_병합하기">오디오 채널을 분리하고 병합하기</h3>
+<h3 id="Splitting_and_merging_audio_channels">오디오 채널을 분리하고 병합하기</h3>
<p>오디오 채널들을 분리하거나 병합하기 위한 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("ChannelSplitterNode")}}</dt>
- <dd><code><strong>ChannelSplitterNode</strong></code> 인터페이스는 오디오 소스의 여러 채널을 모노 출력 셋으로 분리합니다.</dd>
- <dt>{{domxref("ChannelMergerNode")}}</dt>
- <dd><code><strong>ChannelMergerNode</strong></code> 인터페이스는 여러 모노 입력을 하나의 출력으로 재결합합니다. 각 입력은 출력의 채널을 채우는데 사용될 것입니다.</dd>
+ <dt>{{domxref("ChannelSplitterNode")}}</dt>
+ <dd><code><strong>ChannelSplitterNode</strong></code> 인터페이스는 오디오 소스의 여러 채널을 모노 출력 셋으로 분리합니다.</dd>
+ <dt>{{domxref("ChannelMergerNode")}}</dt>
+ <dd><code><strong>ChannelMergerNode</strong></code> 인터페이스는 여러 모노 입력을 하나의 출력으로 재결합합니다. 각 입력은 출력의 채널을 채우는데 사용될 것입니다.</dd>
</dl>
-<h3 id="오디오_공간화">오디오 공간화</h3>
+<h3 id="Audio_spatialization">오디오 공간화</h3>
<p>오디오 소스에 오디오 공간화 패닝 이펙트를 추가하는 인터페이스입니다.</p>
<dl>
- <dt>{{domxref("AudioListener")}}</dt>
- <dd><strong><code>AudioListener</code></strong> 인터페이스는 오디오 공간화에 사용되는 오디오 장면을 청취하는 고유한 시청자의 위치와 방향을 나타냅니다.</dd>
- <dt>{{domxref("PannerNode")}}</dt>
- <dd><strong><code>PannerNode</code></strong> 인터페이스는 공간 내의 신호 양식을 나타냅니다. 이것은 자신의 오른손 직교 좌표 내의 포지션과, 속도 벡터를 이용한 움직임과, 방향성 원뿔을 이용한 방향을 서술하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.</dd>
-</dl>
-
-<h3 id="자바스크립트에서_오디오_처리하기">자바스크립트에서 오디오 처리하기</h3>
-
-<p>자바스크립트에서 오디오 데이터를 처리하기 위한 코드를 작성할 수 있습니다. 이렇게 하려면 아래에 나열된 인터페이스와 이벤트를 사용하세요.</p>
-
-<div class="note">
-<p>이것은 Web Audio API 2014년 8월 29일의 스펙입니다. 이 기능은 지원이 중단되고 {{ anch("Audio_Workers") }}로 대체될 예정입니다.</p>
-</div>
-
-<dl>
- <dt>{{domxref("ScriptProcessorNode")}}</dt>
- <dd><strong><code>ScriptProcessorNode</code></strong> 인터페이스는 자바스크립트를 이용한 오디오 생성, 처리, 분석 기능을 제공합니다. 이것은 현재 입력 버퍼와 출력 버퍼, 총 두 개의 버퍼에 연결되는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다. {{domxref("AudioProcessingEvent")}}인터페이스를 구현하는 이벤트는 입력 버퍼에 새로운 데이터가 들어올 때마다 객체로 전달되고, 출력 버퍼가 데이터로 채워지면 이벤트 핸들러가 종료됩니다.</dd>
- <dt>{{event("audioprocess")}} (event)</dt>
- <dd><strong><code>audioprocess</code></strong> 이벤트는 Web Audio API {{domxref("ScriptProcessorNode")}}의 입력 버퍼가 처리될 준비가 되었을 때 발생합니다.</dd>
- <dt>{{domxref("AudioProcessingEvent")}}</dt>
- <dd><a href="/en-US/docs/Web_Audio_API" title="/en-US/docs/Web_Audio_API">Web Audio API</a> <strong><code>AudioProcessingEvent</code></strong> 는 {{domxref("ScriptProcessorNode")}} 입력 버퍼가 처리될 준비가 되었을 때 발생하는 이벤트를 나타냅니다.</dd>
+ <dt>{{domxref("AudioListener")}}</dt>
+ <dd><strong><code>AudioListener</code></strong> 인터페이스는 오디오 공간화에 사용되는 오디오 장면을 청취하는 고유한 시청자의 위치와 방향을 나타냅니다.</dd>
+ <dt>{{domxref("PannerNode")}}</dt>
+ <dd><strong><code>PannerNode</code></strong> 인터페이스는 공간 내의 신호 양식을 나타냅니다. 이것은 자신의 오른손 직교 좌표 내의 포지션과, 속도 벡터를 이용한 움직임과, 방향성 원뿔을 이용한 방향을 서술하는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다.</dd>
+ <dt>{{domxref("StereoPannerNode")}}</dt>
+ <dd><code><strong>StereoPannerNode</strong></code> 인터페이스는 오디오 스트림을 좌우로 편향시키는데 사용될 수 있는 간단한 스테레오 패너 노드를 나타냅니다.</dd>
</dl>
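+
+<p>As a simple sketch, panning a source halfway to the left with a {{domxref("StereoPannerNode")}} (the <code>source</code> node is assumed from earlier):</p>
+
+<pre class="brush: js">var panNode = audioCtx.createStereoPanner();
+panNode.pan.value = -0.5; // -1 is full left, 1 is full right
+source.connect(panNode);
+panNode.connect(audioCtx.destination);</pre>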
-<h3 id="오프라인백그라운드_오디오_처리하기">오프라인/백그라운드 오디오 처리하기</h3>
+<h3 id="Audio_processing_in_JavaScript">JavaScript에서의 오디오 프로세싱</h3>
-<p>다음을 이용해 백그라운드(장치의 스피커가 아닌 {{domxref("AudioBuffer")}}으로 렌더링)에서 오디오 그래프를 신속하게 처리/렌더링 할수 있습니다.</p>
+<p>오디오 worklet을 사용하여, 여러분은 JavaScript 또는 <a href="/en-US/docs/WebAssembly">WebAssembly</a>로 작성된 사용자 정의 오디오 노드를 정의할 수 있습니다. 오디오 worklet은 {{domxref("Worklet")}} 인터페이스를 구현하는데, 이는 {{domxref("Worker")}} 인터페이스의 가벼운 버전입니다.</p>
<dl>
- <dt>{{domxref("OfflineAudioContext")}}</dt>
- <dd><strong><code>OfflineAudioContext</code></strong> 인터페이스는 {{domxref("AudioNode")}}로 연결되어 구성된 오디오 프로세싱 그래프를 나타내는 {{domxref("AudioContext")}} 인터페이스입니다. 표준 <strong><code>AudioContext</code></strong> 와 대조적으로, <strong><code>OfflineAudioContext</code></strong> 는 실제로 오디오를 렌더링하지 않고 가능한 빨리 버퍼 내에서 생성합니다. </dd>
- <dt>{{event("complete")}} (event)</dt>
- <dd><strong><code>complete</code></strong> 이벤트는 {{domxref("OfflineAudioContext")}}의 렌더링이 종료될때 발생합니다.</dd>
- <dt>{{domxref("OfflineAudioCompletionEvent")}}</dt>
- <dd><strong><code>OfflineAudioCompletionEvent</code></strong> 이벤트는 {{domxref("OfflineAudioContext")}} 의 처리가 종료될 때 발생하는 이벤트를 나타냅니다. {{event("complete")}} 이벤트는 이 이벤트를 구현합니다.</dd>
+ <dt>{{domxref("AudioWorklet")}}</dt>
+ <dd><code>AudioWorklet</code> 인터페이스는 {{domxref("AudioContext")}} 객체의 {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}}을 통하여 사용 가능하고, 메인 스레드를 실행할 오디오 worklet에 모듈을 추가할 수 있게 합니다.</dd>
+ <dt>{{domxref("AudioWorkletNode")}}</dt>
+ <dd><code>AudioWorkletNode</code> 인터페이스는 오디오 그래프에 임베드된 {{domxref("AudioNode")}}을 나타내고 해당하는 <code>AudioWorkletProcessor</code>에 메시지를 전달할 수 있습니다.</dd>
+ <dt>{{domxref("AudioWorkletProcessor")}}</dt>
+ <dd><code>AudioWorkletProcessor</code> 인터페이스는 오디오를 직접 생성하거나, 처리하거나, 또는 분석하는 <code>AudioWorkletGlobalScope</code>에서 실행되는 오디오 프로세싱 코드를 나타내고, 해당하는 <code>AudioWorkletNode</code>에 메시지를 전달할 수 있습니다.</dd>
+ <dt>{{domxref("AudioWorkletGlobalScope")}}</dt>
+ <dd><code>AudioWorkletGlobalScope</code> 인터페이스는 오디오 프로세싱 스크립트가 실행되는 워커 컨텍스트를 나타내는 파생된 객체인 <code>WorkletGlobalScope</code>입니다; 이것은 메인 스레드가 아닌 worklet 스레드에서 JavaScript를 사용하여 직접적으로 오디오 데이터의 생성, 처리, 분석을 가능하게 하도록 설계되었습니다.</dd>
</dl>
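+
+<p>In rough sketch form, the two halves of an audio worklet look like this; the file name and the registered processor name are hypothetical:</p>
+
+<pre class="brush: js">// noise-processor.js, run on the worklet thread
+class NoiseProcessor extends AudioWorkletProcessor {
+  process(inputs, outputs) {
+    outputs[0].forEach(function(channel) {
+      for (var i = 0; i &lt; channel.length; i++) {
+        channel[i] = Math.random() * 2 - 1; // fill with white noise
+      }
+    });
+    return true; // keep the processor alive
+  }
+}
+registerProcessor("noise-processor", NoiseProcessor);
+
+// Main thread: load the module, then create the node
+audioCtx.audioWorklet.addModule("noise-processor.js").then(function() {
+  var noiseNode = new AudioWorkletNode(audioCtx, "noise-processor");
+  noiseNode.connect(audioCtx.destination);
+});</pre>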
-<h3 id="Audio_Workers" name="Audio_Workers">오디오 워커</h3>
+<h4 id="Obsolete_script_processor_nodes">안 쓰임: 스크립트 프로세서 노드</h4>
-<p>오디오 워커는 <a href="/en-US/docs/Web/Guide/Performance/Using_web_workers">web worker</a> 컨텍스트 내에서 스크립팅된 오디오 처리를 관리하기 위한 기능을 제공하며, 두어가지 인터페이스로 정의되어 있습니다(2014년 8월 29일 새로운 기능이 추가되었습니다). 이는 아직 모든 브라우저에서 구현되지 않았습니다. 구현된 브라우저에서는 <a href="#Audio_processing_via_JavaScript">Audio processing in JavaScript</a>에서 설명된 {{domxref("ScriptProcessorNode")}}를 포함한 다른 기능을 대체합니다.</p>
+<p>오디오 worklet이 정의되기 전에, Web Audio API는 JavaScript 기반의 오디오 프로세싱을 위해 <code>ScriptProcessorNode</code>를 사용했습니다. 코드가 메인 스레드에서 실행되기 때문에, 나쁜 성능을 가지고 있었습니다. <code>ScriptProcessorNode</code>는 역사적인 이유로 보존되나 deprecated되었습니다.</p>
<dl>
- <dt>{{domxref("AudioWorkerNode")}}</dt>
- <dd><strong><code>AudioWorkerNode</code></strong> 인터페이스는 워커 쓰레드와 상호작용하여 오디오를 직접 생성, 처리, 분석하는 {{domxref("AudioNode")}}를 나타냅니다. </dd>
- <dt>{{domxref("AudioWorkerGlobalScope")}}</dt>
- <dd><strong><code>AudioWorkerGlobalScope</code></strong> 인터페이스는 <strong><code>DedicatedWorkerGlobalScope</code></strong> 에서 파생된 오디오 처리 스크립트가 실행되는 워커 컨텍스트를 나타내는 객체입니다. 이것은 워커 쓰레드 내에서 자바스크립트를 이용하여 직접 오디오 데이터를 생성, 처리, 분석할 수 있도록 설계되었습니다.</dd>
- <dt>{{domxref("AudioProcessEvent")}}</dt>
- <dd>이것은 처리를 수행하기 위해 {{domxref("AudioWorkerGlobalScope")}} 오브젝트로 전달되는 <code>Event</code> 오브젝트입니다.</dd>
+ <dt>{{domxref("ScriptProcessorNode")}} {{deprecated_inline}}</dt>
+ <dd><strong><code>ScriptProcessorNode</code></strong> 인터페이스는 자바스크립트를 이용한 오디오 생성, 처리, 분석 기능을 제공합니다. 이것은 현재 입력 버퍼와 출력 버퍼, 총 두 개의 버퍼에 연결되는 {{domxref("AudioNode")}} 오디오 프로세싱 모듈입니다. {{domxref("AudioProcessingEvent")}} 인터페이스를 구현하는 이벤트는 입력 버퍼에 새로운 데이터가 들어올 때마다 객체로 전달되고, 출력 버퍼가 데이터로 채워지면 이벤트 핸들러가 종료됩니다.</dd>
+ <dt>{{event("audioprocess")}} (event) {{deprecated_inline}}</dt>
+ <dd><code>audioprocess</code> 이벤트는 Web Audio API {{domxref("ScriptProcessorNode")}}의 입력 버퍼가 처리될 준비가 되었을 때 발생합니다.</dd>
+ <dt>{{domxref("AudioProcessingEvent")}} {{deprecated_inline}}</dt>
+ <dd><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a> <code>AudioProcessingEvent</code>는 {{domxref("ScriptProcessorNode")}} 입력 버퍼가 처리될 준비가 되었을 때 발생하는 이벤트를 나타냅니다.</dd>
</dl>
-<h2 id="Example" name="Example">Obsolete interfaces</h2>
+<h3 id="Offlinebackground_audio_processing">오프라인/백그라운드 오디오 처리하기</h3>
-<p>The following interfaces were defined in old versions of the Web Audio API spec, but are now obsolete and have been replaced by other interfaces.</p>
+<p>It is possible to process/render an audio graph very quickly in the background, rendering it to an {{domxref("AudioBuffer")}} rather than to the device's speakers, with the following.</p>
<dl>
- <dt>{{domxref("JavaScriptNode")}}</dt>
- <dd>Used for direct audio processing via JavaScript. This interface is obsolete, and has been replaced by {{domxref("ScriptProcessorNode")}}.</dd>
- <dt>{{domxref("WaveTableNode")}}</dt>
- <dd>Used to define a periodic waveform. This interface is obsolete, and has been replaced by {{domxref("PeriodicWave")}}.</dd>
+ <dt>{{domxref("OfflineAudioContext")}}</dt>
+ <dd><strong><code>OfflineAudioContext</code></strong> 인터페이스는 {{domxref("AudioNode")}}로 연결되어 구성된 오디오 프로세싱 그래프를 나타내는 {{domxref("AudioContext")}} 인터페이스입니다. 표준 <strong><code>AudioContext</code></strong> 와 대조적으로, <strong><code>OfflineAudioContext</code></strong> 는 실제로 오디오를 렌더링하지 않고 가능한 빨리 버퍼 내에서 생성합니다. </dd>
+ <dt>{{event("complete")}} (event)</dt>
+ <dd><strong><code>complete</code></strong> 이벤트는 {{domxref("OfflineAudioContext")}}의 렌더링이 종료될때 발생합니다.</dd>
+ <dt>{{domxref("OfflineAudioCompletionEvent")}}</dt>
+ <dd><strong><code>OfflineAudioCompletionEvent</code></strong> 이벤트는 {{domxref("OfflineAudioContext")}} 의 처리가 종료될 때 발생하는 이벤트를 나타냅니다. {{event("complete")}} 이벤트는 이 이벤트를 구현합니다.</dd>
</dl>
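+
+<p>In sketch form, offline rendering looks something like this (the channel count, length, and sample rate are arbitrary here):</p>
+
+<pre class="brush: js">var offlineCtx = new OfflineAudioContext(2, 44100 * 10, 44100); // 10 s of stereo
+// ...build a node graph on offlineCtx exactly as on a regular AudioContext...
+offlineCtx.startRendering().then(function(renderedBuffer) {
+  // renderedBuffer is an AudioBuffer containing the rendered audio
+});</pre>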
-<h2 id="Example" name="Example">Example</h2>
-
-<p>This example shows a wide variety of Web Audio API functions being used. You can see this code in action on the <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-o-matic</a> demo (also check out the <a href="https://github.com/mdn/voice-change-o-matic">full source code at Github</a>) — this is an experimental voice changer toy demo; keep your speakers turned down low when you use it, at least to start!</p>
-
-<p>The Web Audio API lines are highlighted; if you want to find out more about what the different methods, etc. do, have a search around the reference pages.</p>
-
-<pre class="brush: js; highlight:[1,2,9,10,11,12,36,37,38,39,40,41,62,63,72,114,115,121,123,124,125,147,151] notranslate">var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); // define audio context
-// Webkit/blink browsers need prefix, Safari won't work without window.
-
-var voiceSelect = document.getElementById("voice"); // select box for selecting voice effect options
-var visualSelect = document.getElementById("visual"); // select box for selecting audio visualization options
-var mute = document.querySelector('.mute'); // mute button
-var drawVisual; // requestAnimationFrame
-
-var analyser = audioCtx.createAnalyser();
-var distortion = audioCtx.createWaveShaper();
-var gainNode = audioCtx.createGain();
-var biquadFilter = audioCtx.createBiquadFilter();
-
-function makeDistortionCurve(amount) { // function to make curve shape for distortion/wave shaper node to use
-  var k = typeof amount === 'number' ? amount : 50,
-    n_samples = 44100,
-    curve = new Float32Array(n_samples),
-    deg = Math.PI / 180,
-    i = 0,
-    x;
-  for ( ; i &lt; n_samples; ++i ) {
-    x = i * 2 / n_samples - 1;
-    curve[i] = ( 3 + k ) * x * 20 * deg / ( Math.PI + k * Math.abs(x) );
-  }
-  return curve;
-};
-
-navigator.getUserMedia (
-  // constraints - only audio needed for this app
-  {
-    audio: true
-  },
-
-  // Success callback
-  function(stream) {
-    source = audioCtx.createMediaStreamSource(stream);
-    source.connect(analyser);
-    analyser.connect(distortion);
-    distortion.connect(biquadFilter);
-    biquadFilter.connect(gainNode);
-    gainNode.connect(audioCtx.destination); // connecting the different audio graph nodes together
-
-    visualize(stream);
-    voiceChange();
-
-  },
-
-  // Error callback
-  function(err) {
-    console.log('The following gUM error occured: ' + err);
-  }
-);
-
-function visualize(stream) {
-  WIDTH = canvas.width;
-  HEIGHT = canvas.height;
-
-  var visualSetting = visualSelect.value;
-  console.log(visualSetting);
-
-  if(visualSetting == "sinewave") {
-    analyser.fftSize = 2048;
-    var bufferLength = analyser.frequencyBinCount; // half the FFT value
-    var dataArray = new Uint8Array(bufferLength); // create an array to store the data
+<h2 id="Guides_and_tutorials">가이드와 자습서</h2>
-    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
+<p>{{LandingPageListSubpages}}</p>
-    function draw() {
+<h2 id="Examples">예제</h2>
-      drawVisual = requestAnimationFrame(draw);
+<p>You can find a number of examples in the <a href="https://github.com/mdn/webaudio-examples/">webaudio-examples repository</a> on GitHub.</p>
-      analyser.getByteTimeDomainData(dataArray); // get waveform data and put it into the array created above
+<h2 id="Specifications">명세</h2>
-      canvasCtx.fillStyle = 'rgb(200, 200, 200)'; // draw wave with canvas
-      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
-
-      canvasCtx.lineWidth = 2;
-      canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
-
-      canvasCtx.beginPath();
-
-      var sliceWidth = WIDTH * 1.0 / bufferLength;
-      var x = 0;
-
-      for(var i = 0; i &lt; bufferLength; i++) {
-
-        var v = dataArray[i] / 128.0;
-        var y = v * HEIGHT/2;
-
-        if(i === 0) {
-          canvasCtx.moveTo(x, y);
-        } else {
-          canvasCtx.lineTo(x, y);
-        }
-
-        x += sliceWidth;
-      }
-
-      canvasCtx.lineTo(canvas.width, canvas.height/2);
-      canvasCtx.stroke();
-    };
-
-    draw();
-
-  } else if(visualSetting == "off") {
-    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
-    canvasCtx.fillStyle = "red";
-    canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
-  }
-
-}
-
-function voiceChange() {
-  distortion.curve = new Float32Array;
-  biquadFilter.gain.value = 0; // reset the effects each time the voiceChange function is run
-
-  var voiceSetting = voiceSelect.value;
-  console.log(voiceSetting);
-
-  if(voiceSetting == "distortion") {
-    distortion.curve = makeDistortionCurve(400); // apply distortion to sound using waveshaper node
-  } else if(voiceSetting == "biquad") {
-    biquadFilter.type = "lowshelf";
-    biquadFilter.frequency.value = 1000;
-    biquadFilter.gain.value = 25; // apply lowshelf filter to sounds using biquad
-  } else if(voiceSetting == "off") {
-    console.log("Voice settings turned off"); // do nothing, as off option was chosen
-  }
-
-}
-
-// event listeners to change visualize and voice settings
+<table class="standard-table">
+ <tbody>
+ <tr>
+ <th scope="col">Specification</th>
+ <th scope="col">Status</th>
+ <th scope="col">Comment</th>
+ </tr>
+ <tr>
+ <td>{{SpecName('Web Audio API')}}</td>
+ <td>{{Spec2('Web Audio API')}}</td>
+ <td></td>
+ </tr>
+ </tbody>
+</table>
-visualSelect.onchange = function() {
-  window.cancelAnimationFrame(drawVisual);
-  visualize(stream);
-}
+<h2 id="Browser_compatibility">브라우저 호환성</h2>
-voiceSelect.onchange = function() {
-  voiceChange();
-}
+<div>
+<h3 id="AudioContext">AudioContext</h3>
-mute.onclick = voiceMute;
+<div>
-function voiceMute() { // toggle to mute and unmute sound
-  if(mute.id == "") {
-    gainNode.gain.value = 0; // gain set to 0 to mute sound
-    mute.id = "activated";
-    mute.innerHTML = "Unmute";
-  } else {
-    gainNode.gain.value = 1; // gain set to 1 to unmute sound
-    mute.id = "";
-    mute.innerHTML = "Mute";
-  }
-}
-</pre>
+<p>{{Compat("api.AudioContext", 0)}}</p>
+</div>
+</div>
-<h2 id="Specifications">Specifications</h2>
+<h2 id="See_also">같이 보기</h2>
-<table class="standard-table">
- <tbody>
- <tr>
- <th scope="col">Specification</th>
- <th scope="col">Status</th>
- <th scope="col">Comment</th>
- </tr>
- <tr>
- <td>{{SpecName('Web Audio API')}}</td>
- <td>{{Spec2('Web Audio API')}}</td>
- <td></td>
- </tr>
- </tbody>
-</table>
+<h3 id="Tutorialsguides">자습서/가이드</h3>
-<h2 id="Browser_compatibility">Browser compatibility</h2>
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Web Audio API의 기본 개념</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Web Audio API 사용하기</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques">고급 기술: 소리 생성, 시퀸싱, 타이밍, 스케쥴링</a></li>
+ <li><a href="/en-US/docs/Web/Media/Autoplay_guide">미디어와 Web Audio API에 대한 자동 재생 가이드</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_IIR_filters">IIR 필터 사용하기</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Web Audio API 시각화</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio 공간화 기초</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Controlling_multiple_parameters_with_ConstantSourceNode">ConstantSourceNode로 다수의 매개변수 제어하기</a></li>
+ <li><a href="https://www.html5rocks.com/tutorials/webaudio/positional_audio/">positional audio와 WebGL 같이 사용하기</a></li>
+ <li><a href="https://www.html5rocks.com/tutorials/webaudio/games/">Web Audio API로 게임 오디오 개발하기</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Migrating_from_webkitAudioContext">webkitAudioContext 코드를 AudioContext 기반 표준에 포팅하기</a></li>
+</ul>
-<p>{{Compat("api.AudioContext", 0)}}</p>
-
-<h2 id="See_also">See also</h2>
+<h3 id="Libraries">라이브러리</h3>
<ul>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a></li>
- <li><a href="http://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic example</a></li>
- <li><a href="http://mdn.github.io/violent-theremin/">Violent Theremin example</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialisation_basics">Web audio spatialisation basics</a></li>
- <li><a href="http://www.html5rocks.com/tutorials/webaudio/positional_audio/">Mixing Positional Audio and WebGL</a></li>
- <li><a href="http://www.html5rocks.com/tutorials/webaudio/games/">Developing Game Audio with the Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext" title="/en-US/docs/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext">Porting webkitAudioContext code to standards based AudioContext</a></li>
- <li><a href="https://github.com/bit101/tones">Tones</a>: a simple library for playing specific tones/notes using the Web Audio API.</li>
- <li><a href="https://github.com/goldfire/howler.js/">howler.js</a>: a JS audio library that defaults to <a href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">Web Audio API</a> and falls back to <a href="http://www.whatwg.org/specs/web-apps/current-work/#the-audio-element">HTML5 Audio</a>, as well as providing other useful features.</li>
- <li><a href="https://github.com/mattlima/mooog">Mooog</a>: jQuery-style chaining of AudioNodes, mixer-style sends/returns, and more.</li>
+ <li><a href="https://github.com/bit101/tones">Tones</a>: Web Audio API를 사용하여 특정한 음색/음을 재생하는 간단한 라이브러리</li>
+ <li><a href="https://tonejs.github.io/">Tone.js</a>: 브라우저에서 상호작용을 하는 음악을 생성하기 위한 프레임워크</li>
+ <li><a href="https://github.com/goldfire/howler.js/">howler.js</a>: 다른 유용한 기능들을 제공할 뿐만 아니라, <a href="https://webaudio.github.io/web-audio-api/">Web Audio API</a>을 기본으로 하고 <a href="https://www.whatwg.org/specs/web-apps/current-work/#the-audio-element">HTML5 Audio</a>에 대안을 제공하는 JS 오디오 라이브러리</li>
+ <li><a href="https://github.com/mattlima/mooog">Mooog</a>: jQuery 스타일의 AudioNode 체이닝, mixer 스타일의 전송/반환, 등등</li>
+ <li><a href="https://korilakkuma.github.io/XSound/">XSound</a>: 신시사이저, 이펙트, 시각화, 레코딩 등을 위한 Web Audio API 라이브러리</li>
+ <li><a class="external external-icon" href="https://github.com/chrisjohndigital/OpenLang">OpenLang</a>: 다른 소스로부터 하나의 파일에 비디오와 오디오를 레코드하고 결합시키기 위한 Web Audio API를 사용하는 HTML5 비디오 language lab 웹 애플리케이션 (<a class="external external-icon" href="https://github.com/chrisjohndigital/OpenLang">GitHub에 있는 소스</a>)</li>
+ <li><a href="https://ptsjs.org/">Pts.js</a>: 웹 오디오 시각화를 단순화합니다 (<a href="https://ptsjs.org/guide/sound-0800">가이드</a>)</li>
</ul>
-<section id="Quick_Links">
-<h3 id="Quicklinks">Quicklinks</h3>
+<h3 id="Related_topics">관련 주제</h3>
-<ol>
- <li data-default-state="open"><strong><a href="#">Guides</a></strong>
-
- <ol>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio spatialization basics</a></li>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext" title="/en-US/docs/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext">Porting webkitAudioContext code to standards based AudioContext</a></li>
- </ol>
- </li>
- <li data-default-state="open"><strong><a href="#">Examples</a></strong>
- <ol>
- <li><a href="/en-US/docs/Web/API/Web_Audio_API/Simple_synth">Simple synth keyboard</a></li>
- <li><a href="http://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a></li>
- <li><a href="http://mdn.github.io/violent-theremin/">Violent Theremin</a></li>
- </ol>
- </li>
- <li data-default-state="open"><strong><a href="#">Interfaces</a></strong>
- <ol>
- <li>{{domxref("AnalyserNode")}}</li>
- <li>{{domxref("AudioBuffer")}}</li>
- <li>{{domxref("AudioBufferSourceNode")}}</li>
- <li>{{domxref("AudioContext")}}</li>
- <li>{{domxref("AudioDestinationNode")}}</li>
- <li>{{domxref("AudioListener")}}</li>
- <li>{{domxref("AudioNode")}}</li>
- <li>{{domxref("AudioParam")}}</li>
- <li>{{event("audioprocess")}} (event)</li>
- <li>{{domxref("AudioProcessingEvent")}}</li>
- <li>{{domxref("BiquadFilterNode")}}</li>
- <li>{{domxref("ChannelMergerNode")}}</li>
- <li>{{domxref("ChannelSplitterNode")}}</li>
- <li>{{event("complete")}} (event)</li>
- <li>{{domxref("ConvolverNode")}}</li>
- <li>{{domxref("DelayNode")}}</li>
- <li>{{domxref("DynamicsCompressorNode")}}</li>
- <li>{{event("ended_(Web_Audio)", "ended")}} (event)</li>
- <li>{{domxref("GainNode")}}</li>
- <li>{{domxref("MediaElementAudioSourceNode")}}</li>
- <li>{{domxref("MediaStreamAudioDestinationNode")}}</li>
- <li>{{domxref("MediaStreamAudioSourceNode")}}</li>
- <li>{{domxref("OfflineAudioCompletionEvent")}}</li>
- <li>{{domxref("OfflineAudioContext")}}</li>
- <li>{{domxref("OscillatorNode")}}</li>
- <li>{{domxref("PannerNode")}}</li>
- <li>{{domxref("PeriodicWave")}}</li>
- <li>{{domxref("ScriptProcessorNode")}}</li>
- <li>{{domxref("WaveShaperNode")}}</li>
- </ol>
- </li>
-</ol>
-</section>
+<ul>
+ <li><a href="/en-US/docs/Web/Media">웹 미디어 기술</a></li>
+ <li><a href="/en-US/docs/Web/Media/Formats">웹에서의 미디어 타입과 포맷에 대한 가이드</a></li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html b/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html
new file mode 100644
index 0000000000..260a26a090
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/migrating_from_webkitaudiocontext/index.html
@@ -0,0 +1,381 @@
+---
+title: Migrating from webkitAudioContext
+slug: Web/API/Web_Audio_API/Migrating_from_webkitAudioContext
+tags:
+ - API
+ - Audio
+ - Guide
+ - Migrating
+ - Migration
+ - Updating
+ - Web Audio API
+ - porting
+ - webkitAudioContext
+---
+<p>The Web Audio API went through many iterations before reaching its current state. It was first implemented in WebKit, and some of its older parts were not immediately removed as they were replaced in the specification, leading to many sites using non-compatible code. <span class="seoSummary">In this article, we cover the differences in Web Audio API since it was first implemented in WebKit and how to update your code to use the modern Web Audio API.</span></p>
+
+<p>The Web Audio standard was first implemented in <a href="http://webkit.org/">WebKit</a>, and the implementation was built in parallel with the work on the <a href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">specification</a> of the API. As the specification evolved and changes were made to the spec, some of the old implementation pieces were not removed from the WebKit (and Blink) implementations due to backwards compatibility reasons.</p>
+
+<p>New engines implementing the Web Audio spec (such as Gecko) will only implement the official, final version of the specification, which means that code using <code>webkitAudioContext</code> or old naming conventions in the Web Audio specification may not immediately work out of the box in a compliant Web Audio implementation.  This article attempts to summarize the areas where developers are likely to encounter these problems and provide examples on how to port such code to standards based {{domxref("AudioContext")}}, which will work across different browser engines.</p>
+
+<div class="note">
+<p><strong>Note</strong>: There is a library called <a href="https://github.com/cwilso/webkitAudioContext-MonkeyPatch">webkitAudioContext monkeypatch</a>, which automatically fixes some of these changes to make most code targeting <code>webkitAudioContext</code> to work on the standards based <code>AudioContext</code> out of the box, but it currently doesn't handle all of the cases below.  Please consult the <a href="https://github.com/cwilso/webkitAudioContext-MonkeyPatch/blob/gh-pages/README.md">README file</a> for that library to see a list of APIs that are automatically handled by it.</p>
+</div>
+
+<h2 id="Changes_to_the_creator_methods">Changes to the creator methods</h2>
+
+<p>Three of the creator methods on <code>webkitAudioContext</code> have been renamed in {{domxref("AudioContext")}}.</p>
+
+<ul>
+ <li><code>createGainNode()</code> has been renamed to {{domxref("createGain")}}.</li>
+ <li><code>createDelayNode()</code> has been renamed to {{domxref("createDelay")}}.</li>
+ <li><code>createJavaScriptNode()</code> has been renamed to {{domxref("createScriptProcessor")}}.</li>
+</ul>
+
+<p>These are simple renames that were made in order to improve the consistency of these method names on {{domxref("AudioContext")}}.  If your code uses any of these names, as in the example below:</p>
+
+<pre class="brush: js">// Old method names
+var gain = context.createGainNode();
+var delay = context.createDelayNode();
+var js = context.createJavaScriptNode(1024);
+</pre>
+
+<p>you can rename the methods to look like this:</p>
+
+<pre class="brush: js">// New method names
+var gain = context.createGain();
+var delay = context.createDelay();
+var js = context.createScriptProcessor(1024);
+</pre>
+
+<p>The semantics of these methods remain the same in the renamed versions.</p>
+
+<h2 id="Changes_to_starting_and_stopping_nodes">Changes to starting and stopping nodes</h2>
+
+<p>In <code>webkitAudioContext</code>, there are two ways to start and stop {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}}: the <code>noteOn()</code> and <code>noteOff()</code> methods, and the <code>start()</code> and <code>stop()</code> methods.  ({{domxref("AudioBufferSourceNode")}} has yet another way of starting output: the <code>noteGrainOn()</code> method.)  The <code>noteOn()</code>/<code>noteGrainOn()</code>/<code>noteOff()</code> methods were the original way to start/stop output in these nodes, and in the newer versions of the specification, the <code>noteOn()</code> and <code>noteGrainOn()</code> methods were consolidated into a single <code>start()</code> method, and the <code>noteOff()</code> method was renamed to the <code>stop()</code> method.</p>
+
+<p>In order to port your code, you can just rename the method that you're using.  For example, if you have code like the below:</p>
+
+<pre class="brush: js">var osc = context.createOscillator();
+osc.noteOn(1);
+osc.noteOff(1.5);
+
+var src = context.createBufferSource();
+src.noteGrainOn(1, 0.25);
+src.noteOff(2);
+</pre>
+
+<p>you can change it like this in order to port it to the standard AudioContext API:</p>
+
+<pre class="brush: js">var osc = context.createOscillator();
+osc.start(1);
+osc.stop(1.5);
+
+var src = context.createBufferSource();
+src.start(1, 0.25);
+src.stop(2);</pre>
+
+<h2 id="Remove_synchronous_buffer_creation">Remove synchronous buffer creation</h2>
+
+<p>In the old WebKit implementation of Web Audio, there were two versions of <code>createBuffer()</code>: one which created an initially empty buffer, and one which took an existing {{domxref("ArrayBuffer")}} containing encoded audio, decoded it, and returned the result in the form of an {{domxref("AudioBuffer")}}.  The latter version of <code>createBuffer()</code> was potentially expensive, because it had to decode the audio buffer synchronously, and with the buffer being arbitrarily large, it could take a lot of time for this method to complete its work, and no other part of your web page's code could execute in the meantime.</p>
+
+<p>Because of these problems, this version of the <code>createBuffer()</code> method has been removed, and you should use the asynchronous <code>decodeAudioData()</code> method instead.</p>
+
+<p>The example below shows old code which downloads an audio file over the network, and then decodes it using <code>createBuffer()</code>:</p>
+
+<pre class="brush: js">var xhr = new XMLHttpRequest();
+xhr.open("GET", "/path/to/audio.ogg", true);
+xhr.responseType = "arraybuffer";
+xhr.send();
+xhr.onload = function() {
+ var decodedBuffer = context.createBuffer(xhr.response, false);
+ if (decodedBuffer) {
+ // Decoding was successful, do something useful with the audio buffer
+ } else {
+ alert("Decoding the audio buffer failed");
+ }
+};
+</pre>
+
+<p>Converting this code to use <code>decodeAudioData()</code> is relatively simple, as can be seen below:</p>
+
+<pre class="brush: js">var xhr = new XMLHttpRequest();
+xhr.open("GET", "/path/to/audio.ogg", true);
+xhr.responseType = "arraybuffer";
+xhr.send();
+xhr.onload = function() {
+ context.decodeAudioData(xhr.response, function onSuccess(decodedBuffer) {
+ // Decoding was successful, do something useful with the audio buffer
+ }, function onFailure() {
+ alert("Decoding the audio buffer failed");
+ });
+};</pre>
+
+<p>Note that the <code>decodeAudioData()</code> method is asynchronous, which means that it will return immediately, and then when the decoding finishes, one of the success or failure callback functions will get called depending on whether the audio decoding was successful.  This means that you may need to restructure your code to run the part which happened after the <code>createBuffer()</code> call in the success callback, as you can see in the example above.</p>
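+
+<p>In current browsers <code>decodeAudioData()</code> also returns a {{jsxref("Promise")}}, so as a sketch, the same flow can be written with <code>fetch()</code> instead of <code>XMLHttpRequest</code>:</p>
+
+<pre class="brush: js">fetch("/path/to/audio.ogg")
+  .then(function(response) { return response.arrayBuffer(); })
+  .then(function(arrayBuffer) { return context.decodeAudioData(arrayBuffer); })
+  .then(function(decodedBuffer) {
+    // Decoding was successful, do something useful with the audio buffer
+  })
+  .catch(function() {
+    alert("Decoding the audio buffer failed");
+  });</pre>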
+
+<h2 id="Renaming_of_AudioParam.setTargetValueAtTime">Renaming of AudioParam.setTargetValueAtTime</h2>
+
+<p>The <code>setTargetValueAtTime()</code> method on the {{domxref("AudioParam")}} interface has been renamed to <code>setTargetAtTime()</code>.  This is also a simple rename to improve the understandability of the API, and the semantics of the method are the same.  If your code is using <code>setTargetValueAtTime()</code>, you can rename it to use <code>setTargetAtTime()</code>. For example, if we have code that looks like this:</p>
+
+<pre class="brush: js"> var gainNode = context.createGain();
+ gainNode.gain.setTargetValueAtTime(0.0, 10.0, 1.0);
+</pre>
+
+<p>you can rename the method, and be compliant with the standard, like so:</p>
+
+<pre class="brush: js"> var gainNode = context.createGain();
+ gainNode.gain.setTargetAtTime(0.0, 10.0, 1.0);
+</pre>
+
+<h2 id="Enumerated_values_that_changed">Enumerated values that changed</h2>
+
+<p>The original <code>webkitAudioContext</code> API used C-style number based enumerated values in the API.  Those values have since been changed to use the Web IDL based enumerated values, which should be familiar because they are similar to things like the {{domxref("HTMLInputElement")}} property {{domxref("HTMLInputElement.type", "type")}}.</p>
+
+<h3 id="OscillatorNode.type">OscillatorNode.type</h3>
+
+<p>{{domxref("OscillatorNode")}}'s type property has been changed to use Web IDL enums.  Old code using <code>webkitAudioContext</code> can be ported to standards based {{domxref("AudioContext")}} like below:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var osc = context.createOscillator();
+osc.type = osc.SINE; // sine waveform
+osc.type = osc.SQUARE; // square waveform
+osc.type = osc.SAWTOOTH; // sawtooth waveform
+osc.type = osc.TRIANGLE; // triangle waveform
+osc.setWaveTable(table);
+var isCustom = (osc.type == osc.CUSTOM); // isCustom will be true
+
+// New standard AudioContext code:
+var osc = context.createOscillator();
+osc.type = "sine"; // sine waveform
+osc.type = "square"; // square waveform
+osc.type = "sawtooth"; // sawtooth waveform
+osc.type = "triangle"; // triangle waveform
+osc.setPeriodicWave(table); // Note: setWaveTable has been renamed to setPeriodicWave!
+var isCustom = (osc.type == "custom"); // isCustom will be true
+</pre>
+
+<h3 id="BiquadFilterNode.type">BiquadFilterNode.type</h3>
+
+<p>{{domxref("BiquadFilterNode")}}'s type property has been changed to use Web IDL enums.  Old code using <code>webkitAudioContext</code> can be ported to standards based {{domxref("AudioContext")}} like below:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var filter = context.createBiquadFilter();
+filter.type = filter.LOWPASS; // lowpass filter
+filter.type = filter.HIGHPASS; // highpass filter
+filter.type = filter.BANDPASS; // bandpass filter
+filter.type = filter.LOWSHELF; // lowshelf filter
+filter.type = filter.HIGHSHELF; // highshelf filter
+filter.type = filter.PEAKING; // peaking filter
+filter.type = filter.NOTCH; // notch filter
+filter.type = filter.ALLPASS; // allpass filter
+
+// New standard AudioContext code:
+var filter = context.createBiquadFilter();
+filter.type = "lowpass"; // lowpass filter
+filter.type = "highpass"; // highpass filter
+filter.type = "bandpass"; // bandpass filter
+filter.type = "lowshelf"; // lowshelf filter
+filter.type = "highshelf"; // highshelf filter
+filter.type = "peaking"; // peaking filter
+filter.type = "notch"; // notch filter
+filter.type = "allpass"; // allpass filter
+</pre>
+
+<h3 id="PannerNode.panningModel">PannerNode.panningModel</h3>
+
+<p>{{domxref("PannerNode")}}'s panningModel property has been changed to use Web IDL enums.  Old code using <code>webkitAudioContext</code> can be ported to standards based {{domxref("AudioContext")}} like below:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var panner = context.createPanner();
+panner.panningModel = panner.EQUALPOWER; // equalpower panning
+panner.panningModel = panner.HRTF; // HRTF panning
+
+// New standard AudioContext code:
+var panner = context.createPanner();
+panner.panningModel = "equalpower"; // equalpower panning
+panner.panningModel = "HRTF"; // HRTF panning
+</pre>
+
+<h3 id="PannerNode.distanceModel">PannerNode.distanceModel</h3>
+
+<p>{{domxref("PannerNode")}}'s <code>distanceModel</code> property has been changed to use Web IDL enums.  Old code using <code>webkitAudioContext</code> can be ported to standards based {{domxref("AudioContext")}} like below:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var panner = context.createPanner();
+panner.distanceModel = panner.LINEAR_DISTANCE; // linear distance model
+panner.distanceModel = panner.INVERSE_DISTANCE; // inverse distance model
+panner.distanceModel = panner.EXPONENTIAL_DISTANCE; // exponential distance model
+
+// New standard AudioContext code:
+var panner = context.createPanner();
+panner.distanceModel = "linear"; // linear distance model
+panner.distanceModel = "inverse"; // inverse distance model
+panner.distanceModel = "exponential"; // exponential distance model
+</pre>
+
+<h2 id="Gain_control_moved_to_its_own_node_type">Gain control moved to its own node type</h2>
+
+<p>The Web Audio standard now controls all gain using the {{domxref("GainNode")}}. Instead of setting a <code>gain</code> property directly on an audio source, you connect the source to a gain node and then control the gain using that node's <code>gain</code> parameter.</p>
+
+<h3 id="AudioBufferSourceNode">AudioBufferSourceNode</h3>
+
+<p>The <code>gain</code> attribute of {{domxref("AudioBufferSourceNode")}} has been removed.  The same functionality can be achieved by connecting the {{domxref("AudioBufferSourceNode")}} to a gain node.  See the following example:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+src.gain.value = 0.5;
+src.connect(context.destination);
+src.noteOn(0);
+
+// New standard AudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+var gain = context.createGain();
+src.connect(gain);
+gain.gain.value = 0.5;
+gain.connect(context.destination);
+src.start(0);
+</pre>
+
+<h3 id="AudioBuffer">AudioBuffer</h3>
+
+<p>The <code>gain</code> attribute of {{domxref("AudioBuffer")}} has been removed.  The same functionality can be achieved by connecting the {{domxref("AudioBufferSourceNode")}} that owns the buffer to a gain node.  See the following example:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+src.buffer.gain = 0.5;
+src.connect(context.destination);
+src.noteOn(0);
+
+// New standard AudioContext code:
+var src = context.createBufferSource();
+src.buffer = someBuffer;
+var gain = context.createGain();
+src.connect(gain);
+gain.gain.value = 0.5;
+gain.connect(context.destination);
+src.start(0);
+</pre>
+
+<h2 id="Removal_of_AudioBufferSourceNode.looping">Removal of AudioBufferSourceNode.looping</h2>
+
+<p>The <code>looping</code> attribute of {{domxref("AudioBufferSourceNode")}} has been removed.  This attribute was an alias of the <code>loop</code> attribute, so you can just use the <code>loop</code> attribute instead. Instead of having code like this:</p>
+
+<pre class="brush: js">var source = context.createBufferSource();
+source.looping = true;
+</pre>
+
+<p>you can change it to follow the latest version of the specification:</p>
+
+<pre class="brush: js">var source = context.createBufferSource();
+source.loop = true;
+</pre>
+
+<p>Note that the <code>loopStart</code> and <code>loopEnd</code> attributes are not supported in <code>webkitAudioContext</code>.</p>
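+
+<p>Both attributes are part of the standard API, so once you've ported to <code>AudioContext</code> you can use them. A small sketch, assuming <code>someBuffer</code> is an already-decoded {{domxref("AudioBuffer")}}:</p>
+
+<pre class="brush: js">var source = context.createBufferSource();
+source.buffer = someBuffer;
+source.loop = true;
+source.loopStart = 0.5; // loop between 0.5 seconds...
+source.loopEnd = 1.5;   // ...and 1.5 seconds into the buffer
+source.start(0);</pre>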
+
+<h2 id="Changes_to_determining_playback_state">Changes to determining playback state</h2>
+
+<p>The <code>playbackState</code> attribute of {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}} has been removed.  Depending on why you used this attribute, you can use the following techniques to get the same information:</p>
+
+<ul>
+ <li>If you need to compare this attribute to <code>UNSCHEDULED_STATE</code>, you can remember whether you've called <code>start()</code> on the node or not.</li>
+ <li>If you need to compare this attribute to <code>SCHEDULED_STATE</code>, remember whether you've called <code>start()</code>, and compare the value of {{domxref("AudioContext.currentTime")}} to the first argument passed to <code>start()</code>: playback is scheduled but not yet started while the current time is still before the start time.</li>
+ <li>If you need to compare this attribute to <code>PLAYING_STATE</code>, you can compare the value of {{domxref("AudioContext.currentTime")}} to the first argument passed to <code>start()</code> to know whether playback has started or not.</li>
+ <li>If you need to know when playback of the node is finished (which is the most significant use case of <code>playbackState</code>), there is a new ended event which you can use to know when playback is finished.  Please see this code example:</li>
+</ul>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var src = context.createBufferSource();
+// Some time later...
+var isFinished = (src.playbackState == src.FINISHED_STATE);
+
+// New AudioContext code:
+var src = context.createBufferSource();
+var isFinished = false;
+function endedHandler(event) {
+ isFinished = true;
+}
+src.onended = endedHandler;
+</pre>
+
+<p>The exact same changes have been applied to both {{domxref("AudioBufferSourceNode")}} and {{domxref("OscillatorNode")}}, so you can apply the same techniques to both kinds of nodes.</p>
+
+<h2 id="Removal_of_AudioContext.activeSourceCount">Removal of AudioContext.activeSourceCount</h2>
+
+<p>The <code>activeSourceCount</code> attribute has been removed from {{domxref("AudioContext")}}.  If you need to count the number of playing source nodes, you can maintain the count by handling the ended event on the source nodes, as shown above.</p>
+
+<p>Code using the <code>activeSourceCount</code> attribute of the {{domxref("AudioContext")}}, like this snippet:</p>
+
+<pre class="brush: js"> var src0 = context.createBufferSource();
+ var src1 = context.createBufferSource();
+ // Set buffers and other parameters...
+ src0.start(0);
+ src1.start(0);
+ // Some time later...
+ console.log(context.activeSourceCount);
+</pre>
+
+<p>could be rewritten like this:</p>
+
+<pre class="brush: js"> // Array to track the playing source nodes:
+ var sources = [];
+ // When starting the source, put it at the end of the array,
+ // and set a handler to make sure it gets removed when the
+ // AudioBufferSourceNode reaches its end.
+ // First argument is the AudioBufferSourceNode to start, other arguments are
+ // the argument to the |start()| method of the AudioBufferSourceNode.
+ function startSource() {
+ var src = arguments[0];
+ var startArgs = Array.prototype.slice.call(arguments, 1);
+ src.onended = function() {
+ sources.splice(sources.indexOf(src), 1);
+ }
+ sources.push(src);
+ src.start.apply(src, startArgs);
+ }
+ function activeSources() {
+ return sources.length;
+ }
+ var src0 = context.createBufferSource();
+ var src0 = context.createBufferSource();
+ // Set buffers and other parameters...
+ startSource(src0, 0);
+ startSource(src1, 0);
+ // Some time later, query the number of sources...
+ console.log(activeSources());
+</pre>
+
+<h2 id="Renaming_of_WaveTable">Renaming of WaveTable</h2>
+
+<p>The {{domxref("WaveTable")}} interface has been renamed to {{domxref("PeriodicWave")}}.  Here is how you can port old code using <code>WaveTable</code> to the standard AudioContext API:</p>
+
+<pre class="brush: js">// Old webkitAudioContext code:
+var osc = context.createOscillator();
+var table = context.createWaveTable(realArray, imaginaryArray);
+osc.setWaveTable(table);
+
+// New standard AudioContext code:
+var osc = context.createOscillator();
+var table = context.createPeriodicWave(realArray, imaginaryArray);
+osc.setPeriodicWave(table);
+</pre>
+
+<h2 id="Removal_of_some_of_the_AudioParam_read-only_attributes">Removal of some of the AudioParam read-only attributes</h2>
+
+<p>The following read-only attributes have been removed from AudioParam: <code>name</code>, <code>units</code>, <code>minValue</code>, and <code>maxValue</code>.  These used to be informational attributes.  Here is some information on how you can get these values if you need them:</p>
+
+<ul>
+ <li>The <code>name</code> attribute is a string representing the name of the {{domxref("AudioParam")}} object.  For example, the name of {{domxref("GainNode.gain")}} is <code>"gain"</code>.  You can track where the {{domxref("AudioParam")}} object is coming from in your code if you need this information.</li>
+ <li>The <code>minValue</code> and <code>maxValue</code> attributes are read-only values representing the nominal range for the {{domxref("AudioParam")}}.  For example, for {{domxref("GainNode") }}, these values are 0 and 1, respectively.  Note that these bounds are not enforced by the engine, and are merely used for informational purposes.  As an example, it's perfectly valid to set a gain value to 2, or even -1.  In order to find out these nominal values, you can consult the <a href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">specification</a>.</li>
+ <li>The <code>units</code> attribute as implemented in <code>webkitAudioContext</code> implementations is unused, and always returns 0.  There is no reason why you should need this attribute.</li>
+</ul>
+
+<h2 id="Removal_of_MediaElementAudioSourceNode.mediaElement">Removal of MediaElementAudioSourceNode.mediaElement</h2>
+
+<p>The <code>mediaElement</code> attribute of {{domxref("MediaElementAudioSourceNode")}} has been removed.  You can keep a reference to the media element used to create this node if you need to access it later.</p>
+
+<h2 id="Removal_of_MediaStreamAudioSourceNode.mediaStream">Removal of MediaStreamAudioSourceNode.mediaStream</h2>
+
+<p>The <code>mediaStream</code> attribute of {{domxref("MediaStreamAudioSourceNode")}} has been removed.  You can keep a reference to the media stream used to create this node if you need to access it later.</p>
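+
+<p>In both cases the pattern is the same. A minimal sketch (the element selector and variable names are illustrative):</p>
+
+<pre class="brush: js">var mediaElement = document.querySelector("audio");
+var elementSource = context.createMediaElementSource(mediaElement);
+// Keep using the mediaElement variable rather than elementSource.mediaElement.
+
+navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
+  var streamSource = context.createMediaStreamSource(stream);
+  // Keep using the stream variable rather than streamSource.mediaStream.
+});
+</pre>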
diff --git a/files/ko/web/api/web_audio_api/simple_synth/index.html b/files/ko/web/api/web_audio_api/simple_synth/index.html
new file mode 100644
index 0000000000..2ac3a7cf14
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/simple_synth/index.html
@@ -0,0 +1,578 @@
+---
+title: 'Example and tutorial: Simple synth keyboard'
+slug: Web/API/Web_Audio_API/Simple_synth
+tags:
+ - Audio
+ - Example
+ - Guide
+ - Media
+ - Oscillator
+ - Piano
+ - Synthesizer
+ - Tutorial
+ - Web Audio API
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<p>This article presents the code and working demo of a video keyboard you can play using the mouse. The keyboard allows you to switch among the standard waveforms as well as one custom waveform, and you can control the main gain using a volume slider beneath the keyboard. This example makes use of the following Web API interfaces: {{domxref("AudioContext")}}, {{domxref("OscillatorNode")}}, {{domxref("PeriodicWave")}}, and {{domxref("GainNode")}}.</p>
+
+<p>Because {{domxref("OscillatorNode")}} is based on {{domxref("AudioScheduledSourceNode")}}, this is to some extent an example for that as well.</p>
+
+<h2 id="The_video_keyboard">비디오 키보드</h2>
+
+<h3 id="HTML">HTML</h3>
+
+<p>There are three primary components to the display for our virtual keyboard. The first is the musical keyboard itself. We draw this as a pair of nested {{HTMLElement("div")}} elements so that we can make the keyboard horizontally scrollable if all of the keys don't fit on the screen, without having them wrap around.</p>
+
+<h4 id="The_keyboard">The keyboard</h4>
+
+<p>First, we create space to build the keyboard into. We will be programmatically constructing the keyboard, because doing so gives us the flexibility to configure each key as we determine the appropriate data for the corresponding note. In our case, we get each note's frequency from a table, but it could be calculated algorithmically as well.</p>
+
+<pre class="brush: html">&lt;div class="container"&gt;
+ &lt;div class="keyboard"&gt;&lt;/div&gt;
+&lt;/div&gt;
+</pre>
+
+<p><code>"container"</code>라는 이름의 {{HTMLElement("div")}}는 만약 이것이 이용 가능한 공간에 대해 너무 넓으면 가로로 스크롤될 수 있는 박스입니다. 건반들 자체는 <code>"keyboard"</code> 클래스의 블록 안으로 삽입될 것입니다.</p>
+
+<h4 id="The_settings_bar">설정 바</h4>
+
+<p>키보드 아래에, 우리는 레이어를 설정하기 위한 조종 장치를 놓을 것입니다. 우선은, 우리는 두 조종 장치를 가지고 있습니다: 하나는 메인 볼륨을 설정하기 위한 것이고 나머지 하나는 노트를 생성할 때 어떤 주기적인 파형을 사용할 지 고르기 위한 것입니다.</p>
+
+<h5 id="The_volume_control">볼륨 컨트롤</h5>
+
+<p>첫째로 우리는 필요한 대로 스타일될 수 있도록, 설정 바를 포함하는 <code>&lt;div&gt;</code>를 생성합니다. 그리고 나서 바의 좌측에 나타날 박스를 생성하고 라벨과 <code>"range"</code> 유형의 {{HTMLElement("input")}} 요소를 배치합니다. range 요소는 보통 슬라이더로 표현됩니다; 각 위치마다 0.01만큼 움직이며 0.0과 1.0 사이의 모든 값을 허용하게 설정합니다.</p>
+
+<pre class="brush: html">&lt;div class="settingsBar"&gt;
+ &lt;div class="left"&gt;
+ &lt;span&gt;Volume: &lt;/span&gt;
+ &lt;input type="range" min="0.0" max="1.0" step="0.01"
+ value="0.5" list="volumes" name="volume"&gt;
+ &lt;datalist id="volumes"&gt;
+ &lt;option value="0.0" label="Mute"&gt;
+ &lt;option value="1.0" label="100%"&gt;
+ &lt;/datalist&gt;
+ &lt;/div&gt;
+</pre>
+
+<p>We specify a default value of 0.5, and we provide a {{HTMLElement("datalist")}} element which is connected to the range using its {{htmlattrxref("list")}} attribute, whose value matches the ID of the option list; in this case, the data set is named <code>"volumes"</code>. This lets us provide a set of common values and special strings which the browser may optionally choose to display in some fashion; we provide names for the values 0.0 ("Mute") and 1.0 ("100%").</p>
+
+<h5 id="The_waveform_picker">The waveform picker</h5>
+
+<p>On the right side of the settings bar, we place a label and a {{HTMLElement("select")}} element named <code>"waveform"</code> whose options correspond to the available waveforms.</p>
+
+<pre class="brush: html"> &lt;div class="right"&gt;
+ &lt;span&gt;Current waveform: &lt;/span&gt;
+ &lt;select name="waveform"&gt;
+ &lt;option value="sine"&gt;Sine&lt;/option&gt;
+ &lt;option value="square" selected&gt;Square&lt;/option&gt;
+ &lt;option value="sawtooth"&gt;Sawtooth&lt;/option&gt;
+ &lt;option value="triangle"&gt;Triangle&lt;/option&gt;
+ &lt;option value="custom"&gt;Custom&lt;/option&gt;
+ &lt;/select&gt;
+ &lt;/div&gt;
+&lt;/div&gt;</pre>
+
+<div class="hidden">
+<h3 id="CSS">CSS</h3>
+
+<pre class="brush: css">.container {
+ overflow-x: scroll;
+ overflow-y: hidden;
+ width: 660px;
+ height: 110px;
+ white-space: nowrap;
+ margin: 10px;
+}
+
+.keyboard {
+ width: auto;
+ padding: 0;
+ margin: 0;
+}
+
+.key {
+ cursor: pointer;
+ font: 16px "Open Sans", "Lucida Grande", "Arial", sans-serif;
+ border: 1px solid black;
+ border-radius: 5px;
+ width: 20px;
+ height: 80px;
+ text-align: center;
+ box-shadow: 2px 2px darkgray;
+ display: inline-block;
+ position: relative;
+ margin-right: 3px;
+ user-select: none;
+ -moz-user-select: none;
+ -webkit-user-select: none;
+ -ms-user-select: none;
+}
+
+.key div {
+ position: absolute;
+ bottom: 0;
+ text-align: center;
+ width: 100%;
+ pointer-events: none;
+}
+
+.key div sub {
+ font-size: 10px;
+ pointer-events: none;
+}
+
+.key:hover {
+ background-color: #eef;
+}
+
+.key:active {
+ background-color: #000;
+ color: #fff;
+}
+
+.octave {
+ display: inline-block;
+ padding: 0 6px 0 0;
+}
+
+.settingsBar {
+ padding-top: 8px;
+ font: 14px "Open Sans", "Lucida Grande", "Arial", sans-serif;
+ position: relative;
+ vertical-align: middle;
+ width: 100%;
+ height: 30px;
+}
+
+.left {
+ width: 50%;
+ position: absolute;
+ left: 0;
+ display: table-cell;
+ vertical-align: middle;
+}
+
+.left span, .left input {
+ vertical-align: middle;
+}
+
+.right {
+ width: 50%;
+ position: absolute;
+ right: 0;
+ display: table-cell;
+ vertical-align: middle;
+}
+
+.right span {
+ vertical-align: middle;
+}
+
+.right input {
+ vertical-align: baseline;
+}</pre>
+</div>
+
+<h3 id="JavaScript">JavaScript</h3>
+
+<p>The JavaScript code begins by initializing a number of variables.</p>
+
+<pre class="brush: js">let audioContext = new (window.AudioContext || window.webkitAudioContext)();
+let oscList = [];
+let mainGainNode = null;
+</pre>
+
+<ol>
+ <li><code>audioContext</code> is set to reference the global {{domxref("AudioContext")}} object (or <code>webkitAudioContext</code> if necessary).</li>
+ <li><code>oscList</code> is set up to be ready to contain a list of all currently-playing oscillators. It starts out empty, since none are playing yet.</li>
+ <li><code>mainGainNode</code> is set to null; during the setup process, it will be configured to contain a {{domxref("GainNode")}} which all playing oscillators will connect to, allowing the overall volume to be controlled using a single slider.</li>
+</ol>
+
+<pre class="brush: js">let keyboard = document.querySelector(".keyboard");
+let wavePicker = document.querySelector("select[name='waveform']");
+let volumeControl = document.querySelector("input[name='volume']");
+</pre>
+
+<p>References to the elements we'll need access to are obtained, as shown above:</p>
+
+<ul>
+ <li><code>keyboard</code> is the container into which the keys will be placed.</li>
+ <li><code>wavePicker</code> is the {{HTMLElement("select")}} element used to choose the waveform to use for the notes.</li>
+ <li><code>volumeControl</code> is the {{HTMLElement("input")}} element (of type <code>"range"</code>) used to control the main audio volume.</li>
+</ul>
+
+<pre class="brush: js">let noteFreq = null;
+let customWaveform = null;
+let sineTerms = null;
+let cosineTerms = null;
+</pre>
+
+<p>Finally, the global variables that will be used when constructing waveforms are created:</p>
+
+<ul>
+ <li><code>noteFreq</code> is an array of arrays; each array represents one octave, containing one entry for each note in that octave. The value for each is the frequency, in hertz, of the note's tone.</li>
+ <li><code>customWaveform</code> will be set up as a {{domxref("PeriodicWave")}} describing the waveform to use when the user selects "Custom" from the waveform picker.</li>
+ <li><code>sineTerms</code> and <code>cosineTerms</code> will be used to store the data for generating the waveform; each will contain an array that's generated when the user chooses "Custom".</li>
+</ul>
+
+<h3 id="Creating_the_note_table">Creating the note table</h3>
+
+<p>The <code>createNoteTable()</code> function builds the <code>noteFreq</code> array to contain an array of objects representing each octave. Each octave, in turn, has one named property for each note in that octave; the property's name is the note's name (such as "C#" to represent C-sharp), and the value is the frequency, in hertz, of that note.</p>
+
+<pre class="brush: js">function createNoteTable() {
+ let noteFreq = [];
+ for (let i=0; i&lt; 9; i++) {
+ noteFreq[i] = [];
+ }
+
+ noteFreq[0]["A"] = 27.500000000000000;
+ noteFreq[0]["A#"] = 29.135235094880619;
+ noteFreq[0]["B"] = 30.867706328507756;
+
+ noteFreq[1]["C"] = 32.703195662574829;
+ noteFreq[1]["C#"] = 34.647828872109012;
+ noteFreq[1]["D"] = 36.708095989675945;
+ noteFreq[1]["D#"] = 38.890872965260113;
+ noteFreq[1]["E"] = 41.203444614108741;
+ noteFreq[1]["F"] = 43.653528929125485;
+ noteFreq[1]["F#"] = 46.249302838954299;
+ noteFreq[1]["G"] = 48.999429497718661;
+ noteFreq[1]["G#"] = 51.913087197493142;
+ noteFreq[1]["A"] = 55.000000000000000;
+ noteFreq[1]["A#"] = 58.270470189761239;
+ noteFreq[1]["B"] = 61.735412657015513;
+</pre>
+
+<p>... several octaves omitted here for brevity ...</p>
+
+<div class="hidden">
+<pre class="brush: js"> noteFreq[2]["C"] = 65.406391325149658;
+ noteFreq[2]["C#"] = 69.295657744218024;
+ noteFreq[2]["D"] = 73.416191979351890;
+ noteFreq[2]["D#"] = 77.781745930520227;
+ noteFreq[2]["E"] = 82.406889228217482;
+ noteFreq[2]["F"] = 87.307057858250971;
+ noteFreq[2]["F#"] = 92.498605677908599;
+ noteFreq[2]["G"] = 97.998858995437323;
+ noteFreq[2]["G#"] = 103.826174394986284;
+ noteFreq[2]["A"] = 110.000000000000000;
+ noteFreq[2]["A#"] = 116.540940379522479;
+ noteFreq[2]["B"] = 123.470825314031027;
+
+ noteFreq[3]["C"] = 130.812782650299317;
+ noteFreq[3]["C#"] = 138.591315488436048;
+ noteFreq[3]["D"] = 146.832383958703780;
+ noteFreq[3]["D#"] = 155.563491861040455;
+ noteFreq[3]["E"] = 164.813778456434964;
+ noteFreq[3]["F"] = 174.614115716501942;
+ noteFreq[3]["F#"] = 184.997211355817199;
+ noteFreq[3]["G"] = 195.997717990874647;
+ noteFreq[3]["G#"] = 207.652348789972569;
+ noteFreq[3]["A"] = 220.000000000000000;
+ noteFreq[3]["A#"] = 233.081880759044958;
+ noteFreq[3]["B"] = 246.941650628062055;
+
+ noteFreq[4]["C"] = 261.625565300598634;
+ noteFreq[4]["C#"] = 277.182630976872096;
+ noteFreq[4]["D"] = 293.664767917407560;
+ noteFreq[4]["D#"] = 311.126983722080910;
+ noteFreq[4]["E"] = 329.627556912869929;
+ noteFreq[4]["F"] = 349.228231433003884;
+ noteFreq[4]["F#"] = 369.994422711634398;
+ noteFreq[4]["G"] = 391.995435981749294;
+ noteFreq[4]["G#"] = 415.304697579945138;
+ noteFreq[4]["A"] = 440.000000000000000;
+ noteFreq[4]["A#"] = 466.163761518089916;
+ noteFreq[4]["B"] = 493.883301256124111;
+
+ noteFreq[5]["C"] = 523.251130601197269;
+ noteFreq[5]["C#"] = 554.365261953744192;
+ noteFreq[5]["D"] = 587.329535834815120;
+ noteFreq[5]["D#"] = 622.253967444161821;
+ noteFreq[5]["E"] = 659.255113825739859;
+ noteFreq[5]["F"] = 698.456462866007768;
+ noteFreq[5]["F#"] = 739.988845423268797;
+ noteFreq[5]["G"] = 783.990871963498588;
+ noteFreq[5]["G#"] = 830.609395159890277;
+ noteFreq[5]["A"] = 880.000000000000000;
+ noteFreq[5]["A#"] = 932.327523036179832;
+ noteFreq[5]["B"] = 987.766602512248223;
+
+ noteFreq[6]["C"] = 1046.502261202394538;
+ noteFreq[6]["C#"] = 1108.730523907488384;
+ noteFreq[6]["D"] = 1174.659071669630241;
+ noteFreq[6]["D#"] = 1244.507934888323642;
+ noteFreq[6]["E"] = 1318.510227651479718;
+ noteFreq[6]["F"] = 1396.912925732015537;
+ noteFreq[6]["F#"] = 1479.977690846537595;
+ noteFreq[6]["G"] = 1567.981743926997176;
+ noteFreq[6]["G#"] = 1661.218790319780554;
+ noteFreq[6]["A"] = 1760.000000000000000;
+ noteFreq[6]["A#"] = 1864.655046072359665;
+ noteFreq[6]["B"] = 1975.533205024496447;
+</pre>
+</div>
+
+<pre class="brush: js"> noteFreq[7]["C"] = 2093.004522404789077;
+ noteFreq[7]["C#"] = 2217.461047814976769;
+ noteFreq[7]["D"] = 2349.318143339260482;
+ noteFreq[7]["D#"] = 2489.015869776647285;
+ noteFreq[7]["E"] = 2637.020455302959437;
+ noteFreq[7]["F"] = 2793.825851464031075;
+ noteFreq[7]["F#"] = 2959.955381693075191;
+ noteFreq[7]["G"] = 3135.963487853994352;
+ noteFreq[7]["G#"] = 3322.437580639561108;
+ noteFreq[7]["A"] = 3520.000000000000000;
+ noteFreq[7]["A#"] = 3729.310092144719331;
+ noteFreq[7]["B"] = 3951.066410048992894;
+
+ noteFreq[8]["C"] = 4186.009044809578154;
+ return noteFreq;
+}
+</pre>
+
+<p>The result is the <code>noteFreq</code> array, with an object for each octave. Each octave object has named properties in which the property name is the note's name (such as "C#" to represent C-sharp) and the property's value is the note's frequency in hertz. In part, the resulting object looks like this:</p>
+
+<table class="standard-table">
+ <tbody>
+ <tr>
+ <th scope="row">옥타브</th>
+ <td colspan="8">음</td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ </tr>
+ <tr>
+ <th scope="row">0</th>
+ <td>"A" ⇒ 27.5</td>
+ <td>"A#" ⇒ 29.14</td>
+ <td>"B" ⇒ 30.87</td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ </tr>
+ <tr>
+ <th scope="row">1</th>
+ <td>"C" ⇒ 32.70</td>
+ <td>"C#" ⇒ 34.65</td>
+ <td>"D" ⇒ 36.71</td>
+ <td>"D#" ⇒ 38.89</td>
+ <td>"E" ⇒ 41.20</td>
+ <td>"F" ⇒ 43.65</td>
+ <td>"F#" ⇒ 46.25</td>
+ <td>"G" ⇒ 49</td>
+ <td>"G#" ⇒ 51.9</td>
+ <td>"A" ⇒ 55</td>
+ <td>"A#" ⇒ 58.27</td>
+ <td>"B" ⇒ 61.74</td>
+ </tr>
+ <tr>
+ <th scope="row">2</th>
+ <td colspan="12">. . .</td>
+ </tr>
+ </tbody>
+</table>
+
+<p>With this table in place, we can find out the frequency for a given note in a particular octave quite easily. If we want the frequency for the note G# in octave 1, we use <code>noteFreq[1]["G#"]</code> and get the value 51.9 as a result.</p>
+
+<div class="note">
+<p>The values in the example table above have been rounded to two decimal places.</p>
+</div>
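+
+<p>As mentioned earlier, these frequencies could also be computed rather than tabulated. A sketch using the equal temperament formula, with A4 = 440 Hz (the helper name is hypothetical):</p>
+
+<pre class="brush: js">// Frequency of the note n semitones away from A4 (440 Hz) in equal temperament.
+function noteFrequency(semitonesFromA4) {
+  return 440 * Math.pow(2, semitonesFromA4 / 12);
+}
+
+noteFrequency(0);   // 440 (A4)
+noteFrequency(3);   // ≈ 523.25 (C5)
+noteFrequency(-12); // 220 (A3)
+</pre>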
+
+<div class="hidden">
+<p>This polyfill stands in when <code>Object.entries()</code> doesn't exist.</p>
+
+<pre class="brush: js">if (!Object.entries) {
+ Object.entries = function entries(O) {
+ return reduce(keys(O), (e, k) =&gt; concat(e, typeof k === 'string' &amp;&amp; isEnumerable(O, k) ? [[k, O[k]]] : []), []);
+ };
+}
+</pre>
+</div>
+
+<h3 id="Building_the_keyboard">키보드 만들기</h3>
+
+<p><code>setup()</code> 함수의 역할은 키보드를 만들고 앱이 음악을 재생하도록 준비하는 것입니다.</p>
+
+<pre class="brush: js">function setup() {
+ noteFreq = createNoteTable();
+
+ volumeControl.addEventListener("change", changeVolume, false);
+
+ mainGainNode = audioContext.createGain();
+ mainGainNode.connect(audioContext.destination);
+ mainGainNode.gain.value = volumeControl.value;
+
+ // Create the keys; skip any that are sharp or flat; for
+ // our purposes we don't need them. Each octave is inserted
+ // into a &lt;div&gt; of class "octave".
+
+ noteFreq.forEach(function(keys, idx) {
+ let keyList = Object.entries(keys);
+ let octaveElem = document.createElement("div");
+ octaveElem.className = "octave";
+
+ keyList.forEach(function(key) {
+ if (key[0].length == 1) {
+ octaveElem.appendChild(createKey(key[0], idx, key[1]));
+ }
+ });
+
+ keyboard.appendChild(octaveElem);
+ });
+
+ document.querySelector("div[data-note='B'][data-octave='5']").scrollIntoView(false);
+
+ sineTerms = new Float32Array([0, 0, 1, 0, 1]);
+ cosineTerms = new Float32Array(sineTerms.length);
+ customWaveform = audioContext.createPeriodicWave(cosineTerms, sineTerms);
+
+ for (let i=0; i&lt;9; i++) {
+ oscList[i] = {};
+ }
+}
+
+setup();</pre>
+
+<ol>
+ <li>The table which maps note names and octaves to their frequencies is built by calling <code>createNoteTable()</code>.</li>
+ <li>An event handler is established by calling {{domxref("EventTarget.addEventListener", "addEventListener()")}} to handle {{event("change")}} events on the main gain control. This will update the main gain node's volume to the new value of the control.</li>
+ <li>Next, we iterate over each octave in the note frequencies table. For each octave, we use {{jsxref("Object.entries()")}} to get a list of the notes in that octave.</li>
+ <li>Create a {{HTMLElement("div")}} to contain that octave's notes (so we can have a little bit of space between octaves), and set its class name to "octave".</li>
+ <li>For each key in the octave, we check to see whether the note's name is more than one character long. We skip these, because we're ignoring the sharp notes in this example. If the note's name is only one character, we call <code>createKey()</code>, specifying the note string, octave, and frequency. The returned element is appended to the octave element created in step 4.</li>
+ <li>When each octave element has been built, it's appended to the keyboard.</li>
+ <li>Once the keyboard has been constructed, we scroll the note "B" in octave 5 into view; this has the effect of making middle C visible along with its surrounding keys.</li>
+ <li>Then a new custom waveform is built using {{domxref("AudioContext.createPeriodicWave()")}}. This waveform will be used any time the user selects "Custom" from the waveform picker.</li>
+ <li>Finally, the oscillator list is initialized to ensure it's ready to receive information identifying which oscillators are associated with which keys.</li>
+</ol>
+
+<h4 id="Creating_a_key">건반 생성하기</h4>
+
+<p><code>createKey()</code> 함수는 가상 키보드에 표시하기를 원하는 각각의 건반에 대해 한 번 호출됩니다. 이것은 건반과 건반의 라벨으로 구성되는 요소를 생성하고, 추후의 사용을 위해 그 요소에 데이터 특성을 추가하고, 그리고 우리가 관심을 가지고 있는 이벤트에 대한 이벤트 핸들러를 부여합니다.</p>
+
+<pre class="brush: js">function createKey(note, octave, freq) {
+ let keyElement = document.createElement("div");
+ let labelElement = document.createElement("div");
+
+ keyElement.className = "key";
+ keyElement.dataset["octave"] = octave;
+ keyElement.dataset["note"] = note;
+ keyElement.dataset["frequency"] = freq;
+
+ labelElement.innerHTML = note + "&lt;sub&gt;" + octave + "&lt;/sub&gt;";
+ keyElement.appendChild(labelElement);
+
+ keyElement.addEventListener("mousedown", notePressed, false);
+ keyElement.addEventListener("mouseup", noteReleased, false);
+ keyElement.addEventListener("mouseover", notePressed, false);
+ keyElement.addEventListener("mouseleave", noteReleased, false);
+
+ return keyElement;
+}
+</pre>
+
+<p>After creating the elements that will represent the key and its label, we configure the key's element by setting its class to "key" (which establishes its appearance). Then we add {{htmlattrxref("data-*")}} attributes which contain the key's octave (the <code>data-octave</code> attribute), the string representing the note to play (the <code>data-note</code> attribute), and the frequency in hertz (the <code>data-frequency</code> attribute). This will allow us to easily fetch that information as needed when handling events.</p>
+
+<h3 id="Making_music">Making music</h3>
+
+<h4 id="Playing_a_tone">Playing a tone</h4>
+
+<p>The <code>playTone()</code> function's job is to play a tone at the given frequency. It will be used by the event handlers for the keyboard keys, so that they play the appropriate notes.</p>
+
+<pre class="brush: js">function playTone(freq) {
+ let osc = audioContext.createOscillator();
+ osc.connect(mainGainNode);
+
+ let type = wavePicker.options[wavePicker.selectedIndex].value;
+
+ if (type == "custom") {
+ osc.setPeriodicWave(customWaveform);
+ } else {
+ osc.type = type;
+ }
+
+ osc.frequency.value = freq;
+ osc.start();
+
+ return osc;
+}
+</pre>
+
+<p><code>playTone()</code> begins by creating a new {{domxref("OscillatorNode")}} by calling the {{domxref("AudioContext.createOscillator()")}} method. We then connect it to the main gain node by calling the new oscillator's {{domxref("OscillatorNode.connect()")}} method, which tells the oscillator where to send its output. By doing this, changing the gain of the main gain node will affect the volume of all tones being generated.</p>
+
+<p>Then we get the type of waveform to use by checking the value of the waveform picker in the settings bar. If the user has set it to <code>"custom"</code>, we call {{domxref("OscillatorNode.setPeriodicWave()")}} to configure the oscillator to use our custom waveform. Doing this automatically sets the oscillator's {{domxref("OscillatorNode.type", "type")}} to <code>custom</code>. If any other waveform is selected in the wave picker, we set the oscillator's type to the picker's value; that value will be one of <code>sine</code>, <code>square</code>, <code>triangle</code>, and <code>sawtooth</code>.</p>
+
+<p>The oscillator's frequency is set to the value specified in the <code>freq</code> parameter by setting the value of the {{domxref("Oscillator.frequency")}} {{domxref("AudioParam")}} object. Then, at last, the oscillator is started up so it begins to produce sound by calling the oscillator's inherited {{domxref("AudioScheduledSourceNode.start()")}} method.</p>
+
+<h4 id="Playing_a_tone_2">Playing a note</h4>
+
+<p>When a {{event("mousedown")}} or {{event("mouseover")}} event occurs on a key, we want to start playing the corresponding note. The <code>notePressed()</code> function is used as the event handler for these events.</p>
+
+<pre class="brush: js">function notePressed(event) {
+ if (event.buttons &amp; 1) {
+ let dataset = event.target.dataset;
+
+ if (!dataset["pressed"]) {
+ let octave = +dataset["octave"];
+ oscList[octave][dataset["note"]] = playTone(dataset["frequency"]);
+ dataset["pressed"] = "yes";
+ }
+ }
+}
+</pre>
+
+<p>We start by checking whether the primary mouse button is pressed, for two reasons. First, we only want to allow the primary mouse button to trigger notes. Second, and more importantly, we are using this to handle {{event("mouseover")}} for cases in which the user is dragging from note to note, and we only want to start playing the note if the mouse was pressed when it entered the element.</p>
+
+<p>If the mouse button is in fact down, we get the pressed key's {{htmlattrxref("dataset")}} attribute; this makes it easy to access the custom data attributes on the element. We look for a <code>data-pressed</code> attribute; if there isn't one (which indicates that the note isn't already playing), we call <code>playTone()</code> to start playing the note, passing in the value of the element's <code>data-frequency</code> attribute. The returned oscillator is stored into <code>oscList</code> for future reference, and <code>data-pressed</code> is set to <code>yes</code> to indicate that the note is playing, so we don't start it again the next time this is called.</p>
+
+<h4 id="Stopping_a_tone">Stopping a tone</h4>
+
+<p>The <code>noteReleased()</code> function is the event handler called when the user releases the mouse button or moves the mouse out of the key that's currently playing.</p>
+
+<pre class="brush: js">function noteReleased(event) {
+ let dataset = event.target.dataset;
+
+ if (dataset &amp;&amp; dataset["pressed"]) {
+ let octave = +dataset["octave"];
+ oscList[octave][dataset["note"]].stop();
+ delete oscList[octave][dataset["note"]];
+ delete dataset["pressed"];
+ }
+}
+</pre>
+
+<p><code>noteReleased()</code> uses the custom <code>data-octave</code> and <code>data-note</code> attributes to look up the key's oscillator, then calls the oscillator's inherited {{domxref("AudioScheduledSourceNode.stop", "stop()")}} method to stop playing the note. Finally, the <code>oscList</code> entry for the note is cleared and the <code>data-pressed</code> attribute is removed from the key element (as identified by {{domxref("event.target")}}), to indicate that the note is not currently playing.</p>
+
+<h4 id="main">Changing the main volume</h4>
+
+<p>The volume slider in the settings bar provides a simple interface to change the gain value on the main gain node, thereby changing the loudness of all playing notes. The <code>changeVolume()</code> method is the handler for the slider's {{event("change")}} event.</p>
+
+<pre class="brush: js">function changeVolume(event) {
+ mainGainNode.gain.value = volumeControl.value;
+}
+</pre>
+
+<p>This sets the value of the main gain node's <code>gain</code> {{domxref("AudioParam")}} to the slider's new value.</p>
+
+<h3 id="Result">Result</h3>
+
+<p>Put all this code together, and the result is a simple but working point-and-click musical keyboard.</p>
+
+<p>{{ EmbedLiveSample('The_video_keyboard', 680, 200) }}</p>
+
+<h2 id="See_also">같이 보기</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a></li>
+ <li>{{domxref("OscillatorNode")}}</li>
+ <li>{{domxref("GainNode")}}</li>
+ <li>{{domxref("AudioContext")}}</li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/tools/index.html b/files/ko/web/api/web_audio_api/tools/index.html
new file mode 100644
index 0000000000..beee9d6fb4
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/tools/index.html
@@ -0,0 +1,41 @@
+---
+title: Tools for analyzing Web Audio usage
+slug: Web/API/Web_Audio_API/Tools
+tags:
+ - API
+ - Audio
+ - Debugging
+ - Media
+ - Tools
+ - Web
+ - Web Audio
+ - Web Audio API
+ - sound
+---
+<div>{{APIRef("Web Audio API")}}</div>
+
+<p>While working on your Web Audio API code, you may find that you need tools to analyze the graph of nodes you create or to otherwise debug your work. This article discusses tools available to help you do that.</p>
+
+<h2 id="Chrome">Chrome</h2>
+
+<p>A handy web audio inspector can be found in the <a href="https://chrome.google.com/webstore/detail/web-audio-inspector/cmhomipkklckpomafalojobppmmidlgl">Chrome Web Store</a>.</p>
+
+<h2 id="Edge">Edge</h2>
+
+<p><em>Add information for developers using Microsoft Edge.</em></p>
+
+<h2 id="Firefox">Firefox</h2>
+
+<p>Firefox offers a native <a href="/en-US/docs/Tools/Web_Audio_Editor">Web Audio Editor</a>.</p>
+
+<h2 id="Safari">Safari</h2>
+
+<p><em>Add information for developers working in Safari.</em></p>
+
+<h2 id="See_also">See also</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
+ <li><a href="/en-US/docs/Web/Apps/Fundamentals/Audio_and_video_delivery/Web_Audio_API_cross_browser">Writing Web Audio API code that works in every browser</a></li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/using_audioworklet/index.html b/files/ko/web/api/web_audio_api/using_audioworklet/index.html
new file mode 100644
index 0000000000..b103225f09
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/using_audioworklet/index.html
@@ -0,0 +1,325 @@
+---
+title: Background audio processing using AudioWorklet
+slug: Web/API/Web_Audio_API/Using_AudioWorklet
+tags:
+ - API
+ - Audio
+ - AudioWorklet
+ - Background
+ - Examples
+ - Guide
+ - Processing
+ - Web Audio
+ - Web Audio API
+ - WebAudio API
+ - sound
+---
+<p>{{APIRef("Web Audio API")}}</p>
+
+<p>When the Web Audio API was first introduced to browsers, it included the ability to use JavaScript code to create custom audio processors that would be invoked to perform real-time audio manipulations, in the form of the <code>ScriptProcessorNode</code>. Its drawback was simple: it ran on the main thread, thus blocking everything else going on until it completed execution. This was far less than ideal, especially for something that can be as computationally expensive as audio processing.</p>
+
+<p>Enter {{domxref("AudioWorklet")}}. An audio context's audio worklet is a {{domxref("Worklet")}} which runs off the main thread, executing audio processing code added to it by calling the context's {{domxref("Worklet.addModule", "audioWorklet.addModule()")}} method. Calling <code>addModule()</code> loads the specified JavaScript file, which should contain the implementation of the audio processor. With the processor registered, you can create a new {{domxref("AudioWorkletNode")}} which passes the audio through the processor's code when the node is linked into the chain of audio nodes along with any other audio nodes.</p>
+
+<p><span class="seoSummary">The process of creating an audio processor using JavaScript, establishing it as an audio worklet processor, and then using that processor within a Web Audio application is the topic of this article.</span></p>
+
+<p>It's worth noting that because audio processing can often involve substantial computation, your processor may benefit greatly from being built using <a href="/en-US/docs/WebAssembly">WebAssembly</a>, which brings near-native or fully native performance to web apps. Implementing your audio processing algorithm using WebAssembly can make it perform markedly better.</p>
+
+<h2 id="High_level_overview">High level overview</h2>
+
+<p>Before we start looking at the use of AudioWorklet on a step-by-step basis, let's start with a brief high-level overview of what's involved.</p>
+
+<ol>
+ <li>Create a module that defines an audio worklet processor class, based on {{domxref("AudioWorkletProcessor")}}, which takes audio from one or more incoming sources, performs its operation on the data, and outputs the resulting audio data.</li>
+ <li>Access the audio context's {{domxref("AudioWorklet")}} through its {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}} property, and call the audio worklet's {{domxref("Worklet.addModule", "addModule()")}} method to install the audio worklet processor module.</li>
+ <li>As needed, create audio processing nodes by passing the processor's name (which is defined by the module) to the {{domxref("AudioWorkletNode.AudioWorkletNode", "AudioWorkletNode()")}} constructor.</li>
+ <li>Set up any audio parameters the {{domxref("AudioWorkletNode")}} needs, or that you wish to configure. These are defined in the audio worklet processor module.</li>
+ <li>Connect the created <code>AudioWorkletNode</code>s into your audio processing pipeline as you would any other node, then use your audio pipeline as usual.</li>
+</ol>
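+
+<p>In code, those steps look roughly like the following sketch; the module URL, processor name, and <code>gain</code> parameter here are placeholders standing in for your own:</p>
+
+<pre class="brush: js">async function setUpWorkletNode() {
+  const audioContext = new AudioContext();
+
+  // 2. Install the module that registers the processor class.
+  await audioContext.audioWorklet.addModule("my-processor-module.js");
+
+  // 3. Create a node that runs the processor the module registered.
+  const workletNode = new AudioWorkletNode(audioContext, "my-audio-processor");
+
+  // 4. Configure any parameters the processor exposes (here, a
+  // hypothetical "gain" parameter).
+  workletNode.parameters.get("gain").value = 0.5;
+
+  // 5. Wire it into the graph like any other node.
+  workletNode.connect(audioContext.destination);
+
+  return workletNode;
+}
+</pre>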
+
+<p>Throughout the remainder of this article, we'll look at these steps in more detail, with examples (including working examples you can try out on your own).</p>
+
+<p>The example code found on this page is derived from <a href="https://mdn.github.io/webaudio-examples/audioworklet/">this working example</a> which is part of MDN's <a href="https://github.com/mdn/webaudio-examples/">GitHub repository of Web Audio examples</a>. The example creates an oscillator node and adds white noise to it using an {{domxref("AudioWorkletNode")}} before playing the resulting sound out. Slider controls are available to allow controlling the gain of both the oscillator and the audio worklet's output.</p>
+
+<p><a href="https://github.com/mdn/webaudio-examples/tree/master/audioworklet"><strong>See the code</strong></a></p>
+
+<p><a href="https://mdn.github.io/webaudio-examples/audioworklet/"><strong>Try it live</strong></a></p>
+
+<h2 id="Creating_an_audio_worklet_processor">Creating an audio worklet processor</h2>
+
+<p>Fundamentally, an audio worklet processor (which we'll refer to usually as either an "audio processor" or as a "processor" because otherwise this article will be about twice as long) is implemented using a JavaScript module that defines and installs the custom audio processor class.</p>
+
+<h3 id="Structure_of_an_audio_worklet_processor">Structure of an audio worklet processor</h3>
+
+<p>An audio worklet processor is a JavaScript module which consists of the following:</p>
+
+<ul>
+ <li>A JavaScript class which defines the audio processor. This class extends the {{domxref("AudioWorkletProcessor")}} class.</li>
+ <li>The audio processor class must implement a {{domxref("AudioWorkletProcessor.process", "process()")}} method, which receives incoming audio data and writes back out the data as manipulated by the processor.</li>
+ <li>The module installs the new audio worklet processor class by calling {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, specifying a name for the audio processor and the class that defines the processor.</li>
+</ul>
+
+<p>A single audio worklet processor module may define multiple processor classes, registering each of them with individual calls to <code>registerProcessor()</code>. As long as each has its own unique name, this will work just fine. It's also more efficient than loading multiple modules over the network or even from the user's local disk.</p>
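+
+<p>For instance, a module might register two processors like this (a sketch; both class bodies are elided):</p>
+
+<pre class="brush: js">class WhiteNoiseProcessor extends AudioWorkletProcessor {
+  /* ... */
+}
+
+class PassthroughProcessor extends AudioWorkletProcessor {
+  /* ... */
+}
+
+registerProcessor("white-noise-processor", WhiteNoiseProcessor);
+registerProcessor("passthrough-processor", PassthroughProcessor);
+</pre>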
+
+<h3 id="Basic_code_framework">Basic code framework</h3>
+
+<p>The barest framework of an audio processor class looks like this:</p>
+
+<pre class="brush: js">class MyAudioProcessor extends AudioWorkletProcessor {
+  constructor() {
+  super();
+  }
+
+  process(inputList, outputList, parameters) {
+  /* using the inputs (or not, as needed), write the output
+  into each of the outputs */
+
+  return true;
+  }
+};
+
+registerProcessor("my-audio-processor", MyAudioProcessor);
+</pre>
+
+<p>After the implementation of the processor comes a call to the global function {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, which is only available within the scope of the audio context's {{domxref("AudioWorklet")}}, which is the invoker of the processor script as a result of your call to {{domxref("Worklet.addModule", "audioWorklet.addModule()")}}. This call to <code>registerProcessor()</code> registers your class as the basis for any {{domxref("AudioWorkletProcessor")}}s created when {{domxref("AudioWorkletNode")}}s are set up.</p>
+
+<p>This is the barest framework and actually has no effect until code is added into <code>process()</code> to do something with those inputs and outputs. Which brings us to talking about those inputs and outputs.</p>
+
+<h3 id="The_input_and_output_lists">The input and output lists</h3>
+
+<p>The lists of inputs and outputs can be a little confusing at first, even though they're actually very simple once you realize what's going on.</p>
+
+<p>Let's start at the inside and work our way out. Fundamentally, the audio for a single audio channel (such as the left speaker or the subwoofer, for example) is represented as a <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array">Float32Array</a></code> whose values are the individual audio samples. By specification, each block of audio your <code>process()</code> function receives contains 128 frames (that is, 128 samples for each channel), but it is planned that <em>this value will change in the future</em>, and may in fact vary depending on circumstances, so you should <em>always</em> check the array's <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray/length">length</a></code> rather than assuming a particular size. It is, however, guaranteed that the inputs and outputs will have the same block length.</p>
+
+<p>Each input can have a number of channels. A mono input has a single channel; stereo input has two channels. Surround sound might have six or more channels. So each input is, in turn, an array of channels. That is, an array of <code>Float32Array</code> objects.</p>
+
+<p>Then, there can be multiple inputs, so the <code>inputList</code> is an array of arrays of <code>Float32Array</code> objects. Each input may have a different number of channels, and each channel has its own array of samples.</p>
+
+<p>Thus, given the input list <code>inputList</code>:</p>
+
+<pre class="brush: js">const numberOfInputs = inputList.length;
+const firstInput = inputList[0];
+
+const firstInputChannelCount = firstInput.length;
+const firstInputFirstChannel = firstInput[0]; // (or inputList[0][0])
+
+const firstChannelByteCount = firstInputFirstChannel.length;
+const firstByteOfFirstChannel = firstInputFirstChannel[0]; // (or inputList[0][0][0])
+</pre>
+
+<p>The output list is structured in exactly the same way; it's an array of outputs, each of which is an array of channels, each of which is an array of <code>Float32Array</code> objects, which contain the samples for that channel.</p>
+
+<p>How you use the inputs and how you generate the outputs depends very much on your processor. If your processor is just a generator, it can ignore the inputs and just replace the contents of the outputs with the generated data. Or you can process each input independently, applying an algorithm to the incoming data on each channel of each input and writing the results into the corresponding outputs' channels (keeping in mind that the number of inputs and outputs may differ, and the channel counts on those inputs and outputs may also differ). Or you can take all the inputs and perform mixing or other computations that result in a single output being filled with data (or all the outputs being filled with the same data).</p>
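+
+<p>As a concrete sketch of the generator case, here's a <code>process()</code> method that ignores its inputs entirely and fills every channel of every output with white noise:</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  // Fill each channel of each output with random samples in the range [-1, 1).
+  for (const output of outputList) {
+    for (const channel of output) {
+      for (let i = 0; i &lt; channel.length; i++) {
+        channel[i] = Math.random() * 2 - 1;
+      }
+    }
+  }
+
+  return true;
+}
+</pre>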
+
+<p>It's entirely up to you. This is a very powerful tool in your audio programming toolkit.</p>
+
+<h3 id="Processing_multiple_inputs">Processing multiple inputs</h3>
+
+<p>Let's take a look at an implementation of <code>process()</code> that can process multiple inputs, with each input being used to generate the corresponding output. Any excess inputs are ignored.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const sourceLimit = Math.min(inputList.length, outputList.length);
+
+  for (let inputNum = 0; inputNum &lt; sourceLimit; inputNum++) {
+    let input = inputList[inputNum];
+    let output = outputList[inputNum];
+    let channelCount = Math.min(input.length, output.length);
+
+    for (let channelNum = 0; channelNum &lt; channelCount; channelNum++) {
+      let sampleCount = input[channelNum].length;
+
+      for (let i = 0; i &lt; sampleCount; i++) {
+        let sample = input[channelNum][i];
+
+ /* Manipulate the sample */
+
+        output[channelNum][i] = sample;
+      }
+    }
+  };
+
+  return true;
+}
+</pre>
+
+<p>Note that when determining the number of sources to process and send through to the corresponding outputs, we use <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/min">Math.min()</a></code> to ensure that we only process as many channels as we have room for in the output list. The same check is performed when determining how many channels to process in the current input; we only process as many as there are room for in the destination output. This avoids errors due to overrunning these arrays.</p>
+
+<h3 id="Mixing_inputs">Mixing inputs</h3>
+
+<p>Many nodes perform <strong>mixing</strong> operations, where the inputs are combined in some way into a single output. This is demonstrated in the following example.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const sourceLimit = Math.min(inputList.length, outputList.length);
+   for (let inputNum = 0; inputNum &lt; sourceLimit; inputNum++) {
+     let input = inputList[inputNum];
+     let output = outputList[0];
+     let channelCount = Math.min(input.length, output.length);
+
+     for (let channelNum = 0; channelNum &lt; channelCount; channelNum++) {
+       let sampleCount = input[channelNum].length;
+
+       for (let i = 0; i &lt; sampleCount; i++) {
+         let sample = output[channelNum][i] + input[channelNum][i];
+
+ if (sample &gt; 1.0) {
+  sample = 1.0;
+  } else if (sample &lt; -1.0) {
+  sample = -1.0;
+  }
+
+         output[channelNum][i] = sample;
+       }
+     }
+   };
+
+  return true;
+}
+</pre>
+
+<p>This is similar code to the previous sample in many ways, but only the first output—<code>outputList[0]</code>—is altered. Each sample is added to the corresponding sample in the output buffer, with a simple code fragment in place to prevent the samples from exceeding the legal range of -1.0 to 1.0 by capping the values; there are other ways to avoid clipping that are perhaps less prone to distortion, but this is a simple example that's better than nothing.</p>
+
+<h2 id="Lifetime_of_an_audio_worklet_processor">Lifetime of an audio worklet processor</h2>
+
+<p>The only means by which you can influence the lifespan of your audio worklet processor is through the value returned by <code>process()</code>, which should be a Boolean value indicating whether or not to override the {{Glossary("user agent")}}'s decision-making as to whether or not your node is still in use.</p>
+
+<p>In general, the lifetime policy of any audio node is simple: if the node is still considered to be actively processing audio, it will continue to be used. In the case of an {{domxref("AudioWorkletNode")}}, the node is considered to be active if its <code>process()</code> function returns <code>true</code> <em>and</em> the node is either generating content as a source for audio data, or is receiving data from one or more inputs.</p>
+
+<p>Specifying a value of <code>true</code> as the result from your <code>process()</code> function in essence tells the Web Audio API that your processor needs to keep being called even if the API doesn't think there's anything left for you to do. In other words, <code>true</code> overrides the API's logic and gives you control over your processor's lifetime policy, keeping the processor's owning {{domxref("AudioWorkletNode")}} running even when it would otherwise decide to shut down the node.</p>
+
+<p>Returning <code>false</code> from the <code>process()</code> method tells the API that it should follow its normal logic and shut down your processor node if it deems it appropriate to do so. If the API determines that your node is no longer needed, <code>process()</code> will not be called again.</p>
+
+<div class="notecard note">
+<p><strong>Note:</strong> At this time, unfortunately, Chrome does not implement this algorithm in a manner that matches the current standard. Instead, it keeps the node alive if you return <code>true</code> and shuts it down if you return <code>false</code>. Thus for compatibility reasons you must always return <code>true</code> from <code>process()</code>, at least on Chrome. However, once <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=921354">this Chrome issue</a> is fixed, you will want to change this behavior if possible as it may have a slight negative impact on performance.</p>
+</div>
+
+<h2 id="Creating_an_audio_processor_worklet_node">Creating an audio processor worklet node</h2>
+
+<p>To create an audio node that pumps blocks of audio data through an {{domxref("AudioWorkletProcessor")}}, you need to follow these simple steps:</p>
+
+<ol>
+ <li>Load and install the audio processor module</li>
+ <li>Create an {{domxref("AudioWorkletNode")}}, specifying the audio processor module to use by its name</li>
+ <li>Connect inputs to the <code>AudioWorkletNode</code> and its outputs to appropriate destinations (either other nodes or the {{domxref("AudioContext")}} object's {{domxref("AudioContext.destination", "destination")}} property).</li>
+</ol>
+
+<p>To use an audio worklet processor, you can use code similar to the following:</p>
+
+<pre class="brush: js">let audioContext = null;
+
+async function createMyAudioProcessor() {
+  if (!audioContext) {
+  try {
+   audioContext = new AudioContext();
+  await audioContext.resume();
+   await audioContext.audioWorklet.addModule("module-url/module.js");
+  } catch(e) {
+  return null;
+  }
+ }
+
+  return new AudioWorkletNode(audioContext, "processor-name");
+}
+</pre>
+
+<p>This <code>createMyAudioProcessor()</code> function creates and returns a new instance of {{domxref("AudioWorkletNode")}} configured to use your audio processor. It also handles creating the audio context if it hasn't already been done.</p>
+
+<p>To ensure the context is usable, the function first creates it if needed and resumes it, then adds the module containing the processor to the worklet. Once that's done, it instantiates and returns a new <code>AudioWorkletNode</code>. With that in hand, you connect it to other nodes and otherwise use it just like any other node.</p>
+
+<p>You can then create a new audio processor node by doing this:</p>
+
+<pre class="brush: js">let newProcessorNode = createMyAudioProcessor();</pre>
+
+<p>If the returned value, <code>newProcessorNode</code>, is non-<code>null</code>, we have a valid audio context with its processor node in place and ready to use.</p>
+
+<h2 id="Supporting_audio_parameters">Supporting audio parameters</h2>
+
+<p>Just like any other Web Audio node, {{domxref("AudioWorkletNode")}} supports parameters, which are shared with the {{domxref("AudioWorkletProcessor")}} that does the actual work.</p>
+
+<h3 id="Adding_parameter_support_to_the_processor">Adding parameter support to the processor</h3>
+
+<p>To add parameters to an {{domxref("AudioWorkletNode")}}, you need to define them within your {{domxref("AudioWorkletProcessor")}}-based processor class in your module. This is done by adding the static getter {{domxref("AudioWorkletProcessor.parameterDescriptors", "parameterDescriptors")}} to your class. This getter should return an array of parameter descriptor objects, one for each parameter supported by the processor.</p>
+
+<p>In the following implementation of <code>parameterDescriptors()</code>, the returned array has two parameter descriptors. The first defines <code>gain</code> as a value between 0 and 1, with a default value of 0.5. The second parameter is named <code>frequency</code> and defaults to 440.0, with a range from 27.5 to 4186.009, inclusive.</p>
+
+<pre class="brush: js">static get parameterDescriptors() {
+ return [
+ {
+ name: "gain",
+ defaultValue: 0.5,
+ minValue: 0,
+ maxValue: 1
+ },
+  {
+  name: "frequency",
+  defaultValue: 440.0;
+  minValue: 27.5,
+  maxValue: 4186.009
+  }
+ ];
+}</pre>
+
+<p>Accessing your processor node's parameters is as simple as looking them up in the <code>parameters</code> object passed into your implementation of {{domxref("AudioWorkletProcessor.process", "process()")}}. Within the <code>parameters</code> object are arrays, one for each of your parameters, and sharing the same names as your parameters.</p>
+
+<dl>
+ <dt>A-rate parameters</dt>
+ <dd>For a-rate parameters—parameters whose values automatically change over time—the parameter's entry in the <code>parameters</code> object is an array of floating-point values (a <code>Float32Array</code>), one for each frame in the block being processed. These values are to be applied to the corresponding frames.</dd>
+ <dt>K-rate parameters</dt>
+ <dd>K-rate parameters, on the other hand, can only change once per block, so the parameter's array has only a single entry. Use that value for every frame in the block.</dd>
+</dl>
+
+<p>In the code below, we see a <code>process()</code> function that handles a <code>gain</code> parameter which can be used as either an a-rate or k-rate parameter. Our node only supports one input, so it just takes the first input in the list, applies the gain to it, and writes the resulting data to the first output's buffer.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const input = inputList[0];
+  const output = outputList[0];
+  const gain = parameters.gain;
+
+  for (let channelNum = 0; channelNum &lt; input.length; channel++) {
+  const inputChannel = input[channel];
+  const outputChannel = output[channel];
+
+ // If gain.length is 1, it's a k-rate parameter, so apply
+ // the first entry to every frame. Otherwise, apply each
+ // entry to the corresponding frame.
+
+  if (gain.length === 1) {
+  for (let i = 0; i &lt; inputChannel.length; i++) {
+  outputChannel[i] = inputChannel[i] * gain[0];
+  }
+  } else {
+  for (let i = 0; i &lt; inputChannel.length; i++) {
+  outputChannel[i] = inputChannel[i] * gain[i];
+  }
+  }
+  }
+
+  return true;
+}
+</pre>
+
+<p>Here, if <code>gain.length</code> indicates that there's only a single value in the <code>gain</code> parameter's array of values, the first entry in the array is applied to every frame in the block. Otherwise, for each frame in the block, the corresponding entry in <code>gain[]</code> is applied.</p>
+
+<h3 id="Accessing_parameters_from_the_main_thread_script">Accessing parameters from the main thread script</h3>
+
+<p>Your main thread script can access the parameters just like it can any other node. To do so, first you need to get a reference to the parameter by calling the {{domxref("AudioWorkletNode")}}'s {{domxref("AudioWorkletNode.parameters", "parameters")}} property's {{domxref("AudioParamMap.get", "get()")}} method:</p>
+
+<pre class="brush: js">let gainParam = myAudioWorkletNode.parameters.get("gain");
+</pre>
+
+<p>The value returned and stored in <code>gainParam</code> is the {{domxref("AudioParam")}} used to store the <code>gain</code> parameter. You can then change its value effective at a given time using the {{domxref("AudioParam")}} method {{domxref("AudioParam.setValueAtTime", "setValueAtTime()")}}.</p>
+
+<p>Here, for example, we set the value to <code>newValue</code>, effective immediately.</p>
+
+<pre class="brush: js">gainParam.setValueAtTime(newValue, audioContext.currentTime);</pre>
+
+<p>You can similarly use any of the other methods in the {{domxref("AudioParam")}} interface to apply changes over time, to cancel scheduled changes, and so forth.</p>
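+
+<p>For example, this sketch fades the gain down to zero over two seconds rather than changing it instantly:</p>
+
+<pre class="brush: js">// Pin the current value, then schedule a linear fade to 0 over two seconds.
+gainParam.setValueAtTime(gainParam.value, audioContext.currentTime);
+gainParam.linearRampToValueAtTime(0, audioContext.currentTime + 2);
+</pre>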
+
+<p>Reading the value of a parameter is as simple as looking at its {{domxref("AudioParam.value", "value")}} property:</p>
+
+<pre class="brush: js">let currentGain = gainParam.value;</pre>
+
+<h2 id="See_also">See also</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a></li>
+ <li><a href="https://developers.google.com/web/updates/2017/12/audio-worklet">Enter Audio Worklet</a> (Google Developers blog)</li>
+</ul>
diff --git a/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png b/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png
new file mode 100644
index 0000000000..0e701a2b6a
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/using_iir_filters/iir-filter-demo.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/using_iir_filters/index.html b/files/ko/web/api/web_audio_api/using_iir_filters/index.html
new file mode 100644
index 0000000000..0c48b1096c
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/using_iir_filters/index.html
@@ -0,0 +1,198 @@
+---
+title: Using IIR filters
+slug: Web/API/Web_Audio_API/Using_IIR_filters
+tags:
+ - API
+ - Audio
+ - Guide
+ - IIRFilter
+ - Using
+ - Web Audio API
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<p class="summary">The <strong><code>IIRFilterNode</code></strong> interface of the <a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a> is an {{domxref("AudioNode")}} processor that implements a general <a href="https://en.wikipedia.org/wiki/infinite%20impulse%20response">infinite impulse response</a> (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers, and the filter response parameters can be specified, so that it can be tuned as needed. This article looks at how to implement one, and use it in a simple example.</p>
+
+<h2 id="Demo">Demo</h2>
+
+<p>Our simple example for this guide provides a play/pause button that starts and pauses audio play, and a toggle that turns an IIR filter on and off, altering the tone of the sound. It also provides a canvas on which is drawn the frequency response of the audio, so you can see what effect the IIR filter has.</p>
+
+<p><img alt="A demo featuring a play button, and toggle to turn a filter on and off, and a line graph showing the filter frequencies returned after the filter has been applied." src="iir-filter-demo.png"></p>
+
+<p>You can check out the <a href="https://codepen.io/Rumyra/pen/oPxvYB/">full demo here on Codepen</a>. Also see the <a href="https://github.com/mdn/webaudio-examples/tree/master/iirfilter-node">source code on GitHub</a>. It includes some different coefficient values for different lowpass frequencies — you can change the value of the <code>filterNumber</code> constant to a value between 0 and 3 to check out the different available effects.</p>
+
+<h2 id="Browser_support">Browser support</h2>
+
+<p><a href="/en-US/docs/Web/API/IIRFilterNode">IIR filters</a> are supported well across modern browsers, although they have been implemented more recently than some of the more longstanding Web Audio API features, like <a href="/en-US/docs/Web/API/BiquadFilterNode">Biquad filters</a>.</p>
+
+<h2 id="The_IIRFilterNode">The IIRFilterNode</h2>
+
+<p>The Web Audio API now comes with an {{domxref("IIRFilterNode")}} interface. But what is this and how does it differ from the {{domxref("BiquadFilterNode")}} we have already?</p>
+
+<p>An IIR filter is an <strong>infinite impulse response filter</strong>. It's one of two primary types of filters used in audio and digital signal processing. The other type is FIR — a <strong>finite impulse response filter</strong>. There's a really good overview of <a href="https://dspguru.com/dsp/faqs/iir/basics/">IIR filters and FIR filters here</a>.</p>
+
+<p>A biquad filter is actually a <em>specific type</em> of infinite impulse response filter. It's a commonly-used type and we already have it as a node in the Web Audio API. If you choose this node the hard work is done for you. For instance, if you want to filter lower frequencies from your sound, you can set the <a href="/en-US/docs/Web/API/BiquadFilterNode/type">type</a> to <code>highpass</code> and then set which frequency to filter from (or cut off). <a href="http://www.earlevel.com/main/2003/02/28/biquads/">There's more information on how biquad filters work here</a>.</p>
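+
+<p>In code, using the pre-built node looks something like this sketch (assuming an {{domxref("AudioContext")}} named <code>audioCtx</code>; the cutoff value is just for illustration):</p>
+
+<pre class="brush: js">const biquadFilter = audioCtx.createBiquadFilter();
+biquadFilter.type = "highpass";       // filter out the lower frequencies...
+biquadFilter.frequency.value = 1000;  // ...below 1 kHz
+</pre>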
+
+<p>When you are using an {{domxref("IIRFilterNode")}} instead of a {{domxref("BiquadFilterNode")}} you are creating the filter yourself, rather than just choosing a pre-programmed type. So you can create a highpass filter, or a lowpass filter, or a more bespoke one. And this is where the IIR filter node is useful — you can create your own if none of the already available settings is right for what you want. As well as this, if your audio graph needs both a highpass and a bandpass filter, you can use one IIR filter node in place of the two biquad filter nodes you would otherwise need.</p>
+
+<p>With the IIRFilter node it's up to you to set what <code>feedforward</code> and <code>feedback</code> values the filter needs — this determines the characteristics of the filter. The downside is that this involves some complex maths.</p>
+
+<p>If you are looking to learn more there's some <a href="http://ece.uccs.edu/~mwickert/ece2610/lecture_notes/ece2610_chap8.pdf">information about the maths behind IIR filters here</a>. This enters the realms of signal processing theory — don't worry if you look at it and feel like it's not for you.</p>
+
+<p>If you want to play with the IIR filter node and need some values to help along the way, there's <a href="http://www.dspguide.com/CH20.PDF">a table of already calculated values here</a>; on pages 4 &amp; 5 of the linked PDF, the a<em>n</em> values refer to the <code>feedforward</code> values and the b<em>n</em> values refer to the <code>feedback</code> values. <a href="http://musicdsp.org/">musicdsp.org</a> is also a great resource if you want to read more about different filters and how they are implemented digitally.</p>
+
+<p>With that all in mind, let's take a look at the code to create an IIR filter with the Web Audio API.</p>
+
+<h2 id="Setting_our_IIRFilter_co-efficients">Setting our IIRFilter co-efficients</h2>
+
+<p>When creating an IIR filter, we pass in the <code>feedforward</code> and <code>feedback</code> coefficients as options (coefficients is how we describe the values). Both of these parameters are arrays, neither of which can be larger than 20 items.</p>
+
+<p>When setting our coefficients, the <code>feedforward</code> values can't all be set to zero, otherwise nothing would be sent to the filter. Something like this is acceptable:</p>
+
+<pre class="brush: js">let feedForward = [0.00020298, 0.0004059599, 0.00020298];
+</pre>
+
+<p>Our <code>feedback</code> values cannot start with zero, otherwise on the first pass nothing would be sent back:</p>
+
+<pre class="brush: js">let feedBackward = [1.0126964558, -1.9991880801, 0.9873035442];
+</pre>
+
+<div class="note">
+<p><strong>Note</strong>: These values are calculated based on the lowpass filter specified in the <a href="https://webaudio.github.io/web-audio-api/#filters-characteristics">filter characteristics of the Web Audio API specification</a>. As this filter node gains more popularity we should be able to collate more coefficient values.</p>
+</div>
+
+<h2 id="Using_an_IIRFilter_in_an_audio_graph">Using an IIRFilter in an audio graph</h2>
+
+<p>Let's create our context and our filter node:</p>
+
+<pre class="brush: js">const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();
+
+const iirFilter = audioCtx.createIIRFilter(feedForward, feedBackward);
+</pre>
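+
+<p>Alternatively, the same node can be created with the <code>IIRFilterNode()</code> constructor, passing the coefficients in as options. A sketch of the equivalent call, assuming the same variables as above:</p>
+
+<pre class="brush: js">const iirFilter = new IIRFilterNode(audioCtx, {
+  feedforward: feedForward,
+  feedback: feedBackward,
+});</pre>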
+
+<p>We need a sound source to play. We set this up using a custom function, <code>playSourceNode()</code>, which <a href="/en-US/docs/Web/API/BaseAudioContext/createBufferSource">creates a buffer source</a> from an existing {{domxref("AudioBuffer")}}, attaches it to the default destination, starts it playing, and returns it:</p>
+
+<pre class="brush: js">function playSourceNode(audioContext, audioBuffer) {
+ const soundSource = audioContext.createBufferSource();
+ soundSource.buffer = audioBuffer;
+ soundSource.connect(audioContext.destination);
+ soundSource.start();
+ return soundSource;
+}</pre>
+
+<p>This function is called when the play button is pressed. The play button HTML looks like this:</p>
+
+<pre class="brush: html">&lt;button class="button-play" role="switch" data-playing="false" aria-pressed="false"&gt;Play&lt;/button&gt;</pre>
+
+<p>And the <code>click</code> event listener starts like so:</p>
+
+<pre class="brush: js">playButton.addEventListener('click', function() {
+ if (this.dataset.playing === 'false') {
+ srcNode = playSourceNode(audioCtx, sample);
+ ...
+}, false);</pre>
+
+<p>The toggle that turns the IIR filter on and off is set up in a similar way. First, the HTML:</p>
+
+<pre class="brush: html">&lt;button class="button-filter" role="switch" data-filteron="false" aria-pressed="false" aria-describedby="label" disabled&gt;&lt;/button&gt;</pre>
+
+<p>The filter button's <code>click</code> handler then connects the <code>IIRFilter</code> up to the graph, between the source and the destination:</p>
+
+<pre class="brush: js">filterButton.addEventListener('click', function() {
+ if (this.dataset.filteron === 'false') {
+ srcNode.disconnect(audioCtx.destination);
+    srcNode.connect(iirFilter).connect(audioCtx.destination);
+ ...
+}, false);</pre>
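+
+<p>The 'off' branch isn't shown in this excerpt; a plausible version of it (our assumption, not the demo's exact code) just reverses the wiring to bypass the filter:</p>
+
+<pre class="brush: js">// bypass the filter: disconnect it and wire the source straight through
+srcNode.disconnect(iirFilter);
+iirFilter.disconnect(audioCtx.destination);
+srcNode.connect(audioCtx.destination);</pre>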
+
+<h3 id="Frequency_response">Frequency response</h3>
+
+<p>We only have one method available on {{domxref("IIRFilterNode")}} instances, <code>getFrequencyResponse()</code>. This allows us to see what is happening to the frequencies of the audio being passed into the filter.</p>
+
+<p>Let's draw a frequency plot of the filter we've created with the data we get back from this method.</p>
+
+<p>We need to create three arrays: one of frequency values for which we want to receive the magnitude and phase response, and two empty arrays to receive the data. All three have to be of type <a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array"><code>Float32Array</code></a>, and all of the same size.</p>
+
+<pre class="brush: js">// arrays for our frequency response
+const totalArrayItems = 30;
+let myFrequencyArray = new Float32Array(totalArrayItems);
+let magResponseOutput = new Float32Array(totalArrayItems);
+let phaseResponseOutput = new Float32Array(totalArrayItems);
+</pre>
+
+<p>Let's fill our first array with the frequency values we want data to be returned for:</p>
+
+<pre class="brush: js">myFrequencyArray = myFrequencyArray.map(function(item, index) {
+ return Math.pow(1.4, index);
+});
+</pre>
+
+<p>We could go for a linear approach, but when working with frequencies it's far better to take a logarithmic approach, so we fill our array with frequency values that grow larger the further along the array we go.</p>
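+
+<p>If you'd rather have explicit bounds, a hypothetical alternative fill (our own variation, not the demo's code) spreads the same number of sample points logarithmically across the audible range, from 20 Hz up to 20 kHz:</p>
+
+<pre class="brush: js">// 30 points from 20 Hz to 20 kHz, evenly spaced on a log scale
+myFrequencyArray = myFrequencyArray.map(function(item, index) {
+  return 20 * Math.pow(10, (3 * index) / (totalArrayItems - 1));
+});</pre>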
+
+<p>Now let's get our response data:</p>
+
+<pre class="brush: js">iirFilter.getFrequencyResponse(myFrequencyArray, magResponseOutput, phaseResponseOutput);
+</pre>
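+
+<p>At this point it can be handy to sanity-check the output, for instance (our own addition) by logging the magnitude and phase returned for each sampled frequency:</p>
+
+<pre class="brush: js">myFrequencyArray.forEach(function(freq, i) {
+  console.log(freq.toFixed(1) + ' Hz: gain ' + magResponseOutput[i].toFixed(4) +
+    ', phase ' + phaseResponseOutput[i].toFixed(4) + ' rad');
+});</pre>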
+
+<p>We can use this data to draw a filter frequency plot. We'll do so on a 2d canvas context.</p>
+
+<pre class="brush: js">// create a canvas element and append it to our dom
+const canvasContainer = document.querySelector('.filter-graph');
+const canvasEl = document.createElement('canvas');
+canvasContainer.appendChild(canvasEl);
+
+// set 2d context and set dimensions
+const canvasCtx = canvasEl.getContext('2d');
+const width = canvasContainer.offsetWidth;
+const height = canvasContainer.offsetHeight;
+canvasEl.width = width;
+canvasEl.height = height;
+
+// set background fill
+canvasCtx.fillStyle = 'white';
+canvasCtx.fillRect(0, 0, width, height);
+
+// set up some spacing based on size
+const spacing = width/16;
+const fontSize = Math.floor(spacing/1.5);
+
+// draw our axis
+canvasCtx.lineWidth = 2;
+canvasCtx.strokeStyle = 'grey';
+
+canvasCtx.beginPath();
+canvasCtx.moveTo(spacing, spacing);
+canvasCtx.lineTo(spacing, height-spacing);
+canvasCtx.lineTo(width-spacing, height-spacing);
+canvasCtx.stroke();
+
+// axis is gain by frequency -&gt; make labels
+canvasCtx.font = fontSize+'px sans-serif';
+canvasCtx.fillStyle = 'grey';
+canvasCtx.fillText('1', spacing-fontSize, spacing+fontSize);
+canvasCtx.fillText('g', spacing-fontSize, (height-spacing+fontSize)/2);
+canvasCtx.fillText('0', spacing-fontSize, height-spacing+fontSize);
+canvasCtx.fillText('Hz', width/2, height-spacing+fontSize);
+canvasCtx.fillText('20k', width-spacing, height-spacing+fontSize);
+
+// loop over our magnitude response data and plot our filter
+
+canvasCtx.beginPath();
+
+for(let i = 0; i &lt; magResponseOutput.length; i++) {
+  // keep the plot inside the axes we drew above
+  const x = spacing + ((width - (2 * spacing)) / totalArrayItems) * i;
+  const y = height - (magResponseOutput[i] * 100) - spacing;
+
+  if (i === 0) {
+    canvasCtx.moveTo(x, y);
+  } else {
+    canvasCtx.lineTo(x, y);
+  }
+}
+
+canvasCtx.stroke();
+</pre>
+
+<h2 id="Summary">Summary</h2>
+
+<p>That's it for our IIRFilter demo. This should have shown you the basics of how to use the node, and helped you understand what it's useful for and how it works.</p>
diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png
new file mode 100644
index 0000000000..a31829c5d1
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/bar-graph.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html
new file mode 100644
index 0000000000..c0dd84ee68
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/index.html
@@ -0,0 +1,189 @@
+---
+title: Visualizations with Web Audio API
+slug: Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API
+tags:
+ - API
+ - Web Audio API
+ - analyser
+ - fft
+ - visualisation
+ - visualization
+ - waveform
+---
+<div class="summary">
+<p>One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source, which can then be used to create visualizations. This article explains how, and provides a couple of basic use cases.</p>
+</div>
+
+<div class="note">
+<p><strong>Note</strong>: You can find working examples of all the code snippets in our <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> demo.</p>
+</div>
+
+<h2 id="Basic_concepts">Basic concepts</h2>
+
+<p>To extract data from your audio source, you need an {{ domxref("AnalyserNode") }}, which is created using the {{ domxref("BaseAudioContext.createAnalyser") }} method, for example:</p>
+
+<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+var analyser = audioCtx.createAnalyser();
+</pre>
+
+<p>This node is then connected to your audio source at some point between your source and your destination, for example:</p>
+
+<pre class="brush: js">source = audioCtx.createMediaStreamSource(stream);
+source.connect(analyser);
+analyser.connect(distortion);
+distortion.connect(audioCtx.destination);</pre>
+
+<div class="note">
+<p><strong>Note</strong>: You don't need to connect the analyser's output to another node for it to work, as long as the input is connected to the source, either directly or via another node.</p>
+</div>
+
+<p>The analyser node will then capture audio data using a Fast Fourier Transform (FFT); the frequency resolution depends on what you specify as the {{ domxref("AnalyserNode.fftSize") }} property value (if no value is specified, the default is 2048).</p>
+
+<div class="note">
+<p><strong>Note</strong>: You can also specify a minimum and maximum power value for the FFT data scaling range, using {{ domxref("AnalyserNode.minDecibels") }} and {{ domxref("AnalyserNode.maxDecibels") }}, and different data averaging constants using {{ domxref("AnalyserNode.smoothingTimeConstant") }}. Read those pages to get more information on how to use them.</p>
+</div>
+
+<p>To capture data, you need to use the methods {{ domxref("AnalyserNode.getFloatFrequencyData()") }} and {{ domxref("AnalyserNode.getByteFrequencyData()") }} to capture frequency data, and {{ domxref("AnalyserNode.getByteTimeDomainData()") }} and {{ domxref("AnalyserNode.getFloatTimeDomainData()") }} to capture waveform data.</p>
+
+<p>These methods copy data into a specified array, so you need to create a new array to receive the data before invoking one. The float methods produce 32-bit floating point numbers, and the byte methods produce 8-bit unsigned integers, therefore a standard JavaScript array won't do — you need to use a {{ domxref("Float32Array") }} or {{ domxref("Uint8Array") }} array, depending on what data you are handling.</p>
+
+<p>So for example, say we are dealing with an FFT size of 2048. We retrieve the {{ domxref("AnalyserNode.frequencyBinCount") }} value, which is half the FFT size, then call <code>Uint8Array()</code> with the <code>frequencyBinCount</code> as its length argument — this is how many data points we will be collecting, for that FFT size.</p>
+
+<pre class="brush: js">analyser.fftSize = 2048;
+var bufferLength = analyser.frequencyBinCount;
+var dataArray = new Uint8Array(bufferLength);</pre>
+
+<p>To actually retrieve the data and copy it into our array, we then call the data collection method we want, with the array passed as its argument. For example:</p>
+
+<pre class="brush: js">analyser.getByteTimeDomainData(dataArray);</pre>
+
+<p>We now have the audio data for that moment in time captured in our array, and can proceed to visualize it however we like, for example by plotting it onto an HTML5 {{ htmlelement("canvas") }}.</p>
+
+<p>Let's go on to look at some specific examples.</p>
+
+<h2 id="Creating_a_waveformoscilloscope">Creating a waveform/oscilloscope</h2>
+
+<p>To create the oscilloscope visualisation (hat tip to <a href="http://soledadpenades.com/">Soledad Penadés</a> for the original code in <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L123-L167">Voice-change-O-matic</a>), we first follow the standard pattern described in the previous section to set up the buffer:</p>
+
+<pre class="brush: js">analyser.fftSize = 2048;
+var bufferLength = analyser.frequencyBinCount;
+var dataArray = new Uint8Array(bufferLength);</pre>
+
+<p>Next, we clear the canvas of what had been drawn on it before to get ready for the new visualization display:</p>
+
+<pre class="brush: js">canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);</pre>
+
+<p>We now define the <code>draw()</code> function:</p>
+
+<pre class="brush: js">function draw() {</pre>
+
+<p>In here, we use <code>requestAnimationFrame()</code> to keep looping the drawing function once it has been started:</p>
+
+<pre class="brush: js">var drawVisual = requestAnimationFrame(draw);</pre>
+
+<p>Next, we grab the time domain data and copy it into our array:</p>
+
+<pre class="brush: js">analyser.getByteTimeDomainData(dataArray);</pre>
+
+<p>Next, fill the canvas with a solid color to start:</p>
+
+<pre class="brush: js">canvasCtx.fillStyle = 'rgb(200, 200, 200)';
+canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);</pre>
+
+<p>Set a line width and stroke color for the wave we will draw, then begin drawing a path:</p>
+
+<pre class="brush: js">canvasCtx.lineWidth = 2;
+canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+canvasCtx.beginPath();</pre>
+
+<p>Determine the width of each segment of the line to be drawn by dividing the canvas width by the array length (equal to the <code>frequencyBinCount</code>, as defined earlier on), then define an <code>x</code> variable for the position to move to when drawing each segment of the line.</p>
+
+<pre class="brush: js">var sliceWidth = WIDTH * 1.0 / bufferLength;
+var x = 0;</pre>
+
+<p>Now we run through a loop, defining the position of a small segment of the wave for each point in the buffer at a certain height based on the data point value from the array, then moving the line across to the place where the next wave segment should be drawn:</p>
+
+<pre class="brush: js"> for(var i = 0; i &lt; bufferLength; i++) {
+
+ var v = dataArray[i] / 128.0;
+ var y = v * HEIGHT/2;
+
+ if(i === 0) {
+ canvasCtx.moveTo(x, y);
+ } else {
+ canvasCtx.lineTo(x, y);
+ }
+
+ x += sliceWidth;
+ }</pre>
+
+<p>Finally, we finish the line in the middle of the right-hand side of the canvas, then draw the stroke we've defined:</p>
+
+<pre class="brush: js"> canvasCtx.lineTo(canvas.width, canvas.height/2);
+ canvasCtx.stroke();
+ };</pre>
+
+<p>At the end of this section of code, we invoke the <code>draw()</code> function to start off the whole process:</p>
+
+<pre class="brush: js"> draw();</pre>
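+
+<p>Pulled together, the complete function looks like this (assembled from the fragments above; the <code>canvas</code> element, its <code>canvasCtx</code> context, and the <code>WIDTH</code> and <code>HEIGHT</code> constants are assumed to be set up as in the demo):</p>
+
+<pre class="brush: js">function draw() {
+  var drawVisual = requestAnimationFrame(draw);
+
+  analyser.getByteTimeDomainData(dataArray);
+
+  canvasCtx.fillStyle = 'rgb(200, 200, 200)';
+  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+  canvasCtx.lineWidth = 2;
+  canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
+  canvasCtx.beginPath();
+
+  var sliceWidth = WIDTH * 1.0 / bufferLength;
+  var x = 0;
+
+  for(var i = 0; i &lt; bufferLength; i++) {
+    var v = dataArray[i] / 128.0;
+    var y = v * HEIGHT/2;
+
+    if(i === 0) {
+      canvasCtx.moveTo(x, y);
+    } else {
+      canvasCtx.lineTo(x, y);
+    }
+
+    x += sliceWidth;
+  }
+
+  canvasCtx.lineTo(canvas.width, canvas.height/2);
+  canvasCtx.stroke();
+}
+
+draw();</pre>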
+
+<p>This gives us a nice waveform display that updates several times a second:</p>
+
+<p><img alt="a black oscilloscope line, showing the waveform of an audio signal" src="wave.png"></p>
+
+<h2 id="Creating_a_frequency_bar_graph">Creating a frequency bar graph</h2>
+
+<p>Another nice little sound visualization to create is one of those Winamp-style frequency bar graphs. We have one available in Voice-change-O-matic; let's look at how it's done.</p>
+
+<p>First, we again set up our analyser and data array, then clear the current canvas display with <code>clearRect()</code>. The only difference from before is that we have set the FFT size to be much smaller; this is so that each bar in the graph is big enough to actually look like a bar rather than a thin strand.</p>
+
+<pre class="brush: js">analyser.fftSize = 256;
+var bufferLength = analyser.frequencyBinCount;
+console.log(bufferLength);
+var dataArray = new Uint8Array(bufferLength);
+
+canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);</pre>
+
+<p>Next, we start our <code>draw()</code> function off, again setting up a loop with <code>requestAnimationFrame()</code> so that the displayed data keeps updating, and clearing the display with each animation frame.</p>
+
+<pre class="brush: js"> function draw() {
+ drawVisual = requestAnimationFrame(draw);
+
+ analyser.getByteFrequencyData(dataArray);
+
+ canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+ canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);</pre>
+
+<p>Now we set our <code>barWidth</code> to be equal to the canvas width divided by the number of bars (the buffer length). However, we also multiply that width by 2.5, because most of the frequencies will come back as having no audio in them, since most of the sounds we hear every day sit in a certain lower frequency range. We don't want to display loads of empty bars, so we widen the bars that do regularly display at a noticeable height, so that together they fill the canvas.</p>
+
+<p>We also set a <code>barHeight</code> variable, and an <code>x</code> variable to record how far across the screen to draw the current bar.</p>
+
+<pre class="brush: js">var barWidth = (WIDTH / bufferLength) * 2.5;
+var barHeight;
+var x = 0;</pre>
+
+<p>As before, we now start a for loop and cycle through each value in the <code>dataArray</code>. For each one, we make the <code>barHeight</code> equal to the array value, set a fill color based on the <code>barHeight</code> (taller bars are brighter), and draw a bar at <code>x</code> pixels across the canvas, which is <code>barWidth</code> wide and <code>barHeight/2</code> tall (we eventually decided to cut each bar in half so they would all fit on the canvas better.)</p>
+
+<p>The one value that needs explaining is the vertical offset position we are drawing each bar at: <code>HEIGHT-barHeight/2</code>. We do this because we want each bar to stick up from the bottom of the canvas, not down from the top, as it would if we set the vertical position to 0. Therefore, we instead set the vertical position each time to the height of the canvas minus <code>barHeight/2</code>, so each bar will be drawn from partway down the canvas, down to the bottom.</p>
+
+<pre class="brush: js"> for(var i = 0; i &lt; bufferLength; i++) {
+ barHeight = dataArray[i]/2;
+
+ canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+ canvasCtx.fillRect(x,HEIGHT-barHeight/2,barWidth,barHeight);
+
+ x += barWidth + 1;
+ }
+ };</pre>
+
+<p>Again, at the end of the code we invoke the <code>draw()</code> function to set the whole process in motion.</p>
+
+<pre class="brush: js">draw();</pre>
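+
+<p>As with the waveform example, here's the whole routine assembled from the fragments above for reference (again assuming <code>canvasCtx</code>, <code>WIDTH</code>, and <code>HEIGHT</code> are already set up):</p>
+
+<pre class="brush: js">function draw() {
+  drawVisual = requestAnimationFrame(draw);
+
+  analyser.getByteFrequencyData(dataArray);
+
+  canvasCtx.fillStyle = 'rgb(0, 0, 0)';
+  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
+
+  var barWidth = (WIDTH / bufferLength) * 2.5;
+  var barHeight;
+  var x = 0;
+
+  for(var i = 0; i &lt; bufferLength; i++) {
+    barHeight = dataArray[i]/2;
+
+    canvasCtx.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
+    canvasCtx.fillRect(x, HEIGHT-barHeight/2, barWidth, barHeight);
+
+    x += barWidth + 1;
+  }
+}
+
+draw();</pre>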
+
+<p>This code gives us a result like the following:</p>
+
+<p><img alt="a series of red bars in a bar graph, showing intensity of different frequencies in an audio signal" src="bar-graph.png"></p>
+
+<div class="note">
+<p><strong>Note</strong>: The examples listed in this article have shown usage of {{ domxref("AnalyserNode.getByteFrequencyData()") }} and {{ domxref("AnalyserNode.getByteTimeDomainData()") }}. For working examples showing {{ domxref("AnalyserNode.getFloatFrequencyData()") }} and {{ domxref("AnalyserNode.getFloatTimeDomainData()") }}, refer to our <a href="https://mdn.github.io/voice-change-o-matic-float-data/">Voice-change-O-matic-float-data</a> demo (see the <a href="https://github.com/mdn/voice-change-o-matic-float-data">source code</a> too) — this is exactly the same as the original <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a>, except that it uses Float data, not unsigned byte data.</p>
+</div>
diff --git a/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png
new file mode 100644
index 0000000000..9254829d23
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/visualizations_with_web_audio_api/wave.png
Binary files differ
diff --git a/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html
new file mode 100644
index 0000000000..2846d45d6c
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/index.html
@@ -0,0 +1,467 @@
+---
+title: Web audio spatialization basics
+slug: Web/API/Web_Audio_API/Web_audio_spatialization_basics
+tags:
+ - PannerNode
+ - Web Audio API
+ - panning
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<div class="summary">
+<p><span class="seoSummary">As if its extensive variety of sound processing (and other) options wasn't enough, the Web Audio API also includes facilities to allow you to emulate the difference in sound as a listener moves around a sound source, for example panning as you move around a sound source inside a 3D game. The official term for this is <strong>spatialization</strong>, and this article will cover the basics of how to implement such a system.</span></p>
+</div>
+
+<h2 id="Basics_of_spatialization">Basics of spatialization</h2>
+
+<p>In Web Audio, complex 3D spatializations are created using the {{domxref("PannerNode")}}, which in layman's terms is basically a whole lotta cool maths to make audio appear in 3D space. Think sounds flying over you, creeping up behind you, moving across in front of you. That sort of thing.</p>
+
+<p>It's really useful for WebXR and gaming. In 3D spaces, it's the only way to achieve realistic audio. Libraries like <a href="https://threejs.org/">three.js</a> and <a href="https://aframe.io/">A-frame</a> harness its potential when dealing with sound. It's worth noting that you don't <em>have</em> to move sound within a full 3D space either — you could stick with just a 2D plane, so if you were planning a 2D game, this would still be the node you were looking for.</p>
+
+<div class="note">
+<p><strong>Note</strong>: There's also a {{domxref("StereoPannerNode")}} designed to deal with the common use case of creating simple left and right stereo panning effects. This is much simpler to use, but obviously nowhere near as versatile. If you just want a simple stereo panning effect, our <a href="https://mdn.github.io/webaudio-examples/stereo-panner-node/">StereoPannerNode example</a> (<a href="https://github.com/mdn/webaudio-examples/tree/master/stereo-panner-node">see source code</a>) should give you everything you need.</p>
+</div>
+
+<h2 id="3D_boombox_demo">3D boombox demo</h2>
+
+<p>To demonstrate 3D spatialization we've created a modified version of the boombox demo we created in our basic <a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a> guide. See the <a href="https://mdn.github.io/webaudio-examples/spacialization/">3D spatialization demo live</a> (and see the <a href="https://github.com/mdn/webaudio-examples/tree/master/spacialization">source code</a> also).</p>
+
+<p><img alt="A simple UI with a rotated boombox and controls to move it left and right and in and out, and rotate it." src="web-audio-spatialization.png"></p>
+
+<p>The boombox sits inside a room (defined by the edges of the browser viewport), and in this demo, we can move and rotate it with the provided controls. When we move the boombox, the sound it produces changes accordingly, panning as it moves to the left or right of the room, or becoming quieter as it is moved away from the user or is rotated so the speakers are facing away from them, etc. This is done by setting the different properties of the <code>PannerNode</code> object instance in relation to that movement, to emulate spatialization.</p>
+
+<div class="note">
+<p><strong>Note</strong>: The experience is much better if you use headphones, or have some kind of surround sound system to plug your computer into.</p>
+</div>
+
+<h2 id="Creating_an_audio_listener">Creating an audio listener</h2>
+
+<p>So let's begin! The {{domxref("BaseAudioContext")}} (the interface the {{domxref("AudioContext")}} is extended from) has a <code><a href="/en-US/docs/Web/API/BaseAudioContext/listener">listener</a></code> property that returns an {{domxref("AudioListener")}} object. This represents the listener of the scene, usually your user. You can define where they are in space and in which direction they are facing; the listener generally remains static. The <code>PannerNode</code> can then calculate its sound position relative to the position of the listener.</p>
+
+<p>Let's create our context and listener and set the listener's position to emulate a person looking into our room:</p>
+
+<pre class="brush: js">const AudioContext = window.AudioContext || window.webkitAudioContext;
+const audioCtx = new AudioContext();
+const listener = audioCtx.listener;
+
+const posX = window.innerWidth/2;
+const posY = window.innerHeight/2;
+const posZ = 300;
+
+listener.positionX.value = posX;
+listener.positionY.value = posY;
+listener.positionZ.value = posZ-5;
+</pre>
+
+<p>We could move the listener left or right using <code>positionX</code>, up or down using <code>positionY</code>, or in or out of the room using <code>positionZ</code>. Here we are setting the listener to be in the middle of the viewport and slightly in front of our boombox. We can also set the direction the listener is facing. The default values for these work well:</p>
+
+<pre class="brush: js">listener.forwardX.value = 0;
+listener.forwardY.value = 0;
+listener.forwardZ.value = -1;
+listener.upX.value = 0;
+listener.upY.value = 1;
+listener.upZ.value = 0;
+</pre>
+
+<p>The forward properties represent the 3D coordinate position of the listener's forward direction (i.e. the direction they are facing in), while the up properties represent the 3D coordinate position of the top of the listener's head. These two together can nicely set the direction.</p>
+
+<h2 id="Creating_a_panner_node">Creating a panner node</h2>
+
+<p>Let's create our {{domxref("PannerNode")}}. This has a whole bunch of properties associated with it. Let's take a look at each of them:</p>
+
+<p>To start we can set the <a href="/en-US/docs/Web/API/PannerNode/panningModel"><code>panningModel</code></a>. This is the spatialization algorithm that's used to position the audio in 3D space. We can set this to:</p>
+
+<ul>
+ <li><code>equalpower</code> — The default and the general way panning is figured out</li>
+ <li><code>HRTF</code> — This stands for 'Head-related transfer function' and looks to take into account the human head when figuring out where the sound is.</li>
+</ul>
+
+<p>Pretty clever stuff. Let's use the <code>HRTF</code> model!</p>
+
+<pre class="brush: js">const pannerModel = 'HRTF';
+</pre>
+
+<p>The <a href="/en-US/docs/Web/API/PannerNode/coneInnerAngle"><code>coneInnerAngle</code></a> and <a href="/en-US/docs/Web/API/PannerNode/coneOuterAngle"><code>coneOuterAngle</code></a> properties specify where the volume emanates from. By default, both are 360 degrees. Our boombox speakers will have smaller cones, which we can define. The inner cone is where gain (volume) is always emulated at a maximum, and the outer cone is where the gain starts to drop away. Outside the outer cone, the gain is reduced to the <a href="/en-US/docs/Web/API/PannerNode/coneOuterGain"><code>coneOuterGain</code></a> value. Let's create constants that store the values we'll use for these parameters later on:</p>
+
+<pre class="brush: js">const innerCone = 60;
+const outerCone = 90;
+const outerGain = 0.3;
+</pre>
+
+<p>The next parameter is <a href="/en-US/docs/Web/API/PannerNode/distanceModel"><code>distanceModel</code></a> — this can only be set to <code>linear</code>, <code>inverse</code>, or <code>exponential</code>. These are different algorithms, which are used to reduce the volume of the audio source as it moves away from the listener. We'll use <code>linear</code>, as it is simple:</p>
+
+<pre class="brush: js">const distanceModel = 'linear';
+</pre>
+
+<p>We can set a maximum distance (<a href="/en-US/docs/Web/API/PannerNode/maxDistance"><code>maxDistance</code></a>) between the source and the listener — the volume will not be reduced any further if the source moves beyond this point. This can be useful, as you may find you want to emulate distance without the volume dropping out entirely. By default, it's 10,000 (a unitless relative value). We can keep it as this:</p>
+
+<pre class="brush: js">const maxDistance = 10000;
+</pre>
+
+<p>There's also a reference distance (<code><a href="/en-US/docs/Web/API/PannerNode/refDistance">refDistance</a></code>), which is used by the distance models. We can keep that at the default value of <code>1</code> as well:</p>
+
+<pre class="brush: js">const refDistance = 1;
+</pre>
+
+<p>Then there's the roll-off factor (<a href="/en-US/docs/Web/API/PannerNode/rolloffFactor"><code>rolloffFactor</code></a>) — how quickly the volume is reduced as the panner moves away from the listener. The default value is 1; let's make that a bit bigger to exaggerate our movements.</p>
+
+<pre class="brush: js">const rollOff = 10;
+</pre>
+
+<p>Now we can start setting the position and orientation of our boombox. This is a lot like how we did it with our listener. These are also the parameters we're going to change when the controls on our interface are used.</p>
+
+<pre class="brush: js">const positionX = posX;
+const positionY = posY;
+const positionZ = posZ;
+
+const orientationX = 0.0;
+const orientationY = 0.0;
+const orientationZ = -1.0;
+</pre>
+
+<p>Note the negative value on our z orientation — this sets the boombox to face us. A positive value would set the sound source facing away from us.</p>
+
+<p>Let's use the relevant constructor for creating our panner node and pass in all those parameters we set above:</p>
+
+<pre class="brush: js">const panner = new PannerNode(audioCtx, {
+ panningModel: pannerModel,
+ distanceModel: distanceModel,
+ positionX: positionX,
+ positionY: positionY,
+ positionZ: positionZ,
+ orientationX: orientationX,
+ orientationY: orientationY,
+ orientationZ: orientationZ,
+ refDistance: refDistance,
+ maxDistance: maxDistance,
+ rolloffFactor: rollOff,
+ coneInnerAngle: innerCone,
+ coneOuterAngle: outerCone,
+ coneOuterGain: outerGain
+});
+</pre>
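+
+<p>If you prefer the factory pattern used elsewhere in these articles, a sketch of the equivalent setup (assuming the same constants defined above) creates the node with <code>createPanner()</code> and assigns each property afterwards:</p>
+
+<pre class="brush: js">// equivalent setup via the factory method
+const panner = audioCtx.createPanner();
+panner.panningModel = pannerModel;
+panner.distanceModel = distanceModel;
+panner.positionX.value = positionX;
+panner.positionY.value = positionY;
+panner.positionZ.value = positionZ;
+panner.orientationX.value = orientationX;
+panner.orientationY.value = orientationY;
+panner.orientationZ.value = orientationZ;
+panner.refDistance = refDistance;
+panner.maxDistance = maxDistance;
+panner.rolloffFactor = rollOff;
+panner.coneInnerAngle = innerCone;
+panner.coneOuterAngle = outerCone;
+panner.coneOuterGain = outerGain;</pre>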
+
+<h2 id="Moving_the_boombox">Moving the boombox</h2>
+
+<p>Now we're going to move our boombox around our 'room'. We've got some controls set up to do this. We can move it left and right, up and down, and back and forth; we can also rotate it. The sound direction is coming from the boombox speaker at the front, so when we rotate it, we can alter the sound's direction — i.e. make it project to the back when the boombox is rotated 180 degrees and facing away from us.</p>
+
+<p>We need to set up a few things for the interface. First, we'll get references to the elements we want to move, then we'll store references to the values we'll change when we set up <a href="/en-US/docs/Web/CSS/CSS_Transforms">CSS transforms</a> to actually do the movement. Finally, we'll set some bounds so our boombox doesn't move too far in any direction:</p>
+
+<pre class="brush: js">const moveControls = document.querySelector('#move-controls').querySelectorAll('button');
+const boombox = document.querySelector('.boombox-body');
+
+// the values for our css transforms
+let transform = {
+ xAxis: 0,
+ yAxis: 0,
+ zAxis: 0.8,
+ rotateX: 0,
+ rotateY: 0
+}
+
+// set our bounds
+const topBound = -posY;
+const bottomBound = posY;
+const rightBound = posX;
+const leftBound = -posX;
+const innerBound = 0.1;
+const outerBound = 1.5;
+</pre>
+
+<p>Let's create a function that takes the direction we want to move as a parameter, and both modifies the CSS transform and updates the position and orientation values of our panner node properties to change the sound as appropriate.</p>
+
+<p>To start with, let's take a look at our left, right, up, and down values, as these are pretty straightforward. We'll move the boombox along these axes and update the appropriate position.</p>
+
+<pre class="brush: js">function moveBoombox(direction) {
+ switch (direction) {
+ case 'left':
+ if (transform.xAxis &gt; leftBound) {
+ transform.xAxis -= 5;
+ panner.positionX.value -= 0.1;
+ }
+ break;
+ case 'up':
+ if (transform.yAxis &gt; topBound) {
+ transform.yAxis -= 5;
+ panner.positionY.value -= 0.3;
+ }
+ break;
+ case 'right':
+ if (transform.xAxis &lt; rightBound) {
+ transform.xAxis += 5;
+ panner.positionX.value += 0.1;
+ }
+ break;
+ case 'down':
+ if (transform.yAxis &lt; bottomBound) {
+ transform.yAxis += 5;
+ panner.positionY.value += 0.3;
+ }
+ break;
+ }
+}
+</pre>
+
+<p>It's a similar story for our move in and out values too:</p>
+
+<pre class="brush: js">case 'back':
+ if (transform.zAxis &gt; innerBound) {
+ transform.zAxis -= 0.01;
+ panner.positionZ.value += 40;
+ }
+break;
+case 'forward':
+ if (transform.zAxis &lt; outerBound) {
+ transform.zAxis += 0.01;
+ panner.positionZ.value -= 40;
+ }
+break;
+</pre>
+
+<p>Our rotation values are a little more involved, however, as we need to move the sound <em>around</em>. Not only do we have to update two axis values (e.g. if you rotate an object around the x-axis, you update the y and z coordinates for that object), but we also need to do some more maths for this. The rotation is a circle and we need <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/sin">Math.sin</a></code> and <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/cos">Math.cos</a></code> to help us draw that circle.</p>
+
+<p>Let's set up a rotation rate, which we'll convert into a radian range value for use in <code>Math.sin</code> and <code>Math.cos</code> later, when we want to figure out the new coordinates when we're rotating our boombox:</p>
+
+<pre class="brush: js">// set up rotation constants
+const rotationRate = 60; // bigger number equals slower sound rotation
+
+const q = Math.PI/rotationRate; //rotation increment in radians
+</pre>
+
+<p>We can also use this to work out degrees rotated, which will help with the CSS transforms we will have to create (note we need both an x and y-axis for the CSS transforms):</p>
+
+<pre class="brush: js">// get degrees for css
+const degreesX = (q * 180)/Math.PI;
+const degreesY = (q * 180)/Math.PI;
+</pre>
+
+<p>Let's take a look at our left rotation as an example. We need to change the x orientation and the z orientation of the panner coordinates, to move around the y-axis for our left rotation:</p>
+
+<pre class="brush: js">case 'rotate-left':
+ transform.rotateY -= degreesY;
+
+ // 'left' is rotation about y-axis with negative angle increment
+ z = panner.orientationZ.value*Math.cos(q) - panner.orientationX.value*Math.sin(q);
+ x = panner.orientationZ.value*Math.sin(q) + panner.orientationX.value*Math.cos(q);
+ y = panner.orientationY.value;
+
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+break;
+</pre>
+
+<p>This <em>is</em> a little confusing, but what we're doing is using sin and cos to help us work out the circular motion the coordinates need for the rotation of the boombox.</p>
+
+<p>We can do this for all the axes. We just need to choose the right orientations to update and whether we want a positive or negative increment.</p>
+
+<pre class="brush: js">case 'rotate-right':
+ transform.rotateY += degreesY;
+ // 'right' is rotation about y-axis with positive angle increment
+ z = panner.orientationZ.value*Math.cos(-q) - panner.orientationX.value*Math.sin(-q);
+ x = panner.orientationZ.value*Math.sin(-q) + panner.orientationX.value*Math.cos(-q);
+ y = panner.orientationY.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+break;
+case 'rotate-up':
+ transform.rotateX += degreesX;
+ // 'up' is rotation about x-axis with negative angle increment
+ z = panner.orientationZ.value*Math.cos(-q) - panner.orientationY.value*Math.sin(-q);
+ y = panner.orientationZ.value*Math.sin(-q) + panner.orientationY.value*Math.cos(-q);
+ x = panner.orientationX.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+break;
+case 'rotate-down':
+ transform.rotateX -= degreesX;
+ // 'down' is rotation about x-axis with positive angle increment
+ z = panner.orientationZ.value*Math.cos(q) - panner.orientationY.value*Math.sin(q);
+ y = panner.orientationZ.value*Math.sin(q) + panner.orientationY.value*Math.cos(q);
+ x = panner.orientationX.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+break;
+</pre>
+
+<p>One last thing — we need to update the CSS and keep a reference to the last move for the mouse event. Here's the final <code>moveBoombox</code> function:</p>
+
+<pre class="brush: js">function moveBoombox(direction, prevMove) {
+  let x, y, z;
+  switch (direction) {
+ case 'left':
+ if (transform.xAxis &gt; leftBound) {
+ transform.xAxis -= 5;
+ panner.positionX.value -= 0.1;
+ }
+ break;
+ case 'up':
+ if (transform.yAxis &gt; topBound) {
+ transform.yAxis -= 5;
+ panner.positionY.value -= 0.3;
+ }
+ break;
+ case 'right':
+ if (transform.xAxis &lt; rightBound) {
+ transform.xAxis += 5;
+ panner.positionX.value += 0.1;
+ }
+ break;
+ case 'down':
+ if (transform.yAxis &lt; bottomBound) {
+ transform.yAxis += 5;
+ panner.positionY.value += 0.3;
+ }
+ break;
+ case 'back':
+ if (transform.zAxis &gt; innerBound) {
+ transform.zAxis -= 0.01;
+ panner.positionZ.value += 40;
+ }
+ break;
+ case 'forward':
+ if (transform.zAxis &lt; outerBound) {
+ transform.zAxis += 0.01;
+ panner.positionZ.value -= 40;
+ }
+ break;
+ case 'rotate-left':
+ transform.rotateY -= degreesY;
+
+ // 'left' is rotation about y-axis with negative angle increment
+ z = panner.orientationZ.value*Math.cos(q) - panner.orientationX.value*Math.sin(q);
+ x = panner.orientationZ.value*Math.sin(q) + panner.orientationX.value*Math.cos(q);
+ y = panner.orientationY.value;
+
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+ break;
+ case 'rotate-right':
+ transform.rotateY += degreesY;
+ // 'right' is rotation about y-axis with positive angle increment
+ z = panner.orientationZ.value*Math.cos(-q) - panner.orientationX.value*Math.sin(-q);
+ x = panner.orientationZ.value*Math.sin(-q) + panner.orientationX.value*Math.cos(-q);
+ y = panner.orientationY.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+ break;
+ case 'rotate-up':
+ transform.rotateX += degreesX;
+ // 'up' is rotation about x-axis with negative angle increment
+ z = panner.orientationZ.value*Math.cos(-q) - panner.orientationY.value*Math.sin(-q);
+ y = panner.orientationZ.value*Math.sin(-q) + panner.orientationY.value*Math.cos(-q);
+ x = panner.orientationX.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+ break;
+ case 'rotate-down':
+ transform.rotateX -= degreesX;
+ // 'down' is rotation about x-axis with positive angle increment
+ z = panner.orientationZ.value*Math.cos(q) - panner.orientationY.value*Math.sin(q);
+ y = panner.orientationZ.value*Math.sin(q) + panner.orientationY.value*Math.cos(q);
+ x = panner.orientationX.value;
+ panner.orientationX.value = x;
+ panner.orientationY.value = y;
+ panner.orientationZ.value = z;
+ break;
+ }
+
+ boombox.style.transform = 'translateX('+transform.xAxis+'px) translateY('+transform.yAxis+'px) scale('+transform.zAxis+') rotateY('+transform.rotateY+'deg) rotateX('+transform.rotateX+'deg)';
+
+ const move = prevMove || {};
+ move.frameId = requestAnimationFrame(() =&gt; moveBoombox(direction, move));
+ return move;
+}
+</pre>
+
+<h2 id="Wiring_up_our_controls">Wiring up our controls</h2>
+
+<p>Wiring up our control buttons is comparatively simple — now we can listen for a mouse event on our controls and run this function, as well as stop it when the mouse is released:</p>
+
+<pre class="brush: js">// for each of our controls, move the boombox and change the position values
+moveControls.forEach(function(el) {
+
+  let moving;
+  el.addEventListener('mousedown', function() {
+
+    let direction = this.dataset.control;
+    if (moving &amp;&amp; moving.frameId) {
+      window.cancelAnimationFrame(moving.frameId);
+    }
+    moving = moveBoombox(direction);
+
+  }, false);
+
+  window.addEventListener('mouseup', function() {
+    if (moving &amp;&amp; moving.frameId) {
+      window.cancelAnimationFrame(moving.frameId);
+    }
+  }, false);
+
+});
+</pre>
+
+<h2 id="Connecting_our_graph">Connecting our graph</h2>
+
+<p>Our HTML contains the audio element we want to be affected by the panner node.</p>
+
+<pre class="brush: html">&lt;audio src="myCoolTrack.mp3"&gt;&lt;/audio&gt;</pre>
+
+<p>We need to grab the source from that element and pipe it into the Web Audio API using the {{domxref('AudioContext.createMediaElementSource')}} method.</p>
+
+<pre class="brush: js">// get the audio element
+const audioElement = document.querySelector('audio');
+
+// pass it into the audio context
+const track = audioCtx.createMediaElementSource(audioElement);
+</pre>
+
+<p>Next we have to connect our audio graph. We connect our input (the track) to our modification node (the panner) to our destination (in this case the speakers).</p>
+
+<pre class="brush: js">track.connect(panner).connect(audioCtx.destination);
+</pre>
+
+<p>Let's create a play button that, when clicked, will play or pause the audio depending on the current state.</p>
+
+<pre class="brush: html">&lt;button data-playing="false" role="switch"&gt;Play/Pause&lt;/button&gt;
+</pre>
+
+<pre class="brush: js">// select our play button
+const playButton = document.querySelector('button');
+
+playButton.addEventListener('click', function() {
+
+  // check if context is in suspended state (autoplay policy)
+  if (audioCtx.state === 'suspended') {
+    audioCtx.resume();
+  }
+
+  // play or pause track depending on state
+  if (this.dataset.playing === 'false') {
+    audioElement.play();
+    this.dataset.playing = 'true';
+  } else if (this.dataset.playing === 'true') {
+    audioElement.pause();
+    this.dataset.playing = 'false';
+  }
+
+}, false);
+</pre>
+
+<p>For a more in-depth look at playing/controlling audio and audio graphs, check out <a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a>.</p>
+
+<h2 id="Summary">Summary</h2>
+
+<p>Hopefully, this article has given you an insight into how Web Audio spatialization works, and what each of the {{domxref("PannerNode")}} properties does (there are quite a few of them). The values can be hard to manipulate sometimes, and depending on your use case it can take some time to get them right.</p>
+
+<div class="note">
+<p><strong>Note</strong>: There are slight differences in the way the audio spatialization sounds across different browsers. The panner node does some very involved maths under the hood; there are a <a href="https://wpt.fyi/results/webaudio/the-audio-api/the-pannernode-interface?label=stable&amp;aligned=true">number of tests here</a> so you can keep track of the status of the inner workings of this node across different platforms.</p>
+</div>
+
+<p>Again, you can <a href="https://mdn.github.io/webaudio-examples/spacialization/">check out the final demo here</a>, and the <a href="https://github.com/mdn/webaudio-examples/tree/master/spacialization">final source code is here</a>. There is also a <a href="https://codepen.io/Rumyra/pen/MqayoK?editors=0100">Codepen demo</a>.</p>
+
+<p>If you are working with 3D games and/or WebXR it's a good idea to harness a 3D library to create such functionality, rather than trying to do this all yourself from first principles. We rolled our own in this article to give you an idea of how it works, but you'll save a lot of time by taking advantage of work others have done before you.</p>
diff --git a/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png
new file mode 100644
index 0000000000..18a359e5c1
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/web_audio_spatialization_basics/web-audio-spatialization.png
Binary files differ