author     logic-finder <83723320+logic-finder@users.noreply.github.com>  2021-08-15 23:57:52 +0900
committer  GitHub <noreply@github.com>  2021-08-15 23:57:52 +0900
commit     f93ef19d66b0d692ff171d7bdcb82d98f741544a (patch)
tree       30350064b1fe030463f06c2a52d9e748ec786f0c /files/ko/web/api/web_audio_api/using_audioworklet
parent     86205e9ea4dc7902a3cbbf2612b1adc77d73c4f5 (diff)
[ko] Work done for 'Web Audio API' article. (#1609)
* Work done for 'Web Audio API' article.
* fix a hyperlink
* small fixes and documents added

Co-authored-by: hochan Lee <hochan049@gmail.com>
Diffstat (limited to 'files/ko/web/api/web_audio_api/using_audioworklet')
-rw-r--r--  files/ko/web/api/web_audio_api/using_audioworklet/index.html  325
1 file changed, 325 insertions, 0 deletions
diff --git a/files/ko/web/api/web_audio_api/using_audioworklet/index.html b/files/ko/web/api/web_audio_api/using_audioworklet/index.html
new file mode 100644
index 0000000000..b103225f09
--- /dev/null
+++ b/files/ko/web/api/web_audio_api/using_audioworklet/index.html
@@ -0,0 +1,325 @@
+---
+title: Background audio processing using AudioWorklet
+slug: Web/API/Web_Audio_API/Using_AudioWorklet
+tags:
+ - API
+ - Audio
+ - AudioWorklet
+ - Background
+ - Examples
+ - Guide
+ - Processing
+ - Web Audio
+ - Web Audio API
+ - WebAudio API
+ - sound
+---
+<p>{{APIRef("Web Audio API")}}</p>
+
+<p>When the Web Audio API was first introduced to browsers, it included the ability to use JavaScript code to create custom audio processors that would be invoked to perform real-time audio manipulations, in the form of {{domxref("ScriptProcessorNode")}}. The drawback to <code>ScriptProcessorNode</code> was simple: it ran on the main thread, thus blocking everything else going on until it completed execution. This was far less than ideal, especially for something that can be as computationally expensive as audio processing.</p>
+
+<p>Enter {{domxref("AudioWorklet")}}. An audio context's audio worklet is a {{domxref("Worklet")}} which runs off the main thread, executing audio processing code added to it by calling the context's {{domxref("Worklet.addModule", "audioWorklet.addModule()")}} method. Calling <code>addModule()</code> loads the specified JavaScript file, which should contain the implementation of the audio processor. With the processor registered, you can create a new {{domxref("AudioWorkletNode")}}; when that node is linked into the chain of audio nodes along with any other audio nodes, audio passing through it is run through the processor's code.</p>
+
+<p><span class="seoSummary">The process of creating an audio processor using JavaScript, establishing it as an audio worklet processor, and then using that processor within a Web Audio application is the topic of this article.</span></p>
+
+<p>It's worth noting that because audio processing can often involve substantial computation, your processor may benefit greatly from being built using <a href="/en-US/docs/WebAssembly">WebAssembly</a>, which brings near-native or fully native performance to web apps. Implementing your audio processing algorithm using WebAssembly can make it perform markedly better.</p>
+
+<h2 id="High_level_overview">High level overview</h2>
+
+<p>Before we start looking at the use of AudioWorklet on a step-by-step basis, let's start with a brief high-level overview of what's involved.</p>
+
+<ol>
+ <li>Create a module that defines an audio worklet processor class, based on {{domxref("AudioWorkletProcessor")}}, which takes audio from one or more incoming sources, performs its operation on the data, and outputs the resulting audio data.</li>
+ <li>Access the audio context's {{domxref("AudioWorklet")}} through its {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}} property, and call the audio worklet's {{domxref("Worklet.addModule", "addModule()")}} method to install the audio worklet processor module.</li>
+ <li>As needed, create audio processing nodes by passing the processor's name (which is defined by the module) to the {{domxref("AudioWorkletNode.AudioWorkletNode", "AudioWorkletNode()")}} constructor.</li>
+ <li>Set up any audio parameters the {{domxref("AudioWorkletNode")}} needs, or that you wish to configure. These are defined in the audio worklet processor module.</li>
+ <li>Connect the created <code>AudioWorkletNode</code>s into your audio processing pipeline as you would any other node, then use your audio pipeline as usual.</li>
+</ol>
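+
+<p>For orientation, here's a minimal sketch of steps 2, 3, and 5 in action (the module itself, step 1, is covered in the next section). The module URL and processor name used here, <code>white-noise-processor.js</code> and <code>"white-noise-processor"</code>, are placeholders for this illustration; substitute your own.</p>
+
+<pre class="brush: js">// Assumes an async context (for example, a click handler marked async).
+const audioContext = new AudioContext();
+
+// Step 2: load and install the processor module.
+await audioContext.audioWorklet.addModule("white-noise-processor.js");
+
+// Step 3: create a node using the name the module registered.
+const workletNode = new AudioWorkletNode(audioContext, "white-noise-processor");
+
+// Step 5: wire the node into the audio graph as usual.
+workletNode.connect(audioContext.destination);
+</pre>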
+
+<p>Throughout the remainder of this article, we'll look at these steps in more detail, with examples (including working examples you can try out on your own).</p>
+
+<p>The example code found on this page is derived from <a href="https://mdn.github.io/webaudio-examples/audioworklet/">this working example</a> which is part of MDN's <a href="https://github.com/mdn/webaudio-examples/">GitHub repository of Web Audio examples</a>. The example creates an oscillator node and adds white noise to it using an {{domxref("AudioWorkletNode")}} before playing the resulting sound out. Slider controls are available to allow controlling the gain of both the oscillator and the audio worklet's output.</p>
+
+<p><a href="https://github.com/mdn/webaudio-examples/tree/master/audioworklet"><strong>See the code</strong></a></p>
+
+<p><a href="https://mdn.github.io/webaudio-examples/audioworklet/"><strong>Try it live</strong></a></p>
+
+<h2 id="Creating_an_audio_worklet_processor">Creating an audio worklet processor</h2>
+
+<p>Fundamentally, an audio worklet processor (which we'll usually refer to as either an "audio processor" or a "processor", because otherwise this article would be about twice as long) is implemented using a JavaScript module that defines and installs the custom audio processor class.</p>
+
+<h3 id="Structure_of_an_audio_worklet_processor">Structure of an audio worklet processor</h3>
+
+<p>An audio worklet processor is a JavaScript module which consists of the following:</p>
+
+<ul>
+ <li>A JavaScript class which defines the audio processor. This class extends the {{domxref("AudioWorkletProcessor")}} class.</li>
+ <li>The audio processor class must implement a {{domxref("AudioWorkletProcessor.process", "process()")}} method, which receives incoming audio data and writes back out the data as manipulated by the processor.</li>
+ <li>The module installs the new audio worklet processor class by calling {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, specifying a name for the audio processor and the class that defines the processor.</li>
+</ul>
+
+<p>A single audio worklet processor module may define multiple processor classes, registering each of them with an individual call to <code>registerProcessor()</code>. As long as each has its own unique name, this works just fine. It's also more efficient than loading multiple modules over the network or even from the user's local disk.</p>
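+
+<p>As a sketch, a module registering two processors might end like this (both classes and names here are hypothetical):</p>
+
+<pre class="brush: js">class GainProcessor extends AudioWorkletProcessor {
+  process(inputList, outputList, parameters) { /* ... */ return true; }
+}
+
+class NoiseProcessor extends AudioWorkletProcessor {
+  process(inputList, outputList, parameters) { /* ... */ return true; }
+}
+
+registerProcessor("gain-processor", GainProcessor);
+registerProcessor("noise-processor", NoiseProcessor);
+</pre>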
+
+<h3 id="Basic_code_framework">Basic code framework</h3>
+
+<p>The barest framework of an audio processor class looks like this:</p>
+
+<pre class="brush: js">class MyAudioProcessor extends AudioWorkletProcessor {
+  constructor() {
+  super();
+  }
+
+  process(inputList, outputList, parameters) {
+  /* using the inputs (or not, as needed), write the output
+  into each of the outputs */
+
+  return true;
+  }
+};
+
+registerProcessor("my-audio-processor", MyAudioProcessor);
+</pre>
+
+<p>After the implementation of the processor comes a call to the global function {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, which is only available within the scope of the audio context's {{domxref("AudioWorklet")}}, since the worklet is what invokes the processor script as a result of your call to {{domxref("Worklet.addModule", "audioWorklet.addModule()")}}. This call to <code>registerProcessor()</code> registers your class as the basis for any {{domxref("AudioWorkletProcessor")}}s created when {{domxref("AudioWorkletNode")}}s are set up.</p>
+
+<p>This is the barest framework, and it has no effect until code is added into <code>process()</code> to do something with those inputs and outputs. That brings us to those inputs and outputs.</p>
+
+<h3 id="The_input_and_output_lists">The input and output lists</h3>
+
+<p>The lists of inputs and outputs can be a little confusing at first, even though they're actually very simple once you realize what's going on.</p>
+
+<p>Let's start at the inside and work our way out. Fundamentally, the audio for a single audio channel (such as the left speaker or the subwoofer, for example) is represented as a <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array">Float32Array</a></code> whose values are the individual audio samples. By specification, each block of audio your <code>process()</code> function receives contains 128 frames (that is, 128 samples for each channel), but it is planned that <em>this value will change in the future</em>, and may in fact vary depending on circumstances, so you should <em>always</em> check the array's <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray/length">length</a></code> rather than assuming a particular size. It is, however, guaranteed that the inputs and outputs will have the same block length.</p>
+
+<p>Each input can have a number of channels. A mono input has a single channel; a stereo input has two channels. Surround sound might have six or more channels. So each input is, in turn, an array of channels; that is, an array of <code>Float32Array</code> objects.</p>
+
+<p>Then, there can be multiple inputs, so the <code>inputList</code> is an array of arrays of <code>Float32Array</code> objects. Each input may have a different number of channels, and each channel has its own array of samples.</p>
+
+<p>Thus, given the input list <code>inputList</code>:</p>
+
+<pre class="brush: js">const numberOfInputs = inputList.length;
+const firstInput = inputList[0];
+
+const firstInputChannelCount = firstInput.length;
+const firstInputFirstChannel = firstInput[0]; // (or inputList[0][0])
+
+const firstChannelByteCount = firstInputFirstChannel.length;
+const firstByteOfFirstChannel = firstInputFirstChannel[0]; // (or inputList[0][0][0])
+</pre>
+
+<p>The output list is structured in exactly the same way; it's an array of outputs, each of which is an array of channels. Each channel is a <code>Float32Array</code> containing the samples for that channel.</p>
+
+<p>How you use the inputs and how you generate the outputs depends very much on your processor. If your processor is just a generator, it can ignore the inputs and just replace the contents of the outputs with the generated data. Or you can process each input independently, applying an algorithm to the incoming data on each channel of each input and writing the results into the corresponding outputs' channels (keeping in mind that the number of inputs and outputs may differ, and the channel counts on those inputs and outputs may also differ). Or you can take all the inputs and perform mixing or other computations that result in a single output being filled with data (or all the outputs being filled with the same data).</p>
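+
+<p>For instance, a pure generator can ignore <code>inputList</code> entirely. The following minimal sketch of a <code>process()</code> method fills every output with white noise (the surrounding class and <code>registerProcessor()</code> call are omitted):</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  // Fill every channel of every output with random samples in [-1, 1).
+  for (const output of outputList) {
+    for (const channel of output) {
+      for (let i = 0; i &lt; channel.length; i++) {
+        channel[i] = Math.random() * 2 - 1;
+      }
+    }
+  }
+
+  return true;
+}
+</pre>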
+
+<p>It's entirely up to you. This is a very powerful tool in your audio programming toolkit.</p>
+
+<h3 id="Processing_multiple_inputs">Processing multiple inputs</h3>
+
+<p>Let's take a look at an implementation of <code>process()</code> that can process multiple inputs, with each input being used to generate the corresponding output. Any excess inputs are ignored.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const sourceLimit = Math.min(inputList.length, outputList.length);
+
+  for (let inputNum = 0; inputNum &lt; sourceLimit; inputNum++) {
+    let input = inputList[inputNum];
+    let output = outputList[inputNum];
+    let channelCount = Math.min(input.length, output.length);
+
+    for (let channelNum = 0; channelNum &lt; channelCount; channelNum++) {
+      let sampleCount = input[channelNum].length;
+
+      for (let i = 0; i &lt; sampleCount; i++) {
+        let sample = input[channelNum][i];
+
+ /* Manipulate the sample */
+
+        output[channelNum][i] = sample;
+      }
+    }
+  };
+
+  return true;
+}
+</pre>
+
+<p>Note that when determining the number of sources to process and send through to the corresponding outputs, we use <code><a href="/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/min">Math.min()</a></code> to ensure that we only process as many inputs as there are outputs to receive them. The same check is performed when determining how many channels to process in the current input; we only process as many as the destination output has room for. This avoids errors due to overrunning these arrays.</p>
+
+<h3 id="Mixing_inputs">Mixing inputs</h3>
+
+<p>Many nodes perform <strong>mixing</strong> operations, where the inputs are combined in some way into a single output. This is demonstrated in the following example.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const sourceLimit = Math.min(inputList.length, outputList.length);
+   for (let inputNum = 0; inputNum &lt; sourceLimit; inputNum++) {
+     let input = inputList[inputNum];
+     let output = outputList[0];
+     let channelCount = Math.min(input.length, output.length);
+
+     for (let channelNum = 0; channelNum &lt; channelCount; channelNum++) {
+       let sampleCount = input[channelNum].length;
+
+       for (let i = 0; i &lt; sampleCount; i++) {
+         let sample = output[channelNum][i] + input[channelNum][i];
+
+ if (sample &gt; 1.0) {
+  sample = 1.0;
+  } else if (sample &lt; -1.0) {
+  sample = -1.0;
+  }
+
+         output[channelNum][i] = sample;
+       }
+     }
+   };
+
+  return true;
+}
+</pre>
+
+<p>This code is similar to the previous sample in many ways, but only the first output, <code>outputList[0]</code>, is altered. Each sample is added to the corresponding sample in the output buffer, with a simple code fragment in place to prevent the samples from exceeding the legal range of -1.0 to 1.0 by capping the values; there are other ways to avoid clipping that are perhaps less prone to distortion, but this simple approach is better than nothing.</p>
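+
+<p>One gentler alternative is a <strong>soft clipper</strong>, which squeezes values toward the -1.0 to 1.0 range smoothly rather than truncating them. As a sketch, the capping code above could be replaced by a curve such as <code>Math.tanh()</code> (one common choice among several):</p>
+
+<pre class="brush: js">// Soft-clip the mixed sample; tanh approaches ±1.0 asymptotically,
+// so the result never leaves the legal range.
+output[channelNum][i] = Math.tanh(output[channelNum][i] + input[channelNum][i]);
+</pre>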
+
+<h2 id="Lifetime_of_an_audio_worklet_processor">Lifetime of an audio worklet processor</h2>
+
+<p>The only means by which you can influence the lifespan of your audio worklet processor is through the value returned by <code>process()</code>: a Boolean value indicating whether to override the {{Glossary("user agent")}}'s own decision about whether your node is still in use.</p>
+
+<p>In general, the lifetime policy of any audio node is simple: if the node is still considered to be actively processing audio, it will continue to be used. In the case of an {{domxref("AudioWorkletNode")}}, the node is considered to be active if its <code>process()</code> function returns <code>true</code> <em>and</em> the node is either generating content as a source for audio data, or is receiving data from one or more inputs.</p>
+
+<p>Specifying a value of <code>true</code> as the result from your <code>process()</code> function in essence tells the Web Audio API that your processor needs to keep being called even if the API doesn't think there's anything left for you to do. In other words, <code>true</code> overrides the API's logic and gives you control over your processor's lifetime policy, keeping the processor's owning {{domxref("AudioWorkletNode")}} running even when it would otherwise decide to shut down the node.</p>
+
+<p>Returning <code>false</code> from the <code>process()</code> method tells the API that it should follow its normal logic and shut down your processor node if it deems it appropriate to do so. If the API determines that your node is no longer needed, <code>process()</code> will not be called again.</p>
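+
+<p>For example, a processor with a finite tail, such as a fade-out effect, might track its own state and relinquish control when it finishes. In this minimal sketch, <code>this.samplesRemaining</code> is a hypothetical counter that the rest of the class would maintain:</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  /* ... generate or process the remaining tail ... */
+
+  // this.samplesRemaining is a hypothetical counter set up elsewhere
+  // in the class; count down by one block's worth of frames.
+  this.samplesRemaining -= outputList[0][0].length;
+
+  // Keep the node alive until the tail is finished, then let the
+  // user agent apply its normal lifetime logic.
+  return this.samplesRemaining &gt; 0;
+}
+</pre>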
+
+<div class="notecard note">
+<p><strong>Note:</strong> At this time, unfortunately, Chrome does not implement this algorithm in a manner that matches the current standard. Instead, it keeps the node alive if you return <code>true</code> and shuts it down if you return <code>false</code>. Thus for compatibility reasons you must always return <code>true</code> from <code>process()</code>, at least on Chrome. However, once <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=921354">this Chrome issue</a> is fixed, you will want to change this behavior if possible as it may have a slight negative impact on performance.</p>
+</div>
+
+<h2 id="Creating_an_audio_processor_worklet_node">Creating an audio processor worklet node</h2>
+
+<p>To create an audio node that pumps blocks of audio data through an {{domxref("AudioWorkletProcessor")}}, you need to follow these simple steps:</p>
+
+<ol>
+ <li>Load and install the audio processor module</li>
+ <li>Create an {{domxref("AudioWorkletNode")}}, specifying the audio processor module to use by its name</li>
+ <li>Connect inputs to the <code>AudioWorkletNode</code> and its outputs to appropriate destinations (either other nodes or the {{domxref("AudioContext")}} object's {{domxref("AudioContext.destination", "destination")}} property).</li>
+</ol>
+
+<p>To use an audio worklet processor, you can use code similar to the following:</p>
+
+<pre class="brush: js">let audioContext = null;
+
+async function createMyAudioProcessor() {
+  if (!audioContext) {
+  try {
+   audioContext = new AudioContext();
+  await audioContext.resume();
+   await audioContext.audioWorklet.addModule("module-url/module.js");
+  } catch(e) {
+  return null;
+  }
+ }
+
+  return new AudioWorkletNode(audioContext, "processor-name");
+}
+</pre>
+
+<p>This <code>createMyAudioProcessor()</code> function creates and returns a new instance of {{domxref("AudioWorkletNode")}} configured to use your audio processor. It also handles creating the audio context if it hasn't already been done.</p>
+
+<p>In order to ensure the context is usable, this starts by creating the context if it's not already available, then adds the module containing the processor to the worklet. Once that's done, it instantiates and returns a new <code>AudioWorkletNode</code>. Once you have that in hand, you connect it to other nodes and otherwise use it just like any other node.</p>
+
+<p>You can then create a new audio processor node like this (since <code>createMyAudioProcessor()</code> is asynchronous, we <code>await</code> its result):</p>
+
+<pre class="brush: js">let newProcessorNode = await createMyAudioProcessor();</pre>
+
+<p>If the returned value, <code>newProcessorNode</code>, is non-<code>null</code>, we have a valid audio context with its processor node in place and ready to use.</p>
+
+<h2 id="Supporting_audio_parameters">Supporting audio parameters</h2>
+
+<p>Just like any other Web Audio node, {{domxref("AudioWorkletNode")}} supports parameters, which are shared with the {{domxref("AudioWorkletProcessor")}} that does the actual work.</p>
+
+<h3 id="Adding_parameter_support_to_the_processor">Adding parameter support to the processor</h3>
+
+<p>To add parameters to an {{domxref("AudioWorkletNode")}}, you need to define them within your {{domxref("AudioWorkletProcessor")}}-based processor class in your module. This is done by adding the static getter {{domxref("AudioWorkletProcessor.parameterDescriptors", "parameterDescriptors")}} to your class. This getter should return an array of {{domxref("AudioParamDescriptor")}} objects, one for each parameter supported by the processor.</p>
+
+<p>In the following implementation of <code>parameterDescriptors</code>, the returned array has two parameter descriptors. The first defines <code>gain</code> as a value between 0 and 1, with a default value of 0.5. The second parameter is named <code>frequency</code> and defaults to 440.0, with a range from 27.5 to 4186.009, inclusive.</p>
+
+<pre class="brush: js">static get parameterDescriptors() {
+ return [
+ {
+ name: "gain",
+ defaultValue: 0.5,
+ minValue: 0,
+ maxValue: 1
+ },
+  {
+  name: "frequency",
+  defaultValue: 440.0;
+  minValue: 27.5,
+  maxValue: 4186.009
+  }
+ ];
+}</pre>
+
+<p>Accessing your processor node's parameters is as simple as looking them up in the <code>parameters</code> object passed into your implementation of {{domxref("AudioWorkletProcessor.process", "process()")}}. The <code>parameters</code> object contains one array per parameter, keyed by the parameter's name.</p>
+
+<dl>
+ <dt>A-rate parameters</dt>
+ <dd>For a-rate parameters, that is, parameters whose values automatically change over time, the parameter's entry in the <code>parameters</code> object is an array of values, one for each frame in the block being processed. These values are to be applied to the corresponding frames.</dd>
+ <dt>K-rate parameters</dt>
+ <dd>K-rate parameters, on the other hand, can only change once per block, so the parameter's array has only a single entry. Use that value for every frame in the block.</dd>
+</dl>
+
+<p>In the code below, we see a <code>process()</code> function that handles a <code>gain</code> parameter which can be used as either an a-rate or k-rate parameter. Our node only supports one input, so it just takes the first input in the list, applies the gain to it, and writes the resulting data to the first output's buffer.</p>
+
+<pre class="brush: js">process(inputList, outputList, parameters) {
+  const input = inputList[0];
+  const output = outputList[0];
+  const gain = parameters.gain;
+
+  for (let channelNum = 0; channelNum &lt; input.length; channel++) {
+  const inputChannel = input[channel];
+  const outputChannel = output[channel];
+
+ // If gain.length is 1, it's a k-rate parameter, so apply
+ // the first entry to every frame. Otherwise, apply each
+ // entry to the corresponding frame.
+
+  if (gain.length === 1) {
+  for (let i = 0; i &lt; inputChannel.length; i++) {
+  outputChannel[i] = inputChannel[i] * gain[0];
+  }
+  } else {
+  for (let i = 0; i &lt; inputChannel.length; i++) {
+  outputChannel[i] = inputChannel[i] * gain[i];
+  }
+  }
+  }
+
+  return true;
+}
+</pre>
+
+<p>Here, if <code>gain.length</code> indicates that there's only a single value in the <code>gain</code> parameter's array of values, the first entry in the array is applied to every frame in the block. Otherwise, for each frame in the block, the corresponding entry in <code>gain[]</code> is applied.</p>
+
+<h3 id="Accessing_parameters_from_the_main_thread_script">Accessing parameters from the main thread script</h3>
+
+<p>Your main thread script can access the parameters just as it can those of any other node. To do so, first get a reference to the parameter by calling the {{domxref("AudioParamMap.get", "get()")}} method of the {{domxref("AudioWorkletNode")}}'s {{domxref("AudioWorkletNode.parameters", "parameters")}} property:</p>
+
+<pre class="brush: js">let gainParam = myAudioWorkletNode.parameters.get("gain");
+</pre>
+
+<p>The value returned and stored in <code>gainParam</code> is the {{domxref("AudioParam")}} used to store the <code>gain</code> parameter. You can then change its value effective at a given time using the {{domxref("AudioParam")}} method {{domxref("AudioParam.setValueAtTime", "setValueAtTime()")}}.</p>
+
+<p>Here, for example, we set the value to <code>newValue</code>, effective immediately.</p>
+
+<pre class="brush: js">gainParam.setValueAtTime(newValue, audioContext.currentTime);</pre>
+
+<p>You can similarly use any of the other methods in the {{domxref("AudioParam")}} interface to apply changes over time, to cancel scheduled changes, and so forth.</p>
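+
+<p>For example, to fade the gain out over two seconds, you might anchor the current value and then schedule a linear ramp using {{domxref("AudioParam.linearRampToValueAtTime", "linearRampToValueAtTime()")}}:</p>
+
+<pre class="brush: js">// Anchor the ramp at the current value, then ramp to silence over 2 seconds.
+gainParam.setValueAtTime(gainParam.value, audioContext.currentTime);
+gainParam.linearRampToValueAtTime(0, audioContext.currentTime + 2);
+</pre>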
+
+<p>Reading the value of a parameter is as simple as looking at its {{domxref("AudioParam.value", "value")}} property:</p>
+
+<pre class="brush: js">let currentGain = gainParam.value;</pre>
+
+<h2 id="See_also">See also</h2>
+
+<ul>
+ <li><a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a></li>
+ <li><a href="https://developers.google.com/web/updates/2017/12/audio-worklet">Enter Audio Worklet</a> (Google Developers blog)</li>
+</ul>