---
title: AudioWorkletProcessor
slug: Web/API/AudioWorkletProcessor
translation_of: Web/API/AudioWorkletProcessor
---
{{APIRef("Web Audio API")}}
The `AudioWorkletProcessor` interface of the Web Audio API represents the custom audio processing code behind an {{domxref("AudioWorkletNode")}}. It lives in the {{domxref("AudioWorkletGlobalScope")}} and runs on the Web Audio rendering thread, while the {{domxref("AudioWorkletNode")}} based on it runs on the main thread.
`AudioWorkletProcessor` and its subclasses cannot be instantiated directly by user-supplied code. They are created only internally when an associated {{domxref("AudioWorkletNode")}} is constructed. The constructor of the derived class is called with an optional options object, so you can perform custom initialization procedures; see the constructor page for details. The `AudioWorkletProcessor()` constructor creates a new instance of an `AudioWorkletProcessor` object.
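As a rough sketch of how that options object can be used, the processor below reads a value passed through `processorOptions` when the corresponding node is created. The class name, registered name, and the `threshold` option are illustrative assumptions, not something defined by this interface.

```js
// noise-gate-processor.js (hypothetical module; names are illustrative)
class NoiseGateProcessor extends AudioWorkletProcessor {
  constructor (options) {
    super(options)
    // processorOptions carries arbitrary data supplied to the
    // AudioWorkletNode constructor on the main thread.
    this.threshold = options.processorOptions?.threshold ?? 0.1
  }

  process (inputs, outputs, parameters) {
    // ... use this.threshold while filling the outputs ...
    return true
  }
}

registerProcessor('noise-gate-processor', NoiseGateProcessor)
```

On the main thread the value would then be supplied as, for example, `new AudioWorkletNode(audioContext, 'noise-gate-processor', { processorOptions: { threshold: 0.2 } })`.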
The `AudioWorkletProcessor` interface does not define any methods of its own. However, you must provide a {{domxref("AudioWorkletProcessor.process", "process()")}} method, which is called in order to process the audio stream.
The `AudioWorkletProcessor` interface does not respond to any events.
To define custom audio processing code, you must derive a class from the `AudioWorkletProcessor` interface. The derived class must implement the {{domxref("AudioWorkletProcessor.process", "process()")}} method, which is not defined by this interface itself. This method is called for each block of 128 sample-frames and receives the input and output arrays, as well as the calculated values of any custom {{domxref("AudioParam")}}s (if they are defined), as parameters. You can use the inputs and the audio parameter values to fill the outputs array, which by default holds silence.
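To make the shape of those arguments concrete, here is a minimal sketch of a derived class that simply copies its input to its output; the class name and registered name are assumptions for illustration, not part of this page's example.

```js
// pass-through-processor.js (illustrative names)
class PassThroughProcessor extends AudioWorkletProcessor {
  process (inputs, outputs, parameters) {
    // inputs[n][m] and outputs[n][m] are Float32Arrays holding one block
    // of 128 sample-frames for input/output n, channel m.
    const input = inputs[0]
    const output = outputs[0]
    for (let channel = 0; channel < input.length; channel++) {
      // Copy each input channel into the matching output channel;
      // the outputs start out filled with silence.
      output[channel].set(input[channel])
    }
    // Returning true keeps the processor alive even when its inputs go quiet.
    return true
  }
}

registerProcessor('pass-through-processor', PassThroughProcessor)
```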
Optionally, if you want custom {{domxref("AudioParam")}}s on your node, you can supply a {{domxref("AudioWorkletProcessor.parameterDescriptors", "parameterDescriptors")}} property as a static getter on the processor. The array of {{domxref("AudioParamDescriptor")}}-based objects returned is used internally to create the {{domxref("AudioParam")}}s during the instantiation of the `AudioWorkletNode`.
The resulting `AudioParam`s reside in the {{domxref("AudioWorkletNode.parameters", "parameters")}} property of the node and can be automated using standard methods such as `linearRampToValueAtTime()`. Their calculated values will be passed into the {{domxref("AudioWorkletProcessor.process", "process()")}} method of the processor for you to shape the node output accordingly.
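As a sketch of how these pieces fit together, the processor below declares a single custom parameter named `gain` and applies its calculated values in `process()`; the names and value ranges here are assumptions for illustration.

```js
// gain-processor.js (illustrative names and values)
class GainProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors () {
    return [
      { name: 'gain', defaultValue: 1, minValue: 0, maxValue: 1, automationRate: 'a-rate' }
    ]
  }

  process (inputs, outputs, parameters) {
    const input = inputs[0]
    const output = outputs[0]
    const gain = parameters.gain
    for (let channel = 0; channel < input.length; channel++) {
      for (let i = 0; i < input[channel].length; i++) {
        // gain holds 128 values while it is being automated,
        // or a single value when it is constant over the block.
        output[channel][i] = input[channel][i] * (gain.length > 1 ? gain[i] : gain[0])
      }
    }
    return true
  }
}

registerProcessor('gain-processor', GainProcessor)
```

On the main thread, the corresponding parameter could then be automated with something like `node.parameters.get('gain').linearRampToValueAtTime(0, audioContext.currentTime + 2)`.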
An example of the steps needed to create a custom audio processing algorithm:

1. Create a separate file for the processor.
2. In that file, extend the `AudioWorkletProcessor` class (as described above) and supply your own {{domxref("AudioWorkletProcessor.process", "process()")}} method in it, then register the processor under a name with `registerProcessor()`.
3. Load the file with `addModule()` on the audio context's `audioWorklet` property.
4. Create an {{domxref("AudioWorkletNode")}} based on the processor; the processor itself is instantiated internally by the `AudioWorkletNode` constructor.
5. Connect the node to other nodes in your audio graph.

In the example below we create a custom {{domxref("AudioWorkletNode")}} that outputs white noise.
First, we need to define a custom `AudioWorkletProcessor`, which will output white noise, and register it. Note that this should be done in a separate file.
```js
// white-noise-processor.js
class WhiteNoiseProcessor extends AudioWorkletProcessor {
  process (inputs, outputs, parameters) {
    const output = outputs[0]
    output.forEach(channel => {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1
      }
    })
    return true
  }
}

registerProcessor('white-noise-processor', WhiteNoiseProcessor)
```
Next, in our main script file we'll load the processor, create an instance of {{domxref("AudioWorkletNode")}}, passing it the name of the processor, then connect the node to an audio graph.
```js
const audioContext = new AudioContext()
await audioContext.audioWorklet.addModule('white-noise-processor.js')
const whiteNoiseNode = new AudioWorkletNode(audioContext, 'white-noise-processor')
whiteNoiseNode.connect(audioContext.destination)
```
| Specification | Status | Comment |
| --- | --- | --- |
| {{SpecName('Web Audio API', '#audioworkletprocessor', 'AudioWorkletProcessor')}} | {{Spec2('Web Audio API')}} | |
{{Compat("api.AudioWorkletProcessor")}}