---
title: AudioContext.createScriptProcessor()
slug: Web/API/BaseAudioContext/createScriptProcessor
translation_of: Web/API/BaseAudioContext/createScriptProcessor
original_slug: Web/API/AudioContext/createScriptProcessor
---
{{ APIRef("Web Audio API") }}
{{ domxref("AudioContext") }} 接口的createScriptProcessor() 方法创建一个{{domxref("ScriptProcessorNode")}} 用于通过JavaScript直接处理音频.
## Syntax

```js
var audioCtx = new AudioContext();
myScriptProcessor = audioCtx.createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels);
```
### Parameters

- `bufferSize`
  - : The buffer size in units of sample-frames. This value controls how often the `audioprocess` event is dispatched and how many sample-frames are processed in each call. A lower value results in lower (better) latency, while a higher value is necessary to avoid audio breakup and glitches. It is recommended that authors not specify this buffer size and instead let the implementation pick a good value that balances latency and audio quality (see the sketch after the note below).
- `numberOfInputChannels`
  - : Integer specifying the number of channels for this node's input; defaults to 2.
- `numberOfOutputChannels`
  - : Integer specifying the number of channels for this node's output; defaults to 2.

**Important:** WebKit currently (version 31) requires that a valid `bufferSize` be passed when calling this method.
**Note:** `numberOfInputChannels` and `numberOfOutputChannels` cannot both be zero; having both be zero is invalid.
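For instance, here is a minimal sketch of the recommended pattern: passing `0` as the buffer size so that the implementation picks a value itself (the channel counts are illustrative, and note the WebKit caveat above):

```js
var audioCtx = new AudioContext();

// Pass 0 as the bufferSize so the implementation chooses a value
// that balances latency and audio quality; 1 input and 1 output channel.
var processor = audioCtx.createScriptProcessor(0, 1, 1);

// The buffer size that was actually chosen can be read back from the node.
console.log(processor.bufferSize);
```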
A {{domxref("ScriptProcessorNode")}}.
## Example

The following example shows basic usage of a `ScriptProcessorNode`. The source is loaded with {{ domxref("AudioContext.decodeAudioData") }}, a little white noise is added to every audio sample, and the result is played through the {{domxref("AudioDestinationNode")}} (which is, in effect, the system speakers). For each channel and each sample frame, the `scriptNode.onaudioprocess` function takes the associated `audioProcessingEvent` and uses it to loop through each channel of the input buffer, and each sample in each channel, adding a small amount of white noise before writing the result as the output sample.
**Note:** For a complete example, see the script-processor-node demo on GitHub (or view its source code).
```js
var myScript = document.querySelector('script');
var myPre = document.querySelector('pre');
var playButton = document.querySelector('button');

// Create AudioContext and buffer source
var audioCtx = new AudioContext();
var source = audioCtx.createBufferSource();

// Create a ScriptProcessorNode with a bufferSize of 4096 and a single input and output channel
var scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);
console.log(scriptNode.bufferSize);
// load in an audio track via XHR and decodeAudioData
function getData() {
  var request = new XMLHttpRequest();
  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    var audioData = request.response;
    audioCtx.decodeAudioData(audioData, function(buffer) {
      source.buffer = buffer;
    },
    function(e) {
      console.error("Error with decoding audio data" + e.err);
    });
  }
  request.send();
}
// Give the node a function to process audio events
scriptNode.onaudioprocess = function(audioProcessingEvent) {
  // The input buffer is the song we loaded earlier
  var inputBuffer = audioProcessingEvent.inputBuffer;

  // The output buffer contains the samples that will be modified and played
  var outputBuffer = audioProcessingEvent.outputBuffer;

  // Loop through the output channels (in this case there is only one)
  for (var channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
    var inputData = inputBuffer.getChannelData(channel);
    var outputData = outputBuffer.getChannelData(channel);

    // Loop through the 4096 samples
    for (var sample = 0; sample < inputBuffer.length; sample++) {
      // make output equal to the same as the input
      outputData[sample] = inputData[sample];

      // add noise to each output sample
      outputData[sample] += ((Math.random() * 2) - 1) * 0.2;
    }
  }
}
getData();

// wire up play button
playButton.onclick = function() {
  source.connect(scriptNode);
  scriptNode.connect(audioCtx.destination);
  source.start();
}

// When the buffer source stops playing, disconnect everything
source.onended = function() {
  source.disconnect(scriptNode);
  scriptNode.disconnect(audioCtx.destination);
}
```
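As an aside, the XHR-based loader in `getData()` above could also be written with `fetch()` and the promise-based form of `decodeAudioData()`. This is only a sketch, not part of the original demo: the function name `getDataWithFetch` is made up here, and it assumes the `audioCtx` and `source` variables from the example, as well as browser support for the promise-returning `decodeAudioData()`:

```js
// Hypothetical alternative to getData(): load and decode the track
// using fetch() and promise-based decodeAudioData().
function getDataWithFetch() {
  fetch('viper.ogg')
    .then(function(response) { return response.arrayBuffer(); })
    .then(function(arrayBuffer) { return audioCtx.decodeAudioData(arrayBuffer); })
    .then(function(buffer) { source.buffer = buffer; })
    .catch(function(e) { console.error('Error with decoding audio data', e); });
}
```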
## Specifications

| Specification | Status | Comment |
| --- | --- | --- |
| {{SpecName('Web Audio API', '#widl-AudioContext-createScriptProcessor-ScriptProcessorNode-unsigned-long-bufferSize-unsigned-long-numberOfInputChannels-unsigned-long-numberOfOutputChannels', 'createScriptProcessor')}} | {{Spec2('Web Audio API')}} |  |