---
title: AudioContext.createScriptProcessor()
slug: Web/API/BaseAudioContext/createScriptProcessor
translation_of: Web/API/BaseAudioContext/createScriptProcessor
original_slug: Web/API/AudioContext/createScriptProcessor
---
<p>{{ APIRef("Web Audio API") }}</p>
<div>
<p>The <code>createScriptProcessor()</code> method of the {{ domxref("AudioContext") }} interface creates a {{domxref("ScriptProcessorNode")}} used for direct audio processing via JavaScript.</p>
</div>
<h2 id="语法">语法</h2>
<pre class="brush: js">var audioCtx = new AudioContext();
myScriptProcessor = audioCtx.createScriptProcessor(<code>bufferSize</code>, <code>numberOfInputChannels</code>, <code>numberOfOutputChannels</code>);</pre>
<h3 id="Parameters" name="Parameters">参数</h3>
<dl>
<dt><code>bufferSize</code></dt>
<dd>The buffer size in units of sample-frames. If specified, the bufferSize must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. If it is not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be a constant power of 2 throughout the lifetime of the node.</dd>
<dd>This value controls how frequently the <code>audioprocess</code> event is dispatched and how many sample-frames are processed on each call. Lower values for <code>bufferSize</code> will result in lower (better) latency; higher values will be necessary to avoid audio breakup and glitches. It is recommended that authors not specify this buffer size and instead allow the implementation to pick a good value that balances latency and audio quality (see the sketch after this list).</dd>
<dt><code>numberOfInputChannels</code></dt>
<dd>Integer specifying the number of channels for this node's input, defaults to 2. Values of up to 32 are supported.</dd>
<dt><code>numberOfOutputChannels</code></dt>
<dd>Integer specifying the number of channels for this node's output, defaults to 2. Values of up to 32 are supported.</dd>
</dl>
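<p>As a minimal sketch of the recommended approach (assuming a fresh <code>AudioContext</code>), you can pass <code>0</code> for <code>bufferSize</code> so the implementation picks the value itself, then inspect what it chose:</p>
<pre class="brush: js">var audioCtx = new AudioContext();

// Pass 0 so the implementation chooses a power-of-two buffer size
// that balances latency against the risk of glitches.
var processor = audioCtx.createScriptProcessor(0, 1, 1);

// The chosen size stays constant for the lifetime of the node.
console.log(processor.bufferSize); // e.g. 256, 1024, ... (implementation-dependent)
</pre>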
<div class="warning">
<p><strong>Important</strong>: Webkit (version 31) requires that a valid <code>bufferSize</code> be passed when calling this method.</p>
</div>
<div class="note">
<p><strong>Note</strong>: It is invalid for both <code>numberOfInputChannels</code> and <code>numberOfOutputChannels</code> to be zero.</p>
</div>
<h3 id="Description" name="Description">返回</h3>
<p>A {{domxref("ScriptProcessorNode")}}.</p>
<h2 id="示例">示例</h2>
<p>The following example shows basic usage of a <code>ScriptProcessorNode</code> to take a track loaded via {{ domxref("AudioContext.decodeAudioData") }}, add a bit of white noise to each audio sample, and play it through the {{domxref("AudioDestinationNode")}} (in effect, the system's speakers). For each channel and each sample frame, the <code>scriptNode.onaudioprocess</code> handler takes the associated <code>audioProcessingEvent</code> and uses it to loop through each channel of the input buffer, and each sample in each channel, adding a small amount of white noise before setting that result as the output sample.</p>
<div class="note">
<p><strong>Note</strong>: For a full working example, see <a href="https://mdn.github.io/webaudio-examples/script-processor-node/">script-processor-node</a> on GitHub (or view the <a href="https://github.com/mdn/webaudio-examples/blob/master/script-processor-node/index.html">source code</a>.)</p>
</div>
<pre class="brush: js">var myScript = document.querySelector('script');
var myPre = document.querySelector('pre');
var playButton = document.querySelector('button');
// Create AudioContext and buffer source
var audioCtx = new AudioContext();
source = audioCtx.createBufferSource();
// Create a ScriptProcessorNode with a bufferSize of 4096 and a single input and output channel
var scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);
console.log(scriptNode.bufferSize);
// load in an audio track via XHR and decodeAudioData
function getData() {
request = new XMLHttpRequest();
request.open('GET', 'viper.ogg', true);
request.responseType = 'arraybuffer';
request.onload = function() {
var audioData = request.response;
audioCtx.decodeAudioData(audioData, function(buffer) {
myBuffer = buffer;
source.buffer = myBuffer;
},
function(e){"Error with decoding audio data" + e.err});
}
request.send();
}
// Give the node a function to process audio events
scriptNode.onaudioprocess = function(audioProcessingEvent) {
// The input buffer is the song we loaded earlier
var inputBuffer = audioProcessingEvent.inputBuffer;
// The output buffer contains the samples that will be modified and played
var outputBuffer = audioProcessingEvent.outputBuffer;
// Loop through the output channels (in this case there is only one)
for (var channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
var inputData = inputBuffer.getChannelData(channel);
var outputData = outputBuffer.getChannelData(channel);
// Loop through the 4096 samples
for (var sample = 0; sample < inputBuffer.length; sample++) {
// make output equal to the same as the input
outputData[sample] = inputData[sample];
// add noise to each output sample
outputData[sample] += ((Math.random() * 2) - 1) * 0.2;
}
}
}
getData();
// wire up play button
playButton.onclick = function() {
source.connect(scriptNode);
scriptNode.connect(audioCtx.destination);
source.start();
}
// When the buffer source stops playing, disconnect everything
source.onended = function() {
source.disconnect(scriptNode);
scriptNode.disconnect(audioCtx.destination);
}
</pre>
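<p>As a rough guide, each <code>audioprocess</code> event covers <code>bufferSize / sampleRate</code> seconds of audio, so the buffer size chosen above translates directly into processing latency. A quick sanity check, assuming the <code>scriptNode</code> and <code>audioCtx</code> from the example:</p>
<pre class="brush: js">// With a bufferSize of 4096 at a 44100 Hz sample rate, each
// audioprocess event carries about 93 ms of audio (4096 / 44100 ≈ 0.0929 s).
var secondsPerEvent = scriptNode.bufferSize / audioCtx.sampleRate;
console.log(secondsPerEvent);
</pre>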
<h2 id="规范">规范</h2>
<table class="standard-table">
<tbody>
<tr>
<th scope="col">Specification</th>
<th scope="col">Status</th>
<th scope="col">Comment</th>
</tr>
<tr>
<td>{{SpecName('Web Audio API', '#widl-AudioContext-createScriptProcessor-ScriptProcessorNode-unsigned-long-bufferSize-unsigned-long-numberOfInputChannels-unsigned-long-numberOfOutputChannels', 'createScriptProcessor')}}</td>
<td>{{Spec2('Web Audio API')}}</td>
<td></td>
</tr>
</tbody>
</table>
<h2 id="浏览器兼容性">浏览器兼容性</h2>
{{Compat("api.BaseAudioContext.createScriptProcessor")}}
<h2 id="See_also">See also</h2>
<ul>
<li><a href="/en-US/docs/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
</ul>