---
title: AudioContext.createAnalyser()
slug: Web/API/BaseAudioContext/createAnalyser
translation_of: Web/API/BaseAudioContext/createAnalyser
original_slug: Web/API/AudioContext/createAnalyser
---
<p>{{ APIRef("Web Audio API") }}</p>
<div>
<p>The <code>createAnalyser()</code> method of the {{ domxref("AudioContext") }} interface creates an {{ domxref("AnalyserNode") }}, which can be used to expose audio time and frequency data, for example to create data visualizations.</p>
</div>
<div class="note">
<p><strong>Note</strong>: For more on using this node, see the {{domxref("AnalyserNode")}} page.</p>
</div>
<h2 id="语法">语法</h2>
<pre class="brush: js">var audioCtx = new AudioContext();
var analyser = audioCtx.createAnalyser();</pre>
<h3 id="Description" name="Description">返回值</h3>
<p>{{domxref("AnalyserNode")}}对象</p>
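<p>The returned node has no connections at first. A minimal sketch of wiring it into an audio graph is shown below; <code>audioElement</code> is a hypothetical {{htmlelement("audio")}} element used only for illustration:</p>
<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioCtx.createAnalyser();

// audioElement is a hypothetical &lt;audio&gt; element defined elsewhere.
// The analyser passes audio through unchanged; it only inspects it.
var source = audioCtx.createMediaElementSource(audioElement);
source.connect(analyser);
analyser.connect(audioCtx.destination);</pre>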
<h2 id="Examples" name="Examples">例子</h2>
<p>The following example shows basic usage of an AudioContext to create an analyser node, which is then used with <code>requestAnimationFrame()</code> to repeatedly collect time domain data and draw an "oscilloscope style" output of the current audio input. For a more complete applied example, check out our <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> demo (see <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js lines 128–205</a> for the relevant code).</p>
<pre class="brush: js">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioCtx.createAnalyser();
...
analyser.fftSize = 2048;
var bufferLength = analyser.fftSize;
var dataArray = new Uint8Array(bufferLength);
analyser.getByteTimeDomainData(dataArray);
// draw an oscilloscope of the current audio source
function draw() {
drawVisual = requestAnimationFrame(draw);
analyser.getByteTimeDomainData(dataArray);
canvasCtx.fillStyle = 'rgb(200, 200, 200)';
canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
canvasCtx.lineWidth = 2;
canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
canvasCtx.beginPath();
var sliceWidth = WIDTH * 1.0 / bufferLength;
var x = 0;
for(var i = 0; i < bufferLength; i++) {
var v = dataArray[i] / 128.0;
var y = v * HEIGHT/2;
if(i === 0) {
canvasCtx.moveTo(x, y);
} else {
canvasCtx.lineTo(x, y);
}
x += sliceWidth;
}
canvasCtx.lineTo(canvas.width, canvas.height/2);
canvasCtx.stroke();
};
draw();</pre>
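<p>The same node can also expose frequency domain data via {{domxref("AnalyserNode.getByteFrequencyData")}}. The sketch below is not part of the demo above; it reuses the <code>audioCtx</code> and <code>analyser</code> from it, and <code>logAverageLevel</code> is a hypothetical helper that logs the average level across all frequency bins:</p>
<pre class="brush: js">analyser.fftSize = 256;
// frequencyBinCount is always fftSize / 2
var freqData = new Uint8Array(analyser.frequencyBinCount);

// hypothetical helper, for illustration only
function logAverageLevel() {
  analyser.getByteFrequencyData(freqData);

  // each bin holds a 0–255 magnitude for one slice of the spectrum
  var sum = 0;
  for (var i = 0; i < freqData.length; i++) {
    sum += freqData[i];
  }
  console.log('average level:', sum / freqData.length);

  requestAnimationFrame(logAverageLevel);
}

logAverageLevel();</pre>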
<h2 id="规范">规范</h2>
<table class="standard-table">
<tbody>
<tr>
<th scope="col">Specification</th>
<th scope="col">Status</th>
<th scope="col">Comment</th>
</tr>
<tr>
<td>{{SpecName('Web Audio API', '#widl-AudioContext-createAnalyser-AnalyserNode', 'createAnalyser()')}}</td>
<td>{{Spec2('Web Audio API')}}</td>
<td> </td>
</tr>
</tbody>
</table>
<h2 id="浏览器兼容性">浏览器兼容性</h2>
<div>{{CompatibilityTable}}</div>
<div id="compat-desktop">
<table class="compat-table">
<tbody>
<tr>
<th>Feature</th>
<th>Chrome</th>
<th>Firefox (Gecko)</th>
<th>Internet Explorer</th>
<th>Opera</th>
<th>Safari (WebKit)</th>
</tr>
<tr>
<td>Basic support</td>
<td>{{CompatChrome(10.0)}}{{property_prefix("webkit")}}</td>
<td>{{CompatGeckoDesktop(25.0)}}</td>
<td>{{CompatNo}}</td>
<td>15.0{{property_prefix("webkit")}}<br>
22</td>
<td>6.0{{property_prefix("webkit")}}</td>
</tr>
</tbody>
</table>
</div>
<div id="compat-mobile">
<table class="compat-table">
<tbody>
<tr>
<th>Feature</th>
<th>Android</th>
<th>Firefox Mobile (Gecko)</th>
<th>Firefox OS</th>
<th>IE Mobile</th>
<th>Opera Mobile</th>
<th>Safari Mobile</th>
<th>Chrome for Android</th>
</tr>
<tr>
<td>Basic support</td>
<td>{{CompatUnknown}}</td>
<td>26.0</td>
<td>1.2</td>
<td>{{CompatUnknown}}</td>
<td>{{CompatUnknown}}</td>
<td>{{CompatUnknown}}</td>
<td>33.0</td>
</tr>
</tbody>
</table>
</div>
<h2 id="另见">另见</h2>
<ul>
<li><a href="/en-US/docs/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
</ul>