---
title: AudioContext.createChannelMerger()
slug: Web/API/BaseAudioContext/createChannelMerger
tags:
  - API
  - Audio
  - AudioContext
  - Audio_Chinese
translation_of: Web/API/BaseAudioContext/createChannelMerger
original_slug: Web/API/AudioContext/createChannelMerger
---
<p>{{ APIRef("Web Audio API") }}</p>

<div>
<p>The <code>AudioContext.createChannelMerger()</code> method creates a {{domxref("ChannelMergerNode")}}, which combines the channels of several audio streams into a single audio stream.</p>
</div>
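<p>As a rough picture of what merging does (a plain-JavaScript sketch with a hypothetical <code>mergeChannels</code> helper, not an actual Web Audio API call): each connected input contributes one channel to the output, in connection order.</p>

```javascript
// Hypothetical sketch: model each input as a mono array of samples,
// and the merged stream as an array of channels.
function mergeChannels(inputs) {
  // Input i becomes output channel i, in the order inputs are connected.
  return inputs.map((samples) => samples.slice());
}

const left = [0.1, 0.2, 0.3];  // input 0 (e.g. left channel)
const right = [0.4, 0.5, 0.6]; // input 1 (e.g. right channel)
const stereo = mergeChannels([left, right]);
console.log(stereo.length); // 2 channels in the merged stream
```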

<h2 id="语法">语法</h2>

<pre class="brush: js">var audioCtx = new AudioContext();
var merger = audioCtx.createChannelMerger(2);</pre>

<h3 id="参数">参数</h3>

<dl>
 <dt>numberOfInputs</dt>
 <dd>The number of input audio streams to merge; the output stream will contain this number of channels. If omitted, defaults to 6.</dd>
</dl>

<h3 id="返回值">返回值</h3>

<p>A {{domxref("ChannelMergerNode")}}.</p>
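<p>Note that each input of a <code>ChannelMergerNode</code> is treated as mono: a multi-channel signal connected to an input is down-mixed to one channel before merging, and output channel <em>i</em> carries the mono mix of input <em>i</em>. A plain-JavaScript sketch of this behavior (hypothetical helper names; simple averaging is used here as a simplification of the spec's down-mix rules):</p>

```javascript
// Hypothetical sketch: down-mix a (possibly multi-channel) input to mono
// by averaging its channels.
function downmixToMono(channels) {
  const length = channels[0].length;
  const mono = new Array(length).fill(0);
  for (const ch of channels) {
    for (let i = 0; i < length; i++) {
      mono[i] += ch[i] / channels.length;
    }
  }
  return mono;
}

// Output channel i is the mono mix of input i.
function mergerOutput(inputs) {
  return inputs.map(downmixToMono);
}

const stereoInput = [[1, 1], [0, 0]]; // one input with two channels
console.log(mergerOutput([stereoInput])[0]); // [0.5, 0.5]
```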

<h2 id="(举个)栗(例)子">(举个)栗(例)子</h2>

<p>The following example shows how to separate a stereo track (a piece of music, say) and process the left and right channels differently. To do this, use the optional second and third parameters of the <code>AudioNode.connect(AudioNode)</code> method, which specify the index of the channel to connect from and the index of the channel to connect to, respectively.</p>

<pre class="brush: js;highlight[7,16,17,24]">var ac = new AudioContext();
ac.decodeAudioData(someStereoBuffer, function(data) {
 var source = ac.createBufferSource();
 source.buffer = data;
 var splitter = ac.createChannelSplitter(2);
 source.connect(splitter);
 var merger = ac.createChannelMerger(2);

 // Reduce the volume of the left channel only
 var gainNode = ac.createGain();
 gainNode.gain.value = 0.5;
 splitter.connect(gainNode, 0);

 // Connect the splitter back to the second input of the merger: we
 // effectively swap the channels, here, reversing the stereo image.
 gainNode.connect(merger, 0, 1);
 splitter.connect(merger, 1, 0);

 var dest = ac.createMediaStreamDestination();

 // Because we have used a ChannelMergerNode, we now have a stereo
 // MediaStream we can use to pipe the Web Audio graph to WebRTC,
 // MediaRecorder, etc.
 merger.connect(dest);
});</pre>

<h2 id="规范">规范</h2>

<table class="standard-table">
 <tbody>
  <tr>
   <th scope="col">Specification</th>
   <th scope="col">Status</th>
   <th scope="col">Comment</th>
  </tr>
  <tr>
   <td>{{SpecName('Web Audio API', '#widl-AudioContext-createChannelMerger-ChannelMergerNode-unsigned-long-numberOfInputs', 'createChannelMerger()')}}</td>
   <td>{{Spec2('Web Audio API')}}</td>
   <td> </td>
  </tr>
 </tbody>
</table>

<h2 id="浏览器兼容性">浏览器兼容性</h2>

{{Compat("api.BaseAudioContext.createChannelMerger")}}

<h2 id="相关页面">相关页面</h2>

<ul>
 <li><a href="/en-US/docs/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
</ul>