--- title: AudioContext.createMediaStreamSource() slug: Web/API/AudioContext/createMediaStreamSource translation_of: Web/API/AudioContext/createMediaStreamSource --- <p>{{ APIRef("Web Audio API") }}</p>

<div>
<p>The <code>createMediaStreamSource()</code> method of the {{ domxref("AudioContext") }} interface creates a new {{ domxref("MediaStreamAudioSourceNode") }} object. It takes a media stream (a {{ domxref("MediaStream") }} object, for example one obtained from {{ domxref("navigator.getUserMedia") }}), and the audio from that stream can then be played and manipulated.</p>
</div>

<p>For more details about media stream audio source nodes, see the {{ domxref("MediaStreamAudioSourceNode") }} page.</p>

<h2 id="Syntax">Syntax</h2>

<pre class="brush: js">var audioCtx = new AudioContext();
var source = audioCtx.createMediaStreamSource(stream);</pre>

<h3 id="Parameters">Parameters</h3>

<dl>
 <dt>stream</dt>
 <dd>A {{domxref("MediaStream")}} object that you want to feed into an audio processing graph for manipulation.</dd>
</dl>

<h3 id="Returns">Returns</h3>

<p>A {{domxref("MediaStreamAudioSourceNode")}}.</p>

<h2 id="Example">Example</h2>

<p>In this example, we grab a media (audio + video) stream from {{ domxref("navigator.getUserMedia") }}, feed it into a {{ htmlelement("video") }} element to play it, mute the audio on the video element, and then feed the stream's audio into a {{ domxref("MediaStreamAudioSourceNode") }}. Next, we pass that audio through a {{ domxref("BiquadFilterNode") }} (which boosts the bass) and finally into an {{domxref("AudioDestinationNode") }}.</p>

<p>The range slider below the {{ htmlelement("video") }} element controls the amount of gain applied by the lowshelf filter: the higher the slider value, the more pronounced the bass.</p>

<div class="note">
<p><strong>Note</strong>: You can see this <a href="https://mdn.github.io/webaudio-examples/stream-source-buffer/">example running live</a>, or <a href="https://github.com/mdn/webaudio-examples/tree/master/stream-source-buffer">view the source</a>.</p>
</div>

<pre class="brush: js;highlight[23]">var pre = document.querySelector('pre');
var video = document.querySelector('video');
var myScript = document.querySelector('script');
var range = document.querySelector('input');

// Get the stream from getUserMedia,
// feed it into a MediaStreamAudioSourceNode,
// and output it to the video element

if (navigator.mediaDevices) {
  console.log('getUserMedia supported.');
  navigator.mediaDevices.getUserMedia({audio: true, video: true})
  .then(function(stream) {
    video.srcObject
= stream;
    video.onloadedmetadata = function(e) {
      video.play();
      video.muted = true;
    };

    // Create a MediaStreamAudioSourceNode
    // and feed the stream into it
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(stream);

    // Create a biquad filter
    var biquadFilter = audioCtx.createBiquadFilter();
    biquadFilter.type = "lowshelf";
    biquadFilter.frequency.value = 1000;
    biquadFilter.gain.value = range.value;

    // Connect the source to the biquad filter, and the filter
    // to the destination, so we can hear the audio and
    // adjust the bass with the slider
    source.connect(biquadFilter);
    biquadFilter.connect(audioCtx.destination);

    // Update the filter gain whenever the slider value changes
    range.oninput = function() {
      biquadFilter.gain.value = range.value;
    }
  })
  .catch(function(err) {
    console.log('The following gUM error occurred: ' + err);
  });
} else {
  console.log('getUserMedia not supported on your browser!');
}

// Dump the script contents into the pre element
pre.innerHTML = myScript.innerHTML;</pre>

<div class="note">
<p><strong>Note</strong>: As a consequence of calling <code>createMediaStreamSource()</code>, audio playback from the media stream is re-routed into the processing graph of the AudioContext. Playing and pausing the stream can still be done through the media element API and the player controls.</p>
</div>

<h2 id="Specifications">Specifications</h2>

<table class="standard-table">
 <tbody>
  <tr>
   <th scope="col">Specification</th>
   <th scope="col">Status</th>
   <th scope="col">Comment</th>
  </tr>
  <tr>
   <td>{{SpecName('Web Audio API', '#widl-AudioContext-createMediaStreamSource-MediaStreamAudioSourceNode-MediaStream-mediaStream', 'createMediaStreamSource()')}}</td>
   <td>{{Spec2('Web Audio API')}}</td>
   <td></td>
  </tr>
 </tbody>
</table>

<h2 id="Browser_compatibility">Browser compatibility</h2>

{{Compat("api.AudioContext.createMediaStreamSource")}}

<h2 id="See_also">See also</h2>

<ul>
 <li><a href="/en-US/docs/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
</ul>
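<p>As an addendum, the core pattern this page describes — acquire a stream, wrap it in a <code>MediaStreamAudioSourceNode</code>, and connect it into the graph — can be reduced to a minimal sketch. This is an illustration rather than the page's full example: the <code>connectStreamToGraph</code> helper name and the feature-detection guard are my own additions, and the stream-acquisition part assumes a browser with <code>getUserMedia</code> support.</p>

```javascript
// Hypothetical helper: route a MediaStream's audio straight
// through an AudioContext to the default output.
function connectStreamToGraph(audioCtx, stream) {
  // createMediaStreamSource() re-routes the stream's audio
  // into the AudioContext's processing graph.
  var source = audioCtx.createMediaStreamSource(stream);
  source.connect(audioCtx.destination);
  return source;
}

// Browser-only usage (guarded so the sketch is inert elsewhere).
if (typeof AudioContext !== 'undefined' &&
    typeof navigator !== 'undefined' && navigator.mediaDevices) {
  var audioCtx = new AudioContext();
  navigator.mediaDevices.getUserMedia({audio: true})
    .then(function(stream) {
      connectStreamToGraph(audioCtx, stream);
      // Play/pause can still be controlled through the media
      // element API, as noted above.
    })
    .catch(function(err) {
      console.log('The following gUM error occurred: ' + err);
    });
}
```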