---
title: Web Audio API best practices
slug: Web/API/Web_Audio_API/最佳实践
tags:
  - Web Audio API
  - Guide
  - Best practices
  - Audio
translation_of: Web/API/Web_Audio_API/Best_practices
---
<div>{{apiref("Web Audio API")}}</div>

<p class="summary">在创意编程中(creative coding)没有严格的对错之分。 只要你充分考虑安全性、性能和accessibility,你可以用自己的办法来编写代码。在这篇文章中,我们会分享一些最佳实践——包含使用Web Audio API的指导、小知识和小技巧。</p>

<h2 id="加载声音声音文件">加载声音/声音文件</h2>

<p>使用Web Audio API加载声音的主要方式有四种,你可能会对于应当使用哪种方式有些困惑。</p>

<p>在从文件中加载声音时,你可能会采取从{{domxref("HTMLMediaElement")}} (即  {{htmlelement("audio")}}{{htmlelement("video")}} )中抓取的方式,或提取文件并将其解码到缓冲区。两种工作方式都是合理的,然而,在处理全长音轨时,前一种方法会更加常见。而后一种方法更常见于处理更短的(例如样本)的音轨。</p>

<p>多媒体类HTML元素有开箱即用的媒体流支持。音频会在浏览器判断可以在播放完成之前加载文件的剩余部分时进行播放(when the browser determines it can load the rest of the file before playing finishes.)。你可以在<a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API tutorial</a>这篇文档中看到一个把多媒体类HTML元素与Web Audio API结合使用的例子。</p>

<p>如果你使用缓冲节点(buffer node)来加载音频,你将会有更多的控制权。虽然你需要请求这个文件,然后等待它加载完成(<a href="/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques#Dial_up_%E2%80%94_loading_a_sound_sample">我们的这篇进阶文章中的这一节</a>介绍了一个好办法)。但是,随后您可以直接访问数据,这意味着你能进行更精确,更精确的操作。</p>

<p>对于来自用户的摄像头或麦克风的音频,你可以考虑通过<a href="/en-US/docs/Web/API/Media_Streams_API">Media Stream API</a>{{domxref("MediaStreamAudioSourceNode")}}接口来访问。这在与WebRTC协作以及你想录制或分析音频的场合下很管用。</p>

<p>最后一个要介绍的方法时如何生成声音。这可以通过{{domxref("OscillatorNode")}}和创建一个缓冲区(buffer)然后向其中填充你的数据来完成。你可以在<a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques">这篇指导你如何创建自己的乐器的文章</a>中学习到用这两个工具创建声音的知识。</p>

<h2 id="Cross_browser_legacy_support">Cross browser &amp; legacy support</h2>

<p>The Web Audio API specification is constantly evolving and like most things on the web, there are some issues with it working consistently across browsers. Here we'll look at options for getting around cross-browser problems.</p>

<p>There's the <a href="https://github.com/chrisguttandin/standardized-audio-context"><code>standardised-audio-context</code></a> npm package, which creates API functionality consistently across browsers, full holes as they are found. It's constantly in development and endeavours to keep up with the current specification.</p>

<p>There is also the option of libraries, of which there are a few depending on your use case. For a good all-rounder, <a href="https://howlerjs.com/">howler.js</a> is a good choice. It has cross-browser support and, provides a useful subset of functionality. Although it doesn't harness the full gamut of filters and other effects the Web Audio API comes with, you can do most of what you'd want to do.</p>

<p>If you are looking for sound creation or a more instrument-based option, <a href="https://tonejs.github.io/">tone.js</a> is a great library. It provides advanced scheduling capabilities, synths, and effects, and intuitive musical abstractions built on top of the Web Audio API.</p>

<p><a href="https://github.com/bbc/r-audio">R-audio</a>, from the <a href="https://medium.com/bbc-design-engineering/r-audio-declarative-reactive-and-flexible-web-audio-graphs-in-react-102c44a1c69c">BBC's Research &amp; Development department</a>, is a library of React components aiming to provide a "more intuitive, declarative interface to Web Audio". If you're used to writing JSX it might be worth looking at.</p>

<h2 id="自动播放策略">自动播放策略</h2>

<p>浏览器已经开始实施自动播放策略,这一策略通常可以概括为:</p>

<blockquote>
<p>"Create or resume context from inside a user gesture".</p>
</blockquote>

<p>这在实践中意味着什么呢?user gesture可以解释为用户触发的事件(一般来说,是<code>click</code>事件)。浏览器厂商判定不应该允许Web Audio上下文自动播放音频,而应该由用户开始播放。这是因为自动播放音频非常烦人且令人讨厌。那么,我们该如何处理(handle)呢?</p>

<p>无论是offline还是online,当你创建了一个音频上下文(audio context),它会有一个内部状态(<code>state</code>),这个状态有<code>暂停(suspend)、播放(running)、关闭(closed)</code>三种可能。</p>

<p>(When you create an audio context (either offline or online) it is created with a <code>state</code>, which can be <code>suspended</code>, <code>running</code>, or <code>closed</code>.)</p>

<p>例如,在使用 {{domxref("AudioContext")}}时,如果你在<code>click</code>事件中创建了音频上下文,它的内部状态应该会被自动设置成<code>播放(running)</code>。这里有一个在<code>click</code>事件中创建音频上下文简单的例子:</p>

<pre class="brush: js">const button = document.querySelector('button');
button.addEventListener('click', function() {
    const audioCtx = new AudioContext();
}, false);
</pre>

<p>如果你在用户动作之外创建上下文(create the context outside of a user gesture),它的内部状态会被设置为<code>暂停(suspend)</code>。这里我们可以同样用click事件的例子。我们会检查这个上下文的状态,并且启动它。如果它是<code>暂停(suspend)</code>的状态,使用<code><a href="https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/resume">resume()</a></code>方法来恢复。</p>

<pre class="brush: js">const audioCtx = new AudioContext();
const button = document.querySelector('button');

button.addEventListener('click', function() {
    // check if context is in suspended state (autoplay policy)
    if (audioCtx.state === 'suspended') {
        audioCtx.resume();
    }
}, false);
</pre>

<p>对于{{domxref("OfflineAudioContext")}},你也可以使用<a href="https://developer.mozilla.org/en-US/docs/Web/API/OfflineAudioContext/startRendering"><code>startRendering()</code></a>方法来恢复到播放状态。</p>

<h2 id="User_control">User control</h2>

<p>If your website or application contains sound, you should allow the user control over it, otherwise again, it will become annoying. This can be achieved by play/stop and volume/mute controls. The <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a> tutorial goes over how to do this.</p>

<p>If you have buttons that switch audio on and off, using the ARIA <a href="https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles/Switch_role"><code>role="switch"</code></a> attribute on them is a good option for signalling to assistive technology what the button's exact purpose is, and therefore making the app more accessible. There's a <a href="https://codepen.io/Wilto/pen/ZoGoQm?editors=1100">demo of how to use it here</a>.</p>

<p>As you work with a lot of changing values within the Web Audio API and will want to provide users with control over these, the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/range"><code>range input</code></a> is often a good choice of control to use. It's a good option as you can set minimum and maximum values, as well as increments with the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input#attr-step"><code>step</code></a> attribute.</p>

<h2 id="Setting_AudioParam_values">Setting AudioParam values</h2>

<p>There are two ways to manipulate {{domxref("AudioNode")}} values, which are themselves objects of type {{domxref("AudioParam")}} interface. The first is to set the value directly via the property. So for instance if we want to change the <code>gain</code> value of a {{domxref("GainNode")}} we would do so thus:</p>

<pre class="brush: js">gainNode.gain.value = 0.5;
</pre>

<p>This will set our volume to half. However, if you're using any of the <code>AudioParam</code>'s defined methods to set these values, they will take precedence over the above property setting. If for example, you want the <code>gain</code> value to be raised to 1 in 2 seconds time, you can do this:</p>

<pre class="brush: js">gainNode.gain.setValueAtTime(1, audioCtx.currentTime + 2);
</pre>

<p>It will override the previous example (as it should), even if it were to come later in your code.</p>

<p>Bearing this in mind, if your website or application requires timing and scheduling, it's best to stick with the {{domxref("AudioParam")}} methods for setting values. If you're sure it doesn't, setting it with the <code>value</code> property is fine.</p>