author    Peter Bengtsson <mail@peterbe.com>   2020-12-08 14:42:52 -0500
committer Peter Bengtsson <mail@peterbe.com>   2020-12-08 14:42:52 -0500
commit    074785cea106179cb3305637055ab0a009ca74f2 (patch)
tree      e6ae371cccd642aa2b67f39752a2cdf1fd4eb040 /files/pt-pt/web/api/web_audio_api
parent    da78a9e329e272dedb2400b79a3bdeebff387d47 (diff)
initial commit
Diffstat (limited to 'files/pt-pt/web/api/web_audio_api')
-rw-r--r--   files/pt-pt/web/api/web_audio_api/index.html                          512
-rw-r--r--   files/pt-pt/web/api/web_audio_api/utilizar_api_audio_web/index.html   259
2 files changed, 771 insertions, 0 deletions
diff --git a/files/pt-pt/web/api/web_audio_api/index.html b/files/pt-pt/web/api/web_audio_api/index.html
new file mode 100644
index 0000000000..815ab1ad91
--- /dev/null
+++ b/files/pt-pt/web/api/web_audio_api/index.html
@@ -0,0 +1,512 @@
+---
+title: API de Áudio da Web
+slug: Web/API/Web_Audio_API
+tags:
+  - API
+  - API de Áudio da Web
+  - Exemplo
+  - Guia
+  - Landing
+  - Overview
+  - Resumo
+  - Web Audio API
+translation_of: Web/API/Web_Audio_API
+---
+<div>
+<p>The Web Audio API provides a powerful and versatile system for controlling audio on the Web, allowing developers to choose audio sources, add effects to audio, create audio visualizations, apply spatial effects (such as panning) and much more.</p>
+</div>
+
+<h2 id="Conceitos_e_utilização_de_áudio_da_Web">Conceitos e utilização de áudio da <em>Web</em></h2>
+
+<p>The Web Audio API involves handling audio operations inside an <strong>audio context</strong>, and has been designed to allow <strong>modular routing</strong>. Basic audio operations are performed with <strong>audio nodes</strong>, which are linked together to form an <strong>audio routing graph</strong>. Several sources — with different types of channel layout — are supported even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.</p>
+
+<p>Audio nodes are linked into chains and simple webs by their inputs and outputs. They typically start with one or more sources. Sources provide arrays of sound intensities (samples) at very small timeslices, often tens of thousands of them per second. These could be either computed mathematically (such as {{domxref("OscillatorNode")}}), or they can be recordings from sound/video files (like {{domxref("AudioBufferSourceNode")}} and {{domxref("MediaElementAudioSourceNode")}}) and audio streams ({{domxref("MediaStreamAudioSourceNode")}}). In fact, sound files are just recordings of sound intensities themselves, which come in from microphones or electric instruments, and get mixed down into a single, complicated wave.</p>
+
+<p>Outputs of these nodes could be linked to inputs of others, which mix or modify these streams of sound samples into different streams. A common modification is multiplying the samples by a value to make them louder or quieter (as is the case with {{domxref("GainNode")}}). Once the sound has been sufficiently processed for the intended effect, it can be linked to the input of a destination ({{domxref("AudioContext.destination")}}), which sends the sound to the speakers or headphones. This last connection is only necessary if the user is supposed to hear the audio.</p>
+
+<p>A simple, typical workflow for web audio would look something like this:</p>
+
+<ol>
+ <li>Create audio context</li>
+ <li>Inside the context, create sources — such as <code><audio></code>, oscillator, stream</li>
+ <li>Create effects nodes, such as reverb, biquad filter, panner, compressor</li>
+ <li>Choose final destination of audio, for example your system speakers</li>
+ <li>Connect the sources up to the effects, and the effects to the destination, as shown in the sketch below.</li>
+</ol>
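+
+<p>As a minimal sketch of those five steps — assuming a page that already contains an <code><audio></code> element (the element and the gain value here are just for illustration):</p>
+
+<pre class="brush: js">// 1. create the audio context
+var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
+
+// 2. create a source inside the context — here, from an existing <audio> element
+var audioElement = document.querySelector('audio');
+var source = audioCtx.createMediaElementSource(audioElement);
+
+// 3. create an effect node — a gain node to control volume
+var gainNode = audioCtx.createGain();
+gainNode.gain.value = 0.5; // arbitrary example value: half volume
+
+// 4 + 5. connect source → effect → destination (the system speakers)
+source.connect(gainNode);
+gainNode.connect(audioCtx.destination);
+</pre>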
+<p><img alt="A simple box diagram with an outer box labeled Audio context, and three inner boxes labeled Sources, Effects and Destination. The three inner boxes have arrows between them pointing from left to right, indicating the flow of audio information." src="https://mdn.mozillademos.org/files/12241/webaudioAPI_en.svg" style="display: block; height: 143px; margin: 0px auto; width: 643px;"></p>
+
+<p>Timing is controlled with high precision and low latency, allowing developers to write code that responds accurately to events and is able to target specific samples, even at a high sample rate. So applications such as drum machines and sequencers are well within reach.</p>
+
+<p>The Web Audio API also allows us to control how audio is <em>spatialized</em>. Using a system based on a <em>source-listener model</em>, it allows control of the <em>panning model</em> and deals with <em>distance-induced attenuation</em> or <em>doppler shift</em> induced by a moving source (or moving listener).</p>
+
+<div class="note">
+<p>You can read about the theory of the Web Audio API in a lot more detail in our article <a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a>.</p>
+</div>
+
+<h2 id="Interfaces_de_API_de_Áudio_da_Web">Interfaces de API de Áudio da <em>Web</em></h2>
+
+<p>The Web Audio API has a number of interfaces and associated events, which we have split up into nine categories of functionality.</p>
+
+<h3 id="General_audio_graph_definition">General audio graph definition</h3>
+
+<p>General containers and definitions that shape audio graphs in Web Audio API usage.</p>
+
+<dl>
+ <dt>{{domxref("AudioContext")}}</dt>
+ <dd>The <strong><code>AudioContext</code></strong> interface represents an audio-processing graph built from audio modules linked together, each represented by an {{domxref("AudioNode")}}. An audio context controls the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an <code>AudioContext</code> before you do anything else, as everything happens inside a context.</dd>
+ <dt>{{domxref("AudioNode")}}</dt>
+ <dd>The <strong><code>AudioNode</code></strong> interface represents an audio-processing module like an <em>audio source</em> (e.g. an HTML {{HTMLElement("audio")}} or {{HTMLElement("video")}} element), an <em>audio destination</em>, or an <em>intermediate processing module</em> (e.g. a filter like {{domxref("BiquadFilterNode")}}, or <em>volume control</em> like {{domxref("GainNode")}}).</dd>
+ <dt>{{domxref("AudioParam")}}</dt>
+ <dd>The <strong><code>AudioParam</code></strong> interface represents an audio-related parameter, like one of an {{domxref("AudioNode")}}. It can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and following a specific pattern.</dd>
+ <dt>The {{event("ended")}} event</dt>
+ <dd>The <code>ended</code> event is fired when playback has stopped because the end of the media was reached.</dd>
+</dl>
+
+<h3 id="Defining_audio_sources">Defining audio sources</h3>
+
+<p>Interfaces that define audio sources for use in the Web Audio API.</p>
+
+<dl>
+ <dt>{{domxref("OscillatorNode")}}</dt>
+ <dd>The <strong><code>OscillatorNode</code></strong> interface represents a periodic waveform, such as a sine or triangle wave.
It is an {{domxref("AudioNode")}} audio-processing module that causes a given <em>frequency</em> of wave to be created.</dd>
+ <dt>{{domxref("AudioBuffer")}}</dt>
+ <dd>The <strong><code>AudioBuffer</code></strong> interface represents a short audio asset residing in memory, created from an audio file using the {{ domxref("AudioContext.decodeAudioData()") }} method, or created with raw data using {{ domxref("AudioContext.createBuffer()") }}. Once decoded into this form, the audio can then be put into an {{ domxref("AudioBufferSourceNode") }}.</dd>
+ <dt>{{domxref("AudioBufferSourceNode")}}</dt>
+ <dd>The <strong><code>AudioBufferSourceNode</code></strong> interface represents an audio source consisting of in-memory audio data, stored in an {{domxref("AudioBuffer")}}. It is an {{domxref("AudioNode")}} that acts as an audio source.</dd>
+ <dt>{{domxref("MediaElementAudioSourceNode")}}</dt>
+ <dd>The <strong><code>MediaElementAudioSourceNode</code></strong> interface represents an audio source consisting of an HTML5 {{ htmlelement("audio") }} or {{ htmlelement("video") }} element. It is an {{domxref("AudioNode")}} that acts as an audio source.</dd>
+ <dt>{{domxref("MediaStreamAudioSourceNode")}}</dt>
+ <dd>The <strong><code>MediaStreamAudioSourceNode</code></strong> interface represents an audio source consisting of a <a href="/en-US/docs/WebRTC" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}} (such as a webcam, microphone, or a stream being sent from a remote computer). It is an {{domxref("AudioNode")}} that acts as an audio source.</dd>
+</dl>
+
+<h3 id="Defining_audio_effects_filters">Defining audio effects filters</h3>
+
+<p>Interfaces for defining effects that you want to apply to your audio sources.</p>
+
+<dl>
+ <dt>{{domxref("BiquadFilterNode")}}</dt>
+ <dd>The <strong><code>BiquadFilterNode</code></strong> interface represents a simple low-order filter. It is an {{domxref("AudioNode")}} that can represent different kinds of filters, tone control devices, or graphic equalizers. A <code>BiquadFilterNode</code> always has exactly one input and one output.</dd>
+ <dt>{{domxref("ConvolverNode")}}</dt>
+ <dd>The <strong><code>ConvolverNode</code></strong> interface is an {{domxref("AudioNode")}} that performs a Linear Convolution on a given {{domxref("AudioBuffer")}}, and is often used to achieve a reverb effect.</dd>
+ <dt>{{domxref("DelayNode")}}</dt>
+ <dd>The <strong><code>DelayNode</code></strong> interface represents a <a href="http://en.wikipedia.org/wiki/Digital_delay_line" title="http://en.wikipedia.org/wiki/Digital_delay_line">delay-line</a>; an {{domxref("AudioNode")}} audio-processing module that causes a delay between the arrival of input data and its propagation to the output.</dd>
+ <dt>{{domxref("DynamicsCompressorNode")}}</dt>
+ <dd>The <strong><code>DynamicsCompressorNode</code></strong> interface provides a compression effect, which lowers the volume of the loudest parts of the signal in order to help prevent clipping and distortion that can occur when multiple sounds are played and multiplexed together at once.</dd>
+ <dt>{{domxref("GainNode")}}</dt>
+ <dd>The <strong><code>GainNode</code></strong> interface represents a change in volume.
It is an {{domxref("AudioNode")}} audio-processing module that causes a given <em>gain</em> to be applied to the input data before its propagation to the output.</dd>
+ <dt>{{domxref("StereoPannerNode")}}</dt>
+ <dd>The <strong><code>StereoPannerNode</code></strong> interface represents a simple stereo panner node that can be used to pan an audio stream left or right.</dd>
+ <dt>{{domxref("WaveShaperNode")}}</dt>
+ <dd>The <strong><code>WaveShaperNode</code></strong> interface represents a non-linear distorter. It is an {{domxref("AudioNode")}} that uses a curve to apply a waveshaping distortion to the signal. Besides obvious distortion effects, it is often used to add a warm feeling to the signal.</dd>
+ <dt>{{domxref("PeriodicWave")}}</dt>
+ <dd>Describes a periodic waveform that can be used to shape the output of an {{ domxref("OscillatorNode") }}.</dd>
+</dl>
+
+<h3 id="Defining_audio_destinations">Defining audio destinations</h3>
+
+<p>Once you are done processing your audio, these interfaces define where to output it.</p>
+
+<dl>
+ <dt>{{domxref("AudioDestinationNode")}}</dt>
+ <dd>The <strong><code>AudioDestinationNode</code></strong> interface represents the end destination of an audio source in a given context — usually the speakers of your device.</dd>
+ <dt>{{domxref("MediaStreamAudioDestinationNode")}}</dt>
+ <dd>The <strong><code>MediaStreamAudioDestinationNode</code></strong> interface represents an audio destination consisting of a <a href="/en-US/docs/WebRTC" title="/en-US/docs/WebRTC">WebRTC</a> {{domxref("MediaStream")}} with a single <code>AudioMediaStreamTrack</code>, which can be used in a similar way to a {{domxref("MediaStream")}} obtained from {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}. It is an {{domxref("AudioNode")}} that acts as an audio destination.</dd>
+</dl>
+
+<h3 id="Data_analysis_and_visualization">Data analysis and visualization</h3>
+
+<p>If you want to extract time, frequency, and other data from your audio, the <code>AnalyserNode</code> is what you need.</p>
+
+<dl>
+ <dt>{{domxref("AnalyserNode")}}</dt>
+ <dd>The <strong><code>AnalyserNode</code></strong> interface represents a node able to provide real-time frequency and time-domain analysis information, for the purposes of data analysis and visualization.</dd>
+</dl>
+
+<h3 id="Splitting_and_merging_audio_channels">Splitting and merging audio channels</h3>
+
+<p>To split and merge audio channels, you'll use these interfaces.</p>
+
+<dl>
+ <dt>{{domxref("ChannelSplitterNode")}}</dt>
+ <dd>The <strong><code>ChannelSplitterNode</code></strong> interface separates the different channels of an audio source out into a set of <em>mono</em> outputs.</dd>
+ <dt>{{domxref("ChannelMergerNode")}}</dt>
+ <dd>The <strong><code>ChannelMergerNode</code></strong> interface reunites different mono inputs into a single output. Each input will be used to fill a channel of the output.</dd>
+</dl>
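+
+<p>As a brief sketch of how these two fit together — reusing <code>audioCtx</code> and the <code>source</code> node from the sketch above, with an arbitrary gain value, we attenuate only the left channel of a stereo source:</p>
+
+<pre class="brush: js">var splitter = audioCtx.createChannelSplitter(2);
+var merger = audioCtx.createChannelMerger(2);
+var leftGain = audioCtx.createGain();
+leftGain.gain.value = 0.25; // arbitrary example value
+
+source.connect(splitter);
+splitter.connect(leftGain, 0);   // take splitter output 0 (left channel)
+leftGain.connect(merger, 0, 0);  // feed it into merger input 0
+splitter.connect(merger, 1, 1);  // right channel passes straight through
+merger.connect(audioCtx.destination);
+</pre>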
<h3 id="Audio_spatialization">Audio spatialization</h3>
+
+<p>These interfaces allow you to add audio spatialization panning effects to your audio sources.</p>
+
+<dl>
+ <dt>{{domxref("AudioListener")}}</dt>
+ <dd>The <strong><code>AudioListener</code></strong> interface represents the position and orientation of the unique person listening to the audio scene used in audio spatialization.</dd>
+ <dt>{{domxref("PannerNode")}}</dt>
+ <dd>The <strong><code>PannerNode</code></strong> interface represents the behavior of a signal in space. It is an {{domxref("AudioNode")}} audio-processing module describing its position with right-hand Cartesian coordinates, its movement using a velocity vector, and its directionality using a directionality cone.</dd>
+</dl>
+
+<h3 id="Audio_processing_in_JavaScript">Audio processing in JavaScript</h3>
+
+<p>You can write JavaScript code to process audio data. To do so, you use the interfaces and events listed below.</p>
+
+<div class="note">
+<p>As of the August 29, 2014 version of the Web Audio API spec, these features have been marked as deprecated, and are soon to be replaced by {{ anch("Audio_Workers") }}.</p>
+</div>
+
+<dl>
+ <dt>{{domxref("ScriptProcessorNode")}}</dt>
+ <dd>The <strong><code>ScriptProcessorNode</code></strong> interface allows the generation, processing, or analyzing of audio using JavaScript. It is an {{domxref("AudioNode")}} audio-processing module that is linked to two buffers, one containing the current input, one containing the output. An event, implementing the {{domxref("AudioProcessingEvent")}} interface, is sent to the object each time the input buffer contains new data, and the event handler terminates when it has filled the output buffer with data.</dd>
+ <dt>{{event("audioprocess")}} (event)</dt>
+ <dd>The <code>audioprocess</code> event is fired when an input buffer of a Web Audio API {{domxref("ScriptProcessorNode")}} is ready to be processed.</dd>
+ <dt>{{domxref("AudioProcessingEvent")}}</dt>
+ <dd>The <a href="/en-US/docs/Web_Audio_API" title="/en-US/docs/Web_Audio_API">Web Audio API</a> <code>AudioProcessingEvent</code> represents events that occur when a {{domxref("ScriptProcessorNode")}} input buffer is ready to be processed.</dd>
+</dl>
+
+<h3 id="Offlinebackground_audio_processing">Offline/background audio processing</h3>
+
+<p>It is possible to process/render an audio graph very quickly in the background — rendering it to an {{domxref("AudioBuffer")}} rather than to the device's speakers — with the following.</p>
+
+<dl>
+ <dt>{{domxref("OfflineAudioContext")}}</dt>
+ <dd>The <strong><code>OfflineAudioContext</code></strong> interface is an {{domxref("AudioContext")}} interface representing an audio-processing graph built from linked-together {{domxref("AudioNode")}}s. In contrast with a standard <code>AudioContext</code>, an <code>OfflineAudioContext</code> doesn't really render the audio but rather generates it, <em>as fast as it can</em>, in a buffer.</dd>
+ <dt>{{event("complete")}} (event)</dt>
+ <dd>The <code>complete</code> event is fired when the rendering of an {{domxref("OfflineAudioContext")}} is terminated.</dd>
+ <dt>{{domxref("OfflineAudioCompletionEvent")}}</dt>
+ <dd>The <code>OfflineAudioCompletionEvent</code> represents events that occur when the processing of an {{domxref("OfflineAudioContext")}} is terminated. The {{event("complete")}} event implements this interface.</dd>
+</dl>
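+
+<p>A minimal sketch of offline rendering — the channel count, length, and sample rate here are arbitrary illustration values, and we assume a browser with the promise-based <code>startRendering()</code>:</p>
+
+<pre class="brush: js">// render 2 seconds of stereo audio at 44.1 kHz, without touching the speakers
+var offlineCtx = new OfflineAudioContext(2, 44100 * 2, 44100);
+
+var osc = offlineCtx.createOscillator();
+osc.connect(offlineCtx.destination);
+osc.start();
+
+offlineCtx.startRendering().then(function(renderedBuffer) {
+  // renderedBuffer is an AudioBuffer containing the rendered audio
+  console.log('Rendering completed, duration: ' + renderedBuffer.duration + 's');
+});
+</pre>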
<h3 id="Audio_Workers" name="Audio_Workers">Audio Workers</h3>
+
+<p>Audio workers provide the ability for direct scripted audio processing to be done inside a <a href="/en-US/docs/Web/Guide/Performance/Using_web_workers">web worker</a> context, and are defined by a couple of interfaces (new as of 29th August 2014). These are not implemented in any browsers yet. When implemented, they will replace {{domxref("ScriptProcessorNode")}}, and the other features discussed in the <a href="#Audio_processing_in_JavaScript">Audio processing in JavaScript</a> section above.</p>
+
+<dl>
+ <dt>{{domxref("AudioWorkerNode")}}</dt>
+ <dd>The <code>AudioWorkerNode</code> interface represents an {{domxref("AudioNode")}} that interacts with a worker thread to generate, process, or analyse audio directly.</dd>
+ <dt>{{domxref("AudioWorkerGlobalScope")}}</dt>
+ <dd>The <code>AudioWorkerGlobalScope</code> interface is a <code>DedicatedWorkerGlobalScope</code>-derived object representing a worker context in which an audio processing script is run; it is designed to enable the generation, processing, and analysis of audio data directly using JavaScript in a worker thread.</dd>
+ <dt>{{domxref("AudioProcessEvent")}}</dt>
+ <dd>This is an <code>Event</code> object that is dispatched to {{domxref("AudioWorkerGlobalScope")}} objects to perform processing.</dd>
+</dl>
+
+<h2 id="Interfaces_obsoletas">Interfaces obsoletas</h2>
+
+<p>The following interfaces were defined in old versions of the Web Audio API spec, but are now obsolete and have been replaced by other interfaces.</p>
+
+<dl>
+ <dt>{{domxref("JavaScriptNode")}}</dt>
+ <dd>Used for direct audio processing via JavaScript. This interface is obsolete, and has been replaced by {{domxref("ScriptProcessorNode")}}.</dd>
+ <dt>{{domxref("WaveTableNode")}}</dt>
+ <dd>Used to define a periodic waveform. This interface is obsolete, and has been replaced by {{domxref("PeriodicWave")}}.</dd>
+</dl>
+
+<h2 id="Exemplo" name="Exemplo">Exemplo</h2>
+
+<p>This example shows a wide variety of Web Audio API functions being used. You can see this code in action on the <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-o-matic</a> demo (also check out the <a href="https://github.com/mdn/voice-change-o-matic">full source code at GitHub</a>) — this is an experimental voice changer toy demo; keep your speakers turned down low when you use it, at least to start!</p>
+
+<p>The Web Audio API lines are highlighted; if you want to find out more about what the different methods, etc. do, have a search around the reference pages.</p>
+
+<pre class="brush: js; highlight:[1,2,12,13,14,15,39,40,41,42,43,44,65,66,75,117,118,124,126,127,128,150,154]">var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); // define audio context
+// Webkit/blink browsers need prefix, Safari won't work without window.
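+
+// Note: this excerpt assumes that `canvas` and `canvasCtx` (a 2D canvas drawing
+// context) are defined in the demo's setup code — see the full source linked above.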
+ +var voiceSelect = document.getElementById("voice"); // select box for selecting voice effect options +var visualSelect = document.getElementById("visual"); // select box for selecting audio visualization options +var mute = document.querySelector('.mute'); // mute button +var drawVisual; // requestAnimationFrame + +var analyser = audioCtx.createAnalyser(); +var distortion = audioCtx.createWaveShaper(); +var gainNode = audioCtx.createGain(); +var biquadFilter = audioCtx.createBiquadFilter(); + +function makeDistortionCurve(amount) { // function to make curve shape for distortion/wave shaper node to use + var k = typeof amount === 'number' ? amount : 50, + n_samples = 44100, + curve = new Float32Array(n_samples), + deg = Math.PI / 180, + i = 0, + x; + for ( ; i < n_samples; ++i ) { + x = i * 2 / n_samples - 1; + curve[i] = ( 3 + k ) * x * 20 * deg / ( Math.PI + k * Math.abs(x) ); + } + return curve; +}; + +navigator.getUserMedia ( + // constraints - only audio needed for this app + { + audio: true + }, + + // Success callback + function(stream) { + source = audioCtx.createMediaStreamSource(stream); + source.connect(analyser); + analyser.connect(distortion); + distortion.connect(biquadFilter); + biquadFilter.connect(gainNode); + gainNode.connect(audioCtx.destination); // connecting the different audio graph nodes together + + visualize(stream); + voiceChange(); + + }, + + // Error callback + function(err) { + console.log('The following gUM error occured: ' + err); + } +); + +function visualize(stream) { + WIDTH = canvas.width; + HEIGHT = canvas.height; + + var visualSetting = visualSelect.value; + console.log(visualSetting); + + if(visualSetting == "sinewave") { + analyser.fftSize = 2048; + var bufferLength = analyser.frequencyBinCount; // half the FFT value + var dataArray = new Uint8Array(bufferLength); // create an array to store the data + + canvasCtx.clearRect(0, 0, WIDTH, HEIGHT); + + function draw() { + + drawVisual = requestAnimationFrame(draw); + + analyser.getByteTimeDomainData(dataArray); // get waveform data and put it into the array created above + + canvasCtx.fillStyle = 'rgb(200, 200, 200)'; // draw wave with canvas + canvasCtx.fillRect(0, 0, WIDTH, HEIGHT); + + canvasCtx.lineWidth = 2; + canvasCtx.strokeStyle = 'rgb(0, 0, 0)'; + + canvasCtx.beginPath(); + + var sliceWidth = WIDTH * 1.0 / bufferLength; + var x = 0; + + for(var i = 0; i < bufferLength; i++) { + + var v = dataArray[i] / 128.0; + var y = v * HEIGHT/2; + + if(i === 0) { + canvasCtx.moveTo(x, y); + } else { + canvasCtx.lineTo(x, y); + } + + x += sliceWidth; + } + + canvasCtx.lineTo(canvas.width, canvas.height/2); + canvasCtx.stroke(); + }; + + draw(); + + } else if(visualSetting == "off") { + canvasCtx.clearRect(0, 0, WIDTH, HEIGHT); + canvasCtx.fillStyle = "red"; + canvasCtx.fillRect(0, 0, WIDTH, HEIGHT); + } + +} + +function voiceChange() { + distortion.curve = new Float32Array; + biquadFilter.gain.value = 0; // reset the effects each time the voiceChange function is run + + var voiceSetting = voiceSelect.value; + console.log(voiceSetting); + + if(voiceSetting == "distortion") { + distortion.curve = makeDistortionCurve(400); // apply distortion to sound using waveshaper node + } else if(voiceSetting == "biquad") { + biquadFilter.type = "lowshelf"; + biquadFilter.frequency.value = 1000; + biquadFilter.gain.value = 25; // apply lowshelf filter to sounds using biquad + } else if(voiceSetting == "off") { + console.log("Voice settings turned off"); // do nothing, as off option was chosen + } + +} + +// event 
listeners to change visualize and voice settings + +visualSelect.onchange = function() { + window.cancelAnimationFrame(drawVisual); + visualize(stream); +} + +voiceSelect.onchange = function() { + voiceChange(); +} + +mute.onclick = voiceMute; + +function voiceMute() { // toggle to mute and unmute sound + if(mute.id == "") { + gainNode.gain.value = 0; // gain set to 0 to mute sound + mute.id = "activated"; + mute.innerHTML = "Unmute"; + } else { + gainNode.gain.value = 1; // gain set to 1 to unmute sound + mute.id = ""; + mute.innerHTML = "Mute"; + } +} +</pre> + +<h2 id="Especificações">Especificações</h2> + +<table class="standard-table"> + <tbody> + <tr> + <th scope="col">Specification</th> + <th scope="col">Status</th> + <th scope="col">Comment</th> + </tr> + <tr> + <td>{{SpecName('Web Audio API')}}</td> + <td>{{Spec2('Web Audio API')}}</td> + <td> </td> + </tr> + </tbody> +</table> + +<h2 id="Compatibilidade_do_navegador">Compatibilidade do navegador</h2> + +<div>{{CompatibilityTable}}</div> + +<div id="compat-desktop"> +<table class="compat-table"> + <tbody> + <tr> + <th>Feature</th> + <th>Chrome</th> + <th>Edge</th> + <th>Firefox (Gecko)</th> + <th>Internet Explorer</th> + <th>Opera</th> + <th>Safari (WebKit)</th> + </tr> + <tr> + <td>Basic support</td> + <td>14 {{property_prefix("webkit")}}</td> + <td>{{CompatVersionUnknown}}</td> + <td>23</td> + <td>{{CompatNo}}</td> + <td>15 {{property_prefix("webkit")}}<br> + 22 (unprefixed)</td> + <td>6 {{property_prefix("webkit")}}</td> + </tr> + </tbody> +</table> +</div> + +<div id="compat-mobile"> +<table class="compat-table"> + <tbody> + <tr> + <th>Feature</th> + <th>Android</th> + <th>Chrome</th> + <th>Edge</th> + <th>Firefox Mobile (Gecko)</th> + <th>Firefox OS</th> + <th>IE Phone</th> + <th>Opera Mobile</th> + <th>Safari Mobile</th> + </tr> + <tr> + <td>Basic support</td> + <td>{{CompatNo}}</td> + <td>28 {{property_prefix("webkit")}}</td> + <td>{{CompatVersionUnknown}}</td> + <td>25</td> + <td>1.2</td> + <td>{{CompatNo}}</td> + <td>{{CompatNo}}</td> + <td>6 {{property_prefix("webkit")}}</td> + </tr> + </tbody> +</table> +</div> + +<h2 id="Consultar_também">Consultar também</h2> + +<ul> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a></li> + <li><a href="http://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic example</a></li> + <li><a href="http://mdn.github.io/violent-theremin/">Violent Theremin example</a></li> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialisation_basics">Web audio spatialisation basics</a></li> + <li><a href="http://www.html5rocks.com/tutorials/webaudio/positional_audio/" title="http://www.html5rocks.com/tutorials/webaudio/positional_audio/">Mixing Positional Audio and WebGL</a></li> + <li><a href="http://www.html5rocks.com/tutorials/webaudio/games/" title="http://www.html5rocks.com/tutorials/webaudio/games/">Developing Game Audio with the Web Audio API</a></li> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext" title="/en-US/docs/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext">Porting webkitAudioContext code to standards based AudioContext</a></li> + <li><a href="https://github.com/bit101/tones">Tones</a>: a simple library for playing specific tones/notes using the Web Audio API.</li> + <li><a 
href="https://github.com/goldfire/howler.js/">howler.js</a>: a JS audio library that defaults to <a href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">Web Audio API</a> and falls back to <a href="http://www.whatwg.org/specs/web-apps/current-work/#the-audio-element">HTML5 Audio</a>, as well as providing other useful features.</li> + <li><a href="https://github.com/mattlima/mooog">Mooog</a>: jQuery-style chaining of AudioNodes, mixer-style sends/returns, and more.</li> +</ul> + +<section id="Quick_Links"> +<h3 id="Hiperligações_Rápidas">Hiperligações Rápidas</h3> + +<ol> + <li data-default-state="open"><strong><a href="#">Guias</a></strong> + + <ol> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API">Basic concepts behind Web Audio API</a></li> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a></li> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics">Web audio spatialization basics</a></li> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext" title="/en-US/docs/Web_Audio_API/Porting_webkitAudioContext_code_to_standards_based_AudioContext">Porting webkitAudioContext code to standards based AudioContext</a></li> + </ol> + </li> + <li data-default-state="open"><strong><a href="#">Exemplos</a></strong> + <ol> + <li><a href="/en-US/docs/Web/API/Web_Audio_API/Simple_synth">Simple synth keyboard</a></li> + <li><a href="http://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a></li> + <li><a href="http://mdn.github.io/violent-theremin/">Violent Theremin</a></li> + </ol> + </li> + <li data-default-state="open"><strong><a href="#">Interfaces</a></strong> + <ol> + <li>{{domxref("AnalyserNode")}}</li> + <li>{{domxref("AudioBuffer")}}</li> + <li>{{domxref("AudioBufferSourceNode")}}</li> + <li>{{domxref("AudioContext")}}</li> + <li>{{domxref("AudioDestinationNode")}}</li> + <li>{{domxref("AudioListener")}}</li> + <li>{{domxref("AudioNode")}}</li> + <li>{{domxref("AudioParam")}}</li> + <li>{{event("audioprocess")}} (event)</li> + <li>{{domxref("AudioProcessingEvent")}}</li> + <li>{{domxref("BiquadFilterNode")}}</li> + <li>{{domxref("ChannelMergerNode")}}</li> + <li>{{domxref("ChannelSplitterNode")}}</li> + <li>{{event("complete")}} (event)</li> + <li>{{domxref("ConvolverNode")}}</li> + <li>{{domxref("DelayNode")}}</li> + <li>{{domxref("DynamicsCompressorNode")}}</li> + <li>{{event("ended_(Web_Audio)", "ended")}} (event)</li> + <li>{{domxref("GainNode")}}</li> + <li>{{domxref("MediaElementAudioSourceNode")}}</li> + <li>{{domxref("MediaStreamAudioDestinationNode")}}</li> + <li>{{domxref("MediaStreamAudioSourceNode")}}</li> + <li>{{domxref("OfflineAudioCompletionEvent")}}</li> + <li>{{domxref("OfflineAudioContext")}}</li> + <li>{{domxref("OscillatorNode")}}</li> + <li>{{domxref("PannerNode")}}</li> + <li>{{domxref("PeriodicWave")}}</li> + <li>{{domxref("ScriptProcessorNode")}}</li> + <li>{{domxref("WaveShaperNode")}}</li> + </ol> + </li> +</ol> +</section> diff --git a/files/pt-pt/web/api/web_audio_api/utilizar_api_audio_web/index.html b/files/pt-pt/web/api/web_audio_api/utilizar_api_audio_web/index.html new file mode 100644 index 0000000000..d9a72f2694 --- /dev/null +++ b/files/pt-pt/web/api/web_audio_api/utilizar_api_audio_web/index.html @@ -0,0 +1,259 @@ +--- +title: Utilizar a 
API de Áudio da Web
+slug: Web/API/Web_Audio_API/Utilizar_api_audio_web
+tags:
+  - API
+  - API de Áudio da Web
+  - Guia
+  - Referência
+  - Utilização
+  - básicos
+translation_of: Web/API/Web_Audio_API/Using_Web_Audio_API
+---
+<div>{{DefaultAPISidebar("Web Audio API")}}</div>
+
+<div class="summary">
+<p>Vamos ver como começar a utilizar a API de Áudio da Web. Iremos ver resumidamente alguns conceitos, e depois estudar um exemplo simples de caixa de som que nos permite carregar uma faixa de áudio, reproduzi-la e pausá-la, e alterar o seu volume e <em>panning</em> estéreo.</p>
+</div>
+
+<div>
+<p>The Web Audio API does not replace the <a href="/en-US/docs/Web/HTML/Element/audio"><audio></a> media element, but rather complements it, just like <a href="/en-US/docs/Web/HTML/Element/canvas"><canvas></a> coexists alongside the <a href="/en-US/docs/Web/HTML/Element/Img"><img></a> element. Your use case will determine what tools you use to implement audio. If you simply want to control playback of an audio track, the <audio> media element provides a better, quicker solution than the Web Audio API. If you want to carry out more complex audio processing, as well as playback, the Web Audio API provides much more power and control.</p>
+
+<p>A powerful feature of the Web Audio API is that it does not have a strict "sound call limitation". For example, there is no ceiling of 32 or 64 sound calls at one time. Some processors may be capable of playing more than 1,000 simultaneous sounds without stuttering.</p>
+</div>
+
+<h2 id="Código_de_exemplo">Código de exemplo</h2>
+
+<p>A nossa caixa de música parece-se com isto:</p>
+
+<p><img alt="A boombox with play, pan, and volume controls" src="https://mdn.mozillademos.org/files/16197/boombox.png" style="border-style: solid; border-width: 1px; height: 646px; width: 1200px;"></p>
+
+<p>Note the retro cassette deck with a play button, and vol and pan sliders to allow you to alter the volume and stereo panning. We could make this a lot more complex, but this is ideal for simple learning at this stage.</p>
+
+<p><a href="https://codepen.io/Rumyra/pen/qyMzqN/">Check out the final demo here on Codepen</a>, or see the <a href="https://github.com/mdn/webaudio-examples/tree/master/audio-basics">source code on GitHub</a>.</p>
+
+<h2 id="Suporte_para_navegador">Suporte para navegador</h2>
+
+<p>Modern browsers have good support for most features of the Web Audio API. There are a lot of features of the API, so for more exact information, you'll have to check the browser compatibility tables at the bottom of each reference page.</p>
+
+<h2 id="Gráficos_de_áudio">Gráficos de áudio</h2>
+
+<p>Everything within the Web Audio API is based around the concept of an audio graph, which is made up of nodes.</p>
+
+<p>The Web Audio API handles audio operations inside an <strong>audio context</strong>, and has been designed to allow <strong>modular routing</strong>. Basic audio operations are performed with <strong>audio nodes</strong>, which are linked together to form an <strong>audio routing graph</strong>. You have input nodes, which are the source of the sounds you are manipulating, modification nodes that change those sounds as desired, and output nodes (destinations), which allow you to save or hear those sounds.</p>
+
+<p>Several audio sources with different channel layouts are supported, even within a single context.
Because of this modular design, you can create complex audio functions with dynamic effects.</p>
+
+<h2 id="Contexto_de_Áudio">Contexto de Áudio</h2>
+
+<p>To be able to do anything with the Web Audio API, we need to create an instance of the audio context. This then gives us access to all the features and functionality of the API.</p>
+
+<pre class="brush: js">// for legacy browsers
+const AudioContext = window.AudioContext || window.webkitAudioContext;
+
+const audioCtx = new AudioContext();
+</pre>
+
+<p>So what's going on when we do this? A {{domxref("BaseAudioContext")}} is created for us automatically and extended to an online audio context. We'll want this because we're looking to play live sound.</p>
+
+<div class="note">
+<p><strong>Nota</strong>: If you just want to process audio data, for instance, buffer and stream it but not play it, you might want to look into creating an {{domxref("OfflineAudioContext")}}.</p>
+</div>
+
+<h2 id="Carregar_som">Carregar som</h2>
+
+<p>Now, the audio context we've created needs some sound to play through it. There are a few ways to do this with the API. Let's begin with a simple method — as we have a boombox, we most likely want to play a full song track. Also, for accessibility, it's nice to expose that track in the DOM. We'll expose the song on the page using an {{htmlelement("audio")}} element.</p>
+
+<pre class="brush: html"><audio src="myCoolTrack.mp3" type="audio/mpeg"></audio>
+</pre>
+
+<div class="note">
+<p><strong>Nota</strong>: If the sound file you're loading is held on a different domain you will need to use the <code>crossorigin</code> attribute; see <a href="/en-US/docs/Web/HTTP/CORS">Cross Origin Resource Sharing (CORS)</a> for more information.</p>
+</div>
+
+<p>To use all the nice things we get with the Web Audio API, we need to grab the source from this element and <em>pipe</em> it into the context we have created. Lucky for us there's a method that allows us to do just that — {{domxref("AudioContext.createMediaElementSource")}}:</p>
+
+<pre class="brush: js">// get the audio element
+const audioElement = document.querySelector('audio');
+
+// pass it into the audio context
+const track = audioCtx.createMediaElementSource(audioElement);
+</pre>
+
+<div class="note">
+<p><strong>Nota</strong>: The <code><audio></code> element above is represented in the DOM by an object of type {{domxref("HTMLMediaElement")}}, which comes with its own set of functionality. All of this has stayed intact; we are merely allowing the sound to be available to the Web Audio API.</p>
+</div>
+
+<h2 id="Controlar_o_som">Controlar o som</h2>
+
+<p>When playing sound on the web, it's important to allow the user to control it. Depending on the use case, there's a myriad of options, but we'll provide functionality to play/pause the sound, alter the track's volume, and pan it from left to right.</p>
+
+<div class="note">
+<p><strong>Nota</strong>: We need to take into account the new autoplay policy that modern browsers have, which calls for a user gesture before media can play (see Chrome's <a href="https://developers.google.com/web/updates/2017/09/autoplay-policy-changes">Autoplay Policy Changes</a>, for example). This has been implemented because autoplaying media is really bad for many reasons — it is annoying and intrusive at the very least, and also causes accessibility problems. This is accounted for by our play/pause button.</p>
+</div>
+
+<p>Let's take a look at our play and pause functionality to start with.
We have a play button that changes to a pause button when the track is playing:</p>
+
+<pre class="brush: html"><button data-playing="false" role="switch" aria-checked="false">
+    <span>Play/Pause</span>
+</button>
+</pre>
+
+<p>Before we can play our track we need to connect our audio graph from the audio source/input node to the destination.</p>
+
+<p>We've already created an input node by passing our audio element into the API. For the most part, you don't need to create an output node, you can just connect your other nodes to {{domxref("BaseAudioContext.destination")}}, which handles the situation for you:</p>
+
+<pre class="brush: js">track.connect(audioCtx.destination);
+</pre>
+
+<p>A good way to visualise these nodes is by drawing an audio graph. This is what our current audio graph looks like:</p>
+
+<p><img alt="an audio graph with an audio element source connected to the default destination" src="https://mdn.mozillademos.org/files/16195/graph1.jpg" style="border-style: solid; border-width: 1px; height: 486px; width: 1426px;"></p>
+
+<p>Now we can add the play and pause functionality.</p>
+
+<pre class="brush: js">// select our play button
+const playButton = document.querySelector('button');
+
+playButton.addEventListener('click', function() {
+
+    // check if context is in suspended state (autoplay policy)
+    if (audioCtx.state === 'suspended') {
+        audioCtx.resume();
+    }
+
+    // play or pause track depending on state
+    if (this.dataset.playing === 'false') {
+        audioElement.play();
+        this.dataset.playing = 'true';
+    } else if (this.dataset.playing === 'true') {
+        audioElement.pause();
+        this.dataset.playing = 'false';
+    }
+
+}, false);
+</pre>
+
+<p>We also need to take into account what to do when the track finishes playing. Our <code>HTMLMediaElement</code> fires an <code>ended</code> event once it's finished playing, so we can listen for that and run code accordingly:</p>
+
+<pre class="brush: js">audioElement.addEventListener('ended', () => {
+    playButton.dataset.playing = 'false';
+}, false);
+</pre>
+
+<h2 id="Um_aparte_sobre_o_editor_de_Áudio_da_Web">Um aparte sobre o editor de Áudio da Web</h2>
+
+<p>Firefox has a tool available called the <a href="/en-US/docs/Tools/Web_Audio_Editor">Web Audio editor</a>. On any page that has an audio graph running on it, you can open the developer tools, and use the Web Audio tab to view the audio graph, see what properties each node has available, and change the values of those properties to see what effect that has.</p>
+
+<p><img alt="The Firefox web audio editor showing an audio graph with AudioBufferSource, IIRFilter, and AudioDestination" src="https://mdn.mozillademos.org/files/16198/web-audio-editor.png" style="border-style: solid; border-width: 1px; height: 365px; width: 1200px;"></p>
+
+<div class="note">
+<p><strong>Nota</strong>: The Web Audio editor is not enabled by default. To display it, you need to go into the Firefox developer tools settings and check the <em>Web Audio</em> checkbox in the <em>Default Developer Tools</em> section.</p>
+</div>
+
+<h2 id="Modificar_o_som">Modificar o som</h2>
+
+<p>Let's delve into some basic modification nodes, to change the sound that we have. This is where the Web Audio API really starts to come in handy. First of all, let's change the volume. This can be done using a {{domxref("GainNode")}}, which represents how big our sound wave is.</p>
+
+<p>There are two ways you can create nodes with the Web Audio API. You can use the factory method on the context itself (e.g.
<code>audioCtx.createGain()</code>) or via a constructor of the node (e.g. <code>new GainNode()</code>). We'll use the factory method in our code:</p>
+
+<pre class="brush: js">const gainNode = audioCtx.createGain();
+</pre>
+
+<p>Now we have to update our audio graph from before, so the input is connected to the gain, then the gain node is connected to the destination:</p>
+
+<pre class="brush: js">track.connect(gainNode).connect(audioCtx.destination);
+</pre>
+
+<p>This will make our audio graph look like this:</p>
+
+<p><img alt="an audio graph with an audio element source, connected to a gain node that modifies the audio source, and then going to the default destination" src="https://mdn.mozillademos.org/files/16196/graph2.jpg" style="border-style: solid; border-width: 1px; height: 550px; width: 1774px;"></p>
+
+<p>The default value for gain is 1; this keeps the current volume the same. Gain can be set to a minimum of about -3.4 and a max of about 3.4. Here we'll allow the boombox to move the gain up to 2 (double the original volume) and down to 0 (this will effectively mute our sound).</p>
+
+<p>Let's give the user control to do this — we'll use a <a href="/en-US/docs/Web/HTML/Element/input/range">range input</a>:</p>
+
+<pre class="brush: html"><input type="range" id="volume" min="0" max="2" value="1" step="0.01" />
+</pre>
+
+<div class="note">
+<p><strong>Nota</strong>: Range inputs are a really handy input type for updating values on audio nodes. You can specify a range's values and use them directly with the audio node's parameters.</p>
+</div>
+
+<p>So let's grab this input's value and update the gain value when the input node has its value changed by the user:</p>
+
+<pre class="brush: js">const volumeControl = document.querySelector('#volume');
+
+volumeControl.addEventListener('input', function() {
+    gainNode.gain.value = this.value;
+}, false);
+</pre>
+
+<div class="note">
+<p><strong>Nota</strong>: The values of node objects (e.g. <code>GainNode.gain</code>) are not simple values; they are actually objects of type {{domxref("AudioParam")}} — these are called parameters. This is why we have to set <code>GainNode.gain</code>'s <code>value</code> property, rather than just setting the value on <code>gain</code> directly. This enables them to be much more flexible, allowing for passing the parameter a specific set of values to change between over a set period of time, for example.</p>
+</div>
+
+<p>Great, now the user can update the track's volume! The gain node is the perfect node to use if you want to add mute functionality.</p>
+
+<h2 id="Adicionar_panning_estéreo_à_sua_aplicação">Adicionar <em>panning</em> estéreo à sua aplicação</h2>
+
+<p>Let's add another modification node to practise what we've just learnt.</p>
+
+<p>There's a {{domxref("StereoPannerNode")}} node, which changes the balance of the sound between the left and right speakers, if the user has stereo capabilities.</p>
+
+<div class="note">
+<p><strong>Nota</strong>: The <code>StereoPannerNode</code> is for simple cases in which you just want stereo panning from left to right. There is also a {{domxref("PannerNode")}}, which allows for a great deal of control over 3D space, or sound <em>spatialisation</em>, for creating more complex effects. This is used in games and 3D apps to create birds flying overhead, or sound coming from behind the user, for instance.</p>
+</div>
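+
+<p>For comparison, a rough sketch of what a {{domxref("PannerNode")}} setup might look like — the position values are arbitrary illustration values, it is wired directly to the destination for brevity, and we use the older <code>setPosition()</code> method for wide support:</p>
+
+<pre class="brush: js">const panner3d = audioCtx.createPanner();
+panner3d.panningModel = 'HRTF'; // higher-quality, more expensive spatialisation
+panner3d.setPosition(2, 0, -1); // x, y, z — to the listener's right and slightly in front
+
+track.connect(panner3d).connect(audioCtx.destination);
+</pre>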
<p>To visualise it, we will be making our audio graph look like this:</p>
+
+<p><img alt="An image showing the audio graph showing an input node, two modification nodes (a gain node and a stereo panner node) and a destination node." src="https://mdn.mozillademos.org/files/16229/graphPan.jpg" style="border-style: solid; border-width: 1px; height: 532px; width: 2236px;"></p>
+
+<p>Let's use the constructor method of creating a node this time. When we do it this way, we have to pass in the context and any options that that particular node may take:</p>
+
+<pre class="brush: js">const pannerOptions = {pan: 0};
+const panner = new StereoPannerNode(audioCtx, pannerOptions);
+</pre>
+
+<div class="note">
+<p><strong>Nota</strong>: The constructor method of creating nodes is not supported by all browsers at this time. The older factory methods are supported more widely.</p>
+</div>
+
+<p>Here our values range from -1 (far left) to 1 (far right). Again let's use a range type input to vary this parameter:</p>
+
+<pre class="brush: html"><input type="range" id="panner" min="-1" max="1" value="0" step="0.01" />
+</pre>
+
+<p>We use the values from that input to adjust our panner values in the same way as we did before:</p>
+
+<pre class="brush: js">const pannerControl = document.querySelector('#panner');
+
+pannerControl.addEventListener('input', function() {
+    panner.pan.value = this.value;
+}, false);
+</pre>
+
+<p>Let's adjust our audio graph again, to connect all the nodes together:</p>
+
+<pre class="brush: js">track.connect(gainNode).connect(panner).connect(audioCtx.destination);
+</pre>
+
+<p>The only thing left to do is give the app a try: <a href="https://codepen.io/Rumyra/pen/qyMzqN/">Check out the final demo here on Codepen</a>.</p>
+
+<h2 id="Resumo">Resumo</h2>
+
+<p>Great! We have a boombox that plays our 'tape', and we can adjust the volume and stereo panning, giving us a fairly basic working audio graph.</p>
+
+<p>This makes up quite a few basics that you would need to start to add audio to your website or web app. There's a lot more functionality to the Web Audio API, but once you've grasped the concept of nodes and putting your audio graph together, we can move on to looking at more complex functionality.</p>
+
+<h2 id="Mais_exemplos">Mais exemplos</h2>
+
+<p>There are other examples available to learn more about the Web Audio API.</p>
+
+<p>The <a href="https://github.com/mdn/voice-change-o-matic">Voice-change-O-matic</a> is a fun voice manipulator and sound visualization web app that allows you to choose different effects and visualizations. The application is fairly rudimentary, but it demonstrates the simultaneous use of multiple Web Audio API features (<a href="https://mdn.github.io/voice-change-o-matic/">run the Voice-change-O-matic live</a>).</p>
+
+<p><img alt="A UI with a sound wave being shown, and options for choosing voice effects and visualizations." src="https://mdn.mozillademos.org/files/7921/voice-change-o-matic.png" style="border-style: solid; border-width: 1px; display: block; height: 500px; margin: 0px auto; width: 640px;"></p>
+
+<p>Another application developed specifically to demonstrate the Web Audio API is the <a href="http://mdn.github.io/violent-theremin/">Violent Theremin</a>, a simple web application that allows you to change pitch and volume by moving your mouse pointer.
It also provides a psychedelic lightshow (<a href="https://github.com/mdn/violent-theremin">see Violent Theremin source code</a>).</p>
+
+<p><img alt="A page full of rainbow colours, with two buttons labeled Clear screen and mute." src="https://mdn.mozillademos.org/files/7919/violent-theremin.png" style="border-style: solid; border-width: 1px; display: block; height: 458px; margin: 0px auto; width: 640px;"></p>
+
+<p>Consulte também o nosso <a href="https://github.com/mdn/webaudio-examples">repositório de exemplos de áudio</a> para mais exemplos.</p>