---
title: Taking still photos with the webcam
slug: Web/API/API_WebRTC/Tirar_fotografias
tags:
- API
- Advanced
- Sample code
- Example
- Video
- WebRTC
- Webcam
translation_of: Web/API/WebRTC_API/Taking_still_photos
---
<p>{{WebRTCSidebar}}</p>
<p><span class="seoSummary">This article shows how to use WebRTC to access the camera on a computer or mobile phone with WebRTC support and take a photo with it.</span> <a href="https://mdn-samples.mozilla.org/s/webrtc-capturestill">Try this sample</a> then read on to learn how it works.</p>
<p>You can also jump straight to the <a class="external" href="https://github.com/mdn/samples-server/tree/master/s/webrtc-capturestill" rel="noopener">code on GitHub</a> if you like.</p>
<h2 id="The_HTML_markup">The HTML markup</h2>
<p><a class="external" href="https://github.com/mdn/samples-server/tree/master/s/webrtc-capturestill/index.html" rel="noopener">Our HTML interface</a> has two main operational sections: the stream and capture panel and the presentation panel. Each of these is presented side-by-side in its own {{HTMLElement("div")}} to facilitate styling and control.</p>
<p>The first panel on the left contains two components: a {{HTMLElement("video")}} element, which will receive the stream from WebRTC, and a {{HTMLElement("button")}} the user clicks to capture a video frame.</p>
<pre class="brush: html"> <div class="camera">
<video id="video">Video stream not available.</video>
<button id="startbutton">Take photo</button>
</div></pre>
<p>This is straightforward, and we'll see how it ties together when we get into the JavaScript code.</p>
<p>Next, we have a {{HTMLElement("canvas")}} element into which the captured frames are stored, potentially manipulated in some way, and then converted into an output image file. This canvas is kept hidden by styling it with {{cssxref("display")}}<code>: none</code>, to avoid cluttering up the screen; the user does not need to see this intermediate stage.</p>
<p>We also have an {{HTMLElement("img")}} element into which we will draw the image — this is the final display shown to the user.</p>
<pre class="brush: html"> <canvas id="canvas">
</canvas>
<div class="output">
<img id="photo" alt="The screen capture will appear in this box.">
</div></pre>
<p>That's all of the relevant HTML. The rest is just some page layout fluff and a bit of text offering a link back to this page.</p>
<h2 id="O_código_de_JavaScript"><span style="font-size: 30px;">O código de JavaScript</span></h2>
<p>Now let's take a look at the <a class="external" href="https://github.com/mdn/samples-server/tree/master/s/webrtc-capturestill/capture.js" rel="noopener">JavaScript code</a>. We'll break it up into a few bite-sized pieces to make it easier to explain.</p>
<h3 id="Initialização">Initialização</h3>
<p>We start by wrapping the whole script in an anonymous function to avoid global variables, then setting up various variables we'll be using.</p>
<pre class="brush: js">(function() {
var width = 320; // We will scale the photo width to this
var height = 0; // This will be computed based on the input stream
var streaming = false;
var video = null;
var canvas = null;
var photo = null;
var startbutton = null;</pre>
<p>Those variables are:</p>
<dl>
<dt><code>width</code></dt>
<dd>Whatever size the incoming video is, we're going to scale the resulting image to be 320 pixels wide.</dd>
<dt><code>height</code></dt>
<dd>The output height of the image will be computed given the <code>width</code> and the aspect ratio of the stream.</dd>
<dt><code>streaming</code></dt>
<dd>Indicates whether or not there is currently an active stream of video running.</dd>
<dt><code>video</code></dt>
<dd>This will be a reference to the {{HTMLElement("video")}} element after the page is done loading.</dd>
<dt><code>canvas</code></dt>
<dd>This will be a reference to the {{HTMLElement("canvas")}} element after the page is done loading.</dd>
<dt><code>photo</code></dt>
<dd>This will be a reference to the {{HTMLElement("img")}} element after the page is done loading.</dd>
<dt><code>startbutton</code></dt>
<dd>This will be a reference to the {{HTMLElement("button")}} element that's used to trigger capture. We'll get that after the page is done loading.</dd>
</dl>
<h3 id="The_startup()_function">The startup() function</h3>
<p>The <code>startup()</code> function is run when the page has finished loading, courtesy of {{domxref("window.addEventListener()")}}. This function's job is to request access to the user's webcam, initialize the output {{HTMLElement("img")}} to a default state, and to establish the event listeners needed to receive each frame of video from the camera and react when the button is clicked to capture an image.</p>
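<p>For reference, the registration that kicks all of this off looks like this (in the sample it sits at the end of the script, inside the anonymous wrapper):</p>
<pre class="brush: js">// Run startup() once the page has finished loading.
window.addEventListener('load', startup, false);</pre>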
<h4 id="Getting_element_references">Getting element references</h4>
<p>First, we grab references to the major elements we need to be able to access.</p>
<pre class="brush: js"> function startup() {
video = document.getElementById('video');
canvas = document.getElementById('canvas');
photo = document.getElementById('photo');
startbutton = document.getElementById('startbutton');</pre>
<h4 id="Get_the_media_stream">Get the media stream</h4>
<p>The next task is to get the media stream:</p>
<pre class="brush: js"> navigator.mediaDevices.getUserMedia({ video: true, audio: false })
.then(function(stream) {
video.srcObject = stream;
video.play();
})
.catch(function(err) {
console.log("An error occured! " + err);
});
</pre>
<p>Here, we're calling {{domxref("MediaDevices.getUserMedia()")}} and requesting a video stream (without audio). It returns a promise which we attach success and failure callbacks to.</p>
<p>The success callback receives a <code>stream</code> object as input, and sets it as the {{HTMLElement("video")}} element's source by assigning it to the element's <code>srcObject</code> property.</p>
<p>Once the stream is linked to the <code>&lt;video&gt;</code> element, we start it playing by calling <code><a href="/en-US/docs/Web/API/HTMLMediaElement#play">HTMLMediaElement.play()</a></code>.</p>
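<p>In modern browsers, <code>play()</code> returns a promise; if you want to handle the case where playback can't start (for example, because of an autoplay policy), a minimal sketch looks like this:</p>
<pre class="brush: js">video.play().catch(function(err) {
  // Playback was prevented; log it rather than letting the
  // rejection go unhandled.
  console.log("Unable to start playback: " + err);
});</pre>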
<p>The error callback is called if opening the stream doesn't work. This will happen, for example, if there's no compatible camera connected or the user denied access.</p>
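<p>If you want to tell those failure cases apart, you can inspect the error's <code>name</code>. The sketch below assumes the standard {{domxref("DOMException")}} names that <code>getUserMedia()</code> rejects with:</p>
<pre class="brush: js">navigator.mediaDevices.getUserMedia({ video: true, audio: false })
  .then(function(stream) {
    video.srcObject = stream;
    video.play();
  })
  .catch(function(err) {
    if (err.name === "NotAllowedError") {
      // The user (or a browser policy) denied access to the camera.
      console.log("Camera access was denied.");
    } else if (err.name === "NotFoundError") {
      // No camera matching the requested constraints was found.
      console.log("No camera was found.");
    } else {
      console.log("An error occurred: " + err);
    }
  });</pre>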
<h4 id="Listen_for_the_video_to_start_playing">Listen for the video to start playing</h4>
<p>After calling <code><a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement#play">HTMLMediaElement.play()</a></code> on the {{HTMLElement("video")}}, there's a (hopefully brief) period of time that elapses before the stream of video begins to flow. To avoid blocking until that happens, we add a listener for the <code>canplay</code> event to <code>video</code>; that event is delivered when video playback actually begins. At that point, all the properties in the <code>video</code> object have been configured based on the stream's format.</p>
<pre class="brush: js"> video.addEventListener('canplay', function(ev){
if (!streaming) {
height = video.videoHeight / (video.videoWidth/width);
video.setAttribute('width', width);
video.setAttribute('height', height);
canvas.setAttribute('width', width);
canvas.setAttribute('height', height);
streaming = true;
}
}, false);</pre>
<p>This callback does nothing unless it's the first time it's been called; this is tested by looking at the value of our <code>streaming</code> variable, which is <code>false</code> the first time this method is run.</p>
<p>If this is indeed the first run, we compute the video's display height from the ratio between its actual width, <code>video.videoWidth</code>, and the width at which we're going to render it, <code>width</code>, so that the aspect ratio is preserved. For example, a 640×480 stream rendered at a <code>width</code> of 320 gets a <code>height</code> of 240.</p>
<p>The <code>width</code> and <code>height</code> of both the video and the canvas are then set to match each other by calling {{domxref("Element.setAttribute()")}} on each element, setting widths and heights as appropriate. Finally, we set the <code>streaming</code> variable to <code>true</code> to prevent this setup code from inadvertently running again.</p>
<h4 id="Handle_clicks_on_the_button">Handle clicks on the button</h4>
<p>To capture a still photo each time the user clicks the <code>startbutton</code>, we need to add an event listener to the button, to be called when the <code>click</code> event is issued:</p>
<pre class="brush: js"> startbutton.addEventListener('click', function(ev){
takepicture();
ev.preventDefault();
}, false);</pre>
<p>This method is simple enough: it just calls our <code>takepicture()</code> function, defined below in the section {{anch("Capturing a frame from the stream")}}, then calls {{domxref("Event.preventDefault()")}} on the received event to prevent the click from being handled more than once.</p>
<h4 id="Wrapping_up_the_startup()_method">Wrapping up the startup() method</h4>
<p>There are only two more lines of code in the <code>startup()</code> method:</p>
<pre class="brush: js"> clearphoto();
}</pre>
<p>This is where we call the <code>clearphoto()</code> method we'll describe below in the section {{anch("Clearing the photo box")}}.</p>
<h3 id="Clearing_the_photo_box">Clearing the photo box</h3>
<p>Clearing the photo box involves creating an image, then converting it into a format usable by the {{HTMLElement("img")}} element that displays the most recently captured frame. That code looks like this:</p>
<pre class="brush: js"> function clearphoto() {
var context = canvas.getContext('2d');
context.fillStyle = "#AAA";
context.fillRect(0, 0, canvas.width, canvas.height);
var data = canvas.toDataURL('image/png');
photo.setAttribute('src', data);
}</pre>
<p>We start by getting a reference to the hidden {{HTMLElement("canvas")}} element that we use for offscreen rendering. Next we set the <code>fillStyle</code> to <code>#AAA</code> (a fairly light grey), and fill the entire canvas with that color by calling {{domxref("CanvasRenderingContext2D.fillRect()","fillRect()")}}.</p>
<p>Last in this function, we convert the canvas into a PNG image and call <code>{{domxref("Element.setAttribute", "photo.setAttribute()")}}</code> to make our captured still box display the image.</p>
<h3 id="Capturing_a_frame_from_the_stream">Capturing a frame from the stream</h3>
<p>There's one last function to define, and it's the point of the entire exercise: the <code>takepicture()</code> function, whose job it is to capture the currently displayed video frame, convert it into a PNG file, and display it in the captured frame box. The code looks like this:</p>
<pre class="brush: js"> function takepicture() {
var context = canvas.getContext('2d');
if (width && height) {
canvas.width = width;
canvas.height = height;
context.drawImage(video, 0, 0, width, height);
var data = canvas.toDataURL('image/png');
photo.setAttribute('src', data);
} else {
clearphoto();
}
}</pre>
<p>As is the case any time we need to work with the contents of a canvas, we start by getting the {{domxref("CanvasRenderingContext2D","2D drawing context")}} for the hidden canvas.</p>
<p>Then, if the width and height are both non-zero (meaning that there's at least potentially valid image data), we set the width and height of the canvas to match that of the captured frame, then call {{domxref("CanvasRenderingContext2D.drawImage()", "drawImage()")}} to draw the current frame of the video into the context, filling the entire canvas with the frame image.</p>
<div class="note">
<p><strong>Note:</strong> This takes advantage of the fact that the {{domxref("HTMLVideoElement")}} interface looks like an {{domxref("HTMLImageElement")}} to any API that accepts an <code>HTMLImageElement</code> as a parameter, with the video's current frame presented as the image's contents.</p>
</div>
<p>Once the canvas contains the captured image, we convert it to PNG format by calling {{domxref("HTMLCanvasElement.toDataURL()")}} on it; finally, we call {{domxref("Element.setAttribute", "photo.setAttribute()")}} to make our captured still box display the image.</p>
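<p>A data URL is convenient for display, but if you intend to upload or save the capture, {{domxref("HTMLCanvasElement.toBlob()")}} produces a binary {{domxref("Blob")}} instead. A minimal sketch of that variant:</p>
<pre class="brush: js">canvas.toBlob(function(blob) {
  // The blob could be sent to a server with fetch(); here we just
  // display it. Revoke the object URL once it's no longer needed.
  photo.setAttribute('src', URL.createObjectURL(blob));
}, 'image/png');</pre>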
<p>If there isn't a valid image available (that is, the <code>width</code> and <code>height</code> are both 0), we clear the contents of the captured frame box by calling <code>clearphoto()</code>.</p>
<h2 id="Fun_with_filters">Fun with filters</h2>
<p>Since we're capturing images from the user's webcam by grabbing frames from a {{HTMLElement("video")}} element, we can very easily apply filters and fun effects to the video. As it turns out, any CSS filters you apply to the element using the {{cssxref("filter")}} property affect the captured photo. These filters can range from the simple (making the image black and white) to the extreme (Gaussian blurs and hue rotation).</p>
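<p>For example, setting the {{cssxref("filter")}} property from JavaScript is enough to see the effect on the live preview. The values below are just illustrations; any CSS filter function works:</p>
<pre class="brush: js">// A black-and-white preview:
video.style.filter = 'grayscale(100%)';

// Or something more extreme:
video.style.filter = 'blur(4px) hue-rotate(90deg)';</pre>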
<p>You can play with this effect using, for example, the Firefox developer tools' <a href="/en-US/docs/Tools/Style_Editor">style editor</a>; see <a href="/en-US/docs/Tools/Page_Inspector/How_to/Edit_CSS_filters">Edit CSS filters</a> for details on how to do so.</p>
<h2 id="Consultar_também">Consultar também</h2>
<ul>
<li><a href="https://mdn-samples.mozilla.org/s/webrtc-capturestill">Try this sample</a></li>
<li><a href="https://github.com/mdn/samples-server/tree/master/s/webrtc-capturestill">Sample code on Github</a></li>
<li>{{domxref("Navigator.getUserMedia()")}}</li>
<li>{{SectionOnPage("/en-US/docs/Web/API/Canvas_API/Tutorial/Using_images", "Using frames from a video")}}</li>
<li>{{domxref("CanvasRenderingContext2D.drawImage()")}}</li>
</ul>