---
title: SpeechRecognitionResult
slug: Web/API/SpeechRecognitionResult
tags:
- API
- Experimental
- Interface
- NeedsTranslation
- Reference
- SpeechRecognitionResult
- TopicStub
- Web Speech API
- recognition
- speech
translation_of: Web/API/SpeechRecognitionResult
---
<p>{{APIRef("Web Speech API")}}{{SeeCompatTable}}</p>
<p>The <strong><code>SpeechRecognitionResult</code></strong> interface of the <a href="/en-US/docs/Web/API/Web_Speech_API">Web Speech API</a> represents a single recognition match, which may contain multiple {{domxref("SpeechRecognitionAlternative")}} objects.</p>
<h2 id="Properties">Properties</h2>
<dl>
<dt>{{domxref("SpeechRecognitionResult.isFinal")}} {{readonlyinline}}</dt>
<dd>A boolean value that states whether this result is final (<code>true</code>) or not (<code>false</code>). If it is final, this is the last time this result will be returned; if not, the result is an interim result and may be updated later on.</dd>
<dt>{{domxref("SpeechRecognitionResult.length")}} {{readonlyinline}}</dt>
<dd>Returns the length of the "array", that is, the number of {{domxref("SpeechRecognitionAlternative")}} objects contained in the result (also referred to as "n-best alternatives").</dd>
</dl>
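<p>The snippet below is a minimal sketch (not part of the Speech color changer demo) showing how these two properties are typically used together. It assumes a <code>recognition</code> object created elsewhere with <code>interimResults</code> enabled; it checks <code>isFinal</code> to tell interim results from final ones, and reads <code>length</code> to see how many alternatives the result contains.</p>
<pre class="brush: js">recognition.interimResults = true;

recognition.onresult = function(event) {
  // Look at the most recent result in the SpeechRecognitionResultList.
  var result = event.results[event.results.length - 1];

  if (result.isFinal) {
    // This result will not change again, so act on it.
    console.log('Final result with ' + result.length + ' alternative(s): '
                + result[0].transcript);
  } else {
    // Interim result; it may still be revised by a later event.
    console.log('Interim result: ' + result[0].transcript);
  }
};</pre>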
<h2 id="Methods">Methods</h2>
<dl>
<dt>{{domxref("SpeechRecognitionResult.item")}}</dt>
<dd>A standard getter that allows {{domxref("SpeechRecognitionAlternative")}} objects within the result to be accessed via array syntax.</dd>
</dl>
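<p>As a short illustrative sketch (the <code>result</code> variable is assumed to hold a <code>SpeechRecognitionResult</code> taken from a result event), <code>item()</code> and bracket notation are interchangeable ways to read an alternative:</p>
<pre class="brush: js">var best = result.item(0); // explicit getter call
var same = result[0];      // equivalent array-style access

console.log(best.transcript, best.confidence);</pre>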
<h2 id="Examples">Examples</h2>
<p>This code is excerpted from our <a href="https://github.com/mdn/web-speech-api/blob/master/speech-color-changer/script.js">Speech color changer</a> example.</p>
<pre class="brush: js">recognition.onresult = function(event) {
// The SpeechRecognitionEvent results property returns a SpeechRecognitionResultList object
// The SpeechRecognitionResultList object contains SpeechRecognitionResult objects.
// It has a getter so it can be accessed like an array
// The first [0] returns the SpeechRecognitionResult at position 0.
// Each SpeechRecognitionResult object contains SpeechRecognitionAlternative objects that contain individual results.
// These also have getters so they can be accessed like arrays.
// The second [0] returns the SpeechRecognitionAlternative at position 0.
// We then grab the transcript property of the SpeechRecognitionAlternative object.
var color = event.results[0][0].transcript;
diagnostic.textContent = 'Result received: ' + color + '.';
bg.style.backgroundColor = color;
};</pre>
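<p>If <code>maxAlternatives</code> is set higher than 1, the same result can carry several {{domxref("SpeechRecognitionAlternative")}} objects, each with its own <code>transcript</code> and <code>confidence</code>. The following variation on the handler above is a rough sketch (assuming the same <code>recognition</code> and <code>diagnostic</code> objects from the demo) that loops through all of the alternatives before using the top one:</p>
<pre class="brush: js">recognition.maxAlternatives = 3;

recognition.onresult = function(event) {
  var result = event.results[0];

  // Log every alternative the recognizer returned for this result.
  for (var i = 0; i &lt; result.length; i++) {
    console.log('Alternative ' + i + ': ' + result[i].transcript
                + ' (confidence: ' + result[i].confidence + ')');
  }

  // The first alternative is the most likely one, so use it for the page update.
  diagnostic.textContent = 'Result received: ' + result[0].transcript + '.';
};</pre>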
<h2 id="Specifications">Specifications</h2>
<table class="standard-table">
<tbody>
<tr>
<th scope="col">Specification</th>
<th scope="col">Status</th>
<th scope="col">Comment</th>
</tr>
<tr>
<td>{{SpecName('Web Speech API', '#speechreco-result', 'SpeechRecognitionResult')}}</td>
<td>{{Spec2('Web Speech API')}}</td>
<td> </td>
</tr>
</tbody>
</table>
<h2 id="Browser_compatibility">Browser compatibility</h2>
<div>
<p>{{Compat("api.SpeechRecognitionResult")}}</p>
</div>
<h3 id="Firefox_OS_permissions">Firefox OS permissions</h3>
<p>To use speech recognition in an app, you need to specify the following permissions in your <a href="/en-US/docs/Web/Apps/Build/Manifest">manifest</a>:</p>
<pre class="brush: json">"permissions": {
"audio-capture" : {
"description" : "Audio capture"
},
"speech-recognition" : {
"description" : "Speech recognition"
}
}</pre>
<p>You also need a privileged app, so you need to include this as well:</p>
<pre class="brush: json"> "type": "privileged"</pre>
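<p>Putting the two fragments together, a manifest combining them might look like the sketch below (the <code>name</code> field is an illustrative placeholder, not part of the original example):</p>
<pre class="brush: json">{
  "name": "Speech color changer",
  "type": "privileged",
  "permissions": {
    "audio-capture": {
      "description": "Audio capture"
    },
    "speech-recognition": {
      "description": "Speech recognition"
    }
  }
}</pre>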
<h2 id="See_also">See also</h2>
<ul>
<li><a href="/en-US/docs/Web/API/Web_Speech_API">Web Speech API</a></li>
</ul>