---
title: Streams API concepts
slug: Web/API/Streams_API/컨셉
translation_of: Web/API/Streams_API/Concepts
---
<div>{{apiref("Streams")}}</div>

<p class="summary">The <a href="/en-US/docs/Web/API/Streams_API">Streams API</a> adds a very useful set of tools to the web platform, providing objects allowing JavaScript to programmatically access streams of data received over the network and process them as desired by the developer. Some of the concepts and terminology associated with streams might be new to you — this article explains all you need to know.</p>

<h2 id="Readable_streams">Readable streams</h2>

<p>A readable stream is a data source represented in JavaScript by a {{domxref("ReadableStream")}} object that flows from an <strong>underlying source</strong> — this is a resource somewhere on the network or elsewhere on your domain that you want to get data from.</p>

<p>There are two types of underlying source:</p>

<ul>
 <li><strong>Push sources</strong> constantly push data at you when you’ve accessed them, and it is up to you to start, pause, or cancel access to the stream. Examples include video streams and TCP/<a href="/en-US/docs/Web/API/WebSockets_API">Web sockets</a>.</li>
 <li><strong>Pull sources</strong> require you to explicitly request data from them once connected to. Examples include a file access operation via a <a href="/en-US/docs/Web/API/Fetch_API">Fetch</a> or <a href="/en-US/docs/Web/API/XMLHttpRequest/XMLHttpRequest">XHR</a> call.</li>
</ul>

<p>The data is read sequentially in small pieces called <strong>chunks</strong>. A chunk can be a single byte, or it can be something larger such as a <a href="/en-US/docs/Web/JavaScript/Typed_arrays">typed array</a> of a certain size. A single stream can contain chunks of different sizes and types.</p>

<p><img alt="" src="https://mdn.mozillademos.org/files/15819/Readable%20streams.png" style="height: 452px; width: 1000px;"></p>

<p>The chunks placed in a stream are said to be <strong>enqueued</strong> — this means they are waiting in a queue ready to be read. An <strong>internal queue</strong> keeps track of the chunks that have not yet been read (see the Internal queues and queueing strategies section below).</p>

<p>The chunks inside the stream are read by a <strong>reader</strong> — this processes the data one chunk at a time, allowing you to do whatever kind of operation you want to do on it. The reader plus the other processing code that goes along with it is called a <strong>consumer</strong>.</p>

<p>There is also a construct you’ll use called a <strong>controller</strong> — each reader has an associated controller that allows you to control the stream (for example, to cancel it if wished).</p>

<p>Only one reader can read a stream at a time; when a reader is created and starts reading a stream (an <strong>active reader</strong>), we say it is <strong>locked</strong> to it. If you want another reader to start reading your stream, you typically need to cancel the first reader before you do anything else (although you can <strong>tee</strong> streams, see the Teeing section below).</p>
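
<p>As a rough sketch of how locking plays out in code (here <code>stream</code> stands in for any existing {{domxref("ReadableStream")}}):</p>

<pre class="brush: js">const reader = stream.getReader();       // the stream is now locked to this reader

// stream.getReader();                   // would throw a TypeError, because the stream is already locked

reader.releaseLock();                    // give up the lock without consuming the stream...
const secondReader = stream.getReader(); // ...so another reader can take over
// Alternatively, reader.cancel() discards whatever is left of the stream.</pre>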

<p>Note that there are two different types of readable stream. As well as the conventional readable stream there is a type called a byte stream — this is an extended version of a conventional stream for reading underlying byte sources (otherwise known as BYOB, or “bring your own buffer”, sources). These allow streams to be read straight into a buffer supplied by the developer, minimizing the copying required. Which underlying stream (and by extension, reader and controller) your code will use depends on how the stream was created in the first place (see the {{domxref("ReadableStream.ReadableStream()")}} constructor page).</p>

<div class="warning">
<p><strong>Important</strong>: Byte streams are not implemented anywhere as yet, and questions have been raised as to whether the spec details are in a finished enough state for them to be implemented. This may change over time.</p>
</div>

<p>You can make use of ready-made readable streams via mechanisms like a {{domxref("Response.body")}} from a fetch request, or roll your own streams using the {{domxref("ReadableStream.ReadableStream()")}} constructor.</p>
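
<p>For example, here is a minimal sketch of consuming the body of a fetch response one chunk at a time (the <code>/data</code> URL is just a placeholder):</p>

<pre class="brush: js">fetch('/data').then(function(response) {
  const reader = response.body.getReader(); // lock the stream to this reader

  function readChunk() {
    return reader.read().then(function(result) {
      if (result.done) {
        console.log('Stream fully read');
        return;
      }
      // result.value is one chunk, here a Uint8Array of bytes
      console.log('Received a chunk of', result.value.length, 'bytes');
      return readChunk(); // ask for the next chunk
    });
  }

  return readChunk();
});</pre>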

<h2 id="Teeing">Teeing</h2>

<p>Even though only a single reader can read a stream at once, it is possible to split a stream into two identical copies, which can then be read by two separate readers. This is called <strong>teeing</strong>.</p>

<p>In JavaScript, this is achieved via the {{domxref("ReadableStream.tee()")}} method — it outputs an array containing two identical copies of the original readable stream, which can then be read independently by two separate readers.</p>

<p>You might do this for example in a <a href="/en-US/docs/Web/API/Service_Worker_API">ServiceWorker</a> if you want to fetch a response from the server and stream it to the browser, but also stream it to the ServiceWorker cache. Since a response body cannot be consumed more than once, and a stream can't be read by more than one reader at once, you’d need two copies to do this.</p>
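
<p>A sketch of that service worker pattern might look like this (the <code>'my-cache'</code> name and the response handling are illustrative only):</p>

<pre class="brush: js">self.addEventListener('fetch', function(event) {
  event.respondWith(
    fetch(event.request).then(function(response) {
      // tee() gives us two identical copies of the body stream
      const [bodyForBrowser, bodyForCache] = response.body.tee();

      // One copy is cached...
      event.waitUntil(
        caches.open('my-cache').then(function(cache) {
          return cache.put(event.request, new Response(bodyForCache, {
            headers: response.headers
          }));
        })
      );

      // ...and the other is streamed back to the browser
      return new Response(bodyForBrowser, { headers: response.headers });
    })
  );
});</pre>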

<p><img alt="" src="https://mdn.mozillademos.org/files/15820/tee.png" style="height: 527px; width: 1000px;"></p>

<h2 id="Writable_streams">Writable streams</h2>

<p>A <strong>writable stream</strong> is a destination into which you can write data, represented in JavaScript by a {{domxref("WritableStream")}} object. This serves as an abstraction over the top of an <strong>underlying sink</strong> — a lower-level I/O sink into which raw data is written.</p>

<p>The data is written to the stream via a <strong>writer</strong>, one chunk at a time. A chunk can take a multitude of forms, just like the chunks in a readable stream. You can use whatever code you like to produce the chunks ready for writing; the writer plus the associated code is called a <strong>producer</strong>.</p>

<p>When a writer is created and starts writing to a stream (an <strong>active writer</strong>), it is said to be <strong>locked</strong> to it. Only one writer can write to a writable stream at one time. If you want another writer to start writing to your stream, you typically need to abort the first writer before attaching another one.</p>

<p>An <strong>internal queue</strong> keeps track of the chunks that have been written to the stream but not yet been processed by the underlying sink.</p>

<p>There is also a construct you’ll use called a controller — each writer has an associated controller that allows you to control the stream (for example, to abort it if wished).</p>

<p><img alt="" src="https://mdn.mozillademos.org/files/15821/writable%20streams.png" style="height: 452px; width: 1000px;"></p>

<p>You can make use of writable streams using the {{domxref("WritableStream.WritableStream()")}} constructor. These currently have very limited availability in browsers.</p>
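
<p>As a minimal sketch, you could create a writable stream whose underlying sink simply logs each chunk, and write to it through a writer (the sink behaviour here is purely illustrative):</p>

<pre class="brush: js">const writableStream = new WritableStream({
  write(chunk) {
    // the underlying sink receives one chunk at a time
    console.log('Sink received chunk:', chunk);
  },
  close() {
    console.log('Sink closed');
  },
  abort(reason) {
    console.error('Sink aborted:', reason);
  }
});

const writer = writableStream.getWriter(); // the stream is now locked to this writer

writer.write('Hello');
writer.write('Streams');
writer.close(); // the producer has no more chunks to write</pre>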

<h2 id="Pipe_chains">Pipe chains</h2>

<p>The Streams API makes it possible to pipe streams into one another (or at least it will do when browsers implement the relevant functionality) using a structure called a <strong>pipe chain</strong>. There are two methods available in the spec to facilitate this:</p>

<ul>
 <li>{{domxref("ReadableStream.pipeThrough()")}} — pipes the stream through a <strong>transform stream</strong>, which is a pair comprised of a writable stream that has data written to it, and a readable stream that then has the data read out of it — this acts as a kind of treadmill that constantly takes data in and transforms it to a new state. Effectively the same stream is written to, and then the same values are read. A simple example is a text decoder, where raw bytes are written, and then strings are read. You can find more useful ideas and examples in the spec — see <a href="https://streams.spec.whatwg.org/#ts-model">Transform streams</a> for ideas, and <a href="https://streams.spec.whatwg.org/#example-both">this web sockets example</a>.</li>
 <li>{{domxref("ReadableStream.pipeTo()")}} — pipes to a writable stream that acts as the end point of the pipe chain.</li>
</ul>

<p>The start of the pipe chain is called the <strong>original source</strong>, and the end is called the <strong>ultimate sink</strong>.</p>
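
<p>As a rough sketch of the idea, a hand-rolled pair of streams can stand in for a transform stream (the uppercasing "transform" and the <code>someReadableStream</code>/<code>someWritableStream</code> names are placeholders, not real APIs):</p>

<pre class="brush: js">// A { writable, readable } pair acting as a simple transform stream
let readableController;

const upperCaser = {
  readable: new ReadableStream({
    start(controller) {
      readableController = controller;
    }
  }),
  writable: new WritableStream({
    write(chunk) {
      // whatever is written to the writable side comes out of the
      // readable side, transformed
      readableController.enqueue(String(chunk).toUpperCase());
    },
    close() {
      readableController.close();
    }
  })
};

someReadableStream               // original source
  .pipeThrough(upperCaser)       // transform stream
  .pipeTo(someWritableStream);   // ultimate sink</pre>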

<p><img alt="" src="https://mdn.mozillademos.org/files/15818/PipeChain.png" style="height: 382px; width: 1000px;"></p>

<div class="note">
<p><strong>Note</strong>: This functionality isn't fully thought through yet, or available in many browsers. At some point the spec writers hope to add something like a <code>TransformStream</code> class to make creating transform streams easier.</p>
</div>

<h2 id="Backpressure">Backpressure</h2>

<p>An important concept in streams is <strong>backpressure</strong> — this is the process by which a single stream or a pipe chain regulates the speed of reading/writing. When a stream later in the chain is still busy and isn't yet ready to accept more chunks, it sends a signal backwards through the chain to tell earlier transform streams (or the original source) to slow down delivery as appropriate so that you don't end up with a bottleneck anywhere.</p>

<p>To use backpressure in a ReadableStream, we can find out how much more data the consumer currently wants by querying the {{domxref("ReadableStreamDefaultController.desiredSize")}} attribute on the controller. If it is too low, our ReadableStream can tell its underlying source to stop sending data, and backpressure is applied back along the stream chain.</p>

<p>If the consumer later wants to receive data again, we can use the pull method passed in at stream creation time to tell our underlying source to feed the stream with more data.</p>
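
<p>For example, here is a sketch of an underlying source that respects backpressure (the <code>randomChunk()</code> helper is hypothetical and just stands in for real data production):</p>

<pre class="brush: js">// Hypothetical helper standing in for real data production
function randomChunk() {
  return Math.random().toString(36).slice(2);
}

const stream = new ReadableStream({
  pull(controller) {
    // pull() is only called while the consumer wants more data, that is,
    // while controller.desiredSize is above zero: backpressure in action
    console.log('desiredSize is', controller.desiredSize);
    controller.enqueue(randomChunk());
  }
});</pre>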

<h2 id="Internal_queues_and_queuing_strategies">Internal queues and queuing strategies</h2>

<p>As mentioned earlier, an internal queue keeps track of the chunks in a stream that have not yet been processed.</p>

<ul>
 <li>In the case of readable streams, these are the chunks that have been enqueued but not yet read.</li>
 <li>In the case of writable streams, these are chunks that have been written but not yet processed by the underlying sink.</li>
</ul>

<p>Internal queues employ a <strong>queuing strategy</strong>, which dictates how to signal backpressure based on the <strong>internal queue state</strong>.</p>

<p>In general, the strategy compares the size of the chunks in the queue to a value called the <strong>high water mark</strong>, which is the largest total chunk size that the queue can realistically manage.</p>

<p>The calculation performed is</p>

<pre>high water mark - total size of chunks in queue = desired size</pre>

<p>The <strong>desired size</strong> is the amount of chunk data the stream can still accept while keeping its internal queue below the high water mark. After the calculation is performed, chunk generation will be slowed down/sped up as appropriate to keep the stream flowing as fast as possible while keeping the desired size above zero. If the value falls to zero (or below in the case of writable streams), it means that chunks are being generated faster than the stream can cope with, which results in problems.</p>

<div class="note">
<p><strong>Note</strong>: What happens in the case of zero or negative desired size hasn’t really been defined in the spec so far. Patience is a virtue.</p>
</div>

<p>As an example, let's take a chunk size of 1, and a high water mark of 3. This means that up to 3 chunks can be enqueued before the high water mark is reached and backpressure is applied.</p>
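
<p>In code, that setup might look something like this, using the built-in {{domxref("CountQueuingStrategy")}}, which counts every chunk as having a size of 1:</p>

<pre class="brush: js">const strategy = new CountQueuingStrategy({ highWaterMark: 3 });

let streamController;
const stream = new ReadableStream({
  start(controller) {
    streamController = controller;
  }
}, strategy);

streamController.enqueue('chunk 1');
streamController.enqueue('chunk 2');
streamController.enqueue('chunk 3');

// desired size = 3 (high water mark) - 3 (chunks in the queue) = 0,
// so backpressure is now being signalled back to the source
console.log(streamController.desiredSize); // 0</pre>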