Diffstat (limited to 'files/ar/web/http/connection_management_in_http_1.x/index.html')
-rw-r--r--  files/ar/web/http/connection_management_in_http_1.x/index.html | 86
1 file changed, 86 insertions(+), 0 deletions(-)
diff --git a/files/ar/web/http/connection_management_in_http_1.x/index.html b/files/ar/web/http/connection_management_in_http_1.x/index.html
new file mode 100644
index 0000000000..de95b35122
--- /dev/null
+++ b/files/ar/web/http/connection_management_in_http_1.x/index.html
@@ -0,0 +1,86 @@
+---
+title: Connection management in HTTP/1.x
+slug: Web/HTTP/Connection_management_in_HTTP_1.x
+translation_of: Web/HTTP/Connection_management_in_HTTP_1.x
+---
+<div>{{HTTPSidebar}}</div>
+
+<p class="summary">Connection management is a key topic in HTTP: opening and maintaining connections largely impacts the performance of Web sites and Web applications. In HTTP/1.x, there are several models: <em>short-lived connections</em>, <em>persistent connections</em>, and <em>HTTP pipelining. </em></p>
+
+<p>HTTP mostly relies on TCP for its transport protocol, providing a connection between the client and the server. In its infancy, HTTP used a single model to handle such connections. These connections were short-lived: a new one was created each time a request needed to be sent, and closed once the answer had been received.</p>
+
+<p>This simple model had an innate performance limitation: opening each TCP connection is a resource-consuming operation. Several messages must be exchanged between the client and the server, so network latency and bandwidth affect performance each time a request is sent. Modern Web pages require many requests (a dozen or more) to serve the amount of information needed, which makes this earlier model inefficient.</p>
+
+<p>Two newer models were created in HTTP/1.1. The persistent-connection model keeps connections opened between successive requests, reducing the time needed to open new connections. The HTTP pipelining model goes one step further, by sending several successive requests without even waiting for an answer, reducing much of the latency in the network.</p>
+
+<p><img alt="Compares the performance of the three HTTP/1.x connection models: short-lived connections, persistent connections, and HTTP pipelining." src="https://mdn.mozillademos.org/files/13727/HTTP1_x_Connections.png" style="height: 538px; width: 813px;"></p>
+
+<div class="note">
+<p>HTTP/2 adds additional models for connection management.</p>
+</div>
+
+<p>It's important to note that connection management in HTTP applies to the connection between two consecutive nodes, which is <a href="/en-US/docs/Web/HTTP/Headers#hbh">hop-by-hop</a> and not <a href="/en-US/docs/Web/HTTP/Headers#e2e">end-to-end</a>. The model used in connections between a client and its first proxy may differ from the model between a proxy and the destination server (or any intermediate proxies). The HTTP headers involved in defining the connection model, like {{HTTPHeader("Connection")}} and {{HTTPHeader("Keep-Alive")}}, are <a href="/en-US/docs/Web/HTTP/Headers#hbh">hop-by-hop</a> headers, and their values can be changed by intermediary nodes.</p>
+
+<p>A related topic is the concept of HTTP connection upgrades, wherein an HTTP/1.1 connection is upgraded to a different protocol, such as TLS/1.0, WebSocket, or even HTTP/2 in cleartext. This <a href="/en-US/docs/Web/HTTP/Protocol_upgrade_mechanism">protocol upgrade mechanism</a> is documented in more detail elsewhere.</p>
+
+<h2 id="Short-lived_connections">Short-lived connections</h2>
+
+<p>The original model of HTTP, and the default one in HTTP/1.0, is <em>short-lived connections</em>. Each HTTP request is completed on its own connection; this means a TCP handshake happens before each HTTP request, and these are serialized.</p>
+
+<p>The TCP handshake itself is time-consuming, but a TCP connection adapts to its load, becoming more efficient the longer it is sustained (a warm connection). Short-lived connections do not make use of this efficiency feature of TCP, and performance degrades from its optimum because transfers keep happening over new, cold connections.</p>
+
+<p>This model is the default model used in HTTP/1.0 (if there is no {{HTTPHeader("Connection")}} header, or if its value is set to <code>close</code>). In HTTP/1.1, this model is only used when the {{HTTPHeader("Connection")}} header is sent with a value of <code>close</code>.</p>
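+
+<p>As a rough illustration, the short Python sketch below forces the short-lived model over an HTTP/1.1 connection by sending <code>Connection: close</code>, so the server tears the TCP connection down after a single response. The host <code>example.com</code> is only a placeholder.</p>
+
+<pre class="brush: python">import http.client
+
+# Force the short-lived model: ask the server to close the TCP connection
+# right after this one response.
+conn = http.client.HTTPConnection("example.com", 80)
+conn.request("GET", "/", headers={"Connection": "close"})
+response = conn.getresponse()
+print(response.status, len(response.read()))   # connection is torn down afterwards
+conn.close()
+</pre>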
+
+<div class="note">
+<p>Unless dealing with a very old system that does not support persistent connections, there is no compelling reason to use this model.</p>
+</div>
+
+<h2 id="Persistent_connections">Persistent connections</h2>
+
+<p>Short-lived connections have two major hitches: the time taken to establish a new connection is significant, and the performance of the underlying TCP connection gets better only when the connection has been in use for some time (a warm connection). To ease these problems, the concept of a <em>persistent connection</em> was designed, even prior to HTTP/1.1. It may also be called a <em>keep-alive connection</em>.</p>
+
+<p>A persistent connection is one which remains open for a period of time, and can be reused for several requests, saving the need for a new TCP handshake, and utilizing TCP's performance enhancing capabilities. This connection will not stay open forever: idle connections are closed after some time (a server may use the {{HTTPHeader("Keep-Alive")}} header to specify a minimum time the connection should be kept open).</p>
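+
+<p>Below is a minimal sketch, using Python's standard <code>http.client</code> module, of how a single persistent connection can be reused for several successive requests; the host and paths are placeholders.</p>
+
+<pre class="brush: python">import http.client
+
+# One HTTPConnection object keeps a single TCP connection open (persistence is
+# the HTTP/1.1 default) and reuses it for several successive requests.
+conn = http.client.HTTPConnection("example.com", 80)
+for path in ("/", "/style.css", "/script.js"):
+    conn.request("GET", path)
+    response = conn.getresponse()
+    response.read()    # the body must be fully read before reusing the connection
+    print(path, response.status)
+conn.close()           # explicitly close the connection once it is no longer needed
+</pre>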
+
+<p>Persistent connections also have drawbacks; even when idling they consume server resources, and under heavy load, {{glossary("DoS attack", "DoS attacks")}} can be conducted. In such cases, using non-persistent connections, which are closed as soon as they are idle, can provide better performance.</p>
+
+<p>HTTP/1.0 connections are not persistent by default. Setting {{HTTPHeader("Connection")}} to anything other than <code>close</code>, usually <code>keep-alive</code>, will make them persistent.</p>
+
+<p>In HTTP/1.1, persistence is the default, and the header is no longer needed (but it is often added as a defensive measure against cases requiring a fallback to HTTP/1.0).</p>
+
+<h2 id="HTTP_pipelining">HTTP pipelining</h2>
+
+<div class="note">
+<p>HTTP pipelining is not activated by default in modern browsers:</p>
+
+<ul>
+ <li>Buggy <a href="https://en.wikipedia.org/wiki/Proxy_server">proxies</a> are still common and these lead to strange and erratic behaviors that Web developers cannot foresee and diagnose easily.</li>
+ <li>Pipelining is complex to implement correctly: the size of the resource being transferred, the effective <a href="https://en.wikipedia.org/wiki/Round-trip_delay_time">RTT</a> that will be used, as well as the effective bandwidth, have a direct influence on the improvement provided by the pipeline. Without knowing these, important messages may be delayed behind unimportant ones. The notion of importance even evolves as the page is laid out! In most cases, HTTP pipelining therefore brings only a marginal improvement.</li>
+ <li>Pipelining is subject to the <a href="https://en.wikipedia.org/wiki/Head-of-line_blocking">head-of-line blocking (HOL)</a> problem.</li>
+</ul>
+
+<p>For these reasons, pipelining has been superseded by a better algorithm, <em>multiplexing</em>, that is used by HTTP/2.</p>
+</div>
+
+<p>By default, <a href="/en-US/docs/Web/HTTP" title="en/HTTP">HTTP</a> requests are issued sequentially. The next request is only issued once the response to the current request has been received. As they are affected by network latencies and bandwidth limitations, this can result in significant delay before the next request is <em>seen</em> by the server.</p>
+
+<p>Pipelining is the process of sending successive requests over the same persistent connection, without waiting for the answers. This avoids much of the latency of the connection. Theoretically, performance could also be improved if two HTTP requests were to be packed into the same TCP message. The typical <a href="https://en.wikipedia.org/wiki/Maximum_segment_size">MSS</a> (Maximum Segment Size) is big enough to contain several simple requests, although the size of HTTP requests continues to grow.</p>
+
+<p>Not all types of HTTP requests can be pipelined: only {{glossary("idempotent")}} methods, that is {{HTTPMethod("GET")}}, {{HTTPMethod("HEAD")}}, {{HTTPMethod("PUT")}} and {{HTTPMethod("DELETE")}}, can be replayed safely: should a failure happen, the pipeline content can simply be repeated.</p>
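+
+<p>The raw-socket sketch below shows the idea: two idempotent <code>GET</code> requests are written back-to-back on one TCP connection before any response is read. The host and paths are placeholders and, as noted above, real servers and proxies may not handle pipelined requests gracefully.</p>
+
+<pre class="brush: python">import socket
+
+# Two pipelined GET requests: both are sent before the first response arrives.
+# The second request asks the server to close the connection afterwards.
+requests = (
+    b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
+    b"GET /favicon.ico HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
+)
+
+with socket.create_connection(("example.com", 80)) as sock:
+    sock.sendall(requests)              # both requests leave before any response
+    reply = b""
+    while chunk := sock.recv(4096):     # responses come back in the same order
+        reply += chunk
+
+print(reply.decode("latin-1", errors="replace")[:200])
+</pre>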
+
+<p>Today, every HTTP/1.1-compliant proxy and server should support pipelining, though many have limitations in practice: this is a significant reason why no modern browser activates the feature by default.</p>
+
+<h2 id="Domain_sharding">Domain sharding</h2>
+
+<div class="note">
+<p>Unless you have a very specific immediate need, don't use this deprecated technique; switch to HTTP/2 instead. In HTTP/2, domain sharding is no longer useful: the HTTP/2 connection is able to handle parallel unprioritized requests very well. Domain sharding is even detrimental to performance. Most HTTP/2 implementations use a technique called <a href="https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/">connection coalescing</a> to undo any domain sharding.</p>
+</div>
+
+<p>Because an HTTP/1.x connection serializes requests, even without any ordering constraint, it can't be optimal unless enough bandwidth is available. As a solution, browsers open several connections to each domain, sending parallel requests. The default was once 2 to 3 connections, but this has since increased to a more common 6 parallel connections. Attempting more than this number risks triggering <a href="/en-US/docs/Glossary/DOS_attack">DoS</a> protection on the server side.</p>
+
+<p>If the server operator wishes for a faster Web site or application response, it is possible to force the opening of more connections. For example, instead of having all resources on the same domain, say <code>www.example.com</code>, they could be split over several domains, <code>www1.example.com</code>, <code>www2.example.com</code>, and <code>www3.example.com</code>. Each of these domains resolves to the <em>same</em> server, and the Web browser will open 6 connections to each (in our example, boosting the total to 18 connections). This technique is called <em>domain sharding</em>.</p>
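+
+<p>For illustration only, since the technique is deprecated, the following sketch shows how resource URLs might have been spread deterministically across such shard hostnames; the hostnames and paths are made up.</p>
+
+<pre class="brush: python">import zlib
+
+# Hypothetical shard hosts that all resolve to the same server.
+SHARDS = ["www1.example.com", "www2.example.com", "www3.example.com"]
+
+def shard_url(path):
+    """Deterministically map a resource path to one of the shard hosts."""
+    host = SHARDS[zlib.crc32(path.encode()) % len(SHARDS)]
+    return "https://" + host + path
+
+for resource in ["/app.js", "/app.css", "/logo.png"]:
+    print(shard_url(resource))
+</pre>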
+
+<p><img alt="" src="https://mdn.mozillademos.org/files/13783/HTTPSharding.png" style="height: 727px; width: 463px;"></p>
+
+<h2 id="Conclusion">Conclusion</h2>
+
+<p>Improved connection management allows a considerable boost in HTTP performance. With HTTP/1.1 or HTTP/1.0, using a persistent connection, at least until it becomes idle, leads to the best performance. However, the failure of pipelining led to the design of superior connection-management models, which have been incorporated into HTTP/2.</p>