Load balancing (computing)
; Priority activation
: When the number of available servers drops below a certain number, or the load gets too high, standby servers can be brought online.
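A minimal sketch of priority activation, assuming a simple model in which a pool is rebuilt from active servers and topped up from a standby list; the threshold and server names are illustrative, not taken from any real product.

```python
# Sketch: priority activation. Standby servers join the pool only when the
# count of healthy active servers falls below a threshold.
# MIN_ACTIVE and the server names are illustrative assumptions.

MIN_ACTIVE = 3

def effective_pool(active: list[str], standby: list[str]) -> list[str]:
    """Return the servers that should receive traffic right now."""
    pool = list(active)
    for server in standby:
        if len(pool) >= MIN_ACTIVE:
            break                    # enough capacity; leave the rest on standby
        pool.append(server)          # bring a standby server online
    return pool

print(effective_pool(["a1", "a2"], ["s1", "s2"]))  # a1, a2 plus one standby
```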
; [[TLS acceleration|TLS offload and acceleration]]
: TLS (or its predecessor, SSL) acceleration is a technique of offloading cryptographic protocol calculations onto specialized hardware. Depending on the workload, processing the encryption and authentication requirements of a [[Transport Layer Security|TLS]] request can become a major part of the demand on the web server's CPU; as demand increases, users see slower response times, since the TLS overhead is distributed among the web servers. To remove this demand from the web servers, a balancer can terminate TLS connections, passing HTTPS requests as HTTP requests to the web servers. If the balancer itself is not overloaded, this does not noticeably degrade the performance perceived by end users. The downside of this approach is that all TLS processing is concentrated on a single device (the balancer), which can become a new bottleneck. Some load balancer appliances include specialized hardware to process TLS. Since upgrading the load balancer means buying expensive dedicated hardware, it may be cheaper to forgo TLS offload and add a few web servers instead. Also, some server vendors such as Oracle/Sun incorporate cryptographic acceleration hardware into their CPUs, such as the T2000. F5 Networks incorporates a dedicated TLS acceleration hardware card in their local traffic manager (LTM), which is used for encrypting and decrypting TLS traffic. One clear benefit of TLS offloading in the balancer is that it enables the balancer to perform load balancing or content switching based on data in the HTTPS request.
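The content-switching benefit can be illustrated with a short sketch: once the balancer has terminated TLS, the HTTP request is plaintext and it can route on fields that would otherwise be opaque ciphertext. The pool names and routing rules below are hypothetical.

```python
# Sketch: after TLS termination the balancer sees the decrypted HTTP request
# and can route on its contents (here: the URL path in the request line).
# The pool names are illustrative, not from any real load balancer.

def choose_backend(request_line: str) -> str:
    """Pick a backend pool from the first line of a decrypted HTTP request."""
    method, path, _version = request_line.split()
    if path.startswith("/static/"):
        return "static-pool"      # cacheable assets
    if path.startswith("/api/"):
        return "api-pool"         # application servers
    return "default-pool"

print(choose_backend("GET /static/logo.png HTTP/1.1"))  # static-pool
```

Without TLS termination at the balancer, only layer-3/4 information (addresses and ports) would be available for this decision.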
; [[Distributed denial of service]] (DDoS) attack protection
: Load balancers can provide features such as [[SYN cookies]] and delayed binding (the back-end servers do not see the client until it finishes its TCP handshake) to mitigate [[SYN flood]] attacks and generally offload work from the servers to a more efficient platform.
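A toy version of the SYN-cookie idea can be sketched as follows: instead of allocating a connection record for every incoming SYN, the server derives its initial sequence number from a keyed hash of the connection tuple and a coarse timestamp, and later validates the client's ACK by recomputation. Real implementations also encode details such as the negotiated MSS; this sketch, including the secret key and time-window size, is purely illustrative.

```python
# Toy SYN-cookie sketch: connection state is encoded in the server's initial
# sequence number (ISN), so a flood of bogus SYNs consumes no server memory.
# The secret, the 64-second window, and the 4-byte truncation are assumptions
# for illustration only.
import hmac, hashlib

SECRET = b"rotate-me-periodically"  # illustrative secret key

def syn_cookie(src: str, sport: int, dst: str, dport: int, t: int) -> int:
    """Derive the ISN from the connection 4-tuple and a coarse timestamp."""
    msg = f"{src}:{sport}-{dst}:{dport}-{t // 64}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")

def valid_ack(src: str, sport: int, dst: str, dport: int,
              t: int, acked_isn: int) -> bool:
    """Check the ACKed ISN against the current and previous time window."""
    return any(syn_cookie(src, sport, dst, dport, t - d) == acked_isn
               for d in (0, 64))
```

Because validation is stateless, half-open connections from spoofed sources cost the server nothing beyond the hash computation.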
; [[HTTP compression]]
: HTTP compression reduces the amount of data to be transferred for HTTP objects by utilising gzip compression available in all modern web browsers. The larger the response and the further away the client is, the more this feature can improve response times. The trade-off is that this feature puts additional CPU demand on the load balancer and could be done by web servers instead.
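The mechanism can be shown with Python's standard-library `gzip` module: the balancer compresses the response body and sets the `Content-Encoding` header so the browser decompresses transparently. The sample body and compression level are arbitrary.

```python
# Sketch: gzip-compressing an HTTP response body, as a balancer offering
# HTTP compression would. The body and compresslevel are illustrative.
import gzip

body = b"<html>" + b"<p>hello world</p>" * 500 + b"</html>"
compressed = gzip.compress(body, compresslevel=6)

headers = {
    "Content-Encoding": "gzip",          # tells the browser to decompress
    "Content-Length": str(len(compressed)),
}
print(f"{len(body)} bytes -> {len(compressed)} bytes on the wire")
```

Repetitive HTML like this compresses heavily, which is why the gain grows with response size and client distance; the cost is the CPU time spent in `gzip.compress` on the balancer.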
; [[TCP offload]]
: Different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilises HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
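The consolidation idea can be modelled in a few lines: many short-lived client connections are funnelled over a single persistent keep-alive connection to the back end. The classes below are a stand-in for real sockets, not any vendor's API.

```python
# Sketch: connection multiplexing. Requests from many clients share one
# persistent HTTP/1.1 back-end connection, so the server performs one TCP
# handshake instead of one per request. BackendConnection and Multiplexer
# are illustrative stand-ins for real sockets.

class BackendConnection:
    opened = 0
    def __init__(self):
        BackendConnection.opened += 1   # stands in for a TCP handshake
    def send(self, request: str) -> str:
        return f"200 OK for {request}"  # stands in for a real round trip

class Multiplexer:
    def __init__(self):
        self._conn = None
    def forward(self, request: str) -> str:
        if self._conn is None:          # open the keep-alive socket once
            self._conn = BackendConnection()
        return self._conn.send(request)

mux = Multiplexer()
for i in range(100):                    # 100 client requests...
    mux.forward(f"GET /page/{i}")
print(BackendConnection.opened)         # ...one back-end connection
```

The saving is the per-connection cost on the server: handshakes, socket buffers, and (in thread-per-connection designs) threads.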
; TCP buffering
: The load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the web server to free a thread for other tasks faster than it could if it had to send the entire response to the client directly.
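A sketch of the buffering pattern, assuming an in-memory buffer and illustrative sizes: the balancer absorbs the whole response at LAN speed, the server thread is freed immediately, and the balancer then drips the bytes to the slow client.

```python
# Sketch: TCP/response buffering. The server hands over its entire response
# once; the balancer then feeds it to a slow client in small chunks.
# The response size and chunk size are illustrative assumptions.
import io

def server_generate() -> bytes:
    return b"x" * 10_000                 # the server's whole response

def buffer_and_serve(chunk_size: int = 1024):
    buf = io.BytesIO(server_generate())  # server is done; its thread is free
    while chunk := buf.read(chunk_size): # slow client drains the buffer
        yield chunk

chunks = list(buffer_and_serve())
print(len(chunks), "chunks,", sum(len(c) for c in chunks), "bytes total")
```

Without the buffer, the server thread would stay occupied for as long as the slowest client takes to read the response.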
; Direct server return
: An option for asymmetrical load distribution, where request and reply have different network paths.
; Health checking