Load balancing (computing)

Load balancing is often used to implement [[failover]]—the continuation of service after the failure of one or more of its components. The components are monitored continually (e.g., web servers may be monitored by fetching known pages), and when one becomes unresponsive, the load balancer is informed and no longer sends traffic to it. When a component comes back online, the load balancer starts routing traffic to it again. For this to work, there must be at least one component in excess of the service's capacity ([[N+1 redundancy]]). This can be much less expensive and more flexible than failover approaches where every single live component is paired with a single backup component that takes over in the event of a failure ([[dual modular redundancy]]). Some [[RAID]] systems can also utilize a [[hot spare]] for a similar effect.<ref name="IBM">{{cite web |url=https://www.ibm.com/support/knowledgecenter/en/SSVJJU_6.4.0/com.ibm.IBMDS.doc_6.4/ds_ag_srv_adm_dd_failover_load_balancing.html |title=Failover and load balancing |website=IBM |accessdate=6 January 2019}}</ref>
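
The health-check-and-remove cycle can be illustrated with a short sketch. The following Python fragment is only a minimal illustration under assumed conditions, not the implementation of any particular load balancer: the backend addresses and the <code>/health</code> page are hypothetical, and a production balancer would poll backends in the background rather than re-checking them on every request.

<syntaxhighlight lang="python">
import urllib.request

# Hypothetical backend pool; each server exposes a known page used as a health check.
BACKENDS = ["http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080"]

def is_healthy(base_url, timeout=2.0):
    """Fetch a known page; an unresponsive or erroring backend is treated as failed."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

class FailoverBalancer:
    """Round-robin only over backends that currently pass the health check."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.index = 0

    def pick(self):
        # Re-check health on each pick for simplicity; a real balancer keeps
        # monitoring results in the background and updates the pool asynchronously.
        healthy = [b for b in self.backends if is_healthy(b)]
        if not healthy:
            raise RuntimeError("no healthy backends available")
        self.index = (self.index + 1) % len(healthy)
        return healthy[self.index]

balancer = FailoverBalancer(BACKENDS)
# target = balancer.pick()  # traffic is sent only to responsive servers
</syntaxhighlight>

With N+1 redundancy, removing a single failed backend from the pool in this way still leaves enough healthy capacity to serve the full load.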
 
=== Data ingestion for AI model training ===
Increasingly, load balancing techniques are being used to manage high-volume data ingestion pipelines that feed [[artificial intelligence]] [[AI training|training]] and [[inference]] systems—sometimes referred to as "[[AI Factory|AI factories]]". These AI-driven environments require continuous processing of vast amounts of structured and unstructured data, placing heavy demands on networking, storage, and computational resources.<ref>{{Cite web |title=Optimize Traffic Management for AI Factory Data Ingest |url=https://www.f5.com/company/blog/ai-factory-traffic-management-data-ingest |access-date=2025-01-30 |website=F5, Inc. |language=en-US}}</ref> To maintain the necessary high throughput and low latency, organizations commonly deploy load balancing tools capable of advanced TCP optimizations, connection pooling, and adaptive scheduling. Such features help distribute incoming data requests evenly across servers or nodes, prevent congestion, and ensure that compute resources remain efficiently utilized.<ref>{{Cite web |title=Optimize, Scale, and Secure AI Interactions |url=https://www.f5.com/solutions/use-cases/optimize-scale-and-secure-ai |access-date=2025-01-30 |website=F5, Inc. |language=en-US}}</ref>
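
As a minimal sketch of one such scheduling policy, the following Python fragment assigns each incoming ingestion request to the node with the fewest in-flight requests (a "least connections" policy). The node names are hypothetical, and real deployments would combine such scheduling with the TCP optimizations and connection pooling handled by the load balancer itself.

<syntaxhighlight lang="python">
import heapq

class LeastConnectionsScheduler:
    """Send each new ingestion request to the node with the fewest in-flight requests."""

    def __init__(self, nodes):
        # Min-heap of (in-flight count, node name); hypothetical node identifiers.
        self.heap = [(0, node) for node in nodes]
        heapq.heapify(self.heap)

    def acquire(self):
        """Pick the least-loaded node and record one more in-flight request on it."""
        count, node = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (count + 1, node))
        return node

    def release(self, node):
        """Record that a request on the given node has finished."""
        for i, (count, n) in enumerate(self.heap):
            if n == node and count > 0:
                self.heap[i] = (count - 1, n)
                heapq.heapify(self.heap)
                return

scheduler = LeastConnectionsScheduler(["ingest-node-1", "ingest-node-2", "ingest-node-3"])
node = scheduler.acquire()   # route the next data batch to this node
# ... upload the batch to the chosen node ...
scheduler.release(node)
</syntaxhighlight>

Because slower or congested nodes accumulate in-flight requests, this policy naturally shifts new work toward nodes that are keeping up, which helps keep throughput high across the ingestion pipeline.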