Load balancing (computing)


In computing, load balancing is a technique used to spread work among multiple processes, computers, disks or other resources, so that no single resource is overloaded.

Web server load balancing

One major issue for large Internet sites is handling the load generated by the large number of visitors they receive; this is routinely encountered as a scalability problem as a site grows. There are several ways to address it. One example is the Wikimedia Foundation and its projects. In June 2004 the load was balanced using a combination of:

  • Round robin DNS distributed page requests evenly to one of three Squid cache servers.
  • Squid cache servers used response time measurements to distribute page requests among seven web servers. In addition, the Squid servers cached pages and delivered about 75% of all pages without ever asking a web server for help.
  • The PHP scripts that ran on the web servers distributed load to one of several database servers depending on the type of request, with updates going to a master database server and some read queries going to one or more slave database servers (a simplified sketch of these selection strategies follows this list).

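As an illustration of the selection strategies above, a minimal Python sketch follows. It is not Wikimedia's actual software; the server names, the moving-average smoothing factor and the read/write classification rule are assumptions made purely for the example.

    # Illustrative sketch (not Wikimedia's actual code): two of the
    # selection strategies described above, with placeholder server names.
    import itertools
    import random

    class RoundRobinBalancer:
        """Rotates through a fixed pool, as round robin DNS does across the caches."""
        def __init__(self, servers):
            self._cycle = itertools.cycle(servers)

        def pick(self):
            return next(self._cycle)

    class LeastResponseTimeBalancer:
        """Prefers the backend with the lowest recent response time, roughly
        the way the Squid layer chose among the web servers."""
        def __init__(self, servers):
            # Start with equal (optimistic) estimates for every backend.
            self.avg_ms = {server: 0.0 for server in servers}

        def pick(self):
            return min(self.avg_ms, key=self.avg_ms.get)

        def record(self, server, elapsed_ms, alpha=0.2):
            # Exponentially weighted moving average of observed response times.
            self.avg_ms[server] = (1 - alpha) * self.avg_ms[server] + alpha * elapsed_ms

    def route_query(sql, master, slaves):
        """Send writes to the master and reads to a randomly chosen slave,
        mirroring the master/slave split described above."""
        if sql.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return master
        return random.choice(slaves)

    caches = RoundRobinBalancer(["squid1", "squid2", "squid3"])
    webs = LeastResponseTimeBalancer([f"web{i}" for i in range(1, 8)])
    print(caches.pick(), webs.pick(), route_query("SELECT 1", "db-master", ["db-slave1"]))
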
Alternative methods include the use of layer 4 routers, which distribute traffic at the level of whole TCP connections rather than individual page requests.
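A rough sketch of what layer-4-style forwarding can look like, assuming placeholder backend addresses and a plain round-robin choice: the balancer relays entire TCP connections to a backend without inspecting the HTTP requests inside them.

    # Minimal sketch of layer-4-style balancing: whole TCP connections are
    # forwarded to a backend chosen without inspecting the HTTP payload.
    # The backend addresses are placeholders, not part of the original article.
    import asyncio
    import itertools

    BACKENDS = itertools.cycle([("10.0.0.1", 80), ("10.0.0.2", 80)])

    async def pipe(reader, writer):
        # Copy bytes in one direction until the peer closes the connection.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle(client_reader, client_writer):
        host, port = next(BACKENDS)  # layer 4 decision: one backend per connection
        backend_reader, backend_writer = await asyncio.open_connection(host, port)
        await asyncio.gather(
            pipe(client_reader, backend_writer),  # client -> backend
            pipe(backend_reader, client_writer),  # backend -> client
        )

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())
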

Linux Virtual Server is an advanced open source load balancing solution for network services.

Network Load Balancing Services is a Microsoft proprietary clustering and load balancing implementation.

Vendors providing web server load balancing solutions


Vendors providing cluster, grid and HPC load balancing solutions