Load Balancing Evolution

The Basics of Server Load Balancing

As websites began to see increased traffic in the mid-1990s, single servers were reaching the limits of their capacity. Additional servers were required to scale applications, along with technologies to make it appear to end users that they were still accessing a single server. The first method to address this scalability problem was rotating DNS resolution, also referred to as “Round-robin DNS”.

This method assigns a group of servers, each with a unique internal IP address behind a firewall, to a single DNS name. When a user requests resolution of a website name, the DNS server responds with the list of addresses in order, for example 10.1.0.10, 10.1.0.11, and 10.1.0.12. The next request to the DNS server receives the same addresses, rotated so the second server is listed first (10.1.0.11, 10.1.0.12, and 10.1.0.10). The DNS server continues rotating through the list for each successive response.
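
The rotation logic is simple enough to sketch in a few lines. Below is a minimal illustration in Go; the type name is invented for the example, and the addresses reuse the ones above.

```go
package main

import "fmt"

// roundRobinDNS models how a round-robin DNS server rotates the
// address list it returns for successive queries.
type roundRobinDNS struct {
	addrs []string // illustrative server addresses
	next  int      // index of the server to list first
}

// Resolve returns the full address list, rotated so that a different
// server appears first on each call.
func (d *roundRobinDNS) Resolve() []string {
	n := len(d.addrs)
	out := make([]string, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, d.addrs[(d.next+i)%n])
	}
	d.next = (d.next + 1) % n
	return out
}

func main() {
	dns := &roundRobinDNS{addrs: []string{"10.1.0.10", "10.1.0.11", "10.1.0.12"}}
	fmt.Println(dns.Resolve()) // [10.1.0.10 10.1.0.11 10.1.0.12]
	fmt.Println(dns.Resolve()) // [10.1.0.11 10.1.0.12 10.1.0.10]
	fmt.Println(dns.Resolve()) // [10.1.0.12 10.1.0.10 10.1.0.11]
}
```

Since most clients simply use the first address returned, this rotation spreads new connections roughly evenly across the servers.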

Round-robin DNS was a simple solution to the scalability problem, since an almost limitless number of servers could be added behind a single DNS name. However, without any way to know the status of the server on the receiving end of a request, users could be sent to a server that was down or overloaded.

The Hardware-based Load Balancer

Beginning in the late 1990s, manufacturers introduced the first hardware-based load balancing appliances. By separating load balancing from the applications themselves, these appliances could use network-layer techniques such as network address translation (NAT) to route inbound and outbound traffic to the servers.

Another key component these appliances introduced was server health checking. At predefined intervals, the load balancer would check each server’s status to determine whether it was available and how heavily loaded it was. If a server was down, traffic was directed to the operational servers. If a server was overloaded, traffic was redirected elsewhere until its load fell back below set thresholds and it could receive new requests.
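
A health check can be sketched as a periodic probe that marks each backend up or down. The health-check URLs and the interval below are illustrative assumptions; real appliances offer configurable probes (ICMP, TCP, HTTP) and thresholds, and also weigh response time and current load.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// backend tracks the health of one server as seen by the load balancer.
type backend struct {
	healthURL string // illustrative health-check endpoint
	healthy   bool
}

// check probes the backend once and records whether it answered 200 OK.
func (b *backend) check(client *http.Client) {
	resp, err := client.Get(b.healthURL)
	b.healthy = err == nil && resp.StatusCode == http.StatusOK
	if resp != nil {
		resp.Body.Close()
	}
}

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	pool := []*backend{
		{healthURL: "http://10.1.0.10/health"},
		{healthURL: "http://10.1.0.11/health"},
	}
	// Probe every backend at a predefined interval; new requests would
	// only be directed to backends currently marked healthy.
	for range time.Tick(10 * time.Second) {
		for _, b := range pool {
			b.check(client)
			fmt.Printf("%s healthy=%v\n", b.healthURL, b.healthy)
		}
	}
}
```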

Applications could now scale, and users had reliable connections. The only limiting factor was the capacity of the hardware itself. In most cases, organizations that migrated from DNS-based or software load balancing saw an average 25% increase in server performance, reducing the need to add new servers for more capacity.

The Application Delivery Controller

Simple load balancing is no longer sufficient to meet the needs of most organizations. Today’s web servers aren’t just delivering static content; they’re delivering dynamic, content-rich applications.

Businesses are using web-based applications to deliver mission-critical functionality to employees and customers. Over the past ten years, load balancers have evolved into Application Delivery Controllers (ADCs). These devices understand application-specific traffic and can optimize application server performance by offloading many of the compute-intensive tasks that would otherwise consume CPU cycles better spent serving the application itself.

Advanced Features of an ADC

Among the advanced acceleration functions present in modern ADCs are SSL offloading, data compression, TCP and HTTP protocol optimization, and virtualization awareness.

By offloading SSL encryption, decryption, and certificate management from the servers, ADCs enable web and application servers to use their CPU and memory resources exclusively to deliver application content and thus respond more quickly to user requests. Web-based applications consist of a variety of data objects that can be delivered by different types of servers.
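
In practice, SSL/TLS offloading means the device terminates the encrypted connection and forwards plain HTTP to the backend, which spends no CPU on cryptography. A minimal sketch using Go’s standard reverse proxy follows; the backend address and certificate file names are placeholders.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The backend serves plain HTTP on the internal network; address is
	// illustrative.
	backend, err := url.Parse("http://10.1.0.10:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Clients connect over HTTPS; TLS is terminated here, and decrypted
	// requests are forwarded in the clear to the backend.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
}
```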

ADCs provide application-based routing, using file types to direct users to the server (or group of servers) set up to handle their specific requests, such as ASP or PHP applications. Requests for static file types (jpg, html, etc.) can be routed to one server group, while requests for dynamic data are sent to other servers optimized for that purpose.
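
A rough sketch of this kind of file-type routing, again using Go’s standard reverse proxy: the pool addresses and the extension list are illustrative assumptions, and a real ADC would apply configurable layer 7 policies rather than a hard-coded switch.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"path"
)

// mustProxy builds a reverse proxy to the given backend pool address.
// The pool addresses used below are illustrative.
func mustProxy(addr string) *httputil.ReverseProxy {
	u, err := url.Parse(addr)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	staticPool := mustProxy("http://10.1.0.20")  // tuned for static files
	dynamicPool := mustProxy("http://10.1.0.30") // tuned for ASP/PHP-style apps

	// Route by file extension: static objects to one server group,
	// everything else to the dynamic application servers.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch path.Ext(r.URL.Path) {
		case ".jpg", ".png", ".css", ".html":
			staticPool.ServeHTTP(w, r)
		default:
			dynamicPool.ServeHTTP(w, r)
		}
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}
```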

The ADC knows the optimal path for each destination. Transaction-based applications require connections to the same server in order to operate correctly. The best-known example is the “shopping cart” problem: you establish a session with one server to add an item to your cart and are then load-balanced to a different server to check out. Without a persistent connection to the original server, you’ll find your cart empty. ADCs use session state carried in HTTP headers and cookies to ensure that users and servers remain “persistent”.

The ADC uses the cookie within the HTTP header to ensure that the user continues to be directed to the specific server where the session state resides. Without this capability, a user who landed on a different server would lose the previous transaction history and have to start the transaction over.
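
Cookie-based persistence can be sketched as follows: on a client’s first request the balancer picks a backend and sets a cookie naming it; later requests carrying that cookie are pinned to the same backend. The cookie name SERVERID, the server IDs, and the addresses are invented for the example.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

// pool maps a server ID (stored in the persistence cookie) to a reverse
// proxy for that backend. IDs and addresses are illustrative.
var pool = map[string]*httputil.ReverseProxy{}

func addBackend(id, addr string) {
	u, err := url.Parse(addr)
	if err != nil {
		log.Fatal(err)
	}
	pool[id] = httputil.NewSingleHostReverseProxy(u)
}

func main() {
	addBackend("s1", "http://10.1.0.10")
	addBackend("s2", "http://10.1.0.11")
	ids := []string{"s1", "s2"}
	var counter uint64

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// A returning client carries the persistence cookie: pin it to
		// the same backend so session state (the shopping cart) survives.
		if c, err := r.Cookie("SERVERID"); err == nil {
			if p, ok := pool[c.Value]; ok {
				p.ServeHTTP(w, r)
				return
			}
		}
		// Otherwise pick the next backend round-robin and set the cookie
		// so subsequent requests stay on that server.
		id := ids[atomic.AddUint64(&counter, 1)%uint64(len(ids))]
		http.SetCookie(w, &http.Cookie{Name: "SERVERID", Value: id, Path: "/"})
		pool[id].ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}
```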

Summary

Server load balancing grew out of the need to scale websites in the 1990s and is the foundation of today’s application delivery controllers. Building on this core of server load balancing, the advanced features of ADCs not only scale applications but also intelligently ensure application availability. Features such as SSL offloading, HTTP compression, and intelligent policy-based layer 7 routing distinguish a basic server load balancer from a modern ADC. The ADC will continue to evolve with new features such as virtual environment management, integrated security services, and SDN support.