ADC Terminology

In this article we describe the main concepts related to Application Delivery Controllers, load balancers, load balancing, and Application Delivery Networks. These concepts are not specific to any particular vendor; they reflect current industry definitions.

Load Balancer

A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks.

Load balancers are generally grouped into two categories: Layer 4 and Layer 7. Layer 4 load balancers act upon data found in network and transport layer protocols (IP, TCP, UDP). Layer 7 load balancers distribute requests based upon data found in application layer protocols such as HTTP.

Both types of load balancer receive requests and distribute them to a particular server based on a configured algorithm. Some industry-standard algorithms are:

  • Round robin
  • Weighted round robin
  • Least connections
  • Least response time
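As an illustration, the first three of these algorithms can be sketched in a few lines of Python. The server names, weights, and connection counts below are hypothetical, not tied to any vendor's configuration:

```python
import itertools

# Hypothetical server pool; names, weights, and connection counts
# are illustrative only.
servers = ["web1", "web2", "web3"]

# Round robin: cycle through the pool in a fixed order.
rr = itertools.cycle(servers)
picks = [next(rr) for _ in range(4)]   # web1, web2, web3, web1

# Weighted round robin: a server with weight 3 appears three times in
# the rotation, so it receives proportionally more requests.
weights = {"web1": 3, "web2": 1, "web3": 1}
wrr = itertools.cycle([s for s, w in weights.items() for _ in range(w)])

# Least connections: send the next request to the server currently
# handling the fewest active connections.
active = {"web1": 12, "web2": 4, "web3": 9}
least = min(active, key=active.get)    # web2
```

Least response time works the same way as least connections, except that servers are ranked by their recent response latency rather than their active connection count.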

Layer 7 load balancers can further distribute requests based on application specific data such as HTTP headers, cookies, or data within the application message itself, such as the value of a specific parameter.
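A Layer 7 routing decision of this kind might look like the following sketch, where the pool names, URL paths, and header values are purely illustrative:

```python
# Sketch of Layer 7 routing: the target pool is chosen from data in
# the HTTP request itself (path, headers, cookies). All pool names
# and match values are hypothetical.

def choose_pool(path: str, headers: dict) -> str:
    # Route API calls to a dedicated pool.
    if path.startswith("/api/"):
        return "api-pool"
    # Send mobile clients to servers tuned for mobile content.
    if "Mobile" in headers.get("User-Agent", ""):
        return "mobile-pool"
    # Cookie-based persistence: honor a previously issued server cookie.
    if "server=web2" in headers.get("Cookie", ""):
        return "web2"
    return "default-pool"
```

In a real ADC the same decision is usually expressed in a vendor-specific policy or rules language rather than general-purpose code, but the logic is equivalent.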

Load balancers ensure reliability and availability by monitoring the “health” of applications and only sending requests to servers and applications that can respond in a timely manner.
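In its simplest form, such a health monitor just verifies that each server still accepts connections within a timeout. The sketch below shows a basic Layer 4 monitor; the pool contents and the timeout value are illustrative:

```python
import socket

def tcp_health_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Basic Layer 4 monitor: can we open a TCP connection in time?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_members(pool, check=tcp_health_check):
    """Return only the pool members that pass the health check."""
    return [(host, port) for host, port in pool if check(host, port)]
```

Real load balancers typically use richer monitors, such as sending an HTTP GET and checking the response code, so that a server whose TCP stack is up but whose application has failed is still taken out of rotation.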

Application Delivery Controller

An application delivery controller is a device that is typically placed in a data center between the firewall and one or more application servers (an area known as the DMZ). First-generation application delivery controllers primarily performed application acceleration and handled load balancing between servers.

The latest generation of application delivery controllers handles a much wider variety of functions, including rate shaping and SSL offloading, as well as serving as a Web application firewall.

A series of ADC devices, often located in widespread data centers within the same enterprise, is capable of working in concert by sharing a common operating system and control language. This holistic approach is termed application delivery networking.

Application Delivery Networking

Application delivery networking is an approach and a suite of technologies that comprises application security, application acceleration and network availability. It ensures that applications are always secure, fast, and available across any network.

Application Traffic Management

Application traffic management refers to the methodology for intercepting, inspecting, translating, and directing Web traffic to the optimum resource based on specific business policies. It allows network administrators to apply availability, scalability, security, and performance standards to any IP-based application, significantly increasing overall network application performance.

Content delivery network (CDN)

A content delivery network (CDN) is an infrastructure that helps organizations deliver static Web content, rich digital media, and everything in between to employees, vendors, partners, and customers worldwide, in the shortest possible time and at the lowest possible cost.

Fee or subscription-based CDNs are attractive because they can increase a network’s performance, scalability, and reliability. They can enhance content availability, efficiently move large files closer to end users, and help curtail travel and business training expenses by supporting e-learning applications. The CDN is a clear alternative to the often costly solution of establishing multiple points of presence to address the performance problems inherent in delivering Web-based content on a global scale.

Load Balancing

Most commonly, the term load balancing refers to distributing incoming HTTP requests across Web servers in a server farm to avoid overloading any one server. Because a load balancer can distribute requests according to the actual load on each server, it is excellent for ensuring availability and can help absorb denial-of-service attacks.

Reverse proxy

A reverse proxy is a device or service placed between a client and a server in a network infrastructure. Incoming requests are handled by the proxy, which interacts on behalf of the client with the desired server or service residing on the server. The most common use of a reverse proxy is to provide load balancing for web applications and APIs. Reverse proxies can also be deployed to offload services from applications as a way to improve performance through SSL acceleration, intelligent compression, and caching. They can also enable federated security services for multiple applications.

A reverse proxy may act either as a simple forwarding service or actively participate in the exchange between client and server. When the proxy treats the client and server as separate entities by implementing dual network stacks, it is called a full proxy. A full reverse proxy is capable of intercepting, inspecting, and interacting with requests and responses. Interacting with requests and responses enables more advanced traffic management services such as application layer security, web acceleration, page routing, and secure remote access.

A reverse proxy is most commonly used to provide load balancing services for scalability and availability. Increasingly, reverse proxies are also used as a strategic point in the network to enforce web application security through web application firewalls, application delivery firewalls, and deep content inspection to mitigate data leaks.

A reverse proxy also provides the ability to direct requests based on a wide variety of parameters such as user device, location, network conditions, and even the time of day. When combined with cloud, a reverse proxy can enable cloud bursting and split-application architectures that offer the economic benefits of cloud without compromising control or security.

SSL offloading

SSL offloading relieves a Web server of the processing burden of encrypting and/or decrypting traffic sent via SSL/TLS, the security protocol implemented in every Web browser. The processing is offloaded to a separate device designed specifically to perform SSL acceleration or SSL termination.

SSL termination

SSL termination refers to the process that occurs at the server end of an SSL connection, where the traffic transitions between encrypted and unencrypted forms.