Jun 29 2015

Load Balancing Special, Pt. 2: Key Capabilities of Modern SLBs and ADCs

[Graphic: KEMP_Diagram_RES2.jpg]

Part 2 of our four-part blog explains which features distinguish a state-of-the-art load balancer from run-of-the-mill models.

In the first chapter of this blog, we defined what server load balancers (SLBs) and application delivery controllers (ADCs) are and put together a shortlist of core and additional features you should look out for if you plan to start selling this kind of network equipment to your customers. This chapter gives a first overview of each of these capabilities; it is based on a series of KEMP Technologies white papers, namely the ones entitled "A Guide to Application Delivery Optimization and Server Load Balancing for the SMB Market" and "Optimizing Web and Application Infrastructure on a Limited IT Budget."

As attested in these papers, the business case for SLBs and ADCs is essentially the same: both device types help companies to improve data center operations by routing incoming requests to the most suitable servers, i.e. those that offer the best performance and shortest response times. To achieve this, SLBs and ADCs use algorithms that dynamically 'check' key server metrics such as the number of concurrent connections and CPU/memory utilization. This kind of traffic optimization helps to avoid financial losses that would otherwise be incurred through unexpected downtime, outages, and laborious restores. At the same time, SLBs and ADCs will try to distribute traffic evenly between backend systems, thus avoiding underutilization and allowing for more energy-efficient server operations. In short, SLBs and ADCs simplify and streamline server and network management and enable IT staff to cut associated costs. However, the best possible results will only be achieved if the device in question offers a selection of the features described below.
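
The selection logic described above can be sketched in a few lines. This is a hypothetical illustration, not KEMP's actual algorithm: the server names and metrics are invented, and a real SLB would probe these values live rather than read them from a list.

```python
# Illustrative sketch: route a request to the "best" backend, judged by
# fewest active connections, with CPU utilization as a tie-breaker.
# All server names and numbers below are made up for the example.

def pick_backend(servers):
    """Return the server with the fewest active connections;
    ties are broken by lower CPU utilization."""
    return min(servers, key=lambda s: (s["connections"], s["cpu"]))

backends = [
    {"name": "web1", "connections": 42, "cpu": 0.71},
    {"name": "web2", "connections": 17, "cpu": 0.35},
    {"name": "web3", "connections": 17, "cpu": 0.80},
]

print(pick_backend(backends)["name"])  # web2
```

Here web2 and web3 tie on connections, so the lower CPU load decides; a production device would combine many more signals in the same spirit.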

SSL Acceleration/SSL Offload
Following a wave of privacy and security breaches, most companies today rely on traffic encryption to protect web requests or online business transactions from prying eyes. But this security gain comes at a cost, because establishing a secure, trustworthy SSL or TLS connection between clients and servers produces a significant overhead that will negatively impact server performance and response times. State-of-the-art SLBs and ADCs therefore take over that function and relieve backend systems of the workload associated with negotiating, setting up and terminating SSL/TLS connections.
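
The core of SSL offload is that the load balancer, not the web server, holds the certificate and terminates the encrypted connection. A minimal sketch of that termination point, using Python's standard `ssl` module (the certificate paths and port are placeholders for the example):

```python
# Sketch of the TLS termination side of SSL offload: the load balancer
# wraps its listening socket in a TLS context, so backends only ever
# see already-decrypted traffic over plain TCP. Paths are placeholders.
import socket
import ssl

def make_tls_listener(certfile, keyfile, port=443):
    """Create a TLS-wrapped listening socket. Decrypted requests
    accepted here could then be forwarded to backends unencrypted,
    sparing them the handshake and crypto overhead."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    sock = socket.socket()
    sock.bind(("", port))
    sock.listen()
    return ctx.wrap_socket(sock, server_side=True)
```

The forwarding loop and certificate management are omitted; the point is simply that the handshake cost lands on this device once, instead of on every backend server.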

Content Caching
Modern ADCs store data that is likely to be used again and unlikely to change, rather than requiring computers to retrieve it from the source every time. The device practically acts as a "proxy cache" for so-called hot data and fast-tracks its delivery to the client systems. As a result, response times will be dramatically reduced, transaction rates will grow, and server capacities can be reallocated for other relevant tasks, e.g. internal communication or generating sales reports.
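
A toy version of such a proxy cache makes the mechanism concrete. Everything here is an assumption for illustration, including the time-to-live value and the `origin` fetch function standing in for a real backend:

```python
# Illustrative "proxy cache" for hot data: serve repeat requests from
# memory, with a time-to-live so content that might change eventually
# expires. The TTL and origin function are assumptions for the sketch.
import time

class ProxyCache:
    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self.store = {}  # url -> (expiry_time, body)

    def get(self, url, fetch_from_origin):
        now = time.monotonic()
        hit = self.store.get(url)
        if hit and hit[0] > now:          # fresh copy: serve from cache
            return hit[1]
        body = fetch_from_origin(url)     # miss or stale: ask the server
        self.store[url] = (now + self.ttl, body)
        return body

calls = []
def origin(url):                          # stand-in for a backend server
    calls.append(url)
    return f"<html>{url}</html>"

cache = ProxyCache()
cache.get("/gallery", origin)
cache.get("/gallery", origin)  # second request never reaches the origin
print(len(calls))  # 1
```

The second request is answered from memory, which is exactly where the response-time and server-capacity gains described above come from.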

Data Compression/HTTP Compression
As the name implies, this feature dramatically cuts the amount of data packets that must be sent from the server to the client. At the same time, the algorithms used enable ADCs to 'squeeze' larger payloads into each packet, thus reducing network bandwidth consumption while maintaining content quality. Like SSL/TLS negotiations and setups, this task was usually left to the web or application server prior to the arrival of SLBs and ADCs.
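
The bandwidth effect is easy to demonstrate with gzip, the compression scheme most commonly negotiated over HTTP. The sample payload is invented; real ratios depend on the content, with repetitive HTML compressing especially well:

```python
# Quick demonstration of why HTTP compression saves bandwidth: gzip a
# repetitive HTML-like payload and compare sizes. The sample markup is
# made up; actual ratios depend on the content being served.
import gzip

payload = b"<tr><td>product</td><td>price</td></tr>" * 200
compressed = gzip.compress(payload)

print(len(payload), len(compressed))
assert len(compressed) < len(payload) // 10   # repetitive text shrinks a lot
# The client's browser transparently restores the original:
assert gzip.decompress(compressed) == payload
```

Fewer bytes on the wire means fewer packets, which is the saving the ADC banks on behalf of the web server.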

Layer 4 Load Balancing
This is one of the more advanced network management techniques mentioned at the end of the first chapter. Essentially, ADCs can use various IP-based methods to direct traffic to backend systems, all of which operate on the Transport Layer (Layer 4) of the OSI model. These load distribution methods include Round-Robin DNS, Least Connection, and Chained Failover/Fixed Weighting, among others. In this context, weighting means that an admin can assign a higher priority to a physical server in order to better control traffic distribution. Thus it is possible to direct larger or more complex workloads to stronger systems and run standard tasks on the 'weaker' ones.
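
Two of the listed methods, a plain round-robin cycle and fixed weighting, can be sketched as schedules. The server names and weights are invented for the example:

```python
# Sketch of two Layer 4 distribution methods: plain round-robin and
# fixed weighting. Server names and weight values are illustrative.
import itertools

def round_robin(servers):
    """Hand out servers in strict rotation."""
    return itertools.cycle(servers)

def weighted_schedule(weights):
    """Expand {'server': weight} into a repeating schedule, so a server
    with weight 3 receives three times the traffic of one with weight 1."""
    slots = [name for name, w in weights.items() for _ in range(w)]
    return itertools.cycle(slots)

sched = weighted_schedule({"big-box": 3, "small-box": 1})
first_eight = [next(sched) for _ in range(8)]
print(first_eight)  # 'big-box' three times for every 'small-box'
```

With these weights, the stronger system absorbs three quarters of the requests, which is the "direct larger workloads to stronger systems" behaviour described above.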

Layer 7 Content Switching
According to the KEMP papers mentioned above, "[c]ontent switching refers to the ability to distribute (...) user requests to servers based on Layer 7 payload," i.e. payload on the application layer. Put simply, this means the ADC will route traffic to the appropriate server by examining page content or simpler identifiers such as URLs. That way, a request that goes out to http://www.xyz.com/gallery will take the user to an image site, whereas https://www.xyz.com/shop directs them to an order page (see our graphic at the top). This feature is especially popular among network admins, as it gives them more room for performance tuning and increasing infrastructure flexibility.
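
URL-based switching of this kind amounts to matching the request path against a routing table. The pool names and routes below are assumptions made for the sketch:

```python
# Sketch of Layer 7 content switching: inspect the request URL's path
# (application-layer payload) and pick a server pool accordingly.
# Route prefixes and pool names are invented for the example.
from urllib.parse import urlparse

ROUTES = {
    "/gallery": "image-servers",
    "/shop":    "order-servers",
}

def switch(url, default_pool="web-servers"):
    path = urlparse(url).path
    for prefix, pool in ROUTES.items():
        if path.startswith(prefix):
            return pool
    return default_pool

print(switch("http://www.xyz.com/gallery"))     # image-servers
print(switch("https://www.xyz.com/shop/cart"))  # order-servers
print(switch("https://www.xyz.com/about"))      # web-servers
```

Because the decision is made per request on Layer 7 content, admins can move whole classes of traffic (images, shop, static pages) between pools without touching DNS or client configuration.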

Persistence
Anyone who buys goods and services online has lived through one or more cases of the "lost shopping cart," i.e. occasions on which a long list of ordered items or a set of complex specifications vanished before the order could be completed. Such failures typically occur when a connection is reset or a user is switched to another server that was previously not involved in the ordering process. Modern ADCs and SLBs counter this problem by offering (session) persistence, that is, the ability to maintain a client's connection to the same server based on either source IP addresses or browser cookies. For users as well as administrators, the second method is much more reliable and therefore the preferable one.
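
Both persistence methods can be sketched side by side. The server list, the hash choice, and the cookie name `lb_server` are all assumptions for illustration:

```python
# Sketch of the two persistence methods: source-IP hashing always maps
# the same client address to the same server, while cookie persistence
# honours a pin the LB set earlier. Names here are invented.
import hashlib

SERVERS = ["web1", "web2", "web3"]

def by_source_ip(client_ip):
    """Deterministically map a client IP to one server."""
    h = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return SERVERS[h % len(SERVERS)]

def by_cookie(cookies):
    """If a persistence cookie was issued earlier, honour it."""
    return cookies.get("lb_server")

assert by_source_ip("203.0.113.7") == by_source_ip("203.0.113.7")
print(by_cookie({"lb_server": "web2"}))  # web2
```

IP hashing breaks down when many users share one address (corporate NAT, mobile carriers) or an address changes mid-session, which is why the cookie method is the more reliable of the two.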

An ADC typically presents its services as virtual servers (VIPs), each consisting of an IP address and port. Every virtual server connects to a number of physical servers within a server farm. On the ADC, a VIP typically appears as the public IP address from which it responds to user requests. Load balancing, content switching and persistence rules and methods are assigned on a per-VIP basis. Providing a large set of VIPs adds more flexibility to the architecture and design of a site or application – since multiple VIPs can be pointed to the same set of real servers. A robust ADC will support up to 256 virtual servers. In other words, an efficient ADC (or SLB) that is truly up to the task will have to provide excellent hardware performance and scalability. Specifications to keep a keen eye on include:

  • The maximum number of physical ("real") servers it can support
  • The maximum number of real servers that can connect to a single VIP
  • The maximum number of SSL- or TLS-protected transactions the ADC/SLB can handle per second
  • Total throughput and bandwidth, and the share of throughput/bandwidth reserved for actual payloads
  • The total number of LAN ports supported by the ADC hardware via existing switches or hubs (this can bring the number of supported servers to 1,000 for a single ADC)
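
The VIP concept described above reduces to a simple data structure: per-VIP settings pointing at shared pools of real servers. All addresses, ports and method names below are invented for the sketch:

```python
# Hedged sketch of the VIP model: each virtual server is an (IP, port)
# pair with its own balancing method and persistence settings, and
# several VIPs may point at the very same real servers. All values
# here are placeholders.
REAL_SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

VIPS = {
    ("192.0.2.10", 443): {"method": "least-connection",
                          "persistence": "cookie",
                          "pool": REAL_SERVERS},
    ("192.0.2.10", 80):  {"method": "round-robin",
                          "persistence": None,
                          "pool": REAL_SERVERS},  # same pool, own rules
}

for (ip, port), cfg in VIPS.items():
    print(f"{ip}:{port} -> {cfg['method']} over {len(cfg['pool'])} servers")
```

Because rules attach to the VIP rather than to the servers, the HTTPS and HTTP front ends here can behave differently while drawing on one and the same farm, which is the flexibility the paragraph above refers to.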

For an introduction to the concept of load balancing, please see Part 1 of this blog. We discuss practical implementations of KEMP solutions in Part 3.

To learn more about KEMP Technologies and its products, please visit our home page.


About the Author:

Thomas Kurz

Guest author, Director Channel & Alliances DACH, EE & MENA KEMP Technologies Europe.
