19 October 2000
Source: http://www.packeteer.com/PacketDoc.cfm?DocTitle=Controlling%20TCP/IP%20Bandwidth&DocPath=technology/tcp_bw.htm

Packeteer is one of three programs--along with Carnivore and COOL Miner--that constitute the FBI's DragonWare suite. More papers on Packeteer's technology are available at:

http://www.packeteer.com/technology/index.cfm

For COOL Miner, by iMergence:

http://www.imergence.com/coolminer.html

For an overview of Carnivore and DragonWare see: http://www.securityfocus.com/templates/article.html?id=97


TCP/IP Bandwidth Management Series, Number 1 of 3
The Packeteer Technical Forum

PacketShaper White Paper

Controlling TCP/IP Bandwidth

Introduction
Background
The Bandwidth Challenge
Bandwidth Management Approaches
How PacketShaper Works
The PacketShaper Advantage



INTRODUCTION
PacketShaper provides a unique technology that differentiates it from other network devices. Using PacketShaper, you can identify specific traffic types, guarantee bandwidth to applications that need it, and prevent non-essential traffic from taking bandwidth away from business-critical applications.

To appreciate how PacketShaper can guarantee quality of service, you need to understand the way it manages Transmission Control Protocol (TCP) packets and traffic flows. This white paper describes bandwidth-management issues, typical solutions, and how PacketShaper uses TCP mechanisms to pace traffic and prevent packet retransmissions.



BACKGROUND
TCP provides connection-oriented service to the applications that run over the Internet Protocol--that is, the client and the server must establish a connection before they can exchange data. TCP transmits data in segments encased in IP datagrams, along with checksums, used to detect data corruption, and sequence numbers, used to ensure an ordered byte stream. TCP is considered a reliable transport mechanism because it requires the receiving computer to acknowledge not only the receipt of data but also its completeness and sequence. If the sending computer doesn't receive an acknowledgment from the receiving computer within an expected time frame, it retransmits the segment. TCP also maintains a flow-control window to restrict transmissions: the receiver advertises a window size that indicates how many bytes it can handle.

In summary, TCP provides the following reliability checks:

  * Checksums that detect data corruption
  * Sequence numbers that ensure an ordered byte stream
  * Acknowledgments that confirm receipt, completeness, and sequence
  * Retransmission of segments that are not acknowledged in time
  * A flow-control window that keeps the sender within the receiver's advertised capacity


THE BANDWIDTH CHALLENGE
TCP/IP was designed primarily to support two applications--FTP and Telnet. Network applications and user expectations have changed with the growth of the Internet. Today--with more high-speed users and bursty, interactive web traffic--greater demands are placed on networks, causing delays and bottlenecks that degrade users' quality of service.

Many of the features that make TCP reliable also contribute to performance problems:

Conventional TCP bandwidth management uses indirect feedback to infer network congestion. TCP increases a connection's transmission rate until it senses a problem and then it backs off. It interprets dropped packets as a sign of congestion. The goal of TCP is for individual connections to burst on demand to use all available bandwidth, while at the same time reacting conservatively to inferred problems in order to alleviate congestion.
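
The probe-until-loss, back-off-on-loss behavior described above is commonly modeled as additive-increase/multiplicative-decrease (AIMD). The following sketch is illustrative only--the rates, the loss test, and the halving factor are assumptions for demonstration, not Packeteer parameters:

```python
def aimd(capacity, rounds, increase=1, decrease=0.5):
    """Simulate one TCP sender probing for bandwidth.

    The sender adds `increase` rate units per round until its rate
    exceeds `capacity` (interpreted as a dropped packet), then
    multiplies the rate by `decrease`--the characteristic sawtooth.
    """
    rate, history = 1, []
    for _ in range(rounds):
        if rate > capacity:                  # inferred congestion: back off
            rate = max(1, int(rate * decrease))
        else:                                # no loss: keep probing upward
            rate += increase
        history.append(rate)
    return history

# The sender climbs to just past capacity, halves, and climbs again.
rates = aimd(capacity=10, rounds=20)
```

The sawtooth this produces is exactly the bursty behavior PacketShaper aims to smooth out.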

TCP uses a sliding window flow-control mechanism to increase the throughput over wide-area networks. It allows the sender to transmit multiple packets before it stops and waits for an acknowledgment. This leads to faster data transfer, since the sender doesn't have to wait for an acknowledgment each time a packet is sent. The sender "fills the pipe" and then waits for an acknowledgment before sending more data. The receiver not only acknowledges that it got the data, but it advertises how much data it can now handle--that is, its window size.
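
The sliding window also bounds a connection's throughput: a sender that fills one window and then waits one round trip for an acknowledgment can never exceed window size divided by round-trip time. A small worked example (the window and RTT values are illustrative):

```python
def window_limited_throughput(window_bytes, rtt_seconds):
    """Maximum throughput of a single TCP connection whose sender
    must stop after filling one window and wait one round trip
    for the acknowledgment before sending more data."""
    return window_bytes / rtt_seconds

# A 65535-byte window over a 100 ms round trip caps the connection
# at about 640 KB/s, no matter how fast the underlying link is.
bps = window_limited_throughput(65535, 0.100) * 8
```

This is why advertising a window size, as PacketShaper does in later sections, directly controls a sender's rate.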

TCP's slow-start algorithm attempts to alleviate the problem of multiple packets filling up router queues. Remember that TCP flow control is typically handled by the receiver, which tells the sender how much data it can handle. The slow-start algorithm, on the other hand, uses a congestion window--a flow-control mechanism managed by the sender. With TCP slow-start, when a connection opens, only one packet is sent until an ACK is received. For each received ACK, the congestion window increases by one segment. As a result, the number of outstanding segments doubles with each round trip, until a threshold is reached.
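
The doubling-per-round-trip growth can be sketched in a few lines. This is a simplification: real TCP switches to linear congestion avoidance at the threshold, which is modeled here by simply holding the window steady.

```python
def slow_start(ssthresh, rtts):
    """Congestion window (in segments) at the start of each round trip.

    During slow start each ACK grows the window by one segment, so the
    window doubles every round trip; once it reaches `ssthresh` the
    sender leaves slow start (held steady here as a simplification).
    """
    cwnd, history = 1, []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)   # exponential-growth phase
    return history

# Window grows 1, 2, 4, 8, 16, then stops doubling at the threshold.
windows = slow_start(ssthresh=16, rtts=6)
```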

In summary, TCP uses flow control, determined by client and server operating system configurations, distances, and other network conditions. As you'll see in subsequent sections, PacketShaper provides rate control, explicitly configured in user-defined policies.


BANDWIDTH MANAGEMENT APPROACHES
When faced with bandwidth contention and the resulting poor performance, a number of solutions come to mind. This section addresses the potential solutions, focusing on their advantages and limitations:

Adding Bandwidth

An obvious approach to overcoming bandwidth limitations is to add more bandwidth. As technology trends demonstrate, this is a short-term solution--as soon as bandwidth is increased, it is consumed. Non-urgent traffic will burst--as TCP is designed to do--and consume all available bandwidth at the expense of interactive, business-critical traffic. So you're back where you started--trying to manage the bandwidth you have more efficiently.

Using Queuing Schemes on Routers

For the most part, network devices have kept pace with evolving high-speed technology. Routers' queuing schemes--such as weighted fair queuing, priority output queuing, and custom queuing--attempt to prioritize and distribute bandwidth to individual data flows. Queuing schemes try to prevent low-volume applications, such as interactive web applications, from getting overtaken by large data transfers, typical of FTP traffic.

Router-based queuing schemes nevertheless have several limitations.

Upgrading Web Servers

Hardware improvements, server software, and HTTP protocols have caused the bottleneck to move away from the server, out to the access link. As illustrated in Figure 1, congestion occurs when data from a LAN's large pipe is passed to a smaller pipe on the WAN.

FIGURE 1. Access Link Bottlenecks

Defining Precise Control--The PacketShaper Solution

Traffic bottlenecks form at access links when a fast LAN pipe narrows into a slower WAN link, which causes multiple traffic sources to compete for limited capacity. This competition results in a backup that we refer to as "chunky" data. This is where PacketShaper makes a difference.

Imagine putting fine sand, rather than gravel, through a network pipe. Sand can pass through the pipe more evenly and quickly than chunks. PacketShaper conditions traffic so that it becomes more like sand than gravel. These smoothly controlled connections are much less likely to incur packet loss and, more importantly, the end user experiences consistent service.

As you'll see in the next section, PacketShaper offers predictable performance by taking advantage of TCP's own mechanisms to overcome TCP deficiencies. Where TCP relies on tossed packets to infer congestion, PacketShaper provides direct feedback to the transmitter by detecting a remote user's access speed and network latency and using this data to optimally pace transmission. This results in smoothed traffic flows.


HOW PACKETSHAPER WORKS

PacketShaper maintains state information about individual TCP connections, giving it the ability to provide direct, quality-of-service feedback to the transmitter. In addition, you can partition bandwidth resources and define PacketShaper policies to explicitly manage different traffic classes. As a result, you gain precise control of your service levels.

Packeteer's PacketShaper provides several key functions that differentiate it from other bandwidth-management solutions:

  * Classifying traffic for precise control
  * Controlling the end-to-end TCP connection to smooth traffic flows
  * Allocating bandwidth according to user-defined policies

These features are discussed in more detail in the following sections.

Classifying Traffic for Precise Control

PacketShaper's Traffic Discovery feature automatically monitors the traffic going through the PacketShaper and classifies it by protocol or service. This ability to automatically detect and classify an extensive collection of applications and protocols differentiates PacketShaper from other bandwidth-management tools. Whereas most network devices can identify traffic based on layers two or three of the standard OSI network model, PacketShaper discovers and classifies traffic based on layers two through seven.

PacketShaper uses a hierarchical tree structure to classify traffic. Each traffic type is defined by a traffic class in the tree. For example, email traffic may appear in the traffic tree as POP3 and SMTP classes for both inbound and outbound traffic. Using the hierarchical tree structure, you can also create additional traffic classes manually by specifying the characteristics of the traffic types you want to control--for example, traffic from a particular application (such as Oracle) or even a specific URL.

PacketShaper classifies a traffic flow by traversing the traffic class tree, attempting to match the flow to one of the defined classes. The final step in the classification process maps a flow to a policy--that is, a rule that defines the type of service you want a traffic class to get.
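
The match-a-flow-to-a-class-to-a-policy step might be sketched as follows. The class names, match criteria, and policy names below are hypothetical illustrations, not PacketShaper's actual configuration syntax, and the tree is flattened into an ordered list for simplicity:

```python
# Hypothetical traffic classes: each entry has match criteria and the
# name of the policy that governs flows it matches.
CLASS_TREE = [
    {"name": "smtp",       "match": {"proto": "tcp", "port": 25}, "policy": "priority-5"},
    {"name": "web-oracle", "match": {"proto": "tcp", "port": 80, "url_prefix": "/oracle"}, "policy": "rate-guaranteed"},
    {"name": "web",        "match": {"proto": "tcp", "port": 80}, "policy": "rate-burstable"},
]
DEFAULT_POLICY = "priority-3"  # unclassified traffic gets priority 3

def classify(flow):
    """Walk the class list in order; return the policy of the first
    class whose criteria all match the flow's attributes."""
    for cls in CLASS_TREE:
        if all(flow.get(k) == v or
               (k == "url_prefix" and str(flow.get("url", "")).startswith(v))
               for k, v in cls["match"].items()):
            return cls["policy"]
    return DEFAULT_POLICY
```

A flow for a URL under /oracle would match the more specific class before the generic web class, and anything unmatched falls through to the default priority-3 treatment the paper describes later.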

PacketShaper offers rich traffic classification by:

  * Automatically discovering the applications and protocols in use on the network
  * Classifying traffic at layers two through seven of the OSI model
  * Organizing traffic classes in a hierarchical tree that you can extend manually
  * Mapping each classified flow to a policy that defines its service

See the PacketShaper white paper entitled "What's on My Network?" for additional traffic classification details.

Controlling the End-to-End Connection

PacketShaper uses two methods to control TCP transmission rates and maximize the resulting throughput: it times the delivery of acknowledgments, and it adjusts the window size advertised to the sender.

PacketShaper changes the end-to-end TCP semantics from the middle of the connection. It calculates the round-trip time (RTT), intercepts the acknowledgment, and holds onto it for the amount of time that is required to smooth the traffic flow without incurring retransmission timeout (RTO). It also supplies a window size that helps the sender determine when to send the next packet. To see how this rate-control mechanism works, refer to Figure 2 and the subsequent PacketShaper data-flow example.

FIGURE 2. PacketShaper Takes Control of the Connection

A PacketShaper Data-Flow Example

Figure 2 shows how PacketShaper intervenes and paces the data transmission to deliver predictable service. The following steps trace the data transfer shown in Figure 2.

  1. A data segment is sent to the receiver.

  2. The receiver acknowledges receipt and advertises an 8000-byte window size.

  3. PacketShaper intercepts the ACK and determines that the data must be more evenly transmitted. Otherwise, subsequent data segments will queue up and packets will be tossed because insufficient bandwidth is available.

  4. PacketShaper sends an ACK to the sender, timed so that its arrival prompts the sender to emit data at the desired moment--that is, the ACK sequence number plus the window size tells the sender that it's time to transmit another packet.
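
The pacing decision in steps 3 and 4 amounts to scheduling when each intercepted ACK is released. Packeteer's actual algorithm is not given here, so the following is a simplified model under one assumption: the sender emits one segment per ACK it receives, so spacing ACKs segment-time apart paces the sender to a target rate. The segment size and rate are illustrative values.

```python
def ack_release_times(n_acks, segment_bytes, target_bps):
    """Times (in seconds from now) at which to release each held ACK
    so that a sender transmitting one segment per received ACK
    averages `target_bps` bits per second."""
    interval = (segment_bytes * 8) / target_bps   # seconds per segment
    return [i * interval for i in range(n_acks)]

# Pacing 1460-byte segments to 128 kbit/s releases one ACK
# roughly every 91 ms.
times = ack_release_times(4, 1460, 128_000)
```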

Smooth Traffic Flows With PacketShaper

Without the benefit of PacketShaper, multiple packets are sent and an intermediate router queues the packets. When the queue reaches its capacity, the router tosses packets, which then must be re-transmitted. Figure 3 shows bursty traffic, when PacketShaper is not used, and evenly spaced data transmissions, when PacketShaper takes control.

FIGURE 3. Traffic Behavior: Before and After PacketShaper

Note: Even if a link is not congested, traffic chunks are more prone to packet loss than evenly spaced traffic.

Allocating Bandwidth

After traffic classes have been created, you can define policies--the rules that govern how PacketShaper allocates bandwidth. You need not manage all network traffic with policies--only the traffic that affects your business' quality of service. As PacketShaper processes a traffic flow, it matches the flow to one of the classes in its tree structure and uses the matching class' assigned policy to set the quality of service for the flow.

PacketShaper offers several policy types--Rate, Priority, Never-Admit, Ignore, and Discard. The following sections describe how PacketShaper determines how to divide bandwidth in accordance with the policies you define.

Assigning Rates for a Traffic Class

Designed to smooth bursty traffic, rate policies let you reserve bandwidth by assigning a minimum guaranteed rate for a traffic class. The guaranteed rate sets a precise rate, in bits per second, for a flow. If additional bandwidth is available, the flow can use some of the excess rate, according to the policy settings you've defined. For example, during a typical web session, the wait period between clicks doesn't consume bandwidth, so PacketShaper frees up bandwidth to satisfy other demands. A rate policy also lets you limit how much bandwidth an individual flow can consume. For example, you may want to give each FTP flow a guaranteed minimum rate, but you don't want each FTP flow to get more than its fair share of bandwidth.
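
A rate policy's arithmetic--guaranteed minimum first, then a share of any excess, capped at a per-flow limit--can be sketched as below. The policy fields and the even split of excess are assumptions for illustration; a real allocator would also redistribute bandwidth that a capped flow cannot use.

```python
def allocate(link_bps, policies):
    """Give each flow its guaranteed rate first, then split the
    leftover link bandwidth evenly among flows, capping each flow
    at its `limit`.

    `policies` maps flow name -> {"guaranteed": bps, "limit": bps}.
    Assumes the guarantees together fit inside the link.
    """
    alloc = {name: p["guaranteed"] for name, p in policies.items()}
    excess = link_bps - sum(alloc.values())
    share = excess / len(policies) if policies else 0
    for name, p in policies.items():
        alloc[name] = min(p["limit"], alloc[name] + share)
    return alloc

# Two flows sharing a 512 kbit/s link: FTP is capped at its limit,
# web keeps its guarantee plus its share of the excess.
flows = {
    "ftp": {"guaranteed": 64_000,  "limit": 128_000},
    "web": {"guaranteed": 128_000, "limit": 512_000},
}
result = allocate(512_000, flows)
```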

PacketShaper monitors a connection's speed and adjusts bandwidth allocation as the connection speed changes. Low-speed connections and high-speed connections can be assigned separate guaranteed rates so that PacketShaper can scale bandwidth usage accordingly.

Prioritizing Bandwidth Allocation

You can use priority policies whenever guaranteed rate is not your primary objective. Priority policies are ideal for non-IP traffic, non-bursty traffic, or small traffic flows. For example, Telnet traffic consists of small packets that need high-priority treatment to keep interactive sessions viable. You assign a priority (0-7) to a traffic class so that PacketShaper can determine how to manage each flow.

You don't have to assign a policy to all traffic. Any traffic that you haven't explicitly prioritized is treated as priority-based traffic with a priority of 3.

PacketShaper Bandwidth-Allocation Order

PacketShaper uses the policies you've defined to determine how to allocate bandwidth. When determining bandwidth allocation, PacketShaper takes into account all bandwidth demands, not just the individual traffic flows. As shown in Figure 4, bandwidth is allocated based on the following basic allocation scheme:

FIGURE 4. How PacketShaper Allocates Bandwidth

Denying Access

In some cases, you may want to deny access to users--perhaps traffic from a particular IP address. Using PacketShaper's Never-Admit policy, you can block a connection and even inform web users by re-directing them to a URL that displays a message. A Discard policy lets you block a service and intentionally toss packets without notifying the user.

Controlling Admission

You can define what should happen if a flow for a traffic class cannot get the guaranteed bandwidth it needs. For example, if you've run out of bandwidth and the next flow for a class needs a guaranteed rate, PacketShaper can handle the bandwidth request by refusing the connection, by squeezing the connection into the existing bandwidth pipe, or--for web requests--by re-directing the request.
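
The admission decision reduces to a small branch over the bandwidth that remains. This sketch names the outcomes after the text's three options; the function and its threshold logic are hypothetical, not PacketShaper's API.

```python
def admit(available_bps, needed_bps, is_web_request):
    """Decide what to do with a new flow that wants a guaranteed rate:
    admit it, squeeze it into the bandwidth that remains, redirect it
    (web requests only), or refuse it outright."""
    if needed_bps <= available_bps:
        return "admit"
    if available_bps > 0:
        return "squeeze"      # runs below its guaranteed rate
    return "redirect" if is_web_request else "refuse"
```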

A PacketShaper Scenario

Anyone using a web browser to access information on the World Wide Web communicates on the Internet using HTTP (Hypertext Transfer Protocol) over TCP. HTTP traffic tends to be bursty because HTTP transfers data in response to each user request.

A typical Web-browsing session takes the following course:

User Action: Click on a button to select a specific URL.

PacketShaper Actions:

  1. Classifies the traffic flow by protocol (HTTP) and URL--Is it an index? Is it an html file? Is it a gif?

  2. Maps the traffic class to a policy--the rules for rate control.

  3. Smooths the data transfer, giving the user an even, non-bursty data display.


THE PACKETSHAPER ADVANTAGE
PacketShaper provides the technology that enables you to explicitly control TCP/IP bandwidth to keep your network under your control. This technology offers the following unique bandwidth-management features: