TCP CONGESTION CONTROL FOR HETEROGENEOUS NETWORKS

Abstract
A congestion control mechanism for TCP communication sessions is described. The congestion control mechanism adjusts the size of the congestion window based on a number, N, of parallel virtual connections. The number N of parallel virtual connections used to determine the congestion window is dynamically adjusted based on an estimation of the network condition.
20 Claims
 1. A method for congestion control of a communication session over a network comprising:
 determining an estimation of the network condition;
determining a congestion window for the communication session based on a number of parallel virtual communication sessions that will fully and fairly utilize the bandwidth of the network and a congestion control mechanism of the communication session; and
setting the congestion window for the communication session to the determined congestion window. (Dependent claims: 2–12)
 13. A computing device for controlling congestion of a communication session over a network comprising:
 a processing unit for executing instructions; and
a memory unit for storing instructions for execution by the processing unit, the instructions when executed configuring the computing device to provide:
a network condition estimation means for determining an estimation of the network condition;
a congestion window determination means for determining a congestion window for the communication session based on a number of parallel virtual communication sessions that will fully and fairly utilize the bandwidth of the network and a congestion control mechanism of the communication session, the congestion window determination means further for setting the congestion window for the communication session to the determined congestion window. (Dependent claims: 14–20)
Specification
This application claims priority to previously filed U.S. Provisional Application Ser. No. 61/342,434, filed Apr. 13, 2010.
BACKGROUND

The function of the TCP congestion control algorithm is to adjust the rate at which the protocol sends packets to the network using a congestion control window, cwnd. A good congestion control algorithm can fully utilize the bandwidth while avoiding overdriving the network and thereby creating packet losses. Since the introduction of TCP Reno, the first widely used TCP congestion control algorithm, many TCP congestion control algorithms have been proposed. At a high level, existing TCP congestion control algorithms can be classified into three main categories based on the input to the control mechanism: Loss-based TCP, Delay-based TCP and Hybrid TCP.
Loss-based TCP includes the original TCP Reno, TCP BIC, TCP CUBIC, High Speed TCP, Scalable TCP and so on. Among these Loss-based TCP variants, TCP Reno and TCP CUBIC are widely deployed, as a standard TCP algorithm and as the default TCP of Linux respectively. Using packet loss as the symptom of network congestion, Loss-based TCP reduces the value of cwnd when packet losses occur and increases cwnd otherwise. A basic assumption in the design of Loss-based TCP congestion control is that packet losses are caused only by overdriving the network, an assumption that is no longer valid when the algorithm is applied to wireless networks. Packet losses introduced by random physical-layer artifacts (e.g. multipath, interference), which are typical of wireless networks, will cause the congestion control algorithm to aggressively lower cwnd. Moreover, in a high-BDP network, Loss-based TCP requires a very low (on the order of 10^−7 or lower) random packet loss rate to fully utilize the network capacity, a requirement far from the reality of network conditions.
Delay-based TCP, which includes TCP Vegas and FAST TCP, uses the queuing delay as the symptom of congestion. The queuing delay is defined as the difference between the RTT and the propagation delay, i.e. the time actually required for a packet to be transmitted from the sender to the receiver. Delay-based TCP variants are more resilient to transient changes of network conditions such as random packet losses, and are also suitable for high-BDP networks. The downside of the approach is a consequence of the fact that an increase in round-trip delay will not necessarily lead immediately to packet loss (due to buffers): when Delay-based TCP shares the same bottleneck with Loss-based TCP, between the time when delay starts to increase and the time packet loss occurs, the cwnd of the Delay-based TCP will decrease while that of the Loss-based TCP will not, leading to bandwidth "starvation" for the Delay-based sessions.
Hybrid TCP uses both packet loss and delay as inputs to the cwnd control mechanism, and includes TCP variants for wireless environments, such as TCP Veno and TCP Westwood, as well as TCP variants for high-speed links, such as Compound TCP, TCP-Illinois, H-TCP and TCP-Fusion. Among these algorithms, Compound TCP has been widely deployed as the TCP congestion control algorithm of the Microsoft Windows Vista operating system, while TCP-Fusion was used in Sun Solaris 10. Although the performance of these TCP variants is good for the application scenarios they were originally designed for, it is difficult for existing TCP congestion control algorithms to perform well in the emerging generation of high-bandwidth wireless networks, such as LTE and WiMAX, as well as in applications over heterogeneous networks combining segments of wired and wireless links.
Parallel TCP is yet another research area of TCP congestion control. The core idea of Parallel TCP is to create multiple actual or virtual TCP sessions that are controlled jointly so as to fully exploit the network bandwidth. Parallel TCP can achieve very good performance in high-speed wireless networks; for example, parallel TCP sessions have been used to optimize the user experience in multimedia streaming over wireless links. In such a system, however, the contexts of multiple TCP sessions must be monitored, and the application layer software must be modified. In MulTCP, N virtual TCP sessions are utilized to simulate the behavior of multiple actual parallel TCP sessions. MulTCP can achieve very good performance when N is chosen properly, but may either underdrive or overdrive the network if the value of N is not appropriate.
It is desired to have a TCP congestion control mechanism that can fully and fairly utilize the bandwidth of various networks, including high-BDP networks as well as wireless networks.
SUMMARY

In accordance with the description there is provided a method for congestion control of a communication session over a network comprising determining an estimation of the network condition; determining a congestion window for the communication session based on a number of parallel virtual communication sessions that will fully and fairly utilize the bandwidth of the network and a congestion control mechanism of the communication session; and setting the congestion window for the communication session to the determined congestion window.
In accordance with the description there is also provided a computing device for controlling congestion of a communication session over a network comprising a processing unit for executing instructions and a memory unit for storing instructions for execution by the processing unit. The instructions, when executed, configure the computing device to provide a network condition estimation means for determining an estimation of the network condition, and a congestion window determination means for determining a congestion window for the communication session based on a number of parallel virtual communication sessions that will fully and fairly utilize the bandwidth of the network and a congestion control mechanism of the communication session, the congestion window determination means further for setting the congestion window for the communication session to the determined congestion window.
BRIEF DESCRIPTION OF THE DRAWINGS

Illustrative embodiments of the invention will now be described with reference to the following drawings in which:
FIG. 1 depicts in a block diagram an illustrative network in which the TCP-FIT congestion control mechanism may be used;
FIG. 2 depicts in a block diagram an illustrative computing device that may be used to implement the TCP-FIT congestion control mechanism described herein;
FIG. 3 depicts in a flow chart an illustrative method of congestion control;
FIG. 4 depicts in a flow chart a further illustrative embodiment of a method of congestion control;
FIG. 5 depicts in a block diagram an illustrative emulator setup used in evaluating the congestion control mechanism described herein;
FIG. 6 depicts an illustrative graphical user interface of a Linktropy emulator in accordance with emulations of the congestion control mechanism described herein;
FIGS. 7(a) and (b) show the resultant throughput comparison for a 1 Gbps link and a 100 Mbps link respectively;
FIGS. 8(a)–(d) show the resultant throughput comparison with the packet loss rate set between 1% and 7%;
FIGS. 9(a) and (b) show the bandwidth utilization comparison with the packet loss rate set at 0% and 5% respectively;
FIGS. 10(a)–(c) show the Bandwidth Stolen Rate for different congestion control mechanisms;
FIGS. 11(a)–(i) show the throughput of different congestion control mechanisms as tested in various networks; and
FIG. 12 shows a comparison between TCP-FIT and E-MulTCP as measured over China Telecom's 3G network in Beijing.
DETAILED DESCRIPTION

It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
A TCP congestion control mechanism, referred to herein as TCP-FIT, is described. TCP-FIT uses multiple parallel virtual TCP connections to fully and fairly utilize the network's bandwidth. In contrast to other parallel TCP techniques, such as MulTCP, TCP-FIT is a fully compliant TCP congestion control algorithm requiring no modifications to other layers of the protocol stack or to any application software. Using TCP-FIT, only one actual connection, with an associated cwnd, is established for each TCP session. Although the idea of virtual parallel sessions is useful for understanding the congestion control window adjustment formula of TCP-FIT, in an actual implementation of the congestion control mechanism only one physical connection is established, with a cwnd determined based on the apparent cwnd size of a number of parallel virtual connections.
In one embodiment of TCP-FIT, a dynamic number, N, of virtual TCP Reno connections is maintained. The cwnd of each individual virtual connection is adjusted according to the TCP Reno congestion control mechanism as follows:
Each RTT: cwnd ← cwnd + 1
Each Loss: cwnd ← cwnd − cwnd/2
TCP-FIT uses the congestion control mechanism's cwnd adjustment (in this example, TCP Reno's) and the number N of virtual connections to determine the Cwnd of the actual physical connection as follows:
Each RTT: Cwnd ← Cwnd + N
Each Loss: Cwnd ← Cwnd − Cwnd/(2N)
where Cwnd is the value of the congestion window of the actual physical TCP session, which consists of N virtual sessions each with congestion window value cwnd. RTT, or the round trip time, is calculated based on the time between sending a packet and receiving an ACK from the receiver for the corresponding packet. The value of RTT may be an instantaneous value, or may be based on an average or weighted average of previous RTT values. For each RTT, that is, each time an ACK is received, the RTT may be updated and the size of Cwnd adjusted accordingly as set forth above. A Loss occurs when a timeout timer associated with the sending of a packet expires before an ACK for the packet is received. When the timer expires, Cwnd may be adjusted accordingly as set forth above.
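The per-RTT and per-loss updates above can be sketched in code. The following is an illustrative Python sketch, not the patented implementation; the function and variable names are assumptions introduced for illustration:

```python
# Illustrative sketch of the TCP-FIT window updates for the actual connection,
# which behaves like an aggregate of N virtual TCP Reno sessions.

def on_rtt(Cwnd: float, N: int) -> float:
    """Each RTT (i.e., each ACK round): the aggregate window grows by N
    packets, one per virtual Reno session."""
    return Cwnd + N

def on_loss(Cwnd: float, N: int) -> float:
    """Each loss (timeout before an ACK arrives): effectively one of the N
    virtual sessions halves its window, so the aggregate shrinks by
    Cwnd / (2N)."""
    return Cwnd - Cwnd / (2 * N)
```

Note that with N = 1 these rules reduce to standard TCP Reno behavior: additive increase of one packet per RTT and halving of the window on loss.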
A potential issue with the above adjustment for increasing Cwnd is that, because the value of Cwnd is updated every RTT, of two MulTCP or TCP-FIT sessions sharing the same bottleneck, the session with the longer RTT will have fewer chances to update its Cwnd and will therefore be at a disadvantage. In order to mitigate this problem, instead of increasing Cwnd as set forth above, a normalization factor may be included in the calculation of Cwnd as follows:
Each RTT: Cwnd ← Cwnd + λ·N
where λ = RTT/RTT_0, and RTT_0 is a baseline RTT value representing the statistical "floor" of the RTT values in the network.
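As a sketch under the definitions above (hypothetical names, not the patented code), the λ-normalized increase might look like:

```python
def on_rtt_normalized(Cwnd: float, N: float, rtt: float, rtt0: float) -> float:
    """Each RTT: Cwnd <- Cwnd + lambda * N, with lambda = RTT / RTT_0.
    A session with a longer RTT updates its window less often, but each
    update is proportionally larger, compensating for the RTT bias."""
    lam = rtt / rtt0
    return Cwnd + lam * N
```

For example, a session whose RTT is twice the baseline gains 2N packets per update rather than N.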
In TCP-FIT, the number N of virtual parallel Reno-like sessions may be dynamically adjusted using the following equation:
N_{i+1} = max{ 1, N_i + (α − (Q/Cwnd)·N_i) }
where Q is an estimate of the number of in-flight packets that are currently buffered in the network, and α is a parameter:
α = (Q/Cwnd)·N
where Q = (average_rtt − base_rtt)·(Cwnd/average_rtt)
In the above, base_rtt is the minimal RTT value observed within a window of time, and may be used as a reasonable estimate of the current propagation delay of the network. The value average_rtt − base_rtt represents the estimated queuing delay. Since it takes average_rtt to transmit Cwnd packets from the source to the destination, dividing Cwnd by average_rtt produces the estimated throughput of the network. This, multiplied by the average queuing delay, gives an estimate of Q.
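A minimal sketch of the Q estimate and the N adjustment, assuming the quantities defined above (function names are hypothetical):

```python
def estimate_q(average_rtt: float, base_rtt: float, Cwnd: float) -> float:
    """Q = (average_rtt - base_rtt) * (Cwnd / average_rtt):
    estimated queuing delay multiplied by estimated throughput gives the
    estimated number of packets buffered in the network."""
    return (average_rtt - base_rtt) * (Cwnd / average_rtt)

def update_n(N: float, Q: float, Cwnd: float, alpha: float) -> float:
    """N_{i+1} = max{1, N_i + (alpha - (Q/Cwnd) * N_i)}."""
    return max(1.0, N + (alpha - (Q / Cwnd) * N))
```

Note that the update leaves N unchanged exactly when (Q/Cwnd)·N = α, which matches the steady-state condition α = (Q/Cwnd)·N given above.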
The value of base_rtt has a significant impact on the overall performance of the system, since in real-world applications such as wireless networks the propagation delay is affected by a number of factors, such as the modulation and channel coding schemes used. Because these factors can change fairly rapidly in a wireless network, TCP-FIT periodically enters a base_rtt estimation mode; once base_rtt has been estimated, the previous value of N is restored. In the base_rtt estimation mode, TCP-FIT sets N to 1 and takes the minimum of m RTT samples as a temporary min_rtt variable. The new base_rtt value may then be calculated using:
base_rtt ← (7/8)·base_rtt + (1/8)·min_rtt
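The periodic re-estimation can be sketched as follows (hypothetical names; the collection of the m RTT samples with N temporarily set to 1 is abstracted into a list of samples):

```python
def update_base_rtt(base_rtt: float, rtt_samples: list) -> float:
    """Exponentially weighted update of the propagation-delay estimate:
    base_rtt <- 7/8 * base_rtt + 1/8 * min_rtt, where min_rtt is the
    minimum of the m RTT samples taken during the estimation mode."""
    min_rtt = min(rtt_samples)
    return (7.0 / 8.0) * base_rtt + (1.0 / 8.0) * min_rtt
```

The 7/8–1/8 weighting smooths the estimate so a single unusually low or high sample window cannot shift base_rtt abruptly.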
In another embodiment of TCP-FIT, the following loss-based method may be used to adjust the congestion control window Cwnd:
Each RTT: Cwnd ← Cwnd + N_t
Each Loss: Cwnd ← Cwnd − (2/(3·N_t + 1))·Cwnd
Similar to standard MulTCP, the Cwnd of TCP-FIT increases by N_t packets during an RTT. Given the same network conditions, standard MulTCP with N parallel connections cannot guarantee that the congestion window is exactly N times that of a single Reno session. Therefore, to improve overall throughput, Cwnd is decreased by a factor of 2/(3N+1) when a packet loss occurs, instead of the 1/(2N) described above.
The value N_t may be updated after each packet loss using:
N_{t+1} = N_t + 1,           if Q < α·(Cwnd/N_t)
N_{t+1} = N_t,               if Q = α·(Cwnd/N_t)
N_{t+1} = max(1, N_t − 1),   if Q > α·(Cwnd/N_t)
where α is a parameter with 0 < α < 1, and Q is an estimate of the number of in-flight packets that are currently queued in the network buffer. Q may be calculated in a way similar to existing Delay-based TCP and Hybrid TCP variants:
Q = α·(Cwnd/N) = (curr_rtt − min_rtt)·(curr_cwnd/curr_rtt)
where curr_rtt and curr_cwnd are the current RTT and congestion window size respectively, and min_rtt is the minimal recently observed RTT value, used as a reasonable estimate of the propagation delay of the network. The value curr_rtt − min_rtt represents the estimated queuing delay. Since a TCP session sends cwnd packets in an RTT, curr_cwnd/curr_rtt may be considered an estimate of the packet transmission rate of the current TCP session. In TCP-FIT, the average of the RTT values between two consecutive packet losses can be used as curr_rtt.
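Under the definitions above, this second, loss-based embodiment might be sketched as follows (illustrative Python with hypothetical names, not the patented implementation):

```python
def on_rtt_lossbased(Cwnd: float, N_t: int) -> float:
    """Each RTT: Cwnd <- Cwnd + N_t."""
    return Cwnd + N_t

def on_loss_lossbased(Cwnd: float, N_t: int) -> float:
    """Each loss: Cwnd <- Cwnd - (2 / (3*N_t + 1)) * Cwnd."""
    return Cwnd - (2.0 / (3 * N_t + 1)) * Cwnd

def estimate_q_lossbased(curr_rtt: float, min_rtt: float,
                         curr_cwnd: float) -> float:
    """Q = (curr_rtt - min_rtt) * (curr_cwnd / curr_rtt):
    queuing delay times estimated packet transmission rate."""
    return (curr_rtt - min_rtt) * (curr_cwnd / curr_rtt)

def update_n_t(N_t: int, Q: float, Cwnd: float, alpha: float) -> int:
    """Piecewise update of N_t after each packet loss."""
    threshold = alpha * Cwnd / N_t
    if Q < threshold:
        return N_t + 1            # few packets queued: add a virtual session
    if Q > threshold:
        return max(1, N_t - 1)    # queue building up: remove a session
    return N_t                    # at the target: keep N_t unchanged
```

Note that with N_t = 1, the loss rule gives a decrease factor of 2/(3·1+1) = 1/2, i.e. the standard Reno halving.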
Since Q = α·(Cwnd/N), α could be set to the ratio between the number of in-flight packets that are queued in the network buffer and the value of the congestion window of a TCP Reno session, to achieve inter-fairness.
From the above, it is easy to find the steady state value N*:

N* = α·(curr_rtt/(curr_rtt − base_rtt))
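As a small illustration of this steady-state relationship (hypothetical helper name):

```python
def steady_state_n(alpha: float, curr_rtt: float, base_rtt: float) -> float:
    """N* = alpha * curr_rtt / (curr_rtt - base_rtt). A large queuing delay
    relative to curr_rtt drives N* down toward alpha; a small queuing delay
    drives N* up, adding virtual sessions to fill the unused capacity."""
    return alpha * curr_rtt / (curr_rtt - base_rtt)
```

For example, with α = 0.5 and a queuing delay equal to half of curr_rtt, the steady-state number of virtual sessions is 1.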
FIG. 1 depicts in a block diagram an illustrative network in which the TCP-FIT congestion control mechanism may be used. The network 100 comprises a plurality of senders 102a, 102b (referred to collectively as senders 102), each of which comprises a TCP stack 104 that includes the TCP-FIT congestion control mechanism 106. The senders 102 send packets of information to a plurality of receivers 108a, 108b (referred to collectively as receivers 108). The receivers 108 each comprise a TCP stack 110. The TCP stacks 110 of the receivers 108 are depicted without a congestion control mechanism. Although the TCP implementation of the receivers 108 may include a congestion control mechanism, it is the senders 102 that are responsible for congestion control, and as such it is omitted from the receivers 108 of FIG. 1.
The senders 102 transmit the packets to the receivers 108 over a network comprised of a plurality of links, one of which will be a bottleneck 112. The bottleneck link has a bandwidth at which it can transmit packets, depicted by section 114 of the bottleneck link. When the number of packets arriving at the bottleneck link 112 exceeds the transmission bandwidth capacity 114, the packets 116 cannot be transmitted right away and so are temporarily stored in buffer 118. If packets arrive while the buffer 118 is full, the link is congested and some packet(s) will be dropped. The timeout timer of the sender associated with a dropped packet will expire, and the TCP-FIT congestion control mechanism will reduce the Cwnd of the session.
FIG. 2 depicts in a block diagram an illustrative computing device that may be used to implement the TCPFIT congestion control mechanism described herein. The computing device 200 comprises a processing unit 202 for executing instructions and a memory unit 204 for storing instructions. The memory unit 204 may include volatile memory and nonvolatile storage 206. The memory unit 204 stores instructions 208 that when executed by the processing unit 202 configure the computing device 200 to provide the TCPFIT congestion control mechanism 210.
The TCPFIT congestion control mechanism 210 comprises a network condition estimator 212 that receives transmission information, such as the receipt of ACKs or the expiry of a timeout timer. The network condition estimator 212 uses the received information to estimate the network condition, which may include, for example, updating an average RTT, a minimum and maximum RTT, an estimation of the number of queued packets, the loss rate, etc. The estimated network condition is used by an N adjuster 214 that dynamically adjusts the number of parallel virtual connections that will fairly and fully utilize the network bandwidth. A Cwnd adjuster uses the number of parallel virtual connections to set the size of the Cwnd for the connection, as described above, when an ACK packet is received or a timeout timer expires, indicating a lost packet.
Although the above has been described with regard to providing N parallel virtual TCP Reno connections, it is possible to use control mechanisms other than TCP Reno, for example, TCP FAST, TCP Veno, TCP Westwood, etc. Each control mechanism may be suited to different types of networks or network conditions. The TCPFIT congestion control 210 may include a mechanism selector 218 that selects an appropriate control mechanism based on the estimated network condition. The Cwnd adjuster then determines the size of the Cwnd based on the selected control mechanism and the number of parallel virtual connections.
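The mechanism selector 218 could be sketched as a rule table over the estimated network condition. The thresholds and the mapping below are purely illustrative assumptions (the patent does not specify them); only the idea of choosing a variant from loss/delay estimates comes from the text:

```python
def select_mechanism(loss_rate: float, rtt_variation: float) -> str:
    """Illustrative selector: thresholds and return values are examples only.

    High random loss with stable RTT suggests a wireless link (Westwood/Veno
    territory); very low loss suggests a clean high-BDP path (FAST territory);
    otherwise fall back to standard Reno behavior.
    """
    if loss_rate > 0.01 and rtt_variation < 0.1:
        return "westwood"
    if loss_rate < 0.001 and rtt_variation < 0.1:
        return "fast"
    return "reno"
```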
FIG. 3 depicts in a flow chart an illustrative method of congestion control. The method 300 determines an estimate for the network condition (302), which may include determining an estimate for the RTT as well as the rate of packet loss. Once the network condition is estimated, the congestion window is determined based on a number of parallel virtual connections and the control mechanism of the connections (304). If the control mechanism is TCP Reno, the congestion window may be increased when an ACK is received and decreased when a packet loss occurs. Once the Cwnd size for the actual physical TCP connection has been determined based on the number of parallel virtual connections, it is set for the connection (306) and additional packets may be sent in accordance with the updated Cwnd.
FIG. 4 depicts in a flow chart a further illustrative embodiment of a method of congestion control. Unlike the method 300 which may determine the number of parallel virtual connections to use in determining Cwnd independently from updating Cwnd, the method 400 updates the number of parallel virtual connections each time Cwnd will be updated.
The method 400 transmits a packet (402) and then waits (404) until an ACK is received (406) or a timeout timer expires (408). If an ACK is not received (No at 406) and a timeout timer has not expired (No at 408), the method 400 returns to wait (404) for a period of time again. If an ACK is received (Yes at 406), the estimate of the network condition, namely the RTT, is updated (410), and the number N of parallel virtual connections is updated (412). Once the number of parallel virtual connections is updated, the Cwnd is increased according to the control mechanism of the connection as if it were N connections. Once the Cwnd is updated, another packet may be sent (402). If a timeout occurs (Yes at 408), the estimate of the network condition, namely the packet loss, is updated (410), and the number N of parallel virtual connections is updated (412). Once the number of parallel virtual connections is updated, the Cwnd is decreased according to the control mechanism of the connection and the number N of parallel virtual connections. Once the Cwnd is updated, another packet may be sent (402).
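One iteration of the event handling in method 400 can be sketched as follows. The additive increase of N/Cwnd per ACK and the 1/(2N) back-off on timeout are illustrative ways of emulating N parallel Reno connections, not the patent's exact update rules; the function name and state layout are assumptions:

```python
def congestion_control_step(event: str, state: dict) -> None:
    """One iteration of method 400 (simplified sketch): after the network
    condition and N have been updated (410, 412), adjust Cwnd as if the
    session were N parallel Reno connections."""
    if event == "ack":
        # N virtual Reno connections add N packets per RTT in aggregate,
        # i.e. roughly N/Cwnd per received ACK.
        state["cwnd"] += state["n"] / state["cwnd"]
    elif event == "timeout":
        # A single Reno connection halves its window on loss; with N virtual
        # connections only the losing 1/N share is halved, so the aggregate
        # window shrinks by a factor of 1/(2N).
        state["cwnd"] = max(1.0, state["cwnd"] * (1 - 1 / (2 * state["n"])))
```

For example, with Cwnd = 10 and N = 2, an ACK grows the window to 10.2, while a timeout would shrink a window of 10 to 7.5 rather than Reno's 5.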
As set forth further below, the efficiency and performance of the TCPFIT congestion control have been analyzed with both emulation and real-world network tests.
As illustrated in FIG. 1, it is assumed that the network K includes one bottleneck link, which has a bandwidth limit B, a buffer size U, a round-trip propagation delay D and an inherent random packet loss rate P, as well as several non-bottleneck links with unlimited bandwidth and different round-trip propagation delays. In the network model, n TCP sessions (2 are depicted) share the bottleneck link at the same time but may have different non-bottleneck routing paths. These TCP sessions may use different TCP congestion control algorithms. When these TCP sessions send packets through the network, there are on average M packets traversing the bottleneck. The relationship between M, the congestion packet loss rate p, and the queuing delay q can be approximated using:
<maths id="MATHUS00011" num="00011"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><mi>p</mi><mo>=</mo><mn>0</mn></mrow><mo>,</mo><mrow><mi>q</mi><mo>=</mo><mn>0</mn></mrow><mo>,</mo></mrow></mtd><mtd><mrow><mn>0</mn><mo>≤</mo><mi>M</mi><mo><</mo><mrow><mi>B</mi><mo>·</mo><mi>D</mi></mrow></mrow></mtd><mtd><mrow><mi>state</mi><mo></mo><mrow><mo>(</mo><mi>a</mi><mo>)</mo></mrow></mrow></mtd></mtr><mtr><mtd><mrow><mrow><mi>p</mi><mo>></mo><mn>0</mn></mrow><mo>,</mo><mrow><mi>q</mi><mo>=</mo><mfrac><mrow><mi>M</mi><mo>−</mo><mrow><mi>B</mi><mo>·</mo><mi>D</mi></mrow></mrow><mi>B</mi></mfrac></mrow><mo>,</mo></mrow></mtd><mtd><mrow><mrow><mi>B</mi><mo>·</mo><mi>D</mi></mrow><mo>≤</mo><mi>M</mi><mo>≤</mo><mrow><mrow><mi>B</mi><mo>·</mo><mi>D</mi></mrow><mo>+</mo><mi>U</mi></mrow></mrow></mtd><mtd><mrow><mi>state</mi><mo></mo><mrow><mo>(</mo><mi>b</mi><mo>)</mo></mrow></mrow></mtd></mtr><mtr><mtd><mrow><mrow><mi>p</mi><mo>></mo><mn>0</mn></mrow><mo>,</mo><mrow><mi>q</mi><mo>→</mo><mi>∞</mi></mrow><mo>,</mo></mrow></mtd><mtd><mrow><mrow><mrow><mi>B</mi><mo>·</mo><mi>D</mi></mrow><mo>+</mo><mi>U</mi></mrow><mo><</mo><mi>M</mi></mrow></mtd><mtd><mrow><mi>state</mi><mo></mo><mrow><mo>(</mo><mi>c</mi><mo>)</mo></mrow></mrow></mtd></mtr></mtable></math></maths>
In state (a) above, when M is smaller than the link bandwidth-delay product (BDP), no packets will be queued in the bottleneck buffer and no congestion packet loss occurs. If M is higher than the BDP but smaller than the BDP plus the bottleneck buffer size U, which is state (b), the bottleneck bandwidth is fully utilized and packets begin to queue in the buffer. As a result, the queuing delay q begins to increase and some packets may be dropped according to the active queue management mechanism of the bottleneck link. In state (c), when M is higher than B·D+U, the bottleneck buffer will overflow, some packets will be lost due to congestion, and the queuing delay is effectively infinite.
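The three-state classification above translates directly into code. A minimal sketch, with illustrative names and units (packets and seconds assumed):

```python
def bottleneck_state(m: float, b: float, d: float, u: float):
    """Classify the bottleneck per the piecewise model above.

    m: average in-flight packets, b: bandwidth (pkts/s),
    d: round-trip propagation delay (s), u: buffer size (pkts).
    Returns (state, queuing_delay)."""
    bdp = b * d                        # bandwidth-delay product
    if m < bdp:
        return ("a", 0.0)              # under-utilized: no queue, no loss
    if m <= bdp + u:
        return ("b", (m - bdp) / b)    # fully utilized: queue builds up
    return ("c", float("inf"))         # buffer overflow: congestion losses
```

For example, with b = 100 pkts/s, d = 1 s and u = 20 pkts, 110 in-flight packets put the link in state (b) with a 0.1 s queuing delay.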
Obviously, the best state of operation is (b), when the bandwidth resource of the bottleneck is fully utilized with no congestion-related packet losses. The function of the TCP congestion control algorithm is to adjust the packet sending rate so that the corresponding session operates in this state while using only a fair share of the resources.
The throughput models of several typical TCP variants are listed in Table 1.
<tables id="TABLEUS00001" num="00001"><table frame="none" colsep="0" rowsep="0" pgwide="1"><tgroup align="left" colsep="0" rowsep="0" cols="1"><colspec colname="1" colwidth="385pt" align="center"/><thead><row><entry namest="1" nameend="1" rowsep="1">TABLE 1</entry></row></thead><tbody valign="top"><row><entry namest="1" nameend="1" align="center" rowsep="1"/></row><row><entry>TCP Models</entry></row></tbody></tgroup><tgroup align="left" colsep="0" rowsep="0" cols="4"><colspec colname="1" colwidth="42pt" align="left"/><colspec colname="2" colwidth="154pt" align="center"/><colspec colname="3" colwidth="70pt" align="center"/><colspec colname="4" colwidth="119pt" align="center"/><tbody valign="top"><row><entry>TCP Variants</entry><entry>Throughput Models</entry><entry>Upper Bound</entry><entry>η = T<sub>i</sub>*(D<sub>i</sub>)/T<sub>j</sub>*(D<sub>j</sub>)</entry></row><row><entry namest="1" nameend="4" align="center" rowsep="1"/></row><row><entry>TCPFIT</entry><entry><maths id="MATHUS00012" num="00012"><math overflow="scroll"><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>=</mo><mrow><mfrac><mi>α</mi><mi>q</mi></mfrac><mo></mo><msqrt><mfrac><mn>3</mn><mrow><mn>2</mn><mo></mo><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow></mrow></mfrac></msqrt></mrow></mrow></math></maths></entry><entry>x</entry><entry>η = 1</entry></row><row><entry></entry></row><row><entry>Reno</entry><entry><maths id="MATHUS00013" num="00013"><math overflow="scroll"><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>=</mo><mrow><mfrac><mn>1</mn><mrow><mi>D</mi><mo>+</mo><mi>q</mi></mrow></mfrac><mo></mo><mrow><msqrt><mfrac><mn>3</mn><mrow><mn>2</mn><mo></mo><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow></mrow></mfrac></msqrt><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo>[</mo><mn>25</mn><mo>]</mo></mrow></mrow></mrow></math></maths></entry><entry><maths id="MATHUS00014" num="00014"><math 
overflow="scroll"><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>≤</mo><mrow><mfrac><mn>1</mn><mi>D</mi></mfrac><mo></mo><msqrt><mfrac><mn>3</mn><mrow><mn>2</mn><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mi>P</mi></mrow></mfrac></msqrt></mrow></mrow></math></maths></entry><entry><maths id="MATHUS00015" num="00015"><math overflow="scroll"><mrow><mi>η</mi><mo>=</mo><mfrac><mrow><msub><mi>D</mi><mi>j</mi></msub><mo>+</mo><mi>q</mi></mrow><mrow><msub><mi>D</mi><mi>i</mi></msub><mo>+</mo><mi>q</mi></mrow></mfrac></mrow></math></maths></entry></row><row><entry></entry></row><row><entry>CTCP</entry><entry><maths id="MATHUS00016" num="00016"><math overflow="scroll"><mrow><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>=</mo><mrow><mfrac><mn>1</mn><mrow><mi>D</mi><mo>+</mo><mi>q</mi></mrow></mfrac><mo></mo><mfrac><mi>A</mi><msup><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow><mfrac><mn>1</mn><mrow><mn>2</mn><mo></mo><mi>k</mi></mrow></mfrac></msup></mfrac></mrow></mrow><mo>,</mo></mrow></math></maths><maths id="MATHUS00017" num="00017"><math overflow="scroll"><mrow><mi>A</mi><mo>=</mo><mfrac><msup><mrow><mo>(</mo><mfrac><mrow><mn>1</mn><mo></mo><msup><mrow><mo>(</mo><mrow><mn>1</mn><mo></mo><msup><mi>β</mi><mi>′</mi></msup></mrow><mo>)</mo></mrow><mrow><mn>2</mn><mo></mo><mi>k</mi></mrow></msup></mrow><mrow><mn>2</mn><mo></mo><mi>k</mi></mrow></mfrac><mo>)</mo></mrow><mfrac><mrow><mn>1</mn><mo></mo><mi>k</mi></mrow><mrow><mn>2</mn><mo></mo><mi>k</mi></mrow></mfrac></msup><mrow><msup><msup><mi>α</mi><mi>′</mi></msup><mfrac><mn>1</mn><mrow><mn>2</mn><mo></mo><mi>k</mi></mrow></mfrac></msup><mo></mo><mrow><mo>(</mo><mrow><mn>1</mn><mo></mo><msup><mrow><mo>(</mo><mrow><mn>1</mn><mo></mo><msup><mi>β</mi><mi>′</mi></msup></mrow><mo>)</mo></mrow><mrow><mi>I</mi><mo></mo><mi>k</mi></mrow></msup></mrow><mo>)</mo></mrow></mrow></mfrac></mrow></math></maths></entry><entry><maths id="MATHUS00018" num="00018"><math 
overflow="scroll"><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>≤</mo><mrow><mfrac><mn>1</mn><mi>D</mi></mfrac><mo></mo><mfrac><mi>A</mi><msup><mi>P</mi><mfrac><mn>1</mn><mrow><mn>2</mn><mo></mo><mi>k</mi></mrow></mfrac></msup></mfrac></mrow></mrow></math></maths></entry><entry><maths id="MATHUS00019" num="00019"><math overflow="scroll"><mrow><mi>η</mi><mo>≤</mo><mrow><msup><mrow><mo>[</mo><mfrac><mrow><msub><mi>D</mi><mi>j</mi></msub><mo>+</mo><mi>q</mi></mrow><mrow><msub><mi>D</mi><mi>i</mi></msub><mo>+</mo><mi>q</mi></mrow></mfrac><mo>]</mo></mrow><mn>2</mn></msup><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo>[</mo><mn>6</mn><mo>]</mo></mrow></mrow></math></maths></entry></row><row><entry></entry></row><row><entry/><entry>where α′, β′ and k are preset parameters. [6]</entry><entry/><entry/></row><row><entry></entry></row><row><entry>Veno</entry><entry><maths id="MATHUS00020" num="00020"><math overflow="scroll"><mrow><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>=</mo><mrow><mfrac><mn>1</mn><mrow><mi>D</mi><mo>+</mo><mi>q</mi></mrow></mfrac><mo></mo><msqrt><mfrac><mrow><mn>1</mn><mo>+</mo><msup><mi>γ</mi><mi>′</mi></msup></mrow><mrow><mn>2</mn><mo></mo><mrow><mo>(</mo><mrow><mn>1</mn><mo></mo><msup><mi>γ</mi><mi>′</mi></msup></mrow><mo>)</mo></mrow><mo></mo><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow></mrow></mfrac></msqrt></mrow></mrow><mo>,</mo></mrow></math></maths></entry><entry><maths id="MATHUS00021" num="00021"><math overflow="scroll"><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>≤</mo><mrow><mfrac><mn>1</mn><mi>D</mi></mfrac><mo></mo><msqrt><mfrac><mrow><mn>1</mn><mo>+</mo><msup><mi>γ</mi><mi>′</mi></msup></mrow><mrow><mn>2</mn><mo></mo><mrow><mo>(</mo><mrow><mn>1</mn><mo></mo><msup><mi>γ</mi><mi>′</mi></msup></mrow><mo>)</mo></mrow><mo></mo><mi>P</mi></mrow></mfrac></msqrt></mrow></mrow></math></maths></entry><entry><maths id="MATHUS00022" num="00022"><math 
overflow="scroll"><mrow><mi>η</mi><mo>=</mo><mfrac><mrow><mrow><mo>(</mo><mrow><msub><mi>D</mi><mi>j</mi></msub><mo>+</mo><mi>q</mi></mrow><mo>)</mo></mrow><mo></mo><msqrt><mrow><mrow><mo>(</mo><mrow><mn>1</mn><mo>+</mo><msubsup><mi>γ</mi><mi>i</mi><mi>′</mi></msubsup></mrow><mo>)</mo></mrow><mo></mo><mrow><mo>(</mo><mrow><mn>2</mn><mo></mo><mrow><mo>(</mo><mrow><mn>1</mn><mo></mo><msubsup><mi>γ</mi><mi>j</mi><mi>′</mi></msubsup></mrow><mo>)</mo></mrow><mo></mo><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow></mrow><mo>)</mo></mrow></mrow></msqrt></mrow><mrow><mrow><mo>(</mo><mrow><msub><mi>D</mi><mi>i</mi></msub><mo>+</mo><mi>q</mi></mrow><mo>)</mo></mrow><mo></mo><msqrt><mrow><mrow><mo>(</mo><mrow><mn>1</mn><mo>+</mo><msubsup><mi>γ</mi><mi>j</mi><mi>′</mi></msubsup></mrow><mo>)</mo></mrow><mo></mo><mrow><mo>(</mo><mrow><mn>2</mn><mo></mo><mrow><mo>(</mo><mrow><mn>1</mn><mo></mo><msubsup><mi>γ</mi><mi>i</mi><mi>′</mi></msubsup></mrow><mo>)</mo></mrow><mo></mo><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow></mrow><mo>)</mo></mrow></mrow></msqrt></mrow></mfrac></mrow></math></maths></entry></row><row><entry></entry></row><row><entry/><entry>where ½ ≦ γ′ ≦ ⅘. 
[26]</entry><entry/><entry/></row><row><entry></entry></row><row><entry>Westwood</entry><entry><maths id="MATHUS00023" num="00023"><math overflow="scroll"><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>=</mo><mrow><mfrac><mn>1</mn><msqrt><mrow><mrow><mo>(</mo><mrow><mi>D</mi><mo>+</mo><mi>q</mi></mrow><mo>)</mo></mrow><mo></mo><mrow><mo>(</mo><mrow><mrow><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow><mo></mo><mi>D</mi></mrow><mo>+</mo><mi>q</mi></mrow><mo>)</mo></mrow></mrow></msqrt></mfrac><mo></mo><mrow><msqrt><mfrac><mrow><mo>(</mo><mrow><mn>1</mn><mo></mo><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow></mrow><mo>)</mo></mrow><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow></mfrac></msqrt><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo>[</mo><mn>27</mn><mo>]</mo></mrow></mrow></mrow></math></maths></entry><entry><maths id="MATHUS00024" num="00024"><math overflow="scroll"><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>≤</mo><mrow><mfrac><mn>1</mn><mi>DP</mi></mfrac><mo></mo><msqrt><mrow><mn>1</mn><mo></mo><mi>P</mi></mrow></msqrt></mrow></mrow></math></maths></entry><entry><maths id="MATHUS00025" num="00025"><math 
overflow="scroll"><mrow><mi>η</mi><mo>=</mo><msqrt><mfrac><mrow><mrow><mrow><mo>(</mo><mrow><msub><mi>D</mi><mi>j</mi></msub><mo>+</mo><mi>q</mi></mrow><mo>)</mo></mrow><mo></mo><mrow><mo>(</mo><mrow><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow><mo></mo><msub><mi>D</mi><mi>j</mi></msub></mrow><mo>)</mo></mrow></mrow><mo>+</mo><mi>q</mi></mrow><mrow><mrow><mrow><mo>(</mo><mrow><msub><mi>D</mi><mi>i</mi></msub><mo>+</mo><mi>q</mi></mrow><mo>)</mo></mrow><mo></mo><mrow><mo>(</mo><mrow><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow><mo></mo><msub><mi>D</mi><mi>i</mi></msub></mrow><mo>)</mo></mrow></mrow><mo>+</mo><mi>q</mi></mrow></mfrac></msqrt></mrow></math></maths></entry></row><row><entry></entry></row><row><entry>Vegas/FAST</entry><entry>T* = α″/q, where α″ is preset parameter. [28] [8]</entry><entry>x</entry><entry>η = 1</entry></row><row><entry namest="1" nameend="4" align="center" rowsep="1"/></row></tbody></tgroup></table></tables>
Network Utilization

Theorem 1: The bottleneck link of a network K as depicted in FIG. 1 with a TCPFIT session works in state (b).
Proof. It is obvious that the throughput of TCPFIT satisfies T*<B.
 Suppose the bottleneck is in state (a), where q=0. From Table 1, the throughput of TCPFIT would then be T*=∞, which contradicts T*<B.
 If the bottleneck is in state (c), then q→∞. From Table 1, T*=0 and therefore M→0, which means that the bottleneck will transition to state (a).
Theorem 1 shows that TCPFIT can fully utilize the network capacity. Similar conclusions can be proved for TCP Vegas and FAST but not for other TCP variants.
We use the throughput model of TCP Reno in Table 1 as an example. Since p≧0 and q≧0, there is an upper bound for the throughput of a Reno session for any given P and D that is not a function of B:
<maths id="MATHUS00026" num="00026"><math overflow="scroll"><mrow><msup><mi>T</mi><mo>*</mo></msup><mo>=</mo><mrow><mfrac><mn>1</mn><mrow><mi>D</mi><mo>+</mo><mi>q</mi></mrow></mfrac><mo></mo><msqrt><mfrac><mn>3</mn><mrow><mn>2</mn><mo></mo><mrow><mo>(</mo><mrow><mi>P</mi><mo>+</mo><mi>p</mi></mrow><mo>)</mo></mrow></mrow></mfrac></msqrt></mrow><mo>≤</mo><mrow><mfrac><mn>1</mn><mi>D</mi></mfrac><mo></mo><msqrt><mfrac><mn>3</mn><mrow><mn>2</mn><mo></mo><mi>P</mi></mrow></mfrac></msqrt></mrow></mrow></math></maths>
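The bound can be checked numerically. A sketch, with illustrative function names and example values (in packets per second when D is in seconds):

```python
import math

def reno_throughput(d: float, q: float, P: float, p: float) -> float:
    """Reno model from Table 1: T* = 1/(D+q) * sqrt(3 / (2(P+p)))."""
    return (1.0 / (d + q)) * math.sqrt(3.0 / (2.0 * (P + p)))

def reno_upper_bound(d: float, P: float) -> float:
    """Bound obtained by setting q = 0 and p = 0: T* <= (1/D) * sqrt(3/(2P))."""
    return (1.0 / d) * math.sqrt(3.0 / (2.0 * P))

# Example: with D = 100 ms and random loss P = 1%, the bound is about
# 122.5 packets/s no matter how large the bottleneck bandwidth B is.
```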
Similar upper bounds for the throughputs of the different TCP algorithms, except for TCPFIT and Vegas/FAST, are given in the third column of Table 1. Because these upper bounds are independent of B, when the bottleneck link state of network K is sufficiently bad (i.e. having a high packet loss rate P and/or a very long propagation delay), or if the bandwidth of the bottleneck is sufficiently high, the aggregate throughput of the TCP sessions in K will be lower than the total bottleneck bandwidth B. Theoretically, therefore, except for TCPFIT and Vegas/FAST TCP, the other TCP algorithms designed for wireless/large-BDP networks can only mitigate but not completely solve the bandwidth utilization problem in wireless/large-BDP environments. At least theoretically, compared with these TCP variants, TCPFIT has an advantage in its capability of achieving higher throughputs.
Fairness

RTT-fairness: RTT-fairness means that flows with similar packet drop rates but different round-trip times would receive roughly the same throughput.
Again TCP Reno is used as an example. Since different TCP sessions sharing the same bottleneck in K may traverse different routes outside of K, their end-to-end propagation RTTs could be different. Using the throughput model of TCP Reno, it is possible to find the ratio of the throughputs for two sessions i and j:
<maths id="MATHUS00027" num="00027"><math overflow="scroll"><mrow><mi>η</mi><mo>=</mo><mrow><mfrac><msubsup><mi>T</mi><mi>i</mi><mo>*</mo></msubsup><msubsup><mi>T</mi><mi>j</mi><mo>*</mo></msubsup></mfrac><mo>=</mo><mfrac><mrow><msub><mi>D</mi><mi>j</mi></msub><mo>+</mo><mi>q</mi></mrow><mrow><msub><mi>D</mi><mi>i</mi></msub><mo>+</mo><mi>q</mi></mrow></mfrac></mrow></mrow></math></maths>
If D<sub>i</sub>≠D<sub>j</sub>, the session with the longer propagation delay will have a lower throughput and will therefore be at a disadvantage. That leads to RTT-unfairness.
As shown in the last column of Table 1, among the listed algorithms, only TCPFIT and the delay-based TCP Vegas/FAST can achieve theoretical RTT-fairness.
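The Reno fairness ratio η above is easy to evaluate for concrete delays. A small sketch (names illustrative):

```python
def reno_fairness_ratio(d_i: float, d_j: float, q: float) -> float:
    """η = (D_j + q) / (D_i + q): the Reno throughput ratio T_i*/T_j*
    of two sessions sharing the same bottleneck (same P, p and q)."""
    return (d_j + q) / (d_i + q)

# With no queuing delay, a session with 4x the propagation delay of its
# competitor gets roughly a quarter of the throughput (η = 0.25);
# shared queuing delay dampens but does not remove the bias.
```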
Inter-fairness: Inter-fairness, or TCP-friendliness, refers to fairness between new TCP variants and the standard TCP congestion control algorithm, TCP Reno. The Bandwidth Stolen Rate (BSR) was used to quantify the inter-fairness of TCP congestion control algorithms. A total of k TCP Reno sessions are assumed to run over a link, and T is defined as the total throughput of the k sessions. Under the same network conditions, if m (m<k) of the Reno sessions are replaced with m sessions of a different TCP algorithm, and the total throughput of the m new sessions becomes T′, then the BSR is defined by:
<maths id="MATHUS00028" num="00028"><math overflow="scroll"><mrow><mi>BSR</mi><mo>=</mo><mfrac><mrow><mi>T</mi><mo>−</mo><msup><mi>T</mi><mi>′</mi></msup></mrow><mi>T</mi></mfrac></mrow></math></maths>
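The BSR metric itself is a one-line computation; a sketch with illustrative names:

```python
def bandwidth_stolen_rate(t: float, t_prime: float) -> float:
    """BSR = (T - T') / T, with T and T' the throughputs measured before
    and after the replacement as defined above. An ideally TCP-friendly
    variant yields a BSR of 0; a larger positive BSR indicates bandwidth
    taken away from the competing Reno flows."""
    return (t - t_prime) / t
```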
To get a low BSR, the number of in-flight packets that are queued in the bottleneck buffer for m TCPFIT sessions must not be significantly higher than that for m Reno sessions. Recall that for TCPFIT, the number of in-flight packets is
<maths id="MATHUS00029" num="00029"><math overflow="scroll"><mrow><mrow><mi>Q</mi><mo>=</mo><mrow><mi>α</mi><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mfrac><mi>Cwnd</mi><mi>N</mi></mfrac></mrow></mrow><mo>,</mo></mrow></math></maths>
this means that α should be set to the ratio between the number of in-flight packets that are queued in the network buffer and the value of the congestion window for a Reno session to achieve inter-fairness. In a practical implementation, it is possible to approximate this theoretical α value with (Max_RTT−Min_RTT)/Max_RTT, where Max_RTT and Min_RTT are the maximal and minimal end-to-end RTTs observed for the session. Experimental results show that both the throughput and the fairness of the TCPFIT implementation are good when this approximation is used.
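The practical α approximation, combined with the steady-state N* expression derived earlier, can be sketched as follows. The unit step used to move N toward N* is an illustrative choice, not a rule stated in the patent, and all names are assumptions:

```python
def estimate_alpha(max_rtt: float, min_rtt: float) -> float:
    """Practical approximation alpha = (Max_RTT - Min_RTT) / Max_RTT,
    using the largest and smallest RTTs observed so far for the session."""
    return (max_rtt - min_rtt) / max_rtt

def update_n(alpha: float, curr_rtt: float, base_rtt: float, n: float) -> float:
    """Nudge N toward its steady state N* = alpha*curr_rtt/(curr_rtt - base_rtt).
    The unit step is illustrative; a real implementation may converge differently."""
    q = curr_rtt - base_rtt
    if q <= 0:
        return n + 1  # no queuing observed: probe with more virtual connections
    n_star = alpha * curr_rtt / q
    if n < n_star:
        return n + 1
    if n > n_star:
        return max(1.0, n - 1)
    return n

# Example: RTTs observed between 100 ms and 150 ms give alpha = 1/3,
# and the steady-state N* at curr_rtt = 150 ms is then 1 virtual connection.
```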
Experimental Results

In experiments, TCPFIT was embedded into the Linux kernel (v2.6.31) of a server. For comparison, the Compound TCP for Linux implementation from CalTech was used, and the FAST TCP algorithm was implemented based on the NS2 code of FAST TCP on the same server. Other TCP variants are available in the Linux kernel v2.6.31, including:
 TCP Reno and TCP CUBIC;
 TCP Westwood (TCPW) and TCP Veno, as benchmarks of congestion control algorithms optimized for wireless links;
 High-speed TCP (HSTCP) and TCP Illinois, as benchmarks of congestion control algorithms optimized for high-BDP links.
Comparisons were conducted between different TCP algorithms using network emulators. In the experiments, a TCP server and a TCP client were connected through the emulator as depicted in FIG. 5, which injected random packet losses and delays in the connection and capped the network bandwidth to a selected value.
The performance of the different TCP algorithms for high-BDP networks was compared first. In these experiments, TCPFIT was compared with Reno, CUBIC and CTCP, as well as with HSTCP, Illinois and FAST, which were originally designed for high-BDP networks. The value of α″ of FAST was set to 200 for Gbps links and 20 for 100 Mbps links.
FIG. 7(a) shows the resulting throughput comparison for a 1 Gbps link. The propagation delay was set to 20 ms, and the packet loss rate was varied from 10<sup>−3 </sup>to 10<sup>−6 </sup>using a dummynet emulator. As can be seen from the figure, TCPFIT achieved similar throughput to FAST TCP, which was much higher than that of the other TCP algorithms. In FIG. 7(b), the bandwidth was set to 100 Mbps with a packet loss rate of 1% using a Linktropy hardware network emulator with a GUI as depicted in FIG. 6. The propagation delay was varied from 40 ms to 200 ms. Again, TCPFIT achieved higher throughput than the other TCP variants.
To compare the performance of the algorithms in the presence of wireless losses, TCPFIT was compared with Reno, CUBIC and CTCP, as well as with TCP Westwood and Veno, which were originally designed with wireless networks in mind. Although FAST was not originally designed specifically for wireless links, it was included in the tests using fine-tuned α″ values found by extensive trial and error. The link bandwidth was set to 4 Mbps on a Linktropy emulator. In FIGS. 8(a)-8(d), the packet loss rate was set from 1% to 7%. In each experiment, the propagation delay of the link was varied from 50 ms to 350 ms. To simulate the random delay fluctuation of a wireless link, Gaussian distributed delay noise was generated by the Linktropy emulator; the standard deviation of the delay noise was 50% of the mean RTT. As can be seen from FIGS. 8(a)-8(d), TCPFIT achieved much higher throughput than the other TCP congestion control algorithms, including FAST. Compared with FAST, which does not react to packet loss, TCPFIT was more robust to the random delay fluctuations of the wireless link in the experiments, since it uses delay to control N but does not use it to control Cwnd directly.
FIGS. 9(a) and 9(b) and FIGS. 10(a)-10(c) are the results from inter-fairness experiments. In FIG. 9(a), one connection of the TCP algorithm to be tested competed with four TCP Reno sessions. The combined bandwidth was set to 10 Mbps, with a propagation delay of 10 ms and no random packet losses. As shown in FIG. 9(a), both TCPFIT and Compound TCP occupied about 20% of the total bandwidth, i.e. the same as when the algorithm is replaced with a 5th Reno session, or the theoretical “fair share”. FAST TCP and TCP CUBIC, on the other hand, occupied up to 35% of the bandwidth, which means these algorithms might be overly aggressive and not inter-fair/TCP-friendly. Similar experiments were conducted for FIG. 9(b), with the total network bandwidth still set to 10 Mbps but with 5% packet loss and a Gaussian distributed propagation delay with a mean of 100 ms and an 80 ms standard deviation. In this case, as a result of the worsened network conditions, the TCP sessions combined were only able to use less than 40% of the network bandwidth. TCPFIT in this case was able to pick up some of the capacity that the other Reno sessions left unused. It is important to also note that the percentages (i.e. between 5% and 10%) of the bandwidth that the other Reno sessions used were still comparable to their respective numbers when 5 Reno sessions were used, indicating that the additional throughput that TCPFIT was able to “grab” did not come at the expense of the Reno sessions.
FIG. 10(a) shows the Bandwidth Stolen Rate. In accordance with the definition of BSR, in the experiments 20 Reno sessions were run over a 50 Mbps, 100 ms delay link, and then 10 of the Reno sessions were replaced with 10 sessions of a different TCP variant. The ideal value of BSR is of course 0. It is shown in FIG. 10(a) that the BSR of TCPFIT is a little higher than that of Compound TCP, which is well known to have extremely good inter-fairness characteristics, but much lower than that of TCP CUBIC. Compared with other wireless and high-speed network optimized algorithms, such as Veno and Illinois, the inter-fairness property of TCPFIT is at a reasonable level. BSRs for FAST and MulTCP under the same link conditions are shown in FIGS. 10(b) and 10(c). As shown in the figures, the inter-fairness of FAST and MulTCP depends on the selection of the parameters α and N.
Table 2 compares the RTT-fairness of the TCP algorithms. In the experiment, 20 TCP sessions were divided into two groups of 10 to compete for a 50 Mbps link. One of the groups had a shorter delay of 50 ms. The other group had longer delays that varied among 50 ms, 100 ms, 200 ms, and 400 ms. The packet loss rate of the bottleneck link was 0.1%. Table 2 summarizes the throughput ratios between sessions with different propagation delays. The ideal ratio is 1. As shown in Table 2, the RTT-fairness property of TCPFIT is similar to that of FAST TCP and CUBIC, and better than that of the other algorithms.
<tables id="TABLEUS00002" num="00002"><table frame="none" colsep="0" rowsep="0"><tgroup align="left" colsep="0" rowsep="0" cols="1"><colspec colname="1" colwidth="217pt" align="center"/><thead><row><entry namest="1" nameend="1" rowsep="1">TABLE 2</entry></row></thead><tbody valign="top"><row><entry namest="1" nameend="1" align="center" rowsep="1"/></row><row><entry>Throughput ratio with different propagation delay</entry></row></tbody></tgroup><tgroup align="left" colsep="0" rowsep="0" cols="6"><colspec colname="offset" colwidth="14pt" align="left"/><colspec colname="1" colwidth="56pt" align="left"/><colspec colname="2" colwidth="14pt" align="center"/><colspec colname="3" colwidth="56pt" align="center"/><colspec colname="4" colwidth="21pt" align="center"/><colspec colname="5" colwidth="56pt" align="center"/><tbody valign="top"><row><entry/><entry>RTT ratio</entry><entry>1</entry><entry>1/2</entry><entry>1/4</entry><entry>1/8</entry></row><row><entry/><entry namest="offset" nameend="5" align="center" rowsep="1"/></row></tbody></tgroup><tgroup align="left" colsep="0" rowsep="0" cols="6"><colspec colname="offset" colwidth="14pt" align="left"/><colspec colname="1" colwidth="56pt" align="left"/><colspec colname="2" colwidth="14pt" align="char" char="."/><colspec colname="3" colwidth="56pt" align="char" char="."/><colspec colname="4" colwidth="21pt" align="char" char="."/><colspec colname="5" colwidth="56pt" align="char" char="."/><tbody 
valign="top"><row><entry/><entry>TCPFit</entry><entry>1</entry><entry>1.17</entry><entry>1.55</entry><entry>1.9</entry></row><row><entry/><entry>FAST</entry><entry>1</entry><entry>1.03</entry><entry>1.45</entry><entry>1.93</entry></row><row><entry/><entry>CUBIC</entry><entry>1</entry><entry>1.34</entry><entry>1.64</entry><entry>2.14</entry></row><row><entry/><entry>CTCP</entry><entry>1</entry><entry>1.37</entry><entry>2.17</entry><entry>2.83</entry></row><row><entry/><entry>Reno</entry><entry>1</entry><entry>1.84</entry><entry>3</entry><entry>5.19</entry></row><row><entry/><entry namest="offset" nameend="5" align="center" rowsep="1"/></row></tbody></tgroup></table></tables>
Performance Measured Over Live Networks

To test the performance of TCPFIT over live, deployed, real-world networks, TCPFIT was implemented on an internet-connected server located in Orange, Calif., US. TCPFIT was tested using clients located in 5 different cities/towns in 4 countries (China, Switzerland, USA and India) on 3 continents. At each location, a combination of wired line connections, WiFi and, whenever possible, 3G wireless networks was tested. The location, network condition and OS of the clients are listed in Table 3. In each experiment, a script was used to automatically and periodically cycle through different TCP algorithms on the server over long durations of time (4-24 hours), while the client collected throughput information and other useful data. The period for changing the TCP algorithms was set to about 5-10 minutes, so that 1) the algorithms tested were able to reach close to steady-state performance; and 2) the period is consistent with the durations of a reasonably large percentage of TCP based sessions on the Internet (e.g. YouTube streaming of a single piece of content, synchronizing emails, refreshing one web page, etc.). In the performance measurements, TCPFIT was compared with CUBIC, CTCP and Reno in all cases; HSTCP and Illinois were also compared for the wired networks, and TCPW and Veno for wireless.
<tables id="TABLEUS00003" num="00003"><table frame="none" colsep="0" rowsep="0"><tgroup align="left" colsep="0" rowsep="0" cols="1"><colspec colname="1" colwidth="217pt" align="center"/><thead><row><entry namest="1" nameend="1" rowsep="1">TABLE 3</entry></row></thead><tbody valign="top"><row><entry namest="1" nameend="1" align="center" rowsep="1"/></row><row><entry>Experimental environments</entry></row></tbody></tgroup><tgroup align="left" colsep="0" rowsep="0" cols="5"><colspec colname="offset" colwidth="14pt" align="left"/><colspec colname="1" colwidth="49pt" align="left"/><colspec colname="2" colwidth="49pt" align="left"/><colspec colname="3" colwidth="56pt" align="left"/><colspec colname="4" colwidth="49pt" align="left"/><tbody valign="top"><row><entry/><entry>Figures</entry><entry>Location</entry><entry>Network</entry><entry>Client OS</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row><row><entry/><entry>FIG. 11(a)</entry><entry>Zurich</entry><entry>Ethernet</entry><entry>Win Vista</entry></row><row><entry/><entry>FIG. 11(b)</entry><entry>LA</entry><entry>Ethernet</entry><entry>Win Vista</entry></row><row><entry/><entry>FIG. 11(c)</entry><entry>Beijing</entry><entry>2M ADSL</entry><entry>Win XP</entry></row><row><entry/><entry>FIG. 11(d)</entry><entry>Zurich</entry><entry>WIFI</entry><entry>MAC OS</entry></row><row><entry/><entry>FIG. 11(e)</entry><entry>LA</entry><entry>WIFI</entry><entry>Win Vista</entry></row><row><entry/><entry>FIG. 11(f)</entry><entry>Beijing</entry><entry>WIFI</entry><entry>Linux</entry></row><row><entry/><entry>FIG. 11(g)</entry><entry>Beijing</entry><entry>CDMA 2000</entry><entry>Win XP</entry></row><row><entry/><entry>FIG. 11(h)</entry><entry>Fujian</entry><entry>CDMA 2000</entry><entry>Win Vista</entry></row><row><entry/><entry>FIG. 
11(i)</entry><entry>Bangalore</entry><entry>512k ADSL</entry><entry>Win Vista</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row></tbody></tgroup></table></tables>
The results are summarized in Table 4 and FIGS. 11(a)-11(i). Throughout the experiments, TCP-FIT achieved speedup ratios of up to 5.8× compared with the average throughput of the other TCP algorithms.
TABLE 4
Average throughput (Mbit/s)

  Network     Wired Network         WiFi                  CDMA 2000
  Location    ZRH    LAX    PEK     ZRH    LAX    PEK     City   Town
  TCP-FIT     8.9    86.5   0.77    1.9    10.3   3.4     1.3    1.4
  CUBIC       2.1    29.5   0.26    0.9    3.5    0.75    0.8    0.4
  CTCP        1.4    23.9   0.22    0.7    3.5    0.6     0.7    0.6
  Reno        1.2    22.1   0.22    0.7    4.5    0.58    0.7    0.6
  TCPW        x      x      x       0.6    4.1    0.49    0.5    0.4
  Veno        x      x      x       0.6    3.8    0.55    0.6    0.5
  Illinois    2.1    32.8   0.22    x      x      x       x      x
  HSTCP       1.0    16.7   0.2     x      x      x       x      x
  Speedup     5.8    3.5    3.4     2.7    2.6    5.7     2.0    2.8
  BSR         0.12   0.07   0.19    0.07   0.09   0.05    0      0

  (x: not measured for that network type)
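The Speedup row in Table 4 appears to be TCP-FIT's throughput divided by the mean throughput of the other algorithms measured on the same link; this is an inference from the table values rather than a formula stated in the text. A minimal check against the LAX wired column:

```python
def speedup(fit_mbps, others_mbps):
    """Throughput of TCP-FIT over the mean throughput of the other
    algorithms tested on the same link (inferred, not stated in the text)."""
    return fit_mbps / (sum(others_mbps) / len(others_mbps))

# LAX wired column of Table 4 (CUBIC, CTCP, Reno, Illinois, HSTCP):
ratio = speedup(86.5, [29.5, 23.9, 22.1, 32.8, 16.7])
# ratio is about 3.46, consistent with the 3.5 in the Speedup row
```

The PEK wired and ZRH WiFi columns reproduce their Speedup entries (3.4 and 2.7) the same way.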
The throughput of TCP-FIT was not always higher than that of the other TCP algorithms. In FIG. 11(i), where clients accessed the server through a low-speed ADSL network with large bandwidth variations, TCP-FIT had no obvious advantage over the other TCP variants during a 4-hour period. This was probably because, compared with TCP algorithms such as FAST that use very advanced network condition estimation algorithms, the corresponding modules in TCP-FIT were relatively simplistic. For networks with large variations, this might lead to performance degradation for TCP-FIT, since the settings of key algorithm parameters such as Q depend on such estimates. The advanced estimation algorithms of FAST could be incorporated into TCP-FIT.
To confirm that the performance gains of TCP-FIT did not come from grabbing bandwidth from other TCP sessions, the Bandwidth Stolen Rate (BSR) of the different TCP variants was also measured. The results are listed in the last row of Table 4. As can be seen from the table, the BSR for TCP-FIT remained low.
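This passage does not define the Bandwidth Stolen Rate. One plausible definition (an assumption here, not a formula from the text) is the fraction of throughput that competing standard-TCP flows lose when the algorithm under test shares the bottleneck:

```python
def bandwidth_stolen_rate(baseline_mbps, with_test_algo_mbps):
    """Fraction of throughput that competing standard TCP flows lose
    when sharing the bottleneck with the algorithm under test.
    (Assumed definition; the text does not give the formula.)"""
    return (baseline_mbps - with_test_algo_mbps) / baseline_mbps

# Hypothetical numbers: if Reno flows averaged 1.0 Mbit/s against other
# Reno flows but 0.88 Mbit/s against the tested algorithm, BSR = 0.12.
```

Under this reading, the BSR values of 0.05-0.19 in Table 4 would mean competing flows lost at most about a fifth of their throughput.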
Finally, FIG. 12 shows a comparison between TCP-FIT and E-MulTCP as measured over China Telecom's 3G network in Beijing. Although, strictly speaking, E-MulTCP is not a TCP congestion control algorithm per se, TCP-FIT was still able to achieve an average improvement of about 27%.
It is contemplated that any or all of these hardware and software components could be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, while the foregoing describes example methods and systems, persons having ordinary skill in the art will readily appreciate that the examples provided are not the only ways to implement such methods and apparatus.
It will be apparent to one skilled in the art that numerous modifications and departures from the specific embodiments described herein may be made without departing from the spirit and scope of the present invention.