Available Bandwidth Measurement Status Report

It was observed that after turning on the constant packet traffic load, the sending and receiving timing patterns were affected by the load. The distorted receiving timing patterns severely impact the accuracy and capability of the measurement techniques. We have carried out several experiments on our testbed of four notebooks:

    gand:    PII 366, 128MB, 10/100Mbps Linksys PC card. It runs tgs to generate UDP constant packet traffic.
    winrace: PII 333, 128MB, 10Mbps 3Com PC card. It runs tgr to receive the UDP constant packet traffic and runs tfork to report the traffic load.
    wallop:  PII 233, 128MB, 100Mbps Linksys PC card. It responds to ICMP requests.
    viva:    P150, 48MB, 10Mbps Linksys PC card. It runs abwm3 to measure the available bandwidth and uses tcpdump to capture packet timing patterns.

They are all connected through a 10Mbps Linksys hub. The steps involved in an experiment with a 6Mbps traffic load are as follows.

1. On winrace, run "tgr". It reports that it is using port 1026 for receiving the UDP constant packet traffic.

2. On winrace, run "tfork". It starts to report the average traffic load every 2 seconds.

3. On gand, run "tgs -b 6000000 winrace 1026" to start a 6Mbps constant packet stream.

4. On winrace, tfork shows

    fork time=2000000 usec, totalBytes=1494734.000000, traffic=5978.936000 kbps
    fork time=2000000 usec, totalBytes=1494734.000000, traffic=5978.936000 kbps
    fork time=2000014 usec, totalBytes=1495780.000000, traffic=5983.078118 kbps

which confirms the traffic load (1494734 bytes * 8 bits / 2000000 usec = 5978.936 kbps).

5. On viva, run the "./td.scr" script, which is actually "tcpdump -n -s 100 -w capture.dat host viva". It starts listening on eth0 for incoming or outgoing traffic with viva as the source or destination IP address.

6. On viva, in a different window, run "abwm3 -l 1000 -u 4000000 wallop > A4e61e3T6M.txt". We use -u 4000000 to specify the upper bandwidth limit, since we know the available bandwidth will not exceed 4Mbps.

7. On viva, stop tcpdump by hitting Control-C, then run "probetimegap.pl > A4e61e3T6M.pcap" to generate the actual packet timing data:

    departTimeGap[0]=15450 arriveTimeGap[0]=15424
    departTimeGap[1]=9805 arriveTimeGap[1]=9812
    departTimeGap[2]=8356 arriveTimeGap[2]=10724
    departTimeGap[3]=1997 arriveTimeGap[3]=3688
    departTimeGap[4]=2913 arriveTimeGap[4]=6600
    departTimeGap[5]=16310 arriveTimeGap[5]=10823
    departTimeGap[6]=1921 arriveTimeGap[6]=2240
    departTimeGap[7]=2976 arriveTimeGap[7]=1803
    departTimeGap[8]=12968 arriveTimeGap[8]=7459

8. On viva, use "vi A4e61e3T6M.txt" to see the abwm3 output. It indicates

    Actual depart time gap[1]=17952, receivingTimeGap[1]=14453
    Actual depart time gap[2]=9648, receivingTimeGap[2]=9850
    Actual depart time gap[3]=7526, receivingTimeGap[3]=17007
    Actual depart time gap[4]=2803, receivingTimeGap[4]=3996
    Actual depart time gap[5]=5102, receivingTimeGap[5]=172
    Actual depart time gap[6]=6945, receivingTimeGap[6]=17523
    Actual depart time gap[7]=7702, receivingTimeGap[7]=1064
    Actual depart time gap[8]=5105, receivingTimeGap[8]=188
    Actual depart time gap[9]=10169, receivingTimeGap[9]=1493

Gaps [1, 5, 7, 8, 9] are removed since their receiving time gaps are smaller than their depart time gaps (a minimal sketch of this filtering rule is given below).
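To make the selection step concrete, here is a minimal Python sketch of the filtering rule stated in step 8: a probe gap is kept only when its receiving time gap is at least as large as its actual depart time gap. This is an illustration of the rule as reported, not abwm3's actual code; the values are copied from the output above.

    # Minimal sketch of the gap-filtering rule reported in step 8.
    # Illustration only, not abwm3's implementation.

    # Values copied from the abwm3 output above, in microseconds (index 0 unused).
    depart_gap = [None, 17952, 9648, 7526, 2803, 5102, 6945, 7702, 5105, 10169]
    recv_gap   = [None, 14453, 9850, 17007, 3996, 172, 17523, 1064, 188, 1493]

    kept, removed = [], []
    for i in range(1, len(depart_gap)):
        # Rule from the report: drop the gap when the receiving time gap
        # is smaller than the actual depart time gap.
        if recv_gap[i] < depart_gap[i]:
            removed.append(i)
        else:
            kept.append(i)

    print("removed:", removed)   # [1, 5, 7, 8, 9], matching the report
    print("kept:   ", kept)      # [2, 3, 4, 6]

Only four of the nine gaps survive the filter, which is consistent with the observation below that very few samples are left for the estimate.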
The abwm3 output also shows the difference between the calculated sendingTimeGap (in seconds) and the actual depart time gaps:

    Actual depart time gap[1]=17899, sendingTimeGap[1]=0.017964
    Actual depart time gap[2]=8992, sendingTimeGap[2]=0.008992
    Actual depart time gap[3]=8380, sendingTimeGap[3]=0.005997
    Actual depart time gap[4]=2798, sendingTimeGap[4]=0.004499
    Actual depart time gap[5]=2914, sendingTimeGap[5]=0.003599
    Actual depart time gap[6]=15446, sendingTimeGap[6]=0.003000
    Actual depart time gap[7]=2790, sendingTimeGap[7]=0.002571
    Actual depart time gap[8]=2975, sendingTimeGap[8]=0.002250
    Actual depart time gap[9]=13009, sendingTimeGap[9]=0.002000

This leaves very few samples for estimating the available bandwidth, and the estimate did not point to the right bandwidth range.

For comparison, I turned off the traffic load and ran "abwm3 -l 1000 -u 4000000 192.168.0.1 > A4e61e3T0M.txt". We get

    Actual depart time gap[1]=17950, receivingTimeGap[1]=256
    Actual depart time gap[2]=8991, receivingTimeGap[2]=166
    Actual depart time gap[3]=5997, receivingTimeGap[3]=166
    Actual depart time gap[4]=4882, receivingTimeGap[4]=180
    Actual depart time gap[5]=3217, receivingTimeGap[5]=165
    Actual depart time gap[6]=2998, receivingTimeGap[6]=163
    Actual depart time gap[7]=2571, receivingTimeGap[7]=165
    Actual depart time gap[8]=2250, receivingTimeGap[8]=166
    Actual depart time gap[9]=2000, receivingTimeGap[9]=92

Here we get an almost perfect departing pattern since there is no traffic interference, but we still see the buffering effect. The tcpdump capture shows

    departTimeGap[0]=17689 arriveTimeGap[0]=16848
    departTimeGap[1]=8980 arriveTimeGap[1]=8986
    departTimeGap[2]=5996 arriveTimeGap[2]=5891
    departTimeGap[3]=4931 arriveTimeGap[3]=5158
    departTimeGap[4]=3161 arriveTimeGap[4]=3322
    departTimeGap[5]=3006 arriveTimeGap[5]=4334
    departTimeGap[6]=2569 arriveTimeGap[6]=1599
    departTimeGap[7]=2243 arriveTimeGap[7]=1657
    departTimeGap[8]=1943 arriveTimeGap[8]=114

Time gap[5] may be incorrectly considered as a diverging point. When the traffic load increases to 9Mbps, we start to lose some return messages.
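For reference, the probetimegap.pl script itself is not included in this report. The Python sketch below only illustrates one plausible way to derive departTimeGap/arriveTimeGap values like those shown above from the capture file. It assumes the capture is replayed as text with "tcpdump -n -tt -r capture.dat", that packets whose source address is viva's give the departure times and packets whose destination address is viva's give the arrival times, and it uses a placeholder VIVA_IP because the report does not list viva's address.

    #!/usr/bin/env python3
    # Hypothetical reconstruction of a probetimegap.pl-style gap computation.
    # Assumptions (not from the report): the capture is replayed as text with
    # "tcpdump -n -tt -r capture.dat", and VIVA_IP is viva's address.
    import subprocess

    VIVA_IP = "192.168.0.4"   # placeholder; the report does not give viva's IP


    def is_viva(addr):
        # addr is either "a.b.c.d" (ICMP) or "a.b.c.d.port" (UDP/TCP)
        return addr == VIVA_IP or addr.startswith(VIVA_IP + ".")


    def read_timestamps(capture="capture.dat"):
        """Replay the capture as text and split packet timestamps by direction."""
        out = subprocess.run(["tcpdump", "-n", "-tt", "-r", capture],
                             capture_output=True, text=True, check=True).stdout
        departs, arrives = [], []
        for line in out.splitlines():
            fields = line.split()
            if ">" not in fields:
                continue
            try:
                ts = float(fields[0])     # -tt prints seconds since the epoch
            except ValueError:
                continue
            k = fields.index(">")
            if k == 0 or k + 1 >= len(fields):
                continue
            src = fields[k - 1]
            dst = fields[k + 1].rstrip(":")
            if is_viva(src):              # packet sent by viva
                departs.append(ts)
            elif is_viva(dst):            # packet received by viva
                arrives.append(ts)
        return departs, arrives


    def gaps_usec(times):
        """Consecutive time differences in microseconds."""
        return [round((b - a) * 1e6) for a, b in zip(times, times[1:])]


    if __name__ == "__main__":
        departs, arrives = read_timestamps()
        for i, (d, a) in enumerate(zip(gaps_usec(departs), gaps_usec(arrives))):
            print(f"departTimeGap[{i}]={d} arriveTimeGap[{i}]={a}")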