Nadine Sundquist
CS 526
Homework 3

  • Configuration

Name                  IP Address       Benchmark
r103.csnet.uccs.edu   128.198.61.56
r98.csnet.uccs.edu    128.198.61.98    573.9 req/s
r105.csnet.uccs.edu   128.198.61.105   1270.6 req/s
r108.csnet.uccs.edu   128.198.61.108   1301.2 req/s

  • PART 1

** HTTP Web Access

I used Firefox for this test. To make sure I was not pulling pages from the browser cache, I cleared Firefox's cache before each request (Ctrl-Shift-Del, then OK). I was working over a very slow SSH connection, which may have made my ipvsadm findings less accurate: by the time ipvsadm reported a connection, the HTTP request had already completed, so the connections were always listed as inactive rather than active.

For this part of the test, I set the weights as follows (a sketch of the ipvsadm commands that would produce this setup appears after the list):

r98.csnet.uccs.edu 1
r105.csnet.uccs.edu 2
r108.csnet.uccs.edu 2
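
The exact setup commands are not part of this report, but the following is a minimal sketch (assuming ipvsadm is run as root on the director, and direct routing, which matches the "Route" forwarding method in the listings below) of how the wrr HTTP service with these weights could be configured:

import subprocess

VIP = "128.198.61.56"          # virtual IP served by the director
HTTP_REAL_SERVERS = {          # real server IP -> weight, as listed above
    "128.198.61.98": 1,
    "128.198.61.105": 2,
    "128.198.61.108": 2,
}

# Create the virtual HTTP service with the weighted round-robin scheduler.
subprocess.run(["ipvsadm", "-A", "-t", f"{VIP}:80", "-s", "wrr"], check=True)

# Add each real server in direct-routing (gateway) mode with its weight.
for rip, weight in HTTP_REAL_SERVERS.items():
    subprocess.run(
        ["ipvsadm", "-a", "-t", f"{VIP}:80", "-r", f"{rip}:80", "-g", "-w", str(weight)],
        check=True,
    )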

The following shows, for each of 10 requests, which real server answered, followed by the ipvsadm listing taken after that request (the three numbers on each real-server line are the weight, active connections, and inactive connections):


r108

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 1
-> 128.198.61.105:www Route 2 0 0
-> 128.198.61.98:www Route 1 0 0

r105

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 1
-> 128.198.61.105:www Route 2 0 1
-> 128.198.61.98:www Route 1 0 0

r98

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 0
-> 128.198.61.105:www Route 2 0 1
-> 128.198.61.98:www Route 1 0 1

r105

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 0
-> 128.198.61.105:www Route 2 0 1
-> 128.198.61.98:www Route 1 0 1

r108

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 1
-> 128.198.61.105:www Route 2 0 0
-> 128.198.61.98:www Route 1 0 0

r105

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 0
-> 128.198.61.105:www Route 2 0 1
-> 128.198.61.98:www Route 1 0 0

r98

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 0
-> 128.198.61.105:www Route 2 0 1
-> 128.198.61.98:www Route 1 0 1

r108

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 1
-> 128.198.61.105:www Route 2 0 0
-> 128.198.61.98:www Route 1 0 1

r105

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 0
-> 128.198.61.105:www Route 2 0 1
-> 128.198.61.98:www Route 1 0 0

r108

TCP 128.198.61.56:www wrr
-> 128.198.61.108:www Route 2 0 1
-> 128.198.61.105:www Route 2 0 0
-> 128.198.61.98:www Route 1 0 0

Observation: The number of times each real server was chosen is proportional to the weight ratio. Out of 10 requests, r108 (weight 2) was chosen 4 times, r105 (weight 2) was chosen 4 times, and r98 (weight 1) was chosen 2 times. The round-robin ordering itself was not as predictable as I expected: the two servers with weight 2 would be chosen, in alternating order, three times, and on the fourth request r98 with weight 1 would be chosen. No real server was chosen twice in a row. I always left plenty of time between calls to the server because ipvsadm was reporting at such a slow rate. As a whole, the results made sense.
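
For reference, the following is a simplified sketch of the weighted round-robin selection rule as it is commonly described for LVS (not the kernel code itself); with weights 2, 2, and 1 it produces the roughly interleaved 2:2:1 pattern observed above:

from math import gcd
from functools import reduce

def wrr_schedule(servers, weights, n_requests):
    """Simplified weighted round-robin: cycle through the servers, lowering a
    current-weight threshold by gcd(weights) on each full pass, and pick any
    server whose weight is at least the current threshold."""
    g = reduce(gcd, weights)
    max_w = max(weights)
    i, cw = -1, 0
    chosen = []
    while len(chosen) < n_requests:
        i = (i + 1) % len(servers)
        if i == 0:
            cw -= g
            if cw <= 0:
                cw = max_w
        if weights[i] >= cw:
            chosen.append(servers[i])
    return chosen

# Weights from this test: r108 = 2, r105 = 2, r98 = 1.
print(wrr_schedule(["r108", "r105", "r98"], [2, 2, 1], 10))
# -> r108 and r105 are each chosen 4 times, r98 twice, never twice in a row.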


** SSH Service

The following shows the series of SSH requests I made to the VIP. For the first SSH request, I disconnected right away because I wanted to see how ipvsadm reported it; the entry changed from an active connection to an inactive connection as soon as I closed the SSH session. For all of the later requests, I kept the SSH connections open because I wanted to see what would happen to the active-connection counts as they accumulated.

For this part of the test, I set the weights as follows (a sketch of the corresponding ipvsadm wlc setup appears after the list):

r98.csnet.uccs.edu 1
r105.csnet.uccs.edu 2
r108.csnet.uccs.edu 2
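
Again, the exact commands are not included here; a minimal sketch (same assumptions as the HTTP sketch above) of setting up the SSH service with the wlc scheduler might look like this:

import subprocess

VIP = "128.198.61.56"
SSH_REAL_SERVERS = {           # real server IP -> weight, as listed above
    "128.198.61.98": 1,
    "128.198.61.105": 2,
    "128.198.61.108": 2,
}

# Virtual SSH service scheduled with weighted least-connection.
subprocess.run(["ipvsadm", "-A", "-t", f"{VIP}:22", "-s", "wlc"], check=True)

for rip, weight in SSH_REAL_SERVERS.items():
    subprocess.run(
        ["ipvsadm", "-a", "-t", f"{VIP}:22", "-r", f"{rip}:22", "-g", "-w", str(weight)],
        check=True,
    )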

The following shows, for each request, which real server was reached, followed by the ipvsadm output at that point (again, the three numbers per line are the weight, active connections, and inactive connections):


r108

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 1 0
-> 128.198.61.105:ssh Route 2 0 0
-> 128.198.61.98:ssh Route 1 0 0

r105 (NOTE: The r108 connection is inactive because I closed the SSH connection.)

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 0 1
-> 128.198.61.105:ssh Route 2 1 0
-> 128.198.61.98:ssh Route 1 0 0

r108

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 1 0
-> 128.198.61.105:ssh Route 2 1 0
-> 128.198.61.98:ssh Route 1 0 0

r98

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 1 0
-> 128.198.61.105:ssh Route 2 1 0
-> 128.198.61.98:ssh Route 1 1 0

r108

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 2 0
-> 128.198.61.105:ssh Route 2 1 0
-> 128.198.61.98:ssh Route 1 1 0

r105

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 2 0
-> 128.198.61.105:ssh Route 2 2 0
-> 128.198.61.98:ssh Route 1 1 0

r108

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 3 0
-> 128.198.61.105:ssh Route 2 2 0
-> 128.198.61.98:ssh Route 1 1 0

r105

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 3 0
-> 128.198.61.105:ssh Route 2 3 0
-> 128.198.61.98:ssh Route 1 1 0

r98

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 3 0
-> 128.198.61.105:ssh Route 2 3 0
-> 128.198.61.98:ssh Route 1 2 0

r108

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 4 0
-> 128.198.61.105:ssh Route 2 3 0
-> 128.198.61.98:ssh Route 1 2 0

r105

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 3 0
-> 128.198.61.105:ssh Route 2 4 0
-> 128.198.61.98:ssh Route 1 2 0

r98

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 3 0
-> 128.198.61.105:ssh Route 2 4 0
-> 128.198.61.98:ssh Route 1 2 0

r108

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 3 0
-> 128.198.61.105:ssh Route 2 4 0
-> 128.198.61.98:ssh Route 1 2 0

r105

TCP 128.198.61.56:ssh wlc
-> 128.198.61.108:ssh Route 2 3 0
-> 128.198.61.105:ssh Route 2 4 0
-> 128.198.61.98:ssh Route 1 2 0

I tried more than 10 connections because I was interested in whether the system would keep track of all of them. Once I went past a certain number of connections, ipvsadm appeared to stop counting: the additional SSH connections stayed up, but the active-connection counters no longer changed. The real server that the director chose was still predictable, but the ipvsadm table seemed to become stagnant.

When it comes to wlc balancing, I found the algorithm very predictable. The director would alternate between the two higher-weighted servers first; once their connection counts caught up to their weights, it would send the next connection to the lower-weighted server. Once all of the servers were loaded in proportion to their weights, the cycle would start over. This matched what I expected.
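
The following is a simplified sketch of that selection rule: pick the real server with the smallest ratio of active connections to weight (the actual LVS scheduler also factors in inactive connections, but the idea is the same). Simulating the SSH test, where every connection stays open, produces the same kind of 2:2:1 cycling seen above:

def wlc_pick(servers):
    """Pick the real server with the smallest active-connections/weight ratio.
    `servers` maps name -> {"weight": ..., "active": ...}."""
    return min(servers, key=lambda s: servers[s]["active"] / servers[s]["weight"])

servers = {
    "r108": {"weight": 2, "active": 0},
    "r105": {"weight": 2, "active": 0},
    "r98":  {"weight": 1, "active": 0},
}

order = []
for _ in range(10):
    s = wlc_pick(servers)
    servers[s]["active"] += 1   # the connection is kept open, as in the test
    order.append(s)

print(order)
# With ties broken by insertion order:
# r108, r105, r98, r108, r105, r108, r105, r98, r108, r105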


** Benchmark

I tried several different benchmark variations in order to produce the best results. To compare against the benchmarks I originally took of each individual real server (listed in the configuration table at the top), I used requests per second as the metric.
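
The report does not specify which benchmarking tool produced the req/s figures; as a rough, hypothetical illustration of how requests per second against the VIP can be measured, a small concurrent client like the following would do (the URL, request count, and concurrency here are assumptions, not the actual test parameters):

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

VIP_URL = "http://128.198.61.56/"   # virtual service address from Part 1
N_REQUESTS = 2000                   # arbitrary request count
CONCURRENCY = 50                    # arbitrary number of parallel clients

def fetch(_):
    # Read and discard the response body so the request fully completes.
    with urlopen(VIP_URL) as resp:
        resp.read()

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    list(pool.map(fetch, range(N_REQUESTS)))
elapsed = time.time() - start

print(f"{N_REQUESTS / elapsed:.1f} req/s")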

For the first benchmark, I kept my original configuration, wrr for HTTP and wlc for SSH, with the following weights:

Server Weight
r108 2
r105 2
r98 1

I got a result of 1298.4 req/s, which was the best of all my benchmarks. I think this configuration did best because I based the weights on my original benchmarks of the individual real servers.

For the second benchmark, I used a weight of 1 on all of the real servers. This gave me 490.4 req/s, my worst result. When the real servers are not weighted according to their capacity, the faster servers are under-utilized and overall performance suffers.

For the third benchmark, I compared both wlc and wrr with the exact same weights applied to each of the real servers. The weights were as follows:

Server Weight
r108 3
r105 2
r98 1

For wrr, I got 1294.0 req/s, while wlc only processed 566.7 req/s. From this benchmark, I would conclude that wrr performs better than wlc for this kind of workload.

For the last benchmark, I decided to weight the servers in the opposite order: the worst server got the highest weight and the best server the lowest. The weights were as follows:

Server Weight
r108 1
r105 2
r98 3

I was surprised to see that the results were not as bad as I expected. I think the middle server was able to compensate for the slower real server, since two of the servers were performing at about the same fast rate.


  • PART 2

** Question A: In what cases does wlc perform better than wrr? Describe a simple case that highlights that.

wlc checks how many connections each real server currently has open before it forwards a new request; the server with the fewest connections relative to its weight receives the next request. When jobs are long-lived or their sizes are unpredictable, some servers end up holding many more open connections than others. wrr keeps cycling through its fixed schedule without considering this, while wlc routes new requests away from the overloaded servers. In that case, wlc is better than wrr. A simple case: if one real server is stuck serving several long downloads, wrr will still send it its full share of new requests, while wlc will send them to the idle servers instead.
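
The following small, self-contained simulation (hypothetical durations, two equal-weight servers for simplicity) illustrates this: long-running jobs pile up on one server under plain round-robin, while least-connection keeps the peak load on the two servers even:

def simulate(pick, durations):
    """Dispatch one request per tick to one of two servers; return the peak
    number of simultaneously open connections seen on each server."""
    open_conns = [[], []]            # per-server list of connection finish times
    peak = [0, 0]
    for t, d in enumerate(durations):
        for conns in open_conns:     # drop connections that have finished
            conns[:] = [f for f in conns if f > t]
        s = pick(t, open_conns)
        open_conns[s].append(t + d)
        peak[s] = max(peak[s], len(open_conns[s]))
    return peak

# Every 4th request is a long job (12 ticks); the rest finish in 1 tick.
durations = [12 if i % 4 == 0 else 1 for i in range(40)]

rr  = lambda t, conns: t % 2                                      # plain round-robin
wlc = lambda t, conns: min((0, 1), key=lambda s: len(conns[s]))   # least connections

print("round-robin peak open connections:", simulate(rr, durations))
print("least-connection peak open connections:", simulate(wlc, durations))
# Round-robin keeps sending new work to the server holding the long jobs,
# so its peak load is higher; least-connection keeps the peaks even.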

** Question B: Why is LVS-NAT slower than LVS-DR?

In LVS-NAT, all traffic, both inbound requests and outbound replies, passes through the director, so the director becomes a bottleneck: every packet has to be rewritten by the director, and the network address translation itself takes time. In LVS-DR, the director only forwards inbound requests; the real servers send their replies directly to the end user. That makes LVS-DR better suited to a high-throughput system, since the (usually much larger) outbound traffic never touches the director.

** Question C: When is LVS-Tunnel performance better than LVS-DR? What additional information does the director need to know to allocate requests to a "better" real server?

LVS-TUN has the same advantages as LVS-DR: outbound traffic goes directly to the end user, and no special hardware or address translation is needed. In LVS-DR, however, the director and real servers must be on the same network segment; this is not the case for LVS-TUN. By using an IP tunnel, each server can realize its full speed, and the real servers can be located in completely different geographic areas. To pick the "best" real server, the director would additionally need to take into account that some servers are closer than others, that is, whether the extra geographic distance (and network latency) to a server would increase the time it takes to service a request.
