The results of the simulations are presented in the following graphs.
Each point on a graph represents a simulation run. The points have
been joined with (straight) lines.
As discussed in section 2, the bandwidth-delay product of
the network, combined with the 32KB maximum window size, limits the
throughput that can be obtained by a single TCP connection. A
simulation of a single TCP connection over the asymmetric satellite link
is shown in figure 4. The curve is asymptotic to about
700kbps. This corresponds closely to the calculated maximum of 690kbps,
suggesting that, for this case at least, the asymmetric
link behaves in the same way as a symmetric link with the same
forward capacity.
Figure 4: Link usage against load for a single TCP connection
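The window-limited ceiling quoted above can be checked with a short calculation: TCP can have at most one full window in flight per round trip. The 380ms RTT below is an assumption (typical of a geostationary satellite hop, and consistent with the 690kbps figure); only the 32KB window comes from the text.

```python
# Window-limited TCP throughput: throughput <= window / RTT.
WINDOW_BYTES = 32 * 1024   # maximum TCP window size (from the text)
RTT_SECONDS = 0.38         # assumed RTT for a geostationary satellite hop

throughput_bps = WINDOW_BYTES * 8 / RTT_SECONDS
print(f"window-limited throughput: {throughput_bps / 1000:.0f} kbps")
```

With these values the ceiling comes out at roughly 690kbps, matching the calculated maximum in the text.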
In figure 5(a) the load carried to NZ is plotted against the total
NZ-bound traffic presented to the US proxy or router. The presented
load already includes the TCP headers required to deliver it to the US
proxy or router from the HTTP server.
Although each TCP connection is limited to about 700kbps, there are many
concurrent TCP connections in the no-US-proxy case (see
figure 7(a)). This allows the link to saturate under high
loads. The slope of the line through most of the graph is about one,
indicating that very few retransmissions occur. The graph does not
tail off until the link is within 0.5% of being saturated [3].
Figure 5(b) shows that page latencies increase dramatically at this
point.
If there are a large number of concurrent connections between the
caches, the US-proxy case performs very similarly to the
no-US-proxy case. The lines for 50 and 70 connections have a slightly
smaller slope than the no-US-proxy case, indicating a small efficiency
gain from repackaging the load onto the more heavily used TCP
connections. Examination of figure 7(a) shows that up to
30 concurrent HTTP requests are carried over a single
inter-cache TCP connection. The efficiency gain is small and is
probably not a significant saving.
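The scale of the repackaging gain can be illustrated with a back-of-the-envelope sketch: multiplexing eliminates the partial final segment each separate reply would otherwise send. The reply size, segment size, and header size below are illustrative assumptions; only the figure of up to 30 multiplexed requests comes from the text.

```python
import math

N_REPLIES = 30         # HTTP replies multiplexed on one inter-cache connection
REPLY_BYTES = 6000     # assumed size of a typical HTTP reply
MSS = 1460             # assumed TCP maximum segment size
HDR = 40               # TCP/IP header bytes per segment

# One connection per reply: each reply is segmented independently, so
# its final partial segment still pays a full header.
separate = sum(REPLY_BYTES + math.ceil(REPLY_BYTES / MSS) * HDR
               for _ in range(N_REPLIES))

# Replies coalesced onto one busy connection: the byte stream is packed
# into full segments, so partial segments are largely eliminated.
payload = N_REPLIES * REPLY_BYTES
coalesced = payload + math.ceil(payload / MSS) * HDR

print(f"overhead saving: {100 * (separate - coalesced) / separate:.2f}%")
```

Under these assumptions the saving is a fraction of one percent, which is consistent with the small slope difference observed in the graphs.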
For smaller numbers of connections (35 and below) the link does not
reach saturation. Instead, the TCP connections reach their own saturation
point and limit the flow of packets onto the international link.
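The interaction between the per-connection cap and link saturation can be modelled crudely as the minimum of the two limits. The link capacity below is a hypothetical value chosen for illustration; only the roughly 700kbps per-connection limit comes from the text.

```python
import math

PER_CONN_BPS = 700_000     # window-limited per-connection throughput (from text)
LINK_BPS = 20_000_000      # hypothetical forward link capacity

def aggregate_bps(n_connections):
    """Idealised aggregate throughput: each connection is capped by its
    window, and the shared link is the only other bottleneck."""
    return min(n_connections * PER_CONN_BPS, LINK_BPS)

n_saturate = math.ceil(LINK_BPS / PER_CONN_BPS)
print(f"connections needed to saturate: {n_saturate}")
print(f"aggregate with 10 connections: {aggregate_bps(10) / 1e6:.1f} Mbps")
```

Below the saturating number of connections, aggregate throughput grows linearly with the connection count, which is the behaviour the curves for 35 and fewer connections show.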
Perhaps the most interesting result of the study is shown in
figure 5(b). The graph shows the average time required to
fetch a set of sample pages that were present in all simulations. The
result for the US-proxy case, with a large number of connections, is
around 25% lower (1s per HTTP request) than for the no-US-proxy
case. This results from the reuse of the international TCP connections,
which saves most of the cost of slow start. The saving for an HTML page
with multiple components may be even greater. A closer examination of
the start of the curves (see figure 7(b)) shows that for
very low loads the gain is smaller. This is because the initial
slow start on the international TCP connections is not amortized over
as many HTTP requests and consequently has a larger effect on the
average latency.
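The slow-start saving can be sketched with a simple round-trip-counting model. The RTT, segment size, and page size below are assumptions, and the model ignores the bandwidth cap and delayed ACKs, so it indicates only the order of magnitude of the roughly 1s saving.

```python
import math

RTT = 0.38     # assumed satellite round-trip time (s)
MSS = 1460     # assumed TCP segment size (bytes)

def slow_start_rtts(page_bytes):
    """Round trips to deliver page_bytes when cwnd starts at 1 MSS and
    doubles each RTT (bandwidth cap and delayed ACKs ignored)."""
    segments = math.ceil(page_bytes / MSS)
    sent, cwnd, rtts = 0, 1, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

page = 10 * 1024                             # hypothetical page size
cold = (1 + slow_start_rtts(page)) * RTT     # new connection: setup + slow start
warm = 1 * RTT                               # reused connection, window already open
print(f"cold {cold:.2f}s, warm {warm:.2f}s, saving {cold - warm:.2f}s")
```

With a 380ms RTT the saving is on the order of a second per request, consistent with the difference between the US-proxy and no-US-proxy curves.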
For smaller numbers of connections the latency rises rapidly as the
TCP throughput limit is approached. Comparison of
figures 5(b) and 5(a) indicates that this begins to
occur when the TCP connections reach about 75% of their capacity. To
achieve the best HTTP latency performance, more connections are
required than are needed to saturate the international link.
Figure 6 shows the buffer space needed in the routers
that feed each end of the international link. Figures 6(a) and 6(c)
show the mean usage, while figures 6(b) and 6(d) show the peak usage.
Note that the graphs have different scales. The peak usage is more
erratic than the mean because of subtle interactions between connections.
In the no-US-proxy case the buffer space required to avoid packet loss
becomes very large as the link to NZ saturates. This is also true of
the US-proxy case if the number of connections is large enough to
allow the link to saturate. If there are too few TCP connections to
carry the load the mean buffer usage reduces as the TCP connections
throttle their use of the link.
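A common rule of thumb puts the buffering needed to keep a link busy without loss at around one bandwidth-delay product, which grows quickly on long-delay paths. The link rate below is a hypothetical value used only to put "very large" in context; it is not a figure from the paper.

```python
LINK_BPS = 2_000_000    # hypothetical international link rate
RTT_S = 0.38            # assumed satellite round-trip time (s)

# One bandwidth-delay product of buffering, in bytes.
buffer_bytes = LINK_BPS * RTT_S / 8
print(f"~{buffer_bytes / 1024:.0f} KB of buffer per bandwidth-delay product")
```

The long satellite RTT makes the bandwidth-delay product, and therefore the buffer demand at saturation, much larger than on a terrestrial path of the same rate.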
Buffer (or link) usage is never heavy in the NZ to US direction in any
of the simulations [4].
Figure 7(a) shows the number of connections between the US
proxy and servers for the US-proxy case. In the no-US-proxy case it
shows the number of connections from the NZ proxy to US servers. In
the latter case this increases rapidly when the international link is
saturated because the HTTP requests take a long time to complete (see
figure 5(b)). In general the no-US-proxy case uses more
connections than the US-proxy case because the connections take
longer to complete.
The single inter-proxy TCP connection curve flattens at around
18Mbps because there is insufficient capacity in the NZ to US
direction to carry the requests. This is also apparent in
figure 7(c). The typical relationship between inbound and
outbound traffic can be seen in figure 7(d). When there are
sufficient TCP connections to carry the load this shows an inbound
to outbound ratio of about 1:19.
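The 1:19 ratio implies how much reverse-channel (NZ to US) capacity is needed to keep the forward direction busy. The sketch below reuses the 18Mbps figure from the text purely for illustration.

```python
FORWARD_BPS = 18_000_000   # forward (US to NZ) traffic rate (from the text)
RATIO = 1 / 19             # inbound:outbound byte ratio (figure 7(d))

reverse_bps = FORWARD_BPS * RATIO
print(f"reverse traffic needed: {reverse_bps / 1e6:.2f} Mbps")
```

That is, just under 1Mbps of request traffic is needed to sustain 18Mbps of replies, which is why a constrained reverse channel can cap the forward direction.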
The difference between the no-US-proxy case and the US-proxy case in
figures 7(c) and (d) indicates the saving made by
repackaging HTTP requests into a smaller number of larger TCP
packets. This has a more significant effect than in the US to NZ
direction because HTTP requests are smaller than HTTP replies. The
effect is probably not useful in current practice because NZ to US
links are not normally saturated. This is because of the requirement
to purchase symmetric terrestrial connections. In the longer term the
saving may be valuable if the asymmetry introduced by unidirectional
satellite links causes the NZ to US links to saturate.
Footnotes

[3] Note that this graph shows presented load against load carried,
not presented load against useful data.

[4] A real US/NZ link would be more heavily used in the US direction
because of requests on NZ servers from US clients. These are not
simulated here. We assume that sufficient NZ/US capacity exists to
carry the client requests to the US.
A.McGregor, M.Pearson, J.Cleary