Baselines

Baselines are experiments undertaken to establish how a system performs without, or prior to, modification. In our case, we take baselines in order to see, for example:

  • What goodput can we achieve without coding on a given link under certain load conditions and queue sizes?
  • How does the input queue to the satellite link behave during such base cases?
  • How long would a certain size download take in such a base case?

This page shows the results of a number of such baseline experiments. The plots shown here are only a small selection of the data collected.

Uncoded baselines

Throughput/goodput baselines

These baselines show the throughput (blue) and goodput (red) seen in 100 ms intervals across the experiment duration. The horizontal brown line shows the channel capacity, which we may slightly exceed on occasion, as we account for bytes based on the packet capture timestamp – oh, and there’s the peak rate feature in our token bucket filter, too!
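
As a minimal sketch of this kind of accounting (in Python with the dpkt library; the capture file name and the exact byte accounting are illustrative assumptions, not our actual tooling), one could bin a packet capture from the bottleneck into 100 ms intervals, counting all IP bytes towards throughput and only TCP payload bytes towards goodput:

    import dpkt
    from collections import defaultdict

    BIN = 0.1      # 100 ms intervals, matching the plots
    start = None   # timestamp of the first captured packet

    throughput = defaultdict(int)   # bin index -> IP bytes (all packets)
    goodput = defaultdict(int)      # bin index -> TCP payload bytes only

    with open("baseline.pcap", "rb") as f:       # hypothetical capture file
        for ts, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            if not isinstance(eth.data, dpkt.ip.IP):
                continue
            ip = eth.data
            if start is None:
                start = ts
            b = int((ts - start) / BIN)          # bin by capture timestamp
            throughput[b] += ip.len              # bytes at IP level and above
            if isinstance(ip.data, dpkt.tcp.TCP):
                goodput[b] += len(ip.data.data)  # application payload only

    def mbps(n_bytes):
        return n_bytes * 8 / BIN / 1e6

    for b in sorted(throughput):
        print(f"{b * BIN:7.1f}s  {mbps(throughput[b]):6.2f}  {mbps(goodput[b]):6.2f} Mbps")

Because every packet is booked to the bin of its capture timestamp, a bin can come out slightly above the nominal capacity – the same effect visible in the plots.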

Whenever the blue bit doesn’t reach the brown line, we had spare capacity during the time slot, meaning that the input queue of the “satellite modem” would have run empty on at least one occasion during this interval (see the short sketch after the figure below). Below is the goodput for 20 client channels on a 16 Mbps GEO link with a 100 kB input queue and a 40 MB iperf download starting around 12 seconds into the experiment. The iperf download here is the big block of activity from about 12 s to 132 s, which means it took around three times as long as the same download would take on an otherwise empty channel of the same kind. This is a relatively low load case – the link doesn’t see more than 4.5 Mbps average throughput here (these lines really are a little wider than meets the eye!).

Goodput for 20 client channels on a 16 Mbps GEO link with a 100 kB input queue and a 40 MB iperf download starting around 12 seconds into the experiment.
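
Spare capacity in this sense can be read straight off the binned counts from the earlier sketch: any 100 ms bin whose throughput stays below the channel capacity is one in which the token bucket queue must have run dry at least once. A minimal check (the capacity constant is an assumption matching this link):

    CAPACITY = 16e6  # bit/s – nominal channel capacity

    empty_queue_bins = [b for b in sorted(throughput)
                        if throughput[b] * 8 / BIN < CAPACITY]
    print(f"queue ran empty in {len(empty_queue_bins)} of {len(throughput)} bins")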

The next baseline below is for 50 client channels – 150% extra offered load. Throughput and goodput are coming up, but throughput only doubles compared to the 20 client channel scenario – to around 9 Mbps, so still well below capacity. iperf gets its business done in 186 seconds.

Figure: lexp-16m-100k-50-iperf-40-jitter – 50 client channels (16 Mbps link, 100 kB input queue, 40 MB iperf download).

Finally, 100 client channels: throughput is now almost 14 Mbps, and iperf took 417 seconds. At these levels, iperf download times can vary considerably between experiments: if the iperf download gets to oscillate jointly with many other longer flows, download times stretch out as no data crosses the link for several seconds at a time. This is a bit of a chance game: big flows are small in number, small flows dominate at high offered loads like this, and quite how many of the longer flows we get during the iperf transfer varies a lot depending on what the random generators pick.

Figure: lexp-16m-100k-100-iperf-40-jitter – 100 client channels (16 Mbps link, 100 kB input queue, 40 MB iperf download).

Going back to the first baseline above (20 client channels), we now repeat it with a 200 kB queue. At first glance, this plot is a little quieter than the 100 kB case, and in fact that’s what one would expect from a larger buffer. Throughput and goodput here are a little lower at 4.3 and 4.15 Mbps, although at this load level, that’s probably a result of the particular sample taken. For iperf, the extra buffer works wonders: under 44 seconds, which is almost as low as the “no load” value.

Figure: lexp-16m-200k-20-iperf-40-jitter – 20 client channels (16 Mbps link, 200 kB input queue, 40 MB iperf download).
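
Since the only thing changing between the 100 kB and 200 kB runs is the queue size, it may help to see where that knob lives. Below is a minimal sketch of how a bottleneck like this might be set up with a Linux token bucket filter – the interface name, burst and peak rate values are assumptions, not our actual configuration; the input queue size is simply the tbf limit parameter:

    import subprocess

    def setup_bottleneck(dev="eth1", rate="16mbit", limit="100kb",
                         peakrate="17mbit"):
        """Shape dev into a 16 Mbps GEO-like bottleneck with a given input queue."""
        subprocess.run(
            ["tc", "qdisc", "add", "dev", dev, "root", "tbf",
             "rate", rate,           # long-term rate = channel capacity
             "burst", "32kb",        # token bucket size (assumed value)
             "limit", limit,         # queue length: the 100 kB / 200 kB knob
             "peakrate", peakrate,   # short-term ceiling (the peak rate feature)
             "mtu", "1500"],         # second bucket size, required with peakrate
            check=True)

    setup_bottleneck(limit="200kb")  # the larger-queue variant from this section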

At 100 client channels, the extra queue sees us at capacity for much of the time. iperf gets a bit of a leg up, too: 340 seconds rather than the 417 we saw with the 100 kB queue.

Figure: lexp-16m-200k-100-iperf-40-jitter – 100 client channels (16 Mbps link, 200 kB input queue, 40 MB iperf download).
