Following our acquisition of additional hardware, we have upgraded the simulator. This included the installation of a new 7-foot 19-inch rack, which we didn’t quite realise would turn up fully assembled on a pallet – and I wasn’t even there when it arrived. Lei was in the vicinity and, with the help of some other wonderful souls, managed to direct the truck to the right entrance – and knew that there was only one lift that would handle racks of that size. Moving the two existing racks from the 5th floor to the new 4th floor lab had been a bit of an exercise, but on this occasion that experience paid off!
Liam Scott-Russell joined us again as an intern for a couple of weeks during his high school holidays in July and, together with Lei, set up the new servers – 15 in total. So by the time I got back from ISIT in Aachen, much of the hardware work was done.
Lei in front of the upgraded simulator. The two racks on the left contain the island machines – 10 Intel NUCs and 96 Raspberry Pis. The world servers sit in the rack on the far right and at the top of the rack to its left. That rack also contains the satellite chain (sat emulator, 2× PEP + 2× encoder/decoder), the copper taps (blue boxes), and the capture machines that record the data off the taps.
Recommissioning the simulator was another story altogether! Most of the work went into two aspects: getting the scripting upgraded and getting the new command & control machine set up. One of the lessons from the existing setup was that troubleshooting often involved having a large number of terminal windows open, and with a few extra monitors from leftover ISIF funding, we were able to assemble a nice large 2×2 array of 27″ screens – enough to keep a dozen or so terminals in constant view. Getting this to work was another matter – it took a while to learn that newer versions of Ubuntu require the Composite extension to run their Unity desktop. Xinerama, an absolute must-have for a contiguous screen experience across multiple monitors, is incompatible with Composite, so we had to switch to xfce4.
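For anyone attempting a similar multi-head setup, the relevant X server settings can be made explicit in xorg.conf. This is only a sketch of the two sections concerned – the rest of the file (drivers, monitor layout) is omitted and will differ per machine:

```
# Xinerama needs Composite out of the way:
Section "Extensions"
    Option "Composite" "Disable"
EndSection

# Treat all screens as one contiguous desktop:
Section "ServerFlags"
    Option "Xinerama" "on"
EndSection
```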
Scripting: We added dozens of new scripts, modularised even more than before, and added a lot of error detection and handling to ensure that we would learn about problems early. We’ve also implemented a new directory structure for our data.
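To illustrate the learn-about-problems-early idea (this is not our actual scripting – the step names and commands are invented stand-ins), each experiment step can be wrapped so that any non-zero exit status is logged and aborts the run immediately, rather than letting a broken step quietly corrupt hours of data:

```python
import subprocess
import sys
import time


def run_step(name, cmd):
    """Run one experiment step; log it and abort the whole run on failure."""
    print(f"[{time.strftime('%H:%M:%S')}] step: {name}", flush=True)
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Fail loudly and early rather than discovering a broken
        # capture only when the traces are analysed later.
        print(f"step '{name}' failed (exit {result.returncode}):", file=sys.stderr)
        print(result.stderr, file=sys.stderr)
        sys.exit(1)
    return result.stdout


if __name__ == "__main__":
    # Hypothetical step – a stand-in for tap capture, load generation, etc.
    out = run_step("sanity check", [sys.executable, "-c", "print('ok')"])
    print(out.strip())
```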
The addition of two more machines to the satellite chain, as well as the move of monitoring onto the dedicated capture machines, necessitated quite a bit of network reconfiguration as well. We can now also run PEP traffic through a coded tunnel.
Another new feature is a special-purpose server on the world side of the simulator, which produces baseline iperf3 and ping measurements to ascertain queue sojourn time and lets us monitor the performance of large standardised TCP transfers. Previously, this load was shouldered by one of the world servers.
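To give an idea of what such baselines yield (a hedged sketch – the JSON path below follows iperf3’s documented `--json` report format, but our actual post-processing differs): the achieved goodput of a standardised transfer can be read straight out of the iperf3 report, and any ping RTT above the known unloaded path RTT is time spent sitting in a queue:

```python
import json


def goodput_mbps(iperf3_json: str) -> float:
    """Extract received goodput in Mbit/s from an `iperf3 --json` TCP report."""
    report = json.loads(iperf3_json)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6


def sojourn_ms(rtt_ms: float, base_rtt_ms: float) -> float:
    """Estimate queue sojourn time as the RTT excess over the unloaded base RTT."""
    return max(0.0, rtt_ms - base_rtt_ms)


# Abridged (hypothetical) iperf3 TCP report:
sample = '{"end": {"sum_received": {"bits_per_second": 8.5e6}}}'
print(goodput_mbps(sample))      # 8.5 Mbit/s received
print(sojourn_ms(612.0, 540.0))  # 72 ms spent queued
```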
We also took a good look at our terrestrial latency distribution and now use a distribution that is based on empirical data from Rarotonga rather than the educated guesswork we used in the simulator’s first edition. Average terrestrial latency has increased a little as a result.
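A minimal sketch of what swapping in an empirical distribution looks like (the RTT histogram here is made up for illustration, not the Rarotonga data): terrestrial delays are drawn from measured observations, so a long tail of slow paths pulls the average up compared with a single guessed value:

```python
import random
import statistics

# Hypothetical measured terrestrial RTT histogram: (latency_ms, observation_count)
measured = [(12, 40), (18, 25), (35, 20), (80, 10), (150, 5)]

latencies = [ms for ms, _ in measured]
weights = [count for _, count in measured]


def sample_latency(rng: random.Random) -> int:
    """Draw one terrestrial latency from the empirical distribution."""
    return rng.choices(latencies, weights=weights, k=1)[0]


rng = random.Random(42)
draws = [sample_latency(rng) for _ in range(10_000)]
# The tail (80 ms, 150 ms paths) lifts the mean well above the 12 ms mode.
print(round(statistics.fmean(draws), 1))
```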
The lab setup at the time of writing with Lei at the command and control seat.
The first experiments are now underway – essentially a repeat of the uncoded baselines with recommended queue capacities, to ensure that we have a set of results that is directly comparable once we move back into coding and PEP territory soon. First indications are that goodput with the new latency distribution is a little lower than before, which supports our conjecture that island ISPs should choose the location of their world-side teleport carefully. We’ll look into this in more detail a bit further down the track!
At this point, I hope to have completed the baselines in about a week.
One of the upshots of having upgraded and reworked our scripting is that we can now farm out some of the trace conversion to the new capture machines. This parallelises the conversion and reduces the time it takes to run an experiment by around 25%.
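The farming-out itself is straightforward to sketch (hypothetical names – our real scripts dispatch the work to the capture machines rather than local threads): map a conversion function over the trace files with a worker pool instead of converting them one after another:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def convert_trace(pcap: Path) -> Path:
    """Stand-in for per-file trace conversion (the real step is much heavier)."""
    out = pcap.with_suffix(".csv")
    # ... conversion work would happen here ...
    return out


def convert_all(pcaps, workers=4):
    """Convert traces in parallel; results come back in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(convert_trace, pcaps))


print(convert_all([Path("run1/tap1.pcap"), Path("run1/tap2.pcap")]))
```

Because conversion now overlaps across machines instead of running serially after each capture, the end-to-end experiment time shrinks even though the total work is unchanged.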
Data coming soon!