PM Summer 08
Revision: Final
Date: 02/08/2008
0 Executive Summary
The SamKnows Performance Monitoring Network was born out of a desire to
demystify the nature of broadband performance in the UK. The project operates
completely independently of the ISPs and relies upon keen volunteers kindly
donating access to their Internet connections for testing.
The solution employs a small hardware monitoring unit, installed in the
volunteers’ homes between their home network and their ISP router. Using a
hardware device makes it possible to detect other network traffic and defer
testing accordingly – essential for accurate results when operating on an
uncontrolled network.
Measuring broadband performance goes far beyond looking at speed (or
“throughput”) alone. The testing here reflects this, and the study has examined
latency, packet loss, DNS resolution, web page loading, VoIP performance, sending
of emails and, of course, speeds. The speed measurements delve deeper too, with
comparison between web based speeds and typical peer-to-peer speeds, as well as
looking at how running multiple streams affects speed.
The results produced over an initial six week period from 223 monitoring units were
certainly interesting. In summary:
- In the majority of metrics there was little discernible difference between most
ISPs;
- Zen Internet offered the fewest failures across all metrics;
- Virgin Media’s cable services and Be/O2’s services provided a consistently
low latency throughout, whilst Virgin.Net (Virgin’s ADSL service) performed
poorly;
- BT provided the fastest throughput when measured as a percentage of implied
line speed (an estimate of the potential maximum speed of the line);
- Be/O2 and Virgin Media produced the greatest raw throughput (in megabits
per second), which can likely be attributed to the nature of their products;
- Virgin Media’s cable throughput remained consistent on their 2, 4 and
10Mbps products, but was quite variable on their 20Mbps product;
- Testing highlighted the use of traffic shaping in the networks of BT and
PlusNet, which resulted in certain classes of traffic slowing significantly
during peak hours.
The project will continue into 2009, with improved testing metrics and a greatly
increased sample size (to beyond 2000 units) being the focus points for future work.
It is hoped that this will provide the necessary level of granularity to enable analysis
of performance across ISP products and also across the regions.
A number of third parties have expressed an interest in operating their own
deployments of this solution to monitor their own networks or those of the industry
as a whole. Any such future deployments would be operated entirely independently
(both physically and logically) from the SamKnows Performance Monitoring
Network.
Contents
0 Executive Summary
1 Introduction
1.1 Motivation
2 Methodology
2.1 Technical solution selection process
2.2 Software
2.3 Volunteer selection process
2.4 Testing ICMP latency and packet loss
2.5 Testing recursive DNS resolver responsiveness and failures
2.6 Testing web page loading times
2.7 Testing VoIP capability
2.8 Testing SMTP email relaying
2.9 Speed tests
2.10 Data aggregation
3 Results
3.1 ICMP ping latency and packet loss
3.2 DNS
3.3 Web page loading
3.4 Voice over IP performance
3.5 SMTP relay performance
3.6 Speed tests
3.6.1. Port 80 HTTP download speed tests
3.6.2. Port 80 HTTP upload speed tests
3.6.3. Download tests over other ports
3.6.4. Speed tests using multiple connections (threads)
3.6.5. Variation in implied line speeds
4 Future work
5 Conclusion
6 References
1 Introduction
This report details the results of the first round of our broadband performance
monitoring project. Twelve UK based ISPs were tested over a period of eight weeks
and this has led to some fascinating findings. The results have been presented with a
detailed analysis, in an effort to avoid misinterpretation and/or misrepresentation of
the data.
1.1 Motivation
The key motivations behind this project were fourfold:
- The lack of a truly independent measure of broadband performance
- The lack of a statistically sound methodology to facilitate such performance
testing
- The continuing perception that performance equals speed (it does not)
- The desire to dispel some myths
The frustration caused by the points above led us to ultimately develop the solution
that has produced the results presented here.
2 Methodology
The testing methodology employed gives the best available combination of
accuracy and access to a reasonably large sample of results.
2.2 Software
A customised FreeWRT [2] firmware image was developed and installed on the
units. At the point of delivery, this is all that is present on the device. Aside from a
single script that checks for the availability of the software component upon boot, the
physical unit contains no additional software. This is beneficial both from a security
perspective (everything is destroyed when the power is lost) and also from a support
perspective (any problems with a unit’s configuration can be undone simply by
power cycling it). New versions of the software can be delivered remotely without
requiring a reboot.
The software itself utilises standard Linux tools (where possible) to perform the tests.
Tools such as ping, dig, curl, iperf and tcpdump/libpcap have been used extensively. By
relying upon the years of development and testing that have been poured into these
applications, we help to ensure the accuracy of our own results and reduce our
development overhead.
All monitoring units maintain accurate time using NTP.
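The executive summary noted that tests are deferred when other traffic is detected
on the line. A minimal sketch of how such a check might work on a Linux-based unit
is shown below; the interface name and idle threshold are assumptions rather than
values from the actual deployment:

import time

WAN_IFACE = "eth1"        # hypothetical WAN-facing interface name
THRESHOLD_BPS = 16_000    # assumed "line is idle" threshold
SAMPLE_SECS = 5

def iface_bytes(iface: str) -> int:
    """Return total rx+tx bytes for an interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]) + int(fields[8])  # rx_bytes + tx_bytes
    raise ValueError(f"interface {iface} not found")

def line_is_idle() -> bool:
    """Sample the counters over a short window; defer the test if busy."""
    before = iface_bytes(WAN_IFACE)
    time.sleep(SAMPLE_SECS)
    rate_bps = (iface_bytes(WAN_IFACE) - before) * 8 / SAMPLE_SECS
    return rate_bps < THRESHOLD_BPS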
Attention was also paid to ensure a fair distribution between users of differing
products on the same ISP.
In total, the results of 223 units were aggregated to produce the results detailed
within this document. Some units’ results were discounted due to clear configuration
or user network issues (these were typically shown by 100% packet loss, or failures
that exceeded the norm by a significant margin). The breakdown by ISP is as follows:
ISP Units
Be Unlimited 24
BT 25
Entanet 18
Karoo 12
Orange 15
Plus Net 18
Sky 20
TalkTalk 16
Tiscali 15
Virgin Media 26
Virgin.net 15
Zen Internet 19
Total 223
Note: AOL’s results have been excluded from this report due to an insufficient
number of monitoring units reporting data for the duration of the tests. An increased
push for AOL participants is planned for the next round of tests.
Whilst 223 results have been presented here, there were in fact 258 devices involved
in the testing. The remainder were excluded from the results because either:
- The ISP was AOL (note above caveat);
- The results deviated massively from the norm of the ISP in question (typified
by 50%+ packet loss, or web pages loading in ~20 seconds, etc);
- The volunteer contacted us to state that a fault had developed on the line;
- The unit was never powered on.
In total, nine volunteers changed ISP during the two months over which data was
collected. These changes have been reflected in the results.
2.7 Testing VoIP capability
This test emulates the properties of a Voice over IP phone call in an attempt to
determine how suitable the line is for VoIP purposes. Note that an actual VoIP call is
not made – but the characteristics of it are emulated.
The test sends a 10 second burst of UDP traffic to one of three target servers residing
on our network. Each UDP packet contains 160 bytes, and the traffic is sent at
64kbps. These characteristics match those of the G.711 [4] voice codec.
Please note: This only tests upstream bandwidth. Due to NAT implementation issues on some
volunteers’ routers, downstream testing proved too unreliable.
The test records the three major characteristics that determine the quality of a VoIP
call: delay, loss and jitter. From these an R-value can be derived, and subsequently an
estimated MOS (Mean Opinion Score) value. MOS is rated on a scale from 1 (poorest)
to 5 (perfect audio). The absolute maximum MOS value for G.711 is 4.4.
Also note: Our test assumes a worst case jitter buffer of zero milliseconds. Most VoIP capable
routers (those that natively support VoIP channels) incorporate a small ~20ms jitter buffer
nowadays.
This test was conducted once per hour.
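To make the emulation concrete, the following sketch reproduces the shape of such a
burst and a delay/loss-to-MOS mapping in Python. The target hostname, port and
packet layout are hypothetical (the study’s real packet format is not documented
here), and the R-value calculation uses the published Cole and Rosenbluth
approximation of the ITU-T G.107 E-model rather than the exact formula used in the
study:

import math
import socket
import struct
import time

TARGET = ("voip-test.example.net", 5060)  # hypothetical test server
PACKETS = 501       # ~10 seconds of traffic
PAYLOAD_LEN = 160   # bytes per packet
INTERVAL = 0.020    # one packet every 20ms: 160 bytes * 8 bits * 50/s = 64kbps

def send_burst() -> None:
    """Send sequence-numbered, timestamped UDP packets at the G.711 rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    next_due = time.monotonic()
    for seq in range(PACKETS):
        header = struct.pack("!Id", seq, time.time())
        sock.sendto(header.ljust(PAYLOAD_LEN, b"\0"), TARGET)
        next_due += INTERVAL
        time.sleep(max(0.0, next_due - time.monotonic()))

def estimate_mos(one_way_delay_ms: float, loss_fraction: float) -> float:
    """Derive an R-value from delay and loss (Cole-Rosenbluth approximation
    for G.711), then map R to an estimated MOS per ITU-T G.107."""
    r = 94.2 - 0.024 * one_way_delay_ms - 30.0 * math.log(1 + 15.0 * loss_fraction)
    if one_way_delay_ms > 177.3:
        r -= 0.11 * (one_way_delay_ms - 177.3)
    r = max(0.0, min(100.0, r))
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

With zero delay and zero loss this mapping yields a MOS of roughly 4.4, which
matches the G.711 ceiling quoted above.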
2.9 Speed tests
The terms “single thread” and “multi-thread” refer to the number of
simultaneous connections made to the speed test server. A single threaded test uses a
single connection (as one would typically find when downloading a file from a
website). The multi-threaded test uses three simultaneous connections to complete
the download.
All single threaded download tests download a randomly generated 6MB binary file
per test. All single threaded upload tests upload a randomly generated 1MB file to
the server using an HTTP POST request.
All multi-threaded download tests download three randomly generated 2MB files
simultaneously from the same server. All multi-threaded upload tests upload three
randomly generated 500KB files to the server using HTTP POST requests.
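A multi-threaded test of this shape can be sketched in a few lines; the URLs below
are hypothetical stand-ins for the real test servers and file names:

import threading
import time
import urllib.request

# Hypothetical test files; three 2MB downloads run simultaneously.
URLS = [f"http://speedtest1.example.net/random-2MB-{i}.bin" for i in range(3)]

def fetch(url: str, sizes: list, idx: int) -> None:
    """Download one file and record how many bytes arrived."""
    with urllib.request.urlopen(url) as resp:
        sizes[idx] = len(resp.read())

def multi_thread_mbps() -> float:
    """Run the three downloads concurrently; return aggregate Mbps."""
    sizes = [0, 0, 0]
    threads = [threading.Thread(target=fetch, args=(url, sizes, i))
               for i, url in enumerate(URLS)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    return sum(sizes) * 8 / elapsed / 1_000_000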
Testing of non web-based traffic is emulated by using randomised port numbers in
the ranges commonly associated with peer-to-peer traffic. It is acknowledged that
HTTP traffic operating over ports other than port 80 can still be detected as HTTP
(through the use of deep packet inspection). This method was chosen because it
ensured validity in comparison between the two types of speed tests (as they used
the same utility to test and the same servers). The future work section notes possible
improvements to this specific test.
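A sketch of that port selection follows; the ranges are illustrative examples of ports
commonly associated with P2P clients, as the exact ranges used in the study are not
documented here:

import random

# Illustrative ranges: classic BitTorrent and eDonkey client ports.
P2P_PORT_RANGES = [(6881, 6999), (4660, 4700)]

def random_p2p_port() -> int:
    """Pick a random port from a range commonly associated with P2P traffic;
    the HTTP speed test is then run against this port instead of port 80."""
    low, high = random.choice(P2P_PORT_RANGES)
    return random.randint(low, high)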
Additionally, it is understood that some ISPs operate transparent HTTP proxy
servers on their networks. To overcome this, our webservers were configured to
respond with the following headers, which should disable caching in standards-
compliant proxy servers:
Cache-Control: "private, pre-check=0, post-check=0, max-age=0"
Expires: 0
Pragma: no-cache
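These headers are easy to verify from the client side. A quick check of this kind
(against a hypothetical test URL) might look like:

import urllib.request

def check_no_cache(url: str) -> None:
    """Fetch a test file and print the cache-related response headers,
    confirming a transparent proxy has not stripped or rewritten them."""
    with urllib.request.urlopen(url) as resp:
        for name in ("Cache-Control", "Expires", "Pragma"):
            print(f"{name}: {resp.headers.get(name)}")

check_no_cache("http://speedtest1.example.net/random-6MB.bin")  # hypothetical URL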
All speed tests run once every six hours (although each unit’s tests may occur at any
fixed point within that six hour period). This predictability of traffic volumes
allowed us to provision server capacity accurately – a luxury that online speed
testers do not have.
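One way to achieve this fixed-yet-spread scheduling – a sketch, as the study’s actual
mechanism is not described here – is to hash each unit’s identifier into a fixed offset
within the six hour window:

import hashlib

WINDOW_SECONDS = 6 * 3600  # speed tests run once per six-hour window

def fixed_offset_secs(unit_id: str) -> int:
    """Hash the unit ID to a fixed offset within the window, so each unit
    always tests at the same point and fleet-wide load is spread evenly."""
    digest = hashlib.sha1(unit_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % WINDOW_SECONDS

print(fixed_offset_secs("unit-0042"))  # hypothetical unit identifier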
Five speedtest servers were deployed in five different datacenters in and
immediately around London to handle the traffic. Each server was monitored
constantly for excessive network, CPU, disk and memory load.
Furthermore, the test results gathered by each server were compared against one
another daily to ensure no significant variation in the speed attainable per server.
Units cycled through the speed test servers in a round-robin fashion when testing.
Of course, the graphs shown in the results may average data at a coarser granularity
than this (as there would simply be too many data points on the graphs otherwise).
3 Results
3.1 ICMP ping latency and packet loss
Latency is the measure of time taken for one packet of data to travel from your
computer to a destination and back. A connection with a low latency will feel more
responsive for simple tasks like web browsing, and certain applications will function
far better with lower latencies. Online gamers, for example, will be particularly
aware of the latency of their connections, as a lower latency than an opponent’s will
give them an advantage.
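As a point of reference, the latency samples behind these graphs come from the
standard ping utility (per the methodology). A minimal wrapper of the kind the
monitoring software might use – the parsing below assumes typical Linux ping
output – looks like this:

import re
import subprocess

def icmp_rtts(host: str, count: int = 5) -> list[float]:
    """Run the system ping and parse per-packet round-trip times (ms)."""
    proc = subprocess.run(["ping", "-c", str(count), host],
                          capture_output=True, text=True, check=False)
    return [float(ms) for ms in re.findall(r"time=([\d.]+) ms", proc.stdout)]

print(icmp_rtts("192.0.2.1"))  # placeholder target address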
Figure 2 depicts the latency of the connections monitored over six weeks from 1 June
to 12 July.
This rather chaotic graph actually shows that the vast majority of providers have
rather solid performance, at least in terms of latency. Be Unlimited, Virgin Media’s
cable service and BT do particularly well here.
Sky has a rather high latency, which is perhaps surprising when you consider that
the majority of connections sampled here were on their LLU platform (which
operates over the Easynet network). However, the reason for the latency quickly
becomes apparent after searching forums that Sky Broadband users frequent: Sky
enables interleaving on all of their customers’ ADSL lines by default, as a method of
increasing line stability (albeit at the cost of latency).
Aside from a few spikes from Entanet and Orange, the only ISP to stand out here for
the wrong reasons is Virgin’s ADSL service (aka Virgin.net). The graph below,
covering a one week period in July, highlights the disparity between Virgin.net’s
latency and the average of all other providers.
Note in particular the cyclic nature of the graph, with latency spiking to around
100ms during the evening hours and then flattening out almost completely during
the early hours of the morning.
Packet loss is a relatively rare occurrence in modern networks. Of course, it should
be noted that if there is heavy congestion in the network then ICMP packets will be
dropped by routers first (as they are deemed lower priority than other traffic). For
this reason a regular level of ICMP packet loss can be seen as an indicator of network
congestion.
Figure 4 - Packet loss across ISPs (averaged into twelve hour periods)
Figure 4, above, indicates a low average packet loss across all ISPs, with the total loss
peaking at about 1% during normal operation.
Some interesting characteristics can be observed from the graph though. During mid
to late June some Be Unlimited users were suffering high packet loss at peak times.
Whilst the loss never peaks above 1% on the graph, the cyclic nature suggests that
this issue was being felt regularly, even if it was limited in scale.
Like Sky’s, Zen’s average latency was rather higher than might have been expected.
However, as the data tables below show, Zen have the lowest level of packet loss
across all of the providers tested, with Sky not trailing too far behind.
It is worth noting that BT and some of the LLU operators had a noticeably wide
spread of latencies, which is largely a result of the xDSL technology in place. Whilst
some connections operated with a 10ms round trip time, others ran as high as 70ms.
Future work will break these figures down further, so per product latency figures
can be studied.
Many will also note the issue on July 1 that saw the results of many ISPs spike to well
beyond the norm. Closer examination of the results shows that this spike was felt
across two of the four hosts being tested against, suggesting that some intermediate
route between the ISPs and the target network(s) was at fault temporarily. A similar
incident affecting all ISPs occurred on June 14.
Table 2 - Average ICMP latency by ISP (BT: 37.84ms)
Table 3 - Average ICMP packet loss by ISP (BT: 0.50%)
3.2 DNS
The DNS (Domain Name System) predates the World Wide Web itself. It allows
computers to convert names such as www.bbc.co.uk to their associated IP address
(e.g. 212.58.251.195). Indeed, every website a person visits will require a DNS
A-record query for the website’s hostname. A slow DNS server will not affect
download speeds, but it will severely affect the responsiveness of browsing around
the Internet.
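For illustration, an A-record lookup can be timed with dig (one of the tools listed in
the methodology); the resolver address below is a placeholder, not one of the ISPs’
actual resolvers:

import re
import subprocess

def a_record_query_ms(name: str, resolver: str) -> int | None:
    """Time an A-record lookup against a recursive resolver using dig;
    returns None if no answer arrived (counted as a query failure)."""
    proc = subprocess.run(["dig", "@" + resolver, name, "A"],
                          capture_output=True, text=True, check=False)
    match = re.search(r";; Query time: (\d+) msec", proc.stdout)
    return int(match.group(1)) if match else None

print(a_record_query_ms("www.bbc.co.uk", "192.0.2.53"))  # placeholder resolver IP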
As with the latency tests, the vast majority of providers perform well, with queries
answered in an average of 45.62ms. The results correlate closely with those of the
ICMP latency tests – BT and Virgin Media both perform very well, whereas Sky and
Virgin.net suffer (due to the inherent latency of their connections).
The only notable exception here is Be Unlimited. Frequent spikes indicate a
problematic and intermittent DNS service.
DNS query failures, depicted in the graph above, clearly highlight the issue affecting
Be Unlimited users. Whilst the average failure rate across all ISPs is 0.81%, Be come
in with a 2.82% failure rate across a six week period.
The problem with Be’s DNS has improved though. Our earliest figures for Be, dating
back to mid March, show the problem was even more pronounced back then.
Putting the Be issue aside, all other providers come in with a sub 1% failure rate,
which indicates a fairly robust DNS service. It should be noted that Zen Internet lead
the pack here by a considerable margin, with a DNS query failure rate of less than
one seventh of the average.
Summary data tables for DNS query failure and resolution times
Table 4 - DNS query failure rate by ISP (BT: 0.47%)
Table 5 - DNS query resolution time by ISP (BT: 37.10ms)
3.3 Web page loading
Here we see close correlation with the results of the latency tests. Virgin Media’s
cable service does particularly well, with most others falling on or around the
average. The poor performance of the Virgin ADSL service matches their poor
showing in the latency tests.
Be stands out as a notable exception here too. Whilst they may have had one of the
lowest latencies, their web page loading performance is relatively poor. This can be
explained by examining the correlation with their DNS performance (See Figure 9).
Figure 9 - Correlation between Be's DNS issues and their web page loading times
Note how the web page loading times (in blue) peak as the DNS query time increases
(in red). This is a clear example of how poor DNS server performance can impact real
world applications such as web browsing.
Failures whilst loading web pages should be quite rare on modern networks. Indeed,
the average failure rate for the six weeks from June 1st was 0.34%.
The large spike and small spike in early June (see Figure 10) can be attributed to a
popular UK based website going offline for a number of hours. Note how the issues
affected all providers simultaneously, indicating that the endpoint itself was at fault
and not some intermediate route.
Again, Virgin’s ADSL service comes across poorly here whilst Zen’s performance
tops this chart. Zen’s success here is most likely attributable to their low packet loss
and DNS query failure rate.
Summary data tables for web page fetching times and failure rate
Table 6 - Web page fetching times by ISP (BT: 493.43ms)
Table 7 - Web page fetching failure rate by ISP (BT: 0.34%)
3.4 Voice over IP performance
Figure 11 depicts the calculated Mean Opinion Scores of the simulated VoIP calls.
Note that the vast majority are near the theoretical maximum of 4.4, which indicates
near zero packet loss, very little jitter and a low packet delay.
The results at first may seem surprising. Virgin Media’s cable service appears to
perform poorly here, despite the fact that they performed very well in the latency
tests. Virgin’s ADSL service (labelled Virgin.net) seems to perform about average, in
stark contrast to their previous results. The reason behind both of these is jitter. VoIP
calls are particularly badly affected by jitter, which is the standard deviation of the
packet delay. So a connection that normally operates with a 10ms delay but
frequently spikes to 15ms may pose no problem for ordinary usage, yet those 5ms
swings translate directly into jitter.
This is what has affected Virgin Media’s cable service here – their normally low
latency is adversely affected by spikes within the 10 second UDP burst. Conversely,
the Virgin.net ADSL service may have poor latency, but it is consistently poor –
something that VoIP codecs are capable of dealing with.
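To make the point concrete, here is a small illustration with made-up delay samples,
using the standard-deviation definition of jitter given above: a low-latency line with
spikes shows several times more jitter than a consistently slow line.

import statistics

# Illustrative (fabricated) delay samples in milliseconds.
spiky = [10, 10, 15, 10, 15, 10, 10, 15, 10, 10]    # low delay, frequent spikes
steady = [60, 60, 61, 60, 60, 61, 60, 60, 61, 60]   # high but consistent delay

print(statistics.pstdev(spiky))   # ~2.3ms jitter despite a ~11.5ms average delay
print(statistics.pstdev(steady))  # ~0.5ms jitter despite a ~60.3ms average delay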
Figure 12 shows precisely why Virgin Media and, to a lesser extent, Entanet appear
to do badly in the VoIP tests. Their average jitter is considerably higher than other
ISPs.
Of course, it is very important to note that we are seeing 6ms jitter at worst here – a
level which most voice codecs will have no trouble with at all. Furthermore, many
hardware-based VoIP routers now incorporate a “jitter buffer” (typically capable of
safely handling jitter of 20ms or less) so the relatively low jitter exhibited here is
unlikely to cause any issues. As discussed in the methodology, our test assumes a
zero jitter buffer (effectively exaggerating real world results).
Packet loss is also a big factor in VoIP call quality. Whilst the odd dropped packet is
acceptable (as each packet in the test here accounts for only 20ms of audio), extended
periods of loss will lead to choppy and broken up audio.
Nearly all ISPs operate at near-zero packet loss for this test, which is impressive.
However, TalkTalk begin to feature in the graph for the wrong reasons as of early
July. Of particular interest is the spike on July 12 that sees TalkTalk hit 18% packet
loss – a very significant amount. Their previous results would tend to indicate that
this is a temporary issue, but at the time of writing no newer data was available so
we are unable to confirm this.
Table 8 - VoIP Mean Opinion Score by ISP (BT: 4.34)
Table 9 - Jitter when sending 501 UDP packets (BT: 0.85)
Be Unlimited 0.03
BT 0.05
Entanet 0.02
Karoo 0.11
Orange 0.05
Sky 0.18
TalkTalk 0.38
Tiscali 0.08
Virgin.net 0.02
3.5 SMTP relay performance
One important thing to note is the occasional spikes that affect all ISPs
simultaneously, suggesting that the destination mail server itself (or the route to it)
was somehow at fault. These spikes were investigated and an over-zealous spam
filter was discovered to be the root cause.
Ignoring this glitch, we see that most providers are delivering the vast majority of
email within three minutes. There are clear spikes affecting Karoo, Tiscali, Plus Net
and Be, but the infrequency of these suggests that it is not a recurring problem and
can most likely be attributed to internal problems or temporarily high loads on the
mail servers.
3.6 Speed tests
3.6.1. Port 80 HTTP download speed tests
Figure 15, above, depicts the daily averaged download speeds of each ISP, expressed
as a percentage of the implied line speed. Most providers hover around the 75%
mark, meaning that if you can reach 6Mbps under optimum conditions you might
expect to achieve 4.5Mbps on average.
BT noticeably outperform the rest of the ISPs in this test, whilst Virgin.net and even
Virgin Media’s cable service both suffer somewhat. BT’s strong performance here
may partly be due to the fact that they use the BT Central Plus [5] product from BT
Wholesale, which provides LNS-based access to their core network via ten regional
PoPs. ISPs using the standard BT Centrals would typically take an L2TP handoff at
one location. To our knowledge, BT Retail are the only BT Wholesale customer
utilising the BT Central Plus product.
A subset of the above data, averaged into six hour data points, was used to produce
Figure 16 (below).
Here we see the reason for Virgin’s relatively low performance in the previous
graph. Both their ADSL and Cable service drop significantly in speed during peak
hours, then ramp back up to the average in quieter hours.
Providers such as Be, BT, TalkTalk and Sky all maintain a near constant speed
throughout the day. Given the higher average connection speed of their customers,
this is a particularly impressive feat on the Be network.
The raw throughput (in Mbps) graph is shown below, again averaged over 24 hours.
Unsurprisingly Virgin Media and Be Unlimited lead the pack in this view, which can
be accounted for by their higher proportion of customers on faster connections (See
the section on Variation in Implied Line Speeds for more).
The contrasting results for Virgin Media highlight how much difference data
presentation can make. Whilst Figure 17 shows them to have one of the fastest
average raw throughput speeds, the variability in the performance is high, with it
falling to nearly 50% during peak hours (see Figure 16).
The Virgin Media results are particularly interesting once you begin to break the
results down by product.
Figure 18 - Virgin Media cable HTTP speeds (Mbps) by headline product speed
Figure 18, above, shows the reason for Virgin’s dips in speed very clearly. The 2Mbps
and 4Mbps products maintain their full speed consistently, and the 10Mbps product
does a good job too, apart from the occasional dip. The 20Mbps product is where the
issue lies, with significant dips seen every evening (the above graph being shown
over a 5 day period).
It should be noted that whilst the maximum speed shown above for the 20Mbps
product is around 14Mbps, this is of course an average over all of the 20Mbps lines
monitored. Significant variation was seen between individual 20Mbps lines, with
many running at 20Mbps during off-peak hours but some dipping to below 10Mbps
(hence the average presented above).
Summary data tables for port 80 HTTP download speeds
Percentage of implied line speed by ISP (BT: 86.24%)
Throughput (Mbps) by ISP (BT: 3.94)
3.6.2. Port 80 HTTP upload speed tests
Figure 19 shows nearly all providers hitting near full upstream speed all of the time.
Orange and BT perform particularly well, with Virgin.net and – surprisingly – Sky
falling behind in this test. Again, this could be due to the fact that interleaving is
enabled on all Sky lines as standard, although this should not significantly affect
sustained transfer speeds.
Summary data tables for port 80 HTTP upload speeds
Percentage of implied line speed by ISP (BT: 96.35%)
Throughput (Mbps) by ISP (BT: 0.35)
3.6.3. Download tests over other ports
In all but two cases we see that the two lines very nearly match, and given that both
tests are being run consecutively, this is what one would expect on a normal
network.
However, in the case of both BT and Plus Net we see something rather different.
Whilst the port 80 test performed very well at all times, non port 80 traffic drops to
nearly 15% of the line speed on BT connections during peak hours. The cause of this
can without doubt be attributed to traffic shaping – the practice of prioritising one
type of traffic over another. The term “traffic management” is also frequently used
by some providers.
Plus Net openly admit to and defend their traffic shaping policies on their website,
so these results were to be expected for them. However, BT is not so forthcoming,
and the breadth and scale of the activity is rather surprising. In fact, the same
characteristics can be seen when looking at any of the BT connections we monitored
individually – including business connections and also including two connections
that were used only by the monitoring units we installed. This suggests that the
policy is applied universally, regardless of product and regardless of usage volume.
However, it should be noted that it is this shaping that likely helps their port 80
(HTTP) speedtest results to perform so well. Whilst some may see any form of traffic
shaping or traffic management as a bad thing, if you are not a peer-to-peer user or a
heavy downloader then BT’s and PlusNet’s practices will actually benefit you. Future
work will examine how these policies affect other “interactive” applications, such as
SSH, VoIP and video streaming.
It may also surprise some to note that certain other ISPs are not demonstrating
similar dips for non port 80 traffic. The possible reasons for this are numerous:
- They could be traffic shaping based upon volume rather than traffic type (as
Virgin Media do);
- Their traffic analysers could be more intelligent than, or configured differently
from, those of BT, recognising that this was not real peer-to-peer traffic and
thus not shaping it;
- They may not be employing traffic shaping at all – the equipment required
to do this properly is very expensive.
Summary data tables for download speeds over other ports
Percentage of implied line speed by ISP (BT: 48.13%)
Throughput (Mbps) by ISP (BT: 2.12)
3.6.4. Speed tests using multiple connections (threads)
The results are interesting if nothing else. Some providers demonstrate a large
increase when utilising multiple connections (e.g. Virgin Media and Sky), whereas
others show almost no change (e.g. BT).
Whilst it is tempting to immediately blame network contention for the sudden
increase obtained when using multiple connections, this is unlikely to be the root
cause in all cases. Some variance is to be expected, and many other factors could just
as easily be the cause of the larger variance.
3.6.5. Variation in implied line speeds
Whilst we have warned against using raw throughput as a fair metric for
comparison throughout this document, it is still an interesting value to look at. Of
particular note here are Be’s and Virgin’s relatively high implied line speeds.
We noted earlier in the speed test results that Virgin’s cable speeds fluctuated
significantly during peak hours, whilst BT’s were very stable. But is there more to
this?
With an average implied line speed of 10.55Mbps Virgin Media would have to
handle 9.5Mbps on each line consistently to reach a 90% throughput speed. BT, for
example, had an implied line speed of 4.67Mbps, meaning that they need only
handle 4.2Mbps to hit 90%.
Whilst many of the ADSL providers appear to do better in the speed tests than
Virgin’s cable product, the fact that Virgin have to handle far more traffic to hit the
same percentage mark cannot be simply ignored.
4 Future work
There are many possible avenues of improvement that we are keen to explore.
In order for us to be able to draw firmer conclusions about some tentative results
(which have not been presented here) and to begin to regionalise results we need to
increase our sample size significantly. Regionalising the results is of particular
interest to us, as it will allow us to begin to show areas in which some providers
perform brilliantly and some in which they do not. With the boom in LLU (where the
LLU operators have to provide their own backhaul) we suspect that significant
differences in regional results can be seen. To date, no formal study of regional
differences in broadband performance has been carried out.
Improving and expanding the suite of tests is also high on the to-do list. Clearly the
SMTP test, which is still in its infancy, needs to be expanded to include all of the ISPs
being tested. Improvements to the non port 80 speed tests are also being planned.
These will likely involve rewriting/obfuscation of HTTP headers to avoid detection
by deep packet inspecting traffic shapers (this will then appear as bulk binary traffic).
Some have requested that we include BitTorrent-specific tests in the suite. If this
were to be conducted it would have to operate within a controlled environment (i.e.
communicate with a managed tracker with a fixed number of high bandwidth seeds).
Initial research suggests that the operation of the protocol would likely make results
difficult to compare. One possible solution is to record the two sides of a BitTorrent
conversation and use an application such as tcpreplay to repeat the stream between
client and server. Whether this is a true test of BitTorrent (which makes heavy use of
multiple connections) is questionable.
Finally, we are acutely aware of next generation broadband services on the horizon.
Virgin Media are launching their 50Mbps product later this year (or early next), and
H2O have their 100Mbps products on the horizon too. The recent announcement
from BT regarding their £1.5bn fibre-optic investment has not gone unnoticed either.
Active development is underway on the next generation monitoring unit, which will
be capable of accurately testing all of these new products.
5 Conclusion
The results detailed in this report have given a sound statistical grounding to some
beliefs, dispelled others, and unearthed some interesting facts too.
Observant readers will notice that only a few of the ISPs tested were discussed
extensively in the results. Virgin, BT, Be, Sky and Zen all received focus in the text
surrounding the results, but the other seven ISPs went largely unmentioned. The fact
is that the other ISPs all performed around average – their results were neither
excellent nor were they poor, but instead “middle of the road”.
This may surprise some - particularly if you are a frequent reader of the various
broadband related forums online. Users in Hull, where there is no choice for fixed-
line broadband (with Karoo being the only option), were particularly keen to have
their provider represented here. Many of these users were convinced their provider
would stack up poorly against the competition, so the fact they are on-par should be
the source of some comfort that they are no worse off than users in other areas.
But there were many observable traits, and it’s worth highlighting the most
prominent of these again here:
- Zen excelled in terms of reliability, with the fewest recorded failures across all
tests;
- Be Unlimited and Virgin Media’s cable service shone in the latency tests,
albeit for different reasons. Be Unlimited only accept orders from customers
with lines of 5km or less, so their customers are more likely to have fast, low-
latency connections, whilst Virgin Media’s cable network does not depend on
distance from the exchange, so latency is nearly constant;
- Sky suffered somewhat in the latency tests, most likely due to interleaving
being enabled on lines by default (customers can request for it to be
removed);
- Be Unlimited fell down on the DNS and web page tests due to their ongoing
DNS resolution issues;
- Virgin.net, the ADSL product from Virgin Media, performed rather poorly in
the majority of tests. Highly variable latency that bottomed out during off-
peak hours suggests significant contention as the cause;
- All providers offered a sufficiently reliable network for carrying VoIP traffic;
- BT’s port 80 (world wide web) speeds were excellent and could not be topped
(when measured as a percentage of implied line speed);
- Be and Virgin Media’s cable service offered the fastest raw throughput speeds
in Mbps, largely owing to the technologies used. Significant variation in
speed was demonstrated on Virgin’s 20Mbps products, whilst other products
performed relatively consistently;
- BT and PlusNet were both demonstrated to use traffic shaping at peak times
on non port 80 traffic, whilst, perhaps surprisingly, no other ISPs
demonstrated similar behaviour. Future work will be conducted in this area
to see how this affects other applications;
- A clear disparity can be seen between running speed tests with a single
stream and running them with multiple streams. When testing your own
connection, be sure you are testing on a level playing field.
Future work will now focus on developing the solution further (i.e. improved tests
and faster testing units for next generation connections) and increasing the sample
size. The sample size is the key though, and we plan to expand this to over 2000 units
by Spring 2009. This will provide approximately 200 units per ISP, which should
allow us to begin comparing an individual provider’s products and regional
variations in performance.
6 References
2. FreeWRT
http://www.freewrt.org