LIRNEasia is ready to start Phase 2 of its Broadband QoSE testing methodology development.
In Phase 1 we developed the AshokaTissa methodology and tested the broadband performance of key players in India, Sri Lanka and (partially) Singapore. We used tools such as Bandwidth Monitor, Ping and Tracert, and all tests were done manually. Each link was tested on two weekdays and two weekend days. Since it was essential to average three readings, one set of readings took 2-3 hours, so only one link could be tested per day.
Our tests measured six metrics: throughput (download and upload), RTT, jitter, packet loss and availability. We tested these while downloading/uploading from (a) the ISP's server, (b) a local server other than the ISP's and (c) an international server. The parameters were recorded six times per day, from 08:00 hrs to 20:30 hrs. (More details of the methodology and Test Results)
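To illustrate how several of these metrics relate to a single ping run, here is a minimal Python sketch (Python is used purely for illustration; the post does not specify an implementation language, and the jitter definition below, mean absolute difference between consecutive RTTs, is one common convention rather than necessarily the one used in AshokaTissa):

```python
def summarize_pings(rtts_ms, sent):
    """Summarize one ping run.

    rtts_ms: RTTs (in ms) of the echo replies actually received.
    sent:    number of echo requests transmitted.
    Returns average RTT, jitter and packet loss percentage.
    """
    received = len(rtts_ms)
    loss_pct = 100.0 * (sent - received) / sent
    avg_rtt = sum(rtts_ms) / received if received else None
    # Jitter here = mean absolute difference between consecutive RTTs
    # (RFC 3550 uses a smoothed variant of the same idea).
    if received > 1:
        jitter = sum(abs(a - b) for a, b in zip(rtts_ms, rtts_ms[1:])) / (received - 1)
    else:
        jitter = 0.0
    return {"avg_rtt_ms": avg_rtt, "jitter_ms": jitter, "loss_pct": loss_pct}
```

For example, four replies of 10, 12, 11 and 15 ms to five requests give 20% loss, a 12 ms average RTT and a jitter of about 2.33 ms.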
Phase 2 aims to avoid this tedious manual testing by automating the process: we want to develop a software tool that performs the same measurements. This is how Rohan Samarajiva describes it:
In the first instance, we can install the software in the computers of a larger number of selected persons in multiple locations and collate the data.
What if, adapting the concept of public-resource computing, where complex computing tasks are broken up into small chunks that are then run in the background of large numbers of computers of volunteers that are simultaneously engaged in other tasks, this software is installed in thousands of computers that are connected to the Internet and run in the background while the host computers are doing other things? And what if the results of these thousands, and even millions, of measurements are aggregated in real-time on a server, averaging out the various biases caused by computer idiosyncrasies and location-specific features? This would take the quality of the results to a whole different level, averaging out anomalies and allowing continuous coverage.
And what if this real-time aggregate measurement of quality of service across a range of dimensions is available for all consumers to see on the web? (Full document)
However, the ideal software tool we have in mind might not be the most practical. That is why we value your opinion on the following issues.
Active monitor vs. passive monitor: We can have only one. An active monitor downloads from predetermined sites and measures the speeds, while a passive monitor measures the speeds of the user's ongoing downloads. Both have pros and cons. An active monitor eats precious MBs, which will be an issue if the package has a download cap, but a passive monitor does not give us the breakdown, so we might not know where the bottleneck is. Our preference is the active monitor.
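The core of an active monitor is simple: fetch a file from a known server and time the transfer. A minimal Python sketch of the idea (the URL and function names are hypothetical, not part of any existing tool):

```python
import time
import urllib.request


def throughput_kbps(n_bytes, seconds):
    """Convert a byte count and elapsed time into kbit/s."""
    return (n_bytes * 8 / 1000) / seconds


def measure_download(url, chunk_size=65536):
    """Actively download `url` and return measured throughput in kbit/s.

    This is the 'active monitor' approach: it generates its own
    traffic against a predetermined test server, consuming quota.
    """
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return throughput_kbps(total, time.monotonic() - start)


# Example: measure_download("http://example-test-server/1MB.bin")
```

As a sanity check on the unit conversion, 125,000 bytes transferred in one second is 1,000 kbit/s.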
Windows XP vs. Windows Vista versions: We can test the application on only one OS. Our guess is that if we develop for Windows XP, with luck it might also work on Vista.
These are the other key features (or constraints, depending on how one looks at them):
- It measures all of the above metrics except upload speed, which would be difficult to automate.
- The user has to schedule the test times and make sure he/she does not run any other programs while tests are being conducted.
- The user also has to upload the test results to our specified website. But this is easy, and we expect a single user to do no more than two days of testing, unless he/she volunteers to do more.
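Since the Phase 1 schedule took six readings per day between 08:00 and 20:30, the tool could generate the same evenly spaced slots for the user automatically. A small Python sketch of that scheduling step (again purely illustrative; the tool's design is not yet fixed):

```python
from datetime import date, datetime, time


def daily_test_times(start=time(8, 0), end=time(20, 30), n=6):
    """Return n evenly spaced test times across the measurement window."""
    day = date.today()
    t0 = datetime.combine(day, start)
    step = (datetime.combine(day, end) - t0) / (n - 1)
    return [(t0 + i * step).time() for i in range(n)]
```

With the defaults this yields 08:00, 10:30, 13:00, 15:30, 18:00 and 20:30, i.e. a reading every two and a half hours.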
The important question: Will it be realistic for us to find a reasonably large group of volunteers who could do this?
We invite all techies and broadband users out there to discuss/criticise/agree/disagree. We value your input. Thanks in advance.