Vision Statement
Commercial Internet speed test platforms are widely used to evaluate network performance, but details of their internal implementations and performance metrics are often not publicly available. This lack of transparency and standardization makes it difficult to compare results across platforms or to understand what they mean. This project consists of multiple phases. The first phase is an in-depth analysis of the behavior of Ookla, the largest Internet speed test platform, using NetLog captures collected during browser-based speed tests. This work builds on work done in the Spring 2025 semester. The goal for the first half of the Fall 2025 semester is to run additional tests to validate our initial findings and then submit a publishable paper to the Internet Measurement Conference (IMC). A significant portion of this paper will also investigate different models of throughput measurement. Once the paper is written, we will shift to validating a toolkit developed by CAIDA at UC San Diego. The toolkit is a wrapper that standardizes the parameters of various speed test platforms, making their results more comparable. By looking deeper into Ookla’s implementation, we can verify that CAIDA’s toolkit works as intended.
Project Background
This project was started in 2023 by CAIDA at UC San Diego, which developed an open-source toolkit called RABBITS, a wrapper over six speed test platforms that provides standardized parameters for running speed tests. In the Spring of 2024, a senior project team from Calvin University began working on the project. Their work involved using NetLog captures to extract key measurement data and perform an initial analysis of RABBITS against Ookla speed test servers. In that analysis, the team examined the effect of the number of TCP flows and of the HTTP request message size on the measured throughput.
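As a concrete illustration of this kind of NetLog-based extraction (a minimal sketch, not the team’s actual pipeline), the snippet below totals socket-level byte counts from a capture and converts them into a rough throughput figure. The file name netlog.json is a placeholder, and the sketch assumes the capture was saved as a complete JSON document; live NetLog captures are sometimes truncated and need their closing brackets repaired before parsing.

```python
import json

# Hypothetical capture file; Chrome writes NetLog as JSON with a
# "constants" table and an "events" list.
with open("netlog.json") as f:
    log = json.load(f)

# Events carry numeric type codes; the constants table maps readable
# names to those codes.
event_types = log["constants"]["logEventTypes"]
bytes_received = event_types["SOCKET_BYTES_RECEIVED"]

total_bytes = 0
times_ms = []
for event in log["events"]:
    if event["type"] == bytes_received and "params" in event:
        total_bytes += event["params"].get("byte_count", 0)
        times_ms.append(int(event["time"]))  # timestamps in milliseconds

if len(times_ms) > 1:
    elapsed_s = (max(times_ms) - min(times_ms)) / 1000
    mbps = total_bytes * 8 / elapsed_s / 1e6
    print(f"{total_bytes} bytes over {elapsed_s:.2f} s ≈ {mbps:.1f} Mbps")
```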
In the Spring of 2025, we pivoted to investigating the behavior of Ookla in more detail. With that deeper understanding, we can then return to RABBITS and perform a comparative analysis. We produced an initial paper at the beginning of the Spring semester and will be revisiting it to validate its initial claims. A summer researcher also worked on the project over the Summer of 2025. Internet measurement has a substantial research community: the annual IMC publishes papers on a wide range of Internet measurement studies, and arXiv hosts related research, such as a paper on terminating speed tests early to reduce unnecessary traffic.
Most Internet speed test platforms, with M-Lab as a notable exception, use the same measurement method: flood the network with synthetic data until a bottleneck is reached. The bottleneck is the link with the least available bandwidth along the download or upload path. Saturating it requires sending a large volume of data; if the volume is too low, the bottleneck is never saturated and the measured throughput underestimates the true throughput. Speed tests are also traditionally run against a “last-mile” server, i.e., the server closest to the user.
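To make the flooding method concrete, below is a minimal sketch that opens several parallel TCP flows against a hypothetical download URL and computes the aggregate throughput over a fixed window. The URL, flow count, and window length are illustrative assumptions; production platforms such as Ookla add server selection, ramp-up, and result filtering that the sketch omits.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TEST_URL = "https://speedtest.example.com/testfile"  # hypothetical endpoint
NUM_FLOWS = 4       # parallel TCP flows used to help saturate the bottleneck
WINDOW_S = 10.0     # fixed measurement window

def download_bytes(deadline: float) -> int:
    """Stream data on one TCP flow until the deadline; return bytes read."""
    total = 0
    with requests.get(TEST_URL, stream=True) as resp:
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            total += len(chunk)
            if time.monotonic() >= deadline:
                break
    # If the server runs out of data before the deadline, the estimate
    # undershoots: the under-saturation problem described above.
    return total

start = time.monotonic()
deadline = start + WINDOW_S
with ThreadPoolExecutor(max_workers=NUM_FLOWS) as pool:
    byte_counts = list(pool.map(lambda _: download_bytes(deadline),
                                range(NUM_FLOWS)))
elapsed = time.monotonic() - start

# Aggregate throughput in Mbps (8 bits per byte, 1e6 bits per megabit).
print(f"{NUM_FLOWS} flows, {elapsed:.1f} s: "
      f"{sum(byte_counts) * 8 / elapsed / 1e6:.1f} Mbps")
```

Parallel flows help saturate high bandwidth-delay paths, which is why the flow count was one of the parameters examined in the earlier RABBITS analysis.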
Team Members
- Ben Kosters: A senior at Calvin University who is focused on learning more about networking and security.
- Priscilla Chen (CS390): A junior at Calvin University who is focused on learning more about data analysis.
Project Advisors
- Professor Rocky Chang
- Dr. Ricky Mok (CAIDA)
- Tanmay Nale (UCSD)